{ "paper_id": "J00-2004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:57:42.921004Z" }, "title": "Models of Translational Equivalence among Words", "authors": [ { "first": "I", "middle": [ "Dan" ], "last": "Melamed", "suffix": "", "affiliation": {}, "email": "dan.melamed@westgroup.com" }, { "first": "West", "middle": [], "last": "Group", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This article presents methods for biasing statistical translation models to reflect these properties. Evaluation with respect to independent human judgments has confirmed that translation models biased in this fashion are significantly more accurate than a baseline knowledge-free model. This article also shows how a statistical translation model can take advantage of preexisting knowledge that might be available about particular language pairs. Even the simplest kinds of languagespecific knowledge, such as the distinction between content words and function words, are shown to reliably boost translation model performance on some tasks. Statistical models that reflect knowledge about the model domain combine the best of both the rationalist and empiricist paradigms.", "pdf_parse": { "paper_id": "J00-2004", "_pdf_hash": "", "abstract": [ { "text": "This article presents methods for biasing statistical translation models to reflect these properties. Evaluation with respect to independent human judgments has confirmed that translation models biased in this fashion are significantly more accurate than a baseline knowledge-free model. This article also shows how a statistical translation model can take advantage of preexisting knowledge that might be available about particular language pairs. Even the simplest kinds of languagespecific knowledge, such as the distinction between content words and function words, are shown to reliably boost translation model performance on some tasks. Statistical models that reflect knowledge about the model domain combine the best of both the rationalist and empiricist paradigms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The idea of a computer system for translating from one language to another is almost as old as the idea of computer systems. Warren Weaver wrote about mechanical translation as early as 1949. More recently, Brown et al. (1988) suggested that it may be possible to construct machine translation systems automatically. Instead of codifying the human translation process from introspection, Brown and his colleagues proposed machine learning techniques to induce models of the process from examples of its input and output. The proposal generated much excitement, because it held the promise of automating a task that forty years of research have proven very labor-intensive and error-prone. Yet very few other researchers have taken up the cause, partly because Brown et al.'s (1988) approach was quite a departure from the paradigm in vogue at the time.", "cite_spans": [ { "start": 207, "end": 226, "text": "Brown et al. (1988)", "ref_id": "BIBREF4" }, { "start": 760, "end": 781, "text": "Brown et al.'s (1988)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Formally, Brown et al. (1988) built statistical models of translational equivalence (or translation models 1, for short). 
In the context of computational linguistics, translational equivalence is a relation that holds between two expressions with the same meaning, where the two expressions are in different languages. Empirical estimation of statistical translation models is typically based on parallel texts or bitexts--pairs of texts that are translations of each other. As with all statistical models, the best translation models are those whose parameters correspond best with the sources of variance in the data. Probabilistic translation models whose parameters reflect universal properties of translational equivalence and/or existing knowledge about particular languages and language pairs benefit from the best of both the empiricist and rationalist traditions.", "cite_spans": [ { "start": 10, "end": 29, "text": "Brown et al. (1988)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This article presents three such models, along with methods for efficiently estimating their parameters. Each new method is designed to account for an additional universal property of translational equivalence in bitexts: . . . Most word tokens translate to only one word token. I approximate this tendency with a one-to-one assumption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Most text segments are not translated word-for-word. I build an explicit noise model. Different linguistic objects have statistically different behavior in translation. I show a way to condition translation models on different word classes to help account for the variety.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Quantitative evaluation with respect to independent human judgments has shown that each of these three estimation biases significantly improves translation model accuracy over a baseline knowledge-free model. However, these biases will not produce the best possible translation models by themselves. Anyone attempting to build an optimal translation model should infuse it with all available knowledge sources, including syntactic, dictionary, and cognate information. My goal here is only to demonstrate the value of some previously unused kinds of information that are always available for translation modeling, and to show how these information sources can be integrated with others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A review of some previously published translation models follows an introduction to translation model taxonomy. The core of the article is a presentation of the model estimation biases described above. The last section reports the results of experiments designed to evaluate these innovations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Throughout this article, I shall use CA\u00a3\u00a3/~GT4A/~C letters to denote entire text corpora and other sets of sets, CAPITAL letters to denote collections, including sequences and bags, and italics for scalar variables. I shall also distinguish between types and tokens by using bold font for the former and plain font for the latter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There are two kinds of applications of translation models: those where word order plays a crucial role and those where it doesn't. 
Empirically estimated models of translational equivalence among word types can play a central role in both kinds of applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "Applications where word order is not essential include", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "\u2022 cross-language information retrieval (e.g., McCarley 1999),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "\u2022 multilingual document filtering (e.g., Oard 1997 ),", "cite_spans": [ { "start": 41, "end": 50, "text": "Oard 1997", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "\u2022 computer-assisted language learning (e.g., Nerbonne et al. 1997 ),", "cite_spans": [ { "start": 45, "end": 65, "text": "Nerbonne et al. 1997", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "\u2022 certain machine-assisted translation tools (e.g., Macklovitch 1994; Melamed 1996a ),", "cite_spans": [ { "start": 52, "end": 69, "text": "Macklovitch 1994;", "ref_id": "BIBREF29" }, { "start": 70, "end": 83, "text": "Melamed 1996a", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "\u2022 concordancing for bilingual lexicography (e.g., Catizone, Russell, and Warwick 1989 ; Gale and Church 1991),", "cite_spans": [ { "start": 50, "end": 85, "text": "Catizone, Russell, and Warwick 1989", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "\u2022 corpus linguistics (e.g., Svartvik 1992),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "\u2022 \"crummy\" machine translation (e.g., Church and Hovy 1992; Resnik 1997 ).", "cite_spans": [ { "start": 38, "end": 59, "text": "Church and Hovy 1992;", "ref_id": null }, { "start": 60, "end": 71, "text": "Resnik 1997", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "For these applications, empirically estimated models have a number of advantages over handcrafted models such as on-line versions of bilingual dictionaries. Two of the advantages are the possibility of better coverage and the possibility of frequent updates by nonexpert users to keep up with rapidly evolving vocabularies. A third advantage is that statistical models can provide more accurate information about the relative importance of different translations. Such information is crucial for applications such as cross-language information retrieval (CLIR). In the vector space approach to CLIR, the query vector Q' is in a different language (a different vector space) from the document vectors D. A word-to-word translation model T can map QI into a vector Q in the vector space of D. In order for the mapping to be accurate, T must be able to encode many levels of relative importance among the possible translations of each element of QI. A typical bilingual dictionary says only what the possible translations are, which is equivalent to positing a uniform translational distribution. 
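To make the point about relative importance concrete, the following is a minimal sketch in Python, not taken from the article, of how a word-to-word translation model T maps a query vector Q' into the vector space of the documents D. The word pair and probabilities are invented for illustration (the sentence/peine/phrase example is borrowed from later in the article); a bilingual dictionary corresponds to the uniform variant.

```python
def translate_query(query, trans):
    """Map a source-language query vector into the target-language
    vector space using a word-to-word translation model T.

    query : dict of source-language terms -> weights
    trans : dict of source-language terms -> {target term: probability}
    """
    mapped = {}
    for u, weight in query.items():
        for v, p in trans.get(u, {}).items():
            mapped[v] = mapped.get(v, 0.0) + weight * p
    return mapped

# A probabilistic model grades the possible translations; a bilingual
# dictionary amounts to the uniform variant.  (Probabilities are invented.)
trans_model = {"sentence": {"peine": 0.7, "phrase": 0.3}}
trans_dict  = {"sentence": {"peine": 0.5, "phrase": 0.5}}
print(translate_query({"sentence": 1.0}, trans_model))  # {'peine': 0.7, 'phrase': 0.3}
print(translate_query({"sentence": 1.0}, trans_dict))   # {'peine': 0.5, 'phrase': 0.5}
```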
The performance of cross-language information retrieval with a uniform T is likely to be limited in the same way as the performance of conventional information retrieval without term-frequency information, i.e., where the system knows which terms occur in which documents, but not how often (Buckley 1993) .", "cite_spans": [ { "start": 1385, "end": 1399, "text": "(Buckley 1993)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "Applications where word order is crucial include speech transcription for translation (Brousseau et al. 1995) , bootstrapping of OCR systems for new languages (Philip Resnik and Tapas Kanungo, personal communication), interactive translation (Foster, Isabelle, and Plamondon 1996) , and fully automatic high-quality machine translation (e.g., A1-Onaizan et al. 1999) . In such applications, a word-to-word translation model can serve as an independent module in a more complex sequence-tosequence translation model. 2 The independence of such a module is desirable for two reasons, one practical and one philosophical. The practical reason is illustrated in this article: Order-independent translation models can be accurately estimated more efficiently in isolation. The philosophical reason is that words are an important epistemological category in our naive mental representations of language. We have many intuitions (and even some testable theories) about what words are and how they behave. We can bring these intuitions to bear on our translation models without being distracted by other facets of language, such as phrase structure. For example, the translation models presented in the last two chapters of Melamed (to appear) capture the intuitions that words can have multiple senses and that spaces in text do not necessarily delimit words.", "cite_spans": [ { "start": 86, "end": 109, "text": "(Brousseau et al. 1995)", "ref_id": "BIBREF3" }, { "start": 242, "end": 280, "text": "(Foster, Isabelle, and Plamondon 1996)", "ref_id": "BIBREF21" }, { "start": 343, "end": 366, "text": "A1-Onaizan et al. 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "The independence of a word-to-word translation module in a sequence-to-sequence translation model can be effected by a two-stage decomposition. The first stage is based on the observation that every sequence L is just an ordered bag, and that the bag B can be modeled independently of its order O. For example, the sequence (abc I consists of the bag {c,a, b} and the ordering relation {(b,2), (a, 1), (c,3)}. If we represent each sequence L as a pair (B, O), then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(L) -Pr(B,O)", "eq_num": "(1)" } ], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "--Pr(B)-Pr(OIB ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." 
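As a minimal illustration, not from the article, of the (B, O) representation behind Equation 1, the following sketch decomposes a sequence into its bag and its ordering relation, echoing the (a b c) example above; it assumes distinct tokens, as in that example.

```python
from collections import Counter

def decompose(seq):
    """Represent a sequence L as a pair (B, O): the bag of its elements
    and the ordering relation pairing each element with its position,
    as in the article's (a b c) example.  Assumes distinct tokens, so
    the pair (B, O) determines L exactly."""
    B = Counter(seq)                                   # the bag
    O = {(tok, i + 1) for i, tok in enumerate(seq)}    # the ordering relation
    return B, O

B, O = decompose(["a", "b", "c"])
# B == Counter({'a': 1, 'b': 1, 'c': 1}); O == {('a', 1), ('b', 2), ('c', 3)}
```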
}, { "text": "2 \"Sentence-to-sentence\" might be a more transparent term than \"sequence-to-sequence,\" but all the models that I'm aware of apply equally well to sequences of words that are not sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "Now, let L1 and L2 be two sequences and let A be a one-to-one mapping between the elements of L1 and the elements of L2. Borrowing a term from the operations research literature, I shall refer to such mappings as assignments. 3 Let .4 be the set of all possible assignments between L1 and L2. Using assignments, we can decompose conditional and joint probabilities over sequences:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "Pr(LIIL2) = ~ Pr(L1,A[L2) (3) AG.4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "Pr(L,,L2) = ~ Pr(L1, A, L2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "ACA where Pr(L,,A]L2) -Pr(B1,01,AIL2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= Pr(B1,AIL2) \u2022 Pr(OI[B1, A, L2)", "eq_num": "(5)" } ], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "Pr(L1,A, L2) ~ Pr(B,, O1, A, B2, 02)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "= Pr(B1, A, B2). Pr(O1, O2IB1,A, B2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "Summing bag pair probabilities over all possible assignments, we obtain a bag-to-bag translation model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(B1, B2) = ~ Pr(B,, A, B2)", "eq_num": "(9)" } ], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "AEA The second stage of decomposition takes us from bags of words to the words that they contain. The following bag pair generation process illustrates how a wordto-word translation model can be embedded in a bag-to-bag translation model for languages \u00a31 and \u00a32: .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model Decomposition", "sec_num": "2." }, { "text": "Generate a bag size /.4 1 is also the assignment size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Generate l language-independent concepts C1,..., C1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "From each concept Ci, 1 < i < I, generate a pair of word sequences (ffi, rTi) from \u00a3~ x \u00a3~, according to the distribution trans(G ~), to lexicalize the concept in the two languages. 
5 Some concepts are not lexicalized in some languages, so one of ffi and rTi may be empty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "A pair of bags containing m and n nonempty word sequences can be generated by a process where l is anywhere between 1 and m + n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "For notational convenience, the elements of the two bags can be labeled so that B1 -{u~,...,t~} and B 2 ~ {V~ ..... ~}, where some of the 1/'s and \"?'s may be empty. The elements of an assignment, then, are pairs of bag element labels: A --{(h,jl) ..... (h, jl)}, where each i ranges over {IJ 1 ..... 11l}, eachj ranges over {v~ ..... x~}, each i is distinct, and each j is distinct. The label pairs in a given assignment can be generated in any order, so there are I! ways to generate an assignment of size I. 6 It follows that the probability of generating a pair of bags (B1, B2) with a particular assignment A of size l is Pr(B1,A, B2]I,C, trans) : Pr(1). I! n E Pr(C)trans ('fi'vilC) ", "cite_spans": [ { "start": 678, "end": 688, "text": "('fi'vilC)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "\" (i,j) ff A CCC (lO)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "The above equation holds regardless of how we represent concepts. There are many plausible representations, such as pairs of trees from synchronous tree adjoining grammars (Abeill6 et al. 1990; Shieber 1994; Candito 1998 ), lexical conceptual structures (Dorr 1992) and WordNet synsets (Fellbaum 1998; Vossen 1998) . Of course, for a representation to be used, a method must exist for estimating its distribution in data.", "cite_spans": [ { "start": 172, "end": 193, "text": "(Abeill6 et al. 1990;", "ref_id": null }, { "start": 194, "end": 207, "text": "Shieber 1994;", "ref_id": null }, { "start": 208, "end": 220, "text": "Candito 1998", "ref_id": "BIBREF8" }, { "start": 254, "end": 265, "text": "(Dorr 1992)", "ref_id": "BIBREF18" }, { "start": 286, "end": 301, "text": "(Fellbaum 1998;", "ref_id": null }, { "start": 302, "end": 314, "text": "Vossen 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "A useful representation will reduce the entropy of the trans distribution, which is conditioned on the concept distribution as shown in Equation 10. This topic is beyond the scope of this article, however. I mention it only to show how the models presented here may be used as building blocks for models that are more psycholinguistically sophisticated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "To make the translation model estimation methods presented here as general as possible, I shall assume a totally uninformative concept representation--the trans distribution itself. In other words, I shall assume that each different pair of word sequence types is deterministically generated from a different concept, so that trans(.1i,~i]C) is zero for all concepts except one. Now, a bag-to-bag translation model can be fully specified by the distributions of l and trans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Pr(B1,A, B2]I, trans) = Pr(l). I! 
H trans(~,~j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(i,j) CA", "eq_num": "(11)" } ], "section": "3.", "sec_num": null }, { "text": "The probability distribution trans (.1, ~) is a word-to-word translation model. Unlike the models proposed by Brown et al. (1993b) , this model is symmetric, because both word bags are generated together from a joint probability distribution. Brown and his colleagues' models, reviewed in Section 4.3, generate one half of the bitext given the other hall so they are represented by conditional probability distributions. A sequenceto-sequence translation model can be obtained from a word-to-word translation model by combining Equation 11 with order information as in Equation 8.", "cite_spans": [ { "start": 110, "end": 130, "text": "Brown et al. (1993b)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "The most general word-to-word translation model trans(.1, ~), where ,i and \u00a2\u00a2 range over sequences in \u00a31 and \u00a32, has an infinite number of parameters. This model can be constrained in various ways to make it more practical. The models presented in this article are based on the one-to-one assumption: Each word is translated to at most one other word. In these models, .1 and \u00a2\u00a2 may consist of at most one word each. As before, one of the two sequences (but not both) may be empty. I shall describe empty sequences as consisting of a special NULL word, so that each word sequence will contain exactly one word and can be treated as a scalar. Henceforth, I shall write u and v instead of 11 and ~\u00a2. Under the one-to-one assumption, a pair of bags containing m and n nonempty words can be generated by a process where the bag size I is anywhere between max(m, n) and m + n. The one-to-one assumption is not as restrictive as it may appear: The explanatory power of a model based on this assumption may be raised to an arbitrary level by extending Western notions of what words are to include words that contain spaces (e.g., in English) or several characters (e.g., in Chinese). For example, I have shown elsewhere how to estimate word-to-word translation models where a word can be a noncompositional compound consisting of several space-delimited tokens (Melamed, to appear) . For the purposes of this article, however, words are the tokens generated by my tokenizers and stemmers for the languages in question. Therefore, the models in this article are only a first approximation to the vast complexities of translational equivalence between natural languages. They are intended mainly as stepping stones towards better models.", "cite_spans": [ { "start": 1354, "end": 1374, "text": "(Melamed, to appear)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The One-to-One Assumption", "sec_num": "3." }, { "text": "Most methods for estimating translation models from bitexts start with the following intuition: Words that are translations of each other are more likely to appear in corresponding bitext regions than other pairs of words. Following this intuition, most authors begin by counting the number of times that word types in one half of the bitext co-occur with word types in the other half. 
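The following is a minimal sketch of this counting step, assuming the boundary-based model of co-occurrence discussed next, i.e., that both halves of the bitext have already been segmented into aligned segment pairs. The counting formula used here is the min-based one that the article argues for (Equation 13); the segment pair is invented for illustration.

```python
from collections import Counter

def cooc_counts(bitext):
    """Boundary-based co-occurrence counting over aligned segment pairs.
    For each pair (U_i, V_i), cooc(u, v) is incremented by
    min(e_i(u), f_i(v)), the counting method the article argues for
    (Equation 13); Equation 12 would use the product instead."""
    cooc = Counter()
    for U, V in bitext:
        e, f = Counter(U), Counter(V)
        for u in e:
            for v in f:
                cooc[u, v] += min(e[u], f[v])
    return cooc

# Illustrative segment pair only (the word pair nods/hoche echoes Figure 1).
bitext = [(["he", "nods", "his", "head"], ["il", "hoche", "la", "tête"])]
print(cooc_counts(bitext)[("nods", "hoche")])  # 1
```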
Different co-occurrence counting methods stem from different models of co-occurrence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models of Co-occurrence", "sec_num": "4.1" }, { "text": "A model of co-occurrence is a Boolean predicate, which indicates whether a given pair of word tokens co-occur in corresponding regions of the bitext space. Different models of co-occurrence are possible, depending on the kind of bitext map that is available, the language-specific information that is available, and the assumptions made about the nature of translational equivalence. All the translation models reviewed and introduced in this article can be based on any of the co-occurrence models described by Melamed (1998a) . For expository purposes, however, I shall assume a boundarybased model of co-occurrence throughout this article. A boundary-based model of co-occurrence assumes that both halves of the bitext have been segmented into s segments, so that segment Ui in one half of the bitext and segment Vi in the other half are mutual translations, 1 < i < s. Under the boundary-based model of co-occurrence, there are several ways to compute co-occurrence counts cooc (u, v) between word types u and v. In the models of Brown, Della Pietra, Della Pietra, and Mercer (1993) , reviewed in Section 4.3,", "cite_spans": [ { "start": 512, "end": 527, "text": "Melamed (1998a)", "ref_id": "BIBREF32" }, { "start": 982, "end": 988, "text": "(u, v)", "ref_id": null }, { "start": 1034, "end": 1086, "text": "Brown, Della Pietra, Della Pietra, and Mercer (1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Models of Co-occurrence", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s COOC(R, V) = ~ ei(u) .j~(V),", "eq_num": "(12)" } ], "section": "Models of Co-occurrence", "sec_num": "4.1" }, { "text": "i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models of Co-occurrence", "sec_num": "4.1" }, { "text": "where ei and j5 are the unigram frequencies of u and v, respectively, in each aligned text segment i. For most translation models, this method produces suboptimal results, however, when ei(u) > 1 and )~(v) > 1. I argue elsewhere (Melamed 1998a ", "cite_spans": [ { "start": 229, "end": 243, "text": "(Melamed 1998a", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Models of Co-occurrence", "sec_num": "4.1" }, { "text": ") that cooc(u, v) = ~ min[ei(u),j~(v)] (13) i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models of Co-occurrence", "sec_num": "4.1" }, { "text": "is preferable, and this is the method used for the models introduced in Section 5. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models of Co-occurrence", "sec_num": "4.1" }, { "text": "Many researchers have proposed greedy algorithms for estimating nonprobabilistic word-to-word translation models, also known as translation lexicons (e.g., Catizone, Russell, and Warwick 1989; Gale and Church 1991; Fung 1995; Kumano and Hirakawa 1994; Melamed 1995; Wu and Xia 1994) . 
Most of these algorithms can be summarized as follows:", "cite_spans": [ { "start": 156, "end": 192, "text": "Catizone, Russell, and Warwick 1989;", "ref_id": "BIBREF9" }, { "start": 193, "end": 214, "text": "Gale and Church 1991;", "ref_id": "BIBREF23" }, { "start": 215, "end": 225, "text": "Fung 1995;", "ref_id": "BIBREF22" }, { "start": 226, "end": 251, "text": "Kumano and Hirakawa 1994;", "ref_id": "BIBREF27" }, { "start": 252, "end": 265, "text": "Melamed 1995;", "ref_id": "BIBREF31" }, { "start": 266, "end": 282, "text": "Wu and Xia 1994)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Nonprobabilistic Translation Lexicons", "sec_num": "4.2" }, { "text": "1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonprobabilistic Translation Lexicons", "sec_num": "4.2" }, { "text": "Choose a similarity function S between word types in \u00a31 and word types in \u00a32.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonprobabilistic Translation Lexicons", "sec_num": "4.2" }, { "text": "Compute association scores S(u,v) for a set of word type pairs (U, V) C (\u00a31 X \u00a32) that occur in training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Sort the word pairs in descending order of their association scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Discard all word pairs for which S(u, v) is less than a chosen threshold. The remaining word pairs become the entries in the translation lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "The various proposals differ mainly in their choice of similarity function. Almost all the similarity functions in the literature are based on a model of co-occurrence with some linguistically motivated filtering (see Fung [1995] for a notable exception). Given a reasonable similarity function, the greedy algorithm works remarkably well, considering how simple it is. However, the association scores in Step 2 are typically computed independently of each other. The problem with this independence assumption is illustrated in Figure 1 . The two word sequences represent corresponding regions of an English/French bitext. If nods and hoche co-occur much more often than expected by chance, then any reasonable similarity metric will deem them likely to be mutual translations. Nods and hoche are indeed mutual translations, so their tendency to co-occur is called a direct association. Now, suppose that nods and head often co-occur in English. Then hoche and head will also co-occur more often than expected by chance. The dashed arrow between hoche and head in Figure i represents an indirect association, since the association between hoche and head arises only by virtue of the association between each of them and nods. 
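The following is a minimal sketch, not from the article, of the four-step recipe above; the similarity function is left as a parameter (step 1 is the choice itself) and the scores are invented placeholders. Because each score is computed and thresholded independently, an indirect association such as hoche/head stands or falls purely on the strength of its own score, which is the weakness discussed next.

```python
def greedy_lexicon(candidates, similarity, threshold):
    """The four-step recipe summarized above: score candidate word type
    pairs with a chosen similarity function, sort the scores in
    descending order, and keep the pairs that clear a threshold as
    translation lexicon entries."""
    scored = [(similarity(u, v), u, v) for (u, v) in candidates]      # step 2
    scored.sort(reverse=True)                                         # step 3
    return [(u, v, s) for (s, u, v) in scored if s >= threshold]      # step 4

# Illustrative only: invented association scores for three word type pairs.
toy_scores = {("nods", "hoche"): 9.1, ("head", "tête"): 8.4, ("head", "hoche"): 2.8}
entries = greedy_lexicon(toy_scores, lambda u, v: toy_scores[(u, v)], 3.0)
# [('nods', 'hoche', 9.1), ('head', 'tête', 8.4)] -- the indirect association
# head/hoche is rejected only because its own score falls below the threshold.
```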
Models of translational equivalence that are ignorant of indirect associations have \"a tendency.., to be confused by collocates\" (Dagan, Church, and Gale 1993,5) .", "cite_spans": [ { "start": 218, "end": 229, "text": "Fung [1995]", "ref_id": "BIBREF22" }, { "start": 1355, "end": 1387, "text": "(Dagan, Church, and Gale 1993,5)", "ref_id": null } ], "ref_spans": [ { "start": 528, "end": 536, "text": "Figure 1", "ref_id": null }, { "start": 1064, "end": 1072, "text": "Figure i", "ref_id": null } ], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "Paradoxically, the irregularities (noise) in text and in translation mitigate the problem. If noise in the data reduces the strength of a direct association, then the same noise will reduce the strengths of any indirect associations that are based on this direct Table 1 Variables used to describe translation models.", "cite_spans": [], "ref_spans": [ { "start": 263, "end": 270, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "= the two halves of the bitext (U, V) = a pair of aligned text segments in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(U, V)", "sec_num": null }, { "text": "(/d, V) e(u) = the unigram frequency of u in U f(v) = the unigram frequency of v in V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(U, V)", "sec_num": null }, { "text": "cooc (u, v) = the number of times that u and v co-occur trans(vlu ) = the probability that a token of u will be translated as a token of v association. On the other hand, noise can reduce the strength of an indirect association without affecting any direct associations. Therefore, direct associations are usually stronger than indirect associations. If all the entries in a translation lexicon are sorted by their association scores, the direct associations will be very dense near the top of the list, and sparser towards the bottom. Gale and Church (1991) have shown that entries at the very top of the list can be over 98% correct. Their algorithm gleaned lexicon entries for about 61% of the word tokens in a sample of 800 English sentences. To obtain 98% precision, their algorithm selected only entries for which it had high confidence that the association score was high. These would be the word pairs that co-occur most frequently. A random sample of 800 sentences from the same corpus showed that 61% of the word tokens, where the tokens are of the most frequent types, represent 4.5% of all the word types.", "cite_spans": [ { "start": 5, "end": 11, "text": "(u, v)", "ref_id": null }, { "start": 536, "end": 558, "text": "Gale and Church (1991)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "(U, V)", "sec_num": null }, { "text": "A similar strategy was employed by Wu and Xia (1994) and by Fung (1995) . Fung skimmed off the top 23.8% of the noun-noun entries in her lexicon to achieve a precision of 71.6%. Wu and Xia have reported automatic acquisition of 6,517 lexicon entries from a 3.3-million-word corpus, with a precision of 86%. The first 3.3 million word tokens in an English corpus from a similar genre contained 33,490 different word types, suggesting a recall of roughly 19%. 
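For the record, the recall figure just quoted is simply the number of acquired lexicon entries divided by the number of word types observed in a comparable corpus:

```python
# Recall estimate for Wu and Xia's lexicon: 6,517 entries against the
# 33,490 word types in a comparable 3.3-million-word English corpus.
print(6517 / 33490)  # ~0.195, i.e., roughly 19%
```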
Note, however, that Wu and Xia chose to weight their precision estimates by the probabilities attached to each entry:", "cite_spans": [ { "start": 35, "end": 52, "text": "Wu and Xia (1994)", "ref_id": "BIBREF43" }, { "start": 60, "end": 71, "text": "Fung (1995)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "(U, V)", "sec_num": null }, { "text": "For example, if the translation set for English word detect has the two correct Chinese candidates with 0.533 probability and with 0.277 probability, and the incorrect translation with 0.190 probability, then we count this as 0.810 correct translations and 0.190 incorrect translations. (Wu and Xia 1994, 211) This is a reasonable evaluation method, but it is not comparable to methods that simply count each lexicon entry as either right or wrong (e.g., Daille, Gaussier, and Lang6 1994; Melamed 1996b) . A weighted precision estimate pays more attention to entries that are more frequent and hence easier to estimate. Therefore, weighted precision estimates are generally higher than unweighted ones.", "cite_spans": [ { "start": 287, "end": 309, "text": "(Wu and Xia 1994, 211)", "ref_id": null }, { "start": 455, "end": 488, "text": "Daille, Gaussier, and Lang6 1994;", "ref_id": null }, { "start": 489, "end": 503, "text": "Melamed 1996b)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "(U, V)", "sec_num": null }, { "text": "Most probabilistic translation model reestimation algorithms published to date are variations on the theme proposed by Brown et al. (1993b) . These models involve conditional probabilities, but they can be compared to symmetric models if the latter are normalized by the appropriate marginal distribution. I shall review these models using the notation in Table 1. 4.3.1 Models Using Only Co-occurrence Information. Brown and his colleagues employ the expectation-maximization (EM) algorithm (Dempster, Laird, and Rubin 1977) to estimate the parameters of their Model 1. On iteration i, the EM algorithm reestimates the model parameters transi(v]u) based on their estimates from iteration i-1.", "cite_spans": [ { "start": 119, "end": 139, "text": "Brown et al. (1993b)", "ref_id": "BIBREF6" }, { "start": 492, "end": 525, "text": "(Dempster, Laird, and Rubin 1977)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 356, "end": 364, "text": "Table 1.", "ref_id": null } ], "eq_spans": [], "section": "Reestimated Sequence-to-Sequence Translation Models", "sec_num": "4.3" }, { "text": "In Model 1, the relationship between the new parameter estimates and the old ones is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimated Sequence-to-Sequence Translation Models", "sec_num": "4.3" }, { "text": "transi_l(VlU ) \u2022 e(u) -f(v) transi(vlu) = z ~_, (u,v)e(u,v) ~u,eutransi-l(VlU') (14)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimated Sequence-to-Sequence Translation Models", "sec_num": "4.3" }, { "text": "where z is a normalizing factor. 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimated Sequence-to-Sequence Translation Models", "sec_num": "4.3" }, { "text": "It is instructive to consider the form of Equation 14 when all the translation probabilities trans (v[u) for a particular u are initialized to the same constant p, as Brown et al. (1993b, 273) actually do:", "cite_spans": [ { "start": 99, "end": 104, "text": "(v[u)", "ref_id": null }, { "start": 167, "end": 192, "text": "Brown et al. 
(1993b, 273)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Reestimated Sequence-to-Sequence Translation Models", "sec_num": "4.3" }, { "text": "transl(v]u) : z E p.e(u) .f(v) (15) (u,v)c(u,v) p. ]U[ : z E e(u) .f(v) (16) (u,v)e(u,v) pU]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimated Sequence-to-Sequence Translation Models", "sec_num": "4.3" }, { "text": "The initial translation probability transl(v]u) is set proportional to the co-occurrence count of u and v and inversely proportional to the length of each segment U in which u occurs. The intuition behind the numerator is central to most bitext-based translation models: The more often two words co-occur, the more likely they are to be mutual translations. The intuition behind the denominator is that the co-occurrence count of u and v should be discounted to the degree that v also co-occurs with other words in the same segment pair. Now consider how Equation 16 would behave if all the text segments on each side were of the same length, s so that each token of v co-occurs with exactly c words (where c is constant):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimated Sequence-to-Sequence Translation Models", "sec_num": "4.3" }, { "text": "transl(vlu ) : z E e(u).f(v) (17) c (u,v) c (u,v) z ~ e(u) .f(v) (18) c (u,v) e(u,v)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimated Sequence-to-Sequence Translation Models", "sec_num": "4.3" }, { "text": "The normalizing coefficient z is constant over all words. The only difference between Equations 16 and 18 is that the former discounts co-occurrences proportionally to the segment lengths. When information about segment lengths is not available, the only information available to initialize Model 1 is the co-occurrence counts. This property makes Model 1 an appropriate baseline for comparison to more sophisticated models that use other information sources, both in the work of Brown and his colleagues and in the work described here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimated Sequence-to-Sequence Translation Models", "sec_num": "4.3" }, { "text": "In any bitext, the positions of words relative to the true bitext map correlate with the positions of their translations. The correlation is stronger for language pairs with more similar word order. Brown et al. (1988) introduced the idea that this correlation can be encoded in translation model parameters. Dagan, Church, and Gale (1993) expanded on this idea by replacing Brown et al.'s (1988) word alignment parameters, which were based on absolute word positions in aligned segments, with a much smaller set of relative offset parameters. The much smaller number of parameters allowed Dagan, Church, and Gale's model to be effectively trained on much smaller bitexts. Vogel, Ney, and Tillmann (1996) have shown how some additional assumptions can turn this model into a hidden Markov model, enabling even more efficient parameter estimation.", "cite_spans": [ { "start": 199, "end": 218, "text": "Brown et al. 
(1988)", "ref_id": "BIBREF4" }, { "start": 309, "end": 339, "text": "Dagan, Church, and Gale (1993)", "ref_id": "BIBREF13" }, { "start": 375, "end": 396, "text": "Brown et al.'s (1988)", "ref_id": null }, { "start": 673, "end": 704, "text": "Vogel, Ney, and Tillmann (1996)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Word Order Correlation Biases.", "sec_num": "4.3.2" }, { "text": "It cannot be overemphasized that the word order correlation bias is just knowledge about the problem domain, which can be used to guide the search for the optimum model parameters. Translational equivalence can be empirically modeled for any pair of languages, but some models and model biases work better for some language pairs than for others. The word order correlation bias is most useful when it has high predictive power, i.e., when the distribution of alignments or offsets has low entropy. The entropy of this distribution is indeed relatively low for the language pair that both Brown and his colleagues and Dagan, Church, and Gale were working with--French and English have very similar word order. A word order correlation bias, as well as the phrase structure biases in Brown et al.'s (1993b) Models 4 and 5, would be less beneficial with noisier training bitexts or for language pairs with less similar word order. Nevertheless, one should use all available information sources, if one wants to build the best possible translation model. Section 5.3 suggests a way to add the word order correlation bias to the models presented in this article.", "cite_spans": [ { "start": 783, "end": 805, "text": "Brown et al.'s (1993b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word Order Correlation Biases.", "sec_num": "4.3.2" }, { "text": "At about the same time that I developed the models in this article, Hiemstra (1996) independently developed his own bag-to-bag model of translational equivalence. His model is also based on a one-to-one assumption, but it differs from my models in that it allows empty words in only one of the two bags, the one representing the shorter sentence. Thus, Hiemstra's model is similar to the first model in Section 5, but it has a little less explanatory power. Hiemstra's approach also differs from mine in his use of the Iterative Proportional Fitting Procedure (IPFP) (Deming and Stephan 1940) for parameter estimation.", "cite_spans": [ { "start": 68, "end": 83, "text": "Hiemstra (1996)", "ref_id": "BIBREF25" }, { "start": 567, "end": 592, "text": "(Deming and Stephan 1940)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Reestimated Bag-to-Bag Translation Models", "sec_num": "4.4" }, { "text": "The IPFP is quite sensitive to initial conditions, so Hiemstra investigated a number of initialization options. Choosing the most advantageous, Hiemstra has published parts of the translational distributions of certain words, induced using both his method and Brown et al.'s (1993b) Model 1 from the same training bitext. Subjective comparison of these examples suggests that Hiemstra's method is more accurate. Hiemstra (1998) has also evaluated the recall and precision of his method and of Model 1 on a small hand-constructed set of link tokens in a particular bitext. 
Model 1 fared worse, on average.", "cite_spans": [ { "start": 260, "end": 282, "text": "Brown et al.'s (1993b)", "ref_id": null }, { "start": 412, "end": 427, "text": "Hiemstra (1998)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Reestimated Bag-to-Bag Translation Models", "sec_num": "4.4" }, { "text": "This section describes my methods for estimating the parameters of a symmetric wordto-word translation model from a bitext. For most applications, we are interested in estimating the probability trans (u,v) of jointly generating the pair of words (u,v) .", "cite_spans": [ { "start": 201, "end": 206, "text": "(u,v)", "ref_id": null }, { "start": 247, "end": 252, "text": "(u,v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "5." }, { "text": "Unfortunately, these parameters cannot be directly inferred from a training bitext, because we don't know which words in one half of the bitext were generated together with which words in the other half. The observable features of the bitext are only the co-occurrence counts cooc(u, v) (see Section 4.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "5." }, { "text": "Methods for estimating translation parameters from co-occurrence counts typically involve link counts links (u, v) , which represent hypotheses about the number of times that u and v were generated together, for each u and v in the bitext. A link token is an ordered pair of word tokens, one from each half of the bitext. A link type is an ordered pair of word types. The link counts links(u, v) range over link types. We can always estimate trans(u, v) by normalizing link counts so that Y]~u,v trans(u, v) = 1:", "cite_spans": [ { "start": 108, "end": 114, "text": "(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "5." }, { "text": "trans(u, v) = links(u, v) Y~-u,,v, links(u', v') (19)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "5." }, { "text": "For estimation purposes, it is convenient to also employ a separate set of nonprobabilistic parameters score (u, v) , which represent the chances that u and v can ever be mutual translations, i.e., that there exists some context where tokens u and v are generated from the same concept. The relationship between score(u, v) and trans (u, v) can be more or less direct, depending on the model and its estimation method. Each of the models presented below uses a different score formulation.", "cite_spans": [ { "start": 109, "end": 115, "text": "(u, v)", "ref_id": null }, { "start": 334, "end": 340, "text": "(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "5." }, { "text": "All my methods for estimating the translation parameters trans (u,v) share the following general outline:", "cite_spans": [ { "start": 63, "end": 68, "text": "(u,v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "5." }, { "text": ". . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "5." }, { "text": "Initialize the score parameters to a first approximation, based only on the co-occurrence counts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". 
.", "sec_num": null }, { "text": "Approximate the expected link counts links (u, v) , as a function of the score parameters and the co-occurrence counts.", "cite_spans": [ { "start": 43, "end": 49, "text": "(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": ". .", "sec_num": null }, { "text": "Estimate trans (u, v) , by normalizing the link counts as in Equation 19. If less than .0001 of the trans(u, v) distribution changed from the previous iteration, then stop.", "cite_spans": [ { "start": 15, "end": 21, "text": "(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": ". .", "sec_num": null }, { "text": "Reestimate the parameters score (u, v) , as a function of the link counts and the co-occurrence counts.", "cite_spans": [ { "start": 32, "end": 38, "text": "(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": ". .", "sec_num": null }, { "text": "Step 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "Under certain conditions, a parameter estimation process of this sort is an instance of the expectation-maximization (EM) algorithm (Dempster, Laird, and Rubin 1977) . As explained below, meeting these conditions is computationally too expensive for my models. 9 Therefore, I employ some approximations, which lack the EM algorithm's convergence guarantee. The maximum likelihood approach to estimating the unknown parameters is to find the set of parameters ~) that maximize the probability of the training bitext (U, V). ~) = arg rn~x Pr(U, VIO )", "cite_spans": [ { "start": 132, "end": 165, "text": "(Dempster, Laird, and Rubin 1977)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "The probability of the bitext is a sum over the distribution ~4 of possible assignments:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "Pr(U, Vie) = ~ Pr(U,A, Vie).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "AE.,4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "9 For example, the expectation in Step 2 would need to be computed exactly, rather than merely approximated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "The munber of possible assignments grows exponentially with the size of aligned text segments in the bitext. Due to the parameter interdependencies introduced by the one-to-one assumption, we are unlikely to find a method for decomposing the assignments into parameters that can be estimated independently of each other as in Brown et al. [1993b, Equation 26] ). Barring such a decomposition method, the MLE approach is infeasible. This is why we must make do with approximations to the EM algorithm.", "cite_spans": [ { "start": 326, "end": 359, "text": "Brown et al. [1993b, Equation 26]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "In this situation, Brown et al. (1993b, 293) recommend \"evaluating the expectations using only a single, probable alignment.\" The single most probable assignment Ama~ is the maximum a posteriori (MAP) assignment:", "cite_spans": [ { "start": 19, "end": 44, "text": "Brown et al. 
(1993b, 293)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Amax = ar~maxPr(U,A, VIO ) (22) --AE~4 = ar~maxPr(l) \u2022 l! II trans(ui, vj) (23) --AE,,4 (i,j) cA = argmaxl\u00b0g [ Pr(1)'l! II-AG,4 (i,j)EAtrans(ui'vJ)]", "eq_num": "(24)" } ], "section": "Repeat from", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= argmax {log[Pr(l) \u2022 1!] +v AC~4 (i,j) ~EA logtrans(ui, vj)}", "eq_num": "(25)" } ], "section": "Repeat from", "sec_num": null }, { "text": "To simplify things further, let us assume that Pr(l) \u2022 I! is constant, so that Amax = argmax ~ logtrans (ui, vj) .", "cite_spans": [ { "start": 104, "end": 112, "text": "(ui, vj)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "(26) AE~4 (i,j) cA", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "If we represent the bitext as a bipartite graph and weight the edges by log trans (u, v) , then the right-hand side of Equation 26 is an instance of the weighted maximum matching problem and Ama~ is its solution. For a bipartite graph G = (V1 U V2, E), with v = IV1 U V21 and e = IEI, the lowest currently known upper bound on the computational complexity of this problem is O(ve + v 2 log v) (Ahuja, Magnati, and Orlin 1993, 500) . Although this upper bound is polynomial, it is still too expensive for typical bitexts. 1\u00b0 Subsection 5.1.2 describes a greedy approximation to the MAP approximation.", "cite_spans": [ { "start": 82, "end": 88, "text": "(u, v)", "ref_id": null }, { "start": 393, "end": 430, "text": "(Ahuja, Magnati, and Orlin 1993, 500)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Repeat from", "sec_num": null }, { "text": "The Competitive Linking Algorithm 5.1.1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method A:", "sec_num": "5.1" }, { "text": "Step 1: Initialization. Almost every translation model estimation algorithm exploits the well-known correlation between translation probabilities and co-occurrence counts. Many algorithms also normalize the co-occurrence counts cooc (u,v) by the marginal frequencies of u and v. However, these quantities account for only the three shaded cells in Table 2 . The statistical interdependence between two word types can be estimated more robustly by considering the whole table. For example, Gale and Church (1991, 154) suggest that \"~b 2, a X2-1ike statistic, seems to be a particularly good choice because it makes good use of the off-diagonal cells\" in the contingency table. Total cooc (-,u,.) II cooc(.,.) In informal experiments described elsewhere (Melamed 1995) , I found that the G 2 statistic suggested by Dunning (1993) slightly outperforms \u00a22. Let the cells of the contingency table be named as follows:", "cite_spans": [ { "start": 233, "end": 238, "text": "(u,v)", "ref_id": null }, { "start": 489, "end": 516, "text": "Gale and Church (1991, 154)", "ref_id": null }, { "start": 687, "end": 707, "text": "(-,u,.) 
II cooc(.,.)", "ref_id": null }, { "start": 752, "end": 766, "text": "(Melamed 1995)", "ref_id": "BIBREF31" }, { "start": 813, "end": 827, "text": "Dunning (1993)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 348, "end": 355, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Method A:", "sec_num": "5.1" }, { "text": "Now, Ilul ul v a b ~v c d B(a[a + b, pl)B(c[c + d, p2) (27) G2(u,v) = -2log B(al a + b,p)B(c[c + d,p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method A:", "sec_num": "5.1" }, { "text": "where B(kln, p) = (nk)pk(1--p)n--k are binomial probabilities. The statistic uses maximum likelihood estimates for the probability parameters: Pl = ~'a p2 = 74-d'c P-a+b+c+a' a+c G 2 is easy to compute because the binomial coefficients in the numerator and in the denominator cancel each other out. All my methods initialize the parameters score (u, v) to G2(u,v) , except that any pairing with NULL is initialized to an infinitesimal value.", "cite_spans": [ { "start": 346, "end": 352, "text": "(u, v)", "ref_id": null }, { "start": 356, "end": 363, "text": "G2(u,v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Method A:", "sec_num": "5.1" }, { "text": "I have also found it useful to smooth the co-occurrence counts, e.g., using the Simple Good-Turing smoothing method (Gale and Sampson 1995) , before computing G 2.", "cite_spans": [ { "start": 116, "end": 139, "text": "(Gale and Sampson 1995)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Method A:", "sec_num": "5.1" }, { "text": "Step 2: Estimation of Link Counts. To further reduce the complexity of estimating link counts, I employ the competitive linking algorithm, which is a greedy approximation to the MAP approximation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1.2", "sec_num": null }, { "text": "1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1.2", "sec_num": null }, { "text": "Sort all the score(u, v) from highest to lowest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1.2", "sec_num": null }, { "text": "For each score(u, v), in order:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(a)", "eq_num": "(b)" } ], "section": "2.", "sec_num": null }, { "text": "If u (resp., v) is NULL, consider all tokens of v (resp., u) in the bitext linked to NULL. Otherwise, link all co-occurring token pairs (u, v) in the bitext. The one-to-one assumption implies that linked words cannot be linked again. Therefore, remove all linked word tokens from their respective halves of the bitext.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The competitive linking algorithm can be viewed as a heuristic search for the most likely assignment in the space of all possible assignments. The heuristic is that the most likely assignments contain links that are individually the most likely. The search proceeds by a process of elimination. In the first search iteration, all the assignments that do not contain the most likely link are discarded. 
In the second iteration, all the assignments that do not contain the second most likely link are discarded, and so on until only one assignment remains, u The algorithm greedily selects the most likely links first, and then selects less likely links only if they don't conflict with previous selections. The probability of a link being rejected increases with the number of links that are selected before it, and thus decreases with the link's score. In this problem domain, the competitive linking algorithm usually finds one of the most likely assignments, as I will show in Section 6. Under an appropriate hashing scheme, the expected running time of the competitive linking algorithm is linear in the size of the input bitext.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The competitive linking algorithm and its one-to-one assumption are potent weapons against the ever-present sparse data problem. They enable accurate estimation of translational distributions even for words that occur only once, as long as the surrounding words are more frequent. In most translation models, link scores are correlated with co-occurrence frequency. So, links between tokens u and v for which score(u, v) is highest are the ones for which there is the most evidence, and thus also the ones that are easiest to predict correctly. Winner-take-all link assignment methods, such as the competitive linking algorithm, can prevent links based on indirect associations (see Section 4.2), thereby leveraging their accuracy on the more confident links to raise the accuracy of the less confident links. For example, suppose that ul and u2 co-occur with vl and v2 in the training data, and the model estimates score(u1, vl) --.05, score (ul, v2) = .02, and score(u2, v2) = .01. According to the one-to-one assumption, (Ul, v2) is an indirect association and the correct translation of v2 is u2. To the extent that the one-to-one assumption is valid, it reduces the probability of spurious links for the rarer words. The more incorrect candidate translations can be eliminated for a given rare word, the more likely the correct translation is to be found. So, the probability of a correct match for a rare word is proportional to the fraction of words around it that can be linked with higher confidence. This fraction is largely determined by two bitext properties: the distribution of word frequencies, and the distribution of co-occurrence counts. Melamed (to appear) explores these properties in greater depth.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Step 3: Reestimation of the Model Parameters. Method A reestimates the score parameters as the logarithm of the trans parameters. The competitive linking algorithm only cares about the relative magnitudes of the various score (u, v) . However, Equation 26 is a sum rather than a product, so I scale the trans parameters logarithmically, to be consistent with its probabilistic interpretation: Yarowsky (1993, 271) has shown that \"for several definitions of sense and collocation, an ambiguous word has only one sense in a given collocation with a probability of 90-99%.\" In other words, a single contextual clue can be a highly reliable indicator of a word's sense. One of the definitions of \"sense\" studied by Yarowsky was a word token's translation in the other half of a bitext. 
For example, the English word sentence may be considered to have two senses, corresponding to its French translations peine (judicial sentence) and phrase (grammatical sentence). If a token of sentence occurs in the vicinity of a word like jury or prison, then it is far more likely to be translated as peine than as phrase. \"In the vicinity of\" is one kind of collocation.", "cite_spans": [ { "start": 393, "end": 413, "text": "Yarowsky (1993, 271)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "5.1.3", "sec_num": null }, { "text": "scoreA(u, v) = log trans(u, v) (28)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1.3", "sec_num": null }, { "text": "links(u, v) / cooc(u, v)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method B: Improved Estimation Using an Explicit Noise Model", "sec_num": "5.2" }, { "text": "Figure 2: The ratio links(u, v)/cooc(u, v), for several values of cooc(u, v).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Co-occurrence in bitext space is another kind of collocation. If each word's translation is treated as a sense tag (Resnik and Yarowsky 1997) , then \"translational\" collocations have the unique property that the collocate and the word sense are one and the same! Method B exploits this property under the hypothesis that \"one sense per collocation\" holds for translational collocations. This hypothesis implies that if u and v are possible mutual translations, and a token u co-occurs with a token v in the bitext, then with very high probability the pair (u, v) was generated from the same concept and should be linked. To test this hypothesis, I ran one iteration of Method A on 300,000 aligned sentence pairs from the Canadian Hansards bitext. I then plotted the ratio links(u, v)/cooc(u, v) for several values of cooc(u, v) in Figure 2. The curves show that the ratio links(u, v)/cooc(u, v) tends to be either very high or very low. This bimodality is not an artifact of the competitive linking process, because in the first iteration, linking decisions are based only on the initial similarity metric.", "cite_spans": [ { "start": 101, "end": 127, "text": "(Resnik and Yarowsky 1997)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 809, "end": 817, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Information about how often words co-occur without being linked can be used to bias the estimation of translation model parameters. The smaller the ratio links(u, v)/cooc(u, v), the more likely it is that u and v are not mutual translations, and that links posited between tokens of u and v are noise. The bias can be implemented via auxiliary parameters that model the curve illustrated in Figure 2. The competitive linking algorithm creates all the links of a given type independently of each other. 12 So, the distribution of links(u, v), the number of links connecting word types u and v, can be modeled by a binomial distribution with parameters cooc(u, v) and p(u, v), where p(u, v) is the probability that u and v will be linked when they co-occur.", "cite_spans": [], "ref_spans": [ { "start": 392, "end": 400, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, 
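A quick way to check the bimodality claim on one's own bitext is to tabulate the links(u, v)/cooc(u, v) ratios, grouped by co-occurrence count, after one iteration of Method A. The binning scheme below is an arbitrary choice made only for this illustration.

```python
from collections import Counter, defaultdict

def ratio_histograms(links, cooc, bins=10):
    """Histogram of links(u, v)/cooc(u, v), grouped by cooc(u, v), as in Figure 2.

    links, cooc: dicts keyed by word-type pairs (u, v).
    Returns {cooc_value: Counter(bin_index -> number of word-type pairs)}.
    """
    hists = defaultdict(Counter)
    for pair, n in cooc.items():
        if n == 0:
            continue
        ratio = links.get(pair, 0) / n
        bin_index = min(int(ratio * bins), bins - 1)
        hists[n][bin_index] += 1
    return hists
```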
{ "text": "Table 3: Variables used to describe Method B. links(u, v) = the number of times that u and v are hypothesized to co-occur as mutual translations; B(k | n, p) = probability of k being generated from a binomial distribution with parameters n and p; λ+ = probability of a link given mutual translations; λ- = probability of a link given not mutual translations; λ = probability of a link; τ = probability of mutual translations; K = total number of links in the bitext; N = total number of co-occurrences in the bitext.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "There is never enough data to robustly estimate each p parameter separately. Instead, I shall model all the p's with just two parameters. For u and v that are mutual translations, p(u, v) will average to a relatively high probability, which I will call λ+. For u and v that are not mutual translations, p(u, v) will average to a relatively low probability, which I will call λ-. λ+ and λ- correspond to the two peaks of the distribution of links(u, v)/cooc(u, v), which is illustrated in Figure 2. The two parameters can also be interpreted as the rates of true and false positives. If the translation in the bitext is consistent and the translation model is accurate, then λ+ will be close to one and λ- will be close to zero.", "cite_spans": [], "ref_spans": [ { "start": 958, "end": 966, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "T K N", "sec_num": null }, { "text": "To find the most likely values of the auxiliary parameters λ+ and λ-, I adopt the standard method of maximum likelihood estimation, and find the values that maximize the probability of the link frequency distributions, under the usual independence assumptions (Equation 29). Table 3 summarizes the variables involved in this auxiliary estimation process.", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 271, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "T K N", "sec_num": null }, { "text": "Pr(links | model) = Π_{u,v} Pr(links(u, v) | cooc(u, v), λ+, λ-) (29)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T K N", "sec_num": null }, { "text": "The factors on the right-hand side of Equation 29 can be written explicitly with the help of a mixture coefficient. Let τ be the probability that an arbitrary co-occurring pair of word types are mutual translations. Let B(k | n, p) denote the probability that k links are observed out of n co-occurrences, where k has a binomial distribution with parameters n and p. Then the probability that word types u and v will be linked links(u, v) times out of cooc(u, v) co-occurrences is a mixture of two binomials:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T K N", "sec_num": null }, { "text": "Pr(links(u, v) | cooc(u, v), λ+, λ-) = τ B(links(u, v) | cooc(u, v), λ+) + (1 - τ) B(links(u, v) | cooc(u, v), λ-). (30)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T K N", "sec_num": null }, { "text": "One more variable allows us to express τ in terms of λ+ and λ-: Let λ be the probability that an arbitrary co-occurring pair of word tokens will be linked, regardless of whether they are mutual translations. Since τ is constant over all word types, it also represents the probability that an arbitrary co-occurring pair of word tokens are mutual translations. Therefore, λ = τλ+ + (1 - τ)λ-. (31)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "λ can also be estimated empirically. Let K be the total number of links in the bitext and let N be the total number of word token pair co-occurrences: K = Σ_{u,v} links(u, v), (32) and N = Σ_{u,v} cooc(u, v). (33)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "By definition, λ = K/N. (34)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "Equating the right-hand sides of Equations 31 and 34 and rearranging the terms, we get: τ = (K/N - λ-) / (λ+ - λ-). (35)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "Since τ is now a function of λ+ and λ-, only the latter two variables represent degrees of freedom in the model. In the preceding equations, either u or v can be NULL. However, the number of times that a word co-occurs with NULL is not an observable feature of bitexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "To make sense of co-occurrences with NULL, we can view co-occurrences as potential links and cooc(u, v) as the maximum number of times that tokens of u and v might be linked. From this point of view, cooc(u, NULL) should be set to the unigram frequency of u, since each token of u represents one potential link to NULL. Similarly for cooc(NULL, v). These co-occurrence counts should be summed together with all the others in Equation 33.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "The probability function expressed by Equations 29 and 30 may have many local maxima. In practice, these local maxima are like pebbles on a mountain, invisible at low resolution. 
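The estimation just described can be sketched as follows: the mixture likelihood of Equations 29 and 30, the constraint of Equation 35 that ties τ to λ+ and λ-, and the likelihood-ratio score that Method B derives from the fitted parameters (Equation 36). The coarse grid search stands in for the hill-climbing mentioned in the text, and the numerical floor is my own guard against underflow; neither detail is prescribed by the article.

```python
import math

def log_binom(k, n, p):
    """log B(k | n, p), with the usual 0*log(0) = 0 convention."""
    if p <= 0.0:
        return 0.0 if k == 0 else float("-inf")
    if p >= 1.0:
        return 0.0 if k == n else float("-inf")
    return (math.log(math.comb(n, k))
            + k * math.log(p) + (n - k) * math.log(1.0 - p))

def log_likelihood(links, cooc, lam_plus, lam_minus):
    """log of Equation 29, each factor being the two-binomial mixture of Equation 30."""
    K = sum(links.values())
    N = sum(cooc.values())
    # Equation 35: tau is fully determined by lambda+ and lambda-.
    tau = (K / N - lam_minus) / (lam_plus - lam_minus)
    if not 0.0 < tau < 1.0:
        return float("-inf")
    total = 0.0
    for pair, n in cooc.items():
        k = links.get(pair, 0)
        mix = (tau * math.exp(log_binom(k, n, lam_plus))
               + (1.0 - tau) * math.exp(log_binom(k, n, lam_minus)))
        total += math.log(max(mix, 1e-300))   # floor to avoid log(0) underflow
    return total

def estimate_lambdas(links, cooc, steps=20):
    """Coarse grid search over 1 > lambda+ > lambda- > 0; a stand-in for hill-climbing."""
    best = (float("-inf"), None, None)
    for i in range(1, steps):
        for j in range(1, i):
            lam_plus, lam_minus = i / steps, j / steps
            ll = log_likelihood(links, cooc, lam_plus, lam_minus)
            if ll > best[0]:
                best = (ll, lam_plus, lam_minus)
    return best[1], best[2]

def score_b(k, n, lam_plus, lam_minus):
    """Log-likelihood ratio used by Method B (Equation 36)."""
    return log_binom(k, n, lam_plus) - log_binom(k, n, lam_minus)
```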
I computed Equation 29 over various combinations of A + and A-after one iteration of Method A over 300,000 aligned sentence pairs from the Canadian Hansard bitext. Figure 3 illustrates that the region of interest in the parameter space, where 1 > A + > )~ > )~-> 0, has only one dominant global maximum. This global maximum can be found by standard hill-climbing methods, as long as the step size is large enough to avoid getting stuck on the pebbles.", "cite_spans": [], "ref_spans": [ { "start": 343, "end": 351, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "Given estimates for A + and A-, we can compute B(links(u,v) [cooc(u,v) , A +) and B(links(u, v) [cooc(u, v) , A-) for each occurring combination of links and cooc values. These are the probabilities that links (u, v) links were generated out of cooc(u, v) possible links by a process that generates correct links and by a process that generates incorrect links, respectively. The ratio of these probabilities is the likelihood ratio in favor of the types u and v being possible mutual translations, for all u and v:", "cite_spans": [ { "start": 47, "end": 59, "text": "B(links(u,v)", "ref_id": null }, { "start": 60, "end": 70, "text": "[cooc(u,v)", "ref_id": null }, { "start": 82, "end": 95, "text": "B(links(u, v)", "ref_id": null }, { "start": 96, "end": 107, "text": "[cooc(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "B(links(u, v)[cooc(u, v), A +) scoreB(u, v) = log B(links(u, v)Icooc(u, v), A-)\" (36)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "Method B differs from Method A only in its redefinition of the score function in Equation 36. The auxiliary parameters A + and A-and the noise model that they represent can be employed the same way in translation models that are not based on the one-to-one assumption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "),A-). (30)", "sec_num": null }, { "text": "In Method B, the estimation of the auxiliary parameters A + and A-depends only on the overall distribution of co-occurrence counts and link frequencies. All word pairs that co-occur the same number of times and are linked the same number of times are assigned the same score. More accurate models can be induced by taking into account various features of the linked tokens. For example, frequent words are translated less consistently than rare words (Catizone, Russell, and Warwick 1989) . To account for these differences, we can estimate separate values of A + and A-for different ranges of cooc (u, v) . Similarly, the auxiliary parameters can be conditioned on the linked parts of speech. A kind of word order correlation bias can be effected by conditioning the auxiliary parameters on the relative positions of linked word tokens in their respective texts. Just as easily, we can model link types that coincide with entries in an on-line bilingual dictionary separately from those that do not (cf. Brown et al. 1993) . When the auxiliary parameters are conditioned on different link classes, their optimization is carried out separately for each class: (u, vlZ = class(u, v) ", "cite_spans": [ { "start": 451, "end": 488, "text": "(Catizone, Russell, and Warwick 1989)", "ref_id": "BIBREF9" }, { "start": 599, "end": 605, "text": "(u, v)", "ref_id": null }, { "start": 1005, "end": 1023, "text": "Brown et al. 
1993)", "ref_id": null }, { "start": 1160, "end": 1181, "text": "(u, vlZ = class(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Method C: Improved Estimation Using Preexisting Word Classes", "sec_num": "5.3" }, { "text": "B (links (u, v)[cooc(u, v), A +) scorec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method C: Improved Estimation Using Preexisting Word Classes", "sec_num": "5.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ") = log B(links(u, v)[cooc(u, v), A z)\"", "eq_num": "(37)" } ], "section": "Method C: Improved Estimation Using Preexisting Word Classes", "sec_num": "5.3" }, { "text": "Section 6.1.1 describes the link classes used in the experiments below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method C: Improved Estimation Using Preexisting Word Classes", "sec_num": "5.3" }, { "text": "This section compares translation model estimation methods A, B, and C to each other and to Brown et al.'s (1993b) Model 1. To reiterate, Model 1 is based on co-occurrence information only; Method A is based on the one-to-one assumption; Method B adds the \"one sense per collocation\" hypothesis to Method A; Method C conditions the auxiliary parameters of Method B on various word classes. Whereas Methods A and B and Model 1 were fully specified in Section 4.3.1 and Section 5, the latter section described a variety of features on which Method C might classify links. For the purposes of the experiments described in this article, Method C employed the simple classification in Table 4 for both languages in the bitext. All classification was performed by table lookup; no context-aware part-of-speech tagger was used. In particular, words that were ambiguous between open classes and closed classes were always deemed to be in the closed class. The only language-specific knowledge involved in this classification Word classes used by Method C for the experiments described in this article. Link classes were constructed by taking the cross-product of the word classes.", "cite_spans": [ { "start": 92, "end": 114, "text": "Brown et al.'s (1993b)", "ref_id": null } ], "ref_spans": [ { "start": 680, "end": 687, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation at the Token Level", "sec_num": "6.1" }, { "text": "EOS EOP SCM SYM NU C F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class Code Description", "sec_num": null }, { "text": "End-Of-Sentence punctuation End-Of-Phrase punctuation, such as commas and colons Subordinate Clause Markers, such as \" and ( Symbols, such as ~ and * the NULL word, in a class by itself Content words: nouns, adjectives, adverbs, non-auxiliary verbs all other words, i.e., function words method is the list of function words in class F. Certainly, more sophisticated word classification methods could produce better models, but even the simple classification in Table 4 should suffice to demonstrate the method's potential.", "cite_spans": [], "ref_spans": [ { "start": 461, "end": 468, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Class Code Description", "sec_num": null }, { "text": "6.1.1 Experiment 1. Until now, translation models have been evaluated either subjectively (e.g. White and O'Connell 1993) or using relative metrics, such as perplexity with respect to other models (Brown et al. 1993b) . 
Objective and more accurate tests can be carried out using a \"gold standard.\" I hired bilingual annotators to link roughly 16,000 corresponding words between on-line versions of the Bible in French and English. This bitext was selected to facilitate widespread use and standardization (see Melamed [1998c] for details). The entire Bible bitext comprised 29,614 verse pairs, of which 250 verse pairs were hand-linked using a specially developed annotation tool. The annotation style guide (Melamed 1998b ) was based on the intuitions of the annotators, so it was not biased towards any particular translation model. The annotation was replicated five times by seven different annotators.", "cite_spans": [ { "start": 96, "end": 121, "text": "White and O'Connell 1993)", "ref_id": "BIBREF42" }, { "start": 197, "end": 217, "text": "(Brown et al. 1993b)", "ref_id": "BIBREF6" }, { "start": 510, "end": 525, "text": "Melamed [1998c]", "ref_id": "BIBREF34" }, { "start": 708, "end": 722, "text": "(Melamed 1998b", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Class Code Description", "sec_num": null }, { "text": "Each of the four methods was used to estimate a word-to-word translation model from the 29,614 verse pairs in the Bible bitext. All methods were deemed to have converged when less than .0001 of the translational probability distribution changed from one iteration to the next. The links assigned by each of methods A, B, and C in the last iteration were normalized into joint probability distributions using Equation 19. I shall refer to these joint distributions as Model A, Model B, and Model C, respectively. Each of the joint probability distributions was further normalized into two conditional probability distributions, one in each direction. Since Model 1 is inherently directional, its conditional probability distributions were estimated separately in each direction, instead of being derived from a joint distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class Code Description", "sec_num": null }, { "text": "The four models' predictions were compared to the gold standard annotations. Each model guessed one translation (either stochastically or deterministically, depending on the task) for each word on one side of the gold standard bitext. Therefore, precision = recall here, and I shall refer to the results simply as \"percent correct.\" The accuracy of each model was averaged over the two directions of translation: English to French and French to English. The five-fold replication of annotations in the test data enabled computation of the statistical significance of the differences in model accuracy. The statistical significance of all results in this section was measured at the c~ --.05 level, using the Wilcoxon signed ranks test. Although the models were evaluated on part of the same bitext on which they were trained, the evaluations were with respect to the translational equivalence relation hidden in this bitext, not with respect to any of the bitext's visible features. Such testing on training data is standard practice for unsupervised learning algorithms, where the objective is to compare several methods. 
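A minimal sketch of the normalization described above, turning converged link counts into a joint distribution (as in Equation 19) and the two conditional distributions derived from it; the function names are mine, and smoothing and convergence checking are omitted.

```python
from collections import defaultdict

def normalize_links(links):
    """Normalize link counts into joint and conditional translation distributions.

    links: dict mapping word-type pairs (u, v) to link counts.
    Returns (joint, v_given_u, u_given_v), each keyed by (u, v).
    """
    total = sum(links.values())
    joint = {pair: k / total for pair, k in links.items()}

    marg_u = defaultdict(float)
    marg_v = defaultdict(float)
    for (u, v), p in joint.items():
        marg_u[u] += p
        marg_v[v] += p

    v_given_u = {(u, v): p / marg_u[u] for (u, v), p in joint.items()}
    u_given_v = {(u, v): p / marg_v[v] for (u, v), p in joint.items()}
    return joint, v_given_u, u_given_v
```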
Of course, performance would degrade on previously unseen data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class Code Description", "sec_num": null }, { "text": "In addition to the different translation models, there were two other independent variables in the experiment: method of translation and whether function words were included. Some applications, such as query translation for CLIR, don't care about function words. To get a sense of the relative effectiveness of the different translation model estimation methods when function words are taken out of the equation, I removed from the gold standard all link tokens where one or both of the linked words were closed-class words. Then, I removed all closed-class words (including nonalphabetic symbols) from the models and renormalized the conditional probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class Code Description", "sec_num": null }, { "text": "The method of translation was either single-best or whole distribution. Singlebest translation is the kind that somebody might use to get the gist of a foreignlanguage document. The input to the task was one side of the gold standard bitext. The output was the model's single best guess about the translation of each word in the input, together with the input word. In other words, each model produced link tokens consisting of input words and their translations. For some applications, it is insufficient to guess only the single most likely translation of each word in the input. The model is expected to output the whole distribution of possible translations for each input word. This distribution is then combined with other distributions that are relevant to the application. For example, for cross-language information retrieval, the translational distribution can be combined with the distribution of term frequencies. For statistical machine translation, the translational distribution can be decoded with a source language model (Brown et al. 1988; A1-Onaizan et al. 1999) . To predict how the different models might perform in such applications, the whole distribution task was to generate a whole set of links from each input word, weighted according to the probability assigned by the model to each of the input word's translations. Each model was tested on this task with and without function words.", "cite_spans": [ { "start": 1038, "end": 1057, "text": "(Brown et al. 1988;", "ref_id": "BIBREF4" }, { "start": 1058, "end": 1081, "text": "A1-Onaizan et al. 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Class Code Description", "sec_num": null }, { "text": "The mean results are plotted in Figures 4 and 5 with 95% confidence intervals. All four graphs in these figures are on the same scale to facilitate comparison. On both tasks involving the entire vocabulary, each of the biases presented in this article improves the efficiency of modeling the available training data. When closed-class words were ignored, Model 1 performed better than Method A, because open-class words are more likely to violate the one-to-one assumption. However, the explicit noise model in Methods B and C boosted their scores significantly higher than Model 1 and Method A. Method B was better than Method C at choosing the single best open-class links, and the situation was reversed for the whole distribution of open-class links. 
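Under the assumptions that gold-standard links are given as (input word, translation) token pairs and that NULL links are ignored, the two tasks can be scored roughly as follows; this is only one plausible reading of the scoring procedure, not the exact evaluation code used in the experiments.

```python
def single_best_accuracy(cond, test_pairs):
    """Fraction of tokens whose single most probable translation matches the gold link.

    cond: dict (u, v) -> P(v | u); test_pairs: list of gold (u, gold_v) token links.
    """
    correct = 0
    for u, gold_v in test_pairs:
        candidates = [(p, v) for (uu, v), p in cond.items() if uu == u]
        if candidates and max(candidates)[1] == gold_v:
            correct += 1
    return correct / len(test_pairs)

def whole_distribution_accuracy(cond, test_pairs):
    """Expected fraction correct when every translation is proposed with its weight."""
    return sum(cond.get((u, gold_v), 0.0) for u, gold_v in test_pairs) / len(test_pairs)
```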
However, the differences in performance between these two methods were tiny on the open-class tasks, because they left only two classes for Method C to distinguish: content words and NULLs. Most of the scores on the whole distribution task were lower than their counterparts on the single-best translation task, because it is more difficult for any statistical method to correctly model the less common translations. The \"best\" translations are usually the most common. 6.1.2 Experiment 2. To study how the benefits of the various biases vary with training corpus size, I evaluated Models A, B, C, and 1 on the whole distribution translation task, after training them on three different-size subsets of the Bible bitext. The first subset consisted of only the 250 verse pairs in the gold standard. The second subset included these 250 plus another random sample of 2,250 for a total of 2,500, an order of magnitude larger than the first subset. The third subset contained all 29,614 verse pairs in the Bible bitext, roughly an order of magnitude larger than the second subset. All models were compared to the five gold standard annotations, and the scores were", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class Code Description", "sec_num": null }, { "text": "Figure 6: Effects of training set size on model accuracy on the whole distribution task (Models 1, A, B, and C).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "averaged over the two directions of translation, as before. Again, because the total probability assigned to all translations for each source word was one, precision = recall = percent correct on this task. The mean scores over the five gold standard annotations are graphed in Figure 6, where the right edge of the figure corresponds to the means of Figure 5(a). The figure supports the hypothesis in Melamed (to appear, Chapter 7) that the biases presented in this article are even more valuable when the training data are more sparse. The one-to-one assumption is useful, even though it forces us to use a greedy approximation to maximum likelihood. In relative terms, the advantage of the one-to-one assumption is much more pronounced on smaller training sets. For example, Model A is 102% more accurate than Model 1 when trained on only 250 verse pairs. The explicit noise model buys a considerable gain in accuracy across all sizes of training data, as do the link classes of Model C. In concert, when trained and tested only on the gold standard test set, the three biases outperformed Model 1 by up to 125%. 
This difference is even more significant given the absolute performance ceiling of 82% established by the interannotator agreement rates on the gold standard.", "cite_spans": [], "ref_spans": [ { "start": 278, "end": 286, "text": "Figure 6", "ref_id": null }, { "start": 352, "end": 363, "text": "Figure 5(a)", "ref_id": null } ], "eq_spans": [], "section": "29614", "sec_num": null }, { "text": "An important application of statistical translation models is to help lexicographers compile bilingual dictionaries. Dictionaries are written to answer the question, \"What are the possible translations of X?\" This is a question about link types, rather than about link tokens. Evaluation by link type is a thorny issue. Human judges often disagree about the degree to which context should play a role in judgments of translational equivalence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation at the Type Level", "sec_num": "6.2" }, { "text": "For example, the Harper-Collins French Dictionary (Cousin et al. 1990) gives the following French translations for English appoint: nommer, engager, fixer, désigner. Likewise, most lay judges would not consider instituer a correct French translation of appoint. In actual translations, however, when the object of the verb is commission, task force, panel, etc., English appoint is usually translated into French as instituer. To account for this kind of context-dependent translational equivalence, link types must be evaluated with respect to the bitext whence they were induced.", "cite_spans": [ { "start": 50, "end": 70, "text": "(Cousin et al. 1990)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation at the Type Level", "sec_num": "6.2" }, { "text": "Figure 7: Distribution of link type scores. The long plateaus correspond to the most common combinations of links(u,v)/cooc(u,v): 1/1, 2/2, and 3/3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "I performed a post hoc evaluation of the link types produced by an earlier version of Method B (Melamed 1996b) . The bitext used for this evaluation was the same aligned Hansards bitext used by Gale and Church (1991) , except that I used only 300,000 aligned segment pairs to save time. The bitext was automatically pretokenized to delimit punctuation, English possessive pronouns, and French elisions. Morphological variants in both halves of the bitext were stemmed to a canonical form.", "cite_spans": [ { "start": 95, "end": 110, "text": "(Melamed 1996b)", "ref_id": "BIBREF31" }, { "start": 194, "end": 216, "text": "Gale and Church (1991)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation at the Type Level", "sec_num": "6.2" }, { "text": "The link types assigned by the converged model were sorted by the scores in Equation 36. Figure 7 shows the distribution of these scores on a log scale. The log scale helps to illustrate the plateaus in the curve. The longest plateau represents the set of word pairs that were linked once out of one co-occurrence (1/1) in the bitext. All these word pairs were equally likely to be correct. The second-longest plateau resulted from word pairs that were linked twice out of two co-occurrences (2/2) and the third longest plateau is from word pairs that were linked three times out of three co-occurrences (3/3). As usual, the entries with higher scores were more likely to be correct. By discarding entries with lower scores, coverage could be traded for accuracy. 
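The coverage-accuracy trade-off can be operationalized by thresholding the sorted scores, for example at the ends of the plateaus just described. The sketch below assumes the induced lexicon is represented as a dictionary of score_B values and that coverage is measured one-sidedly over a source vocabulary; the helper names are mine.

```python
def lexicon_at_cutoff(scores, min_score):
    """Keep only the entries whose score clears the cutoff, trading coverage for accuracy.

    scores: dict (u, v) -> score_B value. Returns the retained entries, best first.
    """
    kept = [(s, u, v) for (u, v), s in scores.items() if s >= min_score]
    return sorted(kept, reverse=True)

def marginal_coverage(entries, vocabulary):
    """One-sided coverage: fraction of vocabulary items that appear in some entry."""
    covered = {u for _, u, _ in entries}
    return len(covered & set(vocabulary)) / len(vocabulary)
```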
This trade-off was measured at three points, representing cutoffs at the end of each of the three longest plateaus.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Evaluation at the Type Level", "sec_num": "6.2" }, { "text": "The traditional method of measuring coverage requires knowledge of the correct link types, which is impossible to determine without a gold standard. An approximate coverage measure can be based on the number of different words in the corpus. For lexicons extracted from corpora, perfect coverage implies at least one entry containing each word in the corpus. One-sided variants, which consider only source words, have also been used (Gale and Church 1991) . Table 5 shows both the marginal (one-sided) and the combined coverage at each of the three cutoff points. It also shows the absolute number of (non-NULL) entries in each of the three lexicons. Of course, the size of automatically induced lexicons depends on the size of the training bitext. Table 5 shows that, given a sufficiently large bitext, the method can automatically construct translation lexicons with as many entries as published bilingual dictionaries. The next task was to measure accuracy. It would have taken too long to evaluate every lexicon entry manually. Instead, I took five random samples (with replacement) of 100 entries each from each of the three lexicons. Each of the samples was first compared to a translation lexicon extracted from a machine-readable bilingual dictionary (Cousin et al. 1991) . All the entries in the sample that appeared in the dictionary were assumed to be correct. I checked the remaining entries in all the samples by hand. To account for context-dependent translational equivalence, I evaluated the accuracy of the translation lexicons in the context of the bitext whence they were extracted, using a simple bilingual concordancer. A lexicon entry (u,v) was considered correct if u and v ever appeared as direct translations of each other in an aligned segment pair. That is, a link type was considered correct if any of its tokens were correct.", "cite_spans": [ { "start": 433, "end": 455, "text": "(Gale and Church 1991)", "ref_id": "BIBREF23" }, { "start": 1259, "end": 1279, "text": "(Cousin et al. 1991)", "ref_id": null } ], "ref_spans": [ { "start": 458, "end": 465, "text": "Table 5", "ref_id": "TABREF3" }, { "start": 749, "end": 756, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation at the Type Level", "sec_num": "6.2" }, { "text": "Direct translations come in different flavors. Most entries that I checked by hand were of the plain vanilla variety that you might find in a bilingual dictionary (entry type V). However, a significant munber of words translated into a different part of speech (entry type P). For instance, in the entry (protection, prot6g6), the English word is a noun but the French word is an adjective. This entry appeared because to have protection is often translated as ~tre prot~g~ ('to be protected') in the bitext. 
The entry will never occur in a bilingual dictionary, but users of translation lexicons, be they human or machine, will want to know that translations often happen this way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation at the Type Level", "sec_num": "6.2" }, { "text": "The evaluation of translation models at the word type level is complicated by the possibility of phrasal translations, such as imm~diatement ~-~ right away. All the methods being evaluated here produce models of translational equivalence between individual words only. How can we decide whether a single-word translation \"matches\" a phrasal translation? The answer lies in the observation that corpus-based lexicography usually involves a lexicographer. Bilingual lexicographers can work with bilingual concordancing software that can point them to instances of any link type induced from a bitext and display these instances sorted by their contexts (e.g. Simard, Foster, and Perrault 1993) . Given an incomplete link type, the lexicographer can usually reconstruct the complete link type from the contexts in the concordance. For example, if the model proposes an equivalence between immddiatement and right, a bilingual concordance Table 6 Distribution of different types of correct lexicon entries at varying levels of coverage (mean + standard deviation).", "cite_spans": [ { "start": 657, "end": 691, "text": "Simard, Foster, and Perrault 1993)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 935, "end": 942, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation at the Type Level", "sec_num": "6.2" }, { "text": "Cutoff Coverage % Type V % Type P % Type I Total % Accuracy 3/3 36% 89 4-2.2 3.4 :E 0.5 7.6 + 3.2 99,2 4-0.8 2/2 46% 81 4-3.0 8.0 ::E 2.1 9.8 + 1.8 99.0 4-1.4 1/1 90% 82 + 2.5 4.4 + 0.5 6.0 + 1.9 92.8 + 1.1 can show the lexicographer that the model was really trying to capture the equivalence between imm#diatement and right away or between imm#diatement and right now.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation at the Type Level", "sec_num": "6.2" }, { "text": "I counted incomplete entries in a third category (entry type I). Whether links in this category should be considered correct depends on the application. Table 6 shows the distribution of correct lexicon entries among the types V, P and I. Figure 8 graphs the accuracy of the method against coverage, with 95% confidence intervals. The upper curve represents accuracy when incomplete links are considered correct, and the lower when they are considered incorrect. On the former metric, the method can generate translation lexicons with accuracy and coverage both exceeding 90%, as well as dictionary-size translation lexicons that are over 99% correct.", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 160, "text": "Table 6", "ref_id": null }, { "start": 239, "end": 247, "text": "Figure 8", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Evaluation at the Type Level", "sec_num": "6.2" }, { "text": "There are many ways to model translational equivalence and many ways to estimate translation models. \"The mathematics of statistical machine translation\" proposed by Brown et al. (1993b) lation. In this article, I have proposed and evaluated new kinds of translation model biases, alternative parameter estimation strategies, and techniques for exploiting preexisting knowledge that may be available about particular languages and language pairs. 
On a variety of evaluation metrics, each infusion of knowledge about the problem domain resulted in better translation models.", "cite_spans": [ { "start": 166, "end": 186, "text": "Brown et al. (1993b)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Each innovation presented here opens the way for more research. Model biases can be mixed and matched with each other, with previously published biases like the word order correlation bias, and with other biases yet to be invented. The competitive linking algorithm can be generalized in various ways. New kinds of preexisting knowledge can be exploited to improve accuracy for particular language pairs or even just for particular bitexts. It is difficult to say where the greatest advances will come from. Yet, one thing is clear from our current vantage point: Research on empirical methods for modeling translational equivalence has not run out of steam, as some have claimed, but has only just begun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Assignments are different fromBrown, Della Pietra, Della Pietra, and Mercer's (1993) alignments in that assignments can range over pairs of arbitrary labels, not necessarily sequence position indexes. Also, unlike alignments, assignments must be one-to-one. 4 The exact nature of the bag size distribution is immaterial for the present purposes. 5 Since they are put into bags, ffi and r7 i could just as well be bags instead of sequences. I make them sequences only to be consistent with more sophisticated models that account for noncompositional compounds (e.g. Melamed, to appear, Chapter 8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The number of permutations is smaller when either bag contains two or more identical elements, but this detail will not affect the estimation algorithms presented here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This expression is obtained by substitutingBrown, Della Pietra, Della Pietra, and Mercer's (1993) Equation 17 into their Equation 14. 8 Or, equivalently, if the notion of segments were dispensed with altogether, as under the distance-based model of co-occurrence (Melarned 1998a).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "At least for my current very inefficient implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The competitive linking algorithm can be generalized to stop searching before the number of possible assignments is reduced to one, at which point the link counts can be computed as probabilistically weighted averages over the remaining assignments. I use this method to resolve ties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Except for the case when multiple tokens of the same word type occur near each other, which I hereby sweep under the carpet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Much of this research was performed at the Department of Computer and Information Science at the University of Pennsylvania, where it was supported by an equipment grant from Sun MicroSystems Laboratories and by ARPA Contract #N66001-94C-6043. 
Many thanks to my former colleagues at UPenn and to the anonymous reviewers for their insightful suggestions for improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Using lexicalized tree adjoining grammars for machine translation", "authors": [ { "first": "Abeillg", "middle": [], "last": "Anne", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 13th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "AbeillG Anne, Yves Schabes, and Aravind K. Joshi. 1990. Using lexicalized tree adjoining grammars for machine translation. In Proceedings of the 13th International Conference on Computational Linguistics. Helsinki, Finland.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Network Flows: Theory, Algorithms, and Applications", "authors": [ { "first": "Ravindra", "middle": [ "K" ], "last": "Ahuja", "suffix": "" }, { "first": "L", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "James", "middle": [ "B" ], "last": "Magnati", "suffix": "" }, { "first": "", "middle": [], "last": "Orlin", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ahuja, Ravindra K., Thomas L. Magnati, and James B. Orlin. 1993. Network Flows: Theory, Algorithms, and Applications.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical machine translation", "authors": [ { "first": "", "middle": [], "last": "A1-Onaizan", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Yaser", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Curin", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Jahr", "suffix": "" }, { "first": "John", "middle": [], "last": "Knight", "suffix": "" }, { "first": "I", "middle": [ "Dan" ], "last": "Lafferty", "suffix": "" }, { "first": "Franz", "middle": [ "J" ], "last": "Melamed", "suffix": "" }, { "first": "David", "middle": [], "last": "Och", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Purdy", "suffix": "" }, { "first": "David", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1999, "venue": "CLSP Technical Report", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A1-Onaizan, Yaser, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, I. Dan Melamed, Franz J. Och, David Purdy, Noah A. Smith, and David Yarowsky. 1999. Statistical machine translation. CLSP Technical Report. Baltimore, MD. 
Available at www.clsp.jhu.edu/ws99/ projects/mt/final_report/mt-final- report.ps", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "French speech recognition in an automatic dictation system for translators: The TransTalk project", "authors": [ { "first": "Julie", "middle": [], "last": "Brousseau", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Drouin", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Isabelle", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Normandin", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Plamondon", "suffix": "" } ], "year": 1995, "venue": "Proceedings of EuroSpeech'95", "volume": "", "issue": "", "pages": "193--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brousseau, Julie, Caroline Drouin, George Foster, Pierre Isabelle, Roland Kuhn, Yves Normandin, and Pierre Plamondon. 1995. French speech recognition in an automatic dictation system for translators: The TransTalk project. In Proceedings of EuroSpeech'95, pages 193-196, Madrid, Spain.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A statistical approach to language translation", "authors": [ { "first": "Peter", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "John", "middle": [], "last": "Cocke", "suffix": "" }, { "first": "Stephen", "middle": [ "A Della" ], "last": "Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "J Della" ], "last": "Pietra", "suffix": "" }, { "first": "Fredrick", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Roossin", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the 12th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "71--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, Peter F., John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Fredrick Jelinek, Robert L. Mercer, and Paul Roossin. 1988. A statistical approach to language translation. In Proceedings of the 12th International Conference on Computational Linguistics, pages 71-76, Budapest, Hungary.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "But dictionaries are data too", "authors": [ { "first": "Peter", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "A", "middle": [ "Della" ], "last": "Stephen", "suffix": "" }, { "first": "Vincent", "middle": [ "J Della" ], "last": "Pietra", "suffix": "" }, { "first": "Meredith", "middle": [ "J" ], "last": "Pietra", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Goldsmith", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Hajic", "suffix": "" }, { "first": "Surya", "middle": [], "last": "Mercer", "suffix": "" }, { "first": "", "middle": [], "last": "Mohanty", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the ARPA HLT Workshop", "volume": "", "issue": "", "pages": "202--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, Meredith J. Goldsmith, Jan Hajic, Robert L. Mercer and Surya Mohanty. 1993a. But dictionaries are data too. 
In Proceedings of the ARPA HLT Workshop, pages 202-205, Princeton, NJ.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "Peter", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "A", "middle": [ "Della" ], "last": "Stephen", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993b. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics 19(2):263-311.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The importance of proper weighting methods", "authors": [ { "first": "Chris", "middle": [], "last": "Buckley", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the DARPA Workshop on Human Language Technology", "volume": "", "issue": "", "pages": "349--352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Buckley, Chris. 1993. The importance of proper weighting methods. In Proceedings of the DARPA Workshop on Human Language Technology, pages 349-352, Princeton, NJ.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Building parallel LTAG for French and Italian", "authors": [ { "first": "Marie\u00b0h~l~ne", "middle": [], "last": "Candito", "suffix": "" } ], "year": 1998, "venue": "COLING-ACL \"98:36 Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "211--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Candito, Marie\u00b0H~l~ne. 1998. Building parallel LTAG for French and Italian. In COLING-ACL \"98:36 Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 211-217, Montreal, Canada.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Deriving translation data from bilingual texts", "authors": [ { "first": "Roberta", "middle": [], "last": "Catizone", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Russell", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Warwick", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the First International Lexical Acquisition Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Catizone, Roberta, Graham Russell, and Susan Warwick. 1989. Deriving translation data from bilingual texts. In Proceedings of the First International Lexical Acquisition Workshop. Detroit, MI.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Good applications for crummy machine translation. Machine Translation 8", "authors": [ { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, Kenneth W., and Eduard H. Hovy. 1993. Good applications for crummy machine translation. 
Machine Translation 8.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Harper Collins French Dictionary", "authors": [ { "first": "Pierre-Henri", "middle": [], "last": "Cousin", "suffix": "" }, { "first": "Lorna", "middle": [], "last": "Sinclair", "suffix": "" }, { "first": "Jean-Francois", "middle": [], "last": "Allain", "suffix": "" }, { "first": "Catherine", "middle": [ "E" ], "last": "Love", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cousin, Pierre-Henri, Lorna Sinclair, Jean-Francois Allain, and Catherine E. Love. 1990. The Harper Collins French Dictionary. Harper Collins Publishers, New York, NY. Cousin, Pierre-Henri, Lorna Sinclair, Jean-Francois Allain, and Catherine E.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Collins Paperback French Dictionary", "authors": [ { "first": "", "middle": [], "last": "Love", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Love. 1991. The Collins Paperback French Dictionary. Harper Collins Publishers, Glasgow.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Robust word alignment for machine aided translation", "authors": [ { "first": "", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Ido", "suffix": "" }, { "first": "William", "middle": [ "A" ], "last": "Church", "suffix": "" }, { "first": "", "middle": [], "last": "Gale", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagan, Ido, Kenneth W. Church, and William A. Gale. 1993. Robust word alignment for machine aided translation.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, pages 1-8, Columbus, OH.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards automatic extraction of monolingual and bilingual terminology", "authors": [ { "first": "B~atrice", "middle": [], "last": "Daille", "suffix": "" }, { "first": "Jean-Marc", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "", "middle": [], "last": "Lang4", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "515--521", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daille, B~atrice, l~ric Gaussier, and Jean-Marc Lang4. 1994. Towards automatic extraction of monolingual and bilingual terminology. 
Proceedings of the 15th International Conference on Computational Linguistics, pages 515-521, Kyoto, Japan.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "On a least squares adjustment of a sampled frequency table when the expected marginal totals are known", "authors": [ { "first": "W", "middle": [], "last": "Deming", "suffix": "" }, { "first": "Frederick", "middle": [ "F" ], "last": "Edwards", "suffix": "" }, { "first": "", "middle": [], "last": "Stephan", "suffix": "" } ], "year": 1940, "venue": "The Annals of Mathematical Statistics", "volume": "11", "issue": "", "pages": "42--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deming, W. Edwards, and Frederick F. Stephan. 1940. On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. The Annals of Mathematical Statistics, 11:42~444.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "Arthur", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "Donald", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society", "volume": "39", "issue": "B", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dempster, Arthur P., N. M. Laird, and Donald B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(B):1-38.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The use of lexical semantics in interlingual machine translation", "authors": [ { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 1992, "venue": "Machine Translation", "volume": "7", "issue": "3", "pages": "135--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorr, Bonnie J. 1992. The use of lexical semantics in interlingual machine translation. Machine Translation, 7(3):135-193.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "Ted", "middle": [], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dunning, Ted. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics 19(1):61-74.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, Christiane, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Word completion: A first step toward target-text mediated IMT", "authors": [ { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Isabelle", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Plamondon", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "394--399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Foster, George, Pierre Isabelle, and Pierre Plamondon. 1996. Word completion: A first step toward target-text mediated IMT. 
In Proceedings of the 16th International Conference on Computational Linguistics, pages 394-399, Copenhagen, Denmark.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A pattern matching method for finding noun and proper noun translations from noisy parallel corpora", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting", "volume": "", "issue": "", "pages": "236--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fung, Pascale. 1995. A pattern matching method for finding noun and proper noun translations from noisy parallel corpora. In Proceedings of the 33rd Annual Meeting, pages 236-243, Boston, MA. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Identifying word correspondences in parallel texts", "authors": [ { "first": "William", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the DARPA SNL Workshop", "volume": "", "issue": "", "pages": "152--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, William A., and Kenneth W. Church. 1991. Identifying word correspondences in parallel texts. Proceedings of the DARPA SNL Workshop, pages 152-157, Asilomar, CA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Good-Turing frequency estimation without tears", "authors": [ { "first": "William", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "Geoff", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 1995, "venue": "Journal of Quantitative Linguistics", "volume": "2", "issue": "", "pages": "217--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, William A., and Geoff Sampson. 1995. Good-Turing frequency estimation without tears. Journal of Quantitative Linguistics, 2:217-237. Swets & Zeitlinger Publishers, Sassenheim, The Netherlands.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Using Statistical Methods to Create a Bilingual Dictionary", "authors": [ { "first": "Djoerd", "middle": [], "last": "Hiemstra", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiemstra, Djoerd. 1996. Using Statistical Methods to Create a Bilingual Dictionary. Masters thesis, University of Twente, The Netherlands.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Multilingual domain modeling in twenty-one: Automatic creation of a bi-directional translation lexicon from a parallel corpus", "authors": [ { "first": "Djoerd", "middle": [], "last": "Hiemstra", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Eighth meeting of Computational Linguistics in the Netherlands (CLIN)", "volume": "", "issue": "", "pages": "41--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiemstra, Djoerd. 1998. Multilingual domain modeling in twenty-one: Automatic creation of a bi-directional translation lexicon from a parallel corpus. 
In Proceedings of the Eighth meeting of Computational Linguistics in the Netherlands (CLIN), pages 41-58.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Building an MT dictionary from parallel texts based on linguistic and statistical information", "authors": [ { "first": "Akira", "middle": [], "last": "Kumano", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Hirakawa", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "76--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kumano, Akira, and Hideki Hirakawa. 1994. Building an MT dictionary from parallel texts based on linguistic and statistical information. In Proceedings of the 15th International Conference on Computational Linguistics, pages 76-81, Kyoto, Japan.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Should we translate the documents or the queries in cross-language information retrieval", "authors": [ { "first": "J", "middle": [], "last": "Mccarley", "suffix": "" }, { "first": "", "middle": [], "last": "Scott", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting", "volume": "", "issue": "", "pages": "208--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarley, J. Scott. 1999. Should we translate the documents or the queries in cross-language information retrieval? In Proceedings of the 37th Annual Meeting, pages 208-214, College Park, MD. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Using bi-textual alignment for translation validation: The TransCheck system", "authors": [ { "first": "Elliott", "middle": [], "last": "Macklovitch", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 1st Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "157--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Macklovitch, Elliott. 1994. Using bi-textual alignment for translation validation: The TransCheck system. In Proceedings of the 1st Conference of the Association for Machine Translation in the Americas, pages 157-168.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Automatic evaluation and uniform filter cascades for inducing N-best translation lexicons", "authors": [ { "first": "I", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "", "middle": [], "last": "Dan", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Cambridge", "suffix": "" }, { "first": "I", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "", "middle": [], "last": "Dan", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 2nd Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "125--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melamed, I. Dan. 1995. Automatic evaluation and uniform filter cascades for inducing N-best translation lexicons. In Proceedings of the Third Workshop on Very Large Corpora, pages 184-198, Cambridge, MA. Melamed, I. Dan. 1996a. Automatic detection of omissions in translations. In Proceedings of the 16th International Conference on Computational Linguistics, pages 764-769, Copenhagen, Denmark. Melamed, I. Dan. 1996b. Automatic construction of clean broad-coverage translation lexicons. 
In Proceedings of the 2nd Conference of the Association for Machine Translation in the Americas, pages 125-134, Montreal, Canada.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Models of co-occurrence", "authors": [ { "first": "I", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "", "middle": [], "last": "Dan", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melamed, I. Dan. 1998a. Models of co-occurrence. Institute for Research in Cognitive Science Technical Report #98-05. University of Pennsylvania, Philadelphia, PA.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Annotation style guide for the blinker project. Institute for Research in Cognitive Science Technical Report #98-06", "authors": [ { "first": "I", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "", "middle": [], "last": "Dan", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melamed, I. Dan. 1998b. Annotation style guide for the blinker project. Institute for Research in Cognitive Science Technical Report #98-06. University of Pennsylvania, Philadelphia, PA.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Manual annotation of translational equivalence: The blinker project. Institute for Research in Cognitive Science Technical Report #98-07", "authors": [ { "first": "I", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "", "middle": [], "last": "Dan", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melamed, I. Dan. 1998c. Manual annotation of translational equivalence: The blinker project. Institute for Research in Cognitive Science Technical Report #98-07. University of Pennsylvania, Philadelphia, PA. Melamed, I. Dan. To appear. Empirical Methods for Exploiting Parallel Texts, MIT Press.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Reading more into foreign languages", "authors": [ { "first": "John", "middle": [], "last": "Nerbonne", "suffix": "" }, { "first": "Lauri", "middle": [], "last": "Karttunen", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Paskaleva", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Proszeky", "suffix": "" }, { "first": "Tiit", "middle": [], "last": "Roosmaa", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 5th ACL Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "135--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nerbonne, John, Lauri Karttunen, Elena Paskaleva, Gabor Proszeky, and Tiit Roosmaa. 1997. Reading more into foreign languages. In Proceedings of the 5th ACL Conference on Applied Natural Language Processing, pages 135-138, Washington, DC.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Adaptive filtering of multilingual document streams", "authors": [ { "first": "Douglas", "middle": [ "W" ], "last": "Oard", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 5th RIAO Conference on Computer-Assisted Information Retrieval", "volume": "", "issue": "", "pages": "233--253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oard, Douglas W. 1997. Adaptive filtering of multilingual document streams. 
In Proceedings of the 5th RIAO Conference on Computer-Assisted Information Retrieval, pages 233-253, Montreal, Canada.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Evaluating multilingual gisting of Web pages", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the AAAI Symposium on Natural Language Processing for the World Wide Web", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip. 1997. Evaluating multilingual gisting of Web pages. In Proceedings of the AAAI Symposium on Natural Language Processing for the World Wide Web. Stanford, CA.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Restricting the weak-generative capacity of synchronous tree-adjoining grammars", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the SIGLEX Workshop on Tagging Text with Lexical Semantics", "volume": "10", "issue": "", "pages": "371--385", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip, and David Yarowsky. 1997. A perspective on word sense disambiguation methods and their evaluation. In Proceedings of the SIGLEX Workshop on Tagging Text with Lexical Semantics, pages 79-86, Washington, DC. Shieber, Stuart. 1994. Restricting the weak-generative capacity of synchronous tree-adjoining grammars. Computational Intelligence, 10(4):371-385.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "TransSearch: A bilingual concordance tool. Centre d'innovation en technologies de l'information", "authors": [ { "first": "", "middle": [], "last": "Simard", "suffix": "" }, { "first": "George", "middle": [ "F" ], "last": "Michel", "suffix": "" }, { "first": "Francois", "middle": [], "last": "Foster", "suffix": "" }, { "first": "", "middle": [], "last": "Perrault", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simard, Michel, George F. Foster, and Francois Perrault. 1993. TransSearch: A bilingual concordance tool. Centre d'innovation en technologies de l'information, Laval, Canada.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "HMM-based word alignment in statistical translation", "authors": [ { "first": ";", "middle": [], "last": "Svartvik", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 16th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Svartvik, Jan. 1992. Directions in Corpus Linguistics. Mouton de Gruyter, Berlin. Vogel, Stephan, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of the 16th International Conference on Computational Linguistics. Copenhagen, Denmark.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Eurowordnet: A Multilingual Database with Lexical Semantic Networks", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piek, Vossen, editor. 1998. 
Eurowordnet: A Multilingual Database with Lexical Semantic Networks. Kluwer Academic Publishers.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Evaluation of machine translation", "authors": [ { "first": "John", "middle": [ "S" ], "last": "White", "suffix": "" }, { "first": "Theresa", "middle": [ "A" ], "last": "O'Connell", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the ARPA HLT Workshop", "volume": "", "issue": "", "pages": "206--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "White, John S., and Theresa A. O'Connell. 1993. Evaluation of machine translation. In Proceedings of the ARPA HLT Workshop, pages 206-210, Princeton, NJ.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Learning an English-Chinese lexicon from a parallel corpus", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xuanyin", "middle": [], "last": "Xia", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the First Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "206--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, Dekai, and Xuanyin Xia. 1994. Learning an English-Chinese lexicon from a parallel corpus. In Proceedings of the First Conference of the Association for Machine Translation in the Americas, pages 206-213, Columbia, MD.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "One sense per collocation", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the DARPA Workshop on Human Language Technology", "volume": "", "issue": "", "pages": "266--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky, David. 1993. One sense per collocation. In Proceedings of the DARPA Workshop on Human Language Technology, pages 266-271, Princeton, NJ.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "nods and hoche often co-occur, as do nods and head. The direct association between nods and hoche, and the direct association between nods and head give rise to an indirect association between hoche and head.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Pr(links|model), as given in Equation 29, has only one global maximum in the region of interest, where 1 > λ+ > λ- > 0.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "Comparison of model performance on single-best translation task. (a) All links; (b) open-class links only. Comparison of model performance on whole distribution task. (a) All links; (b) open-class links only. (Legend: Model A, Model 1; x-axis: number of training verse pairs, on log scale.)", "num": null, "uris": null, "type_str": "figure" }, "FIGREF6": { "text": "Figure 7", "num": null, "uris": null, "type_str": "figure" }, "FIGREF7": { "text": "Translation lexicon accuracy with 95% confidence intervals at varying levels of coverage.", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "type_str": "table", "text": "A co-occurrence contingency table.", "num": null, "content": "
        u            ~u            Total
v       cooc(u,v)    cooc(~u,v)    cooc(.,v)
~v      cooc(u,~v)   cooc(~u,~v)   cooc(.,~v)
" }, "TABREF1": { "html": null, "type_str": "table", "text": "", "num": null, "content": "" }, "TABREF2": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
" }, "TABREF3": { "html": null, "type_str": "table", "text": "Lexicon coverage at three different minimum score thresholds. The bitext contained 41,028 different English words and 36,314 different French words, for a total of 77,342.", "num": null, "content": "
Cutoff    Minimum    Total Lexicon    English Words          French Words
Plateau   Score      Entries          Represented      %     Represented      %
3/3       28         32,274           14,299           35    13,409           37
2/2       18         43,075           18,533           45    17,133           47
1/1       9          88,633           36,371           89    33,017           91
" } } } }