|
{ |
|
"paper_id": "Y14-1034", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:44:08.173350Z" |
|
}, |
|
"title": "TakeTwo: A Word Aligner based on Self Learning", |
|
"authors": [ |
|
{ |
|
"first": "Jim", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Tsing Hua University", |
|
"location": { |
|
"addrLine": "101, Guangfu Road", |
|
"settlement": "Hsinchu", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "jim.chang.nthu@gmail.com" |
|
}, |
|
{ |
|
"first": "Jian-Cheng", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Tsing Hua University", |
|
"location": { |
|
"addrLine": "101, Guangfu Road", |
|
"settlement": "Hsinchu", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Tsing Hua University", |
|
"location": { |
|
"addrLine": "101, Guangfu Road", |
|
"settlement": "Hsinchu", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "jason.jschang@gmail.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "State of the art statistical machine translation systems are typically trained by symmetrizing word alignments in two translation directions. We introduce a new method that improves word alignment results, based on self learning using the initial symmetrized word alignments results. The method involves aligning words and symmetrizing alignments, generating labeled training data, and construct a classifier for predicting word-translation relation in another alignment round. In the first alignment round, we use the original growdiag-final-and procedure, while in the second round, we use the classifier and a modified GDFA procedure to validate and fill in alignment links. We present a prototype system, TakeTwo, which applies the method to improve on GDFA. Preliminary experiments and evaluation on a hand-annotated dataset show that the method significantly increases the precision rate by a wide margin (+16%) with comparable recall rate (-3%). See Figures 1(c) for examples of noisy and missing links, produced by Giza++ with the GDFA sym", |
|
"pdf_parse": { |
|
"paper_id": "Y14-1034", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "State of the art statistical machine translation systems are typically trained by symmetrizing word alignments in two translation directions. We introduce a new method that improves word alignment results, based on self learning using the initial symmetrized word alignments results. The method involves aligning words and symmetrizing alignments, generating labeled training data, and construct a classifier for predicting word-translation relation in another alignment round. In the first alignment round, we use the original growdiag-final-and procedure, while in the second round, we use the classifier and a modified GDFA procedure to validate and fill in alignment links. We present a prototype system, TakeTwo, which applies the method to improve on GDFA. Preliminary experiments and evaluation on a hand-annotated dataset show that the method significantly increases the precision rate by a wide margin (+16%) with comparable recall rate (-3%). See Figures 1(c) for examples of noisy and missing links, produced by Giza++ with the GDFA sym", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The first statistical machine translation (SMT) models are the IBM models, based on statistics collected over a parallel corpus of translated text. These generative IBM models break up the translation process into a number of steps. The most important step is word translation, which is modelled by the lexical translation probability, trained from a parallel corpus, typically with the Expectation Maximization (EM) algorithm (Dempster, Laird, and Rubin 1977) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 427, |
|
"end": 460, |
|
"text": "(Dempster, Laird, and Rubin 1977)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, EM word aligners are data-hungry and produce noisy links due to data sparseness. Many researchers (e.g., Gale and Church 1992, Johnson et al., 2007) have pointed out that, even with a large parallel corpus, the EM algorithms running IBM models still produces noisy links for low frequency words and non-literal translations. Koehn, Och, and Marcu (2003) propose an improved word alignment method based on running IBM models in both translation directions for the two languages involved, and symmetrizing the results using a so-called grow-diag-final-and (GDFA) procedure. In a nutshell, GDFA is a heuristic greedy algorithm that starts by accepting reliable links in the intersection of the two alignments. Then, GDFA attempts to add union links neighboring intersection links. Finally, other non-neighboring links are added, subject to 1-1 alignment constraint. This progressively expanding scheme substantially enhances word alignment accuracy. However, the GDFA procedure still leaves much room for improvement, especially for low-frequency translations, non-literal translations, and sentences with extraneous/deleted translations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 122, |
|
"text": "Gale and", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 157, |
|
"text": "Church 1992, Johnson et al., 2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 362, |
|
"text": "Koehn, Och, and Marcu (2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Consider the following English sentence with Mandarin Chinese translation in a parallel corpus:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) He made this remark after Heinonen arrived in Tehran. In Figure 1 (c), a hard-to-align link [remark, \u00ab q (tanhua) ] is missed out by GDFA, because [remark, \u00abq] are not common mutual translations (remark is commonly translated into U\u00f7, while [\u00ab q(tanhua)] is commonly translated to talk). For the same reason, the missing link [made, |h (fabiao)] is also hard to align.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 69, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Intuitively, these hard-to-align links could be identified using a classifier for predicting wordtranslation relation, if we have sufficient training data. Ideally, we should avoid human effort in preparing the training data. Based on the concept of self training, we can generate slightly imperfect training data with the most reliable links (e.g, intersection links of the two initial sets of alignments) as positive instances, and very unreliable links as negative instances (e.g., [hienonen, \u21e7 (xiang) ] and [hienonen, \u00abq (tanhua)] not picked up by GDFA).", |
|
"cite_spans": [ |
|
{ |
|
"start": 485, |
|
"end": 505, |
|
"text": "[hienonen, \u21e7 (xiang)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present a new system, TakeTwo, that uses the concept of self training to cope with translation vari-ants and non-literal translations, aimed at improving on GDFA. An example TakeTwo alignment for Example (1) is shown in Figure 2 . TakeTwo has used predicted word-translation probability to exclude invalid links [remark, /] and [heinonen, \u00abq], and fill in valid links [made, |h] and [remark, \u00abq], leading to an improved alignment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 315, |
|
"end": 326, |
|
"text": "[remark, /]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 371, |
|
"end": 381, |
|
"text": "[made, |h]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 231, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows. We review the related work in the next section. Then we present our method for TakeTwo (Section 3). To evaluate the performance of TakeTwo, we compare the quality of alignments produced by TakeTwo with those produced by Giza++ with GDFA (Section 4 and Section 5) over a set of parallel sentences with hand-annotated word alignment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Machine translation (MT) has been an area of active research. (Dorr, 1993) summarizes various approaches to MT, while (Lopez, 2007) surveys recent work on statistical machine translation (SMT). We focus on the first part of developing an SMT system, namely, aligning words in a given parallel corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 74, |
|
"text": "(Dorr, 1993)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 131, |
|
"text": "(Lopez, 2007)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The state of the art in word alignment focuses on automatically learning generative translation models via Expectation Maximization algorithm (Brown et al., 1990; Brown et al., 1993) . (Och and Ney, 2003) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 162, |
|
"text": "(Brown et al., 1990;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 182, |
|
"text": "Brown et al., 1993)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 204, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "[Figure 1: the example sentence pair of Ex (1) with its initial word alignments in two directions (En-Ch and Ch-En), cross-lingual relatedness scores such as x-sim(remark, 是) = sim(remark, be) = .0, x-sim(Heinonen, 發表) = sim(Heinonen, publish) = .0, x-sim(made, 發表) = sim(make, publish) = .32, and x-sim(remark, 談話) = sim(remark, talk) = .25, and the output alignment.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Related Work",

"sec_num": "2"

},
|
{ |
|
"text": "As an alternative to the EM algorithm, researchers have been exploring various knowledge sources for word alignment, using automatically derived lexicons or handcrafted dictionaries (Gale and Church, 1991; Ker and Chang, 1997), or syntactic structure (Gildea, 2003; Cherry and Lin, 2003; Wang and Zong, 2013) . There has been work on translating phrases using mixed-code web-pages (e.g., (Nagata et al., 2001; Wu and Chang, 2007) ). Similarly, (Lin et al., 2008) propose a method that performs word alignment for parenthetic translation phrases to improve the performance of SMT systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 265, |
|
"text": "(Gildea, 2003;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 287, |
|
"text": "Cherry and Lin, 2003;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 308, |
|
"text": "Wang and Zong, 2013)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 409, |
|
"text": "(Nagata et al., 2001;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 429, |
|
"text": "Wu and Chang, 2007)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 462, |
|
"text": "(Lin et al., 2008)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Researchers have also studied sublexical models for machine transliteration (Knight and Graehl, 1998) . More recently, (Chang et al., 2012) introduce a method for learning a CRF model to find translations and transliterations of technical terms on the Web. We use similar transliteration-based features derived from transliteration model in a different setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 101, |
|
"text": "(Knight and Graehl, 1998)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Word alignment is closely related to measuring word similarity, and especially in the form of crosslingual relatedness. Much work has been done on word similarity and crosslingual relatedness. Early research efforts have been devoted to design the knowledge-based measures, based, in particular, on WordNet (Fellbaum, 1999) . Researchers have extensively investigated WordNet and other taxonomic structure in an attempt to calculate the word similarity by counting conceptual distance (Lin, 1998b) . On the other hand, there has been much work on distributional word similarity, for example, (Lin, 1998a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 307, |
|
"end": 323, |
|
"text": "(Fellbaum, 1999)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 485, |
|
"end": 497, |
|
"text": "(Lin, 1998b)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 604, |
|
"text": "(Lin, 1998a)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the area of cross-lingual relatedness, (Michelbacher et al., 2010) present a graph-based method for building a a cross-lingual thesaurus. The method uses two monolingual corpora and a basic dictionary to build two monolingual word graphs, with nodes representing words and edges representing linguistic relations between words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the research area of supervised training for word alignment, (Moore, 2005) demonstrates that a discriminative model with the main feature of Log Likelihood Ratio (LLR) could result in a smaller model comparable to more complex generative EM models in alignment accuracy. (Taskar et al., 2005) independently propose a similar approach. (Liu et al., 2005 ) also propose a log-linear model incorporating features (alignment probability, POS correspondence and bilingual dictionary coverage).", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 77, |
|
"text": "(Moore, 2005)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 295, |
|
"text": "(Taskar et al., 2005)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 338, |
|
"end": 355, |
|
"text": "(Liu et al., 2005", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The main difference from our current work is that previous methods use manually labeled data (typically hundreds sentences with thousands of word-translation relations) to train a word alignment model. In contrast, we take a self learning approach and automatically generate labelled training data. More specifically, We train our model based on a much larger training set (hundred of thousand of word-translation instances in partially labeled sentences) based on self learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recently, some researchers have begun using syntax in word alignment, by incorporating features such as inversion transduction grammar or parse tree. Supervised (Cherry and Lin, 2006; Setiawan et al., 2010) and unsupervised (Pauls et al., 2010) methods have been proposed, showing that syntax can improve alignment performance. All these features can be used to training the classifier used in TakeTwo.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 183, |
|
"text": "(Cherry and Lin, 2006;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 206, |
|
"text": "Setiawan et al., 2010)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 244, |
|
"text": "(Pauls et al., 2010)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In a word alignment approach closer to our method, (Deng and Zhou, 2009) propose a method to optimize word alignment combination to derive a more effective phrase table. Similarly, (Nakov and Tiedemann, 2012) propose combining word-level and character-Level alignment models for improving machine translation between two closely-related languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 72, |
|
"text": "(Deng and Zhou, 2009)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 208, |
|
"text": "(Nakov and Tiedemann, 2012)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In contrast to the previous research in word alignment, we present a system that automatically generates instances of word-translation relations based on self learning, with the goal of training a model to estimate translation probability for effective word alignment. We exploit the inherent crosslingual regularity in parallel corpora and use automatically annotated data for training a discriminative model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Aligning words and translation using the EM algorithm based on generative IBM models is not effective for aligning low frequency words and nonliteral translations, especially across disparate languages. To align words and translations reliably in a given parallel corpus, a promising approach is to self-train a classifier with linguistics features, in order to impose additional requirements in combining alignments in two translation directions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The TakeTwo Aligner", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We focus on producing word alignments, i.e., a set of word and translation links (word pairs), in each pair of sentences in a parallel corpus. The word alignment results can be used to estimate lexical and phrasal translation probabilities for machine translation; alternatively they can be helpful for bilingual lexicography and computer aided translation. Thus, it is crucial that we produce high-precision, broad coverage word alignments. We now formally state the problem that we are addressing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Statement", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Problem Statement: We are given a parallel corpus (E, F ), and a monolingual corpus Mono-Corp. The parallel corpus, (E, F ), contains parallel sentences, (E k , F k ), k = 1, N where E k = e k 0 , e k 1 , ..., e k n k , and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Statement", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "Our goal is to produce a set of word alignments for each sentence pair (E k , F k ). For this, we use an existing word aligner (e.g., Giza++) to produce two directional alignments and a symmetrized alignment:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Statement", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "E2F = (E2F 0 , E2F 1 , .., E2F N ) F2E = (F2E 0 , F2E 1 , .., F2E N ) SYMM = (SYMM 0 , SYMM 1 , .., SYMM N ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Statement", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Each alignment A of (E k , F k ) in E2F, F2E, and SYMM is represented as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Statement", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "is an alignment link in A }. We then use a post-processing stage to improve on SYMM based on word-translation relation, predicted based on a discrimative model derived from E2F, In the rest of this section, we describe our solution to this problem. We describe the self-learning strategy for training a classifier for predicting wordtranslation relation (Section 3.2). In this section, we also describe how to enrich the training data with linguistically motivated features. Finally, we show how TakeTwo aligns each sentence pairs by applying the trained classifier (Section 3.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Statement", |
|
"sec_num": "3.1" |
|
}, |
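
{

"text": "To make this representation concrete, the following minimal sketch (ours, not the authors' released code) loads alignments in the common Giza++/Moses 'i-j' text format into the link-set representation above; the file names are hypothetical:\n\n# Each alignment A is a set of (i, j) links.\ndef parse_alignment(line):\n    # '0-0 1-2 3-1' -> {(0, 0), (1, 2), (3, 1)}\n    return {tuple(map(int, pair.split('-'))) for pair in line.split()}\n\ndef load_alignments(path):\n    # one alignment set per sentence pair: A_0, ..., A_N\n    with open(path, encoding='utf-8') as f:\n        return [parse_alignment(line) for line in f]\n\n# e2f = load_alignments('e2f.align'); f2e and symm are loaded analogously",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Problem Statement",

"sec_num": "3.1"

},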
|
{ |
|
"text": "We attempt to generate automatically annotated word-translation instances in (E, F ) to train a classifier expected to predict word-translation relation. Our learning process is shown in Figure 3 . 3.2.1 Generating Training Instances. In the first learning stage, we use the initial word alignments to generate positive and negative instances for training a classifier that predicts alignment links via crosslingual relatedness. Therefore, the output of this stage is a set of (k, i, j, Pos or Neg) tuples, where Pos or Neg denotes whether (e k i , f k j ) is a valid alignment link in (E k , F k ). To produce the output, we compute TRAIN k :", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 195, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning to Predict Cross-lingual Relatedness", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "{ (k, i, j Pos) | (i, j) 2 E2F k \\ F2E k } [ { (k, i, j, Neg) | (i, j) 2 E2F k [ F2E k -SYMM k }.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning to Predict Cross-lingual Relatedness", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Finally, we return (TRAIN 0 , TRAIN 1 , .., TRAIN N ) as output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning to Predict Cross-lingual Relatedness", |
|
"sec_num": "3.2" |
|
}, |
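
{

"text": "A minimal sketch of this computation (ours, mirroring the set expression for TRAIN_k rather than the authors' actual code), assuming e2f, f2e, and symm are lists of link sets as in Section 3.1:\n\ndef make_training_instances(e2f, f2e, symm):\n    train = []\n    for k, (a_e2f, a_f2e, a_symm) in enumerate(zip(e2f, f2e, symm)):\n        # reliable links, found in both directional alignments -> positive\n        for (i, j) in a_e2f & a_f2e:\n            train.append((k, i, j, 'Pos'))\n        # links proposed by one direction but rejected by symmetrization -> negative\n        for (i, j) in (a_e2f | a_f2e) - a_symm:\n            train.append((k, i, j, 'Neg'))\n    return train",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning to Predict Cross-lingual Relatedness",

"sec_num": "3.2"

},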
|
|
{ |
|
"text": "Step (1) of the this stage, we generate two sets of word alignments (E2F, F2E) and symmetrized alignments SYMM. As will be described in Section 4, we used the existing tool Giza++ to generate these three sets of alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning to Predict Cross-lingual Relatedness", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To illustrate, we show in Figure 4 sample training instances, automatically generated for an example sentence pair. As can be seen in Figure 4 , we produce six positive and three negative training instances. In this case, all nine instances are correctly labeled with Pos or Neg.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 34, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 142, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning to Predict Cross-lingual Relatedness", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To assess the feasibility of the self learning approach, we have checked the annotated instances against hand-tagged links in a small dataset. We : Example positive and negative instances generated from bidirectional alignments of Ex (1). Each instance is augmented with features involving cross-lingual lexical relatedness (f 1 ), morphological relatedness (f 2 ), transliteration (f 3 ), and syntactic compatibility (f 4 ). In order to generate lexical and syntactic features, the sentences are tagged and lemmatized : \"He/PRP made/VBD this/DET remark/NN after/IN Heinonen/NNP arrived/VBD in Tehran/NNP ./.\", and \"\u00f7/Nh //SHI (/P w\u02db\u00c1/Nb \u00b5T/VC \u2211--/Nca \u00e5/Ng |h/VC \u21e1/Nep \u21e7/Nf \u00abq/Na \u21e5/\u21e5\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning to Predict Cross-lingual Relatedness", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "found that around 90% of positive instances are correctly labelled, while around 95% of the negative instances are correctly labelled.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning to Predict Cross-lingual Relatedness", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In the second stage of the learning process, we augment each training instance (k, i, j, Pos/Neg) generated in Section 3.2.1 with a set of features. For the sake of generality, we use a set of linguist features, involving lemmatized forms, morpholgical parts, distributional similarity, parts of speech, and transliteration model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating features.", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Step (1) of the second stage (see Figure 3), we perform tokenization and POS tagging on all sentences (E k , F k ), k = 1, N . We tokenize F k into words or Chinese characters, in order to perform word alignment on both word and morpheme levels. In Step (2), we estimate word translation probability and morpheme translation probability based on the initial alignment results, using both word-to-word and word-to-morpheme alignments. In Step (3), we estimate syllable-to-syllable transliteration probablity using a bilingual named entity list. In Step (4), we develop a distributional similarity model based on MonoCorp.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 40, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "For this, in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, in Step (5), we use these models to generate a set of features for each training instance in TRAIN. The set of features we use include:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "For this, in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Cross-lingual lexical similarity. This lexical feature is based on a simple idea: translating the foreign words f k j into English words e, and then measure similarity between the lemmas of e and e k i . Therefore, we have feature 1 = max e P (e | f k j ) sim (e, e k i ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "For this, in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Morpheme-based similarity feature. This feature is similar to feature 1 , but is estimated based on word part of a foreign word F k j aimed at handling compounds that might involves 1-to-many alignment (e.g., [preserving water, \u00bf4 (jieshui) ]). For this, we use the word-to-morphme and morpheme-to-word alignments to estimate lexical translation probability. Therefore, we have feature 2 = max e, m2f k j P (e | m)sim(e, e k i ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "For this, in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Transliteration feature. The transliteration feature is designed to handle hard-to-align name entities appearing only once or twice in the whole corpus. Therefore, we we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "For this, in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "feature 3 = P translit (f k j | e k j ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "For this, in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where P translit is a transliteration model trained on a list of bilingual named entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "For this, in", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "\u2022 Syntactic feature. We use parts of speech to capture cross-lingual regularity of words and translations on the syntactic level. For instance, an English preposition (i.e., IN) tends to align with a Chinese preposition or directional postposition (i.e., P or Ng). Therefore, we have feature 4 = (pos(e k i ), pos(f k j )), where pos returns the part of speech of English word e k i or foreign word f k j in (E k , F k ). See Figure 4 for example training instances augmented with these crosslingual features.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 426, |
|
"end": 434, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PACLIC 28", |
|
"sec_num": null |
|
}, |
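
{

"text": "Putting the four features together, the following is a minimal runnable sketch (ours, not the authors' code) of Step (5). The tables lex_prob, morph_prob, translit_prob, and sim are plain dicts standing in for the models estimated in Steps (1)-(4), morphemes is a tokenizer callback, and lemmatization is omitted for brevity:\n\ndef features(e_word, f_word, e_pos, f_pos,\n             lex_prob, morph_prob, translit_prob, sim, morphemes):\n    # feature_1 = max_e P(e | f_j) sim(e, e_i)\n    f1 = max((p * sim.get((e, e_word), 0.0)\n              for e, p in lex_prob.get(f_word, {}).items()), default=0.0)\n    # feature_2 = max over morphemes m of f_j of P(e | m) sim(e, e_i)\n    f2 = max((p * sim.get((e, e_word), 0.0)\n              for m in morphemes(f_word)\n              for e, p in morph_prob.get(m, {}).items()), default=0.0)\n    # feature_3 = P_translit(f_j | e_i)\n    f3 = translit_prob.get((f_word, e_word), 0.0)\n    # feature_4 = the POS pair, e.g. ('IN', 'P')\n    f4 = (e_pos, f_pos)\n    return [f1, f2, f3, f4]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generating features.",

"sec_num": "3.2.2"

},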
|
{ |
|
"text": "3.2.3 Training classifier. In the third and final stage of training, we train a classifier on a set of positive and negative feature vectors, generated in Section 3.2.2. The output of this stage is X-Sim, a classifier that provides probabilistic values indicating the likelihood of word-translation relation for (e k i , f k j ) with features calculated in the context of (E k , F k ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PACLIC 28", |
|
"sec_num": null |
|
}, |
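
{

"text": "As a concrete illustration of this stage (our sketch, not the paper's exact setup): the paper trains with libsvm, and scikit-learn's SVC, a wrapper around LIBSVM, can stand in, with probability estimates enabled so that X-Sim returns a likelihood for each candidate link. We assume categorical features such as the POS pair feature_4 have been one-hot encoded beforehand:\n\nfrom sklearn.svm import SVC\n\ndef train_x_sim(X, y):\n    # X: encoded feature vectors; y: 1 for Pos, 0 for Neg\n    clf = SVC(kernel='rbf', probability=True)\n    clf.fit(X, y)\n    return clf\n\n# X-Sim for a candidate link with feature vector phi is then\n# clf.predict_proba([phi])[0][1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "PACLIC 28",

"sec_num": null

},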
|
{ |
|
"text": "Once the classifier X-Sim is trained for predicting word-translation relation, TakeTwo then combine the two initial sets of alignments, using X-Sim to improve performance using the procedure shown in Figure 5 . The alignment procedure is a modified version of GDFA procedure, with four steps: INTER-SECT, GROW-DIAG-SIM, FILL-IN, and FINAL-AND. We use the same INTERSECT and FINAL-AND step, while modifying GROW-DIAG by requiring crosslingual similarity. The additional step of FILL-IN aimed at adding valid links missing from both E2F k and F2E k .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 208, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Run-time Word Alignment", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In Step (1), we initalize SYMM/SIM to an empty set. In Steps (2) through (5), we combine the two alignments E2F k and F2E k for each sentence pair (E k , F k ). And Finally, in Step (6) we output the new symmetrized alignment results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run-time Word Alignment", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In Step (2), we start with an alignment with the links in E2Fk \\ F2E k . In Step (3), we execute the GROW-DIAG-SIM step to add additional links neighboring the intersection links. A neighboring union link (E2Fk [ F2E k ), with high predicted probabiliy, are added to the results. In Step (4), we attempt to fill in links which are probably wordtranslation pairs, if the link is not in conflict with the current alignment. In Step (5), we execute the FINAL-AND step the same way as in GDFA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run-time Word Alignment", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In Step (6), we accumulate symmetrized alignment for a sentence pair. Finally, we add the symmetrized alignment to SYMM/SIM and return SYMM/SIM as output (in Step 7).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run-time Word Alignment", |
|
"sec_num": "3.3" |
|
}, |
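
{

"text": "A simplified sketch of Steps (2)-(5) for one sentence pair (ours; it abstracts away details such as the original GDFA link ordering). Here x_sim(i, j) is the classifier's predicted word-translation probability and thr is an empirically set threshold:\n\nNEIGHBORS = [(-1, 0), (0, -1), (1, 0), (0, 1),\n             (-1, -1), (-1, 1), (1, -1), (1, 1)]\n\ndef take_two_align(e2f, f2e, n, m, x_sim, thr):\n    aligned = set(e2f & f2e)                 # Step (2): INTERSECT\n    union = e2f | f2e\n    added = True                             # Step (3): GROW-DIAG-SIM\n    while added:\n        added = False\n        for (i, j) in sorted(aligned):\n            for di, dj in NEIGHBORS:\n                ci, cj = i + di, j + dj\n                e_new = all(ii != ci for ii, _ in aligned)\n                f_new = all(jj != cj for _, jj in aligned)\n                if ((ci, cj) in union and (e_new or f_new)\n                        and x_sim(ci, cj) >= thr):\n                    aligned.add((ci, cj))\n                    added = True\n    for i in range(n):                       # Step (4): FILL-IN\n        for j in range(m):\n            free = all(ii != i and jj != j for ii, jj in aligned)\n            if free and x_sim(i, j) >= thr:\n                aligned.add((i, j))\n    for (i, j) in sorted(union - aligned):   # Step (5): FINAL-AND\n        if (all(ii != i for ii, _ in aligned)\n                and all(jj != j for _, jj in aligned)):\n            aligned.add((i, j))\n    return aligned",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Run-time Word Alignment",

"sec_num": "3.3"

},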
|
{ |
|
"text": "We evaluate our alignment systems directly. We calculate recall, precision, and F-measure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For self learning, we ran Giza++ on the FBIS corpus with 250 thousand parallel setnences (LDC-2003E14) . The training scheme is as follows: 5 iterations of Model 1, followed by 5 iterations of HMM, followed by 5 iterations of Model 3 and then 5 iterations of Model 4. The systems evaluated include:", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 102, |
|
"text": "(LDC-2003E14)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 TakeTwo.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 TakeTwo (no fill-in).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Giza++: grow-diag-final-and.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Giza++: intersection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Giza++: union.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We manually aligned 300 random selected sentences with English and Chinese words as the reference answers. For simplicity, we do not distinguished between sure and uncertain alignment links as described in (Och and Ney, 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 225, |
|
"text": "(Och and Ney, 2004)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For preprocessing and generating syntactic features, we used the Genia Tagger and CKIP Word Segmenter to generate tokens and parts of speech. We also used the Wikipedia Dump (English) to build distributional word similarity measure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In order to train a classifier for word-translation relation, we used SVM classifier with the tool libsvm. We used lexical, morphological, transliteration, and syntactic features, as described in Section 3.2.2. For simplicity, we used an empirically determined values for the thresholds of similarity constraint in T akeT wo.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Each word-translation link in the test sentences produced by a word aligner was judged to be either correct or incorrect in context. Precision was calculated as the fraction of correct pairs among the pair derived, recall was calculated as the fraction all correct pairs in the reference key, and the F-measure was calculated with equal weights for both precision and recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.2" |
|
}, |
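
{

"text": "In code, these metrics reduce to simple set operations over the predicted and reference link sets (a sketch, assuming the link-set representation from Section 3.1):\n\ndef prf(predicted, reference):\n    correct = len(predicted & reference)\n    precision = correct / len(predicted) if predicted else 0.0\n    recall = correct / len(reference) if reference else 0.0\n    f = (2 * precision * recall / (precision + recall)\n         if precision + recall else 0.0)\n    return precision, recall, f",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Metrics",

"sec_num": "4.2"

},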
|
{ |
|
"text": "In this section, we report the results of the experimental evaluation. Table 1 lists the precision, recall, and F-measure of two T akeT wo variant systems, and the Giza++ derived systems. All six systems were tested and evaluated over the test set of 300 parallel sentences sampled from FBIS.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 78, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In summary, the T akeT wo with the FILL-IN step has the highest F-measure, while T akeT wo without the FILL-IN step has the second highest F-measure, followed by GIZA++ with GDFA symmetrization. Both T akeT wo systems outperform the state of the art systems and gains of 6% and 3% in F-measure, with higher precision rate (+16% and +9%) with small descreases in recall rate (-3% and -1%). These results indicate that relevance feedback combined with a rich set of linguistic features are very effective in improving word alginment accuracy in a postprocessing setting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We have presented a new method for word alignment. In our work, we use self learning to generate training data for classifying word-translation relation, based on a rich set of features. The classifier is used in the second word alignment round to val- idate links in inital alignment round 'and to fill in missing links. Preliminary experiments and evaluations show our method is capable of aligning words and translations with high precision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Many avenues exist for future research and improvement of our system. For example, Bleu score of SMT systems using the word alignment results could be used to evaluate the effectiveness of word alignment. Phrasal translations in the bilingual lexicon could be used to make many-to-many alignment decisions. In addition, natural language processing techniques such as word clustering, and crosslingual relatedness could be attempted to improve recall. Another interesting direction to explore is training an ensemble of classifiers. Yet another direction of research would be to align word from scratch using the classifier in a beam-search algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future work", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A statistical approach to machine translation", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Peter F Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen A Della", |
|
"middle": [], |
|
"last": "Cocke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent J Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fredrick", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Roossin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Computational linguistics", |
|
"volume": "16", |
|
"issue": "2", |
|
"pages": "79--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F Brown, John Cocke, Stephen A Della Pietra, Vin- cent J Della Pietra, Fredrick Jelinek, John D Lafferty, Robert L Mercer, and Paul S Roossin. 1990. A statis- tical approach to machine translation. Computational linguistics, 16(2):79-85.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "Vincent J Della", |
|
"middle": [], |
|
"last": "Peter F Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen A Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert L", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational linguistics, 19(2):263-311.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning to find translations and transliterations on the web", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Joseph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jyh-Shing Roger", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "130--134", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Z Chang, Jason S Chang, and Jyh-Shing Roger Jang. 2012. Learning to find translations and translit- erations on the web. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguis- tics: Short Papers-Volume 2, pages 130-134. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A probability model to improve word alignment", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "88--95", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Cherry and Dekang Lin. 2003. A probability model to improve word alignment. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics-Volume 1, pages 88-95. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Soft syntactic constraints for word alignment through discriminative training", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the COLING/ACL on Main conference poster sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Cherry and Dekang Lin. 2006. Soft syntactic constraints for word alignment through discriminative training. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 105-112. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Optimizing word alignment combination for phrase table training", |
|
"authors": [ |
|
{ |
|
"first": "Yonggang", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "229--232", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonggang Deng and Bowen Zhou. 2009. Optimizing word alignment combination for phrase table training. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 229-232. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Machine translation: a view from the Lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Bonnie", |
|
"middle": [ |
|
"Jean" |
|
], |
|
"last": "Dorr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bonnie Jean Dorr. 1993. Machine translation: a view from the Lexicon. MIT press.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Identifying word correspondences in parallel texts", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "William", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"Ward" |
|
], |
|
"last": "Gale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "HLT", |
|
"volume": "91", |
|
"issue": "", |
|
"pages": "152--157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William A Gale and Kenneth Ward Church. 1991. Iden- tifying word correspondences in parallel texts. In HLT, volume 91, pages 152-157.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Loosely tree-based alignment for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "80--87", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Gildea. 2003. Loosely tree-based alignment for machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 80-87. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A class-based approach to word alignment", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Sue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Ker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Computational Linguistics", |
|
"volume": "23", |
|
"issue": "2", |
|
"pages": "313--343", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sue J Ker and Jason S Chang. 1997. A class-based ap- proach to word alignment. Computational Linguistics, 23(2):313-343.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Machine transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Graehl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Computational Linguistics", |
|
"volume": "24", |
|
"issue": "4", |
|
"pages": "599--612", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599- 612.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Mining parenthetical translations from the web by word alignment", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaojun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Pasca", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "ACL", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "994--1002", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekang Lin, Shaojun Zhao, Benjamin Van Durme, and Marius Pasca. 2008. Mining parenthetical translations from the web by word alignment. In ACL, volume 8, pages 994-1002.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Automatic retrieval and clustering of similar words", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 17th international conference on Computational linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "768--774", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekang Lin. 1998a. Automatic retrieval and cluster- ing of similar words. In Proceedings of the 17th in- ternational conference on Computational linguistics- Volume 2, pages 768-774. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "An information-theoretic definition of similarity", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "ICML", |
|
"volume": "98", |
|
"issue": "", |
|
"pages": "296--304", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekang Lin. 1998b. An information-theoretic definition of similarity. In ICML, volume 98, pages 296-304.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Loglinear models for word alignment", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shouxun", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "459--466", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2005. Log- linear models for word alignment. In Proceedings of the 43rd Annual Meeting on Association for Compu- tational Linguistics, pages 459-466. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A survey of statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Lopez. 2007. A survey of statistical machine translation. Technical report, DTIC Document.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Building a crosslingual relatedness thesaurus using a graph similarity measure", |
|
"authors": [ |
|
{ |
|
"first": "Lukas", |
|
"middle": [], |
|
"last": "Michelbacher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Laws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beate", |
|
"middle": [], |
|
"last": "Dorow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulrich", |
|
"middle": [], |
|
"last": "Heid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lukas Michelbacher, Florian Laws, Beate Dorow, Ulrich Heid, and Hinrich Sch\u00fctze. 2010. Building a cross- lingual relatedness thesaurus using a graph similarity measure. In LREC.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A discriminative framework for bilingual word alignment", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Robert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert C Moore. 2005. A discriminative framework for bilingual word alignment. In Proceedings of the con- ference on Human Language Technology and Empir- ical Methods in Natural Language Processing, pages 81-88. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Using the web as a bilingual dictionary", |
|
"authors": [ |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nagata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teruka", |
|
"middle": [], |
|
"last": "Saito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the workshop on Data-driven methods in machine translation", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Masaaki Nagata, Teruka Saito, and Kenji Suzuki. 2001. Using the web as a bilingual dictionary. In Proceed- ings of the workshop on Data-driven methods in ma- chine translation-Volume 14, pages 1-8. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Combining word-level and character-level models for machine translation between closely-related languages", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "301--305", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov and J\u00f6rg Tiedemann. 2012. Combin- ing word-level and character-level models for machine translation between closely-related languages. In Pro- ceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 301-305. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |

"first": "Franz", |

"middle": [ |

"Josef" |

], |

"last": "Och", |

"suffix": "" |

}, |

{ |

"first": "Hermann", |

"middle": [], |

"last": "Ney", |

"suffix": "" |

} |
|
], |
|
"year": 2003, |
|
"venue": "Computational linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment mod- els. Computational linguistics, 29(1):19-51.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The alignment template approach to statistical machine translation", |
|
"authors": [ |
|
{ |

"first": "Franz", |

"middle": [ |

"Josef" |

], |

"last": "Och", |

"suffix": "" |

}, |

{ |

"first": "Hermann", |

"middle": [], |

"last": "Ney", |

"suffix": "" |

} |
|
], |
|
"year": 2004, |
|
"venue": "Computational linguistics", |
|
"volume": "30", |
|
"issue": "4", |
|
"pages": "417--449", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2004. The align- ment template approach to statistical machine transla- tion. Computational linguistics, 30(4):417-449.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Unsupervised syntactic alignment with inversion transduction grammars", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Pauls", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "118--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Pauls, Dan Klein, David Chiang, and Kevin Knight. 2010. Unsupervised syntactic alignment with inversion transduction grammars. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 118-126. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Discriminative word alignment with a function word reordering model", |
|
"authors": [ |
|
{ |
|
"first": "Hendra", |
|
"middle": [], |
|
"last": "Setiawan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "534--544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hendra Setiawan, Chris Dyer, and Philip Resnik. 2010. Discriminative word alignment with a function word reordering model. In Proceedings of the 2010 Con- ference on Empirical Methods in Natural Language Processing, pages 534-544. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A discriminative matching approach to word alignment", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Lacoste-Julien", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "73--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word align- ment. In Proceedings of the conference on Human Language Technology and Empirical Methods in Nat- ural Language Processing, pages 73-80. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Large-scale word alignment using soft dependency cohesion constraints", |
|
"authors": [ |
|
{ |
|
"first": "Zhiguo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengqing", |
|
"middle": [], |
|
"last": "Zong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Transactions of Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "6", |
|
"pages": "291--300", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiguo Wang and Chengqing Zong. 2013. Large-scale word alignment using soft dependency cohesion con- straints. Transactions of Association for Computa- tional Linguistics, 1(6):291-300.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Learning to find english to chinese transliterations on the web", |
|
"authors": [ |
|
{ |
|
"first": "Jian-Cheng", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "EMNLP-CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "996--1004", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jian-Cheng Wu and Jason S Chang. 2007. Learning to find english to chinese transliterations on the web. In EMNLP-CoNLL, pages 996-1004.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Three example alignments produced by Giza++ for Ex. (1): (a) Chinese-English alignment. (b) English-Chinese alignment. (c) The symmetrized alignment of combining (a) and (b) by running the grow-diag-final-and procedure. Note that the dark cells (in Figure 1(c)) represent links in the intersection of two alignments, while the gray cells represent links in the rest of the union.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "metrizing", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"text": "he(\u00f7) made(|h) this(\u21e1) remark(\u00abq) after(\u00e5) heinonen(w\u02db\u00c1) arrived(\u00b5T) in(\u00b5T) tehran(\u2211--) . (\u21e5) Alignment dotplot (see figure on the right) Note that the dark cells represent links in the intersection of two alignments, while the gray cells represent links in the rest of the union", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
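The dark/gray cell distinction in the dotplots above is plain set algebra over the two directional link sets. Below is a minimal, self-contained Python sketch, not from the paper, that prints such a dotplot; the toy sentence, link sets, and rendering characters are all invented for illustration.

```python
# Minimal sketch (hypothetical data): render a text dotplot where '#' marks
# links in the intersection of the two directional alignments (dark cells)
# and '+' marks links in the rest of the union (gray cells).

def dotplot(e_words, f_words, e2f, f2e):
    intersection = e2f & f2e                  # links agreed on by both directions
    union_rest = (e2f | f2e) - intersection   # links found in only one direction
    for i, e in enumerate(e_words):
        cells = []
        for j in range(len(f_words)):
            if (i, j) in intersection:
                cells.append('#')
            elif (i, j) in union_rest:
                cells.append('+')
            else:
                cells.append('.')
        print(''.join(cells), e)

# Hypothetical toy links: each pair is (English index, foreign index).
e_words = ['he', 'made', 'this', 'remark']
f_words = ['f0', 'f1', 'f2', 'f3']
e2f = {(0, 0), (1, 1), (2, 2), (3, 3)}
f2e = {(0, 0), (1, 1), (3, 3), (2, 3)}
dotplot(e_words, f_words, e2f, f2e)
```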
|
"FIGREF4": { |
|
"num": null, |
|
"text": "An example TakeTwo session and results", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"text": "E2F, F2E, SYMM = WordAliger(E, F) (2) E2F-m, F2E-m, SYMM-m = WordAligner(E, F-morph) (3) POSITIVES, NEGATIVES = INTERSECT(E2F, F2E), UNION(E2F, F2E) -SYMM (4) Return TRAIN = POSITIVES + NEGATIVES Stage 2 (Section 3.2.2) (1) Tag each sentence E(k) and F(k) with parts of speech For all English word e, foreign word f, and morpheme m of f (2a) Estimate LTP, P(e|f) based on F2E (2b) Estimate MTP, P(e|m) based on E2F-m (3) Build a transliteration model P_translit(e|f) based on an EF name list (4) Build a distributional similarity model Sim(e, e') based on MonoCorpFor each link (e, f) in training data TRAIN, augment (e, f) with features (5a) f1 = max(e') P(e'|f) Sim(e', e), f3 = P_translit(e|f), (5b) f2 = max(m, e') P(e'|m) Sim(e', e), f4 = (pos(e), pos(f))Stage 3 (Section 3.2.3)(1) Return the classifier X-SIM trained on the feature vectors", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
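As a reading aid for Stage 1 above, here is a minimal Python sketch of the self-labeling step. It is not the authors' implementation: it assumes the directional alignments and the symmetrized alignment have already been loaded as link sets for one sentence pair, and the example links are invented.

```python
# Minimal sketch (assumed inputs, not the paper's code): Stage 1 labels links
# in the intersection of the directional alignments as positive examples, and
# links that appear in the union but were dropped by the symmetrized
# alignment SYMM as negative examples.

def make_training_links(e2f, f2e, symm):
    positives = e2f & f2e            # agreed-upon links: likely correct
    negatives = (e2f | f2e) - symm   # union links rejected by symmetrization
    return positives, negatives

# Hypothetical link sets for one sentence pair.
e2f = {(0, 0), (1, 1), (2, 2)}
f2e = {(0, 0), (1, 1), (3, 2)}
symm = {(0, 0), (1, 1), (2, 2)}
positives, negatives = make_training_links(e2f, f2e, symm)
print(sorted(positives))   # [(0, 0), (1, 1)]
print(sorted(negatives))   # [(3, 2)]
```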
|
"FIGREF6": { |
|
"num": null, |
|
"text": "Ouline of the process to train the TakeTwo system. F2E, SYMM, MonoCorp, and other linguistic resources.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF7": { |
|
"num": null, |
|
"text": "/RF(Alignment): Iterate until no new points added For English word e = 0 ... en, foreign word f = 0 ... fm If ( e aligned with f ) For each neighboring point ( e-new, f-new ): If ( ( e-new not aligned or f-new not aligned ) and ( e-new, f-new ) in union( E2F(k), F2E(k) ) and ( X-SIM ( e-new, f-new ) > threshold ) ) Add to Alignment the link ( e-new, f-new ) FILL(alignment): Alignment_candidates = [] For english word e-new = 0 ... en, foreign word f-new = 0 ... fn If ( ( e-new not aligned and f-new not aligned ) and ( X-SIM ( e-new, f-new ) > threshold ) ) Add to Alignment_candidates the link ( e-new, f-new ) Sort Alignment_candidates by decreasing X-SIM values For link (e-new, f-new) in Alignment_candidates If ( e-new not aligned and f-new not aligned ) Add to Alignment the link ( e-new, f-new ) FINAL-AND(Alignment): For English word e-new = 0 ... en, foreign word f-new = 0 ... fn If ( ( e-new not aligned and f-new not aligned ) and ( e-new, f-new ) in alignment ) Add to Alignment the link ( e-new, f-new )", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
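The following Python sketch mirrors the GROW/RF and FILL steps above. It is a sketch under stated assumptions, not the released system: `x_sim` is a hypothetical stand-in for the trained X-SIM classifier's score on a candidate link, and the neighborhood and threshold value are assumed.

```python
# Minimal sketch of the grow and fill steps; x_sim, threshold, and all data
# are hypothetical stand-ins for the trained X-SIM classifier and its inputs.

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def grow(alignment, union_links, x_sim, threshold=0.5):
    """Repeatedly add union links adjacent to existing links that X-SIM validates."""
    added = True
    while added:
        added = False
        e_aligned = {e for e, _ in alignment}
        f_aligned = {f for _, f in alignment}
        for e, f in list(alignment):
            for de, df in NEIGHBORS:
                cand = (e + de, f + df)
                if (cand in union_links and cand not in alignment
                        and (cand[0] not in e_aligned or cand[1] not in f_aligned)
                        and x_sim(cand) > threshold):
                    alignment.add(cand)
                    e_aligned.add(cand[0])
                    f_aligned.add(cand[1])
                    added = True
    return alignment

def fill(alignment, n_e, n_f, x_sim, threshold=0.5):
    """Greedily link still-unaligned word pairs in decreasing X-SIM order."""
    e_aligned = {e for e, _ in alignment}
    f_aligned = {f for _, f in alignment}
    candidates = [(e, f) for e in range(n_e) for f in range(n_f)
                  if e not in e_aligned and f not in f_aligned
                  and x_sim((e, f)) > threshold]
    for e, f in sorted(candidates, key=x_sim, reverse=True):
        if e not in e_aligned and f not in f_aligned:
            alignment.add((e, f))
            e_aligned.add(e)
            f_aligned.add(f)
    return alignment
```

Sorting the FILL candidates by decreasing score before the greedy one-to-one pass corresponds to the figure's "Sort Alignment_candidates by decreasing X-SIM values" line; it keeps high-scoring links from being blocked by lower-scoring ones.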
|
"FIGREF8": { |
|
"num": null, |
|
"text": "Aligning word and translation at run-time.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>PACLIC 28</td></tr><tr><td>describe Giza++, an implementation of the</td></tr><tr><td>! 284</td></tr></table>", |
|
"text": "Input: ... He made this remark after Heinonen arrived in Tehran.", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |