{
"paper_id": "H05-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:34:25.370126Z"
},
"title": "NeurAlign: Combining Word Alignments Using Neural Networks",
"authors": [
{
"first": "Necip",
"middle": [],
"last": "Fazil",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland College Park",
"location": {
"postCode": "20742",
"region": "MD"
}
},
"email": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland College Park",
"location": {
"postCode": "20742",
"region": "MD"
}
},
"email": "bonnie@umiacs.umd.edu"
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland College Park",
"location": {
"postCode": "20742",
"region": "MD"
}
},
"email": "christof@umiacs.umd.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a novel approach to combining different word alignments. We view word alignment as a pattern classification problem, where alignment combination is treated as a classifier ensemble, and alignment links are adorned with linguistic features. A neural network model is used to learn word alignments from the individual alignment systems. We show that our alignment combination approach yields a significant 20-34% relative error reduction over the best-known alignment combination technique on English-Spanish and English-Chinese data.",
"pdf_parse": {
"paper_id": "H05-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a novel approach to combining different word alignments. We view word alignment as a pattern classification problem, where alignment combination is treated as a classifier ensemble, and alignment links are adorned with linguistic features. A neural network model is used to learn word alignments from the individual alignment systems. We show that our alignment combination approach yields a significant 20-34% relative error reduction over the best-known alignment combination technique on English-Spanish and English-Chinese data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Parallel texts are a valuable resource in natural language processing and essential for projecting knowledge from one language onto another. Word-level alignment is a critical component of a wide range of NLP applications, such as construction of bilingual lexicons (Melamed, 2000) , word sense disambiguation (Diab and Resnik, 2002) , projection of language resources (Yarowsky et al., 2001) , and statistical machine translation. Although word-level aligners tend to perform well when there is sufficient training data, the quality decreases as the size of training data decreases. Even with large amounts of training data, statistical aligners have been shown to be susceptible to mis-aligning phrasal constructions (Dorr et al., 2002) due to many-to-many correspondences, morphological language distinctions, paraphrased and free translations, and a high percentage of function words (about 50% of the tokens in most texts).",
"cite_spans": [
{
"start": 266,
"end": 281,
"text": "(Melamed, 2000)",
"ref_id": "BIBREF12"
},
{
"start": 310,
"end": 333,
"text": "(Diab and Resnik, 2002)",
"ref_id": "BIBREF5"
},
{
"start": 369,
"end": 392,
"text": "(Yarowsky et al., 2001)",
"ref_id": "BIBREF22"
},
{
"start": 719,
"end": 738,
"text": "(Dorr et al., 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a novel approach to alignment combination, NeurAlign, that treats each alignment system as a black box and merges their outputs. We view word alignment as a pattern classification problem and treat alignment combination as a classifier ensemble (Hansen and Salamon, 1990; Wolpert, 1992) . The ensemble-based approach was developed to select the best features of different learning algorithms, including those that may not produce a globally optimal solution (Minsky, 1991) .",
"cite_spans": [
{
"start": 265,
"end": 291,
"text": "(Hansen and Salamon, 1990;",
"ref_id": "BIBREF8"
},
{
"start": 292,
"end": 306,
"text": "Wolpert, 1992)",
"ref_id": "BIBREF21"
},
{
"start": 478,
"end": 492,
"text": "(Minsky, 1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use neural networks to implement the classifier-ensemble approach, as these have previously been shown to be effective for combining classifiers (Hansen and Salamon, 1990) . Neural nets with 2 or more layers and non-linear activation functions are capable of learning any function of the feature space with arbitrarily small error. Neural nets have been shown to be effective with (1) highdimensional input vectors, (2) relatively sparse data, and (3) noisy data with high within-class variability, all of which apply to the word alignment problem.",
"cite_spans": [
{
"start": 148,
"end": 174,
"text": "(Hansen and Salamon, 1990)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: In Section 2, we describe previous work on improving word alignments and use of classifier ensembles in NLP. Section 3 gives a brief overview of neural networks. In Section 4, we present a new approach, NeurAlign, that learns how to combine individual word alignment systems. Section 5 describes our experimental design and the results on English-Spanish and English-Chinese. We demonstrate that NeurAlign yields significant improvements over the best-known alignment combination technique. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous algorithms for improving word alignments have attempted to incorporate additional knowledge into their modeling. For example, Liu (2005) uses a log-linear combination of linguistic features. Additional linguistic knowledge can be in the form of part-of-speech tags. (Toutanova et al., 2002) or dependency relations (Cherry and Lin, 2003) . Other approaches to improving alignment have combined alignment models, e.g., using a log-linear combination (Och and Ney, 2003) or mutually independent association clues (Tiedemann, 2003) .",
"cite_spans": [
{
"start": 135,
"end": 145,
"text": "Liu (2005)",
"ref_id": "BIBREF11"
},
{
"start": 275,
"end": 299,
"text": "(Toutanova et al., 2002)",
"ref_id": "BIBREF19"
},
{
"start": 324,
"end": 346,
"text": "(Cherry and Lin, 2003)",
"ref_id": "BIBREF3"
},
{
"start": 458,
"end": 477,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 520,
"end": 537,
"text": "(Tiedemann, 2003)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A simpler approach was developed by Ayan et al. 2004, where word alignment outputs are combined using a linear combination of feature weights assigned to the individual aligners. Our method is more general in that it uses a neural network model that is capable of learning nonlinear functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Classifier ensembles are used in several NLP applications. Some NLP applications for classifier ensembles are POS tagging (Brill and Wu, 1998; Abney et al., 1999) , PP attachment (Abney et al., 1999) , word sense disambiguation (Florian and Yarowsky, 2002) , and parsing (Henderson and Brill, 2000) .",
"cite_spans": [
{
"start": 122,
"end": 142,
"text": "(Brill and Wu, 1998;",
"ref_id": "BIBREF2"
},
{
"start": 143,
"end": 162,
"text": "Abney et al., 1999)",
"ref_id": "BIBREF0"
},
{
"start": 179,
"end": 199,
"text": "(Abney et al., 1999)",
"ref_id": "BIBREF0"
},
{
"start": 228,
"end": 256,
"text": "(Florian and Yarowsky, 2002)",
"ref_id": "BIBREF7"
},
{
"start": 271,
"end": 298,
"text": "(Henderson and Brill, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The work reported in this paper is the first application of classifier ensembles to the word-alignment problem. We use a different methodology to combine classifiers that is based on stacked generalization (Wolpert, 1992) , i.e., learning an additional model on the outputs of individual classifiers.",
"cite_spans": [
{
"start": 206,
"end": 221,
"text": "(Wolpert, 1992)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A multi-layer perceptron (MLP) is a feed-forward neural network that consists of several units (neurons) that are connected to each other by weighted links. As illustrated in Figure 1 , an MLP consists of one input layer, one or more hidden layers, and one output layer. The external input is presented to the input layer, propagated forward through the hidden layers and creates the output vector in the output layer. Each unit i in the network computes its output with respect to its net input net i = j w ij a j , where j represents all units in the previous layer that are connected to the unit i. The output of unit i is computed by passing the net input through a non-linear activation function f , i.e. a i = f (net i ).",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Networks",
"sec_num": "3"
},
{
"text": "The most commonly used non-linear activation functions are the log sigmoid function f(x) = 1 / (1 + e^{-x}) and the hyperbolic tangent sigmoid function f(x) = (1 - e^{-2x}) / (1 + e^{-2x}). The latter has been shown to be more suitable for binary classification problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Networks",
"sec_num": "3"
},
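{
"text": "As a concrete illustration of the computation above, the following is a minimal Python/NumPy sketch (ours, not the authors' implementation; the weight matrices and shapes are illustrative assumptions):\n\nimport numpy as np\n\ndef mlp_forward(x, W_hidden, W_out):\n    # net input of each hidden unit i: net_i = sum_j w_ij * a_j\n    net_hidden = W_hidden @ x\n    # hyperbolic tangent sigmoid activation: a_i = f(net_i)\n    a_hidden = np.tanh(net_hidden)\n    # single output unit, also passed through the tanh sigmoid\n    return np.tanh(W_out @ a_hidden)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Networks",
"sec_num": "3"
},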
{
"text": "The critical question is the computation of weights associated with the links connecting the neurons. In this paper, we use the resilient backpropagation (RPROP) algorithm (Riedmiller and Braun, 1993) , which is based on the gradient descent method, but converges faster and generalizes better.",
"cite_spans": [
{
"start": 172,
"end": 200,
"text": "(Riedmiller and Braun, 1993)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Networks",
"sec_num": "3"
},
{
"text": "We propose a new approach, NeurAlign, that learns how to combine individual word alignment systems. We treat each alignment system as a classifier and transform the combination problem into a classifier ensemble problem. Before describing the NeurAlign approach, we first introduce some terminology used in the description below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeurAlign Approach",
"sec_num": "4"
},
{
"text": "Let E = e 1 , . . . , e t and F = f 1 , . . . , f s be two sentences in two different languages. An alignment link (i, j) corresponds to a translational equivalence between words e i and f j . Let A k be an alignment between sentences E and F , where each element a \u2208 A k is an alignment link (i, j). Let A = {A 1 , . . . , A l } be a set of alignments between E and F . We refer to the true alignment as T , where each a \u2208 T is of the form (i, j). A neighborhood of an alignment link (i, j)-denoted by N (i, j)consists of 8 possible alignment links in a 3 \u00d7 3 window with (i, j) in the center of the window. Each element of N (i, j) is called a neighboring link of (i, j).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeurAlign Approach",
"sec_num": "4"
},
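{
"text": "A minimal sketch (ours) of the neighborhood just defined, in Python:\n\ndef neighborhood(i, j):\n    # the 8 links in the 3 x 3 window centered at (i, j), excluding (i, j) itself\n    return [(i + di, j + dj)\n            for di in (-1, 0, 1) for dj in (-1, 0, 1)\n            if (di, dj) != (0, 0)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeurAlign Approach",
"sec_num": "4"
},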
{
"text": "Our goal is to combine the information in A 1 , . . . , A l such that the resulting alignment is closer to T . A straightforward solution is to take the intersection or union of the individual alignments, or perform a majority voting for each possible alignment link (i, j). Here, we use an additional model to learn how to combine outputs of A 1 , . . . , A l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeurAlign Approach",
"sec_num": "4"
},
{
"text": "We decompose the task of combining word alignments into two steps: (1) Extract features; and (2) Learn a classifier from the transformed data. We describe each of these two steps in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeurAlign Approach",
"sec_num": "4"
},
{
"text": "Given sentences E and F , we create a (potential) alignment instance (i, j) for all possible word combinations. A crucial component of building a classifier is the selection of features to represent the data. The simplest approach is to treat each alignmentsystem output as a separate feature upon which we build a classifier. However, when only a few alignment systems are combined, this feature space is not sufficient to distinguish between instances. One of the strategies in the classification literature is to supply the input data to the set of features as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "4.1"
},
{
"text": "While combining word alignments, we use two types of features to describe each instance (i, j):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "4.1"
},
{
"text": "(1) linguistic features and (2) alignment features. Linguistic features include POS tags of both words (e i and f j ) and a dependency relation for one of the words (e i ). We generate POS tags using the MXPOST tagger (Ratnaparkhi, 1996) for English and Chinese, and Connexor for Spanish. Dependency relations are produced using a version of the Collins parser (Collins, 1997 ) that has been adapted for building dependencies.",
"cite_spans": [
{
"start": 218,
"end": 237,
"text": "(Ratnaparkhi, 1996)",
"ref_id": "BIBREF16"
},
{
"start": 361,
"end": 375,
"text": "(Collins, 1997",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "4.1"
},
{
"text": "Alignment features consist of features that are extracted from the outputs of individual alignment systems. For each alignment A k \u2208 A, the following are some of the alignment features that can be used to describe an instance (i, j):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "4.1"
},
{
"text": "1. Whether (i, j) is an element of A_k or not; 2. The translation probability p(f_j|e_i) computed over A_k (see footnote 1); 3. The fertility of (i.e., the number of words in F that are aligned to) e_i in A_k; 4. The fertility of (i.e., the number of words in E that are aligned to) f_j in A_k; 5. For each neighbor (x, y) \u2208 N(i, j), whether (x, y) \u2208 A_k or not (8 features in total); 6. For each neighbor (x, y) \u2208 N(i, j), the translation probability p(f_y|e_x) computed over A_k (8 features in total).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "4.1"
},
{
"text": "It is also possible to use variants, or combinations, of these features to reduce feature space. Figure 2 shows an example of how we transform the outputs of 2 alignment systems, A 1 and A 2 , for an alignment link (i, j) into data with some of the features above. We use -1 and 1 to represent the absence and existence of a link, respectively. The neighboring links are presented in row-by-row order. For each sentence pair E = e 1 , . . . , e t and F = f 1 , . . . , f s , we generate s \u00d7 t instances to represent the sentence pair in the classification data.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 105,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "4.1"
},
{
"text": "X X X X X X A 1 A 2 e i-1 e i e i+1 f j-1 f j f j+1 1 (for A 1 ), 0 (for A 2 ) fertility(f j ) 2 (for A 1 ), 1 (for A 2 ) fertility(e i ) 2 (for A 1 ), 3 (for A 2 ) total neighbors 1, -1, -1, 1, 1, -1, -1, 1 neighbors (A 1 \u222a A 2 ) 1, -1, -1, -1, 1, -1, -1, 1 neighbors (A 2 ) -1, -1, -1, 1, -1, -1, -1, 1 neighbors (A 1 ) 1 (for A 1 ), -1 (for A 2 ) outputs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "4.1"
},
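{
"text": "A minimal sketch (ours, with illustrative helper names) of extracting Figure 2 style features for a single candidate link, where each alignment A_k is a set of (i, j) pairs and neighborhood is the helper sketched earlier:\n\ndef link_features(i, j, alignments):\n    feats = []\n    for A in alignments:\n        feats.append(1 if (i, j) in A else -1)          # link exists in A_k?\n        feats.append(sum(1 for (x, y) in A if x == i))  # fertility of e_i\n        feats.append(sum(1 for (x, y) in A if y == j))  # fertility of f_j\n        # 8 neighbor indicators, row-by-row\n        feats += [1 if n in A else -1 for n in neighborhood(i, j)]\n    return feats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "4.1"
},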
{
"text": "Supervised learning requires the correct output, which here is the true alignment T . If an alignment link (i, j) is an element of T , then we set the correct output to 1, and to \u22121, otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "4.1"
},
{
"text": "Once we transform the alignments into a set of instances with several features, the remaining task is to learn a classifier from this data. In the case of word alignment combination, there are important issues to consider for choosing an appropriate classifier. First, there is a very limited amount of manually annotated data. This may give rise to poor generalizations because it is very likely that unseen data include lots of cases that are not observed in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning A Classifier",
"sec_num": "4.2"
},
{
"text": "Second, the distribution of the data according to the classes is skewed. In a preliminary study on an English-Spanish data set, we found out that only 4% of the all word pairs are aligned to each other by humans, among a possible 158K word pairs. Moreover, only 60% of those aligned word pairs were Figure 3 : NeurAlign 1 -Alignment Combination Using All Data At Once also aligned by the individual alignment systems that were tested. Finally, given the distribution of the data, it is difficult to find the right features to distinguish between instances. Thus, it is prudent to use as many features as possible and let the learning algorithm filter out the redundant features.",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 307,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning A Classifier",
"sec_num": "4.2"
},
{
"text": "Below, we describe how neural nets are used at different levels to build a good classifier. Figure 3 illustrates how we combine alignments using all the training data at the same time (NeurAlign 1 ). First, the outputs of individual alignments systems and the original corpus (enriched with additional linguistic features) are passed to the feature extraction module. This module transforms the alignment problem into a classification problem by generating a training instance for every pair of words between the sentences in the original corpus. Each instance is represented by a set of features (described in Section 4.1). The new training data is passed to a neural net learner, which outputs whether an alignment link exists for each training instance.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 100,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning A Classifier",
"sec_num": "4.2"
},
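{
"text": "A minimal end-to-end sketch (ours) of the NeurAlign 1 transformation described above: one instance per word pair (i, j), labeled 1 if the link is in the true alignment T and -1 otherwise (linguistic features omitted for brevity; link_features is the helper sketched earlier):\n\ndef neuralign1_training_data(E, F, alignments, gold):\n    X, y = [], []\n    for i in range(len(E)):\n        for j in range(len(F)):\n            X.append(link_features(i, j, alignments))\n            y.append(1 if (i, j) in gold else -1)\n    return X, y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning A Classifier",
"sec_num": "4.2"
},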
{
"text": "The use of multiple neural networks (NeurAlign 2 ) enables the decomposition of a complex problem into smaller problems. Local experts are learned for each smaller problem and these are then merged. Following Tumer and Ghosh (1996) , we apply spatial partitioning of training instances using proximity of patterns in the input space to reduce the complexity of the tasks assigned to individual classifiers.",
"cite_spans": [
{
"start": 209,
"end": 231,
"text": "Tumer and Ghosh (1996)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NeurAlign 2 : Multiple Neural Networks",
"sec_num": "4.2.2"
},
{
"text": "We conducted a preliminary analysis on 100 randomly selected English-Spanish sentence pairs from a mixed corpus (UN + Bible + FBIS) to observe the SPANISH Adj Adv Comp Det Noun Prep Verb E Adj 18 --82 40 96 66 N Adv -8 --50 67 75 G Comp --12 -46 37 96 L Det ---10 60 100 -I Noun 42 77 100 94 23 98 84 S Prep ---93 70 22 Figure 4 : NeurAlign 2 -Alignment Combination with Partitioning distribution of errors according to POS tags in both languages. We examined the cases in which the individual alignment and the manual annotation were different-a total of 3,348 instances, where 1,320 of those are misclassified by GIZA++ (E-to-S). 2 We use a standard measure of error, i.e., the percentage of misclassified instances out of the total number of instances. Table 1 shows error rates (by percentage) according to POS tags for GIZA++ (E-to-S). 3 Table 1 shows that the error rate is relatively low in cases where both words have the same POS tag. Except for verbs, the lowest error rate is obtained when both words have the same POS tag (the error rates on the diagonal). On the other hand, the error rates are high in several other cases, as much as 100%, e.g., when the Spanish word is a determiner or a preposition. 4 This suggests that dividing the training data according to POS tag, and training neural networks on each subset separately might be better than training on the entire data at once. Figure 4 illustrates the combination approach with neural nets after partitioning the data into dis-joint subsets (NeurAlign 2 ). Similar to NeurAlign 1 , the outputs of individual alignment systems, as well as the original corpus, are passed to the feature extraction module. Then the training data is split into disjoint subsets using a subset of the available features for partitioning. We learn different neural nets for each partition, and then merge the outputs of the individual nets. The advantage of this is that it results in different generalizations for each partition and that it uses different subsets of the feature space for each net.",
"cite_spans": [
{
"start": 665,
"end": 666,
"text": "2",
"ref_id": null
},
{
"start": 874,
"end": 875,
"text": "3",
"ref_id": null
},
{
"start": 1249,
"end": 1250,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 177,
"end": 352,
"text": "Prep Verb E Adj 18 --82 40 96 66 N Adv -8 --50 67 75 G Comp --12 -46 37 96 L Det ---10 60 100 -I Noun 42 77 100 94 23 98 84 S Prep ---93 70 22",
"ref_id": "TABREF2"
},
{
"start": 353,
"end": 361,
"text": "Figure 4",
"ref_id": null
},
{
"start": 789,
"end": 796,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1432,
"end": 1440,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "NeurAlign 2 : Multiple Neural Networks",
"sec_num": "4.2.2"
},
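{
"text": "A minimal sketch (ours) of the partitioning scheme described above: instances are grouped by their (English POS, Spanish POS) pair and a separate expert net is trained per partition; the make_learner interface is an illustrative assumption:\n\nfrom collections import defaultdict\n\ndef train_partitioned(instances, make_learner):\n    partitions = defaultdict(list)\n    for feats, pos_e, pos_f, label in instances:\n        partitions[(pos_e, pos_f)].append((feats, label))\n    experts = {}\n    for key, data in partitions.items():\n        X, y = zip(*data)\n        # fit one neural net per disjoint subset\n        experts[key] = make_learner().fit(list(X), list(y))\n    return experts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeurAlign 2 : Multiple Neural Networks",
"sec_num": "4.2.2"
},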
{
"text": "This section describes our experimental design, including evaluation metrics, data, and settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Let A be the set of alignment links for a set of sentences. We take S to be the set of sure alignment links and P be the set of probable alignment links (in the gold standard) for the same set of sentences. Precision (P r), recall (Rc) and alignment error rate (AER) are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.1"
},
{
"text": "P r = |A \u2229 P | |A| Rc = |A \u2229 S| |S| AER = 1 \u2212 |A \u2229 S| + |A \u2229 P | |A| + |S|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.1"
},
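{
"text": "A minimal sketch (ours) of these metrics over sets of (i, j) links:\n\ndef alignment_scores(A, S, P):\n    # A: hypothesis links; S: sure gold links; P: probable gold links (S is contained in P)\n    A, S, P = set(A), set(S), set(P)\n    pr = len(A & P) / len(A)\n    rc = len(A & S) / len(S)\n    aer = 1 - (len(A & S) + len(A & P)) / (len(A) + len(S))\n    return pr, rc, aer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.1"
},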
{
"text": "A manually aligned corpus is used as our gold standard. For English-Spanish data, the manual annotation is done by a bilingual English-Spanish speaker. Every link in the English-Spanish gold standard is considered a sure alignment link (i.e., P = S). For English-Chinese, we used 2002 NIST MT evaluation test set. Each sentence pair was aligned by two native Chinese speakers, who are fluent in English. Each alignment link appearing in both annotations was considered a sure link, and links appearing in only one set were judged as probable. The annotators were not aware of the specifics of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.1"
},
{
"text": "We evaluated NeurAlign 1 and NeurAlign 2 , using 5fold cross validation on two data sets: We computed precision, recall and error rate on the entire set of sentence pairs for each data set. 5 To evaluate NeurAlign, we used GIZA++ in both directions (E-to-F and F -to-E, where F is either Chinese (C) or Spanish (S)) as input and a refined alignment approach (Och and Ney, 2000) that uses a heuristic combination method called grow-diagfinal (Koehn et al., 2003) for comparison. (We henceforth refer to the refined-alignment approach as \"RA.\")",
"cite_spans": [
{
"start": 190,
"end": 191,
"text": "5",
"ref_id": null
},
{
"start": 358,
"end": 377,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF14"
},
{
"start": 441,
"end": 461,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data and Settings",
"sec_num": "5.2"
},
{
"text": "For the English-Spanish experiments, GIZA++ was trained on 48K sentence pairs from a mixed corpus (UN + Bible + FBIS), with nearly 1.2M of words on each side, using 10 iterations of Model 1, 5 iterations of HMM, and 5 iterations of Model 4. For the English-Chinese experiments, we used 107K sentence pairs from FBIS corpus (nearly 4.1M English and 3.3M Chinese words) to train GIZA++, using 5 iterations of Model 1, 5 iterations of HMM, 3 iterations of Model 3, and 3 iterations of Model 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data and Settings",
"sec_num": "5.2"
},
{
"text": "In our experiments, we used a multi-layer perceptron (MLP) consisting of 1 input layer, 1 hidden layer, and 1 output layer. The hidden layer consists of 10 units, and the output layer consists of 1 unit. All units in the hidden layer are fully connected to the units in the input layer, and the output unit is fully connected to all the units in the hidden layer. We used hyperbolic tangent sigmoid function as the activation function for both layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Settings",
"sec_num": "5.3"
},
{
"text": "One of the potential pitfalls is overfitting as the number of iterations increases. To address this, we used the early stopping with validation set method. In our experiments, we held out (randomly selected) 1/4 of the training set as the validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Settings",
"sec_num": "5.3"
},
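{
"text": "A minimal configuration sketch (ours) matching the settings above, using scikit-learn's MLPClassifier as a stand-in (it trains with SGD-family solvers rather than RPROP, and its early_stopping option holds out a validation fraction of the training set):\n\nfrom sklearn.neural_network import MLPClassifier\n\nnet = MLPClassifier(hidden_layer_sizes=(10,),  # one hidden layer with 10 units\n                    activation='tanh',         # hyperbolic tangent sigmoid\n                    early_stopping=True,       # stop when the validation score stops improving\n                    validation_fraction=0.25)  # hold out 1/4 as the validation set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Settings",
"sec_num": "5.3"
},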
{
"text": "Neural nets are sensitive to the initial weights. To overcome this, we performed 5 runs of learning for each training set. The final output for each training is obtained by a majority voting over 5 runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Settings",
"sec_num": "5.3"
},
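{
"text": "A minimal sketch (ours) of the vote over 5 independently initialized nets:\n\nimport numpy as np\n\ndef majority_vote(nets, X):\n    votes = np.stack([net.predict(X) for net in nets])  # shape (5, n_instances)\n    # keep a link iff at least 3 of the 5 runs predict it\n    return (votes == 1).sum(axis=0) >= 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Settings",
"sec_num": "5.3"
},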
{
"text": "This section describes the experiments on English-Spanish and English-Chinese data for testing the effects of feature selection, training on the entire data (NeurAlign 1 ) or on the partitioned data (NeurAlign 2 ), using two input alignments: GIZA++ (E-to-F ) and GIZA++ (F -to-E). We used the following additional features, as well as the outputs of individual aligners, for an instance (i, j) (set of features 2-7 below are generated separately for each input alignment A k ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "1. posEi, posFj, relEi: POS tags for e_i and f_j, and the dependency relation for e_i. 2. neigh(i, j): 8 features indicating whether each neighboring link exists in A_k. 3. fertEi, fertFj: 2 features indicating the fertilities of e_i and f_j in A_k. 4. NC(i, j): Total number of existing links in N(i, j) in A_k. 5. TP(i, j): Translation probability p(f_j|e_i) in A_k. 6. NghTP(i, j): 8 features indicating the translation probability p(f_y|e_x) for each (x, y) \u2208 N(i, j) in A_k. 7. AvTP(i, j): Average translation probability of the neighbors of (i, j) in A_k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "We performed statistical significance tests using two-tailed paired t-tests. Unless otherwise indicated, the differences between NeurAlign and other alignment systems, as well as the differences among NeurAlign variations themselves, were statistically significant within the 95% confidence interval. Table 2 summarizes the precision, recall and alignment error rate values for each of our two alignment system inputs plus the three alternative alignment-combination approaches. Note that the best performing aligner among these is the RA method, with an AER of 21.2%. (We include this in subsequent tables for ease of comparison.) Table 3 presents the results of training neural nets using the entire data (NeurAlign 1 ) with different subsets of the feature space. When we used POS tags and the dependency relation as features, NeurAlign 1 performs worse than RA. Using Table 2 : Results for GIZA++ Alignments and Their Simple Combinations the neighboring links as the feature set gave slightly (not significantly) better results than RA. Using POS tags, dependency relations, and neighboring links also resulted in better performance than RA but the difference was not statistically significant. When we used fertilities along with the POS tags and dependency relations, the AER was 20.0%-a significant relative error reduction of 5.7% over RA. Adding the neighboring links to the previous feature set resulted in an AER of 17.6%-a significant relative error reduction of 17% over RA.",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 308,
"text": "Table 2",
"ref_id": null
},
{
"start": 632,
"end": 639,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 872,
"end": 879,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "Interestingly, when we removed POS tags and dependency relations from this feature set, there was no significant change in the AER, which indicates that the improvement is mainly due to the neighboring links. This supports our initial claim about the clustering of alignment links, i.e., when there is an alignment link, usually there is another link in its neighborhood. Finally, we tested the effects of using translation probabilities as part of the feature set, and found out that using translation probabilities did no better than the case where they were not used. We believe this happens because the translation probability p(f j |e i ) has a unique value for each pair of e i and f j ; therefore it is not useful to distinguish between alignment links with the same words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection for Training All Data At Once: NeurAlign 1",
"sec_num": null
},
{
"text": "In order to train on partitioned data (NeurAlign 2), we needed to establish appropriate features for partitioning the training data. Table 4 presents the evaluation results for NeurAlign 1 (i.e., no partitioning) and NeurAlign 2 with different features for partitioning (English POS tag, Spanish POS tag, and POS tags on both sides). For training on each partition, the feature space included POS tags (e.g., the Spanish POS tag in the case where partitioning is based on the English POS tag only), dependency relations, neighborhood features, and fertilities. We observed that partitioning based on POS tags on one side reduced the AER to 17.4% and 17.1%, respectively. Using POS tags on both sides reduced the error rate to 16.9%, a significant relative error reduction of 5.6% over no partitioning. All four methods yielded statistically significant error reductions over RA; we will examine the fourth method in more detail below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection for Training on Partitioned Data: NeurAlign 2",
"sec_num": null
},
{
"text": "Once we determined that partitioning by POS tags on both sides brought about the biggest gain, we ran NeurAlign 2 using this partitioning, but with different feature sets. Table 5 shows the results of this experiment. Using dependency relations, word fertilities, and translation probabilities (both for the link in question and the neighboring links) yielded a significantly lower AER (18.6%), a relative error reduction of 12.3% over RA. When the feature set consisted of dependency relations, word fertilities, and neighborhood links, the AER was reduced to 16.9%, a 20.3% relative error reduction over RA. We also tested the effects of adding translation probabilities to this feature set, but as in the case of NeurAlign 1, this did not improve the alignments.",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Feature Selection for Training on Partitioned Data: NeurAlign 2",
"sec_num": null
},
{
"text": "In the best case, NeurAlign 2 achieved substantial and significant reductions in AER over the input alignment systems: a 28.4% relative error reduction over S-to-E and a 30.5% relative error reduction over E-to-S. Compared to RA, NeurAlign 2 achieved relative improvements of 9.3% in precision, 2.2% in recall, and 20.3% in AER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection for Training on Partitioned Data: NeurAlign 2",
"sec_num": null
},
{
"text": "The results of the input alignments to NeurAlign, i.e., GIZA++ alignments in two different directions, NeurAlign 1 (i.e., no partitioning) and variations of NeurAlign 2 with different features for partitioning (English POS tag, Chinese POS tag, and POS tags on both sides) are shown in Table 6 . For comparsion, we also include the results for RA in the table. For brevity, we include only the features resulting in the best configurations from the English-Spanish experiments, i.e., POS tags, dependency relations, word fertilities, and neighborhood links (the features in the third row of Table 5 ). The ground truth used during the training phase consisted of all the alignment links with equal weight. Without any partitioning, NeurAlign achieves an alignment error rate of 22.2%-a significant relative error reduction of 25.3% over RA. Partitioning the data according to POS tags results in significantly better results over no partitioning. When the data is partitioned according to both POS tags, NeurAlign reduces AER to 19.7%-a significant relative error reduction of 33.7% over RA. Compared to the input alignments, the best version of NeurAlign achieves a relative error reduction of 35.8% and 38.8%, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 286,
"end": 293,
"text": "Table 6",
"ref_id": "TABREF10"
},
{
"start": 591,
"end": 598,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results for English-Chinese",
"sec_num": "5.4.2"
},
{
"text": "We presented NeurAlign, a novel approach to combining the outputs of different word alignment systems. Our approach treats individual alignment systems as black boxes, and transforms the individual alignments into a set of data with features that are borrowed from their outputs and additional linguistic features (such as POS tags and dependency relations). We use neural nets to learn the true alignments from these transformed data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We show that using POS tags to partition the transformed data, and learning a different classifier for each partition is more effective than using the entire data at once. Our results indicate that NeurAlign yields a significant 28-39% relative error reduction over the best of the input alignment systems and a significant 20-34% relative error reduction over the best known alignment combination technique on English-Spanish and English-Chinese data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We should note that NeurAlign is not a standalone word alignment system but a supervised learning approach to improve already existing alignment systems. A drawback of our approach is that it requires annotated data. However, our experiments have shown that significant improvements can be obtained using a small set of annotated data. We will do additional experiments to observe the effects of varying the size of the annotated data while learning neural nets. We are also planning to investigate whether NeurAlign helps when the individual aligners are trained using more data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We will extend our combination approach to combine word alignment systems based on different models, and investigate the effectiveness of our technique on other language pairs. We also intend to evaluate the effectiveness of our improved alignment approach in the context of machine translation and cross-language projection of resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The translation probabilities can be borrowed from the existing systems, if available. Otherwise, they can be generated from the outputs of individual alignment systems using likelihood estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For this analysis, we ignored the cases where both systems produced an output of -1 (i.e., the words are not aligned).3 Only POS pairs that occurred at least 10 times are shown.4 The same analysis was done for the other direction and resulted in similar distribution of error rates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The number of alignment links varies over each fold. Therefore, we chose to evaluate all data at once instead of evaluating on each fold and then averaging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported in part by ONR MURI Contract FCPO.810548265, Cooperative Agreement DAAD190320020, and NSF ITR Grant IIS-0326553.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Boosting applied to tagging and PP attachment",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EMNLP'1999",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney, Robert E. Schapire, and Yoram Singer. 1999. Boosting applied to tagging and PP attachment. In Proceed- ings of EMNLP'1999, pages 38-45.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multi-Align: Combining linguistic and statistical techniques to improve alignments for adaptable MT",
"authors": [
{
"first": "Necip",
"middle": [
"F"
],
"last": "Ayan",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of AMTA'2004",
"volume": "",
"issue": "",
"pages": "17--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Necip F. Ayan, Bonnie J. Dorr, and Nizar Habash. 2004. Multi- Align: Combining linguistic and statistical techniques to improve alignments for adaptable MT. In Proceedings of AMTA'2004, pages 17-26.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Classifier combination for improved lexical disambiguation",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill and Jun Wu. 1998. Classifier combination for im- proved lexical disambiguation. In Proc. of ACL'1998.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A probability model to improve word alignment",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and Dekang Lin. 2003. A probability model to improve word alignment. In Proceedings of ACL'2003.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Three generative lexicalized models for statistical parsing",
"authors": [
{
"first": "Micheal",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micheal Collins. 1997. Three generative lexicalized models for statistical parsing. In Proceedings of ACL'1997.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An unsupervised method for word sense tagging using parallel corpora",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. In Proceed- ings of ACL'2002.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "DUSTer: A method for unraveling cross-language divergences for statistical word-level alignment",
"authors": [
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Pearl",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie J. Dorr, Lisa Pearl, Rebecca Hwa, and Nizar Habash. 2002. DUSTer: A method for unraveling cross-language di- vergences for statistical word-level alignment. In Proceed- ings of AMTA'2002.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Modeling consensus: Classifier combination for word sense disambiguation",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP'2002",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Florian and David Yarowsky. 2002. Modeling consensus: Classifier combination for word sense disambiguation. In Proceedings of EMNLP'2002, pages 25-32.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural network ensembles",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Salamon",
"suffix": ""
}
],
"year": 1990,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "12",
"issue": "",
"pages": "993--1001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Hansen and P. Salamon. 1990. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelli- gence, 12:993-1001.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bagging and boosting a treebank parser",
"authors": [
{
"first": "John",
"middle": [
"C"
],
"last": "Henderson",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of NAACL'",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John C. Henderson and Eric Brill. 2000. Bagging and boosting a treebank parser. In Proceedings of NAACL'2000.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of NAACL/HLT'2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL/HLT'2003.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Log-linear models for word alignment",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL'2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2005. Log-linear models for word alignment. In Proceedings of ACL'2005.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Models of translational equivalence among words",
"authors": [
{
"first": "I",
"middle": [
"Dan"
],
"last": "Melamed",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "2",
"pages": "221--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2):221-249.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy",
"authors": [
{
"first": "Marvin",
"middle": [],
"last": "Minsky",
"suffix": ""
}
],
"year": 1999,
"venue": "AI Magazine",
"volume": "12",
"issue": "",
"pages": "34--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marvin Minsky. 1999. Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy. AI Magazine, 12:34-51.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ACL'",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of ACL'2000.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "9--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och and Hermann Ney. 2003. A systematic compari- son of various statistical alignment models. Computational Linguistics, 29(1):9-51, March.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A maximum entropy part-ofspeech tagger",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy part-of- speech tagger. In Proceedings of EMNLP'1996.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A direct adaptive method for faster backpropagation learning: The RPROP algorithm",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Riedmiller",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "Braun",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the IEEE Intl. Conf. on Neural Networks",
"volume": "",
"issue": "",
"pages": "586--591",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Riedmiller and Heinrich Braun. 1993. A direct adaptive method for faster backpropagation learning: The RPROP al- gorithm. In Proceedings of the IEEE Intl. Conf. on Neural Networks, pages 586-591.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Combining clues for word alignment",
"authors": [
{
"first": "Jorg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of EACL'2003",
"volume": "",
"issue": "",
"pages": "339--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jorg Tiedemann. 2003. Combining clues for word alignment. In Proceedings of EACL'2003, pages 339-346.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Extensions to HMM-based statistical word alignment models",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Tolga Ilhan",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, H. Tolga Ilhan, and Christopher D. Man- ning. 2002. Extensions to HMM-based statistical word alignment models. In Proceedings of EMNLP'2002.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Error correlation and error reduction in ensemble classifiers",
"authors": [
{
"first": "Kagan",
"middle": [],
"last": "Tumer",
"suffix": ""
},
{
"first": "Joydeep",
"middle": [],
"last": "Ghosh",
"suffix": ""
}
],
"year": 1996,
"venue": "Connection Science, Special Issue on Combining Artificial Neural Networks: Ensemble Approaches",
"volume": "8",
"issue": "3-4",
"pages": "385--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kagan Tumer and Joydeep Ghosh. 1996. Error correlation and error reduction in ensemble classifiers. Connection Science, Special Issue on Combining Artificial Neural Networks: En- semble Approaches, 8(3-4):385-404, December.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Stacked generalization",
"authors": [
{
"first": "David",
"middle": [
"H"
],
"last": "Wolpert",
"suffix": ""
}
],
"year": 1992,
"venue": "Neural Networks",
"volume": "5",
"issue": "2",
"pages": "241--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David H. Wolpert. 1992. Stacked generalization. Neural Net- works, 5(2):241-259.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Inducing multilingual text analysis tools via robust projection across aligned corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Ngai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projec- tion across aligned corpora. In Proceedings of HLT'2001.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Figure 1: Multilayer Perceptron Overview",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "An Example of Transforming Alignments into Classification Data",
"num": null,
"uris": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"4\">: Error Rates according to POS Tags for</td></tr><tr><td colspan=\"3\">GIZA++ (E-to-S) (in percentages)</td><td/></tr><tr><td/><td/><td>Truth</td><td/></tr><tr><td>Classification</td><td/><td/><td/></tr><tr><td>Data</td><td>Part a</td><td>NN a</td><td/></tr><tr><td>Data Partitioning</td><td>Part i</td><td>NN i</td><td>NN Combination</td></tr><tr><td/><td>Part z</td><td>NN z</td><td>Output</td></tr></table>",
"num": null,
"text": ""
},
"TABREF6": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"4\">: Combination with Neural Networks:</td></tr><tr><td colspan=\"2\">NeurAlign 1 (All-Data-At-Once)</td><td/><td/></tr><tr><td colspan=\"4\">17.1%, respectively. Using POS tags on both sides</td></tr><tr><td colspan=\"4\">reduced the error rate to 16.9%-a significant rel-</td></tr><tr><td colspan=\"4\">ative error reduction of 5.6% over no partitioning.</td></tr><tr><td colspan=\"4\">All four methods yielded statistically significant er-</td></tr><tr><td colspan=\"4\">ror reductions over RA-we will examine the fourth</td></tr><tr><td>method in more detail below.</td><td/><td/><td/></tr><tr><td>Alignment</td><td>Pr</td><td>Rc</td><td>AER</td></tr><tr><td>NeurAlign1</td><td colspan=\"2\">89.7 75.7</td><td>17.9</td></tr><tr><td>NeurAlign2[posEi]</td><td colspan=\"2\">91.1 75.4</td><td>17.4</td></tr><tr><td>NeurAlign2[posFj]</td><td colspan=\"2\">91.2 76.0</td><td>17.1</td></tr><tr><td colspan=\"3\">NeurAlign2[posEi, posFj] 91.6 76.0</td><td>16.9</td></tr><tr><td>RA</td><td colspan=\"2\">83.8 74.4</td><td>21.2</td></tr></table>",
"num": null,
"text": ""
},
"TABREF7": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Effects of Feature Selection for Partitioning"
},
"TABREF8": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Combination with Neural Networks: NeurAlign 2 (Partitioned According to POS tags)"
},
"TABREF10": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Results on English-Chinese Data"
}
}
}
}