{ "paper_id": "D12-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:24:34.684155Z" }, "title": "Bilingual Lexicon Extraction from Comparable Corpora Using Label Propagation", "authors": [ { "first": "Akihiro", "middle": [], "last": "Tamura", "suffix": "", "affiliation": {}, "email": "akihiro.tamura@nict.go.jp" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "", "affiliation": {}, "email": "taro.watanabe@nict.go.jp" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "", "affiliation": {}, "email": "eiichiro.sumita@nict.go.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes a novel method for lexicon extraction that extracts translation pairs from comparable corpora by using graphbased label propagation. In previous work, it was established that performance drastically decreases when the coverage of a seed lexicon is small. We resolve this problem by utilizing indirect relations with the bilingual seeds together with direct relations, in which each word is represented by a distribution of translated seeds. The seed distributions are propagated over a graph representing relations among words, and translation pairs are extracted by identifying word pairs with a high similarity in the seed distributions. We propose two types of the graphs: a co-occurrence graph, representing co-occurrence relations between words, and a similarity graph, representing context similarities between words. Evaluations using English and Japanese patent comparable corpora show that our proposed graph propagation method outperforms conventional methods. 
Further, the similarity graph achieved improved performance by clustering synonyms into the same translation.", "pdf_parse": { "paper_id": "D12-1003", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes a novel method for lexicon extraction that extracts translation pairs from comparable corpora by using graphbased label propagation. In previous work, it was established that performance drastically decreases when the coverage of a seed lexicon is small. We resolve this problem by utilizing indirect relations with the bilingual seeds together with direct relations, in which each word is represented by a distribution of translated seeds. The seed distributions are propagated over a graph representing relations among words, and translation pairs are extracted by identifying word pairs with a high similarity in the seed distributions. We propose two types of the graphs: a co-occurrence graph, representing co-occurrence relations between words, and a similarity graph, representing context similarities between words. Evaluations using English and Japanese patent comparable corpora show that our proposed graph propagation method outperforms conventional methods. Further, the similarity graph achieved improved performance by clustering synonyms into the same translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Bilingual lexicons are important resources for bilingual tasks such as machine translation (MT) and cross-language information retrieval (CLIR). Therefore, the automatic building of bilingual lexicons from corpora is one of the issues that have attracted many researchers. As a solution, a number of previous works proposed extracting bilingual lexicons from comparable corpora, in which documents were not direct translations but shared a topic or domain 1 . 
The use of comparable corpora is motivated by the fact that large parallel corpora are only available for a few language pairs and for limited domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most of the previous methods are based on assumption (I), that a word and its translation tend to appear in similar contexts across languages (Rapp, 1999) . Based on this assumption, many methods calculate word similarity using context and then extract word translation pairs with a high-context similarity. We call these methods context-similaritybased methods. The context similarities are usually computed using a seed bilingual lexicon (e.g. a general bilingual dictionary) by mapping contexts expressed in two different languages into the same space. In the mapping, information not represented by the seed lexicon is discarded. Therefore, the context-similarity-based methods could not find accurate translation pairs if using a small seed lexicon.", "cite_spans": [ { "start": 142, "end": 154, "text": "(Rapp, 1999)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Some of the previous methods tried to alleviate the problem of the limited seed lexicon size (Koehn and Knight, 2002; Morin and Prochasson, 2011; Hazem et al., 2011) , while others did not require any seed lexicon (Rapp, 1995; Fung, 1995; Haghighi et al., 2008; Ismail and Manandhar, 2010; Daum\u00e9 III and Jagarlamudi, 2011) . 
However, they suffer from problems of high computational cost (Rapp, 1995) , sensitivity to parameters (Hazem et al., 2011) , low accuracy (Fung, 1995; Ismail and Manandhar, 2010) , and ineffectiveness for language pairs with different types of characters (Koehn and Knight, 2002; Haghighi et al., 2008; Daum\u00e9 III and Jagarlamudi, 2011) .", "cite_spans": [ { "start": 93, "end": 117, "text": "(Koehn and Knight, 2002;", "ref_id": "BIBREF17" }, { "start": 118, "end": 145, "text": "Morin and Prochasson, 2011;", "ref_id": "BIBREF23" }, { "start": 146, "end": 165, "text": "Hazem et al., 2011)", "ref_id": "BIBREF11" }, { "start": 214, "end": 226, "text": "(Rapp, 1995;", "ref_id": "BIBREF30" }, { "start": 227, "end": 238, "text": "Fung, 1995;", "ref_id": null }, { "start": 239, "end": 261, "text": "Haghighi et al., 2008;", "ref_id": "BIBREF10" }, { "start": 262, "end": 289, "text": "Ismail and Manandhar, 2010;", "ref_id": "BIBREF13" }, { "start": 290, "end": 322, "text": "Daum\u00e9 III and Jagarlamudi, 2011)", "ref_id": "BIBREF8" }, { "start": 386, "end": 398, "text": "(Rapp, 1995)", "ref_id": "BIBREF30" }, { "start": 427, "end": 447, "text": "(Hazem et al., 2011)", "ref_id": "BIBREF11" }, { "start": 463, "end": 475, "text": "(Fung, 1995;", "ref_id": null }, { "start": 476, "end": 503, "text": "Ismail and Manandhar, 2010)", "ref_id": "BIBREF13" }, { "start": 580, "end": 604, "text": "(Koehn and Knight, 2002;", "ref_id": "BIBREF17" }, { "start": 605, "end": 627, "text": "Haghighi et al., 2008;", "ref_id": "BIBREF10" }, { "start": 628, "end": 660, "text": "Daum\u00e9 III and Jagarlamudi, 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the face of the above problems, we propose a novel method that uses a graph-based label propagation technique (Zhu and Ghahramani, 2002) . 
The proposed method is based on assumption (II), which is derived by recursively applying assumption (I) to the \"contexts\": a word and its translation tend to have similar co-occurrence (direct and indirect) relations with all bilingual seeds across languages.", "cite_spans": [ { "start": 109, "end": 135, "text": "(Zhu and Ghahramani, 2002)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Based on assumption (II), we propose a threestep approach: (1) constructing a graph for each language with each edge indicating a direct cooccurrence relation, (2) representing every word as a seed translation distribution by iteratively propagating translated seeds in each graph, (3) finding two words in different languages with a high similarity with respect to the seed distributions. By propagating all the seeds on the graph, indirect co-occurrence relations are also considered when computing bilingual relations, which have been neglected in previous methods. 
In addition to the co-occurrence-based graph construction, we propose a similarity graph, which also takes into account context similarities between words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this paper are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a bilingual lexicon extraction method that captures co-occurrence relations with all the seeds, including indirect relations, using graph-based label propagation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our experiments, we confirm that the proposed method outperforms conventional context-similarity-based methods (Rapp, 1999; Andrade et al., 2010) , and works well even if the coverage of a seed lexicon is low.", "cite_spans": [ { "start": 114, "end": 126, "text": "(Rapp, 1999;", "ref_id": "BIBREF31" }, { "start": 127, "end": 148, "text": "Andrade et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a similarity graph which represents context similarities between words. 
In our experiments, we confirm that a similarity graph is more effective than a co-occurrence-based graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The bilingual lexicon extraction from comparable corpora was pioneered in (Rapp, 1995; Fung, 1995) .", "cite_spans": [ { "start": 74, "end": 86, "text": "(Rapp, 1995;", "ref_id": "BIBREF30" }, { "start": 87, "end": 98, "text": "Fung, 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Context-Similarity-based Extraction Method", "sec_num": "2" }, { "text": "The popular similarity-based methods consist of the following steps: modeling contexts, calculating context similarities, and finding translation pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Similarity-based Extraction Method", "sec_num": "2" }, { "text": "Step 1. Modeling contexts: The context of each word is generally modeled by a vector where each dimension corresponds to a context word and each dimension has a value indicating occurrence correlation. Various definitions for the context have been used: distance-based context (e.g. in a sentence (Laroche and Langlais, 2010) , in a paragraph (Fung and McKeown, 1997) , in a predefined window (Rapp, 1999; Andrade et al., 2010) ), and syntactic-based context (e.g. predecessors and successors in dependency trees (Garera et al., 2009) , certain dependency position (Otero and Campos, 2008) ). Some treated context words equally regardless of their positions (Fung and Yee, 1998), while others treated the words separately for each position (Rapp, 1999) . Various correlation measures have been used: log-likelihood ratio (Rapp, 1999; Chiao and Zweigenbaum, 2002) , tf-idf (Fung and Yee, 1998), pointwise mutual information (PMI) (Andrade et al., 2010) , context heterogeneity (Fung, 1995) , etc. Shao and Ng (2004) represented contexts using language models. Andrade et al. 
(2010) used a set of words with a positive association as a context. Andrade et al. (2011a) used dependency relations instead of context words. Ismail and Manandhar (2010) used only in-domain words in contexts. Pekar et al. (2006) constructed smoothed context vectors for rare words. Laws et al. (2010) used graphs in which vertices correspond to words and edges indicate three types of syntactic relations such as adjectival modification.", "cite_spans": [ { "start": 297, "end": 325, "text": "(Laroche and Langlais, 2010)", "ref_id": "BIBREF19" }, { "start": 343, "end": 367, "text": "(Fung and McKeown, 1997)", "ref_id": null }, { "start": 393, "end": 405, "text": "(Rapp, 1999;", "ref_id": "BIBREF31" }, { "start": 406, "end": 427, "text": "Andrade et al., 2010)", "ref_id": "BIBREF1" }, { "start": 513, "end": 534, "text": "(Garera et al., 2009)", "ref_id": null }, { "start": 565, "end": 589, "text": "(Otero and Campos, 2008)", "ref_id": "BIBREF27" }, { "start": 740, "end": 752, "text": "(Rapp, 1999)", "ref_id": "BIBREF31" }, { "start": 821, "end": 833, "text": "(Rapp, 1999;", "ref_id": "BIBREF31" }, { "start": 834, "end": 862, "text": "Chiao and Zweigenbaum, 2002)", "ref_id": "BIBREF5" }, { "start": 929, "end": 951, "text": "(Andrade et al., 2010)", "ref_id": "BIBREF1" }, { "start": 976, "end": 988, "text": "(Fung, 1995)", "ref_id": null }, { "start": 996, "end": 1014, "text": "Shao and Ng (2004)", "ref_id": "BIBREF33" }, { "start": 1059, "end": 1080, "text": "Andrade et al. (2010)", "ref_id": "BIBREF1" }, { "start": 1143, "end": 1165, "text": "Andrade et al. (2011a)", "ref_id": "BIBREF2" }, { "start": 1285, "end": 1304, "text": "Pekar et al. (2006)", "ref_id": "BIBREF28" }, { "start": 1358, "end": 1376, "text": "Laws et al. (2010)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Context-Similarity-based Extraction Method", "sec_num": "2" }, { "text": "Step 2. 
Calculating context similarities: The contexts which are expressed in two different languages are mapped into the same space. Previous methods generally use a seed bilingual lexicon for this mapping. After that, similarities are calculated based on the mapped context vectors using various measures: city-block metric (Rapp, 1999) , cosine similarity (Fung and Yee, 1998) , weighted jaccard index (Hazem et al., 2011) , Jensen-Shannon divergence (Pekar et al., 2006) , the number of overlapping context words (Andrade et al., 2010) , Sim-Rank (Laws et al., 2010) , euclidean distance (Fung, 1995) , etc. Kaji (2005) calculated 2-way similarities.", "cite_spans": [ { "start": 326, "end": 338, "text": "(Rapp, 1999)", "ref_id": "BIBREF31" }, { "start": 359, "end": 379, "text": "(Fung and Yee, 1998)", "ref_id": null }, { "start": 405, "end": 425, "text": "(Hazem et al., 2011)", "ref_id": "BIBREF11" }, { "start": 454, "end": 474, "text": "(Pekar et al., 2006)", "ref_id": "BIBREF28" }, { "start": 517, "end": 539, "text": "(Andrade et al., 2010)", "ref_id": "BIBREF1" }, { "start": 551, "end": 570, "text": "(Laws et al., 2010)", "ref_id": "BIBREF20" }, { "start": 592, "end": 604, "text": "(Fung, 1995)", "ref_id": null }, { "start": 612, "end": 623, "text": "Kaji (2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Context-Similarity-based Extraction Method", "sec_num": "2" }, { "text": "Step 3. Finding translation pairs: A pair of words is treated as a translation pair when their context similarity is high. 
Various clues have been considered when computing the similarities: concept class information obtained from a multilingual thesaurus (D\u00e9jean et al., 2002) , co-occurrence models generated from aligned documents (Prochasson and Fung, 2011) , and transliteration information (Shao and Ng, 2004) .", "cite_spans": [ { "start": 256, "end": 277, "text": "(D\u00e9jean et al., 2002)", "ref_id": null }, { "start": 334, "end": 361, "text": "(Prochasson and Fung, 2011)", "ref_id": "BIBREF29" }, { "start": 396, "end": 415, "text": "(Shao and Ng, 2004)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Context-Similarity-based Extraction Method", "sec_num": "2" }, { "text": "Most of previous methods used a seed bilingual lexicon for mapping modeled contexts in two different languages into the same space. The mapping heavily relies on the entries in a given bilingual lexicon. Therefore, if the coverage of the seed lexicon is low, the context vectors become sparser and its discriminative capability becomes lower, leading to extraction of incorrect translation equivalents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems from Previous Works", "sec_num": "2.1" }, { "text": "Consider the example in Figure 1 , where a context-similarity-based method and our proposed method find translation equivalents of the Japanese word \" (piranha)\". There are three context words for the query. However, the information on co-occurrence with \" (freshwater)\" disappears after the context vector is mapped, because the seed lexicon does not include \" (freshwater)\". The same thing happens with the English word \"piranha\". 
As a result, the pair of \" (piranha)\" and \"anaconda\" could be wrongly identified as a translation pair.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Problems from Previous Works", "sec_num": "2.1" }, { "text": "Some previous work focused on the problem of seed lexicon limitation. Morin and Prochasson (2011) complemented the seed lexicon with bilingual lexicon extracted from parallel sentences. Koehn and Knight (2002) used identically-spelled words in two languages as a seed lexicon. However, the method is not applicable for language pairs with different types of characters such as English and Japanese. Hazem et al. (2011) exploited k-nearest words for a query, which is very sensitive to the parameter k.", "cite_spans": [ { "start": 70, "end": 97, "text": "Morin and Prochasson (2011)", "ref_id": "BIBREF23" }, { "start": 186, "end": 209, "text": "Koehn and Knight (2002)", "ref_id": "BIBREF17" }, { "start": 399, "end": 418, "text": "Hazem et al. (2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Problems from Previous Works", "sec_num": "2.1" }, { "text": "Some previous work did not require any seed lexicon. Rapp (1995) proposed a computationally demanding matrix permutation method which maximizes a similarity between co-occurrence matrices in two languages. Ismail and Manandhar (2010) introduced a similarity measure between two words in different languages without requiring any seed lexicon. Fung (1995) used context heterogeneity vectors where each dimension is independent on language types. However, their performances are worse than those of conventional methods using a small seed lexicon. Haghighi et al. (2008) and Daum\u00e9 III and Jagarlamudi (2011) proposed a generative model based on probabilistic canonical correlation analysis, where words are represented by context features and orthographic features 2 . 
However, their experiments showed that orthographic features are important for effectiveness, which implies low performance for language pairs with different character types.", "cite_spans": [ { "start": 53, "end": 64, "text": "Rapp (1995)", "ref_id": "BIBREF30" }, { "start": 206, "end": 233, "text": "Ismail and Manandhar (2010)", "ref_id": "BIBREF13" }, { "start": 546, "end": 568, "text": "Haghighi et al. (2008)", "ref_id": "BIBREF10" }, { "start": 573, "end": 605, "text": "Daum\u00e9 III and Jagarlamudi (2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Problems from Previous Works", "sec_num": "2.1" }, { "text": "As described in Section 2, the performance of previous work is significantly degraded when used with a small seed lexicon. This problem could be resolved by incorporating indirect relations with all the seeds when identifying translation pairs. For example, in Figure 1 , \" (piranha)\" has some degree of association with the seed \" -fish\" through \" (freshwater)\" on both the Japanese side and the English side, although \" (piranha)\" and \" (fish)\" do not co-occur in the same contexts. Moreover, \"anaconda\" has very little association with the seed \" -fish\" on the English side. Therefore, the indirect relation with the seed \" -fish\" helps to discriminate between \"piranha\" and \"anaconda\" and could be an important clue for identifying a correct translation pair.", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 269, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Lexicon Extraction Based on Label Propagation", "sec_num": "3" }, { "text": "To utilize indirect relations, we introduce assumption (II): a word and its translation tend to have similar co-occurrence (direct and indirect) relations with all bilingual seeds across languages 3 . 
Based on assumption (II), we propose to identify a word pair as a translation pair when its co-occurrence (direct and indirect) relations with all the seeds are similar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Extraction Based on Label Propagation", "sec_num": "3" }, { "text": "To obtain co-occurrence relations with all the seeds, including indirect relations, we focus on a graph-based label propagation (LP) technique (Zhu and Ghahramani, 2002) . LP transfers labels from labeled data points to unlabeled data points. In the process, all vertices have soft labels that can be interpreted as label distributions. We apply LP to bilingual lexicon extraction by representing each word as a vertex in a graph with each edge encoding a direct co-occurrence relation. Translated seeds are propagated as labels, and seed distributions are obtained for each word. From the seed distributions, we identify translation pairs.", "cite_spans": [ { "start": 143, "end": 169, "text": "(Zhu and Ghahramani, 2002)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Lexicon Extraction Based on Label Propagation", "sec_num": "3" }, { "text": "In summary, our proposed method consists of three steps (see Algorithm 1): (1) graph construction for each language, (2) seed propagation in each graph, (3) translation pair extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Extraction Based on Label Propagation", "sec_num": "3" }, { "text": "Algorithm 1 Bilingual Lexicon Extraction. Require: comparable corpora D e and D f , and a seed lexicon S consisting of S e and S f . Ensure: output translation pairs T . 1-1: G e = {E e , V e , W e } \u2190 construct-graph(D e ). 1-2: G f = {E f , V f , W f } \u2190 construct-graph(D f ). 2-1: G e = {E e , V e , W e , Q e } \u2190 propagate-seed(G e , S e ). 2-2: G f = {E f , V f , W f , Q f } \u2190 propagate-seed(G f , S f ). 3: T \u2190 extract-translation(Q e , Q f , S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon Extraction Based on Label Propagation", "sec_num": "3" }, { "text": "We construct a graph representing the association between words for each language. Each graph is an undirected graph because the association does not have direction. The graphs are constructed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Construction", "sec_num": "3.1" }, { "text": "Step 1. Vertex assignment extracts words from each corpus, and assigns a vertex to each of the extracted words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Construction", "sec_num": "3.1" }, { "text": "Let V = {v 1 , \u2022 \u2022 \u2022 , v n } be a set of vertices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Construction", "sec_num": "3.1" }, { "text": "Step 2. Edge weight calculation calculates the association strength between two words as the weights of edges. Let E and W be the set of edges and that of the weights respectively, where e ij \u2208 E links v i and v j , and w ij \u2208 W is the weight of e ij . Note that |E| = |W |.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Construction", "sec_num": "3.1" }, { "text": "Step 3. 
Edge pruning excludes edges whose weights are lower than a threshold, in order to reduce the computational cost during seed propagation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Construction", "sec_num": "3.1" }, { "text": "We propose two types of graphs that differ in the association measure used in Step 2: a co-occurrence graph and a similarity graph 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Construction", "sec_num": "3.1" }, { "text": "A co-occurrence graph directly encodes assumption (II). Each edge in the graph indicates the correlation strength between occurrences of the two linked words. An example is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 182, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "In edge weight calculation, the co-occurrence frequencies are first computed for each word pair in the same context, and then the correlation strength is estimated. There are various definitions of a context or correlation measures that can be used (e.g. the approaches used for modeling contexts in context-similarity-based methods). 
In this paper, we use words in a predefined window (window size is 10 in our experiments) as the context and PMI as the correlation measure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "w ij = P M I(v i , v j ) = log p(v i , v j ) p(v i ) \u2022 p(v j ) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "p(v i ) (or p(v j )) is the probability that v i (or v j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "occurs in a context, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "p(v i , v j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "is the probability that v i and v j co-occur within the same context. We estimate P M I(v i , v j ) by the Bayesian method proposed by Andrade et al.(2010) . Then, edges with a negative association,", "cite_spans": [ { "start": 135, "end": 155, "text": "Andrade et al.(2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "P M I(v i , v j ) \u2264 0,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "are pruned in edge pruning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence Graph", "sec_num": "3.1.1" }, { "text": "Co-occurrence graphs are very sensitive to accidental relation caused by lower frequent cooccurrence. Thus, we propose a similarity graph where context similarities are employed as weights of edges instead of simple co-occurrence-based correlations. 
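As an illustration, the co-occurrence graph construction described above (window-based co-occurrence counting, PMI edge weights, pruning of non-positive associations) can be sketched as follows. This is a minimal sketch using plain maximum-likelihood PMI estimates rather than the Bayesian estimate of Andrade et al. (2010) that the paper uses, and the function name is illustrative, not from the paper:

```python
import math
from collections import Counter

def build_cooccurrence_graph(tokens, window=10):
    """Sketch of co-occurrence graph construction: count word pairs that
    co-occur within a sliding window, weight edges by (MLE) PMI, and
    prune edges with non-positive PMI, as in the paper's edge pruning."""
    word_count = Counter(tokens)
    pair_count = Counter()
    for i, w in enumerate(tokens):
        for c in tokens[i + 1:i + 1 + window]:
            if w != c:
                pair_count[tuple(sorted((w, c)))] += 1
    n = len(tokens)
    edges = {}
    for (a, b), co in pair_count.items():
        # PMI(a, b) = log p(a, b) / (p(a) p(b))
        pmi = math.log((co / n) / ((word_count[a] / n) * (word_count[b] / n)))
        if pmi > 0:  # edge pruning: drop negative/zero associations
            edges[(a, b)] = pmi
    return edges
```

One graph would be built per language; the resulting weighted edge map is then the input to seed propagation.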
Since the context similarities are computed by the global correlation among words which co-occur, a similarity graph is less subject to accidental co-occurrence. The use of a similarity graph is inspired by assumption (III): a word and its translation tend to have similar context similarities with all bilingual seeds across languages 5 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Graph", "sec_num": "3.1.2" }, { "text": "In edge weight calculation, we first construct a correlation vector representing co-occurrence relations for each word. The correlation vectors are constructed in the same way as the context vectors used in context-similarity-based methods (see Section 2), where context words are words in a predefined window (window size is 4 in our experiment), the association measure is PMI, and context words are treated separately for each position. A correlation vector for each position is computed separately, then concatenated into a single vector within the window. Secondly, we calculate similarities between correlation vectors. There are various similarity measures that can be used, and cosine similarity is used in this paper:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Graph", "sec_num": "3.1.2" }, { "text": "w ij = Cos( \u20d7 f i , \u20d7 f j ) = \u20d7 f i \u2022 \u20d7 f j \u2225 \u20d7 f i \u2225\u2225 \u20d7 f j \u2225 , where \u20d7 f i (or \u20d7 f j ) is the correlation vector of v i (or v j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Graph", "sec_num": "3.1.2" }, { "text": "Then, in edge pruning, we preserve the edges with top 100 weight for each vertex.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Graph", "sec_num": "3.1.2" }, { "text": "LP is a graph-based technique which transfers the labels from labeled data to unlabeled data in order to infer labels for unlabeled data. 
This is primarily used when there is scarce labeled data but abundant unlabeled data. LP has been successfully applied in common natural language processing tasks such as word sense disambiguation (Niu et al., 2005; Alexandrescu and Kirchhoff, 2007) , multi-class lexicon acquisition (Alexandrescu and Kirchhoff, 2007) , and part-of-speech tagging (Das and Petrov, 2011) . LP iteratively propagates label information from any vertex to nearby vertices through weighted edges, and then a label distribution for each vertex is generated where the weights of all labels add up to 1.", "cite_spans": [ { "start": 335, "end": 353, "text": "(Niu et al., 2005;", "ref_id": "BIBREF25" }, { "start": 354, "end": 387, "text": "Alexandrescu and Kirchhoff, 2007)", "ref_id": "BIBREF0" }, { "start": 422, "end": 456, "text": "(Alexandrescu and Kirchhoff, 2007)", "ref_id": "BIBREF0" }, { "start": 486, "end": 508, "text": "(Das and Petrov, 2011)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "We adopt LP to obtain relations with all bilingual seeds including indirect relations by treating each seed as a label. 
First, each translated seed is assigned to a label, and then the labels are propagated in the graph described in Section 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "The seed distribution for each word is initialized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "q 0 i (z) = \uf8f1 \uf8f2 \uf8f3 1 if v i \u2208 V s and z = v i 0 if v i \u2208 V s and z \u0338 = v i u(z) otherwise ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "where V s is the set of vertices corresponding to translated seeds, u is a uniform distribution, q k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "i (i = 1 \u2022 \u2022 \u2022 |V |)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "is the seed distribution for v i after k propagation, and q k i (z) is the weight of a label (i.e., a translated seed) z in q k i . After initialization, we iteratively propagate the seeds through weighted edges. In each propagation, seeds are probabilistically propagated from linked vertices under the condition that larger edge weights allow seeds to travel through easier. Thus, the closer vertices are, the more likely they have similar seed distributions. In Figure 1 , the balloons attached to vertices in the graphs show examples of the seed distributions generated by propagations. For example, the English word \"piranha\" has the seed distribution where the weights of the seeds \"Amazon\", \"jungle\", and \"fish\" are 0.5, 0.3, and 0.2, respectively. 
Specifically, each of seed distributions is updated as follows:", "cite_spans": [], "ref_spans": [ { "start": 465, "end": 473, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "q m i (z) = \uf8f1 \uf8f2 \uf8f3 q 0 i (z) if v i \u2208 V s \u2211 vj \u2208N (vi) w ij \u2022 q m\u22121 j (z) \u2211 vj \u2208N (vi) w ij otherwise ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "N (v i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "is the set of vertices linking to v i . We ran this procedure for 10 iterations in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Propagation", "sec_num": "3.2" }, { "text": "After label propagations, we treat a pair of words in different languages with similar seed distributions as a translation pair. Seed distribution can be regarded as a vector where each dimension corresponds to each translated seed and each dimension has updated weight through label propagations. A similarity between seed distributions can therefore be calculated in the same way as a context-similaritybased method. 
In this paper, we use the cosine similarity defined by the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Pair Extraction", "sec_num": "3.3" }, { "text": "\mathrm{Cos}(q^f_x, q^e_y) = \frac{\sum_{s_i \in S} q^f_x(v^f_i) \cdot q^e_y(v^e_i)}{\sqrt{\sum_{s_i \in S} (q^f_x(v^f_i))^2} \sqrt{\sum_{s_i \in S} (q^e_y(v^e_i))^2}}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Pair Extraction", "sec_num": "3.3" }, { "text": "where q^f_x (or q^e_y) is the seed distribution for a word x (or y) in the source language (or target language), and S is the seed lexicon whose i-th entry s_i is a pairing of a translated seed v^f_i in the source language and one v^e_i in the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Pair Extraction", "sec_num": "3.3" }, { "text": "We used English and Japanese patent documents published between 1993 and 2005 by the US Patent & Trademark Office and the Japanese Patent Office respectively, which were a part of the data used in the NTCIR-8 patent translation task (Fujii et al., 2010). Note that these documents are not aligned, although parallel data exists (the training and development data used in the NTCIR-8 patent translation task, which is called NTCIR parallel data hereafter) in the patent data. However, a preliminary examination showed that the NTCIR parallel data covers less than 3% of all words because there are a number of technical terms and neologisms. Therefore, the patent translation task is a task that requires bilingual lexicon extraction from non-parallel data. We selected documents belonging to the physics domain from each monolingual corpus based on International Patent Classification (IPC) code 6 , and then used them as a comparable corpus in our experiments. As a result, we used 1,479,831 Japanese documents and 438,227 English documents. 
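The cosine similarity of Section 3.3 between a source word's seed distribution and a target word's seed distribution can be sketched as follows; the seed-pair names are hypothetical placeholders.

```python
# Sketch of the cosine similarity of Section 3.3 between the seed
# distribution q^f_x of a source word and q^e_y of a target word.
# `seed_pairs` lists the lexicon entries (v^f_i, v^e_i); all names are
# illustrative placeholders.
from math import sqrt

def cosine(q_f, q_e, seed_pairs):
    dot = sum(q_f.get(vf, 0.0) * q_e.get(ve, 0.0) for vf, ve in seed_pairs)
    norm_f = sqrt(sum(q_f.get(vf, 0.0) ** 2 for vf, _ in seed_pairs))
    norm_e = sqrt(sum(q_e.get(ve, 0.0) ** 2 for _, ve in seed_pairs))
    if norm_f == 0.0 or norm_e == 0.0:
        return 0.0
    return dot / (norm_f * norm_e)

# A word pair whose distributions agree dimension by dimension scores 1.0.
pairs = [("amazon_ja", "Amazon"), ("sakana_ja", "fish")]
q_src = {"amazon_ja": 0.8, "sakana_ja": 0.2}
q_tgt = {"Amazon": 0.8, "fish": 0.2}
```

Each dimension is matched through the lexicon entry s_i, so the source-side weight on v^f_i is compared with the target-side weight on the paired v^e_i.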
The reason for selecting the physics domain is that it contains the most documents of all the domains.", "cite_spans": [ { "start": 233, "end": 253, "text": "(Fujii et al., 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Data", "sec_num": "4.1" }, { "text": "The Japanese texts were segmented and part-of-speech tagged by ChaSen 7 , and the English texts were tokenized and part-of-speech tagged by TreeTagger (Schmid, 1994). Next, function words were removed, since they carry little semantic information and spuriously co-occur with many words. As a result, the number of distinct words in the Japanese corpus and the English corpus amounted to 1,111,302 and 4,099,825 8 , respectively.", "cite_spans": [ { "start": 151, "end": 165, "text": "(Schmid, 1994)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Data", "sec_num": "4.1" }, { "text": "We employed seed lexicons from two sources: (1) the EDR bilingual dictionary (EDR, 1990), and (2) automatic word alignments generated by running GIZA++ (Och and Ney, 2003) with the NTCIR parallel data, consisting of 3,190,654 parallel sentences. From each source, we extracted pairs of nouns appearing in our corpus. From (2), we excluded word pairs for which the average of the 2-way translation probabilities was lower than 0.5. The pairs from (1) and (2) amounted to 27,353 and 2,853 respectively, and the two sets were not exclusive. In order to measure the impact of seed lexicon size, we prepared two seed lexicons: Lex L , a large seed lexicon that is the union of all the extracted word pairs, and Lex S , a small seed lexicon that is the union of a random sample of one-tenth of the pairs from (1) and one-tenth of the pairs from (2). Table 1 shows the size of each seed lexicon. 
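The filtering of GIZA++ alignments described above (keeping pairs whose average two-way translation probability is at least 0.5) and the Lex S-style one-tenth sampling can be sketched as follows; the probability tables and pair lists are assumed inputs, not the authors' actual data.

```python
# Sketch of the seed-lexicon construction in Section 4.1: keep a GIZA++
# pair only when the average of its two directional translation
# probabilities is at least 0.5, and sample one-tenth of the pairs for a
# Lex_S-style small lexicon. The probability tables are assumed inputs.
import random

def filter_alignments(pairs, p_f2e, p_e2f, threshold=0.5):
    kept = []
    for src, tgt in pairs:
        avg = (p_f2e.get((src, tgt), 0.0) + p_e2f.get((src, tgt), 0.0)) / 2.0
        if avg >= threshold:
            kept.append((src, tgt))
    return kept

def sample_small_lexicon(pairs, fraction=0.1, seed=0):
    rng = random.Random(seed)
    k = max(1, int(len(pairs) * fraction))
    return rng.sample(pairs, k)
```

Averaging the two alignment directions discards pairs that are confident in only one direction, which tend to be noisy alignments.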
Note that our seed lexicons include one-to-many or many-to-one translation pairs.", "cite_spans": [ { "start": 144, "end": 163, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 823, "end": 830, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiment Data", "sec_num": "4.1" }, { "text": "We randomly selected 1,000 Japanese words as our test data; these words were identified as either a noun or an unknown word by ChaSen and were not covered by either the EDR bilingual dictionary or the NTCIR parallel data. This is because the purpose of our method is to complement existing bilingual dictionaries or parallel data. Note that the Japanese words in our test data may not have translation equivalents on the English side.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Data", "sec_num": "4.1" }, { "text": "We evaluated two types of our label-propagation-based methods against two baselines. Cooc employs co-occurrence graphs and Sim uses similarity graphs when constructing graphs for label propagation, as described in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Competing Methods", "sec_num": "4.2" }, { "text": "Rapp is a typical context-similarity-based method described in Section 2 (Rapp, 1999). Context words are words in a window (window size 10) and are treated separately for each position. Associations with context words are computed using the log-likelihood ratio (Dunning, 1993). The similarity measure between context vectors is the city-block metric.", "cite_spans": [ { "start": 73, "end": 85, "text": "(Rapp, 1999)", "ref_id": "BIBREF31" }, { "start": 266, "end": 281, "text": "(Dunning, 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Competing Methods", "sec_num": "4.2" }, { "text": "Andrade is a sophisticated context-similarity-based method (Andrade et al., 2010). 
The context is the set of words with a positive association within a window (window size 10). The association is calculated using PMI estimated by a Bayesian method, and the similarity between contexts is estimated based on the number of overlapping words (see the original paper for details). Table 2 shows Top 1 and Top 20 accuracy. We manually 9 evaluated whether the translation candidates contained a correct translation equivalent. We did not use recall because we do not know whether the translation equivalents of a test word appear in the corpus. Table 2 shows that the proposed methods outperform the baselines both when using Lex S and when using Lex L . The improvements are statistically significant in a sign test at the 1% significance level. The results show that capturing the relations with all the seeds, including indirect relations, is effective.", "cite_spans": [ { "start": 69, "end": 91, "text": "(Andrade et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 383, "end": 391, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 637, "end": 644, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Competing Methods", "sec_num": "4.2" }, { "text": "The accuracies of the baselines in Table 2 are worse than previously reported: 14% Acc 1 and 46% Acc 10 (Andrade et al., 2010), and 72% Acc 1 (Rapp, 1999). This is because previous works evaluated only queries whose translation equivalents existed in the experiment data, which is not always the case in our experiments. Moreover, previous works evaluated only high-frequency words: common nouns (Rapp, 1999) and words with a document frequency of at least 50 (Andrade et al., 2010). Our test data, on the other hand, includes many low-frequency words. It is generally true that translating high-frequency words is much easier than translating low-frequency words. We discuss the impact of test word frequencies in detail in Section 5.3. 
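For contrast with the proposed methods, a minimal Rapp-style context-vector comparison can be sketched as follows; raw counts stand in for the log-likelihood-ratio association used by the actual baseline, and the tokens and seeds are invented.

```python
# Minimal sketch of a Rapp-style context-vector comparison (Section 4.2):
# context vectors over seed words within a +/-window, compared with the
# city-block (L1) metric. Raw counts replace the log-likelihood-ratio
# association of the real baseline; all data below is illustrative.
from collections import Counter

def context_vector(tokens, target, seeds, window=10):
    """Count seed words occurring within +/-window of each target occurrence."""
    vec = Counter()
    for i, tok in enumerate(tokens):
        if tok != target:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i and tokens[j] in seeds:
                vec[tokens[j]] += 1
    return vec

def city_block(vec_a, vec_b, seeds):
    return sum(abs(vec_a.get(s, 0) - vec_b.get(s, 0)) for s in seeds)

tokens = ["fish", "swim", "piranha", "in", "Amazon"]
seeds = {"fish", "Amazon"}
vec = context_vector(tokens, "piranha", seeds, window=2)
```

The sketch makes the baselines' limitation concrete: only seeds that actually occur inside the window contribute to the vector, so a small seed lexicon leaves most dimensions empty.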
Table 2 also shows that Sim outperforms Cooc both when using Lex S and when using Lex L . The improvements in Acc 20 are statistically significant in a sign test at the 5% significance level. Table 3 shows a list of the top 5 translation candidates for the Japanese word \" (manic-depression)\" for each method, where the ranks of the correct translations are shown in parentheses next to the method names. Table 4 shows the top 5 translated seeds which characterize the query, where the values in parentheses indicate weights. Table 3 shows that Cooc(L) can find the correct translation equivalent but Andrade(L) cannot. Table 4 shows that Cooc(L) can utilize more seeds closely tied to the query (e.g. \" (neurosis)\", \" (insomnia)\"), which did not occur in the context of the query in the experiment data. The result shows that indirectly related seeds are also important clues, and our proposed method can utilize them. Table 2 shows that a reduction in seed lexicon size degrades performance. This is natural for the baseline methods because Lex S cannot translate most of the context words, which are necessary for word characterization. Consider Andrade(L) and Andrade(S) in the example in Section 5.1. Table 4 shows that Andrade(S) uses seeds less relevant to the query, and has to express the query with weakly associated seeds. For example, \" (psychosis)\" cannot be used in Andrade(S) because Lex S does not have the seed. 
Therefore, it is more difficult for Andrade(S) to find correct translation pairs.", "cite_spans": [ { "start": 143, "end": 155, "text": "(Rapp, 1999)", "ref_id": "BIBREF31" }, { "start": 398, "end": 410, "text": "(Rapp, 1999)", "ref_id": "BIBREF31" }, { "start": 462, "end": 484, "text": "(Andrade et al., 2010)", "ref_id": "BIBREF1" }, { "start": 1874, "end": 1884, "text": "Andrade(L)", "ref_id": null }, { "start": 1889, "end": 1899, "text": "Andrade(S)", "ref_id": null } ], "ref_spans": [ { "start": 35, "end": 42, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 739, "end": 746, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 926, "end": 933, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 1134, "end": 1141, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 1254, "end": 1261, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 1348, "end": 1355, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 1649, "end": 1656, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1931, "end": 1938, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Experiment Results", "sec_num": "4.3" }, { "text": "[Table 3 residue: Sim(L) (2) Cooc(L) (5) Andrade(L)]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Results", "sec_num": "4.3" }, { "text": "The proposed methods also share the same tendency, although each word is expressed by all the seeds in the seed lexicon. Consider Cooc(L) and Cooc(S) in the above example. Table 4 shows that Cooc(S) expresses the query with a smooth seed distribution, which is difficult to discriminate from those of other words. This is because Lex S does not have seeds relevant to the query. This is why Cooc(S) cannot find the correct translation equivalent. On the other hand, Cooc(L) characterizes \" \" and \"manic-depression\" with strongly relevant seeds (e.g. 
\" (psychosis)\",\" (neurosis)\"), and then finds the correct translation equivalent.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 179, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Impact of Seed Lexicon Size", "sec_num": "5.2" }, { "text": "To examine the robustness-to-seed lexicon size, we calculated the reduction rate of Acc 20 with the following expression: (Acc 20 with Lex L \u2212 Acc 20 with Lex S ) / Acc 20 with Lex L . The reduction rates of Rapp, Andrade, Cooc, and Sim are 78.4%, 76.1%, 69 .6%, and 62.4% respectively. Moreover, the difference between degradation in Cooc and that in Andrade is statistically significant in the sign-test with 1% significance-level. These results indicate that the proposed methods are more robust to seed lexicon size than the baselines. This is because the proposed methods can utilize seeds with indirect relations while the baselines utilize only seeds in the context.", "cite_spans": [ { "start": 208, "end": 213, "text": "Rapp,", "ref_id": null }, { "start": 214, "end": 222, "text": "Andrade,", "ref_id": null }, { "start": 223, "end": 228, "text": "Cooc,", "ref_id": null }, { "start": 229, "end": 247, "text": "and Sim are 78.4%,", "ref_id": null }, { "start": 248, "end": 254, "text": "76.1%,", "ref_id": null }, { "start": 255, "end": 257, "text": "69", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Impact of Seed Lexicon Size", "sec_num": "5.2" }, { "text": "To verify our claim, we examined the number of test words which occurred with no seeds in the context. There were 570 such words in Rapp(S) lation equivalents. Words such as this occur even if using Lex L , and that number increases when Lex S is used. On the other hand, the proposed methods are able to utilize all the seeds in order to find equivalents for words such as these. 
Therefore, the proposed methods work well even if the coverage of a seed lexicon is low.", "cite_spans": [ { "start": 132, "end": 139, "text": "Rapp(S)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Impact of Seed Lexicon Size", "sec_num": "5.2" }, { "text": "Our test data includes many low-frequency words which are not covered by the EDR bilingual dictionary or the NTCIR parallel data. Of these, 624 words appear in the corpus fewer than 50 times. Table 5 shows Acc N using Lex L for the 624 low-frequency words and the 376 high-frequency words. Table 5 shows that performance for low-frequency words is much worse than that for high-frequency words. This is because the translation of high-frequency words can draw on abundant and reliable context information, while the context information for low-frequency words is statistically unreliable. In the proposed methods, edges linking rare words are sometimes generated from accidental co-occurrences, and unrelated seed information is then transferred through these edges. Therefore, even our label-propagation-based methods, especially Cooc, could not identify the correct translation equivalents for rare words. Sim alleviated the problem by using a similarity graph, in which edges are generated based on global correlation among words, as indicated by Table 5. Table 5 also suggests that the top 20 translation candidates for high-frequency words have the potential to contribute to bilingual tasks such as MT and CLIR, although the overall performance is still low.", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 188, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 271, "end": 278, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 1029, "end": 1036, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 1039, "end": 1047, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Impact of Word Frequencies", "sec_num": "5.3" }, { "text": "We examined Acc N for synonyms of translated seeds in Japanese. 
The Acc 1 and Acc 20 of Sim(L) are 15.6% and 56.3%, respectively, and those of Cooc(L) are 9.4% and 37.5%, respectively. The results show that similarity graphs are effective for clustering synonyms into the same translation equivalents. For example, Sim(L) extracted the correct translation pair of the English word \"iodine\" and the Japanese word \" \", a synonym of the translated seed \" (iodine)\" in Japanese. This is because synonyms tend to be linked in the similarity graph and to have similar seed distributions. In the co-occurrence graph, on the other hand, synonyms tend to be linked only indirectly through mutual context words, so their seed distributions can be far apart.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Similarity Graphs", "sec_num": "5.4" }, { "text": "Patent documents in particular contain many loanwords, which are spelled in different ways from person to person. For example, the loanword for the English word \"user\" is often written as \" \", but it is sometimes written as \" \", with an additional prolonged sound mark. Therefore, Sim is particularly effective for the experiment data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Similarity Graphs", "sec_num": "5.4" }, { "text": "We discuss errors of the proposed methods other than those for low-frequency words (see Section 5.3). Our test data includes words whose translation equivalents inherently cannot be found. The first type is words whose equivalent does not exist in the English corpus. This is an unavoidable problem for methods based on comparable corpora. The second type is words whose English equivalents are compound words. The Japanese morphological analyzer tends to group a compound word into a single word, while the English text analyzer does not combine words separated by whitespace into a single token. 
For example, the single Japanese word \" \" is equivalent to \"palm pattern\" or \"palm print\", which are composed of two words. This case was counted as an error even though the proposed methods found the word \"palm\" as an equivalent of \" \". A main cause of the remaining errors is word sense ambiguity, which differs across languages. For example, the Japanese word \" \" means both \"right\" and \"conservatism\" in English. The proposed methods merge different senses by propagating seeds through such polysemous words on only one language side. As a result, translation pairs can acquire wrong seed distributions, and the proposed methods then fail to identify correct translation pairs. We will leave this word sense disambiguation problem for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.5" }, { "text": "Besides the comparable corpora approach discussed in Section 2, many alternatives have been proposed for bilingual lexicon extraction. The first is a method that finds translation pairs in parallel corpora (Wu and Xia, 1994; Fung and Church, 1994; Och and Ney, 2003). However, large parallel corpora are only available for a few language pairs and for limited domains. Moreover, even large parallel corpora are considerably smaller than comparable corpora.", "cite_spans": [ { "start": 206, "end": 224, "text": "(Wu and Xia, 1994;", "ref_id": "BIBREF36" }, { "start": 225, "end": 247, "text": "Fung and Church, 1994;", "ref_id": null }, { "start": 248, "end": 266, "text": "Och and Ney, 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "The second is a method that exploits the Web. One line of work extracted translation pairs by mining web anchor texts and link structures. 
As an alternative, mixed-language web pages are exploited. Texts including both the source and target languages are first retrieved from the web using a search engine or simple rules, and translation pairs are then extracted from the mixed-language texts utilizing various clues: Zhang and Vines (2004) used co-occurrence statistics, Cheng et al. (2004) used co-occurrences and context similarity information, and Huang et al. (2005) used phonetic, semantic and frequency-distance features. Lin et al. (2008) proposed a method for extracting parenthetically translated terms, where a word alignment algorithm is used for establishing the correspondences between in-parenthesis and pre-parenthesis words. However, those methods cannot find translation pairs when they are not connected with each other through link structures, or when they do not co-occur in the same text.", "cite_spans": [ { "start": 397, "end": 419, "text": "Zhang and Vines (2004)", "ref_id": "BIBREF37" }, { "start": 450, "end": 469, "text": "Cheng et al. (2004)", "ref_id": "BIBREF4" }, { "start": 529, "end": 548, "text": "Huang et al. (2005)", "ref_id": "BIBREF12" }, { "start": 606, "end": 623, "text": "Lin et al. (2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Transliteration is a completely different approach to bilingual lexicon acquisition, in which a word in one language is converted into another language using phonetic equivalence (Knight and Graehl, 1998; Karimi et al., 2011). 
Although machine transliteration works particularly well for proper names and loanwords, it cannot be employed for phonetically dissimilar translations.", "cite_spans": [ { "start": 175, "end": 200, "text": "(Knight and Graehl, 1998;", "ref_id": null }, { "start": 201, "end": 221, "text": "Karimi et al., 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "All the methods mentioned above may potentially extract translation pairs more precisely than our comparable corpora approach when their underlying assumptions are satisfied. We might improve the performance of our method by augmenting a seed lexicon with translation pairs extracted using the above methods, as experimented with in Section 4, in which additional lexical entries are included from parallel data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We proposed a novel bilingual lexicon extraction method using label propagation for alleviating the problem of limited seed lexicon size. The proposed method captures relations with all the seeds, including indirect relations, by propagating seed information. Moreover, we proposed using similarity graphs in the propagation process in addition to co-occurrence graphs. 
Our experiments showed that the proposed method outperforms conventional context-similarity-based methods (Rapp, 1999; Andrade et al., 2010), and the similarity graphs improve the performance by clustering synonyms into the same translation.", "cite_spans": [ { "start": 465, "end": 477, "text": "(Rapp, 1999;", "ref_id": "BIBREF31" }, { "start": 478, "end": 499, "text": "Andrade et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We are planning to investigate the following open problems in future work: word sense disambiguation and translation of compound words as described in (Daille and Morin, 2005; Morin et al., 2007). In addition, indirect relations have also been used in other tasks, such as paraphrase acquisition from bilingual parallel corpora (Kok and Brockett, 2010). We will utilize their random walk approach or other graph-based techniques such as modified adsorption (Talukdar and Crammer, 2009) for generating seed distributions. We are also planning an end-to-end evaluation, for instance, by incorporating the extracted bilingual lexicon into an MT system.", "cite_spans": [ { "start": 151, "end": 175, "text": "(Daille and Morin, 2005;", "ref_id": "BIBREF6" }, { "start": 176, "end": 195, "text": "Morin et al., 2007)", "ref_id": "BIBREF24" }, { "start": 329, "end": 353, "text": "(Kok and Brockett, 2010)", "ref_id": "BIBREF18" }, { "start": 459, "end": 487, "text": "(Talukdar and Crammer, 2009)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Although Vulić et al. (2011) regarded document-aligned texts such as texts on Wikipedia as comparable corpora, we do not limit comparable corpora to these kinds of texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In Haghighi et al. 
(2008) and Daumé III and Jagarlamudi (2011), indirect relations with seeds are considered topologically, but our method utilizes degrees of indirect correlation with seeds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Assumption (I) indicates that direct co-occurrence relations between a word and its context words are preserved across different languages. Therefore, assumption (II) is derived by recursively applying assumption (I) to the \"context words\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We can combine the association measures used in a co-occurrence graph and a similarity graph. We will leave this combination approach for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This assumption is justified because context similarities are based on co-occurrence relations that are preserved across different languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Section G of the IPC code indicates the physics domain. 7 http://chasen-legacy.sourceforge.jp/ 8 The English words contain words in tables or mathematical formulae but the Japanese words do not, because the data format differs between English and Japanese. 
This is why the number of English words is larger than that of Japanese words, even though the number of English documents is smaller than that of Japanese documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We could not evaluate using existing dictionaries because most of the test data are technical terms and neologisms not included in the dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers of EMNLP-CoNLL 2012 for helpful suggestions and comments on a first version of this paper. We also thank the anonymous reviewers of the First Workshop on Multilingual Modeling (MM-2012) for useful comments on this work. Hervé Déjean, Éric Gaussier, and Fatia Sadat. 2002. An approach based on multilingual thesauri and model combination for bilingual lexicon extraction. In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Data-Driven Graph Construction for Semi-Supervised Graph-Based Learning in NLP", "authors": [ { "first": "Andrei", "middle": [], "last": "Alexandrescu", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Kirchhoff", "suffix": "" } ], "year": 2007, "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", "volume": "", "issue": "", "pages": "204--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrei Alexandrescu and Katrin Kirchhoff. 2007. Data-Driven Graph Construction for Semi-Supervised Graph-Based Learning in NLP. 
In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 204-211.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Robust Measurement and Comparison of Context Similarity for Finding Translation Pairs", "authors": [ { "first": "Daniel", "middle": [], "last": "Andrade", "suffix": "" }, { "first": "Tetsuya", "middle": [], "last": "Nasukawa", "suffix": "" }, { "first": "Junichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010)", "volume": "", "issue": "", "pages": "19--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Andrade, Tetsuya Nasukawa, and Junichi Tsujii. 2010. Robust Measurement and Comparison of Context Similarity for Finding Translation Pairs. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 19-27.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Effective Use of Dependency Structure for Bilingual Lexicon Creation", "authors": [ { "first": "Daniel", "middle": [], "last": "Andrade", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Junichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 12th International Conference on Computational Linguistics and Intelligent Text Processing", "volume": "Part II", "issue": "", "pages": "80--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Andrade, Takuya Matsuzaki, and Junichi Tsujii. 2011a. Effective Use of Dependency Structure for Bilingual Lexicon Creation. 
In Proceedings of the 12th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing 2011) - Volume Part II, pages 80-92.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning the Optimal Use of Dependency-parsing Information for Finding Translations with Comparable Corpora", "authors": [ { "first": "Daniel", "middle": [], "last": "Andrade", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Junichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 4th Workshop on Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "10--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Andrade, Takuya Matsuzaki, and Junichi Tsujii. 2011b. Learning the Optimal Use of Dependency-parsing Information for Finding Translations with Comparable Corpora. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora, pages 10-18.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Translating Unknown Queries with Web Corpora for Cross-Language Information Retrieval", "authors": [ { "first": "Pu-Jen", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Jei-Wen", "middle": [], "last": "Teng", "suffix": "" }, { "first": "Ruei-Cheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jenq-Haur", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wen-Hsiang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Lee-Feng", "middle": [], "last": "Chien", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "146--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pu-Jen Cheng, Jei-Wen Teng, Ruei-Cheng Chen, Jenq-Haur Wang, Wen-Hsiang Lu, and Lee-Feng Chien. 2004. 
Translating Unknown Queries with Web Corpora for Cross-Language Information Retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 146-153.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Looking for candidate translational equivalents in specialized, comparable corpora", "authors": [ { "first": "Yun-Chuang", "middle": [], "last": "Chiao", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002)", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yun-Chuang Chiao and Pierre Zweigenbaum. 2002. Looking for candidate translational equivalents in specialized, comparable corpora. In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002), pages 1-5.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "French-English Terminology Extraction from Comparable Corpora", "authors": [ { "first": "Béatrice", "middle": [], "last": "Daille", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Morin", "suffix": "" } ], "year": 2005, "venue": "Proceedings of 2nd International Joint Conference on Natural Language Processing (IJCNLP 2005)", "volume": "", "issue": "", "pages": "707--718", "other_ids": {}, "num": null, "urls": [], "raw_text": "Béatrice Daille and Emmanuel Morin. 2005. French-English Terminology Extraction from Comparable Corpora. 
In Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP 2005), pages 707-718.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011)", "volume": "", "issue": "", "pages": "600--609", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dipanjan Das and Slav Petrov. 2011. Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 600-609.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Domain Adaptation for Machine Translation by Mining Unseen Words", "authors": [ { "first": "Hal", "middle": [], "last": "Daumé", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Jagadeesh", "middle": [], "last": "Jagarlamudi", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011)", "volume": "", "issue": "", "pages": "407--412", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daumé III and Jagadeesh Jagarlamudi. 2011. Domain Adaptation for Machine Translation by Mining Unseen Words. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 407-412.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Geometric View on Bilingual Lexicon Extraction from Comparable Corpora", "authors": [], "year": null, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004)", "volume": "", "issue": "", "pages": "526--533", "other_ids": {}, "num": null, "urls": [], "raw_text": "A Geometric View on Bilingual Lexicon Extraction from Comparable Corpora. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004), pages 526-533.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning Bilingual Lexicons from Monolingual Corpora", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL", "volume": "", "issue": "", "pages": "771--779", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning Bilingual Lexicons from Monolingual Corpora.
In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL 2008): the Human Language Technology Conference (HLT), pages 771-779.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bilingual Lexicon Extraction from Comparable Corpora as Metasearch", "authors": [ { "first": "Amir", "middle": [], "last": "Hazem", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Morin", "suffix": "" }, { "first": "Sebastian Pe\u00f1a", "middle": [], "last": "Saldarriaga", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 4th Workshop on Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "35--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amir Hazem, Emmanuel Morin, and Sebastian Pe\u00f1a Saldarriaga. 2011. Bilingual Lexicon Extraction from Comparable Corpora as Metasearch. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora, pages 35-43.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Mining Key Phrase Translations from Web Corpora", "authors": [ { "first": "Fei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT-EMNLP 2005)", "volume": "", "issue": "", "pages": "483--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Huang, Ying Zhang, and Stephan Vogel. 2005. Mining Key Phrase Translations from Web Corpora.
In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT-EMNLP 2005), pages 483-490.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bilingual lexicon extraction from comparable corpora using in-domain terms", "authors": [ { "first": "Azniah", "middle": [], "last": "Ismail", "suffix": "" }, { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010)", "volume": "", "issue": "", "pages": "481--489", "other_ids": {}, "num": null, "urls": [], "raw_text": "Azniah Ismail and Suresh Manandhar. 2010. Bilingual lexicon extraction from comparable corpora using in-domain terms. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 481-489.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Extracting Translation Equivalents from Bilingual Comparable Corpora", "authors": [ { "first": "Hiroyuki", "middle": [], "last": "Kaji", "suffix": "" } ], "year": 2005, "venue": "IEICE Trans. Inf. Syst.", "volume": "", "issue": "", "pages": "313--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroyuki Kaji. 2005. Extracting Translation Equivalents from Bilingual Comparable Corpora. IEICE Trans. Inf. Syst., E88-D:313-323.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning a Translation Lexicon from Monolingual Corpora", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL Workshop on Unsupervised Lexical Acquisition", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Kevin Knight. 2002. Learning a Translation Lexicon from Monolingual Corpora.
In Proceedings of ACL Workshop on Unsupervised Lexical Acquisition, pages 9-16.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Hitting the Right Paraphrases in Good Time", "authors": [ { "first": "Stanley", "middle": [], "last": "Kok", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2010)", "volume": "", "issue": "", "pages": "145--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley Kok and Chris Brockett. 2010. Hitting the Right Paraphrases in Good Time. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2010), pages 145-153.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Revisiting Context-based Projection Methods for Term-Translation Spotting in Comparable Corpora", "authors": [ { "first": "Audrey", "middle": [], "last": "Laroche", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Langlais", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010)", "volume": "", "issue": "", "pages": "617--625", "other_ids": {}, "num": null, "urls": [], "raw_text": "Audrey Laroche and Philippe Langlais. 2010. Revisiting Context-based Projection Methods for Term-Translation Spotting in Comparable Corpora.
In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 617-625.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A Linguistically Grounded Graph Model for Bilingual Lexicon Extraction", "authors": [ { "first": "Florian", "middle": [], "last": "Laws", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Michelbacher", "suffix": "" }, { "first": "Beate", "middle": [], "last": "Dorow", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Scheible", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Heid", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010)", "volume": "", "issue": "", "pages": "614--622", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian Laws, Lukas Michelbacher, Beate Dorow, Christian Scheible, Ulrich Heid, and Hinrich Sch\u00fctze. 2010. A Linguistically Grounded Graph Model for Bilingual Lexicon Extraction. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 614-622.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Mining Parenthetical Translations from the Web by Word Alignment", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shaojun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pasca", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "994--1002", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin, Shaojun Zhao, Benjamin Van Durme, and Marius Pasca. 2008. Mining Parenthetical Translations from the Web by Word Alignment.
In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL 2008): the Human Language Technology Conference (HLT), pages 994-1002.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Anchor Text Mining for Translation of Web Queries: A Transitive Translation Approach", "authors": [ { "first": "Wen-Hsiang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Lee-Feng", "middle": [], "last": "Chien", "suffix": "" }, { "first": "Hsi-Jian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "ACM Transactions on Information Systems", "volume": "22", "issue": "2", "pages": "242--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen-Hsiang Lu, Lee-Feng Chien, and Hsi-Jian Lee. 2004. Anchor Text Mining for Translation of Web Queries: A Transitive Translation Approach. ACM Transactions on Information Systems, 22(2):242-269.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Bilingual Lexicon Extraction from Comparable Corpora Enhanced with Parallel Corpora", "authors": [ { "first": "Emmanuel", "middle": [], "last": "Morin", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Prochasson", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 4th Workshop on Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "27--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanuel Morin and Emmanuel Prochasson. 2011. Bilingual Lexicon Extraction from Comparable Corpora Enhanced with Parallel Corpora.
In Proceedings of the 4th Workshop on Building and Using Comparable Corpora, pages 27-34.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Bilingual Terminology Mining - Using Brain, not brawn comparable corpora", "authors": [ { "first": "Emmanuel", "middle": [], "last": "Morin", "suffix": "" }, { "first": "B\u00e9atrice", "middle": [], "last": "Daille", "suffix": "" }, { "first": "Koichi", "middle": [], "last": "Takeuchi", "suffix": "" }, { "first": "Kyo", "middle": [], "last": "Kageura", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL 2007)", "volume": "", "issue": "", "pages": "664--671", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanuel Morin, B\u00e9atrice Daille, Koichi Takeuchi, and Kyo Kageura. 2007. Bilingual Terminology Mining - Using Brain, not brawn comparable corpora. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL 2007), pages 664-671.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Word Sense Disambiguation Using Label Propagation Based Semi-Supervised Learning", "authors": [ { "first": "Zheng-Yu", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Dong-Hong", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Chew Lim", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005)", "volume": "", "issue": "", "pages": "395--402", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng-Yu Niu, Dong-Hong Ji, and Chew Lim Tan. 2005. Word Sense Disambiguation Using Label Propagation Based Semi-Supervised Learning.
In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005), pages 395-402.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A Systematic Comparison of Various Statistical Alignment Models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29:19-51.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning Spanish-Galician Translation Equivalents Using a Comparable Corpus and a Bilingual Dictionary", "authors": [ { "first": "Pablo", "middle": [ "Gamallo" ], "last": "Otero", "suffix": "" }, { "first": "Jos\u00e9 Ramom Pichel", "middle": [], "last": "Campos", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 9th International Conference on Computational Linguistics and Intelligent Text Processing", "volume": "", "issue": "", "pages": "423--433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pablo Gamallo Otero and Jos\u00e9 Ramom Pichel Campos. 2008. Learning Spanish-Galician Translation Equivalents Using a Comparable Corpus and a Bilingual Dictionary.
In Proceedings of the 9th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing 2008), pages 423-433.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Finding Translations for Low-Frequency Words in Comparable Corpora", "authors": [ { "first": "Viktor", "middle": [], "last": "Pekar", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" }, { "first": "Dimitar", "middle": [], "last": "Blagoev", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Mulloni", "suffix": "" } ], "year": 2006, "venue": "Machine Translation", "volume": "20", "issue": "", "pages": "247--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Viktor Pekar, Ruslan Mitkov, Dimitar Blagoev, and Andrea Mulloni. 2006. Finding Translations for Low-Frequency Words in Comparable Corpora. Machine Translation, 20:247-266.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Rare Word Translation Extraction from Aligned Comparable Documents", "authors": [ { "first": "Emmanuel", "middle": [], "last": "Prochasson", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011)", "volume": "", "issue": "", "pages": "1327--1335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanuel Prochasson and Pascale Fung. 2011. Rare Word Translation Extraction from Aligned Comparable Documents.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 1327-1335.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Identifying Word Translations in Non-Parallel Texts", "authors": [ { "first": "Reinhard", "middle": [], "last": "Rapp", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL 1995)", "volume": "", "issue": "", "pages": "320--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Rapp. 1995. Identifying Word Translations in Non-Parallel Texts. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL 1995), pages 320-322.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Automatic Identification of Word Translations from Unrelated English and German Corpora", "authors": [ { "first": "Reinhard", "middle": [], "last": "Rapp", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL 1999)", "volume": "", "issue": "", "pages": "519--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Rapp. 1999. Automatic Identification of Word Translations from Unrelated English and German Corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL 1999), pages 519-526.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Probabilistic Part-of-Speech Tagging Using Decision Trees", "authors": [ { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the International Conference on New Methods in Language Processing", "volume": "", "issue": "", "pages": "44--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helmut Schmid. 1994. Probabilistic Part-of-Speech Tagging Using Decision Trees.
In Proceedings of the International Conference on New Methods in Language Processing, pages 44-49.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Mining New Word Translations from Comparable Corpora", "authors": [ { "first": "Li", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004)", "volume": "", "issue": "", "pages": "618--624", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Shao and Hwee Tou Ng. 2004. Mining New Word Translations from Comparable Corpora. In Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004), pages 618-624.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "New Regularized Algorithms for Transductive Learning", "authors": [ { "first": "Partha", "middle": [], "last": "Pratim Talukdar", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases", "volume": "", "issue": "", "pages": "442--457", "other_ids": {}, "num": null, "urls": [], "raw_text": "Partha Pratim Talukdar and Koby Crammer. 2009. New Regularized Algorithms for Transductive Learning.
In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2009), pages 442-457.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Identifying Word Translations from Comparable Corpora Using Latent Topic Models", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Wim", "middle": [ "De" ], "last": "Smet", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011)", "volume": "", "issue": "", "pages": "479--484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107, Wim De Smet, and Marie-Francine Moens. 2011. Identifying Word Translations from Comparable Corpora Using Latent Topic Models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 479-484.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Learning an English-Chinese Lexicon from a Parallel Corpus", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xuanyin", "middle": [], "last": "Xia", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the First Conference of the Association for Machine Translation in the Americas (AMTA 1994)", "volume": "", "issue": "", "pages": "206--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu and Xuanyin Xia. 1994. Learning an English-Chinese Lexicon from a Parallel Corpus.
In Proceedings of the First Conference of the Association for Machine Translation in the Americas (AMTA 1994), pages 206-213.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Using the Web for Automated Translation Extraction in Cross-Language Information Retrieval", "authors": [ { "first": "Ying", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Vines", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "162--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Zhang and Phil Vines. 2004. Using the Web for Automated Translation Extraction in Cross-Language Information Retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 162-169.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Learning from Labeled and Unlabeled Data with Label Propagation", "authors": [ { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojin Zhu and Zoubin Ghahramani. 2002. Learning from Labeled and Unlabeled Data with Label Propagation. Technical report, CMU-CALD-02-107.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "An Example of a Previous Method and our Proposed Method Andrade et al. (2011b) performed a linear transformation of context vectors in accordance with the notion that importance varies by context positions. Gaussier et al. (2004) mapped context vectors via latent classes to capture synonymy and polysemy in a seed lexicon. Fi\u0161er et al.
(2011) and", "type_str": "figure", "num": null }, "TABREF2": { "content": "", "type_str": "table", "num": null, "html": null, "text": "Size of Seed Lexicons" }, "TABREF3": { "content": "
          Lex_S              Lex_L
          Acc1     Acc20     Acc1     Acc20
Rapp      1.5%     3.8%      4.8%     17.6%
Andrade   1.9%     4.2%      5.6%     17.6%
Cooc      3.2%     8.6%      9.2%     28.3%
Sim       4.1%     11.5%     10.8%    30.6%
", "type_str": "table", "num": null, "html": null, "text": "shows the performance of each method using Lex S or Lex L . Hereafter, M ethod(L) (or M ethod(S)) denotes the M ethod using Lex L (or Acc 20 Acc 1 Acc 20" }, "TABREF4": { "content": "
Lex_S). We measure the performance on bilingual lexicon extraction as top-N accuracy (Acc_N), which is the number of test words whose top N translation candidates contain a correct translation equivalent over the total number of test words (=1,000).
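The Acc_N metric defined above can be sketched in a few lines of code. This is a minimal illustrative snippet, not the authors' implementation; the function name, the toy candidate lists, and the gold dictionary are all hypothetical (the paper's actual test set has 1,000 words):

```python
def top_n_accuracy(candidates, gold, n):
    # Acc_N: fraction of test words whose top-n ranked candidate list
    # contains at least one correct translation equivalent.
    hits = sum(
        1
        for word, ranked in candidates.items()
        if any(c in gold.get(word, set()) for c in ranked[:n])
    )
    return hits / len(candidates)

# Hypothetical toy data for illustration only.
candidates = {
    "illness": ["disease", "sickness"],  # ranked translation candidates
    "seizure": ["attack", "fit"],
}
gold = {"illness": {"sickness"}, "seizure": {"stroke"}}

print(top_n_accuracy(candidates, gold, 1))  # 0.0: no top-1 candidate is correct
print(top_n_accuracy(candidates, gold, 2))  # 0.5: "sickness" is found for "illness"
```

Note that Acc_N is monotonically non-decreasing in N, which is why the tables report Acc_1 alongside the more permissive Acc_20.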
", "type_str": "table", "num": null, "html": null, "text": "Performance on Bilingual Lexicon Extraction" }, "TABREF6": { "content": "
: Translation Candidates for (manic-depression)
Rank  Cooc(L)            Andrade(L)                Cooc(S)                   Andrade(S)
1     narcotic (0.12)    narcotic (7.6)            dementia (0.016)          posteriori (5.0)
2     psychosis (0.11)   old (6.3)                 alien, stepchild (0.014)  dementia (3.7)
3     neurosis (0.08)    psychosis (6.3)           posteriori (0.012)        ulcer (3.2)
4     hormone (0.05)     bronchitis (5.6)          electropositivity (0.012) period (2.9)
5     insomnia (0.04)    posteriori (5.0)          ulcer (0.011)             seriousness (2.5)

manic-depression
Rank  Cooc(L)            Andrade(L)                Cooc(S)                   Andrade(S)
1     illness (0.15)     illness (8.6)             ganja (0.012)             galop (7.0)
2     neurosis (0.11)    psychotherapeutics (7.0)  carbanilide (0.011)       madness (5.4)
3     seizure (0.07)     galop (7.0)               paludism (0.011)          libido (5.2)
4     psychosis (0.06)   psychosis (6.8)           resignation (0.010)       vitiligo (4.6)
5     insomnia (0.04)    somnambulism (6.7)        galop (0.009)             dementia (4.3)
", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF7": { "content": "", "type_str": "table", "num": null, "html": null, "text": "Seeds with the Highest" }, "TABREF8": { "content": "
            Low Freq.          High Freq.
            Acc1     Acc20     Acc1     Acc20
Rapp(L)     0.5%     2.4%      7.2%     25.6%
Andrade(L)  0.3%     1.8%      8.6%     26.3%
Cooc(L)     0.8%     4.3%      13.9%    40.7%
Sim(L)      2.2%     6.7%      15.0%    42.0%
,
387 in Rapp(L), 572 in Andrade(S), and 388 in
Andrade(L). The baselines cannot find their trans-
", "type_str": "table", "num": null, "html": null, "text": "Acc 20 Acc 1 Acc 20" }, "TABREF9": { "content": "", "type_str": "table", "num": null, "html": null, "text": "Comparison between Performance for High and Low Frequency Words" } } } }