{ "paper_id": "D18-1046", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:49:43.176361Z" }, "title": "Bootstrapping Transliteration with Constrained Discovery for Low-Resource Languages", "authors": [ { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania Philadelphia", "location": { "region": "PA" } }, "email": "shyamupa@seas.upenn.edu" }, { "first": "Jordan", "middle": [], "last": "Kodner", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "settlement": "Philadelphia", "region": "PA" } }, "email": "jkodner@seas.upenn.edu" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "settlement": "Philadelphia", "region": "PA" } }, "email": "danroth@seas.upenn.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Generating the English transliteration of a name written in a foreign script is an important and challenging step in multilingual knowledge acquisition and information extraction. Existing approaches to transliteration generation require a large (>5000) number of training examples. This difficulty contrasts with transliteration discovery, a somewhat easier task that involves picking a plausible transliteration from a given list. In this work, we present a bootstrapping algorithm that uses constrained discovery to improve generation, and can be used with as few as 500 training examples, which we show can be sourced from annotators in a matter of hours. This opens the task to languages for which large number of training examples are unavailable. We evaluate transliteration generation performance itself, as well the improvement it brings to crosslingual candidate generation for entity linking, a typical downstream task. We present a comprehensive evaluation of our approach on nine languages, each written in a unique script. 1", "pdf_parse": { "paper_id": "D18-1046", "_pdf_hash": "", "abstract": [ { "text": "Generating the English transliteration of a name written in a foreign script is an important and challenging step in multilingual knowledge acquisition and information extraction. Existing approaches to transliteration generation require a large (>5000) number of training examples. This difficulty contrasts with transliteration discovery, a somewhat easier task that involves picking a plausible transliteration from a given list. In this work, we present a bootstrapping algorithm that uses constrained discovery to improve generation, and can be used with as few as 500 training examples, which we show can be sourced from annotators in a matter of hours. This opens the task to languages for which large number of training examples are unavailable. We evaluate transliteration generation performance itself, as well the improvement it brings to crosslingual candidate generation for entity linking, a typical downstream task. We present a comprehensive evaluation of our approach on nine languages, each written in a unique script. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transliteration is the process of transducing names from one writing system to another (e.g., \u0913\u092c\u093e\u092e\u093e in Devanagari to Obama in Latin script) while preserving their pronunciation (Knight and Graehl, 1998; Karimi et al., 2011) . 
In particular, back-transliteration from foreign languages to English has applications in multilingual knowledge acquisition tasks including named entity recognition (Darwish, 2013) and information retrieval (Virga and Khudanpur, 2003). Two tasks feature prominently in the transliteration literature: generation (Knight and Graehl, 1998), which involves producing an appropriate transliteration for a given word in an open-ended way, and discovery (Sproat et al., 2006; Klementiev and Roth, 2008), which involves selecting an appropriate transliteration for a word from a list of candidates. (Footnote 1: code at github.com/shyamupa/hma-translit.)", "cite_spans": [ { "start": 177, "end": 202, "text": "(Knight and Graehl, 1998;", "ref_id": "BIBREF26" }, { "start": 203, "end": 223, "text": "Karimi et al., 2011)", "ref_id": null }, { "start": 392, "end": 407, "text": "(Darwish, 2013)", "ref_id": "BIBREF9" }, { "start": 434, "end": 461, "text": "(Virga and Khudanpur, 2003)", "ref_id": "BIBREF41" }, { "start": 540, "end": 565, "text": "(Knight and Graehl, 1998)", "ref_id": "BIBREF26" }, { "start": 675, "end": 696, "text": "(Sproat et al., 2006;", "ref_id": "BIBREF36" }, { "start": 697, "end": 719, "text": "Klementiev and Roth, 2008)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This work develops transliteration generation approaches for low-resource languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing transliteration generation models require supervision in the form of source-target name pairs (≈5-10k), which are often collected from names in Wikipedia inter-language links (Irvine et al., 2010). However, most languages that use non-Latin scripts are under-represented in terms of such resources. Table 1 illustrates this issue, and the extra coverage one can achieve by extending to low-resource languages. A model that requires 50k name pairs as supervision can only support 6 languages, while one that just needs 500 could support 56. For a model to be widely applicable, it must function in low-resource settings.", "cite_spans": [ { "start": 184, "end": 205, "text": "(Irvine et al., 2010)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Table 1 caption, partially recovered:] ...Wikipedia inter-language links. While previous approaches for transliteration generation were applicable to only 24 languages (spanning 15 scripts), our approach is applicable to 56 languages (23 scripts). When counting scripts we exclude variants (e.g., all Cyrillic scripts and variants count as one).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a new bootstrapping algorithm that uses a weak generation model to guide discovery of good transliterations, which in turn aids future bootstrapping iterations. 2 By carefully controlling the interaction of discovery and the generation model via constrained inference, we show how to bootstrap a generation model using a dictionary of names in English, a list of words in the foreign script, and little initial supervision (≈500 name pairs). 
To the best of our knowledge, ours is the first work to accomplish transliteration generation in such a low-resource setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach Previous Work", "sec_num": null }, { "text": "We demonstrate the practicality of our approach in truly low-resource scenarios and downstream applications through two case studies. First, in §8.1 we show that one can obtain the initial supervision from a single human annotator within a few hours for two languages -Armenian and Punjabi. This is a realistic scenario where language access is limited to a single native informant. Second, in §8.2 we show that our approach benefits a typical downstream application, namely candidate generation for cross-lingual entity linking, by improving recall on two low-resource languages -Tigrinya and Macedonian. We also present an analysis (§7) of the inherent challenges of transliteration, and the trade-off between native (i.e., source) and foreign (i.e., target) vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach Previous Work", "sec_num": null }, { "text": "We briefly review the limitations of existing generation and discovery approaches, and provide an overview of how our work addresses them. Transliteration generation (Haizhou et al., 2004; Jiampojamarn et al., 2009; Ravi and Knight, 2009; Jiampojamarn et al., 2010; Finch et al., 2015, inter alia) requires a generous amount of name pairs (≈5-10k) in order to learn to map words in the source script to the target script. While some approaches (Irvine et al., 2010; Tsai and Roth, 2018) use Wikipedia inter-language links to identify name pairs for supervision, a truly low-resource language (like Tigrinya) is likely to have limited Wikipedia presence as well. Transliteration discovery (Sproat et al., 2006; Chang et al., 2009) is considerably easier than generation, owing to the smaller search space. However, discovery often uses features derived from resources that are unavailable for low-resource languages, like comparable corpora (Sproat et al., 2006; Klementiev and Roth, 2008).", "cite_spans": [ { "start": 139, "end": 161, "text": "(Haizhou et al., 2004;", "ref_id": "BIBREF15" }, { "start": 162, "end": 188, "text": "Jiampojamarn et al., 2009;", "ref_id": "BIBREF19" }, { "start": 189, "end": 211, "text": "Ravi and Knight, 2009;", "ref_id": "BIBREF35" }, { "start": 212, "end": 238, "text": "Jiampojamarn et al., 2010;", "ref_id": "BIBREF20" }, { "start": 239, "end": 270, "text": "Finch et al., 2015, inter alia)", "ref_id": null }, { "start": 415, "end": 436, "text": "(Irvine et al., 2010;", "ref_id": "BIBREF17" }, { "start": 437, "end": 457, "text": "Tsai and Roth, 2018)", "ref_id": "BIBREF39" }, { "start": 632, "end": 653, "text": "(Sproat et al., 2006;", "ref_id": "BIBREF36" }, { "start": 654, "end": 673, "text": "Chang et al., 2009)", "ref_id": "BIBREF2" }, { "start": 884, "end": 905, "text": "(Sproat et al., 2006;", "ref_id": "BIBREF36" }, { "start": 906, "end": 932, "text": "Klementiev and Roth, 2008)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A key limitation of discovery is the assumption that the correct transliteration(s) is in the list of candidates N. 
Since discovery models always pick something from N, they can produce false positives if no correct transliteration is present in N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Discovery", "sec_num": null }, { "text": "To overcome this, it is prudent to develop generation models, which can handle inputs whose transliteration is not in N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Discovery", "sec_num": null }, { "text": "We show that a weak generation model can be iteratively improved using constrained discovery. In particular, our work uses a weak generation model to discover new training pairs, using constraints to drive the bootstrapping. Our generation model is inspired by the success of sequence-to-sequence generation models (Sutskever et al., 2014; Bahdanau et al., 2015) for string transduction tasks like inflection and derivation generation (Faruqui et al., 2016; Cotterell et al., 2017; Aharoni and Goldberg, 2017; Makarov et al., 2017). Our bootstrapping framework can be viewed as an instance of constraint-driven learning (Chang et al., 2007, 2012).", "cite_spans": [ { "start": 315, "end": 339, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF37" }, { "start": 340, "end": 362, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF1" }, { "start": 435, "end": 457, "text": "(Faruqui et al., 2016;", "ref_id": "BIBREF11" }, { "start": 458, "end": 481, "text": "Cotterell et al., 2017;", "ref_id": "BIBREF8" }, { "start": 482, "end": 509, "text": "Aharoni and Goldberg, 2017;", "ref_id": "BIBREF0" }, { "start": 510, "end": 531, "text": "Makarov et al., 2017)", "ref_id": "BIBREF29" }, { "start": 621, "end": 640, "text": "(Chang et al., 2007", "ref_id": "BIBREF3" }, { "start": 641, "end": 662, "text": "(Chang et al., , 2012", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Our Work", "sec_num": null }, { "text": "We view generation as a string transduction task and use a sequence-to-sequence (Seq2Seq) generation model that uses hard monotonic attention (Aharoni and Goldberg, 2017), henceforth referred to as Seq2Seq(HMA). During generation, Seq2Seq(HMA) directly models the monotonic source-to-target sequence alignments, using a pointer that attends to a single input character at a time. Monotonic attention is a natural fit for transliteration because even though the number of characters needed to represent a sound in the source and target languages varies, the sequence of sounds is presented in the same order. 3 We review Seq2Seq(HMA) below, and describe how it can be applied to transliteration generation.", "cite_spans": [ { "start": 142, "end": 170, "text": "(Aharoni and Goldberg, 2017)", "ref_id": "BIBREF0" }, { "start": 606, "end": 607, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transliteration Generation with Hard Monotonic Attention - Seq2Seq(HMA)", "sec_num": "3" }, { "text": "Encoding Input Word: Let Σ_f be the source alphabet and Σ_e be the English alphabet. 
Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Generation with Hard", "sec_num": "3" }, { "text": "x = (x 1 , x 2 , \u2022 \u2022 \u2022 , x n ) denote an input word where each character x i \u2208 \u03a3 f .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Generation with Hard", "sec_num": "3" }, { "text": "The characters are first encoded using a embedding matrix W \u2208 Hard Monotonic Attention, or Seq2Seq(HMA). The figure shows how decoding proceeds for transliterating \"\u0925\u0928\u094b\u0938\" to \"thanos\". During decoding, the model attends to a source character (e.g.,\u0925 shown in blue) and outputs target characters (t, h, a) until a step action is generated, which moves the attention position forward by one character (to \u0928), and so on. . At any time during decoding, the decoder uses its last hidden state, the embedding of the previous action s i and the encoded vector h a of the current attended position to generate the next action s i+1 . If the generated action is step, the decoder increments the attention position by one. This ensures that the decoding is monotonic, as the attention position can only move forward or stay at the same position during generation. We use Inference(G, x) to refer to the above decoding process for a trained generation model G and input word x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Generation with Hard", "sec_num": "3" }, { "text": "R |\u03a3 f |\u00d7d to get character embeddings x 1 , x 2 , \u2022 \u2022 \u2022 , x n where each x i \u2208 R d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration Generation with Hard", "sec_num": "3" }, { "text": "Training requires the oracle action sequence {s i } for input x 1:n that generates the correct transliteration y 1:m . The oracle sequence is generated using the train name pairs and Algorithm 1 in Aharoni and Goldberg (2017) , with the characterlevel alignment between x 1:n and y 1:m being generated using the algorithm in Cotterell et al. (2016) .", "cite_spans": [ { "start": 198, "end": 225, "text": "Aharoni and Goldberg (2017)", "ref_id": "BIBREF0" }, { "start": 325, "end": 348, "text": "Cotterell et al. (2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Monotonic Decoding with Hard Attention", "sec_num": null }, { "text": "We describe an unconstrained and a constrained inference strategy to select the best transliteration\u0177 from a beam", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Strategies", "sec_num": null }, { "text": "{y i } k i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Strategies", "sec_num": null }, { "text": "of transliteration hypotheses, sorted in descending order by likelihood. The constrained strategy use a name dictionary N , to guide the inference. 
These strategies are applicable to any generation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Strategies", "sec_num": null }, { "text": "• Unconstrained (U) selects the most likely item y_1 in the beam as ŷ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Strategies", "sec_num": null }, { "text": "• Dictionary-Constrained (DC) selects the highest-scoring hypothesis that is present in N, and defaults to y_1 if none are in N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Strategies", "sec_num": null }, { "text": "It is tempting to disallow the model from generating hypotheses which are not in the dictionary N. However, dictionaries are always incomplete, and restricting the search to generate only from N inevitably leads to incorrect predictions if the correct transliteration is not in N. This is essentially the same as the problem inherent to discovery models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Strategies", "sec_num": null }, { "text": "A related constrained inference strategy was proposed by Lin et al. (2016), who use an entity linking system to correct and re-rank hypotheses, using any available context to aid hypothesis correction. Our constrained inference strategy is much simpler, requiring only a name dictionary N. We experimentally show that our approach outperforms that of Lin et al. (2016).", "cite_spans": [ { "start": 57, "end": 74, "text": "Lin et al. (2016)", "ref_id": "BIBREF28" }, { "start": 352, "end": 369, "text": "Lin et al. (2016)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Other Strategies in Previous Work", "sec_num": null }, { "text": "Low-resource languages will have a limited number of name pairs for training a generation model. To learn a good generation model in this setting, we propose a new bootstrapping algorithm that uses constrained discovery to mine name pairs with which to re-train the generation model. Our algorithm requires a small (≈500) seed list of name pairs S for supervision, a dictionary N containing names in English, and a list of words V_f in the foreign script.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-Resource Bootstrapping", "sec_num": "4" }, { "text": "Below we describe our algorithm and the constraints used to guide the discovery of new name pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-Resource Bootstrapping", "sec_num": "4" }, { "text": "Algorithm 1 shows the pseudo-code of the bootstrapping procedure (a compact code sketch follows below). We initialize a weak generation model G_0 using a seed list of name pairs S (line 1). At iteration t, the current generation model G_t produces the top-k transliteration hypotheses {y_i}_{i=1}^k for each word x ∈ V_f (line 5). A source word and hypothesis pair (x, y_i) is added to the set of mined name pairs B if they satisfy a set of discovery constraints (described below) (line 8). A new generation model G_{t+1} is trained for the next iteration using the union of the seed list S and the mined name pairs B (line 12). 
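", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Algorithm", "sec_num": "4.1" }, { "text": "A minimal sketch of this loop, assuming helper functions train, top_k_with_scores (beam search via Inference), converged (the development-set accuracy@1 check), and satisfies_constraints (the checks of §4.2); the names are ours, not the released implementation's:
def bootstrap(S, N, V_f, k=10):
    G = train(S)                       # line 1: initialize from the seed pairs
    while not converged(G):
        B = set()                      # line 3: purge the mined set
        for x in V_f:
            for y, score in top_k_with_scores(G, x, k):     # line 5
                if satisfies_constraints(x, y, score, N):   # lines 7-8, see Sec. 4.2
                    B.add((x, y))
        G = train(set(S) | B)          # line 12: retrain on seed plus mined pairs
        # line 13 (not shown): the minimum length threshold L_min is lowered each round
    return G", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Algorithm", "sec_num": "4.1" }, { "text": "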
B is purged after every iteration (line 3) to prevent G_{t+1} from being influenced by possibly incorrect name pairs mined in earlier iterations. The algorithm converges when accuracy@1 stops increasing on a development set. We note that our bootstrapping approach is applicable to any transliteration generation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Algorithm", "sec_num": "4.1" }, { "text": "Algorithm 1: Bootstrapping a Transliteration Generation Model via Constrained Discovery.
Input: English name dictionary N; seed training pairs S; vocabulary in the target language V_f.
Hyper-parameters: initial minimum length threshold L^min_0; minimum likelihood threshold δ_min; length ratio tolerance ε.
Output: generation model G_T.
1: G_0 = train(S) ▷ init. generation model
2: while not converged do
3:   B = ∅ ▷ purge mined set
4:   for x in V_f do
5:     {y_i}_{i=1}^k = argtop-k Inference(G_t, x)
6:     for y_i in {y_i}_{i=1}^k do
7:       if (x, y_i) satisfies constraints in §4.2 then
8:         B = B ∪ {(x, y_i)} ▷ add to mined set
9:       end if
10:    end for
11:  end for
12:  G_{t+1} = train(S ∪ B)
13:  L^min_{t+1} = L^min_t − 1 ▷ reduce length threshold
14:  t = t + 1 ▷ track iteration
15: end while", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Algorithm", "sec_num": "4.1" }, { "text": "To ensure that high-quality name pairs are added to the mined set B during bootstrapping, we use the following discovery constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discovery Constraints", "sec_num": "4.2" }, { "text": "A word-transliteration pair (x, y) is added to the set of mined pairs B only if all of the following constraints are satisfied: 1. y ∈ N, i.e., y belongs to the dictionary. 2. P(y | x) > δ_min, i.e., the model is sufficiently confident about the transliteration. 3. The ratio of lengths |y|/|x| should be close to the average ratio estimated from S (Matthews, 2007). 
We encode this using the constraint |(|y|/|x|) − r(S)| ≤ ε, where ε is a tunable tolerance and r(S) is the average ratio in S. 4. |y| > L^min_t. We found that false positives were more likely to be short hypotheses in early iterations. As the model improves with each iteration, L^min_t is lowered to allow more new pairs to be mined.", "cite_spans": [ { "start": 86, "end": 102, "text": "(Matthews, 2007)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Discovery Constraints", "sec_num": "4.2" }, { "text": "We note that our bootstrapping algorithm can be formulated as an instance of constraint-driven learning (Chang et al., 2007, 2012).", "cite_spans": [ { "start": 104, "end": 123, "text": "(Chang et al., 2007", "ref_id": "BIBREF3" }, { "start": 124, "end": 145, "text": "(Chang et al., , 2012", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Discovery Constraints", "sec_num": "4.2" }, { "text": "Unless otherwise specified, we evaluate all generation models by comparing the best model prediction ŷ against the reference transliteration y* using acc@1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "We use the train and development sets from the Named Entities Workshop 2015 (Duan et al., 2015) (NEWS2015) for Hindi (hi), Kannada (kn), Bengali (bn), Tamil (ta) and Hebrew (he) as our train and evaluation sets. 4 The sizes of the train sets were ∼12k, 10k, 14k, 10k and 10k respectively, and all evaluation sets were ∼1k.", "cite_spans": [ { "start": 76, "end": 95, "text": "(Duan et al., 2015)", "ref_id": "BIBREF10" }, { "start": 211, "end": 212, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training and Evaluation Dataset", "sec_num": null }, { "text": "For the low-resource experiments, we subsample 500 examples from each train set in the NEWS2015 dataset using five random seeds and report the averaged results. We also set aside 1k name pairs from the corresponding NEWS2015 train set of each language as development data. The foreign-script portion of the remaining train data is used as V_f in the bootstrapping algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Evaluation Dataset", "sec_num": null }, { "text": "We implemented Seq2Seq(HMA) using PyTorch. 5 We used 50-dimensional character embeddings and a single-layer GRU (Cho et al., 2014) encoder with 20 hidden states for all experiments. The Adam (Kingma and Ba, 2014) optimizer was used with default hyperparameters, a learning rate of 0.001, a batch size of 1, and a maximum of 20 iterations in all experiments. Beam search used a width of 10. For low-resource experiments, all bootstrapping parameters were tuned on the development data set aside above. 
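", "cite_spans": [ { "start": 43, "end": 44, "text": "5", "ref_id": null }, { "start": 111, "end": 129, "text": "(Cho et al., 2014)", "ref_id": "BIBREF6" }, { "start": 190, "end": 211, "text": "(Kingma and Ba, 2014)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Model and Tuning Details", "sec_num": null }, { "text": "For concreteness, the four discovery constraints of §4.2 combine into a single predicate; a minimal sketch, with illustrative (not the paper's tuned) default values:
import math

def satisfies_constraints(x, y, log_p_y_given_x, N, L_min, delta_min=0.5, eps=0.3, r_S=1.0):
    # 1. y must appear in the English name dictionary N
    # 2. P(y|x) must exceed the likelihood threshold delta_min
    # 3. the length ratio |y|/|x| must lie within eps of the seed average r(S)
    # 4. y must be longer than the current minimum length threshold L_min
    return (y in N
            and math.exp(log_p_y_given_x) > delta_min
            and abs(len(y) / len(x) - r_S) <= eps
            and len(y) > L_min)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Tuning Details", "sec_num": null }, { "text": "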
L^min_0 is chosen from {10, 15, 20, 25}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model and Tuning Details", "sec_num": null }, { "text": "We use a name dictionary of 1.05 million names constructed from the English Wikipedia (dump dated 05/20/2017) by taking the list of title tokens in Wikipedia sorted by frequency, and removing tokens which appear only once.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Name Dictionary", "sec_num": null }, { "text": "We compare with the following generation models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparisons", "sec_num": "5.1" }, { "text": "P&R (Pasternack and Roth, 2009): A probabilistic transliteration generation approach that learns latent alignments between substrings in the source and the target words. The model is trained to score all possible segmentations and their alignments, using an EM-like algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P&R (Pasternack and Roth, 2009)", "sec_num": null }, { "text": "DirecTL+ (Jiampojamarn et al., 2009): An HMM-like discriminative string transduction model that predicts the output transliteration using many-to-many alignments between the source word and the target transliteration. Following Jiampojamarn et al. (2009), we use the m2m-aligner (Jiampojamarn et al., 2007) to generate the many-to-many alignments, and the public implementation of DirecTL+ to train models. 6", "cite_spans": [ { "start": 222, "end": 248, "text": "Jiampojamarn et al. (2009)", "ref_id": "BIBREF19" }, { "start": 273, "end": 300, "text": "(Jiampojamarn et al., 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "DirecTL+ (Jiampojamarn et al., 2009)", "sec_num": null }, { "text": "RPI-ISI (Lin et al., 2016): A transliteration approach that uses a language-independent entity linking system to jointly correct and re-rank the hypotheses produced by the generation model. We compare to both the unconstrained inference (U) approach and the entity-linking constrained inference (+EL) approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RPI-ISI (Lin et al., 2016)", "sec_num": null }, { "text": "Seq2Seq w/ Att: A sequence-to-sequence generation model which uses soft attention as described in Bahdanau et al. (2015). This model does not enforce monotonicity at inference time, and serves as a direct comparison for Seq2Seq(HMA).", "cite_spans": [ { "start": 97, "end": 120, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq w/ Att", "sec_num": null }, { "text": "This section aims to analyze: (a) how effective is Seq2Seq(HMA) for transliteration generation when provided all available supervision (§6.1)? 
and (b) how effective is the bootstrapping algorithm in the low-resource setting when only 500 examples are available (§6.2)?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We compare Seq2Seq(HMA) with previous approaches when provided all available supervision, to see how it fares under standard evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Full Supervision Setting", "sec_num": "6.1" }, { "text": "Results in the unconstrained inference (U) setting (Table 2, top 5 rows) show that Seq2Seq(HMA), denoted by \"Ours\", outperforms previous approaches on Hindi, Kannada, and Bengali, with gains of almost 3-4%. Improvements over the Seq2Seq with Attention (Seq2Seq w/ Att) model demonstrate the benefit of imposing the monotonicity constraint in the generation model. On Tamil and Hebrew, Seq2Seq(HMA) is on par with the best approaches, with a negligible gap (∼0.3) in scores. Overall, we see that Seq2Seq(HMA) achieves scores that are better than (and sometimes competitive with) state-of-the-art approaches in full supervision settings. When comparing approaches which use constrained inference (Table 2, bottom rows), we see that using dictionary-constrained inference (as in Ours(DC)) is more effective than using an entity-linking model for re-ranking (RPI-ISI + EL).", "cite_spans": [ { "start": 831, "end": 848, "text": "Lin et al. (2016)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 51, "end": 71, "text": "(Table 2 top 5 rows)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Full Supervision Setting", "sec_num": "6.1" }, { "text": "[Table 2 caption, partially recovered:] ...dataset, using acc@1 as the evaluation metric. \"Ours\" denotes the Seq2Seq(HMA) model, with (·) denoting the inference strategy. Numbers for RPI-ISI are from Lin et al. (2016).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Full Supervision Setting", "sec_num": "6.1" }, { "text": "In Table 2 (rows under \"Low-Resource Setting\"), we evaluate different models in a low-resource setting when provided only 500 name pairs as supervision. Results are averaged over 5 different random sub-samples of 500 examples. The results clearly demonstrate that all generation models suffer a drop in performance when provided limited training data. Note that models like Seq2Seq with Attention suffer a larger drop than those which enforce monotonicity, suggesting that incorporating monotonicity into the inference step is essential in the low-resource setting. After bootstrapping our weak generation model using Algorithm 1, the performance improves substantially (last row in Table 2). On almost all languages, the generation model improves by at least 6%, with performance for Hindi and Bengali improving by more than 10%. Bootstrapping results for these languages are within 2-4% of the best model trained with all available supervision.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 683, "end": 690, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Low-Resource Setting", "sec_num": "6.2" }, { "text": "To better analyze the progress of the transliteration model during bootstrapping, we plot the accuracy@1 of the current transliteration model after each bootstrapping iteration for each of the languages (solid lines in Figure 2). For reference, we also show the best performance for a generation model using all available supervision from §6.1 (dotted horizontal lines in Figure 2). 
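", "cite_spans": [], "ref_spans": [ { "start": 219, "end": 227, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 375, "end": 383, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Low-Resource Setting", "sec_num": "6.2" }, { "text": "The quantity plotted at each iteration is simply development-set acc@1; a minimal sketch, reusing the inference helper sketched in §3:
def acc_at_1(G, dev_pairs):
    # fraction of dev words whose top hypothesis matches the reference
    correct = sum(1 for x, y_ref in dev_pairs if inference(G, x) == y_ref)
    return correct / len(dev_pairs)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-Resource Setting", "sec_num": "6.2" }, { "text": "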
From Figure 2, we can see that after about 5 bootstrapping iterations, the generation model attains performance competitive with the respective state-of-the-art models trained with full supervision.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 13, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Low-Resource Setting", "sec_num": "6.2" }, { "text": "Though our model is state of the art, it does present a few weaknesses. We have found that the dictionary sometimes misleads the model during constrained inference. For example, the correct transliteration \"vidyul\" of the Hindi विद्युल is not present in the dictionary, but another hypothesis \"vidul\" is. Another issue comes from the proportion of native (i.e., from the source language) and foreign (i.e., from English or other languages) names in the training data. It is usually not the case that native and foreign names follow the same transliteration rules. For example, य in Hindi might represent ya in English or Hindi names, but ja in German. Similarly, while अ should be a in Hindi names, it could be any of a few vowels in English. The NEWS2015 dataset does not report a native/foreign ratio, but by our estimation it is about 70/30 for each language. This dichotomy between native and foreign names is one of the inherent challenges of transliteration, which we discuss in detail in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.3" }, { "text": "The fact that all models in Table 2 perform well or poorly on the same languages suggests that most of the observed performance variation is the result of factors intrinsic to the specific languages. Here we analyze some challenges that are inherent to the transliteration task, and explain why the performance ceiling is well under 100% for all languages, and lower for languages like Tamil and Hebrew than for the others.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 35, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Challenges Inherent to Transliteration", "sec_num": "7" }, { "text": "Source-Driven: Some transliteration errors are due to ambiguities in the source scripts. For instance, the Tamil script uses a single character to denote {ta, da, tha, dha}, a single character for {ka, ga, kha, gha}, etc., while the rest of the Indian scripts have unique characters for each of these. Thus, names like Hartley and Hardley are entirely indistinguishable in Tamil but are distinguishable in the other scripts. We illustrate this problem by transliterating back and forth between Tamil and Hindi. When transliterating Hindi→Tamil, the model achieves an accuracy of 31%, which drops to 15% when transliterating Tamil→Hindi, suggesting that the Tamil script is more ambiguous. The Hebrew script also introduces errors because it tends to omit vowels or write them ambiguously, leaving the model to guess between plausible choices. For example, the word מלך could be transliterated melech \"king\" just as easily as malach \"he ruled.\" When Hebrew does write vowels, it reuses consonant letters, again ambiguously. For example, ה can be used to express a or e, so שמונה can be either shmona or shmone \"eight masculine/feminine\". 
The script also does not reliably distinguish b from v or p from f, among others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source and Target-Specific Issues", "sec_num": "7.1" }, { "text": "All languages run into problems when they are faced with writing sounds that they do not natively distinguish. For example, Hindi does not make a distinction between w and v, so both vest and west are written as वेस्ट in its script.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source and Target-Specific Issues", "sec_num": "7.1" }, { "text": "These script-specific deficiencies explain why all models struggle on Tamil and Hebrew relative to the others. These issues cannot be completely resolved without memorizing individual source-target pairs and leveraging context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source and Target-Specific Issues", "sec_num": "7.1" }, { "text": "Target-Driven: Some errors arise from the challenges presented by the target script (here, the Latin script for English). To handle English's notoriously convoluted orthography, a model has to infer silent letters; decide whether to use f or ph for /f/; use k, c, ck, ch, or q for /k/; and so on. The problem is made worse because English is not the only language that uses the Latin script. For example, German names like Schmidt should be written with sch instead of sh, and for French names like Margot and Margeau (which are pronounced the same), we have to resort to memorization. The arbitrariness extends into borrowings from the source languages as well. For example, the Indian name Bangalore is written with a silent e, and the name Lakshadweep contains ee instead of the expected i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source and Target-Specific Issues", "sec_num": "7.1" }, { "text": "All these issues come together to create a performance disparity between native names, which are well-integrated into the source language etymologically (Indian names like Jasodhara or Ramanathan for Hindi), and foreign names (French Grenoble or Japanese Honshu for Hindi), which are not. The above datasets include an unspecified mix of native and foreign names. This is a problem since any model must learn essentially separate transliteration schemes for each.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disparity between Native and Foreign", "sec_num": "7.2" }, { "text": "To quantify the effect of this, we annotate native and foreign names in the test split of the four Indian languages, and evaluate performance on both categories. Table 3 shows that our model performs significantly better on native names for all the languages. A possible reason for this is that the source scripts were designed for writing native names (e.g., the Tamil script lacks separate {ta, da, tha, dha} characters because the Tamil language does not distinguish these sounds). Furthermore, foreign names have a wide variety of origins, with their own conventions as discussed in §7.1. 
The performance gap is proportionally greatest for Tamil, likely due to its script.", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 170, "text": "Table 3", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Disparity between Native and Foreign", "sec_num": "7.2" }, { "text": "In this section, we evaluate the practical utility of our approach in low-resource settings and for downstream applications through two case studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Studies", "sec_num": "8" }, { "text": "We first show that obtaining an adequate seed list is possible with a few hours of manual annotation (§8.1) from a single human annotator. We then show the positive impact that our approach has on a downstream task, by evaluating its contribution to candidate generation for Tigrinya and Macedonian entity linking (§8.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Studies", "sec_num": "8" }, { "text": "Language | Monolingual Corpus | Vocabulary
Punjabi | Corpus ILCI-II ♠ | 30k
Armenian | TED ♣ | 50k
Tigrinya | Habit Project ♦ | 225k
Macedonian | TED ♣ | 60k
♦ = habit-project.eu/wiki/TigrinyaCorpus, ♠ = tdil-dc.in, ♣ = github.com/ajinkyakulkarni14/TED-Multilingual-Parallel-Corpus", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language", "sec_num": null }, { "text": "The manual annotation exercises simulate a low-resource setting in which only a single human annotator is available. We judge the usability of the annotations by training models on them and evaluating the models on test sets of 1000 names each, obtained from Wikipedia inter-language links. For bootstrapping experiments, we use the corpora shown in Table 4 to obtain the foreign vocabulary V_f.", "cite_spans": [], "ref_spans": [ { "start": 345, "end": 352, "text": "Table 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Manual Annotation", "sec_num": "8.1" }, { "text": "We investigate performance on two languages: Armenian and Punjabi. Spoken in Armenia and Turkey, Armenian is an Indo-European language with no close relatives. It has Eastern and Western dialects with different spelling conventions. Armenian Wikipedia is primarily written in the Eastern dialect, while our annotator was a native Western speaker. 7 Punjabi is an Indic language from Northwest India and Pakistan that is closely related to Hindi. Our annotator grew up primarily speaking Hindi.", "cite_spans": [ { "start": 347, "end": 348, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Languages Studied", "sec_num": null }, { "text": "Annotation Guidelines: Annotators were given two tasks. First, they were asked to write two names and their English transliterations for each letter in the source script: one beginning with the letter and another containing it elsewhere (e.g., \"Julia\" and \"Benjamin\" for the letter \"j\" if the source were English). This is done to ensure good coverage over the alphabet. Next, annotators were shown a list of English words and were asked to 
\u2192 Punjabi Armenian Approach \u2193 Ours(U)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages Studied", "sec_num": null }, { "text": "33.4 49.9 Ours(U) + Bootstrapping 44.5 55.8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages Studied", "sec_num": null }, { "text": "Annotation Time (hours) 5 4 Table 5 : Acc@1 using human annotated seed set and bootstrapping the Seq2Seq(HMA) model. Both languages perform well relative to the other languages investigated so far. Both annotation sub-tasks took roughly the same time.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 35, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Languages Studied", "sec_num": null }, { "text": "provide plausible transliteration(s) into the target script. The list had a mix of recognizable foreign (e.g., Clinton, Helsinki) and native names (e.g., Sarkessian, Yerevan for Armenian). We collected about 600 and 500 annotated pairs respectively for Armenian and Punjabi. Table 5 shows that the performance of the models trained on the annotated data is comparable to that on the standard test corpora for other languages. This show that our approach is robust to human inconsistencies and regional spelling variations, and that obtaining an adequate seed list is possible with just a few hours of manual annotation.", "cite_spans": [], "ref_spans": [ { "start": 275, "end": 282, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Languages Studied", "sec_num": null }, { "text": "Since transliteration is an intermediate step in many downstream multilingual information extraction tasks (Darwish, 2013; Kim et al., 2012; Jeong et al., 1999; Virga and Khudanpur, 2003; Chen et al., 2006) , it is possibly to gauge its performance extrinsically by the impact it has on such tasks. We use the task of candidate generation (CG), which is a key step in cross-lingual entity linking.", "cite_spans": [ { "start": 107, "end": 122, "text": "(Darwish, 2013;", "ref_id": "BIBREF9" }, { "start": 123, "end": 140, "text": "Kim et al., 2012;", "ref_id": "BIBREF23" }, { "start": 141, "end": 160, "text": "Jeong et al., 1999;", "ref_id": "BIBREF18" }, { "start": 161, "end": 187, "text": "Virga and Khudanpur, 2003;", "ref_id": "BIBREF41" }, { "start": 188, "end": 206, "text": "Chen et al., 2006)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation (CG)", "sec_num": "8.2" }, { "text": "The goal of cross-lingual entity linking (Mc-Namee et al., 2011; Tsai and Roth, 2016; Upadhyay et al., 2018) is to ground spans of text written in any language to an entity in a knowledge base (KB). For instance, grounding [Chicago] in the following German sentence to Chicago_(band). 8", "cite_spans": [ { "start": 41, "end": 64, "text": "(Mc-Namee et al., 2011;", "ref_id": null }, { "start": 65, "end": 85, "text": "Tsai and Roth, 2016;", "ref_id": "BIBREF38" }, { "start": 86, "end": 108, "text": "Upadhyay et al., 2018)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation (CG)", "sec_num": "8.2" }, { "text": "[Chicago] wird in Woodstock aufzutreten.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation (CG)", "sec_num": "8.2" }, { "text": "The role of CG in cross-lingual entity linking is to create a set of plausible entities given a string while ensuring the correct KB entity belongs to that set. 
For the above German sentence, it would provide a list of possible KB entities for the string Chicago: Chicago_(band), Chicago_(city), Chicago_(font), etc., so that entity linking can select the band. Foreign scripts pose an additional challenge for CG because names must be transliterated before they are passed on to candidate generation. For instance, any mention of \"Chicago\" in Amharic must first be transliterated from ቺካጎ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation (CG)", "sec_num": "8.2" }, { "text": "Most approaches for CG use Wikipedia inter-language links to generate the lists of candidates (Tsai and Roth, 2016). While recent approaches such as Tsai and Roth (2018) have resorted to name translation for CG, they require over 10k examples for languages written in non-Latin scripts, which is prohibitive for low-resource languages with little Wikipedia presence.", "cite_spans": [ { "start": 93, "end": 114, "text": "(Tsai and Roth, 2016)", "ref_id": "BIBREF38" }, { "start": 149, "end": 169, "text": "Tsai and Roth (2018)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation (CG)", "sec_num": "8.2" }, { "text": "We evaluate the extent to which our approach improves the recall of a naive CG baseline that generates candidates by performing exact name match. For each span of text to be linked (or query mention), we first check if the naive name-matching strategy finds any candidates in the KB. If none are found, the query mention is back-transliterated to English, and at most 20 candidates are generated using an inverted index from English names to KB entities. The evaluation metric is recall@20, i.e., whether the gold KB entity is in the top 20 candidates. We use Tigrinya and Macedonian as our test languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation with Transliteration", "sec_num": null }, { "text": "Tigrinya is a South Semitic language related to Amharic, written in the Ethiopic script, and spoken primarily in Eritrea and northern Ethiopia. The Tigrinya Wikipedia has <200 articles, so we use inter-language links (∼7.5k) from the Amharic Wikipedia instead to extract 1k name pairs for the seed set. We use the monolingual corpus in Table 4 for bootstrapping and evaluate on the unsequestered set provided under the NIST LoReHLT evaluation, containing 4,630 query mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation with Transliteration", "sec_num": null }, { "text": "The Ethiopic script is an alphasyllabary, where each character is a consonant-vowel pair. For example, the character መ is mä, ሚ with a tail is mi, and ሞ with a line is mo. With 26 consonants and 8 vowels, this leads to a set of >200 characters, creating a sparsity problem since each character has its own Unicode code point. However, the code points are organized so that they can be automatically split 9 into unique consonant and vowel codes without explicitly understanding the script. We assign arbitrary ASCII codes to each consonant and vowel so that መ/mä becomes \"D 1\" and ሞ/mo becomes \"D 6.\" This consonant-vowel splitting (CV-split) reduces the number of unique input characters to 55.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation with Transliteration", "sec_num": null }, { "text": "The monolingual corpus in Table 4 is used for bootstrapping. Table 6 shows the results for the two languages. For Tigrinya, candidate generation with transliteration improves on the baseline by 4.2%. 
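", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 4", "ref_id": "TABREF8" }, { "start": 61, "end": 68, "text": "Table 6", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Candidate Generation with Transliteration", "sec_num": null }, { "text": "The CV-split itself can be computed directly from code points (footnote 9); a minimal sketch, assuming the Ethiopic block begins at U+1200 and ignoring the handful of irregular extended forms:
ETHIOPIC_BASE = 0x1200   # assumed block offset

def cv_split(word):
    # map each syllabic character to a (consonant, vowel) code pair:
    # Consonant = offset / 8; Vowel = offset % 8 (footnote 9)
    out = []
    for ch in word:
        o = ord(ch) - ETHIOPIC_BASE
        out.append((o // 8, o % 8))
    return out", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation with Transliteration", "sec_num": null }, { "text": "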
Splitting the characters (CV-split) gives another 5.7%, and adding bootstrapping gives 4.9% more. Our approach yields an overall 14.8% improvement in recall over the baseline, showing that we can effectively exploit the little available supervision by bootstrapping. Macedonian yields more dramatic results: transliteration provides a 38.6% improvement (more than double the baseline), with bootstrapping providing another 4.6%. The difference between Tigrinya and Macedonian is likely due to their test sets, corpora, and writing systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation with Transliteration", "sec_num": null }, { "text": "We presented a new transliteration generation model, namely Seq2Seq(HMA), and a new bootstrapping algorithm that can iteratively improve a weak generation model using constrained discovery. The model presented here achieves state-of-the-art results on typical training set sizes and, more importantly, works well in a low-resource setting with the aid of the bootstrapping algorithm. The key benefit of the bootstrapping approach is that it can \"recover\" most of the performance lost in the low-resource setting, when little supervision is available, by training with a smaller seed set, an English name dictionary, and a list of unannotated words in the target script. Additionally, our bootstrapping algorithm admits any generation model, giving it wide applicability. Through case studies, we showed that collecting an adequate seed list is practical with a few hours of annotation. The benefit of incorporating our transliteration approach in a downstream task, namely candidate generation, was also demonstrated. Finally, we discussed some of the inherent challenges of learning transliteration and the deficits of existing training sets. There are several interesting directions for future work. Performing model combination, either by developing hybrid transliteration models (Nicolai et al., 2015) or by ensembling (Finch et al., 2016), can further improve low-resource transliteration. Jointly leveraging similarities between related languages, such as writing systems or phonetic properties (Kunchukuttan et al., 2018), also shows promise for low-resource settings. Our analysis suggests value in revisiting \"transliteration in context\" approaches (Goto et al., 2003; Hermjakob et al., 2008), especially for languages like Hebrew. We would also like to expand on the analyses provided in §7, which uncover challenges inherent to the transliteration task, particularly the impact of the native/foreign distinction in the train and test data, the difficulties posed by specific scripts or pairs of scripts, and how these impact both back- and forward-transliteration. 
Recent work from Merhav and Ash (2018) suggests many useful analyses that we would like to incorporate.", "cite_spans": [ { "start": 1280, "end": 1302, "text": "(Nicolai et al., 2015)", "ref_id": "BIBREF33" }, { "start": 1320, "end": 1340, "text": "(Finch et al., 2016)", "ref_id": "BIBREF13" }, { "start": 1499, "end": 1526, "text": "(Kunchukuttan et al., 2018)", "ref_id": "BIBREF27" }, { "start": 1657, "end": 1676, "text": "(Goto et al., 2003;", "ref_id": "BIBREF14" }, { "start": 1677, "end": 1700, "text": "Hermjakob et al., 2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "All generative approaches are also capable of discovery, by using the posterior P(y | x) to select the most likely candidate transliteration, while the opposite is not true.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Many Indic scripts, which sometimes write vowels before the consonants they are pronounced after, seem to violate this claim, but Unicode representations of these scripts actually preserve the consonant-vowel order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The test set was not available since the shared task had concluded. 5 github.com/pytorch", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://code.google.com/p/directl-p", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The annotator produced Western Armenian, which was mechanically mapped to \"Eastern\" by swapping five Armenian character pairs: դ/տ, պ/բ, ք/կ, ձ/ծ, ճ/ջ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Translation: Chicago will perform at Woodstock.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Consonant = Unicode / 8; Vowel = Unicode % 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors thank Mitch Marcus, Snigdha Chaturvedi, Stephen Mayhew, Nitish Gupta, Dan Deutsch, and the anonymous reviewers for their useful comments. We are grateful to the Armenian and Punjabi annotators for help with the case studies. This work was supported under DARPA LORELEI by Contract HR0011-15-2-0025, Agreement HR0011-15-2-0023 with DARPA, and an NDSEG fellowship for the second author. Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Morphological Inflection Generation with Hard Monotonic Attention", "authors": [ { "first": "Roee", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2017, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphological Inflection Generation with Hard Monotonic Attention. In Proc. 
of ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural Machine Translation by Jointly Learning to Align and Translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proc. of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proc. of ICLR.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unsupervised Constraint Driven Learning For Transliteration Discovery", "authors": [ { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Yuancheng", "middle": [], "last": "Tu", "suffix": "" } ], "year": 2009, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Wei Chang, Dan Goldwasser, Dan Roth, and Yuancheng Tu. 2009. Unsupervised Constraint Driven Learning For Transliteration Discovery. In Proc. of NAACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Guiding Semi-Supervision with Constraint-Driven Learning", "authors": [ { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2007, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding Semi-Supervision with Constraint-Driven Learning. In Proc. of ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Structured Learning with Constrained Conditional Models", "authors": [ { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2012, "venue": "Machine Learning", "volume": "88", "issue": "", "pages": "399--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2012. Structured Learning with Constrained Conditional Models. Machine Learning, 88(3):399-431.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Translating-Transliterating Named Entities for Multilingual Information Access", "authors": [ { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wen-Cheng", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Changhua", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wei-Hao", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "Journal of the Association for Information Science and Technology", "volume": "57", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsin-Hsi Chen, Wen-Cheng Lin, Changhua Yang, and Wei-Hao Lin. 2006. Translating-Transliterating Named Entities for Multilingual Information Access. 
Journal of the Association for Information Science and Technology, 57(5).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" } ], "year": 2014, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learn- ing Phrase Representations using RNN Encoder- Decoder for Statistical Machine Translation. In Proc. of EMNLP.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The SIGMORPHON 2016 Shared Task-Morphological Reinflection", "authors": [ { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Christo", "middle": [], "last": "Kirov", "suffix": "" }, { "first": "John", "middle": [], "last": "Sylak-Glassman", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2016, "venue": "Proc. of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 Shared Task- Morphological Reinflection. In Proc. of the 14th SIGMORPHON Workshop on Computational Re- search in Phonetics, Phonology, and Morphology.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Paradigm Completion for Derivational Morphology", "authors": [ { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Vylomova", "suffix": "" }, { "first": "Huda", "middle": [], "last": "Khayrallah", "suffix": "" }, { "first": "Christo", "middle": [], "last": "Kirov", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2017, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Cotterell, Ekaterina Vylomova, Huda Khayral- lah, Christo Kirov, and David Yarowsky. 2017. Paradigm Completion for Derivational Morphology. In Proc. of EMNLP.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Named Entity Recognition using Cross-lingual Resources: Arabic as an Example", "authors": [ { "first": "Kareem", "middle": [], "last": "Darwish", "suffix": "" } ], "year": 2013, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kareem Darwish. 2013. Named Entity Recognition us- ing Cross-lingual Resources: Arabic as an Example. In Proc. of ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Proc. 
of the Fifth Named Entity Workshop", "authors": [ { "first": "Xiangyu", "middle": [], "last": "Duan", "suffix": "" }, { "first": "E", "middle": [], "last": "Rafael", "suffix": "" }, { "first": "Min", "middle": [], "last": "Banchs", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "A", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Kumaran", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiangyu Duan, Rafael E Banchs, Min Zhang, Haizhou Li, and A Kumaran, editors. 2015. Proc. of the Fifth Named Entity Workshop.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Morphological Inflection Generation Using Character Sequence to Sequence Learning", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proc. of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological Inflection Gener- ation Using Character Sequence to Sequence Learn- ing. In Proc. of NAACL-HLT.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Neural Network Transduction Models in Transliteration Generation", "authors": [ { "first": "Andrew", "middle": [], "last": "Finch", "suffix": "" }, { "first": "Lemao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaolin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2015, "venue": "Proc. of the Fifth Named Entity Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Finch, Lemao Liu, Xiaolin Wang, and Eiichiro Sumita. 2015. Neural Network Transduction Models in Transliteration Generation. In Proc. of the Fifth Named Entity Workshop.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Target-Bidirectional Neural Models for Machine Transliteration", "authors": [ { "first": "Andrew", "middle": [], "last": "Finch", "suffix": "" }, { "first": "Lemao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaolin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2016, "venue": "Proc. of the Sixth Named Entity Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Finch, Lemao Liu, Xiaolin Wang, and Eiichiro Sumita. 2016. Target-Bidirectional Neural Models for Machine Transliteration. In Proc. of the Sixth Named Entity Workshop.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Transliteration Considering Context Information based on the Maximum Entropy Method", "authors": [ { "first": "Isao", "middle": [], "last": "Goto", "suffix": "" }, { "first": "Naoto", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Noriyoshi", "middle": [], "last": "Uratani", "suffix": "" }, { "first": "Terumasa", "middle": [], "last": "Ehara", "suffix": "" } ], "year": 2003, "venue": "Proc. 
of MT-Summit IX", "volume": "125132", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isao Goto, Naoto Kato, Noriyoshi Uratani, and Teru- masa Ehara. 2003. Transliteration Considering Con- text Information based on the Maximum Entropy Method. In Proc. of MT-Summit IX, volume 125132.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A Joint Source-Channel Model for Machine Transliteration", "authors": [ { "first": "Li", "middle": [], "last": "Haizhou", "suffix": "" }, { "first": "Zhang", "middle": [], "last": "Min", "suffix": "" }, { "first": "Su", "middle": [], "last": "Jian", "suffix": "" } ], "year": 2004, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Haizhou, Zhang Min, and Su Jian. 2004. A Joint Source-Channel Model for Machine Transliteration. In Proc. of ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Name Translation in Statistical Machine Translation -Learning When to Transliterate", "authors": [ { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2008, "venue": "Proc. of ACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ulf Hermjakob, Kevin Knight, and Hal Daum\u00e9 III. 2008. Name Translation in Statistical Machine Translation -Learning When to Transliterate. Proc. of ACL-HLT.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Transliterating from All Languages", "authors": [ { "first": "Ann", "middle": [], "last": "Irvine", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Klementiev", "suffix": "" } ], "year": 2010, "venue": "Proc. of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ann Irvine, Chris Callison-Burch, and Alexandre Kle- mentiev. 2010. Transliterating from All Languages. In Proc. of AMTA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Automatic Identification and Back-Transliteration of Foreign Words for Information Retrieval", "authors": [ { "first": "Sung-Hyon", "middle": [], "last": "Kil Soon Jeong", "suffix": "" }, { "first": "Jae", "middle": [ "Sung" ], "last": "Myaeng", "suffix": "" }, { "first": "K-S", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 1999, "venue": "Information Processing & Management", "volume": "35", "issue": "4", "pages": "523--540", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kil Soon Jeong, Sung-Hyon Myaeng, Jae Sung Lee, and K-S Choi. 1999. Automatic Identification and Back-Transliteration of Foreign Words for Informa- tion Retrieval. 
Information Processing & Manage- ment, 35(4):523-540.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "DirecTL: A Language-Independent Approach to Transliteration", "authors": [ { "first": "Sittichai", "middle": [], "last": "Jiampojamarn", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Bhargava", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Dou", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Dwyer", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2009, "venue": "Proc. of the 2009 Named Entities Workshop: Shared Task on Transliteration", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sittichai Jiampojamarn, Aditya Bhargava, Qing Dou, Kenneth Dwyer, and Grzegorz Kondrak. 2009. DirecTL: A Language-Independent Approach to Transliteration. In Proc. of the 2009 Named Entities Workshop: Shared Task on Transliteration.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Transliteration Generation and Mining with Limited Training Resources", "authors": [ { "first": "Sittichai", "middle": [], "last": "Jiampojamarn", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Dwyer", "suffix": "" }, { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Bhargava", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Dou", "suffix": "" }, { "first": "Mi-Young", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2010, "venue": "Proc. of the 2010 Named Entities Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sittichai Jiampojamarn, Kenneth Dwyer, Shane Bergsma, Aditya Bhargava, Qing Dou, Mi-Young Kim, and Grzegorz Kondrak. 2010. Transliteration Generation and Mining with Limited Training Resources. In Proc. of the 2010 Named Entities Workshop.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Applying Many-to-Many Alignments and Hidden Markov Models to Letter-to-Phoneme Conversion", "authors": [ { "first": "Grzegorz", "middle": [], "last": "Sittichai Jiampojamarn", "suffix": "" }, { "first": "Tarek", "middle": [], "last": "Kondrak", "suffix": "" }, { "first": "", "middle": [], "last": "Sherif", "suffix": "" } ], "year": 2007, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying Many-to-Many Alignments and Hidden Markov Models to Letter-to-Phoneme Conversion. In Proc. of NAACL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Multilingual Named Entity Recognition using Parallel Data and Metadata from Wikipedia", "authors": [ { "first": "Sungchul", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Hwanjo", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2012, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sungchul Kim, Kristina Toutanova, and Hwanjo Yu. 2012. Multilingual Named Entity Recognition using Parallel Data and Metadata from Wikipedia. In Proc. 
of ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Adam: A Method for Stochastic Optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "Proc. of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. In Proc. of ICLR.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Named Entity Transliteration and Discovery in Multilingual Corpora", "authors": [ { "first": "Alex", "middle": [], "last": "Klementiev", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2008, "venue": "Learning Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Klementiev and Dan Roth. 2008. Named Entity Transliteration and Discovery in Multilingual Cor- pora. In Learning Machine Translation. MIT Press.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Machine Transliteration", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "", "volume": "24", "issue": "", "pages": "599--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Jonathan Graehl. 1998. Machine Transliteration. volume 24, pages 599-612. MIT Press.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Leveraging Orthographic Similarity for Multilingual Neural Transliteration", "authors": [ { "first": "Anoop", "middle": [], "last": "Kunchukuttan", "suffix": "" }, { "first": "Mitesh", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "Gurneet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anoop Kunchukuttan, Mitesh Khapra, Gurneet Singh, and Pushpak Bhattacharyya. 2018. Leveraging Orthographic Similarity for Multilingual Neural Transliteration. In Transactions of the Association for Computational Linguistics, volume 6.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Leveraging Entity Linking and Related Language Projection to Improve Name Transliteration", "authors": [ { "first": "Ying", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xiaoman", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Aliya", "middle": [], "last": "Deri", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Heng", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "Proc. of the Sixth Named Entity Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Lin, Xiaoman Pan, Aliya Deri, Heng Ji, and Kevin Knight. 2016. Leveraging Entity Linking and Re- lated Language Projection to Improve Name Translit- eration. In Proc. 
of the Sixth Named Entity Work- shop.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Align and Copy: UZH at SIGMORPHON 2017 Shared Task for Morphological Reinflection", "authors": [ { "first": "Peter", "middle": [], "last": "Makarov", "suffix": "" }, { "first": "Tatiana", "middle": [], "last": "Ruzsics", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Clematide", "suffix": "" } ], "year": 2017, "venue": "Proc. of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Makarov, Tatiana Ruzsics, and Simon Clematide. 2017. Align and Copy: UZH at SIGMORPHON 2017 Shared Task for Morphological Reinflection. In Proc. of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Machine Transliteration of Proper Names", "authors": [ { "first": "David", "middle": [], "last": "Matthews", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Matthews. 2007. Machine Transliteration of Proper Names. Master's Thesis, University of Ed- inburgh, Edinburgh, United Kingdom.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Cross-Language Entity Linking", "authors": [ { "first": "Paul", "middle": [], "last": "Mcnamee", "suffix": "" }, { "first": "James", "middle": [], "last": "Mayfield", "suffix": "" }, { "first": "Dawn", "middle": [], "last": "Lawrie", "suffix": "" }, { "first": "W", "middle": [], "last": "Douglas", "suffix": "" }, { "first": "David", "middle": [ "S" ], "last": "Oard", "suffix": "" }, { "first": "", "middle": [], "last": "Doermann", "suffix": "" } ], "year": 2011, "venue": "Proc. of IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul McNamee, James Mayfield, Dawn Lawrie, Dou- glas W Oard, and David S Doermann. 2011. Cross- Language Entity Linking. In Proc. of IJCNLP.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Design Challenges in Named Entity Transliteration", "authors": [ { "first": "Yuval", "middle": [], "last": "Merhav", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Ash", "suffix": "" } ], "year": 2018, "venue": "Proc. of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuval Merhav and Stephen Ash. 2018. Design Chal- lenges in Named Entity Transliteration. In Proc. of COLING.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Multiple System Combination for Transliteration", "authors": [ { "first": "Garrett", "middle": [], "last": "Nicolai", "suffix": "" }, { "first": "Bradley", "middle": [], "last": "Hauer", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Salameh", "suffix": "" }, { "first": "Adam", "middle": [], "last": "St Arnaud", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2015, "venue": "Proc. of the Fifth Named Entity Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garrett Nicolai, Bradley Hauer, Mohammad Salameh, Adam St Arnaud, Ying Xu, Lei Yao, and Grzegorz Kondrak. 2015. Multiple System Combination for Transliteration. In Proc. 
of the Fifth Named Entity Workshop.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Learning Better Transliterations", "authors": [ { "first": "Jeff", "middle": [], "last": "Pasternack", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2009, "venue": "Proc. of CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Pasternack and Dan Roth. 2009. Learning Better Transliterations. In Proc. of CIKM.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Learning Phoneme Mappings for Transliteration without Parallel Data", "authors": [ { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2009, "venue": "Proc. of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sujith Ravi and Kevin Knight. 2009. Learning Phoneme Mappings for Transliteration without Par- allel Data. In Proc. of NAACL-HLT.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Named Entity Transliteration with Comparable Corpora", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2006, "venue": "Proc. of COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Sproat, Tao Tao, and ChengXiang Zhai. 2006. Named Entity Transliteration with Comparable Cor- pora. In Proc. of COLING-ACL.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Sequence to Sequence Learning with Neural Networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Proc. of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Net- works. In Proc. of NIPS.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Cross-lingual Wikification Using Multilingual Embeddings", "authors": [ { "first": "Chen-Tse", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual Wik- ification Using Multilingual Embeddings. In Proc. of NAACL.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Learning Better Name Translation for Cross-Lingual Wikification", "authors": [ { "first": "Chen-Tse", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "Proc. of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen-Tse Tsai and Dan Roth. 2018. Learning Bet- ter Name Translation for Cross-Lingual Wikification. In Proc. 
of AAAI.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Joint Multilingual Supervision for Cross-lingual Entity Linking", "authors": [ { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shyam Upadhyay, Nitish Gupta, and Dan Roth. 2018. Joint Multilingual Supervision for Cross-lingual En- tity Linking. In Proc. of EMNLP.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Transliteration of Proper Names in Cross-lingual Information Retrieval", "authors": [ { "first": "Paola", "middle": [], "last": "Virga", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2003, "venue": "Proc. of the Workshop on Multilingual and Mixed-Language Named Entity Recognition", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paola Virga and Sanjeev Khudanpur. 2003. Transliter- ation of Proper Names in Cross-lingual Information Retrieval. In Proc. of the Workshop on Multilingual and Mixed-Language Named Entity Recognition.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Language and Domain Independent Entity Linking with Quantified Collective Validation", "authors": [ { "first": "Han", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jin", "middle": [ "Guang" ], "last": "Zheng", "suffix": "" }, { "first": "Xiaogang", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Fox", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2015, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Wang, Jin Guang Zheng, Xiaogang Ma, Peter Fox, and Heng Ji. 2015. Language and Domain Indepen- dent Entity Linking with Quantified Collective Vali- dation. In Proc. of EMNLP.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Transliteration using Seq2Seq transduction with", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Plot showing acc@1 after each bootstrapping iteration for Hindi, Kannada, Bengali, Tamil and Hebrew, starting with only 500 training pairs as supervision. For comparison, the acc@1 of a model trained with all available supervision is also shown (respective dashed lines, marked X-Full).", "num": null, "uris": null }, "TABREF0": { "text": "Name Pairs in Wikipedia Languages Scripts", "type_str": "table", "num": null, "content": "
Name Pairs in Wikipedia | Languages | Scripts
> 50,000 | 6 | 5
> 10,000 | 18 | 14
> 5,000 | 24 | 15
> 1,000 | 45 | 22
> 500 | 56 | 23
> 0 | 93 | 30
", "html": null }, "TABREF1": { "text": "Cumulative number of person name pairs in", "type_str": "table", "num": null, "content": "", "html": null }, "TABREF4": { "text": ", rows 6 and 7), we see", "type_str": "table", "num": null, "content": "
Lang. \u2192 Approach \u2193 | hi | kn | bn | ta | he | Avg.
Full Supervision Setting (5-10k examples)
Seq2Seq w/ Att (U) | 35.5 | 33.4 | 46.1 | 17.2 | 20.3 | 30.5
P&R (U) | 37.4 | 31.6 | 45.4 | 20.2 | 18.7 | 30.7
DirecTL+ (U) | 38.9 | 34.7 | 48.4 | 19.9 | 16.8 | 31.7
RPI-ISI (U) | 40.3 | 29.8 | 49.4 | 20.2 | 21.5 | 32.2
Ours (U) | 42.8 | 38.9 | 52.4 | 20.5 | 23.4 | 35.6
Approaches Using Constrained Inference
RPI-ISI + EL | 44.8 | 37.6 | 52.0 | 29.0 | 37.2 | 40.1
Ours (DC) | 51.8 | 43.3 | 56.6 | 28.0 | 36.1 | 43.2
Low-Resource Setting (500 examples)
Seq2Seq w/ Att (U) | 17.0 | 13.6 | 14.5 | 6.0 | 9.5 | 12.1
P&R (U) | 21.1 | 16.6 | 34.2 | 9.4 | 13.0 | 18.9
DirecTL+ (U) | 26.6 | 25.3 | 35.5 | 11.8 | 10.7 | 22.0
Ours (U) | 29.1 | 27.7 | 37.7 | 11.5 | 16.2 | 24.4
Ours (U) + Boot. | 40.1 | 35.1 | 50.3 | 17.8 | 22.8 | 33.2
", "html": null }, "TABREF5": { "text": "", "type_str": "table", "num": null, "content": "", "html": null }, "TABREF7": { "text": "Acc@1 for native and foreign words for four languages ( \u00a77.2). Ratio is native performance relative to foreign.", "type_str": "table", "num": null, "content": "
", "html": null }, "TABREF8": { "text": "Corpora used for obtaining foreign vocabulary V f for bootstrapping in the case studies in \u00a78.1 and \u00a78.2.", "type_str": "table", "num": null, "content": "
", "html": null }, "TABREF10": { "text": "", "type_str": "table", "num": null, "content": "
Comparing candidate recall@20 for different approaches on Tigrinya and Macedonian. CV-split refers to consonant-vowel splitting. Using our transliteration generation model with bootstrapping yields the highest recall, improving significantly over a name match baseline.
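As a concrete illustration of the CV-split used above: per the footnote, each Ethiopic syllable can be decomposed as Consonant = Unicode / 8 and Vowel = Unicode % 8. The rule works because the Ethiopic block begins at U+1200 (itself a multiple of 8) and lays out each consonant's forms in runs of eight vowel orders. The sketch below is ours, not from the paper's released code; the helper name cv_split and the sample characters are illustrative only.

# Minimal Python sketch of consonant-vowel (CV) splitting for Ethiopic
# syllables, following the footnoted rule:
#   Consonant = Unicode / 8; Vowel = Unicode % 8
def cv_split(ch):
    cp = ord(ch)
    # Integer division groups all eight vowel forms of a consonant under
    # one id; the remainder picks out the vowel order within the group.
    return cp // 8, cp % 8

# Three vowel forms of the same consonant share a consonant id but
# differ in vowel order (characters chosen for illustration):
for ch in "\u1270\u1272\u1275":
    cons, vow = cv_split(ch)
    print(f"U+{ord(ch):04X} -> consonant {cons}, vowel {vow}")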
Macedonian is a South Slavic language closely related to the languages of the former Yugoslavia and written in a local variant of the Cyrillic alphabet similar to Serbian's. We use the Macedonian test set constructed by McNamee et al. (2011), containing 1956 query mentions. A seed set of 1k name pairs was obtained from the inter-language Wikipedia links for Macedonian, and the monolingual corpus from
", "html": null } } } }