{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:40:15.169783Z" }, "title": "Evaluating a Joint Training Approach for Learning Cross-lingual Embeddings with Sub-word Information without Parallel Corpora on Lower-resource Languages", "authors": [ { "first": "Ali", "middle": [ "Hakimi" ], "last": "Parizi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of New Brunswick Fredericton", "location": { "postCode": "E3B 5A3", "region": "NB", "country": "Canada" } }, "email": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of New Brunswick Fredericton", "location": { "postCode": "E3B 5A3", "region": "NB", "country": "Canada" } }, "email": "paul.cook@unb.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Cross-lingual word embeddings provide a way for information to be transferred between languages. In this paper we evaluate an extension of a joint training approach to learning cross-lingual embeddings that incorporates sub-word information during training. This method could be particularly well-suited to lower-resource and morphologically-rich languages because it can be trained on modest size monolingual corpora, and is able to represent out-of-vocabulary words (OOVs). We consider bilingual lexicon induction, including an evaluation focused on OOVs. We find that this method achieves improvements over previous approaches, particularly for OOVs.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Cross-lingual word embeddings provide a way for information to be transferred between languages. In this paper we evaluate an extension of a joint training approach to learning cross-lingual embeddings that incorporates sub-word information during training. This method could be particularly well-suited to lower-resource and morphologically-rich languages because it can be trained on modest size monolingual corpora, and is able to represent out-of-vocabulary words (OOVs). We consider bilingual lexicon induction, including an evaluation focused on OOVs. We find that this method achieves improvements over previous approaches, particularly for OOVs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word embeddings are an essential component in systems for many natural language processing tasks such as part-of-speech tagging (Al-Rfou' et al., 2013) , dependency parsing (Chen and Manning, 2014) and named entity recognition (Pennington et al., 2014) . Cross-lingual word representations provide a shared space for word embeddings of two languages, and make it possible to transfer information between languages (Ruder et al., 2019) . A common approach to learn cross-lingual embeddings is to learn a matrix to map the embeddings of one language to another using supervised (e.g., Mikolov et al., 2013b) , semi-supervised (Artetxe et al., 2017) , or unsupervised (e.g., Lample et al., 2018) methods. These methods rely on the assumption that the geometric arrangement of embeddings in different languages is the same. 
However, it has been shown that this assumption does not always hold, and that methods that instead jointly train embeddings for two languages produce embeddings that are more isomorphic and achieve stronger results for bilingual lexicon induction (BLI; Ormazabal et al., 2019), a well-known intrinsic evaluation for cross-lingual word representations (Ruder et al., 2019; Anastasopoulos and Neubig, 2020). The approach of Ormazabal et al. uses a parallel corpus as a cross-lingual signal. Parallel corpora are, however, unavailable for many language pairs, particularly for low-resource languages. Duong et al. (2016) introduce a joint training approach that extends CBOW (Mikolov et al., 2013a) to learn cross-lingual word embeddings from modest-size monolingual corpora, using a bilingual dictionary as the cross-lingual signal. Bilingual dictionaries are available for many language pairs; e.g., Panlex (Baldwin et al., 2010) provides translations for roughly 5700 languages. These training resource requirements suggest this method could be well-suited to lower-resource languages. However, this word-level approach is unable to form representations for out-of-vocabulary (OOV) words, which could be particularly common in the case of low-resource, and morphologically-rich, languages.", "cite_spans": [ { "start": 128, "end": 151, "text": "(Al-Rfou' et al., 2013)", "ref_id": "BIBREF1" }, { "start": 173, "end": 197, "text": "(Chen and Manning, 2014)", "ref_id": "BIBREF8" }, { "start": 227, "end": 252, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF18" }, { "start": 414, "end": 434, "text": "(Ruder et al., 2019)", "ref_id": "BIBREF19" }, { "start": 583, "end": 605, "text": "Mikolov et al., 2013b)", "ref_id": "BIBREF16" }, { "start": 624, "end": 646, "text": "(Artetxe et al., 2017)", "ref_id": "BIBREF3" }, { "start": 672, "end": 692, "text": "Lample et al., 2018)", "ref_id": "BIBREF14" }, { "start": 1069, "end": 1098, "text": "(BLI, Ormazabal et al., 2019)", "ref_id": null }, { "start": 1175, "end": 1195, "text": "(Ruder et al., 2019;", "ref_id": "BIBREF19" }, { "start": 1196, "end": 1228, "text": "Anastasopoulos and Neubig, 2020)", "ref_id": "BIBREF2" }, { "start": 1419, "end": 1438, "text": "Duong et al. (2016)", "ref_id": "BIBREF10" }, { "start": 1493, "end": 1516, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Hakimi Parizi and Cook (2020b) propose an extension of Duong et al. (2016) that incorporates sub-word information during training and therefore can generate representations for OOVs in the shared cross-lingual space. This method also does not require parallel corpora for training, and could therefore be particularly well-suited to lower-resource, and morphologically-rich, languages. However, Hakimi Parizi and Cook only evaluate on synthetic low-resource languages. We refer to the methods of Duong et al. and Hakimi Parizi and Cook as DUONG2016 and HAKIMI2020, respectively.", "cite_spans": [ { "start": 7, "end": 30, "text": "Parizi and Cook (2020b)", "ref_id": "BIBREF13" }, { "start": 55, "end": 74, "text": "Duong et al.
(2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most prior work on BLI focuses on invocabulary (IV) words and well-resourced languages (e.g., Artetxe et al., 2017; Ormazabal et al., 2019; Zhang et al., 2020) , although there has been some work on OOVs (Hakimi Parizi and Cook, 2020a ) and low-resource languages (Anastasopoulos and Neubig, 2020). In this paper, we evaluate HAKIMI2020 on BLI for twelve lower-resource languages, and also consider an evaluation focused on OOVs. Our results indicate that HAKIMI2020 gives improvements over DUONG2016 and several strong baselines, particularly for OOVs. ", "cite_spans": [ { "start": 94, "end": 115, "text": "Artetxe et al., 2017;", "ref_id": "BIBREF3" }, { "start": 116, "end": 139, "text": "Ormazabal et al., 2019;", "ref_id": "BIBREF17" }, { "start": 140, "end": 159, "text": "Zhang et al., 2020)", "ref_id": "BIBREF22" }, { "start": 204, "end": 234, "text": "(Hakimi Parizi and Cook, 2020a", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O = i\u2208Ds\u222aDt \u03b1 log \u03c3(u T w i h i ) + (1 \u2212 \u03b1) log \u03c3(u T w i h i ) + p j=1 E w j \u223cPn(w) log \u03c3(\u2212u T w j h i )", "eq_num": "(1)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "Following Bojanowski et al. (2017) , HAKIMI2020 modifies Equation 1 by including sub-word information during the joint training process as follows:", "cite_spans": [ { "start": 10, "end": 34, "text": "Bojanowski et al. (2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "O = i\u2208Ds\u222aDt \u03b1 log S(w i , h i ) + (1 \u2212 \u03b1) log S(w i , h i ) + p j=1 E w j \u223cPn(w) log \u2212S(w j , h i ) (2) S(w, h) = 1 |G w| g\u2208Gw z T g h (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where G w is the set of sub-words appearing in w and z g is the sub-word embedding for g. h is calculated by averaging the representations for each word appearing in the context, where each word is itself represented by the average of its sub-word embeddings. HAKIMI2020 use character n-grams as subwords. Specifically, each word is augmented with special beginning and end of word markers, and then represented as a bag of character n-grams, using n-grams of length 3-6 characters. The entire word itself (with beginning and end of word markers) is also included among the sub-words. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We consider BLI from twelve lower-resource source languages to English. The languages (shown in Table 1 ) were selected to cover a variety of language families, while having small to medium size Wikipedias and BLI evaluation datasets available. We compare HAKIMI2020 with DUONG2016, VECMAP (Artetxe et al., 2018) , and MEEMI (Doval et al., 2018) . In each case, we use cosine similarity to find the closest target language translations for a source language word. 
", "cite_spans": [ { "start": 290, "end": 312, "text": "(Artetxe et al., 2018)", "ref_id": "BIBREF4" }, { "start": 325, "end": 345, "text": "(Doval et al., 2018)", "ref_id": "BIBREF9" }, { "start": 494, "end": 514, "text": "(Ruder et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "The corpus for each language is a Wikipedia dump from 27 July 2020, cleaned using tools from Bojanowski et al. (2017), and tokenized using EuroparlExtract (Ustaszewski, 2019), except for Bengali and Hindi, which are tokenized using NLTK (Bird et al., 2009). Because DUONG2016 and HAKIMI2020 can learn high-quality cross-lingual embeddings from monolingual corpora of only 5M sentences each, we down-sample the English corpus for these two methods to 5M sentences. DUONG2016 benefits from a relatively large training dictionary (Duong et al., 2016); therefore, for DUONG2016 and HAKIMI2020 we follow Duong et al. and create large training dictionaries by extracting translation pairs from Panlex. Details of the training corpora and Panlex dictionaries are shown in Table 1.", "cite_spans": [ { "start": 153, "end": 172, "text": "(Ustaszewski, 2019)", "ref_id": "BIBREF20" }, { "start": 236, "end": 255, "text": "(Bird et al., 2009)", "ref_id": "BIBREF6" }, { "start": 527, "end": 547, "text": "(Duong et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 765, "end": 772, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Training Corpora and Dictionaries", "sec_num": "3.1" }, { "text": "We compare against two baselines: VECMAP (Artetxe et al., 2018), a supervised mapping-based method, and MEEMI (Doval et al., 2018), a post-processing method. We consider various training corpora and dictionaries to create strong baselines.", "cite_spans": [ { "start": 41, "end": 63, "text": "(Artetxe et al., 2018)", "ref_id": "BIBREF4" }, { "start": 111, "end": 131, "text": "(Doval et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "Supervised mapping-based approaches tend to see a reduction in performance with seed lexicons larger than roughly 5k pairs (Vuli\u0107 and Korhonen, 2016). We therefore use training translation pairs from MUSE (Lample et al., 2018), except for Azerbaijani, which is not included in MUSE; for Azerbaijani we use training pairs from Anastasopoulos and Neubig (2020). We first train VECMAP using these MUSE pairs, and embeddings learned from the full English corpus, to give this baseline access to as much training data as is available. We then consider this approach, but using the down-sampled English corpus. We found that the smaller English corpus gave higher precision@N (for N = 1, 5, and 10) for both the IV and OOV evaluations in Section 4. This could be due to the smaller corpus having a smaller vocabulary.
We then also consider VECMAP trained using Panlex training pairs and embeddings learned from the down-sampled English corpus.", "cite_spans": [ { "start": 123, "end": 149, "text": "(Vuli\u0107 and Korhonen, 2016)", "ref_id": "BIBREF21" }, { "start": 189, "end": 210, "text": "(Lample et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "We next consider MEEMI applied to each of the three sets of cross-lingual embeddings obtained from VECMAP. In each case we train MEEMI using the same training pairs (MUSE or Panlex) that were used to train VECMAP. In Section 4 we report results for the baseline that performs best.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "Hakimi Parizi and Cook (2020b) show that DUONG2016 performs best using its default parameters, i.e., an embedding size of 200 and window size of 48, but that HAKIMI2020 performs better using an embedding size of 300 and window size of 20. We use these parameter settings here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyper-Parameter Settings", "sec_num": "3.3" }, { "text": "fastText is used to train monolingual embeddings for VECMAP and MEEMI. We use skipgram with its default settings, except that the dimension of the embeddings is set to 300 (Bojanowski et al., 2017).", "cite_spans": [ { "start": 167, "end": 192, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Hyper-Parameter Settings", "sec_num": "3.3" }, { "text": "In this section, we present results for BLI for IV words, and then for OOV source language words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4" }, { "text": "For these experiments we use MUSE test data for all languages except Azerbaijani, where we use test data from Anastasopoulos and Neubig (2020). Because our focus here is on IV words, we only consider translation pairs that are IV with respect to the embedding matrices learned from our corpora. We compare HAKIMI2020 with DUONG2016 and MEEMI trained using the down-sampled English corpus and MUSE training pairs, which performed best of the baselines considered for each evaluation measure. Results are shown in Table 2. 1 HAKIMI2020 improves over DUONG2016, indicating that DUONG2016 can indeed be improved by incorporating sub-word information during training. Comparing HAKIMI2020 and MEEMI, the results are more mixed. In terms of precision@1, MEEMI substantially outperforms HAKIMI2020, although for precision@10 HAKIMI2020 outperforms MEEMI.", "cite_spans": [ { "start": 110, "end": 142, "text": "Anastasopoulos and Neubig (2020)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 513, "end": 520, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "BLI for In-Vocabulary Words", "sec_num": "4.1" }, { "text": "Following Hakimi Parizi and Cook (2020a) we use Panlex to construct a test dataset of translation pairs in which the source language words are OOV and the target language words are IV. However, Hakimi Parizi and Cook observe that some translations in Panlex are noise. To avoid noisy translations, we use all translation pairs for which the source language word is OOV with respect to the embedding matrix, i.e., the embedding models have no direct knowledge of these words, but is attested in the source language corpus, i.e., there is evidence that this is indeed a word in the source language. 2
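As a concrete illustration of this filtering step, the following sketch applies the two conditions to candidate Panlex pairs; the function and argument names are our own hypothetical choices, assuming a list of (source, target) pairs, the IV sets induced by the embedding matrices, and raw source-corpus frequency counts.

```python
def build_oov_test_set(panlex_pairs, source_iv, target_iv, corpus_counts):
    """Keep Panlex pairs whose source word is OOV for the embeddings but
    attested in the source corpus, and whose target word is IV."""
    return [
        (src, tgt)
        for src, tgt in panlex_pairs
        if src not in source_iv             # no embedding was learned for src
        and corpus_counts.get(src, 0) > 0   # but src does occur in the corpus
        and tgt in target_iv                # gold translation is representable
    ]
```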
The resulting test datasets range in size from 806 translation pairs for Azerbaijani to roughly 11k pairs for Hungarian.", "cite_spans": [ { "start": 597, "end": 598, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "BLI for OOVs", "sec_num": "4.2" }, { "text": "Here we compare against the VECMAP baseline using the down-sampled English corpus and Panlex training pairs, which performed best of the baselines considered for each evaluation measure. For VECMAP, we follow Hakimi Parizi and Cook (2020a) by forming a representation for the OOV source language word from its sub-word embeddings, and then mapping it into the shared space. We cannot, however, compare directly against DUONG2016 because it is a word-level approach that cannot represent OOVs. We therefore instead compare against a baseline in which the OOV source language word is copied into the target language. This approach, referred to as COPY, could work well in the case of borrowings and named entities. 3 Table 3 shows the results. HAKIMI2020 outperforms VECMAP for all languages and evaluation measures. This finding suggests that sub-word information can be more effectively transferred in a cross-lingual setting when sub-words are incorporated into the training process (as is the case for HAKIMI2020) than when they are not (as for VECMAP here). Comparing HAKIMI2020 to COPY, COPY outperforms HAKIMI2020 for several languages, but HAKIMI2020 performs better on average. In the cases where COPY outperforms HAKIMI2020, this appears to be largely related to the presence of English abbreviations in the source language Wikipedia dump.", "cite_spans": [], "ref_spans": [ { "start": 713, "end": 720, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "BLI for OOVs", "sec_num": "4.2" }, { "text": "Because of the relatively strong performance of COPY on several languages, we propose an approach that combines COPY and HAKIMI2020, referred to as HAKIMI2020+COPY. Given a source language word, we first check whether it is in the target language embedding matrix. If so, we assume it is a word that does not require translation (e.g., a named entity) and copy it into the target language. 4 If the source language word is not in the target language embedding matrix, we apply HAKIMI2020 to find the target language translation under this model. This approach improves over both COPY and HAKIMI2020 for all languages, except Bengali, and gives substantial improvements on average. 5 Although COPY is a very simple approach, it is complementary to HAKIMI2020, and the two approaches can be effectively combined to improve BLI for OOVs.
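The combined decision rule is simple enough to sketch directly; the snippet below is our own illustration (hypothetical names), reusing the translate function sketched in Section 3, and is not the authors' released code.

```python
def hakimi2020_plus_copy(src_word, target_iv, subword_vecs, target_vocab,
                         target_matrix, topn=10):
    """HAKIMI2020+COPY: copy the source word if it already appears in the
    target-language vocabulary; otherwise translate with the sub-word model."""
    if src_word in target_iv:   # e.g., a borrowing or a named entity
        return [src_word]       # COPY: assume no translation is required
    return translate(src_word, subword_vecs, target_vocab, target_matrix, topn)
```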
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BLI for OOVs", "sec_num": "4.2" }, { "text": "We evaluated an extension of a joint training approach to learning cross-lingual embeddings that incorporates sub-word information during training, which could be well-suited to lower-resource and morphologically-rich languages because it can be trained on modest amounts of monolingual data and can represent OOVs. In two BLI tasks for twelve lower-resource languages, one focused on IV words and one on OOVs, we found that this method improved over previous approaches, particularly for OOVs. Evaluation data and code for learning the cross-lingual embeddings are available. 6 In future work we plan to explore the impact of the target language on the quality of the cross-lingual embeddings, and in particular to consider source and target languages from the same family. We further intend to evaluate these cross-lingual embeddings in downstream tasks for low-resource languages, such as language modelling (Adams et al., 2017) and part-of-speech tagging (Fang and Cohn, 2017), and to compare against approaches based on contextualized language models.", "cite_spans": [ { "start": 564, "end": 565, "text": "6", "ref_id": "BIBREF5" }, { "start": 895, "end": 915, "text": "(Adams et al., 2017)", "ref_id": "BIBREF0" }, { "start": 943, "end": 964, "text": "(Fang and Cohn, 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "1 Results for each of the twelve languages are available in the appendix. 2 For each embedding method, we set the minimum frequency for words in the embedding matrix to 5; as such, all methods have the same source language vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "3 COPY only produces one target language candidate for a given source word, and as such we only compute precision@1 for this method. 4 This assumption can be incorrect, e.g., Afrikaans kits is IV for English, but translates to English moment. 5 We also observe that there is little improvement for HAKIMI2020+COPY over HAKIMI2020 on Hindi. For both Hindi and Bengali, COPY achieves very low precision, and so little or no improvement can be obtained over HAKIMI2020 by combining COPY with HAKIMI2020. 6 https://github.com/Cons13411/XLing_Subword", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Cross-lingual word embeddings for low-resource language modeling", "authors": [ { "first": "Oliver", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Makarucha", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "937--947", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017. Cross-lingual word embeddings for low-resource language modeling. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 937-947, Valencia, Spain.
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Polyglot: Distributed word representations for multilingual NLP", "authors": [ { "first": "Rami", "middle": [], "last": "Al-Rfou'", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "183--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rami Al-Rfou', Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 183-192, Sofia, Bulgaria. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Should all cross-lingual embeddings speak English?", "authors": [ { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8658--8679", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.766" ] }, "num": null, "urls": [], "raw_text": "Antonios Anastasopoulos and Graham Neubig. 2020. Should all cross-lingual embeddings speak English? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8658-8679, Online. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning bilingual word embeddings with (almost) no bilingual data", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "451--462", "other_ids": { "DOI": [ "10.18653/v1/P17-1042" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "5012--5019", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations.
In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5012-5019.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "PanLex and LEXTRACT: Translating all words of all languages of the world", "authors": [], "year": 2010, "venue": "Coling 2010: Demonstrations", "volume": "", "issue": "", "pages": "37--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin, Jonathan Pool, and Susan Colowick. 2010. PanLex and LEXTRACT: Translating all words of all languages of the world. In Coling 2010: Demonstrations, pages 37-40, Beijing, China. Coling 2010 Organizing Committee.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Natural Language Processing with Python", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird, Edward Loper, and Ewan Klein. 2009. Natural Language Processing with Python. O'Reilly Media Inc.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": { "DOI": [ "10.1162/tacl_a_00051" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "740--750", "other_ids": { "DOI": [ "10.3115/v1/D14-1082" ] }, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar.
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving cross-lingual word embeddings by meeting in the middle", "authors": [ { "first": "Yerai", "middle": [], "last": "Doval", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Espinosa-Anke", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Schockaert", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "294--304", "other_ids": { "DOI": [ "10.18653/v1/D18-1027" ] }, "num": null, "urls": [], "raw_text": "Yerai Doval, Jose Camacho-Collados, Luis Espinosa-Anke, and Steven Schockaert. 2018. Improving cross-lingual word embeddings by meeting in the middle. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 294-304, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning crosslingual word embeddings without bilingual corpora", "authors": [ { "first": "Long", "middle": [], "last": "Duong", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Kanayama", "suffix": "" }, { "first": "Tengfei", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1285--1295", "other_ids": { "DOI": [ "10.18653/v1/D16-1136" ] }, "num": null, "urls": [], "raw_text": "Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilingual corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1285-1295, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Model transfer for tagging low-resource languages using a bilingual dictionary", "authors": [ { "first": "Meng", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "587--593", "other_ids": { "DOI": [ "10.18653/v1/P17-2093" ] }, "num": null, "urls": [], "raw_text": "Meng Fang and Trevor Cohn. 2017. Model transfer for tagging low-resource languages using a bilingual dictionary. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 587-593, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluating sub-word embeddings in cross-lingual models", "authors": [ { "first": "Ali", "middle": [ "Hakimi" ], "last": "Parizi", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2712--2719", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ali Hakimi Parizi and Paul Cook. 2020a.
Evaluating sub-word embeddings in cross-lingual models. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 2712-2719, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Joint training for learning cross-lingual embeddings with subword information without parallel corpora", "authors": [ { "first": "Ali", "middle": [ "Hakimi" ], "last": "Parizi", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "39--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ali Hakimi Parizi and Paul Cook. 2020b. Joint training for learning cross-lingual embeddings with sub-word information without parallel corpora. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, pages 39-49, Barcelona, Spain (Online). Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Word translation without parallel data", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Workshop at the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of Workshop at the International Conference on Learning Representations, 2013, Scottsdale, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Exploiting similarities among languages for machine translation", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Quoc", "middle": [ "V." ], "last": "Le", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b.
Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Analyzing the limitations of cross-lingual word embedding mappings", "authors": [ { "first": "Aitor", "middle": [], "last": "Ormazabal", "suffix": "" }, { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4990--4995", "other_ids": { "DOI": [ "10.18653/v1/P19-1492" ] }, "num": null, "urls": [], "raw_text": "Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the limitations of cross-lingual word embedding mappings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4990-4995, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A survey of cross-lingual word embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2019, "venue": "Journal of Artificial Intelligence Research", "volume": "65", "issue": "1", "pages": "569--630", "other_ids": { "DOI": [ "10.1613/jair.1.11640" ] }, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research, 65(1):569-630.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Optimising the Europarl corpus for translation studies with the EuroparlExtract toolkit", "authors": [ { "first": "Michael", "middle": [], "last": "Ustaszewski", "suffix": "" } ], "year": 2019, "venue": "Perspectives", "volume": "27", "issue": "1", "pages": "107--123", "other_ids": { "DOI": [ "10.1080/0907676X.2018.1485716" ] }, "num": null, "urls": [], "raw_text": "Michael Ustaszewski. 2019. Optimising the Europarl corpus for translation studies with the EuroparlExtract toolkit.
Perspectives, 27(1):107-123.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "On the role of seed lexicons in learning bilingual word embeddings", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "247--257", "other_ids": { "DOI": [ "10.18653/v1/P16-1024" ] }, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107 and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 247-257, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Why overfitting isn't always bad: Retrofitting cross-lingual word embeddings to dictionaries", "authors": [ { "first": "Mozhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoshinari", "middle": [], "last": "Fujinuma", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Paul", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2214--2220", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.201" ] }, "num": null, "urls": [], "raw_text": "Mozhi Zhang, Yoshinari Fujinuma, Michael J. Paul, and Jordan Boyd-Graber. 2020. Why overfitting isn't always bad: Retrofitting cross-lingual word embeddings to dictionaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2214-2220, Online. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF2": { "html": null, "num": null, "type_str": "table", "content": "", "text": "The language family, size of corpus, and size of Panlex dictionary, for each source language." }, "TABREF4": { "html": null, "num": null, "type_str": "table", "content": "
", "text": "Precision@N for BLI for IV words, averaged over the twelve languages. The best precision for each evaluation measure is shown in boldface." }, "TABREF6": { "html": null, "num": null, "type_str": "table", "content": "
", "text": "Precision@N for BLI for OOV source language words. The best precision for each dataset and evaluation measure is shown in boldface." } } } }