{ "paper_id": "P16-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:55:14.991676Z" }, "title": "On the Role of Seed Lexicons in Learning Bilingual Word Embeddings", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A shared bilingual word embedding space (SBWES) is an indispensable resource in a variety of cross-language NLP and IR tasks. A common approach to the SB-WES induction is to learn a mapping function between monolingual semantic spaces, where the mapping critically relies on a seed word lexicon used in the learning process. In this work, we analyze the importance and properties of seed lexicons for the SBWES induction across different dimensions (i.e., lexicon source, lexicon size, translation method, translation pair reliability). On the basis of our analysis, we propose a simple but effective hybrid bilingual word embedding (BWE) model. This model (HYBWE) learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from a seed document-level embedding space. We perform bilingual lexicon learning (BLL) with 3 language pairs and show that by carefully selecting reliable translation pairs our new HYBWE model outperforms benchmarking BWE learning models, all of which use more expensive bilingual signals. Effectively, we demonstrate that a SBWES may be induced by leveraging only a very weak bilingual signal (document alignments) along with monolingual data.", "pdf_parse": { "paper_id": "P16-1024", "_pdf_hash": "", "abstract": [ { "text": "A shared bilingual word embedding space (SBWES) is an indispensable resource in a variety of cross-language NLP and IR tasks. A common approach to the SB-WES induction is to learn a mapping function between monolingual semantic spaces, where the mapping critically relies on a seed word lexicon used in the learning process. In this work, we analyze the importance and properties of seed lexicons for the SBWES induction across different dimensions (i.e., lexicon source, lexicon size, translation method, translation pair reliability). On the basis of our analysis, we propose a simple but effective hybrid bilingual word embedding (BWE) model. This model (HYBWE) learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from a seed document-level embedding space. We perform bilingual lexicon learning (BLL) with 3 language pairs and show that by carefully selecting reliable translation pairs our new HYBWE model outperforms benchmarking BWE learning models, all of which use more expensive bilingual signals. 
Effectively, we demonstrate that a SBWES may be induced by leveraging only a very weak bilingual signal (document alignments) along with monolingual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dense real-valued vector representations of words or word embeddings (WEs) have recently gained increasing popularity in natural language processing (NLP), serving as invaluable features in a broad", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: A toy example of a 3-dimensional monolingual vs shared bilingual word embedding space (further SBWES) from Gouws et al. (2015) .", "cite_spans": [ { "start": 117, "end": 136, "text": "Gouws et al. (2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual vs Bilingual", "sec_num": null }, { "text": "range of NLP tasks, e.g., (Turian et al., 2010; Collobert et al., 2011; Chen and Manning, 2014) . Several studies have showcased a direct link and comparable performance to \"more traditional\" distributional models (Turney and Pantel, 2010 ). Yet the widely used skip-gram model with negative sampling (SGNS) (Mikolov et al., 2013b) is considered as the state-of-the-art word representation model, due to its simplicity, fast training, as well as its solid and robust performance across a wide variety of semantic tasks (Baroni et al., 2014; Levy and Goldberg, 2014b; .", "cite_spans": [ { "start": 26, "end": 47, "text": "(Turian et al., 2010;", "ref_id": "BIBREF44" }, { "start": 48, "end": 71, "text": "Collobert et al., 2011;", "ref_id": "BIBREF7" }, { "start": 72, "end": 95, "text": "Chen and Manning, 2014)", "ref_id": "BIBREF6" }, { "start": 214, "end": 238, "text": "(Turney and Pantel, 2010", "ref_id": "BIBREF45" }, { "start": 308, "end": 331, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF33" }, { "start": 519, "end": 540, "text": "(Baroni et al., 2014;", "ref_id": "BIBREF3" }, { "start": 541, "end": 566, "text": "Levy and Goldberg, 2014b;", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual vs Bilingual", "sec_num": null }, { "text": "Research interest has recently extended to bilingual word embeddings (BWEs). BWE learning models focus on the induction of a shared bilingual word embedding space (SBWES) where words from both languages are represented in a uniform language-independent manner such that similar words (regardless of the actual language) have similar representations (see Fig. 1 ). A variety of BWE learning models have been proposed, differing in the essential requirement of a bilingual signal necessary to construct such a SBWES (discussed later in Sect. 2). SBWES may be used to support many tasks, e.g., computing cross-lingual/multilingual semantic word similarity (Faruqui and Dyer, 2014) , learning bilingual word lexicons (Mikolov et al., 2013a; Gouws et al., 2015; , cross-lingual entity linking (Tsai and Roth, 2016) , parsing (Guo et al., 2015; Johannsen et al., 2015) , machine translation (Zou et al., 2013) , or crosslingual information retrieval (Vuli\u0107 and Moens, 2015; Mitra et al., 2016) . 
BWE models should have two desirable properties: (P1) leverage (large) monolingual training sets tied together through a bilingual signal, (P2) use as inexpensive a bilingual signal as possible in order to learn a SBWES in a scalable and widely applicable manner across languages and domains.", "cite_spans": [ { "start": 653, "end": 677, "text": "(Faruqui and Dyer, 2014)", "ref_id": "BIBREF10" }, { "start": 713, "end": 736, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF32" }, { "start": 737, "end": 756, "text": "Gouws et al., 2015;", "ref_id": "BIBREF12" }, { "start": 788, "end": 809, "text": "(Tsai and Roth, 2016)", "ref_id": "BIBREF42" }, { "start": 820, "end": 838, "text": "(Guo et al., 2015;", "ref_id": "BIBREF13" }, { "start": 839, "end": 862, "text": "Johannsen et al., 2015)", "ref_id": "BIBREF18" }, { "start": 885, "end": 903, "text": "(Zou et al., 2013)", "ref_id": "BIBREF54" }, { "start": 944, "end": 967, "text": "(Vuli\u0107 and Moens, 2015;", "ref_id": "BIBREF49" }, { "start": 968, "end": 987, "text": "Mitra et al., 2016)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 354, "end": 360, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Monolingual vs Bilingual", "sec_num": null }, { "text": "While we provide a classification of related work (i.e., of different BWE models according to these properties) in Sect. 2.1, the focus of this work is on a popular class of models labeled Post-Hoc Mapping with Seed Lexicons. These models operate as follows (Mikolov et al., 2013a; Ammar et al., 2016) :", "cite_spans": [ { "start": 257, "end": 280, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF32" }, { "start": 281, "end": 300, "text": "Ammar et al., 2016)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual vs Bilingual", "sec_num": null }, { "text": "(1) two separate non-aligned monolingual embedding spaces are induced using any monolingual WE learning model (SGNS is the typical choice); (2) given a seed lexicon of word translation pairs as the bilingual signal for training, a mapping function is learned which ties the two monolingual spaces together into a SBWES.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual vs Bilingual", "sec_num": null }, { "text": "All existing work on this class of models assumes that high-quality training seed lexicons are readily available. In reality, little is understood regarding what constitutes a high-quality seed lexicon, even with \"traditional\" distributional models (Gaussier et al., 2004; Holmlund et al., 2005; Vuli\u0107 and Moens, 2013) . Therefore, in this work we ask whether BWE learning could be improved by making more intelligent choices when deciding on seed lexicon entries. 
In order to do this, we delve deeper into the cross-lingual mapping problem by analyzing a spectrum of seed lexicons with respect to controllable parameters such as lexicon source, lexicon size, translation method, and translation pair reliability.", "cite_spans": [ { "start": 249, "end": 272, "text": "(Gaussier et al., 2004;", "ref_id": "BIBREF11" }, { "start": 273, "end": 295, "text": "Holmlund et al., 2005;", "ref_id": "BIBREF16" }, { "start": 296, "end": 318, "text": "Vuli\u0107 and Moens, 2013)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual vs Bilingual", "sec_num": null }, { "text": "The contributions of this paper are as follows: (C1) We present a systematic study on the importance of seed lexicons for learning mapping functions between monolingual WE spaces. (C2) Given the insights gained, we propose a simple yet effective hybrid BWE model HYBWE that removes the need for readily available seed lexicons, and satisfies properties P1 and P2. HYBWE relies on an inexpensive seed lexicon of highly reliable word translation pairs obtained by a document-level BWE model from document-aligned comparable data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual vs Bilingual", "sec_num": null }, { "text": "(C3) Using a careful pair selection process when constructing a seed lexicon, we show that in the BLL task HYBWE outperforms a BWE model of Mikolov et al. (2013a) which relies on readily available seed lexicons. HYBWE also outperforms state-of-the-art models of (Hermann and Blunsom, 2014b; Gouws et al., 2015) which require sentence-aligned parallel data.", "cite_spans": [ { "start": 140, "end": 162, "text": "Mikolov et al. (2013a)", "ref_id": "BIBREF32" }, { "start": 291, "end": 310, "text": "Gouws et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual vs Bilingual", "sec_num": null }, { "text": "Given source and target language vocabularies V S and V T , all BWE models learn a representation of each word w \u2208 V S \u222a V T in a SBWES as a real-valued vector:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning SBWES using Seed Lexicons", "sec_num": "2" }, { "text": "w = [f_1, . . . , f_d],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning SBWES using Seed Lexicons", "sec_num": "2" }, { "text": "where f_k \u2208 R denotes the value of the k-th cross-lingual feature for w within a d-dimensional SBWES. Semantic similarity sim(w, v) between two words w, v \u2208 V S \u222a V T is then computed by applying a similarity function (SF), e.g., cosine (cos), on their representations in the SBWES:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning SBWES using Seed Lexicons", "sec_num": "2" }, { "text": "sim(w, v) = SF(w, v) = cos(w, v).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning SBWES using Seed Lexicons", "sec_num": "2" }, { "text": "Bilingual Signals BWE models may be clustered into four different types according to the bilingual signals used in training, and properties P1 and P2 (see Sect. 1). Upadhyay et al. 
(2016) provide a similar overview of recent bilingual embedding learning architectures regarding different bilingual signals required for the embedding induction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work: BWE Models and", "sec_num": "2.1" }, { "text": "(Type 1) Parallel-Only: This group of BWE models relies on sentence-aligned and/or word-aligned parallel data as the only data source (Zou et al., 2013; Hermann and Blunsom, 2014a; Ko\u010disk\u00fd et al., 2014; Hermann and Blunsom, 2014b; Chandar et al., 2014) . In addition to an expensive bilingual signal (colliding with P2), these models do not leverage larger monolingual datasets for training (not satisfying P1).", "cite_spans": [ { "start": 134, "end": 152, "text": "(Zou et al., 2013;", "ref_id": "BIBREF54" }, { "start": 153, "end": 180, "text": "Hermann and Blunsom, 2014a;", "ref_id": null }, { "start": 181, "end": 202, "text": "Ko\u010disk\u00fd et al., 2014;", "ref_id": "BIBREF22" }, { "start": 203, "end": 230, "text": "Hermann and Blunsom, 2014b;", "ref_id": null }, { "start": 231, "end": 252, "text": "Chandar et al., 2014)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work: BWE Models and", "sec_num": "2.1" }, { "text": "(Type 2) Joint Bilingual Training: These models jointly optimize two monolingual objectives, with the cross-lingual objective acting as a cross-lingual regularizer during training (Klementiev et al., 2012; Gouws et al., 2015; Soyer et al., 2015; Shi et al., 2015; Coulmance et al., 2015) . The idea may be summarized by the simplified formulation (Luong et al., 2015) : \u03b3(Mono S +Mono T )+\u03b4Bi. The monolingual objectives M ono S and M ono T ensure that similar words in each language are assigned similar embeddings and aim to capture the semantic structure of each language, whereas the cross-lingual objective Bi ensures that similar words across languages are assigned similar embeddings. It ties the two monolingual spaces together into a SBWES (thus satisfying P1). Parameters \u03b3 and \u03b4 govern the influence of the monolingual and bilingual components. 1 The main disadvantage of Type 2 models is the costly parallel data needed for the bilingual signal (thus colliding with P2).", "cite_spans": [ { "start": 180, "end": 205, "text": "(Klementiev et al., 2012;", "ref_id": "BIBREF21" }, { "start": 206, "end": 225, "text": "Gouws et al., 2015;", "ref_id": "BIBREF12" }, { "start": 226, "end": 245, "text": "Soyer et al., 2015;", "ref_id": "BIBREF40" }, { "start": 246, "end": 263, "text": "Shi et al., 2015;", "ref_id": "BIBREF38" }, { "start": 264, "end": 287, "text": "Coulmance et al., 2015)", "ref_id": "BIBREF8" }, { "start": 347, "end": 367, "text": "(Luong et al., 2015)", "ref_id": "BIBREF28" }, { "start": 856, "end": 857, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work: BWE Models and", "sec_num": "2.1" }, { "text": "(Type 3) Pseudo-Bilingual Training: This set of models requires document alignments as bilingual signal to induce a SBWES. create a collection of pseudo-bilingual documents by merging every pair of aligned documents in training data, in a way that preserves important local information: words that appeared next to other words within the same language and those that appeared in the same region of the document across different languages. This collection is then used to train word embeddings with monolingual SGNS from word2vec. 
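As a rough illustration only (a simplified sketch under our own assumptions, not the exact merging procedure of the cited document-level model), one such pseudo-bilingual document could be assembled as follows:

```python
# Hypothetical sketch: interleave two document-aligned token lists so that words keep
# their monolingual neighbours while sharing a document "region" with the other language.
def merge_documents(doc_src, doc_trg):
    merged, i, j = [], 0, 0
    # roughly length-proportional interleaving (an illustrative choice)
    ratio = max(1, round(len(doc_src) / max(1, len(doc_trg))))
    while i < len(doc_src) or j < len(doc_trg):
        merged.extend(doc_src[i:i + ratio])
        i += ratio
        if j < len(doc_trg):
            merged.append(doc_trg[j])
            j += 1
    return merged
```

The resulting token sequences can then be fed to any standard monolingual SGNS implementation, exactly as described above.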
With pseudo-bilingual documents, the \"context\" of a word is redefined as a mixture of neighbouring words (in the original language) and words that appeared in the same region of the document (in the \"foreign\" language). The bilingual contexts for each word in each document steer the final model towards constructing a SBWES. The advantage over other BWE model types lies in exploiting weaker document-level bilingual signals (satisfying P2), but these models are unable to exploit monolingual corpora during training (unlike Type 2 or Type 4; thus colliding with P1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work: BWE Models and", "sec_num": "2.1" }, { "text": "(Type 4) Post-Hoc Mapping with Seed Lexicons: These models learn post-hoc mapping functions between monolingual WE spaces induced separately for two different languages (e.g., by SGNS). All Type 4 models (Mikolov et al., 2013a; Faruqui and Dyer, 2014; rely on readily available seed lexicons of highly frequent words obtained by e.g. Google Translate (GT) to learn the mapping (again colliding with P2), but they are able to satisfy P1.", "cite_spans": [ { "start": 204, "end": 227, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF32" }, { "start": 228, "end": 251, "text": "Faruqui and Dyer, 2014;", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work: BWE Models and", "sec_num": "2.1" }, { "text": "1 Type 1 models may be considered a special case of Type 2 models: Setting \u03b3 = 0 reduces Type 2 models to Type 1 models trained solely on parallel data, e.g., (Hermann and Blunsom, 2014b; Chandar et al., 2014) . \u03b3 = 1 results in the models from (Klementiev et al., 2012; Gouws et al., 2015; Soyer et al., 2015; Coulmance et al., 2015) .", "cite_spans": [ { "start": 159, "end": 187, "text": "(Hermann and Blunsom, 2014b;", "ref_id": null }, { "start": 188, "end": 209, "text": "Chandar et al., 2014)", "ref_id": "BIBREF5" }, { "start": 245, "end": 270, "text": "(Klementiev et al., 2012;", "ref_id": "BIBREF21" }, { "start": 271, "end": 290, "text": "Gouws et al., 2015;", "ref_id": "BIBREF12" }, { "start": 291, "end": 310, "text": "Soyer et al., 2015;", "ref_id": "BIBREF40" }, { "start": 311, "end": 334, "text": "Coulmance et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work: BWE Models and", "sec_num": "2.1" }, { "text": "Methodology and Lexicons Key Intuition One may infer that a type-hybrid procedure which would retain only highly reliable translation pairs obtained by a Type 3 model as a seed lexicon for Type 4 models effectively satisfies both requirements: (P1) unlike Type 1 and Type 3, it can learn from monolingual data and tie two monolingual spaces using the highly reliable translation pairs, (P2) unlike Type 1 and Type 2, it does not require parallel data; unlike Type 4, it does not require external lexicons and translation systems. The only bilingual signal required are document alignments. Therefore, our focus is on novel less expensive Type 4 models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-Hoc Mapping with Seed Lexicons:", "sec_num": "2.2" }, { "text": "Overview The standard learning setup we use is as follows: First, two monolingual embedding spaces, R d S and R d T , are induced separately in each of the two languages using a standard monolingual WE model such as CBOW or SGNS. d S and d T denote the dimensionality of monolingual WE spaces. 
The bilingual signal is a seed lexicon, i.e., a list of word translation pairs (x i , y i ), where x i \u2208 V S , y i \u2208 V T , with the corresponding embeddings x i \u2208 R d S and y i \u2208 R d T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-Hoc Mapping with Seed Lexicons:", "sec_num": "2.2" }, { "text": "Learning Objectives Training is cast as a multivariate regression problem: it implies learning a function that maps the source language vectors from the training data to their corresponding target language vectors. A standard approach (Mikolov et al., 2013a) is to assume a linear map W \u2208 R d S \u00d7d T , where an L 2 -regularized least-squares error objective (i.e., ridge regression) is used to learn the map W. The map is learned by solving the following optimization problem (typically by stochastic gradient descent (SGD)):", "cite_spans": [ { "start": 235, "end": 258, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Post-Hoc Mapping with Seed Lexicons:", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min_{W \u2208 R^(d_S \u00d7 d_T)} ||XW \u2212 Y||^2_F + \u03bb ||W||^2_F", "eq_num": "(1)" } ], "section": "Post-Hoc Mapping with Seed Lexicons:", "sec_num": "2.2" }, { "text": "X and Y are matrices obtained through the respective concatenation of source language and target language vectors from the training pairs. Once the linear map W is estimated, any previously unseen source language word vector x u may be straightforwardly mapped into the target language embedding space", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-Hoc Mapping with Seed Lexicons:", "sec_num": "2.2" }, { "text": "R d T as Wx u . After mapping all vectors x, x \u2208 V S , the target embedding space R d T in fact serves as the SBWES. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-Hoc Mapping with Seed Lexicons:", "sec_num": "2.2" }, { "text": "Prior work on post-hoc mapping with seed lexicons used a translation system (i.e., GT) to translate highly frequent English words to other languages such as Czech, Spanish (Mikolov et al., 2013a; Gouws et al., 2015) or Italian . This method presupposes the availability and high quality of such an external translation system. To simulate this setup, we take as a starting point the BNC word frequency list from Kilgarriff (1997) containing the 6,318 most frequent English lemmas. The list is then translated to other languages via GT. We call the BNC-based lexicons obtained by employing Google Translate BNC+GT.
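Given any such seed lexicon, the mapping step of Sect. 2.2 (Eq. 1) is simple to realize in code. The following is a minimal illustrative sketch (our own, with hypothetical names, not the authors' implementation); it solves the ridge-regression objective in closed form rather than with SGD, and assumes the two monolingual embedding matrices are available as plain NumPy arrays:

```python
import numpy as np

def learn_mapping(X, Y, lam=1.0):
    """Eq. (1) in closed form: W = (X^T X + lam * I)^{-1} X^T Y.
    X: (n, d_S) source vectors of the seed pairs; Y: (n, d_T) target vectors."""
    d_s = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d_s), X.T @ Y)

def nearest_target(x_vec, W, T, target_words):
    """Map a source vector into the target space (the paper's Wx_u, here in row form)
    and return its cross-lingual nearest neighbour under cosine similarity."""
    mapped = x_vec @ W
    T_norm = T / np.linalg.norm(T, axis=1, keepdims=True)
    sims = T_norm @ (mapped / np.linalg.norm(mapped))
    return target_words[int(np.argmax(sims))]
```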
", "cite_spans": [ { "start": 172, "end": 195, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF32" }, { "start": 196, "end": 215, "text": "Gouws et al., 2015)", "ref_id": "BIBREF12" }, { "start": 412, "end": 429, "text": "Kilgarriff (1997)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "In this paper, we propose another option: first, we learn the \"first\" SBWES (i.e., SBWES-1) using another BWE model (see Sect. 2.1), and then translate the BNC list through SBWES-1 by retaining the nearest cross-lingual neighbor y i \u2208 V T for each x i in the BNC list which is represented in SBWES-1. The pairs (x i , y i ) constitute the seed lexicon needed for learning the mapping between monolingual spaces, that is, to induce the final SBWES-2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "Although in theory any BWE induction model may be used to induce SBWES-1, we rely on a document-level Type 3 BWE induction model from , since it requires only document alignments as a (weak) bilingual signal. The resulting hybrid BWE induction model (HYBWE) combines the output of a Type 3 model (SBWES-1) and a Type 4 model (SBWES-2). This seed lexicon and BWE learning variant is called BNC+HYB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "Our new hybrid model allows us to also use source language words occurring in SBWES-1, sorted by frequency, as the seed lexicon source, again leaning on the intuition that higher-frequency phenomena are more reliably translated using statistical models. Their translations can also be found through SBWES-1 to obtain seed lexicon pairs (x i , y i ). This variant is called HFQ+HYB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "Another possibility, recently introduced by Kiros et al. (2015) for vocabulary expansion in monolingual settings, relies on all words shared between the two vocabularies to learn the mapping. In this work, we test the ability and limits of such orthographic evidence in cross-lingual settings: seed lexicon pairs are", "cite_spans": [ { "start": 44, "end": 63, "text": "Kiros et al. (2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "(x i , x i ), where x i \u2208 V S and x i \u2208 V T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "This seed lexicon variant is called ORTHO.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "Seed Lexicon Size While all prior work reported results only with restricted seed lexicon sizes (i.e., 1K, 2K and 5K lexicon pairs are used as standard), in this work we provide a full-fledged analysis of the influence of seed lexicon size on the SBWES performance in cross-lingual tasks. More extreme settings are also investigated, in an attempt to answer two important questions: (1) Can a Type 4 SBWES be induced in a limited setting with only a few hundred lexicon pairs available (e.g., 100-500)? 
(2) Can the Type 4 models profit from the inclusion of more seed lexicon pairs (e.g., more than 5K, even up to 40K-50K lexicon pairs)?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "Translation Pair Reliability When building seed lexicons through SBWES-1 (i.e., the BNC+HYB and HFQ+HYB methods), it is possible to control for the reliability of translation pairs to be included in the final lexicon, with the idea that the use of only highly reliable pairs can potentially lead to an improved SBWES-2. A simple yet effective reliability feature for translation pairs is the symmetry constraint (Peirsman and Pad\u00f3, 2010; Vuli\u0107 and Moens, 2013) : two words x i \u2208 V S and y i \u2208 V T are used as a seed lexicon pair only if they are mutual nearest neighbours given their representations in SBWES-1. The two variants of seed lexicons with only symmetric pairs are BNC+HYB+SYM and HFQ+HYB+SYM. We also test the variants without the symmetry constraint (i.e., BNC+HYB+ASYM and HFQ+HYB+ASYM).", "cite_spans": [ { "start": 420, "end": 445, "text": "(Peirsman and Pad\u00f3, 2010;", "ref_id": "BIBREF37" }, { "start": 446, "end": 468, "text": "Vuli\u0107 and Moens, 2013)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "Even more conservative reliability measures may be applied by exploiting the scores in the lists of translation candidates ranked by their similarity to the cue word x i . We investigate a symmetry constraint with a threshold: two words x i \u2208 V S and y i \u2208 V T are included as a seed lexicon pair (x i , y i ) iff they are mutual nearest neighbours in SBWES-1 and it holds:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sim(x_i, y_i) \u2212 sim(x_i, z_i) > THR (2); sim(y_i, x_i) \u2212 sim(y_i, w_i) > THR", "eq_num": "(3)" } ], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "where z i \u2208 V T is the second best translation candidate for x i , and w i \u2208 V S for y i . THR is a parameter which specifies the margin between the two best translation candidates. The intuition is that highly unambiguous and monosemous translation pairs (which is reflected in higher score margins) are also highly reliable. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed Lexicon Source and Translation Method", "sec_num": null }, { "text": "Task: Bilingual Lexicon Learning (BLL) After the final SBWES is induced, given a list of n source language words x u1 , . . . , x un , the task is to find a target language word t for each x u in the list using the SBWES. t is the target language word closest to the source language word x u in the induced SBWES, also known as the cross-lingual nearest neighbor. The set of learned n (x u , t) pairs is then run against a gold standard BLL test set. Following the standard practice (Mikolov et al., 2013a), for all Type 4 models, all pairs containing any of the test words x u1 , . . . , x un are removed from training seed lexicons.
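Returning to the pair selection of Eqs. (2)-(3) above, the symmetry-plus-margin filter is easy to state in code. The sketch below is our own illustration (hypothetical names, not the authors' implementation) and assumes the SBWES-1 vectors are given as two L2-normalized matrices:

```python
import numpy as np

def select_reliable_pairs(S, T, src_words, trg_words, thr=0.01):
    """S: (n_s, d) source rows and T: (n_t, d) target rows of SBWES-1 (L2-normalized).
    Keep (x_i, y_i) only if the two words are mutual nearest neighbours and both
    margins over the second-best candidate exceed thr, as in Eqs. (2)-(3)."""
    sims = S @ T.T                                  # cosine similarities
    pairs = []
    for i in range(S.shape[0]):
        ranked_t = np.argsort(-sims[i])             # best targets for source i
        j = int(ranked_t[0])
        ranked_s = np.argsort(-sims[:, j])          # best sources for target j
        if int(ranked_s[0]) != i:                   # symmetry constraint
            continue
        margin_src = sims[i, j] - sims[i, ranked_t[1]]
        margin_trg = sims[i, j] - sims[ranked_s[1], j]
        if margin_src > thr and margin_trg > thr:   # margin threshold THR
            pairs.append((src_words[i], trg_words[j]))
    return pairs
```

With thr = 0.0 this reduces to the plain symmetry constraint (the SYM lexicon variants); positive values correspond to the stricter selection studied later in Exp. III.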
", "cite_spans": [ { "start": 483, "end": 506, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "Test Sets For each language pair, we evaluate on the standard 1,000 ground truth one-to-one translation pairs built for three language pairs: Spanish (ES)-, Dutch (NL)-, Italian (IT)-English (EN) by Vuli\u0107 and Moens (2013) . The dataset is generally considered a benchmarking test set for BLL models that learn from non-parallel data, and is available online. 4 We have also experimented with two other benchmarking BLL test sets (Bergsma and Durme, 2011; Leviant and Reichart, 2015) , observing a very similar relative performance of all the models in our comparison.", "cite_spans": [ { "start": 195, "end": 217, "text": "Vuli\u0107 and Moens (2013)", "ref_id": "BIBREF47" }, { "start": 355, "end": 356, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "We measure the BLL performance using the standard Top 1 accuracy (Acc 1 ) metric (Gaussier et al., 2004; Mikolov et al., 2013a; Gouws et al., 2015) . 5", "cite_spans": [ { "start": 81, "end": 104, "text": "(Gaussier et al., 2004;", "ref_id": "BIBREF11" }, { "start": 105, "end": 127, "text": "Mikolov et al., 2013a;", "ref_id": "BIBREF32" }, { "start": 128, "end": 147, "text": "Gouws et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "Baseline Models To induce SBWES-1, we resort to document-level embeddings of Vuli\u0107 and Moens (2016) (Type 3). We also compare to results obtained directly by their model (BWESG) to measure the performance gains with HYBWE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "To compare with a representative Type 2 model, we opt for the BilBOWA model of Gouws et al. (2015) due to its solid performance and robustness in the BLL task when trained on general-domain corpora such as Wikipedia (Luong et al., 2015) , its reduced complexity reflected in fast computations on massive datasets, as well as its public availability. 6", "cite_spans": [ { "start": 79, "end": 98, "text": "Gouws et al. (2015)", "ref_id": "BIBREF12" }, { "start": 216, "end": 236, "text": "(Luong et al., 2015)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "4 http://people.cs.kuleuven.be/~ivan.vulic/ 5 Similar trends are observed within a more lenient setting with Acc5 and Acc10 scores, but we omit these results for clarity and the fact that the actual BLL performance is best reflected in Acc1 scores (i.e., best translation only). In short, BilBOWA combines the adapted SGNS for monolingual objectives together with a cross-lingual objective that minimizes the L 2 -loss between the bag-of-word vectors of parallel sentences. 
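Schematically (this is an illustrative reconstruction of the idea only, not BilBOWA's actual code), that cross-lingual term can be written as the squared L2 distance between the averaged bag-of-words embeddings of an aligned sentence pair:

```python
import numpy as np

def crosslingual_l2(src_ids, trg_ids, emb_src, emb_trg):
    """emb_src, emb_trg: (V, d) embedding matrices; src_ids, trg_ids: word indices
    of one aligned sentence pair. Returns the loss that the bilingual signal minimizes."""
    diff = emb_src[src_ids].mean(axis=0) - emb_trg[trg_ids].mean(axis=0)
    return float(diff @ diff)
```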
BilBOWA uses the same training setup as HYBWE (monolingual datasets plus a bilingual signal), but relies on a stronger bilingual signal (sentence alignments as opposed to HYBWE's document alignments).", "cite_spans": [ { "start": 284, "end": 285, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "We also compare with a benchmarking Type 1 model from sentence-aligned parallel data called BiCVM (Hermann and Blunsom, 2014b). Finally, a SGNS-based BWE model with the BNC+GT seed lexicon is taken as a baseline Type 4 model (Mikolov et al., 2013a ). 7", "cite_spans": [ { "start": 225, "end": 247, "text": "(Mikolov et al., 2013a", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "Training Data and Setup We use standard training data and suggested settings to obtain BWEs for all models involved in comparison. We retain the 100K most frequent words in each language for all models. To induce monolingual WE spaces, two monolingual SGNS models were trained on the cleaned and tokenized Wikipedias from the Polyglot website (Al-Rfou et al., 2013) using SGD with a global learning rate of 0.025. For BilBOWA, as in the original work (Gouws et al., 2015) , the bilingual signal for the cross-lingual regularization is provided by the first 500K sentences from Europarl.v7 (Tiedemann, 2012) . We use SGD with a global rate of 0.15. 8 The window size is varied from 2 to 16 in steps of 2, and the best scoring model is always reported in all comparisons.", "cite_spans": [ { "start": 451, "end": 471, "text": "(Gouws et al., 2015)", "ref_id": "BIBREF12" }, { "start": 589, "end": 606, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF41" }, { "start": 648, "end": 649, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "BWESG was trained on the cleaned and tokenized document-aligned Wikipedias available online 9 , SGD on pseudo-bilingual documents with a global rate 0.025. For BiCVM, we use the tool released by its authors 10 and train on the whole Europarl.v7 for each language pair: we train an additive model, with hinge loss margin set to d (i.e., dimensionality) as in the original paper, batch size of 50, and noise parameter of 10. All BiCVM models are trained with 200 iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "For all models, we obtain BWEs with d = 40, 64, 300, 500, but we report only results with 300-dimensional BWEs as similar trends were observed with other d-s. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "Exp. I: Standard BLL Setting First, we replicate the previous BLL setups with Type 4 models from (Mikolov et al., 2013a; by relying on seed lexicons of exactly 5K word pairs (except for BNC+HYB+SYM which exhausts all possible pairs before the 5K limit) sorted by frequency of the source language word. Results with different lexicons for the three language pairs are summarized in Table 2, while Table 1 shows examples of nearest neighbour words for a Spanish word not present in any of the training lexicons. 
Table 1 provides evidence for our first insight: Type 4 models do not necessarily require external lexicons (such as the BNC+GT model) to learn a semantically plausible SBWES (i.e., the lists of nearest neighbours are similar for all lexicons excluding ORTHO). Table 1 also suggests that the choice of seed lexicon pairs may strongly influence the properties of the resulting SBWES. Due to its design, ORTHO finds a mapping which naturally brings foreign words appearing in the English vocabulary closer in the induced SBWES.", "cite_spans": [ { "start": 97, "end": 120, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 381, "end": 403, "text": "Table 2, while Table 1", "ref_id": "TABREF1" }, { "start": 510, "end": 517, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 771, "end": 778, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "This first batch of quantitative results already shows that Type 4 models with inexpensive automatically induced lexicons (i.e., HYBWE) are on a par with or even better than Type 4 models relying on external resources or translation systems. In addition, the best reported scores using the more constrained symmetric BNC/HFQ+HYB+SYM lexicon variants are higher than those for three baseline models (of Type 1, Type 2, and Type 3) that previously held the highest scores on the BLL test sets. These improvements over the baseline models and BNC+GT are statistically significant (using McNemar's statistical significance test, p < 0.05). Table 2 also suggests that a careful selection of reliable pairs can lead to peak performances even with a lower number of pairs, i.e., see the results of BNC+HYB+SYM.", "cite_spans": [], "ref_spans": [ { "start": 633, "end": 640, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "Exp. II: Lexicon Size BLL results for ES-EN and NL-EN obtained by varying the seed lexicon sizes are displayed in Fig. 2(a) and 2(b). Results for IT-EN closely follow the patterns observed with ES-EN. BNC+HYB+SYM and HFQ+HYB+SYM -the two models that do not blindly use all potential training pairs, but rely on sets of symmetric pairs (i.e., they include the simple measure of translation pair reliability) -display the best performance across all lexicon sizes. The finding confirms the intuition that a more intelligent pair selection strategy is essential for Type 4 BWE models. HFQ+HYB+SYM -a simple hybrid BWE model (HYBWE) combining a document-level Type 3 model with a Type 4 model and translation reliability detection -is the strongest BWE model overall (see also Table 2 again). HYBWE-based models which do not perform any pair selection (i.e., BNC/HFQ+HYB+ASYM) closely follow the behaviour of the GT-based model. This demonstrates that an external lexicon or translation system may be safely replaced by a document-level embedding model without any significant performance loss in the BLL task. The ORTHO-based model falls short of its competitors. However, we observe that even this model, with a learning setting relying on the cheapest bilingual signal, may lead to reasonable BLL scores, especially for the more related NL-EN pair.", "cite_spans": [], "ref_spans": [ { "start": 114, "end": 123, "text": "Fig. 2(a)", "ref_id": "FIGREF0" }, { "start": 774, "end": 789, "text": "Table 2 again).", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "The two models with the symmetry constraint display a particularly strong performance in settings relying on scarce resources (i.e., when only a small portion of training pairs is available). For instance, HFQ+HYB+SYM scores 0.129 for ES-EN with only 200 training pairs (vs 0.002 with BNC+GT), and 0.529 with 500 pairs (vs 0.145 with BNC+GT). On the other hand, adding more pairs does not lead to an improved BLL performance. In fact, we observe a slow and steady decrease in performance with lexicons containing 10,000 and more training pairs for all HYBWE variants. The phenomenon may be attributed to the fact that highly frequent words receive more accurate representations in SBWES-1, and adding less frequent and, consequently, less accurate training pairs to the SBWES-2 learning process brings in additional noise. In plain language, when it comes to seed lexicons, Type 4 models prefer quality over quantity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "In the next experiment, we vary the threshold value THR (see Sect. 2.2) in the HFQ+HYB+SYM variant with the following values in comparison: 0.0 (None), 0.01, 0.025, 0.05, 0.075, 0.1. We investigate whether retaining only highly unambiguous pairs would lead to even better BLL performance. The results for all three language pairs are summarized in Fig. 3(a)-3(c) . The results for all variant models again decrease when employing larger lexicons (due to the usage of less frequent word pairs in training). We observe that a slightly stricter selection criterion (i.e., THR = 0.01, 0.025) also leads to slightly improved peak BLL scores for ES-EN and IT-EN around the 5K region. The improvements, however, are not statistically significant. On the other hand, an overly conservative pair selection criterion with higher threshold values significantly deteriorates the overall performance of HYBWE with HFQ+HYB+SYM. The conservative criteria discard plenty of potentially useful training pairs. Therefore, as one line of future research, we plan to investigate more sophisticated models for the selection of reliable seed lexicon pairs that will lead to a better trade-off between the lexicon size and the reliability of the pairs.", "cite_spans": [], "ref_spans": [ { "start": 348, "end": 362, "text": "Fig. 3(a)-3(c)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Exp. III: Translation Pair Reliability", "sec_num": null }, { "text": "Translations in Context (SWTC) In the final experiment, we test whether the findings originating from the BLL task generalize to another cross-lingual semantic task: suggesting word translations in context (SWTC), recently proposed by Vuli\u0107 and Moens (2014) . Given an occurrence of a polysemous word w \u2208 V S , the SWTC task is to choose the correct translation in the target language of that particular occurrence of w from the given set TC(w) = {t_1, . . . , t_tq}, TC(w) \u2286 V T , of its tq possible translations/meanings. Whereas in the BLL task the candidate search is performed over the entire vocabulary V T , the set TC(w) typically comprises only a few pre-selected words/senses. One may refer to TC(w) as an inventory of translation candidates for w. 
The best scoring translation candidate in the ranked list is then the correct translation for that particular occurrence of w, observing its local context Con(w). SWTC is an extended", "cite_spans": [ { "start": 233, "end": 255, "text": "Vuli\u0107 and Moens (2014)", "ref_id": "BIBREF48" } ], "ref_spans": [ { "start": 945, "end": 952, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Exp. IV: Another Task -Suggesting Word", "sec_num": null }, { "text": "cross-lingual variant of the task proposed by Huang et al. (2012) which evaluates monolingual context-sensitive semantic similarity of words in sentential context, and it is also very related to cross-lingual lexical substitution (Mihalcea et al., 2010) .", "cite_spans": [ { "start": 46, "end": 65, "text": "Huang et al. (2012)", "ref_id": "BIBREF17" }, { "start": 229, "end": 252, "text": "(Mihalcea et al., 2010)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Exp. IV: Another Task -Suggesting Word", "sec_num": null }, { "text": "Table 3: Acc 1 scores in the SWTC task. All seed lexicons contain 6K translation pairs, except for BNC+HYB+SYM (its sizes provided in parentheses). * denotes a statistically significant improvement over baselines and BNC+GT using McNemar's statistical significance test with the Bonferroni correction, p < 0.05.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exp. IV: Another Task -Suggesting Word", "sec_num": null }, { "text": "To isolate the performance of each BWE induction model from the details of the SWTC setup, we use the same approach with all models: we opt for the SWTC framework proven to yield excellent results with BWEs in the SWTC task. In short, the context bag Con(w) = {cw_1, . . . , cw_r} is obtained by harvesting all r words that occur with w in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exp. IV: Another Task -Suggesting Word", "sec_num": null }, { "text": "The vector representation of Con(w) is the d-dimensional embedding computed by aggregating over all word embeddings for each cw j \u2208 Con(w) using standard addition as the compositional operator (Mitchell and Lapata, 2008) which was proven a robust choice (Milajevs et al., 2014) :", "cite_spans": [ { "start": 192, "end": 219, "text": "(Mitchell and Lapata, 2008)", "ref_id": "BIBREF35" }, { "start": 253, "end": 276, "text": "(Milajevs et al., 2014)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Exp. IV: Another Task -Suggesting Word", "sec_num": null }, { "text": "Con(w) = cw_1 + cw_2 + . . . + cw_r (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exp. IV: Another Task -Suggesting Word", "sec_num": null }, { "text": "where cw j is the embedding of the j-th context word, and Con(w) is the resulting embedding of the context bag Con(w). Finally, for each t j \u2208 TC(w), the context-sensitive similarity with w is computed as: sim(w, t j , Con(w)) = cos(Con(w), t j ), where Con(w) and t j are representations of the (sentential) context bag and the candidate translation t j in the same SBWES. 11 The evaluation set consists of 360 sentences for 15 polysemous nouns (24 sentences for each noun) in each of the three languages: Spanish, Dutch, Italian, along with the gold standard single-word English translation given the sentential context. 12 Table 3 summarizes the results (Acc 1 scores) in the SWTC task. NO-CONTEXT refers to the context-insensitive majority baseline obtained by BNC+GT (i.e., it always chooses the most semantically similar translation candidate at the word type level). 
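For concreteness, the context-sensitive scoring used in this experiment (Eq. 4 followed by cosine similarity in the SBWES) can be sketched as follows; this is our own illustrative code with hypothetical names, not the evaluation scripts used for the reported results:

```python
import numpy as np

def best_translation_in_context(context_words, candidates, sbwes):
    """sbwes: dict mapping words of both languages to their d-dim SBWES vectors.
    The context bag Con(w) is represented by vector addition (Eq. 4) and every
    candidate translation t_j is ranked by cos(Con(w), t_j)."""
    vecs = [sbwes[cw] for cw in context_words if cw in sbwes]
    con = np.sum(vecs, axis=0)
    con = con / np.linalg.norm(con)
    def score(t):
        v = sbwes[t]
        return float(con @ (v / np.linalg.norm(v)))
    return max(candidates, key=score)
```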
We also report the results of the best SWTC model from Vuli\u0107 and Moens (2014) .", "cite_spans": [ { "start": 375, "end": 377, "text": "11", "ref_id": null }, { "start": 936, "end": 958, "text": "Vuli\u0107 and Moens (2014)", "ref_id": "BIBREF48" } ], "ref_spans": [ { "start": 634, "end": 641, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Exp. IV: Another Task -Suggesting Word", "sec_num": null }, { "text": "The results largely support the claims established with the BLL evaluation. An exter-nal seed lexicon of BNC+GT may be safely replaced by an automatically induced inexpensive seed lexicon (as in HYBWE with BNC+HYB+SYM/ASYM). The best performing models are again BNC+HYB+SYM and HFQ+HYB+SYM. The comparison of ASYM and SYM lexicon variants further suggests that filtering translation pairs using the symmetry constraint again leads to consistent improvements, but stricter selection criteria with higher thresholds do not lead to significant performance boosts, and may even hurt the performance (see the results for NL-EN). Various HYBWE variants significantly improve over baseline BWE models (Types 1-4), also outperforming previous best SWTC results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exp. IV: Another Task -Suggesting Word", "sec_num": null }, { "text": "We presented a detailed analysis of the importance and properties of seed bilingual lexicons in learning bilingual word embeddings (BWEs) which are valuable for many cross-lingual/multilingual NLP tasks. On the basis of the analysis, we proposed a simple yet effective hybrid bilingual word embedding model called HYBWE. It learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from an inexpensive seed document-level embedding space. The results in the tasks of (1) bilingual lexicon learning and (2) suggesting word translations in context demonstrate that -due to its careful selection of reliable translation pairs for seed lexicons -HYBWE outperforms benchmarking BWE induction models, all of which use more expensive bilingual signals for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "In future work, we plan to investigate other methods for seed pairs selection, settings with scarce resources (Agi\u0107 et al., 2015; Zhang et al., 2016) , other context types inspired by recent work in the monolingual settings (Levy and Goldberg, 2014a; Melamud et al., 2016) , as well as model adaptations that can work with multi-word expressions. Encouraged by the excellent results, we also plan to test the portability of the approach to more language pairs, and other tasks and applications.", "cite_spans": [ { "start": 110, "end": 129, "text": "(Agi\u0107 et al., 2015;", "ref_id": "BIBREF0" }, { "start": 130, "end": 149, "text": "Zhang et al., 2016)", "ref_id": "BIBREF53" }, { "start": 224, "end": 250, "text": "(Levy and Goldberg, 2014a;", "ref_id": "BIBREF25" }, { "start": 251, "end": 272, "text": "Melamud et al., 2016)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Another possible objective (found in the zero-shot learning literature) is a margin-based ranking loss. 
We omit the results with this objective for brevity, and due to the fact that similar trends are observed as with (more standard) linear maps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Other (more elaborate) reliability measures exist in the literature (Smith and Eisner, 2007; Tu and Honavar, 2012; Vuli\u0107 and Moens, 2013), but we do not observe any significant gains when resorting to the more complex reliability estimates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/gouwsmeister/bilbowa 7 For details concerning all baseline models, the reader is encouraged to check the relevant literature. 8 Suggested by the authors (personal correspondence). 9 http://linguatools.org/tools/corpora/ 10 https://github.com/karlmoritz/bicvm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The same ranking of different models (with lower absolute scores) is observed when adapting the monolingual lexical substitution framework of Melamud et al. (2015) to the SWTC task as done by. 12 The SWTC evaluation set is available online at: http://aclweb.org/anthology/attachments/D/D14/D14-1040.Attachment.zip", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no. 648909). The authors are grateful to Roi Reichart and the anonymous reviewers for their helpful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "If all you have is a bit of the Bible: Learning POS taggers for truly low-resource languages", "authors": [ { "first": "\u017deljko", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "268--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u017deljko Agi\u0107, Dirk Hovy, and Anders S\u00f8gaard. 2015. If all you have is a bit of the Bible: Learning POS taggers for truly low-resource languages. In ACL, pages 268-272.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Polyglot: Distributed word representations for multilingual NLP", "authors": [ { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2013, "venue": "CoNLL", "volume": "", "issue": "", "pages": "183--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. 
In CoNLL, pages 183-192.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Massively multilingual word embeddings", "authors": [ { "first": "Waleed", "middle": [], "last": "Ammar", "suffix": "" }, { "first": "George", "middle": [], "last": "Mulcaire", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings. CoRR, abs/1602.01925.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Germ\u00e1n", "middle": [], "last": "Kruszewski", "suffix": "" } ], "year": 2014, "venue": "ACL", "volume": "", "issue": "", "pages": "238--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL, pages 238-247.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning bilingual lexicons using the visual similarity of labeled web images", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2011, "venue": "IJCAI", "volume": "", "issue": "", "pages": "1764--1769", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shane Bergsma and Benjamin Van Durme. 2011. Learning bilingual lexicons using the visual similar- ity of labeled web images. In IJCAI, pages 1764- 1769.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An autoencoder approach to learning bilingual word representations", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Sarath", "suffix": "" }, { "first": "Stanislas", "middle": [], "last": "Chandar", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Lauly", "suffix": "" }, { "first": "", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "M", "middle": [], "last": "Mitesh", "suffix": "" }, { "first": "Balaraman", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "", "middle": [], "last": "Ravindran", "suffix": "" }, { "first": "C", "middle": [], "last": "Vikas", "suffix": "" }, { "first": "Amrita", "middle": [], "last": "Raykar", "suffix": "" }, { "first": "", "middle": [], "last": "Saha", "suffix": "" } ], "year": 2014, "venue": "NIPS", "volume": "", "issue": "", "pages": "1853--1861", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarath A.P. Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas C. Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. 
In NIPS, pages 1853-1861.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "740--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural net- works. In EMNLP, pages 740-750.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [ "P" ], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Trans-gram, fast cross-lingual word embeddings", "authors": [ { "first": "Jocelyn", "middle": [], "last": "Coulmance", "suffix": "" }, { "first": "Jean-Marc", "middle": [], "last": "Marty", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Amine", "middle": [], "last": "Benhalloum", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1109--1113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jocelyn Coulmance, Jean-Marc Marty, Guillaume Wen- zek, and Amine Benhalloum. 2015. Trans-gram, fast cross-lingual word embeddings. In EMNLP, pages 1109-1113.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving zero-shot learning by mitigating the hubness problem", "authors": [ { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2015, "venue": "ICLR Workshop Papers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgiana Dinu, Angeliki Lazaridou, and Marco Ba- roni. 2015. Improving zero-shot learning by miti- gating the hubness problem. In ICLR Workshop Pa- pers.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improving vector space word representations using multilingual correlation", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2014, "venue": "EACL", "volume": "", "issue": "", "pages": "462--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. 
In EACL, pages 462-471.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A geometric view on bilingual lexicon extraction from comparable corpora", "authors": [ { "first": "\u00c9ric", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "Jean-Michel", "middle": [], "last": "Renders", "suffix": "" }, { "first": "Irina", "middle": [], "last": "Matveeva", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Goutte", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "D\u00e9jean", "suffix": "" } ], "year": 2004, "venue": "ACL", "volume": "", "issue": "", "pages": "526--533", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u00c9ric Gaussier, Jean-Michel Renders, Irina Matveeva, Cyril Goutte, and Herv\u00e9 D\u00e9jean. 2004. A geometric view on bilingual lexicon extraction from compara- ble corpora. In ACL, pages 526-533.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BilBOWA: Fast bilingual distributed representations without word alignments", "authors": [ { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" } ], "year": 2015, "venue": "ICML", "volume": "", "issue": "", "pages": "748--756", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed repre- sentations without word alignments. In ICML, pages 748-756.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Cross-lingual dependency parsing based on distributed representations", "authors": [ { "first": "Jiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "1234--1244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual depen- dency parsing based on distributed representations. In ACL, pages 1234-1244.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Multilingual distributed representations without word alignment", "authors": [], "year": 2014, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014a. Mul- tilingual distributed representations without word alignment. In ICLR.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Multilingual models for compositional distributed semantics", "authors": [], "year": 2014, "venue": "ACL", "volume": "", "issue": "", "pages": "58--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014b. Mul- tilingual models for compositional distributed se- mantics. 
In ACL, pages 58-68.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Creating bilingual lexica using reference wordlists for alignment of monolingual semantic vector spaces", "authors": [ { "first": "Jon", "middle": [], "last": "Holmlund", "suffix": "" }, { "first": "Magnus", "middle": [], "last": "Sahlgren", "suffix": "" }, { "first": "Jussi", "middle": [], "last": "Karlgren", "suffix": "" } ], "year": 2005, "venue": "NODALIDA", "volume": "", "issue": "", "pages": "71--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jon Holmlund, Magnus Sahlgren, and Jussi Karlgren. 2005. Creating bilingual lexica using reference wordlists for alignment of monolingual semantic vector spaces. In NODALIDA, pages 71-77.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improving word representations via global context and multiple word prototypes", "authors": [ { "first": "Eric", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "ACL", "volume": "", "issue": "", "pages": "873--882", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric H. Huang, Richard Socher, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In ACL, pages 873-882.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "H\u00e9ctor Mart\u00ednez Alonso, and Anders S\u00f8gaard", "authors": [ { "first": "Anders", "middle": [], "last": "Johannsen", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "2062--2066", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Johannsen, H\u00e9ctor Mart\u00ednez Alonso, and An- ders S\u00f8gaard. 2015. Any-language frame-semantic parsing. In EMNLP, pages 2062-2066.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Putting frequencies in the dictionary", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 1997, "venue": "International Journal of Lexicography", "volume": "10", "issue": "2", "pages": "135--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff. 1997. Putting frequencies in the dictionary. International Journal of Lexicography, 10(2):135-155.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors", "authors": [ { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" } ], "year": null, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urta- sun, and Sanja Fidler. 2015. Skip-thought vectors. 
In NIPS.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Inducing crosslingual distributed representations of words", "authors": [ { "first": "Alexandre", "middle": [], "last": "Klementiev", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Binod", "middle": [], "last": "Bhattarai", "suffix": "" } ], "year": 2012, "venue": "COLING", "volume": "", "issue": "", "pages": "1459--1474", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representa- tions of words. In COLING, pages 1459-1474.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning bilingual word representations by marginalizing alignments", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Ko\u010disk\u00fd", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "ACL", "volume": "", "issue": "", "pages": "224--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Ko\u010disk\u00fd, Karl Moritz Hermann, and Phil Blun- som. 2014. Learning bilingual word representations by marginalizing alignments. In ACL, pages 224- 229.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Hubness and pollution: Delving into cross-space mapping for zero-shot learning", "authors": [ { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "270--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angeliki Lazaridou, Georgiana Dinu, and Marco Ba- roni. 2015. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In ACL, pages 270-280.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Judgment language matters: Multilingual vector space models for judgment language aware lexical semantics", "authors": [ { "first": "Ira", "middle": [], "last": "Leviant", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ira Leviant and Roi Reichart. 2015. Judgment lan- guage matters: Multilingual vector space models for judgment language aware lexical semantics. CoRR, abs/1508.00106.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Dependencybased word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "ACL", "volume": "", "issue": "", "pages": "302--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In ACL, pages 302-308.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Neural word embedding as implicit matrix factorization", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "NIPS", "volume": "", "issue": "", "pages": "2177--2185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014b. 
Neural word embedding as implicit matrix factorization. In NIPS, pages 2177-2185.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Improving distributional similarity with lessons learned from word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Transactions of the ACL", "volume": "3", "issue": "", "pages": "211--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the ACL, 3:211-225.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Bilingual word representations with monolingual quality in mind", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", "volume": "", "issue": "", "pages": "151--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A simple word embedding model for lexical substitution", "authors": [ { "first": "Oren", "middle": [], "last": "Melamud", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Melamud, Omer Levy, and Ido Dagan. 2015. A simple word embedding model for lexical substitu- tion. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 1-7.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The role of context types and dimensionality in learning word embeddings", "authors": [ { "first": "Oren", "middle": [], "last": "Melamud", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2016, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Melamud, David McClosky, Siddharth Patward- han, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embed- dings. 
In NAACL-HLT.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "SemEval-2010 task 2: Cross-lingual lexical substitution", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Ravi", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" } ], "year": 2010, "venue": "SEMEVAL", "volume": "", "issue": "", "pages": "9--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea, Ravi Sinha, and Diana McCarthy. 2010. SemEval-2010 task 2: Cross-lingual lexical substitution. In SEMEVAL, pages 9-14.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Exploiting similarities among languages for machine translation", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for ma- chine translation. CoRR, abs/1309.4168.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Gregory", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "NIPS", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed rep- resentations of words and phrases and their compo- sitionality. In NIPS, pages 3111-3119.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Evaluating neural word representations in tensor-based compositional settings", "authors": [ { "first": "Dmitrijs", "middle": [], "last": "Milajevs", "suffix": "" }, { "first": "Dimitri", "middle": [], "last": "Kartsaklis", "suffix": "" }, { "first": "Mehrnoosh", "middle": [], "last": "Sadrzadeh", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Purver", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "708--719", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitrijs Milajevs, Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Matthew Purver. 2014. Evaluating neural word representations in tensor-based compo- sitional settings. In EMNLP, pages 708-719.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Vector-based models of semantic composition", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "ACL", "volume": "", "issue": "", "pages": "236--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. 
In ACL, pages 236-244.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A dual embedding space model for document ranking", "authors": [ { "first": "Eric", "middle": [ "T" ], "last": "Bhaskar Mitra", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Nalisnick", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Craswell", "suffix": "" }, { "first": "", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhaskar Mitra, Eric T. Nalisnick, Nick Craswell, and Rich Caruana. 2016. A dual embed- ding space model for document ranking. CoRR, abs/1602.01137.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Crosslingual induction of selectional preferences with bilingual vector spaces", "authors": [ { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" } ], "year": 2010, "venue": "NAACL", "volume": "", "issue": "", "pages": "921--929", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yves Peirsman and Sebastian Pad\u00f3. 2010. Cross- lingual induction of selectional preferences with bilingual vector spaces. In NAACL, pages 921-929.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Learning cross-lingual word embeddings via matrix co-factorization", "authors": [ { "first": "Tianze", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "567--572", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianze Shi, Zhiyuan Liu, Yang Liu, and Maosong Sun. 2015. Learning cross-lingual word embeddings via matrix co-factorization. In ACL, pages 567-572.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Bootstrapping feature-rich dependency parsers with entropic priors", "authors": [ { "first": "David", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2007, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "667--677", "other_ids": {}, "num": null, "urls": [], "raw_text": "David A. Smith and Jason Eisner. 2007. Bootstrapping feature-rich dependency parsers with entropic priors. In EMNLP-CoNLL, pages 667-677.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Leveraging monolingual data for crosslingual compositional word representations", "authors": [ { "first": "Hubert", "middle": [], "last": "Soyer", "suffix": "" }, { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" }, { "first": "Akiko", "middle": [], "last": "Aizawa", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hubert Soyer, Pontus Stenetorp, and Akiko Aizawa. 2015. Leveraging monolingual data for crosslingual compositional word representations. In ICLR.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Parallel data, tools and interfaces in OPUS", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "2214--2218", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2012. 
Parallel data, tools and inter- faces in OPUS. In LREC, pages 2214-2218.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Cross-lingual wikification using multilingual embeddings", "authors": [ { "first": "Chen-Tse", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wikification using multilingual embeddings. In NAACL-HLT.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Unambiguity regularization for unsupervised learning of probabilistic grammars", "authors": [ { "first": "Kewei", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Vasant", "middle": [], "last": "Honavar", "suffix": "" } ], "year": 2012, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "1324--1334", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kewei Tu and Vasant Honavar. 2012. Unambiguity regularization for unsupervised learning of proba- bilistic grammars. In EMNLP-CoNLL, pages 1324- 1334.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Word representations: A simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [ "P" ], "last": "Turian", "suffix": "" }, { "first": "Lev-Arie", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph P. Turian, Lev-Arie Ratinov, and Yoshua Ben- gio. 2010. Word representations: A simple and gen- eral method for semi-supervised learning. In ACL, pages 384-394.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "From frequency to meaning: vector space models of semantics", "authors": [ { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Turney", "suffix": "" }, { "first": "", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2010, "venue": "Journal of Artifical Intelligence Research", "volume": "37", "issue": "1", "pages": "141--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: vector space models of se- mantics. Journal of Artifical Intelligence Research, 37(1):141-188.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Cross-lingual models of word embeddings: An empirical comparison", "authors": [ { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word em- beddings: An empirical comparison. 
In ACL.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else)", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1613--1624", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2013. A study on bootstrapping bilingual vector spaces from non- parallel data (and nothing else). In EMNLP, pages 1613-1624.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Probabilistic models of cross-lingual semantic similarity in context based on latent cross-lingual concepts induced from comparable data", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "349--362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2014. Proba- bilistic models of cross-lingual semantic similarity in context based on latent cross-lingual concepts in- duced from comparable data. In EMNLP, pages 349-362.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2015, "venue": "SIGIR", "volume": "", "issue": "", "pages": "363--372", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2015. Mono- lingual and cross-lingual information retrieval mod- els based on (bilingual) word embeddings. In SIGIR, pages 363-372.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Bilingual distributed word representations from document-aligned comparable data", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2016, "venue": "Journal of Artificial Intelligence Research", "volume": "55", "issue": "", "pages": "953--994", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2016. Bilingual distributed word representations from document-aligned comparable data. Journal of Ar- tificial Intelligence Research, 55:953-994.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Multi-modal representations for improved bilingual lexicon learning", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107, Douwe Kiela, Stephen Clark, and Marie- Francine Moens. 2016. Multi-modal representa- tions for improved bilingual lexicon learning. 
In ACL.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "WSABIE: scaling up to large vocabulary image annotation", "authors": [ { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" } ], "year": 2011, "venue": "IJCAI", "volume": "", "issue": "", "pages": "2764--2770", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. WSABIE: scaling up to large vocabulary im- age annotation. In IJCAI, pages 2764-2770.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Ten pairs to tag -Multilingual POS tagging via coarse mapping between embeddings", "authors": [ { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "David", "middle": [], "last": "Gaddy", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2016, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tag -Multilin- gual POS tagging via coarse mapping between em- beddings. In NAACL-HLT.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Bilingual word embeddings for phrase-based machine translation", "authors": [ { "first": "Will", "middle": [ "Y" ], "last": "Zou", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1393--1398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Will Y. Zou, Richard Socher, Daniel Cer, and Christo- pher D. Manning. 2013. Bilingual word em- beddings for phrase-based machine translation. In EMNLP, pages 1393-1398.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "BLL results (Acc 1 ) across different seed lexicon sizes for all lexicons. x axes are in log scale.", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "BLL results across different threshold (THR) values with the HFQ+HYB+SYM seed lexicons. Higher thresholds imply less ambiguous word translation pairs. Thicker horizontal lines denote the best score from any of the baseline models. x axes are in log scale.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "text": "Other parameters are: 15 epochs, 15 negatives, subsampling rate 1e \u2212 4.", "type_str": "table", "content": "
BNC+GT | BNC+HYB+ASYM | BNC+HYB+SYM | HFQ+HYB+ASYM | HFQ+HYB+SYM | ORTHO
casamiento | casamiento | casamiento | casamiento | casamiento | casamiento
marriage | marry | marriage | marriage | marriage | maría
marry | marriage | marry | marry | marry | señor
marrying | marrying | marrying | betrothal | betrothal | doña
betrothal | wed | wedding | marrying | marrying | juana
wedding | wedding | betrothal | wedding | wedding | noche
wed | betrothal | wed | daughter | wed | amor
elopement | remarry | marriages | betrothed | elopement | guerra
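The neighbour lists above are obtained by ranking English words by cosine similarity to the mapped Spanish query vector in the shared space. The following is only a minimal retrieval sketch in numpy, not the authors' code; the array and vocabulary names (en_matrix, en_vocab, es_vecs, W) are hypothetical, with W standing for whatever mapping the given seed lexicon produced:

```python
import numpy as np

def nearest_neighbours(query_vec, en_matrix, en_vocab, k=8):
    """Rank English words by cosine similarity to a query vector in the shared space."""
    q = query_vec / np.linalg.norm(query_vec)
    M = en_matrix / np.linalg.norm(en_matrix, axis=1, keepdims=True)
    sims = M @ q                       # cosine similarity to every EN word
    top = np.argsort(-sims)[:k]        # indices of the k closest EN words
    return [(en_vocab[i], float(sims[i])) for i in top]

# Example (hypothetical names): neighbours of "casamiento" after mapping with W
# neighbours = nearest_neighbours(W @ es_vecs["casamiento"], en_matrix, en_vocab)
```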
", "num": null, "html": null }, "TABREF1": { "text": "Nearest EN neighbours of the Spanish word casamiento (marriage) with different seed lexicons.", "type_str": "table", "content": "
Model | ES-EN | NL-EN | IT-EN
BICVM (Type 1) | 0.532 | 0.583 | 0.569
BILBOWA (Type 2) | 0.632 | 0.636 | 0.647
BWESG (Type 3) | 0.676 | 0.626 | 0.643
BNC+GT (Type 4) | 0.677 | 0.641 | 0.646
ORTHO | 0.233 | 0.506 | 0.224
BNC+HYB+ASYM | 0.673 | 0.626 | 0.644
BNC+HYB+SYM (3388; 2738; 3145) | 0.681 | 0.658* | 0.663*
HFQ+HYB+ASYM | 0.673 | 0.596 | 0.635
HFQ+HYB+SYM | 0.695* | 0.657* | 0.667*
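The Acc1 figures in this table are the fraction of test source words whose single nearest English neighbour in the shared space equals the gold translation. A small scoring sketch under the same assumptions as above (hypothetical names; not the authors' actual evaluation code):

```python
import numpy as np

def bll_accuracy_at_1(test_pairs, es_vecs, en_matrix, en_vocab, W):
    """Acc@1 over a gold lexicon: test_pairs is a list of (es_word, gold_en_word)."""
    M = en_matrix / np.linalg.norm(en_matrix, axis=1, keepdims=True)
    correct = 0
    for es_word, gold_en in test_pairs:
        q = W @ es_vecs[es_word]                 # map the ES vector into the shared space
        q = q / np.linalg.norm(q)
        pred = en_vocab[int(np.argmax(M @ q))]   # top-ranked EN candidate
        correct += int(pred == gold_en)
    return correct / len(test_pairs)
```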
", "num": null, "html": null }, "TABREF2": { "text": "Acc", "type_str": "table", "content": "
1 scores in a standard BLL setup (for Type 4 models): all seed lexicons contain 5K translation pairs, except for BNC+HYB+SYM, whose sizes are given in parentheses in the table. * denotes a statistically significant improvement over the baselines and over BNC+GT, according to McNemar's statistical significance test with the Bonferroni correction, p < 0.05.
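The starred results are judged significant with McNemar's test over paired per-word outcomes (correct/incorrect for two models on the same test words), Bonferroni-adjusted for the number of comparisons. A rough sketch of such a check, assuming the continuity-corrected chi-square form of the test (the paper does not state which variant was used) and hypothetical inputs:

```python
import numpy as np
from scipy.stats import chi2

def mcnemar_p(hits_a, hits_b):
    """McNemar's test on paired per-word outcomes of two BLL models.

    hits_a, hits_b: boolean arrays, True where a model's top translation is correct.
    """
    hits_a = np.asarray(hits_a, dtype=bool)
    hits_b = np.asarray(hits_b, dtype=bool)
    b = int(np.sum(hits_a & ~hits_b))        # words only model A translates correctly
    c = int(np.sum(~hits_a & hits_b))        # words only model B translates correctly
    if b + c == 0:
        return 1.0                           # the two models never disagree
    stat = (abs(b - c) - 1) ** 2 / (b + c)   # continuity-corrected statistic
    return float(chi2.sf(stat, df=1))

# Bonferroni correction over m comparisons: report min(1, m * p),
# or equivalently require p < 0.05 / m for significance.
```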
", "num": null, "html": null } } } }