{ "paper_id": "P19-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:32:08.201200Z" }, "title": "Bilingual Lexicon Induction with Semi-supervision in Non-Isometric Embedding Spaces", "authors": [ { "first": "Barun", "middle": [], "last": "Patra", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "bpatra@cs.cmu.edu" }, { "first": "Joel", "middle": [], "last": "Ruben", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "" }, { "first": "Antony", "middle": [], "last": "Moniz", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "jrmoniz@cs.cmu.edu" }, { "first": "Sarthak", "middle": [], "last": "Garg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "sarthakg@cs.cmu.edu" }, { "first": "Matthew", "middle": [ "R" ], "last": "Gormley", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "mgormley@cs.cmu.edu" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "gneubig@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent work on bilingual lexicon induction (BLI) has frequently depended either on aligned bilingual lexicons or on distribution matching, often with an assumption about the isometry of the two spaces. We propose a technique to quantitatively estimate this assumption of the isometry between two embedding spaces and empirically show that this assumption weakens as the languages in question become increasingly etymologically distant. We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS)-a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique. Our proposed method obtains state of the art results on 15 of 18 language pairs on the MUSE dataset, and does particularly well when the embedding spaces don't appear to be isometric. In addition, we also show that adding supervision stabilizes the learning procedure, and is effective even with minimal supervision. \u21e4", "pdf_parse": { "paper_id": "P19-1018", "_pdf_hash": "", "abstract": [ { "text": "Recent work on bilingual lexicon induction (BLI) has frequently depended either on aligned bilingual lexicons or on distribution matching, often with an assumption about the isometry of the two spaces. We propose a technique to quantitatively estimate this assumption of the isometry between two embedding spaces and empirically show that this assumption weakens as the languages in question become increasingly etymologically distant. We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS)-a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique. Our proposed method obtains state of the art results on 15 of 18 language pairs on the MUSE dataset, and does particularly well when the embedding spaces don't appear to be isometric. 
In addition, we also show that adding supervision stabilizes the learning procedure, and is effective even with minimal supervision. \u21e4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Bilingual lexicon induction (BLI), the task of finding corresponding words in two languages from comparable corpora (Haghighi et al., 2008; Xing et al., 2015; Zhang et al., 2017a; Artetxe et al., 2017; Lample et al., 2018) , finds use in numerous NLP tasks like POS tagging , parsing (Xiao and Guo, 2014) , document classification (Klementiev et al., 2012) , and machine translation (Irvine and Callison-Burch, 2013; Qi et al., 2018) .", "cite_spans": [ { "start": 116, "end": 139, "text": "(Haghighi et al., 2008;", "ref_id": "BIBREF11" }, { "start": 140, "end": 158, "text": "Xing et al., 2015;", "ref_id": "BIBREF28" }, { "start": 159, "end": 179, "text": "Zhang et al., 2017a;", "ref_id": "BIBREF29" }, { "start": 180, "end": 201, "text": "Artetxe et al., 2017;", "ref_id": "BIBREF1" }, { "start": 202, "end": 222, "text": "Lample et al., 2018)", "ref_id": "BIBREF17" }, { "start": 284, "end": 304, "text": "(Xiao and Guo, 2014)", "ref_id": "BIBREF27" }, { "start": 331, "end": 356, "text": "(Klementiev et al., 2012)", "ref_id": "BIBREF16" }, { "start": 383, "end": 416, "text": "(Irvine and Callison-Burch, 2013;", "ref_id": "BIBREF12" }, { "start": 417, "end": 433, "text": "Qi et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most work on BLI uses methods that learn a mapping between two word embedding spaces \u21e4 Equal Contribution \u21e4 Code to replicate the experiments presented in this work can be found at https://github.com/joelmoniz/ BLISS. (Ruder, 2017) , which makes it possible to leverage pre-trained embeddings learned on large monolingual corpora. A commonly used method for BLI, which is also empirically effective, involves learning an orthogonal mapping between the two embedding spaces (Mikolov et al. (2013a) , Xing et al. (2015) , Artetxe et al. (2016) , Smith et al. (2017) ). However, learning an orthogonal mapping inherently assumes that the embedding spaces for the two languages are isometric (subsequently referred to as the orthogonality assumption). This is a particularly strong assumption that may not necessarily hold true, and consequently we can expect methods relying on this assumption to provide sub-optimal results. In this work, we examine this assumption, identify where it breaks down, and propose a method to alleviate this problem.", "cite_spans": [ { "start": 218, "end": 231, "text": "(Ruder, 2017)", "ref_id": "BIBREF24" }, { "start": 473, "end": 496, "text": "(Mikolov et al. (2013a)", "ref_id": "BIBREF19" }, { "start": 499, "end": 517, "text": "Xing et al. (2015)", "ref_id": "BIBREF28" }, { "start": 520, "end": 541, "text": "Artetxe et al. (2016)", "ref_id": "BIBREF0" }, { "start": 544, "end": 563, "text": "Smith et al. (2017)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first present a theoretically motivated approach based on the Gromov-Hausdroff (GH) distance to check the extent to which the orthogonality assumption holds ( \u00a72). We show that the constraint indeed does not hold, particularly for etymologically and typologically distant language pairs. 
Motivated by the above observation, we then propose a framework for Bilingual Lexicon Induction with Semi-Supervision (BLISS) ( \u00a73.2). Besides addressing the limitations of the orthogonality assumption, BLISS also addresses the shortcomings of purely supervised and purely unsupervised methods for BLI ( \u00a73.1). Our framework jointly optimizes for supervised embedding alignment, unsupervised distribution matching, and a weak orthogonality constraint in the form of a back-translation loss. Our results show that the different losses work in tandem to learn a better mapping than any one can on its own ( \u00a74.2). In particular, we show that two instantiations of the semi-supervised framework, corresponding to different supervised loss objectives, outperform their supervised and unsupervised counterparts on numerous language pairs across two datasets. Our best model outperforms the state-of-the-art on 10 of 16 language pairs on the MUSE datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our analysis ( \u00a74.4) demonstrates that adding supervision to the learning objective, even in the form of a small seed dictionary, substantially improves the stability of the learning procedure. In particular, for cases where either the embedding spaces are far apart according to the GH distance or the quality of the original embeddings is poor, our framework converges where the unsupervised baselines fail to. We also show that for the same amount of available supervised data, leveraging unsupervised learning allows us to obtain superior performance over baseline supervised, semi-supervised, and unsupervised methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Both supervised and unsupervised BLI often rely on the assumption that the word embedding spaces are isometric to each other. Thus, they learn an orthogonal mapping matrix to map one space to another Xing et al. (2015) .", "cite_spans": [ { "start": 200, "end": 218, "text": "Xing et al. (2015)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Isometry of Embedding Spaces", "sec_num": "2" }, { "text": "This orthogonality assumption might not always hold, particularly for the cases when the language pairs in consideration are etymologically distant - Zhang et al. (2017b) and S\u00f8gaard et al. (2018) provide evidence of this by observing a higher Earth Mover's distance and eigenvector similarity metric, respectively, between etymologically distant languages. In this work, we propose a novel way of a priori analyzing the validity of the orthogonality assumption using the Gromov-Hausdorff (GH) distance to check how well two language embedding spaces can be aligned under an isometric transformation \u2020 .", "cite_spans": [ { "start": 150, "end": 170, "text": "Zhang et al. (2017b)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Isometry of Embedding Spaces", "sec_num": "2" }, { "text": "The Hausdorff distance between two metric spaces is a measure of the worst case or the diametric distance between the spaces. Intuitively, it measures the distance between the nearest neighbours that are the farthest apart. 
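As a quick illustration of this intuition, the following is a minimal sketch (not the authors' implementation) of the Hausdorff distance between two finite sets of embeddings, assuming Euclidean distance and points stored as rows of NumPy arrays; the formal definition follows below.

```python
import numpy as np

def hausdorff(X, Y):
    """Hausdorff distance between two finite point sets (rows are points)."""
    # Pairwise Euclidean distances, shape (|X|, |Y|).
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # sup_x inf_y d(x, y) and sup_y inf_x d(x, y), then take the larger.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```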
Concretely, given two metric spaces X and Y with a distance function d(., .), the Hausdorff distance is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Isometry of Embedding Spaces", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H(X, Y) = max{ sup_{x \u2208 X} inf_{y \u2208 Y} d(x, y), sup_{y \u2208 Y} inf_{x \u2208 X} d(x, y) }.", "eq_num": "(1)" } ], "section": "Isometry of Embedding Spaces", "sec_num": "2" }, { "text": "The Gromov-Hausdorff distance minimizes the Hausdorff distance over all isometric transforms \u2020 Note that since we mean center the embeddings, the orthogonal transforms are equivalent to isometric transforms between X and Y, thereby providing a quantitative estimate of the isometry of the two spaces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Isometry of Embedding Spaces", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H_{GH}(X, Y) = inf_{f,g} H(f(X), g(Y)),", "eq_num": "(2)" } ], "section": "Isometry of Embedding Spaces", "sec_num": "2" }, { "text": "where f, g belong to the set of isometric transforms. Computing the Gromov-Hausdorff distance involves solving hard combinatorial problems, which is intractable in general. Following Chazal et al. (2009) , we approximate it by computing the Bottleneck distance between the two metric spaces (the details of which can be found in Appendix ( \u00a7A.1)). As can be observed from Table 2, the GH distances are higher for etymologically distant language pairs.", "cite_spans": [ { "start": 169, "end": 199, "text": "Following Chazal et al. (2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Isometry of Embedding Spaces", "sec_num": "2" }, { "text": "In this section, we motivate and define our semi-supervised framework for BLI. First we describe issues with purely supervised and unsupervised methods, and then lay out the framework for tackling them along with the orthogonality constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised Framework", "sec_num": "3" }, { "text": "Most purely supervised methods for BLI just use words in an aligned bilingual dictionary and do not utilize the rich information present in the topology of the embeddings' space. Purely unsupervised methods, on the other hand, can suffer from poor performance if the distributions of the embedding spaces of the two languages are very different from each other. Moreover, unsupervised methods can successfully align clusters of words, but miss out on fine-grained alignment within the clusters. We explicitly show the aforementioned problem of purely unsupervised methods with the help of the toy dataset shown in Figures 1a and 1b. In this dataset, due to the density difference between the two large blue clusters, unsupervised matching is consistently able to align them properly, but has trouble aligning the smaller embedded green and red sub-clusters. The correct transformation of the source space is a clockwise 90\u00b0 rotation followed by a reflection along the x-axis. 
Unsupervised matching converges to this correct transformation only half of the time; in the rest of the cases, it ignores the alignment of the sub-clusters and converges to a 90\u00b0 counter-clockwise transformation, as shown in Figure 1c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Drawbacks of Purely Supervised and Unsupervised Methods", "sec_num": "3.1" }, { "text": "We also find evidence of this problem in the real datasets used in our experiments, as shown in Table 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Drawbacks of Purely Supervised and Unsupervised Methods", "sec_num": "3.1" }, { "text": "Source \u2192 Target | Incorrect Prediction: aunt \u2192 \u0442\u0435\u0442\u044f | \u0431\u0430\u0431\u0443\u0448\u043a\u0430 (Grandmother); uruguay \u2192 \u0443\u0440\u0443\u0433\u0432\u0430\u044f | \u0430\u0440\u0433\u0435\u043d\u0442\u0438\u043d\u044b (Argentina); regiments \u2192 \u043f\u043e\u043b\u043a\u043e\u0432 | \u043a\u0430\u0432\u0430\u043b\u0435\u0440\u0438\u0439\u0441\u043a\u0438\u0435 (Cavalry)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Drawbacks of Purely Supervised and Unsupervised Methods", "sec_num": "3.1" }, { "text": "comedian \u2192 \u043a\u043e\u043c\u0438\u043a | \u0430\u043a\u0442\u0451\u0440 (Actor). Table 1: Words for which the semi-supervised method predicts correctly, but the unsupervised method doesn't. The unsupervised method is able to guess the general family but fails to pinpoint the exact match. It can be seen that the unsupervised method aligns clusters of similar words, but is poor at fine-grained alignment. We hypothesize that this problem can be resolved by giving it some supervision in the form of matching anchor points inside these sub-clusters, which correctly aligns them. Analogously, for the task of BLI, generating a small supervised seed lexicon for providing the requisite supervision is generally feasible for most language pairs, through bilingual speakers, existing dictionary resources, or Wikipedia language links.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 38, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Drawbacks of Purely Supervised and Unsupervised Methods", "sec_num": "3.1" }, { "text": "In order to alleviate the problems with the orthogonality constraint and with purely unsupervised and supervised approaches, we propose a semi-supervised framework, described below. Let X = {x_1 . . . x_n} and Y = {y_1 . . . y_m}, x_i, y_i \u2208 R^d, be two sets of word embeddings from the source and target language respectively, and let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "S = {(x_1^s, y_1^s) . . . (x_k^s, y_k^s)} denote the aligned bilingual word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "For learning a linear mapping matrix W that maps X to Y, we leverage unsupervised distribution matching, alignment of known word pairs, and a data-driven weak orthogonality constraint. Unsupervised Distribution Matching: Given all word embeddings X and Y, the unsupervised loss L_{W|D} aims to match the distribution of both embedding spaces. In particular, for our formulation, we use an adversarial distribution matching objective, similar to the work of Lample et al. (2018) . Specifically, a mapping matrix W from the source to the target is learned to fool a discriminator D, which is trained to distinguish between the mapped source embeddings W X = {W x_1 . . . 
W x_n} and Y. We parameterize our discriminator with an MLP, and alternately optimize the mapping matrix and the discriminator with the corresponding objectives:", "cite_spans": [ { "start": 450, "end": 470, "text": "Lample et al. (2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "L_{D|W} = -(1/n) \u2211_{x_i \u2208 X} log(1 - D(W x_i)) - (1/m) \u2211_{x_i \u2208 Y} log D(x_i);  L_{W|D} = -(1/n) \u2211_{x_i \u2208 X} log D(W x_i) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "Aligning Known Word Pairs: Given the aligned bilingual word embeddings S, we aim to minimize a similarity-based loss f_s over the corresponding matched pairs of words, thereby maximizing the similarity between them. Specifically, the loss is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "L_{W|S} = (1/|S|) \u2211_{(x_i^s, y_i^s) \u2208 S} f_s(W x_i^s, y_i^s) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "Weak Orthogonality Constraint: Given an embedding space X, we define a consistency loss f_a between x and W^T W x, x \u2208 X, which encourages the two to remain similar. This cyclic consistency loss L_{W|O} encourages orthogonality of the W matrix based on the joint optimization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "L_{W|O} = (1/|X|) \u2211_{x_i \u2208 X} f_a(x_i, W^T W x_i) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "The above loss term, used in conjunction with the supervised and unsupervised losses, allows the model to adjust the trade-off between orthogonality and accuracy based on the joint optimization. This is particularly helpful in the embedding spaces where the orthogonality constraint is violated ( \u00a74.4). Moreover, this data-driven orthogonality constraint is more robust than an enforced hard constraint ( \u00a7A.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "The final loss function for the mapping matrix is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = L_{W|D} + L_{W|S} + L_{W|O}", "eq_num": "(6)" } ], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "L_{W|D} enables the model to leverage the distributional information available from the two embedding spaces, thereby using all available monolingual data. On the other hand, L_{W|S} allows for the correct alignment of labeled pairs when available in the form of a small seed dictionary. Finally, L_{W|O} encourages orthogonality. One can think of L_{W|O} and L_{W|S} as working against each other when the spaces are not isometric. Jointly optimizing both helps the model to strike a balance between them in a data-driven manner, encouraging orthogonality but still allowing for a flexible mapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Semi-supervised Framework", "sec_num": "3.2" }, { "text": "For NN lookup, we use the CSLS distance defined by Lample et al. (2018) . 
Let r_A(b) be the average cosine similarity between b and its k-NN in A. Then CSLS is defined as", "cite_spans": [ { "start": 51, "end": 71, "text": "Lample et al. (2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Nearest Neighbor Retrieval", "sec_num": "3.3" }, { "text": "CSLS(x, y) = 2 cos(W x, y) - r_Y(W x) - r_{W X}(y). \u21e4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nearest Neighbor Retrieval", "sec_num": "3.3" }, { "text": "A common method of improving BLI is iteratively expanding the dictionary and refining the mapping matrix as a post-processing step (Artetxe et al., 2017; Lample et al., 2018) . Given a learnt mapping matrix, Procrustes refinement first finds", "cite_spans": [ { "start": 131, "end": 153, "text": "(Artetxe et al., 2017;", "ref_id": "BIBREF1" }, { "start": 154, "end": 174, "text": "Lample et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Iterative Procrustes Refinement and Hubness Mitigation", "sec_num": "3.4" }, { "text": "\u21e4 W X denotes the set {W x : x \u2208 X }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iterative Procrustes Refinement and Hubness Mitigation", "sec_num": "3.4" }, { "text": "the pairs of points in the two languages that are very closely matched by the mapping matrix, and constructs a bilingual dictionary from these pairs. These pairs of points are found by considering the nearest neighbors (NN) of the projected source words in the target space. The mapping matrix is then refined by setting it to be the Procrustes solution of the dictionary obtained. Iterative Procrustes Refinement (also referred to as Iterative Dictionary Expansion) applies the above step iteratively. However, learning an orthogonal linear map in such a way leads to some words (known as hubs) becoming nearest neighbors of a majority of other words (Radovanovi\u0107 et al., 2010; Dinu and Baroni, 2014) . In order to estimate the hubness of a point, (Radovanovi\u0107 et al., 2010) first compute N_x(k), the count of all points y such that x \u2208 k-NN(y), normalized over all k. The skewness of the distribution over N_x(k) is defined as the hubness of the point, with positive skew representing hubs and negative skew representing isolated points. An approximation to this would be N_x(1), i.e., the number of points for which x is the nearest neighbor.", "cite_spans": [ { "start": 647, "end": 673, "text": "(Radovanovi\u0107 et al., 2010;", "ref_id": "BIBREF23" }, { "start": 674, "end": 696, "text": "Dinu and Baroni, 2014)", "ref_id": "BIBREF7" }, { "start": 744, "end": 770, "text": "(Radovanovi\u0107 et al., 2010)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Iterative Procrustes Refinement and Hubness Mitigation", "sec_num": "3.4" }, { "text": "We use a simple hubness filtering mechanism to filter out words in the target domain that are hubs, i.e., words in the target domain which have more than a threshold number of neighbors in the source domain are not considered in the iterative dictionary expansion. Empirically, this leads to a small boost in performance. 
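To make this filtering step concrete, the following is a minimal sketch (not the authors' released implementation) of the N_x(1)-based hubness filter, assuming unit-normed NumPy embedding matrices and an illustrative threshold parameter max_hub_count:

```python
import numpy as np

def filter_hubs(mapped_src, tgt, max_hub_count=20):
    """Return indices of target words that are NOT hubs.

    mapped_src: (n, d) source embeddings already mapped by W, unit-normed.
    tgt:        (m, d) target embeddings, unit-normed.
    max_hub_count: assumed threshold on N_y(1), the number of source words
        whose nearest target neighbour is y.
    """
    sim = mapped_src @ tgt.T                 # (n, m) cosine similarities
    nn_of_src = sim.argmax(axis=1)           # nearest target for each source word
    hub_counts = np.bincount(nn_of_src, minlength=tgt.shape[0])  # N_y(1)
    return np.where(hub_counts <= max_hub_count)[0]
```

Target words outside the returned index set are then simply skipped when mining translation pairs for the next refinement step.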
In our models, we use iterative Procrustes refinement with hubness filtering at each refinement step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iterative Procrustes Refinement and Hubness Mitigation", "sec_num": "3.4" }, { "text": "In this section, we measure the GH distances between embedding spaces of various language pairs, and compute their correlation with several empirical measures of orthogonality. Next, we analyze the performance of the instantiations of our semi-supervised framework for two settings of supervised losses, and show that they outperform their supervised and unsupervised counterparts for a majority of the language pairs. Finally, we analyze our performance with varying amounts of supervision and highlight the framework's training stability over unsupervised methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "To evaluate the lower bound on the GH distance between the two embedding spaces, we select the 5000 most frequent words of the source and target language and compute the Bottleneck distance. These embeddings are mean centered, unit normed, and the Euclidean distance is used as the distance metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation of GH Distance", "sec_num": "4.1" }, { "text": "Row 1 of Table 2 summarizes the GH distances obtained for different language pairs. We find that etymologically close languages such as en-fr and ru-uk have a very low GH distance and can possibly be aligned well using orthogonal transforms. In contrast, we find that etymologically distant language pairs such as en-ru and en-hi cannot be aligned well using orthogonal transforms.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 16, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Empirical Evaluation of GH Distance", "sec_num": "4.1" }, { "text": "To further corroborate this, similar to S\u00f8gaard et al. (2018) , we compute correlations of the GH distance with the accuracies of several methods for BLI. We find that the GH distance exhibits a strong negative correlation with these accuracies, implying that as the GH distance increases, it becomes increasingly difficult to align these language pairs. S\u00f8gaard et al. (2018) proposed the eigenvector similarity metric for measuring the similarity between embedding spaces. We compute their metric over the top n (100, 500, 1000, 5000 and 10000) embeddings (Column \u21e4 in Table 2 shows the correlation for the best setting of n) and show that the GH distance (Column GH) correlates better with the accuracies than eigenvector similarity. Furthermore, we also compute correlations against an empirical measure of the orthogonality of two embedding spaces by computing ||I - W^T W||_2 , where W is a mapping from one language to the other obtained from an unsupervised method (MUSE(U)). Note that an advantage of this metric is that it can be computed even when the supervised dictionaries are not available (ru-uk in Table 2 ). 
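As an illustration, this deviation-from-orthogonality measure can be computed with a few lines of NumPy; the sketch below assumes W is a learned d x d mapping matrix (e.g., from a MUSE(U) run) and interprets ||.||_2 as the spectral norm.

```python
import numpy as np

def orthogonality_gap(W):
    """Compute ||I - W^T W||_2 for a (d, d) mapping matrix W.

    Returns 0 for a perfectly orthogonal map; larger values indicate a
    stronger violation of the orthogonality assumption. ord=2 gives the
    spectral norm (largest singular value); the Frobenius norm is a
    common alternative reading of ||.||_2.
    """
    d = W.shape[0]
    return np.linalg.norm(np.eye(d) - W.T @ W, ord=2)
```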
We obtain a strong correlation with this metric as well.", "cite_spans": [], "ref_spans": [ { "start": 570, "end": 577, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1109, "end": 1116, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Empirical Evaluation of GH Distance", "sec_num": "4.1" }, { "text": "Baseline Methods MUSE (U/S/IR): Lample et al. (2018) proposed two models: MUSE(U) and MUSE(S) for unsupervised and supervised BLI respectively. MUSE(U) uses GAN-based distribution matching followed by iterative Procrustes refinement. MUSE(S) learns an orthogonal map between the embedding spaces by minimizing the Euclidean distance between the supervised translation pairs. Note that for unit-normed embedding spaces, this is equivalent to maximizing the cosine similarity between these pairs. MUSE(IR) is the semi-supervised extension of MUSE(S), which uses iterative refinement with the CSLS distance starting from the mapping learnt by MUSE(S). We also use our proposed hubness filtering technique during the iterative refinement process (MUSE(HR)), which leads to small performance improvements. We consequently use the hubness filtering technique in all our models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benchmark Tasks: Setup", "sec_num": "4.2" }, { "text": "RCSLS: Joulin et al. (2018) propose optimizing the CSLS distance \u2021 directly for the supervised matching pairs. This leads to significant improvements over MUSE(S) and achieves state-of-the-art results for a majority of the language pairs at the time of writing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benchmark Tasks: Setup", "sec_num": "4.2" }, { "text": "VecMap models: Artetxe et al. (2017) and Artetxe et al. (2018a) proposed two models, VecMap and VecMap++, which were based on iterative Procrustes refinement starting from a small seed lexicon based on numeral matching.", "cite_spans": [ { "start": 38, "end": 60, "text": "Artetxe et al. (2018a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Benchmark Tasks: Setup", "sec_num": "4.2" }, { "text": "We also compare against two well-known methods, GeoMM (Jawanpuria et al., 2018) and VecMap(U)++ (Artetxe et al., 2018b) . These methods learn orthogonal mappings from both the source and target spaces to a common embedding space, and subsequently translate in the common space.", "cite_spans": [ { "start": 53, "end": 78, "text": "(Jawanpuria et al., 2018)", "ref_id": "BIBREF13" }, { "start": 98, "end": 121, "text": "(Artetxe et al., 2018b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Benchmark Tasks: Setup", "sec_num": "4.2" }, { "text": "We instantiate two instances of our framework corresponding to the two supervised losses in the baseline methods mentioned above. BLISS(M) optimizes the cosine distance between supervised matching pairs as its supervised loss (L_{W|S}), while BLISS(R) optimizes the CSLS distance between these matching pairs for its L_{W|S}. We use the unsupervised CSLS metric as a stopping criterion during training. This metric, introduced by Lample et al. 
(2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "BLISS models", "sec_num": null }, { "text": "After learning the final mapping matrix, the translations of the words in the source language are mapped to the target space and their nearest neighbors according to the CSLS distance are chosen as the translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BLISS models", "sec_num": null }, { "text": "We evaluate our models against baselines on two popularly used datasets: the MUSE dataset and the VecMap dataset. The MUSE dataset used by Lample et al. (2018) consists of embeddings trained by Bojanowski et al. (2017) on Wikipedia and bilingual dictionaries generated by internal translation tools used at Facebook. The VecMap dataset introduced by Dinu and Baroni (2014) consists of the CBOW embeddings trained on the WacKy crawling corpora. The bilingual dictionaries were obtained from the Europarl word alignments. We use the standard training and test splits available for both the datasets.", "cite_spans": [ { "start": 139, "end": 159, "text": "Lample et al. (2018)", "ref_id": "BIBREF17" }, { "start": 194, "end": 218, "text": "Bojanowski et al. (2017)", "ref_id": "BIBREF4" }, { "start": 350, "end": 372, "text": "Dinu and Baroni (2014)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": null }, { "text": "In Tables 3 and 4 , we group the instantiations of BLISS(M/R) with it's supervised counterparts. We use \u2020 to compare models within a group, and use bold do compare across different groups for a language pair.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 17, "text": "Tables 3 and 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Benchmark Tasks: Results", "sec_num": "4.3" }, { "text": "As can be seen from Table 3 , BLISS(M/R) outperform baseline methods within their groups for 9 of 10 language pairs. Moreover BLISS(R) gives the best accuracy across all baseline methods for 6 out of 10 language pairs.", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 27, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Benchmark Tasks: Results", "sec_num": "4.3" }, { "text": "We observe a similar trend for the VecMap datasets, where BLISS(M/R) outperforms its supervised and unsupervised counterparts (Table 4) .", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 135, "text": "(Table 4)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Benchmark Tasks: Results", "sec_num": "4.3" }, { "text": "It can be seen that BLISS(M) and BLISS(R) outperform the MUSE baselines (MUSE(U), MUSE(R)) and RCSLS respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benchmark Tasks: Results", "sec_num": "4.3" }, { "text": "We observe that GeoMM and VecMap(U) ++ outperform BLISS models on the VecMap datasets. A potential reason for this could be the slight disadvantage that BLISS suffers from because of translating in the target space, as opposed to in the common embedding space. This hypothesis is also supported by the results of Kementchedjhieva et al. (2018) .", "cite_spans": [ { "start": 313, "end": 343, "text": "Kementchedjhieva et al. 
(2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Benchmark Tasks: Results", "sec_num": "4.3" }, { "text": "All the hyperparameters for the experiments can be found in the Appendix ( \u00a7A.4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benchmark Tasks: Results", "sec_num": "4.3" }, { "text": "Languages with high GH distance: As can be seen from Table 2 , BLISS(R) substantially outperforms RCSLS on 6 of 9 language pairs, especially when the GH distance between the pairs is high (en-uk (2.4%), en-sv (3.4%), en-el (0.9%), en-hi(0.8%), en-ko (2.4%)). Results from Table 3 also underscores this point, wherein BLISS(R) performs at least at par with (and often better than) RCSLS on European languages, and performs significantly better on en-zh (2.8%) and zhen (0.9%).", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 60, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 272, "end": 280, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Benefits of BLISS", "sec_num": "4.4" }, { "text": "Performance with varying amount of supervision: Table 5 shows the performance of BLISS(R) as a function of the number of data points provided for supervision. As can be observed, the model performs reasonably well even for low amounts of supervision and outperforms the unsupervised baseline MUSE(U) and it's supervised counterpart RCSLS. Moreover, note that the difference is more prominent for the etymologically distant pair en$zh. In this case the baseline models completely fail to train for 50 points, whereas BLISS(R) performs reasonably well.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 55, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Benefits of BLISS", "sec_num": "4.4" }, { "text": "Stability of Training: We also observe that providing even a little bit of supervision helps stabilize the training process, when compared to purely unsupervised distribution matching. We measure the stability during training using both the ground truth accuracy and the unsupervised CSLS metric. As can be seen from Figure 2 , BLISS(M) is significantly more stable than MUSE(U), converging to better accuracy and CSLS values. Furthermore, for en$zh, Vecmap(U) ++ fails to converge, while MUSE is somewhat unstable. However, BLISS does not suffer from this issue.", "cite_spans": [], "ref_spans": [ { "start": 317, "end": 325, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Benefits of BLISS", "sec_num": "4.4" }, { "text": "When the word vectors are not rich enough 80.9 82.9 80.4 82.5 72.5 70.9 51.3 63.8 42.5 41.9 BLISS(R)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benefits of BLISS", "sec_num": "4.4" }, { "text": "82.4 \u2020 84.9 \u2020 82.6 \u2020 83.9 \u2020 75.7 \u2020 72.5 \u2020 52.1 \u2020 65.2 42.5 42.8 \u2020 Table 5 : Performance with different levels of supervision. \u2020 marks the best performance at a given level of supervision, while bold marks the best for a language pair.", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 73, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Benefits of BLISS", "sec_num": "4.4" }, { "text": "(word2vec (Mikolov et al., 2013b) instead of fast-Text), the unsupervised method can completely fail to train. This can be observed for the case of en-de in Table 4 . 
BLISS(M/R) does not face this problem: adding supervision, even in the form of 50 mapped words for the case of en-de, helps it to achieve reasonable performance.", "cite_spans": [ { "start": 10, "end": 33, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 157, "end": 164, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Benefits of BLISS", "sec_num": "4.4" }, { "text": "Mikolov et al. (2013a) first used anchor points to align two embedding spaces, leveraging the fact that these spaces exhibit similar structure across languages. Since then, several approaches have been proposed for learning bilingual dictionaries (Faruqui and Dyer, 2014; Zou et al., 2013; Xing et al., 2015). Xing et al. (2015) showed that adding an orthogonal constraint significantly improves performance, and admits a closed form solution. This was further corroborated by the work of Smith et al. 2017, who showed that in orthogonality was necessary for self-consistency. Artetxe et al. (2016) showed the equivalence between the different methods, and their subsequent work (Artetxe et al., 2018a) analyzed different techniques proposed in various works (like embedding centering, whitening etc.), and showed that leveraging a combination of different methods showed significant performance gains.", "cite_spans": [ { "start": 247, "end": 271, "text": "(Faruqui and Dyer, 2014;", "ref_id": "BIBREF9" }, { "start": 272, "end": 289, "text": "Zou et al., 2013;", "ref_id": "BIBREF32" }, { "start": 290, "end": 294, "text": "Xing", "ref_id": "BIBREF28" }, { "start": 310, "end": 328, "text": "Xing et al. (2015)", "ref_id": "BIBREF28" }, { "start": 577, "end": 598, "text": "Artetxe et al. (2016)", "ref_id": "BIBREF0" }, { "start": 679, "end": 702, "text": "(Artetxe et al., 2018a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "However, the validity of this orthogonality assumption has of late come into question: Zhang et al. (2017b) found that the Wasserstein distance between distant language pairs was considerably higher , while explored the orthogonality assumption using eigenvector similarity. We find our weak orthogonality constraint (along the lines of Zhang et al. (2017a)) when used in our semi-supervised framework to be more robust to this.", "cite_spans": [ { "start": 87, "end": 107, "text": "Zhang et al. (2017b)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "There has also recently been an increasing focus on generating these bilingual mappings without an aligned bilingual dictionary, i.e., in an unsupervised manner. Zhang et al. (2017a) and Lample et al. (2018) both use adversarial training for aligning two monolingual embedding spaces without any seed lexicon, while Zhang et al. (2017b) used a Wasserstein GAN to achieve this adversarial alignment, and use an earth-mover based finetuning approach; while formulate this as a joint estimation of an orthogonal matrix and a permutation matrix. However, we show that adding a little supervision, which is usually easy to obtain, improves performance.", "cite_spans": [ { "start": 162, "end": 182, "text": "Zhang et al. (2017a)", "ref_id": "BIBREF29" }, { "start": 316, "end": 336, "text": "Zhang et al. 
(2017b)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Another vein of research (Jawanpuria et al., 2018; Artetxe et al., 2018b; Kementchedjhieva et al., 2018) has been to learn orthogonal map-pings from both the source and the target embedding spaces into a common embedding space and doing the translations in the common embedding space. Artetxe et al. (2017) and motivate the utility of using both the supervised seed dictionaries and, to some extent, the structure of the monolingual embedding spaces. They use iterative Procrustes refinement starting with a small seed dictionary to learn a mapping; but doing may lead to sub-optimal performance for distant language pairs. However, these methods are close to our methods in spirit, and consequently form the baselines for our experiments.", "cite_spans": [ { "start": 25, "end": 50, "text": "(Jawanpuria et al., 2018;", "ref_id": "BIBREF13" }, { "start": 51, "end": 73, "text": "Artetxe et al., 2018b;", "ref_id": "BIBREF3" }, { "start": 74, "end": 104, "text": "Kementchedjhieva et al., 2018)", "ref_id": "BIBREF15" }, { "start": 285, "end": 306, "text": "Artetxe et al. (2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Another avenue of research has been to try and modify the underlying embedding generation algorithms. Cao et al. (2016) modify the CBOW algorithm (Mikolov et al., 2013b) by augmenting the CBOW loss to match the first and second order moments from the source and target latent spaces, thereby ensuring the source and target embedding spaces follow the same distribution. Luong et al. (2015) , in their work, use the aligned words to jointly learn the embedding spaces of both the source and target language, by trying to predict the context of a word in the other language, given an alignment. An issue with the proposed method is that it requires the retraining of embeddings, and cannot leverage a rich collection of precomputed vectors (like ones provided by Word2Vec (Mikolov et al., 2013b) , Glove (Pennington et al., 2014) and FastText (Bojanowski et al., 2017) ).", "cite_spans": [ { "start": 102, "end": 119, "text": "Cao et al. (2016)", "ref_id": "BIBREF5" }, { "start": 146, "end": 169, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF20" }, { "start": 370, "end": 389, "text": "Luong et al. (2015)", "ref_id": "BIBREF18" }, { "start": 770, "end": 793, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF20" }, { "start": 802, "end": 827, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF21" }, { "start": 841, "end": 866, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this work, we analyze the validity of the orthogonality assumption and show that it breaks for distant language pairs. We motivate the task of semisupervised BLI by showing the shortcomings of purely supervised and unsupervised approaches. We finally propose a semi-supervised framework which combines the advantages of supervised and unsupervised approaches and uses a joint optimization loss to enforce a weak and flexible orthogonality constraint. We provide two instantiations of our framework, and show that both outperform their supervised and unsupervised counterparts. 
On analyzing the model errors, we find that a large fraction of them arise due to polysemy and antonymy (An interested reader can find the details in Appendix ( \u00a7A.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "We also find that translating in a common embedding space, as opposed to the target embedding space, obtains orthogonal gains for BLI, and plan on investigating this in the semi-supervised setting in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "\u2021 Since the CSLS distance requires computing the nearest neighbors over the whole embedding space, this can also be considered a semi-supervised method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Sebastian Ruder and Anders S\u00f8gaard for their assistance in helping with the computation the eigenvector similarity metric. We would also like to thank Paul Michel and Junjie Hu for their invaluable feedback and discussions that helped shape the paper into its current form. Finally, we would also like to thank the anonymous reviewers for their valuable comments and helpful suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2289--2294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2289-2294.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning bilingual word embeddings with (almost) no bilingual data", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "451--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intel- ligence (AAAI-18).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "789--798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully un- supervised cross-lingual mappings of word embed- dings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), volume 1, pages 789-798.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A distribution-based model to learn bilingual word embeddings", "authors": [ { "first": "Hailong", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Shu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Meng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1818--1827", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hailong Cao, Tiejun Zhao, Shu Zhang, and Yao Meng. 2016. A distribution-based model to learn bilin- gual word embeddings. 
In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 1818- 1827.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Gromov-hausdorff stable signatures for shapes using persistence", "authors": [ { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Chazal", "suffix": "" }, { "first": "David", "middle": [], "last": "Cohen-Steiner", "suffix": "" }, { "first": "Leonidas", "middle": [ "J" ], "last": "Guibas", "suffix": "" }, { "first": "Facundo", "middle": [], "last": "M\u00e9moli", "suffix": "" }, { "first": "Steve", "middle": [ "Y" ], "last": "Oudot", "suffix": "" } ], "year": 2009, "venue": "Computer Graphics Forum", "volume": "28", "issue": "", "pages": "1393--1403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fr\u00e9d\u00e9ric Chazal, David Cohen-Steiner, Leonidas J Guibas, Facundo M\u00e9moli, and Steve Y Oudot. 2009. Gromov-hausdorff stable signatures for shapes us- ing persistence. In Computer Graphics Forum, vol- ume 28, pages 1393-1403. Wiley Online Library.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving zero-shot learning by mitigating the hubness problem", "authors": [ { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgiana Dinu and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness prob- lem. volume abs/1412.6568.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Persistent homology: theory and practice", "authors": [ { "first": "Herbert", "middle": [], "last": "Edelsbrunner", "suffix": "" }, { "first": "Dmitriy", "middle": [], "last": "Morozov", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert Edelsbrunner and Dmitriy Morozov. 2013. Persistent homology: theory and practice.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving vector space word representations using multilingual correlation", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "462--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 462-471.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Unsupervised alignment of embeddings with wasserstein procrustes", "authors": [ { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Berthet", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.11222" ] }, "num": null, "urls": [], "raw_text": "Edouard Grave, Armand Joulin, and Quentin Berthet. 2018. Unsupervised alignment of embeddings with wasserstein procrustes. 
arXiv preprint arXiv:1805.11222.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning bilingual lexicons from monolingual corpora", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "771--779", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL- 08: HLT, pages 771-779, Columbus, Ohio. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Combining bilingual and comparable corpora for low resource machine translation", "authors": [ { "first": "Ann", "middle": [], "last": "Irvine", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Eighth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "262--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ann Irvine and Chris Callison-Burch. 2013. Combin- ing bilingual and comparable corpora for low re- source machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Transla- tion, pages 262-270, Sofia, Bulgaria. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning multilingual word embeddings in latent metric space: a geometric approach", "authors": [ { "first": "Pratik", "middle": [], "last": "Jawanpuria", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Balgovind", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Kunchukuttan", "suffix": "" }, { "first": "Bamdev", "middle": [], "last": "Mishra", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.08773" ] }, "num": null, "urls": [], "raw_text": "Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2018. Learn- ing multilingual word embeddings in latent metric space: a geometric approach. arXiv preprint arXiv:1808.08773.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Loss in translation: Learning bilingual word mapping with a retrieval criterion", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2979--2984", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv\u00e9 J\u00e9gou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. 
In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 2979-2984.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Generalizing procrustes analysis for better bilingual dictionary induction", "authors": [ { "first": "Yova", "middle": [], "last": "Kementchedjhieva", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "211--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yova Kementchedjhieva, Sebastian Ruder, Ryan Cot- terell, and Anders S\u00f8gaard. 2018. Generalizing pro- crustes analysis for better bilingual dictionary induc- tion. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 211-220.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Inducing crosslingual distributed representations of words", "authors": [ { "first": "Alexandre", "middle": [], "last": "Klementiev", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Binod", "middle": [], "last": "Bhattarai", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "1459--1474", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhat- tarai. 2012. Inducing crosslingual distributed rep- resentations of words. pages 1459-1474.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Word translation without parallel data", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In Interna- tional Conference on Learning Representations.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Bilingual word representations with monolingual quality in mind", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", "volume": "", "issue": "", "pages": "151--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D Man- ning. 2015. Bilingual word representations with monolingual quality in mind. 
In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Exploiting similarities among languages for machine translation", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for ma- chine translation.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "26", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "When and why are pre-trained word embeddings useful for neural machine translation?", "authors": [ { "first": "Ye", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Devendra", "middle": [], "last": "Sachan", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Felix", "suffix": "" }, { "first": "Sarguna", "middle": [], "last": "Padmanabhan", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "529--535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Pad- manabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 2 (Short Pa- pers), pages 529-535, New Orleans, Louisiana. As- sociation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Hubs in space: Popular nearest neighbors in high-dimensional data", "authors": [ { "first": "Milo\u0161", "middle": [], "last": "Radovanovi\u0107", "suffix": "" }, { "first": "Alexandros", "middle": [], "last": "Nanopoulos", "suffix": "" }, { "first": "Mirjana", "middle": [], "last": "Ivanovi\u0107", "suffix": "" } ], "year": 2010, "venue": "", "volume": "11", "issue": "", "pages": "2487--2531", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milo\u0161 Radovanovi\u0107, Alexandros Nanopoulos, and Mir- jana Ivanovi\u0107. 2010. Hubs in space: Popular near- est neighbors in high-dimensional data. volume 11, pages 2487-2531.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A survey of cross-lingual embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2017. A survey of cross-lingual em- bedding models. CoRR, abs/1706.04902.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", "authors": [ { "first": "L", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" }, { "first": "H", "middle": [ "P" ], "last": "David", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Turban", "suffix": "" }, { "first": "Nils", "middle": [ "Y" ], "last": "Hamblin", "suffix": "" }, { "first": "", "middle": [], "last": "Hammerla", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "On the limitations of unsupervised bilingual dictionary induction", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "778--788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the limitations of unsupervised bilingual dictionary induction. 
In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 778-788.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Distributed word representation learning for cross-lingual dependency parsing", "authors": [ { "first": "Min", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Yuhong", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "119--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Confer- ence on Computational Natural Language Learning, pages 119-129.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Normalized word embedding and orthogonal transform for bilingual word translation", "authors": [ { "first": "Chao", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiye", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1006--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal trans- form for bilingual word translation. In Proceed- ings of the 2015 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006-1011.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Adversarial training for unsupervised bilingual lexicon induction", "authors": [ { "first": "Meng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1959--1970", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1959-1970.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Earth mover's distance minimization for unsupervised bilingual lexicon induction", "authors": [ { "first": "Meng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1934--1945", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover's distance minimization for unsupervised bilingual lexicon induction. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934-1945.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Ten pairs to tagmultilingual pos tagging via coarse mapping between embeddings", "authors": [ { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "David", "middle": [], "last": "Gaddy", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tag- multilingual pos tagging via coarse mapping be- tween embeddings. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Bilingual word embeddings for phrase-based machine translation", "authors": [ { "first": "Y", "middle": [], "last": "Will", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Cer", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1393--1398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Will Y Zou, Richard Socher, Daniel Cer, and Christo- pher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393-1398.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "A toy dataset demonstrating the shortcomings of unsupervised distribution matching.Fig. a) and b) show two different distributions (source and target respectively) over six classes. Classes 1 and 2; classes 3 and 4; classes 5 and 6 were respectively drawn from a uniform distribution over a sphere, rectangle and triangle respectively." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Fig. c)shows the misprojected source distribution obtained from unsupervised distribution matching which fails to align with the target distribution ofFig. b)." }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Training Stability of different language pairs (en-de), (en-ru), (en-zh)" }, "TABREF0": { "text": "84.1 87.1 38.3 57.9 61.7 47.6 37.3 37.5 0.74 0.52 GeoMM * 82.1 81.4 87.8 39.1 51.3 65.0 47.8 39.8 34.6", "type_str": "table", "content": "
(GH) (EV sim.)
GH 0.18 0.17 0.2 0.24 0.34 0.44 0.46 0.47 0.5 0.92**
EV sim. 16.4 4.1 5.9 4.1 11.7 14.7 7.3 11.5 7.7 6.6**
MUSE(U)* 82.3 81.7 85.5 29.1 44.0 53.3 37.9 34.6 5.1 0.87 0.61
RCSLS* 83.3 0.76 0.49
BLISS(R)* 83.9 84.3 87.1 40.7 57.1 65.1 48.5 38.1 39.9 0.73 0.50
||I - W^T W||_2 0.03 0.01 0.03 0.02 59.8 54.3 71.6 72.6 106.3 98.46 0.84 0.75
", "num": null, "html": null }, "TABREF1": { "text": "Correlation of GH and Eigenvector similarity with performance of BLI methods. Bold marks best metrics.", "type_str": "table", "content": "", "num": null, "html": null }, "TABREF2": { "text": "ModelType Objective Translation en-es es-en en-fr fr-en en-de de-en en-ru ru-en en-zh zh-en Space \u2020 83.3 82.5 83.2 75.7 \u2020 72.8 52.8 64.1 \u2020 42.7 \u2020 36.7 BLISS(M) Semi Cos + GAN target 82.3 \u2020 84.3 \u2020 83.3 \u2020 83.9 \u2020 75.7 \u2020 73.8 \u2020 55.7 \u2020 63.7 41.1 41.4 \u2020 \u2020 86.2 83.9 \u2020 84.7 \u2020 79.1 \u2020 76.6 \u2020 57.1 67.7 \u2020 48.7 \u2020 47.3 \u2020", "type_str": "table", "content": "
MUSE(U) Unsup GAN target 81.7 83.3 82.3 82.1 74.0 72.2 44.0 59.1 32.5 31.4
MUSE(S) Sup Cos target 81.4 82.9 81.1 82.4 73.5 72.4 51.7 63.7 42.7\u2020 36.7
MUSE(IR) Semi Cos + IR target 81.9 83.5 82.1 82.4 74.3 72.7 51.7 63.7 42.7\u2020 36.7
MUSE(HR) Semi Cos + IR target 82.3\u2020 83.3 82.5 83.2 75.7\u2020 72.8 52.8 64.1\u2020 42.7\u2020 36.7
RCSLS Semi CSLS target 84.1 86.3\u2020 83.3 84.1 79.1\u2020 76.3 57.9\u2020 67.2 45.9 46.4
BLISS(M) Semi Cos + GAN target 82.3\u2020 84.3\u2020 83.3\u2020 83.9\u2020 75.7\u2020 73.8\u2020 55.7\u2020 63.7 41.1 41.4\u2020
BLISS(R) Semi CSLS + GAN target 84.3\u2020 86.2 83.9\u2020 84.7\u2020 79.1\u2020 76.6\u2020 57.1 67.7\u2020 48.7\u2020 47.3\u2020
GeoMM Sup Classification Loss common 81.4 85.5 82.1 84.1 74.7 76.7 51.3 67.6 49.1 45.3
Vecmap(U)++ Unsup NN Based Dist matching + IR common 82.2 84.5 82.5 83.6 75.2 74.2 48.5 65.1 0.0 0.0
", "num": null, "html": null }, "TABREF3": { "text": "Performance comparison of BLISS on the MUSE dataset. Sup, Unsup and Semi refer to supervised, unsupervised and semi-supervised methods. Objective refers to the metric optimized. \u2020 marks the best in each category, while bold marks the best performance across all groups for a language pair.", "type_str": "table", "content": "
Pairs# seeds Map Map ++ (U) Vec Vec MUSE MUSE BLISS RCSLS (IR) (M)BLISS GeoMM (R)Vec Map(U) ++
en-itall 39.7 45.3 Num. 37.3 -45.8 45.8 \u202045.3 0.745.9 \u2020 44.345.4 0.346.2 \u2020 44.6 \u202048.3 1.248.5 48.5
en-deall 40.9 44.1 Num. 39.6 -0.0 0.047.0 39.9 47.2 \u2020 48.3 \u202047.3 1.048.1 \u2020 46.5 \u202048.9 2.348.1 48.1
", "num": null, "html": null }, "TABREF4": { "text": "Performance of different models on the VecMap dataset. \u2020 marks the best in each category, while bold marks the best performance across different levels of supervision for a language pair. \u2020 83.6 \u2020 82.8 \u2020 83.0 \u2020 75.1 \u2020 72.7 \u2020 39.3 \u2020 61.0 \u2020 32.6 \u2020 32.5 \u2020 \u2020 83.4 82.3 \u2020 82.9 \u2020 74.7 \u2020 73.1 \u2020 41.6 \u2020 63.0 \u2020 36.3 \u2020 35.1 \u2020", "type_str": "table", "content": "
# Datapoints Model en-es es-en en-fr fr-en en-de de-en en-ru ru-en en-zh zh-en
* MUSE(U) 81.7 83.3 82.3 82.1 74.0 72.2 44.0 59.1 32.5\u2020 31.4\u2020
* Vecmap(U)++ 82.2\u2020 84.5\u2020 82.5\u2020 83.6\u2020 75.2\u2020 74.2\u2020 48.5\u2020 65.1\u2020 0.0 0.0
MUSE(IR) 0.3 82.7 0.5 1.6 31.9 72.7\u2020 0.1 0.0 0.3 0.3
50 GeoMM 0.3 1.9 1.0 0.0 0.3 0.3 0.0 0.6 0.0 0.0
RCSLS 0.1 0.3 0.4 0.3 0.1 0.1 0.1 0.1 0.0 0.0
BLISS (R) MUSE(IR) GeoMM 82.1 500 81.6 83.5 \u2020 82.1 82.0 73.1 72.7 40.3 62 34.5 32.2 31.9 46.6 34.4 44.7 13.5 14.7 10.6 20.5 3.9 2.9 RCSLS 22.9 44.9 22.4 43.5 9.9 10.2 7.9 19.6 6.6 7.1
BLISS(R) MUSE(IR) GeoMM 82.3 5000 81.9 82.8 82.2 82.1 75.2 72.4 50.4 63.7 39.2 36.3 79.7 82.7 79.9 83.2 71.7 70.6 49.7 65.5 \u2020 43.7 \u2020 40.1 RCSLS
", "num": null, "html": null } } } }