{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:16:29.136217Z" }, "title": "An Empirical Study on Crosslingual Transfer in Probabilistic Topic Models", "authors": [ { "first": "Shudong", "middle": [], "last": "Hao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bard College at Simon's Rock Division of Science", "location": { "settlement": "Mathematics" } }, "email": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Paul", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Colorado", "location": {} }, "email": "mpaul@colorado.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Probabilistic topic modeling is a common first step in crosslingual tasks to enable knowledge transfer and extract multilingual features. Although many multilingual topic models have been developed, their assumptions about the training corpus are quite varied, and it is not clear how well the different models can be utilized under various training conditions. In this article, the knowledge transfer mechanisms behind different multilingual topic models are systematically studied, and through a broad set of experiments with four models on ten languages, we provide empirical insights that can inform the selection and future development of multilingual topic models.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Probabilistic topic modeling is a common first step in crosslingual tasks to enable knowledge transfer and extract multilingual features. Although many multilingual topic models have been developed, their assumptions about the training corpus are quite varied, and it is not clear how well the different models can be utilized under various training conditions. In this article, the knowledge transfer mechanisms behind different multilingual topic models are systematically studied, and through a broad set of experiments with four models on ten languages, we provide empirical insights that can inform the selection and future development of multilingual topic models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Popularized by Latent Dirichlet Allocation (Blei, Ng, and Jordan 2003) , probabilistic topic models have been an important tool for analyzing large collections of texts (Blei 2012 (Blei , 2018 . Their simplicity and interpretability make topic models popular for many natural language processing tasks, such as discovery of document networks (Chen et al. 2013; and authorship attribution (Seroussi, Zukerman, and Bohnert 2014) .", "cite_spans": [ { "start": 43, "end": 70, "text": "(Blei, Ng, and Jordan 2003)", "ref_id": "BIBREF5" }, { "start": 169, "end": 179, "text": "(Blei 2012", "ref_id": "BIBREF3" }, { "start": 180, "end": 192, "text": "(Blei , 2018", "ref_id": "BIBREF4" }, { "start": 342, "end": 360, "text": "(Chen et al. 2013;", "ref_id": "BIBREF9" }, { "start": 388, "end": 426, "text": "(Seroussi, Zukerman, and Bohnert 2014)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Topic models take a corpus D as input, where each document d \u2208 D is usually represented as a sparse vector in a vocabulary space, and project these documents to a lower-dimensional topic space. 
In this sense, topic models are often used as a dimensionality reduction technique to extract representative and human-interpretable features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Text collections, however, are often not in a single language, and thus there has been a need to generalize topic models from monolingual to multilingual settings. Given a corpus D (1,...,L) in languages \u2208 {1, . . . , L}, multilingual topic models learn topics in each of the languages. From a human's view, each topic should be related to the same theme, even if the words are not in the same language (Figure 1(b) ). From a machine's view, the word probabilities within a topic should be similar across languages, such that the low-dimensional representation of documents is not dependent on the language. In other words, the topic space in multilingual topic models is language agnostic (Figure 1(a) ).", "cite_spans": [ { "start": 181, "end": 190, "text": "(1,...,L)", "ref_id": null } ], "ref_spans": [ { "start": 403, "end": 415, "text": "(Figure 1(b)", "ref_id": "FIGREF0" }, { "start": 690, "end": 702, "text": "(Figure 1(a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This article presents two major contributions to multilingual topic models. We first provide an alternative view of multilingual topic models by explicitly formulating a crosslingual knowledge transfer process during posterior inference (Section 3). Based on this analysis, we unify different multilingual topic models by defining a function called the transfer operation. This function provides an abstracted view of the knowledge transfer mechanism behind these models, while enabling further generalizations and improvements. Using this formulation, we analyze several existing multilingual topic models (Section 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Second, in our experiments we compare four representative models under different training conditions (Section 5). The models are trained and evaluated on ten languages from various language families to increase language diversity in the experiments. In particular, we include five languages with relatively high resources and five others with low resources. To quantitatively evaluate the models, we focus on topic quality in Section 5.3.1, and performance of downstream tasks using crosslingual document classification in Section 5.3.2. We investigate how sensitive the models are to different language resources (i.e., parallel/comparable corpus and dictionaries), and analyze what factors cause this difference (Sections 6 and 7). Overview of multilingual topic models. (a) Multilingual topic models project-language specific and high-dimensional features from the vocabulary space to a language-agnostic and low-dimensional topic space. This figure shows a t-SNE (Maaten and Hinton 2008) representation of a real data set. (b) Multilingual topic models produce theme-aligned topics for all languages. From a human's view, each topic contains different languages but the words are describing the same thing.", "cite_spans": [ { "start": 967, "end": 991, "text": "(Maaten and Hinton 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "We first review monolingual topic models, focusing on Latent Dirichlet Allocation, and then describe two families of multilingual extensions. Based on the types of supervision added to multilingual topic models, we separate the two model families into documentlevel and word-level supervision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "Topic models provide a high-level view of latent thematic structures in a corpus. Two main branches for topic models are non-probabilistic approaches such as Latent Semantic Analysis (LSA; Deerwester et al. 1990) and Non-Negative Matrix Factorization (Xu, Liu, and Gong 2003) , and probabilistic ones such as Latent Dirichlet Allocation (LDA; Blei, Ng, and Jordan 2003) and probabilistic LSA (pLSA; Hofmann 1999) . All these models were originally developed for monolingual data and later adapted to multilingual situations. Though there has been work to adapt non-probabilistic models, for example, based on \"pseudo-bilingual\" corpora approaches (Littman, Dumais, and Landauer 1998) , most multilingual topic models that are trained on multilingual corpora are based on probabilistic models, especially LDA. Therefore, our work is focused on the probabilistic topic models, and in the following section we start by describing LDA.", "cite_spans": [ { "start": 189, "end": 212, "text": "Deerwester et al. 1990)", "ref_id": "BIBREF10" }, { "start": 251, "end": 275, "text": "(Xu, Liu, and Gong 2003)", "ref_id": "BIBREF50" }, { "start": 343, "end": 369, "text": "Blei, Ng, and Jordan 2003)", "ref_id": "BIBREF5" }, { "start": 388, "end": 398, "text": "LSA (pLSA;", "ref_id": null }, { "start": 399, "end": 412, "text": "Hofmann 1999)", "ref_id": "BIBREF19" }, { "start": 647, "end": 683, "text": "(Littman, Dumais, and Landauer 1998)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "The most popular topic model is LDA, introduced by Blei, Ng, and Jordan (2003) . This model assumes each document d is represented by a multinomial distribution \u03b8 d over topics, and each \"topic\" k is a multinomial distribution \u03c6 (k) over the vocabulary V. In the generative process, each \u03b8 and \u03c6 are generated from Dirichlet distributions parameterized by \u03b1 and \u03b2, respectively. The hyperparameters for Dirichlet distributions can be asymmetric (Wallach, Mimno, and McCallum 2009) , though in this work we use symmetric priors. Figure 2 shows the plate notation of LDA.", "cite_spans": [ { "start": 51, "end": 78, "text": "Blei, Ng, and Jordan (2003)", "ref_id": "BIBREF5" }, { "start": 229, "end": 232, "text": "(k)", "ref_id": null }, { "start": 445, "end": 480, "text": "(Wallach, Mimno, and McCallum 2009)", "ref_id": "BIBREF49" } ], "ref_spans": [ { "start": 528, "end": 536, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Monolingual Topic Models", "sec_num": "2.1" }, { "text": "We now describe a variety of multilingual topic models, organized into two families based on the type of supervision they use. Later, in Section 4, we focus on a subset of the models described here for deeper analysis using our knowledge transfer formulation, selecting the most general and representative models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topic Models", "sec_num": "2.2" }, { "text": "2.2.1 Document Level. 
The first model proposed to process multilingual corpora using LDA is the Polylingual Topic Model (PLTM; Mimno et al. 2009; Ni et al. 2009) . This model extracts language-consistent topics from parallel or highly comparable multilingual corpora (for example, Wikipedia articles aligned across languages), assuming that document translations share the same topic distributions. This model has been", "cite_spans": [ { "start": 127, "end": 145, "text": "Mimno et al. 2009;", "ref_id": "BIBREF37" }, { "start": 146, "end": 161, "text": "Ni et al. 2009)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topic Models", "sec_num": "2.2" }, { "text": "N d \u21b5 K w z \u2713 k D Figure 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topic Models", "sec_num": "2.2" }, { "text": "Plate notation of LDA. \u03b1 and \u03b2 are Dirichlet hyperparameters for \u03b8 and {\u03c6 (k) } K k=1 . Topic assignments are denoted as z, and w denotes observed tokens. extensively used and adapted in various ways for different crosslingual tasks (Krstovski and Smith 2011; Moens and Vulic 2013; Vuli\u0107 and Moens 2014; Liu, Duh, and Matsumoto 2015; Krstovski and Smith 2016) .", "cite_spans": [ { "start": 233, "end": 259, "text": "(Krstovski and Smith 2011;", "ref_id": "BIBREF27" }, { "start": 260, "end": 281, "text": "Moens and Vulic 2013;", "ref_id": "BIBREF38" }, { "start": 282, "end": 303, "text": "Vuli\u0107 and Moens 2014;", "ref_id": "BIBREF47" }, { "start": 304, "end": 333, "text": "Liu, Duh, and Matsumoto 2015;", "ref_id": "BIBREF34" }, { "start": 334, "end": 359, "text": "Krstovski and Smith 2016)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topic Models", "sec_num": "2.2" }, { "text": "In the generative process, PLTM first generates language-specific topic-word distributions \u03c6 ( ,k) \u223c Dir \u03b2 ( ) , for topics k = 1, . . . , K and languages = 1, . . . , L. Then, for each document tuple d = d (1) , . . . , d (L) , it generates a tuple-topic distribution \u03b8 d \u223c Dir (\u03b1). Every topic in this document tuple is generated from \u03b8 d , and the word tokens in this document tuple are then generated from language-specific word distributions \u03c6 ( ,k) for each language. To apply PLTM, the corpus must be parallel or closely comparable to provide document-level supervision. We refer to this as the document links model (DOCLINK) .", "cite_spans": [ { "start": 93, "end": 98, "text": "( ,k)", "ref_id": null }, { "start": 207, "end": 210, "text": "(1)", "ref_id": null }, { "start": 223, "end": 226, "text": "(L)", "ref_id": null }, { "start": 449, "end": 454, "text": "( ,k)", "ref_id": null }, { "start": 623, "end": 632, "text": "(DOCLINK)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topic Models", "sec_num": "2.2" }, { "text": "Models that transfer knowledge on the document level have many variants, including SOFTLINK , comparable bilingual LDA (C-BILDA; Heyman, Vulic, and Moens 2016), the partially connected multilingual topic model (PCMLTM; Liu, Duh, and Matsumoto 2015) , and multi-level hyperprior polylingual topic model (MLHPLTM; Krstovski, Smith, and Kurtz 2016) . SOFTLINK generalizes DOCLINK by using a dictionary, so that documents can be linked based on overlap in their vocabulary, even if the corpus is not parallel or comparable. 
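To make the document-level sharing in DOCLINK concrete, the following hedged sketch simulates its generative story for a single linked English-Swedish document pair; the vocabulary sizes, document lengths, and hyperparameter values are toy assumptions rather than settings from the cited work:

```python
# Sketch of DOCLINK's generative process for one aligned document pair.
# All sizes and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
K, alpha, beta = 4, 0.5, 0.1
vocab_size = {"en": 500, "sv": 450}        # toy vocabulary sizes per language

# Language-specific topic-word distributions phi^(l,k) ~ Dir(beta^(l))
phi = {l: rng.dirichlet(np.full(V, beta), size=K) for l, V in vocab_size.items()}

# One tuple-topic distribution theta_d shared by the linked documents
theta_d = rng.dirichlet(np.full(K, alpha))

doc_pair = {}
for l, length in [("en", 60), ("sv", 55)]:
    z = rng.choice(K, size=length, p=theta_d)       # topics drawn from the shared theta_d
    doc_pair[l] = [int(rng.choice(vocab_size[l], p=phi[l][k])) for k in z]  # word ids from phi^(l,k)
```
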
C-BILDA is a direct extension of DOCLINK that also models language-specific distributions to distinguish topics that are shared across languages from language-specific topics. PCMLTM adds an additional observed variable to indicate the absence of a language in a document tuple. MLHPLTM uses a hierarchy of hyperparameters to generate section-topic distributions. This model was motivated by applications to scientific research articles, where each section s has its own topic distribution \u03b8 (s) shared by both languages.", "cite_spans": [ { "start": 219, "end": 248, "text": "Liu, Duh, and Matsumoto 2015)", "ref_id": "BIBREF34" }, { "start": 312, "end": 345, "text": "Krstovski, Smith, and Kurtz 2016)", "ref_id": "BIBREF28" }, { "start": 1012, "end": 1015, "text": "(s)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topic Models", "sec_num": "2.2" }, { "text": "Level. Instead of document-level connections between languages, Boyd-Graber and and Jagarlamudi and Daum\u00e9 III (2010) proposed to model connections between languages through words using a multilingual dictionary and apply hyper-Dirichlet Type-I distributions (Andrzejewski, Zhu, and Craven 2009; Dennis III 1991) . We refer to these approaches as the vocabulary links model (VOCLINK).", "cite_spans": [ { "start": 84, "end": 116, "text": "Jagarlamudi and Daum\u00e9 III (2010)", "ref_id": "BIBREF22" }, { "start": 258, "end": 294, "text": "(Andrzejewski, Zhu, and Craven 2009;", "ref_id": "BIBREF0" }, { "start": 295, "end": 311, "text": "Dennis III 1991)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Word", "sec_num": "2.2.2" }, { "text": "Specifically, VOCLINK uses a dictionary to create a tree structure where each internal node contains word translations, and words that are not translated are attached directly to the root of the tree r as leaves. In the generative process, for each language , VOCLINK first generates K multinomial distributions over all internal nodes and word types that are not translated, \u03c6 (r, ,k) \u223c Dir \u03b2 (r, ) , where \u03b2 (r, ) is a vector of Dirichlet prior from root r to internal nodes and untranslated words in language . Then, under each internal node i, for each language , VOCLINK generates a multinomial \u03c6 (i, ,k) \u223c Dir \u03b2 (i, ) over word types in language under the node i. Note that both \u03b2 (r, ) and \u03b2 (i, ) are vectors. In the first vector \u03b2 (r, ) , each cell is parameterized by scalar \u03b2 and scaled by the number of word translations under that internal node. For the second vector \u03b2 (i, ) , it is a symmetric hyperparameter where every cell uses the same scalar \u03b2 . 
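As a hedged illustration of how these priors can be assembled in practice (the two-entry dictionary, the untranslated words, and the scalar β′ below are invented for the example and do not come from the original implementation), the root-level and node-level prior vectors might be constructed as follows:

```python
# Sketch: assemble VOCLINK-style Dirichlet-tree priors from a toy dictionary.
beta_prime = 0.01

# Each dictionary entry is an internal node grouping word translations.
nodes = {"i1": {"en": ["animal"], "sv": ["djur", "djuren"]},
         "i2": {"en": ["green"],  "sv": ["gr\u00f6n"]}}
untranslated = {"en": ["turquoise"], "sv": ["m\u00e5ngata"]}   # toy untranslated words

def root_prior(lang):
    # One cell per internal node, scaled by the total number of word
    # translations under that node, followed by one symmetric cell per
    # untranslated word type of this language.
    node_cells = [sum(len(w) for w in grp.values()) * beta_prime
                  for grp in nodes.values()]
    return node_cells + [beta_prime] * len(untranslated[lang])

print(root_prior("en"))   # [3*beta_prime, 2*beta_prime, beta_prime]
print(root_prior("sv"))   # [3*beta_prime, 2*beta_prime, beta_prime]

# Under each internal node, the per-language prior beta^(i,l) is symmetric.
beta_i = {(i, l): [beta_prime] * len(grp[l]) for i, grp in nodes.items() for l in grp}
```
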
See Figure 3 for an illustration.", "cite_spans": [ { "start": 378, "end": 385, "text": "(r, ,k)", "ref_id": null }, { "start": 394, "end": 399, "text": "(r, )", "ref_id": null }, { "start": 410, "end": 415, "text": "(r, )", "ref_id": null }, { "start": 602, "end": 609, "text": "(i, ,k)", "ref_id": null }, { "start": 618, "end": 623, "text": "(i, )", "ref_id": null }, { "start": 687, "end": 692, "text": "(r, )", "ref_id": null }, { "start": 699, "end": 704, "text": "(i, )", "ref_id": null }, { "start": 740, "end": 745, "text": "(r, )", "ref_id": null }, { "start": 883, "end": 888, "text": "(i, )", "ref_id": null } ], "ref_spans": [ { "start": 970, "end": 978, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Word", "sec_num": "2.2.2" }, { "text": "Thus, to draw a word in language is equivalent to generating a path from the root to leaf nodes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word", "sec_num": "2.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r \u2192 i, i \u2192 w ( ) or r \u2192 w ( ) : Pr r \u2192 i, i \u2192 w ( ) |k = Pr (i|k) \u2022 Pr w ( ) |k, i", "eq_num": "(1)" } ], "section": "Word", "sec_num": "2.2.2" }, { "text": "Pr", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word", "sec_num": "2.2.2" }, { "text": "r \u2192 w ( ) |k = Pr w ( ) |k (2) animal green djur gr\u00f6n m\u00e5ngata djuren k = 1, . . . , K English Swedish stor turqoise i 1 i 2 r k = 1, . . . , K (i1,en,k) \u21e0 Dir \u21e3 (i1,en) \u2318 = Dir([ 00 ]) (i1,sv,k) \u21e0 Dir \u21e3 (i1,sv) \u2318 = Dir([ 00 , 00 ]) n (i,`,k) \u21e0 Dir \u21e3 (i,`) \u2318o I i=1 (r,en,k) \u21e0 Dir \u21e3 (r,en) \u2318 = Dir \u21e3h (r,en) 1 , (r,en) 2 , (r,en) 3 i\u2318 = Dir([3 0 , 2 0 , 0 ]) (r,sv,k) \u21e0 Dir \u21e3 (r,sv) \u2318 = Dir \u21e3h (r,sv) 1 , (r sv) 2 , (r,sv) 3 , (r,sv) 4 i\u2318 = Dir([3 0 , 2 0 , 0 , 0 ]) (r,`,k) \u21e0 Dir \u21e3 (r,`) \u2318", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word", "sec_num": "2.2.2" }, { "text": "From root to internal nodes and untranslated words From internal nodes to leaves k = 1, . . . , K", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word", "sec_num": "2.2.2" }, { "text": "An illustration of the tree structure used in word-level models. Hyperparameters \u03b2 (r, ) and \u03b2 (i, ) are both vectors, and \u03b2 and \u03b2 are scalars. In the figure, i 1 has three translations, so the corresponding hyperparameter \u03b2 (r,EN) 1 = \u03b2 (r,SV) 1 = 3\u03b2 . Document-topic distributions \u03b8 d are generated in the same way as monolingual LDA, because no document translation is required.", "cite_spans": [ { "start": 83, "end": 88, "text": "(r, )", "ref_id": null }, { "start": 95, "end": 100, "text": "(i, )", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "The use of dictionaries to model similarities across topic-word distributions has been formulated in other ways as well. PROBBILDA (Ma and Nasukawa 2017) uses inverted indexing (S\u00f8gaard et al. 2015) to encode assumptions that word translations are generated from same distributions. PROBBILDA does not use tree structures in the parameters as in VOCLINK, but the general idea of sharing distributions among word translations is similar. Guti\u00e9rrez et al. 
(2016) use part-of-speech taggers to separate topic words (nouns) and perspective words (adjectives and verbs), developed for the application of detecting cultural differences, such as how different languages have different perspectives on the same topic. Topic words are modeled in the same way as in VOCLINK, whereas perspective words are modeled in a monolingual fashion.", "cite_spans": [ { "start": 177, "end": 198, "text": "(S\u00f8gaard et al. 2015)", "ref_id": "BIBREF43" }, { "start": 437, "end": 460, "text": "Guti\u00e9rrez et al. (2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "Conceptually, the term \"knowledge transfer\" indicates that there is a process of carrying information from a source to a destination. Using the representations of graphical models, the process can be visualized as the dependence of random variables. For example, X \u2192 Y implies that the generation of variable Y is conditioned on X, and thus the information of X is carried to Y. If X represents a probability distribution, the distribution of Y is informed by X, presenting a process of knowledge transfer, as we define it in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Crosslingual Transfer in Probabilistic Topic Models", "sec_num": "3." }, { "text": "In our study, \"knowledge\" can be loosely defined as K multinomial distributions over the vocabularies: {\u03c6 (k) } K k=1 . Thus, to study the transfer mechanisms in topic models is to reveal how the models transfer {\u03c6 (k) } K k=1 from one language to another. To date, this transfer process has not been obvious in most models, because typical multilingual topic models assume the tokens in multiple languages are generated jointly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Crosslingual Transfer in Probabilistic Topic Models", "sec_num": "3." }, { "text": "In this section, we present a reformulation of these models that breaks down the cogeneration assumption of current models and instead explicitly show the dependencies between languages. Starting with a simple example in Section 3.1, we show that our alternative formulation derives the same collapsed Gibbs sampler, and thus the same posterior distribution over samples, as in the original model. With this prerequisite, in Section 3.3 we introduce the transfer operation, which will be used to generalize and extend current multilingual topic models in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Crosslingual Transfer in Probabilistic Topic Models", "sec_num": "3." }, { "text": "We start with a simple graphical model, where \u03b8 \u2208 R K + is a K-dimensional categorical distribution, drawn from a Dirichlet parameterized by \u03b1, a symmetric hyperparameter (Figure 4(a) ). Using \u03b8, the model generates two variables, X and Y, and we use x and y to denote the generated observations. In the co-generation assumption, the variables X and Y are generated from the same \u03b8 at the same time, without dependencies between each other. 
Thus, we call this the joint model denoted as G (X,Y) and the probability of the sample (x, y) is Pr x, y; \u03b1, G (X,Y) .", "cite_spans": [ { "start": 489, "end": 494, "text": "(X,Y)", "ref_id": null }, { "start": 553, "end": 558, "text": "(X,Y)", "ref_id": null } ], "ref_spans": [ { "start": 171, "end": 183, "text": "(Figure 4(a)", "ref_id": null } ], "eq_spans": [], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "According to Bayes' theorem, there are two equivalent ways to expand the probability of (x, y):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr (x, y; \u03b1) = Pr (x|y; \u03b1) \u2022 Pr (y; \u03b1)", "eq_num": "(3)" } ], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "Pr (x, y; \u03b1) = Pr (y|x; \u03b1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2022 Pr (x; \u03b1)", "eq_num": "(4)" } ], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "where we notice that the generated sample is conditioned on another sample: Pr (x|y; \u03b1) and Pr (y|x; \u03b1), which fits into our concept of \"transfer.\" We show both cases in Figures 4(b) and 4(c), and denote the graphical structures as G (Y|X) and G (X|Y) , respectively, to show the dependencies between the two variables. In this formulation, the model generates \u03b8 x from Dirichlet (\u03b1) first and uses \u03b8 x to generate the sample of x. Using the histogram of x denoted as n", "cite_spans": [ { "start": 234, "end": 239, "text": "(Y|X)", "ref_id": null }, { "start": 246, "end": 251, "text": "(X|Y)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "x = [n 1|x , n 2|x , . . . , n K|x ] x y \u21b5 \u2713 N x N y (a) \u21b5 x y N x N y \u2713 x \u2713 y|x (b) \u21b5 x y N x N y \u2713 y \u2713 x|y (c) Figure 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "(a) The co-generation assumption generates x and y at the same time from the same \u03b8. (b) To make the transfer process clear, we make the generation of y conditional on x and highlight the dependency in red. Because both x and y are exchangeable, the dependency can go the other way, as shown in (c).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "where n k|x is the number of instances of X assigned to category k, together with hyperparameter \u03b1, the model then generates a categorical distribution \u03b8 y|x \u223c Dir (n x + \u03b1), from which the sample y is drawn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "This differs from the original joint model in that original parameter vector \u03b8 has been replaced with two variable-specific parameter vectors. 
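A minimal numerical sketch of this conditional formulation, assuming toy values for K, α, and the sample sizes, is the following:

```python
# Sketch of the conditional (transfer) formulation G^(Y|X):
# theta_x ~ Dir(alpha), draw x; then theta_{y|x} ~ Dir(n_x + alpha), draw y.
import numpy as np

rng = np.random.default_rng(0)
K, alpha = 3, 0.5
N_x, N_y = 20, 15                          # toy sample sizes

theta_x = rng.dirichlet(np.full(K, alpha))
x = rng.choice(K, size=N_x, p=theta_x)
n_x = np.bincount(x, minlength=K)          # histogram [n_{1|x}, ..., n_{K|x}]

theta_y_given_x = rng.dirichlet(n_x + alpha)   # knowledge of x enters through the prior
y = rng.choice(K, size=N_y, p=theta_y_given_x)
```
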
The next section derives posterior inference with Gibbs sampling after integrating out the \u03b8 parameters, and we show that the sampler for each of two model formulations is equivalent and thus samples from an equivalent posterior distribution over x and y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Dependencies", "sec_num": "3.1" }, { "text": "General approaches to infer posterior distributions over graphical model variables include Gibbs sampling, variational inference, and hybrid approaches (Kim, Voelker, and Saul 2013) . We focus on collapsed Gibbs sampling (Griffiths and Steyvers 2004) , which marginalizes out the parameters (\u03b8 in the example above) to focus on the variables of interest (x and y in the example).", "cite_spans": [ { "start": 152, "end": 181, "text": "(Kim, Voelker, and Saul 2013)", "ref_id": "BIBREF24" }, { "start": 221, "end": 250, "text": "(Griffiths and Steyvers 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Collapsed Gibbs Sampling", "sec_num": "3.2" }, { "text": "Continuing with the example from the previous section, in each iteration of Gibbs sampling (a \"sweep\" of samples), the sampler goes through each example in the data, which can be viewed as sampling from the full posterior of a joint model G (X,Y) as in Figure 5 (a). Thus, when sampling an instance x i \u2208 x, the collapsed conditional likelihood is", "cite_spans": [ { "start": 241, "end": 246, "text": "(X,Y)", "ref_id": null } ], "ref_spans": [ { "start": 253, "end": 261, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Collapsed Gibbs Sampling", "sec_num": "3.2" }, { "text": "Pr x = k|x \u2212 , y; \u03b1 = Pr(x = k, x \u2212 , y; \u03b1) Pr(x \u2212 , y; \u03b1) (5) = \u0393 \u03b1 k + n k|x + n k|y \u0393 N x + N y + 1 \u03b1 \u2022 \u0393 N x + N (\u2212i) y + 1 \u03b1 \u0393 \u03b1 k + n (\u2212i) k|x + n k|y (6) = n (\u2212i) k|x + n k|y + \u03b1 k N (\u2212i) x + N y + 1 \u03b1 (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collapsed Gibbs Sampling", "sec_num": "3.2" }, { "text": "where x \u2212 is the set of tokens excluding the current one and n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collapsed Gibbs Sampling", "sec_num": "3.2" }, { "text": "(\u2212i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collapsed Gibbs Sampling", "sec_num": "3.2" }, { "text": "k|x is the number of instances x assigned to category k except the current x i . 
Note that in this equation, \u03b1 is the hyperparameter for the Dirichlet prior, which gets added to the counts in the formula after integrating out the parameters \u03b8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collapsed Gibbs Sampling", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G (X,Y ) \u2026 sweep 1 sweep 2 sweep t x sample y sample G (X,Y ) G (X,Y ) (a) \u2026 sweep 1 sweep 2 sweep t x sample y sample G (X|Y ) G (Y |X) G (X|Y ) G (Y |X) G (X|Y ) G (Y |X)", "eq_num": "(b)" } ], "section": "Collapsed Gibbs Sampling", "sec_num": "3.2" }, { "text": "Sampling from a joint model G (X,Y) (a) and two conditional models G (X|Y) and G (Y|X) (b) yields the same MAP estimates.", "cite_spans": [ { "start": 69, "end": 74, "text": "(X|Y)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "Using our formulation from the previous section, we can separate each sweep into two subprocedures, one for each variable. When sampling an instance of x i \u2208 x, the histogram of sample y is fixed, and therefore it is sampling from the conditional model of G (X|Y) . Thus, the conditional likelihood is", "cite_spans": [ { "start": 258, "end": 263, "text": "(X|Y)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr x = k|x \u2212 ; y, \u03b1, G (X|Y) = Pr(x = k, x \u2212 ; y, \u03b1) Pr(x \u2212 ; y, \u03b1) (8) = \u0393 n k|x + (n k|y + \u03b1 k ) \u0393 N x + (N y + 1 \u03b1) \u2022 \u0393 N x + (N (\u2212i) y + 1 \u03b1) \u0393 n (\u2212i) k|x + (n k|y + \u03b1 k )", "eq_num": "(9)" } ], "section": "Figure 5", "sec_num": null }, { "text": "= n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(\u2212i) k|x + (n k|y + \u03b1 k ) N (\u2212i) x + (N y + 1 \u03b1)", "eq_num": "(10)" } ], "section": "Figure 5", "sec_num": null }, { "text": "where the hyperparameter for variable X and category k becomes n k|y + \u03b1 k . Similarly, when sampling y i \u2208 y which is generated from the model G (Y|X) , the conditional likelihood is", "cite_spans": [ { "start": 146, "end": 151, "text": "(Y|X)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr y = k|y \u2212 ; x, \u03b1, G (Y|X) = n (\u2212i) k|y + (n k|x + \u03b1 k ) N (\u2212i) y + (N x + 1 \u03b1)", "eq_num": "(11)" } ], "section": "Figure 5", "sec_num": null }, { "text": "with n k|x + \u03b1 k as the hyperparameter for Y. This process is shown in Figure 5 (b). From the calculation perspective, although the meaning of Equations (7), (10), and (11) are different, their formulae are identical. This allows us to analyze similar models using the conditional formulation without changing the posterior estimation. 
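The equivalence can also be checked numerically; in the sketch below the counts are toy assumptions, and the symmetric hyperparameter is expanded as a vector so that the term 1·α corresponds to its sum:

```python
# Numerical check (toy counts): the collapsed update of the joint model, Eq. (7),
# and of the conditional model G^(X|Y), Eq. (10), coincide, because the "prior"
# of the conditional model is simply n_{k|y} + alpha_k.
import numpy as np

K, alpha = 3, 0.5
alpha_vec = np.full(K, alpha)
n_x_minus = np.array([4, 1, 2])            # counts of x with the current token removed
n_y = np.array([3, 0, 5])                  # counts of y

joint = (n_x_minus + n_y + alpha_vec) / (n_x_minus.sum() + n_y.sum() + alpha_vec.sum())
conditional = (n_x_minus + (n_y + alpha_vec)) / (n_x_minus.sum() + (n_y.sum() + alpha_vec.sum()))

assert np.allclose(joint, conditional)     # identical sampling distributions
```
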
A similar approach is the pseudo-likelihood approximation, where a joint model is reformulated as the combination of two conditional models, and the optimal parameters for the pseudo-likelihood function are the same as for the original joint likelihood function (Besag 1975; Koller and Friedman 2009; Lepp\u00e4-aho et al. 2017) .", "cite_spans": [ { "start": 598, "end": 610, "text": "(Besag 1975;", "ref_id": "BIBREF2" }, { "start": 611, "end": 636, "text": "Koller and Friedman 2009;", "ref_id": "BIBREF26" }, { "start": 637, "end": 659, "text": "Lepp\u00e4-aho et al. 2017)", "ref_id": null } ], "ref_spans": [ { "start": 71, "end": 79, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "Now that we have made the transfer process explicit and showed that this alternative formulation yields same collapsed posterior, we are able to describe a similar process in detail in the context of multilingual topic models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Operation", "sec_num": "3.3" }, { "text": "If we treat X and Y in the previous example as two languages, and the samples x and y as either words, tokens, or documents from the two languages, we have a bilingual data set (x, y). Topic models have more complex graphical structures, where the examples (tokens) are organized within certain scopes (e.g., documents). To define the transfer process for a specific topic model, when generating samples in one language based on the transfer process of the model, we have to specify what examples we want to use from another language, how much, and where we want to use them. To this end, we define the transfer operation, which allows us to examine different models under a unified framework to compare them systematically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Operation", "sec_num": "3.3" }, { "text": "Let \u2126 \u2208 R M be the target distribution of knowledge transfer with dimensionality M. A transfer operation on \u2126 from language 1 to 2 is defined as a function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "h \u2126 : R L 2 \u00d7L 1 \u00d7 N L 1 \u00d7M \u00d7 R L 2 \u00d7M + \u2192 R L 2 \u00d7M (12)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "where L 1 and L 2 are the relevant dimensionalities for languages 1 and 2 , respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "In this definition, the first argument of the transfer operation is where the two languages connect to each other, and can be defined as any bilingual supervision needed to enable transfer. The actual values of L 1 and L 2 depend on specific models. In an example of generating a document in language 2 , L 1 is the number of documents in languages 1 and L 2 = 1, and \u03b4 \u2208 R L 1 could be an binary vector where \u03b4 i = 1 if document i is the translation to current document in 2 , or zero otherwise. 
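For instance, a document-level instantiation of this operation can be sketched as below; the additive form δ·N^(ℓ1) + α anticipates Equation (16), and the counts and sizes are toy assumptions:

```python
# Hedged sketch of a document-level transfer operation (anticipating Eq. (16)):
# h_theta(delta, N^(l1), alpha) = delta . N^(l1) + alpha, with delta an indicator
# vector over the D^(l1) source documents. All values are toy assumptions.
import numpy as np

K, alpha = 4, 0.1
N_l1 = np.array([[5, 0, 2, 1],             # document-by-topic counts in language l1
                 [0, 7, 1, 0],
                 [2, 2, 2, 2]])

delta = np.array([0, 1, 0])                # source document 2 is the translation of d_l2

def h_theta(delta, N, alpha):
    return delta @ N + alpha               # Dirichlet prior for theta of the target document

print(h_theta(delta, N_l1, alpha))         # [0.1, 7.1, 1.1, 0.1]
```
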
This is the core of crosslingual transfer through the transfer operation; later we will see that different multilingual topic models mostly only differ in the input of this argument, and designing this matrix is critical for an efficient knowledge transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "The second argument in the transfer operation is the sufficient statistics of the transfer source ( 1 in the definition). After generating instances in language 1 , the statistics are organized into a matrix. The last argument is a prior distribution over the possible target distributions \u2126.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "The output of the transfer operation depends on and has the same dimensionality as the target distribution, which will be used as the prior to generate a multinomial distribution. Let \u2126 be the target distribution from which a topic of language 2 is generated: z \u223c Multinomial (\u2126). With a transfer operation, a topic is generated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2126 \u223c Dirichlet h \u2126 \u03b4, N ( 1 ) , \u03be (13) z \u223c Multinomial (\u2126)", "eq_num": "(14)" } ], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "where \u03b4 is bilingual supervision, N ( 1 ) the generated sample of language 1 , and \u03be a prior distribution with the same dimensionality as \u2126. See Figure 6 for an illustration. In summary, this definition highlights three elements that are necessary to enable transfer:", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 153, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "(1) language transformations or supervision from the transfer source to destination;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "(2) data statistics in the source; and (3) a prior on the destination.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "In the next section, we show how different topic models can be formulated with transfer operations, as well as how transfer operations can be used in the design of new models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Transfer operation)", "sec_num": null }, { "text": "In this section, we describe four representative multilingual topic models in terms of the transfer operation formulation. These are also the models we will experiment on in Section 5. The plate notations of these models are shown in Figure 7 , and we provide notations frequently used in these models in Table 1 . Statistics from source N (`1) Prior knowledge", "cite_spans": [ { "start": 340, "end": 344, "text": "(`1)", "ref_id": null } ], "ref_spans": [ { "start": 234, "end": 242, "text": "Figure 7", "ref_id": null }, { "start": 305, "end": 312, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Representative Models", "sec_num": "4." 
}, { "text": "\u21e0 h \u2326 \u21e3 , N (`1) , \u21e0 \u2318 biologi djur \u00f6vers\u00e4tt", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representative Models", "sec_num": "4." }, { "text": "An illustration of a transfer operation on a 3-dimensional Dirichlet distribution. The first argument of h \u2126 is a bilingual supervision \u03b4, which is a 3 \u00d7 3 matrix, where L 1 = L 2 = 3, indicating word translations between two languages. The second argument N ( 1 ) is the statistics (or histogram) from the sample in language 1 , whose dimension is aligned with \u03b4, and M = 1. With \u03be as the prior knowledge (a symmetric hyperparameter), the result of h \u2126 is then used as hyperparameters for the Dirichlet distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "\u2318 z \u21b5 w w z D (`1,`2) \u2713 K (`1,k) (`2,k) K D (`1,`2) \u21b5 K w z \u2713 K (`1,k) (`2,k) K` z \u21b5 K w w z \u2713 \u2713 D (`1) D (`2) \u21b5 i (i,`1,k) (i,`2,k) I (r,k) \u21b5 D (`1) D (`2) K (`1,k) (`2,k) K z w w z \u2713 d,`2 \u2713 d,`1 e d`2 doclink voclink softlink c-bilda (i,`1) (i,`2) (r) (`1) (`1) (`1) (`2) (`2) (`2) N d,`1 N d,`1 N d,`1 N d,`2 N d,`2 N d,`2 N d,`1 + N d,`2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "Plate notations of DOCLINK, C-BILDA, SOFTLINK, and VOCLINK (from left to right). We use red lines to make the knowledge transfer component clear. Note that in VOCLINK we assume every word is translated, so the plate notation does not include untranslated words. \u03b2 (r, ) An asymmetric Dirichlet prior vector of size I + V ( ,\u2212) , where I is the number of internal nodes in a Dirichlet tree, and V ( ,\u2212) the number of untranslated words in language . 
Each cell is denoted as \u03b2 (r, ) i , indicating a scalar prior to a specific node i or an untranslated word type.", "cite_spans": [ { "start": 264, "end": 269, "text": "(r, )", "ref_id": null }, { "start": 321, "end": 326, "text": "( ,\u2212)", "ref_id": null }, { "start": 475, "end": 480, "text": "(r, )", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "\u03b2 (i, ) A symmetric Dirichlet prior vector of size", "cite_spans": [ { "start": 2, "end": 7, "text": "(i, )", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "V ( ) i , where V ( ) i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "is the number of word types in language under internal node i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "\u03c6 ( ,k) Multinomial distribution over word types in language of topic k for topic k.", "cite_spans": [ { "start": 2, "end": 7, "text": "( ,k)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "\u03c6 (r, ,k) Multinomial distribution over internal nodes in a Dirichlet tree for topic k.", "cite_spans": [ { "start": 2, "end": 9, "text": "(r, ,k)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "\u03c6 (i, ,k) Multinomial distribution over all word types in language under internal node i for topic k.", "cite_spans": [ { "start": 2, "end": 9, "text": "(i, ,k)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "Typical multilingual topic models are designed based on simple observations of multilingual data, such as parallel corpora and dictionaries. We focus on three popular models, and re-formulate them using the conditional generation assumption and the transfer operation we introduced in the previous sections. 15where D ( 1 ) is the number of documents in language 1 . Thus, the transfer operation for each document d 2 can be defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Models", "sec_num": "4.1" }, { "text": "h \u03b8 d, 2 \u03b4, N ( 1 ) , \u03b1 = \u03b4 \u2022 N ( 1 ) + \u03b1 (16)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Models", "sec_num": "4.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Models", "sec_num": "4.1" }, { "text": "N ( 1 ) \u2208 N D ( 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Models", "sec_num": "4.1" }, { "text": "\u00d7K is the sufficient statistics from language 1 , and each cell n dk is the count of topic k appearing in document d. We call this a \"document-level\" model, because the transfer target distribution is document-wise. On the other hand, DOCLINK does not have any word-level knowledge, such as dictionaries, so the transfer operation on \u03c6 in DOCLINK is straightforward. For every topic k = 1, . . . 
, K and each word type w regardless of its language,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Models", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h \u03c6 ( 2 ,k) 0, N ( 1 ) , \u03b2 ( 2 ) = 0 \u2022 N ( 1 ) + \u03b2 ( 2 ) = \u03b2 ( 2 )", "eq_num": "(17)" } ], "section": "Standard Models", "sec_num": "4.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Models", "sec_num": "4.1" }, { "text": "\u03b2 ( 2 ) \u2208 R V ( 2 ) +", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Models", "sec_num": "4.1" }, { "text": "is a symmetric Dirichlet prior for the topic-vocabulary distributions \u03c6 ( 2 ,k) , and V 2is the size of vocabulary in language 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Models", "sec_num": "4.1" }, { "text": "As a variation of DOCLINK, C-BILDA has all of the components of DOCLINK and has the same transfer operations on \u03b8 and \u03c6 as in Equations 16and 17, so this model is considered as a document-level model as well. Recall that C-BILDA additionally models topic-language distributions \u03b7. 1 For each document pair d and each topic k, a bivariate Bernoulli distribution over the two languages", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-BILDA.", "sec_num": "4.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b7 (k,d) \u2208 R 2 + is drawn from a Beta distribution parameterized by \u03c7 (d, 1 ) , \u03c7 (d, 2 ) : \u03b7 (k,d) \u223c Beta \u03c7 (d, 1 ) , \u03c7 (d, 2 ) (18) (k,m) \u223c Bernoulli \u03b7 (k,d)", "eq_num": "(19)" } ], "section": "C-BILDA.", "sec_num": "4.1.2" }, { "text": "where (k,m) is the language of the m-th token assigned to topic k in the entire document pair d. Intuitively, \u03b7 (k,d) is the probability of generating a token in language given the current document pair d and topic k. Before diving into the specific definition of the transfer operation for this model, we need to take a closer look at the generative process of C-BILDA first, because in this model, language itself is a random variable as well. We describe the generative process in terms of the conditional formulation where one language is conditioned on the other. As usual, a monolingual model first generates documents in 1 , and at this point each document pair d only has tokens in one language. Then for each document pair d, the conditional model additionally generates a number of topics z using the transfer operation on \u03b8 as defined in Equation (16). Instead of directly drawing a new word type in language 2 according to z, C-BILDA adds a step to generate a language from \u03b7 (z,d) . Because the current token is supposed to be in language 2 , if = 2 , this token is dropped, and the model keeps drawing the next topic z; otherwise, a word type is drawn from \u03c6 (z, 2 ) and attached to the document pair d. Once this process is over, each ", "cite_spans": [ { "start": 6, "end": 11, "text": "(k,m)", "ref_id": null }, { "start": 112, "end": 117, "text": "(k,d)", "ref_id": null }, { "start": 988, "end": 993, "text": "(z,d)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "C-BILDA.", "sec_num": "4.1.2" }, { "text": "h \u2713 N (en) h \u2713 h\u2318 (k ,d ) . . . 
topics counts \u2713 \u2713 \u2318 K", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-BILDA.", "sec_num": "4.1.2" }, { "text": "An illustration of difference between DOCLINK and C-BILDA in sequential generating process. DOCLINK uses a transfer operation on \u03b8 to generate topics and then word types in Swedish (SV). Additionally, C-BILDA uses a transfer operation on \u03b7 to generate a language label according to a topic z. If the language generated is in Swedish, it draws a word type from the vocabulary; otherwise, the token is discarded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "document pair d contains tokens from two languages, and by separating the tokens based on their languages we can obtain the corresponding set of comparable document pairs. Conceptually, C-BILDA adds an additional \"selector\" in the generative process to decide if a topic should appear more in 2 based on topics in 1 . We use Figure 8 as an illustration to show the difference between DOCLINK and C-BILDA. It is clear that the generation of tokens in language 2 is affected by that of language 1 ; thus we define an additional transfer operation on \u03b7 (k,d) . The bilingual supervision \u03b4 is the same as Equation 15, which is a vector of dimension D ( 1 ) indicating document translations. We denote the statistics term N", "cite_spans": [ { "start": 550, "end": 555, "text": "(k,d)", "ref_id": null } ], "ref_spans": [ { "start": 325, "end": 333, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "( 1 ) k \u2208 R D ( 1 ) \u00d72", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": ", where each cell in the first column n dk is the counts of topic k in document d, while the second column is a zero vector. Lastly, the prior term is also a two-dimensional vector \u03c7 (d) ", "cite_spans": [ { "start": 183, "end": 186, "text": "(d)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "= \u03c7 (d, 1 ) , \u03c7 (d, 1 ) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Together, we have the transfer operation defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h \u03b7 (k,d) \u03b4, N ( 1 ) k , \u03c7 (d) = \u03b4 \u2022 N ( 1 ) k + \u03c7 (d)", "eq_num": "(20)" } ], "section": "Figure 8", "sec_num": null }, { "text": "4.1.3 VOCLINK. Jagarlamudi and Daum\u00e9 III (2010) and Boyd-Graber and introduced another type of multilingual topic model, which uses a dictionary for wordlevel supervision instead of parallel/comparable documents as supervision, and we call this model VOCLINK. 
2 Because no document-level supervision is used, the transfer operation on \u03b8 is simply defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h \u03b8 d, 2 0, N ( 1 ) , \u03b1 = 0 \u2022 N ( 1 ) + \u03b1 = \u03b1", "eq_num": "(21)" } ], "section": "Figure 8", "sec_num": null }, { "text": "We now construct the transfer operation on the topic-word distribution \u03c6 based on the tree-structued priors in VOCLINK ( Figure 3 ). Recall that each word w ( ) is associated with at least one path, denoted as \u03bb w ( ) . If w ( ) is translated, the path is", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 129, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "\u03bb w ( ) = r \u2192 i, i \u2192 w ( )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "where r is the root and i an internal node; otherwise, the path is simply the edge from root to that word. Thus, on the first level of the tree, the Dirichlet \u2212) , where I is the number of internal nodes (i.e., word translation entries), and V ( 2 ,\u2212) are the untranslated word types in language", "cite_spans": [ { "start": 159, "end": 161, "text": "\u2212)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "distribution \u03c6 (r, 2 ,k) is of dimension I + V ( 2 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "2 . Let \u03b4 \u2208 R (I+V ( 2 ,\u2212) )\u00d7V1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "+ be an indicator matrix where V 1 is the number of translated words in language 1 , and each cell is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "\u03b4 i,w ( 1 ) = 1 w ( 1 ) is under node i (22)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Given a topic k, the statistics argument N ( 1 ) \u2208 R V 1 is a vector where each cell n w is the count of word w assigned to topic k. Note that in the tree structure, the prior for Dirichlet is asymmetric and is scaled by the number of translations under each internal node. 
Thus, the transfer operation on \u03c6 (r, 2 ,k) is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h \u03c6 (r, 2 ,k) \u03b4, N ( 1 ) , \u03b2 (r, 2 ) = \u03b4 \u2022 N ( 1 ) + \u03b2 (r, 2 )", "eq_num": "(23)" } ], "section": "Figure 8", "sec_num": null }, { "text": "Under each internal node, the Dirichlet is only related to specific languages, so no transfer happens, and the transfer operation on \u03c6 (i, 2 ,k) for an internal node i is simply \u03b2 (i, 2 ) :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h \u03c6 (i, 2 ,k) 0, N ( 1 ) , \u03b2 (i, 2 ) = 0 \u2022 N ( 1 ) + \u03b2 (i, 2 ) = \u03b2 (i, 2 )", "eq_num": "(24)" } ], "section": "Figure 8", "sec_num": null }, { "text": "We have formulated three representative multilingual topic models by defining transfer operations for each model above. Our recent work, called SOFTLINK , is explicitly designed according to the understanding of this transfer process. We present this model as a demonstration of how transfer operations can be used to build new multilingual topic models, which might not have an equivalent formulation using the standard co-generation model, by modifying the transfer operation. In DOCLINK, the supervision argument \u03b4 in the transfer operation is constructed using comparable data sets. This requirement, however, substantially limits the data that can be used. Moreover, the supervision \u03b4 is also limited by the data; if there is no translation available to a target document, \u03b4 is an all-zero vector, and the transfer operation defined in Equation (16) will cancel out all the available information N ( 1 ) for the target document, which is an ineffective use of the resource. Unlike parallel corpora, dictionaries are widely available and often easy to obtain for many languages. Thus, the general idea of SOFTLINK is to use a dictionary to retrieve as much as possible information from 1 to construct \u03b4 in a way that links potentially comparable documents together, even if the corpus itself does not explicitly link together documents. Specifically, for a document d 2 , instead of a pre-defined indicator vector, SOFTLINK defines \u03b4 as a probabilistic distribution over all documents in language 1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SOFTLINK: A Transfer Operation-Based Model", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b4 d 1 \u221d | w ( 1 ) \u2229 w ( 2 ) | | w ( 1 ) \u222a w ( 2 ) |", "eq_num": "(25)" } ], "section": "SOFTLINK: A Transfer Operation-Based Model", "sec_num": "4.2" }, { "text": "where {w ( ) } contains all the word types that appear in document d , and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SOFTLINK: A Transfer Operation-Based Model", "sec_num": "4.2" }, { "text": "w ( 1 ) \u2229 w ( 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SOFTLINK: A Transfer Operation-Based Model", "sec_num": "4.2" }, { "text": "indicates all word pairs w ( 1 ) , w ( 2 ) in a dictionary as translations. 
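A hedged sketch of computing such a transfer distribution is given below; the toy documents and dictionary, the mapping of ℓ1 word types into ℓ2 before measuring overlap, and the final normalization are implementation assumptions made for illustration:

```python
# Sketch of SOFTLINK's transfer distribution (Eq. (25)) for one target document.
# Toy Swedish-English data; l1 word types are mapped into l2 via the dictionary
# before measuring overlap, an implementation choice assumed for this example.
import numpy as np

dictionary = {"hund": "dog", "katt": "cat", "djur": "animal"}   # l1 (sv) -> l2 (en)

docs_l1 = [{"hund", "katt", "springa"},     # source documents as sets of word types
           {"aktie", "marknad", "djur"}]
doc_l2 = {"dog", "cat", "pet"}              # current target document

def transfer_distribution(docs_l1, doc_l2, dictionary):
    scores = []
    for d in docs_l1:
        translated = {dictionary[w] for w in d if w in dictionary}
        overlap = len(translated & doc_l2)                       # dictionary-linked word pairs
        union = len(translated | doc_l2) + len(d - dictionary.keys())
        scores.append(overlap / union)
    scores = np.array(scores)
    return scores / scores.sum() if scores.sum() > 0 else scores

delta = transfer_distribution(docs_l1, doc_l2, dictionary)
print(delta)        # higher weight on the first, more-overlapping source document
```
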
Thus, \u03b4 d 1 can be interpreted as the \"probability\" of d 1 being the translation of d 2 . We call \u03b4 the transfer distribution. See Figure 9 for an illustration. topic 2 topic 3 topic 1 topic 2 topic 3 1 2 3 1 2 3 1 2 3 1 2 3 topics", "cite_spans": [], "ref_spans": [ { "start": 207, "end": 215, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "SOFTLINK: A Transfer Operation-Based Model", "sec_num": "4.2" }, { "text": "counts = [0, 0, 0, 0] = [0.05, 0.15, 0.7, 0] doclink softlink h\u2713 \u21e3 , N (`1) , \u21b5 \u2318 h\u2713 \u21e3 , N (`1) , \u21b5 \u2318 d 1 d 2 d 3 d 4 topic 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SOFTLINK: A Transfer Operation-Based Model", "sec_num": "4.2" }, { "text": "An example of how different inputs of transfer operation result in different Dirichlet priors through DOCLINK and SOFTLINK. The middle is a mini-corpus in language 1 and each document's topic histogram. When a document in 2 is not translation to any of those in 1 , DOCLINK defines \u03b4 as an all-zero vector which leads to an uninformative symmetric prior. In contrast, SOFTLINK uses a dictionary to create \u03b4 as a distribution so that the topic histogram in each document in 1 can still be proportionally transferred.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "In our initial work, we show that instead of a dense distribution, it is more efficient to make the transfer distributions sparse by thresholding,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "\u03b4 d 1 \u221d 1 \u03b4 d 1 > \u03c0 \u2022 max(\u03b4) \u2022 \u03b4 d 1 (26)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "where \u03c0 \u2208 [0, 1] is a fixed threshold parameter. With the same definition of N ( 1 ) and \u03b1 in Equation 16and \u03b4 defined as Equation 25, SOFTLINK completes the same transfer operations,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h \u03b8 d, 2 \u03b4, N ( 1 ) , \u03b1 = \u03b4 \u2022 N ( 1 ) + \u03b1 (27) h \u03c6 ( 2 ,k) 0, N ( 1 ) , \u03b2 ( 2 ) = 0 \u2022 N ( 1 ) + \u03b2 ( 2 ) = \u03b2 ( 2 )", "eq_num": "(28)" } ], "section": "Figure 9", "sec_num": null }, { "text": "We categorize transfer operations into two groups based on the target transfer distribution. Document-level operations transfer knowledge on distributions related to the entire document, such as \u03b8 in DOCLINK, C-BILDA, and SOFTLINK, and \u03b7 in C-BILDA. Word-level operations transfer knowledge on those related to the entire vocabulary or specific word types, such as \u03c6 in VOCLINK. When a model only has transfer operations on just one specific level, we also use the transfer level to refer the model. For example, DOCLINK, C-BILDA, and SOFTLINK are all document-level models, while VOCLINK is a word-level model. Those that transfer knowledge on multiple levels, such as Hu et al. 
(2014b), are called mixed-level models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary: Transfer Levels and Transfer Models", "sec_num": "4.3" }, { "text": "We summarize the transfer operation definitions for different models in Table 2 , and add monolingual LDA as a reference to show how transfer operations are defined when no transfer takes place. We will experiment on the four multilingual models in Sections 4.1.1 through 4.2.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Summary: Transfer Levels and Transfer Models", "sec_num": "4.3" }, { "text": "From discussions above, we are able to describe various multilingual topic models by defining different transfer operations, which explicitly represent the language transfer process. When designing and applying those transfer operations in practice, some Table 2 Summary of transfer operations defined in the compared models, where we assume the direction of transfer is from 1 to 2 .", "cite_spans": [], "ref_spans": [ { "start": 255, "end": 262, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiment Settings", "sec_num": "5." }, { "text": "Document level Word level Parameters of h", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "LDA \u03b1 \u03b2 ( 2 ) - DOCLINK \u03b4 \u2022 N ( 1 ) + \u03b1 \u03b2 ( 2 ) \u03b4: indicator vector; \u03b4 \u2022 N ( 1 ) + \u03b1, N ( 1 ) : doc-by-topic matrix; C-BILDA \u03b4 \u2022 N ( 1 ) k + \u03c7 (d) \u03b2 ( 2 ) supervision: comparable documents; \u03b4: transfer distribution; SOFTLINK \u03b4 \u2022 N ( 1 ) + \u03b1 \u03b2 ( 2 ) N ( 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": ": doc-by-topic matrix; supervision: dictionary; \u03b4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "indicator vector; VOCLINK \u03b1 \u03b4 \u2022 N ( 1 ) + \u03b2 (r, 2 ) N ( 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": ": node-by-word matrix; supervision: dictionary; natural questions arise, such as which transfer operation is more effective in what type of situation, and how to design a model that is more generalizable regardless of availability of multilingual resources. To study the model behaviors empirically, we train the four models described in the previous section-DOCLINK, C-BILDA, SOFTLINK, and VOCLINK-in ten languages. Considering the resources available, we separate the ten languages into two groups: high-resource languages (HIGHLAN) and low-resource languages (LOWLAN). For HIGHLAN, we have relatively abundant resources such as dictionary entries and document translations. We additionally use these languages to simulate the settings of LOWLAN by training multilingual topic models with different amounts of resources. For LOWLAN, we use all resources available to verify experiment results and conclusions from HIGHLAN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We separate the ten languages into two groups: HIGHLAN and LOWLAN. 
In this section, we describe the preprocessing details of these languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Groups and Preprocessing", "sec_num": "5.1" }, { "text": "Languages in this group have a relatively large amount of resources, and have been widely experimented on in multilingual studies. Considering language diversity, we select representative languages from five different families: Arabic (AR, Semitic), German (DE, Germanic), Spanish (ES, Romance), Russian (RU, Slavic), and Chinese (ZH, Sinitic). We follow standard preprocessing procedures: We first use stemmers to process both documents and dictionaries (segmenter for Chinese), then we remove stopwords based on a fixed list and the most 100 frequent word types in the training corpus. The tools for preprocessing are listed in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 630, "end": 637, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "HIGHLAN.", "sec_num": "5.1.1" }, { "text": "Languages in this group have much fewer resources than those in HIGH-LAN, considered as low-resource languages. We similarly select five languages from different families: Amharic (AM, Afro-Asiatic), Aymara (AY, Aymaran), Macedonian (MK, Indo-European), Swahili (SW, Niger-Congo), and Tagalog (TL, Austronesian). Note that some of these are not strictly \"low-resource\" compared with many endangered languages. For the truly low-resource languages, it is very difficult to test the models with enough data, and, therefore, we choose languages that are understudied in natural language processing literature. Preprocessing in this language group needs more consideration. Because they represent low-resource languages that most natural language processing tools are not available for, we do not use a fixed stopword list. Stemmers are also not available for these languages, so we do not apply stemming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOWLAN.", "sec_num": "5.1.2" }, { "text": "There are many resources available for multilingual research, such as the European Parliament Proceedings parallel corpus (EUROPARL; Koehn 2005), the Bible, and Wikipedia. EuroParl provides a perfectly parallel corpus with precise translations, but it only contains 21 European languages, which limits its generalizability to most of the languages. The Bible, on the other hand, is also perfectly parallel and is widely available in 2,530 languages. 7 Its disadvantages, however, are that the contents are very limited (mostly about family and religion), the data set size is small (1,189 chapters), and many languages do not have digital format (Christodoulopoulos and Steedman 2015).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Sets and Model Configurations", "sec_num": "5.2" }, { "text": "Compared with EUROPARL and the Bible, Wikipedia provides comparable documents in many languages with a large range of content, making it a very popular choice for many multilingual studies. In our experiments, we create ten bilingual Wikipedia corpora, each containing documents in one of the languages in either HIGHLAN or LOWLAN, paired with documents in English (EN). 
Though most multilingual topic models are not restricted to training bilingual corpora paired with English, this is a helpful way to focus our experiments and analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Sets and Model Configurations", "sec_num": "5.2" }, { "text": "We present the statistics of the training corpus of Wikipedia and the dictionary we use (from Wiktionary) in the experiments in Table 4 . Note that we train topic models on bilingual pairs, where one of the languages is always English, so in the table we show statistics of English in every bilingual pair as well.", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 135, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Training Sets and Model Configurations", "sec_num": "5.2" }, { "text": "Lastly, we summarize the model configurations in Table 5 . The goal of this study is to bring current multilingual topic models together, studying their corresponding strengths and limitations. To keep the experiments as comparable as possible, we use constant hyperparameters that are consistent across the models. For all models, we set the Dirichlet hyperparameter \u03b1 k = 0.1 for each topic k = 1, . . . , K. We run 1,000 Gibbs sampling iterations on the training set and 200 iterations on the test sets. The number of topics K is set to 20 by default for efficiency reasons. ", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 56, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Training Sets and Model Configurations", "sec_num": "5.2" }, { "text": "We set \u03b2 to be a symmetric vector where each cell \u03b2 i = 0.01 for all word types of all the languages, and use the MALLET implementation for training (McCallum 2002) . To enable consistent comparison, we disable hyperparameter optimization provided in the package.", "cite_spans": [ { "start": 149, "end": 164, "text": "(McCallum 2002)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "DOCLINK", "sec_num": null }, { "text": "Following the experiment results from Heyman, Vulic, and Moens (2016), we set \u03c7 = 2 to make the results more competitive to DOCLINK. The rest of the settings are the same as for DOCLINK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-BILDA", "sec_num": null }, { "text": "We use the document-wise thresholding approach for calculating the transfer distributions. The focus threshold is set to 0.8. The rest of the settings are the same as for DOCLINK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SOFTLINK", "sec_num": null }, { "text": "We set the scalar \u03b2 = 0.01 for hyperparameter \u03b2 (r, ) from the root to both internal nodes or leaves. For those from internal nodes to leaves, we set \u03b2 = 100, following the settings in Hu et al. (2014b) .", "cite_spans": [ { "start": 48, "end": 53, "text": "(r, )", "ref_id": null }, { "start": 185, "end": 202, "text": "Hu et al. (2014b)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "VOCLINK", "sec_num": null }, { "text": "We evaluate all models using both intrinsic and extrinsic metrics. Intrinsic evaluation is used to measure the topic quality or coherence learned from the training set, and extrinsic evaluation measures performance after applying the trained distributions to downstream crosslingual applications. For all the following experiments and tasks, we start by analyzing languages in HIGHLAN. 
Then we apply the analyzed results to LOWLAN. We choose topic coherence (Hao, Boyd-Graber, and Paul 2018) and crosslingual document classification (Smet, Tang, and Moens 2011) as intrinsic and extrinsic evaluation tasks, respectively. The reason for choosing these two tasks is that they examine the models from different angles: Topic coherence looks at topic-word distributions, whereas classification focuses on document-topic distributions. Other evaluation tasks, such as word translation detection and crosslingual information retrieval, also utilize the trained distributions, but here we focus on a straightforward and representative task.", "cite_spans": [ { "start": 458, "end": 491, "text": "(Hao, Boyd-Graber, and Paul 2018)", "ref_id": "BIBREF16" }, { "start": 533, "end": 561, "text": "(Smet, Tang, and Moens 2011)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "5.3.1 Intrinsic Evaluation: Topic Quality. Intrinsic evaluation refers to evaluating the learned model directly without applying it to any particular task; for topic models, this is usually based on the quality of the topics. Standard evaluation measures for monolingual models, such as perplexity (or held-out likelihood; Wallach et al. 2009) and Normalized Pointwise Mutual Information (NPMI, Lau, Newman, and Baldwin (2014)), could potentially be considered for crosslingual models. However, when evaluating multilingual topics, how words in different languages make sense together is also a critical criterion in addition to coherence within each of the languages.", "cite_spans": [ { "start": 323, "end": 343, "text": "Wallach et al. 2009)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "In monolingual studies, show that held-out likelihood is not always positively correlated with human judgments of topics. Held-out likelihood is additionally suboptimal for multilingual topic models, because this measure is only calculated within each language, and the important crosslingual information is ignored.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "Crosslingual Normalized Pointwise Mutual Information (CNPMI; Hao, Boyd-Graber, and Paul 2018) is a measure designed specifically for multilingual topic models. Extended from the widely used NPMI to measure topic quality in multilingual settings, CNPMI uses a parallel reference corpus to extract crosslingual coherence. CNPMI correlates well with bilingual speakers' judgments on topic quality and predictive performance in downstream applications. Therefore, we use CNPMI for intrinsic evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "Normalized Pointwise Mutual Information, CNPMI) Let W ( 1 , 2 ) C", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2 (Crosslingual", "sec_num": null }, { "text": "be the set of top C words in a bilingual topic, and R ( 1 , 2 ) a parallel reference corpus. 
The CNPMI of this topic is calculated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2 (Crosslingual", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathrm{CNPMI}\\left(W_C^{(\\ell_1,\\ell_2)}\\right) = -\\frac{1}{C^2} \\sum_{w_i, w_j \\in W_C^{(\\ell_1,\\ell_2)}} \\frac{\\log \\frac{\\Pr(w_i, w_j)}{\\Pr(w_i)\\Pr(w_j)}}{\\log \\Pr(w_i, w_j)}", "eq_num": "(29)" } ], "section": "Definition 2 (Crosslingual", "sec_num": null }, { "text": "where w_i and w_j are from languages \\ell_1 and \\ell_2, respectively. Let d = (d^{(\\ell_1)}, d^{(\\ell_2)}) be a pair of parallel documents from the reference corpus R^{(\\ell_1,\\ell_2)}, whose size is denoted as |R^{(\\ell_1,\\ell_2)}|. The count |\\{d : w_i \\in d^{(\\ell_1)}, w_j \\in d^{(\\ell_2)}\\}| is the number of parallel document pairs in which w_i and w_j both appear. The co-occurrence probability of a word pair and the probability of a single word are calculated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2 (Crosslingual", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\Pr(w_i, w_j) = \\frac{\\left|\\{d : w_i \\in d^{(\\ell_1)}, w_j \\in d^{(\\ell_2)}\\}\\right|}{\\left|R^{(\\ell_1,\\ell_2)}\\right|} \\quad (30) \\qquad \\Pr(w_i) = \\frac{\\left|\\{d : w_i \\in d^{(\\ell_1)}\\}\\right|}{\\left|R^{(\\ell_1,\\ell_2)}\\right|}", "eq_num": "(31)" } ], "section": "Definition 2 (Crosslingual", "sec_num": null }, { "text": "Intuitively, a coherent topic should contain words that make sense or fit in a specific context together. In the multilingual case, CNPMI measures how likely it is that a bilingual word pair appears in a similar context provided by the parallel reference corpus. We provide toy examples in Figure 10 , where we show three bilingual topics. In Topic A, both languages are about \"language,\" and all the bilingual word pairs have a high probability of appearing in the same comparable document pairs. Thus Topic A is coherent crosslingually and is expected to have a high CNPMI score. Although we can identify the themes within each language in Topic B, that is, education in English and biology in Swahili, most of the bilingual word pairs do not make sense or appear in the same context, which gives us a low CNPMI score. The last topic is not coherent even within each language, so it has low CNPMI as well. Through this example, we see that CNPMI detects crosslingual coherence in multiple ways, unlike other intrinsic measures that might be adapted for crosslingual models.", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 299, "text": "Figure 10", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Definition 2 (Crosslingual", "sec_num": null }, { "text": "In our experiments, we use 10,000 linked Wikipedia article pairs for each language pair (EN, \u2113) (20,000 in total) as the reference corpus, and set C = 10 by default. Note that HIGHLAN has more Wikipedia articles, and we make sure the articles used for evaluating CNPMI scores do not appear in the training set. 
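As a concrete reading of Definition 2, here is a minimal sketch of CNPMI over a parallel reference corpus; the counting of document frequencies and the handling of zero co-occurrences are simplifying assumptions of this sketch, not the exact evaluation code.

```python
import math
from itertools import product

def cnpmi(top_words_l1, top_words_l2, reference_pairs):
    """Sketch of Equations (29)-(31).

    top_words_l1, top_words_l2: the top-C topic words in each language (equal length C).
    reference_pairs: list of (doc_l1, doc_l2) parallel documents, each a set of word types."""
    R = len(reference_pairs)
    C = len(top_words_l1)
    total = 0.0
    for wi, wj in product(top_words_l1, top_words_l2):
        n_i = sum(1 for d1, _ in reference_pairs if wi in d1)
        n_j = sum(1 for _, d2 in reference_pairs if wj in d2)
        n_ij = sum(1 for d1, d2 in reference_pairs if wi in d1 and wj in d2)
        if n_ij == 0 or n_i == 0 or n_j == 0:
            continue  # assumption: a pair that never co-occurs contributes zero
        p_i, p_j, p_ij = n_i / R, n_j / R, n_ij / R
        total += math.log(p_ij / (p_i * p_j)) / math.log(p_ij)
    return -total / (C * C)   # the leading minus sign matches Equation (29)
```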
However, for LOWLAN, because the number of linked Wikipedia articles is extremely limited, we use all the available pairs to evaluate CNPMI scores. The statistics are shown in Table 6 . Topic A (English-Amharic) Topic B (English-Swahili) Topic C (English-Macedonian) cnpmi = 0.3632 cnpmi = 0.0094 cnpmi = 0.0643", "cite_spans": [], "ref_spans": [ { "start": 488, "end": 495, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Definition 2 (Crosslingual", "sec_num": null }, { "text": "CNPMI measures how likely a bilingual word pair appears in a similar context in two languages, provided by a reference corpus. Topic A has a high CNPMI score because both languages are talking about the same theme. Both Topic B and Topic C are incoherent multilingual topics, although Topic B is coherent within each language. and Moens 2011; Vuli\u0107 et al. 2015; Heyman, Vulic, and Moens 2016) . Typically, a model is trained on a multilingual training set D ( 1 , 2 ) in languages 1 and 2 . Using the trained topic-vocabulary distributions \u03c6, the model infers topics in test sets D ( 1 ) and D 2. In multilingual topic models, document-topic distributions \u03b8 can be used as features for classification, where the \u03b8 d, 1 vectors in language 1 train a classifier tested by the \u03b8 d, 2 vectors in language 2 . A better classification performance indicates more consistent features across languages. See Figure 11 for an illustration. In our experiments, we use a linear support vector machine to train multilabel classifiers with five-fold cross-validation. Then, we use micro-averaged F-1 scores to evaluate and compare performance across different models.", "cite_spans": [ { "start": 343, "end": 361, "text": "Vuli\u0107 et al. 2015;", "ref_id": "BIBREF48" }, { "start": 362, "end": 392, "text": "Heyman, Vulic, and Moens 2016)", "ref_id": null } ], "ref_spans": [ { "start": 898, "end": 907, "text": "Figure 11", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "For crosslingual classification, we also require held-out test data with labels or annotations. In our experiments, we construct test sets from two sources: TED Talks 2013 (TED) and Global Voices (GV). TED contains parallel documents in all languages in HIGHLAN, whereas GV contains all languages from both HIGHLAN and LOWLAN. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "D (`1) D (`2) n b (`2,k) o K k=1 n b (`1,k) o K k=1 test corpus w/ labels D 0 (`1) = n\u21e3 b \u2713 d,`1 , y \u2318o test corpus D 0 (`2) = n\u21e3 b \u2713 d,`2 , \u2022 \u2318o test train", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "An illustration of crosslingual document classification. After training multilingual topic models, the topics, { \u03c6 ( , k) } are used to infer document-topic distributions \u03b8 of unseen documents in both languages. A classifier is trained with the inferred distributions \u03b8 d, 1 as features and the labels y in language 1 , and predicts labels in language 2 .", "cite_spans": [ { "start": 115, "end": 121, "text": "( , k)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 11", "sec_num": null }, { "text": "Using the two multilingual sources, we create two types of test sets for HIGHLAN-TED + TED and TED + GV, and only one type for LOWLAN-TED+GV. 
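A minimal sketch of the classification pipeline in Figure 11, written with scikit-learn; the variable names, the one-vs-rest wrapper for multilabel prediction, and the omission of the five-fold cross-validation used in the paper are assumptions of this illustration, not a description of the exact experimental code.

```python
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

# theta_l1: inferred document-topic vectors for labeled l1 test documents (n1 x K)
# y_l1:     binary label matrix for the three categories (n1 x 3)
# theta_l2: inferred document-topic vectors for l2 test documents (n2 x K)
# y_l2:     binary label matrix used only for scoring (n2 x 3)
def crosslingual_classification(theta_l1, y_l1, theta_l2, y_l2):
    clf = OneVsRestClassifier(LinearSVC())   # linear SVM, one binary classifier per label
    clf.fit(theta_l1, y_l1)                  # train on language l1 features
    y_pred = clf.predict(theta_l2)           # predict labels for language l2
    return f1_score(y_l2, y_pred, average="micro")
```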
In TED+TED, we infer document-topic distributions on documents from TED in English and the paired language. This only applies to HIGHLAN, because TED do not have documents in LOWLAN. In TED+GV, we infer topics on English documents from TED, and infer topics on documents from GV in the paired language (both HIGHLAN and LOWLAN). The two types of test sets also represent different application situations. TED + TED implies that the test documents in both languages are parallel and come from the same source, whereas TED + GV represents how the topic model performs when the two languages have different data sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 11", "sec_num": null }, { "text": "Both corpora are retrieved from http://opus.nlpl.eu/ (Tiedemann 2012). The labels, however, are manually retrieved from http://ted.com/ and http://globalvoices. org/. In TED corpus, each document is a transcript of a talk and is assigned to multiple categories on the Web page, such as \"technology,\" \"arts,\" and so forth. We collect all categories for the entire TED corpus, and use the three most frequent categoriestechnology, culture, science-as document labels. Similarly, in GV corpus, each document is a news story, and has been labeled with multiple categories on the Web page of the story. Because in TED + GV, the two sets are from different sources, and training and testing is only possible when both sets share the same labels, we apply the same three labels from TED to GV as well. This processing requires minor mappings, for example, from \"arts-culture\" in GV to \"culture\" in TED. The data statistics are presented in Table 7 .", "cite_spans": [], "ref_spans": [ { "start": 933, "end": 940, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Figure 11", "sec_num": null }, { "text": "We first explore the empirical characteristics of document-level transfer, using DOC-LINK, C-BILDA, and SOFTLINK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document-Level Transfer and Its Limitations", "sec_num": "6." }, { "text": "Multilingual corpora can be loosely categorized into three types: parallel, comparable, and incomparable. A parallel corpus contains exact document translations across languages, of which EUROPARL and the Bible, discussed before, are examples. A comparable corpus contains document pairs (in the bilingual case), where each document in one language has a related counterpart in the other language. However, these document pairs are not exact translations of each other, and they can only be connected through a loosely defined \"theme.\" Wikipedia is an example, where document pairs are linked by article titles. Incomparable corpora contain potentially unrelated documents across languages, with no explicit indicators of document pairs. With different levels of comparability comes different availabilities of such corpora: It is much harder to find parallel corpora in low-resource languages. Therefore, we first focus on HIGHLAN, and use Wikipedia to simulate the low-resource situation in Section 6.1, where we find that DOCLINK and C-BILDA are very sensitive to the training corpus, and thus might not be the best option when it comes to low-resource languages. We then examine LOWLAN in Section 6.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document-Level Transfer and Its Limitations", "sec_num": "6." 
}, { "text": "We first vary the comparability of the training corpus and study how different models behave under different situations. All models are potentially affected by the comparability of the training set, although only DOCLINK and C-BILDA explicitly rely on this information to define transfer operations. This experiment shows that models transferring knowledge on the document level (DOCLINK and C-BILDA) are very sensitive to the training set, but can be almost entirely insensitive with appropriate modifications to the transfer operation as in SOFTLINK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sensitivity to Training Corpus", "sec_num": "6.1" }, { "text": "6.1.1 Experiment Settings. For each language pair (EN, ), we construct a random subsample of 2, 000 documents from Wikipedia in each language (4, 000 in total). To vary the comparability, we vary the proportion of linked Wikipedia articles between the two languages, from 0.0, 0.01, 0.05, 0.1, 0.2, 0.4, 0.8, to 1. When the percentage is zero, the bilingual corpus is entirely incomparable, that is, no document-level translations can be found in another language, and DOCLINK and C-BILDA degrade into monolingual LDAs. The indicator matrix used by transfer operations in Section 4.1.1 is a zero matrix \u03b4 = 0. When the percentage is one, meaning each document from one language is linked to one document from another language, the corpus is considered fully comparable, and \u03b4 is an identity matrix 1. Any number between 0 and 1 makes the corpus partially comparable to different degrees. The CNPMI and crosslingual classification results are shown in Figure 12 , and the shades indicate the standard deviations across five Gibbs sampling chains. For VOCLINK and SOFTLINK, we use all the dictionary entries. 6.1.2 Results. In terms of topic coherence (CNPMI), both DOCLINK and C-BILDA have competitive performance on CNPMI, and achieve full potential when the corpus is fully comparable. As expected, models transferring knowledge at the document level (DOCLINK and C-BILDA) are very sensitive to the training corpus: The more aligned the corpus is, the better topics the model learns. For the word-level model, VOCLINK roughly stays at the same performance level, which is also expected, because this model does not use linked documents as supervision. However, its performance on Russian is surprisingly low compared with other languages and models. In the next section, we will look closer at this problem by investigating the impact of dictionaries.", "cite_spans": [], "ref_spans": [ { "start": 951, "end": 960, "text": "Figure 12", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Sensitivity to Training Corpus", "sec_num": "6.1" }, { "text": "It is notable that SOFTLINK, a document-level model, is also insensitive to the training corpus and outperforms other models most of the time. Recall that on the document level, SOFTLINK defines transfer operation on document-topic distributions \u03b8, Both SOFTLINK and VOCLINK stay at a stable performance level of either CNPMI or F-1 scores, whereas DOCLINK and C-BILDA expectedly have better performance as there are more linked Wikipedia articles. similarly to DOCLINK and C-BILDA, but using dictionary resources. This implies that good design of the supervision \u03b4 in the transfer operation could lead to a more stable performance across different training situations. 
When it comes to the classification task, the F-1 scores of DOCLINK and C-BILDA have very large variations, and the increasing trend of F-1 scores is less obvious than with CNPMI. This is especially true when the percentage of linked documents is very small. For one, when the percentage is small, the transfer on the document level is less constrained, leaving the projection of two languages into the same topic space less predictive. The evaluation scope of CNPMI is actually much smaller and more concentrated than classification, because it only focuses on the top C words, which does not lead to large variations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sensitivity to Training Corpus", "sec_num": "6.1" }, { "text": "One consistent result we notice is that SOFTLINK still performs well on classification with very small variations and stable F-1 scores, which again benefits from the definition of transfer operation in SOFTLINK. When transferring topics to another language, SOFTLINK uses dictionary constraints as in VOCLINK, but instead of a simple one-on-one word type mapping, it expands the transfer scope to the entire document. Additionally, SOFTLINK distributionally transfers knowledge from the entire corpus in another language, which actually reinforces the transfer efficiency without relying on direct supervision at the document level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sensitivity to Training Corpus", "sec_num": "6.1" }, { "text": "In this section, we take a look at languages in LOWLAN. For SOFTLINK and VOCLINK, we use all dictionary entries to train languages in LOWLAN, because the sizes of dictionaries in these languages are already very small. We again use a subsample of 2, 000 Wikipedia document pairs with English to make the results comparable with HIGHLAN. In Figure 13(a) , we also present results of models for HIGHLAN using fully comparable training corpora and full dictionaries for direct comparison of the effect of language resources.", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 352, "text": "Figure 13(a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Performance on LOWLAN", "sec_num": "6.2" }, { "text": "In most cases, transfer on document level (particularly C-BILDA) performs better than on word levels, in both HIGHLAN and LOWLAN. Considering the number of dictionary entries available from Table 4 , it is reasonable to suspect that the dictionary is a major factor affecting the performance of word-level transfer.", "cite_spans": [], "ref_spans": [ { "start": 190, "end": 197, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Performance on LOWLAN", "sec_num": "6.2" }, { "text": "On the other hand, although SOFTLINK does not model vocabularies directly as in VOCLINK, transferring knowledge at the document level with a limited dictionary still yields competitive CNPMI scores. Therefore, in this experiment on LOWLAN, we see that with the same lexicon resource, it is generally more efficient to transfer knowledge at the document level. We will also explore this in detail in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on LOWLAN", "sec_num": "6.2" }, { "text": "We also present a comparison of micro-averaged F-1 scores between HIGHLAN and LOWLAN in Figure 13(b) . The test set used for this comparison is TED + GV, since TED does not have articles available in LOWLAN. 
Also, languages such as Amharic (AM) have fewer than 50 GV articles available, which is an extremely small number for training a robust classifier, so in these experiments, we only train classifiers on English (TED articles) and test them on languages in HIGHLAN and LOWLAN (GV articles).", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 100, "text": "Figure 13(b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Performance on LOWLAN", "sec_num": "6.2" }, { "text": "Similarly, the classification results are generally better in document-level transfer, and both C-BILDA and SOFTLINK give similar scores. However, it is worth noting that VOCLINK has very large variations in all languages, and the F-1 scores are very low. This again suggests that transferring knowledge on the word level is less effective, and in Section 7 we study in detail why this is the case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on LOWLAN", "sec_num": "6.2" }, { "text": "In the previous section, we compared different multilingual topic models with a focus on document-level models. We draw conclusions that DOCLINK and C-BILDA are very sensitive to the training corpus, which is natural due to their definition of supervision as a one-to-one document pair mapping. On the other hand, the word-level model VOCLINK in general has lower performance, especially with LOWLAN, even if the corpus is entirely comparable. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Level Transfer and Its Limitations", "sec_num": "7." }, { "text": "Topic quality evaluation and classification performance on both HIGHLAN and LOWLAN. We notice that VOCLINK has lower CNPMI and F-1 scores in general, with large standard deviations. C-BILDA, on the other hand, outperforms other models in most of the languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 13", "sec_num": null }, { "text": "One interesting result we observed from the previous section is that SOFTLINK and VOCLINK use the same dictionary resource while transferring topics on different levels, and SOFTLINK generally has better performance than VOCLINK. Therefore, in this section, we explore the characteristics of the word-level model VOCLINK and compare it with SOFTLINK to study why it does not use the same dictionary resource as effectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 13", "sec_num": null }, { "text": "To this end, we first vary the amount of dictionary entries available and compare how SOFTLINK and VOCLINK perform (Section 7.1). Based on the results, we analyze word-level transfer from three different angles: dictionary usage (Section 7.2) as an intuitive explanation of the models, topic analysis (Section 7.3) from a more qualitative perspective, and comparing transfer strength (Section 7.4) as a quantitative analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 13", "sec_num": null }, { "text": "Word-level models such as VOCLINK use a dictionary as supervision, and thus will naturally be affected by the dictionary used. Although SOFTLINK transfers knowledge on the document level, it uses the dictionary to calculate the transfer distributions used in its document-level transfer operation. In this section, we focus on the comparison of SOFTLINK and VOCLINK. 7.1.1 Sampling the Dictionary Resource. The dictionary is the essential part of SOFTLINK and VOCLINK and is used in different ways to define transfer operations. 
The availability of dictionaries, however, varies among different languages. From Table 4 , we notice that for LOWLAN the number of available dictionary entries is very limited, which suggests it could be a major factor affecting the performance of word-level topic models. Therefore, in this experiment, we sample different numbers of dictionary entries in HIGHLAN to study how this alters performance of SOFTLINK and VOCLINK.", "cite_spans": [], "ref_spans": [ { "start": 611, "end": 618, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Sensitivity to Dictionaries", "sec_num": "7.1" }, { "text": "Given a bilingual dictionary, we add only a proportion of entries in it to SOFTLINK and VOCLINK. As in the previous experiments varying the proportion of document links, we change the proportion from 0, 0.01, 0.05, 0.1, 0.2, 0.4, 0.8, to 1.0. When the proportion is 0, both SOFTLINK and VOCLINK become monolingual LDA and no transfer happens; when the proportion is 1, both models reach their highest potential with all the dictionary entries available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sensitivity to Dictionaries", "sec_num": "7.1" }, { "text": "We also sample the dictionary in two manners: random-and frequency-based. In random-based, the entries are randomly chosen from the dictionary, and the five chains have different entries added to the models. In frequency-based, we select the most frequent word types from the training corpus. Figure 14 shows a detailed comparison among different evaluations and languages. As expected, adding more dictionary entries helps both SOFTLINK and VOCLINK, with increasing CNPMI scores and F-1 scores in general. However, we notice that adding more dictionary entries can boost SOFTLINK's performance very quickly, whereas the increase in VOCLINK's CNPMI scores is slower. Similar trends can be observed in the classification task as well, where adding more words does not necessarily increase VOCLINK's F-1 scores, and the variations are very high.", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 302, "text": "Figure 14", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Sensitivity to Dictionaries", "sec_num": "7.1" }, { "text": "This comparison provides an interesting insight to increasing lexical resources efficiently. In some applications, especially related to low-resource languages, the number of available lexicon resources is very small, and one way to solve this problem is to incorporate human feedback, such as interactive topic modeling proposed by Hu et al. (2014a) . In our case, a native speaker of the low-resource language could provide word translations that could be incorporated into topic models. Because of limited time and financial budget, however, it is impossible to translate all the word types that appear in the corpus, so the challenge is how to boost the performance of the target task as much as possible with less effort from humans. In this comparison, we see that if the target task is to train coherent multilingual topics, training SOFTLINK is a more efficient way than VOCLINK. SOFTLINK produces better topics and is more capable of crosslingual classification tasks than VOCLINK when the number of dictionary entries is very limited.", "cite_spans": [ { "start": 333, "end": 350, "text": "Hu et al. (2014a)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Sensitivity to Dictionaries", "sec_num": "7.1" }, { "text": "7.1.2 Varying Comparability of the Corpus. 
For SOFTLINK and VOCLINK, the dictionary is only one aspect of the training situation. As discussed in our document-level experiments, the training corpus is also an important factor that could affect the performance of all topic models. Although corpus comparability is not an explicit requirement of SOFTLINK and VOCLINK, the comparability of the corpus might affect the coverage provided by the dictionary or affect performance in other ways. In SOFTLINK, comparability could also affect the transfer operator's ability to find similar documents to link to. In this section, we study the relationship between dictionary coverage and comparability of the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sensitivity to Dictionaries", "sec_num": "7.1" }, { "text": "Similar to the previous section, we vary the dictionary coverage from 0.01, 0.05, 0.1, 0.2, 0.4, 0.8, to 1, using the frequency-based method as in the last experiment. We also vary the number of linked Wikipedia articles from 0, 0.01, 0.05, 0.1, 0.2, 0.4, 0.8, to 1. We present CNPMI scores in Figure 15(a) , where the results are averaged over all five languages in HIGHLAN. It is clear that SOFTLINK outperforms VOCLINK, regardless of training corpus and dictionary size. This implies that SOFTLINK could potentially learn coherent multilingual topics even when the training conditions are unfavorable: for example, when the training corpus is incomparable and there is only a small number of dictionary entries. [Figure 15 heatmap values; panel title: Test set: TED + TED.]", "cite_spans": [], "ref_spans": [ { "start": 294, "end": 306, "text": "Figure 15(a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Sensitivity to Dictionaries", "sec_num": "7.1" }, { "text": "Adding more dictionary entries has a higher impact on word-level model VOCLINK. SOFTLINK learns better quality topics than VOCLINK. SOFTLINK also generally performs better on classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 15", "sec_num": null }, { "text": "The results of crosslingual classification are shown in Figure 15(b) . 
When the test sets are from the same source (TED + TED), SOFTLINK utilizes the dictionary more efficiently and performs better than VOCLINK. In particular, F-1 scores of SOFTLINK using only 20% of dictionary entries is already outperforming VOCLINK using the full dictionary. A similar comparison can also be drawn when the test sets are from different sources such as TED + GV. 7.1.3 Discussion. From the results so far, it is empirically clear that transferring knowledge on the word level tends to be less efficient than the document level. This is arguably counter-intuitive. Recall that the goal of multilingual topic models is to let semantically related words and translations have similar distributions over topics. The word-level model VOCLINK directly uses this information-dictionary entries-to define transfer operations, yet its CNPMI scores are lower. In the following sections, therefore, we try to explain this apparent contradiction. We first analyze the dictionary usage of VOCLINK (Section 7.2), and then lead our discussion on the transfer strength comparisons between document and word levels for all models (Sections 7.3 and 7.4).", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 68, "text": "Figure 15(b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Figure 15", "sec_num": null }, { "text": "In practice, the assumption of VOCLINK is also often weakened by another important factor: the presence of word translations in the training corpus. Given a word pair w ( 1 ) , w ( 2 ) , the assumption of VOCLINK is valid only when both words appear in the training corpus in their respective languages. If w 2is not in D 2, w ( 1 ) will be treated as an untranslated word instead. Figure 16 shows an example of how tree structures in VOCLINK are affected by the corpus and the dictionary.", "cite_spans": [], "ref_spans": [ { "start": 382, "end": 391, "text": "Figure 16", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Dictionary Usage", "sec_num": "7.2" }, { "text": "In Figure 17 , we present the statistics of word types from different sources on a logarithmic scale. \"Dictionary\" is the number of word types that appeared in the original dictionary as shown in the last column of Table 4 , and we use the same preprocessing to the dictionary as to the training corpus to make sure the quantities are comparable. \"Training set\" is the number of word types that appeared in the training ", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 12, "text": "Figure 17", "ref_id": "FIGREF0" }, { "start": 215, "end": 222, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Dictionary Usage", "sec_num": "7.2" }, { "text": "The dictionary used by VOCLINK is affected by its overlap with the corpus. In this example, the three entries in Dictionary A can all be found in the corpus, so the tree structure has all of them. However, only one entry in Dictionary B can be found in the corpus. Although the Swedish word \"heterotrofa\" is also in the dictionary, its English translation cannot be found in the corpus, so Dictionary B ends up a tree with only one entry. 
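The coverage effect illustrated in Figure 16 can be quantified with a short sketch: a dictionary entry is usable by VOCLINK only if both sides occur in the training vocabularies. The function and argument names below are illustrative assumptions.

```python
def linked_word_types(dictionary, vocab_l1, vocab_l2):
    """Count the word types that VOCLINK can actually tie together in its tree.

    dictionary: iterable of (w_l1, w_l2) translation pairs.
    vocab_l1, vocab_l2: sets of word types observed in the training corpus."""
    linked_l1, linked_l2 = set(), set()
    for w1, w2 in dictionary:
        if w1 in vocab_l1 and w2 in vocab_l2:   # both sides must appear in the corpus
            linked_l1.add(w1)
            linked_l2.add(w2)
    return len(linked_l1), len(linked_l2)
```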
Word type statistics", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 16", "sec_num": null }, { "text": "Training set Dictionary Linked by voclink", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 16", "sec_num": null }, { "text": "The number of word types that are linked in VOCLINK is far less than the original dictionary and even than that of word types in the training sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 17", "sec_num": null }, { "text": "set, and \"Linked by VOCLINK\" is the number of word types that are actually used in VOCLINK, that is, the number of non-zero entries in \u03b4 in the transfer operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 17", "sec_num": null }, { "text": "Note that even when we use the complete dictionary to create the tree structure in VOCLINK, in LOWLAN, there are far more word types in the training set than those in the dictionary. In other words, the supervision matrix \u03b4 used by h \u03c6 (r,k) is never actually full rank, and thus, the full potential of VOCLINK is very difficult to achieve due to the properties of the training corpus. This situation is as if the document-level model DOCLINK had only half of the linked documents in the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 17", "sec_num": null }, { "text": "On the other hand, we notice that in HIGHLAN, the number of word types in the dictionary is usually comparable to that of the training set (except in AR). For LOWLAN, however, the situation is quite the contrary: There are more word types in the training set than in the dictionary. Thus, the availability of sufficient dictionary entries is especially a problem for LOWLAN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 17", "sec_num": null }, { "text": "We conclude from Figure 15 (a) that adding more dictionary entries will slowly improve VOCLINK, but even when there are enough dictionary items, due to model assumptions, VOCLINK will not achieve its full potential unless every word in the training corpus is in the dictionary. A possible solution is to first extract word alignments from parallel corpora, and then create a tree structure using those word alignments, as experimented in Hu et al. (2014b) . However, when parallel corpora are available, we have shown that document-level models such as DOCLINK work better anyway, and the accuracy of word aligners is another possible limitation to consider.", "cite_spans": [ { "start": 438, "end": 455, "text": "Hu et al. (2014b)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 17, "end": 26, "text": "Figure 15", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Figure 17", "sec_num": null }, { "text": "Whereas VOCLINK uses a dictionary to directly model word translations, SOFTLINK uses the same dictionary to define the supervision in transfer operation differently on the document level. Experiments show that transferring knowledge on the document level with a dictionary (i.e., SOFTLINK) is more efficient, resulting in stable and lowvariance topic qualities in various training situations. A natural question is why the same resource results in different performance on different levels of transfer operations. To answer this question from another angle, we further look into the actual topics trained from SOFTLINK and VOCLINK in this section. 
The general idea is to look into the same topic output from SOFTLINK and VOCLINK and see what topic words they have in common (denoted as W + ), and what words they have exclusively, denoted as W \u2212,SOFT and W \u2212,VOC for SOFTLINK and VOCLINK, respectively. The words in W \u2212,VOC are those with lower topic coherence and are thus the key to understanding the suboptimal performance of VOCLINK. 7.3.1 Aligning Topics. To this end, the first step is to align possible topics between VOCLINK and SOFTLINK, since the initialization of Gibbs samplers is random. Let {W VOC k } K k=1 and {W SOFT k } K k=1 be the K topics learned by VOCLINK and SOFTLINK respectively, from the same training conditions. For each topic pair (k, k ) we calculate the Jaccard index W VOC k and W SOFT k , one for each language, and use the average over the two languages as the matching score m k,k of the topic pair:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Analysis", "sec_num": "7.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m k, k = 1 2 J W VOC k, 1 , W SOFT k , 1 + J W VOC k, 2 , W SOFT k , 2", "eq_num": "(32)" } ], "section": "Topic Analysis", "sec_num": "7.3" }, { "text": "where J(X, Y) is the Jaccard index between sets X and Y. Thus, there are K 2 matching scores with a number of topics K. We set a threshold of 0.8, so that a matching score is valid only when it is greater than 0.8 \u2022 max m k,k over all the K 2 scores. For each topic k, if its matching score is valid, we align W VOC k with W SOFT k , and treat them as potentially the same topic. When multiple matching scores are valid, we use the topic with the highest score and ignore the rest. 7.3.2 Comparing Document Frequency. Using the approximate alignment algorithm we described above, we are now able to compare each aligned topic pair between VOCLINK and SOFTLINK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Analysis", "sec_num": "7.3" }, { "text": "For a word type w, we define the document frequency as the percentage of documents where w appears. A low document frequency of word w implies that w only appears in a small number of documents. For every aligned topic pair W i , W j where W i and W j are topic word sets from SOFTLINK and VOCLINK, respectively, we have three sets of topic words derived from this pair:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Analysis", "sec_num": "7.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W + = W i \u2229 W j (33) W \u2212,VOC = W i \\ W j (34) W \u2212,SOFT = W j \\ W i", "eq_num": "(35)" } ], "section": "Topic Analysis", "sec_num": "7.3" }, { "text": "Then we calculate the average document frequencies over all the words in each of the sets, and we show the results in Figure 18 .", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 127, "text": "Figure 18", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Topic Analysis", "sec_num": "7.3" }, { "text": "We observe that the average document frequencies over words in W \u2212,VOC are consistently lower in every language, whereas those in W + are higher. This implies that VOCLINK tends to give rare words higher probability in the topic-word distributions. 
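A sketch of the alignment and comparison just described (Equations (32)-(35)); the helper names and the dictionary-of-sets representation of a bilingual topic are hypothetical, and the 0.8 threshold on the matching scores is applied outside this snippet.

```python
def jaccard(x, y):
    return len(x & y) / len(x | y) if (x | y) else 0.0

def match_score(topic_voc, topic_soft):
    """Equation (32): average Jaccard index over the two languages.
    Each topic is assumed to be a dict with word sets under keys 'l1' and 'l2'."""
    return 0.5 * (jaccard(topic_voc["l1"], topic_soft["l1"]) +
                  jaccard(topic_voc["l2"], topic_soft["l2"]))

def compare_doc_frequency(words_voc, words_soft, doc_freq):
    """Equations (33)-(35): split an aligned topic pair into shared and exclusive word
    sets and average their document frequencies (doc_freq maps word -> fraction of docs)."""
    shared = words_voc & words_soft          # W+
    only_voc = words_voc - words_soft        # words exclusive to VOCLINK
    only_soft = words_soft - words_voc       # words exclusive to SOFTLINK
    avg = lambda ws: sum(doc_freq.get(w, 0.0) for w in ws) / len(ws) if ws else 0.0
    return avg(shared), avg(only_voc), avg(only_soft)
```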
In other words, VOCLINK gives high probabilities to words that only appear in specific contexts, such as named entities. Thus, when evaluating topics using a reference corpus, the co-occurrence of such words with other words is relatively low due to lack of that specific context in the reference corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Analysis", "sec_num": "7.3" }, { "text": "We show an example of an aligned topic in Figure 19 . In this example, we see that although both VOCLINK and SOFTLINK can discover semantically coherent words shown in W + , VOCLINK focuses more on words that only appear in specific contexts: There are many words (mostly named entities) in W \u2212,VOC that only appear in one document. Due to lack of this very specific context in the reference corpora, the ", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 51, "text": "Figure 19", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Topic Analysis", "sec_num": "7.3" }, { "text": "Average document frequencies of W \u2212,VOC are generally lower than W \u2212,SOFT and W + , shown in the triangle markers. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 18", "sec_num": null }, { "text": "An example of real data showing the topic words of SOFTLINK and VOCLINK. Words that appear in both models are in W + ; words that only appear in SOFTLINK or VOCLINK are included in W \u2212,SOFT or W \u2212,VOC , respectively. co-occurrence of these words with other more general words is likely to be zero, resulting in lower CNPMI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 19", "sec_num": null }, { "text": "While we have looked at the topics to explain what kind of words produced by VOC-LINK make the model's performance lower than SOFTLINK, in this section, we try to explain why this happens by analyzing their transfer operations. Recall that VOCLINK defines transfer operations on topic-node distributions {\u03c6 k,r } K k=1 (Equation (23)), while SOFTLINK defines transfer on document-topic distributions \u03b8. The differences between transfer levels with the same resources leads to a suspicion that document level has a \"stronger\" transfer power.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "The first question is to understand how this transfer operation actually functions in the training of topic models. During Gibbs' sampling of monolingual LDA, the conditional distribution for a token, denoted as P, is calculated by conditioning on all the other tokens and their topics, and can be factorized into two conditionals: documentlevel P \u03b8 and word-level P \u03c6 . Let the current token be of word type w, and w \u2212 and z \u2212 all the other words and their current topic assignments in the corpus. The conditional is then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "P k = Pr z = k|w, w \u2212 , z \u2212 (36) \u221d n k|d + \u03b1 k \u2022 n w|k + \u03b2 w n \u2022|k + 1 \u03b2 (37) = P \u03b8k \u2022 P \u03c6k (38)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "where n k|d is the number of topic k in document d, n w|k the number of word type w in topic k, n \u2022|k the number of tokens assigned to topic k, and 1 an all-one vector. 
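As a concrete reading of Equations (36)-(38), the following is a minimal sketch of the factorized sampling distribution for one token; the array names are illustrative, and in the multilingual case the priors stand in for the outputs of the corresponding transfer operations.

```python
import numpy as np

def token_conditional(n_kd, n_wk, n_k, alpha, beta_w, beta_sum):
    """P_k proportional to (n_{k|d} + alpha_k) * (n_{w|k} + beta_w) / (n_{.|k} + sum(beta)).

    n_kd, n_wk, n_k: length-K count arrays for the current document, word type, and topics."""
    p_theta = n_kd + alpha                          # document-level factor
    p_phi = (n_wk + beta_w) / (n_k + beta_sum)      # word-level factor
    p = p_theta * p_phi                             # unnormalized conditional P_k
    return p_theta / p_theta.sum(), p_phi / p_phi.sum(), p / p.sum()
```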
In this equation, the final conditional distribution can be treated as a \"vote\" from the two conditionals: P \u03b8 and P \u03c6 (Yuan et al. 2015) . If P \u03c6 is a uniform distribution, then P = P \u03b8 , meaning the conditional on document P \u03b8 dominates the decision of choosing a topic, while the conditional on word P \u03c6 is uninformative. We apply this similar idea to multilingual topic models. For a token in language 2 , we let w be its word type, and P can also generally be factorized to two individual conditionals,", "cite_spans": [ { "start": 288, "end": 306, "text": "(Yuan et al. 2015)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P k = Pr z = k|w, w \u2212 , z \u2212 (39) \u221d n k|d + h \u03b8 \u03b4, N ( 1 ) , \u03b1 k P DOC, k \u2022 n w|k + h \u03c6 \u03b4 , N ( 1 ) , \u03b2 w n \u2022|k + 1 h \u03c6 \u03b4 , N ( 1 ) , \u03b2 P VOC, k", "eq_num": "(40)" } ], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= P DOC,k \u2022 P VOC,k", "eq_num": "(41)" } ], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "where the transfer operation is clearly incorporated into the calculation of the conditional, and P DOC and P VOC are conditional distributions on document and word levels, respectively. Thus, it is easy to see how transfers on different levels contribute to the decision of a topic. This is also where our comparison of \"transfer strength\" starts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "To apply this idea, for each token, we first obtain three distributions described before: P, P DOC , and P VOC . Then we calculate cosine similarities cos (P DOC , P ) and cos (P VOC , P ). If r = cos(P DOC ,P ) cos(P VOC ,P ) > 1, we know that P DOC is dominant and helps shape the conditional distribution P; in other words, the document level transfer is stronger. We calculate the ratio of similarities r = cos(P DOC ,P ) cos(P VOC ,P ) for all the tokens in every model, and take the model-wise average over all the tokens (Figure 20) . The most balanced situation is r = 1, meaning transfers on both word and document levels are contributing equally to the conditional distributions.", "cite_spans": [], "ref_spans": [ { "start": 528, "end": 539, "text": "(Figure 20)", "ref_id": null } ], "eq_spans": [], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "From the results, we notice that both DOCLINK and C-BILDA have stronger transfer strength on the document level, which means that the transfer operations on the document levels are actually informing the decision of a token's topic. However, we also notice that VOCLINK has very comparable transfer strength to DOCLINK and C-BILDA, which makes less sense, because VOCLINK defines transfer operations on the word level. This implies that transferring knowledge on the word level is weaker. This also explains Comparisons of transfer strength. A value of one (shown in red dotted line) means an equal balance of transfer between document and word levels. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "This also explains why, in the previous section, VOCLINK tends to find topic words that appear in only a few documents. It is also interesting to see that SOFTLINK keeps a relatively good balance between the document and word levels, with consistently the most balanced transfer strengths across all models and languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Transfer Strength", "sec_num": "7.4" }, { "text": "Multilingual topic models take corpora in multiple languages as input, with additional language resources as supervision. These traits inevitably lead to a wide variety of training scenarios, especially when a language's resources are scarce, yet most previous studies on multilingual topic models have not analyzed in depth the appropriateness of different models for different training situations and levels of resource availability. For example, experiments are most often done in European languages, with models that are typically trained on parallel or comparable corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remarks and Conclusions", "sec_num": "8." }, { "text": "The contributions of our study are a unifying framework for these different models and a systematic analysis of their efficacy in different training situations. We conclude by summarizing our findings along two dimensions, training corpora characteristics and dictionary characteristics, since these are the necessary components for enabling crosslingual knowledge transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remarks and Conclusions", "sec_num": "8." }, { "text": "Document-level models are shown to work best when the corpus is parallel or at least comparable. In terms of learning high-quality topics, DOCLINK and C-BILDA yield very similar results. However, since C-BILDA has a \"language selector\" mechanism in the generative process, it is slightly more efficient for training on Wikipedia articles in low-resource languages, where document lengths differ greatly from English. SOFTLINK, on the other hand, only needs a small dictionary to enable document-level transfer, and yields very competitive results. This is especially useful for low-resource languages, where the dictionary is small and only a small number of comparable document pairs are available for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Selection", "sec_num": "8.1" }, { "text": "Word-level models have a harder time reaching the full potential of transfer, due to limits in dictionary size and training sets, and to unrealistic assumptions in the generative process regarding dictionary coverage. The representative model, VOCLINK, performs comparably to the other models on document classification, but its topic quality according to coherence-based metrics is lower. Compared to SOFTLINK, which also requires a dictionary as its resource, directly modeling word translations in VOCLINK turns out to be a less efficient way of transferring dictionary knowledge.
Therefore, when using dictionary information, we recommend SOFTLINK over VOCLINK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Selection", "sec_num": "8.1" }, { "text": "As an alternative approach to learning crosslingual representations, crosslingual word embeddings have been gaining attention (Ruder, Vulic, and S\u00f8gaard 2019; Upadhyay et al. 2016) . Recent crosslingual embedding architectures have been applied to a wide range of natural language processing applications and achieve state-of-the-art performance. Similar to the topic space in multilingual topic models, crosslingual embeddings learn semantically consistent features in an embedding space shared by all languages.", "cite_spans": [ { "start": 124, "end": 156, "text": "(Ruder, Vulic, and S\u00f8gaard 2019;", "ref_id": "BIBREF41" }, { "start": 157, "end": 178, "text": "Upadhyay et al. 2016)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Crosslingual Representations", "sec_num": "8.2" }, { "text": "Both approaches, topic modeling and embeddings, have advantages and limitations. Multilingual topic models still rely on supervised data to learn crosslingual representations; the choice of such supervision and of the model itself matters, and that choice has been the main subject of this work. Topic models have the advantage of being interpretable. Embedding methods are powerful in many natural language processing tasks, and their representations are more fine-grained. Recent advances in crosslingual embedding training do not require crosslingual supervision resources such as dictionaries or parallel data (Artetxe, Labaka, and Agirre 2018; Lample et al. 2018) , which is a large step toward generalizing crosslingual modeling. Although how to interpret the results and how to reduce the heavy computing resources required remain open problems, embedding-based methods are a promising research direction.", "cite_spans": [ { "start": 595, "end": 629, "text": "(Artetxe, Labaka, and Agirre 2018;", "ref_id": "BIBREF1" }, { "start": 630, "end": 649, "text": "Lample et al. 2018)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Crosslingual Representations", "sec_num": "8.2" }, { "text": "Relations to Topic Models. A very common strategy for learning crosslingual embeddings is to use supervision or a sub-objective to learn a projection matrix that maps independently trained monolingual embeddings into a shared crosslingual space (Dinu and Baroni 2014; Faruqui and Dyer 2014; Tsvetkov and Dyer 2016; Vuli\u0107 and Korhonen 2016) .", "cite_spans": [ { "start": 270, "end": 292, "text": "(Dinu and Baroni 2014;", "ref_id": "BIBREF12" }, { "start": 293, "end": 315, "text": "Faruqui and Dyer 2014;", "ref_id": "BIBREF13" }, { "start": 316, "end": 339, "text": "Tsvetkov and Dyer 2016;", "ref_id": "BIBREF44" }, { "start": 340, "end": 364, "text": "Vuli\u0107 and Korhonen 2016)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Crosslingual Representations", "sec_num": "8.2" }, { "text": "In multilingual topic models, the supervision matrix $\delta$ plays the role of a projection matrix between languages. In DOCLINK, for example, $\delta_{d_2,d_1}$ projects document $d_2$ to the document space of $\ell_1$ (Equation (15)). SOFTLINK provides a simple extension by forming $\delta$ as a matrix of transfer distributions based on word-level document similarities, as sketched below.
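The following is a minimal sketch, under our own assumptions, of how such a transfer matrix could be formed: each language-l2 document gets a distribution over language-l1 documents proportional to the cosine similarity of dictionary-mapped bag-of-words vectors. The exact normalization used by SOFTLINK may differ; the function and argument names here are ours.

```python
import numpy as np

def softlink_style_delta(X_l2, X_l1, eps=1e-12):
    """X_l2: (D2 x F) and X_l1: (D1 x F) document vectors already mapped into a
    shared feature space (e.g., via a bilingual dictionary). Returns a (D2 x D1)
    matrix whose row d2 is a transfer distribution over language-l1 documents."""
    Xa = X_l2 / (np.linalg.norm(X_l2, axis=1, keepdims=True) + eps)
    Xb = X_l1 / (np.linalg.norm(X_l1, axis=1, keepdims=True) + eps)
    sims = np.clip(Xa @ Xb.T, 0.0, None)            # non-negative cosine similarities
    sims = sims + eps                               # avoid all-zero rows
    return sims / sims.sum(axis=1, keepdims=True)   # each row sums to one
```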
VOCLINK applies projections in the form of word translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Crosslingual Representations", "sec_num": "8.2" }, { "text": "Thus, we can see that the formation of projection matrices in multilingual topic models is still static, restricted to an identity matrix or a simple pre-calculated matrix. A generalization would be to add learning the projection matrix itself as an objective of multilingual topic models. This could be a way to improve VOCLINK, by extending word associations to polysemy across languages and making the model less dependent on specific contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Crosslingual Representations", "sec_num": "8.2" }, { "text": "Our study suggests future work in two directions. The first direction is to increase the efficiency of word-level knowledge transfer. For example, it is possible to use collocation information of translated words to transfer knowledge, though cautiously, to untranslated words. It has been shown that word-level models can help find new word translations, for example, by using the existing dictionary as a \"seed\" and gradually adding more internal nodes to the tree structure using the trained topic-word distributions. Additionally, our analysis showed the benefits of the \"language selector\" in C-BILDA, which makes the generative process of DOCLINK more realistic; one could also implement a similar mechanism in VOCLINK to make the conditional distributions for tokens less dependent on specific contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Directions", "sec_num": "8.3" }, { "text": "The second direction is more general. By systematically synthesizing various models and abstracting the knowledge transfer mechanism into an explicit transfer operation, we can construct models that shape the probabilistic distributions of a target language using those of a source language. By defining different transfer operations, more complex and robust models can be developed, and this transfer formulation may provide ways of constructing models beyond the traditional joint formulation (Hao and Paul 2019) . For example, SOFTLINK is a generalization of DOCLINK based on transfer operations that does not have an equivalent joint formulation. This framework for thinking about multilingual topic models may lead to new ideas for other models.", "cite_spans": [ { "start": 502, "end": 521, "text": "(Hao and Paul 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Future Directions", "sec_num": "8.3" }, { "text": "The original notation for the topic-language distribution is $\delta$ (Heyman, Vulic, and Moens 2016). To avoid confusion with Equation (15), we change it to $\eta$. We also follow the original paper in that the model is presented for the bilingual case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Although some models, as in Hu et al. (2014b), transfer knowledge at both the document and word levels, in this analysis we only focus on the word level, where no transfer happens on the document level. The generalization simply involves using the same transfer operation on $\theta$ that is used in DOCLINK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "3 http://snowball.tartarus.org. 4 http://arabicstemmer.com. 5 https://github.com/6/stopwords-json. 6 https://github.com/fxsjy/jieba.
7 https://www.unitedbiblesocieties.org/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Incorporating domain knowledge into topic modeling via Dirichlet forest priors", "authors": [ { "first": "David", "middle": [], "last": "Andrzejewski", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 26th Annual International Conference on Machine Learning", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrzejewski, David, Xiaojin Zhu, and Mark Craven. 2009. Incorporating domain knowledge into topic modeling via Dirichlet forest priors. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 25-32, Montreal.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "789--798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artetxe, Mikel, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 789-798, Melbourne.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical analysis of non-lattice data", "authors": [ { "first": "Julian", "middle": [], "last": "Besag", "suffix": "" } ], "year": 1975, "venue": "Journal of the Royal Statistical Society. Series D (The Statistician)", "volume": "24", "issue": "", "pages": "179--195", "other_ids": {}, "num": null, "urls": [], "raw_text": "Besag, Julian. 1975. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society. Series D (The Statistician), 24:179-195.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Probabilistic topic models", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2012, "venue": "Communications of the ACM", "volume": "55", "issue": "4", "pages": "77--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, David M. 2012. Probabilistic topic models. Communications of the ACM, 55(4):77-84.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Technical perspective: expressive probabilistic models and scalable method of moments", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2018, "venue": "Communications of the ACM", "volume": "61", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, David M. 2018. Technical perspective: expressive probabilistic models and scalable method of moments. 
Communications of the ACM, 61(4):84.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, David M., Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multilingual topic models for unaligned text", "authors": [ { "first": "Jordan", "middle": [ "L" ], "last": "Boyd-Graber", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2009, "venue": "UAI 2009, Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "75--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boyd-Graber, Jordan L. and David M. Blei. 2009. Multilingual topic models for unaligned text. In UAI 2009, Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 75-82, Montreal.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Relational topic models for document networks", "authors": [ { "first": "Jonathan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, Jonathan and David M. Blei. 2009. Relational topic models for document networks. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, AISTATS 2009, pages 81-88, Clearwater Beach, FL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Reading tea leaves: How humans interpret topic models", "authors": [ { "first": "Jonathan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Jordan", "middle": [ "L" ], "last": "Boyd-Graber", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Gerrish", "suffix": "" }, { "first": "Chong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2009, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "288--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, Jonathan, Jordan L. Boyd-Graber, Sean Gerrish, Chong Wang, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Advances in Neural Information Processing Systems, pages 288-296, Vancouver.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Christodoulopoulos, Christos and Mark Steedman. 2015. 
A massively parallel corpus: The bible in 100 languages", "authors": [ { "first": "Ning", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2013, "venue": "IJCAI 2013, Proceedings of the 23rd International Joint Conference on Artificial Intelligence", "volume": "49", "issue": "", "pages": "375--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Ning, Jun Zhu, Fei Xia, and Bo Zhang. 2013. Generalized relational topic models with data augmentation. In IJCAI 2013, Proceedings of the 23rd International Joint Conference on Artificial Intelligence, pages 1273-1279, Beijing, China. Christodoulopoulos, Christos and Mark Steedman. 2015. A massively parallel corpus: The bible in 100 languages. Language Resources and Evaluation, 49(2):375-395.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Indexing by latent semantic analysis", "authors": [ { "first": "Scott", "middle": [ "C" ], "last": "Deerwester", "suffix": "" }, { "first": "T", "middle": [], "last": "Susan", "suffix": "" }, { "first": "Thomas", "middle": [ "K" ], "last": "Dumais", "suffix": "" }, { "first": "George", "middle": [ "W" ], "last": "Landauer", "suffix": "" }, { "first": "Richard", "middle": [ "A" ], "last": "Furnas", "suffix": "" }, { "first": "", "middle": [], "last": "Harshman", "suffix": "" } ], "year": 1990, "venue": "Journal of the American Society for Information Science", "volume": "41", "issue": "6", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deerwester, Scott C., Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On the hyper-Dirichlet type 1 and hyper-Liouville distributions", "authors": [ { "first": "Iii", "middle": [], "last": "Dennis", "suffix": "" }, { "first": "Y", "middle": [], "last": "Samuel", "suffix": "" } ], "year": 1991, "venue": "Communications in Statistics -Theory and Methods", "volume": "20", "issue": "12", "pages": "4069--4081", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dennis III, Samuel Y. 1991. On the hyper- Dirichlet type 1 and hyper-Liouville distributions. Communications in Statistics -Theory and Methods, 20(12):4069-4081.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improving zero-shot learning by mitigating the hubness problem", "authors": [ { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dinu, Georgiana and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem. 
CoRR, abs/1412.6568.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improving vector space word representations using multilingual correlation", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "462--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Faruqui, Manaal and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462-471, Gothenburg.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Finding scientific topics", "authors": [ { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steyvers", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the National Academy of Sciences", "volume": "101", "issue": "1", "pages": "5228--5235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Griffiths, Thomas L. and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228-5235.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Detecting cross-cultural differences using a multilingual topic model", "authors": [ { "first": "E", "middle": [], "last": "Guti\u00e9rrez", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Dario", "suffix": "" }, { "first": "Patricia", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "Lichtenstein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "De Melo", "suffix": "" }, { "first": "", "middle": [], "last": "Gilardi", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "47--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guti\u00e9rrez, E. Dario, Ekaterina Shutova, Patricia Lichtenstein, Gerard de Melo, and Luca Gilardi. 2016. Detecting cross-cultural differences using a multilingual topic model. Transactions of the Association for Computational Linguistics, 4:47-60.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Lessons from the Bible on modern topics: Low-resource multilingual topic model evaluation", "authors": [ { "first": "", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Jordan", "middle": [ "L" ], "last": "Shudong", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Boyd-Graber", "suffix": "" }, { "first": "", "middle": [], "last": "Paul", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1090--1100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao, Shudong, Jordan L. Boyd-Graber, and Michael J. Paul. 2018. Lessons from the Bible on modern topics: Low-resource multilingual topic model evaluation. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, pages 1090-1100, New Orleans, LA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning multilingual topics from incomparable corpora", "authors": [ { "first": "Shudong", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Paul", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018", "volume": "", "issue": "", "pages": "2595--2609", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao, Shudong and Michael J. Paul. 2018. Learning multilingual topics from incomparable corpora. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, pages 2595-2609, Santa Fe, NM.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Vulic, and Marie-Francine Moens. 2016. C-BiLDA extracting cross-lingual topics from non-parallel texts by distinguishing shared from unshared content", "authors": [ { "first": "Shudong", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Paul", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "30", "issue": "", "pages": "1299--1323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao, Shudong and Michael J. Paul. 2019. Analyzing Bayesian crosslingual transfer in topic models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, pages 1551-1565, Minneapolis, MN. Heyman, Geert, Ivan Vulic, and Marie-Francine Moens. 2016. C-BiLDA extracting cross-lingual topics from non-parallel texts by distinguishing shared from unshared content. Data Mining and Knowledge Discovery, 30(5):1299-1323.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Probabilistic latent semantic indexing", "authors": [ { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 1999, "venue": "SIGIR '99: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hofmann, Thomas. 1999. Probabilistic latent semantic indexing. In SIGIR '99: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 50-57, Berkeley, CA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Interactive topic modeling", "authors": [ { "first": "Yuening", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jordan", "middle": [ "L" ], "last": "Boyd-Graber", "suffix": "" }, { "first": "Brianna", "middle": [], "last": "Satinoff", "suffix": "" }, { "first": "Alison", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "Machine Learning", "volume": "95", "issue": "", "pages": "423--469", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hu, Yuening, Jordan L. Boyd-Graber, Brianna Satinoff, and Alison Smith. 2014a. Interactive topic modeling. 
Machine Learning, 95(3):423-469.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Polylingual tree-based topic models for translation domain adaptation", "authors": [ { "first": "Yuening", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Eidelman", "suffix": "" }, { "first": "Jordan", "middle": [ "L" ], "last": "Boyd-Graber", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014", "volume": "", "issue": "", "pages": "1166--1176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hu, Yuening, Ke Zhai, Vladimir Eidelman, and Jordan L. Boyd-Graber. 2014b. Polylingual tree-based topic models for translation domain adaptation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, pages 1166-1176, Baltimore, MD.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Extracting multilingual topics from unaligned comparable corpora", "authors": [ { "first": "Jagadeesh", "middle": [], "last": "Jagarlamudi", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2010, "venue": "Advances in Information Retrieval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jagarlamudi, Jagadeesh and Hal Daum\u00e9 III. 2010. Extracting multilingual topics from unaligned comparable corpora. In Advances in Information Retrieval, 32nd", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "European Conference on IR Research, ECIR 2010", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "444--456", "other_ids": {}, "num": null, "urls": [], "raw_text": "European Conference on IR Research, ECIR 2010, pages 444-456, Milton Keynes.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A variational approximation for topic modeling of hierarchical corpora", "authors": [ { "first": "", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Geoffrey", "middle": [ "M" ], "last": "Do-Kyum", "suffix": "" }, { "first": "Lawrence", "middle": [ "K" ], "last": "Voelker", "suffix": "" }, { "first": "", "middle": [], "last": "Saul", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 30th International Conference on Machine Learning, ICML 2013", "volume": "", "issue": "", "pages": "55--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, Do-kyum, Geoffrey M. Voelker, and Lawrence K. Saul. 2013. A variational approximation for topic modeling of hierarchical corpora. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, pages 55-63, Atlanta, GA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "MT Summit", "volume": "5", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, Philipp. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. 
MT Summit, 5:79-86.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Probabilistic Graphical Models -Principles and Techniques", "authors": [ { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Nir", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koller, Daphne and Nir Friedman. 2009. Probabilistic Graphical Models -Principles and Techniques. MIT Press.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A minimally supervised approach for detecting and ranking document translation pairs", "authors": [ { "first": "Kriste", "middle": [], "last": "Krstovski", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT@EMNLP 2011", "volume": "", "issue": "", "pages": "207--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krstovski, Kriste and David A. Smith. 2011. A minimally supervised approach for detecting and ranking document translation pairs. In Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT@EMNLP 2011, pages 207-216, Edinburgh.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Bootstrapping translation detection and sentence extraction from comparable corpora", "authors": [ { "first": "Kriste", "middle": [], "last": "Krstovski", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Krstovski", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Kriste", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Kurtz", "suffix": "" } ], "year": 2016, "venue": "NAACL HLT 2016, the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1127--1132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krstovski, Kriste and David A. Smith. 2016. Bootstrapping translation detection and sentence extraction from comparable corpora. In NAACL HLT 2016, the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1127-1132, San Diego, CA. Krstovski, Kriste, David A. Smith, and Michael J. Kurtz. 2016. 
Online multilingual topic models with multi-level hyperpriors.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "NAACL HLT 2016, the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "454--459", "other_ids": {}, "num": null, "urls": [], "raw_text": "In NAACL HLT 2016, the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 454-459, San Diego, CA.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Word translation without parallel data", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lample, Guillaume, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Machine reading tea leaves: automatically evaluating topic coherence and topic model quality", "authors": [ { "first": "Jey", "middle": [], "last": "Lau", "suffix": "" }, { "first": "David", "middle": [], "last": "Han", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Newman", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "83", "issue": "", "pages": "21--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lau, Jey Han, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2014, pages 530-539, Gothenburg. Lepp\u00e4-aho, Janne, Johan Pensar, Teemu Roos, and Jukka Corander. 2017. Learning Gaussian graphical models with fractional marginal pseudo-likelihood. International Journal of Approximate Reasoning, 83:21-42.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "In Automatic cross-language information retrieval using latent semantic indexing", "authors": [ { "first": "Michael", "middle": [ "L" ], "last": "Littman", "suffix": "" }, { "first": "T", "middle": [], "last": "Susan", "suffix": "" }, { "first": "Thomas", "middle": [ "K" ], "last": "Dumais", "suffix": "" }, { "first": "", "middle": [], "last": "Landauer", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Littman, Michael L., Susan T. Dumais, and Thomas K. Landauer. 1998. 
In Automatic cross-language information retrieval using latent semantic indexing, In G.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Cross-Language Information Retrieval", "authors": [ { "first": "Ed", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "51--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grefenstette, ed., Cross-Language Information Retrieval, Springer, pages 51-62.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Multilingual topic models for bilingual dictionary extraction", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2015, "venue": "ACM Transactions on Asian & Low-Resource Language Information Processing", "volume": "14", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, Xiaodong, Kevin Duh, and Yuji Matsumoto. 2015. Multilingual topic models for bilingual dictionary extraction. ACM Transactions on Asian & Low-Resource Language Information Processing, 14(3):11:1-11:22.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Inverted bilingual topic models for lexicon extraction from non-parallel data", "authors": [ { "first": "Tengfei", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Tetsuya", "middle": [], "last": "Nasukawa", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence", "volume": "9", "issue": "", "pages": "2579--2605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ma, Tengfei and Tetsuya Nasukawa. 2017. Inverted bilingual topic models for lexicon extraction from non-parallel data. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, pages 4075-4081, Melbourne. Maaten, Laurens van der and Geoffrey Hinton. 2008. Visualizing Data Using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "MALLET: A machine learning for language toolkit", "authors": [ { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "", "middle": [], "last": "Kachites", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCallum, Andrew Kachites. 2002. MALLET: A machine learning for language toolkit. http://mallet.cs. umass.edu.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Polylingual topic models", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Mimno", "suffix": "" }, { "first": "M", "middle": [], "last": "Hanna", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Naradowsky", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "880--889", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mimno, David M., Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. 2009. Polylingual topic models. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP 2009, pages 880-889, Singapore.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Monolingual and cross-lingual probabilistic topic models and their applications in information retrieval", "authors": [ { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vulic", "suffix": "" } ], "year": 2013, "venue": "Advances in Information Retrieval -35th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moens, Marie-Francine and Ivan Vulic. 2013. Monolingual and cross-lingual probabilistic topic models and their applications in information retrieval. In Advances in Information Retrieval -35th", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Mining multilingual topics from Wikipedia", "authors": [ { "first": "Xiaochuan", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Jian-Tao", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 18th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "1155--1156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ni, Xiaochuan, Jian-Tao Sun, Jian Hu, and Zheng Chen. 2009. Mining multilingual topics from Wikipedia. In Proceedings of the 18th International Conference on World Wide Web, WWW 2009, pages 1155-1156, Madrid.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "A survey of crosslingual word embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vulic", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2019, "venue": "Journal of Artificial Intelligence Research", "volume": "65", "issue": "", "pages": "569--631", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruder, Sebastian, Ivan Vulic, and Anders S\u00f8gaard. 2019. A survey of cross- lingual word embedding models. Journal of Artificial Intelligence Research, 65:569-631.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Authorship attribution with topic models", "authors": [ { "first": "Yanir", "middle": [], "last": "Seroussi", "suffix": "" }, { "first": "Ingrid", "middle": [], "last": "Zukerman", "suffix": "" }, { "first": "Fabian", "middle": [], "last": "Bohnert", "suffix": "" } ], "year": 2014, "venue": "Computational Linguistics", "volume": "40", "issue": "2", "pages": "269--310", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seroussi, Yanir, Ingrid Zukerman, and Fabian Bohnert. 2014. Authorship attribution with topic models. 
Computational Linguistics, 40(2):269-310.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Knowledge transfer across multilingual corpora via latent topics", "authors": [ { "first": "Wim", "middle": [], "last": "Smet", "suffix": "" }, { "first": "Jie", "middle": [], "last": "De", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Tang", "suffix": "" }, { "first": ";", "middle": [], "last": "Moens", "suffix": "" }, { "first": "", "middle": [], "last": "Shenzhen", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Zeljko", "middle": [], "last": "Agic", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "H\u00e9ctor Mart\u00ednez Alonso", "suffix": "" }, { "first": "Bernd", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "", "middle": [], "last": "Johannsen", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing", "volume": "", "issue": "", "pages": "1713--1722", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smet, Wim De, Jie Tang, and Marie-Francine Moens. 2011. Knowledge transfer across multilingual corpora via latent topics. In Advances in Knowledge Discovery and Data Mining -15th Pacific-Asia Conference, PAKDD 2011, pages 549-560, Shenzhen. S\u00f8gaard, Anders, Zeljko Agic, H\u00e9ctor Mart\u00ednez Alonso, Barbara Plank, Bernd Bohnet, and Anders Johannsen. 2015. Inverted indexing for cross-lingual NLP. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, pages 1713-1722, Beijing.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Cross-lingual bridges with models of lexical borrowing", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" }, { "first": "", "middle": [], "last": "Istanbul", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012", "volume": "55", "issue": "", "pages": "63--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiedemann, J\u00f6rg. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012, pages 2214-2218, Istanbul. Tsvetkov, Yulia and Chris Dyer. 2016. Cross-lingual bridges with models of lexical borrowing. 
Journal of Artificial Intelligence Research, 55:63-93.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Cross-lingual models of word embeddings: An empirical comparison", "authors": [ { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016", "volume": "", "issue": "", "pages": "1661--1670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Upadhyay, Shyam, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, pages 1661-1670, Berlin.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "On the role of seed lexicons in learning bilingual word embeddings", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016", "volume": "", "issue": "", "pages": "247--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vuli\u0107, Ivan and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, pages 247-257, Berlin.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Probabilistic models of cross-lingual semantic similarity in context based on latent cross-lingual concepts induced from comparable data", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "349--362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vuli\u0107, Ivan and Marie-Francine Moens. 2014. Probabilistic models of cross-lingual semantic similarity in context based on latent cross-lingual concepts induced from comparable data. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, pages 349-362, Doha.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Probabilistic topic modeling in multilingual settings: An overview of its methodology and applications", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Wim", "middle": [ "De" ], "last": "Smet", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2015, "venue": "Information Processing & Management", "volume": "51", "issue": "", "pages": "111--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vuli\u0107, Ivan, Wim De Smet, Jie Tang, and Marie-Francine Moens. 2015. Probabilistic topic modeling in multilingual settings: An overview of its methodology and applications. 
Information Processing & Management, 51(1):111-147.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Evaluation methods for topic models", "authors": [ { "first": "Hanna", "middle": [ "M" ], "last": "Wallach", "suffix": "" }, { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mimno", "suffix": "" }, { "first": ";", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "", "middle": [], "last": "Vancouver", "suffix": "" }, { "first": "", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "M", "middle": [], "last": "Hanna", "suffix": "" }, { "first": "Iain", "middle": [], "last": "Murray", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Mimno", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 26th Annual International Conference on Machine Learning", "volume": "22", "issue": "", "pages": "1105--1112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wallach, Hanna M., David M. Mimno, and Andrew McCallum. 2009. Rethinking LDA: Why priors matter. In Advances in Neural Information Processing Systems 22, pages 1973-1981, Vancouver. Wallach, Hanna M., Iain Murray, Ruslan Salakhutdinov, and David M. Mimno. 2009. Evaluation methods for topic models. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, pages 1105-1112, Montreal.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Document clustering based on non-negative matrix factorization", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yihong", "middle": [], "last": "Gong", "suffix": "" } ], "year": 2003, "venue": "SIGIR 2003: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "267--273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, Wei, Xin Liu, and Yihong Gong. 2003. Document clustering based on non-negative matrix factorization. In SIGIR 2003: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 267-273, Toronto.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "LightLDA: Big topic models on modest computer clusters", "authors": [ { "first": "Jinhui", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Qirong", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Jinliang", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Xun", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Eric", "middle": [ "Po" ], "last": "Xing", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wei-Ying", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "1351--1361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan, Jinhui, Fei Gao, Qirong Ho, Wei Dai, Jinliang Wei, Xun Zheng, Eric Po Xing, Tie-Yan Liu, and Wei-Ying Ma. 2015. LightLDA: Big topic models on modest computer clusters. 
In Proceedings of the 24th International Conference on World Wide Web, WWW 2015, pages 1351-1361, Florence.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Figure 1 Overview of multilingual topic models. (a) Multilingual topic models project-language specific and high-dimensional features from the vocabulary space to a language-agnostic and low-dimensional topic space. This figure shows a t-SNE (Maaten and Hinton 2008) representation of a real data set. (b) Multilingual topic models produce theme-aligned topics for all languages. From a human's view, each topic contains different languages but the words are describing the same thing.", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "Figure 12 Both SOFTLINK and VOCLINK stay at a stable performance level of either CNPMI or F-1 scores, whereas DOCLINK and C-BILDA expectedly have better performance as there are more linked Wikipedia articles.", "type_str": "figure", "uris": null }, "FIGREF5": { "num": null, "text": "score comparison of different models and languages with cardinality C = 10. Micro-averaged F-1 scores of different models and languages on TED+GV corpora.", "type_str": "figure", "uris": null }, "FIGREF6": { "num": null, "text": "Figure 14 SOFTLINK produces better topics and is more capable of crosslingual classification tasks than VOCLINK when the number of dictionary entries is very limited.", "type_str": "figure", "uris": null }, "FIGREF9": { "num": null, "text": "Test set: TED + GV (b) Multilabel crosslingual document classification F-1 scores in HIGHLAN.", "type_str": "figure", "uris": null }, "FIGREF10": { "num": null, "text": "Figure 20 Comparisons of transfer strength. A value of one (shown in red dotted line) means an equal balance of transfer between document and word levels. We notice SOFTLINK has the most balanced transfer strength, whereas VOCLINK has stronger transfer at the document level although its transfer operation is defined on the word level.", "type_str": "figure", "uris": null }, "TABREF0": { "content": "
Notation | Description
$z$ | The topic assignment to a token.
$w^{(\ell)}$ | A word type in language $\ell$.
$V^{(\ell)}$ | The size of the vocabulary in language $\ell$.
$D^{(\ell)}$ | The size of the corpus in language $\ell$.
$D^{(\ell_1,\ell_2)}$ | The number of document pairs in languages $\ell_1$ and $\ell_2$.
$\alpha$ | A symmetric Dirichlet prior vector of size $K$, where $K$ is the number of topics, and each cell is denoted as $\alpha_k$.
$\theta_{d,\ell}$ | Multinomial distribution over topics for a document $d$ in language $\ell$.
$\beta^{(\ell)}$ | A symmetric Dirichlet prior vector of size $V^{(\ell)}$, where $V^{(\ell)}$ is the size of the vocabulary in language $\ell$.
", "text": "Notation table.", "type_str": "table", "html": null, "num": null }, "TABREF1": { "content": "", "text": "4.1.1 DOCLINK. The document links model (DOCLINK) uses parallel/comparable data sets, so that each bilingual document pair shares the same distribution over topics. Assume the document d in language 1 is paired with d in language 2 . Thus, the transfer target distribution is \u03b8 d, 2 \u2208 R K where K is the number of topics. For a document d 2 , let", "type_str": "table", "html": null, "num": null }, "TABREF2": { "content": "
Language | Family | Stemmer | Stopwords
EN | Germanic | SnowBallStemmer 3 | NLTK
DE | Germanic | SnowBallStemmer | NLTK
ES | Romance | SnowBallStemmer | NLTK
RU | Slavic | SnowBallStemmer | NLTK
AR | Semitic | Assem's Arabic Light Stemmer 4 | GitHub 5
ZH | Sinitic | Jieba 6 | GitHub
", "text": "List of source of stemmers and stopwords used in experiments for HIGHLAN.", "type_str": "table", "html": null, "num": null }, "TABREF3": { "content": "
Group | Language | EN #docs | EN #tokens | EN #types | Paired #docs | Paired #tokens | Paired #types | Wiktionary #entries
HIGHLAN | AR | 2,000 | 616,524 | 48,133 | 2,000 | 181,946 | 25,510 | 16,127
HIGHLAN | DE | 2,000 | 332,794 | 35,921 | 2,000 | 254,179 | 55,610 | 32,225
HIGHLAN | ES | 2,000 | 369,181 | 37,100 | 2,000 | 239,189 | 30,258 | 31,563
HIGHLAN | RU | 2,000 | 410,530 | 39,870 | 2,000 | 227,987 | 37,928 | 33,574
HIGHLAN | ZH | 2,000 | 392,745 | 38,217 | 2,000 | 168,804 | 44,228 | 23,276
LOWLAN | AM | 2,000 | 3,589,268 | 161,879 | 2,000 | 251,708 | 65,368 | 4,588
LOWLAN | AY | 2,000 | 1,758,811 | 84,064 | 2,000 | 169,439 | 24,136 | 1,982
LOWLAN | MK | 2,000 | 1,777,081 | 100,767 | 2,000 | 489,953 | 87,329 | 6,895
LOWLAN | SW | 2,000 | 2,513,838 | 143,691 | 2,000 | 353,038 | 46,359 | 15,257
LOWLAN | TL | 2,000 | 2,017,643 | 261,919 | 2,000 | 232,891 | 41,618 | 6,552
", "text": "Statistics of training Wikipedia corpus and Wiktionary.", "type_str": "table", "html": null, "num": null }, "TABREF4": { "content": "", "text": "Model specifications.", "type_str": "table", "html": null, "num": null }, "TABREF5": { "content": "
word \u124b\u1295\u124b (language) | degree bakteria (bacteria) | food \u043a\u0440\u0430\u0432\u0430 (cow)
dialect \u1243\u120b\u1275 (words) | science asidi (acid) | secret \u043a\u043d\u0438\u0433\u0430 (book)
vowel \u134a\u12f0\u120d (letter) | professor spishi (species) | under \u0431\u0435\u0437\u0431\u0435\u0434\u043d\u043e\u0441\u0442 (security)
latin \u133d\u1215\u1348\u1275 (writing) | award amino (amino) | bridge \u0443\u043d\u0438\u0432\u0435\u0440\u0437\u0443\u043c\u043e\u0442 (universe)
spoken \u12a0\u120d\u134b\u1264\u1275 (alphabet) | bachelor seli (cells) | pills \u0437\u0430\u043f\u043e\u0447\u043d\u0435\u0442\u0435 (start)
letter \u12f5\u121d\u133d (audio) | program aina (type) | diploma \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u0435 (magazine)
Arabic \u12a5\u1295\u130d\u120a\u12dd\u129b (English) | academic bata (duck) | frogs \u0434\u0440\u0432\u0458\u0430 (trees)
speaker \u120d\u1233\u1293\u1275 (tongues) | institute maji (water) | lights \u0437\u0430\u0432\u0435\u0441\u0430 (curtain)
verb \u121d\u120d\u12ad\u1275 (signal) | student wanyama (animals) | lie \u0447\u0443\u0434\u043e (miracle)
linguist \u1230\u12ce\u127d (people) | chemistry protini (protein) | donuts \u0432\u0438\u0442\u0430\u043c\u0438\u043d (vitamin)
", "text": "Evaluation: Crosslingual Classification. Crosslingual document classification is the most common downstream application for multilingual topic models(Smet, Tang,", "type_str": "table", "html": null, "num": null }, "TABREF6": { "content": "
Group | Language | EN #docs | EN #tokens | EN #types | Paired #docs | Paired #tokens | Paired #types
HIGHLAN | AR | 10,000 | 3,597,322 | 128,926 | 10,000 | 996,801 | 64,197
HIGHLAN | DE | 10,000 | 2,155,680 | 103,812 | 10,000 | 1,459,015 | 166,763
HIGHLAN | ES | 10,000 | 3,021,732 | 149,423 | 10,000 | 1,737,312 | 142,086
HIGHLAN | RU | 10,000 | 3,016,795 | 154,442 | 10,000 | 2,299,332 | 284,447
HIGHLAN | ZH | 10,000 | 1,982,452 | 112,174 | 10,000 | 1,335,922 | 144,936
LOWLAN | AM | 4,316 | 9,632,700 | 269,772 | 4,316 | 403,158 | 91,295
LOWLAN | AY | 4,187 | 5,231,260 | 167,531 | 4,187 | 280,194 | 32,424
LOWLAN | MK | 10,000 | 11,080,304 | 301,026 | 10,000 | 3,175,182 | 245,687
LOWLAN | SW | 10,000 | 13,931,839 | 341,231 | 10,000 | 1,755,514 | 134,152
LOWLAN | TL | 6,471 | 7,720,517 | 645,534 | 6,471 | 1,124,049 | 83,967
", "text": "Statistics of Wikipedia corpus for topic coherence evaluation (CNPMI).", "type_str": "table", "html": null, "num": null }, "TABREF7": { "content": "
Corpus | Language | #docs | #tokens | #types | #technology | #culture | #science
TED | AR | 1,112 | 1,066,754 | 15,124 | 384 | 304 | 290
TED | DE | 1,063 | 774,734 | 19,826 | 364 | 289 | 276
TED | ES | 1,152 | 933,376 | 13,088 | 401 | 312 | 295
TED | RU | 1,010 | 831,873 | 17,020 | 346 | 275 | 261
TED | ZH | 1,123 | 1,032,708 | 19,594 | 386 | 315 | 290
GV (HIGHLAN) | AR | 2,000 | 325,879 | 13,072 | 510 | 489 | 33
GV (HIGHLAN) | DE | 1,481 | 269,470 | 16,031 | 346 | 344 | 42
GV (HIGHLAN) | ES | 2,000 | 367,631 | 11,104 | 457 | 387 | 38
GV (HIGHLAN) | RU | 2,000 | 488,878 | 16,157 | 516 | 369 | 62
GV (HIGHLAN) | ZH | 2,000 | 528,370 | 18,194 | 499 | 366 | 56
GV (LOWLAN) | AM | 39 | 10,589 | 4,047 | 3 | 3 | 1
GV (LOWLAN) | AY | 674 | 66,076 | 4,939 | 76 | 100 | 46
GV (LOWLAN) | MK | 1,992 | 388,713 | 29,022 | 343 | 426 | 182
GV (LOWLAN) | SW | 1,383 | 359,066 | 14,072 | 137 | 110 | 71
GV (LOWLAN) | TL | 254 | 26,072 | 6,138 | 32 | 67 | 19
", "text": "Statistics of TED Talks 2013 (TED) and Global Voices (GV) corpus.", "type_str": "table", "html": null, "num": null } } } }