{ "paper_id": "C16-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:01:41.552073Z" }, "title": "Integrating Topic Modeling with Word Embeddings by Mixtures of vMFs", "authors": [ { "first": "Ximing", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jilin University", "location": { "country": "China" } }, "email": "liximing86@gmail.com" }, { "first": "Jinjin", "middle": [], "last": "Chi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jilin University", "location": { "country": "China" } }, "email": "" }, { "first": "Changchun", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jihong", "middle": [], "last": "Ouyang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jilin University", "location": { "country": "China" } }, "email": "" }, { "first": "Bo", "middle": [], "last": "Fu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jilin University", "location": { "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Gaussian LDA integrates topic modeling with word embeddings by replacing discrete topic distribution over word types with multivariate Gaussian distribution on the embedding space. This can take semantic information of words into account. However, the Euclidean similarity used in Gaussian topics is not an optimal semantic measure for word embeddings. Acknowledgedly, the cosine similarity better describes the semantic relatedness between word embeddings. To employ the cosine measure and capture complex topic structure, we use von Mises-Fisher (vMF) mixture models to represent topics, and then develop a novel mix-vMF topic model (MvTM). Using public pre-trained word embeddings, we evaluate MvTM on three real-world data sets. Experimental results show that our model can discover more coherent topics than the state-of-the-art baseline models, and achieve competitive classification performance.", "pdf_parse": { "paper_id": "C16-1015", "_pdf_hash": "", "abstract": [ { "text": "Gaussian LDA integrates topic modeling with word embeddings by replacing discrete topic distribution over word types with multivariate Gaussian distribution on the embedding space. This can take semantic information of words into account. However, the Euclidean similarity used in Gaussian topics is not an optimal semantic measure for word embeddings. Acknowledgedly, the cosine similarity better describes the semantic relatedness between word embeddings. To employ the cosine measure and capture complex topic structure, we use von Mises-Fisher (vMF) mixture models to represent topics, and then develop a novel mix-vMF topic model (MvTM). Using public pre-trained word embeddings, we evaluate MvTM on three real-world data sets. Experimental results show that our model can discover more coherent topics than the state-of-the-art baseline models, and achieve competitive classification performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Topic models such as latent Dirichlet allocation (LDA) (Blei et al., 2003) are hierarchical probabilistic models of document collections. They can effectively uncover the main themes of corpora by using latent topics learnt from observed collections (Blei, 2012) , however, they neglect semantic information of words. 
In topic modeling, a \"topic\" is a multinomial distribution over a fixed vocabulary, i.e., a word type proportion. Because words are represented by unordered indexes, with statistical inference algorithms, related words are grouped into topics mainly by using document-level word co-occurrence information (Wang and McCallum, 2006) , rather than semantics of words. That is why LDA often outputs many low-quality topics, and views in (Das et al., 2015) even suggest that any such observation of semantically coherent topics in topic models is, in some sense, accidental.", "cite_spans": [ { "start": 55, "end": 74, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" }, { "start": 250, "end": 262, "text": "(Blei, 2012)", "ref_id": "BIBREF4" }, { "start": 623, "end": 648, "text": "(Wang and McCallum, 2006)", "ref_id": "BIBREF19" }, { "start": 751, "end": 769, "text": "(Das et al., 2015)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To mix with semantics of words, a recent Gaussian LDA (G-LDA) (Das et al., 2015) model integrates topic modeling with word embeddings, which can effectively capture lexico-semantic regularities in language from a large unlabeled corpus (Mikolov et al., 2013) . This hot technique transforms words into vectors (i.e., word vector). To model documents of word vectors, G-LDA replaces the discrete topic distributions over word types with multivariate Gaussian distributions on the word embedding space. Because words with similar semantic properties are closer to each other in the embedding space, semantic information of words can be taken into consideration by using Gaussian distributions to describe semantic centrality location of topics.", "cite_spans": [ { "start": 62, "end": 80, "text": "(Das et al., 2015)", "ref_id": "BIBREF5" }, { "start": 236, "end": 258, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An issue of G-LDA is that the word weights in Gaussian topics are measured by the Euclidean similarity between word embeddings. However, the Euclidean similarity is not an optimal semantic measure, since most of word embedding algorithms use exponentiated cosine similarity as the link function (Li et al., 2016a) . The cosine similarity may be a better choice to describe the semantic relatedness between word embeddings. Following this idea, in this paper we use von Mises-Fisher (vMF) distributions on the embedding space to represent topics, replacing Gaussian topics in G-LDA. The vMF distribution defines a probability density over vectors on a unit sphere, parameterized by mean \u00b5 and concentration parameter \u03ba. Its density function for x \u2208 R M , x = 1, \u00b5 = 1, \u03ba \u2265 0 is given by:", "cite_spans": [ { "start": 295, "end": 313, "text": "(Li et al., 2016a)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "p (x|\u00b5, \u03ba) = c p (\u03ba) exp \u03ba\u00b5 T x (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where c p (\u03ba) is the normalization constant. Note that vMF concerns the cosine similarity defined by \u00b5 T x. It is a better way to represent topics of word embeddings. Another issue we face is that topics often contain many words that are far away from each other in the embedding space. 
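As a concrete reference for Eq. 1, the following is a minimal sketch of the vMF log-density; it is an illustration only, and it assumes SciPy's exponentially scaled Bessel function ive is an acceptable way to compute the normalization constant c_p(kappa).

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function of the first kind

def vmf_log_density(x, mu, kappa):
    """Log vMF density of Eq. 1 for unit vectors x, mu in R^M and kappa > 0."""
    M = x.shape[0]
    nu = M / 2.0 - 1.0
    # log c_p(kappa) = nu*log(kappa) - (M/2)*log(2*pi) - log I_nu(kappa),
    # with log I_nu(kappa) = log ive(nu, kappa) + kappa for numerical stability.
    log_cp = nu * np.log(kappa) - (M / 2.0) * np.log(2.0 * np.pi) \
             - (np.log(ive(nu, kappa)) + kappa)
    return log_cp + kappa * np.dot(mu, x)  # depends on x only through the cosine mu^T x
```

Note that the density depends on x only through the cosine similarity mu^T x. With this picture in mind, consider again the issue that a topic's words may be spread far apart on the sphere.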
That is, the true distributions of topics often form two or more dominant clumps. However, a simple vMF distribution is unable to capture such structure. For example, the topic sof tware, user, net, f eedback, grade contains some \"dissimilar\" words, such as net and grade 1 . In this case, a simple vMF topic distribution can not simultaneously place high probabilities on these \"dissimilar\" words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address the problem mentioned above, we further use mixtures of vMFs to describe topics, rather than a single vMF. We then develop a novel mix-vMF topic model (MvTM). Mixtures of vMFs can help us capture complex topic structure that forms more dominant clumps. In MvTM, we consider two settings with respect to the topic, i.e., disjoint setting and overlapping setting. Naturally, in disjoint settings all mixtures of vMFs use disjoint vMF bases; and in overlapping setting some mixtures of vMFs share the same vMF bases. An advantages of the overlapping setting is that it can describe topic correlation in some degree. We have conducted a number of experiments on three real-world data sets. Experimental results show that our MvTM can discover more coherent topics than the state-of-the-art baseline topic models, and achieve competitive performance on the classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we simply review LDA and G-LDA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "LDA (Blei et al., 2003) is a representative probabilistic topic model of document collections. In LDA, the main themes of corpora are described by topics, where each topic is a multinomial distribution \u03c6 over a fixed vocabulary (i.e., a word type proportion). Each document is a multinomial distribution \u03b8 over topics (i.e., a topic proportion). For simplification, distributions \u03c6 and \u03b8 are designed to be sampled from the conjugate Dirichlet priors parameterized by \u03b2 and \u03b1, respectively. Suppose that D, K and V denote the number of documents, topics and word types. The generative process of LDA is as follows:", "cite_spans": [ { "start": 4, "end": 23, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "LDA", "sec_num": "2.1" }, { "text": "1. For each topic k \u2208 {1, 2, \u2022 \u2022 \u2022 , K} (a) Sample a topic \u03c6 k \u223c Dir (\u03b2) 2. For each document d \u2208 {1, 2, \u2022 \u2022 \u2022 , D} (a) Sample a topic proportion: \u03b8 d \u223c Dir (\u03b1) (b) For each of the N d words embeddings i. Sample a topic indicator z dn \u223c M ultinomial (\u03b8 d ) ii. Sample a word w dn \u223c M ultinomial (\u03c6 z dn )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LDA", "sec_num": "2.1" }, { "text": "Reviewing the definition above, we note that a topic in LDA is a discrete distribution over observable word types (i.e., word indexes). In this sense, LDA neglects semantic information of words and precludes new word types to be added into topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LDA", "sec_num": "2.1" }, { "text": "G-LDA (Das et al., 2015) integrates topic modeling with word embeddings. 
This model replaces the discrete topic distributions over word types with multivariate Gaussian distributions on an M-dimensional embedding space, and concurrently replaces the Dirichlet priors with the conjugate Normal-Inverse-Wishart (NIW) priors on Gaussian topics. Because word embeddings learnt from large unlabeled corpora effectively capture semantic information of words (Bengio et al., 2003) , G-LDA can handle, in some sense, words' semantics and new word types. Let N (\u00b5 k , \u03a3 k ) be the Gaussian topic k with mean \u00b5 k and covariance matrix \u03a3 k . The generative process of G-LDA is as follows:", "cite_spans": [ { "start": 6, "end": 24, "text": "(Das et al., 2015)", "ref_id": "BIBREF5" }, { "start": 452, "end": 473, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "G-LDA", "sec_num": "2.2" }, { "text": "1. For each topic k \u2208 {1, 2, \u2022 \u2022 \u2022 , K} (a) Sample a Gaussian topic N (\u00b5 k , \u03a3 k ) \u223c N IW (\u00b5 0 , \u03ba 0 , \u03a8 0 , \u03bd 0 ) 2. For each document d \u2208 {1, 2, \u2022 \u2022 \u2022 , D} (a) Sample a topic proportion: \u03b8 d \u223c Dir (\u03b1) (b) For each of the N d word embeddings i. Sample a Gaussian topic indicator z dn \u223c M ultinomial (\u03b8 d ) ii. Sample a word embedding w dn \u223c N (\u00b5 z dn , \u03a3 z dn )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G-LDA", "sec_num": "2.2" }, { "text": "3 MvTM G-LDA defines Gaussian topics, which measure word weights in topics by the Euclidean similarity between word embeddings. However, the Euclidean similarity is not an optimal semantic measure of word embeddings. People often prefer the cosine similarity (Li et al., 2016a) . To upgrade G-LDA, a novel mix-vMF topic model (MvTM) is proposed, where we replace the Gaussian topic in G-LDA with mixture of vMFs. In this work, we use mixture of vMFs with C mixture components (Banerjee et al., 2005) described by:", "cite_spans": [ { "start": 259, "end": 277, "text": "(Li et al., 2016a)", "ref_id": "BIBREF9" }, { "start": 476, "end": 499, "text": "(Banerjee et al., 2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "G-LDA", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p (x|\u03c0 1:C , \u00b5 1:C , \u03ba) = C c=1 \u03c0 c p c (x|\u00b5 c , \u03ba)", "eq_num": "(2)" } ], "section": "G-LDA", "sec_num": "2.2" }, { "text": "where p c (x|\u00b5 c , \u03ba) is the mixture vMF component (i.e., base); \u03c0 c is the mixture weight and such that C c=1 \u03c0 c = 1. The design of MvTM has two advantages. First, the vMF distribution defines a probability density over normalized vectors on a unit sphere. Reviewing Eq.1, it can be seen that vMF concerns the cosine similarity. Second, using linear vMF mixture model can help us capture complex topic structure, which forms two or more dominant clumps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G-LDA", "sec_num": "2.2" }, { "text": "Formally, MvTM models documents consisting of normalized word embeddings w in an Mdimensional space, i.e., w = 1 and w \u2208 R M . Suppose that there are K topics in total. We characterize each topic k as a mixture of vMFs with parameter \u2206 k = \u03c0 k| 1:C , \u00b5 k|1:C , \u03ba k . Besides the topic design, again suppose that each document is a topic proportion \u03b8, drawn from a Dirichlet prior \u03b1. 
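Before stating the generative process, here is a minimal sketch of the mix-vMF topic density in Eq. 2, evaluated in log space with a log-sum-exp over the C bases; it is an illustration only and reuses the hypothetical vmf_log_density helper from the earlier sketch.

```python
import numpy as np
from scipy.special import logsumexp

def mix_vmf_log_density(x, pis, mus, kappa):
    """Log of Eq. 2: log sum_c pi_c * vMF(x | mu_c, kappa).
    pis: (C,) mixture weights summing to one; mus: (C, M) unit mean directions."""
    log_terms = [np.log(pis[c]) + vmf_log_density(x, mus[c], kappa)
                 for c in range(len(pis))]
    return logsumexp(np.array(log_terms))
```

Each base can place its mass on a different clump of words, which a single vMF cannot do. With this density in place, the generative process can be stated.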
Let D and N d be the number of documents and the number of words in document d, respectively. The generative process of MvTM is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G-LDA", "sec_num": "2.2" }, { "text": "1. For each document d \u2208 {1, 2, \u2022 \u2022 \u2022 , D} (a) Sample a topic proportion: \u03b8 d \u223c Dir (\u03b1) (b) For each of the N d word embeddings i. Sample a vMF mixture topic indicator z dn \u223c M ultinomial (\u03b8 d ) ii. Sample a word vector w dn \u223c vMF (\u2206 z dn )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G-LDA", "sec_num": "2.2" }, { "text": "In MvTM, the vMF bases of different topics can be either disjoint or overlapping. For disjoint MvTM (abbr. MvTM d ), the vMF bases of different topics are disjoint. In MvTM d , the total number of vMF bases is C \u00d7K. For overlapping MvTM (abbr. MvTM o ), vMF bases are allowed to be shared by different topics. An advantage is that the overlapping setting can describe topic correlation in some degree. For example, if two topics share a same vMF base and their corresponding mixture weights are close to each other, they may be semantically correlated. In previous study, we have examined several overlapping patterns, e.g., all topics share a same set of vMF bases. However, an issue is that such patterns often output many twinborn topics. In this work, we use the following overlapping scheme: suppose that there are G groups of K topics. In a group, each topic consists of C personal vMF bases, and all topics in this group share P public vMF bases, where C + P = C. In this setting, the total number of vMF bases is G \u00d7 (K \u00d7 C + P ), and topics in a group G g use a same \u03ba g , i.e., \u03ba", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G-LDA", "sec_num": "2.2" }, { "text": "g = \u03ba k = \u2022 \u2022 \u2022 = \u03ba k if k \u2022 \u2022 \u2022 k \u2208 G g .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G-LDA", "sec_num": "2.2" }, { "text": "The intuition behind overlapping by topic groups is that only a small set of topics may be semantically correlated. Besides, the personal vMF base design can effectively avoid the outputs of twinborn topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G-LDA", "sec_num": "2.2" }, { "text": "For MvTM, the topic proportions {\u03b8 d } d=D d=1 and the topic assignments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "{z dn } d=D,n=N d d=1,n=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "are hidden variables; and the topics {vMF (\u2206 k )} k=K k=1 are model parameters. Given an observable document collection W consisting of word embeddings, we wish to compute the posterior distribution over \u03b8 and z, and to estimate vMF (\u2206).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "Because the exact posterior distribution p(\u03b8, z|W, \u03b1, \u2206) is intractable to be computed, we must resort approximation inference algorithms. Due to the multinomial-Dirichlet design, the topic proportion \u03b8 can be analytically integrated out. We then use hybrid variational-Gibbs (HVG) (Mimno et al., 2012) to approximate a posterior over the topic assignment z: p(z|W, \u03b1, \u2206). 
A variational distribution of the following form is used:", "cite_spans": [ { "start": 282, "end": 302, "text": "(Mimno et al., 2012)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q(z) = D d=1 q(z d )", "eq_num": "(3)" } ], "section": "Inference", "sec_num": "3.1" }, { "text": "where q(z d ) is a single distribution over the K N d possible topic configurations, rather than a product of N d distributions. By using this variational distribution, we obtain an Evidence Lower BOund (ELBO) L as follows :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "log p(z|W, \u03b1, \u2206) \u2265 L(z d , \u2206) \u2206 = E q [log p(W, z|\u03b1, \u2206)] \u2212 E q [log q(z)]", "eq_num": "(4)" } ], "section": "Inference", "sec_num": "3.1" }, { "text": "We then develop an expectation maximization (EM) process to optimize this ELBO, where in the Estep we maximize L with respect to the variational distribution q(z), and in the M-step we maximize L with respect to the model parameter \u2206, holding q(z) fixed. Optimizing q(z) directly is expensive because for each document d it needs to enumerate all K N d possible topic configurations. We therefore apply Monte-Carlo approximation to this ELBO L in Eq.4 by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(z d , \u2206) \u2206 = E q [log p(W, z|\u03b1, \u2206)] \u2212 E q [log q(z)] \u2248 1 B B b=1 log p(W, z (b) |\u03b1, \u2206) \u2212 log q z (b)", "eq_num": "(5)" } ], "section": "Inference", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "z (b) b=B b=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "are samples drawn from q(z). Because the variational distributions q(z d ) are independent from each other, reviewing Eq.3, each document d drives a personal sampling process with respect to q(z d ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "In the E-step, for each document d we use Gibbs sampling to draw B samples from q(z d ). This sequentially samples topic assignment to each word embedding from the posterior distribution conditioned on all other variables and the data. The sampling equation is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(z dn = k|z \u2212n d , \u03b1, \u2206) \u221d (N \u2212n dk + \u03b1) \u00d7 vMF(w dn |\u2206 k )", "eq_num": "(6)" } ], "section": "Inference", "sec_num": "3.1" }, { "text": "where N dk is the number of word embeddings assigned to topic k in document d; the superscript \"-n\" is a quantity that excludes the word embedding w dn . 
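A minimal sketch of this per-word sampling step is given below; it is an illustration rather than the authors' implementation, and it assumes the log vMF densities of each word under every topic have been precomputed (as discussed for the E-step in Section 3.2).

```python
import numpy as np

def resample_topic(n, z_d, topic_counts, log_vmf_d, alpha, rng):
    """One Gibbs step for word n of document d, following Eq. 6.
    z_d: current topic assignments of the document (length N_d);
    topic_counts: length-K array of counts N_dk;
    log_vmf_d: (N_d, K) precomputed log vMF(w_dn | Delta_k)."""
    topic_counts[z_d[n]] -= 1                       # exclude w_dn (the "-n" counts)
    log_p = np.log(topic_counts + alpha) + log_vmf_d[n]
    p = np.exp(log_p - log_p.max())                 # stabilize before normalizing
    k_new = rng.choice(len(p), p=p / p.sum())
    topic_counts[k_new] += 1
    z_d[n] = k_new
    return k_new
```

Sweeping this step over all words of a document yields one iteration of the chain for q(z_d).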
During per-document Gibbs sampling, we iteratively run the MCMC chain a fixed number of times and save the last B samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "In the M-step, we optimize \u2206 given all samples of z obtained in E-step. This is achieved by maximizing the following approximate ELBO L :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L \u2206 = 1 B B b=1 log p(W, z (b) |\u03b1, \u2206) + const", "eq_num": "(7)" } ], "section": "Inference", "sec_num": "3.1" }, { "text": "For the disjoint setting, i.e., MvTM d , the optimization of L is equivalent to independently estimate \u2206 k for each topic k. Due to space limit, we omit the derivation details (Gopal and Yang, 2014) . Extracting all N k word embeddings assigned to topic k, for each word embedding w i we compute its weights for all C vMF bases by:", "cite_spans": [ { "start": 176, "end": 198, "text": "(Gopal and Yang, 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "weight ic = \u03c0 k|c vMF(w i |\u00b5 k|c , \u03ba k ) C j=1 \u03c0 k|j vMF(w i |\u00b5 k|j , \u03ba k )", "eq_num": "(8)" } ], "section": "Inference", "sec_num": "3.1" }, { "text": "and then update \u2206 k by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R k|c = N k i=1 weight ic \u00d7 w i , r k = C c=1 R k|c N k \u00b5 k|c = R k|c R k|c , \u03c0 k|c = N k i=1 weight ic N k , \u03ba k = r k M \u2212 r 3 k 1 \u2212 r 2 k", "eq_num": "(9)" } ], "section": "Inference", "sec_num": "3.1" }, { "text": "For the overlapping setting, i.e., MvTM o , there are a few changes to the optimization of L . In each topic group G g , the updates of \u03c0 and \u00b5 of personal vMF bases remain unchanged, whereas the mean \u00b5 of public vMF bases and \u03ba of this group are updated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r g = C c=1 k\u2208Gg R k|c k\u2208Gg N k , \u00b5 k|p = k\u2208Gg R k|p k\u2208Gg R k|p , \u03ba g = r g M \u2212 r 3 g 1 \u2212 r 2 g", "eq_num": "(10)" } ], "section": "Inference", "sec_num": "3.1" }, { "text": "where \u00b5 k|p is the mean of the pth public vMF base for topic k and note that \u00b5 k|p = \u00b5 k |p if k, k \u2208 G g . For clarity, the overall EM inference algorithm for MvTM is outlined in Algorithm 1. For MvTM 0 , optimize \u2206 using Eq.8, 9 and 10. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.1" }, { "text": "We first analyze the time complexities of the E-step and the M-step, and then present the overall time cost of MvTM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "3.2" }, { "text": "In the E-step, the main time cost lies in sampling the topic assignment of each word embedding over K topics. Reviewing Eq.6, one sampling step needs to compute the probability of the current word embedding, i.e., vMF(w dn |\u2206 k ), under all K topics, which requires O(KCM ) time. Fortunately, the topics are fixed in the E-step, so we only need to compute vMF(w|\u2206 k ) for each word embedding once at the beginning of each EM sweep and cache the values in memory. This requires O(V KCM ) time, where V is the number of distinct word embeddings (i.e., word types). Consequently, the topic sampling step of MvTM has the same cost as that of Gibbs-sampling LDA, requiring O(K) time per word. The per-iteration time complexity of the E-step is therefore O(V KCM + \u03b6N V K), where \u03b6 is the number of iterations of per-document Gibbs sampling and N V is the total number of word tokens in the corpus. Recently, sparse sampling algorithms (Yao et al., 2009; Li et al., 2014) have effectively accelerated the sampling of topic models. Inspired by (Li et al., 2016b) , we employ the Alias method (Walker, 1977; Marsaglia et al., 2004) to reduce the per-word sampling cost from O(K) to O(K d ), where K d is the number of instantiated topics in document d and commonly K d \u226a K. The per-iteration time complexity of the E-step then becomes O(V KCM + \u03b6N V K d ).", "cite_spans": [ { "start": 922, "end": 940, "text": "(Yao et al., 2009;", "ref_id": "BIBREF20" }, { "start": 941, "end": 957, "text": "Li et al., 2014)", "ref_id": "BIBREF8" }, { "start": 1023, "end": 1041, "text": "(Li et al., 2016b)", "ref_id": "BIBREF10" }, { "start": 1071, "end": 1085, "text": "(Walker, 1977;", "ref_id": "BIBREF17" }, { "start": 1086, "end": 1109, "text": "Marsaglia et al., 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "3.2" }, { "text": "In the M-step, the time cost of MvTM d and that of MvTM o are almost the same, so we only present the time complexity of MvTM d . Reviewing the M-step, the most expensive updates are Eq.8 and the first and fourth equations in Eq.9, which require O(V CM ), O(V CM ) and O(V C) time, respectively. Thus the per-iteration time complexity of the M-step is O(V CM ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "3.2" }, { "text": "Overall, in each EM sweep the E-step dominates the run-time, giving an approximate total per-iteration time complexity of O(V KCM + \u03b6N V K d ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "3.2" }, { "text": "Clearly, MvTM is much more efficient than Gibbs-sampling G-LDA (Das et al., 2015) , because G-LDA needs to repeatedly compute the determinant and inverse of the covariance matrix of each Gaussian topic. 
For each word occurring, this spends O(M 2 ) time, even using Cholesky decomposition.", "cite_spans": [ { "start": 58, "end": 76, "text": "(Das et al., 2015)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "3.2" }, { "text": "In this section, we evaluate MvTM qualitatively and quantitatively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "4" }, { "text": "Data set Three data sets were used in our experiments, including Newsgroup (NG), NIPS and Wikipedia (Wiki). The NG data set is a collection of newsgroup documents, consisting of 20 classes. We will use NG to examine the classification performance of MvTM in Section 4.3. The NIPS data set is a collection of papers in the NIPS conference. The processed versions of these two data sets were downloaded from the open source of G-LDA 2 . For the Wiki data set, we downloaded a number of documents from online English Wikipedia, and processed these documents using a standard vocabulary 3 . The statistics of the three data sets are listed in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 639, "end": 646, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4.1" }, { "text": "Baseline model: In the experiments, we used two baseline models, including LDA 4 and G-LDA 2 . For both baseline models, we use their open source codes publicly available on the net. A pre-trained 50dimensional word embeddings 5 were used. Especially for MvTM, we normalized the word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4.1" }, { "text": "We use the PMI score (Newman et al., 2010) to evaluate the quality of topics learnt by topic models. This metric is based on the pointwise mutual information of a power-law reference corpus. For a topic k, given T most probable words the PMI score is computed by:", "cite_spans": [ { "start": 21, "end": 42, "text": "(Newman et al., 2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation on Topics", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P M I (k) = 1 T (T \u2212 1) 1\u2264i\u2264j\u2264T log p (w i , w j ) p (w i ) p (w j )", "eq_num": "(11)" } ], "section": "Evaluation on Topics", "sec_num": "4.2" }, { "text": "where p (w i ) and p (w i , w j ) are the probabilities of occurring word w i and co-occurring word pattern (w i , w j ) estimated by the reference corpus, respectively. In the experiments, we use the Palmetto 6 tool Figure 1 : PMI performance of 15 top words on NG, NIPS and Wiki. to compute PMI scores of the top 15 words. We train baseline models and our MvTM with 50 topics, and evaluate the average PMI score of all topics. For MvTM d , the number of vMF bases is set to 2, i.e., C = 2. For MvTM o , topics are organized into ten groups, where each group consists of five topics; and the numbers of personal vMF bases and public vMF bases are set to 2 and 3, respectively 7 .", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 225, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation on Topics", "sec_num": "4.2" }, { "text": "The experimental PMI results on three data sets are shown in Figure 1 . It is clearly seen that MvTM performs better than LDA and G-LDA. This implies that MvTM outputs more coherent topics. 
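For reference, a minimal sketch of the PMI score in Eq. 11 is shown below; it is an illustration only (the experiments use the Palmetto tool), and it assumes the word and word-pair probabilities have already been estimated from the reference corpus.

```python
import itertools
import math

def pmi_score(top_words, p_word, p_pair):
    """PMI coherence of a topic's T top words, normalized as in Eq. 11.
    p_word: word -> probability; p_pair: frozenset({wi, wj}) -> co-occurrence probability."""
    T = len(top_words)
    total = 0.0
    for wi, wj in itertools.combinations(top_words, 2):  # distinct pairs i < j
        total += math.log(p_pair[frozenset((wi, wj))] / (p_word[wi] * p_word[wj]))
    return total / (T * (T - 1))
```

Higher scores indicate that the top words co-occur more often than chance in the reference corpus.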
Some examples of top topic words are listed in Table 2 . Overall, we see that the topics of MvTM seem more coherent than those of baseline models. The topics of LDA contain some noise words, e,g., \"mr\" and \"don\"; and G-LDA contains some less relevant words, e.g., the second topic of G-LDA is incoherent. In contrast, the topics of MvTM are more precise and clean. Besides, for MvTM o we measure topic correlation by computing the cosine between vMF weights of topics in the same group. Some topic pairs with high cosine similarity scores, such as patients, treatments, therapy, treatment, diabetes and blood, skin, heart, stomach, breathing , seem semantically correlated. ", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 69, "text": "Figure 1", "ref_id": null }, { "start": 237, "end": 244, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation on Topics", "sec_num": "4.2" }, { "text": "We compare the classification performance of MvTM with baseline topic models across NG. Two new baselines are used, i.e., topical word embedding (TWE) (Liu et al., 2015) and infvoc (Zhai and Boyd-Graber, 2013) . For all models, we learn the topic proportions (K=50) as features of documents, and then use the SVM classifier implemented by LibSVM 8 .", "cite_spans": [ { "start": 151, "end": 169, "text": "(Liu et al., 2015)", "ref_id": "BIBREF11" }, { "start": 181, "end": 209, "text": "(Zhai and Boyd-Graber, 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation on Classification", "sec_num": "4.3" }, { "text": "The results of original test documents are shown in Figure 2(a) . Clearly, MvTM achieves better performance than LDA, G-LDA and TWE. MvTM can handle absent words in training data. To examine this ability, we compare MvTM with G-LDA and infvoc 9 , where the two also can handle unseen words. We replace a number of words in test documents with synonyms by using WordNet as in (Das et al., 2015) . The classification results are shown in Figure 2(b) . It can be seen that MvTM outperforms G-LDA and infvoc. The results imply that MvTM works well even future documents containing new words. This may be insignificant in practice.", "cite_spans": [ { "start": 375, "end": 393, "text": "(Das et al., 2015)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 52, "end": 63, "text": "Figure 2(a)", "ref_id": "FIGREF0" }, { "start": 436, "end": 447, "text": "Figure 2(b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation on Classification", "sec_num": "4.3" }, { "text": "Some early works have attempted to combine topic modeling with embeddings. (Hu et al., 2012) proposed a model to describe indexing representations for audio retrieval, which is similar with G-LDA. Another work (Wan et al., 2012) jointly estimates parameters of a topic model and a neural network to represent topics of images.", "cite_spans": [ { "start": 75, "end": 92, "text": "(Hu et al., 2012)", "ref_id": "BIBREF7" }, { "start": 210, "end": 228, "text": "(Wan et al., 2012)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Recently, (Liu et al., 2015) proposed a straightforward TWE model. This model separately trains a topic model and word embeddings on the same corpus, and then uses the average of embeddings assigned to the same topic as the topic embedding. A limitation of TWE is that it lacks statistical foundations. 
Another modification latent feature topic modeling (LFTM) (Nguyen et al., 2015) extends LDA and Dirichlet multinomial mixture by incorporating word embeddings as latent features. However, LFTM may be infeasible for large-scale data sets, since it, i.e., the code provided by its authors, is timeconsuming. A most recent nonparametric model (Batmanghelich et al., 2016) also uses vMF to describe the topic over word embeddings, where a topic is represented by a single vMF on the embedding space. By contrast, it may be less effective to capture complex topic structure.", "cite_spans": [ { "start": 10, "end": 28, "text": "(Liu et al., 2015)", "ref_id": "BIBREF11" }, { "start": 643, "end": 671, "text": "(Batmanghelich et al., 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we investigate how to improve topic modeling with word embeddings. A previous art G-LDA defines Gaussian topics over word embeddings, however, the word weights of topics are measured by the Euclidean similarity. To address this problem and further capture complex topic structure, we use mixtures of vMFs to model topics, and then propose a novel MvTM algorithm. The vMF bases of topics in MvTM can be either disjoint or overlapping, leading to two versions of MvTM. The overlapping MvTM can describe topic correlation in some degree. In empirical evaluations, we use the per-trained GloVe word embeddings, and then compare MvTM with LDA and G-LDA on three real-world data sets. The experimental results indicate that compared to the state-of-the-art baseline models MvTM can discover more coherent topics measured by PMI, and achieve competitive classification performance. In the future, we are interested in supervised versions of MvTM, directly applying to basic document tasks such as sentiment analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Discussion", "sec_num": "6" }, { "text": "This work is licensed under a Creative Commons Attribution 4.0 International License. Page numbers and proceedings footer are added by the organisers. License details: http://creativecommons.org/licenses/by/4.0/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This means that the cosine similarity between word embeddings of net and grade is small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/rajarshd/Gaussian LDA 3 http://www.cs.princeton.edu/\u223cmdhoffma/ 4 http://gibbslda.sourceforge.net/ 5 GloVe word embeddings available at http://nlp.stanford.edu/projects/glove/ 6 http://aksw.org/Projects/Palmetto.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In previous experiments, we found that using mixtures of vMFs with 2 bases is able to better represent topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.csie.ntu.edu.tw/\u223ccjlin/libsvm/ 9 For fair comparison, we train infvoc by a batch optimization procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by National Natural Science Foundation of China (NSFC) under the Grant No. 61133011, and 61103091. 
We thank the reviewers for their useful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Clustering on the unit hypersphere using von Mises-Fisher distributions", "authors": [ { "first": "Arindam", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "S", "middle": [], "last": "Inderjit", "suffix": "" }, { "first": "Joydeep", "middle": [], "last": "Dhillon", "suffix": "" }, { "first": "Suvrit", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "", "middle": [], "last": "Sra", "suffix": "" } ], "year": 2005, "venue": "Journal of Machine Learning Research", "volume": "6", "issue": "", "pages": "1345--1382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arindam Banerjee, Inderjit S. Dhillon, Joydeep Ghosh, and Suvrit Sra. 2005. Clustering on the unit hypersphere using von Mises-Fisher distributions. Journal of Machine Learning Research, 6:1345-1382.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Nonparametric spherical topic modeling with word embeddings", "authors": [ { "first": "Kayhan", "middle": [], "last": "Batmanghelich", "suffix": "" }, { "first": "Ardavan", "middle": [], "last": "Saeedi", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gershman", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1604.00126v1" ] }, "num": null, "urls": [], "raw_text": "Kayhan Batmanghelich, Ardavan Saeedi, Karthik Narasimhan, and Sam Gershman. 2016. Nonparametric spheri- cal topic modeling with word embeddings. arXiv:1604.00126v1.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Rjean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Rjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137-1155.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Probabilistic topic models", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2012, "venue": "Communications of the ACM", "volume": "55", "issue": "4", "pages": "77--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei. 2012. Probabilistic topic models. 
Communications of the ACM, 55(4):77-84.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Gaussian LDA for topic models with word embeddings", "authors": [ { "first": "Rajarshi", "middle": [], "last": "Das", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2015, "venue": "Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "795--804", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajarshi Das, Manzil Zaheer, and Chris Dyer. 2015. Gaussian LDA for topic models with word embeddings. In Annual Meeting of the Association for Computational Linguistics, pages 795-804.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Von mises-fisher clustering models", "authors": [ { "first": "Siddharth", "middle": [], "last": "Gopal", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2014, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "154--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharth Gopal and Yiming Yang. 2014. Von mises-fisher clustering models. In International Conference on Machine Learning, pages 154-162.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Latent topic model based on Gaussian-LDA for audio retrieval", "authors": [ { "first": "Pengfei", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Wenju", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Zhanlei", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2012, "venue": "Pattern Recognition", "volume": "321", "issue": "", "pages": "556--563", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengfei Hu, Wenju Liu, Wei Jiang, and Zhanlei Yang. 2012. Latent topic model based on Gaussian-LDA for audio retrieval. In Pattern Recognition, volume 321 of CCIS, pages 556-563.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Reducing the sampling complexity of topic models", "authors": [ { "first": "Aaron", "middle": [ "Q" ], "last": "Li", "suffix": "" }, { "first": "Amr", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "Alexander", "middle": [ "J" ], "last": "Smola", "suffix": "" } ], "year": 2014, "venue": "International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaron Q. Li, Amr Ahmed, Sujith Ravi, and Alexander J. Smola. 2014. Reducing the sampling complexity of topic models. In International Conference on Knowledge Discovery and Data Mining.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Generative topic embedding: a continuous representation of documents", "authors": [ { "first": "Shaohua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Chunyan", "middle": [], "last": "Miao", "suffix": "" } ], "year": 2016, "venue": "Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chunyan Miao. 2016a. Generative topic embedding: a continuous representation of documents. 
In Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sparse hybrid variational-gibbs algorithm for latent Dirichlet allocation", "authors": [ { "first": "Ximing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jihong", "middle": [], "last": "Ouyang", "suffix": "" }, { "first": "Xiaotang", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2016, "venue": "SIAM International Conference on Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ximing Li, Jihong Ouyang, and Xiaotang Zhou. 2016b. Sparse hybrid variational-gibbs algorithm for latent Dirichlet allocation. In SIAM International Conference on Data Mining.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Topical word embeddings", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" }, { "first": "Maosongsun", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "Association for the Advancement of Artificial Intelligence", "volume": "", "issue": "", "pages": "2418--2424", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and MaosongSun. 2015. Topical word embeddings. In Association for the Advancement of Artificial Intelligence, pages 2418-2424.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Fast generation of discrete random variables", "authors": [ { "first": "George", "middle": [], "last": "Marsaglia", "suffix": "" }, { "first": "Wai", "middle": [ "Wan" ], "last": "Tsang", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2004, "venue": "Journal of Statistical Software", "volume": "11", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Marsaglia, Wai Wan Tsang, and Jingbo Wang. 2004. Fast generation of discrete random variables. Journal of Statistical Software, 11:1-8.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Linguistic regularities in continuous space word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Wen Tau Yih", "suffix": "" }, { "first": "", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Sparse stochastic inference for latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Matthew", "middle": [ "D" ], "last": "Hoffman", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2012, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Mimno, Matthew D. Hoffman, and David M. Blei. 2012. 
Sparse stochastic inference for latent Dirichlet allocation. In International Conference on Machine Learning.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Automatic evaluation of topic coherence", "authors": [ { "first": "David", "middle": [], "last": "Newman", "suffix": "" }, { "first": "Jey", "middle": [ "Han" ], "last": "Lau", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Grieser", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwi", "suffix": "" } ], "year": 2010, "venue": "Annual Conference of the North American Chapter of the ACL", "volume": "", "issue": "", "pages": "100--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwi. 2010. Automatic evaluation of topic coherence. In Annual Conference of the North American Chapter of the ACL, pages 100-108.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improving topic models with latent feature word representations", "authors": [ { "first": "Richard", "middle": [], "last": "Dat Quoc Nguyen", "suffix": "" }, { "first": "Lan", "middle": [], "last": "Billingsley", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Du", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "299--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dat Quoc Nguyen, Richard Billingsley, Lan Du, and Mark Johnson. 2015. Improving topic models with latent feature word representations. Transactions of the Association for Computational Linguistics, 3:299-313.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An efficient method for generating discrete random variables with general distributions", "authors": [ { "first": "Alastair", "middle": [ "J" ], "last": "Walker", "suffix": "" } ], "year": 1977, "venue": "ACM Transactions on Mathematical Software", "volume": "3", "issue": "3", "pages": "253--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alastair J. Walker. 1977. An efficient method for generating discrete random variables with general distributions. ACM Transactions on Mathematical Software, 3(3):253-256.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A hybrid neural network-latent topic model", "authors": [ { "first": "Li", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2012, "venue": "International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "1287--1294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Wan, Leo Zhu, and Rob Fergus. 2012. A hybrid neural network-latent topic model. In International Conference on Artificial Intelligence and Statistics, pages 1287-1294.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Topics over time: A non-Markov continuous-time model of topical trends", "authors": [ { "first": "Xuerui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2006, "venue": "International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "424--433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuerui Wang and Andrew McCallum. 2006. Topics over time: A non-Markov continuous-time model of topical trends. 
In International Conference on Knowledge Discovery and Data Mining, pages 424-433.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Efficient methods for topic model inference on streaming document collections", "authors": [ { "first": "Limin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Limin Yao, David Mimno, and Andrew McCallum. 2009. Efficient methods for topic model inference on stream- ing document collections. In International Conference on Knowledge Discovery and Data Mining.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Online latent Dirichlet allocation with infinite vocabulary", "authors": [ { "first": "Ke", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Jordan", "middle": [ "L" ], "last": "Boyd-Graber", "suffix": "" } ], "year": 2013, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "561--569", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ke Zhai and Jordan L. Boyd-Graber. 2013. Online latent Dirichlet allocation with infinite vocabulary. In Interna- tional Conference on Machine Learning, pages 561-569.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Classification performance on NG: (a) original test documents and (b) test documents with new words." }, "TABREF1": { "num": null, "html": null, "text": "Summarization of data sets used in our experiments. N V is the total number of word tokens; N V /D is the average document length; \"label\" denotes the number of pre-assigned classes.", "content": "
Data set | V | D | NV | NV/D | label
NG | 18,127 | 18,768 | 1,946,487 | 104 | 20
NIPS | 4,805 | 1,740 | 2,097,746 | 1,206 | \u2212
Wiki | 7,702 | 44,819 | 6,851,615 | 153 | \u2212
", "type_str": "table" }, "TABREF2": { "num": null, "html": null, "text": "Random selected examples of top words learnt by baseline models and our MvTM on NG.", "content": "
", "type_str": "table" } } } }