|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:52:52.697138Z" |
|
}, |
|
"title": "Topic Modeling in Embedding Spaces", |
|
"authors": [ |
|
{ |
|
"first": "Adji", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Dieng", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [ |
|
"J R" |
|
], |
|
"last": "Ruiz", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "david.blei@columbia.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Topic modeling analyzes documents to learn meaningful patterns of words. However, existing topic models fail to learn interpretable topics when working with large and heavytailed vocabularies. To this end, we develop the embedded topic model (ETM), a generative model of documents that marries traditional topic models with word embeddings. More specifically, the ETM models each word with a categorical distribution whose natural parameter is the inner product between the word's embedding and an embedding of its assigned topic. To fit the ETM, we develop an efficient amortized variational inference algorithm. The ETM discovers interpretable topics even with large vocabularies that include rare words and stop words. It outperforms existing document models, such as latent Dirichlet allocation, in terms of both topic quality and predictive performance.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Topic modeling analyzes documents to learn meaningful patterns of words. However, existing topic models fail to learn interpretable topics when working with large and heavytailed vocabularies. To this end, we develop the embedded topic model (ETM), a generative model of documents that marries traditional topic models with word embeddings. More specifically, the ETM models each word with a categorical distribution whose natural parameter is the inner product between the word's embedding and an embedding of its assigned topic. To fit the ETM, we develop an efficient amortized variational inference algorithm. The ETM discovers interpretable topics even with large vocabularies that include rare words and stop words. It outperforms existing document models, such as latent Dirichlet allocation, in terms of both topic quality and predictive performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Topic models are statistical tools for discovering the hidden semantic structure in a collection of documents (Blei et al., 2003; Blei, 2012) . Topic models and their extensions have been applied to many fields, such as marketing, sociology, political science, and the digital humanities. Boyd-Graber et al. (2017) provide a review.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 129, |
|
"text": "(Blei et al., 2003;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 130, |
|
"end": 141, |
|
"text": "Blei, 2012)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most topic models build on latent Dirichlet allocation (LDA) (Blei et al., 2003) . LDA is a hierarchical probabilistic model that represents each topic as a distribution over terms and represents each document as a mixture of the topics. When fit to a collection of documents, the topics summarize their contents, and the topic proportions provide a low-dimensional representation of each document. LDA can be fit to large datasets of text by using variational inference and stochastic optimization (Hoffman et al., 2010 (Hoffman et al., , 2013 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 80, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 520, |
|
"text": "(Hoffman et al., 2010", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 521, |
|
"end": 544, |
|
"text": "(Hoffman et al., , 2013", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "LDA is a powerful model and it is widely used. However, it suffers from a pervasive technical problem-it fails in the face of large vocabularies. Practitioners must severely prune their vocabularies in order to fit good topic models-namely, those that are both predictive and interpretable. This is typically done by removing the most and least frequent words. On large collections, this pruning may remove important terms and limit the scope of the models. The problem of topic modeling with large vocabularies has yet to be addressed in the research literature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In parallel with topic modeling came the idea of word embeddings. Research in word embeddings begins with the neural language model of Bengio et al. (2003) , published in the same year and journal as Blei et al. (2003) . Word embeddings eschew the ''one-hot'' representation of words-a vocabulary-length vector of zeros with a single one-to learn a distributed representation, one where words with similar meanings are close in a lower-dimensional vector space (Rumelhart and Abrahamson, 1973; Bengio et al., 2006) . As for topic models, researchers scaled up embedding methods to large datasets (Mikolov et al., 2013a,b; Pennington et al., 2014; Levy and Goldberg, 2014; Mnih and Kavukcuoglu, 2013) . Word embeddings have been extended and developed in many ways. They have become crucial in many applications of natural language processing (Maas et al., 2011; Li and Yang, 2018) , and they have also been extended to datasets beyond text (Rudolph et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 155, |
|
"text": "Bengio et al. (2003)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 218, |
|
"text": "Blei et al. (2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 493, |
|
"text": "(Rumelhart and Abrahamson, 1973;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 514, |
|
"text": "Bengio et al., 2006)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 621, |
|
"text": "(Mikolov et al., 2013a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 622, |
|
"end": 646, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 671, |
|
"text": "Levy and Goldberg, 2014;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 672, |
|
"end": 699, |
|
"text": "Mnih and Kavukcuoglu, 2013)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 842, |
|
"end": 861, |
|
"text": "(Maas et al., 2011;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 862, |
|
"end": 880, |
|
"text": "Li and Yang, 2018)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 940, |
|
"end": 962, |
|
"text": "(Rudolph et al., 2016)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we develop the embedded topic model (ETM), a document model that marries LDA and word embeddings. The ETM enjoys the good properties of topic models and the good properties Figure 1 : Ratio of the held-out perplexity on a document completion task and the topic coherence as a function of the vocabulary size for the ETM and LDA on the 20NewsGroup corpus. The perplexity is normalized by the size of the vocabulary. While the performance of LDA deteriorates for large vocabularies, the ETM maintains good performance. of word embeddings. As a topic model, it discovers an interpretable latent semantic structure of the documents; as a word embedding model, it provides a low-dimensional representation of the meaning of words. The ETM robustly accommodates large vocabularies and the long tail of language data. Figure 1 illustrates the advantages. This figure shows the ratio between the perplexity on held-out documents (a measure of predictive performance) and the topic coherence (a measure of the quality of the topics), as a function of the size of the vocabulary. (The perplexity has been normalized by the vocabulary size.) This is for a corpus of 11.2K articles from the 20NewsGroup and for 100 topics. The red line is LDA; its performance deteriorates as the vocabulary size increases-the predictive performance and the quality of the topics get worse. The blue line is the ETM; it maintains good performance, even as the vocabulary size become large.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 196, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 826, |
|
"end": 834, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Like LDA, the ETM is a generative probabilistic model: Each document is a mixture of topics and each observed word is assigned to a particular topic. In contrast to LDA, the per-topic conditional probability of a term has a log-linear form that involves a low-dimensional representation of the vocabulary. Each term is represented by an embedding and each topic is a point in that embedding space. The topic's distribution over terms is proportional to the exponentiated inner product of the topic's embedding and each term's embedding. Figures 2 and 3 show topics from a 300-topic ETM of The New York Times. The figures show each topic's embedding and its closest words; these topics are about Christianity and sports.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 537, |
|
"end": 552, |
|
"text": "Figures 2 and 3", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Representing topics as points in the embedding space allows the ETM to be robust to the presence of stop words, unlike most topic models. When stop words are included in the vocabulary, the ETM assigns topics to the corresponding area of the embedding space (we demonstrate this in Section 6).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As for most topic models, the posterior of the topic proportions is intractable to compute. We derive an efficient algorithm for approximating the posterior with variational inference (Jordan et al., 1999; Hoffman et al., 2013; Blei et al., 2017) and additionally use amortized inference to efficiently approximate the topic proportions (Kingma and Welling, 2014; Rezende et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 205, |
|
"text": "(Jordan et al., 1999;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 227, |
|
"text": "Hoffman et al., 2013;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 246, |
|
"text": "Blei et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 363, |
|
"text": "(Kingma and Welling, 2014;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 385, |
|
"text": "Rezende et al., 2014)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The resulting algorithm fits the ETM to large corpora with large vocabularies. This algorithm can either use previously fitted word embeddings, or fit them jointly with the rest of the parameters. (In particular, Figures 1 to 3 were made using the version of the ETM that uses pre-fitted skip-gram word embeddings.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We compared the performance of the ETM to LDA, the neural variational document model (NVDM) , and PRODLDA (Srivastava and Sutton, 2017) . 1 The NVDM is a form of multinomial matrix factorization and PRODLDA is a modern version of LDA that uses a product of experts to model the distribution over words. We also compare to a document model that combines PRODLDA with pre-fitted word embeddings. The ETM yields better predictive performance, as measured by held-out log-likelihood on a document completion task (Wallach et al., 2009b) . It also discovers more meaningful topics, as measured by topic coherence (Mimno et al., 2011) and topic diversity. The latter is a metric we introduce in this paper that, together with topic coherence, gives a better indication of the quality of the topics. The ETM is especially robust to large vocabularies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 135, |
|
"text": "(Srivastava and Sutton, 2017)", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 139, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 509, |
|
"end": 532, |
|
"text": "(Wallach et al., 2009b)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 608, |
|
"end": 628, |
|
"text": "(Mimno et al., 2011)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work develops a new topic model that extends LDA. LDA has been extended in many ways, and topic modeling has become a subfield of its own. For a review, see Blei (2012) and Boyd-Graber et al. (2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 173, |
|
"text": "Blei (2012)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 203, |
|
"text": "Boyd-Graber et al. (2017)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A broader set of related works are neural topic models. These mainly focus on improving topic modeling inference through deep neural networks (Srivastava and Sutton, 2017; Card et al., 2017; Cong et al., 2017; Zhang et al., 2018) . Specifically, these methods reduce the dimension of the text data through amortized inference and the variational auto-encoder (Kingma and Welling, 2014; Rezende et al., 2014) . To perform inference in the ETM, we also avail ourselves of amortized inference methods (Gershman and Goodman, 2014) . As a document model, the ETM also relates to works that learn per-document representations as part of an embedding model (Le and Mikolov, 2014; Moody, 2016; . In contrast to these works, the docu-ment variables in the ETM are part of a larger probabilistic topic model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 171, |
|
"text": "(Srivastava and Sutton, 2017;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 190, |
|
"text": "Card et al., 2017;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 209, |
|
"text": "Cong et al., 2017;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 229, |
|
"text": "Zhang et al., 2018)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 407, |
|
"text": "Rezende et al., 2014)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 526, |
|
"text": "(Gershman and Goodman, 2014)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 650, |
|
"end": 672, |
|
"text": "(Le and Mikolov, 2014;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 673, |
|
"end": 685, |
|
"text": "Moody, 2016;", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "One of the goals in developing the ETM is to incorporate word similarity into the topic model, and there is previous research that shares this goal. These methods either modify the topic priors (Petterson et al., 2010; Zhao et al., 2017b; Shi et al., 2017; Zhao et al., 2017a) or the topic assignment priors (Xie et al., 2015) . For example, Petterson et al. (2010) use a word similarity graph (as given by a thesaurus) to bias LDA towards assigning similar words to similar topics. As another example, Xie et al. (2015) model the perword topic assignments of LDA using a Markov random field to account for both the topic proportions and the topic assignments of similar words. These methods use word similarity as a type of ''side information'' about language; in contrast, the ETM directly models the similarity (via embeddings) in its generative process of words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 218, |
|
"text": "(Petterson et al., 2010;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 238, |
|
"text": "Zhao et al., 2017b;", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 239, |
|
"end": 256, |
|
"text": "Shi et al., 2017;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 276, |
|
"text": "Zhao et al., 2017a)", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 326, |
|
"text": "(Xie et al., 2015)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 365, |
|
"text": "Petterson et al. (2010)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 503, |
|
"end": 520, |
|
"text": "Xie et al. (2015)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, a more closely related set of works directly combine topic modeling and word embeddings. One common strategy is to convert the discrete text into continuous observations of embeddings, and then adapt LDA to generate real-valued data (Das et al., 2015; Xun et al., 2016; Batmanghelich et al., 2016; Xun et al., 2017) . With this strategy, topics are Gaussian distributions with latent means and covariances, and the likelihood over the embeddings is modeled with a Gaussian (Das et al., 2015) or a Von-Mises Fisher distribution (Batmanghelich et al., 2016) . The ETM differs from these approaches in that it is a model of categorical data, one that goes through the embeddings matrix. Thus it does not require pre-fitted embeddings and, indeed, can learn embeddings as part of its inference process. The ETM also differs from these approaches in that it is amenable to large datasets with large vocabularies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 260, |
|
"text": "(Das et al., 2015;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 278, |
|
"text": "Xun et al., 2016;", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 306, |
|
"text": "Batmanghelich et al., 2016;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 324, |
|
"text": "Xun et al., 2017)", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 500, |
|
"text": "(Das et al., 2015)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 564, |
|
"text": "(Batmanghelich et al., 2016)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There are few other ways of combining LDA and embeddings. Nguyen et al. (2015) mix the likelihood defined by LDA with a log-linear model that uses pre-fitted word embeddings; Bunk and Krestel (2018) randomly replace words drawn from a topic with their embeddings drawn from a Gaussian; Xu et al. (2018) adopt a geometric perspective, using Wasserstein distances to learn topics and word embeddings jointly; and Keya et al. 2019propose the neural embedding allocation (NEA), which has a similar generative process to the ETM but is fit using a pre-fitted LDA model as a target distribution. Because it requires LDA, the NEA suffers from the same limitation as LDA. These models often lack scalability with respect to the vocabulary size and are fit using Gibbs sampling, limiting their scalability to large corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 198, |
|
"text": "Bunk and Krestel (2018)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The ETM builds on two main ideas, LDA and word embeddings. Consider a corpus of D documents, where the vocabulary contains V distinct terms. Let w dn \u2208 {1, . . . , V } denote the n th word in the d th document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Latent Dirichlet Allocation. LDA is a probabilistic generative model of documents (Blei et al., 2003) . It posits K topics \u03b2 1:K , each of which is a distribution over the vocabulary. LDA assumes each document comes from a mixture of topics, where the topics are shared across the corpus and the mixture proportions are unique for each document. The generative process for each document is the following:", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 101, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1. Draw topic proportion \u03b8 d \u223c Dirichlet(\u03b1 \u03b8 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "2. For each word n in the document:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(a) Draw topic assignment z dn \u223c Cat(\u03b8 d ). (b) Draw word w dn \u223c Cat(\u03b2 z dn ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Here, Cat(\u2022) denotes the categorical distribution. LDA places a Dirichlet prior on the topics,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u03b2 k \u223c Dirichlet(\u03b1 \u03b2 ) for k = 1, . . . , K.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The concentration parameters \u03b1 \u03b2 and \u03b1 \u03b8 of the Dirichlet distributions are fixed model hyperparameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Word Embeddings. Word embeddings provide models of language that use vector representations of words (Rumelhart and Abrahamson, 1973; Bengio et al., 2003) . The word representations are fitted to relate to meaning, in that words with similar meanings will have representations that are close. (In embeddings, the ''meaning'' of a word comes from the contexts in which it is used [Harris, 1954] .)", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 133, |
|
"text": "(Rumelhart and Abrahamson, 1973;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 154, |
|
"text": "Bengio et al., 2003)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 393, |
|
"text": "[Harris, 1954]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We focus on the continuous bag-of-words (CBOW) variant of word embeddings (Mikolov et al., 2013b) . In CBOW, the likelihood of each word w dn is", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 97, |
|
"text": "(Mikolov et al., 2013b)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "w dn \u223c softmax(\u03c1 \u22a4 \u03b1 dn ).", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The embedding matrix \u03c1 is a L \u00d7 V matrix whose columns contain the embedding representations of the vocabulary, \u03c1 v \u2208 R L . The vector \u03b1 dn is the context embedding. The context embedding is the sum of the context embedding vectors (\u03b1 v for each word v) of the words surrounding w dn .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
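{

"text": "To make Eq. 1 concrete, the following is a minimal numerical sketch of the CBOW likelihood (an illustration, not the reference implementation): the context vector is the sum of the context embeddings of the surrounding words, and the probability of the center word is a softmax over its inner products with the word embeddings. The dimensions, initialization, and window contents below are illustrative assumptions.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nL, V = 300, 5000                                # embedding dimension and vocabulary size (assumed)\nrho = 0.01 * rng.standard_normal((L, V))        # word embeddings rho, one column per term\nalpha_ctx = 0.01 * rng.standard_normal((L, V))  # context embeddings alpha_v, one column per term\n\ndef softmax(x):\n    # numerically stable softmax over the vocabulary\n    x = x - x.max()\n    e = np.exp(x)\n    return e / e.sum()\n\ndef cbow_word_probs(context_word_ids):\n    # Eq. 1: alpha_dn is the sum of the context embeddings of the surrounding words\n    alpha_dn = alpha_ctx[:, context_word_ids].sum(axis=1)\n    return softmax(rho.T @ alpha_dn)            # length-V categorical distribution over w_dn\n\nprobs = cbow_word_probs([10, 42, 7, 99])        # probability of each vocabulary term as the center word",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Background",

"sec_num": "3"

},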
|
{ |
|
"text": "The ETM is a topic model that uses embedding representations of both words and topics. It contains two notions of latent dimension. First, it embeds the vocabulary in an L-dimensional space. These embeddings are similar in spirit to classical word embeddings. Second, it represents each document in terms of K latent topics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In traditional topic modeling, each topic is a full distribution over the vocabulary. In the ETM, however, the k th topic is a vector \u03b1 k \u2208 R L in the embedding space. We call \u03b1 k a topic embeddingit is a distributed representation of the k th topic in the semantic space of words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In its generative process, the ETM uses the topic embedding to form a per-topic distribution over the vocabulary. Specifically, the ETM uses a loglinear model that takes the inner product of the word embedding matrix and the topic embedding. With this form, the ETM assigns high probability to a word v in topic k by measuring the agreement between the word's embedding and the topic's embedding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Denote the L \u00d7 V word embedding matrix by \u03c1; the column \u03c1 v is the embedding of term v. Under the ETM, the generative process of the d th document is the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "1. Draw topic proportions \u03b8 d \u223c LN (0, I).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "2. For each word n in the document:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "a. Draw topic assignment z dn \u223c Cat(\u03b8 d ). b. Draw the word w dn \u223c softmax(\u03c1 \u22a4 \u03b1 z dn ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Step 1, LN (\u2022) denotes the logistic-normal distribution (Aitchison and Shen, 1980; Blei and Lafferty, 2007) ; it transforms a standard Gaussian random variable to the simplex. A draw \u03b8 d from this distribution is obtained as", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 82, |
|
"text": "(Aitchison and Shen, 1980;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 83, |
|
"end": 107, |
|
"text": "Blei and Lafferty, 2007)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b4 d \u223c N (0, I); \u03b8 d = softmax(\u03b4 d ).", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "(We replaced the Dirichlet with the logistic normal to easily use reparameterization in the inference algorithm; see Section 5.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Steps 1 and 2a are standard for topic modeling: They represent documents as distributions over topics and draw a topic assignment for each observed word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Step 2b is different; it uses the embeddings of the vocabulary \u03c1 and the assigned topic embedding \u03b1 z dn to draw the observed word from the assigned topic, as given by z dn .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The topic distribution in Step 2b mirrors the CBOW likelihood in Eq. 1. Recall CBOW uses the surrounding words to form the context vector \u03b1 dn . In contrast, the ETM uses the topic embedding \u03b1 z dn as the context vector, where the assigned topic z dn is drawn from the per-document variable \u03b8 d . The ETM draws its words from a document context, rather than from a window of surrounding words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
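{

"text": "The generative process above is simple to simulate. The sketch below (an illustration under assumed sizes, not the authors' code) draws topic proportions from the logistic normal of Eq. 2, a per-word topic assignment from Cat(\u03b8 d ), and each word from the softmax of the inner products between the word embeddings and the assigned topic embedding.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nL, V, K = 300, 5000, 50                     # embedding dim, vocabulary size, number of topics (assumed)\nrho = 0.01 * rng.standard_normal((L, V))    # word embeddings\nalpha = 0.01 * rng.standard_normal((K, L))  # topic embeddings\n\ndef softmax(x):\n    x = x - x.max()\n    e = np.exp(x)\n    return e / e.sum()\n\ndef generate_document(num_words):\n    # Step 1: logistic-normal topic proportions (Eq. 2)\n    theta = softmax(rng.standard_normal(K))\n    words = []\n    for _ in range(num_words):\n        z = rng.choice(K, p=theta)                  # Step 2a: topic assignment\n        beta_z = softmax(rho.T @ alpha[z])          # distribution over terms induced by topic z\n        words.append(int(rng.choice(V, p=beta_z)))  # Step 2b: draw the word\n    return words\n\ndoc = generate_document(100)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Embedded Topic Model",

"sec_num": "4"

},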
|
{ |
|
"text": "The ETM likelihood uses a matrix of word embeddings \u03c1, a representation of the vocabulary in a lower dimensional space. In practice, it can either rely on previously fitted embeddings or learn them as part of its overall fitting procedure. When the ETM learns the embeddings as part of the fitting procedure, it simultaneously finds topics and an embedding space.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "When the ETM uses previously fitted embeddings, it learns the topics of a corpus in a particular embedding space. This strategy is particularly useful when there are words in the embedding that are not used in the corpus. The ETM can hypothesize how those words fit in to the topics because it can calculate \u03c1 \u22a4 v \u03b1 k even for words v that do not appear in the corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Embedded Topic Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We are given a corpus of documents {w 1 , . . . , w D }, where the d th document w d is a collection of N d words. How do we fit the ETM to this corpus?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The Marginal Likelihood. The parameters of the ETM are the word embeddings \u03c1 1:V and the topic embeddings \u03b1 1:K ; each \u03b1 k is a point in the word embedding space. We maximize the log marginal likelihood of the documents,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L(\u03b1, \u03c1) = D d=1 log p(w d | \u03b1, \u03c1).", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The problem is that the marginal likelihood of each document-p(w d | \u03b1, \u03c1)-is intractable to compute. It involves a difficult integral over the topic proportions, which we write in terms of the untransformed proportions \u03b4 d in Eq. 2,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(w d | \u03b1, \u03c1) = p(\u03b4 d ) N d n=1 p(w dn | \u03b4 d , \u03b1, \u03c1) d\u03b4 d .", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The conditional distribution p(w", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "dn | \u03b4 d , \u03b1, \u03c1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "of each word marginalizes out the topic assignment", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z dn , p(w dn | \u03b4 d , \u03b1, \u03c1) = K k=1 \u03b8 dk \u03b2 k,w dn .", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Here, \u03b8 dk denotes the (transformed) topic proportions (Eq. 2) and \u03b2 k,v denotes a traditional ''topic,'' that is, a distribution over words, induced by the word embeddings \u03c1 and the topic embedding \u03b1 k ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b2 kv = softmax(\u03c1 \u22a4 \u03b1 k ) v .", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Eqs. 4, 5, 6 flesh out the likelihood in Eq. 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Variational Inference. We sidestep the intractable integral in Eq. eq:integral with variational inference (Jordan et al., 1999; Blei et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 127, |
|
"text": "(Jordan et al., 1999;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 128, |
|
"end": 146, |
|
"text": "Blei et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Variational inference optimizes a sum of perdocument bounds on the log of the marginal likelihood of Eq. 4. To begin, posit a family of distributions of the untransformed topic proportions q(\u03b4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "d ; w d , \u03bd).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "This family of distributions is parameterized by \u03bd. We use amortized inference, where q(\u03b4 d ; w d , \u03bd) (called a variational distribution) depends on both the document w d and shared parameters \u03bd. In particular, q(\u03b4 d ; w d , \u03bd) is a Gaussian whose mean and variance come from an ''inference network,'' a neural network parameterized by \u03bd (Kingma and Welling, 2014). The inference network ingests a bag-of-words representation of the document w d and outputs the mean and covariance of \u03b4 d . (To accommodate documents of varying length, we form the input of the inference network by normalizing the bag-of-word representation of the document by the number of words N d .)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 228, |
|
"text": "d ; w d , \u03bd)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We use this family of distributions to bound the log of the marginal likelihood in Eq. 4. The bound is called the evidence lower bound (ELBO) and is a function of the model parameters and the variational parameters,", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 141, |
|
"text": "(ELBO)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "L(\u03b1, \u03c1, \u03bd) = D d=1 N d n=1 Eq[ log p(w nd | \u03b4 d , \u03c1, \u03b1) ] \u2212 D d=1 KL(q(\u03b4 d ; w d , \u03bd) || p(\u03b4 d )). (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The first term of the ELBO (Eq. 7) encourages variational distributions q(\u03b4 d ; w d , \u03bd) that place mass on topic proportions \u03b4 d that explain the observed words and the second term encourages q(\u03b4 d ; w d , \u03bd) to be close to the prior p(\u03b4 d ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 209, |
|
"text": "d ; w d , \u03bd)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Maximizing the ELBO with respect to the model parameters (\u03b1, \u03c1) is equivalent to maximizing the expected complete log-likelihood,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "d log p(\u03b4 d , w d | \u03b1, \u03c1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The ELBO in Eq. 7 is intractable because the expectation is intractable. However, we can form a Monte Carlo approximation of the ELBO,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "L(\u03b1, \u03c1, \u03bd) = 1 S D d=1 N d n=1 S s=1 log p(w nd | \u03b4 (s) d , \u03c1, \u03b1) \u2212 D d=1 KL(q(\u03b4 d ; w d , \u03bd) || p(\u03b4 d )), (8) where \u03b4 (s) d \u223c q(\u03b4 d ; w d , \u03bd) for s = 1 . . . S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To form an unbiased estimator of the ELBO and its gradients, we use the reparameterization trick when sampling the unnormalized proportions \u03b4 That is, we sample \u03b4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(s) d from q(\u03b4 d ; w d , \u03bd) as \u01eb (s) d \u223c N (0, I) and \u03b4 (s) d = \u00b5 d + \u03a3 1 2 d \u01eb (s) d , (9)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where \u00b5 d and \u03a3 d are the mean and covariance of q (\u03b4 d ; w d , \u03bd) respectively, which depend implicitly on \u03bd and w d via the inference network. We use a diagonal covariance matrix \u03a3 d .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 66, |
|
"text": "(\u03b4 d ; w d , \u03bd)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
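{

"text": "A small sketch of the reparameterized draw in Eq. 9 (illustrative; representing the diagonal covariance by its log-variance is an assumption about the parameterization):\n\nimport torch\n\ndef reparameterize(mu, logvar):\n    # delta = mu + Sigma^(1/2) * eps with diagonal Sigma and eps ~ N(0, I)\n    eps = torch.randn_like(mu)\n    return mu + torch.exp(0.5 * logvar) * eps\n\nK = 50                        # number of topics (assumed)\nmu = torch.zeros(K)           # mean from the inference network\nlogvar = torch.zeros(K)       # log of the diagonal of Sigma_d\ndelta = reparameterize(mu, logvar)\ntheta = torch.softmax(delta, dim=-1)   # transformed topic proportions",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Inference and Estimation",

"sec_num": "5"

},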
|
{ |
|
"text": "We also use data subsampling to handle large collections of documents (Hoffman et al., 2013) . Denote by B a minibatch of documents. Then the approximation of the ELBO using data subsampling is", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 92, |
|
"text": "(Hoffman et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L(\u03b1, \u03c1, \u03bd) = D |B| d\u2208B N d n=1 S s=1 log p(w nd | \u03b4 (s) d , \u03c1, \u03b1) \u2212 D |B| d\u2208B KL(q(\u03b4 d ; w d , \u03bd) || p(\u03b4 d )).", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Algorithm 1 Topic modeling with the ETM Initialize model and variational parameters for iteration i = 1, 2, . . . do Compute \u03b2 k = softmax(\u03c1 \u22a4 \u03b1 k ) for each topic k Choose a minibatch B of documents for each document d in B do Get normalized bag-of-word representat.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "x d Compute \u00b5 d = NN(x d ; \u03bd \u00b5 ) Compute \u03a3 d = NN(x d ; \u03bd \u03a3 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Sample \u03b8 d using Eq. 9 and \u03b8 d = softmax(\u03b4 d ) for each word in the document do Given that the prior p(\u03b4 d ) and q(\u03b4 d ; w d , \u03bd) are both Gaussians, the KL admits a closed-form expression,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Compute p(w dn | \u03b8 d , \u03c1, \u03b1) = \u03b8 \u22a4 d \u03b2 \u2022,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "KL(q(\u03b4 d ; w d , \u03bd) || p(\u03b4 d )) = 1 2 tr(\u03a3 d ) + \u00b5 \u22a4 d \u00b5 d \u2212 log det(\u03a3 d ) \u2212 K .", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We optimize the stochastic ELBO in Equation 10 with respect to both the model parameters (\u03b1, \u03c1) and the variational parameters \u03bd. We set the learning rate with Adam (Kingma and Ba, 2015). The procedure is shown in Algorithm 1, where we set the number of Monte Carlo samples S = 1 and the notation NN(x ; \u03bd) represents a neural network with input x and parameters \u03bd.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference and Estimation", |
|
"sec_num": "5" |
|
}, |
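{

"text": "Below is a compact PyTorch sketch of one iteration of Algorithm 1 under several simplifying assumptions (layer sizes, a one-hidden-layer inference network instead of the 3-layer network used in the experiments, S = 1 Monte Carlo sample). It illustrates the data-subsampled ELBO of Eq. 10 with the closed-form KL of Eq. 11; it is not the authors' implementation.\n\nimport torch\nimport torch.nn as nn\n\nV, K, L, H = 5000, 50, 300, 800                  # vocab, topics, embedding dim, hidden size (assumed)\nrho = nn.Parameter(0.01 * torch.randn(L, V))     # word embeddings (learned, or fixed if pre-fitted)\nalpha = nn.Parameter(0.01 * torch.randn(K, L))   # topic embeddings\nencoder = nn.Sequential(nn.Linear(V, H), nn.ReLU())  # inference network body\nto_mu, to_logvar = nn.Linear(H, K), nn.Linear(H, K)\n\nvariational_params = (list(encoder.parameters()) + list(to_mu.parameters())\n                      + list(to_logvar.parameters()))\noptimizer = torch.optim.Adam([\n    {'params': [rho, alpha]},\n    {'params': variational_params, 'weight_decay': 1.2e-6},  # l2 on the variational parameters\n], lr=0.002)\n\ndef train_step(bows, num_docs_total):\n    # bows: minibatch of bag-of-words counts, shape (batch, V)\n    x = bows / bows.sum(dim=1, keepdim=True)     # normalized bag-of-words input\n    h = encoder(x)\n    mu, logvar = to_mu(h), to_logvar(h)\n    delta = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # Eq. 9 with S = 1\n    theta = torch.softmax(delta, dim=-1)\n    beta = torch.softmax(alpha @ rho, dim=1)     # K x V topic matrix, Eq. 6\n    log_probs = torch.log(theta @ beta + 1e-10)  # per-document mixture over the vocabulary\n    rec = (bows * log_probs).sum(dim=1)          # reconstruction term, one value per document\n    kl = 0.5 * (logvar.exp().sum(-1) + (mu ** 2).sum(-1) - logvar.sum(-1) - K)   # Eq. 11\n    loss = -(num_docs_total / bows.shape[0]) * (rec - kl).sum()   # negative of Eq. 10\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    return loss.item()\n\nLooping this step over minibatches of count vectors, with num_docs_total set to the corpus size D, mirrors the loop in Algorithm 1.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Inference and Estimation",

"sec_num": "5"

},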
|
{ |
|
"text": "We study the performance of the ETM and compare it to other unsupervised document models. A good document model should provide both coherent patterns of language and an accurate distribution of words, so we measure performance in terms of both predictive accuracy and topic interpretability. We measure accuracy with log-likelihood on a document completion task (Rosen-Zvi et al., 2004; Wallach et al., 2009b) ; we measure topic interpretability as a blend of topic coherence and diversity. We find that, of the interpretable models, the ETM is the one that provides better predictions and topics. In a separate analysis (Section 6.1), we study the robustness of each method in the presence Corpora. We study the 20Newsgroups corpus and the New York Times corpus; the statistics of both corpora are summarized in Table 1 . The 20Newsgroup corpus is a collection of newsgroup posts. We preprocess the corpus by filtering stop words, words with document frequency above 70%, and tokenizing. To form the vocabulary, we keep all words that appear in more than a certain number of documents, and we vary the threshold from 100 (a smaller vocabulary, where V = 3,102) to 2 (a larger vocabulary, where V = 52,258). After preprocessing, we further remove one-word documents from the validation and test sets. We split the corpus into a training set of 11,260 documents, a test set of 7,532 documents, and a validation set of 100 documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 386, |
|
"text": "(Rosen-Zvi et al., 2004;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 409, |
|
"text": "Wallach et al., 2009b)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 813, |
|
"end": 820, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The New York Times corpus is a larger collection of news articles. It contains more than 1.8 million articles, spanning the years 1987-2007. We follow the same preprocessing steps as for 20Newsgroups. We form versions of this corpus with vocabularies ranging from V = 9,842 to V = 212,237. After preprocessing, we use 85% of the documents for training, 10% for testing, and 5% for validation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Models. We compare the performance of the ETM against several document models. We briefly describe each below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We consider latent Dirichlet allocation (LDA) (Blei et al., 2003) , a standard topic model that posits Dirichlet priors for the topics \u03b2 k and topic proportions \u03b8 d . (We set the prior hyperparameters to 1.) It is a conditionally conjugate model, amenable to variational inference with coordinate ascent. We consider LDA because it is the most commonly used topic model, and it has a similar generative process as the ETM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 65, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also consider the neural variational document model (NVDM) . The NVDM is a multinomial factor model of documents; it posits the likelihood", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "w dn \u223c softmax (\u03b2 \u22a4 \u03b8 d ), where the K-dimensional vector \u03b8 d \u223c N (0, I K )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "is a per-document variable, and \u03b2 is a realvalued matrix of size K \u00d7 V . The NVDM uses a per-document real-valued latent vector \u03b8 d to average over the embedding matrix \u03b2 in the logit space. Like the ETM, the NVDM uses amortized variational inference to jointly learn the approximate posterior over the document representation \u03b8 d and the model parameter \u03b2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "NVDM is not interpretable as a topic model; its latent variables are unconstrained. We study a more interpretable variant of the NVDM which constrains \u03b8 d to lie in the simplex, replacing its Gaussian prior with a logistic normal (Aitchison and Shen, 1980) . (This can be thought of as a semi-nonnegative matrix factorization.) We call this document model \u2206-NVDM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 256, |
|
"text": "(Aitchison and Shen, 1980)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also consider PRODLDA (Srivastava and Sutton, 2017) . It posits the likelihood w dn \u223c softmax(\u03b2 \u22a4 \u03b8 d ) where the topic proportions \u03b8 d are from the simplex. Contrary to LDA, the topic-matrix \u03b2 s unconstrained.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 54, |
|
"text": "(Srivastava and Sutton, 2017)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "PRODLDA shares the generative model with \u2206-NVDM but it is fit differently. PRODLDA uses amortized variational inference with batch normalization (Ioffe and Szegedy, 2015) and dropout (Srivastava et al., 2014) . Finally, we consider a document model that combines PRODLDA with pre-fitted word embeddings \u03c1, by using the likelihood w dn \u223c softmax (\u03c1 \u22a4 \u03b8 d ). We call this document model PRODLDA-PWE, where PWE stands for Pre-fitted Word Embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 170, |
|
"text": "(Ioffe and Szegedy, 2015)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 183, |
|
"end": 208, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We study two variants of the ETM, one where the word embeddings are pre-fitted and one where they are learned jointly with the rest of the parameters. The variant with pre-fitted embeddings is called the ETM-PWE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For PRODLDA-PWE and the ETM-PWE, we first obtain the word embeddings (Mikolov et al., 2013b) by training skip-gram on each corpus. (We reuse the same embeddings across the experiments with varying vocabulary sizes.)", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 92, |
|
"text": "(Mikolov et al., 2013b)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Algorithm Settings. Given a corpus, each model comes with an approximate posterior inference problem. We use variational inference for all of the models and employ SVI (Hoffman et al., 2013) to speed up the optimization. The minibatch size is 1,000 documents. For LDA, we set the learning rate as suggested by Hoffman et al. (2013) : the delay is 10 and the forgetting factor is 0.85.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 190, |
|
"text": "(Hoffman et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 331, |
|
"text": "Hoffman et al. (2013)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Within SVI, LDA enjoys coordinate ascent variational updates; we use five inner steps to optimize the local variables. For the other models, we use amortized inference over the local variables \u03b8 d . We use 3-layer inference networks and we set the local learning rate to 0.002. We use \u2113 2 regularization on the variational parameters (the weight decay parameter is 1.2 \u00d7 10 \u22126 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Qualitative Results. We first examine the embeddings. The ETM, NVDM, \u2206-NVDM, and PRODLDA all learn word embeddings. We illustrate them by fixing a set of terms and showing the closest words in the embedding space (as measured by cosine distance). For comparison, we also illustrate word embeddings learned by the skip-gram model. Table 2 illustrates the embeddings of the different models. All the methods provide interpretable embeddings-words with related meanings are close to each other. The ETM, the NVDM, and PRODLDA learn embeddings that are similar to those from the skip-gram. The embeddings of \u2206-NVDM are different; the simplex constraint on the local variable and the inference procedure change the nature of the embeddings. LDA time year officials mr city percent state day million public president building million republican back money department bush street company party good pay report white park year bill long tax state clinton house billion We next look at the learned topics. Table 3 displays the seven most used topics for all methods, as given by the average of the topic proportions \u03b8 d . LDA and both variants of the ETM provide interpretable topics. The rest of the models do not provide interpretable topics; their matrices \u03b2 are unconstrained and thus are not interpretable as distributions over the vocabulary that mix to Figure 4 : Interpretability as measured by the exponentiated topic quality (the higher the better) vs. predictive performance as measured by log-likelihood on document completion (the higher the better) on the 20NewsGroup dataset. Both interpretability and predictive power metrics are normalized by subtracting the mean and dividing by the standard deviation across models. Better models are on the top right corner. Overall, the ETM is a better topic model. form documents. \u2206-NVDM also suffers from this effect although it is less apparent (see, e.g., the fifth listed topic for \u2206-NVDM).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 337, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 736, |
|
"end": 995, |
|
"text": "LDA time year officials mr city percent state day million public president building million republican back money department bush street company party good pay report white park year bill long tax state clinton house billion", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1032, |
|
"end": 1039, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1386, |
|
"end": 1394, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Quantitative Results. We next study the models quantitatively. We measure the quality of the topics and the predictive performance of the model. We found that among the models with interpretable topics, the ETM provides the best predictions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We measure topic quality by blending two metrics: topic coherence and topic diversity. Topic coherence is a quantitative measure of the interpretability of a topic (Mimno et al., 2011) . It is the average pointwise mutual information of two words drawn randomly from the same document,", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 184, |
|
"text": "(Mimno et al., 2011)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "TC = 1 K K k=1 1 45 10 i=1 10 j=i+1 f (w (k) i , w (k) j ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "where {w", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "(k) 1 , . . . , w", |
|
"eq_num": "(k)" |
|
} |
|
], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "10 } denotes the top-10 most likely words in topic k. We choose f (\u2022, \u2022) as the normalized pointwise mutual information (Bouma, 2009; Lau et al., 2014) ,", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 133, |
|
"text": "(Bouma, 2009;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 151, |
|
"text": "Lau et al., 2014)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 72, |
|
"text": "(\u2022, \u2022)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "f (w i , w j ) = log P (w i ,w j ) P (w i )P (w j ) \u2212 log P (w i , w j ) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Here, P (w i , w j ) is the probability of words w i and w j co-occurring in a document and P (w i ) is the marginal probability of word w i . We approximate these probabilities with empirical counts. The idea behind topic coherence is that a coherent topic will display words that tend to occur in the same documents. In other words, the most likely words in a coherent topic should have high mutual information. Document models with higher topic coherence are more interpretable topic models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
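{

"text": "The following is a minimal Python sketch, added for illustration and not part of the original paper (function and variable names are ours), of how the normalized pointwise mutual information and the resulting topic coherence could be computed from empirical document counts:

import itertools
import numpy as np

def topic_coherence(top_words, docs):
    # top_words: list of K lists, each holding the 10 most likely words of a topic.
    # docs: list of documents, each represented as a set of word types.
    n_docs = len(docs)

    def doc_prob(*words):
        # Fraction of documents that contain all of the given words.
        return sum(all(w in d for w in words) for d in docs) / n_docs

    per_topic = []
    for words in top_words:
        pair_scores = []
        for wi, wj in itertools.combinations(words, 2):  # 45 pairs for 10 words
            p_ij = doc_prob(wi, wj)
            if p_ij == 0.0:
                pair_scores.append(-1.0)  # one common convention for pairs that never co-occur
                continue
            if p_ij == 1.0:
                pair_scores.append(1.0)   # limiting value for pairs that always co-occur
                continue
            pmi = np.log(p_ij / (doc_prob(wi) * doc_prob(wj)))
            pair_scores.append(pmi / -np.log(p_ij))      # normalized PMI, in [-1, 1]
        per_topic.append(np.mean(pair_scores))           # average over the 45 pairs
    return float(np.mean(per_topic))                     # average over the K topics",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Empirical Study",

"sec_num": "6"

},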
|
{ |
|
"text": "We combine coherence with a second metric, topic diversity. We define topic diversity to be the percentage of unique words in the top 25 words of all topics. Diversity close to 0 indicates redundant topics; diversity close to 1 indicates more varied topics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
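{

"text": "As a rough illustration (ours, not from the paper; names are assumptions), topic diversity can be computed directly from the topic-word matrix, and the overall topic quality defined next simply multiplies the two metrics:

import numpy as np

def topic_diversity(beta, topk=25):
    # beta: array of shape (n_topics, vocab_size) with per-topic word probabilities.
    n_topics = beta.shape[0]
    top_words = np.argsort(-beta, axis=1)[:, :topk]  # indices of the top-25 words per topic
    n_unique = np.unique(top_words).size             # distinct words across all topics
    return n_unique / (n_topics * topk)              # near 1: varied topics; near 0: redundant

# Overall topic quality (next paragraph) blends the two metrics multiplicatively:
#   quality = topic_diversity(beta) * topic_coherence(top_words, docs)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Empirical Study",

"sec_num": "6"

},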
|
{ |
|
"text": "We define the overall quality of a model's topics as the product of its topic diversity and topic coherence. A good topic model also provides a good distribution of language. To measure predictive power, we calculate log likelihood on a document completion task (Rosen-Zvi et al., 2004; Wallach et al., 2009b) . We divide each test document into two sets of words. The first half is observed: it induces a distribution over topics which, in turn, induces a distribution over the next words in the document. We then evaluate the second half under this distribution. A good document model should provide high log-likelihood on the second half. (For all methods, we approximate the likelihood by setting \u03b8 d to the variational mean.)", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 286, |
|
"text": "(Rosen-Zvi et al., 2004;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 309, |
|
"text": "Wallach et al., 2009b)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
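{

"text": "A hedged sketch of the document completion evaluation (illustrative only; get_theta and get_beta are assumed interfaces, not the released ETM code): the observed first half of each test document fixes the topic proportions, and the held-out log-likelihood is computed on the second half:

import numpy as np

def document_completion_ll(model, test_docs):
    # test_docs: list of documents, each a list of word indices into the vocabulary.
    # model: assumed to expose get_theta(words) -> topic proportions (variational mean)
    #        and get_beta() -> (n_topics, vocab_size) topic-word matrix.
    beta = model.get_beta()
    total_ll, total_words = 0.0, 0
    for doc in test_docs:
        half = len(doc) // 2
        observed, held_out = doc[:half], doc[half:]
        theta = model.get_theta(observed)   # proportions inferred from the observed half
        word_probs = theta @ beta           # mixture distribution over the vocabulary
        total_ll += float(np.sum(np.log(word_probs[held_out] + 1e-12)))
        total_words += len(held_out)
    return total_ll / total_words           # per-word held-out log-likelihood",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Empirical Study",

"sec_num": "6"

},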
|
{ |
|
"text": "We study both corpora and with different vocabularies. Figures 4 and 5 show interpretability of the topics as a function of predictive power. (To ease visualization, we exponentiate topic quality and normalize all metrics by subtracting the mean and dividing by the standard deviation across Figure 5 : Interpretability as measured by the exponentiated topic quality (the higher the better) vs. predictive performance as measured by log-likelihood on document completion (the higher the better) on the New York Times dataset. Both interpretability and predictive power metrics are normalized by subtracting the mean and dividing by the standard deviation across models. Better models are on the top right corner. Overall, the ETM is a better topic model. methods.) The best models are on the upper right corner.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 70, |
|
"text": "Figures 4 and 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 300, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "LDA predicts worst in almost all settings. On the 20NewsGroups, the NVDM's predictions are in general better than LDA but worse than for the other methods; on the New York Times, the NVDM gives the best predictions. However, topic quality for the NVDM is far below the other methods. (It does not provide ''topics'', so we assess the interpretability of its \u03b2 matrix.) In prediction, both versions of the ETM are at least as good as the simplex-constrained \u2206-NVDM. More importantly, both versions of the ETM outperform the PRODLDA-PWE; signaling the ETM provides a better way of integrating word embeddings into a topic model. These figures show that, of the interpretable models, the ETM provides the best predictive performance while keeping interpretable topics. It is robust to large vocabularies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Study", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We now study a version of the New York Times corpus that includes all stop words. We remove infrequent words to form a vocabulary of size 10,283. Our goal is to show that the ETM-PWE provides interpretable topics even in the presence of stop words, another regime where topic models typically fail. In particular, given that stop words appear in many documents, traditional topic models learn topics that contain stop words, regardless of the actual semantics of the topic. This leads to poor topic interpretability. There are extensions of topic models specifically designed Table 4 : Topic quality on the New York Times data in the presence of stop words. Topic quality here is given by the product of topic coherence and topic diversity (higher is better). The ETM-PWE is robust to stop words; it achieves similar topic coherence than when there are no stop words.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 576, |
|
"end": 583, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Stop Words", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "to cope with stop words Chemudugunta et al., 2006; Wallach et al., 2009a) ; our goal here is not to establish comparisons with these methods but to show the performance of the ETM-PWE in the presence of stop words. We fit LDA, the \u2206-NVDM, the PRODLDA-PWE, and the ETM-PWE with K = 300 topics. (We do not report the NVDM because it does not provide interpretable topics.) Table 4 shows the topic quality (the product of topic coherence and topic diversity). Overall, the ETM-PWE gives the best performance in terms of topic quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 50, |
|
"text": "Chemudugunta et al., 2006;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 51, |
|
"end": 73, |
|
"text": "Wallach et al., 2009a)", |
|
"ref_id": "BIBREF48" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 378, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Stop Words", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "While the ETM has a few ''stop topics'' that are specific for stop words (see, e.g., Figure 6 ), \u2206-NVDM and LDA have stop words in almost every topic. (The topics are not displayed here for space constraints.) The reason is that stop words cooccur in the same documents as every other word; Figure 6 : A topic containing stop words found by the ETM-PWE on The New York Times. The ETM is robust even in the presence of stop words. therefore traditional topic models have difficulties telling apart content words and stop words. The ETM-PWE recognizes the location of stop words in the embedding space; its sets them off on their own topic.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 93, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 299, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Stop Words", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We developed the ETM, a generative model of documents that marries LDA with word embeddings. The ETM assumes that topics and words live in the same embedding space, and that words are generated from a categorical distribution whose natural parameter is the inner product of the word embeddings and the embedding of the assigned topic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The ETM learns interpretable word embeddings and topics, even in corpora with large vocabularies. We studied the performance of the ETM against several document models. The ETM learns both coherent patterns of language and an accurate distribution of words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Code is available at https://github.com/ adjidieng/ETM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "DB and AD are supported by ONR N00014-17-1-2131, ONR N00014-15-1-2209, NIH 1U01MH115727-01, NSF CCF-1740833, DARPA SD2 FA8750-18-C-0130, Amazon, NVIDIA, and the Simons Foundation. FR received funding from the EU's Horizon 2020 R&I programme under the Marie Sk\u0142odowska-Curie grant agreement 706760. AD is supported by a Google PhD Fellowship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Logistic normal distributions: Some properties and uses", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Aitchison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shir", |
|
"middle": [ |
|
"Ming" |
|
], |
|
"last": "Shen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Biometrika", |
|
"volume": "67", |
|
"issue": "2", |
|
"pages": "261--272", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Aitchison and Shir Ming Shen. 1980. Logis- tic normal distributions: Some properties and uses. Biometrika, 67(2):261-272.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Nonparametric spherical topic modeling with word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Kayhan", |
|
"middle": [], |
|
"last": "Batmanghelich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ardavan", |
|
"middle": [], |
|
"last": "Saeedi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gershman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "2016", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kayhan Batmanghelich, Ardavan Saeedi, Karthik Narasimhan, and Sam Gershman. 2016. Non- parametric spherical topic modeling with word embeddings. In Association for Computational Linguistics, volume 2016, page 537.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A neural probabilistic language model", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9jean", |
|
"middle": [], |
|
"last": "Ducharme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Janvin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1137--1155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Ma- chine Learning Research, 3:1137-1155.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Neural probabilistic language models", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-S\u00e9bastien", |
|
"middle": [], |
|
"last": "Sen\u00e9cal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fr\u00e9deric", |
|
"middle": [], |
|
"last": "Morin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-Luc", |
|
"middle": [], |
|
"last": "Gauvain", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Innovations in Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio, Holger Schwenk, Jean-S\u00e9bastien Sen\u00e9cal, Fr\u00e9deric Morin, and Jean-Luc Gauvain. 2006, Neural probabilistic language models. In Innovations in Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Probabilistic topic models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Communications of the ACM", |
|
"volume": "55", |
|
"issue": "4", |
|
"pages": "77--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei. 2012. Probabilistic topic models. Communications of the ACM, 55(4):77-84.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Variational inference: A review for statisticians", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alp", |
|
"middle": [], |
|
"last": "Kucukelbir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Mcauliffe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "112", |
|
"issue": "518", |
|
"pages": "859--877", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. 2017. Variational inference: A re- view for statisticians. Journal of the American Statistical Association, 112(518):859-877.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A correlated topic model of Science", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "The Annals of Applied Statistics", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "17--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei and Jon D. Lafferty. 2007. A correlated topic model of Science. The Annals of Applied Statistics, 1(1):17-35.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Latent Dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Normalized (pointwise) mutual information in collocation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Gerlof", |
|
"middle": [], |
|
"last": "Bouma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "German Society for Computational Linguistics and Language Technology Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. In German Society for Computational Linguistics and Language Technology Conference.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Applications of topic models", |
|
"authors": [ |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuening", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Foundations and Trends in Information Retrieval", |
|
"volume": "11", |
|
"issue": "2-3", |
|
"pages": "143--296", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jordan Boyd-Graber, Yuening Hu, and David Mimno. 2017. Applications of topic models. Foundations and Trends in Information Retrieval, 11(2-3):143-296.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "WELDA: Enhancing topic models by incorporating local word context", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Bunk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralf", |
|
"middle": [], |
|
"last": "Krestel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACM/IEEE Joint Conference on Digital Libraries", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Bunk and Ralf Krestel. 2018. WELDA: Enhancing topic models by incorporating local word context. In ACM/IEEE Joint Conference on Digital Libraries.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A neural framework for generalized topic models", |
|
"authors": [ |
|
{ |
|
"first": "Dallas", |
|
"middle": [], |
|
"last": "Card", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenhao", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1705.09296" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dallas Card, Chenhao Tan, and Noah A. Smith. 2017. A neural framework for generalized topic models. In arXiv:1705.09296.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Modeling general and specific aspects of documents with a probabilistic topic model", |
|
"authors": [ |
|
{ |
|
"first": "Chaitanya", |
|
"middle": [], |
|
"last": "Chemudugunta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Padhraic", |
|
"middle": [], |
|
"last": "Smyth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chaitanya Chemudugunta, Padhraic Smyth, and Mark Steyvers. 2006. Modeling general and specific aspects of documents with a probab- ilistic topic model. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Deep latent Dirichlet allocation with topic-layer-adaptive stochastic gradient Riemannian MCMC", |
|
"authors": [ |
|
{ |
|
"first": "Yulai", |
|
"middle": [], |
|
"last": "Cong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongwei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingyuan", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yulai Cong, Bo C. Chen, Hongwei Liu, and Mingyuan Zhou. 2017. Deep latent Dirichlet allocation with topic-layer-adaptive stochastic gradient Riemannian MCMC. In International Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Gaussian LDA for topic models with word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Rajarshi", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manzil", |
|
"middle": [], |
|
"last": "Zaheer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Association for Computational Linguistics and International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rajarshi Das, Manzil Zaheer, and Chris Dyer. 2015. Gaussian LDA for topic models with word embeddings. In Association for Computa- tional Linguistics and International Joint Conference on Natural Language Processing (Volume 1: Long Papers).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Amortized inference in probabilistic reasoning", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Gershman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Annual Meeting of the Cognitive Science Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel J. Gershman and Noah D. Goodman. 2014. Amortized inference in probabilistic reasoning. In Annual Meeting of the Cognitive Science Society.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Integrating topics and syntax", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Tenenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas L. Griffiths, Mark Steyvers, David M. Blei, and Joshua B. Tenenbaum. 2004. Integrat- ing topics and syntax. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Distributional structure. Word", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Zellig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1954, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "146--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zellig S. Harris. 1954. Distributional structure. Word, 10(2-3):146-162.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Online learning for latent Dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Hoffman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Bach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew D. Hoffman, David M. Blei, and Francis Bach. 2010. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Stochastic variational inference", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Hoffman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Paisley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "1303--1347", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. 2013. Stochastic varia- tional inference. Journal of Machine Learning Research, 14:1303-1347.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Ioffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Szegedy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "An introduction to variational methods for graphical models", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoubin", |
|
"middle": [], |
|
"last": "Ghahramani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Saul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Machine Learning", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "183--233", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. 1999. An introduction to variational methods for graphi- cal models. Machine Learning, 37(2):183-233.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Neural embedding allocation: Distributed representations of topic models", |
|
"authors": [ |
|
{ |
|
"first": "Yannis", |
|
"middle": [], |
|
"last": "Kamrun Naher Keya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Papanikolaou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Foulds", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.04702" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kamrun Naher Keya, Yannis Papanikolaou, and James R. Foulds. 2019. Neural embedding allocation: Distributed representations of topic models. arXiv preprint arXiv:1909.04702.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy L. Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Auto-encoding variational Bayes", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In Interna- tional Conference on Learning Representa- tions.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Jey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Conference of the European Chapter", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jey H. Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Conference of the European Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Distributed representations of sentences and documents", |
|
"authors": [ |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learn- ing.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Neural word embedding as implicit matrix factorization", |
|
"authors": [ |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Generative topic embedding: A continuous representation of documents", |
|
"authors": [ |
|
{ |
|
"first": "Shaohua", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tat-Seng", |
|
"middle": [], |
|
"last": "Chua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunyan", |
|
"middle": [], |
|
"last": "Miao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chun- yan Miao. 2016. Generative topic embedding: A continuous representation of documents. In Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers).", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Word Embedding for Understanding Natural Language: A Survey", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Li and Tao Yang. 2018. Word Embedding for Understanding Natural Language: A Survey, Springer International Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Annual Meeting of the Association for Computational Linguistics: Human Language Technologies.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Neural variational inference for text processing", |
|
"authors": [ |
|
{ |
|
"first": "Yishu", |
|
"middle": [], |
|
"last": "Miao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In International Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Optimizing semantic coherence in topic models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edmund", |
|
"middle": [], |
|
"last": "Talley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miriam", |
|
"middle": [], |
|
"last": "Leenders", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Mimno, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coher- ence in topic models. In Conference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Learning word embeddings efficiently with noise-contrastive estimation", |
|
"authors": [ |
|
{ |
|
"first": "Andriy", |
|
"middle": [], |
|
"last": "Mnih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Neural Inform- ation Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Mixing Dirichlet topic models and word embeddings to make LDA2vec", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Moody", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1605.02019" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher E. Moody. 2016. Mixing Dirichlet topic models and word embeddings to make LDA2vec. arXiv:1605.02019.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Improving topic models with latent feature word representations", |
|
"authors": [ |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Dat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lan", |
|
"middle": [], |
|
"last": "Billingsley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "299--313", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dat Q. Nguyen, Richard Billingsley, Lan Du, and Mark Johnson. 2015. Improving topic models with latent feature word representations. Transactions of the Association for Computa- tional Linguistics, 3:299-313.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "GloVe: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Conference on Empirical Methods on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Conference on Empirical Methods on Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Word features for latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Petterson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wray", |
|
"middle": [], |
|
"last": "Buntine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Shravan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tib\u00e9rio", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Narayanamurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Caetano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Petterson, Wray Buntine, Shravan M. Narayanamurthy, Tib\u00e9rio S. Caetano, and Alex J. Smola. 2010. Word features for latent dirichlet allocation. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Stochastic backpropagation and approximate inference in deep generative models", |
|
"authors": [ |
|
{ |
|
"first": "Danilo", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Rezende", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "The author-topic model for authors and documents", |
|
"authors": [ |
|
{ |
|
"first": "Michal", |
|
"middle": [], |
|
"last": "Rosen-Zvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Padhraic", |
|
"middle": [], |
|
"last": "Smyth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Uncertainty in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, and Padhraic Smyth. 2004. The author-topic model for authors and documents. In Uncertainty in Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Exponential family embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Rudolph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Francisco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Ruiz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mandt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maja Rudolph, Francisco J. R. Ruiz, Stephan Mandt, and David M. Blei. 2016. Exponential family embeddings. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "A model for analogical reasoning", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Rumelhart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adele", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Abrahamson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "Cognitive Psychology", |
|
"volume": "5", |
|
"issue": "1", |
|
"pages": "1--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David E. Rumelhart and Adele A. Abrahamson. 1973. A model for analogical reasoning. Cognitive Psychology, 5(1):1-28.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Jointly learning word embeddings and latent topics", |
|
"authors": [ |
|
{ |
|
"first": "Bei", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wai", |
|
"middle": [], |
|
"last": "Lam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shoaib", |
|
"middle": [], |
|
"last": "Jameel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Schockaert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kwun", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Lai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bei Shi, Wai Lam, Shoaib Jameel, Steven Schockaert, and Kwun P. Lai. 2017. Jointly learning word embeddings and latent topics. In ACM SIGIR Conference on Research and Development in Information Retrieval.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Autoencoding variational inference for topic models", |
|
"authors": [ |
|
{ |
|
"first": "Akash", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Sutton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akash Srivastava and Charles Sutton. 2017. Auto- encoding variational inference for topic models. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Dropout: a simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Doubly stochastic variational Bayes for non-conjugate inference", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Michalis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Titsias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "L\u00e1zaro-Gredilla", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michalis K. Titsias and Miguel L\u00e1zaro-Gredilla. 2014. Doubly stochastic variational Bayes for non-conjugate inference. In International Con- ference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Rethinking LDA: Why priors matter", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hanna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanna M. Wallach, David M. Mimno, and Andrew McCallum. 2009a. Rethinking LDA: Why priors matter. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Evaluation methods for topic models", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hanna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iain", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Murray", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. 2009b. Evaluation methods for topic models. In International Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Incorporating word correlation knowledge into topic modeling", |
|
"authors": [ |
|
{ |
|
"first": "Pengtao", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diyi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengtao Xie, Diyi Yang, and Eric Xing. 2015. Incorporating word correlation knowledge into topic modeling. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Distilled Wasserstein learning for word embedding and topic modeling", |
|
"authors": [ |
|
{ |
|
"first": "Hongteng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenlin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Carin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongteng Xu, Wenlin Wang, Wei Liu, and Lawrence Carin. 2018. Distilled Wasserstein learning for word embedding and topic modeling. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Topic discovery for short texts using word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Guangxu", |
|
"middle": [], |
|
"last": "Xun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrawas", |
|
"middle": [], |
|
"last": "Gopalakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fenglong", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaliang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "IEEE International Conference on Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guangxu Xun, Vishrawas Gopalakrishnan, Fenglong Ma, Yaliang Li, Jing Gao, and Aidong Zhang. 2016. Topic discovery for short texts using word embeddings. In IEEE International Conference on Data Mining.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "A correlated topic model using word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Guangxu", |
|
"middle": [], |
|
"last": "Xun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaliang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wayne", |
|
"middle": [ |
|
"Xin" |
|
], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guangxu Xun, Yaliang Li, Wayne Xin Zhao, Jing Gao, and Aidong Zhang. 2017. A correlated topic model using word embeddings. In Joint Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "WHAI: Weibull hybrid autoencoding inference for deep topic modeling", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dandan", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingyuan", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Zhang, Bo Chen, Dandan Guo, and Mingyuan Zhou. 2018. WHAI: Weibull hybrid autoencoding inference for deep topic modeling. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "A word embeddings informed focused topic model", |
|
"authors": [ |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lan", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wray", |
|
"middle": [], |
|
"last": "Buntine", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Asian Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "He Zhao, Lan Du, and Wray Buntine. 2017a. A word embeddings informed focused topic model. In Asian Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "MetaLDA: A topic model that efficiently incorporates meta information", |
|
"authors": [ |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lan", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wray", |
|
"middle": [], |
|
"last": "Buntine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE International Conference on Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "He Zhao, Lan Du, Wray Buntine, and Gang Liu. 2017b. MetaLDA: A topic model that efficiently incorporates meta information. In IEEE International Conference on Data Mining.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "A topic about Christianity found by the ETM on The New York Times. The topic is a point in the word embedding space." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Topics about sports found by the ETM on The New York Times. Each topic is a point in the word embedding space." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "and Welling, 2014; Titsias and L\u00e1zaro-Gredilla, 2014;Rezende et al., 2014)." |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>DF denotes document frequency, K denotes</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Statistics of the different corpora studied.", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"text": "Word embeddings learned by all document models (and skip-gram) on the New York Times with vocabulary size 118,363.", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"text": "Top five words of seven most used topics from different document models on 1.8M documents of the New York Times corpus with vocabulary size 212,237 and K = 300 topics.", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |