{
"paper_id": "N18-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:50:51.243564Z"
},
"title": "Deep Dirichlet Multinomial Regression",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Benton",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "MD",
"country": "USA"
}
},
"email": "adrian@cs.jhu.edu"
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "MD",
"country": "USA"
}
},
"email": "mdredze@cs.jhu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dirichlet Multinomial Regression (DMR) and other supervised topic models can incorporate arbitrary document-level features to inform topic priors. However, their ability to model corpora are limited by the representation and selection of these featuresa choice the topic modeler must make. Instead, we seek models that can learn the feature representations upon which to condition topic selection. We present deep Dirichlet Multinomial Regression (dDMR), a generative topic model that simultaneously learns document feature representations and topics. We evaluate dDMR on three datasets: New York Times articles with fine-grained tags, Amazon product reviews with product images, and Reddit posts with subreddit identity. dDMR learns representations that outperform DMR and LDA according to heldout perplexity and are more effective at downstream predictive tasks as the number of topics grows. Additionally, human subjects judge dDMR topics as being more representative of associated document features. Finally, we find that supervision leads to faster convergence as compared to an LDA baseline and that dDMR's model fit is less sensitive to training parameters than DMR.",
"pdf_parse": {
"paper_id": "N18-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "Dirichlet Multinomial Regression (DMR) and other supervised topic models can incorporate arbitrary document-level features to inform topic priors. However, their ability to model corpora are limited by the representation and selection of these featuresa choice the topic modeler must make. Instead, we seek models that can learn the feature representations upon which to condition topic selection. We present deep Dirichlet Multinomial Regression (dDMR), a generative topic model that simultaneously learns document feature representations and topics. We evaluate dDMR on three datasets: New York Times articles with fine-grained tags, Amazon product reviews with product images, and Reddit posts with subreddit identity. dDMR learns representations that outperform DMR and LDA according to heldout perplexity and are more effective at downstream predictive tasks as the number of topics grows. Additionally, human subjects judge dDMR topics as being more representative of associated document features. Finally, we find that supervision leads to faster convergence as compared to an LDA baseline and that dDMR's model fit is less sensitive to training parameters than DMR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Fifteen years of research on topic models, starting from Latent Dirichlet Allocation (LDA) , have led to a variety of models for numerous data settings. These models identify sets (distributions) of related words that reflect semantic topics in a large corpus of text data. Topic models are now routinely used in the social sciences and humanities to analyze text collections (Schmidt, 2012) . Document collections are often accompanied by metadata and annotations, such as a book's author, an article's topic descriptor tags, images associated with a product review, or structured patient in-formation associated with clinical records. These document-level annotations can provide additional supervision for guiding topic model learning. Additional information can be integrated into topic models using either downstream or upstream models. Downstream models, such as supervised LDA (Mcauliffe and Blei, 2008) , assume that these additional document features are generated from each document's topic distribution. These models are most helpful when you desire topics that are predictive of the output, such as models for predicting the sentiment of product reviews. Upstream models, such as Dirichlet Multinomial Regression (DMR), condition each document's topic distribution on document features, such as author (Rosen-Zvi et al., 2004) , social network (McCallum et al., 2007) , or document labels (Ramage et al., 2009) . Previous work has demonstrated that upstream models tend to outperform downstream models in terms of model fit, as well as extracting topics that are useful in prediction of related tasks (Benton et al., 2016) .",
"cite_spans": [
{
"start": 376,
"end": 391,
"text": "(Schmidt, 2012)",
"ref_id": "BIBREF21"
},
{
"start": 884,
"end": 910,
"text": "(Mcauliffe and Blei, 2008)",
"ref_id": "BIBREF13"
},
{
"start": 1314,
"end": 1338,
"text": "(Rosen-Zvi et al., 2004)",
"ref_id": "BIBREF19"
},
{
"start": 1356,
"end": 1379,
"text": "(McCallum et al., 2007)",
"ref_id": "BIBREF14"
},
{
"start": 1401,
"end": 1422,
"text": "(Ramage et al., 2009)",
"ref_id": "BIBREF18"
},
{
"start": 1613,
"end": 1634,
"text": "(Benton et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "DMR is an upstream topic model with a particularly attractive method for incorporating arbitrary document features. Rather than defining specific random variables in the graphical model for each new document feature, DMR treats the document annotations as features in a log-linear model. The log-linear model parameterizes the Dirichlet prior for the document's topic distribution, making the Dirichlet's hyperparameter (typically \u03b1) documentspecific. By making no assumptions on model structure of new random variables, DMR is flexible to incorporating different types of features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite this flexibility, DMR models are typically restricted to a small number of document features. Several reasons account for this restriction: 1) Many text corpora only have a small number of documentlevel features; 2) Model hyperparameters become less interpretable as the dimensionality grows; and 3) DMR is liable to overfit the hyperparameters when the dimensionality of document features is high. In practice, applications of DMR are limited to settings with a small number of features, or where the analyst selects a few meaningful features by hand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A solution to this restriction is to learn lowdimensional representations of document features. Neural networks have shown wide-spread success at learning generalizable representations, often obviating the need for hand designed features (Collobert and Weston, 2008) . A prime example is word embedding features in natural language processing, which supplant traditional lexical features (Brown et al., 1992; Mikolov et al., 2013; Pennington et al., 2014) . Jointly learning networks that construct feature representations along with the parameters of a standard NLP model has become a common approach. For example, (Yu et al., 2015) used a tensor decomposition to jointly learn features from both word embeddings and traditional NLP features, along with the parameters of a relation extraction model. Additionally, neural networks can handle a variety of data types, including text, images and general metadata features. This makes them appropriate for addressing dimensionality reduction in DMR.",
"cite_spans": [
{
"start": 238,
"end": 266,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF8"
},
{
"start": 388,
"end": 408,
"text": "(Brown et al., 1992;",
"ref_id": "BIBREF3"
},
{
"start": 409,
"end": 430,
"text": "Mikolov et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 431,
"end": 455,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 616,
"end": 633,
"text": "(Yu et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose deep Dirichlet Multinomial Regression (dDMR), a model that extends DMR by introducing a deep neural network that learns a transformation of the input metadata into features used to form the Dirichlet hyperparameter. Whereas DMR parameterizes the document-topic priors as a log-linear function of document features, dDMR jointly learns a feature representation for each document along with a log-linear function that best captures the distribution over topics. Since the function mapping document features to topic prior is a neural network, we can jointly optimize the topic model and the neural network parameters by gradient ascent and back-propagation. We show that dDMR can use network architectures to better fit text corpora with high-dimensional document features as compared to other supervised topic models. The topics learned by dDMR are judged as being more representative of document features by human subjects. We also find that dDMR tends to converge in many fewer iterations than LDA, and also does not suffer from tuning difficulties that DMR encounters when applied to high-dimensional document features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our model builds on the generative model of DMR: an LDA-style topic model that replaces the hyperparameter (vector) of the topic distribution Dirichlet prior with a hyperparameter that is output from a log-linear model given the document features. Our model deep DMR (dDMR) replaces this log-linear model with an arbitrary function f that maps a real-valued vector of dimension F to a representation of dimension K. For simplicity we make no assumptions on the choice of this function, only that it can be optimized to minimize a cost on its output by gradient ascent. In practice, we define this function as a neural network, where the architecture of this network is informed by the type of document features, e.g. a convolutional neural network for images. We use neural networks since they are expressive, generalize well to unseen data, and can be jointly trained using straightforward gradient ascent with back-propagation. The generative story for dDMR is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "1. Representation function f \u2208 R F \u2192 R K 2. Topic-word prior parameters: \u03c9 bias \u2208 R V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "3. For each document m with features \u03b1 m \u2208 R F , generate document prior:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "(a) \u03b8 m = exp(f (\u03b1 m )) (b) \u03b8 m \u223c Dirichlet( \u03b8 m )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "4. For each topic k, generate word distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "(a) \u03c6 k = exp(\u03c9 bias ) (b) \u03c6 k \u223c Dirichlet( \u03c6 k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "5. For each token (m, n), generate data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "(a) Topic (unobserved): z m,n \u223c \u03b8 m (b) Word (observed): w m,n \u223c \u03c6 zm,n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "where V is the vocabulary size and K are the number of topics. In practice, the document features need not be restricted to fixed-length feature vectors, e.g. f may be an RNN that maps from a sequence of characters to a fixed length vector in R k . DMR is a special case of dDMR with the choice of a linear function for f . Figure 1 displays the graphical model diagram for dDMR.",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 332,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
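A minimal numpy sketch of the generative story above, assuming a single-hidden-layer MLP for f with ReLU activations (matching the architectures used in the experiments); the parameter names and toy dimensions are hypothetical illustrations, not the released dDMR implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, F, H = 1000, 20, 50, 10   # toy vocab size, topics, feature dim, MLP hidden width

# Hypothetical parameters of f: a single-hidden-layer MLP (ReLU, then linear output)
W1, b1 = rng.normal(scale=0.1, size=(H, F)), np.zeros(H)
W2, b2 = rng.normal(scale=0.1, size=(K, H)), np.zeros(K)
omega_bias = np.zeros(V)        # topic-word prior parameters

def f(alpha):
    """Map document features alpha in R^F to a K-dimensional representation."""
    return W2 @ np.maximum(0.0, W1 @ alpha + b1) + b2

# Topic-word distributions, shared across all documents
phi = rng.dirichlet(np.exp(omega_bias), size=K)           # (K, V)

def generate_document(alpha, n_tokens):
    """Generate one document's topic assignments and words given its features."""
    theta_tilde = np.exp(f(alpha))                         # document-specific Dirichlet prior
    theta = rng.dirichlet(theta_tilde)                     # document-topic distribution
    z = rng.choice(K, size=n_tokens, p=theta)              # per-token topic assignments
    w = np.array([rng.choice(V, p=phi[k]) for k in z])     # observed words
    return z, w

z, w = generate_document(rng.normal(size=F), n_tokens=30)
print(z[:5], w[:5])
```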
{
"text": "We infer the random variables of the topic model using collapsed Gibbs sampling, and estimate the model parameters using gradient ascent with backpropagation. We use alternating optimization: one iteration of collapsed Gibbs sampling (sample topics for each word) and then an update of the parameters of f by gradient ascent to maximize the log-likelihood of the tokens and topic assignments. Given the parameters, the sampling step remains unchanged from LDA (Griffiths and . The network parameters are estimated via backpropagation through the network for a fixed sample. Eq. 1 shows the gradient of the data log-likelihood, L , with respect to \u03b8 m,k = exp(f (\u03b1 m ) k ), the prior weight of topic k for document m. \u03c8 is the digamma function (derivative of the log-gamma function), n m is the number of tokens in document m, and n m,k is the count of how many tokens topic k was assigned to in document m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Parameter Estimation",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b4L \u03b4 \u03b8 m,k = \u03c8( K k=1 \u03b8 m,k ) \u2212 \u03c8( K k=1 \u03b8 m,k + nm) +\u03c8( \u03b8 m,k + n m,k ) \u2212 \u03c8( \u03b8 m,k )",
"eq_num": "(1)"
}
],
"section": "Inference and Parameter Estimation",
"sec_num": "2.1"
},
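As a worked illustration of Eq. 1, the sketch below evaluates the gradient of the collapsed log-likelihood with respect to each prior weight, using scipy.special.digamma for the digamma function; the count arrays are hypothetical placeholders standing in for Gibbs-sample statistics.

```python
import numpy as np
from scipy.special import digamma

def prior_gradient(theta_tilde, n_mk):
    """Gradient of the data log-likelihood w.r.t. the prior weights (Eq. 1).

    theta_tilde: (M, K) document-topic prior weights, exp(f(alpha_m)).
    n_mk:        (M, K) per-document topic assignment counts from the Gibbs sample.
    """
    n_m = n_mk.sum(axis=1, keepdims=True)        # tokens per document
    s = theta_tilde.sum(axis=1, keepdims=True)   # sum of prior weights over topics
    return (digamma(s) - digamma(s + n_m)
            + digamma(theta_tilde + n_mk) - digamma(theta_tilde))

# Toy check: 2 documents, 3 topics (counts are made up)
theta_tilde = np.exp(np.array([[0.1, -0.2, 0.0],
                               [0.3,  0.1, -0.1]]))
n_mk = np.array([[5., 1., 0.],
                 [2., 2., 6.]])
# Back-propagating into f uses the chain rule; d theta_tilde / d f = theta_tilde (from exp).
print(prior_gradient(theta_tilde, n_mk))
```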
{
"text": "3 Data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Parameter Estimation",
"sec_num": "2.1"
},
{
"text": "We explore the flexibility of our model by considering three different datasets that include different types of metadata associated with each document. For each dataset, we describe the documents and metadata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Parameter Estimation",
"sec_num": "2.1"
},
{
"text": "New York Times The New York Times Annotated Corpus (Sandhaus, 2008) contains articles with extensive metadata used for indexing by the newspaper. For supervision, we used the \"descriptor\" tags associated with each article assigned by archivists. These tags reflect the topic of an article, as well as organizations or people mentioned in the article. We selected all articles published in 1998, and kept those tags that were associated with at least 3 articles in that year -2424 unique tags. 20 of the 200 most frequent tags were held out from training for validation purposes: { \"education and schools\", \"law and legislation\", \"advertising\", \"budgets and budgeting\", \"freedom and human rights\", \"telephones and telecommunications\", \"bombs and explosives\", \"sexual harassment\", \"reform and reorganization\", \"teachers and school employees\", \"tests and testing\", \"futures and options trading\", \"boxing\", \"firearms\", \"company reports\", \"embargoes and economic sanctions\", \"hospitals\", \"states (us)\", \"bridge (card game)\", and \"auctions\"}. Articles contained a mean of 2.1 tags, with 738 articles not containing any of these tags. Tags were represented using a one-hot encoding. Articles were tokenized by non-alphanumeric characters and numerals were replaced by a special token. Words occurring in more than 40% of documents were removed, and only the 15,000 most frequent types were retained. There were a total of 89,397 articles with an average length of 158 tokens per article.",
"cite_spans": [
{
"start": 51,
"end": 67,
"text": "(Sandhaus, 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Parameter Estimation",
"sec_num": "2.1"
},
{
"text": "Amazon Reviews The Amazon product reviews corpus(McAuley and Yang, 2016) contains reviews of products as well as images of the product. We sampled 100,000 Amazon product reviews: 20,000 reviews sampled uniformly from the Musical Instruments, Patio, Lawn, & Garden, Grocery & Gourmet Food, Automotive, and Pet Supplies product categories. We hypothesize that knowing information about the product's appearance will indicate which words appear in the review, especially for product images occurring in these categories. 66 of the reviews we sampled contained only highly infrequent tokens, and were therefore removed from our data, leaving 99,934 product reviews. Articles were preprocessed identically to the New York Times data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Parameter Estimation",
"sec_num": "2.1"
},
{
"text": "We include images as supervision by using the 4096-dimensional second fully-connected layer of the Caffe convolutional neural network reference model, trained to predict ImageNet object categories 1 . Using these features as supervision to dDMR is similar to fine-tuning a pre-trained CNN to predict a new set of labels. Since the Caffe reference model is already trained on a large corpus of images, we chose to fine-tune only the final layers so as to learn a transformation of the already learned representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Parameter Estimation",
"sec_num": "2.1"
},
{
"text": "Reddit We selected a sample of Reddit posts made in January 2016. A standard stop list was used to remove frequent function words and we restricted the vocabulary to the 30,000 most frequent types. We restricted posts made to subreddits, collections of topically-related threads, with at least ten comments in this month (26,830 subreddits), and made by users with at least five comments across these subreddits (total of 1,351,283 million users). We then sampled 10,000 users uniformly at random and used all their comments as a corpus, for a total of 389,234 comments over 7,866 subreddits (token length mean: 16.3, median: 9) 2 . This corpus differs from the others in two ways. First, Reddit documents are very short, which is problematic for topic models that rely on detecting correlations in token use. Second, the Reddit metadata that may be useful for topic modeling is necessarily high-dimensional (e.g. subreddit identity, a proxy for topical content). DMR may have trouble exploiting high-dimensional supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Parameter Estimation",
"sec_num": "2.1"
},
{
"text": "Model Estimation We used the same procedure for training topic models on each dataset. Hyperparameter gradient updates were performed after a burnin period of 100 Gibbs sampling iterations. Hyperparameters were updated with the adaptive learning rate algorithm Adadelta (Zeiler, 2012), with a tuned base learning rate and fixed \u03c1 = 0.95 3 . All models were trained for a maximum of 15,000 epochs, with early stopping if heldout perplexity showed no improvements after 200 epochs (evaluated once every 20 epochs). Hyperparameters were fit on every other token in the corpus, and (heldout) log-likelihood/perplexity was calculated on the remaining tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For the architecture of the dDMR model we used single-hidden-layer multi-layer perceptrons (MLPs), with rectified linear unit (ReLU) activations on the hidden layer, and linear activation on the output layer. We sampled three architectures for each dataset, by drawing layer widths independently at random from [10, 500] , and also included two architectures with (50, 10) and (100, 50), (hidden, output) layers 4 . We compare the performance of dDMR to DMR trained on the same feature set as well as LDA.",
"cite_spans": [
{
"start": 311,
"end": 315,
"text": "[10,",
"ref_id": null
},
{
"start": 316,
"end": 320,
"text": "500]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For the New York Times dataset, we also compare dDMR to DMR trained on features after applying principal components analysis (PCA) to reduce the dimensionality of descriptor feature supervision, sweeping over PCA projection width in {10, 50, 100, 250, 500, 1000}. Comparing performance of dDMR to PCA-reduced DMR tests two modeling choices. First, it tests the hypothesis that explicitly learning a representation for document annotations to maximize data likelihood produces a \"better-fit\" topic model than learning this annotation representation in unsupervised fashion -a two-step process. It also lets us determine if a linear dimensionality reduction technique is sufficient to learning a good feature representation for topic modeling, as opposed to learning a non-linear transformation of the document supervision. Note that we cannot apply PCA to reduce the dimensionality for subreddit id in Reddit since it is a one-hot feature. Documents in each dataset were partitioned into ten equally-sized folds. Model training parameters of L1 and L2 regularization penalties on feature weights for DMR and dDMR and the base learning rate for each model class were tuned to minimize heldout perplexity on the first fold. These were 3 We found this adaptive learning rate algorithm improved model fit in many fewer iterations than gradient descent with tuned step size and decay rate for all models. 4 We included these two very narrow architectures to ensure that some architecture learned a small feature representation, generalizing better when features are very noisy or only provide a weak signal for topic modeling. We restricted ourselves to only train dDMR models with single-hidden-layer MLPs in the priors for simplicity and to avoid model fishing.",
"cite_spans": [
{
"start": 1232,
"end": 1233,
"text": "3",
"ref_id": null
},
{
"start": 1399,
"end": 1400,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "tuned independently for each model, with number of topics fixed to 10, and dDMR architecture fixed to narrow layer widths (50, 10). Model selection was based on the macro-averaged performance on the next eight folds, and we report performance on the remaining fold. We selected models separately for each evaluation metric. For dDMR, model selection amounts to selecting the document prior architecture, and for DMR with PCA-reduced feature supervision, model selection involved selecting the PCA projection width.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Evaluation Each model was evaluated according to heldout perplexity, topic coherence by normalized pointwise mutual information (NPMI) (Lau et al., 2014) , and a dataset-specific predictive task.",
"cite_spans": [
{
"start": 135,
"end": 153,
"text": "(Lau et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Heldout perplexity was computed by only aggregating document-topic and topic-word counts from every other token in the corpus, and evaluating perplexity on the remaining heldout tokens. This corresponds to the \"document completion\" evaluation method as described in (Wallach et al., 2009) , where instead of holding out the words in the second half of a document, every other word is held out.",
"cite_spans": [
{
"start": 266,
"end": 288,
"text": "(Wallach et al., 2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
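A small sketch of the "every other token" document-completion perplexity described above, assuming point estimates theta (document-topic) and phi (topic-word) are already available; the array names are placeholders, not the paper's code.

```python
import numpy as np

def heldout_perplexity(docs, theta, phi):
    """Document-completion perplexity on every other token.

    docs:  list of 1-D integer arrays of word ids.
    theta: (M, K) document-topic distributions estimated from the retained tokens.
    phi:   (K, V) topic-word distributions.
    """
    log_lik, n_heldout = 0.0, 0
    for m, doc in enumerate(docs):
        heldout = doc[1::2]                    # hold out every other token
        p_w = theta[m] @ phi[:, heldout]       # mixture probability of each heldout word
        log_lik += np.log(p_w).sum()
        n_heldout += heldout.size
    return np.exp(-log_lik / n_heldout)
```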
{
"text": "NPMI (Lau et al., 2014) computes a an automatic measure of topic quality, the sum of pointwise mutual information between pairs of m most likely words normalized by the negative log of each pair jointly occurring within a document (Eq. 2). We calculated this topic quality metric on the top 20 most probable words in each topic, and averaged over the most coherent 1, 5, 10, and over all learned topics. However, models were selected to only maximize average NPMI over all topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "NPMI = m i=1 m j=i+1 log P (w i ,w j )) P (w i )P (w j ) \u2212 log P (wi, wj)",
"eq_num": "(2)"
}
],
"section": "Experiments",
"sec_num": "4"
},
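A sketch of the NPMI score in Eq. 2 for a single topic, computed from document-level co-occurrence counts; it assumes unsmoothed empirical probabilities and skips pairs that never co-occur, conventions not specified in the text.

```python
import numpy as np
from itertools import combinations

def topic_npmi(top_words, docs):
    """NPMI (Eq. 2) for one topic's m most probable words.

    top_words: list of word ids for the topic.
    docs:      list of sets of word ids, one set per document.
    """
    n_docs = len(docs)

    def p(*words):
        # empirical probability that all the given words occur together in a document
        return sum(all(w in d for w in words) for d in docs) / n_docs

    score = 0.0
    for wi, wj in combinations(top_words, 2):
        p_ij = p(wi, wj)
        if p_ij == 0.0:
            continue                 # skip never co-occurring pairs (a common convention)
        pmi = np.log(p_ij / (p(wi) * p(wj)))
        score += pmi / -np.log(p_ij)
    return score

# Toy example with 4 documents and one topic's top three words
docs = [{1, 2, 3}, {1, 2}, {2, 4}, {1, 3, 4}]
print(topic_npmi([1, 2, 3], docs))
```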
{
"text": "For prediction tasks, we used the sampled topic distribution associated with a document, averaged over the last 100 iterations, as features to predict a document-level label. For New York Times articles we predicted 10 of the 200 most frequent descriptor tags restricting to articles with exactly one of these descriptors. For Amazon, we predicted the product category a document belonged to (one of five), and for Reddit we predicted a heldout set of document subreddit IDs. In the case of Reddit, these heldout subreddits were 10 out of the 100 most prevalent in our data, and were held out similar to the New York Times evaluation. SVM models were fit on inferred topic distribution features and were then evaluated according to accuracy, F1-score, and area under the ROC curve. The SVM slack parameter was tuned by 4-fold cross-validation on 60% of the documents, and evaluated on the remaining 40%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
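A sketch of the downstream prediction setup described above: averaged topic proportions as features for an SVM, with the slack parameter C tuned by 4-fold cross-validation on 60% of the documents; the scikit-learn calls are standard, but the C grid and macro-averaged F1 are assumptions.

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def evaluate_topic_features(theta_hat, labels, seed=0):
    """theta_hat: (M, K) topic proportions averaged over the last 100 samples;
    labels: (M,) document-level labels (e.g. product category)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        theta_hat, labels, train_size=0.6, random_state=seed)
    # Tune the SVM slack parameter C by 4-fold cross-validation on the training split
    grid = GridSearchCV(LinearSVC(), {"C": [0.01, 0.1, 1.0, 10.0]}, cv=4)
    grid.fit(X_tr, y_tr)
    return f1_score(y_te, grid.predict(X_te), average="macro")
```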
{
"text": "We also collected human topic judgments using Amazon Mechanical Turk (Callison- Dredze, 2010). Each subject was presented with a human-readable version of the features used for supervision. For New York Times articles we showed the descriptor tags, for Amazon the product image, and for Reddit the name, title, and public description of the subreddit. We showed the top twenty words for the most probable topic sampled for the document with those features, as learned by two different models. One topic was learned by dDMR and the other was either learned by LDA or DMR. The topics presented were from the 200topic model architecture that maximized NPMI on development folds. Annotators were asked \"to choose which word list best describes a document . . . \" with the displayed features. The topic learned by dDMR was shuffled to lie on either the right or left for each Human Intelligence Task (HIT). We obtained judgments on 1,000 documents for each dataset and each model evaluation pair -6,000 documents in all. This task can be difficult for many of the features, which may be unclear (e.g. descriptor tags without context) or difficult to interpret (e.g. images of automotive parts). We excluded the document text since we did not want subjects to evaluate topic quality based on token overlap with the actual document.",
"cite_spans": [
{
"start": 69,
"end": 79,
"text": "(Callison-",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Model Fitting dDMR achieves lower perplexity than LDA or DMR for most combinations of number of topics and dataset (Table 1 ). It is striking that DMR achieves higher perplexity than LDA in many of these conditions. This is particularly true for the Amazon dataset, where DMR consistently lags behind LDA. Supervision alone does not improve topic model fit if it is too high-dimensional for learning. Perplexity is higher on the Reddit data for all models due to both a larger vocabulary size and shorter documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "(Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "It is also worth noting that finding a lowdimensional linear projection of the supervision features with PCA does not improve model fit as well as dDMR. dDMR benefits both from joint learning to maximize corpus log-likelihood and possibly by the flexibility of learning non-linear projection (through the hidden layer ReLU activations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Another striking result is the difference in speed of convergence between the supervised models and LDA (Figure 2 ). Even supervision that provides a weak signal for topic modeling, such as Amazon product image features, can speed convergence over LDA. In certain cases (Figure 2 left) , training dDMR for 1,000 iterations results in a lower perplexity model than LDA trained for over 10,000 iterations.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 113,
"text": "(Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 270,
"end": 285,
"text": "(Figure 2 left)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In terms of actual run time, parallelization of model training differs between the supervised model and LDA. Gradient updates necessary for learning the representation can be trivially distributed across multiple cores using optimized linear algebra libraries (e.g. BLAS), mitigating the additional cost incurred by hyperparameter updates in supervised models. In contrast, the Gibbs sampling iterations can also be parallelized, but not as easily, ultimately making resampling topics the most expensive step in model training. Because of this, the potential difference in runtime for a single iteration between dDMR and LDA is small, with the former converging in far fewer iterations. In our experiments, per iteration time taken by DMR or dDMR was at most twice as long as LDA across all experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "dDMR performance is also insensitive to training parameters relative to DMR. While DMR requires heavy L1 and L2 regularization and a very small step size to achieve low heldout perplexity, dDMR is relatively insensitive to the penalty on regularization and benefits from a higher base learning rate (Figure 3) . We found that dDMR is easier to tune than DMR, requiring less exploration of the training parameters. This is also corroborated by higher variance in perplexity achieved by DMR across different cross-validation folds (Table 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 309,
"text": "(Figure 3)",
"ref_id": null
},
{
"start": 529,
"end": 538,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Topic Quality Results for the automatic topic quality evaluation, NPMI, are mixed across datasets. In many cases, LDA and DMR score highly according to NPMI, despite achieving higher heldout perplexity than dDMR (Table 2) . This may not be surprising as previous work has found that perplexity does not correlate well with human judgments of topic coherence (Lau et al., 2014) .",
"cite_spans": [
{
"start": 358,
"end": 376,
"text": "(Lau et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 212,
"end": 221,
"text": "(Table 2)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "However, in the human evaluation, subjects find that dDMR-learned topics are more representative of document annotations than DMR (Table 3) . While subjects only statistically significantly favored dDMR models over LDA on the Reddit data, they favored dDMR topics over LDA across all datasets, and significantly preferred dDMR top- 10 -5 10 -4 10 -3 10 -2 10 -1 10 0 10 1 10 2 Heldout Perplexity DMR dDMR 10 -5 10 -4 10 -3 10 -2 10 -1 10 0 10 1 10 2 L2 10 -3 10 -2 10 -1 10 0 10 1",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 140,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Step Size Figure 3 : Heldout perplexity on the Amazon data tuning fold for DMR (orange) and dDMR (purple) with a (50, 10) layer architecture as a function of training parameters: L1, L2 feature weight regularization, and base learning rate. All models were trained for a fixed 5,000 iterations, with horizontal jitter added to each point. Table 3 : % HITs where humans preferred dDMR topics as more representative of document supervision than the competing model. * denotes statistical significance according to a one-tailed binomial test at the p = 0.05 level.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 18,
"text": "Figure 3",
"ref_id": null
},
{
"start": 339,
"end": 346,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Base",
"sec_num": null
},
{
"text": "ics over DMR on two of the three datasets. This is contrary to themodel rankings according to NPMI, which suggest that DMR topics are often higher quality when it comes to human interpretability. We also qualitatively explored the product image representations DMR and dDMR learned on the Amazon data. To do so, we computed and normalized the prior document distribution for a sample of documents for lowest perplexity DMR and dDMR Z = 200 topic models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base",
"sec_num": null
},
{
"text": "p(k|m) = \u03b8m Z k=1 \u03b8 m,k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base",
"sec_num": null
},
{
"text": ", the prior probability of sampling topic k, conditioned on the features for document m. We then marginalize over topics to yield the conditional probability of a word w given document m: Table 4 contains a sample of these probable words given document supervision. We find that dDMR identifies words likely to appear in a review of the product pictured. However, some images lead dDMR down a garden path. For example, a bottle of \"Turtle Food\" should not be associated with words for human consumables like \"coffee\" and \"chocolate\", despite the container resembling some of these products. However, the image-specific document priors DMR learned are not as sensitive to the actual product image as those learned by dDMR. The prior conditional probabilities p(w|m) for \"Turtle Food\", \"Slushy Magic Cup\", and \"Rawhide Dog Bones\" product images are all ranked identically by DMR.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Base",
"sec_num": null
},
{
"text": "p(w|m) = Z k=1 p(w|k)p(k|m).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base",
"sec_num": null
},
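A sketch of the marginalization used for Table 4: normalize the document prior over topics, marginalize against the topic-word distributions to obtain p(w|m), then subtract each word's mean marginal probability across documents to suppress frequent words; array names are placeholders.

```python
import numpy as np

def prior_word_probs(theta_tilde, phi):
    """theta_tilde: (M, K) prior weights exp(f(alpha_m)); phi: (K, V) topic-word probs."""
    p_k_given_m = theta_tilde / theta_tilde.sum(axis=1, keepdims=True)  # p(k|m)
    return p_k_given_m @ phi                                            # p(w|m), shape (M, V)

def top_words_for_document(theta_tilde, phi, vocab, m, n=20):
    """Top n words for document m after subtracting each word's mean marginal probability."""
    p_w_given_m = prior_word_probs(theta_tilde, phi)
    relative = p_w_given_m[m] - p_w_given_m.mean(axis=0)
    return [vocab[i] for i in np.argsort(-relative)[:n]]
```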
{
"text": "Predictive Performance Finally, we consider the utility of the learned topic distributions for downstream prediction tasks, a common use of topic models. Although token perplexity is a standard measure of topic model fit, it has no direct relationship with how topic models are typically used: to identify consistent themes or reduce the dimensionality of a document corpus. We found that features based on topic distributions from dDMR outperform LDA and DMR on the Amazon and Reddit data when the number of topics fit is large, although they fail to outperform DMR on New York Times (Table 5 ). Heldout perplexity is strongly correlated with predictive performance, with a Pearson correlation coefficient, \u03c1 = 0.898 between F1-score and heldout perplexity on the Amazon data. This strong correlation is likely due to the tight rela-tionship between words used in product reviews and product category: a model that assigns high likelihood to a words in a product review corpus should also be informative of the product categories. Prior work showed that upstream supervised topic models, such as DMR, learn topic distributions that are effective at downstream prediction tasks (Benton et al., 2016) . We find that topic distributions learned by dDMR improve over DMR in certain cases, particularly as the number of topics increases.",
"cite_spans": [
{
"start": 1178,
"end": 1199,
"text": "(Benton et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 585,
"end": 593,
"text": "(Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Base",
"sec_num": null
},
{
"text": "With the widespread adoption of neural networks, others have sought to combine topic and neural models. One line of work replaces generative, LDAbased, topic models with discriminatively-trained models based on neural networks. (Cao et al., 2015) model \u03b8 and \u03c6 using neural networks with softmax output layers and learn network parameters that maximize data likelihood. They also learn n-gram embeddings to identify topics whose elements are not restricted to unigrams. (Chen et al., 2015) similarly expresses the (smoothed) supervised LDA (Mcauliffe and Blei, 2008) generative model as a neural network, and give an algorithm to discriminatively train it. (Wan et al., 2012 ) take a similar approach to dDMR where they use a neural network to extract image representations that maximize the probability of SIFT descriptors extracted from the image. However, this model is used for image classification, not for exploring a corpus of documents as is typical of topic models. These models are computationally attractive in that they avoid approximating the posterior distribution of topic assignments given tokens by dropping the assumption that \u03b8 and \u03c6 are drawn from Dirichlet priors. Model fitting is performed by back-propagation of a max-margin cost. In contrast, we use neural networks to learn feature representations for documents, not as a replacement for the LDA generative story. This is similar to variants of SPRITE (Paul and Dredze, 2015), where many document-level factors are combined to generate a document-topic prior. In contrast to several of these models, the core of our topic model remains unchanged, meaning that dDMR is agnostic to many other extensions of LDA.",
"cite_spans": [
{
"start": 228,
"end": 246,
"text": "(Cao et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 470,
"end": 489,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 540,
"end": 566,
"text": "(Mcauliffe and Blei, 2008)",
"ref_id": "BIBREF13"
},
{
"start": 657,
"end": 674,
"text": "(Wan et al., 2012",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "There has been extensive work in modeling both textual and visual topics. Models such as Corr-LDA (Blei and Jordan, 2003) suppose that a text document and associated image features are generated by a shared latent topic. This property is shared by other topic models over images, such as STM-TwitterLDA (Cai et al., 2015) and (Zhang et al., 2015) . While these models try to model images, we instead use images in the Amazon data to better estimate topic distributions.",
"cite_spans": [
{
"start": 89,
"end": 121,
"text": "Corr-LDA (Blei and Jordan, 2003)",
"ref_id": null
},
{
"start": 303,
"end": 321,
"text": "(Cai et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 326,
"end": 346,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Our experiment on using images to model Ama- Table 4 : Top twenty words associated with each of the product images -learned by dDMR vs. DMR (Z = 200). These images were drawn at random from the Amazon corpus (no cherry-picking involved). Word lists were generated by marginalizing over the prior topic distribution associated with that image, and then normalizing each word's probability by subtracting off its mean marginal probability across all images in the corpus. This is done to avoid displaying highly frequent words. Words that differ between each model's ranked list are in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "zon product reviews resembles work on image caption generation, yet the similarity is superficial. The relationship between an image and its caption is relatively tight (Fang et al., 2015) -objects in the image will likely be referenced in the caption. For Amazon product reviews, visual features of the product, like color, may be explicitly mentioned in the review, but then again, they may not. Also, the aim of topic models is to extract common themes of co-occurring words, and how those themes are distributed across each document. The similarity between our work and captioning lies only in the fact that we extract image features from a CNN trained as an object recognizer to inform documenttopic distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We present deep Dirichlet Multinomial Regression, a supervised topic model which both learns a representation of document-level features and how to use that representation for informing a topic distribution. We demonstrate the flexibility of our model on three corpora with different types of metadata: topic descriptor tags, images, and subreddit IDs. dDMR is better able to fit text corpora with high-dimensional supervision compared to LDA or DMR. Furthermore, we find that document supervision greatly reduces the number of Gibbs sampling iterations for a topic model to converge, and that the dDMR prior architecture makes it more robust to training parameters than DMR. We also find that the topic distributions learned by dDMR are more predictive of external document labels such as known topic tags or product category as the number of topics grows and that dDMR topics are judged as more representative of the document metadata by human subjects. Source code for training dDMR can be found at http://www.github.com/abenton/deep-dmr.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Features used directly from http://jmcauley. ucsd.edu/data/amazon/ 2 The sampled comment IDs can be found here: https://github.com/abenton/deep-dmr/blob/ master/resources/reddit_comment_ids.txt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Collective supervision of topic models for predicting surveys with social media",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Benton",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2892--2898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrian Benton, Michael J Paul, Braden Hancock, and Mark Dredze. 2016. Collective supervision of topic models for predicting surveys with social media. In Proceedings of the AAAI Conference on Artificial Intelligence. pages 2892-2898.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling annotated data",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Michael I Jordan",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Informaion Retrieval",
"volume": "",
"issue": "",
"pages": "127--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei and Michael I Jordan. 2003. Model- ing annotated data. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Informaion Re- trieval . ACM, pages 127-134.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Michael I Jordan",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research 3(Jan):993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "",
"middle": [],
"last": "Peter F Brown",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Desouza",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Vincent J Della",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer C",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics 18(4):467-479.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "What are popular: exploring twitter features for event detection, tracking and visualization",
"authors": [
{
"first": "Hongyun",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xuefei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 23rd ACM International Conference on Multimedia",
"volume": "",
"issue": "",
"pages": "89--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyun Cai, Yang Yang, Xuefei Li, and Zi Huang. 2015. What are popular: exploring twitter fea- tures for event detection, tracking and visualiza- tion. In Proceedings of the 23rd ACM Interna- tional Conference on Multimedia. ACM, pages 89-98.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Creating speech and language data with amazon's mechanical turk",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Burch",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL-HLT Workshop on Creating Speech and Language Data With Mechanical Turk",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch and Mark Dredze. 2010. Cre- ating speech and language data with amazon's mechanical turk. In NAACL-HLT Workshop on Creating Speech and Language Data With Me- chanical Turk . pages 1-12.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A novel neural topic model and its supervised extension",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the AAAI conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2210--2216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Sujian Li, Yang Liu, Wenjie Li, and Heng Ji. 2015. A novel neural topic model and its supervised extension. In Proceedings of the AAAI conference on Artificial Intelligence. pages 2210-2216.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "End-to-end learning of LDA by mirror-descent back propagation over a deep architecture",
"authors": [
{
"first": "Jianshu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yelong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xinying",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1765--1773",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianshu Chen, Ji He, Yelong Shen, Lin Xiao, Xi- aodong He, Jianfeng Gao, Xinying Song, and Li Deng. 2015. End-to-end learning of LDA by mirror-descent back propagation over a deep ar- chitecture. In Advances in Neural Information Processing Systems. pages 1765-1773.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A uni- fied architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning. ACM, pages 160-167.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "From captions to visual concepts and back",
"authors": [
{
"first": "Saurabh",
"middle": [],
"last": "Hao Fang",
"suffix": ""
},
{
"first": "Forrest",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Iandola",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Rupesh",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Platt",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1473--1482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Fang, Saurabh Gupta, Forrest Iandola, Ru- pesh K Srivastava, Li Deng, Piotr Doll\u00e1r, Jian- feng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition. pages 1473-1482.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Finding scientific topics",
"authors": [
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "101",
"issue": "1",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L Griffiths and Mark Steyvers. 2004. Find- ing scientific topics. Proceedings of the National Academy of Sciences 101(suppl 1):5228-5235.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jey Han Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "530--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, David Newman, and Timothy Bald- win. 2014. Machine reading tea leaves: Automat- ically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. pages 530-539.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Addressing complex and subjective product-related queries with customer reviews",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee",
"volume": "",
"issue": "",
"pages": "625--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with customer reviews. In Proceedings of the 25th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, pages 625-635.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Supervised topic models",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jon",
"suffix": ""
},
{
"first": "David M",
"middle": [],
"last": "Mcauliffe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "121--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jon D Mcauliffe and David M Blei. 2008. Super- vised topic models. In Advances in Neural Infor- mation Processing Systems. pages 121-128.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Topic and role discovery in social networks with experiments on enron and academic email",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Xuerui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Andres",
"middle": [],
"last": "Corrada-Emmanuel",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum, Xuerui Wang, and Andres Corrada-Emmanuel. 2007. Topic and role dis- covery in social networks with experiments on enron and academic email. Journal of Artificial Intelligence Research .",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed repre- sentations of words and phrases and their com- positionality. In Advances in Neural Information Processing Systems. pages 3111-3119.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sprite: Generalizing topic models with structured priors",
"authors": [
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "43--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J Paul and Mark Dredze. 2015. Sprite: Generalizing topic models with structured pri- ors. Transactions of the Association for Compu- tational Linguistics 3:43-57.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods on Natural Language Processing",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods on Natu- ral Language Processing. volume 14, pages 1532- 1543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "248--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D Manning. 2009. Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Lan- guage Processing. Association for Computational Linguistics, pages 248-256.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The author-topic model for authors and documents",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Rosen-Zvi",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "487--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, and Padhraic Smyth. 2004. The author-topic model for authors and documents. In Proceed- ings of the 20th Conference on Uncertainty in Artificial Intelligence. AUAI Press, pages 487- 494.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The new york times annotated corpus. Linguistic Data Consortium",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Sandhaus",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Sandhaus. 2008. The new york times an- notated corpus. Linguistic Data Consortium, Philadelphia 6(12):e26752.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Words alone: Dismantling topic models in the humanities",
"authors": [
{
"first": "Benjamin M",
"middle": [],
"last": "Schmidt",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Digital Humanities",
"volume": "2",
"issue": "1",
"pages": "49--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin M Schmidt. 2012. Words alone: Disman- tling topic models in the humanities. Journal of Digital Humanities 2(1):49-65.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Evaluation methods for topic models",
"authors": [
{
"first": "Hanna M",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Iain",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1105--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna M Wallach, Iain Murray, Ruslan Salakhutdi- nov, and David Mimno. 2009. Evaluation meth- ods for topic models. In Proceedings of the 26th Annual International Conference on Machine Learning. ACM, pages 1105-1112.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A hybrid neural network-latent topic model",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 15th International Conference on Artificial Intelligence and Statistics",
"volume": "12",
"issue": "",
"pages": "1287--1294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Wan, Leo Zhu, and Rob Fergus. 2012. A hy- brid neural network-latent topic model. In Pro- ceedings of the 15th International Conference on Artificial Intelligence and Statistics. volume 12, pages 1287-1294.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Combining word embeddings and feature embeddings for fine-grained relation extraction",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Matthew R",
"middle": [],
"last": "Gormley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 14th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1374--1379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Yu, Matthew R Gormley, and Mark Dredze. 2015. Combining word embeddings and feature embeddings for fine-grained relation extraction. In Proceedings of the 14th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1374-1379.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Adadelta: an adaptive learning rate method",
"authors": [
{
"first": "Matthew D",
"middle": [],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler. 2012. Adadelta: an adap- tive learning rate method. arXiv preprint arXiv:1212.5701 .",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Dynamic topic modeling for monitoring market competition from online text and image data",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Gunhee",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1425--1434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang, Gunhee Kim, and Eric P Xing. 2015. Dynamic topic modeling for monitoring market competition from online text and image data. In Proceedings of the 21th ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining. ACM, pages 1425-1434.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The graphical model for dDMR. f is shown as a feedforward fully-connected network, and the document features are given by the image (a cat carrier).",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Heldout perplexity as a function of iteration for lowest-perplexity models with Z = 100. The vertical dashed line indicates when models are burned in and hyperparameter optimization begins.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>: Test fold heldout perplexity for each dataset and model for number of topics Z. Stan-dard error of mean heldout perplexity over all cross-validation folds in parentheses.</td></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>LDA New York Times 51.1% Amazon 51.9% Reddit 55.5%</td><td>DMR 51.9% 61.4% *</td></tr></table>",
"html": null,
"text": "Top-1, 5, 10, and overall topic NPMI across all datasets. Models that maximized overall NPMI across dev folds were chosen and the best-performing model is in bold.",
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>Image</td><td>Item</td><td>dDMR Probable Words</td><td>DMR Probable Words</td></tr><tr><td/><td colspan=\"2\">Guitar Foot Rest grill easy cover Bark Collar fit battery 0000 light install car sound easy work unit amp 00 lights mic power works 000 took replace installed</td><td>fit easy well works car light work quality sound would guitar 0000 cover nice bought looks install battery 00 fits</td></tr><tr><td/><td>Turtle Food</td><td>taste coffee flavor food like love cat tea product tried dog eat chocolate litter cats good best bag sugar loves</td><td>taste coffee dog like love flavor food cat product tea cats tried water dogs loves eat chocolate toy mix sugar</td></tr><tr><td/><td>Slushy Magic Cup</td><td>food taste cat coffee flavor love like dog tea litter cats eat tried product chocolate loves bag good best smell</td><td>taste coffee dog like love flavor food cat product tea cats tried water dogs loves eat chocolate toy mix good</td></tr><tr><td/><td>Rawhide Dog Bones</td><td>food cat dog cats litter dogs loves love product smell eat box tried pet bag hair taste vet like seeds</td><td>taste coffee dog like love flavor food cat product tea cats tried water dogs loves eat chocolate toy mix good</td></tr><tr><td/><td>Instrument Cable</td><td>sound amp guitar mic pedal sounds price volume quality cable great bass microphone strings music play recording 000 tone unit</td><td>sound guitar fit easy well 0000 works car quality light music cover work one set nice looks 00 install unit</td></tr></table>",
"html": null,
"text": "well fit mower fits job gas hose light heavy easily stand back nice works use enough pressure fit easy well works car light sound quality work guitar would 0000 cover nice looks bought install battery 00 fits",
"type_str": "table",
"num": null
},
"TABREF6": {
"content": "<table/>",
"html": null,
"text": "Top F-score, accuracy, and AUC on prediction tasks for all datasets.",
"type_str": "table",
"num": null
}
}
}
}