{ "paper_id": "D15-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:28:50.052223Z" }, "title": "Birds of a Feather Linked Together: A Discriminative Topic Model using Link-based Priors", "authors": [ { "first": "Weiwei", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Computer Science University of Maryland College Park", "location": { "region": "MD" } }, "email": "wwyang@cs.umd.edu" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "", "affiliation": { "laboratory": "Jordan.Boyd.Graber@ colorado.edu", "institution": "University of Colorado Boulder", "location": { "region": "CO" } }, "email": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "", "affiliation": { "laboratory": "", "institution": "UMIACS University of Maryland", "location": { "settlement": "College Park", "region": "MD" } }, "email": "resnik@umd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A wide range of applications, from social media to scientific literature analysis, involve graphs in which documents are connected by links. We introduce a topic model for link prediction based on the intuition that linked documents will tend to have similar topic distributions, integrating a max-margin learning criterion and lexical term weights in the loss function. We validate our approach on the tweets from 2,000 Sina Weibo users and evaluate our model's reconstruction of the social network.", "pdf_parse": { "paper_id": "D15-1030", "_pdf_hash": "", "abstract": [ { "text": "A wide range of applications, from social media to scientific literature analysis, involve graphs in which documents are connected by links. We introduce a topic model for link prediction based on the intuition that linked documents will tend to have similar topic distributions, integrating a max-margin learning criterion and lexical term weights in the loss function. We validate our approach on the tweets from 2,000 Sina Weibo users and evaluate our model's reconstruction of the social network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many application areas for text analysis involve documents connected by links of one or more types-for example, analysis of scientific papers (citations, co-authorship), Web pages (hyperlinks), legislation (co-sponsorship, citations), and social media (followers, mentions, etc.) . In this paper we work within the widely used framework of topic modeling (Blei et al., 2003, LDA) to develop a model that is simple and intuitive, but which identifies high quality topics while also accurately predicting link structure.", "cite_spans": [ { "start": 252, "end": 279, "text": "(followers, mentions, etc.)", "ref_id": null }, { "start": 355, "end": 379, "text": "(Blei et al., 2003, LDA)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work here is inspired by the phenomenon of homophily, the tendency of people to associate with others who are like themselves (McPherson et al., 2001) . As manifested in social networks, the intuition is that people who are associated with one another are likely to discuss similar topics, and vice versa. The new topic model we propose therefore takes association links into account so that a document's topic distribution is influenced by the topic distributions of its neighbors. 
Specifically, we propose a joint model that uses link structure to define clusters (cliques) of documents and, following the intuition that documents in the same cluster are likely to have similar topic distributions, assigns each cluster its own separate Dirichlet prior over the cluster's topic distribution. This use of priors is consistent with previous work showing that document-topic priors are useful for encoding various types of prior knowledge and improving topic modeling performance (Mimno and McCallum, 2008). We then use distributed representations to \"seed\" the topics before modeling the documents. Our joint objective function uses a discriminative, max-margin approach (Zhu et al., 2012; Zhu et al., 2014) to both model the contents of documents and produce good predictions of links; in addition, it improves prediction by including lexical terms in the decision function (Nguyen et al., 2013).", "cite_spans": [ { "start": 130, "end": 154, "text": "(McPherson et al., 2001)", "ref_id": "BIBREF9" }, { "start": 986, "end": 1012, "text": "(Mimno and McCallum, 2008)", "ref_id": "BIBREF11" }, { "start": 1210, "end": 1228, "text": "(Zhu et al., 2012;", "ref_id": "BIBREF19" }, { "start": 1229, "end": 1246, "text": "Zhu et al., 2014)", "ref_id": "BIBREF20" }, { "start": 1414, "end": 1435, "text": "(Nguyen et al., 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our baseline for comparison is the Relational Topic Model (Chang and Blei, 2010, henceforth RTM), which jointly captures topics and binary link indicators in a style similar to supervised LDA (McAuliffe and Blei, 2008, sLDA), instead of modeling links alone as in, e.g., the Latent Multi-group Membership Graph model (Kim and Leskovec, 2012, LMMG). We also compare our approach with Daumé III (2009), who uses document links to create a Markov random topic field (MRTF). Daumé does not, however, look at link prediction, as his model is upstream (in the sense of Mimno and McCallum, 2008): it only generates documents conditioned on links. In contrast, our downstream model allows the prediction of links, like RTM.", "cite_spans": [ { "start": 58, "end": 96, "text": "(Chang and Blei, 2010, henceforth RTM)", "ref_id": null }, { "start": 193, "end": 225, "text": "(McAuliffe and Blei, 2008, sLDA)", "ref_id": null }, { "start": 319, "end": 349, "text": "(Kim and Leskovec, 2012, LMMG)", "ref_id": null }, { "start": 386, "end": 402, "text": "Daumé III (2009)", "ref_id": "BIBREF5" }, { "start": 547, "end": 573, "text": "(Mimno and McCallum, 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our model's primary contribution is in its novel combination of a straightforward joint modeling approach, max-margin learning, and exploitation of lexical information in both topic seeding and regression, yielding a simple but effective model for topic-informed discriminative link prediction.
Like other topic models that treat binary values \"probabilistically\", our model can convert binary link indicators into non-zero weights, with potential application to improving models like that of Volkova et al. (2014), who use neighbor relationships to improve prediction of user-level attributes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: A graphical model of our approach for two documents. The contribution of our model is the use of document clusters ($\pi$), the use of words ($w$) in the prediction of document links ($y$), and a max-margin objective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our corpus is collected from Sina Weibo with three types of links between documents. We first conduct a reality check of our model against LDA and MRTF and then perform link prediction tasks. We demonstrate improvements in link prediction as measured by predictive link rank and provide both qualitative and quantitative perspectives on the improvements achieved by the model. Figure 1 is a two-document segment of our model, which has the following generative process:", "cite_spans": [], "ref_spans": [ { "start": 377, "end": 385, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. For each related-document cluster $l \in \{1, \ldots, L\}$: draw $\pi_l \sim \mathrm{Dir}(\alpha')$. 2. For each topic $k \in \{1, \ldots, K\}$: (a) draw word distribution $\phi_k \sim \mathrm{Dir}(\beta)$; (b) draw topic regression parameter $\eta_k \sim \mathcal{N}(0, \nu^2)$. 3. For each word type $v \in \{1, \ldots, V\}$: draw lexical regression parameter $\tau_v \sim \mathcal{N}(0, \nu^2)$. 4. For each document $d \in \{1, \ldots, D\}$: (a) draw topic proportions $\theta_d \sim \mathrm{Dir}(\alpha \pi_{l_d})$; (b) for each token $w_{d,n}$ in document $d$: i. draw a topic assignment $z_{d,n} \sim \mathrm{Mult}(\theta_d)$; ii. draw a word $w_{d,n} \sim \mathrm{Mult}(\phi_{z_{d,n}})$. 5. For each linked pair of documents $d$ and $d'$: draw the binary link indicator $y_{d,d'} \mid z_d, z_{d'}, w_d, w_{d'} \sim \Psi(\cdot \mid z_d, z_{d'}, w_d, w_{d'}, \eta, \tau)$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Links from Topics", "sec_num": "2" }, { "text": "Step 1: Identifying birds of a feather. Prior to the generative process, given a training set of documents and document-to-document links, we begin by identifying small clusters or cliques using strongly connected components, which automatically determines the number of clusters from the link graph. Intuitively, documents in the same clique are likely to have similar topic distributions. Therefore, each of the L cliques $l$ (the \"birds of a feather\" of our title) is assigned a separate Dirichlet prior $\pi_l$ over the K topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Links from Topics", "sec_num": "2"
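}, { "text": "As a concrete sketch of Step 1, cluster assignments can be derived from the link graph before inference begins. The snippet below treats links as undirected and extracts components with union-find; the paper uses strongly connected components on the directed graph, and all names here are illustrative rather than the authors' implementation:

def find(parent, d):
    # Path-halving find for union-find.
    while parent[d] != d:
        parent[d] = parent[parent[d]]
        d = parent[d]
    return d

def cluster_documents(num_docs, links):
    # links: iterable of (d, d_prime) document pairs.
    parent = list(range(num_docs))
    for d, dp in links:
        rd, rdp = find(parent, d), find(parent, dp)
        if rd != rdp:
            parent[rdp] = rd
    # Map each component root to a cluster id l in {0, ..., L-1}.
    roots = {}
    cluster_of = []
    for d in range(num_docs):
        r = find(parent, d)
        cluster_of.append(roots.setdefault(r, len(roots)))
    return cluster_of  # l_d for each document d

Each resulting cluster $l$ then receives its own Dirichlet prior $\pi_l$, so the number of clusters L falls out of the graph rather than being set by hand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Links from Topics", "sec_num": "2"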
}, { "text": "Therefore, each of the L cliques l (the \"birds of a feather\" of our title) is assigned a separate Dirichlet prior \u03c0 l over K topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "Step 2a: Using seed words to improve topic quality. To improve topic quality, we identify seed words for the K topics using distributed lexical representations: the key idea is to complement the more global information captured in LDAstyle topics with representations based on local contextual information. We cluster the most frequent words' word2vec representations (Mikolov et al., 2013) into K word-clusters using the kmeans algorithm, based on the training corpus. 1 We then enforce a one-to-one association between these discovered word clusters and the K topics. For any word token w d,n whose word type is in cluster k, the associated topic assignment z d,n can only be k. To choose topic k's seed words, within its word-cluster we compute each word w k,i 's skip-gram transition probability sum S k,i to the other words as", "cite_spans": [ { "start": 368, "end": 390, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF10" }, { "start": 470, "end": 471, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S k,i = N k j=1,j =i p(w k,j | w k,i ),", "eq_num": "(1)" } ], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "where N k denotes the number of words in topic k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "We then select the three words with the highest sum of transition probabilities as the seed words for topic k. In the sampling process (Section 3), seed words are only assigned to their corresponding topics, similar to the use of hard constraints by Andrzejewski and Zhu (2009) .", "cite_spans": [ { "start": 250, "end": 277, "text": "Andrzejewski and Zhu (2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "Steps 2b-3: Link regression parameters. Given two documents d and d , we want to predict whether they are linked by taking advantage of their topic patterns: the more similar two documents are, the more likely it is that they should be linked together. Like RTM, we will compute a regression in Step 5 using the topic distributions of d and d ; however, we follow Nguyen et al. 2013by also including a document's word-level distribution as a regression input. 2 The regression value of document d and d is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "R d,d = \u03b7 T (z d \u2022 z d ) + \u03c4 T (w d \u2022 w d ), (2) where z d = 1 N d n z d,n , and w d = 1 N d n w d,n ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." 
}, { "text": "\u2022 denotes the Hadamard product; \u03b7 and \u03c4 are the weight vectors for topic-based and lexically-based predictions, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "Step 4: Generating documents. Documents are generated as in LDA, where each document's topic distribution \u03b8 is drawn from the cluster's topic prior (a parametric analog to the HDP of Teh et al. (2006) ) and each word's topic assignment is drawn from the document's topic distribution (except for seed words, as described above).", "cite_spans": [ { "start": 183, "end": 200, "text": "Teh et al. (2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "Step 5: Generating links. Our model is a \"downstream\" supervised topic model, i.e., the prediction of the observable variable (here, document links) is informed by the documents' topic distributions, as in sLDA (Blei and McAuliffe, 2007) . In contrast to Chang and Blei (2010) , who use a sigmoid as their link prediction function \u03a8, we instead use hinge loss: the probability \u03a8 that two documents d and d are linked is", "cite_spans": [ { "start": 211, "end": 237, "text": "(Blei and McAuliffe, 2007)", "ref_id": "BIBREF1" }, { "start": 255, "end": 276, "text": "Chang and Blei (2010)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "p(y d,d = 1 | z d , z d , w d , w d ) = exp(\u22122c max(0, \u03b6 d,d )),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "where c is the regularization parameter. In the hinge loss function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ", \u03b6 d,d is \u03b6 d,d = 1 \u2212 y d,d R d,d .", "eq_num": "(3)" } ], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "3 Posterior Inference Sampling Topics. Following Polson and Scott (2011), by introducing an auxiliary variable \u03bb d,d , we derive the conditional probability of a topic assignment", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(z d,n = k | z \u2212d,n , w \u2212d,n , w d,n = v) \u221d N \u2212d,n k,v + \u03b2 N \u2212d,n k,\u2022 + V \u03b2 \u00d7 (N \u2212d,n d,k + \u03b1\u03c0 \u2212d,n l d ,k )\u00d7 d exp \u2212 (c\u03b6 d,d + \u03bb d,d ) 2 2\u03bb d,d ,", "eq_num": "(4)" } ], "section": "For each linked pair of documents d and d", "sec_num": "5." }, { "text": "where N k,v denotes the count of word v assigned to topic k; N d,k is the number of tokens in document d that are assigned to topic k. 
Optimizing topic and lexical regression parameters. While topic regression parameters $\eta$ and lexical regression parameters $\tau$ can be sampled (Zhu et al., 2014), the associated covariance matrix is huge (approximately 12K × 12K in our experiments). Instead, we optimize these parameters using L-BFGS.", "cite_spans": [ { "start": 362, "end": 377, "text": "(Wallach, 2008)", "ref_id": "BIBREF17" }, { "start": 517, "end": 535, "text": "(Zhu et al., 2014)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Posterior Inference", "sec_num": "3" }, { "text": "Sampling auxiliary variables. The likelihood of auxiliary variable $\lambda_{d,d'}$ follows a generalized inverse Gaussian distribution $\mathrm{GIG}(\lambda_{d,d'}; \frac{1}{2}, 1, c^2 \zeta^2_{d,d'})$. Thus we sample $\lambda^{-1}_{d,d'}$ from an inverse Gaussian distribution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Inference", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(\lambda^{-1}_{d,d'} \mid z, w, \eta, \tau) = \mathrm{IG}\left(\lambda^{-1}_{d,d'}; \frac{1}{c|\zeta_{d,d'}|}, 1\right).", "eq_num": "(6)" } ], "section": "Posterior Inference", "sec_num": "3" }, { "text": "4 Experimental Results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4" }, { "text": "We crawl data from Sina Weibo, the largest Chinese micro-blog platform. The dataset contains 2,000 randomly-selected verified users, each represented by a single document aggregating all the user's posts. We also crawl links between pairs of users when both are in our dataset. Links correspond to three types of interactions on Weibo: mentioning, retweeting, and following.[4]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "As an initial reality check, we first apply a simplified version of our model which only uses user interactions for topic modeling and does not predict links. This permits a direct comparison of our model's performance against LDA and Markov random topic fields (Daumé III, 2009, MRTF) by evaluating perplexity.", "cite_spans": [ { "start": 262, "end": 285, "text": "(Daumé III, 2009, MRTF)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Perplexity Results", "sec_num": "4.2" }, { "text": "We set $\alpha = \alpha' = 15$ and use 20 topics for all models in this and the following sections. The results are the average values of five independent runs. Following Daumé, in each run, for each document, 80% of its tokens are randomly selected for training and the remaining 20% are held out for testing. As the training corpus is generated randomly, seeding is not applied in this section.
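For reference, the held-out perplexity used here can be computed from the estimated parameters as in the sketch below (illustrative names; not the authors' evaluation code):

import numpy as np

def perplexity(test_tokens, theta, phi):
    # test_tokens: list of (d, v) pairs of held-out tokens;
    # theta: D x K document-topic matrix; phi: K x V topic-word matrix.
    log_lik = sum(np.log(theta[d] @ phi[:, v]) for d, v in test_tokens)
    return float(np.exp(-log_lik / len(test_tokens)))

Lower perplexity means the model assigns higher probability to the held-out 20% of tokens.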
The results are given in Table 1, where the prefix I- denotes that the model incorporates user interactions.", "cite_spans": [], "ref_spans": [ { "start": 407, "end": 414, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Perplexity Results", "sec_num": "4.2" }, { "text": "The results confirm that our model outperforms both LDA and MRTF and that its use of user interactions holds promise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perplexity Results", "sec_num": "4.2" }, { "text": "Figure 2: Lex-IS-MED-RTM, combining all three extensions, performs the best on predicting mentioning and following links, although IS-RTM achieves a close value on mentioning links and even a slightly better value on retweeting links. User interactions (denoted by \"I\") sometimes bring down the performance, as cluster priors are not applied in this intrinsic evaluation.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Perplexity Results", "sec_num": "4.2" }, { "text": "In this section, we apply our model to link prediction tasks and evaluate with predictive link rank (PLR). A document's PLR is the average rank, among all documents, of the documents to which it actually links; lower values of PLR are therefore better. Figure 2 breaks out the five-fold cross-validation results for the distinct extensions of RTM.[5] The results support the value of combining all three extensions in Lex-IS-MED-RTM, although for mentioning and retweeting, Lex-IS-MED-RTM and IS-RTM are quite close.", "cite_spans": [ { "start": 349, "end": 350, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 257, "end": 265, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Link Prediction Results", "sec_num": "4.3" }, { "text": "Applying user interactions does not always produce improvements. This is because in our intrinsic evaluation, we assume that the links on the test set are not observable and cluster priors are not applied. However, according to the training performance (extrinsic evaluations still in progress), user interactions do benefit link prediction performance when links are partially available, e.g., when suggesting more links based on observed links. In contrast, hinge loss and lexical term weights do not depend on metadata availability and generally produce improvements in link prediction performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Link Prediction Results", "sec_num": "4.3" }, { "text": "Topic Quality. Automatic coherence detection (Lau et al., 2014) is an alternative to manual evaluation of topic quality (Chang et al., 2009). In each topic, the top n words' average pointwise mutual information (PMI), computed against a reference corpus, serves as a measure of topic coherence.[7] Topic quality improves with user interactions and max-margin learning (Table 3). PMI drops when lexical terms are added to the link probability function, however. This is consistent with the role of lexical terms in the model; their purpose is to improve link prediction performance, not topic quality.", "cite_spans": [ { "start": 45, "end": 63, "text": "(Lau et al., 2014)", "ref_id": "BIBREF7" }, { "start": 121, "end": 141, "text": "(Chang et al., 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 359, "end": 368, "text": "(Table 3)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Quantitative Analysis", "sec_num": "4.5" }, { "text": "Average Regression Value. One way to assess the quality of link prediction is to compare the scores of (ground-truth) linked documents to documents in general. In Table 3, the Average Regression Values show this comparison as a ratio. The higher the ratio, the more linked document pairs differ from unlinked pairs, which means that linked documents are easier to distinguish. This ratio improves as RTM extensions are added, indicating better link modeling quality.
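The diagnostic itself is simple to compute once the regression values of Equation 2 are available (a sketch with illustrative names; the paper does not state over which population the SD/Avg ratio is taken, so all pairs are assumed here):

import numpy as np

def regression_diagnostics(scores, linked_pairs):
    # scores: dict mapping (d, d_prime) to the regression value R.
    all_vals = np.array(list(scores.values()))
    linked_vals = np.array([scores[p] for p in linked_pairs])
    ratio = linked_vals.mean() / all_vals.mean()
    sd_over_avg = all_vals.std() / all_vals.mean()
    return ratio, sd_over_avg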
", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 170, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Quantitative Analysis", "sec_num": "4.5" }, { "text": "In the SD/Avg row of Table 3, we also compute a ratio of standard deviations to mean values. Ratios given by the models with hinge loss are lower than those from models not using hinge loss. This means that the regression values given by the models with hinge loss are more concentrated around the average value, suggesting that these models can better identify linked pairs, even though the ratio of linked pairs' average regression value to all pairs' average value is lower.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Quantitative Analysis", "sec_num": "4.5" }, { "text": "We introduce a new topic model that takes advantage of document links, incorporating link information straightforwardly by deriving clusters from the link graph and assigning each cluster a separate Dirichlet prior. We also take advantage of locally-derived distributed representations to \"seed\" the model's latent topics in an informed way, and we integrate max-margin prediction and lexical regression to improve link prediction quality. Our quantitative results show improvements in predictive link rank, and our qualitative and quantitative analyses illustrate that the model's behavior is intuitively plausible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "In future work, we plan to engage in further model analysis and comparison, to explore alterations to model structure (e.g., introducing hierarchical topic models), to use other clustering methods to obtain priors, and to explore the value of predicted links for downstream tasks such as friend recommendation (Pennacchiotti and Gurumurthy, 2011) and inference of user attributes (Volkova et al., 2014).", "cite_spans": [ { "start": 308, "end": 344, "text": "(Pennacchiotti and Gurumurthy, 2011)", "ref_id": "BIBREF13" }, { "start": 378, "end": 400, "text": "(Volkova et al., 2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "[1] In the experiment, seed words must appear at least 1,000 times. [2] Both approaches contrast with the links-only approach of Kim and Leskovec (2012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "[3] $\pi^{-d,n}_{l_d,k} = \frac{\sum_{d' \in S(l_d)} N^{-d,n}_{d',k} + \alpha}{\sum_{d' \in S(l_d)} N^{-d,n}_{d',\cdot} + K\alpha}$, (5) where $S(l_d)$ denotes the cluster that contains document $d$ (Step 1 in the generative process). More details here and throughout this section appear in the supplementary materials.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "[4] We use ICTCLAS (Zhang et al., 2003) for segmentation.
After stopword and low-frequency word removal, the vocabulary includes 12,257 words, with ~755 tokens per document and 5,404 links.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "[5] IS- denotes that the model incorporates user interactions and seed words, Lex- means that lexical terms are included in the link probability function (Equation 3), and MED- denotes max-margin learning (Zhu et al., 2012; Zhu et al., 2014). Each type of link is applied separately; e.g., in Figure 2(a) results are based only on mentioning links, ignoring retweeting and following links.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "[6] Numerically its proportion is consistently lower for User A, whose interests are more diverse. [7] We set n = 20 and use a reference corpus of 1,143,525 news items from Sogou Lab, comprising items from June to July 2012, http://www.sogou.com/labs/dl/ca.html. Each item averages ~347 tokens, using the same segmentation scheme as the experimental corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Hal Daumé III for providing his code. This work was supported in part by NSF award 1211153. Boyd-Graber is supported by NSF Grants CCF-1409287, IIS-1320538, and NCSE-1422492. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "We illustrate model behavior qualitatively by looking at two test set users, designated A and B. User A is a reporter who runs \"We Media\" on his account, sending news items to followers, and B is a consultant with a wide range of interests. Their tweets reveal that both are interested in social news, a topic emphasizing words like society, country, government, laws, leaders, political party, news, etc. Both often retweet news related to unfairness in society and local government scandals (government, police, leaders, party, policy, chief secretary). For example, User A retweeted a report that a person about to be executed was unable to take a photo with his family before the execution, writing \"I feel heartbroken.\" User B retweeted news that a mayor was fired and investigated because of a bribe; in his retweet, he expressed his dissatisfaction with what the mayor did when he was in power. In addition, User A follows new technology (smart phones, Apple, Samsung, software, hardware, etc.) and B is interested in food (snacks, noodles, wine, fish, etc.). As ground truth, there is a mentioning link from A to B; Table 2 shows this link's PLR in the mentioning models, which generally improves with model sophistication.
The mentioning tweet is a news item that is consistent with the model's characterization of the users' interests (particularly social news and technology): a Samsung Galaxy S4 exploded and caused a fire while charging. Consistent with intuition, the prevalence of the social news topic also generally increases as the models grow more sophisticated.[6]", "cite_spans": [], "ref_spans": [ { "start": 1119, "end": 1126, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Illustrative Example", "sec_num": "4.4" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Latent Dirichlet allocation with topic-in-set knowledge", "authors": [ { "first": "David", "middle": [], "last": "Andrzejewski", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2009, "venue": "Conference of the North American Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Andrzejewski and Xiaojin Zhu. 2009. Latent Dirichlet allocation with topic-in-set knowledge. In Conference of the North American Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Supervised topic models", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Jon", "middle": [ "D" ], "last": "McAuliffe", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei and Jon D. McAuliffe. 2007. Supervised topic models. In Proceedings of Advances in Neural Information Processing Systems.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hierarchical relational models for document networks", "authors": [ { "first": "Jonathan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2010, "venue": "The Annals of Applied Statistics", "volume": "", "issue": "", "pages": "124--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Chang and David M. Blei. 2010. Hierarchical relational models for document networks. The Annals of Applied Statistics, pages 124-150.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Reading tea leaves: How humans interpret topic models", "authors": [ { "first": "Jonathan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Gerrish", "suffix": "" }, { "first": "Chong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jordan", "middle": [ "L" ], "last": "Boyd-Graber", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L. Boyd-Graber, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Proceedings of Advances in Neural Information Processing Systems.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Markov random topic fields", "authors": [ { "first": "Hal", "middle": [], "last": "Daumé", "suffix": "III" } ], "year": 2009, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daumé III. 2009. Markov random topic fields. In Proceedings of the Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Latent multi-group membership graph model", "authors": [ { "first": "Myunghwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the International Conference of Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myunghwan Kim and Jure Leskovec. 2012. Latent multi-group membership graph model. In Proceedings of the International Conference of Machine Learning.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality", "authors": [ { "first": "Jey", "middle": [ "Han" ], "last": "Lau", "suffix": "" }, { "first": "David", "middle": [], "last": "Newman", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Supervised topic models", "authors": [ { "first": "Jon", "middle": [ "D" ], "last": "McAuliffe", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jon D. McAuliffe and David M. Blei. 2008. Supervised topic models. In Proceedings of Advances in Neural Information Processing Systems.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Birds of a feather: Homophily in social networks", "authors": [ { "first": "Miller", "middle": [], "last": "McPherson", "suffix": "" }, { "first": "Lynn", "middle": [], "last": "Smith-Lovin", "suffix": "" }, { "first": "James", "middle": [ "M" ], "last": "Cook", "suffix": "" } ], "year": 2001, "venue": "Annual Review of Sociology", "volume": "", "issue": "", "pages": "415--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller McPherson, Lynn Smith-Lovin, and James M. Cook. 2001. Birds of a feather: Homophily in social networks. Annual Review of Sociology, pages 415-444.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of Advances in Neural Information Processing Systems.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Topic models conditioned on arbitrary features with Dirichlet-multinomial regression", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Mimno", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Mimno and Andrew McCallum. 2008. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In Proceedings of Uncertainty in Artificial Intelligence.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Lexical and hierarchical topic regression", "authors": [ { "first": "Viet-An", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Jordan", "middle": [ "L" ], "last": "Boyd-Graber", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Viet-An Nguyen, Jordan L. Boyd-Graber, and Philip Resnik. 2013. Lexical and hierarchical topic regression. In Proceedings of Advances in Neural Information Processing Systems.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Investigating topic models for social media user recommendation", "authors": [ { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Gurumurthy", "suffix": "" } ], "year": 2011, "venue": "Proceedings of World Wide Web Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Pennacchiotti and Siva Gurumurthy. 2011. Investigating topic models for social media user recommendation. In Proceedings of World Wide Web Conference.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Data augmentation for support vector machines", "authors": [ { "first": "Nicholas", "middle": [ "G" ], "last": "Polson", "suffix": "" }, { "first": "Steven", "middle": [ "L" ], "last": "Scott", "suffix": "" } ], "year": 2011, "venue": "Bayesian Analysis", "volume": "6", "issue": "1", "pages": "1--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicholas G. Polson and Steven L. Scott. 2011. Data augmentation for support vector machines. Bayesian Analysis, 6(1):1-23.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Hierarchical Dirichlet processes", "authors": [ { "first": "Yee Whye", "middle": [], "last": "Teh", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "Matthew", "middle": [ "J" ], "last": "Beal", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2006, "venue": "Journal of the American Statistical Association", "volume": "101", "issue": "476", "pages": "1566--1581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Inferring user political preferences from streaming communications", "authors": [ { "first": "Svitlana", "middle": [], "last": "Volkova", "suffix": "" }, { "first": "Glen", "middle": [], "last": "Coppersmith", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political preferences from streaming communications. In Proceedings of the Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Structured topic models for language", "authors": [ { "first": "Hanna", "middle": [ "M" ], "last": "Wallach", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanna M. Wallach. 2008. Structured topic models for language. Ph.D. thesis, University of Cambridge.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "HHMM-based Chinese lexical analyzer ICTCLAS", "authors": [ { "first": "Hua-Ping", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hong-Kui", "middle": [], "last": "Yu", "suffix": "" }, { "first": "De-Yi", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the second SIGHAN workshop on Chinese language processing", "volume": "17", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. HHMM-based Chinese lexical analyzer ICTCLAS. In Proceedings of the second SIGHAN workshop on Chinese language processing, Volume 17.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "MedLDA: Maximum margin supervised topic models", "authors": [ { "first": "Jun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Amr", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2012, "venue": "Journal of Machine Learning Research", "volume": "13", "issue": "1", "pages": "2237--2278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Zhu, Amr Ahmed, and Eric P. Xing. 2012. MedLDA: Maximum margin supervised topic models. Journal of Machine Learning Research, 13(1):2237-2278.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Gibbs max-margin topic models with data augmentation", "authors": [ { "first": "Jun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hugh", "middle": [], "last": "Perkins", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Zhu, Ning Chen, Hugh Perkins, and Bo Zhang. 2014. Gibbs max-margin topic models with data augmentation. Journal of Machine Learning Research, 15(1).", "links": null } }, "ref_entries": { "TABREF3": { "type_str": "table", "content": "
Table 2 reports the PLR of the mentioning link from User A to User B under each mentioning model; the cell values were not recovered.
", "num": null, "text": "Data for Illustrative Example", "html": null }, "TABREF4": { "type_str": "table", "content": "
Model                        RTM      IS-RTM   Lex-IS-RTM  MED-RTM  IS-MED-RTM  Lex-IS-MED-RTM
Topic PMI                    1.186    1.224    1.216       1.214    1.294       1.229
Average Regression Values:
  Linked Pairs               0.2403   0.3692   0.4031      0.7220   0.6321      0.7668
  All Pairs                  0.06636  0.07729  0.08020     0.2482   0.2041      0.2428
  Ratio                      3.621    4.777    5.026       2.909    3.097       3.158
  SD/Avg                     0.9415   1.2081   1.2671      0.6364   0.7254      0.7353
", "num": null, "text": "", "html": null } } } }