{ "paper_id": "D12-1049", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:23:38.512341Z" }, "title": "Modelling Sequential Text with an Adaptive Topic Model", "authors": [ { "first": "Lan", "middle": [], "last": "Du", "suffix": "", "affiliation": {}, "email": "lan.du@mq.edu.au" }, { "first": "Wray", "middle": [], "last": "Buntine", "suffix": "", "affiliation": {}, "email": "wray.buntine@nicta.com.au" }, { "first": "Huidong", "middle": [], "last": "Jin", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Topic models are increasingly being used for text analysis tasks, often times replacing earlier semantic techniques such as latent semantic analysis. In this paper, we develop a novel adaptive topic model with the ability to adapt topics from both the previous segment and the parent document. For this proposed model, a Gibbs sampler is developed for doing posterior inference. Experimental results show that with topic adaptation, our model significantly improves over existing approaches in terms of perplexity, and is able to uncover clear sequential structure on, for example, Herman Melville's book \"Moby Dick\". * This work was partially done when Du was at College of Engineering & Computer Science, the Australian National University when working together with Buntine and Jin there.", "pdf_parse": { "paper_id": "D12-1049", "_pdf_hash": "", "abstract": [ { "text": "Topic models are increasingly being used for text analysis tasks, often times replacing earlier semantic techniques such as latent semantic analysis. In this paper, we develop a novel adaptive topic model with the ability to adapt topics from both the previous segment and the parent document. For this proposed model, a Gibbs sampler is developed for doing posterior inference. Experimental results show that with topic adaptation, our model significantly improves over existing approaches in terms of perplexity, and is able to uncover clear sequential structure on, for example, Herman Melville's book \"Moby Dick\". * This work was partially done when Du was at College of Engineering & Computer Science, the Australian National University when working together with Buntine and Jin there.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural language text usually consists of topically structured and coherent components, such as groups of sentences that form paragraphs and groups of paragraphs that form sections. Topical coherence in documents facilitates readers' comprehension, and reflects the author's intended structure. Capturing this structural topical dependency should lead to improved topic modelling. It also seems reasonable to propose that text analysis tasks that involve the structure of a document, for instance, summarisation and segmentation, should also be improved by topic models that better model that structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, topic models are increasingly being used for text analysis tasks such as summarisa-tion (Arora and Ravindran, 2008) and segmentation (Misra et al., 2011; Eisenstein and Barzilay, 2008) , often times replacing earlier semantic techniques such as latent semantic analysis (Deerwester et al., 1990) . 
Topic models can be improved by better modelling the semantic aspects of text, for instance integrating collocations into the model (Johnson, 2010; Hardisty et al., 2010) or encouraging topics to be more semantically coherent (Newman et al., 2011) based on lexical coherence models (Newman et al., 2010) , modelling the structural aspects of documents, for instance modelling a document as a set of segments (Du et al., 2010; Wang et al., 2011; Chen et al., 2009) , or improving the underlying statistical methods Wallach et al., 2009) . Topic models, like statistical parsing methods, are using more sophisticated latent variable methods in order to model different aspects of these problems.", "cite_spans": [ { "start": 98, "end": 125, "text": "(Arora and Ravindran, 2008)", "ref_id": "BIBREF0" }, { "start": 143, "end": 163, "text": "(Misra et al., 2011;", "ref_id": "BIBREF16" }, { "start": 164, "end": 194, "text": "Eisenstein and Barzilay, 2008)", "ref_id": "BIBREF11" }, { "start": 280, "end": 305, "text": "(Deerwester et al., 1990)", "ref_id": "BIBREF8" }, { "start": 440, "end": 455, "text": "(Johnson, 2010;", "ref_id": "BIBREF15" }, { "start": 456, "end": 478, "text": "Hardisty et al., 2010)", "ref_id": "BIBREF14" }, { "start": 534, "end": 555, "text": "(Newman et al., 2011)", "ref_id": "BIBREF18" }, { "start": 590, "end": 611, "text": "(Newman et al., 2010)", "ref_id": "BIBREF17" }, { "start": 716, "end": 733, "text": "(Du et al., 2010;", "ref_id": "BIBREF9" }, { "start": 734, "end": 752, "text": "Wang et al., 2011;", "ref_id": "BIBREF24" }, { "start": 753, "end": 771, "text": "Chen et al., 2009)", "ref_id": "BIBREF6" }, { "start": 822, "end": 843, "text": "Wallach et al., 2009)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we are interested in developing a new topic model which can take into account the structural topic dependency by following the higher level document subject structure, but we hope to retain the general flavour of topic models, where components (e.g., sentences) can be a mixture of topics. Thus we need to depart from the earlier HMM style models, see, e.g., (Blei and Moreno, 2001; Gruber et al., 2007) . Inspired by the idea that documents usually exhibits internal structure (e.g., (Wang et al., 2011) ), in which semantically related units are clustered together to form semantically structural segments, we treat documents as sequences of segments (e.g., sentences, paragraphs, sections, or chapters). 
In this way, we can model the topic correlation be-", "cite_spans": [ { "start": 374, "end": 397, "text": "(Blei and Moreno, 2001;", "ref_id": "BIBREF3" }, { "start": 398, "end": 418, "text": "Gruber et al., 2007)", "ref_id": "BIBREF13" }, { "start": 500, "end": 519, "text": "(Wang et al., 2011)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u00b5 \u03bd 1 \u03bd 2 \u03bd 3 \u03bd 4 \u00b5 \u03bd 1 \u03bd 2 \u03bd 3 \u03bd 4 \u00b5 \u03bd 1 \u03bd 2 \u03bd 3 \u03bd 4 (H) (S) (M) \u00b5 \u03bd 1 \u03bd 2 \u03bd 3 \u03bd 4 (B)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Different structural relationships for topics of sections in a 4-part document, hierarchical (H), sequential (S), both (B) or mixed (M).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "tween the segments in a \"bag of segments\" fashion, i.e., beyond the \"bag of words\" assumption, and reveal how topics evolve among segments. Indeed, we were impressed by the improvement in perplexity obtained by the segmented topic model (STM) (Du et al., 2010) , so we considered the problem of whether one can add sequence information into a structured topic model as well. Figure 1 illustrates the type of structural information being considered, where the vectors are some representation of the content. STM is represented by the hierarchical model. A strictly sequential model would seem unrealistic for some documents, for instance books. A topic model using the strictly sequential model was developed (Du et al., 2012) but it reportedly performs halfway between STM and LDA. In this paper, we develop an adaptive topic model to go beyond a strictly sequential model while allow some hierarchical influence. There are two possible hybrids, one called \"mixed\" has distinct breaks in the sequence, while the other called \"both\" overlays both sequence and hierarchy and there could be relative strengths associated with the arrows. We employ the \"both\" hybrid but use the relative strengths to adaptively allow it to approximate the \"mixed\" hybrid.", "cite_spans": [ { "start": 243, "end": 260, "text": "(Du et al., 2010)", "ref_id": "BIBREF9" }, { "start": 708, "end": 725, "text": "(Du et al., 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 375, "end": 383, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Research in Machine Learning and Natural Language Processing has attempted to model various topical dependencies. Some work considers structure within the sentence level by mixing hidden Markov models (HMMs) and topics on a word by word basis: the aspect HMM (Blei and Moreno, 2001 ) and the HMM-LDA model (Griffiths et al., 2005 ) that models both short-range syntactic dependencies and longer semantic dependencies. These models operate at a finer level than we are considering at a segment (like paragraph or section) level. To make a tool like the HMM work at higher levels, one needs to make stronger assumptions, for instance assigning each sentence a single topic and then topic specific word models can be used: the hidden topic Markov model (Gruber et al., 2007) that models the transitional topic structure; a global model based on the generalised Mallows model (Chen et al., 2009) , and a HMM based content model (Barzilay and Lee, 2004) . 
Researchers have also considered timeseries of topics: various kinds of dynamic topic models, following early work of (Blei and Lafferty, 2006) , represent a collection as a sequence of subcollections in epochs. Here, one is modelling the collections over broad epochs, not the structure of a single document that our model considers. This paper is organised as follows. We first present background theory in Section 2. Then the new model is presented in Section 3, followed by Gibbs sampling theory and algorithm in Sections 4 and 5 respectively. Experiments are reported in Section 6 with a conclusion in Section 7.", "cite_spans": [ { "start": 259, "end": 281, "text": "(Blei and Moreno, 2001", "ref_id": "BIBREF3" }, { "start": 306, "end": 329, "text": "(Griffiths et al., 2005", "ref_id": "BIBREF12" }, { "start": 750, "end": 771, "text": "(Gruber et al., 2007)", "ref_id": "BIBREF13" }, { "start": 872, "end": 891, "text": "(Chen et al., 2009)", "ref_id": "BIBREF6" }, { "start": 924, "end": 948, "text": "(Barzilay and Lee, 2004)", "ref_id": "BIBREF1" }, { "start": 1069, "end": 1094, "text": "(Blei and Lafferty, 2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The basic topic model is first presented in Section 2.1, as a point of departure. In seeking to develop a general sequential topic model, we hope to go beyond a strictly sequential model and allow some hierarchical influence. This, however, presents two challenges: modelling and statistical inference. Hierarchical inference (and thus sequential inference) over probability vectors can be handled using the theory of hierarchical Poisson-Dirichlet processes (PDPs). This is presented in Section 2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The benchmark model for topic modelling is latent Dirichlet allocation (LDA) (Blei et al., 2003) , a latent variable model of documents. Documents are indexed by i, and words w are observed data. The latent variables are \u00b5 i (the topic distribution for a document) and z (the topic assignments for observed words), and the model parameter of \u03c6 k 's (word distributions). These notation are later extended in Ta-ble 1. The generative model is as follows:", "cite_spans": [ { "start": 77, "end": 96, "text": "(Blei et al., 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "The LDA model", "sec_num": "2.1" }, { "text": "\u03c6 k \u223c Dirichlet W ( \u03b3) \u2200 k \u00b5 i \u223c Dirichlet K ( \u03b1) \u2200 i z i,l \u223c Discrete K ( \u00b5 i ) \u2200 i, l w i,l \u223c Discrete K \u03c6 z i,l \u2200 i, l . Dirichlet K (\u2022) is a K-dimensional Dirichlet distribu- tion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The LDA model", "sec_num": "2.1" }, { "text": "The hyper-parameter \u03b3 is a Dirichlet prior on word distributions (i.e., a Dirichlet smoothing on the multinomial parameter \u03c6 k (Blei et al., 2003) ) and the Dirichlet prior \u03b1 on topic distributions.", "cite_spans": [ { "start": 127, "end": 146, "text": "(Blei et al., 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "The LDA model", "sec_num": "2.1" }, { "text": "A discrete probability vector \u00b5 of finite dimension K is sampled from some distribution F \u03c4 ( \u00b5 0 ) with a parameter set, say \u03c4 , and is also dependent on a parent probability vector \u00b5 0 also of finite dimension K. 
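As a concrete illustration of the two sampling schemes just described (the LDA generative process of Section 2.1 and the generic parent–child setup), the following minimal sketch draws a single document with numpy; the corpus sizes and hyper-parameter values are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, W = 20, 500            # number of topics and dictionary size (illustrative)
alpha, gamma = 0.1, 0.1   # symmetric Dirichlet hyper-parameters (assumed values)

# phi_k ~ Dirichlet_W(gamma): one word distribution per topic
phi = rng.dirichlet(np.full(W, gamma), size=K)

def generate_document(length):
    mu = rng.dirichlet(np.full(K, alpha))               # mu_i ~ Dirichlet_K(alpha)
    z = rng.choice(K, size=length, p=mu)                # z_{i,l} ~ Discrete_K(mu_i)
    w = np.array([rng.choice(W, p=phi[k]) for k in z])  # w_{i,l} ~ Discrete_W(phi_{z_{i,l}})
    return z, w

z, w = generate_document(200)
n = np.bincount(z, minlength=K)   # the count vector n used in Section 2.2
```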
Then a sample of size N is taken according to the probability vector \u00b5, represented as z \u2208 {1, ..., K} N . This data is collected into counts n = (n 1 , ..., n K ) where n k is the number of data in z with value k and k n k = N . This situation is represented as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "\u00b5 \u223c F \u03c4 ( \u00b5 0 ); z i \u223c Discrete K ( \u00b5) for i = 1, ..., N .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "Commonly in topic modelling, the Dirichlet distribution is used for discrete probability vectors. In this case", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "F \u03c4 ( \u00b5 0 ) \u2261 Dirichlet K (b \u00b5 0 ), \u03c4 \u2261 (K, b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "where b is the concentration parameter. Bayesian analysis yields a marginalised likelihood, after integrating out \u00b5, of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p z \u03c4, \u00b5 0 , Dirichlet = Beta ( n + b \u00b5 0 ) Beta (b \u00b5 0 ) ,", "eq_num": "(1)" } ], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "where Beta(\u2022) is the vector valued function normalising the Dirichlet distribution. A problem here is that p( z|b, \u00b5 0 ) is an intractable function of \u00b5 0 . Dirichlet processes and Poisson-Dirichlet processes alleviate this problem by using an auxiliary variable trick (Robert and Casella, 2004) . That is, we introduce an auxiliary variable over which we also sample but do not need to record. The auxiliary variable is the table count 1 which is a t k for each n k and it represents the number of \"tables\" over which the n k \"customers\" are spread out. Thus the following constraints hold:", "cite_spans": [ { "start": 269, "end": 295, "text": "(Robert and Casella, 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "0 \u2264 t k \u2264 n k and t k = 0 iff n k = 0 . (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "When the distribution over probability vectors follows a Poisson-Dirichlet process which has two parameters \u03c4 \u2261 (a, b) and the parent distribution \u00b5 0 , then F \u03c4 ( \u00b5 0 ) \u2261 PDP(a, b, \u00b5 0 ). Here a is the discount parameter, b the concentration parameter and \u00b5 0 the base measure. 
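Before turning to the PDP case, note that the marginalised likelihood of Equation (1) is straightforward to evaluate in log space with the log-gamma function; a small sketch with made-up counts and parent vector (the intractability discussed above concerns treating it as a function of the parent, not evaluating it):

```python
import numpy as np
from scipy.special import gammaln

def log_beta(x):
    # log of the Dirichlet normaliser: sum_k log Gamma(x_k) - log Gamma(sum_k x_k)
    return gammaln(x).sum() - gammaln(x.sum())

def log_marginal_dirichlet(n, b, mu0):
    # Equation (1): p(z | b, mu0) = Beta(n + b*mu0) / Beta(b*mu0)
    return log_beta(n + b * mu0) - log_beta(b * mu0)

n = np.array([3.0, 0.0, 5.0, 2.0])   # topic counts collected from z (illustrative)
mu0 = np.full(4, 0.25)               # parent probability vector (illustrative)
print(log_marginal_dirichlet(n, b=10.0, mu0=mu0))
```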
In this case Bayesian analysis yields an augmented marginalised likelihood (Buntine and Hutter, 2012), after integrating out \u00b5, of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "p z, t \u03c4, \u00b5 0 , PDP = (b|a) T (b) N k S n k t k ,a (\u00b5 0,k ) t k (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "T = k t k , (x|y) N = N \u22121 n=0 (x + ny) de- notes the Pochhammer symbol, (x) N = (x|1) N , and S N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "M,a is a generalized Stirling number that is readily tabulated (Buntine and Hutter, 2012) .", "cite_spans": [ { "start": 63, "end": 89, "text": "(Buntine and Hutter, 2012)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "There are two fundamental things to notice about Equation 3. Positively, the term in \u00b5 0 takes the form of a multinomial likelihood, so we can propagate it up and perform inference on \u00b5 0 unencumbered by the functional mess of Equation (1). Thus Poisson-Dirichlet processes allow one to do Bayesian reasoning on hierarchies of probability vectors (Teh, 2006; . Negatively, however, one needs to sample the auxiliary variables t leading to some problems: The range of t k , {0, ..., n k }, is broad. Also, contributions from individual data z i have been lost so the mixing of the MCMC can sometimes be slow. We confirmed these problems on our first implementation of the Adaptive Topic Model presented next in Section 3.", "cite_spans": [ { "start": 347, "end": 358, "text": "(Teh, 2006;", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "A further improvement on PDP sampling is achieved in (Chen et al., 2011) , where another auxiliary variable is introduced, a so-called table indicator, that for each datum z i indicates whether it is the \"head of its table\" (recall the n k data are spread over t k tables, each table has one and only one \"head\"). Let r i = 1 if z i is the \"head of its table,\" and zero otherwise. According to this \"table\" logic, the number of tables for n k must be the number of data z i that are also head of table, so", "cite_spans": [ { "start": 53, "end": 72, "text": "(Chen et al., 2011)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "t k = N i=1 1 z i =k 1 r i =1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": ". Moreover, given this definition, the first constraint of Equation (2) on t k is automatically satisfied. 
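As a computational aside, Equation (3) can be evaluated once the generalised Stirling numbers are tabulated. The sketch below assumes the usual recurrence S^{N+1}_{M,a} = S^{N}_{M-1,a} + (N - Ma) S^{N}_{M,a} with S^{0}_{0,a} = 1 (see Buntine and Hutter, 2012, for the exact conventions) and accumulates the Pochhammer symbols in log space; all counts are made up for illustration.

```python
import numpy as np

def log_stirling_table(N, a):
    # S[n, m] stores log S^n_{m,a}; -inf marks structural zeros
    S = np.full((N + 1, N + 1), -np.inf)
    S[0, 0] = 0.0
    for n in range(N):
        for m in range(1, n + 2):
            low = S[n, m - 1]                    # S^n_{m-1,a}
            high = -np.inf
            if S[n, m] > -np.inf:                # (n - m*a) * S^n_{m,a}
                high = S[n, m] + np.log(n - m * a)
            S[n + 1, m] = np.logaddexp(low, high)
    return S

def log_pochhammer(x, y, N):
    # log (x|y)_N = sum_{n=0}^{N-1} log(x + n*y);  (x)_N = (x|1)_N
    return float(sum(np.log(x + i * y) for i in range(N)))

def log_pdp_marginal(n, t, a, b, mu0, S):
    # Equation (3): (b|a)_T / (b)_N * prod_k S^{n_k}_{t_k,a} * mu0_k^{t_k}
    T, N = int(t.sum()), int(n.sum())
    val = log_pochhammer(b, a, T) - log_pochhammer(b, 1.0, N)
    for nk, tk, mk in zip(n, t, mu0):
        if nk > 0:
            val += S[nk, tk] + tk * np.log(mk)
    return val

S = log_stirling_table(50, a=0.2)
n = np.array([3, 0, 5, 2])      # data counts (illustrative)
t = np.array([1, 0, 2, 1])      # table counts satisfying Equation (2)
print(log_pdp_marginal(n, t, a=0.2, b=10.0, mu0=np.full(4, 0.25), S=S))
```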
Finally, with t k tables then there must be exactly t k heads of table, and we are indifferent about which data are heads of table, thus", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "p z, r \u03c4, \u00b5 0 , PDP = p z, t \u03c4, \u00b5 0 , PDP k n k t k \u22121 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "(4) When using this marginalised likelihood in a Gibbs sampler, the z i themselves are usually latent so also sampled, and we develop a blocked Gibbs sampler for (z i , r i ). Since r only appears indirectly through the table counts t, one does not need to store the r, instead just resamples an r i when needed according to the proportion t w /n w where z i = w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical PDPs", "sec_num": "2.2" }, { "text": "In this section an adaptive topic model (AdaTM) is developed, a fully structured topic model, by using a PDP to simultaneously model the hierarchical and the sequential topic structures. Documents are assumed to be broken into a sequence of segments. Topic distributions are used to mimic the subjects of documents and subtopics of their segments. The notations and terminologies used in the following sections are given in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 424, "end": 431, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "The proposed Adaptive Topic Model", "sec_num": "3" }, { "text": "In AdaTM, the two topic structures are captured by drawing topic distributions from the PDPs with two base distributions as follows. The document topic distribution \u00b5 i and the j th segment topic dis- ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The proposed Adaptive Topic Model", "sec_num": "3" }, { "text": "J i number of segments in document i L i,j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The proposed Adaptive Topic Model", "sec_num": "3" }, { "text": "number of words in document i, segment j W number of words in dictionary \u00b5 i document topic probabilities for document i \u03b1 K-dimensional prior for each \u00b5 i \u03bd i,j segment topic probabilities for document i and segment j \u03c1 i,j mixture weight associating with the link between \u03bd i.j and \u03bd i,j\u22121 \u03a6 word probability vectors as a K \u00d7 W matrix \u03c6 k word probability vector for topic k, entries in \u03a6 \u03b3 W -dimensional prior for each Figure 2 : The adaptive topic model: \u00b5 is the document topic distribution, \u03bd 1 , \u03bd 2 , . . . , \u03bd J are the segment topic distributions, and \u03c1 is a set of the mixture weights.", "cite_spans": [], "ref_spans": [ { "start": 423, "end": 431, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The proposed Adaptive Topic Model", "sec_num": "3" }, { "text": "\u03c6 k w i,j,l word in document i, segment j, position l z i,j,l topic for word w i,j,l w L z I K \u03b1 \u03bc \u03bd \u03b3 \u03c6 1 \u03bd 2 1 w L z 2 \u3002\u3002\u3002 \u03bd J w L z J \u3002\u3002\u3002 \u03bb", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The proposed Adaptive Topic Model", "sec_num": "3" }, { "text": "tribution \u03bd i,j are linearly combined to give a base distribution for the (j + 1) th segment's topic distribution \u03bd i,j+1 . 
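A schematic of this topic-adaptation step is sketched below (the first segment, as noted next, is effectively drawn with base measure mu_i alone, since nu_{i,0} = mu_i). Because exact PDP sampling is not shown here, a Dirichlet draw centred on the mixed base measure stands in for PDP(., a, b) purely for illustration; all numeric settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, J = 20, 8                                   # topics and segments (illustrative)
alpha, lam_S, lam_T, b = 0.1, 1.0, 1.0, 10.0   # hyper-parameters (assumed values)

mu = rng.dirichlet(np.full(K, alpha))          # document topic distribution mu_i
nu = []                                        # segment topic distributions nu_{i,1..J}
for j in range(J):
    rho = rng.beta(lam_S, lam_T)               # mixture weight rho_{i,j} ~ Beta(lam_S, lam_T)
    prev = nu[-1] if nu else mu                # nu_{i,0} is defined to be mu_i
    base = rho * prev + (1.0 - rho) * mu       # base measure for the next segment's PDP
    # Illustration only: a Dirichlet draw with concentration b stands in for
    # PDP(base, a, b); the model itself uses the Poisson-Dirichlet process.
    nu.append(rng.dirichlet(b * base + 1e-9))
```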
The topic distribution of the first segment, i.e., \u03bd i,1 , is drawn directly with the base distribution \u00b5 i . Call this generative process topic adaptation. The graphical representation of AdaTM is shown in Figure 2 , and clearly shows the combination of sequence and hierarchy for the topic probabilities. Note the linear combination at each node \u03bd i,j is weighted with latent proportions \u03c1 i,j . The resultant model for AdaTM is:", "cite_spans": [], "ref_spans": [ { "start": 331, "end": 339, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The proposed Adaptive Topic Model", "sec_num": "3" }, { "text": "\u03c6 k \u223c Dirichlet W ( \u03b3) \u2200 k \u00b5 i \u223c Dirichlet K ( \u03b1) \u2200 i \u03c1 i,j \u223c Beta(\u03bb S , \u03bb T ) \u2200 i, j \u03bd i,j \u223c PDP (\u03c1 i,j \u03bd i,j\u22121 + (1 \u2212 \u03c1 i,j ) \u00b5 i , a, b) z i,j,l \u223c Discrete K ( \u03bd i,j ) \u2200 i, j, l w i,j,l \u223c Discrete K \u03c6 z i,j,l \u2200 i, j, l .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The proposed Adaptive Topic Model", "sec_num": "3" }, { "text": "For notational convenience, let \u03bd i,0 = \u00b5 i . Assume the dimensionality of the Dirichlet distribution (i.e., the number of topics) is known and fixed, and word probabilities are parameterised with a K \u00d7W matrix \u03a6 = ( \u03c6 1 , ..., \u03c6 K ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The proposed Adaptive Topic Model", "sec_num": "3" }, { "text": "Given observations and model parameters, computing the posterior distribution of latent variables is infeasible for AdaTM due to the intractable computa- M i,k,w the total number of words in document i with dictionary index w and being assigned to topic", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "k M k,w total M i,k,w for document i, i.e., i M i,k,w M k vector of W values M k,w n i,j,k topic count in document i segment j for topic k N i,j topic total in document i segment j, i.e., K k=1 n i,j,k t i,j,k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "table count in the CPR for document i and paragraph j, for topic k that is inherited back to paragraph j \u2212 1 and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "\u00b5 i,j\u22121 . s i,j,k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "table count in the CPR for document i and paragraph j, for topic k that is inherited back to the document and \u00b5 i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "T i,j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "total table count in the CRP for document i and segment j, equal to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "K k=1 t i,j,k . S i,j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "total table count in the CRP for document i and segment j, equal to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "K k=1 s i,j,k . 
t i,j table count vector of t i,j,k 's for segment j. s i,j table count vector of s i,j,k 's for segment j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "tion of marginal probabilities. Therefore, we have to use approximate inference techniques. This section proposes a blocked Gibbs sampling algorithm based on methods from Chen et al. (2011) . Table 2 lists all statistics needed in the algorithm. Note for easier understanding, terminologies of the Chinese Restaurant Process will be used, i.e., customers, dishes and restaurants, correspond to words, topics and segments respectively. The first major complication, over the use of the hierarchical PDP and Equation (3) and the table indicator trick of Equation 4, is handling the linear combination of \u03c1 i,j \u03bd i,j\u22121 + (1 \u2212 \u03c1 i,j ) \u00b5 i used in the PDPs. We manage this as follows: First, Equation 3shows that a contribution of the form (\u00b5 0,k ) t k results. In our case, this becomes", "cite_spans": [ { "start": 171, "end": 189, "text": "Chen et al. (2011)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 192, "end": 199, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "k (\u03c1 i,j \u03bd i,j\u22121,k + (1 \u2212 \u03c1 i,j )\u00b5 i,k ) t i,j,k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "where t i,j,k is the corresponding introduced auxiliary variable the table count which is involved with constraints on n i,j,k +t i,j+1,k , from Equation (2). To deal with this power of a sum, we break the counts t i,j,k into two parts, those that contribute to \u03bd i,j\u22121 and those that contribute to \u00b5 i . We call these parts t i,j,k and s i,j,k respectively. The product can then be expanded and \u03c1 i,j integrated out. This yields:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "Beta (S i,j + \u03bb S , T i,j + \u03bb T ) k \u03bd t i,j,k i,j\u22121,k \u00b5 s i,j,k i,k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "The powers \u03bd", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "t i,j,k i,j\u22121,k and \u00b5 s i,j,k i,k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "can then be pushed up to the next nodes in the PDP/Dirichlet hierarchy. Note the standard constraints and table indicators are also needed here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "The precise form of the table indicators needs to be considered as well since there is a hierarchy for them, and this is the second major complication in the model. As discussed in Chen et al. (2011) , table indicators are not required to be recorded, instead, randomly sampled in Gibbs cycles. The table indicators when known can be used to reconstruct the table counts t i,j,k and s i,j,k , and are reconstructed by sampling from them. For now, denote the table indicators as u i,j,l for word w i,j,l .", "cite_spans": [ { "start": 181, "end": 199, "text": "Chen et al. 
(2011)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "To complete a formulation suitable for Gibbs sampling, we first compute the marginal distribution of the observations w 1:I,1:J (words), the topic assignments z 1:I,1:J and the table indicators u 1:I,1:J . The Dirichlet integral is used to integrate out the document topic distributions \u00b5 1:I and the topicby-words matrix \u03a6, and the joint posterior distribution computed for a PDP is used to recursively marginalise out the segment topic distributions \u03bd 1:I,1:J . With these variables marginalised out, we derive the following marginal distribution p( z 1:I,1:J , w 1:I,1:J , u 1:I,1:J \u03b1, \u03b3, a, b) = (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "I i=1 Beta K \u03b1 + J i j=1 s i,j Beta K ( \u03b1) K k=1 Beta W \u03b3 + M k Beta W ( \u03b3) I i=1 J i j=1 Beta (S i,j + \u03bb S , T i,j + \u03bb T ) (b|a) T i,j +S i,j (b) N i,j +T i,j+1 I i=1 J i j=1 K k=1 (n i,j,k + t i,j+1,k ) (t i,j,k + s i,j,k ) \u22121 S n i,j,k +t i,j+1,k t i,j,k +s i,j,k ,a .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "And the following constraints apply:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t i,j,k + s i,j,k \u2264 n i,j,k + t i,j+1,k ,", "eq_num": "(6)" } ], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "t i,j,k + s i,j,k = 0 iff n i,j,k + t i,j+1,k = 0 . 7The first constraint falls out naturally when table indicators are used. For convenience of the formulas, set t i,J i +1,k = 0 (there is no J i + 1 segment) and t i,1,k = 0 (the first segment only uses \u00b5 i ). Now let us consider again the table indicators u i,j,l for word w i,j,l . If this word is in topic k at document i and segment j, then it contributes a count to n i,j,k . It also indicates if it contributes a new table, or a count to t i,j,k for the PDP at this node. However, as we discussed above, this then contributes to either t i,j,k or s i,j,k . If it contributes to t i,j,k , then it recurses up to contribute a data count to the PDP for document i segment j \u2212 1. Thus it also needs a table indicator at that node. Consequently, the table indicator u i,j,l for word w i,j,l must specify whether it contributes a table to all PDP nodes reachable by it in the graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "We define u i,j,l specifically as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "u i,j,l = (u 1 , u 2 ) such that u 1 \u2208 [\u22121, 0, 1] and u 2 \u2208 [1, \u2022 \u2022 \u2022 , j],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "where u 2 indicates segment denoted by node \u03bd j up to which w i,j,l contributes a table. 
Given u 2 , u 1 = \u22121 denotes w i,j,l contributes a table count to s i,u 2 ,k and t i,j ,k for u 2 < j \u2264 j; u 1 = 0 denotes w i,j,l does not contribute a table to node u 2 , but contributes a table count to t i,j ,k for u 2 < j \u2264 j; and u 1 = 1 denotes w i,j,l contributes a table count to each t i,j ,k for u 2 \u2264 j \u2264 j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "Now, we are ready to compute the conditional probabilities for jointly sampling topics and table indicators from the model posterior of Equation (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Formulation", "sec_num": "4" }, { "text": "The Gibbs sampler iterates over words, doing a blocked sample of (z i,j,l , u i,j,l ). The first task is to reconstruct u i,j,l since it is not stored. Since the posterior of Equation (5) does not explicitly mention the u i,j,l 's, they occur indirectly through the table counts, and we can randomly reconstruct them by sampling them uniformly from the space of possibilities. Following this, we then remove the values (z i,j,l , u i,j,l ) from the full set of statistics. Finally, we block sample new values for (z i,j,l , u i,j,l ) and add them to the statistics. The new u i,j,l is subsequently forgotten and the z i,j,l recorded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "Reconstructing table indicator u i,j,l : We start at the node indexed i, j. If s i,j,k +t i,j,k = 1 and n i,j,k + t i,j+1,k > 1 then no tables can be removed since there is only one table but several customers at the table. Thus u i,j,l = (u 1 , u 2 ) = (0, j) and there is no sampling. Otherwise, by symmetry arguments, we sample u 1 via", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "p(u 1 = \u22121, 0, 1|u 2 = j, z i,j,l = k) \u221d (s i,j,k , t i,j,k , n i,j,k + t i,j+1,k \u2212 s i,j,k \u2212 t i,j,k ) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "since there are n i,j,k +t i,j+1,k data distributed across the three possibilities. If after sampling u 1 = \u22121, the data contributes a table count up to \u00b5 i and so u i,j,l = (u 1 , u 2 ) = (\u22121, j). If u 1 = 0, the u i,j,l = (u 1 , u 2 ) = (0, j). Otherwise, the data contributes a table count up to the parent PDP for \u03bd i,j\u22121 and we recurse, repeating the sampling process at the parent node. 
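The sketch below gives one reading of this reconstruction step in code. The bookkeeping follows the Chinese-restaurant logic (the three outcomes are: head of a table routed to mu_i, head of a table routed to nu_{i,j-1}, or an ordinary non-head customer); the array layout and variable names are ours, and, per the equivalence noted next, the (1, j'+1) encoding is folded into (0, j').

```python
import numpy as np

def reconstruct_indicator(j, k, n, t, s, rng):
    """Resample the table indicator for a word with topic k in segment j.

    n, t, s are (J, K) arrays holding the Table 2 statistics for one document;
    by convention t[0] is all zeros (the first segment has no sequential parent),
    which also guarantees that the recursion below terminates.
    """
    c = j                                        # current node nu_{i,c}
    while True:
        inherit = t[c + 1, k] if c + 1 < t.shape[0] else 0
        customers = n[c, k] + inherit            # data reaching this node for topic k
        heads = s[c, k] + t[c, k]                # tables at this node for topic k
        if heads == 1 and customers > 1:
            return (0, c)                        # the single table cannot be freed
        probs = np.array([s[c, k],               # head of a table routed to mu_i
                          t[c, k],               # head of a table routed to nu_{i,c-1}
                          customers - heads],    # ordinary (non-head) customer
                         dtype=float)
        outcome = rng.choice(3, p=probs / probs.sum())
        if outcome == 0:
            return (-1, c)                       # a table count goes up to mu_i
        if outcome == 2:
            return (0, c)                        # equivalent to (1, c+1) when c < j
        c -= 1                                   # head of a t-table: recurse to the parent node
```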
Note, however, that the table indicator (0, j ) for j < j is equivalent to the table indicator (1, j + 1) as far as statistics is concerned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "Block sampling (z i,j,l , u i,j,l ): The full set of possibilities are, for each possible topic z i,j,l = k:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "\u2022 no tables are created, so u i,j,l = (0, j),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "\u2022 tables are created contributing a table count all the way up to node j (\u2264 j) but stop at j and do not subsequently contribute a count to \u00b5 i , so", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "u i,j,l = (1, j ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "\u2022 tables are created contributing a table count all the way up to node j \u2264 j but stop at j and also subsequently contribute a count to \u00b5 i , so u i,j,l = (\u22121, j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "These three possibilities lead to detailed but fairly straight forward changes to the posterior of Equation (5). Thus a full blocked sampler for (z i,j,l , u i,j,l ) can be constructed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "Estimates: learnt values of \u00b5 i , \u03bd i,j , \u03c6 k are needed for evaluation, perplexity calculations, etc. These are estimated by taking averages after the Gibbs sampler has burnt in, using the standard posterior means for Dirichlets and Poisson-Dirichlets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gibbs Sampling Algorithm", "sec_num": "5" }, { "text": "In the experimental work, we have three objectives:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "(1) to explore the setting of hyper-parameters, (2) to compare the model with the earlier sequential LDA (SeqLDA) of (Du et al., 2012) , STM of (Du et al., 2010) and standard LDA, and (3) to view the results in detail on a number of characteristic problems. ", "cite_spans": [ { "start": 117, "end": 134, "text": "(Du et al., 2012)", "ref_id": "BIBREF10" }, { "start": 144, "end": 161, "text": "(Du et al., 2010)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "For general testing, five patent datasets are randomly selected from U.S. patents granted in 2009 and 2010. Patents in Pat-A are selected from international patent class (IPC) \"A\", which is about \"HUMAN NECESSITIES\"; those in Pat-B are selected from class \"B60\" about \"VEHICLES IN GENERAL\"; those in Pat-H are selected from class \"H\" about \"ELECTRICITY\"; those in Pat-F are selected from class \"F\" about \"MECHAN-ICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING\"; and those in Pat-G are selected from class \"G06\" about \"COMPUTING; CALCULATING; COUNTING\". All the patents in these five datasets are split into paragraphs that are taken as segments, and the sequence of paragraphs in each patent is reserved in order to maintain the original layout. 
All the stop words, the top 10 common words, the uncommon words (i.e., words in less than five patents) and numbers have been removed. Two books used for more detailed investigation are \"The Prince\" by Niccol\u00f2 Machiavelli and \"Moby Dick\" by Herman Melville. They are split into chapters and/or paragraphs which are treated as segments, and only stop-words are removed. Table 3 shows in detail the statistics of these datasets after preprocessing.", "cite_spans": [], "ref_spans": [ { "start": 1122, "end": 1129, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Datasets", "sec_num": "6.1" }, { "text": "Perplexity, a standard measure of dictionary-based compressibility, is used for comparison. When reporting test perplexities, the held-out perplexity measure (Rosen-Zvi et al., 2004) is used to evaluate the generalisation capability to the unseen data. This is known to be unbiased. To compute the held-out perplexity, 20% of patents in each data set was ran- domly held out from training to be used for testing. For this, 1000 Gibbs cycles were done for burn-in followed by 500 cycles with a lag for 100 for parameter estimation.", "cite_spans": [ { "start": 158, "end": 182, "text": "(Rosen-Zvi et al., 2004)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Design", "sec_num": "6.2" }, { "text": "We implemented all the four models, e.g., LDA, STM, SeqTM and AdaTM in C, and ran them on a desktop with Intel Core i5 CPU (2.8GHz\u00d74), even though our code is not multi-threaded. Perplexity calculations, data input and handling, etc., were the same for all algorithms. We note that the current AdaTM implementation is an order of magnitude slower than regular LDA per major Gibbs cycle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design", "sec_num": "6.2" }, { "text": "Experiments on the impact of the hyper-parameters on the patent data sets were as follows: First, fixing K = 50, the Beta parameters \u03bb T = 1 and \u03bb S = 1, optimise symmetric \u03b1, and do two variations fix-a: a = 0.0, trying b = 1, 5, 10, 25, ..., 300, and fix-b: b = 10, trying a = 0.1, 0.2, ..., 0.9. Second, fix-\u03bb T (fix-\u03bb S ): fix a = 0.2 and \u03bb T (\u03bb S ) = 1, optimise b and \u03b1, change \u03bb S (\u03bb T ) = 0.1, 1, 10, 50, 100, 200. Figures 3 and 4 show the corresponding plots. perplexity. In contrast, Figure 3(a) shows different b values significantly change perplexity. Therefore, we sought to optimise b. The experiment of fixing \u03bb S = 1 and changing \u03bb T shows a small \u03bb T is preferred.", "cite_spans": [], "ref_spans": [ { "start": 423, "end": 438, "text": "Figures 3 and 4", "ref_id": "FIGREF0" }, { "start": 494, "end": 505, "text": "Figure 3(a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Hyper-parameters in AdaTM", "sec_num": "6.3" }, { "text": "Perplexity comparisons were done with the default settings a = 0.2, \u03b1 = 0.1, \u03b3 = 0.01, \u03bb S = 1, \u03bb T = 1 and b optimised automatically using the scheme from (Du et al., 2012) . Figure shows the results on these five patent datasets for different numbers of topics. LDA D is LDA run on whole patents, and LDA P is LDA run on the paragraphs within patents. Table 4 gives the p-values of a onetail paired t-test for AdaTM versus the others, where lower p-value indicates AdaTM has statistically significant lower perplexity. 
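For reference, the perplexities compared here are the usual exponentiated negative per-word log likelihood under the estimated model; the minimal sketch below assumes point estimates of the segment topic distributions and the topic-word matrix, and does not reproduce the held-out estimation protocol of Rosen-Zvi et al. (2004).

```python
import numpy as np

def perplexity(test_segments, nu, phi):
    """test_segments: list of (segment_index, word_id array) pairs for one document;
    nu: (J, K) estimated segment topic distributions; phi: (K, W) topic-word matrix."""
    log_lik, n_words = 0.0, 0
    for j, words in test_segments:
        word_probs = nu[j] @ phi                 # p(w | segment j) = sum_k nu[j,k] phi[k,w]
        log_lik += np.log(word_probs[words]).sum()
        n_words += len(words)
    return np.exp(-log_lik / n_words)
```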
From this we can see that AdaTM is statistically significantly better than Se-qLDA and LDA, and somewhat better than STM.", "cite_spans": [ { "start": 156, "end": 173, "text": "(Du et al., 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 176, "end": 188, "text": "Figure shows", "ref_id": null }, { "start": 354, "end": 361, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Perplexity Comparison", "sec_num": "6.4" }, { "text": "In addition, we ran another set of experiments by randomly shuffling the order of paragraphs in each patent several times before running AdaTM. Then, we calculate the difference between perplexities with and without random shuffle. Figure 5(f) shows the plot of differences in each data sets. The positive difference means randomly shuffling the order of paragraphs indeed increases the perplexity.", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 243, "text": "Figure 5(f)", "ref_id": null } ], "eq_spans": [], "section": "Perplexity Comparison", "sec_num": "6.4" }, { "text": "It can further prove that there does exist sequential topic structure in patents, which confirms the finding in (Du et al., 2012) .", "cite_spans": [ { "start": 112, "end": 129, "text": "(Du et al., 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Perplexity Comparison", "sec_num": "6.4" }, { "text": "All the comparison experiments reported in this section are run with 20 topics, the upper limit for easy visualisation, and without optimising any parameters. The Dirichlet Priors are fixed as \u03b1 k = 0.1 and \u03b3 w = 0.01. For AdaTM, SeqLDA, and STM, a = 0.0 and b = 100 for \"The Prince\" and b = 200 for \"Moby Dick\". These settings have proven robust in experiments. To align the topics so visualisations match, the sequential models are initialised using an LDA model built at the chapter level. Moreover, all the models are run at both the chapter and the paragraph level. With the common initialisation, both paragraph level and chapter level models can be aligned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Evolution Comparisons", "sec_num": "6.5" }, { "text": "To visualise topic evolution, we use a plot with one colour per topic displayed over the sequence. Figure 6 (a) shows this for LDA run on paragraphs of \"The Prince\". The proportion of 20 topics is the Y-axis, spread across the unit interval. The paragraphs run along the X-axis, so the topic evolution is clearly displayed. One can see there is no sequential structure in this derived by the LDA model, and similar plots result from \"Moby Dick\" for LDA. Figure 6 (b) shows the alignment of topics between the initialising model (LDA+chapters) and AdaTM run on chapters. Each point in the matrix gives the Hellinger distance between the corresponding topics, color coded. The plots for the other models, chapters or paragraphs, are similar so plots like Figure 6(a) for the other models can be meaningfully compared. Figure 7 then shows the corresponding evolution plots for AdaTM and SeqLDA on chapters and paragraphs. The contrast of these with LDA is stark. The large improvement in perplexity for AdaTM (see Section 6.4) along with no change in lexical coherence (see Section 6.2) means that the se- quential information is actually beneficial statistically. 
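The alignment matrices in these figures are built from the Hellinger distance between topic-word distributions; a small sketch of how such a K x K matrix can be computed (the two inputs would be the Phi estimates of the two models being aligned):

```python
import numpy as np

def hellinger_matrix(phi_a, phi_b):
    """Pairwise Hellinger distances between the rows (topics) of two K x W
    topic-word matrices: H(p, q) = (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||_2."""
    ra, rb = np.sqrt(phi_a), np.sqrt(phi_b)
    # ||sqrt(p) - sqrt(q)||^2 = 2 - 2 * sum_w sqrt(p_w * q_w) for probability vectors
    sq = 2.0 - 2.0 * ra @ rb.T
    return np.sqrt(np.clip(sq, 0.0, None) / 2.0)
```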
Note that SeqLDA, while exhibiting slightly stronger sequential structure than AdaTM in these figures has significantly worse test perplexity, so its sequential affect is too strong and harming results. Also, note that some topics have different time sequence profiles between AdaTM and SeqLDA. Indeed, inspection of the top words for each show these topics differ somewhat. So while the LDA to AdaTM/SeqLDA topic correspondences are quite good due to the use of LDA initialisation, the correspondences between AdaTM and SeqLDA have degraded. We see that AdaTM has nearly as good sequential characteristics as SeqLDA. Furthermore, segment topic distribution \u03bd i,j of SeqLDA are gradually deviating from the document topic distribution \u00b5 i , which is not the case for AdaTM.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Figure 6", "ref_id": "FIGREF3" }, { "start": 454, "end": 462, "text": "Figure 6", "ref_id": "FIGREF3" }, { "start": 753, "end": 764, "text": "Figure 6(a)", "ref_id": "FIGREF3" }, { "start": 816, "end": 824, "text": "Figure 7", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Topic Evolution Comparisons", "sec_num": "6.5" }, { "text": "Results for \"Moby Dick\" on chapters are comparable. Figure 8 shows similar topic evolution plots for LDA, STM and AdaTM. In contrast, the AdaTM topic evolutions are much clearer for the less frequent topics, as shown in Figure 8(c) . Various parts of this are readily interpreted from the storyline. Here we briefly discuss topics by their colour: black: Captain Peleg and the business of signing on; yellow: inns, housing, bed; mauve: Queequeg; azure: (around chapters 60-80) details of whales aqua: (peaks at 8, 82, 88) pulpit, schools and mythology of whaling.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 60, "text": "Figure 8", "ref_id": "FIGREF6" }, { "start": 220, "end": 231, "text": "Figure 8(c)", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Topic Evolution Comparisons", "sec_num": "6.5" }, { "text": "We see that AdaTM can be used to understand the topics with regards to the sequential structure of a book. In contrast, the sequential nature for LDA and STM is lost in the noise. It can be very interesting to apply the proposed topic models to some text analysis tasks, such as topic segmentation, summarisation, and semantic title evaluation, which are subject to our future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Evolution Comparisons", "sec_num": "6.5" }, { "text": "A model for adaptive sequential topic modelling has been developed to improve over a simple exchangeable segments model STM (Du et al., 2010) and a naive sequential model SeqLDA (Du et al., 2012) in terms of perplexity and its confirmed ability to uncover sequential structure in the topics. One could extract meaningful topics from a book like Herman Melville's \"Moby Dick\" and concurrently gain their sequential profile. 
The current Gibbs sampler is slower than regular LDA, so future work is to speed up the algorithm.", "cite_spans": [ { "start": 124, "end": 141, "text": "(Du et al., 2010)", "ref_id": "BIBREF9" }, { "start": 178, "end": 195, "text": "(Du et al., 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Based on the Chinese Restaurant analogy, each table has a dish, a data value, while data, the customer, is assigned to tables, and multiple tables can serve the same dish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank all the anonymous reviewers for their valuable comments. Lan Du was supported under the Australian Research Council's Discovery Projects funding scheme (project numbers DP110102506 and DP110102593). Dr. Huidong Jin was partly supported by CSIRO Mathematics, Informatics and Statistics for this work. NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Center of Excellence program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Latent Dirichlet allocation and singular value decomposition based multidocument summarization", "authors": [ { "first": "R", "middle": [], "last": "Arora", "suffix": "" }, { "first": "B", "middle": [], "last": "Ravindran", "suffix": "" } ], "year": 2008, "venue": "ICDM '08: Proc. of 2008 Eighth IEEE Inter. Conf. on Data Mining", "volume": "", "issue": "", "pages": "713--718", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Arora and B. Ravindran. 2008. Latent Dirichlet allo- cation and singular value decomposition based multi- document summarization. In ICDM '08: Proc. of 2008 Eighth IEEE Inter. Conf. on Data Mining, pages 713-718.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Catching the drift: Probabilistic content models, with applications to generation and summarization", "authors": [ { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL 2004: Main Proceedings", "volume": "", "issue": "", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Barzilay and L. Lee. 2004. Catching the drift: Prob- abilistic content models, with applications to genera- tion and summarization. In HLT-NAACL 2004: Main Proceedings, pages 113-120. Association for Compu- tational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Dynamic topic models", "authors": [ { "first": "M", "middle": [], "last": "Blei", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" } ], "year": 2006, "venue": "ICML '06: Proc. of 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Blei and J.D. Lafferty. 2006. Dynamic topic mod- els. In ICML '06: Proc. 
of 23rd international confer- ence on Machine learning, pages 113-120.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Topic segmentation with an aspect hidden Markov model", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "P", "middle": [ "J" ], "last": "Moreno", "suffix": "" } ], "year": 2001, "venue": "Proc. of 24th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "343--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "D.M. Blei and P.J. Moreno. 2001. Topic segmenta- tion with an aspect hidden Markov model. In Proc. of 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 343-348.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Latent Dirichlet allocation", "authors": [ { "first": "M", "middle": [], "last": "Blei", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Blei, A.Y. Ng, and M.I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Re- search, 3:993-1022.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Bayesian view of the Poisson-Dirichlet process", "authors": [ { "first": "W", "middle": [], "last": "Buntine", "suffix": "" }, { "first": "M", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2012, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1007.0296v2" ] }, "num": null, "urls": [], "raw_text": "W. Buntine and M. Hutter. 2012. A Bayesian view of the Poisson-Dirichlet process. Technical Report arXiv:1007.0296v2, ArXiv, Cornell, February.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Global models of document structure using latent permutations", "authors": [ { "first": "H", "middle": [], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [ "R K" ], "last": "Branavan", "suffix": "" }, { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "D", "middle": [ "R" ], "last": "Karger", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conf. of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "371--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Chen, S.R.K. Branavan, R. Barzilay, and D.R. Karger. 2009. Global models of document structure using la- tent permutations. In Proceedings of Human Lan- guage Technologies: The 2009 Annual Conf. of the North American Chapter of the Association for Com- putational Linguistics, pages 371-379, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sampling for the Poisson-Dirichlet process", "authors": [ { "first": "C", "middle": [], "last": "Chen", "suffix": "" }, { "first": "L", "middle": [], "last": "Du", "suffix": "" }, { "first": "W", "middle": [], "last": "Buntine", "suffix": "" } ], "year": 2011, "venue": "European Conf. on Machine Learning and Principles and Practice of Knowledge Discovery in Database", "volume": "", "issue": "", "pages": "296--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Chen, L. Du, and W. Buntine. 2011. 
Sampling for the Poisson-Dirichlet process. In European Conf. on Ma- chine Learning and Principles and Practice of Knowl- edge Discovery in Database, pages 296-311.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Indexing by latent semantic analysis", "authors": [ { "first": "S", "middle": [ "C" ], "last": "Deerwester", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "G", "middle": [ "W" ], "last": "Furnas", "suffix": "" }, { "first": "R", "middle": [ "A" ], "last": "Harshman", "suffix": "" } ], "year": 1990, "venue": "Journal of the American Society of Information Science", "volume": "41", "issue": "6", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "S.C. Deerwester, S.T. Dumais, T.K. Landauer, G.W. Fur- nas, and R.A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391-407.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A segmented topic model based on the two-parameter Poisson-Dirichlet process", "authors": [ { "first": "L", "middle": [], "last": "Du", "suffix": "" }, { "first": "W", "middle": [], "last": "Buntine", "suffix": "" }, { "first": "H", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2010, "venue": "Machine Learning", "volume": "81", "issue": "", "pages": "5--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Du, W. Buntine, and H. Jin. 2010. A segmented topic model based on the two-parameter Poisson-Dirichlet process. Machine Learning, 81:5-19.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sequential latent dirichlet allocation", "authors": [ { "first": "L", "middle": [], "last": "Du", "suffix": "" }, { "first": "W", "middle": [], "last": "Buntine", "suffix": "" }, { "first": "H", "middle": [], "last": "Jin", "suffix": "" }, { "first": "C", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2012, "venue": "Knowledge and Information Systems", "volume": "31", "issue": "3", "pages": "475--503", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Du, W. Buntine, H. Jin, and C. Chen. 2012. Sequential latent dirichlet allocation. Knowledge and Information Systems, 31(3):475-503.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bayesian unsupervised topic segmentation", "authors": [ { "first": "J", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2008, "venue": "Proc. of Conf. on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "334--343", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Eisenstein and R. Barzilay. 2008. Bayesian unsuper- vised topic segmentation. In Proc. of Conf. on Empir- ical Methods in Natural Language Processing, pages 334-343. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Integrating topics and syntax", "authors": [ { "first": "T", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "J", "middle": [], "last": "", "suffix": "" } ], "year": 2005, "venue": "Advances in Neural Information Processing Systems", "volume": "17", "issue": "", "pages": "537--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "T.L. Griffiths, M. Steyvers, D.M. 
Blei, and J.B. Tenen- baum. 2005. Integrating topics and syntax. In Ad- vances in Neural Information Processing Systems 17, pages 537-544.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Hidden topic markov models", "authors": [ { "first": "A", "middle": [], "last": "Gruber", "suffix": "" }, { "first": "Y", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "M", "middle": [], "last": "Rosen-Zvi", "suffix": "" } ], "year": 2007, "venue": "Journal of Machine Learning Research -Proceedings Track", "volume": "2", "issue": "", "pages": "163--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Gruber, Y. Weiss, and M. Rosen-Zvi. 2007. Hidden topic markov models. Journal of Machine Learning Research -Proceedings Track, 2:163-170.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Modeling perspective using adaptor grammars", "authors": [ { "first": "E", "middle": [ "A" ], "last": "Hardisty", "suffix": "" }, { "first": "J", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2010, "venue": "Proc. of the 2010 Conf. on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "284--292", "other_ids": {}, "num": null, "urls": [], "raw_text": "E.A. Hardisty, J. Boyd-Graber, and P. Resnik. 2010. Modeling perspective using adaptor grammars. In Proc. of the 2010 Conf. on Empirical Methods in Nat- ural Language Processing, pages 284-292, Strouds- burg, PA, USA. Association for Computational Lin- guistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2010, "venue": "Proc. of 48th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "1148--1157", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson. 2010. PCFGs, topic models, adaptor gram- mars and learning topical collocations and the struc- ture of proper names. In Proc. of 48th Annual Meeting of the ACL, pages 1148-1157, Uppsala, Sweden, July. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Text segmentation: A topic modeling perspective", "authors": [ { "first": "H", "middle": [], "last": "Misra", "suffix": "" }, { "first": "F", "middle": [], "last": "Yvon", "suffix": "" }, { "first": "O", "middle": [], "last": "Capp", "suffix": "" }, { "first": "J", "middle": [], "last": "Jose", "suffix": "" } ], "year": 2011, "venue": "Information Processing & Management", "volume": "47", "issue": "4", "pages": "528--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Misra, F. Yvon, O. Capp, and J. Jose. 2011. Text seg- mentation: A topic modeling perspective. Information Processing & Management, 47(4):528-544.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic evaluation of topic coherence", "authors": [ { "first": "D", "middle": [], "last": "Newman", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Lau", "suffix": "" }, { "first": "K", "middle": [], "last": "Grieser", "suffix": "" }, { "first": "T", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2010, "venue": "North American Chapter of the Association for Computational Linguistics -Human Language Technologies", "volume": "", "issue": "", "pages": "100--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Newman, J.H. Lau, K. 
Grieser, and T. Baldwin. 2010. Automatic evaluation of topic coherence. In North American Chapter of the Association for Computa- tional Linguistics -Human Language Technologies, pages 100-108.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Improving topic coherence with regularized topic models", "authors": [ { "first": "D", "middle": [], "last": "Newman", "suffix": "" }, { "first": "E", "middle": [ "V" ], "last": "Bonilla", "suffix": "" }, { "first": "W", "middle": [], "last": "Buntine", "suffix": "" } ], "year": 2011, "venue": "Advances in Neural Information Processing Systems", "volume": "24", "issue": "", "pages": "496--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Newman, E.V. Bonilla, and W. Buntine. 2011. Im- proving topic coherence with regularized topic mod- els. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 24, pages 496-504.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Monte Carlo statistical methods", "authors": [ { "first": "C", "middle": [ "P" ], "last": "Robert", "suffix": "" }, { "first": "G", "middle": [], "last": "Casella", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.P. Robert and G. Casella. 2004. Monte Carlo statisti- cal methods. Springer. second edition.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The author-topic model for authors and documents", "authors": [ { "first": "M", "middle": [], "last": "Rosen-Zvi", "suffix": "" }, { "first": "T", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "P", "middle": [], "last": "Smyth", "suffix": "" } ], "year": 2004, "venue": "Proc. of 20th conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "487--494", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. 2004. The author-topic model for authors and docu- ments. In Proc. of 20th conference on Uncertainty in Artificial Intelligence, pages 487-494.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Hierarchical Dirichlet processes", "authors": [ { "first": "Y", "middle": [ "W" ], "last": "Teh", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Beal", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2006, "venue": "Journal of the American Statistical Association", "volume": "101", "issue": "", "pages": "1566--1581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the Amer- ican Statistical Association, 101:1566-1581.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A hierarchical Bayesian language model based on Pitman-Yor processes", "authors": [ { "first": "Y", "middle": [ "W" ], "last": "Teh", "suffix": "" } ], "year": 2006, "venue": "Proc. of 21st Inter. Conf. on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "985--992", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. W. Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proc. of 21st Inter. Conf. 
on Computational Linguistics and the 44th annual meeting of the Association for Computa- tional Linguistics, pages 985-992.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Rethinking LDA: Why priors matter", "authors": [ { "first": "H", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "D", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Wallach, D. Mimno, and A. McCallum. 2009. Re- thinking LDA: Why priors matter. In Advances in Neural Information Processing Systems 19.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Structural topic model for latent topical structure analysis", "authors": [ { "first": "H", "middle": [], "last": "Wang", "suffix": "" }, { "first": "D", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "C", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2011, "venue": "Proc. of 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1526--1535", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Wang, D. Zhang, and C. Zhai. 2011. Structural topic model for latent topical structure analysis. In Proc. of 49th Annual Meeting of the Association for Compu- tational Linguistics: Human Language Technologies - Volume 1, pages 1526-1535, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Analysis of parameters of Poisson-Dirichlet process. (a) shows how perplexity changes with b; (b) shows how it changes with a. fix \u03bbS = 1", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "Analysis of the two parameters for Beta distribution. (a) how perplexity changes with \u03bb S ; (b) how it changes with \u03bb T .", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "andFigure 4(a) show that varying the values of a and \u03bb S does not significantly change the", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "Analysis on \"The Prince\".", "uris": null }, "FIGREF5": { "num": null, "type_str": "figure", "text": "Topic Evolution on \"The Prince\".", "uris": null }, "FIGREF6": { "num": null, "type_str": "figure", "text": "Topic Evolution on \"Moby Dick\".", "uris": null }, "TABREF0": { "text": "List of notation for AdaTM", "html": null, "content": "
K    number of topics
I    number of documents
", "num": null, "type_str": "table" }, "TABREF1": { "text": "List of statistics for AdaTM", "html": null, "content": "", "num": null, "type_str": "table" }, "TABREF2": { "text": "", "html": null, "content": "
: Datasets
              #docs    #segs     #words    vocab
Pat-A           500   51,748  2,146,464   16,573
Pat-B           397    9,123    417,631    7,663
Pat-G06         500   11,938    655,694    6,844
Pat-H           500   11,662    562,439   10,114
Pat-F           140    3,181    166,091    4,674
Prince-C          1       26     10,588    3,292
Prince-P          1      192     10,588    3,292
Moby Dick         1      135     88,802   16,223
", "num": null, "type_str": "table" }, "TABREF3": { "text": "", "html": null, "content": "
: P-values for one-tail paired t-test on the five patent datasets (AdaTM against each baseline).
           Pat-G    Pat-A    Pat-F    Pat-H    Pat-B
LDA D      .0001    .0001    .0002    .0001    .0001
LDA P      .0041    .0030    .0022    .0071    .0096
SeqLDA     .0029    .0047    .0003    .0012    .0023
STM        .0220    .0066    .0210    .0629    .0853
", "num": null, "type_str": "table" } } } }