{ "paper_id": "D12-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:23:11.521997Z" }, "title": "Mixed Membership Markov Models for Unsupervised Conversation Modeling", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Paul", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University Baltimore", "location": { "postCode": "21218", "region": "MD", "country": "USA" } }, "email": "mpaul@cs.jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent work has explored the use of hidden Markov models for unsupervised discourse and conversation modeling, where each segment or block of text such as a message in a conversation is associated with a hidden state in a sequence. We extend this approach to allow each block of text to be a mixture of multiple classes. Under our model, the probability of a class in a text block is a log-linear function of the classes in the previous block. We show that this model performs well at predictive tasks on two conversation data sets, improving thread reconstruction accuracy by up to 15 percentage points over a standard HMM. Additionally, we show quantitatively that the induced word clusters correspond to speech acts more closely than baseline models.", "pdf_parse": { "paper_id": "D12-1009", "_pdf_hash": "", "abstract": [ { "text": "Recent work has explored the use of hidden Markov models for unsupervised discourse and conversation modeling, where each segment or block of text such as a message in a conversation is associated with a hidden state in a sequence. We extend this approach to allow each block of text to be a mixture of multiple classes. Under our model, the probability of a class in a text block is a log-linear function of the classes in the previous block. We show that this model performs well at predictive tasks on two conversation data sets, improving thread reconstruction accuracy by up to 15 percentage points over a standard HMM. Additionally, we show quantitatively that the induced word clusters correspond to speech acts more closely than baseline models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The proliferation of social media in recent years has lead to an increased use of informal Web data in the language processing community. With this rising interest in social domains, it is natural to consider models which explicitly incorporate the conversational patterns of social text. 
Compared to the naive approach of treating conversations as flat documents, models which include conversation structure have been shown to improve tasks such as forum search (Elsas and Carbonell, 2009; Seo et al., 2009) , question answering and expert finding (Xu et al., 2008; Wang et al., 2011a) , and interpersonal relationship identification (Diehl et al., 2007) .", "cite_spans": [ { "start": 463, "end": 490, "text": "(Elsas and Carbonell, 2009;", "ref_id": "BIBREF13" }, { "start": 491, "end": 508, "text": "Seo et al., 2009)", "ref_id": "BIBREF28" }, { "start": 549, "end": 566, "text": "(Xu et al., 2008;", "ref_id": "BIBREF36" }, { "start": 567, "end": 586, "text": "Wang et al., 2011a)", "ref_id": "BIBREF33" }, { "start": 635, "end": 655, "text": "(Diehl et al., 2007)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While conversational features may be important, Web-derived corpora are not always annotated with this information, and the nature of conversations on the Web can vary wildly across domains and venues. Addressing these concerns, there has been recent work with unsupervised models of Web conversations based on hidden Markov models (Ritter et al., 2010) , where each state corresponds to a conversational class or \"act.\" Unlike more traditional uses of HMMs in which a single token is emitted per time step, HMM emissions in conversations correspond to entire blocks of text, such that an entire message is generated at each step. Because each time step is associated with a block of variables, we refer to this type of HMM as a block HMM (Fig. 1a ).", "cite_spans": [ { "start": 332, "end": 353, "text": "(Ritter et al., 2010)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 739, "end": 747, "text": "(Fig. 1a", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While block HMMs offer a concise model of inter-message structure, they have the limitation that each text block (message) belongs to exactly one class. Many modern generative models of text, in contrast, allow documents to contain many latent classes. For example, topic models such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003) assume each document has its own distribution over multiple classes (often called \"topics\"). For many predictive tasks, topic models outperform singleclass generative models such as Naive Bayes. These properties could similarly be desirable in conversation modeling. An email might contain a request, a question, and an answer to a previous questionthree distinct dialog acts within a single message. This motivates the desire to allow a message to be a mixture of classes.", "cite_spans": [ { "start": 321, "end": 340, "text": "(Blei et al., 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce a new type of model which combines the functionality of topic models, which posit latent class assignments to each individual token, with Markovian sequence models, which", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u03b1 \u03c0 z 1 z 2 z 3 . . . w 1 w 2 w 3 . . . N N N (a) Block HMM \u03b1 \u03b8 1 \u03b8 2 \u03b8 3 . . . z 1 z 2 z 3 . . . w 1 w 2 w 3 . . . N N N (b) LDA \u03bb \u03c0 1 \u03c0 2 \u03c0 3 . . . z 1 z 2 z 3 . . . w 1 w 2 w 3 . . . 
N N N (c) M 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: The graphical models for the block HMM (left) where each block of tokens depends on exactly one latent class, LDA (center) where each token individually depends on a latent class, and M 4 (right) where the class distributions are dependent across blocks. Some parameters are omitted for simplicity. This figure depicts the Bayesian variant of the block HMM (Ritter et al., 2010) where the transition distributions \u03c0 depend on a Dirichlet(\u03b1) prior.", "cite_spans": [ { "start": 367, "end": 388, "text": "(Ritter et al., 2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "govern the transitions between text blocks in a sequence. We generalize the block HMM approach so that there is no longer a one-to-one correspondence between states in the Markov chain and latent discourse classes. Instead, we allow a state in the HMM to correspond to a mixture of many classes: we refer to this family of models as mixed membership Markov models (M 4 ). Instead of defining explicit transition probabilities from one class to another as in a traditional HMM, we define the distribution over classes as a function of the entire histogram of class assignments of the previous text segment. We define our model using the same number of parameters as a standard HMM ( \u00a72), and we present a straightforward approximate inference algorithm ( \u00a73).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While we introduce a general model, we will focus on the task of unsupervised conversation modeling. Specifically, we build off the Bayesian block HMMs used by Ritter et al. (2010) for modeling Twitter conversations, which will be our primary baseline. After discussing related work ( \u00a74), we present experimental results on a set of Twitter conversations as well as a set of threads from CNET discussion forums ( \u00a75). We show that M 4 increases thread reconstruction accuracy by up to 15% compared to the HMM of Ritter et al. (2010) , and we reduce variation of information against speech act annotations by an average of 18% from HMM and LDA baselines. To the best of our knowledge, this work is the first attempt to quantitatively compare unsupervised models against gold standard speech act annotations.", "cite_spans": [ { "start": 160, "end": 180, "text": "Ritter et al. (2010)", "ref_id": "BIBREF25" }, { "start": 513, "end": 533, "text": "Ritter et al. (2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we extend the block HMM by introducing mixed membership Markov models (M 4 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "M 4 : Mixed Membership Markov Models", "sec_num": "2" }, { "text": "Under the block HMM, as utilized by Ritter et al. (2010) , messages in a conversation flow according to a Markov process, where the words of messages are generated according to language models associated with a state in a hidden Markov model. The intuition is that HMM states should correspond to some notion of a conversation \"act\" such as QUESTION or ANSWER. The intuition is the same under M 4 , but now each token in a message is given its own class assignment, according to a class distribution for that particular message. 
A message's class distribution depends on the class assignments of the previous message, yielding a model that retains sequential dependencies between messages, while allowing for finer grained class allocation than the block HMM. Modeling messages (or more generally, text blocks) as a mixture of multiple classes rather than a single class gives rise to the \"mixed membership\" property.", "cite_spans": [ { "start": 36, "end": 56, "text": "Ritter et al. (2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "M 4 : Mixed Membership Markov Models", "sec_num": "2" }, { "text": "In the subsections below, we formalize and analyze this new model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "M 4 : Mixed Membership Markov Models", "sec_num": "2" }, { "text": "We first define the discourse structure and terminology we will be assuming. The discourse structure is a directed graph, where nodes correspond to segments of a document (which we will refer to as \"blocks\" of text), and the edges define the dependencies between them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structure Assumptions", "sec_num": "2.1" }, { "text": "Thus, a text block is a set of tokens, while a document consists of the discourse graph and all blocks associated with it. In the context of modeling conversation threads, which will be the focus of our experiments later, we will assume a block corresponds to a single message in a thread. The parent of a message m is the message to which it is a response; if a message is not in response to anything in particular, then it has no parent. Any replies to the message m are the children of m. The thread as a whole is called a document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structure Assumptions", "sec_num": "2.1" }, { "text": "The discourse graph should be acyclic. A directed acyclic graph (DAG) offers a flexible representation of discourse (Ros\u00e9 et al., 1995) , but for simplicity, we will restrict this and assume that each subgraph is a tree; i.e. no message has multiple parents. The graph as a whole may be a forest: for example, someone could write a new message in a conversation that is not directly in reply to any previous message, so this message would not have any parents, and would form the root of a new tree in the forest.", "cite_spans": [ { "start": 116, "end": 135, "text": "(Ros\u00e9 et al., 1995)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Structure Assumptions", "sec_num": "2.1" }, { "text": "Extending the block HMM, latent classes in M 4 are now associated with each individual token, rather than one class for an entire block. The key difference between the generative process behind M 4 and the block HMM is that the transition distributions are defined with a log-linear model, which uses class assignments in a block as features to define the distribution over classes for the children of that block. Put another way, a state in M 4 corresponds to a class histogram, and transitions between states are functions of the log-linear parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "Given a block b, we will use the notation b to denote the block's feature vector, which consists of the histogram of latent class assignments for the tokens of b. 1 There are K classes. 
Additionally, we assume each feature vector has an extra cell containing an indicator denoting whether the block has no parent - this allows us to learn transitions from a \"start\" state. We also include a bias feature that is always 1, to learn a default weight for each class. There are thus K + 2 features which are used to predict the probability of each of the K classes. (Footnote 1: One could also use other functions of the class histograms rather than the raw counts themselves. For example, we experimented with binary indicator features (i.e. \"does class k appear anywhere in block b?\"), but this performed consistently worse in early experiments, and we do not consider this further.) The features are weighted by transition parameters, denoted \u03bb. The random variable z denotes a latent class, and \u03c6_z is a discrete distribution over word types - that is, each class is associated with a unigram language model. The transition distribution over classes is denoted \u03c0, which is given in terms of \u03bb and the feature vector of the parent block.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "Under this model, a corpus D is generated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "1. For each (j, k) in the transition matrix \u039b_{K\u00d7(K+2)}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "(a) Draw transition weight \u03bb_{jk} \u223c N(0, \u03c3^2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "2. For each class j:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "(a) Draw word distribution \u03c6_j \u223c Dirichlet(\u03c9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "3. For each block b of each document d in D:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "(a) Set class probability \u03c0_{bj} = exp(\u03bb_j^T a) / \u2211_{j'} exp(\u03bb_{j'}^T a) for all classes j, where a is the feature vector for block a, the parent of b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "(b) For each token n in block b: i. Sample class z_{(b,n)} \u223c \u03c0_b. ii. Sample word w_{(b,n)} \u223c \u03c6_{z_{(b,n)}}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "For each block of text in a document (e.g. each message in a conversation), the distribution over classes \u03c0 is computed as a function of the feature vector of the block's parent and the transition parameters (feature weights) \u039b. 
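As a concrete illustration of steps 3(a)-(b), the following minimal sketch (not the released implementation; it assumes NumPy, and all function names are ours) shows how one block would be generated from its parent's class histogram:

```python
import numpy as np

def block_features(parent_hist, K):
    """Feature vector a: the parent's class histogram, a no-parent indicator, and a bias of 1."""
    if parent_hist is None:                        # thread-initial block ("start" state)
        return np.concatenate([np.zeros(K), [1.0, 1.0]])
    return np.concatenate([np.asarray(parent_hist, dtype=float), [0.0, 1.0]])

def transition_distribution(lam, a):
    """pi_bj = exp(lam_j . a) / sum_j' exp(lam_j' . a), with lam of shape (K, K + 2)."""
    scores = lam @ a
    scores -= scores.max()                         # numerical stability
    e = np.exp(scores)
    return e / e.sum()

def generate_block(lam, phi, parent_hist, n_tokens, rng):
    """Steps 3(a)-(b): pick a class, then a word, for each token in the block."""
    K, W = phi.shape
    pi_b = transition_distribution(lam, block_features(parent_hist, K))
    z = rng.choice(K, size=n_tokens, p=pi_b)       # one class per token
    w = np.array([rng.choice(W, p=phi[k]) for k in z])
    return z, w
```

A full generative pass would also draw each \u03bb_{jk} from N(0, \u03c3^2) and each \u03c6_j from Dirichlet(\u03c9), as in steps 1 and 2 above. 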
Each \u03bb jk has an intuitive interpretation: a positive value means that the occurrence of class k in a parent block increases the probability that j will appear in the next block, while a negative value reduces this probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "The observed words of each block are generated by repeatedly sampling classes from the block's distribution \u03c0, and for each sampled class z, a single word is sampled from the class-specific distribution over words \u03c6 z . In contrast, under the block HMM, a class z is sampled once from the transition distribution, and words are repeatedly sampled from \u03c6 z .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "We place a symmetric Dirichlet prior on each \u03c6 with concentration parameter \u03c9, which smoothes the word distributions, and we place a 0-mean Gaussian prior on each \u03bb parameter, which acts as a regularizer. The graphical diagram is shown in Figure 1 along with the block HMM and LDA. This figure shows how M 4 combines the sequential dependencies of the block HMM with the token-specific class assignments of LDA.", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 247, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Generative Story", "sec_num": "2.2" }, { "text": "Like the block HMM, M 4 is a type of HMM. A latent sequence under M 4 forms a Markov chain in which a state corresponds to a histogram of classes. (For simplicity, we are ignoring the extra features of the start state indicator and bias in this discussion.) If we assume a priori that the length of a block is unbounded, then this state space is N K where 0 \u2208 N. The probability of transitioning from a state b to another stateb \u2208 N K is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.3" }, { "text": "P (b \u2192b) \u221d \u03b6 N Multinomial(b|\u03c0(b), N ) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.3" }, { "text": "where N = kb k , \u03b6 N is the probability that a block has N tokens, 2 and \u03c0(b) is the transition distribution given a vector b. This follows from the generative story defined above, with an additional step of generating the number of tokens N from the distribution \u03b6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.3" }, { "text": "We currently define a block b's distribution \u03c0 b in terms of the discrete feature vector a given by its parent a. We could have instead made \u03c0 b a function of the parent's distribution \u03c0 a -this would lead to a model that assumes a dynamical system over a continuous space rather than a Markov chain. However, as a generative story we believe it makes more sense for a block's distribution to depend on the actual class values which are emitted by the parent. Similar arguments are made by Blei and Mcauliffe (2007) when designing supervised topic models.", "cite_spans": [ { "start": 490, "end": 515, "text": "Blei and Mcauliffe (2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.3" }, { "text": "Under a block HMM with one class per block, there are K states corresponding to the K classes, requiring K\u00d7K parameters to define the transition matrix. 
Under M 4 , there is a countably infinite number of states, but the transitions are still defined by K\u00d7K parameters (ignoring extra features). M 4 thus utilizes a larger state space without increasing the number of free parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.3" }, { "text": "We must infer the values of the hidden variables z as well as the parameters for the word distributions \u03a6 and transition weights \u039b. Standard HMM dynamic programming algorithms cannot straightforwardly be used for M 4 because of the unboundedly large state space. We instead turn to Markov chain Monte Carlo (MCMC) methods as a tool for approximate inference. We derive a stochastic EM algorithm in which we alternate between sampling class assignments for the word tokens and optimizing the transition parameters, outlined in the following two subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference and Parameter Estimation", "sec_num": "3" }, { "text": "To explore the posterior distribution over latent classes, we use a collapsed Gibbs sampler such that we marginalize out each word multinomial \u03c6 and only need to sample the token assignments z conditioned on each other. Given the current state of the sampler, we sample a token's class according to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Class Sampling", "sec_num": "3.1" }, { "text": "P(z_{(b,n)} = k | z_{\u2212(b,n)}, w, \u03bb, \u03c9) \u221d [exp(\u03bb_k^T a) / \u2211_{k'} exp(\u03bb_{k'}^T a)] \u00b7 [(n_k^w + \u03c9) / (n_k + W\u03c9)] \u00b7 \u220f_{c\u2208C} \u220f_j [exp(\u03bb_j^T b) / \u2211_{j'} exp(\u03bb_{j'}^T b)]^{n_c^j} (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Class Sampling", "sec_num": "3.1" }, { "text": "The notation n_k^w indicates the number of tokens with word type w that have been assigned to topic k. W is the vocabulary size. a is the parent block of b, and C is the set of b's children. b is the feature vector corresponding to block b (i.e. the class histogram plus the bias feature), where the histogram includes the incremented count of the candidate class k. This sampling distribution is very similar to that of LDA (Griffiths and Steyvers, 2004) , but the distribution over \"topics\" is now a function of the previous block, which gives the leftmost term. The rightmost term is a result of the dependency of the child blocks (C) on the class assignments of b.", "cite_spans": [ { "start": 425, "end": 455, "text": "(Griffiths and Steyvers, 2004)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Class Sampling", "sec_num": "3.1" }, { "text": "Due to the rightmost term, the complexity of computing the sampling distribution is quadratic in the number of classes, rather than the linear complexity of a single-class HMM. Our assumption is that the number of sequence-dependent classes (e.g. speech acts or discourse states) will be reasonably small. If it is desired to have a large number of latent topics as is common in LDA, this model could be combined with a standard topic model without sequential dependencies, as explored by Ritter et al. (2010) .", "cite_spans": [ { "start": 489, "end": 509, "text": "Ritter et al. (2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Class Sampling", "sec_num": "3.1" },
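To make this update concrete, the following is a minimal sketch of the collapsed Gibbs step in Eq. 2 (illustrative only: it assumes NumPy, the names are ours, and the feature-vector normalization described in \u00a75.4 is omitted):

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def sample_token_class(lam, parent_feats, b_hist, child_hists,
                       word_id, word_class_counts, class_counts, omega, rng):
    """One collapsed Gibbs draw for z_(b,n) following Eq. 2. All counts exclude
    the token being resampled; feature vectors end with a no-parent flag and a bias."""
    K, W = word_class_counts.shape
    prior = softmax(lam @ parent_feats)                        # leftmost term
    word_term = (word_class_counts[:, word_id] + omega) / (class_counts + W * omega)
    child_term = np.ones(K)
    for k in range(K):                                         # rightmost term
        hist_k = b_hist.copy()
        hist_k[k] += 1.0                                       # candidate class k added to b's histogram
        pi_b = softmax(lam @ np.concatenate([hist_k, [0.0, 1.0]]))
        for n_c in child_hists:                                # n_c[j] = count of class j in child c
            child_term[k] *= np.prod(pi_b ** n_c)
    p = prior * word_term * child_term
    return rng.choice(K, p=p / p.sum())
```
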
{ "text": "Differentiating the corpus likelihood with respect to \u039b yields the standard equation for log-linear models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Parameter Optimization", "sec_num": "3.2" }, { "text": "\u2202/\u2202\u03bb_{zk} = \u2211_b a_k (n_b^z \u2212 n_b \u00b7 exp(\u03bb_z^T a) / \u2211_{z'} exp(\u03bb_{z'}^T a)) \u2212 \u03bb_{zk}/\u03c3^2 (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Parameter Optimization", "sec_num": "3.2" }, { "text": "where a is the parent of block b, a is the feature vector associated with a, n_b^z is the number of times class z occurs in block b and n_b is the total number of tokens in block b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Parameter Optimization", "sec_num": "3.2" }, { "text": "Standard optimization methods can be used to learn these parameters. In our experiments, we find that we obtain good results by simply performing a single iteration of gradient ascent after each sampling iteration t, 3 with the following update:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Parameter Optimization", "sec_num": "3.2" }, { "text": "\u03bb_{zk}^{(t+1)} = \u03bb_{zk}^{(t)} + \u03b7(t) \u00b7 \u2202/\u2202\u03bb_{zk} (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Parameter Optimization", "sec_num": "3.2" }, { "text": "where \u03b7 is a step size function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Parameter Optimization", "sec_num": "3.2" }, { "text": "Hidden Markov models have a recent history as simple models of document structure. Stolcke et al. (2000) used HMMs as a general model of discourse with an application to speech acts (or dialog acts) in conversations. Barzilay and Lee (2004) applied HMMs as an unsupervised model of discourse. This work used HMMs to model the progression of sentences in articles, and was shown to be useful for ordering sentences and generating summaries of news articles. More recently, Wang et al. (2011b) experimented with similar tasks using a related HMM-based model called the Structural Topic Model. Unsupervised HMMs were applied to conversational data by Ritter et al. (2010) who experimented with Twitter conversations. The authors also experimented with incorporating a topic model on top of the HMM to distinguish speech acts from topical clusters, with mixed results. Joty et al. (2011) extended this work by enriching the emission distributions and using additional features such as speaker and position information. An approach to unsupervised discourse modeling that does not use HMMs is the latent permutation model of Chen et al. (2009) . This model assumes each segment (e.g. paragraph) in a document is associated with a latent class or topic, and the ordering of topics within a document is modeled as a deviation from some canonical ordering.", "cite_spans": [ { "start": 83, "end": 104, "text": "Stolcke et al. (2000)", "ref_id": "BIBREF29" }, { "start": 217, "end": 240, "text": "Barzilay and Lee (2004)", "ref_id": "BIBREF1" }, { "start": 647, "end": 667, "text": "Ritter et al. (2010)", "ref_id": "BIBREF25" }, { "start": 1119, "end": 1137, "text": "Chen et al. 
(2009)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Extensions to the block HMM have incorporated mixed membership properties within blocks, notably the Markov Clustering Topic Model (Hospedales et al., 2009) , which allows each HMM state to be associated with its own distribution over topics in a topic model. Like the block HMM, this still assumes a relatively small number of HMM states, but with an extra layer of latent variables before the observations are emitted. This is more restrictive than the unbounded state space of M 4 .", "cite_spans": [ { "start": 131, "end": 156, "text": "(Hospedales et al., 2009)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Decoupling HMM states from latent classes was considered by Beal et al. (1997) with the Factorial HMM, which uses factorized state representations. The Factorial HMM is most often used to model independent Markov chains, whereas M 4 has a dense graphical model topology: the probability of each of the latent classes depends on the counts of all of the classes in the previous block. The trick in M 4 is to define the transition matrix via a function of a limited number of parameters, allowing tractable inference in a model with arbitrarily many states.", "cite_spans": [ { "start": 60, "end": 78, "text": "Beal et al. (1997)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In topic models, log-linear formulations of latent class distributions 4 are utilized in correlated topic models (Blei and Lafferty, 2007) as a means of incorporating covariance structure among topic probabilities. Applying log-linear regression to potentially many features was combined with LDA by Mimno and McCallum (2008) , who model the Dirichlet prior over topics as a function of document features. In M 4 , such features would correspond to the class histograms of previous blocks, introducing additional dependencies between documents.", "cite_spans": [ { "start": 113, "end": 138, "text": "(Blei and Lafferty, 2007)", "ref_id": "BIBREF3" }, { "start": 300, "end": 325, "text": "Mimno and McCallum (2008)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "One topic model that imposes sequential dependencies between documents is Sequential LDA (Du et al., 2010) , which models a document as a sequence of segments (such as paragraphs) governed by a Pitman-Yor process, in which the latent topic distribution of one segment serves as the base distribution for the next segment. This is in the spirit of our work, where the latent classes in a segment depend on the class distribution of the previous segment. By using the Pitman-Yor process, however, this work assumes topics are positively correlated, i.e. the occurrence of a topic in one segment makes it likely to appear in the next. In contrast, we wish to learn arbitrary transitions, both positive and negative, between the latent classes.", "cite_spans": [ { "start": 89, "end": 106, "text": "(Du et al., 2010)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "We experiment with two corpora of text-based asynchronous conversations on the Web. One of these is annotated with speech act labels, against which we compare our unsupervised clusters. 
We measure the predictive capabilities of the model via perplexity experiments and the task of thread reconstruction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments with Conversation Data", "sec_num": "5" }, { "text": "First, we use a corpus of discussion threads from CNET forums (Kim et al., 2010), which are mostly technical discussion and support. This corpus includes 321 threads and a total of 1309 messages, with an average message length of 78 tokens after preprocessing. 5 Second, we use the Twitter data set created by Ritter et al. (2010) . We consider 36K conversation threads for a total of 100K messages with average length 13.4 tokens.", "cite_spans": [ { "start": 310, "end": 330, "text": "Ritter et al. (2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "5.1" }, { "text": "Both data sets are already annotated with the reply structure, so the discourse graph is given. We preprocess the data by treating contiguous blocks of punctuation as tokens, and we remove infrequent words. The Twitter corpus has some additional preprocessing, such as converting URLs to a single word type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "5.1" }, { "text": "Our work is motivated by the Bayesian HMM approach of Ritter et al. (2010) -the model we refer to as the block HMM (BHMM) -and we consider this our primary baseline. (See also (Goldwater and Griffiths, 2007) for more details on Bayesian HMMs with Dirichlet priors.) We also compare against LDA, which makes latent assignments at the token-level, but blocks of text are independent of each other. In other words, BHMM models sequential dependencies but allows only single-class membership, whereas LDA uses no sequence information but has a mixed membership property. M 4 combines these two properties.", "cite_spans": [ { "start": 54, "end": 74, "text": "Ritter et al. (2010)", "ref_id": "BIBREF25" }, { "start": 191, "end": 207, "text": "Griffiths, 2007)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "5.2" }, { "text": "We use standard Gibbs samplers for both baseline models, and we optimize the Dirichlet hyperparameters (for the transition and topic distributions) using Minka's fixed-point iterations (2003) .", "cite_spans": [ { "start": 154, "end": 191, "text": "Minka's fixed-point iterations (2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "5.2" }, { "text": "In our experiments, we find that the intrusion of common stop words can make the results difficult to interpret, but we do not want to perform simple stop word removal because common function words often play important roles in the latent classes (i.e. speech acts) of the conversation data we consider here. We instead handle this by extending our model to include a \"background\" distribution over words which is independent of the latent classes in a document; this was also done by Wang et al. (2011b) .", "cite_spans": [ { "start": 485, "end": 504, "text": "Wang et al. (2011b)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Background Distributions", "sec_num": "5.3" }, { "text": "The idea is to introduce a binary switching variable x into the model which determines whether a word is generated from the general background distribution or from the distribution specific to a latent class z. 
Loosely, if the marginal probability of a word was given by p(w) = z p(w|z)p(z), the introduction of a background distribution gives the marginal probability p(w) = p(x = 0)p(w|B) + p(x = 1) z p(w|z). This is common practice and we will not go into detail; see (Chemudugunta et al., 2006) for a general example on sampling switching variables. We augment all three models with a background distribution in exactly the same way, so that the comparison is fair. We use a Beta(10.0, 10.0) prior over the switching distribution.", "cite_spans": [ { "start": 472, "end": 499, "text": "(Chemudugunta et al., 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Background Distributions", "sec_num": "5.3" }, { "text": "All of our results are averaged across four randomly initialized chains which are run for 5000 iterations, with five samples collected during the final 500 iterations. We take small gradient steps of decreasing size with \u03b7(t) = 0.1/(1000 + t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.4" }, { "text": "We set \u03c3 2 = 10.0 as the variance of the \u03bb weights. We use optimized asymmetric priors as described in \u00a75.2, and we use a symmetric Dirichlet for the word distributions, following Wallach et al. (2009) . We sample the scaling hyperparameter \u03c9 via Metropolis-Hastings proposals: we add Gaussiandistributed noise to the log of the current \u03c9, then exponentiate this to yield the proposed \u03c9 (new) . This log-space proposal ensures that \u03c9 is always positive. When computing the transition distributions for M 4 , we normalize the class histograms so that the counts to sum to 1. This helps with numeric stability because the input vectors stay within a small bounded range. 6", "cite_spans": [ { "start": 180, "end": 201, "text": "Wallach et al. (2009)", "ref_id": "BIBREF31" }, { "start": 387, "end": 392, "text": "(new)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.4" }, { "text": "We begin with standard measures of the perplexity of held-out data. For these experiments, we train on 75% of the data, and test on the remaining 25%. We run the sampler for 500 iterations using the word distributions and transition parameters learned during training; we compute the average perplexity from the final ten sampling iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perplexity", "sec_num": "5.5.1" }, { "text": "Results for different numbers of classes are shown in Table 1 . These results demonstrate the advantage of models with the mixed membership property. Although LDA outperforms both sequence models, this is be expected. Each block's topic distribution is stochastically generated with LDA, whereas in the two sequence models, the distribution over classes is simply a deterministic function of the previous block. This allows LDA to infer parameters that fit the data more tightly. Comparing only the two sequence models, we find that M 4 does significantly better than BHMM in all cases with p < 0.05. If capturing sequence information is not important, then LDA may provide a better fit to a corpus than sequence models. 
In the next two subsections, we will consider tasks where the sequential structure is important, thus LDA is not an appropriate choice.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Perplexity", "sec_num": "5.5.1" }, { "text": "A natural predictive task of the sequence models is to reconstruct the discourse graph of a document where the structure is unknown. In the conversation domain, this corresponds to the task of thread reconstruction (Yeh and Harnly, 2006; Wang et al., 2011c) . Given only a flat structure, can we recover the reply structure of messages in the conversation?", "cite_spans": [ { "start": 215, "end": 237, "text": "(Yeh and Harnly, 2006;", "ref_id": "BIBREF37" }, { "start": 238, "end": 257, "text": "Wang et al., 2011c)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Thread Reconstruction", "sec_num": "5.5.2" }, { "text": "Previous work with BHMM found the optimal structure by computing the likelihood of all permutations of a thread or sequence (Ritter et al., 2010; Wang et al., 2011b) . We take a more practical approach and find the optimal structure as part of our inference procedure. We do this by treating the parent of each block as a hidden variable to be inferred. The parent of block b is the random variable r b , and we alternate between sampling values of the latent classes z and the parents r. The sampling distributions are annealed, as a search technique to find the best configuration of assignments (Finkel et al., 2005) . At temperature \u03c4 , we sample a block's parent according to:", "cite_spans": [ { "start": 124, "end": 145, "text": "(Ritter et al., 2010;", "ref_id": "BIBREF25" }, { "start": 146, "end": 165, "text": "Wang et al., 2011b)", "ref_id": "BIBREF34" }, { "start": 598, "end": 619, "text": "(Finkel et al., 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Thread Reconstruction", "sec_num": "5.5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (r b = a|z, \u03bb) \u221d j exp(\u03bb T j a) j exp(\u03bb T j a) n j b /\u03c4", "eq_num": "(5)" } ], "section": "Thread Reconstruction", "sec_num": "5.5.2" }, { "text": "For each conversation thread, any message is a candidate for the parent of block b (except b itself) including the dummy \"start\" block.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Thread Reconstruction", "sec_num": "5.5.2" }, { "text": "As before, we train on 75% of the data, and run this experiment on the remaining 25%. We run the sampler for 500 iterations, cooling \u03c4 by 1% after each iteration, where \u03c4 (0) = 1. We measure accuracy as the percentage of blocks whose assignment for r b matches the true parent. For each fold, we run this estimation procedure from five random initializations and average the results. Like Ritter et al. 2010, we do not enforce temporal constraints in the thread structure for this experiment. We are purely evaluating the predictive abilities of the model rather than its performance in a full-fledged reconstruction setup, which would require richer features beyond the scope of this paper. Figure 2 shows results comparing M 4 against BHMM. 
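Before turning to the results, the parent-sampling step of Eq. 5 can be sketched as follows (a minimal illustration assuming NumPy; the names are ours and per-thread bookkeeping is omitted):

```python
import numpy as np

def log_softmax(scores):
    scores = scores - scores.max()
    return scores - np.log(np.exp(scores).sum())

def sample_parent(b, block_hists, lam, tau, rng):
    """Sample r_b at temperature tau following Eq. 5: candidate parent a is scored by
    prod_j pi_j(a)^(n_b^j / tau); the dummy 'start' block is also a candidate."""
    n_b = block_hists[b]                                    # class histogram of block b
    candidates, log_scores = [], []
    for a, hist_a in enumerate(block_hists):
        if a == b:                                          # a block cannot be its own parent
            continue
        feats = np.concatenate([hist_a, [0.0, 1.0]])        # candidate parent's features
        candidates.append(a)
        log_scores.append(np.dot(n_b, log_softmax(lam @ feats)) / tau)
    start_feats = np.concatenate([np.zeros_like(n_b), [1.0, 1.0]])   # no-parent indicator set
    candidates.append(None)                                 # None = thread-initial
    log_scores.append(np.dot(n_b, log_softmax(lam @ start_feats)) / tau)
    log_scores = np.array(log_scores)
    p = np.exp(log_scores - log_scores.max())
    return candidates[rng.choice(len(candidates), p=p / p.sum())]
```

Cooling \u03c4 across iterations sharpens this distribution toward the highest-scoring parent assignments. 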
Because all blocks are independent under LDA, it cannot be used in this experiment; using LDA would amount to a random baseline.", "cite_spans": [], "ref_spans": [ { "start": 692, "end": 700, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Thread Reconstruction", "sec_num": "5.5.2" }, { "text": "We plot the distribution of results from various samples and various numbers of classes in {5, . . . , 25}. Most of the variance is across folds and samples; we find that there is not a strong trend in accuracy as a function of the number of classes. This suggests that most of the sequence predictions are carried by a small subset of the classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Thread Reconstruction", "sec_num": "5.5.2" }, { "text": "On average, M 4 outperforms BHMM by more than 15 points on the CNET corpus. M 4 is also better on the Twitter corpus, but the difference is not so stark. This seems to confirm our intuition that the advantage of M 4 over BHMM is greater when the blocks are longer; tweets may be short enough that the single-class assumption is not as limiting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Thread Reconstruction", "sec_num": "5.5.2" }, { "text": "Thus far, we have investigated the predictive power of the model, but we would also like to determine if the inferred clusters correspond to humaninterpretible classes. In the case of conversation data, our hope is that some of the latent classes represent speech acts or dialog acts (Searle, 1975) . While there is a body of work in supervised speech act classification (Cohen et al., 2004; Bangalore et al., 2006; Surendran and Levow, 2006; Qadir and Riloff, 2011) , the variety of conversation domains on the Web motivates the use of unsupervised ap- proaches. The CNET corpus is annotated with twelve speech act classes: QUESTION and ANSWER, which are both broken down into multiple sub-classes, as well as RESOLUTION, REPRODUCTION, and OTHER (Kim et al., 2010) . We would like to quantitatively measure how closely the latent states induced by our model match these annotations. 7 We can measure this with variation of information (Meila, 2003) , which has been used in recent years for unsupervised evaluation, e.g. in part-ofspeech clustering (Goldwater and Griffiths, 2007) . Given two sets of variable assignments z and z , the variation of information is defined as H(Z|Z ) + H(Z |Z). In other words, given one clustering, how much uncertainty do we have about the other? 
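Concretely, VI can be computed directly from two aligned label sequences; a minimal sketch (assuming NumPy and integer-coded labels; this is illustrative, not the evaluation script used for Figure 3):

```python
import numpy as np

def variation_of_information(z1, z2):
    """VI(Z, Z') = H(Z|Z') + H(Z'|Z), from two aligned integer label arrays."""
    z1, z2 = np.asarray(z1), np.asarray(z2)
    joint = np.zeros((z1.max() + 1, z2.max() + 1))
    for a, b in zip(z1, z2):
        joint[a, b] += 1.0
    joint /= len(z1)                                  # empirical joint distribution
    p1, p2 = joint.sum(axis=1), joint.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    h_joint = h(joint)
    return (h_joint - h(p2)) + (h_joint - h(p1))      # H(Z|Z') + H(Z'|Z)
```
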
Results are shown in Figure 3 : a lower value corresponds to higher similarity.", "cite_spans": [ { "start": 284, "end": 298, "text": "(Searle, 1975)", "ref_id": "BIBREF27" }, { "start": 371, "end": 391, "text": "(Cohen et al., 2004;", "ref_id": "BIBREF9" }, { "start": 392, "end": 415, "text": "Bangalore et al., 2006;", "ref_id": "BIBREF0" }, { "start": 416, "end": 442, "text": "Surendran and Levow, 2006;", "ref_id": "BIBREF30" }, { "start": 443, "end": 466, "text": "Qadir and Riloff, 2011)", "ref_id": "BIBREF24" }, { "start": 747, "end": 765, "text": "(Kim et al., 2010)", "ref_id": "BIBREF20" }, { "start": 884, "end": 885, "text": "7", "ref_id": null }, { "start": 936, "end": 949, "text": "(Meila, 2003)", "ref_id": "BIBREF21" }, { "start": 1050, "end": 1081, "text": "(Goldwater and Griffiths, 2007)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 1303, "end": 1311, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Speech Act Discovery", "sec_num": "5.5.3" }, { "text": "On the CNET corpus, M 4 outperforms both baselines in all cases by a very significant margin. Qualitatively, we see clusters and transition parameters that make sense. For example, the class with top words {i, my, have, computer, am, ?, tried, help} is most likely to begin a thread (with \u03bb = +1.94) and appears to describe questions or requests for help. The class is not likely to be followed by itself (\u03bb = \u22120.32) but is likely to be followed by the class with words {you, your, /, com, ., http, windows} (with \u03bb = +1.38).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Act Discovery", "sec_num": "5.5.3" }, { "text": "The Twitter corpus does not have speech act annotations, so we offer example output in Figure 4 . We again see patterns that we might expect to find in social media conversations, and some classes appear to correspond to speech acts such a declarations, personal questions, and replies. For example, the class in the center of the figure has words like you and but which suggests it is used in reply to other messages, and indeed we see that it has a positive weight of following almost every class, but a negative weight for actually starting a thread. Conversely, the class containing URLs (which corresponds to the act of sharing news or media) is likely to begin a thread, but is not likely to follow other classes except itself.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Speech Act Discovery", "sec_num": "5.5.3" }, { "text": "How well unsupervised models can truly capture speech acts is an open question. Much as LDA \"topics\" do not always correspond to what humans would judge to be semantic classes (Chang et al., 2009) , the conversation classes inferred by unsupervised sequence models are similarly unlikely to be a perfect fit to human-assigned classes. 
Nevertheless, these results suggest M 4 is a step forward.", "cite_spans": [ { "start": 176, "end": 196, "text": "(Chang et al., 2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Act Discovery", "sec_num": "5.5.3" }, { "text": "Our model provides a framework for defining intermessage transitions as functions of multiple classes, which will be a desirable property for many corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Act Discovery", "sec_num": "5.5.3" }, { "text": "We have presented mixed membership Markov models (M 4 ), which extend the simple HMM approach to discourse modeling by positing class assignments at the level of individual tokens. This allows blocks of text to belong to potentially multiple classes, a property that relates M 4 to topic models. This type of model can be viewed as an HMM with an expanded state space, but because the transition probabilities are a function of a small number of parameters, the output remains human-interpretible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "M 4 can be taken as a general family of models and can be readily extended. In this work, we focused on introducing a model of inter-message structure, but certainly more sophisticated models of intramessage structure beyond unigram language models could be incorporated into M 4 . Standard topic model extensions such as n-gram models (Wallach, 2006) can straightforwardly be applied here, and indeed we already applied such an extension by incorporating background distributions in \u00a75.3. For conversational data, it could make sense to segment messages (e.g. into sentences) and constraint each segment to belong to one class or speech act; modifications along these lines have been applied to topic models as well (Gruber et al., 2007) . While we have focused on conversation modeling, M 4 is a general probabilistic model that could be applied to other discourse applications, for example modeling sentences or paragraphs in articles rather than messages in conversations; it could also be applied to data beyond text.", "cite_spans": [ { "start": 336, "end": 351, "text": "(Wallach, 2006)", "ref_id": "BIBREF32" }, { "start": 717, "end": 738, "text": "(Gruber et al., 2007)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Compared to a Bayesian block HMM, M 4 performs much better at a variety of tasks. A drawback is that the time complexity of inference as presented here is quadratic in the number of classes rather than linear. Improving this may be the subject of future research. Another potential avenue of future work is to model transitions such that a Dirichlet prior for the class distribution of a block, rather than the class distribution itself, depends on the previous class assignments. This would yield a model that more closely resembles LDA, but with topic priors that encode sequence information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The distribution over the number of tokens can be arbitrary, as this is observed and does not affect inference. In topic models, this is sometimes assumed to be Poisson(Blei et al., 2003).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Incremental updates are justified under the generalized EM algorithm(Dempster et al., 1977). 
Each gradient step with respect to \u03bb corresponds to a generalized M-step, while each sampling iteration corresponds to a stochastic E-step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This formulation corresponds to the natural parameterization of the multinomial distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Three messages in this corpus have multiple parents. For the sake of conciseness, we simply remove these threads rather than introducing a method to model multiple parents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Implementations of both M 4 and the block HMM will be available at http://cs.jhu.edu/\u02dcmpaul", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Some messages have multiple labels. Since messages are not annotated at finer granularities, we handle this by simply duplicating such messages, once per label, and measuring clustering performance on this expanded set of labeled data which now has one label per token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Thanks to Matt Gormley, Mark Dredze, Jason Eisner, the members of my lab and the anonymous reviewers for helpful feedback and discussions. This material is based upon work supported by a National Science Foundation Graduate Research Fellowship under Grant No. DGE-0707427 and a Dean's Fellowship from the Johns Hopkins University Whiting School of Engineering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning the structure of task-driven human-human dialogs", "authors": [ { "first": "Srinivas", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "Giuseppe", "middle": [ "Di" ], "last": "Fabbrizio", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Stent", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44", "volume": "", "issue": "", "pages": "201--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivas Bangalore, Giuseppe Di Fabbrizio, and Amanda Stent. 2006. Learning the structure of task-driven human-human dialogs. In Proceedings of the 21st In- ternational Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 201-208.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Catching the drift: Probabilistic content models, with applications to generation and summarization", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL 2004: Main Proceedings", "volume": "", "issue": "", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In HLT-NAACL 2004: Main Proceedings, pages 113-120, Boston, Massachusetts, USA, May 2 -May 7. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Factorial hidden markov models", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Beal", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ghahramani", "suffix": "" }, { "first": "C", "middle": [ "E" ], "last": "Rasmussen", "suffix": "" } ], "year": 1997, "venue": "Machine Learning", "volume": "29", "issue": "", "pages": "29--245", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. 1997. Factorial hidden markov models. In Machine Learn- ing, volume 29, pages 29-245.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A correlated topic model of science", "authors": [ { "first": "D", "middle": [], "last": "Blei", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2007, "venue": "Annals of Applied Statistics", "volume": "1", "issue": "1", "pages": "17--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Blei and J. Lafferty. 2007. A correlated topic model of science. Annals of Applied Statistics, 1(1):17-35.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Supervised topic models", "authors": [ { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "Jon", "middle": [ "D" ], "last": "Blei", "suffix": "" }, { "first": "", "middle": [], "last": "Mcauliffe", "suffix": "" } ], "year": 2007, "venue": "Advances in Neural Information Processing Systems 21", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei and Jon D. Mcauliffe. 2007. Supervised topic models. In Advances in Neural Information Pro- cessing Systems 21.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Latent dirichlet allocation", "authors": [ { "first": "David", "middle": [], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Blei, Andrew Ng, and Michael Jordan. 2003. La- tent dirichlet allocation. Journal of Machine Learning Research, 3.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Reading Tea Leaves: How Humans Interpret Topic Models", "authors": [ { "first": "Jonathan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Gerrish", "suffix": "" }, { "first": "Chong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "David", "middle": [], "last": "Blei", "suffix": "" } ], "year": 2009, "venue": "Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, and David Blei. 2009. Reading Tea Leaves: How Humans Interpret Topic Models. 
In Neu- ral Information Processing Systems (NIPS).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Modeling general and specific aspects of documents with a probabilistic topic model", "authors": [ { "first": "Chaitanya", "middle": [], "last": "Chemudugunta", "suffix": "" }, { "first": "Padhraic", "middle": [], "last": "Smyth", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steyvers", "suffix": "" } ], "year": 2006, "venue": "NIPS", "volume": "", "issue": "", "pages": "241--248", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chaitanya Chemudugunta, Padhraic Smyth, and Mark Steyvers. 2006. Modeling general and specific as- pects of documents with a probabilistic topic model. In NIPS, pages 241-248.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Global models of document structure using latent permutations", "authors": [ { "first": "S", "middle": [ "R K" ], "last": "Harr Chen", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Branavan", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Barzilay", "suffix": "" }, { "first": "", "middle": [], "last": "Karger", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09", "volume": "", "issue": "", "pages": "371--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harr Chen, S. R. K. Branavan, Regina Barzilay, and David R. Karger. 2009. Global models of document structure using latent permutations. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics, NAACL '09, pages 371-379.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning to classify email into \"speech acts", "authors": [ { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" }, { "first": "R", "middle": [], "last": "Vitor", "suffix": "" }, { "first": "Tom", "middle": [ "M" ], "last": "Carvalho", "suffix": "" }, { "first": "", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP 2004", "volume": "", "issue": "", "pages": "309--316", "other_ids": {}, "num": null, "urls": [], "raw_text": "William W. Cohen, Vitor R. Carvalho, and Tom M. Mitchell. 2004. Learning to classify email into \"speech acts\". In Proceedings of EMNLP 2004, pages 309-316, Barcelona, Spain, July. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Maximum likelihood from incomplete data via the em algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society. Series B (Methodological)", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society. 
Se- ries B (Methodological), 39(1):1-38.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Relationship identication for social network discovery", "authors": [ { "first": "Christopher", "middle": [ "P" ], "last": "Diehl", "suffix": "" }, { "first": "Galileo", "middle": [], "last": "Namata", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "" } ], "year": 2007, "venue": "AAAI'07", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher P. Diehl, Galileo Namata, and Lise Getoor. 2007. Relationship identication for social network dis- covery. In AAAI'07.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sequential latent dirichlet allocation: Discover underlying topic structures within a document", "authors": [ { "first": "Lan", "middle": [], "last": "Du", "suffix": "" }, { "first": "Wray", "middle": [], "last": "Buntine", "suffix": "" }, { "first": "Huidong", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2010, "venue": "IEEE International Conference on Data Mining", "volume": "", "issue": "", "pages": "148--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lan Du, Wray Buntine, and Huidong Jin. 2010. Se- quential latent dirichlet allocation: Discover underly- ing topic structures within a document. 2010 IEEE International Conference on Data Mining, pages 148- 157.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "It pays to be picky: An evaluation of thread retrieval in online forums", "authors": [ { "first": "Jonathan", "middle": [ "L" ], "last": "Elsas", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2009, "venue": "32nd Annual International ACM SIGIR Conference on Research and Development on Information Retrieval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan L. Elsas and Jaime Carbonell. 2009. It pays to be picky: An evaluation of thread retrieval in online fo- rums. In 32nd Annual International ACM SIGIR Con- ference on Research and Development on Information Retrieval(SIGIR 2009).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Incorporating non-local information into information extraction systems by gibbs sampling", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher D. Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A fully bayesian approach to unsupervised part-of-speech tagging", "authors": [ { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Griffiths", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "744--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon Goldwater and Tom Griffiths. 2007. A fully bayesian approach to unsupervised part-of-speech tag- ging. 
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744-751, Prague, Czech Republic, June. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Finding scientific topics", "authors": [ { "first": "Tom", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steyvers", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the National Academy of Sciences of the United States of America", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Griffiths and Mark Steyvers. 2004. Finding scien- tific topics. In Proceedings of the National Academy of Sciences of the United States of America.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Hidden topic markov models", "authors": [ { "first": "Amit", "middle": [], "last": "Gruber", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Rosen-Zvi", "suffix": "" }, { "first": "Yair", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2007, "venue": "Artificial Intelligence and Statistics (AISTATS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Gruber, Michal Rosen-Zvi, and Yair Weiss. 2007. Hidden topic markov models. In Artificial Intelligence and Statistics (AISTATS), San Juan, Puerto Rico.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A markov clustering topic model for mining behaviour in video", "authors": [ { "first": "Timothy", "middle": [], "last": "Hospedales", "suffix": "" }, { "first": "Shaogang", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2009, "venue": "International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Hospedales, Shaogang Gong, and Tao Xiang. 2009. A markov clustering topic model for mining behaviour in video. In International Conference on Computer Vision (ICCV).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Unsupervised modeling of dialog acts in asynchronous conversations", "authors": [ { "first": "R", "middle": [], "last": "Shafiq", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Carenini", "suffix": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2011, "venue": "IJCAI", "volume": "", "issue": "", "pages": "1807--1813", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shafiq R. Joty, Giuseppe Carenini, and Chin-Yew Lin. 2011. Unsupervised modeling of dialog acts in asyn- chronous conversations. In IJCAI, pages 1807-1813.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Tagging and linking web forum posts", "authors": [ { "first": "Nam", "middle": [], "last": "Su", "suffix": "" }, { "first": "Li", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL '10", "volume": "", "issue": "", "pages": "192--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Nam Kim, Li Wang, and Timothy Baldwin. 2010. Tagging and linking web forum posts. 
In Proceedings of the Fourteenth Conference on Computational Natu- ral Language Learning, CoNLL '10, pages 192-202.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Comparing clusterings by the variation of information. Learning Theory and Kernel Machines", "authors": [ { "first": "Marina", "middle": [], "last": "Meila", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "173--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Meila. 2003. Comparing clusterings by the vari- ation of information. Learning Theory and Kernel Ma- chines, pages 173-187.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Topic models conditioned on arbitrary features with dirichlet-multinomial regression", "authors": [ { "first": "D", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2008, "venue": "UAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Mimno and A. McCallum. 2008. Topic models condi- tioned on arbitrary features with dirichlet-multinomial regression. In UAI.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Estimating a dirichlet distribution", "authors": [ { "first": "Tom", "middle": [], "last": "Minka", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Minka. 2003. Estimating a dirichlet distribution.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Classifying sentences as speech acts in message board posts", "authors": [ { "first": "Ashequl", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "748--758", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashequl Qadir and Ellen Riloff. 2011. Classifying sen- tences as speech acts in message board posts. In Pro- ceedings of the 2011 Conference on Empirical Meth- ods in Natural Language Processing, pages 748-758, Edinburgh, Scotland, UK., July. Association for Com- putational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Unsupervised modeling of twitter conversations", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10", "volume": "", "issue": "", "pages": "172--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsu- pervised modeling of twitter conversations. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 172-180.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Discourse processing of dialogues with multiple threads", "authors": [ { "first": "Barbara", "middle": [ "Di" ], "last": "Carolyn Penstein Ros\u00e9", "suffix": "" }, { "first": "Lori", "middle": [ "S" ], "last": "Eugenio", "suffix": "" }, { "first": "Carol", "middle": [], "last": "Levin", "suffix": "" }, { "first": "", "middle": [], "last": "Van Ess-Dykema", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics, ACL '95", "volume": "", "issue": "", "pages": "31--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carolyn Penstein Ros\u00e9, Barbara Di Eugenio, Lori S. Levin, and Carol Van Ess-Dykema. 1995. Discourse processing of dialogues with multiple threads. In Pro- ceedings of the 33rd annual meeting on Association for Computational Linguistics, ACL '95, pages 31-38.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A taxonomy of illocutionary acts", "authors": [ { "first": "John", "middle": [], "last": "Searle", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Searle, 1975. A taxonomy of illocutionary acts. University of Minnesota Press, Minneapolis.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Online community search using thread structure", "authors": [ { "first": "Jangwon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "ACM Conference on Information and Knowledge Management (CIKM 2009)", "volume": "", "issue": "", "pages": "1907--1910", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jangwon Seo, W. Bruce Croft, and David A. Smith. 2009. Online community search using thread struc- ture. In ACM Conference on Information and Knowl- edge Management (CIKM 2009), pages 1907-1910.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Bates", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Carol", "middle": [], "last": "Van Ess-Dykema", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Ries", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Meteer", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "3", "pages": "339--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Eliza- beth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. 
Computational Linguistics, 26(3):339-373, September.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Dialog act tagging with support vector machines and hidden markov models", "authors": [ { "first": "Dinoj", "middle": [], "last": "Surendran", "suffix": "" }, { "first": "Gina-Anne", "middle": [], "last": "Levow", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dinoj Surendran and Gina-Anne Levow. 2006. Dialog act tagging with support vector machines and hidden markov models. In Interspeech.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Rethinking LDA: Why priors matter", "authors": [ { "first": "M", "middle": [], "last": "Hanna", "suffix": "" }, { "first": "David", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanna M. Wallach, David Mimno, and Andrew McCal- lum. 2009. Rethinking LDA: Why priors matter. In NIPS.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Topic modeling: beyond bag-ofwords", "authors": [ { "first": "M", "middle": [], "last": "Wallach", "suffix": "" } ], "year": 2006, "venue": "ICML '06: Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "977--984", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Wallach. 2006. Topic modeling: beyond bag-of- words. In ICML '06: Proceedings of the 23rd inter- national conference on Machine learning, pages 977- 984.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Learning online discussion structures by conditional random fields", "authors": [ { "first": "Hongning", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2011, "venue": "34th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'11)", "volume": "", "issue": "", "pages": "435--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongning Wang, Chi Wang, ChengXiang Zhai, and Ji- awei Han. 2011a. Learning online discussion struc- tures by conditional random fields. In 34th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'11), pages 435-444.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Structural topic model for latent topical structure analysis", "authors": [ { "first": "Hongning", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Duo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2011, "venue": "The Association for Computer Linguistics", "volume": "", "issue": "", "pages": "1526--1535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongning Wang, Duo Zhang, and ChengXiang Zhai. 2011b. Structural topic model for latent topical struc- ture analysis. In ACL, pages 1526-1535. 
The Associ- ation for Computer Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Predicting thread discourse structure over technical web forums", "authors": [ { "first": "Li", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Lui", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP 2011", "volume": "", "issue": "", "pages": "13--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Wang, Marco Lui, Su Nam Kim, Joakim Nivre, and Timothy Baldwin. 2011c. Predicting thread discourse structure over technical web forums. In Proceedings of EMNLP 2011, pages 13-25.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Fora: Leveraging the power of internet communities for question answering", "authors": [ { "first": "Gu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wei-Ying", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2008, "venue": "1st International Workshop on Question Answering on the Web", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gu Xu, Hang Li, and Wei-Ying Ma. 2008. Fora: Lever- aging the power of internet communities for question answering. In 1st International Workshop on Question Answering on the Web (QAWeb08).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Email thread reassembly using similarity matching", "authors": [ { "first": "Jen-Yuan", "middle": [], "last": "Yeh", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Harnly", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 3rd Conference on Email and Anti-Spam (CEAS 2006)", "volume": "", "issue": "", "pages": "64--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jen-Yuan Yeh and Aaron Harnly. 2006. Email thread reassembly using similarity matching. In Proceedings of the 3rd Conference on Email and Anti-Spam (CEAS 2006), pages 64-71.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Accuracy at the task of thread reconstruction. The horizontal bar indicates a random baseline.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "The variation of information between the human-created speech act annotations of the CNET corpus and the latent class assignments by various models.", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "Example output from a model trained on the Twitter corpus with 15 classes (7 shown). Each node corresponds to a class learned by the model, and the most probable words are shown for each class. The symbols + and \u2212 on the directed edges denote the sign of the \u03bb associated with transitioning from one class to another, and the size of the symbols is scaled by the magnitude of \u03bb. Non-edge arrows going into a node represent the weight of starting a conversation with that class. Low-magnitude weights are not shown, and some edges are omitted to avoid clutter.", "type_str": "figure", "num": null }, "TABREF0": { "num": null, "content": "
Latent classes   5      10     15     20     25
CNET
Unigram          63.07  63.07  63.07  63.07  63.07
LDA              57.16  54.35  52.88  51.63  50.50
BHMM             61.26  61.06  60.92  60.86  60.85
M4               60.38  59.58  59.26  59.21  59.25
Twitter
Unigram          93.00  93.00  93.00  93.00  93.00
LDA              83.70  78.40  74.01  70.91  70.16
BHMM             90.51  89.94  89.68  89.59  89.38
M4               88.44  86.17  85.50  85.55  86.31
", "type_str": "table", "text": "Table 1: Average perplexity of held-out data for various numbers of latent classes.", "html": null } } } }