{
"paper_id": "N09-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:42:19.371545Z"
},
"title": "Global Models of Document Structure Using Latent Permutations",
"authors": [
{
"first": "Harr",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": ""
},
{
"first": "S",
"middle": [
"R K"
],
"last": "Branavan",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "branavan@csail.mit.edu"
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "regina@csail.mit.edu"
},
{
"first": "David",
"middle": [
"R"
],
"last": "Karger",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "karger@csail.mit.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel Bayesian topic model for learning discourse-level document structure. Our model leverages insights from discourse theory to constrain latent topic assignments in a way that reflects the underlying organization of document topics. We propose a global model in which both topic selection and ordering are biased to be similar across a collection of related documents. We show that this space of orderings can be elegantly represented using a distribution over permutations called the generalized Mallows model. Our structureaware approach substantially outperforms alternative approaches for cross-document comparison and single-document segmentation. 1",
"pdf_parse": {
"paper_id": "N09-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel Bayesian topic model for learning discourse-level document structure. Our model leverages insights from discourse theory to constrain latent topic assignments in a way that reflects the underlying organization of document topics. We propose a global model in which both topic selection and ordering are biased to be similar across a collection of related documents. We show that this space of orderings can be elegantly represented using a distribution over permutations called the generalized Mallows model. Our structureaware approach substantially outperforms alternative approaches for cross-document comparison and single-document segmentation. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we introduce a novel latent topic model for the unsupervised learning of document structure. Traditional topic models assume that topics are randomly spread throughout a document, or that the succession of topics in a document is Markovian. In contrast, our approach takes advantage of two important discourse-level properties of text in determining topic assignments: first, that each document follows a progression of nonrecurring coherent topics (Halliday and Hasan, 1976) ; and second, that documents from the same domain tend to present similar topics, in similar orders (Wray, 2002) . We show that a topic model incorporating these long-range dependencies outperforms al-ternative approaches for segmentation and crossdocument comparison.",
"cite_spans": [
{
"start": 464,
"end": 490,
"text": "(Halliday and Hasan, 1976)",
"ref_id": null
},
{
"start": 591,
"end": 603,
"text": "(Wray, 2002)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For example, consider a collection of encyclopedia articles about cities. The first constraint captures the notion that a single topic, such as Architecture, is expressed in a contiguous block within the document, rather than spread over disconnected sections. The second constraint reflects our intuition that all of these related articles will generally mention some major topics associated with cities, such as History and Culture, and will often exhibit similar topic orderings, such as placing History before Culture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present a Bayesian latent topic model over related documents that encodes these discourse constraints by positing a single distribution over a document's entire topic structure. This global view on ordering is able to elegantly encode discourse-level properties that would be difficult to represent using local dependencies, such as those induced by hidden Markov models. Our model enforces that the same topic does not appear in disconnected portions of the topic sequence. Furthermore, our approach biases toward selecting sequences with similar topic ordering, by modeling a distribution over the space of topic permutations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Learning this ordering distribution is a key technical challenge in our proposed approach. For this purpose, we employ the generalized Mallows model, a permutation distribution that concentrates probability mass on a small set of similar permutations. It directly captures the intuition of the second constraint, and uses a small parameter set to control how likely individual topics are to be reordered. We evaluate our model on two challenging document-level tasks. In the alignment task, we aim to discover paragraphs across different documents that share the same topic. We also consider the segmentation task, where the goal is to partition each document into a sequence of topically coherent segments. We find that our structure modeling approach substantially outperforms state-of-the-art baselines for both tasks. Furthermore, we demonstrate the importance of explicitly modeling a distribution over topic permutations; our model yields significantly better results than variants that either use a fixed ordering, or are order-agnostic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Topic and Content Models Our work is grounded in topic modeling approaches, which posit that latent state variables control the generation of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In earlier topic modeling work such as latent Dirichlet allocation (LDA) (Blei et al., 2003; Griffiths and Steyvers, 2004) , documents are treated as bags of words, where each word receives a separate topic assignment; the topic assignments are auxiliary variables to the main task of language modeling. More recent work has attempted to adapt the concepts of topic modeling to more sophisticated representations than a bag of words; they use these representations to impose stronger constraints on topic assignments (Griffiths et al., 2005; Wallach, 2006; Purver et al., 2006; Gruber et al., 2007) . These approaches, however, generally model Markovian topic or state transitions, which only capture local dependencies between adjacent words or blocks within a document. For instance, content models (Barzilay and Lee, 2004; Elsner et al., 2007) are implemented as HMMs, where the states correspond to topics of domain-specific information, and transitions reflect pairwise ordering preferences. Even approaches that break text into contiguous chunks (Titov and McDonald, 2008) assign topics based on local context. While these locally constrained models can implicitly reflect some discourse-level constraints, they cannot capture long-range dependencies without an explosion of the parameter space. In contrast, our model captures the entire sequence of topics using a compact representation. As a result, we can explicitly and tractably model global discourse-level constraints.",
"cite_spans": [
{
"start": 73,
"end": 92,
"text": "(Blei et al., 2003;",
"ref_id": "BIBREF6"
},
{
"start": 93,
"end": 122,
"text": "Griffiths and Steyvers, 2004)",
"ref_id": "BIBREF12"
},
{
"start": 517,
"end": 541,
"text": "(Griffiths et al., 2005;",
"ref_id": "BIBREF13"
},
{
"start": 542,
"end": 556,
"text": "Wallach, 2006;",
"ref_id": "BIBREF29"
},
{
"start": 557,
"end": 577,
"text": "Purver et al., 2006;",
"ref_id": "BIBREF25"
},
{
"start": 578,
"end": 598,
"text": "Gruber et al., 2007)",
"ref_id": "BIBREF14"
},
{
"start": 801,
"end": 825,
"text": "(Barzilay and Lee, 2004;",
"ref_id": "BIBREF0"
},
{
"start": 826,
"end": 846,
"text": "Elsner et al., 2007)",
"ref_id": "BIBREF9"
},
{
"start": 1052,
"end": 1078,
"text": "(Titov and McDonald, 2008)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Modeling Ordering Constraints Sentence ordering has been extensively studied in the context of probabilistic text modeling for summarization and generation (Barzilay et al., 2002; Lapata, 2003; Karamanis et al., 2004) . The emphasis of that body of work is on learning ordering constraints from data, with the goal of reordering new text from the same domain. Our emphasis, however, is on applications where ordering is already observed, and how that ordering can improve text analysis. From the methodological side, that body of prior work is largely driven by local pairwise constraints, while we aim to encode global constraints.",
"cite_spans": [
{
"start": 156,
"end": 179,
"text": "(Barzilay et al., 2002;",
"ref_id": "BIBREF1"
},
{
"start": 180,
"end": 193,
"text": "Lapata, 2003;",
"ref_id": "BIBREF17"
},
{
"start": 194,
"end": 217,
"text": "Karamanis et al., 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our document structure learning problem can be formalized as follows. We are given a corpus of D related documents. Each document expresses some subset of a common set of K topics. We assign a single topic to each paragraph, 2 incorporating the notion that paragraphs are internally topically consistent (Halliday and Hasan, 1976) . To capture the discourse constraint on topic progression described in Section 1, we require that topic assignments be contiguous within each document. 3 Furthermore, we assume that the underlying topic sequences exhibit similarity across documents. Our goal is to recover a topic assignment for each paragraph in the corpus, subject to these constraints.",
"cite_spans": [
{
"start": 304,
"end": 330,
"text": "(Halliday and Hasan, 1976)",
"ref_id": null
},
{
"start": 484,
"end": 485,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "Our formulation shares some similarity with the standard LDA setup, in that a common set of topics is assigned across a collection of documents. However, in LDA each word's topic assignment is conditionally independent, following the bag of words view of documents. In contrast, our constraints on how topics are assigned let us connect word distributional patterns to document-level topic structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "We propose a generative Bayesian model that explains how a corpus of D documents, given as sequences of paragraphs, can be produced from a set of hidden topic variables. Topic assignments to each paragraph, ranging from 1 to K, are the model's final output, implicitly grouping topically similar paragraphs. At a high level, the process first selects the bag of topics to be expressed in the document, and how they are ordered; these topics then determine the selection of words for each paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "For each document d with N d paragraphs, we separately generate a bag of topics t d and a topic ordering \u03c0 d . The unordered bag of topics, which contains N d elements, expresses how many paragraphs of the document are assigned to each of the K topics. Note that some topics may not appear at all. Variable t d is constructed by taking N d samples from a distribution over topics \u03c4 , a multinomial representing the probability of each topic being expressed. Sharing \u03c4 between documents captures the intuition that certain topics are more likely across the entire corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "The topic ordering variable \u03c0 d is a permutation over the numbers 1 through K that defines the order in which topics appear in the document. We draw \u03c0 d from the generalized Mallows model, a distribution over permutations that we explain in Section 4.1. As we will see, this particular distribution biases the permutation selection to be close to a single centroid, reflecting the discourse constraint of preferring similar topic structures across documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "Together, a document's bag of topics t d and ordering \u03c0 d determine the topic assignment z d,p for each of its paragraphs. For example, in a corpus with K = 4, a seven-paragraph document d with t d = {1, 1, 1, 1, 2, 4, 4} and \u03c0 d = (2 4 3 1) would induce the topic sequence z d = (2 4 4 1 1 1 1). The induced topic sequence z d can never assign the same topic to two unconnected portions of a document, thus satisfying the constraint of topic contiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "As with LDA, we assume that each topic k is associated with a language model \u03b8 k . The words of a paragraph assigned to topic k are then drawn from that topic's language model \u03b8 k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "Before turning to a more formal discussion of the generative process, we first provide background on the permutation model for topic ordering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "A central challenge of the approach we take is modeling the distribution over possible topic permutations. For this purpose we use the generalized Mallows model (GMM) (Fligner and Verducci, 1986; Lebanon and Lafferty, 2002; Meil\u0203 et al., 2007) , which exhibits two appealing properties in the context of this task. First, the model concentrates probability mass on some \"canonical\" ordering and small perturbations of that ordering. This characteristic matches our constraint that documents from the same domain exhibit structural similarity. Second, its parameter set scales linearly with the permutation length, making it sufficiently constrained and tractable for inference. In general, this distribution could potentially be applied to other NLP applications where ordering is important.",
"cite_spans": [
{
"start": 167,
"end": 195,
"text": "(Fligner and Verducci, 1986;",
"ref_id": "BIBREF10"
},
{
"start": 196,
"end": 223,
"text": "Lebanon and Lafferty, 2002;",
"ref_id": "BIBREF18"
},
{
"start": 224,
"end": 243,
"text": "Meil\u0203 et al., 2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Generalized Mallows Model",
"sec_num": "4.1"
},
{
"text": "Permutation Representation Typically, permutations are represented directly as an ordered sequence of elements. The GMM utilizes an alternative representation defined as a vector (v 1 , . . . , v K\u22121 ) of inversion counts with respect to the identity permutation (1, . . . , K). Term v j counts the number of times a value greater than j appears before j in the permutation. 4 For instance, given the standard-form permutation (3 1 5 2 4), v 2 = 2 because 3 and 5 appear before 2; the entire inversion count vector would be (1 2 0 1). Every vector of inversion counts uniquely identifies a single permutation.",
"cite_spans": [
{
"start": 375,
"end": 376,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Generalized Mallows Model",
"sec_num": "4.1"
},
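The inversion-count representation is easy to compute mechanically. Below is a minimal Python sketch (our illustration, not part of the paper or its released code) that maps a standard-form permutation to the vector (v_1, ..., v_{K-1}) described above and back again, checked against the (3 1 5 2 4) example from the text.

```python
def inversion_counts(perm):
    """Inversion-count vector (v_1, ..., v_{K-1}) of a standard-form permutation.

    v_j counts how many values greater than j appear before j in the permutation.
    """
    K = len(perm)
    v = []
    for j in range(1, K):                 # v_K is always 0, so it is omitted
        pos_j = perm.index(j)
        v.append(sum(1 for x in perm[:pos_j] if x > j))
    return v


def permutation_from_counts(v):
    """Rebuild the unique permutation that has inversion counts v."""
    K = len(v) + 1
    slots = [None] * K
    # Placing values in increasing order, value j must have exactly v_j empty
    # slots (future, larger values) to its left.
    for j in range(1, K + 1):
        count = v[j - 1] if j < K else 0
        free = [i for i, s in enumerate(slots) if s is None]
        slots[free[count]] = j
    return slots


# The example from the text: (3 1 5 2 4) has inversion counts (1 2 0 1).
assert inversion_counts([3, 1, 5, 2, 4]) == [1, 2, 0, 1]
assert permutation_from_counts([1, 2, 0, 1]) == [3, 1, 5, 2, 4]
```

The round-trip property is what allows the model to work entirely in inversion-count space.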
{
"text": "The Distribution The GMM assigns probability mass according to the distance of a given permutation from the identity permutation {1, . . . , K}, based on K \u2212 1 real-valued parameters (\u03c1 1 , . . . \u03c1 K\u22121 ). 5 Using the inversion count representation of a permutation, the GMM's probability mass function is expressed as an independent product of probabilities for each v j :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generalized Mallows Model",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "GMM(v | \u03c1) = e \u2212 j \u03c1 j v j \u03c8(\u03c1) = n\u22121 j=1 e \u2212\u03c1 j v j \u03c8 j (\u03c1 j ) ,",
"eq_num": "(1)"
}
],
"section": "The Generalized Mallows Model",
"sec_num": "4.1"
},
{
"text": "where \u03c8 j (\u03c1 j ) is a normalization factor with value:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generalized Mallows Model",
"sec_num": "4.1"
},
{
"text": "\u03c8 j (\u03c1 j ) = 1 \u2212 e \u2212(K\u2212j+1)\u03c1 j 1 \u2212 e \u2212\u03c1 j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generalized Mallows Model",
"sec_num": "4.1"
},
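Putting equation 1 and the normalization factor psi_j together, a GMM log-probability evaluator takes only a few lines. The following Python sketch is our illustration (not the authors' released code), under the paper's convention that the centroid is the identity permutation.

```python
import math


def gmm_log_prob(v, rho):
    """log GMM(v | rho) for inversion counts v = (v_1, ..., v_{K-1}), rho_j > 0.

    Each factor of equation 1 contributes -rho_j * v_j - log psi_j(rho_j), with
    psi_j(rho_j) = (1 - exp(-(K - j + 1) * rho_j)) / (1 - exp(-rho_j)).
    """
    K = len(v) + 1
    log_p = 0.0
    for j, (v_j, rho_j) in enumerate(zip(v, rho), start=1):
        log_psi = math.log(1.0 - math.exp(-(K - j + 1) * rho_j)) \
                  - math.log(1.0 - math.exp(-rho_j))
        log_p += -rho_j * v_j - log_psi
    return log_p


# The identity permutation (all v_j = 0) is the mode, so it scores highest.
rho = [1.0, 1.0, 1.0, 1.0]
assert gmm_log_prob([0, 0, 0, 0], rho) >= gmm_log_prob([1, 2, 0, 1], rho)
```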
{
"text": "Due to the exponential form of the distribution, requiring that \u03c1 j > 0 constrains the GMM to assign highest probability mass to each v j being zero, corresponding to the identity permutation. A higher value for \u03c1 j assigns more probability mass to v j being close to zero, biasing j to have fewer inversions. The GMM elegantly captures our earlier requirement for a probability distribution that concentrates mass around a global ordering, and uses few parameters to do so. Because the topic numbers in our task are completely symmetric and not linked to any extrinsic observations, fixing the identity permutation to be that global ordering does not sacrifice any representational power. Another major benefit of the GMM is its membership in the exponential family of distributions; this means that it is particularly amenable to a Bayesian representation, as it admits a natural conjugate prior:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generalized Mallows Model",
"sec_num": "4.1"
},
{
"text": "GMM 0 (\u03c1 j | v j,0 , \u03bd 0 ) \u221d e (\u2212\u03c1 j v j,0 \u2212log \u03c8 j (\u03c1 j ))\u03bd 0 . (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generalized Mallows Model",
"sec_num": "4.1"
},
{
"text": "Intuitively, this prior states that over \u03bd 0 prior trials, the total number of inversions was \u03bd 0 v j,0 . This distribution can be easily updated with the observed v j to derive a posterior distribution. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generalized Mallows Model",
"sec_num": "4.1"
},
{
"text": "We now fully specify the details of our model. We observe a corpus of D documents, each an ordered sequence of paragraphs, and a specification of a number of topics K. Each paragraph is represented as a bag of words. The model induces a set of hidden variables that probabilistically explain how the words of the corpus were produced. Our final desired output is the distributions over the paragraphs' hidden topic assignment variables. In the following, variables subscripted with 0 are fixed prior hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Generative Process",
"sec_num": "4.2"
},
{
"text": "1. For each topic k, draw a language model \u03b8 k \u223c Dirichlet(\u03b8 0 ). As with LDA, these are topicspecific word distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Generative Process",
"sec_num": "4.2"
},
{
"text": "2. Draw a topic distribution \u03c4 \u223c Dirichlet(\u03c4 0 ), which expresses how likely each topic is to appear regardless of position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Generative Process",
"sec_num": "4.2"
},
{
"text": "1 exp(\u03c1 0 )\u22121 \u2212 K\u2212j+1 exp((K\u2212j+1)\u03c1 0 )\u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Generative Process",
"sec_num": "4.2"
},
{
"text": "3. Draw the topic ordering distribution parameters \u03c1 j \u223c GMM 0 (\u03c1 0 , \u03bd 0 ) for j = 1 to K \u2212 1. These parameters control how rapidly probability mass decays for having more inversions for each topic. A separate \u03c1 j for every topic allows us to learn that some topics are more likely to be reordered than others. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Generative Process",
"sec_num": "4.2"
},
{
"text": "p: w d,p,j \u223c Multinomial(\u03b8 z d,p ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Generative Process",
"sec_num": "4.2"
},
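To make the generative story concrete, here is a Python sketch (our illustration; numpy is an assumed dependency, and word generation from the language models theta is omitted) that draws one document's bag of topics, inversion counts, ordering, and induced topic sequence, following steps 2-4 and the figure.

```python
import numpy as np


def sample_document_structure(tau, rho, n_paragraphs, seed=0):
    """Draw t_d, pi_d, z_d for one document (steps 2-4; word sampling omitted).

    tau : length-K topic probabilities; rho : length K-1 GMM dispersion parameters.
    """
    rng = np.random.default_rng(seed)
    K = len(tau)
    # (a) bag of topics: how many paragraphs each topic receives
    topic_counts = rng.multinomial(n_paragraphs, tau)
    # (b) inversion counts: P(v_j = v) is proportional to exp(-rho_j * v), v = 0..K-j
    v = []
    for j in range(1, K):
        support = np.arange(K - j + 1)
        weights = np.exp(-rho[j - 1] * support)
        v.append(int(rng.choice(support, p=weights / weights.sum())))
    # rebuild pi_d from its inversion counts (value j gets v_j larger values before it)
    slots = [None] * K
    for j in range(1, K + 1):
        count = v[j - 1] if j < K else 0
        free = [i for i, s in enumerate(slots) if s is None]
        slots[free[count]] = j
    pi_d = slots
    # (c) induced topic sequence: each selected topic forms one contiguous block
    z_d = [k for k in pi_d for _ in range(topic_counts[k - 1])]
    return topic_counts, pi_d, z_d


# Example: K = 4 topics, a seven-paragraph document, mildly concentrated orderings.
t_counts, pi_d, z_d = sample_document_structure([0.4, 0.3, 0.2, 0.1], [1.0, 1.0, 1.0], 7)
```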
{
"text": "The variables that we aim to infer are the topic assignments z of each paragraph, which are determined by the bag of topics t and ordering \u03c0 for each document. Thus, our goal is to estimate the marginal distributions of t and \u03c0 given the document text. We accomplish this inference task through Gibbs sampling (Bishop, 2006) . A Gibbs sampler builds a Markov chain over the hidden variable state space whose stationary distribution is the actual posterior of the joint distribution. Each new sample is drawn from the distribution of a single variable conditioned on previous samples of the other variables. We can \"collapse\" the sampler by integrating over some of the hidden variables in the model, in effect reducing the state space of the Markov chain. Collapsed sampling has been previously demonstrated to be effective for LDA and its variants (Griffiths and Steyvers, 2004; Porteous et al., 2008; Titov and McDonald, 2008) . Our sampler integrates over all but three sets of hidden variables: bags of topics t, orderings \u03c0, and permutation inversion parameters \u03c1. After a burn-in period, we treat the last samples of t and \u03c0 as a draw from the true posterior. Document Probability As a preliminary step, consider how to calculate the probability of a single document's words w d given the document's paragraph topic assignments z d , and other documents and their topic assignments. Note that this probability is decomposable into a product of probabilities over individual paragraphs, where paragraphs with different topics have conditionally independent word probabilities. Let w \u2212d and z \u2212d indicate the words and topic assignments to documents other than d, and W be the vocabulary size. The probability of the words in d is then:",
"cite_spans": [
{
"start": 310,
"end": 324,
"text": "(Bishop, 2006)",
"ref_id": "BIBREF5"
},
{
"start": 849,
"end": 879,
"text": "(Griffiths and Steyvers, 2004;",
"ref_id": "BIBREF12"
},
{
"start": 880,
"end": 902,
"text": "Porteous et al., 2008;",
"ref_id": "BIBREF24"
},
{
"start": 903,
"end": 928,
"text": "Titov and McDonald, 2008)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (w d | z, w \u2212d , \u03b8 0 ) = K k=1 \u03b8 k P (w d | z d , \u03b8 k )P (\u03b8 k | z, w \u2212d , \u03b8 0 )d\u03b8 k = K k=1 DCM({w d,i : z d,i = k} | {w \u2212d,i : z \u2212d,i = k}, \u03b8 0 ),",
"eq_num": "(3)"
}
],
"section": "Inference",
"sec_num": "5"
},
{
"text": "where DCM(\u2022) refers to the Dirichlet compound multinomial distribution, the result of integrating over multinomial parameters with a Dirichlet prior (Bernardo and Smith, 2000) . For a Dirichlet prior with parameters \u03b1 = (\u03b1 1 , . . . , \u03b1 W ), the DCM assigns the following probability to a series of observations x = {x 1 , . . . , x n }:",
"cite_spans": [
{
"start": 149,
"end": 175,
"text": "(Bernardo and Smith, 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "DCM(x | \u03b1) = \u0393( j \u03b1 j ) j \u0393(\u03b1 j ) W i=1 \u0393(N (x, i) + \u03b1 i ) \u0393(|x| + j \u03b1 j ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "where N (x, i) refers to the number of times word i appears in x. Here, \u0393(\u2022) is the Gamma function, a generalization of the factorial for real numbers. Some algebra shows that the DCM's posterior probability density function conditioned on a series of observations y = {y 1 , . . . , y n } can be computed by updating each \u03b1 i with counts of how often word i appears in y:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "DCM(x | y, \u03b1) = DCM(x | \u03b1 1 + N (y, 1), . . . , \u03b1 W + N (y, W )).",
"eq_num": "(4)"
}
],
"section": "Inference",
"sec_num": "5"
},
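Since the DCM reduces to ratios of Gamma functions, both its log-density and the posterior update of equation 4 are short to implement. The sketch below is our illustration (not the paper's code), using scipy.special.gammaln for numerical stability.

```python
from collections import Counter
from scipy.special import gammaln


def log_dcm(x_counts, alpha):
    """log DCM(x | alpha) for word counts x_counts (dict: word id -> count),
    with a vocabulary of size W = len(alpha)."""
    n = sum(x_counts.values())
    log_p = gammaln(sum(alpha)) - sum(gammaln(a) for a in alpha)
    log_p += sum(gammaln(x_counts.get(i, 0) + alpha[i]) for i in range(len(alpha)))
    log_p -= gammaln(n + sum(alpha))
    return log_p


def posterior_alpha(y_counts, alpha):
    """Equation 4: conditioning on observations y just adds their counts to alpha."""
    return [alpha[i] + y_counts.get(i, 0) for i in range(len(alpha))]


# Toy example with a 3-word vocabulary and a symmetric 0.1 prior.
alpha = [0.1, 0.1, 0.1]
y = Counter({0: 4, 2: 1})   # words already assigned to this topic elsewhere
x = Counter({0: 2, 1: 1})   # words of the paragraph being scored
score = log_dcm(x, posterior_alpha(y, alpha))
```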
{
"text": "Equation 3 and 4 will be used again to compute the conditional distributions of the hidden variables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "We now turn to a discussion of how each individual random variable is resampled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "Bag of Topics First we consider how to resample t d,i , the ith topic draw for document d conditioned on all other parameters being fixed (note this is not the topic of the ith paragraph, as we reorder topics using \u03c0 d ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "P (t d,i = t | . . .) \u221d P (t d,i = t | t \u2212(d,i) , \u03c4 0 )P (w d | t d , \u03c0 d , w \u2212d , z \u2212d , \u03b8 0 ) \u221d N (t \u2212(d,i) , t) + \u03c4 0 |t \u2212(d,i) | + K\u03c4 0 P (w d | z, w \u2212d , \u03b8 0 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "where t d is updated to reflect t d,i = t, and z d is deterministically computed by mapping t d and \u03c0 d to actual paragraph topic assignments. The first step reflects an application of Bayes rule to factor out the term for w d . In the second step, the first term arises out of the DCM, by updating the parameters \u03c4 0 with observations t \u2212(d,i) as in equation 4 and dropping constants. The document probability term is computed using equation 3. The new t d,i is selected by sampling from this probability computed over all possible topic assignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
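Schematically, this Gibbs step enumerates the K candidate topics, scores each by the collapsed prior term times the document likelihood, and samples from the normalized weights. The Python sketch below is our illustration: doc_log_prob is an assumed callback that evaluates equation 3 for a candidate topic sequence and is not defined here.

```python
from collections import Counter
import numpy as np


def resample_topic_draw(i, t_d, pi_d, other_counts, tau_0, K, doc_log_prob, rng):
    """One Gibbs update for t_{d,i} (sketch).

    other_counts : Counter over t_{-(d,i)}, i.e. all topic draws except slot i
    doc_log_prob : assumed callable returning log P(w_d | z_d, w_{-d}, theta_0)
                   as in equation 3 (not implemented here)
    """
    n_other = sum(other_counts.values())
    log_w = np.empty(K)
    for t in range(1, K + 1):
        t_d[i] = t                                        # tentative assignment for slot i
        block_sizes = Counter(t_d)
        z_d = [k for k in pi_d for _ in range(block_sizes[k])]  # contiguous blocks in pi_d's order
        prior = (other_counts.get(t, 0) + tau_0) / (n_other + K * tau_0)
        log_w[t - 1] = np.log(prior) + doc_log_prob(z_d)
    p = np.exp(log_w - log_w.max())                       # normalize in log space
    t_d[i] = int(rng.choice(np.arange(1, K + 1), p=p / p.sum()))
    return t_d
```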
{
"text": "Ordering The parameterization of a permutation \u03c0 as a series of inversion values v j reveals a natural way to decompose the search space for Gibbs sampling. For a single ordering, each v j can be sampled independently, according to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "P (v j = v | . . .) \u221d P (v j = v | \u03c1 j )P (w d | t d , \u03c0 d , w \u2212d , z \u2212d , \u03b8 0 ) = GMM j (v | \u03c1 j )P (w d | z d , w \u2212d , z \u2212d , \u03b8 0 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "where \u03c0 d is updated to reflect v j = v, and z d is computed according to t d and \u03c0 d . The first term refers to the jth multiplicand of equation 1; the second is computed using equation 3. Term v j is sampled according to the resulting probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
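The v_j update has the same shape as the t_{d,i} update: enumerate the K - j + 1 admissible inversion counts, weight each by its GMM_j factor times the document likelihood, and sample. A sketch (ours) follows; it reuses permutation_from_counts from the inversion-count sketch earlier and again assumes a doc_log_prob callback for equation 3.

```python
from collections import Counter
import numpy as np


def resample_inversion_count(j, v_d, t_d, rho_j, K, doc_log_prob, rng):
    """One Gibbs update for v_j of a single document (sketch).

    Uses permutation_from_counts from the inversion-count sketch above;
    doc_log_prob is again an assumed callback for equation 3.
    """
    support = np.arange(K - j + 1)          # v_j ranges over 0 .. K - j
    log_w = np.empty(len(support))
    block_sizes = Counter(t_d)
    for idx, v in enumerate(support):
        v_d[j - 1] = int(v)
        pi_d = permutation_from_counts(v_d)
        z_d = [k for k in pi_d for _ in range(block_sizes[k])]
        # unnormalized weight: GMM_j factor exp(-rho_j * v) times the document likelihood
        log_w[idx] = -rho_j * v + doc_log_prob(z_d)
    p = np.exp(log_w - log_w.max())
    v_d[j - 1] = int(rng.choice(support, p=p / p.sum()))
    return v_d
```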
{
"text": "GMM Parameters For each j = 1 to K \u2212 1, we resample \u03c1 j from its posterior distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "P (\u03c1 j | . . .) = GMM 0 \u03c1 j i v j,i + v j,0 \u03bd 0 N + \u03bd 0 , N + \u03bd 0 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "where GMM 0 is evaluated according to equation 2. The normalization constant of this distribution is unknown, meaning that we cannot directly compute and invert the cumulative distribution function to sample from this distribution. However, the distribution itself is univariate and unimodal, so we can expect that an MCMC technique such as slice sampling (Neal, 2003) should perform well. In practice, the MATLAB black-box slice sampler provides a robust draw from this distribution. In each data set, the articles' noisy section headings induce a reference structure to compare against. This reference structure assumes that two paragraphs are aligned if and only if their section headings are identical, and that section boundaries provide the correct segmentation of each document. These headings are only used for evaluation, and are not provided to any of the systems.",
"cite_spans": [
{
"start": 356,
"end": 368,
"text": "(Neal, 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
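For completeness, a generic univariate slice sampler in the spirit of Neal (2003) is sketched below (our Python illustration, not the MATLAB sampler used in the paper). To draw rho_j, log_density would be the unnormalized log of the GMM_0 posterior above and should return float('-inf') for rho_j <= 0 so that the stepping-out procedure respects the support.

```python
import math
import random


def slice_sample(log_density, x0, width=1.0, n_updates=50, rng=None):
    """Univariate slice sampler with stepping-out (in the spirit of Neal, 2003).

    log_density must be an unnormalized log density and should return
    float('-inf') outside its support.
    """
    rng = rng or random.Random(0)
    x = x0
    for _ in range(n_updates):
        log_y = log_density(x) + math.log(rng.random())   # auxiliary slice height
        left = x - width * rng.random()                   # randomly positioned initial interval
        right = left + width
        while log_density(left) > log_y:                  # step out until outside the slice
            left -= width
        while log_density(right) > log_y:
            right += width
        while True:                                       # shrink until a point lands in the slice
            candidate = rng.uniform(left, right)
            if log_density(candidate) > log_y:
                x = candidate
                break
            if candidate < x:
                left = candidate
            else:
                right = candidate
    return x
```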
{
"text": "Using the section headings to build the reference structure can be problematic, as the same topic may be referred to using different titles across different documents, and sections may be divided at differing levels of granularity. Thus, for the Cities data set, we manually annotated each article's paragraphs with a consistent set of section headings, providing us an additional reference structure to evaluate against. In this clean section headings set, we found approximately 18 topics that were expressed in more than one document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "We study performance on the tasks of alignment and segmentation. In the former task, we measure whether paragraphs identified to be the same topic by our model have the same section headings, and vice versa. First, we identify the \"closest\" topic to each section heading, by finding the topic that is most commonly assigned to paragraphs under that section heading. We compute the proportion of paragraphs where the model's topic assignment matches the section heading's topic, giving us a recall score. High recall indicates that paragraphs of the same section headings are always being assigned to the same topic. Conversely, we can find the closest section heading to each topic, by finding the section heading that is most common for the paragraphs assigned to a single topic. We then compute the proportion of paragraphs from that topic whose section heading is the same as the reference heading for that topic, yielding a precision score. High precision means that paragraphs assigned to a single topic usually correspond to the same section heading. The harmonic mean of recall and precision is the summary F-score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Metrics",
"sec_num": null
},
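The alignment recall and precision described here are majority-vote matchings between topics and headings; the following Python sketch (ours) computes all three scores from per-paragraph model topics and reference section headings.

```python
from collections import Counter


def alignment_scores(topics, headings):
    """Alignment recall, precision, and F-score; one (topic, heading) pair per paragraph."""
    pairs = list(zip(topics, headings))
    # Recall: map each heading to its most frequent topic, count matching paragraphs.
    topic_for_heading = {
        h: Counter(t for t, h2 in pairs if h2 == h).most_common(1)[0][0]
        for h in set(headings)
    }
    recall = sum(t == topic_for_heading[h] for t, h in pairs) / len(pairs)
    # Precision: map each topic to its most frequent heading, count matching paragraphs.
    heading_for_topic = {
        t: Counter(h for t2, h in pairs if t2 == t).most_common(1)[0][0]
        for t in set(topics)
    }
    precision = sum(h == heading_for_topic[t] for t, h in pairs) / len(pairs)
    f_score = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f_score


# Toy example: four paragraphs, two model topics, two reference headings.
r, p, f = alignment_scores([1, 1, 2, 2], ["History", "History", "History", "Culture"])
```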
{
"text": "Statistical significance in this setup is measured with approximate randomization (Noreen, 1989) , a nonparametric test that can be directly applied to nonlinear metrics such as F-score. This test has been used in prior evaluations for information extraction and machine translation (Chinchor, 1995; Riezler and Maxwell, 2005 ).",
"cite_spans": [
{
"start": 82,
"end": 96,
"text": "(Noreen, 1989)",
"ref_id": "BIBREF22"
},
{
"start": 283,
"end": 299,
"text": "(Chinchor, 1995;",
"ref_id": "BIBREF7"
},
{
"start": 300,
"end": 325,
"text": "Riezler and Maxwell, 2005",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Metrics",
"sec_num": null
},
{
"text": "For the second task, we take the boundaries at which topics change within a document to be a segmentation of that document. We evaluate using the standard penalty metrics P k and WindowDiff (Beeferman et al., 1999; Pevzner and Hearst, 2002) . Both pass a sliding window over the documents and compute the probability of the words at the ends of the windows being improperly segmented with respect to each other. WindowDiff requires that the number of segmentation boundaries between the endpoints be correct as well. 8 Our model takes a parameter K which controls the upper bound on the number of latent topics. Note that our algorithm can select fewer than K topics for each document, so K does not determine the number of segments in each document. We report results using both K = 10 and 20 (recall that the cleanly annotated Cities data set had 18 topics).",
"cite_spans": [
{
"start": 190,
"end": 214,
"text": "(Beeferman et al., 1999;",
"ref_id": "BIBREF2"
},
{
"start": 215,
"end": 240,
"text": "Pevzner and Hearst, 2002)",
"ref_id": "BIBREF23"
},
{
"start": 517,
"end": 518,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Metrics",
"sec_num": null
},
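As a reference point for the segmentation metrics, here is a compact Python sketch (ours) of P_k: it slides a window of size k, conventionally half the mean reference segment length, and counts disagreements about whether the two window endpoints lie in the same segment. WindowDiff additionally compares the number of boundaries inside the window and is a straightforward extension.

```python
def p_k(reference, hypothesis, k=None):
    """P_k segmentation penalty (Beeferman et al., 1999); lower is better.

    reference, hypothesis : segment id of each textual unit, in document order
    k : window size, defaulting to half the mean reference segment length
    """
    n = len(reference)
    if k is None:
        k = max(1, round(n / len(set(reference)) / 2))
    errors = 0
    for i in range(n - k):
        same_ref = reference[i] == reference[i + k]
        same_hyp = hypothesis[i] == hypothesis[i + k]
        errors += same_ref != same_hyp
    return errors / (n - k)


# Toy example: one true boundary, hypothesized one unit too early.
ref = [0, 0, 0, 1, 1, 1]
hyp = [0, 0, 1, 1, 1, 1]
score = p_k(ref, hyp)
```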
{
"text": "We consider baselines from the literature that perform either alignment or segmentation. For the first task, we compare against the hidden topic Markov model (HTMM) (Gruber et al., 2007) , which represents topic transitions between adjacent paragraphs in a Markovian fashion, similar to the approach taken in content modeling work. Note that HTMM can only capture local constraints, so it would allow topics to recur noncontiguously throughout a document.",
"cite_spans": [
{
"start": 165,
"end": 186,
"text": "(Gruber et al., 2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Model Variants",
"sec_num": null
},
{
"text": "We also compare against the structure-agnostic approach of clustering the paragraphs using the CLUTO toolkit, 9 which uses repeated bisection to maximize a cosine similarity-based objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Model Variants",
"sec_num": null
},
{
"text": "For the segmentation task, we compare to BayesSeg (Eisenstein and Barzilay, 2008) , 10 a Bayesian topic-based segmentation model that outperforms previous segmentation approaches (Utiyama and Isahara, 2001; Galley et al., 2003; Purver et al., 2006; Malioutov and Barzilay, 2006) . BayesSeg enforces the topic contiguity constraint that motivated our model. We provide this baseline with the benefit of knowing the correct number of segments for each document, which is not provided to our system. Note that BayesSeg processes each document individually, so it cannot capture structural relatedness across documents.",
"cite_spans": [
{
"start": 50,
"end": 81,
"text": "(Eisenstein and Barzilay, 2008)",
"ref_id": "BIBREF8"
},
{
"start": 179,
"end": 206,
"text": "(Utiyama and Isahara, 2001;",
"ref_id": "BIBREF28"
},
{
"start": 207,
"end": 227,
"text": "Galley et al., 2003;",
"ref_id": "BIBREF11"
},
{
"start": 228,
"end": 248,
"text": "Purver et al., 2006;",
"ref_id": "BIBREF25"
},
{
"start": 249,
"end": 278,
"text": "Malioutov and Barzilay, 2006)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Model Variants",
"sec_num": null
},
{
"text": "To investigate the importance of our ordering model, we consider two variants of our model that alternately relax and tighten ordering constraints. In the constrained model, we require all documents to follow the same canonical ordering of topics. This is equivalent to forcing the topic permutation distribution to give all its probability to one ordering, and can be implemented by fixing all inversion counts v to zero during inference. At the other extreme, we consider the uniform model, which assumes a uniform distribution over all topic permutations instead of biasing toward a small related set. In our implementation, this can be simulated by forcing the GMM parameters \u03c1 to always be zero. Both variants still enforce topic contiguity, and allow segments across documents to be aligned by topic assignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Model Variants",
"sec_num": null
},
{
"text": "Evaluation Procedures For each evaluation of our model and its variants, we run the Gibbs sampler from five random seed states, and take the 10,000th iteration of each chain as a sample. Results shown are the average over these five samples. All Dirichlet prior hyperparameters are set to 0.1, encouraging sparse distributions. For the GMM, we set the prior decay parameter \u03c1 0 to 1, and the sample size prior \u03bd 0 to be 0.1 times the number of documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Model Variants",
"sec_num": null
},
{
"text": "For the baselines, we use implementations publicly released by their authors. We set HTMM's priors according to values recommended in the authors' original work. For BayesSeg, we use its built-in hyperparameter re-estimation mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Model Variants",
"sec_num": null
},
{
"text": "Alignment Table 1 presents the results of the alignment evaluation. In every case, the best performance is achieved using our full model, by a statistically significant and usually substantial margin.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "In both domains, the baseline clustering method performs competitively, indicating that word cues alone are a good indicator of topic. While the simpler variations of our model achieve reasonable performance, adding the richer GMM distribution consistently yields superior results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "Across each of our evaluations, HTMM greatly underperforms the other approaches. Manual examination of the actual topic assignments reveals that HTMM often selects the same topic for disconnected paragraphs of the same document, violating the topic contiguity constraint, and demonstrating the importance of modeling global constraints for document structure tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "We also compare performance measured on the manually annotated section headings against the actual noisy headings. The ranking of methods by performance remains mostly unchanged between these two evaluations, indicating that the noisy headings are sufficient for gaining insight into the comparative performance of the different approaches. Table 2 : Comparison of the segmentations produced by our model and a series of baselines and model variations, for both 10 and 20 topics, evaluated against clean and noisy sets of section headings. Lower scores are better. \u2020BayesSeg is given the true number of segments, so its segments count reflects the reference structure's segmentation.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 348,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "outperforms the BayesSeg baseline by a substantial margin regardless of K. This result provides strong evidence that learning connected topic models over related documents leads to improved segmentation performance. In effect, our model can take advantage of shared structure across related documents. In all but one case, the best performance is obtained by the full version of our model. This result indicates that enforcing discourse-motivated structural constraints allows for better segmentation induction. Encoding global discourse-level constraints leads to better language models, resulting in more accurate predictions of segment boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation",
"sec_num": null
},
{
"text": "In this paper, we have shown how an unsupervised topic-based approach can capture document structure. Our resulting model constrains topic assignments in a way that requires global modeling of entire topic sequences. We showed that the generalized Mallows model is a theoretically and empirically appealing way of capturing the ordering component of this topic sequence. Our results demonstrate the importance of augmenting statistical models of text analysis with structural constraints motivated by discourse theory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "Code, data, and annotations used in this work are available at http://groups.csail.mit.edu/rbg/code/mallows/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that our analysis applies equally to other levels of textual granularity, such as sentences.3 That is, if paragraphs i and j are assigned the same topic, every paragraph between them must have that topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The sum of a vector of inversion counts is simply that permutation's Kendall's \u03c4 distance to the identity permutation.5 In our work we take the identity permutation to be the fixed centroid, which is a parameter in the full GMM. As we explain later, our model is not hampered by this apparent restriction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Because each vj has a different range, it is inconvenient to set the prior hyperparameters vj,0 directly. In our work, we instead fix the mode of the prior distribution to a value \u03c10, which works out to setting vj,0 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Multiple permutations can contribute to the probability of a single document's topic assignments z d , if there are topics that do not appear in t d . As a result, our current formulation is biased toward assignments with fewer topics per document. In practice, we do not find this to negatively impact model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Statistical significance testing is not standardized and usually not reported for the segmentation task, so we omit these tests in our results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://glaros.dtc.umn.edu/gkhome/views/cluto/10 We do not evaluate on the corpora used in their work, since our model relies on content similarity across documents in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors acknowledge the funding support of NSF CAREER grant IIS-0448168, the NSF Graduate Fellowship, the Office of Naval Research, Quanta, Nokia, and the Microsoft Faculty Fellowship. We thank the members of the NLP group at MIT and numerous others who offered suggestions and comments on this work. We are especially grateful to Marina Meil\u0203 for introducing us to the Mallows model. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Catching the drift: Probabilistic content models, with applications to generation and summarization",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of NAACL/HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of NAACL/HLT.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Inferring strategies for sentence ordering in multidocument news summarization",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Artificial Intelligence Research",
"volume": "17",
"issue": "",
"pages": "35--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay, Noemie Elhadad, and Kathleen McKe- own. 2002. Inferring strategies for sentence ordering in multidocument news summarization. Journal of Ar- tificial Intelligence Research, 17:35-55.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical models for text segmentation. Machine Learning",
"authors": [
{
"first": "Doug",
"middle": [],
"last": "Beeferman",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "34",
"issue": "",
"pages": "177--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doug Beeferman, Adam Berger, and John D. Lafferty. 1999. Statistical models for text segmentation. Ma- chine Learning, 34:177-210.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Wiley Series in Probability and Statistics",
"authors": [
{
"first": "Bayesian",
"middle": [],
"last": "Theory",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bayesian Theory. Wiley Series in Probability and Statistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Pattern Recognition and Machine Learning",
"authors": [
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Bishop",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Ng, and Michael Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learn- ing Research, 3:993-1022.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Statistical significance of MUC-6 results",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 6th Conference on Message Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy Chinchor. 1995. Statistical significance of MUC- 6 results. In Proceedings of the 6th Conference on Message Understanding.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bayesian unsupervised topic segmentation",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A unified local and global model for discourse coherence",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Austerweil",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of NAACL/HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Elsner, Joseph Austerweil, and Eugene Charniak. 2007. A unified local and global model for discourse coherence. In Proceedings of NAACL/HLT.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distance based ranking models",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Fligner",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Verducci",
"suffix": ""
}
],
"year": 1986,
"venue": "Journal of the Royal Statistical Society, Series B",
"volume": "48",
"issue": "3",
"pages": "359--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.A. Fligner and J.S. Verducci. 1986. Distance based ranking models. Journal of the Royal Statistical Soci- ety, Series B, 48(3):359-369.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Discourse segmentation of multi-party conversation",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Kathleen R. McKeown, Eric Fosler- Lussier, and Hongyan Jing. 2003. Discourse segmen- tation of multi-party conversation. In Proceedings of ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Finding scientific topics",
"authors": [
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "101",
"issue": "",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Find- ing scientific topics. Proceedings of the National Academy of Sciences, 101:5228-5235.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Integrating topics and syntax",
"authors": [
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths, Mark Steyvers, David M. Blei, and Joshua B. Tenenbaum. 2005. Integrating topics and syntax. In Advances in NIPS.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hidden topic markov models",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Gruber",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Rosen-Zvi",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of AIS-TATS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Gruber, Michal Rosen-Zvi, and Yair Weiss. 2007. Hidden topic markov models. In Proceedings of AIS- TATS.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evaluating centeringbased metrics of coherence for text structuring using a reliably annotated corpus",
"authors": [
{
"first": "Nikiforos",
"middle": [],
"last": "Karamanis",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Mellish",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Oberlander",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikiforos Karamanis, Massimo Poesio, Chris Mellish, and Jon Oberlander. 2004. Evaluating centering- based metrics of coherence for text structuring using a reliably annotated corpus. In Proceedings of ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Probabilistic text structuring: Experiments with sentence ordering",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata. 2003. Probabilistic text structuring: Ex- periments with sentence ordering. In Proceedings of ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Cranking: combining rankings using conditional probability models on permutations",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Lebanon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guy Lebanon and John Lafferty. 2002. Cranking: com- bining rankings using conditional probability models on permutations. In Proceedings of ICML.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Minimum cut model for spoken lecture segmentation",
"authors": [
{
"first": "Igor",
"middle": [],
"last": "Malioutov",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Malioutov and Regina Barzilay. 2006. Minimum cut model for spoken lecture segmentation. In Pro- ceedings of ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Consensus ranking under the exponential model",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Meil\u0203",
"suffix": ""
},
{
"first": "Kapil",
"middle": [],
"last": "Phadnis",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Patterson",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Bilmes",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Meil\u0203, Kapil Phadnis, Arthur Patterson, and Jeff Bilmes. 2007. Consensus ranking under the exponen- tial model. In Proceedings of UAI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Slice sampling",
"authors": [
{
"first": "Radford",
"middle": [
"M"
],
"last": "Neal",
"suffix": ""
}
],
"year": 2003,
"venue": "Annals of Statistics",
"volume": "31",
"issue": "",
"pages": "705--767",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radford M. Neal. 2003. Slice sampling. Annals of Statistics, 31:705-767.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Computer Intensive Methods for Testing Hypotheses. An Introduction",
"authors": [
{
"first": "Eric",
"middle": [
"W"
],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric W. Noreen. 1989. Computer Intensive Methods for Testing Hypotheses. An Introduction. Wiley.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A critique and improvement of an evaluation metric for text segmentation",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Pevzner",
"suffix": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "",
"pages": "19--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Pevzner and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmen- tation. Computational Linguistics, 28:19-36.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Fast collapsed gibbs sampling for latent dirichlet allocation",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Porteous",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ihler",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Asuncion",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of SIGKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Porteous, David Newman, Alexander Ihler, Arthur Asuncion, Padhraic Smyth, and Max Welling. 2008. Fast collapsed gibbs sampling for latent dirichlet allo- cation. In Proceedings of SIGKDD.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Unsupervised topic modelling for multi-party spoken discourse",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "Konrad",
"middle": [],
"last": "K\u00f6rding",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL/COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Purver, Konrad K\u00f6rding, Thomas L. Griffiths, and Joshua B. Tenenbaum. 2006. Unsupervised topic modelling for multi-party spoken discourse. In Pro- ceedings of ACL/COLING.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "On some pitfalls in automatic evaluation and significance testing for MT",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "John",
"middle": [
"T"
],
"last": "Maxwell",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Riezler and John T. Maxwell. 2005. On some pitfalls in automatic evaluation and significance test- ing for MT. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Ma- chine Translation and/or Summarization.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling online reviews with multi-grain topic models",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov and Ryan McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of WWW.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A statistical model for domain-independent text segmentation",
"authors": [
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masao Utiyama and Hitoshi Isahara. 2001. A statistical model for domain-independent text segmentation. In Proceedings of ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Topic modeling: beyond bag of words",
"authors": [
{
"first": "Hanna",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna M. Wallach. 2006. Topic modeling: beyond bag of words. In Proceedings of ICML.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Formulaic Language and the Lexicon",
"authors": [
{
"first": "Alison",
"middle": [],
"last": "Wray",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alison Wray. 2002. Formulaic Language and the Lexi- con. Cambridge University Press, Cambridge.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "For each document d with N d paragraphs: (a) Draw a bag of topics t d by sampling N d times from Multinomial(\u03c4 ). (b) Draw a topic ordering \u03c0 d by sampling a vector of inversion counts v d \u223c GMM(\u03c1). (c) Compute the vector of topic assignments z d for document d's paragraphs, by sorting t d according to \u03c0 d . 7 (d) For each paragraph p in document d: i. Sample each word w d,p,j according to the language model of",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>presents the segmentation</td></tr><tr><td>experiment results. On both data sets, our model</td></tr></table>",
"type_str": "table",
"text": "Comparison of the alignments produced by our model and a series of baselines and model variations, for both 10 and 20 topics, evaluated against clean and noisy sets of section headings. Higher scores are better. Within the same K, the methods which our model significantly outperforms are indicated with * for p < 0.001 and for p < 0.01.",
"num": null
}
}
}
}