{ "paper_id": "N09-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:42:28.567091Z" }, "title": "Hierarchical Text Segmentation from Multi-Scale Lexical Cohesion", "authors": [ { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beckman Institute for Advanced Science and Technology University of Illinois Urbana", "location": { "postCode": "61801", "region": "IL" } }, "email": "jacobe@illinois.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a novel unsupervised method for hierarchical topic segmentation. Lexical cohesion-the workhorse of unsupervised linear segmentation-is treated as a multi-scale phenomenon, and formalized in a Bayesian setting. Each word token is modeled as a draw from a pyramid of latent topic models, where the structure of the pyramid is constrained to induce a hierarchical segmentation. Inference takes the form of a coordinate-ascent algorithm, iterating between two steps: a novel dynamic program for obtaining the globally-optimal hierarchical segmentation, and collapsed variational Bayesian inference over the hidden variables. The resulting system is fast and accurate, and compares well against heuristic alternatives.", "pdf_parse": { "paper_id": "N09-1040", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a novel unsupervised method for hierarchical topic segmentation. Lexical cohesion-the workhorse of unsupervised linear segmentation-is treated as a multi-scale phenomenon, and formalized in a Bayesian setting. Each word token is modeled as a draw from a pyramid of latent topic models, where the structure of the pyramid is constrained to induce a hierarchical segmentation. Inference takes the form of a coordinate-ascent algorithm, iterating between two steps: a novel dynamic program for obtaining the globally-optimal hierarchical segmentation, and collapsed variational Bayesian inference over the hidden variables. The resulting system is fast and accurate, and compares well against heuristic alternatives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recovering structural organization from unformatted texts or transcripts is a fundamental problem in natural language processing, with applications to classroom lectures, meeting transcripts, and chatroom logs. In the unsupervised setting, a variety of successful systems have leveraged lexical cohesion (Halliday and Hasan, 1976) -the idea that topically-coherent segments display consistent lexical distributions (Hearst, 1994; Utiyama and Isahara, 2001 ; Eisenstein and Barzilay, 2008) . However, such systems almost invariably focus on linear segmentation, while it is widely believed that discourse displays a hierarchical structure (Grosz and Sidner, 1986) . 
This paper introduces the concept of multi-scale lexical cohesion, and leverages this idea in a Bayesian generative model for hierarchical topic segmentation.", "cite_spans": [ { "start": 304, "end": 330, "text": "(Halliday and Hasan, 1976)", "ref_id": null }, { "start": 415, "end": 429, "text": "(Hearst, 1994;", "ref_id": "BIBREF9" }, { "start": 430, "end": 455, "text": "Utiyama and Isahara, 2001", "ref_id": "BIBREF22" }, { "start": 458, "end": 488, "text": "Eisenstein and Barzilay, 2008)", "ref_id": "BIBREF2" }, { "start": 638, "end": 662, "text": "(Grosz and Sidner, 1986)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The idea of multi-scale cohesion is illustrated by the following two examples, drawn from the Wikipedia entry for the city of Buenos Aires.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are over 150 city bus lines called Colectivos ... Colectivos in Buenos Aires do not have a fixed timetable, but run from 4 to several per hour, depending on the bus line and time of the day.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Buenos Aires metro has six lines, 74 stations, and 52.3 km of track. An expansion program is underway to extend existing lines into the outer neighborhoods. Track length is expected to reach 89 km...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The two sections are both part of a high-level segment on transportation. Words in bold are characteristic of the subsections (buses and trains, respectively), and do not occur elsewhere in the transportation section; words in italics occur throughout the high-level section, but not elsewhere in the article. This paper shows how multi-scale cohesion can be captured in a Bayesian generative model and exploited for unsupervised hierarchical topic segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Latent topic models (Blei et al., 2003) provide a powerful statistical apparatus with which to study discourse structure. A consistent theme is the treatment of individual words as draws from multinomial language models indexed by a hidden \"topic\" associated with the word. In latent Dirichlet allocation (LDA) and related models, the hidden topic for each word is unconstrained and unrelated to the hidden topic of neighboring words (given the parameters). In this paper, the latent topics are constrained to produce a hierarchical segmentation structure, as shown in Figure 1 .", "cite_spans": [ { "start": 20, "end": 39, "text": "(Blei et al., 2003)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 569, "end": 577, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "w 1 ... w T \u04e8 8 \u04e8 6 \u04e8 7 \u04e8 1 \u04e8 2 \u04e8 3 \u04e8 4 \u04e8 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Each word w t is drawn from a mixture of the language models located above t in the pyramid.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These structural requirements simplify inference, allowing the language models to be analytically marginalized. 
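To make the pyramid structure of Figure 1 concrete, the following sketch (illustrative Python, not drawn from the released implementation; all names are hypothetical) represents a hierarchical segmentation as nested boundary lists, ordered from finest to coarsest level, and maps each token to its segment index at every level; the word at position t may then be generated by any of the language models indexed in this way.

    import bisect

    def segment_indices(T, boundaries):
        # boundaries[l] lists the start positions of the segments at level l,
        # beginning with 0; a boundary at a coarser level must also appear at
        # the finer level below it (hierarchical consistency).
        # Returns y, where y[t][l] is the index of the level-l segment
        # containing token t.
        y = [[0] * len(boundaries) for _ in range(T)]
        for l, starts in enumerate(boundaries):
            for t in range(T):
                y[t][l] = bisect.bisect_right(starts, t) - 1
        return y

    # Ten tokens, a two-level pyramid: fine-level boundaries at 0, 3, 6 and a
    # single coarse-level boundary at 6 nested inside them.
    y = segment_indices(10, [[0, 3, 6], [0, 6]])
    print(y[4])  # [1, 0]: token 4 sits in fine subsegment 1, coarse segment 0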
The remaining hidden variables are the scale-level assignments for each word token. Given marginal distributions over these variables, it is possible to search the entire space of hierarchical segmentations in polynomial time, using a novel dynamic program. Collapsed variational Bayesian inference is then used to update the marginals. This approach achieves high quality segmentation on multiple levels of the topic hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Source code is available at http://people. csail.mit.edu/jacobe/naacl09.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The use of lexical cohesion (Halliday and Hasan, 1976) in unsupervised topic segmentation dates back to Hearst's seminal TEXTTILING system (1994) . Lexical cohesion was placed in a probabilistic (though not Bayesian) framework by Utiyama and Isahara (2001) . The application of Bayesian topic models to text segmentation was investigated first by Blei and Moreno (2001) and later by Purver et al. (2006) , using HMM-like graphical models for linear segmentation. Eisenstein and Barzilay (2008) extend this work by marginalizing the language models using the Dirichlet compound multinomial distribution; this permits efficient inference to be performed directly in the space of segmentations. All of these papers consider only linear topic segmentation; we introduce multi-scale lexical cohesion, which posits that the distribution of some words changes slowly with high-level topics, while others change rapidly with lower-level subtopics. This gives a principled mechanism to model hierarchical topic segmentation.", "cite_spans": [ { "start": 28, "end": 54, "text": "(Halliday and Hasan, 1976)", "ref_id": null }, { "start": 104, "end": 145, "text": "Hearst's seminal TEXTTILING system (1994)", "ref_id": null }, { "start": 230, "end": 256, "text": "Utiyama and Isahara (2001)", "ref_id": "BIBREF22" }, { "start": 347, "end": 369, "text": "Blei and Moreno (2001)", "ref_id": "BIBREF0" }, { "start": 383, "end": 403, "text": "Purver et al. (2006)", "ref_id": "BIBREF19" }, { "start": 463, "end": 493, "text": "Eisenstein and Barzilay (2008)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The literature on hierarchical topic segmentation is relatively sparse. Hsueh et al. (2006) describe a supervised approach that trains separate classifiers for topic and sub-topic segmentation; more relevant for the current work is the unsupervised method of Yaari (1997) . As in TEXTTILING, cohesion is measured using cosine similarity, and agglomerative clustering is used to induce a dendrogram over paragraphs; the dendrogram is transformed into a hierarchical segmentation using a heuristic algorithm. Such heuristic approaches are typically brittle, as they include a number of parameters that must be hand-tuned. These problems can be avoided by working in a Bayesian probabilistic framework.", "cite_spans": [ { "start": 72, "end": 91, "text": "Hsueh et al. (2006)", "ref_id": "BIBREF11" }, { "start": 259, "end": 271, "text": "Yaari (1997)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We note two orthogonal but related approaches to extracting nonlinear discourse structures from text. 
Rhetorical structure theory posits a hierarchical structure of discourse relations between spans of text (Mann and Thompson, 1988) . This structure is richer than hierarchical topic segmentation, and the base level of analysis is typically more fine-grained -at the level of individual clauses. Unsupervised approaches based purely on cohesion are unlikely to succeed at this level of granularity. Elsner and Charniak (2008) propose the task of conversation disentanglement from internet chatroom logs. Unlike hierarchical topic segmentation, conversational threads may be disjoint, with unrelated threads interposed between two utterances from the same thread. Elsner and Charniak present a supervised approach to this problem, but the development of cohesion-based unsupervised methods is an interesting possibility for future work.", "cite_spans": [ { "start": 207, "end": 232, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF15" }, { "start": 500, "end": 526, "text": "Elsner and Charniak (2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Topic modeling is premised on a generative framework in which each word w t is drawn from a multinomial \u03b8 yt , where y t is a hidden topic indexing the language model that generates w t . From a modeling standpoint, linear topic segmentation merely adds the constraint that y t \u2208 {y t\u22121 , y t\u22121 + 1}. Segmentations that draw boundaries so as to induce compact, low-entropy language models will achieve a high likelihood. Thus topic models situate lexical cohesion in a probabilistic setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "For hierarchical segmentation, we take the hypothesis that lexical cohesion is a multi-scale phenomenon. This is represented with a pyramid of language models, shown in Figure 1 . Each word may be drawn from any language model above it in the pyramid. Thus, the high-level language models will be required to explain words throughout large parts of the document, while the low-level language models will be required to explain only a local set of words. A hidden variable z t indicates which level is responsible for generating the word w t .", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 177, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Ideally we would like to choose the segmentation y = argmax y p(w|y)p(y). However, we must deal with the hidden language models \u0398 and scale-level assignments z. The language models can be integrated out analytically (Section 3.1). Given marginal likelihoods for the hidden variables z, the globally optimal segmentation\u0177 can be found using a dynamic program (Section 4.1). Given a segmentation, we can estimate marginals for the hidden variables, using collapsed variational inference (Section 4.2). We iterate between these procedures in an EM-like coordinate-ascent algorithm (Section 4.4) until convergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "We begin the formal presentation of the model with some notation. Each word w t is modeled as a single draw from a multinomial language model \u03b8 j . The language models in turn are drawn from symmetric Dirichlet distributions with parameter \u03b1. 
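As a concrete illustration of this generative story, the sketch below forward-samples from the model under stated assumptions (the distribution over levels and all variable names are my own, and this is not the paper's released code): each language model is drawn from a symmetric Dirichlet with parameter alpha, and each token first picks a scale level z_t and then a word from the language model assigned to its segment at that level.

    import numpy as np

    def generate(y, level_probs, K, W, alpha, seed=0):
        # y: (T, L) array; y[t, l] is the global index (0..K-1) of the language
        # model governing token t at level l.  level_probs is an assumed
        # distribution over the L scale levels, used only for illustration.
        rng = np.random.default_rng(seed)
        theta = rng.dirichlet([alpha] * W, size=K)     # theta_j ~ Dirichlet(alpha)
        T, L = y.shape
        w = np.empty(T, dtype=int)
        for t in range(T):
            z_t = rng.choice(L, p=level_probs)         # scale-level assignment z_t
            w[t] = rng.choice(W, p=theta[y[t, z_t]])   # w_t drawn from the model at level z_t
        return w

    # Example: 8 tokens, 2 levels, 3 language models, vocabulary of 5 word types;
    # two low-level subsegments share a single high-level segment.
    y = np.array([[0, 2]] * 4 + [[1, 2]] * 4)
    w = generate(y, level_probs=[0.5, 0.5], K=3, W=5, alpha=0.1)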
The number of language models is written K; the number of words is W ; the length of the document is T ; and the depth of the hierarchy is L.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "For hierarchical segmentation, the vector y t indicates the segment index of t at each level of the topic hierarchy; the specific level of the hierarchy responsible for w t is given by the hidden variable z t . Thus,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "y (zt) t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "is the index of the language model that generates w t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "With these pieces in place, we can write the observation likelihood,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "p(w|y, z, \u0398) = T t p(w t |\u03b8 y (z t ) t ) = K j {t:y (z t ) t =j} p(w t |\u03b8 j ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "where we have merely rearranged the product to group terms that are drawn from the same language model. As the goal is to obtain the hierarchical segmentation and not the language models, the search space can be reduced by marginalizing \u0398. The derivation is facilitated by a notational convenience:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "x j represents the lexical counts induced by the set of words {w t : y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(zt) t = j}. p(w|y, z, \u03b1) = K j d\u03b8 j p(\u03b8 j |\u03b1)p(x j |\u03b8 j ) = K j p dcm (x j ; \u03b1) = K j \u0393(W \u03b1) \u0393( W i x ji + \u03b1) W i \u0393(x ji + \u03b1) \u0393(\u03b1) .", "eq_num": "(1)" } ], "section": "Language models", "sec_num": "3.1" }, { "text": "Here, p dcm indicates the Dirichlet compound multinomial distribution (Madsen et al., 2005) , which is the closed form solution to the integral over language models. Also known as the multivariate Polya distribution, the probability density function can be computed exactly as a ratio of gamma functions. Here we use a symmetric Dirichlet prior \u03b1, though asymmetric priors can easily be applied.", "cite_spans": [ { "start": 70, "end": 91, "text": "(Madsen et al., 2005)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "Thus far we have treated the hidden variables z as observed. In fact we will compute approximate marginal probabilities Q zt (z t ), written \u03b3 t \u2261 Q zt (z t = ). 
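Equation 1 can be evaluated directly with log-gamma functions. The sketch below (my own illustration of the standard closed form of the multivariate Polya density with a symmetric Dirichlet prior; not taken from the paper's implementation) computes the log Dirichlet compound multinomial density for a single segment's count vector, and accepts fractional counts, since the approximation that follows replaces the counts with their expectations under Q_z.

    import numpy as np
    from scipy.special import gammaln

    def log_pdcm(x, alpha):
        # x: length-W vector of (possibly fractional) word counts for one segment.
        x = np.asarray(x, dtype=float)
        W = x.shape[0]
        return (gammaln(W * alpha) - gammaln(x.sum() + W * alpha)
                + np.sum(gammaln(x + alpha) - gammaln(alpha)))

    # Peaked counts score higher than diffuse counts with the same total mass,
    # which is what rewards lexically cohesive segments.
    print(log_pdcm([6.0, 0.0, 0.0], 0.5) > log_pdcm([2.0, 2.0, 2.0], 0.5))  # True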
Writing x Qz for the expectation of x under distribution Q z , we approximate,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "p dcm (x j ; \u03b1) Qz \u2248 p dcm ( x j Qz ; \u03b1) x j (i) Qz = {t:j\u2208yt} L \u03b4(w t = i)\u03b4(y ( ) t = j)\u03b3 t ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "where x j (i) indicates the count for word type i generated from segment j. In the outer sum, we consider all t for possibly drawn from segment j. The inner sum goes over all levels of the pyramid. The delta functions take the value one if the enclosed Boolean expression is true and zero otherwise, so we are adding the fractional counts \u03b3 t only when w t = i and y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "( ) t = j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language models", "sec_num": "3.1" }, { "text": "Maximizing the joint probability p(w, y) = p(w|y)p(y) leaves the term p(y) as a prior on segmentations. This prior can be used to favor segmentations with the desired granularity. Consider a prior of the form p(y) = L =1 p(y ( ) |y ( \u22121) ); for notational convenience, we introduce a base level such that y (0) t = t, where every word is a segmentation point. At every level > 0, the prior is a Markov process, p(y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "( ) |y ( \u22121) ) = T t p(y ( ) t |y ( ) t\u22121 , y ( \u22121) ). The constraint y ( ) t \u2208 {y ( ) t\u22121 , y ( ) t\u22121 + 1}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "ensures a linear segmentation at each level. To enforce hierarchical consistency, each y ( ) t can be a segmentation point only if t is also a segmentation point at the lower level \u2212 1. Zero probability is assigned to segmentations that violate these constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "To quantify the prior probability of legal segmentations, assume a set of parameters d , indicating the expected segment duration at each level. If t is a valid potential segmentation point at level (i.e., y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "( \u22121) t = 1 + y ( \u22121) t\u22121 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": ", then the prior probability of a segment transition is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "r = d \u22121 /d , with d 0 = 1. If there are N segments in level and M \u2265 N segments in level \u2212 1, then the prior p(y ( ) |y ( \u22121) ) = r N (1 \u2212 r ) M \u2212N ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "as long as the hierarchical segmentation constraint is obeyed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "For the purposes of inference it will be preferable to have a prior that decomposes over levels and segments. 
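A brief sketch of the expected-count computation just described (illustrative only; the array layout and names are assumptions rather than the paper's code): the fractional count gamma_{t,l} is added to the counts of language model j for word type w_t whenever the level-l segment of token t is j. These expected counts are what is plugged into p_dcm in the approximation above.

    import numpy as np

    def expected_counts(w, y, gamma, K, W):
        # w: length-T array of word-type ids; y: (T, L) language-model indices;
        # gamma[t, l] = Q(z_t = l).  Returns x with x[j, i] the expected count
        # of word type i under language model j.
        x = np.zeros((K, W))
        T, L = y.shape
        for t in range(T):
            for l in range(L):
                x[y[t, l], w[t]] += gamma[t, l]
        return x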
In particular, we do not want to have to commit to a particular segmentation at level before segmenting level + 1. The above prior can be approximated by replacing M with its expectation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "M d \u22121 = T /d \u22121 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "Then a single segment ranging from w u to w v (inclusive) will contribute log r", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "+ v\u2212u d \u22121 log(1 \u2212 r )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "to the log of the prior.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior on segmentations", "sec_num": "3.2" }, { "text": "This section describes the inference for the segmentation y, the approximate marginals Q Z , and the hyperparameter \u03b1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4" }, { "text": "While the model structure is reminiscent of a factorial hidden Markov model (HMM), there are important differences that prevent the direct application of HMM inference. Hidden Markov models assume that the parameters of the observation likelihood distributions are available directly, while we marginalize them out. This has the effect of introducing dependencies throughout the state space: the segment assignment for each y t contributes to lexical counts which in turn affect the observation likelihoods for many other t . However, due to the left-to-right nature of segmentation, efficient inference of the optimal hierarchical segmentation (given the marginals Q Z ) is still possible. Let B ( ) [u, v] represent the log-likelihood of grouping together all contiguous words w u . . . w v\u22121 at level of the segmentation hierarchy. Using x t to indicate a vector of zeros with one at the position w t , we can express B more formally:", "cite_spans": [ { "start": 701, "end": 707, "text": "[u, v]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming for hierarchical segmentation", "sec_num": "4.1" }, { "text": "B ( ) [u, v] = log p dcm v t=u x t \u03b3 t + log r + v \u2212 u \u2212 1 d \u22121 log(1 \u2212 r ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming for hierarchical segmentation", "sec_num": "4.1" }, { "text": "The last two terms are from the prior p(y), as explained in Section 3.2. The value of B ( ) [u, v] is computed for all u, all v > u, and all .", "cite_spans": [ { "start": 92, "end": 98, "text": "[u, v]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming for hierarchical segmentation", "sec_num": "4.1" }, { "text": "Next, we compute the log-likelihood of the optimal segmentation, which we write as A (L) [0, T ]. This matrix can be filled in recursively:", "cite_spans": [ { "start": 85, "end": 88, "text": "(L)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming for hierarchical segmentation", "sec_num": "4.1" }, { "text": "A ( ) [u, v] = max u\u2264t