{ "paper_id": "C14-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:23:46.655002Z" }, "title": "Hierarchical Topical Segmentation with Affinity Propagation", "authors": [ { "first": "Anna", "middle": [], "last": "Kazantseva", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ottawa Ottawa", "location": { "region": "Ontario", "country": "Canada" } }, "email": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ottawa Ottawa", "location": { "region": "Ontario", "country": "Canada" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a hierarchical topical segmenter for free text. Hierarchical Affinity Propagation for Segmentation (HAPS) is derived from a clustering algorithm Affinity Propagation. Given a document, HAPS builds a topical tree. The nodes at the top level correspond to the most prominent shifts of topic in the document. Nodes at lower levels correspond to finer topical fluctuations. For each segment in the tree, HAPS identifies a segment centre-a sentence or a paragraph which best describes its contents. We evaluate the segmenter on a subset of a novel manually segmented by several annotators, and on a dataset of Wikipedia articles. The results suggest that hierarchical segmentations produced by HAPS are better than those obtained by iteratively running several one-level segmenters. An additional advantage of HAPS is that it does not require the \"gold standard\" number of segments in advance.", "pdf_parse": { "paper_id": "C14-1005", "_pdf_hash": "", "abstract": [ { "text": "We present a hierarchical topical segmenter for free text. Hierarchical Affinity Propagation for Segmentation (HAPS) is derived from a clustering algorithm Affinity Propagation. Given a document, HAPS builds a topical tree. The nodes at the top level correspond to the most prominent shifts of topic in the document. Nodes at lower levels correspond to finer topical fluctuations. For each segment in the tree, HAPS identifies a segment centre-a sentence or a paragraph which best describes its contents. We evaluate the segmenter on a subset of a novel manually segmented by several annotators, and on a dataset of Wikipedia articles. The results suggest that hierarchical segmentations produced by HAPS are better than those obtained by iteratively running several one-level segmenters. An additional advantage of HAPS is that it does not require the \"gold standard\" number of segments in advance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "When an NLP application works with a document, it may benefit from knowing something about this document's high-level structure. Text summarization (Haghighi and Vanderwende, 2009) , question answering (Oh et al., 2007) and information retrieval (Ponte and Croft, 1998) are some of the examples of such applications. Topical segmentation is a lightweight form of such structural analysis: given a sequence of sentences or paragraphs, split it into a sequence of topical segments, each characterized by a certain degree of topical unity. 
This is particularly useful for texts with little structure imposed by the author, such as speech transcripts, meeting notes or literature.", "cite_spans": [ { "start": 148, "end": 180, "text": "(Haghighi and Vanderwende, 2009)", "ref_id": "BIBREF9" }, { "start": 202, "end": 219, "text": "(Oh et al., 2007)", "ref_id": "BIBREF22" }, { "start": 246, "end": 269, "text": "(Ponte and Croft, 1998)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The past decade has witnessed significant progress in the area of text segmentation. Most of the topical segmenters (Malioutov and Barzilay, 2006; Eisenstein and Barzilay, 2008; Kazantseva and Szpakowicz, 2011; Misra et al., 2011; Du et al., 2013 ) can only produce single-level segmentation, a worthy endeavour in and of itself. Yet, to view the structure of a document linearly, as a sequence of segments, is in certain discord with most theories of discourse structure, where it is more customary to consider documents as trees (Mann and Thompson, 1988; Marcu, 2000; Hernault et al., 2010; Feng and Hirst, 2012) or graphs (Wolf and Gibson, 2006) . Regardless of the theory, we hypothesize that it may be useful to have an idea about fluctuations of topic in documents beyond the coarsest level. It is the contribution of this work that we develop such a hierarchical segmenter, implement it and do our best to evaluate it.", "cite_spans": [ { "start": 116, "end": 146, "text": "(Malioutov and Barzilay, 2006;", "ref_id": "BIBREF16" }, { "start": 147, "end": 177, "text": "Eisenstein and Barzilay, 2008;", "ref_id": "BIBREF4" }, { "start": 178, "end": 210, "text": "Kazantseva and Szpakowicz, 2011;", "ref_id": "BIBREF12" }, { "start": 211, "end": 230, "text": "Misra et al., 2011;", "ref_id": "BIBREF21" }, { "start": 231, "end": 246, "text": "Du et al., 2013", "ref_id": "BIBREF3" }, { "start": 531, "end": 556, "text": "(Mann and Thompson, 1988;", "ref_id": "BIBREF17" }, { "start": 557, "end": 569, "text": "Marcu, 2000;", "ref_id": "BIBREF19" }, { "start": 570, "end": 592, "text": "Hernault et al., 2010;", "ref_id": "BIBREF11" }, { "start": 593, "end": 614, "text": "Feng and Hirst, 2012)", "ref_id": "BIBREF6" }, { "start": 625, "end": 648, "text": "(Wolf and Gibson, 2006)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The segmenter described here is HAPS -Hierarchical Affinity Propagation for Segmentation. It is closely based on a graphical model for hierarchical clustering called Hierarchical Affinity Propagation (Givoni et al., 2011) . It is a similarity-based segmenter. It takes as input a matrix of similarities between atomic units of text in the sequence to be segmented (sentences or paragraphs), the desired number of levels in the topical tree and a preference value for each data point and each level. This value captures a priori belief about how likely it is that this data point is a segment centre at that level. The preference values also control the granularity of segmentation: how many segments are to be identified at each level. The output is a topical tree. 
For each segment at every level, HAPS also finds a segment centre, a data point which best describes the segment.", "cite_spans": [ { "start": 200, "end": 221, "text": "(Givoni et al., 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The objective function maximized by the segmenter is net similarity -the sum of similarities between all segment centres and their children for all levels of the tree. This function is similar to the objective function of the well-known k-means algorithm, except that here it is computed hierarchically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is not easy to evaluate HAPS. We are not aware of comparable hierarchical segmenters other than that in (Eisenstein, 2009) which, unfortunately, is no longer publicly available. Therefore we compared the trees built by HAPS to the results of running iteratively two state-of-the-art flat segmenters. The results are compared on two datasets. A set of Wikipedia articles was automatically compiled by Carroll (2010) . The other set, created to evaluate HAPS, consists of nine chapters from the novel Moonstone by Wilkie Collins. Each chapter was annotated for hierarchical structure by 3-6 people.", "cite_spans": [ { "start": 107, "end": 125, "text": "(Eisenstein, 2009)", "ref_id": "BIBREF5" }, { "start": 403, "end": 417, "text": "Carroll (2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The evaluation is based on two metrics, windowDiff (Pevzner and Hearst, 2002) and evalHDS (Carroll, 2010). Both metrics are less than ideal. They do not give a complete picture of the quality of topical segmentations, but the preliminary results suggest that running a global model for hierarchical segmentation produces better results than iteratively running flat segmenters. Compared to the baseline segmenters, HAPS has an important practical advantage. It does not require the number of segments as an input; this requirement is customary for most flat segmenters.", "cite_spans": [ { "start": 51, "end": 77, "text": "(Pevzner and Hearst, 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We also made a rough attempt to evaluate the quality of the segment centres identified by HAPS. Using 20 chapters from several novels of Jane Austen, we compared the centres identified for each chapter against summaries produced by a recent automatic summarizer CohSum (Smith et al., 2012) . The basis of comparison was the ROUGE metric (Lin, 2004) . While far from conclusive, the results suggest that segment centres identified by HAPS are comparable with the summaries produced by an automatic summarizer.", "cite_spans": [ { "start": 269, "end": 289, "text": "(Smith et al., 2012)", "ref_id": "BIBREF25" }, { "start": 337, "end": 348, "text": "(Lin, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A Java implementation of HAPS and the corpus of hierarchical segmentations for nine chapters of Moonstone are publicly available. We consider these to be the main contributions of this research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most work on topical text segmentation has been done for single-level segmentation. 
Contemporary approaches usually rely on the idea that topic shifts can be identified by finding shifts in the vocabulary (Youmans, 1991) . We can distinguish between local and global models for topical text segmentation. Local algorithms have a limited view of the document. For example, TextTiling (Hearst, 1997) operates by sliding a window through the input sequence and computing similarity between adjacent units. By identifying \"valleys\" in similarities, TextTiling identifies topic shifts. More recently, Marathe (2010) used lexical chains and Blei and Moreno (2001) used Hidden Markov Models. Such methods are usually very fast, but can be thrown off by small digressions in the text.", "cite_spans": [ { "start": 205, "end": 220, "text": "(Youmans, 1991)", "ref_id": "BIBREF29" }, { "start": 383, "end": 397, "text": "(Hearst, 1997)", "ref_id": "BIBREF10" }, { "start": 596, "end": 610, "text": "Marathe (2010)", "ref_id": "BIBREF18" }, { "start": 635, "end": 657, "text": "Blei and Moreno (2001)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Among global algorithms, we can distinguish generative probabilistic models and similarity-based models. Eisenstein and Barzilay (2008) model a document as a sequence of segments generated by latent topic variables. Misra et al. (2011) and Du et al. (2013) have similar models. Malioutov and Barzilay (2006) and (Kazantseva and Szpakowicz, 2011) use similarity-based representations. Both algorithms take as input a matrix of similarities between sentences of the input document; the former uses graph cuts to find cohesive segments, while the latter modifies a clustering algorithm to perform segmentation.", "cite_spans": [ { "start": 105, "end": 135, "text": "Eisenstein and Barzilay (2008)", "ref_id": "BIBREF4" }, { "start": 216, "end": 235, "text": "Misra et al. (2011)", "ref_id": "BIBREF21" }, { "start": 240, "end": 256, "text": "Du et al. (2013)", "ref_id": "BIBREF3" }, { "start": 278, "end": 307, "text": "Malioutov and Barzilay (2006)", "ref_id": "BIBREF16" }, { "start": 312, "end": 345, "text": "(Kazantseva and Szpakowicz, 2011)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Research on hierarchical segmentation has been more scarce. Yaari (1997) produced hierarchical segmentation by agglomerative clustering. Eisenstein (2009) used a Bayesian model to create topical trees, but the system is regrettably no longer publicly available. Song et al. (2011) develop an algorithm for hierarchical segmentation which iteratively splits a document in two at a place where cohesion links are the weakest. A second pass transforms a deep binary tree into a shallow and broad structure.", "cite_spans": [ { "start": 60, "end": 72, "text": "Yaari (1997)", "ref_id": "BIBREF28" }, { "start": 137, "end": 154, "text": "Eisenstein (2009)", "ref_id": "BIBREF5" }, { "start": 262, "end": 280, "text": "Song et al. (2011)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Any flat segmenter can certainly be used iteratively to create trees of segments by subdividing each segment, but this may be problematic. Topical segmenters are not perfect, so running them iteratively is likely to compound the error. Most segmenters also require the number of segments as an input. This estimate is feasible for flat segmentation. 
To know in advance the number of segments and sub-segments at each level is not a realistic requirement when building a tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "This work describes a hierarchical model of text segmentation. It takes a global view of the document and of the topical hierarchy. Each iteration attempts to find the best assignment of segments for the whole tree. It does not need to know the exact number of segments. Instead, it takes a more abstract parameter, preference values, to specify the granularity of segmentation at each level. For each segment it also outputs a segment centre, a unit of text which best captures the contents of the segment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Before embarking on the task of building a hierarchical segmenter, we wanted to study how people perform such a task. We also needed a benchmark corpus which could be used to evaluate the quality of segmentations produced by HAPS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "To this end, we annotated nine chapters of the novel Moonstone for hierarchical structure. We settled on these data because they are a subset of a publicly available dataset for flat segmentation (Kazantseva and Szpakowicz, 2012) . In our study, each chapter was annotated by 3-6 people (4.8 on average). The annotators, undergraduate students of English, were paid $50 each.", "cite_spans": [ { "start": 193, "end": 226, "text": "(Kazantseva and Szpakowicz, 2012)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "Procedure. The instructions asked the annotator to read the chapter and split it into top-level segments according to where there is a perceptible shift of topic. She had to provide a one-sentence description of what the segment is about. The procedure had to be repeated for each segment all the way down to the level of individual paragraphs. Effectively, the annotators were building a detailed hierarchical outline for each chapter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "Metrics. Two different metrics helped estimate the quality of our hierarchical dataset: windowDiff (Pevzner and Hearst, 2002) and S (Fournier and Inkpen, 2012) .", "cite_spans": [ { "start": 99, "end": 125, "text": "(Pevzner and Hearst, 2002)", "ref_id": "BIBREF23" }, { "start": 132, "end": 159, "text": "(Fournier and Inkpen, 2012)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "windowDiff is computed by sliding a window across the input sequence and checking, for each window position, whether the number of reference breaks is the same as the number of breaks in the hypothetical segmentation. The number of erroneous windows is then normalized by the total number of windows. In Equation 1, N is the length of the input sequence and k is the size of the sliding window.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "windowDiff = \\frac{1}{N-k} \\sum_{i=1}^{N-k} \\mathbf{1}[\\, |ref_i - hyp_i| \\neq 0 \\,] \\quad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "windowDiff is designed to compare sequences of segments, not trees. That is why we compute it for each level between each pair of annotators who worked on the same chapter. It should be noted that windowDiff is a penalty metric: higher values indicate less agreement (windowDiff = 0 corresponds to two identical segmentations).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" },
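{ "text": "To make Equation 1 concrete, here is a minimal Python sketch of windowDiff (an illustration only, not the evaluation code used in this paper); it assumes each level of a segmentation is encoded as a 0/1 array, with a 1 at position i if a segment boundary follows unit i:

def window_diff(ref, hyp, k):
    # Count windows of size k in which the number of reference
    # boundaries differs from the number of hypothetical boundaries,
    # normalized by the total number of windows (Equation 1).
    assert len(ref) == len(hyp)
    n = len(ref)
    errors = sum(
        1 for i in range(n - k)
        if sum(ref[i:i + k]) != sum(hyp[i:i + k])
    )
    return errors / (n - k)

ref = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
hyp = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
print(window_diff(ref, hyp, k=3))  # 1 erroneous window out of 7

Lower values indicate closer agreement; windowDiff = 0 for two identical segmentations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" },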
{ "text": "The S metric allows us to compare trees and take into account situations when the segmenter places a boundary at a correct position but at a wrong level. S is an edit-distance metric. It computes the number of operations necessary to turn one segmentation into another. There are three types of editing operations: add/delete, transpose and substitute (change the level in the tree). The sum is normalized by the number of possible boundaries in the sequence. S has an unfortunate downside of being too optimistic, but it allows the breakdown of error types and it explicitly compares trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "Unlike windowDiff, S is a similarity metric: higher values correspond to more similar segmentations. The value of S between two identical segmentations is 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S(bs_a, bs_b, n) = 1 - \\frac{|boundary\\_distance(bs_a, bs_b, n)|}{pb(D)}", "eq_num": "(2)" } ], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "Here boundary_distance(bs_a, bs_b, n) is the total number of edit operations needed to turn a segmentation bs_a into bs_b, and n is the threshold defining the maximum distance of transpositions. pb(D) is the maximum possible number of edits. Segmentations bs_a and bs_b are represented as strings of sets of boundary positions. For example, bs_a = ({2}, {1,2}, {1,2}) corresponds to a hierarchical segmentation of a three-unit sequence in the following manner: a segment boundary at level 2 after the first unit, and segment boundaries at levels 1 and 2 after the second and after the third unit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" },
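{ "text": "The boundary-string representation is straightforward to reproduce. Here is a small Python sketch (our illustration; the full S metric of Fournier and Inkpen (2012) additionally matches near-miss boundaries as transpositions, which the naive count below ignores):

def boundary_sets(segment_lengths_by_level):
    # One set per unit: the levels at which a segment ends right
    # after that unit (the end of the sequence closes a segment at
    # every level, as in the bs_a example above).
    n = sum(next(iter(segment_lengths_by_level.values())))
    bs = [set() for _ in range(n)]
    for level, lengths in segment_lengths_by_level.items():
        pos = 0
        for length in lengths:
            pos += length
            bs[pos - 1].add(level)
    return bs

# level 2: segments of 1, 1 and 1 unit; level 1: segments of 2 and 1
print(boundary_sets({2: [1, 1, 1], 1: [2, 1]}))  # [{2}, {1, 2}, {1, 2}]

def naive_edit_count(bs_a, bs_b):
    # additions/deletions and level substitutions only
    return sum(len(a ^ b) for a, b in zip(bs_a, bs_b))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" },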
{ "text": "Corpus Analysis. On average, the annotators took 3.5 hours to complete the task (\u03c3 = 1.6). The average depth of the tree is 3.00 levels (\u03c3 = 0.65), suggesting that the annotators prefer shallow but broad structures. Table 1 reports the average breadth of the tree at different levels. In the Table and further in this paper we refer to the bottom level of the tree (i.e., the leaves of the tree or the most fine-grained level of segmentation) as level 1. In Table 1 , level 4 refers to the top level of the tree (the coarsest segmentations). The values were computed using only the breaks explicitly specified by the annotators (i.e., we did not assume that a break at a coarse level implies a break at a more detailed level).", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 223, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 458, "end": 465, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "The average breadth of the trees at the bottom (level 1) is lower than that at level 2, indicating that only a small percentage of the entire tree was annotated more than three levels deep. The table also shows the average values of windowDiff computed for each possible pair of annotators. The values worsen toward the bottom of the tree, suggesting that the annotators agree more about top-level segments and less and less about finer fluctuations of topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "We hypothesize that these shallow broad structures are due to the fact that it is difficult for people to create deep recursive structures in their mental representations. We do not, however, have any hard data to support this hypothesis. Many of the annotators specifically commented on the difficulty of the task. Nine out of 23 people included comments ranging from notes about specific places to general comments about their lack of confidence. Four annotators found several (specific) passages they had trouble with.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "The average value of pairwise S is 0.79. We have noted earlier that the S metric tends to be optimistic (that is due to its normalization factor) but it provides a breakdown of disagreements between the annotators. According to S, 46.14% of disagreements are errors of omission (some of the annotators did not include segment breaks where others did), 47.56% are disagreements about the level of segmentation (the annotators placed boundaries in the same place but at different levels) and only 6.31% are errors of transposition (the annotators do not agree about the exact placement but place boundaries within 1 position of each other). This distribution is more interesting than the overall value of S. Among other things, it shows why it is so important to take into account adjacent levels when evaluating topical trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating a corpus of hierarchical segmentations", "sec_num": "3" }, { "text": "4 The HAPS algorithm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The HAPS algorithm", "sec_num": "4" }, { "text": "The HAPS segmenter is based on factor graphs, a unifying formalism for such graphical models as Markov or Bayesian networks. A factor graph is a bi-partite graph with two types of nodes, factor (or function) nodes and variable nodes. Each factor node is connected to those variable nodes which are its arguments. Running the well-known Max-Sum algorithm (Bishop, 2006 ) on a factor graph finds a configuration of variables which maximizes the sum of all component functions. This is a message-passing algorithm. All variable nodes send messages to their factor neighbours (functions in which those nodes are variables) and all factor nodes send messages to their variable neighbours (their arguments). 
A message µ_{x→f} sent from a variable node x to a function node f is computed as the sum of all incoming messages to x, except the message from the recipient function f :", "cite_spans": [ { "start": 352, "end": 365, "text": "(Bishop, 2006", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Factor graphs", "sec_num": "4.1" }, { "text": "\\mu_{x \\to f} = \\sum_{f' \\in N(x) \\setminus f} \\mu_{f' \\to x} \\quad (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factor graphs", "sec_num": "4.1" }, { "text": "N(x) is the set of all function nodes which are x's neighbours. Intuitively, the message reflects evidence about the distribution of x from all functions which have x as an argument, except the function corresponding to the receiving node f . A message µ_{f→x} sent from the factor node f (x, ...) to the variable node x is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factor graphs", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mu_{f \\to x} = \\max_{N(f) \\setminus x} \\Big( f(x_1, \\ldots, x_m) + \\sum_{x' \\in N(f) \\setminus x} \\mu_{x' \\to f} \\Big)", "eq_num": "(4)" } ], "section": "Factor graphs", "sec_num": "4.1" }, { "text": "N(f) is the set of all variable nodes which are f's neighbours. The message reflects the evidence about the distribution of x from function f and its neighbours other than x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factor graphs", "sec_num": "4.1" },
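{ "text": "As a toy illustration of Equations 3 and 4 (our example, not part of HAPS itself), consider max-sum on the smallest interesting factor graph: binary variables x1 and x2, a unary factor g(x1) and a pairwise factor f(x1, x2). In Python:

import itertools

# factors as tables over variable values {0, 1}
g = {(0,): 1.0, (1,): 0.0}                    # unary factor on x1
f = {(a, b): (2.0 if a != b else 0.0)         # pairwise factor on x1, x2
     for a, b in itertools.product((0, 1), repeat=2)}

# Equation 3: a variable-to-factor message sums the other incoming
# factor messages; x1's only other neighbour is g, so:
mu_x1_to_f = {v: g[(v,)] for v in (0, 1)}

# Equation 4: a factor-to-variable message maximizes, over the other
# variables, the factor value plus their incoming messages:
mu_f_to_x2 = {b: max(f[(a, b)] + mu_x1_to_f[a] for a in (0, 1))
              for b in (0, 1)}

x2 = max((0, 1), key=lambda b: mu_f_to_x2[b])
# x2 has no other factors, so its message to f is 0 and the message
# from f back to x1 reduces to the max over b of f(a, b):
x1 = max((0, 1), key=lambda a: g[(a,)] + max(f[(a, b)] for b in (0, 1)))
print(x1, x2)  # 0 1, the configuration maximizing g(x1) + f(x1, x2)

HAPS runs the same message-passing scheme, but on the much larger graph of Figure 1 and with updates specialized to its I, E, C and S factors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factor graphs", "sec_num": "4.1" },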
{ "text": "[Figure 1: (a) a fragment of the HAPS factor graph for levels l \u2212 1 and l, with variable nodes c^l_{ij} and e^l_j and factor nodes I^l_i, E^l_j, C^l_j and S^l_{ij}; (b) a close-up of the messages passed between these nodes.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" }, { "text": "This work aims to build trees of topical segments. Each segment is characterized by a centre which best describes its content. The objective function is net similarity, the sum of similarities between all centres and the data points which they exemplify. The complete sequence of data points is to be segmented at each level of the tree, subject to the following constraint: centres at each level l, l > 1, must be a subset of the centres from the previous level l \u2212 1. Figure 1a shows a fragment of the factor graph describing HAPS corresponding to levels l and l \u2212 1. The tree has L levels, from the root (l = L) down to the leaves (l = 1). The superscripts of factor and variable nodes denote the level. At each level, there are N^2 variable nodes c^l_{ij} and N variable nodes e^l_j (N is the number of data points in the sequence to segment). A variable's value is 0 or 1: c^l_{ij} = 1 \u21d4 the data point i at level l belongs to the segment centred around data point j; e^l_j = 1 \u21d4 there is a segment centred around j at level l. The four types of factor nodes in Figure 1a are I, E, C and S. The I factors ensure that each data point is assigned to exactly one segment and that segment centres at level l are a subset of those from level l \u2212 1. The E nodes ensure that segments are centred around the segment centres in solid blocks (rather than unordered clusters). The values of I and E are 0 for valid configurations and -\u221e otherwise. The S factors capture similarities between data points: S^l_{ij} = sim(i, j) if c^l_{ij} = 1; S^l_{ij} = 0 if c^l_{ij} = 0. 2 The C factors handle preferences in an analogous manner. Running the Max-Sum algorithm on the factor graph in Figure 1a maximizes the net similarity between all segment centres and their children at all levels:", "cite_spans": [], "ref_spans": [ { "start": 470, "end": 479, "text": "Figure 1a", "ref_id": "FIGREF1" }, { "start": 1059, "end": 1068, "text": "Figure 1a", "ref_id": "FIGREF1" }, { "start": 1662, "end": 1671, "text": "Figure 1a", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\max_{\\{c^l_{ij}\\}, \\{e^l_j\\}} S(\\{c^l_{ij}\\}, \\{e^l_j\\}) = \\sum_{i,j,l} S^l_{ij}(c^l_{ij}) + \\sum_{i,l} I^l_i(c^l_{i1}, \\ldots, c^l_{iN}, e^{l-1}_i) + \\sum_{j,l} E^l_j(c^l_{1j}, \\ldots, c^l_{Nj}, e^l_j) + \\sum_{j,l} C^l_j(e^l_j)", "eq_num": "(5)" } ], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" }, { "text": "Figure 1b shows a close-up view of the messages that must be sent to find the optimizing configuration of variables. Messages \u03b2 and \u03b7 do not need to be sent explicitly: their values are subsumed by other types of messages. We only need to compute explicitly and send four types of messages: \u03b1, \u03c1, \u03c6 and \u03c4 .", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 9, "text": "Figure 1b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" },
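{ "text": "Stated procedurally (a sketch we provide for intuition, with the preference terms omitted; this is not the HAPS internals), the net similarity of Equation 5 scores a candidate tree as follows:

def net_similarity(sim, centre_of):
    # sim[i][j]: similarity between data points i and j;
    # centre_of[l][i]: the segment centre that data point i is
    # assigned to at level l; centres at a coarser level must be
    # a subset of the centres at the level below it.
    return sum(sim[i][c]
               for level in centre_of
               for i, c in enumerate(level))

sim = [[1.0, 0.8, 0.1],
       [0.8, 1.0, 0.2],
       [0.1, 0.2, 1.0]]
# level 1: segments centred at points 0 and 2; level 2: one segment centred at 0
centre_of = [[0, 0, 2], [0, 0, 0]]
print(net_similarity(sim, centre_of))  # 2.8 + 1.9 = 4.7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" },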
{ "text": "Algorithm 1 shows the pseudo-code for the HAPS algorithm. 3 Intuitively, different parts of the update messages in Algorithm 1 correspond to likelihood ratios between two hypotheses: whether a data point i is or is not part of a segment centred around another data point j at a given level l. For example, here is the availability (\u03b1) message sent from a potential segment centre j to itself at level l:", "cite_spans": [ { "start": 58, "end": 59, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\alpha^l_{jj} = p^l_j + \\phi^l_j + \\max_{s=1}^{j} \\Big( \\sum_{k=s}^{j-1} \\rho^l_{kj} \\Big) + \\max_{e=j}^{N} \\Big( \\sum_{k=j+1}^{e} \\rho^l_{kj} \\Big)", "eq_num": "(6)" } ], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" }, { "text": "Here p^l_j incorporates the information about the preference value for the data point j at the level l. \u03c6^l_j brings in the information from the coarser level of the tree. The summand \\max_{s=1}^{j} ( \\sum_{k=s}^{j-1} \\rho^l_{kj} ) encodes the likelihood that there is a segment starting before j, given the values of responsibility messages for all data points i such that i < j -hence the information from a more detailed level of the tree as well as the similarities between all data points i (i < j) and j. The summand \\max_{e=j}^{N} ( \\sum_{k=j+1}^{e} \\rho^l_{kj} ) does the same for the tail-end of the segment (all data points i such that i > j).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" }, { "text": "Complexity analysis. The HAPS model contains N^2 c^l_{ij} nodes at each level. In practice, however, the matrix of similarities SIM does not need to be fully specified. It is customary to compute this matrix with a large sliding window; the size should be at least twice the anticipated average segment length. On each iteration, we need to send L*M*N messages \u03b1 and \u03c1, resulting in the complexity O(L*M*N). Here L is the number of levels, N is the number of data points in the sequence and M (M \u2264 N ) is the size of the sliding window used for computing similarities. The computation of \u03c1 and \u03b1 messages is independent for each row and column respectively, so the algorithm would be easy to parallelize.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" }, { "text": "Parameter settings. An important advantage of HAPS is that it does not require the number of segments in advance. Instead, the user needs to set the preference values for each level. HAPS is fairly robust to changes in preferences, and this generic parameter is a convenient knob for fine-tuning the desired granularity of segmentation, as opposed to specifying the exact number of segments at each level of the tree. In this work we set preferences uniformly, but it is possible to incorporate additional knowledge through more discriminative settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" }, { "text": "In all our experiments, preference values are set uniformly for each level of the tree, so effectively all data points are equally likely to be chosen as segment centres at each level. As a starting point, the preference value for the most detailed level of the tree should be approximately equal to the median similarity value (as specified in the input matrix). A near-zero preference value tends to result in a medium number of segments and is thus suitable for the middle levels of the tree. A negative preference value results in a small number of segments and is appropriate for identifying the most pronounced segment breaks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" },
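{ "text": "As a usage sketch of these rules of thumb (hypothetical code: haps and the file name are placeholders, not the interface of the released Java implementation):

import numpy as np

sim = np.load(\"paragraph_similarities.npy\")     # hypothetical precomputed matrix
median_sim = float(np.median(sim[sim != 0]))    # ignore unspecified entries

preferences = {
    1: median_sim,   # detailed level: many segments
    2: 0.0,          # middle level: a medium number of segments
    3: -5.0,         # top level: only the most pronounced topic shifts
}
# tree = haps(sim_matrices=[sim] * 3, preferences=preferences)  # hypothetical call

Raising a level's preference yields more, finer segments at that level; lowering it yields fewer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Affinity Propagation for Segmentation", "sec_num": "4.2" },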
{ "text": "In order to evaluate the quality of topical trees produced by HAPS, we ran the system on two datasets. We compared the results obtained by HAPS against topical trees obtained by iteratively running two high-performance single-level segmenters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" }, { "text": "Datasets. We used the Moonstone corpus described in Section 3, and the Wikipedia dataset compiled by Carroll (2010) . Created automatically from metadata on Web pages, the dataset consists of 66 Wikipedia entries on various topics; the annotations and the results concern sentences. In the Moonstone corpus we work with paragraphs. To simplify evaluation and interpretation, we produced three-tier trees. This is in line with the average depths of manual annotations in the Moonstone data.", "cite_spans": [ { "start": 101, "end": 115, "text": "Carroll (2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" }, { "text": "Algorithm 1 Hierarchical Affinity Propagation for Segmentation
1: input: 1) L pairwise similarity matrices {SIM^l(i, j)}, (i, j) \u2208 {1, ..., N}^2; 2) L preferences p^l (one per level l) indicating the a priori likelihood of point i being a segment centre at level l
2: initialization: \u2200i, j : \u03b1_{ij} = 0 (set all availabilities to 0)
3: repeat
4:    iteratively update \u03c1, \u03b1, \u03c6 and \u03c4 messages:
5:    \u2200i, l : \u03c6^{l-1}_i = max[0, \u03b1^l_{ii} \u2212 max_{k \u2260 i}(s^l_{ik} + \u03b1^l_{ik})]
6:    \u2200i, j, l : \u03c1^l_{ij} = min(0, \u03c4^l_i) \u2212 max_{k \u2260 i}(s^l_{ik} + \u03b1^l_{ik}) if i = j; \u03c1^l_{ij} = s^l_{ij} + min[max(0, \u2212\u03c4^l_i) \u2212 \u03b1^l_{ii}, \u2212max_{k \u2260 i,j}(s^l_{ik} + \u03b1^l_{ik})] if i \u2260 j
7:    \u2200i, j, l : update the availabilities \u03b1^l_{ij}; for i = j the update is given by Equation 6
8-10: ...
11: output: segment centres and segment boundaries", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" }, { "text": "Baselines. Regrettably, we are not aware of another publicly available hierarchical segmenter. That is why we used as baselines two recent flat segmenters: MCSeg (Malioutov and Barzilay, 2006) and BSeg (Eisenstein and Barzilay, 2008) . Both were first run to produce top-level segmentations. Each segment thus computed was a new input document for segmentation. We repeated the procedure twice to obtain three-tiered trees. MCSeg cannot be run without knowing the number of segments in advance. Therefore, on each iteration, we had to specify the correct number of segments in the reference segmentation. BSeg does not need the exact number of segments, so we had two settings: with and without knowing the number of segments.", "cite_spans": [ { "start": 332, "end": 362, "text": "(Malioutov and Barzilay, 2006)", "ref_id": "BIBREF16" }, { "start": 372, "end": 403, "text": "(Eisenstein and Barzilay, 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" },
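{ "text": "The baseline trees are built by simple recursion; a sketch in Python (flat_segment stands in for a call to MCSeg or BSeg and is hypothetical):

def build_tree(units, depth, flat_segment):
    # flat_segment(units) -> list of contiguous runs of units.
    # Note that errors made at one level are inherited by all
    # levels below, the weakness discussed in Section 2.
    if depth == 0 or len(units) < 2:
        return {\"span\": units, \"children\": []}
    return {\"span\": units,
            \"children\": [build_tree(segment, depth - 1, flat_segment)
                         for segment in flat_segment(units)]}

# three-tiered trees, as in our experiments:
# tree = build_tree(paragraphs, depth=3, flat_segment=run_mcseg)  # hypothetical", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" },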
{ "text": "Evaluation metrics. We did our best to obtain a realistic picture of the results, but each metric has its shortcomings. We compared topical trees using windowDiff and evalHDS (Carroll, 2010) . Both metrics are penalties: the higher the values, the worse the hypothetical segmentation. evalHDS computes windowDiff for each level of the tree in isolation and weighs the errors according to their prominence in the tree. We computed evalHDS using the publicly available Python implementation (Carroll, 2010) . 4 When computing windowDiff, we treated each level of the tree as a separate segmentation and compared each hypothetical level against a corresponding level in the reference segmentation.", "cite_spans": [ { "start": 175, "end": 190, "text": "(Carroll, 2010)", "ref_id": "BIBREF2" }, { "start": 489, "end": 504, "text": "(Carroll, 2010)", "ref_id": "BIBREF2" }, { "start": 507, "end": 508, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" }, { "text": "To ensure that evaluations are well-defined at all levels, we propagated the more pronounced reference breaks to lower levels (in both annotations and in the results). In effect, the whole sequence is segmented at each level -otherwise windowDiff would not be well-defined. Conceptually this means that if there is a topical shift of noticeable magnitude (e.g., at the top level), there must be at least a shift of less pronounced magnitude (e.g., at an intermediate level).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" }, { "text": "The Moonstone dataset has on average 4.8 annotations per chapter. It is not obvious how to combine these multiple annotations. We evaluated separately each hypothetical segmentation against each available gold standard. We report the averages across all annotators, for both evalHDS and windowDiff, per level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" }, { "text": "Preprocessing. The representations used by HAPS and MCSeg are very similar. Both systems compute a matrix of similarities between atomic units of the document (sentences or paragraphs). Each unit was represented as a bag of words. The vectors were further weighted by the tf.idf value of the term and also smoothed in the same manner as in (Malioutov and Barzilay, 2006) . We computed cosine similarity between vectors corresponding to each sentence or paragraph. We used tenfold cross-validation on the Wikipedia dataset and fourfold cross-validation on the smaller Moonstone data.", "cite_spans": [ { "start": 344, "end": 374, "text": "(Malioutov and Barzilay, 2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" },
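{ "text": "A minimal version of this preprocessing (our sketch; it omits the smoothing of Malioutov and Barzilay (2006)) can be written with scikit-learn:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_matrix(units, window):
    # bag-of-words vectors weighted by tf.idf; cosine similarity is
    # kept only within a sliding window of +/- `window` units, and
    # the rest of the matrix is left unspecified (zero)
    vectors = TfidfVectorizer().fit_transform(units)
    sim = cosine_similarity(vectors)
    n = len(units)
    for i in range(n):
        for j in range(n):
            if abs(i - j) > window:
                sim[i, j] = 0.0
    return sim

paragraphs = [\"The moon rose over the house.\",
              \"The moon set at dawn.\",
              \"A letter arrived that morning.\"]
print(np.round(similarity_matrix(paragraphs, window=2), 2))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" },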
{ "text": "The quality of the segment centres. In addition to finding topical shifts, HAPS identifies segment centres -sentences or paragraphs which best capture what each segment is about. In order to get a rough estimate of the quality of the centres, we extracted paragraphs identified as segment centres at the second (middle) level of HAPS trees. These pseudo-summaries were then compared to summaries created by an automatic summarizer CohSum. We used ROUGE-1 and ROUGE-L metrics (Lin, 2004) as a basis for comparison. CohSum identifies the most salient sentences in a document by running a variant of the TextRank algorithm (Mihalcea and Tarau, 2004) on the entire document. In addition to using lexical similarity, the summarizer takes into account coreference links between sentences. We ran CohSum at a 10% compression rate.", "cite_spans": [ { "start": 475, "end": 486, "text": "(Lin, 2004)", "ref_id": "BIBREF15" }, { "start": 620, "end": 646, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" },
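{ "text": "For intuition, ROUGE-1 recall reduces to clipped unigram overlap with a reference summary (a toy version of ours; the official ROUGE package also computes ROUGE-L and other variants):

from collections import Counter

def rouge1_recall(reference, candidate):
    # fraction of reference unigrams covered by the candidate,
    # with counts clipped to the reference frequencies
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(ref[w], cand[w]) for w in ref)
    return overlap / sum(ref.values())

print(rouge1_recall(\"the letter is hidden in the house\",
                    \"the letter is in the garden\"))  # 5/7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" },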
{ "text": "The summarization experiment was performed on the Moonstone corpus. We also collected 20 chapters from several other nineteenth-century novels and used them in a separate experiment. The ROUGE package requires manually written summaries to compare with the automatically created ones. We obtained the summaries from the SparkNotes website. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" }, { "text": "Table 2 shows the results of comparing HAPS with two baseline segmenters using windowDiff and evalHDS. HAPS was run without knowing the number of segments. MCSeg required that the exact number be specified. BSeg was tested with and without that parameter. Therefore, rows 3 and 4 in Table 2 correspond to baselines considerably more informed than HAPS. This is especially true of the bottom levels, where sometimes knowing the exact number of segments unambiguously determines the only possible segmentation.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 283, "end": 290, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "5" }, { "text": "The results suggest that HAPS performs well on the Moonstone data even when compared to more informed baselines. This applies to both metrics, windowDiff and evalHDS. BSeg performs slightly better at the bottom levels of the tree when it has the information about the exact number of segments. We hypothesize that the advantage may be due to this additional information, especially when segmenting already small segments at level 1 into a predefined number of segments. Another explanation may be that HAPS was fine-tuned so as to optimize the value of windowDiff at the top level, effectively disregarding lower levels of segmentation. All segmenters perform worse on the Wikipedia dataset. On that dataset, informed BSeg performs best, but it is interesting to note a significant drop in performance when the number of segments is not specified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and discussion", "sec_num": "6" }, { "text": "Overall, HAPS appears to perform better than, or comparably to, the more informed baselines, and much better than the baseline not given information about the number of segments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and discussion", "sec_num": "6" }, { "text": "We also made a preliminary attempt to evaluate the quality of segment centres by comparing them to the summaries created by the CohSum summarizer. In addition to working with the Moonstone corpus, we collected a corpus of 20 chapters from various novels by Jane Austen. Table 3 shows the results. They are not conclusive because there is no evidence that ROUGE scores correlate with the quality of automatically created summaries for literature. According to the scores in Table 3 , however, the summaries created by CohSum cannot be distinguished from simple summaries composed of segment centres identified by HAPS. We interpret this as a sign that the centres identified by HAPS are approximately as informative as those created by an automatic summarizer.", "cite_spans": [], "ref_spans": [ { "start": 270, "end": 277, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 473, "end": 480, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and discussion", "sec_num": "6" }, { "text": "This paper presented HAPS, a hierarchical segmenter for free text. Given an input document, HAPS creates a topical tree and identifies a segment centre for each segment. One of the advantages of HAPS is that it does not require the exact number of segments in advance. Instead, it estimates the number of segments given information on generic preferences with regard to segmentation granularity. We also created a corpus of hierarchical segmentations in which each chapter has been annotated by 3-6 people.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A brief conclusion", "sec_num": "7" }, { "text": "A Java implementation of HAPS and the Moonstone corpus are publicly available. 6 ", "cite_spans": [ { "start": 79, "end": 80, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A brief conclusion", "sec_num": "7" }, { "text": "The derivation of the HAPS algorithm, quite involved, is unlikely to interest many readers. We only present the bare minimum of facts about the algorithm, the framework of factor graphs and the derivation of HAPS from the underlying model of Affinity Propagation. A detailed account appears in (Kazantseva, 2014).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The value sim(i, j) is specified in the input matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It is not possible to include a detailed derivation of the new update messages in the space allowed here. The interested reader can find these details in (Kazantseva, 2014). The derivation follows the same logic as (Givoni et al., 2011) and (Kazantseva and Szpakowicz, 2011).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "When working with the Moonstone dataset, we realized that the software produces very low values, almost too good to be true. That is because the bottommost annotations are very fine-grained. Sometimes each paragraph corresponds to a separate segment. This causes problems for the software. So, when we report evalHDS values for the Moonstone dataset, we only consider the two top levels of the tree, disregarding the leaves. We also remove the \"too good to be true\" outliers, though the \"bad\" tail is left intact. 
We applied the same procedure to all three segmenters, only for the Moonstone dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.sparknotes.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.eecs.uottawa.ca/~ankazant/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Chris Fournier (for computing S values using a beta version of SegEval software for hierarchical datasets), Lucien Carroll (for help and discussion of the evalHDS software and representation) and Christian Smith (for allowing us to use his implementation of CohSum).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Pattern Recognition and Machine Learning", "authors": [ { "first": "Christopher", "middle": [ "M" ], "last": "Bishop", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning. Springer.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Topic segmentation with an aspect hidden Markov Model", "authors": [ { "first": "David", "middle": [], "last": "Blei", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Moreno", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "343--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Blei and Pedro Moreno. 2001. Topic segmentation with an aspect hidden Markov Model. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 343-348.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Evaluating Hierarchical Discourse Segmentation", "authors": [ { "first": "Lucien", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "993--1001", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucien Carroll. 2010. Evaluating Hierarchical Discourse Segmentation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 993-1001.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Topic Segmentation with a Structured Topic Model", "authors": [ { "first": "Lan", "middle": [], "last": "Du", "suffix": "" }, { "first": "Wray", "middle": [], "last": "Buntine", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "190--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lan Du, Wray Buntine, and Mark Johnson. 2013. Topic Segmentation with a Structured Topic Model. 
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 190-200, Atlanta, Georgia.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bayesian Unsupervised Topic Segmentation", "authors": [ { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "334--343", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Eisenstein and Regina Barzilay. 2008. Bayesian Unsupervised Topic Segmentation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 334-343, Honolulu, Hawaii.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Hierarchical Text Segmentation from Multi-Scale Lexical Cohesion", "authors": [ { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "353--361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Eisenstein. 2009. Hierarchical Text Segmentation from Multi-Scale Lexical Cohesion. In Proceedings of the 2009 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 353-361. The Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Text-level Discourse Parsing with Rich Linguistic Features", "authors": [ { "first": "Vanessa", "middle": [], "last": "Wei Feng", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "60--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanessa Wei Feng and Graeme Hirst. 2012. Text-level Discourse Parsing with Rich Linguistic Features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 60-68, Jeju Island, Korea, July. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Segmentation Similarity and Agreement", "authors": [ { "first": "Chris", "middle": [], "last": "Fournier", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "152--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Fournier and Diana Inkpen. 2012. Segmentation Similarity and Agreement. 
In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 152-161, Montr\u00e9al, Canada.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Hierarchical Affinity Propagation", "authors": [ { "first": "E", "middle": [], "last": "Inmar", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Givoni", "suffix": "" }, { "first": "Brendan", "middle": [ "J" ], "last": "Chung", "suffix": "" }, { "first": "", "middle": [], "last": "Frey", "suffix": "" } ], "year": 2011, "venue": "Uncertainty in AI, Proceedings of the Twenty-Seventh Conference", "volume": "", "issue": "", "pages": "238--246", "other_ids": {}, "num": null, "urls": [], "raw_text": "Inmar E. Givoni, Clement Chung, and Brendan J. Frey. 2011. Hierarchical Affinity Propagation. In Uncertainty in AI, Proceedings of the Twenty-Seventh Conference (2011), pages 238-246.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Exploring Content Models for Multi-Document Summarization", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "362--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Lucy Vanderwende. 2009. Exploring Content Models for Multi-Document Summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 362-370, Boulder, Colorado, June.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "TextTiling: segmenting text into multi-paragraph subtopic passages", "authors": [ { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "1", "pages": "33--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti A. Hearst. 1997. TextTiling: segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "HILDA: A Discourse Parser Using Support Vector Machine Classification", "authors": [ { "first": "Hugo", "middle": [], "last": "Hernault", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Prendinger", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Duverlea", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2010, "venue": "Dialogue and Discourse", "volume": "3", "issue": "", "pages": "1--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Hernault, Helmut Prendinger, David A. duVerlea, and Mitsuru Ishizuka. 2010. HILDA: A Discourse Parser Using Support Vector Machine Classification. 
Dialogue and Discourse, 3:1-33.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Linear Text Segmentation Using Affinity Propagation", "authors": [ { "first": "Anna", "middle": [], "last": "Kazantseva", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "284--293", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Kazantseva and Stan Szpakowicz. 2011. Linear Text Segmentation Using Affinity Propagation. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 284-293, Edinburgh, Scotland.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Topical Segmentation: a Study of Human Performance and a New Measure of Quality", "authors": [ { "first": "Anna", "middle": [], "last": "Kazantseva", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "211--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Kazantseva and Stan Szpakowicz. 2012. Topical Segmentation: a Study of Human Performance and a New Measure of Quality. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 211-220, Montr\u00e9al, Canada.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Topical Structure in Long Informal Documents", "authors": [ { "first": "Anna", "middle": [], "last": "Kazantseva", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Kazantseva. 2014. Topical Structure in Long Informal Documents. Ph.D. thesis, University of Ottawa. http://www.eecs.uottawa.ca/\u02dcankazant/ .", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "ROUGE: A Package for Automatic Evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out, Proceedings of the ACL Workshop", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of summaries. In Text Summarization Branches Out, Proceedings of the ACL Workshop, pages 74-81, Barcelona, Spain.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Minimum Cut Model for Spoken Lecture Segmentation", "authors": [ { "first": "Igor", "middle": [], "last": "Malioutov", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igor Malioutov and Regina Barzilay. 2006. Minimum Cut Model for Spoken Lecture Segmentation. 
In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 25-32, Sydney, Australia.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Rhetorical Structure Theory: Toward a functional theory of text organization", "authors": [ { "first": "William", "middle": [ "C" ], "last": "Mann", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "Text", "volume": "8", "issue": "3", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8(3):243-281.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Lexical Chains Using Distributional Measures of Concept Distance", "authors": [ { "first": "Meghana", "middle": [], "last": "Marathe", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meghana Marathe. 2010. Lexical Chains Using Distributional Measures of Concept Distance. Master's thesis, University of Toronto.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The Theory and Practice of Discourse Parsing and Summarization", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. MIT Press, Cambridge, Mass.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "TextRank: Bringing Order into Texts", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing Order into Texts. In Dekang Lin and Dekai Wu, editors, Proceedings of the Conference on Empirical Methods in Natural Language Processing 2004, pages 404-411, Barcelona, Spain.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Text segmentation: A topic modeling perspective", "authors": [ { "first": "Hemant", "middle": [], "last": "Misra", "suffix": "" }, { "first": "François", "middle": [], "last": "Yvon", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Cappé", "suffix": "" }, { "first": "Joemon", "middle": [ "M" ], "last": "Jose", "suffix": "" } ], "year": 2011, "venue": "Information Processing and Management", "volume": "47", "issue": "4", "pages": "528--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hemant Misra, François Yvon, Olivier Cappé, and Joemon M. Jose. 2011. Text segmentation: A topic modeling perspective.
Information Processing and Management, 47(4):528-544.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Semantic passage segmentation based on sentence topics for question answering", "authors": [ { "first": "Hyo-Jung", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Sung", "middle": [ "Hyon" ], "last": "Myaeng", "suffix": "" }, { "first": "Myung-Gil", "middle": [], "last": "Jang", "suffix": "" } ], "year": 2007, "venue": "Information Sciences, an International Journal", "volume": "177", "issue": "", "pages": "3696--3717", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hyo-Jung Oh, Sung Hyon Myaeng, and Myung-Gil Jang. 2007. Semantic passage segmentation based on sentence topics for question answering. Information Sciences, an International Journal, 177:3696-3717.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Critique and Improvement of an Evaluation Metric for Text Segmentation", "authors": [ { "first": "Lev", "middle": [], "last": "Pevzner", "suffix": "" }, { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "1", "pages": "19--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Pevzner and Marti A. Hearst. 2002. A Critique and Improvement of an Evaluation Metric for Text Segmentation. Computational Linguistics, 28(1):19-36.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A Language Modeling Approach to Information Retrieval", "authors": [ { "first": "Jay", "middle": [ "M" ], "last": "Ponte", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" } ], "year": 1998, "venue": "SIGIR '98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "275--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay M. Ponte and W. Bruce Croft. 1998. A Language Modeling Approach to Information Retrieval. In SIGIR '98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 275-281, Melbourne, Australia.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A more cohesive summarizer", "authors": [ { "first": "Christian", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Henrik", "middle": [], "last": "Danielsson", "suffix": "" }, { "first": "Arne", "middle": [], "last": "Jönsson", "suffix": "" } ], "year": 2012, "venue": "24th International Conference on Computational Linguistics, Proceedings of COLING 2012: Posters", "volume": "", "issue": "", "pages": "1161--1170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Smith, Henrik Danielsson, and Arne Jönsson. 2012. A more cohesive summarizer.
In 24th International Conference on Computational Linguistics, Proceedings of COLING 2012: Posters, pages 1161-1170, Mumbai, India.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "An iterative approach to text segmentation", "authors": [ { "first": "Fei", "middle": [], "last": "Song", "suffix": "" }, { "first": "William", "middle": [ "M" ], "last": "Darling", "suffix": "" }, { "first": "Adnan", "middle": [], "last": "Duric", "suffix": "" }, { "first": "Fred", "middle": [ "W" ], "last": "Kroon", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 33rd European Conference on Advances in Information Retrieval, ECIR'11", "volume": "", "issue": "", "pages": "629--640", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Song, William M. Darling, Adnan Duric, and Fred W. Kroon. 2011. An iterative approach to text segmentation. In Proceedings of the 33rd European Conference on Advances in Information Retrieval, ECIR'11, pages 629-640, Berlin, Heidelberg. Springer-Verlag.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Coherence in Natural Language: Data Structures and Applications", "authors": [ { "first": "Florian", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian Wolf and Edward Gibson. 2006. Coherence in Natural Language: Data Structures and Applications. MIT Press, Cambridge, MA.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Segmentation of Expository Texts by Hierarchical Agglomerative Clustering", "authors": [ { "first": "Yaakov", "middle": [], "last": "Yaari", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing RANLP97", "volume": "", "issue": "", "pages": "59--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaakov Yaari. 1997. Segmentation of Expository Texts by Hierarchical Agglomerative Clustering. In Proceedings of the International Conference on Recent Advances in Natural Language Processing RANLP97, pages 59-65, Tzigov Chark, Bulgaria.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A new tool for discourse analysis: The vocabulary-management profile", "authors": [ { "first": "Gilbert", "middle": [], "last": "Youmans", "suffix": "" } ], "year": 1991, "venue": "Language", "volume": "67", "issue": "4", "pages": "763--789", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gilbert Youmans. 1991. A new tool for discourse analysis: The vocabulary-management profile. Language, 67(4):763-789.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "(a) Fragment of the factor graph for levels l − 1 and l; (b) Types of messages sent in the HAPS model", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Factor graph for HAPS - Hierarchical Affinity Propagation for Segmentation", "uris": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "text": "Average breadth of manually created topical trees and windowDiff value across different levels", "content": "
Level      | Average breadth | windowDiff
4 (top)    | 6.53            | 0.35
3          | 17.55           | 0.46
2          | 17.63           | 0.47
1 (bottom) | 8.80            | 0.50
", "html": null, "num": null }, "TABREF2": { "type_str": "table", "text": "Evaluation of HAPS and iterative versions of APS, MCSeg and BSeg using windowDiff per level (mean windowDiff and standard deviation for cross-validation)", "content": "
                 | Moonstone corpus               | Austen corpus
                 | ROUGE-1        | ROUGE-L        | ROUGE-1        | ROUGE-L
Segment centres  | 0.341          | 0.321          | 0.291          | 0.301
                 | (0.312, 0.370) | (0.298, 0.346) | (0.272, 0.311) | (0.293, 0.330)
CohSum summaries | 0.294          | 0.269          | 0.305          | 0.307
                 | (0.243, 0.334) | (0.226, 0.306) | (0.290, 0.320) | (0.287, 0.327)
", "html": null, "num": null }, "TABREF3": { "type_str": "table", "text": "HAPS segment centres compared to CohSum summaries: ROUGE scores and 95% confidence intervals", "content": "", "html": null, "num": null } } } }