{
"paper_id": "N13-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:40:24.926872Z"
},
"title": "Topic Segmentation with a Structured Topic Model",
"authors": [
{
"first": "Lan",
"middle": [],
"last": "Du",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Macquarie University Sydney",
"location": {
"country": "Australia"
}
},
"email": "lan.du@mq.edu.au"
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": "",
"affiliation": {
"laboratory": "Canberra Research Lab National ICT Australia Canberra",
"institution": "",
"location": {
"country": "Australia"
}
},
"email": "wray.buntine@nicta.com.au"
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": "",
"affiliation": {},
"email": "mark.johnson@mq.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a new hierarchical Bayesian model for unsupervised topic segmentation. This new model integrates a point-wise boundary sampling algorithm used in Bayesian segmentation into a structured topic model that can capture a simple hierarchical topic structure latent in documents. We develop an MCMC inference algorithm to split/merge segment(s). Experimental results show that our model outperforms previous unsupervised segmentation methods using only lexical information on Choi's datasets and two meeting transcripts and has performance comparable to those previous methods on two written datasets.",
"pdf_parse": {
"paper_id": "N13-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a new hierarchical Bayesian model for unsupervised topic segmentation. This new model integrates a point-wise boundary sampling algorithm used in Bayesian segmentation into a structured topic model that can capture a simple hierarchical topic structure latent in documents. We develop an MCMC inference algorithm to split/merge segment(s). Experimental results show that our model outperforms previous unsupervised segmentation methods using only lexical information on Choi's datasets and two meeting transcripts and has performance comparable to those previous methods on two written datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Documents are usually comprised of topically coherent text segments, each of which contains some number of text passages (e.g., sentences or paragraphs) (Salton et al., 1996) . Within each topically coherent segment, one would expect that the word usage demonstrates more consistent lexical distributions (known as lexical cohesion (Eisenstein and Barzilay, 2008) ) than that across segments. A linear partition of texts into topic segments may reveal information about, for example, themes of segments and the overall thematic structure of the text, and can subsequently be useful for text analysis tasks, such as information retrieval (e.g., passage retrieval (Salton et al., 1996) ), document summarisation and discourse analysis (Galley et al., 2003) .",
"cite_spans": [
{
"start": 153,
"end": 174,
"text": "(Salton et al., 1996)",
"ref_id": "BIBREF30"
},
{
"start": 332,
"end": 363,
"text": "(Eisenstein and Barzilay, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 662,
"end": 683,
"text": "(Salton et al., 1996)",
"ref_id": "BIBREF30"
},
{
"start": 733,
"end": 754,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we consider how to automatically find a topic segmentation. It involves identifying the most prominent topic changes in a sequence of text passages, and splits those passages into a sequence of topically coherent segments (Hearst, 1997; Beeferman et al., 1999) . This task can be cast as an unsupervised machine learning problem: placing topic boundaries in unannotated text.",
"cite_spans": [
{
"start": 236,
"end": 250,
"text": "(Hearst, 1997;",
"ref_id": "BIBREF14"
},
{
"start": 251,
"end": 274,
"text": "Beeferman et al., 1999)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although a variety of cues in text can be used for topic segmentation, such as cue phases (Beeferman et al., 1999; Reynar, 1999; Eisenstein and Barzilay, 2008) ) and discourse information (Galley et al., 2003) , in this paper, we focus on lexical cohesion and use it as the primary cue in developing an unsupervised segmentation model. The effectiveness of lexical cohesion has been demonstrated by Text-Tiling (Hearst, 1997 ), c99 (Choi, 2000 , MinCut (Malioutov and Barzilay, 2006) , PLDA (Purver et al., 2006) , Bayesseg (Eisenstein and Barzilay, 2008) , TopicTiling (Riedl and Biemann, 2012) , etc.",
"cite_spans": [
{
"start": 90,
"end": 114,
"text": "(Beeferman et al., 1999;",
"ref_id": "BIBREF1"
},
{
"start": 115,
"end": 128,
"text": "Reynar, 1999;",
"ref_id": "BIBREF28"
},
{
"start": 129,
"end": 159,
"text": "Eisenstein and Barzilay, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 188,
"end": 209,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF12"
},
{
"start": 411,
"end": 424,
"text": "(Hearst, 1997",
"ref_id": "BIBREF14"
},
{
"start": 425,
"end": 443,
"text": "), c99 (Choi, 2000",
"ref_id": null
},
{
"start": 453,
"end": 483,
"text": "(Malioutov and Barzilay, 2006)",
"ref_id": "BIBREF21"
},
{
"start": 491,
"end": 512,
"text": "(Purver et al., 2006)",
"ref_id": "BIBREF27"
},
{
"start": 524,
"end": 555,
"text": "(Eisenstein and Barzilay, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 570,
"end": 595,
"text": "(Riedl and Biemann, 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work uses recent progress in hierarchical topic modelling with non-parametric Bayesian methods (Du et al., 2010; Chen et al., 2011; Du et al., 2012a) , and is based on Bayesian segmentation methods (Goldwater et al., 2009; Purver et al., 2006; Eisenstein and Barzilay, 2008) using topic models. This can also be viewed as a multi-topic extension of hierarchical Bayesian segmentation (Eisenstein, 2009) , although our use of hierarchies is used to improve the performance of linear segmentation, rather than develop hierarchical segmentation.",
"cite_spans": [
{
"start": 99,
"end": 116,
"text": "(Du et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 117,
"end": 135,
"text": "Chen et al., 2011;",
"ref_id": "BIBREF5"
},
{
"start": 136,
"end": 153,
"text": "Du et al., 2012a)",
"ref_id": "BIBREF8"
},
{
"start": 202,
"end": 226,
"text": "(Goldwater et al., 2009;",
"ref_id": "BIBREF13"
},
{
"start": 227,
"end": 247,
"text": "Purver et al., 2006;",
"ref_id": "BIBREF27"
},
{
"start": 248,
"end": 278,
"text": "Eisenstein and Barzilay, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 388,
"end": 406,
"text": "(Eisenstein, 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, topic models are increasingly used in various text analysis tasks including topic segmentation. Previous work (Purver et al., 2006; Misra et al., 2008; Sun et al., 2008; Misra et al., 2009; Riedl and Biemann, 2012) has shown that using topic assignments or topic distributions instead of word frequency can significantly improve segmentation performance. Here we consider more advanced topic models that model dependencies between (sub-)sections in a document, such as structured topic models (STMs) presented in (Du et al., 2010; Du et al., 2012b) . STMs treat each text as a sequence of segments, each of which is a set of text passages (e.g., a paragraph or sentence). Text passages in a segment share the same prior distribution on their topics. The topic distributions of segments in a single document are then encouraged to be similar via a hierarchical prior. This gives a substantial improvement in modelling accuracy. However, instead of explicitly learning the segmentation, STMs just leverage the existing structure of documents from the given segmentation.",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "(Purver et al., 2006;",
"ref_id": "BIBREF27"
},
{
"start": 142,
"end": 161,
"text": "Misra et al., 2008;",
"ref_id": "BIBREF22"
},
{
"start": 162,
"end": 179,
"text": "Sun et al., 2008;",
"ref_id": "BIBREF31"
},
{
"start": 180,
"end": 199,
"text": "Misra et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 200,
"end": 224,
"text": "Riedl and Biemann, 2012)",
"ref_id": "BIBREF29"
},
{
"start": 523,
"end": 540,
"text": "(Du et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 541,
"end": 558,
"text": "Du et al., 2012b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a sequence of text passages, how can we automatically learn the segmentation? The word boundary sampling algorithm introduced in (Goldwater et al., 2009) uses point-wise sampling of word boundaries after phonemes in an utterance. Similarly, the segmentation method of PLDA (Purver et al., 2006) samples segment boundaries, but also jointly samples a topic model. This is different to other topic modelling approaches that run LDA as a precursor to a separate segmentation step (Misra et al., 2009; Riedl and Biemann, 2012) . While conceptually similar to PLDA, our non-parametric approach built on STM required new methods to implement, but the resulting improvement by the standard segmentation scores is substantial.",
"cite_spans": [
{
"start": 279,
"end": 300,
"text": "(Purver et al., 2006)",
"ref_id": "BIBREF27"
},
{
"start": 483,
"end": 503,
"text": "(Misra et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 504,
"end": 528,
"text": "Riedl and Biemann, 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a new hierarchical Bayesian unsupervised topic segmentation model, integrating a point-wise boundary sampling algorithm with a structured topic model. This new model takes advantage of the high modelling accuracy of structured topic models (Du et al., 2010) to produce a topic segmentation based on the distribution of latent topics. We show that this model provides high quality segmentation performance on Choi's dataset, as well as two sets of meeting transcripts and written texts.",
"cite_spans": [
{
"start": 260,
"end": 277,
"text": "(Du et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following sections we describe our topic segmentation model and an MCMC inference algorithm for the non-parametric split/merge process. The rest of the paper is organised as follows. In Section 2 we review recent related work in the topic segmentation literature. Section 3 presents the new topic segmentation model, followed by the derivation of a sampling algorithm in Section 4. We report the experimental results by comparing several related topic segmentation methods in Section 5. Section 6 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We are interested in unsupervised topic segmentation in either written or spoken language. There is a large body of work on unsupervised topic segmentation of text based on lexical cohesion. It can be characterised by how lexical cohesion is modelled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One branch of this work represents the lexical cohesion in a vector space by exploring the word cooccurrence patterns, e.g., TF or TF-IDF. Work following this line includes TextTiling (Hearst, 1997) , which calculates the cosine similarity between two adjacent blocks of words purely based on the word frequency; C99 (Choi, 2000) , an algorithm based on divisive clustering with a matrix-ranking scheme; LSeg (Galley et al., 2003) , which uses a lexical chain to identify and weight word repetitions; U00 (Utiyama and Isahara, 2001 ), a probalistic approach using dynamic programming to find a segmentation with a minimum cost; MinCut (Malioutov and Barzilay, 2006) , which casts segmentation as a graph cut problem, and APS (Kazantseva and Szpakowicz, 2011) , which uses affinity propagation to learn clustering for segmentation.",
"cite_spans": [
{
"start": 184,
"end": 198,
"text": "(Hearst, 1997)",
"ref_id": "BIBREF14"
},
{
"start": 317,
"end": 329,
"text": "(Choi, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 409,
"end": 430,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF12"
},
{
"start": 505,
"end": 531,
"text": "(Utiyama and Isahara, 2001",
"ref_id": "BIBREF34"
},
{
"start": 635,
"end": 665,
"text": "(Malioutov and Barzilay, 2006)",
"ref_id": "BIBREF21"
},
{
"start": 725,
"end": 758,
"text": "(Kazantseva and Szpakowicz, 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The other branch of this work characterises the lexical cohesion using topic models, to which the model introduced in Section 3 belongs. Lexical cohesion in this line of research is modelled by a probabilistic generative process. PLDA presented by Purver et al. (2006) is an unsupervised topic modelling approach for segmentation. It chains a set of LDAs (Blei et al., 2003) by assuming a Markov structure on topic distributions. A binary topic shift variable is attached to each text passage (i.e., an utterance in (Purver et al., 2006) ). It is sampled to indicate whether the j th text passage shares the topic distribution with the (j \u2212 1) th passage.",
"cite_spans": [
{
"start": 248,
"end": 268,
"text": "Purver et al. (2006)",
"ref_id": "BIBREF27"
},
{
"start": 355,
"end": 374,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 516,
"end": 537,
"text": "(Purver et al., 2006)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Using a similar Markov structure, SITS (Nguyen et al., 2012) chains a set of HDP-LDAs . Unlike PLDA, SITS assumes each text passage is associated with a speaker identity that is attached to the topic shift variable as supervising in-formation. SITS further assumes speakers have different topic change probabilities that work as priors on topic shift variables. Instead of assuming documents in a dataset share the same set of topics, Bayesseg (Eisenstein and Barzilay, 2008) treats words in a segment generated from a segment specific multinomial language model, i.e., it assumes each segment is generated from one topic, and a later hierarchical extension (Eisenstein, 2009) assumes each segment is generated from one topic or its parents. Other methods using as input the output of topic models include (Sun et al., 2008) , (Misra et al., 2009) , and (Riedl and Biemann, 2012) .",
"cite_spans": [
{
"start": 444,
"end": 475,
"text": "(Eisenstein and Barzilay, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 658,
"end": 676,
"text": "(Eisenstein, 2009)",
"ref_id": "BIBREF11"
},
{
"start": 806,
"end": 824,
"text": "(Sun et al., 2008)",
"ref_id": "BIBREF31"
},
{
"start": 827,
"end": 847,
"text": "(Misra et al., 2009)",
"ref_id": "BIBREF23"
},
{
"start": 854,
"end": 879,
"text": "(Riedl and Biemann, 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper we take a generative approach lying between PLDA and SITS. In contrast to PLDA, which uses a flat topic model (i.e., LDA), we assume each text has a latent topic structure that can reflect the topic coherence pattern, and the model adapts its parameters to the segments to further improve performance. Unlike SITS that targets analysing multiparty meeting transcripts, where speaker identities are available, we are interested in more general texts and assume each text has a specific topic change probability, since (1) the identity information is not always available for all kinds of texts (e.g., continuous broadcast news transcripts (Allan et al., 1998) ), (2) even for the same author, topic change probabilities for his/her different articles might be different.",
"cite_spans": [
{
"start": 652,
"end": 672,
"text": "(Allan et al., 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In documents, topically coherent segments usually encapsulate a set of consecutive passages that are semantically related (Wang et al., 2011) . However, the topic boundaries between segments are often unavailable a priori. Thus we treat all passage boundaries (e.g., sentence boundaries, paragraph boundaries or pauses between utterances) as possible topic boundaries. To recover the topic boundaries we develop a structured topic segmentation model by integrating ideas from the segmented topic model (Du et al., 2010, STM) and Bayesian segmentation models.",
"cite_spans": [
{
"start": 122,
"end": 141,
"text": "(Wang et al., 2011)",
"ref_id": "BIBREF36"
},
{
"start": 502,
"end": 524,
"text": "(Du et al., 2010, STM)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "The basic idea of our model is that each document consists of a set of segments where text passages in the same segment are generated from the same topic distribution, called segment level topic distribution. The segment level topic distribution is drawn from a topic distribution associated with the whole document, called document level topic distribution. The relationships between the levels is managed using Bayesian non-parametric methods and a significant change in segment level topic distribution indicates a segment change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "Our unsupervised topic segmentation model is based on the premise that using a hierarchical topic model like the STM with a point-wise segment sampling algorithm should allow better detection of topic boundaries. We believe that (1) segment change should be associated with significant change in the topic distribution, (2) topic cohesion can be reflected in document topic structure, (3) the loglikelihood of a topically coherent segment is typically higher than an incoherent segment (Misra et al., 2008) .",
"cite_spans": [
{
"start": 486,
"end": 506,
"text": "(Misra et al., 2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "Assume we have a corpus of D documents, each document d consists of a sequence of U d text passages, and each passage u contains a set of N d,u words denoted by w d,u that are from a vocabulary W . Our model consists of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "Modelling topic boundary: We assume each document has its own topic shift probability \u03c0 d , a Beta distributed random variable, i.e., \u03c0 d \u223cBeta(\u03bb 0 , \u03bb 1 ). Then, we associate a boundary indicator variable \u03c1 d,u with u, like the topic shift variable in PLDA and SITS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "\u03c1 d,u is Bernoulli distributed with parameter \u03c0 d , i.e., \u03c1 d,u \u223cBernoulli(\u03c0 d ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "It indicates whether there is a topic boundary after text passage u or not. To sample \u03c1 d,u , we use a point-wise sampling algorithm. Consequently, a sequence of \u03c1's defines a set of segments, i.e., a topic segmentation of d. For example, let a \u03c1 vector \u03c1 = (0, 0, 1, 0, 1, 0, 0, 1) 1 , it gives us three segments, which are {1, 2, 3}, {4, 5} and {6, 7, 8}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
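{
"text": "The mapping from a boundary indicator vector to segments can be made concrete with a short sketch (our illustration, not code from the paper; the function name is ours):",
"code": [
"def segments_from_rho(rho):",
"    # rho[u-1] = 1 means a topic boundary falls after the u-th passage;",
"    # the final 1 is the document boundary, which is fixed a priori.",
"    segments, current = [], []",
"    for u, boundary in enumerate(rho, start=1):",
"        current.append(u)",
"        if boundary == 1:",
"            segments.append(current)",
"            current = []",
"    return segments",
"",
"# The example from the text: three segments {1, 2, 3}, {4, 5} and {6, 7, 8}.",
"print(segments_from_rho([0, 0, 1, 0, 1, 0, 0, 1]))",
"# [[1, 2, 3], [4, 5], [6, 7, 8]]"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": null
},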
{
"text": "Modelling topic structure: Following the idea of the STM, we assume each document d is associated with a document level topic distribution \u00b5 d , which is drawn from a Dirichlet distribution with parameter \u03b1; and text passages in topic segment s in d are generated from \u03bd d,s , a segment level topic distribution. The number of segments S d can be com- Figure 1 : The topic segmentation model process with a discount parameter a and a concentration parameter b is used to link",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "puted as S d =1 + U d \u22121 u=1 \u03c1 d,u . Then, a Pitman-Yor D K \u03b1 \u03bc \u03b3 \u03c6 \u03bd \u03bb \u03c0 U s w z S U N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "\u00b5 d and \u03bd d,s by \u03bd d,s \u223cPYP(a, b, \u00b5 d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": ", which forms a simple topic hierarchy. The idea here is that topics discussed in segments can be variants of topics of the whole document. Du et al. (2010) have shown that this topic structure can significantly improve the modelling accuracy, which should contribute to more accurate segmentation. This generative process is different from PLDA. PLDA does not assume the document level topic distribution and each time generates the segment level topic distribution directly from a Dirichlet distribution.",
"cite_spans": [
{
"start": 140,
"end": 156,
"text": "Du et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "The complete probabilistic generative process, shown as a graph in Figure 1 is as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "1. For each topic k \u2208 {1, . . . , K}, draw a word distribution \u03c6 k \u223c DirichletW (\u03b3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation with Topic Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d \u2208 {1, . . . , D},",
"eq_num": "(a)"
}
],
"section": "For each document",
"sec_num": "2."
},
{
"text": "Draw topic shift probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For each document",
"sec_num": "2."
},
{
"text": "\u03c0 d \u223c Beta(\u03bb0, \u03bb1). (b) Draw \u00b5 d \u223c DirichletK (\u03b1). (c) For each text passage (except last) u \u2208 {1, . . . , U d \u2212 1}, draw \u03c1 d,u \u223c Bernoulli(\u03c0 d ). (d) Compute S d the number of segments as 1 + U d \u22121 u=1 \u03c1 d,u . (e) For each segment s \u2208 {1, . . . , S d }, draw \u03bd d,s \u223c PYP(a, b, \u00b5 d ). (f) For each text passage u \u2208 {1, . . . , U d }, i. Set segment s d,u = 1 + u\u22121 v=1 \u03c1 d,v . ii. For each word index n \u2208 {1, . . . , N d,u }, A. Draw topic z d,u,n \u223c DiscreteK \u03bd d,s d,u . B. Draw word w d,u,n \u223c DiscreteK (\u03c6 z d,u,n ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For each document",
"sec_num": "2."
},
{
"text": "where s d,u indicates which segment text passage u belongs to. We assume the dimensionality of the Dirichlet distribution (i.e., the number of topics) is known and fixed, and word probabilities are parameterized with a K \u00d7 W matrix \u03a6 = (\u03c6 1 , . . . , \u03c6 K ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For each document",
"sec_num": "2."
},
{
"text": "In future work we plan to investigate replace the ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For each document",
"sec_num": "2."
},
{
"text": "In this section we develop a collapsed Gibbs sampling algorithm to do an approximate inference by integrating out some latent variables (i.e., \u00b5's, \u03bd's and \u03c0 d 's). The hierarchy in our model can be well explained with the Chinese restaurant franchise metaphor introduced in . For easier understanding, terminologies of the Chinese Restaurant Process (CRP) will be used throughout this section, i.e., customers, dishes and restaurants, correspond to words, topics, and segments respectively. Statistics used are listed in Table 1 . To integrate out the \u03bd d,s 's generated from the PYP, we use the technique presented in (Chen et al., 2011) , which computes the joint posterior for the PYP by summing out all the possible seating arrangements for a sequence of customers (Teh, 2006) . In this technique an auxiliary binary variable, called table indicator (\u03b4 d,u,n ), is introduced to facilitate computing table count t d,s,k for topic k. This method has two effects: (1) faster mixing of the sampler, and (2) elimination of the need for dynamic memory to store the populations/counts of each table in the CRP. In the CRP each word w d,u,n in topic k (i.e., where z d,u,n =k) contributes a count to n d,s,k for u \u2208 s; and, if w d,u,n , as a customer, also opens a new table to the CRP, it leads to increasing t d,s,k by one. In this case, \u03b4 d,u,n =1 indicates w d,u,n is the first customer on the table, called table head. Thus,",
"cite_spans": [
{
"start": 620,
"end": 639,
"text": "(Chen et al., 2011)",
"ref_id": "BIBREF5"
},
{
"start": 770,
"end": 781,
"text": "(Teh, 2006)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 522,
"end": 529,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Posterior Inference",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t d,s,k = u\u2208s N d,u n=1 \u03b4 d,u,n 1 z d,u,n =k .",
"eq_num": "(1)"
}
],
"section": "Posterior Inference",
"sec_num": "4"
},
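{
"text": "Given per-word topic assignments and table indicators, Eq. (1) and constraint (2) reduce to a direct tally; a minimal sketch (our illustration, assuming a data layout of one (z, delta) pair per word):",
"code": [
"from collections import defaultdict",
"",
"def segment_counts(segment):",
"    # segment: list of passages; each passage is a list of (z, delta) pairs,",
"    # where z is the word's topic and delta = 1 iff the word heads a table.",
"    t = defaultdict(int)  # t[k]: tables serving topic k in this segment (Eq (1))",
"    n = defaultdict(int)  # n[k]: customers (words) assigned to topic k",
"    for passage in segment:",
"        for z, delta in passage:",
"            n[z] += 1",
"            t[z] += delta",
"    # Constraint (2): n[k] >= t[k] >= 0, and t[k] == 0 iff n[k] == 0",
"    assert all(n[k] >= t[k] >= 0 and (t[k] > 0) == (n[k] > 0) for k in n)",
"    return t, n"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Inference",
"sec_num": null
},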
{
"text": "Note the two constraints on these two counts, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Inference",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n d,s,k \u2265t d,s,k \u22650 and t d,s,k =0 iff n d,s,k =0",
"eq_num": "(2)"
}
],
"section": "Posterior Inference",
"sec_num": "4"
},
{
"text": "can be replaced be a simpler constraint in the table indicator representation. The sampler we develop is an MCMC sampler on the space \u03b8 = {z, \u03b4, \u03c1} where z defines the topic assignments of words, \u03b4 maintains the needed CRP configuration (from which t is derived) and \u03c1 defines the segmentation. Moreover, it is not a traditional Gibbs sampler changing one variable at a time, but is a block Gibbs sampler where two different kinds of blocks are used. The first block is (z d,u,n , \u03b4 d,u,n ) (for each word w d,u,n ), which can be sampled with a table indicator variant of a hierarchical topic sampler (Du et al., 2010) , described in Section 4.1. This corresponds to Equation (6) in (Purver et al., 2006) . The second kind of block is a boundary indicator \u03c1 d,u together with a particular constrained set of table counts designed to handle splitting and merging, which corresponds to Equation (7) in (Purver et al., 2006) . Sampling this second kind of block is harder in our non-parametric model requiring a potentially exponential summation, a problem we overcome using symmetric polynomials, shown in Section 4.2.",
"cite_spans": [
{
"start": 601,
"end": 618,
"text": "(Du et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 683,
"end": 704,
"text": "(Purver et al., 2006)",
"ref_id": "BIBREF27"
},
{
"start": 900,
"end": 921,
"text": "(Purver et al., 2006)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Inference",
"sec_num": "4"
},
{
"text": "One step in our model is to sample the assignments of topics to words conditioned on all \u03c1's. As discussed in Section 3, given the sequence of \u03c1 d,u 's, \u03c1 d , one can figure out which segment s text passage u belongs to. Thus, conditioned on a set of segments s given by \u03c1, the joint posterior distribution of w, z and \u03b4 is computed as p(z, w, \u03b4 | \u03c1, \u03a6, a, b, \u03b3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= d Beta K \u03b1 + s t d,s Beta K (\u03b1) k Beta W (\u03b3 + M k ) Beta W (\u03b3) d s\u2208s (b|a) T d,s (b) N d,s k S n d,s,k t d,s,k ,a n d,s,k t d,s,k \u22121 ,",
"eq_num": "(3)"
}
],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Beta K (\u2022) is a K-dimension Beta function,",
"eq_num": "(x|y)"
}
],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "n the Pochhammer symbol 2 , and S n t,a the generalised Stirling number of the second kind (Hsu and Shiue, 1998) 3 precomputed in a table so cost- 2 The Pochhammer symbol (x|y)n denotes the rising factorial with a specified increment, i.e., y. It is defined as (x|y)n = x(x + y)...(x + (n \u2212 1)y).",
"cite_spans": [
{
"start": 147,
"end": 148,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "3 A Stirling number of the second kind is used to study the number of ways of partitioning a set of n objects into k nonempty subsets. The generalised version given by Hsu and Shiue (1998) has a linear recursion which in our case is S n+1 m,a = S n m\u22121,a + (n \u2212 ma)S n m,a .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": "4.1"
},
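{
"text": "A sketch of the precomputation mentioned in footnotes 2 and 3 (our illustration): the generalised Stirling numbers grow very quickly, so we tabulate their logarithms with the linear recursion, alongside the log Pochhammer symbol:",
"code": [
"import math",
"",
"def logaddexp(x, y):",
"    if x == float('-inf'): return y",
"    if y == float('-inf'): return x",
"    m = max(x, y)",
"    return m + math.log(math.exp(x - m) + math.exp(y - m))",
"",
"def log_stirling_table(a, n_max):",
"    # S[n][m] holds log S^n_{m,a}, filled via the recursion (Hsu and Shiue, 1998)",
"    # S^{n+1}_{m,a} = S^n_{m-1,a} + (n - m a) S^n_{m,a}; each later lookup is O(1).",
"    NEG = float('-inf')",
"    S = [[NEG] * (n_max + 2) for _ in range(n_max + 1)]",
"    S[0][0] = 0.0",
"    for n in range(n_max):",
"        for m in range(n + 1):",
"            if S[n][m] == NEG:",
"                continue",
"            if n - m * a > 0:  # another customer joins one of the m tables",
"                S[n + 1][m] = logaddexp(S[n + 1][m], S[n][m] + math.log(n - m * a))",
"            S[n + 1][m + 1] = logaddexp(S[n + 1][m + 1], S[n][m])  # a new table opens",
"    return S",
"",
"def log_pochhammer(x, y, n):",
"    # log (x|y)_n = log of x (x + y) ... (x + (n-1) y)",
"    return sum(math.log(x + i * y) for i in range(n))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": null
},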
{
"text": "ing O(1) to use (Buntine and Hutter, 2012) .Eq 3is an indicator variant of Eq (1) in (Du et al., 2010) with applying Theorem 1 in (Chen et al., 2011) . Given the current segmentation and topic assignments for all other words, using Bayes rule, we can derive the following two conditionals from Eq (3):",
"cite_spans": [
{
"start": 16,
"end": 42,
"text": "(Buntine and Hutter, 2012)",
"ref_id": "BIBREF4"
},
{
"start": 85,
"end": 102,
"text": "(Du et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 130,
"end": 149,
"text": "(Chen et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "1. The joint probability of assigning topic k to word w d,u,n and w d,u,n being a table head,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(z d,u,n = k, \u03b4 d,u,n = 1 | \u03b8 ) = \u03b3 wi,j,n + M k,wi,j,n w (\u03b3 w + M k,w ) \u03b1 k + s t d,s,k k \u03b1 k + s,k t d,s,k b + aT d,s b + N d,s S n d,s,k +1 t d,s,k +1,a S n d,s,k t d,s,k ,a t d,s,k + 1 n d,s,k + 1",
"eq_num": "(4)"
}
],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "2. The joint probability of assigning k to w d,u,n and w d,u,n not being a table head,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(z d,u,n = k, \u03b4 d,u,n = 0 | \u03b8 ) = \u03b3 w i,j,l + M k,w i,j,l w \u03b3 w + M k,w 1 b + N d,s S n d,s,k +1 t d,s,k ,a S n d,s,k t d,s,k ,a n d,s,k + 1 \u2212 t d,s,k n d,s,k + 1",
"eq_num": "(5)"
}
],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "where u,n , \u03c1, \u03b1, a, b, \u03b3}. From the two conditionals, we develop a blocked Gibbs sampling algorithm for (z d,u,n , \u03b4 d,u,n ).",
"cite_spans": [
{
"start": 6,
"end": 9,
"text": "u,n",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": "4.1"
},
{
"text": "\u03b8 = {z \u2212z d,u,n , w, \u03b4 \u2212\u03b4 d,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": "4.1"
},
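{
"text": "A sketch of the blocked update implied by Eqs (4) and (5) (our illustration, under stated assumptions: symmetric priors, one segment's counts ts/ns plus document level table counts td, and the log Stirling table from the earlier sketch; none of the variable names come from the paper):",
"code": [
"import numpy as np",
"",
"def block_probs(k, w, M, gamma_, alpha, td, ts, ns, a, b, logS):",
"    # Unnormalised joint probabilities of (z = k, delta = 1) and (z = k, delta = 0)",
"    # for one word w. M: K x W topic-word counts (excluding this word); td[k]:",
"    # table counts summed over the document's segments; ts[k], ns[k]: table and",
"    # customer counts of the word's segment; logS: log Stirling table.",
"    K, W = M.shape",
"    word = (gamma_ + M[k, w]) / (gamma_ * W + M[k].sum())",
"    T_s, N_s = ts.sum(), ns.sum()",
"    # delta = 1: the word opens a new table, Eq (4)",
"    r1 = np.exp(logS[ns[k] + 1][ts[k] + 1] - logS[ns[k]][ts[k]])",
"    p1 = (word * (alpha + td[k]) / (alpha * K + td.sum())",
"          * (b + a * T_s) / (b + N_s) * r1 * (ts[k] + 1) / (ns[k] + 1))",
"    # delta = 0: the word joins an existing table, Eq (5)",
"    r0 = np.exp(logS[ns[k] + 1][ts[k]] - logS[ns[k]][ts[k]])",
"    p0 = word / (b + N_s) * r0 * (ns[k] + 1 - ts[k]) / (ns[k] + 1)",
"    return p1, p0"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Topics",
"sec_num": null
},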
{
"text": "In our model, each segment corresponds to a Chinese restaurant in the CRP. Sampling topic boundaries corresponds to splitting/merging restaurant(s). This is different from the split-merge process proposed by Jian and Neal (2004) , where one actually splits/merges table(s). To our knowledge, there has been no method developed to split/merge restaurant(s). We tried different approximations, such as the minimum-path-assumption (Wallach, 2008) , which in our case assumes one table for each topic k, and all words in k are placed in the same table. Although this simplifies the split-merge process, it yielded poor results. We instead developed a novel approximate block Gibbs sampling algorithm using symmetric polynomials. Its segmentation performance worked well in our development dataset.",
"cite_spans": [
{
"start": 208,
"end": 228,
"text": "Jian and Neal (2004)",
"ref_id": null
},
{
"start": 428,
"end": 443,
"text": "(Wallach, 2008)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "For simplicity, we consider a passage u in document d, and assume: (1) If \u03c1 d,u =1, there are two segments, s l and s r ; s l ends at text passage u, and s r starts at text passage u+1. (2) If \u03c1 d,u =0, there is one segment, s m , where u is is somewhere in the middle of s m . The split-merge choice we sample is one to many, for a given split pair (s l , s r ) we consider a set of merged states s m (represented by different possible table counts). Then, to compute the Gibbs probability for splitting/merging restaurant(s), we consider the probability of the single split, the probability of the corresponding set of merges, and then if a merge is selected, we have to sample from the set of merges. These are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "Splitting: split s m into s r and s l by placing a boundary after u. Since passages have a fixed order in each document, all the words are put into s r and s l based on which passages they belong to. Then, given all the topic assignments, we first sample all table indicators \u03b4 d,u ,n , for n \u2208 {1, ..., N d,u } and u \u2208 s m using Bernoulli sampling without replacement. It runs as follows: 1) sample \u03b4 d,u ,n according to probability t d,sm,k /n d,sm,k ; 2) decrease t d,sm,k if \u03b4 d,u ,n = 1, otherwise, just decrease n d,sm,k . Using the sampled \u03b4 d,u ,n 's we compute the inferred table counts t d,s,k (from Eq (1)) and customer counts n d,s,k respectively for segments s=s l and s r and topics k. The computation may result in the following cases: for a given topic k, The Gibbs probability for splitting a segment is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
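{
"text": "The Bernoulli-sampling-without-replacement step 1)-2) above can be sketched as follows (our illustration; the variable names are ours). It uniformly distributes the existing table heads of each topic over that topic's words:",
"code": [
"import numpy as np",
"",
"rng = np.random.default_rng(1)",
"",
"def sample_split_indicators(z_seg, t_m, n_m):",
"    # z_seg: topic assignments of s_m's words in passage order;",
"    # t_m[k], n_m[k]: table and customer counts of the merged segment s_m.",
"    t, n = t_m.copy(), n_m.copy()",
"    deltas = []",
"    for k in z_seg:",
"        # draw delta with probability t/n, then decrement without replacement",
"        delta = int(rng.random() < t[k] / n[k])",
"        t[k] -= delta",
"        n[k] -= 1",
"        deltas.append(delta)",
"    return deltas  # feed into Eq (1) to get counts for s_l and s_r"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": null
},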
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(\u03c1 d,u = 1 | \u03b8 ) \u221d \u03bb 1 + c d,1 \u03bb 0 + \u03bb 1 + c d,0 + c d,1",
"eq_num": "(6)"
}
],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "Beta K \u03b1 + S d s=1 t d,s s\u2208{s l ,sr} (b|a) T d,s (b) N d,s k S n d,s,k t d,s,k ,a ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "where \u03b8 = {z, w, \u03b4, \u03c1 \u2212\u03c1 d,u , \u03b1, a, b, \u03bb 0 , \u03bb 1 }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "Merging: remove the boundary after u, and merge s r and s l to one segment s m . For this case, both s r and s l satisfy constraints (2) for all k's, and set ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "n d,sm,k =n d,sr,k + n d,s l ,k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t d,sm,k = t d,s l ,k + t d,sr,k (7) t d,sm,k = t d,s l ,k + t d,sr,k \u2212 1",
"eq_num": "(8)"
}
],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "Note that choosing Eq (8) means we need to decrease the table count t d,sm,k by one. The idea here is that we sample to decide whether the remove table was added due to splitting case (III) or not. Clearly, we have a one-to-many split-merge choice. To compute the probability of a set of possible merges, we use elementary symmetric polynomials as follows: let KS be a set of topic-segment combinations that satisfy the condition in merging case (III), for (k, s) \u2208 KS, we sample either Eq (7) or Eq (8). Let T = {t d,s,k : (k, s) \u2208 KS} be the set of table counts affected by the changes of Eq (7) or Eq (8). The Gibbs probability for merging two segments is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "p(\u03c1 d,u = 0 | \u03b8 ) = T p(\u03c1 d,u = 0, T | \u03b8 ) (9) \u221d T \u03bb 0 + c d,0 \u03bb 0 + \u03bb 1 + c d,0 + c d,1 Beta K \u03b1 + S d s=1 t d,s (b|a) T d,sm (b) N d,sm k S n d,sm,k t d,sm,k ,a ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "\u03b8 = {z, w, t \u2212 T , \u03c1 \u2212\u03c1 d,u , \u03b1, a, b, \u03bb 0 , \u03bb 1 }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
{
"text": "This is converted to a sum on |T | booleans with independent terms and evaluated recursively in O(|T | 2 ) by symmetric polynomials. If a merge is chosen, one then samples according to the terms in the sum using a similar recursion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": "4.2"
},
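{
"text": "The $O(|\\mathcal{T}|^2)$ evaluation rests on the standard recursion for elementary symmetric polynomials; a minimal sketch of that recursion (our illustration of the idea, not the authors' exact computation):",
"code": [
"def elementary_symmetric(r):",
"    # e[j] = sum over all j-subsets of r of the product of their elements,",
"    # computed in O(len(r)^2) by the recursion e_j <- e_j + r_i * e_{j-1}.",
"    e = [1.0] + [0.0] * len(r)",
"    for x in r:",
"        for j in range(len(e) - 1, 0, -1):",
"            e[j] += x * e[j - 1]",
"    return e",
"",
"# If r[i] is the ratio of the Eq (8) term to the Eq (7) term for the i-th pair",
"# in KS, the sum over T groups by how many tables are removed: it becomes",
"# sum_j e[j] * f(j), where f(j) collects the factors shared by all choices that",
"# remove exactly j tables (a hypothetical decomposition, for illustration only)."
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Segmentation Boundaries",
"sec_num": null
},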
{
"text": "To demonstrate the effectiveness of our model (denoted by TSM) in topic segmentation tasks, we evaluate it on three different kinds of corpora 4 : a set of synthetic documents, two meeting transcripts and two sets of text books (see Tables 2 and 3) ; and compare TSM with the following methods: two baselines (the Random algorithm that places topic boundaries uniformly at random, and the Even algorithm that places a boundary after every m th text passage, where m is the average gold-standard segment length (Beeferman et al., 1999) ), C99, MinCut, Bayesseg, APS (Kazantseva and Szpakowicz, 2011) , and PLDA.",
"cite_spans": [
{
"start": 510,
"end": 534,
"text": "(Beeferman et al., 1999)",
"ref_id": "BIBREF1"
},
{
"start": 565,
"end": 598,
"text": "(Kazantseva and Szpakowicz, 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 233,
"end": 248,
"text": "Tables 2 and 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
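{
"text": "The two baselines are simple enough to state in a few lines (our illustration; the parameter names are ours):",
"code": [
"import random",
"",
"def even_segmentation(num_passages, m):",
"    # Even: a boundary after every m-th passage; the final 1 is the document end.",
"    return [1 if (u + 1) % m == 0 else 0 for u in range(num_passages - 1)] + [1]",
"",
"def random_segmentation(num_passages, num_segments, seed=0):",
"    # Random: num_segments - 1 boundaries placed uniformly at random.",
"    rng = random.Random(seed)",
"    cuts = set(rng.sample(range(num_passages - 1), num_segments - 1))",
"    return [1 if u in cuts else 0 for u in range(num_passages - 1)] + [1]"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},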
{
"text": "Metrics: We evaluated the segmentation performance with PK (Beeferman et al., 1999) and Win-dowDiff (WD r ) (Pevzner and Hearst, 2002) , which are two common metrics used in topic segmentation. Both move a sliding window of fixed size k over the document, and compare the inferred segmentation with the gold-standard segmentation for each window. The window size is usually set to the half of the average gold-standard segment size (Pevzner and Hearst, 2002) . In addition, we also used an extended WindowDiff proposed by Lamprier et al. (2007) , denoted by WD e . One problem of WD r is that errors near the two ends of a text are penalised less than those in the middle. To solve the problem WD e adds k fictive text passages at the beginning and the end of the text when computing the score. We evaluated all the methods with the same Java code for the three metrics.",
"cite_spans": [
{
"start": 59,
"end": 83,
"text": "(Beeferman et al., 1999)",
"ref_id": "BIBREF1"
},
{
"start": 108,
"end": 134,
"text": "(Pevzner and Hearst, 2002)",
"ref_id": "BIBREF25"
},
{
"start": 432,
"end": 458,
"text": "(Pevzner and Hearst, 2002)",
"ref_id": "BIBREF25"
},
{
"start": 522,
"end": 544,
"text": "Lamprier et al. (2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
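{
"text": "For reference, minimal sketches of the two base metrics over boundary indicator vectors (our illustration; we follow the common definitions, with the window size defaulting to half the mean gold segment length):",
"code": [
"def seg_ids(rho):",
"    # per-passage segment id implied by boundary indicators",
"    ids, s = [], 0",
"    for b in rho:",
"        ids.append(s)",
"        s += b",
"    return ids",
"",
"def pk(ref, hyp, k=None):",
"    # PK (Beeferman et al., 1999): fraction of windows whose two ends lie in the",
"    # same segment in one segmentation but not in the other.",
"    k = k or max(1, round(len(ref) / (2 * sum(ref))))",
"    r, h = seg_ids(ref), seg_ids(hyp)",
"    errs = sum((r[i] == r[i + k]) != (h[i] == h[i + k])",
"               for i in range(len(ref) - k))",
"    return errs / (len(ref) - k)",
"",
"def window_diff(ref, hyp, k=None):",
"    # WindowDiff (Pevzner and Hearst, 2002): penalise windows whose boundary",
"    # counts differ between the two segmentations.",
"    k = k or max(1, round(len(ref) / (2 * sum(ref))))",
"    errs = sum(sum(ref[i:i + k]) != sum(hyp[i:i + k])",
"               for i in range(len(ref) - k))",
"    return errs / (len(ref) - k)"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},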
{
"text": "Parameter Settings: In order to make all the methods comparable, we chose for each method the parameter settings that give the gold-standard number of segments 5 . Specifically, we used a 11 \u00d7 11 rank mask for C99, as suggested by Choi (2000) , the configurations included in the code (http://groups.csail.mit.edu/rbg/code) for Bayesseg and manually tuned parameters for MinCut. For APS, a greedy approach was used to search parameter settings that can approximately give the gold-standard number of segments. For PLDA, two randomly initialised Gibbs chains were used. Each chain ran for 75,000 burn-in iterations, then 1000 samples were drawn at a lag of 25 from each chain. For TSM, 10 randomly initialised Gibbs chains were used. Each chain ran for 30,000 iterations with 25,000 for burn-in, then 200 samples were drawn. The concentration parameter b in TSM was sampled using the Adaptive-Reject sampling scheme introduced in (Du et al., 2012b) , the discount parameter a = 0.2, and \u03bb 0 = \u03bb 1 = 0.1. To derive the final segmentation for PLDA and TSM, we first estimated the marginal probabilities of placing boundaries after text passages from the total of 2000 samples. These probabilities were then thresholded to give the gold-standard number of segments. Precisely, we apply a small amount of Gaussian smoothing to the marginal probabilities (except for Choi's dataset), like Puerver et al. (2006) does. Finally, we used a symmetric Dirichlet prior in PLDA and STM, the one on topic distributions is \u03b1 = 0.1, the other on word distributions \u03b3 = 0.01.",
"cite_spans": [
{
"start": 231,
"end": 242,
"text": "Choi (2000)",
"ref_id": "BIBREF6"
},
{
"start": 929,
"end": 947,
"text": "(Du et al., 2012b)",
"ref_id": "BIBREF9"
},
{
"start": 1383,
"end": 1404,
"text": "Puerver et al. (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
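{
"text": "A sketch of the thresholding step for PLDA and TSM described above (our illustration; the smoothing width sigma is an assumption, as it is not reported):",
"code": [
"import numpy as np",
"from scipy.ndimage import gaussian_filter1d",
"",
"def final_segmentation(samples, num_segments, sigma=1.0, smooth=True):",
"    # samples: (num_samples, U) array of sampled rho vectors for one document.",
"    p = np.asarray(samples, dtype=float).mean(axis=0)  # marginal boundary probs",
"    if smooth:",
"        p[:-1] = gaussian_filter1d(p[:-1], sigma)  # keep the fixed final boundary",
"    rho = np.zeros(len(p), dtype=int)",
"    rho[-1] = 1  # document boundary, known a priori",
"    # threshold: keep the num_segments - 1 most probable internal boundaries",
"    rho[np.argsort(p[:-1])[::-1][:num_segments - 1]] = 1",
"    return rho"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},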
{
"text": "Choi's dataset (Choi, 2000) is commonly used in evaluating topic segmentation methods. It consists of 700 documents, each being a concatenation of 10 segments. Each segment is the first n sentences of a randomly selected document from the Brown corpus, s.t. 3 \u2264 n \u2264 11. Those documents are divided into 4 subsets with different range of n, as shown in Table 2 . We ran PLDA and STM with 50 topics. Results in Table 4 show that our model significantly outperforms all the other methods on the four subsets over all the metrics. Furthermore, comparing to other published results, this also outperforms (Misra et al., 2009 ) (see their table 2), and (Riedl and Biemann, 2012) (they report an average of 1.04 and 1.06 in Tables 1 and 2 , whereas TSM averages 0.93). This gives TSM the best reported results to date. Note the lexical transitions in these concatenated documents are very sharp (Malioutov and Barzilay, 2006) . The sharp transitions lead to significant change in segment level topic distributions, which further implies the variance of these distributions is large. In TSM, a large variance causes a small concentration parameter b. We observed that the sampled b's (about 0.1) are indeed small for the four subsets, which shows there is no topic sharing among segments. Therefore, TSM is able to recognise the segments are unrelated text.",
"cite_spans": [
{
"start": 15,
"end": 27,
"text": "(Choi, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 600,
"end": 619,
"text": "(Misra et al., 2009",
"ref_id": "BIBREF23"
},
{
"start": 888,
"end": 918,
"text": "(Malioutov and Barzilay, 2006)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 352,
"end": 359,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 409,
"end": 416,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 717,
"end": 731,
"text": "Tables 1 and 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Evaluation on Choi's Dataset",
"sec_num": "5.1"
},
{
"text": "We applied our model to segmenting the two meeting transcripts, which are the ICSI meeting transcripts (Janin et al., 2003) and the 2008 presidential election debates (Boydstun et al., 2011) . The ICSI meeting has 75 transcripts, we used the 25 annotated transcripts provided by Galley et al. (2003) for evaluation. For the election debates, we used the four annotated debates used in (Nguyen et al., 2012) . The statistics are shown in Table 3 . PLDA and TSM were trained with 10 topics on the ICSI and 50 on the Election. In this set of experiments, we show that our model is robust to meeting transcripts. As shown in Table 5 , topic modelling based methods (i.e., Bayesseg, PLDA and TSM) outperform those using either TF or TF-IDF, which is consistent with previously reported results (Misra et al., 2009; Riedl and Biemann, 2012) . Among the topic model based methods, TSM achieves the best results on all the three metrics. On the ICSI transcripts, TSM performs 6.8%, 9.7% and 3.4% better than Bayesseg on the WD r , WD e and PK metrics respectively. Figure 2 shows an example of how the inferred topic boundary probabilities at utterances compare with the gold-standard boundaries on one ICSI meeting transcript. The gold-standard segmentation is {77, 95, 189, 365, 508, 609, 860}, TSM and PLDA infer {85, 96, 188, 363, 499, 508, 860} and {96, 136, 203, 226, 361, 508, 860} respectively. Both models miss the boundary after the 609 th utterance, but put a boundary after the 508 th utterance. Note the boundaries placed by TSM are always within 10 utterances with respect to the gold standard. Although TSM still performs the best on the debates, all the methods have relatively worse performance than on the ICSI meeting transcripts. Nguyen et al. (2012) pointed out that the ICSI meetings are characterised by pragmatic topic changes, in contrast, the debates are characterised by strategic topic changes with strong rewards for setting the agenda, dodging a question, etc. Thus, considering the properties of debates might further improve the segmentation performance.",
"cite_spans": [
{
"start": 103,
"end": 123,
"text": "(Janin et al., 2003)",
"ref_id": "BIBREF17"
},
{
"start": 167,
"end": 190,
"text": "(Boydstun et al., 2011)",
"ref_id": "BIBREF3"
},
{
"start": 279,
"end": 299,
"text": "Galley et al. (2003)",
"ref_id": "BIBREF12"
},
{
"start": 385,
"end": 406,
"text": "(Nguyen et al., 2012)",
"ref_id": "BIBREF24"
},
{
"start": 789,
"end": 809,
"text": "(Misra et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 810,
"end": 834,
"text": "Riedl and Biemann, 2012)",
"ref_id": "BIBREF29"
},
{
"start": 1254,
"end": 1258,
"text": "{77,",
"ref_id": null
},
{
"start": 1259,
"end": 1262,
"text": "95,",
"ref_id": null
},
{
"start": 1263,
"end": 1267,
"text": "189,",
"ref_id": null
},
{
"start": 1268,
"end": 1272,
"text": "365,",
"ref_id": null
},
{
"start": 1273,
"end": 1277,
"text": "508,",
"ref_id": null
},
{
"start": 1278,
"end": 1282,
"text": "609,",
"ref_id": null
},
{
"start": 1283,
"end": 1288,
"text": "860},",
"ref_id": null
},
{
"start": 1289,
"end": 1312,
"text": "TSM and PLDA infer {85,",
"ref_id": null
},
{
"start": 1313,
"end": 1316,
"text": "96,",
"ref_id": null
},
{
"start": 1317,
"end": 1321,
"text": "188,",
"ref_id": null
},
{
"start": 1322,
"end": 1326,
"text": "363,",
"ref_id": null
},
{
"start": 1327,
"end": 1331,
"text": "499,",
"ref_id": null
},
{
"start": 1332,
"end": 1336,
"text": "508,",
"ref_id": null
},
{
"start": 1337,
"end": 1350,
"text": "860} and {96,",
"ref_id": null
},
{
"start": 1351,
"end": 1355,
"text": "136,",
"ref_id": null
},
{
"start": 1356,
"end": 1356,
"text": "",
"ref_id": null
}
],
"ref_spans": [
{
"start": 437,
"end": 444,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 621,
"end": 628,
"text": "Table 5",
"ref_id": null
},
{
"start": 1057,
"end": 1065,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation on Meeting Transcripts",
"sec_num": "5.2"
},
{
"text": "We further tested TSM on two written text datasets, Clinical (Eisenstein and Barzilay, 2008) and Fiction (Kazantseva and Szpakowicz, 2011) . The statistics are shown in Table 3 . Each document in the Clinical dataset is a chapter of a medical textbook. Section breaks are selected to be the true topic boundaries. For the Fiction dataset, each document is a fiction downloaded from Project Gutenberg, the true topic boundaries are chapter breaks. We trained PLDA and TSM with 25 topics on the Fiction and 50 on the Clinical. Results are shown in Table 5 . TSM compares favourably with Bayesseg and outperforms the other methods on the Clinical dataset, but it does not perform as well as Bayesseg on the Fiction dataset.",
"cite_spans": [
{
"start": 61,
"end": 92,
"text": "(Eisenstein and Barzilay, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 105,
"end": 138,
"text": "(Kazantseva and Szpakowicz, 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 546,
"end": 553,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation on Written Texts",
"sec_num": "5.3"
},
{
"text": "In fiction books, the topic boundaries between sections are usually blurred by the authors for reasons of continuity (Reynar, 1999) . We observed that the sampled concentration (or inverse variance) parameter b in TSM is about 18.4 on Fiction, but 4.8 on Clinical, as shown in Table 6 . This means the variance of segment level topic distributions \u03bd learnt by TSM is not large for the fiction, so chapter breaks may not necessarily indicate topic changes. For example, there is a document in the Fiction dataset where gold-standard topic boundaries are placed after each block of text. In contrast, Bayesseg assumes each segment has its own distribution over words, i.e., one topic per segment, which means topics are not shared among segments. We hypothesize that for certain kinds of documents where the change in topic distribution is subtle, such as fiction, assuming one topic per segment can capture subtle changes in word usage. This is an area for future investigation.",
"cite_spans": [
{
"start": 117,
"end": 131,
"text": "(Reynar, 1999)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Evaluation on Written Texts",
"sec_num": "5.3"
},
{
"text": "In this paper, we have presented a hierarchical Bayesian model for unsupervised topic segmentation. This new model takes advances of both Bayesian segmentation and structured topic modelling. It uses a point-wise boundary sampling algorithm to sample a topic segmentation, while concurrently building a structured topic model. We have developed a novel approximation to compute the Gibbs probabilities of spliting/merging segment(s). Our model shows prominent segmentation performance on both written or spoken texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In future work, we would like to make the model fully nonparametric and investigate the effects of adding different cues in texts, such as cue phrases, pronoun usage, prosody, etc. Currently, our model uses marginal boundary probabilities to generate the final segmentation. Instead, we could develop a Metropolis-Hasting sampling algorithm to move one boundary at a time, given the gold-standard number of segments. To further study the effectiveness of our model, we would like to compare it with other methods, like SITS (Nguyen et al., 2012) and to run on more datasets, like email (Joty et al., 2010) . For example, in order to compare with SITS, one can make an assumption that each document just has one speaker.",
"cite_spans": [
{
"start": 519,
"end": 545,
"text": "SITS (Nguyen et al., 2012)",
"ref_id": null
},
{
"start": 586,
"end": 605,
"text": "(Joty et al., 2010)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The last 1 in \u03c1 is the document boundary that is know a priori. This means one does not need to sample it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For preprocessing, we only removed stop words.5 The segments learnt by those methods will differ, but just the segment count will be the same as the gold-standard count.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank all the anonymous reviewers for their valuable comments. This research was supported under Australian Research Council's Discovery Projects funding scheme (project numbers DP110102506 and DP110102593). NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Topic detection and tracking pilot study: Final report",
"authors": [
{
"first": "J",
"middle": [],
"last": "Allan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Doddington",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yamron",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "194--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Allan, J. Carbonell, G. Doddington, J. Yamron, and Y. Yang. 1998. Topic detection and tracking pi- lot study: Final report. In Proceedings of the DARPA Broadcast News Transcription and Under- standing Workshop, pages 194-218.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical models for text segmentation",
"authors": [
{
"first": "Doug",
"middle": [],
"last": "Beeferman",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1999,
"venue": "Mach. Learn",
"volume": "34",
"issue": "1-3",
"pages": "177--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Mach. Learn., 34(1-3):177-210.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Its the economy again",
"authors": [
{
"first": "A",
"middle": [
"E"
],
"last": "Boydstun",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Glazier",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.E. Boydstun, C. Phillips, and R.A. Glazier. 2011. Its the economy again, stupid: Agenda control in the 2008 presidential debates.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Bayesian review of the Poisson-Dirichlet process",
"authors": [
{
"first": "W",
"middle": [],
"last": "Buntine",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1007.0296v2"
]
},
"num": null,
"urls": [],
"raw_text": "W. Buntine and M. Hutter. 2012. A Bayesian review of the Poisson-Dirichlet process. Technical Report arXiv:1007.0296v2, ArXiv, Cornell.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sampling for the Poisson-Dirichlet process",
"authors": [
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": ""
}
],
"year": 2011,
"venue": "European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Database",
"volume": "",
"issue": "",
"pages": "296--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changyou Chen, Lan Du, and Wray Buntine. 2011. Sampling for the Poisson-Dirichlet process. In Euro- pean Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Database, pages 296-311.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Advances in domain independent linear text segmentation",
"authors": [
{
"first": "Freddy",
"middle": [
"Y",
"Y"
],
"last": "Choi",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freddy Y. Y. Choi. 2000. Advances in domain inde- pendent linear text segmentation. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, NAACL 2000, pages 26-33.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A segmented topic model based on the two-parameter Poisson-Dirichlet process",
"authors": [
{
"first": "Lan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": ""
},
{
"first": "Huidong",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2010,
"venue": "Mach. Learn",
"volume": "81",
"issue": "1",
"pages": "5--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lan Du, Wray Buntine, and Huidong Jin. 2010. A segmented topic model based on the two-parameter Poisson-Dirichlet process. Mach. Learn., 81(1):5-19.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Modelling sequential text with an adaptive topic model",
"authors": [
{
"first": "Lan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": ""
},
{
"first": "Huidong",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "535--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lan Du, Wray Buntine, and Huidong Jin. 2012a. Mod- elling sequential text with an adaptive topic model. In Proceedings of the 2012 Joint Conference on Em- pirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 535-545.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sequential latent Dirichlet allocation",
"authors": [
{
"first": "Lan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": ""
},
{
"first": "Huidong",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2012,
"venue": "Knowledge and Information Systems",
"volume": "31",
"issue": "3",
"pages": "475--503",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lan Du, Wray Buntine, Huidong Jin, and Changyou Chen. 2012b. Sequential latent Dirichlet allocation. Knowledge and Information Systems, 31(3):475-503.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bayesian unsupervised topic segmentation",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP'08",
"volume": "",
"issue": "",
"pages": "334--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP'08, pages 334-343.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hierarchical text segmentation from multi-scale lexical cohesion",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2009,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "353--361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein. 2009. Hierarchical text segmentation from multi-scale lexical cohesion. In Human Lan- guage Technologies: Conference of the North Amer- ican Chapter of the Association of Computational Lin- guistics, pages 353-361. The Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Discourse segmentation of multi-party conversation",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "562--569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Kathleen R. McKeown, Eric Fosler- Lussier, and Hongyan Jing. 2003. Discourse segmen- tation of multi-party conversation. In Proceedings of the 41st Annual Meeting of the Association for Com- putational Linguistics, pages 562-569.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Bayesian framework for word segmentation: Exploring the effects of context",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2009,
"venue": "Cognition",
"volume": "112",
"issue": "1",
"pages": "21--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2009. A Bayesian framework for word segmen- tation: Exploring the effects of context. Cognition, 112(1):21-53.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "TextTiling: segmenting text into multi-paragraph subtopic passages",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1997,
"venue": "Comput. Linguist",
"volume": "23",
"issue": "1",
"pages": "33--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1997. TextTiling: segmenting text into multi-paragraph subtopic passages. Comput. Lin- guist., 23(1):33-64.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A unified approach to generalized Stirling numbers",
"authors": [
{
"first": "Leetsch",
"middle": [
"C"
],
"last": "Hsu",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"Jau-Shyong"
],
"last": "Shiue",
"suffix": ""
}
],
"year": 1998,
"venue": "Adv. Appl. Math",
"volume": "20",
"issue": "",
"pages": "366--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leetsch C. Hsu and Peter Jau-Shyong Shiue. 1998. A unified approach to generalized Stirling numbers. Adv. Appl. Math., 20:366-384, April.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model",
"authors": [
{
"first": "Sonia",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Radford",
"middle": [],
"last": "Neal",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Computational and Graphical Statistics",
"volume": "13",
"issue": "1",
"pages": "158--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonia Jain and Radford Neal. 2004. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13(1):158-182.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The ICSI Meeting Corpus",
"authors": [
{
"first": "A",
"middle": [],
"last": "Janin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Baron",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Edwards",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gelbart",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Peskin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Pfau",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Wooters",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of 2003 IEEE International Conference on Acoustics, Speech, and Signal (ICASSP '03)",
"volume": "",
"issue": "",
"pages": "364--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Janin, D. Baron, J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke, and C. Wooters. 2003. The ICSI Meeting Corpus. In Proceedings of 2003 IEEE International Conference on Acoustics, Speech, and Signal (ICASSP '03), pages 364-367.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploiting conversation structure in unsupervised topic segmentation for emails",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "388--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, Gabriel Murray, and Raymond T. Ng. 2010. Exploiting conversation struc- ture in unsupervised topic segmentation for emails. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 388- 398.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Linear text segmentation using affinity propagation",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Kazantseva",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "284--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Kazantseva and Stan Szpakowicz. 2011. Linear text segmentation using affinity propagation. In Pro- ceedings of the 2011 Conference on Empirical Meth- ods in Natural Language Processing, pages 284-293.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On evaluation methodologies for text segmentation algorithms",
"authors": [
{
"first": "Sylvain",
"middle": [],
"last": "Lamprier",
"suffix": ""
},
{
"first": "Tassadit",
"middle": [],
"last": "Amghar",
"suffix": ""
},
{
"first": "Bernard",
"middle": [],
"last": "Levrat",
"suffix": ""
},
{
"first": "Frederic",
"middle": [],
"last": "Saubion",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 19th IEEE International Conference on Tools with Artificial Intelligence",
"volume": "02",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvain Lamprier, Tassadit Amghar, Bernard Levrat, and Frederic Saubion. 2007. On evaluation methodologies for text segmentation algorithms. In Proceedings of the 19th IEEE International Conference on Tools with Artificial Intelligence -Volume 02, ICTAI '07, pages 19-26.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Minimum cut model for spoken lecture segmentation",
"authors": [
{
"first": "Igor",
"middle": [],
"last": "Malioutov",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Malioutov and Regina Barzilay. 2006. Minimum cut model for spoken lecture segmentation. In Pro- ceedings of the 21st International Conference on Com- putational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL- 44, pages 25-32.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Using LDA to detect semantically incoherent documents",
"authors": [
{
"first": "Hemant",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Cappe",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of CoNLL-08",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hemant Misra, Olivier Cappe, and Francois Yvon. 2008. Using LDA to detect semantically incoherent docu- ments. In Proceedings of CoNLL-08, pages 41-48.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Text segmentation via topic modeling: an analytical study",
"authors": [
{
"first": "Hemant",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
},
{
"first": "Joemon",
"middle": [
"M"
],
"last": "Jose",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Cappe",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 18th ACM conference on Information and knowledge management, CIKM '09",
"volume": "",
"issue": "",
"pages": "1553--1556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hemant Misra, Fran\u00e7ois Yvon, Joemon M. Jose, and Olivier Cappe. 2009. Text segmentation via topic modeling: an analytical study. In Proceedings of the 18th ACM conference on Information and knowledge management, CIKM '09, pages 1553-1556.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SITS: A hierarchical nonparametric model using speaker identity for topic segmentation in multiparty conversations",
"authors": [
{
"first": "Viet-An",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "78--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viet-An Nguyen, Jordan Boyd-Graber, and Philip Resnik. 2012. SITS: A hierarchical nonparametric model using speaker identity for topic segmentation in multiparty conversations. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 78-87.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A critique and improvement of an evaluation metric for text segmentation",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Pevzner",
"suffix": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 2002,
"venue": "Comput. Linguist",
"volume": "28",
"issue": "1",
"pages": "19--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Pevzner and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmen- tation. Comput. Linguist., 28(1):19-36.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The two-parameter Poisson-Diriclet distribution derived from a stable subordinator",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pitman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yor",
"suffix": ""
}
],
"year": 1997,
"venue": "Annals Probability",
"volume": "25",
"issue": "",
"pages": "855--900",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pitman and M. Yor. 1997. The two-parameter Poisson- Diriclet distribution derived from a stable subordina- tor. Annals Probability, 25:855-900.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Unsupervised topic modelling for multi-party spoken discourse",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Konrad",
"middle": [
"P"
],
"last": "K\u00f6rding",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Purver, Thomas L. Griffiths, Konrad P. K\u00f6rding, and Joshua B. Tenenbaum. 2006. Unsupervised topic modelling for multi-party spoken discourse. In Pro- ceedings of the 21st International Conference on Com- putational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL- 44, pages 17-24.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Statistical models for topic segmentation",
"authors": [
{
"first": "Jeffrey",
"middle": [
"C"
],
"last": "Reynar",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "357--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey C. Reynar. 1999. Statistical models for topic seg- mentation. In Proceedings of the 37th Annual Meet- ing of the Association for Computational Linguistics, pages 357-364.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "How text segmentation algorithms gain from topic models",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Riedl",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Riedl and Chris Biemann. 2012. How text seg- mentation algorithms gain from topic models. In Pro- ceedings of the 2012 Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics: Human Language Technologies.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Automatic text decomposition using text segments and text themes",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the the seventh ACM conference on Hypertext",
"volume": "",
"issue": "",
"pages": "53--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Salton, Amit Singhal, Chris Buckley, and Mandar Mitra. 1996. Automatic text decomposition using text segments and text themes. In Proceedings of the the seventh ACM conference on Hypertext, pages 53-65.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Text segmentation with LDA-based Fisher kernel",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Runxin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dingsheng",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Xihong",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT, Short Papers",
"volume": "",
"issue": "",
"pages": "269--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Sun, Runxin Li, Dingsheng Luo, and Xihong Wu. 2008. Text segmentation with LDA-based Fisher ker- nel. In Proceedings of ACL-08: HLT, Short Papers, pages 269-272.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Hierarchical Dirichlet processes",
"authors": [
{
"first": "Y",
"middle": [
"W"
],
"last": "Teh",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Statistical Association",
"volume": "101",
"issue": "476",
"pages": "1566--1581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the Amer- ican Statistical Association, 101(476):1566-1581.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A Bayesian interpretation of interpolated Kneser-Ney",
"authors": [
{
"first": "Y",
"middle": [
"W"
],
"last": "Teh",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. W. Teh. 2006. A Bayesian interpretation of interpo- lated Kneser-Ney. Technical Report TRA2/06, School of Computing, National University of Singapore.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A statistical model for domain-independent text segmentation",
"authors": [
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "499--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masao Utiyama and Hitoshi Isahara. 2001. A statistical model for domain-independent text segmentation. In Proceedings of 39th Annual Meeting of the Associa- tion for Computational Linguistics, pages 499-506.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Structured topic models for language. doctoral dissertation",
"authors": [
{
"first": "H",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H.M. Wallach. 2008. Structured topic models for lan- guage. doctoral dissertation, Univ. of Cambridge.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Structural topic model for latent topical structure analysis",
"authors": [
{
"first": "Hongning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Duo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1526--1535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongning Wang, Duo Zhang, and ChengXiang Zhai. 2011. Structural topic model for latent topical struc- ture analysis. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies, pages 1526-1535.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Probability of a topic boundary, compared with gold-standard segmentation (shown in red and at the top of each diagram) on one ICSI transcript.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>T d,s</td><td>total table count in segment s.</td></tr><tr><td>c d,1</td><td>total number of topic boundaries in d.</td></tr><tr><td>c d,0</td><td>total number of non-topic boundaries in d.</td></tr><tr><td colspan=\"2\">Dirichlet prior \u03b1 on \u00b5 with a Pitman-Yor prior (Pit-</td></tr><tr><td colspan=\"2\">man and Yor, 1997) to make the model fully non-</td></tr><tr><td colspan=\"2\">parametric, like SITS.</td></tr></table>",
"type_str": "table",
"text": "List of statistics M k,w total number of words with topic k. M k a vector of M k,w . n d,s,k total number of words with topic k in segment s in document d. N d,s total number of words in segment s. t d,s,k table count of topic k in the CRP for segment s in document d. t d,s a vector of t d,s,k for segment s in d.",
"html": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Both s l and s r have n d,s,k >0 and t d,s,k \u22651, which means both segments have words assigned to k and words being labelled with table head. According to constraints (2), after splitting, restaurants corresponding to s l and s r are valid. We do not make any change on table counts. (II) Either s l or s r has n d,s,k =0 and t d,s,k =0. In this case, for example, all the words assigned to k in s m are in s l after splitting, and all those labelled with table head should also be in s l . s r has no words assigned to k. Thus, there is no need to change table counts. (III) Either s l or s r has n d,s,k >0 and t d,s,k =0. Both segments have words assigned to k, but those labelled with table head only exist in one segment. For instance, if they only exist in s l then s r has no table head, which means the restaurant of s r has customers eating a dish, but no tables serving that dish. Thus, we set t d,sr,k =1 to make the constraints (2) satisfied.",
"html": null
},
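Under constraints (2), the split-time bookkeeping above collapses to a single repair rule: any topic with customers but no table on a given side gets exactly one table. A minimal Python sketch under that reading; the dict-based counts and the function name are illustrative, not taken from the paper's code:

```python
def repair_split_table_counts(n_left, n_right, t_left, t_right):
    """Make both child restaurants valid after splitting a segment.

    n_*: word (customer) counts per topic k; t_*: table counts per topic k.
    Constraint (2): a topic with customers must have at least one table.
    """
    for n, t in ((n_left, t_left), (n_right, t_right)):
        for k, words in n.items():
            if words > 0 and t.get(k, 0) == 0:
                t[k] = 1  # case (III): customers but no table head on this side
            # Cases (I) and (II) need no change: tables already exist,
            # or this side has no customers for topic k at all.
    return t_left, t_right
```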
"TABREF2": {
"num": null,
"content": "<table><tr><td>(III) Both s l and s r have n d,s,k &gt;0, and either of them</td></tr><tr><td>has t d,s,k =1 or both. We have to choose between</td></tr><tr><td>Eq (7) and Eq (8), i.e., to decide whether a table</td></tr><tr><td>should be removed or not.</td></tr></table>",
"type_str": "table",
"text": "The following cases are considered: for a topic k (I) Both s l and s r have n d,s,k >0 and t d,s,k >1. We compute t d,sm,k using Eq (7). Thus table counts before and after merging are equal. (II) Either s l or s r has n d,s,k =0 and t d,s,k =0. Similar to the above case, we use Eq (7).",
"html": null
},
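Read as control flow, the merge-time cases are a dispatch on the two child segments' counts for each topic k. A hedged sketch only; eq7, eq8, and choose_remove are hypothetical callbacks standing in for Eq (7), Eq (8), and the remove-a-table decision, none of which are reproduced here:

```python
def merged_table_count(n_l, n_r, t_l, t_r, eq7, eq8, choose_remove):
    """Table count of topic k in the merged segment s_m."""
    if n_l > 0 and n_r > 0 and t_l > 1 and t_r > 1:
        return eq7(t_l, t_r)  # case (I): counts preserved via Eq (7)
    if (n_l == 0 and t_l == 0) or (n_r == 0 and t_r == 0):
        return eq7(t_l, t_r)  # case (II): one side is empty for k
    # Case (III): at least one side has a single table; decide whether
    # merging the two restaurants should drop a table (Eq (8)) or not.
    return eq8(t_l, t_r) if choose_remove() else eq7(t_l, t_r)
```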
"TABREF3": {
"num": null,
"content": "<table><tr><td colspan=\"2\">Range of n</td><td>3-11</td><td>3-5</td><td>6-8</td><td>9-11</td></tr><tr><td>#docs</td><td/><td>400</td><td>100</td><td>100</td><td>100</td></tr><tr><td>DocLen</td><td>mean std</td><td>69.7 8.2</td><td>39.3 2.6</td><td>69.6 2.9</td><td>98.6 3.5</td></tr><tr><td>SegLen</td><td>mean std</td><td>7 2.57</td><td>4 0.84</td><td>7 0.87</td><td>10 1.03</td></tr></table>",
"type_str": "table",
"text": "The Choi's dataset",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table><tr><td/><td/><td>ICSI</td><td>Election</td><td>Fiction</td><td>Clinical</td></tr><tr><td># doc</td><td/><td>25</td><td>4</td><td>84</td><td>227</td></tr><tr><td>DocLen</td><td>mean std</td><td>994.5 354.5</td><td>144.3 16.4</td><td>325.0 230.1</td><td>139.5 110.4</td></tr><tr><td>SegLen</td><td>mean std</td><td>188 219.1</td><td>7 8.9</td><td>22 23.8</td><td>35 41.7</td></tr></table>",
"type_str": "table",
"text": "Real dataset statistics",
"html": null
},
"TABREF5": {
"num": null,
"content": "<table><tr><td/><td/><td/><td colspan=\"7\">: Comparison on Choi's datasets with WD and PK (%)</td><td/><td/><td/></tr><tr><td/><td/><td>3-11</td><td/><td/><td>3-5</td><td/><td/><td>6-8</td><td/><td/><td>9-11</td><td/></tr><tr><td colspan=\"2\">WD r Random 51.7</td><td colspan=\"3\">49.1 48.7 51.4</td><td colspan=\"3\">50.0 48.4 52.5</td><td colspan=\"3\">49.9 49.2 52.4</td><td colspan=\"2\">48.9 49.2</td></tr><tr><td>Even</td><td>49.1</td><td colspan=\"3\">46.7 49.0 46.3</td><td colspan=\"3\">45.8 46.3 38.8</td><td colspan=\"3\">37.3 38.8 30.0</td><td colspan=\"2\">28.6 30.0</td></tr><tr><td>MinCut</td><td>30.4</td><td colspan=\"3\">29.8 26.7 41.6</td><td colspan=\"3\">41.5 37.3 28.2</td><td colspan=\"3\">27.4 25.5 23.6</td><td colspan=\"2\">22.7 21.6</td></tr><tr><td>APS</td><td>40.7</td><td colspan=\"3\">38.8 38.4 32.0</td><td colspan=\"3\">30.6 31.8 34.4</td><td colspan=\"3\">32.6 32.7 34.5</td><td colspan=\"2\">32.2 33.2</td></tr><tr><td>C99</td><td>13.5</td><td colspan=\"3\">12.3 12.3 11.3</td><td colspan=\"3\">10.2 10.8 10.2</td><td>9.3</td><td>9.8</td><td>8.9</td><td>8.1</td><td>8.6</td></tr><tr><td colspan=\"2\">Bayesseg 11.6</td><td colspan=\"3\">10.9 10.9 11.8</td><td colspan=\"2\">11.5 11.1</td><td>7.7</td><td>7.2</td><td>7.3</td><td>6.1</td><td>5.7</td><td>5.7</td></tr><tr><td>PLDA</td><td>2.4</td><td>2.2</td><td>1.8</td><td>4.0</td><td>3.9</td><td>3.3</td><td>3.6</td><td>3.5</td><td>2.7</td><td>3.0</td><td>2.8</td><td>2.0</td></tr><tr><td>TSM</td><td>0.8</td><td>0.8</td><td>0.6</td><td>1.3</td><td>1.3</td><td>1.0</td><td>1.4</td><td>1.4</td><td>0.9</td><td>1.9</td><td>1.8</td><td>1.2</td></tr><tr><td/><td colspan=\"11\">Table 5: Comparison on the meeting transcripts and written texts with WD and PK (%)</td><td/></tr><tr><td/><td/><td>ICSI</td><td/><td/><td>Election</td><td/><td/><td>Fiction</td><td/><td/><td>Clinical</td><td/></tr><tr><td/><td colspan=\"12\">WD r e PK</td></tr><tr><td>Random</td><td>46.3</td><td colspan=\"3\">41.7 44.1 51.0</td><td colspan=\"2\">49.7 45.1</td><td>51.0</td><td colspan=\"3\">48.7 47.5 45.9</td><td colspan=\"2\">38.5 44.1</td></tr><tr><td>Even</td><td>48.3</td><td colspan=\"3\">43.0 46.4 56.0</td><td colspan=\"2\">55.1 51.2</td><td>48.1</td><td colspan=\"3\">45.9 46.3 49.2</td><td colspan=\"2\">42.0 48.8</td></tr><tr><td>C99</td><td>42.9</td><td colspan=\"3\">37.4 39.9 43.1</td><td colspan=\"2\">41.5 37.0</td><td>48.1</td><td colspan=\"3\">45.1 42.1 39.7</td><td colspan=\"2\">31.9 38.7</td></tr><tr><td>MinCut</td><td>40.6</td><td colspan=\"3\">36.9 36.9 43.6</td><td colspan=\"2\">43.3 39.0</td><td>40.5</td><td colspan=\"3\">39.7 37.1 38.2</td><td colspan=\"2\">36.2 36.8</td></tr><tr><td>APS</td><td>58.2</td><td colspan=\"3\">49.7 54.6 47.7</td><td colspan=\"2\">36.8 40.6</td><td>48.0</td><td colspan=\"3\">45.8 45.1 39.9</td><td colspan=\"2\">32.8 39.6</td></tr><tr><td colspan=\"2\">Bayesseg 32.4</td><td colspan=\"3\">29.7 26.7 41.1</td><td colspan=\"2\">41.3 34.1</td><td>33.7</td><td colspan=\"3\">32.8 27.8 35.0</td><td colspan=\"2\">28.8 34.0</td></tr><tr><td>PLDA</td><td>32.6</td><td colspan=\"3\">28.8 29.4 40.6</td><td colspan=\"2\">41.1 32.0</td><td>43.0</td><td colspan=\"3\">41.3 36.1 37.3</td><td colspan=\"2\">32.1 32.4</td></tr><tr><td>TSM</td><td>30.2</td><td colspan=\"3\">26.8 25.8 38.1</td><td colspan=\"2\">38.9 31.3</td><td>40.8</td><td colspan=\"3\">38.7 32.5 34.5</td><td colspan=\"2\">29.1 30.6</td></tr></table>",
"type_str": "table",
"text": "WD e PK WD r WD e PK WD r WD e PK WD r WD e PK WD e PK WD r WD e PK WD r WD e PK WD r WD",
"html": null
},
"TABREF6": {
"num": null,
"content": "<table><tr><td colspan=\"5\">Choi ICSI Election Fiction Clinical</td></tr><tr><td>b 0.1</td><td>5.2</td><td>5.4</td><td>18.4</td><td>4.8</td></tr></table>",
"type_str": "table",
"text": "Sampled concentration parameters",
"html": null
}
}
}
}