{ "paper_id": "J02-4003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:44:35.200651Z" }, "title": "Automatic Summarization of Open-Domain Multiparty Dialogues in Diverse Genres", "authors": [ { "first": "Klaus", "middle": [], "last": "Zechner", "suffix": "", "affiliation": {}, "email": "kzechner@ets.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automatic summarization of open-domain spoken dialogues is a relatively new research area. This article introduces the task and the challenges involved and motivates and presents an approach for obtaining automatic-extract summaries for human transcripts of multiparty dialogues of four different genres, without any restriction on domain. We address the following issues, which are intrinsic to spoken-dialogue summarization and typically can be ignored when summarizing written text such as news wire data: (1) detection and removal of speech disfluencies; (2) detection and insertion of sentence boundaries; and (3) detection and linking of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus of 23 dialogue excerpts with an average duration of about 10 minutes, comprising 80 topical segments and about 47,000 words total. The corpus was manually annotated for relevant text spans by six human annotators. The global evaluation shows that for the two more informal genres, our summarization system using dialoguespecific components significantly outperforms two baselines: (1) a maximum-marginal-relevance ranking algorithm using TF*IDF term weighting, and (2) a LEAD baseline that extracts the first n words from a text.", "pdf_parse": { "paper_id": "J02-4003", "_pdf_hash": "", "abstract": [ { "text": "Automatic summarization of open-domain spoken dialogues is a relatively new research area. This article introduces the task and the challenges involved and motivates and presents an approach for obtaining automatic-extract summaries for human transcripts of multiparty dialogues of four different genres, without any restriction on domain. We address the following issues, which are intrinsic to spoken-dialogue summarization and typically can be ignored when summarizing written text such as news wire data: (1) detection and removal of speech disfluencies; (2) detection and insertion of sentence boundaries; and (3) detection and linking of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus of 23 dialogue excerpts with an average duration of about 10 minutes, comprising 80 topical segments and about 47,000 words total. The corpus was manually annotated for relevant text spans by six human annotators. The global evaluation shows that for the two more informal genres, our summarization system using dialoguespecific components significantly outperforms two baselines: (1) a maximum-marginal-relevance ranking algorithm using TF*IDF term weighting, and (2) a LEAD baseline that extracts the first n words from a text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Although the field of summarizing written texts has been explored for many decades, gaining significantly increased attention in the last five to ten years, summarization of spoken language is a comparatively recent research area. 
As the number of spoken audio databases is growing rapidly, however, we predict that the need for high-quality summarization of information contained in this medium will increase substantially. Summarization of spoken dialogues, in particular, may aid in the archiving, indexing, and retrieval of various records of oral communication, such as corporate meetings, sales interactions, or customer support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The purpose of this article is to explore the issues of spoken-dialogue summarization and to describe and evaluate an implementation addressing some of the core challenges intrinsic to the task. We will use an implementation of a state-of-the-art text summarization method (maximum marginal relevance, or MMR) as the main baseline for comparative evaluations, and then add a set of components addressing issues specific to spoken dialogues to this MMR module to create our spoken dialogue summarization system, which we call DIASUMM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We consider the following dimensions to be relevant for our research; the combination of these dimensions distinguishes our work from most other work in the field of summarization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 spoken versus written language", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 multiparty dialogues versus texts written by one author", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 unrestricted versus restricted domains", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 diverse genres versus a single genre The main challenges this work has to address, in addition to the challenges of writtentext summarization, are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 coping with speech disfluencies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 identifying the units for extraction", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 maintaining cross-speaker coherence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 coping with speech recognition errors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We will discuss these challenges in more detail in the following section. Although we have addressed the issue of speech recognition errors in previous related work (Zechner and Waibel 2000b) , for the purpose of this article, we exclusively use human transcripts of spoken dialogues.", "cite_spans": [ { "start": 165, "end": 191, "text": "(Zechner and Waibel 2000b)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Intrinsic evaluations of text summaries usually use sentences as their basic units. For our data, however, sentence boundaries are typically not available in the first place. 
Thus we devise a word-based evaluation metric derived from an average relevance score from human relevance annotations (section 6.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The organization of this article is as follows: Section 2 provides the motivation for our research, introducing and discussing the main challenges of spoken-dialogue summarization, followed by a section on related work (section 3). Section 4 describes the corpus we use to develop and evaluate our system, along with the procedures employed for corpus annotation. The system architecture and its components are described in detail in section 5, along with evaluations thereof. Section 6 presents the global evaluation of our approach, before we conclude the article with a discussion of our results, contributions, and directions for future research in this field (sections 7 and 8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Consider the following example from a phone conversation drawn from the English CALLHOME database (LDC 1996) . It is a transcript of a conversation between two native speakers of American English; one person is in the New York area (speaker a), the other one (speaker b) in Israel. It was recorded about a month after Yitzhak Rabin's assassination (1995) . This dialogue segment is about one minute of real time. The audio is segmented into speaker turns using silence heuristics, 1 and each turn is marked with a turn number and with the speaker label. Noises are removed to increase readability. 2 a: oh b: they didn't know he was going to get shot but it was at a peace rally so i mean it just worked out b: i mean it was a good place for the poor guy to die i mean because it was you know right after the rally and everything was on film and everything a: yeah b: oh the whole country we just finished the thirty days mourning for him now you know it's uh oh everybody's still in shock it's a: oh a: i know b: terrible what's going on over here b: and this guy that killed him they show him on t v smiling he's all happy he did it and everything he isn't even sorry or anything a: there are i b: him him he and his brother you know the two of them were in it together and there's a whole group now it's like a a conspiracy oh it's eh a: mm a: with the kahane chai b: unbelievable b: yeah yeah it's all those people yeah you probably see them running around new york don't you they're all a: yeah a: oh yeah they're here b: new york based yeah a: oh there's a: all those fanatics a: like the extreme b: oh but b: but wh-what's the reaction in america really i mean i mean do people care you know i mean you know do they a: yeah mo-most pe-i mean uh a: i don't know what commu-i mean like the jewish community a: a lot e-all of us were a: very upset and there were lots all the b: yeah a: like two days after did it happen like on a sunday b: yeah it hap-it happened on it happened on a saturday night By looking at this transcript we can readily identify some of the phenomena that would cause difficulties for conventional summarizers of written texts:", "cite_spans": [ { "start": 98, "end": 108, "text": "(LDC 1996)", "ref_id": null }, { "start": 348, "end": 354, "text": "(1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2." 
}, { "text": "\u2022 Some turns (e.g., turn 51) contain many disfluencies that (1) make them hard to read and (2) reduce the relevance of the information contained therein.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2." }, { "text": "\u2022 Some (important) pieces of information are distributed over a sequence of turns (e.g., turns 53-54-55, 45-47-48-49) ; this is due to a silence-based segmentation algorithm that causes breaks in logically connected clauses. A traditional summarizer might render these sequences incompletely.", "cite_spans": [ { "start": 95, "end": 117, "text": "53-54-55, 45-47-48-49)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2." }, { "text": "\u2022 Some turns are quite long (e.g., 36, 39) and contain several sentences; a within-turn segmentation seems necessary to avoid the extraction of too much extraneous information when only parts of a turn contain relevant information.", "cite_spans": [ { "start": 28, "end": 42, "text": "(e.g., 36, 39)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2." }, { "text": "\u2022 Some of the information is constructed interactively by both speakers; the prototypical cases are question-answer pairs (e.g., turns 51-52ff., turns 57-58). A traditional text summarizer might miss either question or answer and hence produce a less meaningful summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2." }, { "text": "We shall discuss these arising issues along with an indication of our computational remedies in the following subsections. We want to stress beforehand, though, that the originality of our system should not be seen in the particular implementation of its individual components, but rather in their selection and specific composition to address the issues at hand in an effective and also efficient way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2." }, { "text": "The two main negative effects speech disfluencies have on summarization are that they (1) decrease the readability of the summary and (2) increase its noncontent noise. In particular for informal conversations, the percentage of disfluent words is quite high, typically around 20% of the total words spoken. 3 This means that this issue should, in our opinion, be addressed to improve the quality (readability and conciseness) of the generated summaries. In section 5.3 we shall present three components for identifying most of the major classes of speech disfluencies in the input of the summarization system, such as filled pauses, repetitions, and false starts. 
All detected disfluencies are marked in this process and can be selectively excluded during summary generation.", "cite_spans": [ { "start": 308, "end": 309, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Disfluency Detection", "sec_num": "2.1" }, { "text": "Unlike written texts, in which punctuation markers clearly indicate clause and sentence boundaries, spoken language is generated as a sequence of streams of words, in which pauses (silences between words) do not always match linguistically meaningful segments: A speaker can pause in the middle of a sentence or even a phrase, or, on the other hand, might not pause at all after the end of a sentence or clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Boundary Detection", "sec_num": "2.2" }, { "text": "This mismatch between acoustic and linguistic segmentation is reflected in the output of a speech recognizer, which typically generates a sequence of speaker turns whose boundaries are marked by periods of silence (or nonspeech). As a result, one speaker's turn may contain multiple sentences, or, on the other hand, a speaker's sentence might span more than one turn. In a test corpus of five English CALLHOME dialogues with an average length of 320 turns, we found on average of about 30 such continuations of logical clauses over automatically determined acoustic segments per dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Boundary Detection", "sec_num": "2.2" }, { "text": "The main problems for a summarizer would thus be (1) the lack of coherence and readability of the output because of incomplete sentences and (2) extraneous information due to extracted units consisting of more than one sentence. In section 5.4 we describe a component for sentence segmentation that addresses both of these problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Boundary Detection", "sec_num": "2.2" }, { "text": "Since we have multiparty conversations as opposed to monologues, sometimes the crucial information is found in a sequence of turns from several speakers, the prototypical case of this being a question-answer pair. If the summarizer were to extract only the question or only the answer, the lack of the corresponding answer or question would often cause a severe reduction of coherence in the summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed Information", "sec_num": "2.3" }, { "text": "In some cases, either the question or the answer is very short and does not contain any words with high relevance that would yield a substantial weight in the summarizer. In order not to lose these short sentences at a later stage, when only the most relevant sentences are extracted, we need to identify matching question-answer pairs ahead of time, so that the summarizer can output the matching sentences during summary generation as one unit. We describe our approach to cross-speaker information linking in section 5.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed Information", "sec_num": "2.3" }, { "text": "We see the work reported in this article as the first in-depth analysis and evaluation in the area of open-domain spoken-dialogue summarization. 
Given the large scope of this undertaking, we had to restrict ourselves to those issues that are, in our opinion, the most salient for the task at hand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Issues", "sec_num": "2.4" }, { "text": "A number of other important issues for summarization in general and for speech summarization in particular are either simplified or not addressed in this article and left for future work in this field. In the following, we briefly mention some of these issues, indicating their potential relevance and promise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Issues", "sec_num": "2.4" }, { "text": "2.4.1 Topic Segmentation. In many cases, spoken dialogues are multitopical. For the English CALLHOME corpus, we determined an average topic length of about one to two minutes' speaking time (or about 200-400 words). Summarization can be accomplished faster and more concisely if it operates on smaller topical segments rather than on long pieces of input consisting of diverse topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Issues", "sec_num": "2.4" }, { "text": "Although we have implemented a topic segmentation component as part of our system for these reasons, all of the evaluations are based on the topical segments determined by human annotators. Therefore, this component will not be discussed in this article. Furthermore, topical segmentation is not an issue intrinsic to spoken dialogues, which in our opinion justifies this simplification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Issues", "sec_num": "2.4" }, { "text": "Resolution. An analogous reasoning holds for the issue of anaphora resolution: Although it would certainly be desirable, for the sake of increased coherence and readability, to employ a well-working anaphora resolution component, this issue is not specific to the task at hand, either. One could argue that particularly for summarization of more informal conversations, in which personal pronouns are rather frequent, anaphora resolution might be more helpful than for, say, summarization of written texts. But we conjecture that this task might also prove more challenging than written-text anaphora resolution. In our system, we did not implement a module for anaphora resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora", "sec_num": "2.4.2" }, { "text": "Previous work indicates that information about discourse structure from written texts can help in identifying the more salient and relevant sentences or clauses for summary generation (Marcu 1999; Miike et al. 1994) . Much less exploration has been done, however, in the area of automatic analysis of discourse structure for non-task-oriented spoken dialogues in unrestricted domains, such as CALLHOME (LDC 1996) . Research for those kinds of corpora reported in Jurafsky et al. (1998) , Stolcke et al. (2000) , Levin et al. (1999) , and Ries et al. (2000) focuses more on detecting localized phenomena such as speech acts, dialogue games, or functional activities. 
We conjecture that there are two reasons for this: (1) free-flowing spontaneous conversations have much less structure than task-oriented dialogues, and (2) the automatic detection of hierarchical structure would be much harder than it is for written texts or dialogues based on a premeditated plan.", "cite_spans": [ { "start": 184, "end": 196, "text": "(Marcu 1999;", "ref_id": "BIBREF34" }, { "start": 197, "end": 215, "text": "Miike et al. 1994)", "ref_id": "BIBREF36" }, { "start": 402, "end": 412, "text": "(LDC 1996)", "ref_id": null }, { "start": 463, "end": 485, "text": "Jurafsky et al. (1998)", "ref_id": "BIBREF48" }, { "start": 488, "end": 509, "text": "Stolcke et al. (2000)", "ref_id": "BIBREF51" }, { "start": 512, "end": 531, "text": "Levin et al. (1999)", "ref_id": "BIBREF28" }, { "start": 538, "end": 556, "text": "Ries et al. (2000)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Structure.", "sec_num": "2.4.3" }, { "text": "Although we believe that in the long run attempts to automatically identify the discourse structure of spoken dialogues may benefit summarization, in this article, we greatly simplify this matter and exclusively look at local contexts in which speakers interactively construct shared information (the question-answer pairs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Structure.", "sec_num": "2.4.3" }, { "text": ". Throughout this article, our simplifying assumption is that our input comes from a perfect speech recognizer; that is, we use human textual transcripts of the dialogues in our corpus. Although there are cases in which this assumption is justifiable, such as transcripts provided by news services in parallel to the recorded audio data, we believe that in general a spoken dialogue summarizer has to be able to accept corrupted input from an automatic speech recognizer (ASR), as well. Our system is indeed able to work with ASR output; it is integrated in a larger system (Meeting Browser) that creates, summarizes, and archives meeting records and is connected to a speech recognition engine (Bett et al. 2000) . Further, we have shown in previous work how we can use ASR confidence scores (1) to reduce the word error rate within the summary and (2) to increase the summary accuracy (Zechner and Waibel 2000b ).", "cite_spans": [ { "start": 695, "end": 713, "text": "(Bett et al. 2000)", "ref_id": "BIBREF5" }, { "start": 887, "end": 912, "text": "(Zechner and Waibel 2000b", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Recognition Errors", "sec_num": "2.4.4" }, { "text": "A further simplifying assumption of this work is that prosodic information is not available, with the exception of start and end times of speaker turns. Considering the results reported by Shriberg et al. (1998) and Shriberg et al. (2000) , we conjecture that future work in this field will demonstrate the additional benefit of incorporating prosodic information, such as stress, pitch, and intraturn pauses, into the summarization system. In particular, we would expect improved system performance when speech recognition hypotheses are used as input: In that case, the prosodic information could compensate to some extent for incorrect word information.", "cite_spans": [ { "start": 189, "end": 211, "text": "Shriberg et al. (1998)", "ref_id": "BIBREF48" }, { "start": 216, "end": 238, "text": "Shriberg et al. 
(2000)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Prosodic Information.", "sec_num": "2.4.5" }, { "text": "The vast majority of summarization research in the past clearly has focused exclusively on written text. A good selection of both early seminal papers and more recent work can be found in Mani and Maybury (1999) . In general, most summarization approaches can be classified as either corpus-based, statistical summarization (such as Kupiec, Pedersen, and Chen [1995] ), or knowledge-based summarization (such as Reimer and Hahn [1988] ) in which the text domain is restricted. (The MMR method [Carbonell, Geng, and Goldstein 1997] , which we are using as the summarization engine for our DIASUMM system, belongs to the first category.) More recently, Marcu (1999) presented work on using automatically detected discourse structure for summarization. Knight and Marcu (2000) and Berger and Mittal (2000) presented approaches in which summarization can be reformulated as a problem of machine translation: translating a long sentence into a shorter sentence, or translating a Web page into a brief gist, respectively.", "cite_spans": [ { "start": 188, "end": 211, "text": "Mani and Maybury (1999)", "ref_id": "BIBREF33" }, { "start": 333, "end": 366, "text": "Kupiec, Pedersen, and Chen [1995]", "ref_id": "BIBREF26" }, { "start": 412, "end": 434, "text": "Reimer and Hahn [1988]", "ref_id": "BIBREF41" }, { "start": 493, "end": 530, "text": "[Carbonell, Geng, and Goldstein 1997]", "ref_id": "BIBREF7" }, { "start": 651, "end": 663, "text": "Marcu (1999)", "ref_id": "BIBREF34" }, { "start": 750, "end": 773, "text": "Knight and Marcu (2000)", "ref_id": "BIBREF23" }, { "start": 778, "end": 802, "text": "Berger and Mittal (2000)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3." }, { "text": "Two main areas are exceptions to the focus on text summarization in past work: (1) summarization of task-oriented dialogues in restricted domains and (2) summarization of spoken news in unrestricted domains. We shall discuss both of these areas in the following subsections, followed by a discussion of prosody-based emphasis detection in spoken language, and finally by a summary of research most closely related to the topic of this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3." }, { "text": "During the past decade, there has been significant progress in the area of closeddomain spoken-dialogue translation and understanding, even with automatic speech recognition input. Two examples of systems developed in that time frame are JANUS (Lavie et al. 1997) and VERBMOBIL (Wahlster 1993) .", "cite_spans": [ { "start": 244, "end": 263, "text": "(Lavie et al. 1997)", "ref_id": "BIBREF27" }, { "start": 278, "end": 293, "text": "(Wahlster 1993)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Summarization of Dialogues in Restricted Domains", "sec_num": "3.1" }, { "text": "In that context, several spoken-dialogue summarization systems have been developed whose goal it is to capture the essence of the task-based dialogues at hand. The MIMI system (Kameyama and Arima 1994; Kameyama, Kawai, and Arima 1996) deals with the travel reservation domain and uses a cascade of finite-state pattern recognizers to find the desired information. Within VERBMOBIL, a more knowledge-rich approach is used (Alexandersson and Poller 1998; Reithinger et al. 2000) . 
The domain here is travel planning and negotiation of a trip. In addition to finite-state transducers for content extraction and statistical dialogue act recognition, VERBMOBIL also uses a dialogue processor and a summary generator that have access to a world knowledge database, a domain model, and a semantic database. The abstract representations built by this summarizer allow for summary generation in multiple languages.", "cite_spans": [ { "start": 176, "end": 201, "text": "(Kameyama and Arima 1994;", "ref_id": "BIBREF22" }, { "start": 202, "end": 234, "text": "Kameyama, Kawai, and Arima 1996)", "ref_id": null }, { "start": 421, "end": 452, "text": "(Alexandersson and Poller 1998;", "ref_id": "BIBREF0" }, { "start": 453, "end": 476, "text": "Reithinger et al. 2000)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Summarization of Dialogues in Restricted Domains", "sec_num": "3.1" }, { "text": "Within the context of the Text Retrieval Conference (TREC) spoken document retrieval (SDR) conferences (Garofolo et al. 1997; Garofolo et al. 1999) as well as the recent Defense Advanced Research Project Agency (DARPA) broadcast news workshops, a number of research groups have been developing multimedia browsing tools for text, audio, and video data, which should facilitate the access to news data, combining different modalities. Hirschberg et al. (1999) and Whittaker et al. (1999) present a system that supports local navigation for browsing and information extraction from acoustic databases, using speech recognizer transcripts in tandem with the original audio recording. Although their interface helps users in the tasks of relevance ranking and fact finding, it is less helpful in the creating of summaries, partly because of imperfect speech recognition. Valenza et al. (1999) present an audio summarization system that combines acoustic confidence scores with relevance scores to obtain more accurate and reliable summaries. An evaluation showed that human judges preferred summaries with a compression rate of about 15% (30 words per minute at a speaking rate of about 200 words per minute) and that the summary word error rate was significantly smaller than the word error rate for the full transcript. Hori and Furui (2000) use salience features in combination with a language model to reduce Japanese broadcast news captions by about 30-40% while keeping the meaning of about 72% of all sentences in the test set. Another speech-related reduction approach was presented recently by Koumpis and Renals (2000) , who summarize voice mail in the Small Message format.", "cite_spans": [ { "start": 103, "end": 125, "text": "(Garofolo et al. 1997;", "ref_id": "BIBREF13" }, { "start": 126, "end": 147, "text": "Garofolo et al. 1999)", "ref_id": "BIBREF13" }, { "start": 434, "end": 458, "text": "Hirschberg et al. (1999)", "ref_id": "BIBREF20" }, { "start": 463, "end": 486, "text": "Whittaker et al. (1999)", "ref_id": "BIBREF58" }, { "start": 867, "end": 888, "text": "Valenza et al. 
(1999)", "ref_id": "BIBREF54" }, { "start": 1318, "end": 1339, "text": "Hori and Furui (2000)", "ref_id": "BIBREF21" }, { "start": 1599, "end": 1624, "text": "Koumpis and Renals (2000)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Summarization of Spoken News", "sec_num": "3.2" }, { "text": "Whereas most approaches to summarizing acoustic data rely on the word information (provided by a human or ASR transcript), there have been attempts to generate summaries based on emphasized regions in a discourse, using only prosodic features. Chen and Withgott (1992) train a hidden Markov model on transcripts of spontaneous speech, labeled for different degrees of emphasis by a panel of listeners. Their \"audio summaries\" on an unseen (but rather small) test set achieve a remarkably good agreement with human annotators (\u03ba > 0.5). Stifelman (1995) uses a pitch-based emphasis detection algorithm developed by Arons (1994) to find emphasized passages in a 13-minute discourse. In her analysis, she finds good agreement between these emphasized regions and the beginnings of manually marked discourse segments (in the framework of Grosz and Sidner [1986] ). Although these are promising results, being suggestive of the role of prosody for determining emphasis, relevance, or salience in spoken discourse, in this work we restrict the use of prosody to the turn length and interturn pause features. We conjecture, however, that the integration of prosodic and word level information would be a fruitful research area that would have to be explored in future work. Waibel, Bett, and Finke (1998) report results of their summarizer on automatically transcribed SWITCHBOARD (SWBD) data (Godfrey, Holliman, and McDaniel 1992) , the word error rate being about 30%. Their implementation used an algorithm inspired by MMR, but they did not address any dialogue-or speech-related issues in their summarizer. In a question-answer test with summaries of five dialogues, participants could identify most of the key concepts using a summary size of only five turns. These results varied widely (between 20% and 90% accuracy) across the five different dialogues tested in this experiment.", "cite_spans": [ { "start": 244, "end": 268, "text": "Chen and Withgott (1992)", "ref_id": "BIBREF10" }, { "start": 536, "end": 552, "text": "Stifelman (1995)", "ref_id": "BIBREF50" }, { "start": 614, "end": 626, "text": "Arons (1994)", "ref_id": "BIBREF3" }, { "start": 834, "end": 857, "text": "Grosz and Sidner [1986]", "ref_id": "BIBREF17" }, { "start": 1267, "end": 1297, "text": "Waibel, Bett, and Finke (1998)", "ref_id": "BIBREF56" }, { "start": 1386, "end": 1424, "text": "(Godfrey, Holliman, and McDaniel 1992)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Prosody-Based Emphasis Detection in Spoken Audio", "sec_num": "3.3" }, { "text": "Our own previous work (Zechner and Waibel 2000a) addressed for the first time the combination of challenges of dialogue summarization with summarization of spoken language in unrestricted domains. 
We presented a first prototype of DIASUMM that addressed the issues of disfluency detection and removal and sentence boundary detection, as well as cross-speaker information linking.", "cite_spans": [ { "start": 22, "end": 48, "text": "(Zechner and Waibel 2000a)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Spoken Dialogue Summarization in Unrestricted Domains", "sec_num": "3.4" }, { "text": "This work extends and expands these initial attempts substantially, in that we are now focusing on (1) a systematic training of the major components of the DIASUMM system, enabled by the recent availability of a large corpus of disfluency-annotated conversations (LDC 1999b), and (2) the exploration of three more genres of spoken dialogues in addition to the English CALLHOME corpus (NEWSHOUR, CROSSFIRE, GROUP MEETINGS). Further, the relevance annotations are now performed by a set of six human annotators, which makes the global system evaluation more meaningful, considering the typical divergence among different annotators' relevance judgments. Table 1 provides the statistics on the corpus used for the development and evaluation of our system. We use data from four different genres, two being more informal, two more formal:", "cite_spans": [], "ref_spans": [ { "start": 652, "end": 659, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Spoken Dialogue Summarization in Unrestricted Domains", "sec_num": "3.4" }, { "text": "\u2022 English CALLHOME and CALLFRIEND: from the Linguistic Data Consortium (LDC) collections, eight dialogues for the devtest set (8E-CH) and four dialogues for the eval set (4E-CH). 4 These are recordings of phone conversations between two family members or friends, typically about 30 minutes in length; the excerpts we used were matched with the transcripts, which typically represent 5-10 minutes of speaking time.", "cite_spans": [ { "start": 179, "end": 180, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Characteristics", "sec_num": "4.1" }, { "text": "\u2022 NEWSHOUR (NHOUR): Excerpts from PBS's NewsHour television show with Jim Lehrer (recorded in 1998).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Characteristics", "sec_num": "4.1" }, { "text": "\u2022 CROSSFIRE (XFIRE): Excerpts from CNN's CrossFire television show with Bill Press and Robert Novak (recorded in 1998).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Characteristics", "sec_num": "4.1" }, { "text": "\u2022 GROUP MEETINGS (G-MTG): Excerpts from recordings of project group meetings in the Interactive Systems Labs at Carnegie Mellon University.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Characteristics", "sec_num": "4.1" }, { "text": "Furthermore, we used the Penn Treebank distribution of the SWITCHBOARD corpus, annotated with disfluencies, to train the major components of the system (LDC 1999b). From Table 1 we can see that the two more formal corpora, NEWSHOUR and CROSSFIRE, have longer sentences, more sentences per turn, and fewer disfluencies (particularly nonlexicalized filled pauses and false starts) than English CALLHOME and the GROUP MEETINGS. 
This means that their flavor is more like that of written text and not so close to the conversational speech typically found in the SWITCHBOARD or CALLHOME corpora.", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 177, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Corpus Characteristics", "sec_num": "4.1" }, { "text": "All the annotations were performed on human-generated transcripts of the dialogues. The CALLHOME and GROUP MEETINGS dialogues were automatically partitioned into speaker turns (by means of a silence heuristic); the other corpora were segmented manually (based on the contents and flow of the conversation). 5 There were six naive human annotators performing the task; 6 only four, however, completed the entire set of dialogues. Thus, the number of annotations available for each dialogue varies from four to six. Prior to the relevance annotations, the annotators had to mark topical boundaries, because we want to be able to define and then create summaries for each topical segment separately (as opposed to a whole conversation consisting of multiple topics). The notion of a topic was informally defined as a region in the text that ends, according to the annotation manual, \"when the speakers shift their topic of discussion.\"", "cite_spans": [ { "start": 307, "end": 308, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Annotation 4.2.1 First Annotation Phase.", "sec_num": "4.2" }, { "text": "Once the topical segments were marked, for each such segment, each annotator had to identify the most relevant information units (IUs), called nucleus IUs, and somewhat relevant IUs, called satellite IUs. IUs are often equivalent to sentences but can span longer or shorter contiguous segments of text, dependent on the annotator's choice. The overall goal of this relevance markup was to create a concise and readable summary containing the main information present in the topical segment. Annotators were also asked to mark the most salient words within their annotated IUs with a +, which would render a summary with a somewhat more telegraphic style (+-marked words).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Annotation 4.2.1 First Annotation Phase.", "sec_num": "4.2" }, { "text": "We also asked that the human annotators stay within a preset target length for their summaries: The +-marked words in all IUs within a topical segment should be 10-20% of all the words in the segment. The guideline was enforced by a checker program that was run during and after annotation of a transcript and that also ensured that no markup errors and no accidental word deletions occurred. We provide a brief example here (n[, n] mark the beginning and end of a nucleus IU, the phrase they fly to Boston was +-marked as the core content within this IU): B: heck it might turn out that you know n[ if +they +fly in +to +boston i can n]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Annotation 4.2.1 First Annotation Phase.", "sec_num": "4.2" }, { "text": "After the first annotation phase, in which each coder worked independently according to the guidelines described above, we devised a second phase, in which two coders from the initial group were asked to create a common-ground annotation, based on the majority opinion of the whole group. 
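As a concrete illustration of the markup introduced in section 4.2.1 above, the sketch below counts +-marked words in a topical segment and checks them against the 10-20% target length. It is a simplified stand-in for the checker program mentioned there; the parsing details are assumptions, and satellite markup and error checking are omitted.

```python
def plus_marked_ratio(segment_lines):
    """Ratio of '+'-marked words to all words in a topical segment (simplified).

    Assumes 'n[' / 'n]' delimit nucleus IUs, '+word' marks a salient word,
    and speaker labels such as 'B:' are skipped.
    """
    total = plus = 0
    for line in segment_lines:
        for tok in line.split():
            if tok in ("n[", "n]") or tok.endswith(":"):
                continue                       # markup symbols and speaker labels
            total += 1
            if tok.startswith("+"):
                plus += 1
    return plus / total if total else 0.0

segment = ["B: heck it might turn out that you know n[ if +they +fly in +to +boston i can n]"]
ratio = plus_marked_ratio(segment)
print(f"{ratio:.0%}", "within" if 0.10 <= ratio <= 0.20 else "outside", "the 10-20% target")
# -> "25% outside the 10-20% target"  (a single turn, not a full segment)
```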
To construct such a majority opinion guideline automatically, we assigned weights to all words in nucleus IUs and satellite IUs and added all weights for all marked words of all coders for every turn. 7 The total turn weights were then sorted by decreasing value to provide a guide for the two coders in the second phase as to which turns they should focus their annotations on for the common-ground or gold-standard summaries. Other than this guideline, the requirements were almost exactly identical to those in phase 1, except that (1) the pair of annotators was required to work together on this task to be able to reach a consensus opinion, and (2) the preset relative word length of the gold summary (10-20%) applied only to the nucleus IUs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creation of Gold-Standard Summaries.", "sec_num": "4.2.2" }, { "text": "As for the topical boundaries, which obviously vary among coders, a list of boundary positions chosen by the majority (at least half) of the coders in the first phase was provided. In this gold-standard phase, the two coders mostly stayed with these suggestions and changed less than 15% of the suggested topic boundaries, the majority of which were minor (less than two turns' difference in boundary position). Table 2 provides the statistics on the frequencies of the annotated nucleus and satellite IUs. We make the following observations:", "cite_spans": [], "ref_spans": [ { "start": 412, "end": 419, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Creation of Gold-Standard Summaries.", "sec_num": "4.2.2" }, { "text": "\u2022 On average, about 23% of all tokens were assigned to a nucleus IU and 5% to a satellite IU; counting only the +-marked tokens, this reduces to about 11% and 2% of all tokens, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Annotation Analysis.", "sec_num": "4.2.3" }, { "text": "\u2022 The average total lengths of nuclei and satellites vary widely across corpora: between 17.1 (13.1) tokens for CALLHOME and 37.7 (23.4) tokens for GROUP MEETINGS data. This is most likely a reflection on the typical length of turns in the different subcorpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Annotation Analysis.", "sec_num": "4.2.3" }, { "text": "\u2022 A similar variation is also observed across annotators: between 12 and 40 tokens for nucleus-IUs and between 9 and 20 tokens for satellites. 
The granularity of IUs is quite different across annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Annotation Analysis.", "sec_num": "4.2.3" }, { "text": "\u2022 Since some annotators mark a larger number of IUs than others, there is an even larger discrepancy in the relative number of words assigned to nucleus IUs and satellite IUs among the different annotators: 11-44% (nucleus IUs) and 0-13% (satellite IUs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Annotation Analysis.", "sec_num": "4.2.3" }, { "text": "\u2022 The ratio of nucleus versus satellite tokens also varies greatly among the annotators: from about 1:1 to 40:1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Annotation Analysis.", "sec_num": "4.2.3" }, { "text": "\u2022 The ratio of nucleus and satellite tokens that are +-marked varies greatly: between 36 and 77% for nucleus IUs and between 2 and 80% for satellite IUs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Annotation Analysis.", "sec_num": "4.2.3" }, { "text": "From these observations, we conclude that merging the nucleus and satellite IUs into one class would yield a more consistent picture than keeping them separate. A similar argument can be made for the +-marked passages, in which we also find a quite high intercoder variation in relative +-marking. This led us to the decision of giving equal weight to any word in an IU, irrespective of IU type or marking, for the purpose of global system evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Annotation Analysis.", "sec_num": "4.2.3" }, { "text": "Finally, we conjecture that the average length of our extraction units should be in the 10-40 words range, which roughly corresponds to about 3-12 seconds of real time, assuming an average word length of 300 milliseconds. As a comparison, we note that Valenza et al. (1999) found summaries with 30-grams 8 working well in their experiments, a finding that is in line with our observations here on typical human IU lengths.", "cite_spans": [ { "start": 252, "end": 273, "text": "Valenza et al. (1999)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "General Annotation Analysis.", "sec_num": "4.2.3" }, { "text": "Agreement between coders (and between automatic methods and coders) has been measured in the summarization literature with quite a wide range of methods: Rath, Resnick, and Savage (1961) use Kendall's \u03c4 ; Kupiec, Pedersen, and Chen (1995) (among many others) use percentage agreement; and Aone, Okurowski, and Gorlinsky (1997) (among others) use the notions of precision, recall, and F 1 -score, which are commonly employed in the information retrieval community. Similarly, in the literature on discourse segmentation and labeling, a variety of different agreement measures have been used, including precision and recall (Hearst 1997; Passonneau and Litman 1997) , Krippendorff's (1980) \u03b1 (Passonneau and Litman 1997) and Cohen's (1960) \u03ba (Carletta et al. 
1997) .", "cite_spans": [ { "start": 154, "end": 186, "text": "Rath, Resnick, and Savage (1961)", "ref_id": "BIBREF40" }, { "start": 622, "end": 635, "text": "(Hearst 1997;", "ref_id": "BIBREF18" }, { "start": 636, "end": 663, "text": "Passonneau and Litman 1997)", "ref_id": "BIBREF38" }, { "start": 666, "end": 687, "text": "Krippendorff's (1980)", "ref_id": "BIBREF25" }, { "start": 690, "end": 718, "text": "(Passonneau and Litman 1997)", "ref_id": "BIBREF38" }, { "start": 723, "end": 737, "text": "Cohen's (1960)", "ref_id": "BIBREF11" }, { "start": 740, "end": 762, "text": "(Carletta et al. 1997)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Intercoder Agreement.", "sec_num": "4.2.4" }, { "text": "In this work, we use the two following metrics: (1) the \u03ba-statistic in its extension for more than two coders (Davies and Fleiss 1982) ; and (2) precision, recall, and F 1 -score. 9 We will discuss the \u03ba-statistic first.", "cite_spans": [ { "start": 110, "end": 134, "text": "(Davies and Fleiss 1982)", "ref_id": "BIBREF12" }, { "start": 180, "end": 181, "text": "9", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Intercoder Agreement.", "sec_num": "4.2.4" }, { "text": "For intercoder agreement with respect to topical boundaries, agreement is found if boundaries fall within the same 50-word bin of a dialogue. Relevance agreements are computed at the word level. For relevance markings, we compute \u03ba both for the three-way case (nucleus IUs, satellite IUs, unmarked) and the two-way case (any IUs, unmarked). 10 Topical-boundary agreement was not evaluated for two of the GROUP MEETINGS dialogues, in which only one of four annotators marked any text-internal topic boundary. We compute agreements for each dialogue separately and report the arithmetic means for the five subcorpora in Table 3 . We observe that agreement for topical boundaries is much higher than for relevance markings. Furthermore, agreement is generally higher for CALLHOME and comparatively low for the GROUP MEETINGS corpus.", "cite_spans": [], "ref_spans": [ { "start": 618, "end": 625, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Intercoder Agreement.", "sec_num": "4.2.4" }, { "text": "As a second evaluation metric, we compute precision, recall, and F 1 -scores for the same four annotators and the same sets of subcorpora as before. For topical boundaries, a match means that the boundaries fall within \u00b13 turns of each other, and for relevant Table 3 Intercoder annotation \u03ba agreement for topical boundaries and relevance markings. Table 4 Intercoder annotation F 1 -agreement for topical boundaries and relevance markings. words a match means that the two words to be compared are both in a nucleus or satellite IU. The results can be seen in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 260, "end": 267, "text": "Table 3", "ref_id": null }, { "start": 349, "end": 356, "text": "Table 4", "ref_id": null }, { "start": 561, "end": 568, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Intercoder Agreement.", "sec_num": "4.2.4" }, { "text": "In addition to the annotation for topic boundaries and relevant text spans, the corpus was also annotated for speech disfluencies in the same style as the Penn Treebank SWITCHBOARD corpus (LDC 1999b). 
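Returning briefly to the agreement figures of section 4.2.4: the word-level precision, recall, and F1 computation described there can be sketched as below, treating each coder's marked word positions as a set. This is our own minimal rendering of that metric, not the evaluation code itself.

```python
def word_level_agreement(coder_a, coder_b):
    """Precision/recall/F1 between two coders' word-level relevance markings.

    coder_a, coder_b: sets of word positions placed in any IU (nucleus or satellite).
    Coder A is treated as the reference, coder B as the response.
    """
    overlap = len(coder_a & coder_b)
    precision = overlap / len(coder_b) if coder_b else 0.0
    recall = overlap / len(coder_a) if coder_a else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# toy example: coder A marked word positions 3-9, coder B marked positions 5-12
print(word_level_agreement(set(range(3, 10)), set(range(5, 13))))
# -> (0.625, 0.714..., 0.666...)
```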
One coder (different from the six annotators mentioned before) manually tagged the corpus for disfluencies and sentence boundaries following the SWITCHBOARD disfluency annotation style book (Meteer et al. 1995) .", "cite_spans": [ { "start": 391, "end": 411, "text": "(Meteer et al. 1995)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Disfluency and Sentence Boundary Annotation.", "sec_num": "4.2.5" }, { "text": "A final type of annotation was performed on the entire corpus to mark all questions and their answers, for the purpose of training and evaluation of the question-answer linking system component. Questions and answers were annotated in the following way: Every sentence that is a question was marked as either a Yes-No-question or a Wh-question. Exceptions were back-channel questions, such as \"Is that right?\"; rhetorical questions, such as \"Who would lie in public?\"; and other questions that do not refer to a propositional content. These were not supposed to be marked (even if they have an apparent answer), since we see the latter class of questions as irrelevant for the purpose of increasing the local coherence within summaries. For each Yes-No-question and Wh-question that has an answer, the answer was marked with its relative offset to the question to which it belongs. Some answers are continued over several sentences, but only the core answer (which usually consists of a single sentence) was marked. This decision was made to bias the answer detection module toward brief answers and to avoid the question-answer regions' getting too lengthy, at the expense of summary conciseness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question-Answer Annotation.", "sec_num": "4.2.6" }, { "text": "The global system architecture of the spoken-dialogue summarization system presented in this article (DIASUMM) is depicted in Figure 1 . The input data are a timeordered sequence of speaker turns with the following quadruple of information: start time, end time, speaker label, and word sequence. The seven major components are executed sequentially, yielding a pipeline architecture. ", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 134, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "System Architecture", "sec_num": "5.1" }, { "text": "Global system architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "The following subsections describe the components of the system in more detail. As argued earlier, the topic detection component is not relevant for the way we conduct the global system evaluation and hence is not discussed here. (We implemented a variant of Hearst's [1997] TextTiling algorithm.) The three components involved in disfluency detection are the part-of-speech (POS) tagger, the false-start detection module, and the repetition filter. They are discussed in subsection 5.3, followed by a subsection on sentence boundary detection (5.4). The question-answer pair detection is described in subsection 5.5, and the sentence selection module, performing relevance ranking, in subsection 5.6.", "cite_spans": [ { "start": 259, "end": 274, "text": "Hearst's [1997]", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "We eliminate all human and nonhuman noises and incomplete words from the input transcript. 
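A minimal sketch of this kind of input normalization, including the case folding, punctuation removal, and contraction splitting described next, is given below. The noise markup convention (bracketed annotations, a trailing hyphen for incomplete words) and the regular expressions are assumptions for illustration, not the actual preprocessing module.

```python
import re

def tokenize_turn(raw):
    """Illustrative normalization of one transcript turn (not the actual module).

    Assumes noises appear as bracketed annotations like '[laugh]' and
    incomplete words end in '-'.
    """
    text = re.sub(r"\[[^\]]*\]", " ", raw)        # drop bracketed noise annotations
    text = text.lower()                           # no case information
    tokens = []
    for tok in text.split():
        if tok.endswith("-"):                     # incomplete word, e.g. "commu-"
            continue
        tok = tok.strip(".,?!;:\"")               # no punctuation
        if not tok:
            continue
        if tok.endswith("n't") and len(tok) > 3:  # "don't" -> "do", "n't"
            tokens.extend([tok[:-3], "n't"])
        elif "'" in tok[1:]:                      # "i'll" -> "i", "'ll"
            stem, _, rest = tok.partition("'")
            tokens.extend([stem, "'" + rest])
        else:
            tokens.append(tok)
    return tokens

print(tokenize_turn("I don't know what commu- [laugh] I mean, like the Jewish community"))
# -> ['i', 'do', "n't", 'know', 'what', 'i', 'mean', 'like', 'the', 'jewish', 'community']
```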
Further, we eliminate all information on case and punctuation, since we emulate the ASR output in that regard, which does not provide this information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Tokenization", "sec_num": "5.2" }, { "text": "Contractions such as don't or I'll are divided and treated as separate words-in these examples we would obtain do n't and I 'll.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Tokenization", "sec_num": "5.2" }, { "text": "5.3 Disfluency Detection 5.3.1 Motivation. Conversational, informal spoken language is quite different from written language in that a speaker's utterances are typically much less well-formed than a writer's sentences. We can observe a set of disfluencies such as false starts, hesitations, repetitions, filled pauses, and interruptions. Additionally, in speech there is no good match between linguistically motivated sentence boundaries and turn boundaries or recognition hypotheses from automatic speech recognition. Shriberg (1994) , Meteer et al. (1995) , and Rose (1998) . It is worth noting, however, that any disfluency classification will be only an approximation of the assumed real phenomena and that often boundaries between different classes are fuzzy and hard to decide for human annotators (cf. Meteer et al. [1995] on annotators' problems with the classification of the word so).", "cite_spans": [ { "start": 519, "end": 534, "text": "Shriberg (1994)", "ref_id": "BIBREF47" }, { "start": 537, "end": 557, "text": "Meteer et al. (1995)", "ref_id": "BIBREF35" }, { "start": 564, "end": 575, "text": "Rose (1998)", "ref_id": "BIBREF44" }, { "start": 809, "end": 829, "text": "Meteer et al. [1995]", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Input Tokenization", "sec_num": "5.2" }, { "text": "\u2022 Filled pauses: We follow Rose's (1998) classification of nonlexicalized filled pauses (typically uh, um) and lexicalized filled pauses (e.g., like, you know). Whereas the former are usually nonambiguous and hence easy to detect, the latter are ambiguous and much harder to detect accurately.", "cite_spans": [ { "start": 27, "end": 40, "text": "Rose's (1998)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Types of Disfluencies. The classification of disfluencies in this work follows", "sec_num": "5.3.2" }, { "text": "\u2022 Restarts or repairs: These are fragments that are resumed, but without completely abandoning the first attempt. We follow the notation in Meteer et al. (1995) and Shriberg (1994) , which has these parts:", "cite_spans": [ { "start": 140, "end": 160, "text": "Meteer et al. (1995)", "ref_id": "BIBREF35" }, { "start": 165, "end": 180, "text": "Shriberg (1994)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Types of Disfluencies. The classification of disfluencies in this work follows", "sec_num": "5.3.2" }, { "text": "(1) reparandum, (2) interruption point (+), (3) interregnum (editing phase, {. . . }), and (4) repair. \u2022 False starts: These are abandoned, incomplete clauses. In some cases, they may occur at the end of an utterance, and they can be due to interruption by another speaker. Example: so we didn't-they have not accepted our proposal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of Disfluencies. 
The classification of disfluencies in this work follows", "sec_num": "5.3.2" }, { "text": "The past decade has produced a substantial amount of research in the area of detecting intonational and linguistic boundaries in conversational speech, as well as in the area of detecting and correcting speech disfluencies. Whereas earlier work tended to look at these phenomena in isolation (Nakatani and Hirschberg 1994; Stolcke and Shriberg 1996) , more recent work has attempted to solve several tasks within one framework (Heeman and Allen 1999; Stolcke et al. 1998) .", "cite_spans": [ { "start": 292, "end": 322, "text": "(Nakatani and Hirschberg 1994;", "ref_id": "BIBREF37" }, { "start": 323, "end": 349, "text": "Stolcke and Shriberg 1996)", "ref_id": "BIBREF52" }, { "start": 427, "end": 450, "text": "(Heeman and Allen 1999;", "ref_id": "BIBREF19" }, { "start": 451, "end": 471, "text": "Stolcke et al. 1998)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work.", "sec_num": "5.3.3" }, { "text": "Most approaches use some kind of prosodic information, such as duration of pauses, stress, and pitch contours, and most of them combine this prosodic information with information about word identity and sequence (n-grams, hidden Markov models). In the study of Stolcke et al. (1998) , the goal was to detect sentence boundaries and a variety of speech disfluencies on a large portion of the SWITCHBOARD corpus. An explicit comparison was made between prosodic and word-based models, and the results showed that an n-gram model, enhanced with segmental information about turn boundaries, significantly outperformed the prosodic model. Model combination improved the overall results, but only to a small extent. In more recent research, Shriberg et al. (2000) reported that for sentence boundary detection in two different corpora (BROADCAST NEWS and SWITCHBOARD), prosodic models outperform word-based language models and a model combination yields additional performance gains.", "cite_spans": [ { "start": 261, "end": 282, "text": "Stolcke et al. (1998)", "ref_id": "BIBREF53" }, { "start": 735, "end": 757, "text": "Shriberg et al. (2000)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work.", "sec_num": "5.3.3" }, { "text": "In the following, we will discuss the three components of the DIASUMM system that perform disfluency detection:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "\u2022 a POS tagger that tags, in addition to the standard SWITCHBOARD Treebank-3 tag set (LDC 1999b), the following disfluent regions or words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "1. 
coordinating conjunctions that don't serve their usual connective role, but act more as links between subsequent speech acts of a speaker (e.g., and then; we call these empty coordinating conjunctions in this work) 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "lexicalized filled pauses (labeled as discourse markers in the Treebank-3 corpus; e.g., you know, like) 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "editing terms within speech repairs (e.g., I mean) 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "nonlexicalized filled pauses (e.g., um, uh)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "\u2022 a decision tree (supported by a shallow chunk parser) that decides whether to label a particular sentence as a false start", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "\u2022 a repetition detection script (for repeated sequences of up to four words) 5.3.5 Training Corpus. For training, we used a part of the SWITCHBOARD transcripts that was manually annotated for sentence boundaries, POS, and the following types of disfluent regions (LDC 1999b):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "\u2022 {A. . . }: asides (very rare; we ignore them in our experiments)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "\u2022 {C. . . }: empty coordinating conjunctions (e.g., and then)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "\u2022 {D. . . }: discourse markers (i.e., lexicalized filled pauses in our terminology, e.g., you know)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "\u2022 {E. . . }: editing terms (within repairs; e.g., I mean)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "\u2022 {F. . . }: filled pauses (nonlexicalized; e.g., uh)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "\u2022 [. . . + . . .]: repairs: the part before the + is called reparandum (to be removed), the part after the + repair (proper)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "Sentence boundaries can be at the end of completed sentences (E S) or of noncompleted sentences, such as false starts or abandoned clauses (N S). 5.3.6 POS Tagger. We are using Brill's rule-based POS tagger (Brill 1994) . 
Its basic algorithm at run time (after training) can be described as follows:", "cite_spans": [ { "start": 207, "end": 219, "text": "(Brill 1994)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "Tag every word with its most likely tag, predicting tags of unknown words based on rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview.", "sec_num": "5.3.4" }, { "text": "Change every tag according to its right and left context (both words and tags are considered), following a list of rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "For preprocessing, we replaced the tags in the regions of {C. . . }, {D. . . }, and {E. . . } with the tags CO (coordinating), DM (discourse marker), and ET (editing term), respectively. (The filler regions {F. . . } are already tagged with UH in the corpus.) Lines that contain typographical errors were excluded from the training corpus. We further eliminated all incomplete words (XX tag) and combined multiwords, marked by a GW tag, into a single word (hence eliminating the GW tag). 11 The entire resulting new tag set had 42 tags. 12 Training of the POS tagger proceeded in three stages, using about 250,000 tagged words for each stage. The trained POS tagger's performance on an unseen test set of about 185,000 words is 94.1% tag accuracy (untrained baseline: 84.8% accuracy). Table 5 shows precision, recall, and F 1 -scores for the four categories of disfluency tags, measured on the test set after the last training phase. We see that the nonlexicalized filler words are almost perfectly tagged (F 1 = 0.98), whereas the hardest task for the tagger is the empty coordinating conjunctions (F 1 = 0.88): There are a few highly ambiguous words in that set, such as and, so, and or. Table 6 shows the POS tagging accuracy on the five subcorpora of our dialogue corpus, evaluated on a sample of 500 words per subcorpus. We see that the POStagging accuracy is slightly lower than for the SWITCHBOARD set that was used for Table 7 Disfluency tag detection (F 1 ) for five subcorpora (results in parentheses: less than 10 tags to be detected).", "cite_spans": [ { "start": 537, "end": 539, "text": "12", "ref_id": null } ], "ref_spans": [ { "start": 785, "end": 792, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 1190, "end": 1197, "text": "Table 6", "ref_id": null }, { "start": 1427, "end": 1434, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": ". training (approximately 90-93%; global average: 91.1%). Further we observe that with the exception of the CALLHOME corpora, the majority of unknown words were actually tagged correctly. The most frequent errors were (1) conjunctions tagged as empty coordinated conjunctions, (2) proper names tagged as regular nouns, and (3) adverbs tagged as adjectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CO", "sec_num": null }, { "text": "Finally, we look at the POS tagger's performance for the four disfluency tags CO, DM, ET, and UH in our five subcorpora; the results of this evaluation are presented in Table 7 . We can see that the detection accuracy is generally lower than for the corpus on which we trained the tagger (SWITCHBOARD), but still quite good in general. 
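To make the two-step run-time procedure of the tagger (section 5.3.6) concrete, the following is a minimal sketch of transformation-based tagging; the lexicon, contextual rules, and example are hypothetical stand-ins rather than Brill's trained rule set or the full Treebank-3 tag inventory.

```python
# Minimal sketch of a transformation-based (Brill-style) tagger at run time.
# The lexicon, rules, and example below are illustrative stand-ins, not the trained model.

LEXICON = {"and": "CC", "then": "RB", "you": "PRP", "know": "VBP",
           "uh": "UH", "i": "PRP", "said": "VBD"}
DEFAULT_TAG = "NN"   # a real tagger predicts unknown-word tags with lexical rules

# Contextual rules: (old_tag, condition(words, tags, i), new_tag), applied in order.
RULES = [
    # utterance-initial "and" acting as an empty coordinating conjunction
    ("CC", lambda words, tags, i: i == 0 and words[i] == "and", "CO"),
    # "you know" used as a lexicalized filled pause (discourse marker)
    ("VBP", lambda words, tags, i: words[i] == "know" and i > 0 and words[i - 1] == "you", "DM"),
]

def tag(words):
    # Step 1: assign every word its most likely tag.
    tags = [LEXICON.get(w, DEFAULT_TAG) for w in words]
    # Step 2: rewrite tags based on the surrounding words and tags.
    for old_tag, condition, new_tag in RULES:
        for i in range(len(words)):
            if tags[i] == old_tag and condition(words, tags, i):
                tags[i] = new_tag
    return list(zip(words, tags))

print(tag("and then uh you know i said".split()))
```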
The major exceptions are the UH tags, on which the F 1 -scores are comparatively low for all subcorpora. The reason for this can be found mostly in words like yes, no, uh-huh, right, okay, and yeah, which are often tagged with UH in SWITCHBOARD but frequently are not considered to be irrelevant words in our corpus and hence not marked as disfluent (e.g., if they are considered to be the answer to a question or a summary-relevant acknowledgment). We circumvent potential exclusion from the summary output of these and other words that might be erroneously tagged as nonlexicalized filled pauses (UH) by marking a small set of words as exempt from removal (see section 5.5.6).", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 176, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "CO", "sec_num": null }, { "text": "False starts are quite frequent in spontaneous speech, occurring at a rate of about 10-15% of all sentences (SWITCHBOARD, CALLHOME). They involve less than 10% of the total words of a dialogue; about 34% of the words in these incomplete sentences are part of some other disfluencies, such as filled pauses or repairs. (In complete sentences, only about 15% of the words are part of these disfluencies.) For CALLHOME, the average length of complete sentences is about 6 words, of incomplete sentences about 4.1 words (including disfluencies).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False Start Detection.", "sec_num": "5.3.7" }, { "text": "We trained a C4.5 decision tree (Quinlan 1992) on 8,000 sentences of SWITCHBOARD. As features we use the first and last four trigger words (words that have a high incidence around sentence boundaries) and POS of every sentence, as well as the first and last four chunks from a POS-based chunk parser. This chunk parser is based on a simple context-free POS grammar for English. It outputs a phrasal bracketing of the input string (e.g., noun phrases or prepositional phrases). Further, we encode the length of the sentence in words and the number of the words not parsed by the chunk parser. We observed that whereas the chunk information itself does not improve performance over the baseline of using trigger words and POS information only, the derived feature of \"number of not parsed words\" actually does improve the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False Start Detection.", "sec_num": "5.3.7" }, { "text": "We ran the decision tree on data with perfect POS tags (for SWITCHBOARD only), disfluency tags (except for repairs), and sentence boundaries. The evaluations were performed on independent test sets of about 3,000 sentences for SWITCHBOARD and of our complete dialogue corpus. Table 8 shows the results of these experiments. 
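As an illustration of how such a feature vector could be assembled for one candidate sentence, the sketch below combines trigger-word, POS, chunk, length, and unparsed-word features; the toy chunker, the trigger list, and the padding convention are hypothetical simplifications of the actual components.

```python
# Sketch: assembling false-start features from a POS-tagged sentence.
# TRIGGER_WORDS, chunk_parse(), and the padding token are illustrative stand-ins.

TRIGGER_WORDS = {"so", "but", "and", "because", "well"}

def chunk_parse(tagged):
    """Toy chunker: nouns become NP chunks; everything else counts as not parsed."""
    chunks = [("NP" if pos.startswith("NN") else None) for _, pos in tagged]
    parsed = [c for c in chunks if c is not None]
    return parsed, len(chunks) - len(parsed)

def first_last(seq, n=4, pad="<none>"):
    """First n and last n items, padded so every sentence yields a fixed-width vector."""
    first = (list(seq) + [pad] * n)[:n]
    last = ([pad] * n + list(seq))[-n:]
    return first + last

def false_start_features(tagged):
    words = [w.lower() for w, _ in tagged]
    tags = [t for _, t in tagged]
    triggers = [w if w in TRIGGER_WORDS else "<other>" for w in words]
    chunks, unparsed = chunk_parse(tagged)
    return {
        "triggers": first_last(triggers),
        "pos": first_last(tags),
        "chunks": first_last(chunks),
        "length": len(words),
        "unparsed": unparsed,   # the derived feature that improved results
    }

print(false_start_features([("so", "RB"), ("we", "PRP"), ("did", "VBD"), ("n't", "RB")]))
```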
Typical errors, where complete sentences were classified as incomplete, are inverted forms or Table 8 False start classification results for different corpora (F 1 ).", "cite_spans": [], "ref_spans": [ { "start": 276, "end": 283, "text": "Table 8", "ref_id": null }, { "start": 418, "end": 425, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "False Start Detection.", "sec_num": "5.3.7" }, { "text": "False start frequency (in %) 12.3 12.1 11.0 2.0 7.2 13.9 False start detection (F 1 )", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 28, "text": "%)", "ref_id": null } ], "eq_spans": [], "section": "SWBD 8E-CH 4E-CH NHOUR XFIRE G-MTG", "sec_num": null }, { "text": ".611 .545 .640 .286 .352 .557 Table 9 Detection accuracy for repairs on the basis of individual word tokens using the repetition filter. ellipsis at the end of a sentence (e.g., neither do I, it seems to). The performance for the informal corpora (CALLHOME, GROUP MEETINGS) is better than that for the formal corpora (NEWSHOUR, CROSSFIRE); this is related to the fact that the relative frequency of false starts is markedly lower in these latter data sets and that these corpora are more dissimilar to the training corpus (SWITCHBOARD).", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "SWBD 8E-CH 4E-CH NHOUR XFIRE G-MTG", "sec_num": null }, { "text": "The repetition detection component is concerned with (verbatim) repetitions within a speaker's turn, the most frequently occurring case of all speech repairs for informal dialogues (insertions and substitutions are comparatively less frequent). Repeated phrases can potentially be interrupted by other disfluencies, such as filled pauses or editing terms. Repetition detection is performed with a script that can identify repetitions of word/POS sequences of length one to four (longer repetitions are extremely rare: on average, less than 1% of all repetitions). Words that have been marked as disfluent by the POS tagger are ignored when the repeated sequences are considered, so we can correctly detect repetitions such as [ he said uh to + he said to ] him. . . . We are evaluating the precision, recall, and F 1 -scores for this component at the level of individual words when the POS tagger and the sentence boundary detection component are used. Table 9 shows the results. We see that for the informal subcorpora, we get very good precision (only a few repetitions detected are incorrect), and recall is in the 25-45% range (since we cannot detect substitution or insertion type of repairs). The results for the formal subcorpora are considerably worse, so this filter should probably not be used for corpora with as few repetitions as NEWSHOUR or CROSSFIRE. We checked all of the 95 false positives of this evaluation and observed that in the majority of cases (41%), the repetition was correctly detected but was not marked by the human annotator, since it might be considered a case of emphasis. We believe that although some nuances of the sentence(s) might be lost, for the purpose of summarization it makes perfect sense to reduce this information. Sometimes, individual words are repeated for emphasis, sometimes whole sentences (e.g., \"Good./ Good./\"). 
In the following example from English CALLHOME, the emphasis is rather extreme:", "cite_spans": [], "ref_spans": [ { "start": 953, "end": 960, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Repetition Detection.", "sec_num": "5.3.8" }, { "text": "Further, about 19% of false positives were correct but not annotated because they span multiple turns, and about 14% were erroneously missed by the human annotator. Only the remaining cases (26%) were actual false positives, caused by incorrect POS tags (5%, typically an incorrectly tagged \"that/WDT that/DT\" sequence at the beginning of a relative clause) or incorrect sentence boundaries (21%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repetition Detection.", "sec_num": "5.3.8" }, { "text": "There have been attempts to get a more complete coverage of detection and correction of all types of speech repairs (Heeman and Allen 1999) . We decided, however, to use a simple method here that works well for a large subset of cases and is very efficient at the same time.", "cite_spans": [ { "start": 116, "end": 139, "text": "(Heeman and Allen 1999)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Repetition Detection.", "sec_num": "5.3.8" }, { "text": "After detection, the correction of disfluencies is straightforward. When DIASUMM generates its output from the ranked list of sentences, it skips the false starts, the repetitions, and the words that were tagged with CO, DM, ET, or UH by the POS tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disfluency Correction in DIASUMM.", "sec_num": "5.3.9" }, { "text": "The purpose of the sentence boundary detection component is to insert linguistically meaningful sentence boundaries in the text, given a POS-tagged input. We consider all intraturn and interturn boundary positions for every speaker in a conversation. We use the abbreviations EOS for end of complete sentence (E S in the SWITCHBOARD corpus) and NEOS for end of noncomplete sentence (N S in the SWITCHBOARD corpus). The frequency of sentence boundaries (with respect to the total number of words) is about 13.3%, most of the boundaries (almost 90%) being end markers of completed sentences (SWITCHBOARD).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Boundary Detection 5.4.1 Introduction.", "sec_num": "5.4" }, { "text": "We trained a C4.5 decision tree and computed its input features from a context of four words before and after a potential sentence boundary, motivated by the results of Gavald\u00e0, Zechner, and Aist (1997) . 
Also following Gavald\u00e0, Zechner, and Aist (1997) , we used 60 trigger words with high predictive potential, employing the score computation method described in this article.", "cite_spans": [ { "start": 169, "end": 202, "text": "Gavald\u00e0, Zechner, and Aist (1997)", "ref_id": "BIBREF15" }, { "start": 220, "end": 253, "text": "Gavald\u00e0, Zechner, and Aist (1997)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Training and Testing.", "sec_num": "5.4.2" }, { "text": "The decision tree input features for every word position are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Testing.", "sec_num": "5.4.2" }, { "text": "\u2022 POS tag (42 different tags)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Testing.", "sec_num": "5.4.2" }, { "text": "\u2022 trigger word (60 different trigger words)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Testing.", "sec_num": "5.4.2" }, { "text": "\u2022 turn boundary before this word?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Testing.", "sec_num": "5.4.2" }, { "text": "\u2022 if turn boundary: length of pause after last turn of same speaker", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Testing.", "sec_num": "5.4.2" }, { "text": "Since NEOS boundaries occur very infrequently (only about 10% of all boundaries, which is only about 1% of all potential boundaries), we decided to merge this class with the EOS class and report results for this combined class only (CEOS). (We relied on the false-start detection module described above to identify the NEOS sentences within this merged class of sentences after the sentence boundary classification.) For training, we used 25,000 words from the Treebank-3 corpus; the test set size was 1,000 words. Table 10 shows the results in detail for the various parameter combinations. We see that for good performance we need to know about one of these two features: \"is there a turn boundary before this word?\" or \"pause duration after last turn from same speaker.\" ", "cite_spans": [], "ref_spans": [ { "start": 515, "end": 523, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Training and Testing.", "sec_num": "5.4.2" }, { "text": "Inter-and intraturn boundary detection (BD) results on 1,000-word test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 11", "sec_num": null }, { "text": "Interturn non-BD 12 (1.2) .56 Interturn BD 112 (11.3) .95 Intraturn non-BD 809 (81.4) .99 Intraturn BD 61 (6.1) .77", "cite_spans": [ { "start": 30, "end": 53, "text": "Interturn BD 112 (11.3)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Occurrence (%) Detection Accuracy (F1)", "sec_num": null }, { "text": "To see how much influence an imperfect POS tagging might have on these results, we POS-tagged the test set data using the POS tagger described above. For this and the following experiments, we increased the training corpus for the decision tree to 40,000 words. The POS tagger accuracy for this test set was about 95.3%, and the F 1 -score for CEOS was .882, which is 98.9% of .892 on perfect POS-tagged input. 
This is encouraging, since it shows that the decision tree is not very sensitive to the majority of POS errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Imperfect POS Tagging.", "sec_num": "5.4.3" }, { "text": "In this analysis, we are interested in comparing the detection of sentence boundaries between turns (interturn) to the detection of boundaries within a turn (intraturn). Table 11 shows the results of this analysis (same test set as above). As might be expected, the performance is very good for the two frequent classes: sentence boundaries at the end of turns and nonboundaries within turns (F 1 > .95), but considerably worse for the two more infrequent cases. The very rare cases (around 1% only) of non-sentence boundaries at the end of turns (i.e. turncontinuations) show the lowest performance (F 1 = .56).", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 178, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Interturn and Intraturn Boundaries.", "sec_num": "5.4.4" }, { "text": "To get a picture of the realistic performance of the sentence boundary detection component, using the (imperfect) POS tagger and a faster, but slightly less accurate, decision tree, 13 we evaluate the sentence boundary detection accuracy for all five subcorpora of our dialogue corpus. Table 12 provides the results of these experiments. The results reflect a trend very similar to that for the SWITCHBOARD corpus, in that the two more frequent classes (interturn boundaries and intraturn nonboundaries) have high detection scores, whereas the two more infrequent classes are less well detected. Furthermore, we observe that in cases in which the relative frequency of rare classes is further reduced, the classification accuracy declines overproportionally (particularly for the rarest class of the interturn nonboundaries). Also, overall boundary detection is better for the two more informal corpora, CALLHOME and GROUP MEETINGS (F 1 > .72).", "cite_spans": [], "ref_spans": [ { "start": 286, "end": 294, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Sentence Boundary Detection on Dialogue Corpus.", "sec_num": "5.4.5" }, { "text": "Boundary detection (BD) accuracy (F 1 ) for five subcorpora (in parentheses: relative frequency of class in percent).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 12", "sec_num": null }, { "text": "Interturn non-BD .51 (2.9) .31 (1. 5.5 Cross-Speaker Information Linking 5.5.1 Introduction. One of the properties of multiparty dialogues is that shared information is created between dialogue participants. The most obvious interactions of this kind are question-answer (Q-A) pairs. The purpose of this component is to create automatically such coherent pieces of relevant information, which can then be extracted together while generating the summary. The effects of such linkings on actual summaries can be seen in two dimensions: (1) increased local coherence in the summary and (2) a potentially higher informativeness of the summary. Since Q-A linking has a side effect in that other information will be lost with respect to a summary of the same length without Q-A linking, the second claim is much less certain to hold than the first. We investigated these questions in related work (Zechner and Lavie 2001) and found that although Q-A linking does not significantly change the informativeness of summaries on average, it does increase summary coherence (fluency) significantly. 
In this section, we will be concerned with the following two intuitive subtasks of Q-A linking: (1) identifying questions (Qs) and (2) finding their corresponding answers.", "cite_spans": [ { "start": 891, "end": 915, "text": "(Zechner and Lavie 2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "8E-CH 4E-CH NHOUR XFIRE G-MTG", "sec_num": null }, { "text": "Detecting a question and its corresponding answer can be seen as a subtask of the speech act detection and classification task. Recently, Stolcke et al. (2000) presented a comprehensive approach to dialogue act modeling with statistical techniques. A good overview and comparison of recent related work can also be found in Stolcke et al.'s article. Results from their evaluations on SWITCHBOARD data show that word-based speech act classifiers usually perform better than prosody-based classifiers, but that a model combination of the two approaches can yield an improvement in classification accuracy.", "cite_spans": [ { "start": 138, "end": 159, "text": "Stolcke et al. (2000)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work.", "sec_num": "5.5.2" }, { "text": "For training of the question detection module, we used the manually annotated set of about 200,000 SWITCHBOARD speech acts 14 (SAs); 15 for training of the answer detection component, we used the eight English CALLHOME dialogues (8E-CH), which were manually annotated for Q-A pairs. Although we were aiming to detect all questions in the question detection module, the answer detection module focuses on Q-A pairs only: We exclude all questions from consideration that are not Yes-No-(YN) or Wh-questions (such as rhetorical or back-channel questions), as well as those that do not have an answer in the dialogue. Thus we employ only 68 pf the 83 questions marked in the 8E-CH data set for these evaluations. Table 13 provides the statistics concerning questions and answers for the 8E-CH subcorpus and shows that for a small but significant number of questions, the answer does not immediately follow the question speech act (delayed answers).", "cite_spans": [], "ref_spans": [ { "start": 709, "end": 717, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Corpus Statistics.", "sec_num": "5.5.3" }, { "text": "We used two different methods, both trained on SWITCHBOARD data: (1) a speech act tagger 16 and (2) a decision tree based on trigger word and part-of-speech information. Speech act tagger. The speech act tagger tags one speech act at a time and hence can make use only of speech act unigram information. Within a speech act, it uses a language model based on POS and the 500 most frequent word/POS pairs. It was trained on the aforementioned SWITCHBOARD speech act training set. It was not optimized for the task of question detection. Its typical run time for speech act classification is about 10 speech acts per second.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Question Detection.", "sec_num": "5.5.4" }, { "text": "Decision tree question classifier. The decision tree classifier (C4.5) uses the following set of features: (1) POS and trigger word information for the first and last five tokens of each speech act; 17 (2) speech act length, and (3) occurrence of POS bigrams. The set of trigger words is the same as for the sentence boundary detection module. The POS bigrams were designed to be most discriminative between question speech acts (q-SAs) and non-question speech acts (non-q-SAs). 
The bigrams were obtained as follows:", "cite_spans": [ { "start": 199, "end": 201, "text": "17", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Question Detection.", "sec_num": "5.5.4" }, { "text": "For a balanced set of q-SAs and non-q-SAs (about 9,000 SAs each): Count all the POS bigrams in positions 1 . . . 5 and (n \u2212 4) . . . n (using START and END for the first and last bigrams, respectively) and memorize position (beginning or end of SA) and type (q-SA vs. non-q-SA).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "(a) Add one to the count (to prevent division by zero).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For all bigrams:", "sec_num": "2." }, { "text": "Divide the q-SA count by the non-q-SA count. (c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(b)", "sec_num": null }, { "text": "If the ratio is smaller than one, invert it (ratio := 1/ratio).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(b)", "sec_num": null }, { "text": "Multiply the result of (c) by the sum of q-SA count and non-q-SA count. 18", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(d)", "sec_num": null }, { "text": "Extract the 100 bigrams with the highest value. Experiments and results. The question detection decision tree was trained on a set of about 20,000 speech acts from the SWITCHBOARD corpus. We first evaluated the speech act tagger and the decision tree classifier on the 8E-CH data set. Whereas in the later stage of answer detection, questions without answers and nonpropositional questions are ignored, at this point we are interested in the detection of all annotated questions in the corpus. This also reflects the fact that the training set contains all possible types of questions. Table 14 reports the results of the question detection experiments with the two classifiers used on the 8E-CH subcorpus. We note that whereas the decision tree is performing only slightly worse than the speech act tagger, its typical classification time is two orders of magnitude faster. Based on these observations, we decided to use the question detection decision tree in the Q-A linking component of the DIASUMM system.", "cite_spans": [], "ref_spans": [ { "start": 586, "end": 594, "text": "Table 14", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "After identifying which sentences are questions, the next step is to identify the answers to them. From the 8E-CH statistics of Table 13 we observe that for more than 75% of the YN-and Wh-questions, the answer is to be found in the first sentence of the speaker talking after the speaker uttering the question. In the remainder of cases, the majority of answers are in the second (instead of the first) sentence of the responding speaker. Further, the speaker who has posed a question usually utters no (or only very few) sentences after the question is asked and before the next speaker starts talking.", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 136, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "In addition to detecting sequential Q-A pairs, we also want to be able to detect simple embedded questions, as shown in this example of a brief clarification dialogue: Q 1 A: When are we meeting then? 
Q 2 B: You mean tomorrow?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "3 A: Yes. 4 B: At 4pm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "We devise the following heuristics to detect answers to question speech acts which have been previously identified:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "\u2022 If the first speaker change after the question occurs more than maxChg sentences after the question, the search is stopped and no Q-A pair is returned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "\u2022 Answer hypotheses are sought for maximally maxSeek sentences after the first speaker change following the question, but not over interruptions by any other speaker; that is, we check within a single speaker region (this is the stopping criterion for the following two heuristics). An exception occurs if there is an embedded question in the first single speaker region: In that case, we look at the next region where a speaker different from the initial Q-speaker is active. 19", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "\u2022 Answers have to be minimally minAns words long; if they are shorter, we add the next sentence to the current answer hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "\u2022 Even if the minimum answer length is reached, the answer can be optionally extended if at least one word in the answer matches a word from the question (one of two different stop lists (StopShort, StopLong) or no stop list is used to remove function words from consideration). 20", "cite_spans": [ { "start": 187, "end": 208, "text": "(StopShort, StopLong)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "We have these further restrictions for the case of embedded questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "1. If we detect a potential embedded Q-A pair, the answer to the surrounding question must immediately follow the answer to the embedded question (i.e., the region following the potential answer region of the embedded question-sentence 4 in our above example-must (1) not contain a question itself and (2) be from a different speaker than the surrounding question).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the Answers.", "sec_num": "5.5.5" }, { "text": "A crossover is prohibited; that is, we eliminate all pairs Q j , A l when a pair Q i , A k was already detected, with i < j < k < l (k, l being start indices of answer spans).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The output of the algorithm is a list of triples Q, A start , A end , where Q is the sentence ID of the question and A start the first and A end the last sentence of the answer. As mentioned above, we use only 68 of the 83 questions marked in the 8E-CH data set for these evaluations, since only these are YN-or Wh-questions that actually have answers in the dialogue. 
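The sketch below covers only the core sequential case of these heuristics (embedded questions and crossover filtering are omitted); the representation of sentences as (speaker, sentence ID, words) tuples and the default parameter values are hypothetical simplifications.

```python
# Sketch of the core answer-detection heuristics for a detected question:
# look after the first speaker change, stay within one speaker region, extend
# short answers, and optionally extend on word overlap with the question.
# (Embedded questions and crossover filtering are omitted here.)

def find_answer(sents, q_idx, max_chg=2, max_seek=4, min_ans=10, stop_words=frozenset()):
    q_speaker = sents[q_idx][0]
    q_words = {w for w in sents[q_idx][2] if w not in stop_words}

    # find the first speaker change after the question
    chg = next((i for i in range(q_idx + 1, len(sents)) if sents[i][0] != q_speaker), None)
    if chg is None or chg - q_idx - 1 >= max_chg:
        return None                       # speaker change too late: no Q-A pair

    a_speaker = sents[chg][0]
    start, end, length = chg, chg, 0
    for i in range(chg, min(chg + max_seek, len(sents))):
        if sents[i][0] != a_speaker:
            break                         # stay within a single speaker region
        end = i
        length += len(sents[i][2])
        overlap = q_words & set(sents[i][2])
        if length >= min_ans and not overlap:
            break                         # long enough and no lexical tie to the question
    return (sents[q_idx][1], sents[start][1], sents[end][1])   # (Q, A_start, A_end)

dialogue = [("A", 1, "when are we meeting then".split()),
            ("B", 2, "at four in the afternoon i think".split()),
            ("B", 3, "if that works".split()),
            ("A", 4, "okay".split())]
print(find_answer(dialogue, 0))   # -> (1, 2, 3)
```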
There are four possible outcomes for each triple:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "(1) irrelevant: a Q-A pair with an incorrectly hypothesized question (this is the fault of the question detection module, not of this heuristic); (2) missed: the answer was missed entirely; (3) completely correct: A end coincides with the correct answer sentence ID; and (4) correct range: the answer is contained in the interval [A start , A end ] but does not coincide with A end . For the calculation of precision, recall, and F 1 -score, we count classes (3) and (4) as correct and use the sum of all classes for the denominator of precision and the total number of Q-A pairs (68 in this development set) as the denominator of recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "To determine the best parameters, we varied them across a reasonable set of values and ran the answer detection script for all combinations of parameters. The best results (with respect to F 1 -score) using questions detected by the speech act tagger and the decision tree are reported in Table 15 . In the DIASUMM system, we use the following optimal parameter settings for the answer detection heuristics: maxChg = 2, maxSeek = 4, minAns = 10, sim = on, stop = no.", "cite_spans": [], "ref_spans": [ { "start": 289, "end": 297, "text": "Table 15", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Finally, we evaluated the performance of both the Q-detection module and the combined Q-A detection on all five subcorpora, using the decision tree for question detection; the results are reported in Table 16 . Except for the rather small NEWSHOUR Table 15 Q-A detection results using two different classifiers for question detection (68 Q-A pairs to be detected).", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 208, "text": "Table 16", "ref_id": "TABREF0" }, { "start": 248, "end": 256, "text": "Table 15", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "All hypothesized Q-A pairs 80 54 Correct [(3) and 4 Table 16 Performance comparison for Q-and Q-A detection (Q-detection with the decision tree question classifier). corpus (with fewer than 20 questions or Q-A pairs to identify), the typical Q-detection F 1 -score is around .6 and the Q-A detection F 1 -score around .5. In two cases, the Q-A detection performance is slightly better than the Q-detection performance. This can be explained by the fact that the answer detection algorithm prunes away a number of Q-hypotheses, reducing the space for potential Q-A hypotheses.", "cite_spans": [ { "start": 41, "end": 45, "text": "[(3)", "ref_id": null } ], "ref_spans": [ { "start": 52, "end": 60, "text": "Table 16", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "SA Tagger Decision Tree", "sec_num": null }, { "text": "When we use the Q-A detection module as part of the DIASUMM system, we want to ensure that (1) there are no Q-A pairs containing Q-sentences that are false starts and that (2) the initial part of an answer is not lost in case the disfluency detection component marks some indicative words as disfluencies. 
To satisfy the first constraint, we block Q-detection of sentences that have been previously classified as false starts; as for the second constraint, we create a list of indicative words (relevant for YN-questions) that are not to be removed by the summary generator if they appear in the beginning (leading five words) of answers. 21", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Q-A Detection within DIASUMM.", "sec_num": "5.5.6" }, { "text": "5.6 Sentence Ranking and Selection 5.6.1 Introduction. The sentence ranking and selection component is an implementation of the MMR algorithm (Carbonell, Geng, and Goldstein 1997) , applied to extracting the most relevant sentences from a topical segment of a dialogue. The component's output in isolation serves as the MMR baseline for the global system evaluation. Its purpose is to determine weights for terms and sentences, to rank the sentences according to their relevance within each topical segment of the dialogue, and finally to select the sentences for the summary output according to their rank, as well as to other criteria, such as question-answer linkages, established by previous components. The selected sentences are presented to the user in text order.", "cite_spans": [ { "start": 142, "end": 179, "text": "(Carbonell, Geng, and Goldstein 1997)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Q-A Detection within DIASUMM.", "sec_num": "5.5.6" }, { "text": "In addition to the tokenization rules for the global system (section 5.2), we apply a simple six-character truncation for stemming and use a stop word list to eliminate frequent noncontent words. In the experiments, we used the following five different stop word lists:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tokenization.", "sec_num": "5.6.2" }, { "text": "\u2022 the original SMART list (Salton 1971) (SMART-O)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tokenization.", "sec_num": "5.6.2" }, { "text": "\u2022 a manually edited stop list based on SMART (SMART-M)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tokenization.", "sec_num": "5.6.2" }, { "text": "\u2022 a stop list with all closed-class words from the POS tagger's lexicon (POS-O)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tokenization.", "sec_num": "5.6.2" }, { "text": "\u2022 a manually edited stop list based on the POS tagger's lexicon and frequent closed-class words in the CALLHOME training corpus (POS-M)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tokenization.", "sec_num": "5.6.2" }, { "text": "\u2022 an empty stop list (EMPTY)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tokenization.", "sec_num": "5.6.2" }, { "text": "Weighting. The basic idea for determining the most relevant sentences within a topical segment is as follows: First, we compute a vector of word weights for the segment tf q (including all stemmed non-stop words) and do the same for each sentence ( tf t ), then we compute the similarity between sentence and segment vectors for each sentence. That way, sentences that have many words in common with the segment vector are rewarded and receive a higher relevance weight. 
Whereas we compose the sentence vectors tf t using direct term frequency counts, the weights for segment terms are determined according to one of the three formulae in equation (1) (freq, smax, and log), inspired by Cornell University's SMART system (Salton 1971) :", "cite_spans": [ { "start": 721, "end": 734, "text": "(Salton 1971)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "tf i,s = f i,s or 0.5 + 0.5 f i,s f smax or 1 + log f i,s ,", "eq_num": "(1)" } ], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "where f i,s are the in-segment frequencies of a stem and f smax are maximal segment frequencies of any stem in the segment. Finally, we multiply an inverse document frequency (IDF) weight to tf s to obtain the segment vectors tf q , as shown in equations 2and 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "tf i,q = tf i,s IDF i,s", "eq_num": "(2)" } ], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "IDF i,s = 1 + log N seg i seg or N seg i seg .", "eq_num": "(3)" } ], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "IDF values are computed with respect to a collection of topical segments, either the current dialogue (DIALOGUE) or a set of dialogues (CORPUS). N seg is the total number of topical segments in the IDF corpus, and i seg is the number of segments in which the token i appears at least once. The effect of using IDF values is to boost those words that are (relatively) unique to any given segment over those that are more evenly distributed across the corpus. As stated above, the main algorithm is a version of the MMR algorithm (Carbonell, Geng, and Goldstein 1997; Carbonell and Goldstein 1998) , which emphasizes sentences that contain many highly weighted terms for the current segment (salience) and are sufficiently dissimilar to previously ranked sentences (diversity or antiredundancy). The MMR formula is given in equation 4:", "cite_spans": [ { "start": 528, "end": 565, "text": "(Carbonell, Geng, and Goldstein 1997;", "ref_id": "BIBREF7" }, { "start": 566, "end": 595, "text": "Carbonell and Goldstein 1998)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "nextsentence = arg max t nr,j (\u03bbsim 1 (query, t nr,j ) \u2212 (1 \u2212 \u03bb) max t r,k sim 2 (t nr ,j , t r,k )).", "eq_num": "(4)" } ], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "The MMR formula describes an iterative algorithm and states that the next sentence to be put in the ranked list will be taken from the sentences that have not yet been ranked (t nr ). This sentence is (1) maximally similar to a query and (2) maximally dissimilar to the sentences that have already been ranked (t r ). We use the topical segment word vector tf q as query vector. 
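To make the weighting and ranking scheme concrete, the following is a compact sketch of the log variant of equation (1), the segment-level IDF of equations (2) and (3), and the MMR loop of equation (4); it uses the six-character truncation stemming of section 5.6.2, cosine similarity for both sim_1 and sim_2, and a hypothetical lambda value, and it omits stop-word removal and the emphasis factors.

```python
# Sketch: log term weighting with segment-level IDF (equations (1)-(3)) and the
# MMR ranking loop (equation (4)). Sentence vectors use raw term frequencies;
# the segment (query) vector uses the 1 + log(f) variant multiplied by IDF.
import math
from collections import Counter

def stem(word):
    return word.lower()[:6]          # six-character truncation stemming

def tf_log(tokens):
    counts = Counter(stem(w) for w in tokens)
    return {t: 1.0 + math.log(c) for t, c in counts.items()}

def idf(segments):
    # segments: list of token lists; IDF_i = 1 + log(N_seg / i_seg)
    n_seg = len(segments)
    df = Counter(t for seg in segments for t in {stem(w) for w in seg})
    return {t: 1.0 + math.log(n_seg / df[t]) for t in df}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def mmr_rank(sentences, segment_tokens, idf_weights, lam=0.7):
    # query vector for the topical segment (equation (2))
    query = {t: w * idf_weights.get(t, 1.0) for t, w in tf_log(segment_tokens).items()}
    vectors = [Counter(stem(w) for w in s) for s in sentences]
    ranked, remaining = [], list(range(len(sentences)))
    while remaining:
        def mmr_score(i):
            redundancy = max((cosine(vectors[i], vectors[j]) for j in ranked), default=0.0)
            return lam * cosine(vectors[i], query) - (1.0 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        ranked.append(best)
        remaining.remove(best)
    return ranked                     # sentence indices in MMR ranking order

segment = [["the", "meeting", "is", "moved", "to", "friday"],
           ["friday", "works", "for", "everyone"],
           ["uh", "i", "think", "so"]]
all_tokens = [w for s in segment for w in s]
weights = idf([all_tokens])           # a one-segment "corpus", just for illustration
print(mmr_rank(segment, all_tokens, weights))
```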
The \u03bb parameter (0.0 \u2264 \u03bb \u2264 1.0) is used to trade off the influence of salience against that of redundancy. Both similarity metrics (sim 1 , sim 2 ) are inner vector products of stemmed-term frequencies (equations (5) and (6)). sim 1 can be normalized in different ways: (1) to yield a cosine vector product (division by product of vector lengths), (2) division by number of content words, 22 and (3) no normalization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "sim 1 = tf q tf t | tf q || tf t | or tf q tf t 1 + i tf i,t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "or tf q tf t (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sim 2 = tf t1 tf t2 | tf t1 || tf t2 |", "eq_num": "(6)" } ], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "Emphasis factors. Every sentence's similarity weight (sim 1 ) can be (de)emphasized, based on a number of its properties. We implemented optional emphasis factors for:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "\u2022 Lead emphasis: for the leading n% of a segment's sentences: sim 1 = sim 1 l, with l being the lead factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "\u2022 Q-A emphasis: for all sentences that belong to a question-answer pair: sim 1 = sim 1 q, with q being the Q-A emphasis factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "\u2022 False-start deemphasis: for all sentences that are false starts: sim 1 = sim 1 f , with f being the false-start factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "\u2022 Speaker emphasis: for each individual speaker s, an emphasis factor s e can be defined: sim 1 = sim 1 s e for all sentences of speaker s. 23 These parameters can serve to fine-tune the system for particular applications or user preferences. For example, if the false starts are deemphasized, they are less likely to trigger a question's being linked to them in the linking process. If questions and answers are emphasized, more of them will show up in the summary, increasing its coherence and readability. In a situation in which a particular speaker's statements are of higher interest than those of other speakers, his sentences can be emphasized, as well.", "cite_spans": [ { "start": 140, "end": 142, "text": "23", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "Since sim 2 is a cosine vector product and hence in [0,1], we have to normalize sim 1 to [0,1] as well to enable a proper application of the MMR formula. 
For this normalization of sim 1 , we divide each sim 1 score by the maximum of all sim 1 scores in a segment after initial computation and application of the various emphasis factors described here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term and Sentence", "sec_num": "5.6.3" }, { "text": "While generating the output summary from the MMR-ranked list of sentences, whenever a question or an answer is encountered (detected previously by the Q-A detection module), the corresponding answer/question is linked to it and moved up the relevance ranking list to immediately follow the current question/answer. If the question-answer pair consists of more than two sentences, the linkages are repeated until no further questions or answers can be added to the current linkage cluster. 5.6.5 Summary Types. DIASUMM can generate several different types of summaries, the two main versions being (1) the CLEAN summary, which is based on the output of all DIASUMM components (disfluency detection, sentence boundary detection, Q-A linking), and (2) the TRANS summary, in which all dialogue specific components are ignored (essentially, this is an MMR summary of the original dialogue transcript). For the purpose of the global system evaluation, we use only these two versions of summaries, as well as LEAD baseline summaries, where the summary is formed by extracting the first n words from a topical segment. 24 Furthermore, the system can generate phrasal summaries, which render the sentences in the same ranking order as the CLEAN summary but reduce the output to noun phrases and potentially other phrases, depending on the setting of parameters. 25 In Figure 2 we show an example of a set of LEAD, TRANS, CLEAN, and PHRASAL summaries. The set was generated from the CALLHOME transcript presented in section 2.", "cite_spans": [ { "start": 1111, "end": 1113, "text": "24", "ref_id": null }, { "start": 1353, "end": 1355, "text": "25", "ref_id": null } ], "ref_spans": [ { "start": 1359, "end": 1367, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Q-A Linking.", "sec_num": "5.6.4" }, { "text": "Tuning. This section describes how we arrive at an optimal parameter setting for each subcorpus (CALLHOME, NEWSHOUR, CROSSFIRE, GROUP MEETINGS). We want to establish an MMR baseline for the global system evaluations with which we can then compare the results of the entire DIASUMM system. Note that for all the tuning experiments described in this subsection, we did not make use of any other DIASUMM components, namely, disfluency detection, sentence boundary detection, and questionanswer linking. All experiments were based on the human gold standard with respect to topical segments. We used only the devtest set for the four subcorpora here (8E-CH = CALLHOME, DT-NH = NEWSHOUR, DT-XF = CROSSFIRE, and DT-MTG = GROUP MEETINGS).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System", "sec_num": "5.6.6" }, { "text": "Since the length of turns varies widely, one could argue that an easy way to increase performance for the MMR baseline (which does not use automatic sentence boundary detection) might be to split overly long turns evenly into shorter chunks. This was done by Valenza et al. (1999) , who experimented with lengths of 10-30 words per extract fragment. We add this option as an additional parameter to the MMR baseline. 
If the parameter is set to n words, turns with a length l \u2265 1.5n get cut into pieces of lengths n iteratively until the last remaining piece is l < 1.5n.", "cite_spans": [ { "start": 259, "end": 280, "text": "Valenza et al. (1999)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "System", "sec_num": "5.6.6" }, { "text": "Evaluation metric. To evaluate the performance of this component, we use the word-based evaluation metric described in section 6.2, which gives the highest scores to summaries containing words with the highest average relevance scores, as marked by human annotators. We then average these scores over all topical segment summaries of a particular subcorpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System", "sec_num": "5.6.6" }, { "text": "1 a: Oh 2 b: They didn't know he was going to get shot but it was at a peace rally so I mean it just worked out 3 b: I mean it was a good place for the poor guy to die I mean because it was you know right after the rally and everything was on film and everything [...] TRANS: 2 b: They didn't know he was going to get shot but it was at a peace rally so I mean it just worked out 3 b: I mean it was a good place for the poor guy to die I mean because it was you know right after the rally and everything was on film and everything ", "cite_spans": [ { "start": 263, "end": 268, "text": "[...]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "LEAD:", "sec_num": null }, { "text": "Note: The turn IDs are just indicating the relative position of the turns within the original text and do not always correspond to the turn numbers of the original or to the turn numbers of the other summaries. The . . . marks the position in those sentences where the length threshold for a summary was reached.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "of us were", "sec_num": null }, { "text": "Example summaries of 20% length: LEAD, TRANS, CLEAN and PHRASAL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Parameter tuning. The system tuning proceeded in three phases, in which we held the summary size constant to 15% and optimized the following set of parameters: Optimally tuned parameters for MMR baseline system (tuning on devtest set subcorpora). Lead factor: 1.0-5.0 (applied to first 20% of sentences) Table 17 shows the parameter settings that were determined to be optimal for the MMR baseline system (TRANS summaries).", "cite_spans": [], "ref_spans": [ { "start": 304, "end": 312, "text": "Table 17", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "The majority of the system components are implemented in Perl5, except for the C4.5 decision tree (Quinlan 1992) , the chunk parser (Ward 1991) , and the POS tagger (Brill 1994) , which were implemented in C by the respective authors. We measured the system runtime on a 300 MHz Sun Ultra60 dual-processor workstation with 1 GB main memory, summarizing all 23 dialogue excerpts from our corpus. The average runtime for the whole system, including all of its components except for the topic segmentation module, was 17.8 seconds, and for the sentence selection component alone 7.0 seconds (per-dialogue average). 
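Returning briefly to the turn-splitting option of section 5.6.6, a small sketch of the l >= 1.5n cutting rule is given below; the function name and token representation are hypothetical.

```python
# Sketch of the optional turn-splitting rule for the MMR baseline:
# while a turn is at least 1.5*n tokens long, cut off an n-token piece;
# the final remainder is therefore always shorter than 1.5*n tokens.
def split_turn(tokens, n):
    pieces = []
    while len(tokens) >= 1.5 * n:
        pieces.append(tokens[:n])
        tokens = tokens[n:]
    pieces.append(tokens)
    return pieces

turn = [f"w{i}" for i in range(34)]
print([len(p) for p in split_turn(turn, 10)])   # -> [10, 10, 14]
```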
The average ratio of system runtime to dialogue duration was 0.029 (2.9% of real speaking time).", "cite_spans": [ { "start": 98, "end": 112, "text": "(Quinlan 1992)", "ref_id": "BIBREF39" }, { "start": 132, "end": 143, "text": "(Ward 1991)", "ref_id": "BIBREF57" }, { "start": 165, "end": 177, "text": "(Brill 1994)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "System Performance", "sec_num": "5.7" }, { "text": "Traditionally, summarization systems have been evaluated in two major ways: (1) intrinsically, measuring the amount of the core information preserved from the original text (Kupiec, Pedersen, and Chen 1995; Teufel and Moens 1997) , and (2) extrinsically, measuring how much the summary can benefit in accomplishing another task (e.g., finding a document relevant to a query or classifying a document into a topical category) (Mani et al. 1998) .", "cite_spans": [ { "start": 173, "end": 206, "text": "(Kupiec, Pedersen, and Chen 1995;", "ref_id": "BIBREF26" }, { "start": 207, "end": 229, "text": "Teufel and Moens 1997)", "ref_id": "BIBREF54" }, { "start": 425, "end": 443, "text": "(Mani et al. 1998)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "6.1" }, { "text": "In this work, we focus on intrinsic evaluation exclusively. That is, we want to assess how well the summaries preserve the essential information contained in the original texts. As other studies have shown (Rath, Resnick, and Savage 1961; Marcu 1999) , the level of agreement between human annotators about which passages to choose to form a good summary is usually quite low. Our own findings, reported in section 4.2.4, support this in that the intercoder agreement, here measured on a word level, is rather low.", "cite_spans": [ { "start": 206, "end": 238, "text": "(Rath, Resnick, and Savage 1961;", "ref_id": "BIBREF40" }, { "start": 239, "end": 250, "text": "Marcu 1999)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "6.1" }, { "text": "We decided to minimize the bias that would result from selecting either a particular human annotator, or even the manually created gold standard, as a reference Table 18 Average summary accuracy scores: devtest set and eval set subcorpora on optimized parameters, comparing LEAD, MMR baseline, DIASUMM, and the human gold standard. Table 19 Best emphasis parameters for the DIASUMM system, trained on the devtest set.", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 169, "text": "Table 18", "ref_id": "TABREF0" }, { "start": 332, "end": 340, "text": "Table 19", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "6.1" }, { "text": "CALLHOME 0.5 1.0 2.0 NEWSHOUR 0.5 2.0 1.0 CROSSFIRE 0.5 1.0 1.0 GROUP MEETINGS 0.5 1.0 3.0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus False Start Q-A Lead Factor", "sec_num": null }, { "text": "Average summary accuracy scores for different system configurations for the four different subcorpora. subcorpora. Comparisons were made for each of the five summary sizes within each topical segment. For the CALLHOME and GROUP MEETINGS subcorpora, our DIASUMM system is significantly better than the MMR baseline (p < 0.01); for the two more formal subcorpora, NEWSHOUR and CROSSFIRE, the differences between the performance of the two systems are not significant. 
Except for on the NEWSHOUR subcorpus, both the MMR baseline and the DIASUMM system perform significantly better than the LEAD baseline. Table 20 shows the average performance of the following six system configurations, averaged over all topical segments and all summary sizes (5-25% length summaries; in configurations 3-5 below, components used are in addition to the core MMR summarizer):", "cite_spans": [], "ref_spans": [ { "start": 602, "end": 610, "text": "Table 20", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Table 20", "sec_num": null }, { "text": "1. LEAD: using the first n% of the words in a segment 2. MMR: the MMR baseline (tuned; see above)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "3. DFF-ONLY: using the disfluency detection components (POS tagger, false-start detection, repetition detection), but no sentence boundary detection or question-answer linking 4. SB-ONLY: using the sentence boundary detection module, but no other dialogue-specific modules 5. NO-QA: a combination of DFF-ONLY and SB-ONLY (all preprocessing components used except for question-answer linking) 6. DIASUMM: complete system with all components (all disfluency detection components, sentence boundary detection, and Q-A linking)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "We observe that in all subcorpora, except for CROSSFIRE, the addition of either the disfluency components or the sentence boundary component improves the summary accuracy over that of the MMR baseline. As we would expect, given the much higher frequency of disfluencies in the two informal subcorpora (CALLHOME, GROUP MEETINGS), the relative performance increase of DFF-ONLY over the MMR baseline is much higher here (about 10-15%) than for the two more formal subcorpora (5% and below). Looking at the performance increase of SB-ONLY, we find marked improvements over the MMR baseline for those two subcorpora that use the true original turn boundaries in the MMR baseline: GROUP MEETINGS and NEWSHOUR (>10%); for the two other subcorpora, the improvement is below 5%. Furthermore, the combination of the disfluency detection and sentence boundary detection components (NO-QA) improves the results over the configurations DFF-ONLY and SB-ONLY.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "The situation is much less uniform when we add the question-answer detection component (this then corresponds to the full DIASUMM system): In the CROSSFIRE corpus, we have the largest performance increase (we also have the highest relative frequency of question speech acts here). For the two informal corpora, the change is only minor; for NEWSHOUR, the performance decreases substantially. We showed in Zechner and Lavie (2001) , however, that in general, for dialogues with relatively frequent Q-A exchanges, the accuracy of a summary (informativeness) does not change significantly when the Q-A detection component is applied. 
On the other hand, the (local) coherence of the summary does increase significantly, but we cannot measure this increase with the evaluation criterion of summary accuracy used here.", "cite_spans": [ { "start": 405, "end": 429, "text": "Zechner and Lavie (2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "To conclude, we have shown that using dialogue-specific components, with the possible exception of the Q-A detection module, can help in creating more accurate summaries for more informal, casual, spontaneous dialogues. When more formal conversations (which may even be partially scripted), containing relatively few disfluencies, are involved, either a simple LEAD method or a standard MMR summarizer will be much harder to improve upon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "The problem of how to generate readable and concise summaries automatically for spoken dialogues of unrestricted domains involves many challenges that need to be addressed. Some of the research issues are similar or identical to those faced in summarizing written texts (such as topic segmentation, determining the most salient/relevant information, anaphora resolution, summary evaluation), but other additional dimensions are added on top of this list, including speech disfluency detection, sentence boundary detection, cross-speaker information linking, and coping with imperfect speech recognition. The line of argument of this article has been that whereas using a traditional approach for written text summarization (such as the MMR-based sentence selection component within DIASUMM) may be a good starting point, addressing the dialogue-specific issues is key for obtaining better summaries for informal genres.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Directions for Future Work", "sec_num": "7." }, { "text": "We decided to focus on the three problems of (1) speech disfluency detection, (2) sentence boundary detection, and (3) cross-speaker information linking and implemented trainable system components to address each of these issues. Both the evaluations of the individual components of our spoken-dialogue summarization system and the global evaluations as well have shown that we can successfully make use of the SWITCHBOARD corpus (LDC 1999b) to train a system that works well on two other genres of informal dialogues, CALLHOME and GROUP MEETINGS. We conjecture that the reasons why the DIASUMM system was not able to improve over the MMR baseline for the two other corpora, which are more formal, lies in their very nature of being of a quite different genre: the NEWSHOUR and CROSSFIRE corpora have longer turns and sentences, as well as fewer disfluencies. We would also conjecture that their sentence structures are more complex than what we typically find in the other corpora of more colloquial, spontaneous conversations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Directions for Future Work", "sec_num": "7." 
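The three problems above correspond to the preprocessing components that the configurations of section 6.4 switch on and off around the core extractive selector. The sketch below only illustrates that chaining; its heuristics are deliberately crude stand-ins for the trained DIASUMM components (a simple term-frequency score stands in for the MMR ranker, and Q-A linking is omitted), and every name in it is illustrative rather than taken from the actual implementation.

```python
import re
from collections import Counter

FILLED_PAUSES = {"uh", "um", "uh-huh", "mhm"}

def strip_disfluencies(text):
    # Crude stand-in for the disfluency components: drop filled pauses and
    # collapse immediate word repetitions ("the the" -> "the").
    words = [w for w in text.split()
             if w.lower().strip(".,?!") not in FILLED_PAUSES]
    out = []
    for w in words:
        if not out or out[-1].lower().strip(".,?!") != w.lower().strip(".,?!"):
            out.append(w)
    return " ".join(out)

def split_sentences(text):
    # Crude stand-in for the trained sentence boundary detector.
    return [s.strip() for s in re.split(r"(?<=[.?!])\s+", text) if s.strip()]

def summarize_segment(turns, word_budget,
                      remove_disfluencies=True,    # off in SB-ONLY
                      detect_boundaries=True):     # off in DFF-ONLY
    # Chain the preprocessing steps, then pick sentences by a simple
    # term-frequency score; the full system would additionally link the
    # answers of extracted questions (Q-A linking).
    sentences = []
    for speaker, text in turns:
        if remove_disfluencies:
            text = strip_disfluencies(text)
        parts = split_sentences(text) if detect_boundaries else [text]
        sentences.extend((speaker, p) for p in parts)

    freq = Counter(w.lower().strip(".,?!")
                   for _, s in sentences for w in s.split())
    ranked = sorted(sentences,
                    key=lambda sp: sum(freq[w.lower().strip(".,?!")]
                                       for w in sp[1].split()),
                    reverse=True)
    summary, used = [], 0
    for speaker, sent in ranked:
        if used >= word_budget:
            break
        summary.append(f"{speaker}: {sent}")
        used += len(sent.split())
    return summary

turns = [("A", "um i i think the the budget is uh way too small."),
         ("B", "maybe. we could cut the travel budget. do you agree?")]
print(summarize_segment(turns, word_budget=12))
```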
}, { "text": "Future work will have to address the issue of whether the availability of training data for more formal dialogues (in size and annotation style comparable to the SWITCHBOARD corpus, though) could lead to an improvement in performance on those data sets, as well, or if even then a standard written-text-based summarizer would be hard to improve upon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Directions for Future Work", "sec_num": "7." }, { "text": "Given the complexity of the task, we had to make a number of simplifying assumptions, most notably about the input data for our system: We use perfect transcripts by humans instead of ASR transcripts, which, for these genres, typically show word error rates (WERs) ranging from 15% to 35%. Previous related work (Valenza et al. 1999; Zechner and Waibel 2000b) demonstrated that the actual WERs in summaries generated from ASR output are usually substantially lower than the full-ASR-transcript WER and can further be reduced by taking acoustically derived confidence scores into account.", "cite_spans": [ { "start": 312, "end": 333, "text": "(Valenza et al. 1999;", "ref_id": "BIBREF54" }, { "start": 334, "end": 359, "text": "Zechner and Waibel 2000b)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Directions for Future Work", "sec_num": "7." }, { "text": "We further did not explore the potential improvements of components as well as of the system overall when prosodic information such as stress and pitch is added as an input feature. Past work in related fields (Shriberg et al. 1998; Shriberg et al. 2000) suggests that particularly for ASR input, noticeable improvements might be achievable when such input is provided.", "cite_spans": [ { "start": 210, "end": 232, "text": "(Shriberg et al. 1998;", "ref_id": "BIBREF48" }, { "start": 233, "end": 254, "text": "Shriberg et al. 2000)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Directions for Future Work", "sec_num": "7." }, { "text": "Although presegmentation of the input into topically coherent segments certainly is a useful step in summarization for any kind of texts (written or spoken), we have not addressed and discussed this issue in this article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Directions for Future Work", "sec_num": "7." }, { "text": "Finally, we think that there is more work needed in the area of automatically deriving discourse structures for spoken dialogues in unrestricted domains, even if the text spans covered might be only local (because of a lack of global discourse plans). We believe that a summarizer, in addition to knowing about the interactively constructed and coherent pieces of information (such as in question-answer pairs), could make good use of such structured information and be better guided in making its selections for summary generation. In addition, this discourse structure might aid modules that perform automatic anaphora detection and resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Directions for Future Work", "sec_num": "7." }, { "text": "We have motivated, implemented, and evaluated an approach for automatically creating extract summaries for open-domain spoken dialogues in informal and formal genres of multiparty conversations. 
Our dialogue summarization system DIASUMM uses trainable components to detect and remove speech disfluencies (making the output more readable and less noisy), to determine sentence boundaries (creating suitable text spans for summary generation), and to link cross-speaker information units (allowing for increased summary coherence).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "We used a corpus of 23 dialogue excerpts from four different genres (80 topical segments, about 47,000 words) for system development and evaluation and the disfluencyannotated SWITCHBOARD corpus (LDC 1999b) for training of the dialogue-specific components. Our corpus was annotated by six human coders for topical boundaries and relevant text spans for summaries. Additionally, we had annotations made for disfluencies, sentence boundaries, question speech acts, and the corresponding answers to those question speech acts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "In a global system evaluation we compared the MMR-based sentence selection component with the DIASUMM system using all of its components discussed in this article. The results showed that (1) both a baseline MMR system as well as DIASUMM create better summaries than a LEAD baseline (except for NEWSHOUR) and that (2) DIASUMM performs significantly better than the baseline MMR system for the informal dialogue corpora (CALLHOME and GROUP MEETINGS).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "Therefore, in some cases, we can find several turns of one speaker following each other. 2 Hence there can be \"missing\" turns (e.g., turn 37), in case they contain only noises and no actual words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Although other studies have found percentages lower than this figure, we included content-less categories such as discourse markers or rhetorical connectives, which are often not regarded as disfluencies per se.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the devtest set corpus for system development and tuning and set aside the eval set for the final global system evaluation. For the other three genres, two dialogue excerpts each were used for the devtest set, the remainder for the eval set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This fact may partially account for NEWSHOUR and CROSSFIRE turns being longer than CALLHOME and GROUP MEETING turns. 6 Naive in this context means that they were nonexperts in linguistics or discourse analysis. 7 The weights were set as follows: nucleus IUs: 3.0 if +-marked, 2.0 otherwise; satellite IUs: 1.0 if +-marked, 0.5 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A 30-gram is a passage of text containing 30 adjacent words. 9 Precision is the ratio of correctly matched items over all items (boundaries, marked words); recall is the ratio of correctly matched items over all items that need to be matched; and the F 1 -score combines precision (P) and recall (R) in the following way: F 1 = 2PR P+R . 
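The F1 definition at the end of the preceding footnote reads more clearly in standard notation; for example, P = 0.8 and R = 0.6 give F1 ≈ 0.69.

\[
F_1 = \frac{2PR}{P + R}
\]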
10 These computations were performed for those four (out of six) annotators who completed the entire corpus markup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The sole function of the GW tag is to label words that are considered to be parts of other words but were transcribed separately, such as: drug/GW testing/NN. 12 For a description of the POS tags used in that database seeSantorini (1990) andLDC (1999a).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "B:[...] How is the new person doing? q/ 204 A: Very very very very very well. /[...]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This decision tree uses a different type of encoding, but the same input features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In this work, the notions of speech acts and sentences can be considered equivalent. 15 From the Johns Hopkins University Large Vocabulary Continuous Speech Recognition (LVCSR) Summer Workshop 1997. Thanks to Klaus Ries for providing the data, which are also available from http://www.colorado.edu/ling/jurafsky/ws97/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Thanks to Klaus Ries for providing us with the software. 17 Shorter speech acts are padded with dummies. 18 Leaving out this step favors low-frequency, high-discriminative bigrams too much and causes a slight reduction in overall Q-detection performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This would be sentence 4 in the example above. 20 StopLong contains 571 words, StopShort only 89 words, most of which are auxiliary verbs and filler words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The current list comprises the following words: no,yes, yeah, yep, sure, uh-huh, mhm, nope.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To avoid division by zero, we add one to every sentence length. 23 Speaker emphasis is not used in our evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that LEAD summaries are to be distinguished from summaries in which lead emphasis is used, as described above. In the latter case, the segment-initial sentence weights are increased, whereas in the former case, we strictly extract the leading n words from a given segment. 25 To determine these constituents, we use the output of the chunk parser employed by the false start detection component.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Definition: 1 if summ s,i is contained in the summary, 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to Alex Waibel, Alon Lavie, Jaime Carbonell, Vibhu Mittal, Jade Goldstein, Klaus Ries, Lori Levin, and Marsal Gavald\u00e0 for many discussions, suggestions, and comments regarding this work. We also want to commend the corpus annotators for their efforts. Finally, we want to thank the four anonymous reviewers for their detailed feedback on a preliminary draft, which greatly helped improve this article. 
This work was performed while the author was affiliated with the Language Technologies Institute at Carnegie Mellon University and was supported in part by grants from the U.S. Department of Defense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "for automatic evaluation; instead, we weigh all annotations from all human coders equally. Intuitively, we want to reward summaries that contain a high number of words considered to be relevant by most annotators. We formalize this notion in the following subsection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "All evaluations are based on topically coherent segments from the dialogue excerpts of our corpus. As mentioned before, the segment boundaries were chosen from the human gold standard for the purpose of the global system evaluation.For each segment s, for each annotator a, and for each word position w i , we define a boolean word vector of annotations w s,a , each component w s,a,i being 1 if the word w i is part of a nucleus IU or a satellite IU for that annotator and segment, and 0 otherwise. We then sum over all annotators' annotation vectors and normalize them by the number of annotators per segment (A) to obtain the average relevance vector for segment s, r s :To obtain the summary accuracy score sa s,n for any segment summary with length n, we multiply the boolean summary vector summ s 26 by the average relevance vector r s , and then divide this product by the sum of the n highest scores within r s (maximum achievable score), rsort s being the vector r s sorted by relevance weight in descending order:It is easy to see that the summary accuracy score always is in the interval [0.0, 1.0].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "6.2" }, { "text": "Whereas section 5 was concerned with the design and evaluation of the individual system components, the goal here is to describe and analyze the quality of the global system, with all its components combined. In this section, we compare our DIASUMM system with the MMR baseline system, which operates without any dialogue-specific components, and with the LEAD baseline. We described the optimization and finetuning of the MMR system in subsection 5.6.6. The second column of Table 18 presents the average relevance scores for this MMR baseline, averaged over the five summary sizes of 5%, 10%, 15%, 20%, and 25% length, for the four devtest set and the four eval set subcorpora; the first column of this table shows the results for the LEAD baseline. We used the optimized baseline MMR parameters and varied only the emphasis parameters for (1) false starts, (2) lead factor, and (3) Q-A sentences, to optimize the CLEAN summaries further. (Again, for this step, we used only the devtest subcorpora.) For each corpus in the devtest set, we determined the optimal parameter settings and report the corresponding results also for the eval set subcorpora. Column 3 in Table 18 provides the results for this optimized DIASUMM system. Further, in column 4, we provide the summary accuracy averages for the human gold standard (nucleus IUs only, fixed-length summaries). 
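In equation form, the average relevance vector of section 6.2 is r_s = (1/A) Σ_a w_{s,a}, and the summary accuracy score is sa_{s,n} = (summ_s · r_s) / Σ_{i=1..n} rsort_{s,i}. A minimal sketch of this computation follows, with illustrative data and variable names that are not taken from the original implementation.

```python
import numpy as np

def average_relevance(annotation_vectors):
    # r_s: the A annotators' boolean word-level relevance vectors,
    # summed and normalized by A.
    return np.mean(np.asarray(annotation_vectors, dtype=float), axis=0)

def summary_accuracy(summary_vector, relevance):
    # sa_{s,n}: relevance mass covered by the n-word summary, divided by the
    # maximum mass achievable with n words (the n highest relevance weights).
    summ = np.asarray(summary_vector, dtype=float)
    n = int(summ.sum())
    achieved = float(summ @ relevance)
    best_possible = float(np.sort(relevance)[::-1][:n].sum())
    return achieved / best_possible if best_possible > 0 else 0.0

# Toy segment with six words, three annotators, and a three-word summary.
w = [[1, 1, 0, 0, 1, 0],
     [1, 0, 0, 0, 1, 0],
     [1, 1, 0, 1, 1, 0]]
r = average_relevance(w)        # [1.0, 0.67, 0.0, 0.33, 1.0, 0.0]
summ = [1, 1, 0, 0, 1, 0]       # the words selected for the summary
print(round(summary_accuracy(summ, r), 3))   # 1.0: the best possible pick
```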
Table 19 shows the best emphasis parameter combinations for the DIASUMM summaries used in these evaluations.We determined the statistical differences between the DIASUMM system and the two baselines for the eval set, using the Wilcoxon rank sum test for each of the four", "cite_spans": [], "ref_spans": [ { "start": 476, "end": 484, "text": "Table 18", "ref_id": null }, { "start": 1166, "end": 1174, "text": "Table 18", "ref_id": null }, { "start": 1366, "end": 1374, "text": "Table 19", "ref_id": null } ], "eq_spans": [], "section": "Global System Evaluation", "sec_num": "6.3" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Towards multilingual protocol generation for spontaneous speech dialogues", "authors": [ { "first": "Jan", "middle": [], "last": "Alexandersson", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Poller", "suffix": "" } ], "year": 1998, "venue": "Proceedings of INLG-98", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandersson, Jan and Peter Poller. 1998. Towards multilingual protocol generation for spontaneous speech dialogues. In Proceedings of INLG-98, Niagara-on-the-Lake, Canada, August.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Trainable, scalable summarization using robust NLP and machine learning", "authors": [ { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Mary", "middle": [ "Ellen" ], "last": "Okurowski", "suffix": "" }, { "first": "James", "middle": [], "last": "Gorlinsky", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aone, Chinatsu, Mary Ellen Okurowski, and James Gorlinsky. 1997. Trainable, scalable summarization using robust NLP and machine learning. In ACL/EACL-97", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Workshop on Intelligent and Scalable Text Summarization", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Workshop on Intelligent and Scalable Text Summarization, Madrid.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Pitch-based emphasis detection for segmenting speech", "authors": [ { "first": "Barry", "middle": [], "last": "Arons", "suffix": "" } ], "year": 1994, "venue": "Proceedings of ICSLP-94", "volume": "", "issue": "", "pages": "1931--1934", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arons, Barry. 1994. Pitch-based emphasis detection for segmenting speech. In Proceedings of ICSLP-94, pages 1931-1934.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "OCELOT: A system for summarizing Web pages", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "O", "middle": [], "last": "Vibhu", "suffix": "" }, { "first": "", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23rd ACM-SIGIR Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berger, Adam L. and Vibhu O. Mittal. 2000. OCELOT: A system for summarizing Web pages. 
In Proceedings of the 23rd ACM-SIGIR Conference.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Multimodal meeting tracker", "authors": [ { "first": "Michael", "middle": [], "last": "Bett", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Conference on Content-Based Multimedia Information Access", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bett, Michael, Ralph Gross, Hua Yu, Xiaojin Zhu, Yue Pan, Jie Yang, and Alex Waibel. 2000. Multimodal meeting tracker. In Proceedings of the Conference on Content-Based Multimedia Information Access (RIAO-2000), Paris, April.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Some advances in transformation-based part of speech tagging", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1994, "venue": "Proceedings of AAAI-94", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill, Eric. 1994. Some advances in transformation-based part of speech tagging. In Proceedings of AAAI-94.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automated query-relevant summarization and diversity-based reranking", "authors": [ { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Yibing", "middle": [], "last": "Geng", "suffix": "" }, { "first": "Jade", "middle": [], "last": "Goldstein", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the IJCAI-97 Workshop on AI and Digital Libraries", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carbonell, Jaime, Yibing Geng, and Jade Goldstein. 1997. Automated query-relevant summarization and diversity-based reranking. In Proceedings of the IJCAI-97 Workshop on AI and Digital Libraries, Nagoya, Japan.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The use of MMR, diversity-based reranking for reordering documents and producing summaries", "authors": [ { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Jade", "middle": [], "last": "Goldstein", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 21st ACM-SIGIR International Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carbonell, Jaime and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. 
In Proceedings of the 21st ACM-SIGIR International Conference on Research and Development in Information Retrieval, Melbourne, Australia.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The reliability of a dialogue structure coding scheme", "authors": [ { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Jacqueline", "middle": [ "C" ], "last": "Kowtko", "suffix": "" }, { "first": "Gwyneth", "middle": [], "last": "Doherty-Sneddon", "suffix": "" }, { "first": "Anne", "middle": [ "H" ], "last": "Anderson", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "1", "pages": "13--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carletta, Jean, Amy Isard, Stephen Isard, Jacqueline C. Kowtko, Gwyneth Doherty-Sneddon, and Anne H. Anderson. 1997. The reliability of a dialogue structure coding scheme. Computational Linguistics, 23(1):13-31.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The use of emphasis to automatically summarize a spoken discourse", "authors": [ { "first": "Francine", "middle": [ "R" ], "last": "Chen", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Withgott", "suffix": "" } ], "year": 1992, "venue": "Proceedings of ICASSP-92", "volume": "", "issue": "", "pages": "229--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Francine R. and Margaret Withgott. 1992. The use of emphasis to automatically summarize a spoken discourse. In Proceedings of ICASSP-92, pages 229-332.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A coefficient of agreement for nominal scales", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "Educational and Psychological Measurement", "volume": "20", "issue": "1", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Jacob. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Measuring agreement for multinomial data", "authors": [ { "first": "Mark", "middle": [], "last": "Davies", "suffix": "" }, { "first": "Joseph", "middle": [ "L" ], "last": "Fleiss", "suffix": "" } ], "year": 1982, "venue": "Biometrics", "volume": "38", "issue": "", "pages": "1047--1051", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davies, Mark and Joseph L. Fleiss. 1982. Measuring agreement for multinomial data. 
Biometrics, 38:1047-1051, December.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Spoken document retrieval: 1998 evaluation and investigation of new metrics", "authors": [ { "first": "John", "middle": [ "S" ], "last": "Garofolo", "suffix": "" }, { "first": "M", "middle": [], "last": "Ellen", "suffix": "" }, { "first": "Cedric", "middle": [ "G P" ], "last": "Voorhees", "suffix": "" }, { "first": "Vincent", "middle": [ "M" ], "last": "Auzanne", "suffix": "" }, { "first": ";", "middle": [], "last": "Stanford", "suffix": "" }, { "first": "April", "middle": [], "last": "Garofolo", "suffix": "" }, { "first": "John", "middle": [ "S" ], "last": "Ellen", "suffix": "" }, { "first": "M", "middle": [], "last": "Voorhees", "suffix": "" }, { "first": "Vincent", "middle": [ "M" ], "last": "Stanford", "suffix": "" }, { "first": "Karen", "middle": [ "Sparck" ], "last": "Jones", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ESCA Workshop: Accessing Information in Spoken Audio", "volume": "", "issue": "", "pages": "1997--2003", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garofolo, John S., Ellen M. Voorhees, Cedric G. P. Auzanne, and Vincent M. Stanford. 1999. Spoken document retrieval: 1998 evaluation and investigation of new metrics. In Proceedings of the ESCA Workshop: Accessing Information in Spoken Audio, pages 1-7, Cambridge, UK, April. Garofolo, John S., Ellen M. Voorhees, Vincent M. Stanford, and Karen Sparck Jones. 1997. TREC-6 1997 spoken document retrieval track overview and results. In Proceedings of the 1997 TREC-6", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "High performance segmentation of spontaneous speech using part of speech and trigger word information", "authors": [ { "first": "Marsal", "middle": [], "last": "Gavald\u00e0", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Zechner", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Aist", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the fifth ANLP Conference", "volume": "", "issue": "", "pages": "12--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gavald\u00e0, Marsal, Klaus Zechner, and Gregory Aist. 1997. High performance segmentation of spontaneous speech using part of speech and trigger word information. In Proceedings of the fifth ANLP Conference, Washington, DC, pages 12-15.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "SWITCHBOARD: Telephone speech corpus for research and development", "authors": [ { "first": "J", "middle": [ "J" ], "last": "Godfrey", "suffix": "" }, { "first": "E", "middle": [ "C" ], "last": "Holliman", "suffix": "" }, { "first": "J", "middle": [], "last": "Mcdaniel", "suffix": "" } ], "year": 1992, "venue": "Proceedings of ICASSP-92", "volume": "1", "issue": "", "pages": "517--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Godfrey, J. J., E. C. Holliman, and J. McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and development. In Proceedings of ICASSP-92, volume 1, pages 517-520.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Attention, intentions, and the structure of discourse", "authors": [ { "first": "Barbara", "middle": [ "J" ], "last": "Grosz", "suffix": "" }, { "first": "Candace", "middle": [ "L" ], "last": "Sidner", "suffix": "" } ], "year": 1986, "venue": "Computational Linguistics", "volume": "12", "issue": "3", "pages": "175--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grosz, Barbara J. and Candace L. Sidner. 1986. 
Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "TextTiling: Segmenting text into multi-paragraph subtopic passages", "authors": [ { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "1", "pages": "33--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hearst, Marti A. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Speech repairs, intonational phrases, and discourse markers: Modeling speakers' utterances in spoken dialogue", "authors": [ { "first": "Peter", "middle": [ "A" ], "last": "Heeman", "suffix": "" }, { "first": "James", "middle": [ "F" ], "last": "Allen", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "4", "pages": "527--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeman, Peter A. and James F. Allen. 1999. Speech repairs, intonational phrases, and discourse markers: Modeling speakers' utterances in spoken dialogue. Computational Linguistics, 25(4):527-571.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Finding information in audio: A new paradigm for audio browsing/retrieval", "authors": [ { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Whittaker", "suffix": "" }, { "first": "Don", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Singhal", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the ESCA Workshop: Accessing Information in Spoken Audio", "volume": "", "issue": "", "pages": "117--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hirschberg, Julia, Steve Whittaker, Don Hindle, Fernando Pereira, and Amit Singhal. 1999. Finding information in audio: A new paradigm for audio browsing/retrieval. In Proceedings of the ESCA Workshop: Accessing Information in Spoken Audio, pages 117-122, Cambridge, UK, April.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Automatic speech summarization based on word significance and linguistic likelihood", "authors": [ { "first": "Chiori", "middle": [], "last": "Hori", "suffix": "" }, { "first": "Sadaoki", "middle": [], "last": "Furui", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ICASSP-00", "volume": "30", "issue": "", "pages": "1579--1582", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hori, Chiori and Sadaoki Furui. 2000. Automatic speech summarization based on word significance and linguistic likelihood. In Proceedings of ICASSP-00, pages 1579-1582, Istanbul, Turkey, June. Jurafsky, Daniel, Rebecca Bates, Noah Coccaro, Rachel Martin, Marie Meteer, Klaus Ries, Elizabeth Shriberg, Andreas Stolcke, Paul Taylor, and Carol Van Ess-Dykema. 1998. SwitchBoard discourse language modeling project: Final report. 
Research Note 30, Center for Language and Speech Processing, Johns Hopkins University, Baltimore, MD.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Coping with aboutness complexity in information extraction from spoken dialogues", "authors": [ { "first": "Megumi", "middle": [], "last": "Kameyama", "suffix": "" }, { "first": "I", "middle": [], "last": "Arima", "suffix": "" } ], "year": 1994, "venue": "Proceedings of ICSLP 94", "volume": "", "issue": "", "pages": "681--684", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kameyama, Megumi, and I. Arima. 1994. Coping with aboutness complexity in information extraction from spoken dialogues. In Proceedings of ICSLP 94, pages 87-90, Yokohama, Japan. Kameyama, Megumi, Goh Kawai, and Isao Arima. 1996. A real-time system for summarizing human-human spontaneous spoken dialogues. In Proceedings of ICSLP-96, pages 681-684.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Statistics-based summarization-Step one: Sentence compression", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 17th National Conference of the AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knight, Kevin and Daniel Marcu. 2000. Statistics-based summarization-Step one: Sentence compression. In Proceedings of the 17th National Conference of the AAAI.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Transcription and summarization of voicemail speech", "authors": [ { "first": "Konstantinos", "middle": [], "last": "Koumpis", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Renals", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ICSLP-00", "volume": "", "issue": "", "pages": "688--691", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koumpis, Konstantinos and Steve Renals. 2000. Transcription and summarization of voicemail speech. In Proceedings of ICSLP-00, pages 688-691, Beijing, China, October.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Content Analysis", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, Klaus. 1980. Content Analysis. Sage, Beverly Hills, CA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A trainable document summarizer", "authors": [ { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" }, { "first": "J", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "F", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 18th ACM-SIGIR Conference", "volume": "", "issue": "", "pages": "68--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kupiec, J., J. Pedersen, and F. Chen. 1995. A trainable document summarizer. 
In Proceedings of the 18th ACM-SIGIR Conference, pages 68-73.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Janus III: Speech-to-speech translation in multiple languages", "authors": [ { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Levin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Finke", "suffix": "" }, { "first": "Donna", "middle": [], "last": "Gates", "suffix": "" }, { "first": "Marsal", "middle": [], "last": "Gavald\u00e0", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zeppenfeld", "suffix": "" }, { "first": "Puming", "middle": [], "last": "Zhan", "suffix": "" } ], "year": 1997, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lavie, Alon, Alex Waibel, Lori Levin, Michael Finke, Donna Gates, Marsal Gavald\u00e0, Torsten Zeppenfeld, and Puming Zhan. 1997. Janus III: Speech-to-speech translation in multiple languages. In IEEE International Conference on Acoustics, Speech and Signal Processing, Munich.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Tagging of speech acts and dialogue games in Spanish call home", "authors": [ { "first": "Lori", "middle": [], "last": "Levin", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Ries", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the ACL-99 Workshop on Discourse Tagging", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levin, Lori, Klaus Ries, Ann Thym\u00e9-Gobbel, and Alon Lavie. 1999. Tagging of speech acts and dialogue games in Spanish call home. In Proceedings of the ACL-99 Workshop on Discourse Tagging, College Park, MD.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Linguistic Data Consortium (LDC). 1996. CallHome and CallFriend LVCSR databases", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linguistic Data Consortium (LDC). 1996. CallHome and CallFriend LVCSR databases.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Addendum to the part-of-speech tagging guidelines for the Penn Treebank project (Modifications for the SwitchBoard corpus)", "authors": [], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linguistic Data Consortium (LDC). 1999a. Addendum to the part-of-speech tagging guidelines for the Penn Treebank project (Modifications for the SwitchBoard corpus). LDC CD-ROM LDC99T42.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Treebank-3: Databases of disfluency annotated Switchboard transcripts", "authors": [], "year": 1999, "venue": "LDC CD-ROM LDC99T42", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linguistic Data Consortium (LDC). 1999b. Treebank-3: Databases of disfluency annotated Switchboard transcripts. 
LDC CD-ROM LDC99T42.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The TIPSTER SUMMAC text summarization evaluation", "authors": [ { "first": "Inderjeet", "middle": [], "last": "Mani", "suffix": "" }, { "first": "David", "middle": [], "last": "House", "suffix": "" }, { "first": "Gary", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Obrst", "suffix": "" }, { "first": "Therese", "middle": [], "last": "Firmin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Chrzanowski", "suffix": "" }, { "first": "Beth", "middle": [], "last": "Sundheim", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mani, Inderjeet, David House, Gary Klein, Lynette Hirschman, Leo Obrst, Therese Firmin, Michael Chrzanowski, and Beth Sundheim. 1998. The TIPSTER SUMMAC text summarization evaluation. Technical Report MTR 98W0000138, Mitre Corporation, October 1998.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Advances in Automatic Text Summarization", "authors": [ { "first": "Inderjeet", "middle": [], "last": "Mani", "suffix": "" }, { "first": "Mark", "middle": [ "T" ], "last": "Maybury", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mani, Inderjeet and Mark T. Maybury, editors. 1999. Advances in Automatic Text Summarization. MIT Press, Cambridge.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Discourse trees are good indicators of importance in text", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 1999, "venue": "Advances in Automatic Text Summarization", "volume": "", "issue": "", "pages": "123--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcu, Daniel. 1999. Discourse trees are good indicators of importance in text. In I. Mani and M. T. Maybury, editors, Advances in Automatic Text Summarization. MIT Press, Cambridge, pages 123-136.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Dysfluency annotation stylebook for the Switchboard corpus", "authors": [ { "first": "Marie", "middle": [], "last": "Meteer", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Macintyre", "suffix": "" }, { "first": "Rukmini", "middle": [], "last": "Iyer", "suffix": "" } ], "year": 1995, "venue": "Linguistic Data Consortium (LDC) CD-ROM LDC99T42", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meteer, Marie, Ann Taylor, Robert MacIntyre, and Rukmini Iyer. 1995. Dysfluency annotation stylebook for the Switchboard corpus. Linguistic Data Consortium (LDC) CD-ROM LDC99T42.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A full-text retrieval system with a dynamic abstract generation function", "authors": [ { "first": "Seiji", "middle": [], "last": "Miike", "suffix": "" }, { "first": "Etuso", "middle": [], "last": "Itoh", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Onon", "suffix": "" }, { "first": "Kazuo", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 17th ACM-SIGIR Conference", "volume": "", "issue": "", "pages": "318--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miike, Seiji, Etuso Itoh, Kenji Onon, and Kazuo Sumita. 
1994. A full-text retrieval system with a dynamic abstract generation function. In Proceedings of the 17th ACM-SIGIR Conference, pages 318- 327.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A corpus-based study of repair cues in spontaneous speech", "authors": [ { "first": "Christine", "middle": [ "H" ], "last": "Nakatani", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 1994, "venue": "Journal of the Acoustic Society of America", "volume": "95", "issue": "3", "pages": "1603--1616", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nakatani, Christine H. and Julia Hirschberg. 1994. A corpus-based study of repair cues in spontaneous speech. Journal of the Acoustic Society of America, 95(3):1603-1616.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Discourse segmentation by human and automated means", "authors": [ { "first": "Rebecca", "middle": [ "J" ], "last": "Passonneau", "suffix": "" }, { "first": "Diane", "middle": [ "J" ], "last": "Litman", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "1", "pages": "103--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Passonneau, Rebecca J. and Diane J. Litman. 1997. Discourse segmentation by human and automated means. Computational Linguistics, 23(1):103-139.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "C4.5: Programs for Machine Learning", "authors": [ { "first": "J", "middle": [], "last": "Quinlan", "suffix": "" }, { "first": "", "middle": [], "last": "Ross", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinlan, J. Ross. 1992. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "The formation of abstracts by the selection of sentences", "authors": [ { "first": "G", "middle": [ "J" ], "last": "Rath", "suffix": "" }, { "first": "A", "middle": [], "last": "Resnick", "suffix": "" }, { "first": "T", "middle": [ "R" ], "last": "Savage", "suffix": "" } ], "year": 1961, "venue": "American Documentation", "volume": "12", "issue": "2", "pages": "139--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rath, G. J., A. Resnick, and T. R. Savage. 1961. The formation of abstracts by the selection of sentences. American Documentation, 12(2):139-143.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Text condensation as knowledge base abstraction", "authors": [ { "first": "U", "middle": [], "last": "Reimer", "suffix": "" }, { "first": "U", "middle": [], "last": "Hahn", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the fourth Conference on Artificial Intelligence Applications", "volume": "", "issue": "", "pages": "338--344", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reimer, U. and U. Hahn. 1988. Text condensation as knowledge base abstraction. 
In Proceedings of the fourth Conference on Artificial Intelligence Applications, pages 338-344, San Diego.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Summarizing multilingual spoken negotiation dialogues", "authors": [ { "first": "Norbert", "middle": [], "last": "Reithinger", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Kipp", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Engel", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "310--317", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reithinger, Norbert, Michael Kipp, Ralf Engel, and Jan Alexandersson. 2000. Summarizing multilingual spoken negotiation dialogues. In Proceedings of the 38th Conference of the Association for Computational Linguistics, pages 310-317, Hong Kong, China, October.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Shallow discourse genre annotation in CALLHOME Spanish", "authors": [ { "first": "Klaus", "middle": [], "last": "Ries", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Levin", "suffix": "" }, { "first": "Liza", "middle": [], "last": "Valle", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Second Conference on Language Resources and Evaluation (LREC-2000)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ries, Klaus, Lori Levin, Liza Valle, Alon Lavie, and Alex Waibel. 2000. Shallow discourse genre annotation in CALLHOME Spanish. In Proceedings of the Second Conference on Language Resources and Evaluation (LREC-2000), Athens, May/June.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "The Communicative Value of Filled Pauses in Spontaneous Speech", "authors": [ { "first": "Ralph", "middle": [], "last": "Rose", "suffix": "" }, { "first": "", "middle": [], "last": "Leon", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rose, Ralph Leon. 1998. The Communicative Value of Filled Pauses in Spontaneous Speech. Ph.D. thesis, University of Birmingham, Birmingham, UK.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "The SMART Retrieval System-Experiments in Automatic Text Processing", "authors": [], "year": 1971, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salton, Gerard, editor. 1971. The SMART Retrieval System-Experiments in Automatic Text Processing. Prentice Hall, Englewood Cliffs, NJ.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Part-of-Speech Tagging guidelines for the Penn Treebank project", "authors": [ { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1990, "venue": "Linguistic Data Consortium (LDC) CD-ROM LDC99T42", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santorini, Beatrice. 1990. Part-of-Speech Tagging guidelines for the Penn Treebank project. 
Linguistic Data Consortium (LDC) CD-ROM LDC99T42.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Preliminaries to a Theory of Speech Disfluencies", "authors": [ { "first": "Elizabeth", "middle": [ "E" ], "last": "Shriberg", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shriberg, Elizabeth E. 1994. Preliminaries to a Theory of Speech Disfluencies. Ph.D. thesis, University of Berkeley, Berkeley.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Can prosody aid the automatic classification of dialog acts in conversational speech?", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Bates", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Ries", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Meteer", "suffix": "" }, { "first": "Carol", "middle": [], "last": "Van Ess-Dykema", "suffix": "" } ], "year": 1998, "venue": "Language and Speech", "volume": "41", "issue": "3-4", "pages": "439--487", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shriberg, Elizabeth, Rebecca Bates, Andreas Stolcke, Paul Taylor, Daniel Jurafsky, Klaus Ries, Noah Coccaro, Rachel Martin, Marie Meteer, and Carol Van Ess-Dykema. 1998. Can prosody aid the automatic classification of dialog acts in conversational speech? Language and Speech, 41(3-4):439-487.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Prosody-based automatic segmentation of speech into sentences and topics", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "G\u00f6khan", "middle": [], "last": "T\u00fcr", "suffix": "" } ], "year": 2000, "venue": "Speech Communication", "volume": "32", "issue": "1-2", "pages": "127--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shriberg, Elizabeth, Andreas Stolcke, Dilek Hakkani-T\u00fcr, and G\u00f6khan T\u00fcr. 2000. Prosody-based automatic segmentation of speech into sentences and topics. Speech Communication, 32(1-2):127-154.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "A discourse analysis approach to structured speech", "authors": [ { "first": "Lisa", "middle": [ "J" ], "last": "Stifelman", "suffix": "" } ], "year": 1995, "venue": "AAAI-95 Spring Symposium on Empirical Methods in Discourse Interpretation and Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stifelman, Lisa J. 1995. A discourse analysis approach to structured speech. 
In AAAI-95 Spring Symposium on Empirical Methods in Discourse Interpretation and Generation, Stanford, March.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Ries", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Bates", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Carol", "middle": [], "last": "Van Ess-Dykema", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Meteer", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "3", "pages": "339--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, Andreas, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339-373.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Automatic linguistic segmentation of conversational speech", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ICSLP-96", "volume": "", "issue": "", "pages": "1005--1008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, Andreas and Elizabeth Shriberg. 1996. Automatic linguistic segmentation of conversational speech. In Proceedings of ICSLP-96, pages 1005-1008.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Automatic detection of sentence boundaries and disfluencies based on recognized words", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Bates", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani", "suffix": "" }, { "first": "Madeleine", "middle": [], "last": "Plauche", "suffix": "" }, { "first": "G\u00f6khan", "middle": [], "last": "T\u00fcr", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Lu", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ICSLP-98", "volume": "5", "issue": "", "pages": "2247--2250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, Andreas, Elizabeth Shriberg, Rebecca Bates, Mari Ostendorf, Dilek Hakkani, Madeleine Plauche, G\u00f6khan T\u00fcr, and Yu Lu. 1998. Automatic detection of sentence boundaries and disfluencies based on recognized words. 
In Proceedings of ICSLP-98, volume 5, pages 2247-2250, Sydney, December.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Summarisation of spoken audio through information extraction", "authors": [ { "first": "Simone", "middle": [], "last": "Teufel", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Moens", "suffix": "" }, { "first": ";", "middle": [], "last": "Madrid", "suffix": "" }, { "first": "", "middle": [], "last": "Valenza", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Robin", "suffix": "" }, { "first": "Marianne", "middle": [], "last": "Robinson", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Hickey", "suffix": "" }, { "first": "", "middle": [], "last": "Tucker", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ESCA Workshop: Accessing Information in Spoken Audio", "volume": "", "issue": "", "pages": "111--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teufel, Simone and Marc Moens. 1997. Sentence extraction as a classification task. In ACL/EACL-97 Workshop on Intelligent and Scalable Text Summarization, Madrid. Valenza, Robin, Tony Robinson, Marianne Hickey, and Roger Tucker. 1999. Summarisation of spoken audio through information extraction. In Proceedings of the ESCA Workshop: Accessing Information in Spoken Audio, pages 111-116, Cambridge, UK, April.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Verbmobil-Translation of face-to-face dialogs", "authors": [ { "first": "Wolfgang", "middle": [], "last": "Wahlster", "suffix": "" } ], "year": 1993, "venue": "Proceedings of MT Summit IV", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wahlster, Wolfgang. 1993. Verbmobil-Translation of face-to-face dialogs. In Proceedings of MT Summit IV, Kobe, Japan.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Meeting browser: Tracking and summarizing meetings", "authors": [ { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bett", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Finke", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the DARPA Broadcast News Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Waibel, Alex, Michael Bett, and Michael Finke. 1998. Meeting browser: Tracking and summarizing meetings. In Proceedings of the DARPA Broadcast News Workshop.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Understanding spontaneous speech: The PHOENIX system", "authors": [ { "first": "Wayne", "middle": [], "last": "Ward", "suffix": "" } ], "year": 1991, "venue": "Proceedings of ICASSP-91", "volume": "", "issue": "", "pages": "365--367", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ward, Wayne. 1991. Understanding spontaneous speech: The PHOENIX system. 
In Proceedings of ICASSP-91, pages 365-367.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Increasing the coherence of spoken dialogue summaries by cross-speaker information linking", "authors": [ { "first": "Steve", "middle": [], "last": "Whittaker", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" }, { "first": "John", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Don", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Singhal", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 22nd ACM-SIGIR International Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "22--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Whittaker, Steve, Julia Hirschberg, John Choi, Don Hindle, Fernando Pereira, and Amit Singhal. 1999. SCAN: Designing and evaluating user interfaces to support retrieval from speech archives. In Proceedings of the 22nd ACM-SIGIR International Conference on Research and Development in Information Retrieval, pages 26-33, Berkeley, August. Zechner, Klaus and Alon Lavie. 2001. Increasing the coherence of spoken dialogue summaries by cross-speaker information linking. In Proceedings of the NAACL-01 Workshop on Automatic Summarization, pages 22-31, Pittsburgh, June.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "DIASUMM: Flexible summarization of spontaneous dialogues in unrestricted domains", "authors": [ { "first": "Klaus", "middle": [], "last": "Zechner", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2000, "venue": "Proceedings of COLING-2000", "volume": "", "issue": "", "pages": "968--974", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zechner, Klaus and Alex Waibel. 2000a. DIASUMM: Flexible summarization of spontaneous dialogues in unrestricted domains. In Proceedings of COLING-2000, pages 968-974, Saarbr\u00fccken, Germany, July/August.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Minimizing word error rate in textual summaries of spoken language", "authors": [ { "first": "Klaus", "middle": [], "last": "Zechner", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-2000)", "volume": "", "issue": "", "pages": "186--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zechner, Klaus and Alex Waibel. 2000b. Minimizing word error rate in textual summaries of spoken language. In Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-2000), pages 186-193, Seattle, April/May.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
Data Set | 8E-CH | 4E-CH | NHOUR | XFIRE | G-MTG
Formal/informal | informal | informal | formal | formal | informal
Topics predetermined | no | no | yes | yes | yes
Dialogue excerpts (total) | 8 | 4 | 3 | 4 | 4
Topical segments (total) | 28 | 23 | 8 | 14 | 7
Different speakers | 2.1 | 2 | 2 | 6 | 7.5
Turns | 242 | 276 | 25 | 96 | 140
Sentences | 280 | 366 | 101 | 281 | 304
Sentences per turn | 1.2 | 1.3 | 4.1 | 2.9 | 2.2
Questions (in %) | 3.7 | 6.4 | 6.3 | 9.8 | 4.0
False starts (in %) | 12.1 | 11.0 | 2.0 | 7.2 | 13.9
Words | 1685 | 1905 | 1224 | 3165 | 2355
Words per sentence | 6.0 | 5.2 | 12.1 | 11.3 | 7.7
Disfluent (in %) | 16.0 | 16.3 | 5.1 | 4.2 | 13.2
Disfluencies | 222 | 259 | 48 | 95 | 266
Disfluencies per sentence | 0.79 | 0.71 | 0.48 | 0.34 | 0.87
Empty coordinating conjunctions (in %) | 30.3 | 30.4 | 64.8 | 50.7 | 24.3
Lexicalized filled pauses (in %) | 18.8 | 21.0 | 17.2 | 23.5 | 13.9
Editing terms (in %) | 3.6 | 1.6 | 3.4 | 5.7 | 3.3
Nonlexicalized filled pauses (in %) | 20.8 | 29.9 | 0.7 | 2.3 | 29.5
Repairs (in %) | 26.6 | 17.1 | 13.8 | 17.8 | 29.0
", "num": null, "type_str": "table", "html": null, "text": "Data characteristics for the corpus (average over dialogues). 8E-CH, 4E-CH: English CallHome; NHOUR: NewsHour; XFIRE: CrossFire; G-MTG: Group Meetings." }, "TABREF1": { "content": "
Annotator
", "num": null, "type_str": "table", "html": null, "text": "Nuclei and satellites: Length in tokens and relative frequency (in % of all tokens)." }, "TABREF6": { "content": "
Description | Count | Tag | Precision | Recall | F1
Empty coordinating conjunctions | 5,990 | CO | 0.84 | 0.93 | 0.88
Lexicalized filled pauses | 5,787 | DM | 0.95 | 0.90 | 0.93
Editing terms | 1,004 | ET | 0.98 | 0.94 | 0.96
Nonlexicalized filled pauses | 12,926 | UH | 0.98 | 0.98 | 0.98
Table 6
POS tagging accuracy on five subcorpora (evaluated on 500-word samples).
| 8E-CH | 4E-CH | NHOUR | XFIRE | G-MTG
Known words | 92.8 | 90.6 | 92.7 | 90.6 | 93.2
Unknown words (total) | 48.0 (25) | 44.4 (9) | 69.6 (23) | 86.4 (22) | 92.6 (27)
Overall | 90.6 | 89.8 | 91.6 | 90.4 | 93.2
", "num": null, "type_str": "table", "html": null, "text": "Precision, recall and F 1 -scores of the four disfluency tag categories for the SWITCHBOARD test set." }, "TABREF9": { "content": "
With Interturn Pause Duration? | Yes | Yes | No | No
With Turn Boundary Info? | Yes | No | Yes | No
Training set | .904 | .903 | .900 | .884
Test set | .887 | .884 | .884 | .825
", "num": null, "type_str": "table", "html": null, "text": "Sentence boundary detection accuracy (F 1 -score)." }, "TABREF11": { "content": "
Sentences | 2,211
Wh-questions total | 20
. . . With immediate answers | 15 (75%)
YN-questions total | 48
. . . With immediate answers | 38 (79%)
Qs excluded for Q-A detection | 15
Questions total | 83 (3.75%)
", "num": null, "type_str": "table", "html": null, "text": "Frequency of different types of questions in the 8E-CH data set." }, "TABREF12": { "content": "
| SA Tagger | Decision Tree
Overall error | 3.2% | 4.7%
Precision | .57 | .63
Recall | .61 | .51
F1 | .59 | .56
Typical classification time (SAs/sec) | 10 | 1,000
", "num": null, "type_str": "table", "html": null, "text": "Question detection on the 8E-CH corpus using two different classifiers." }, "TABREF15": { "content": "
11 b: Him [...]
CLEAN:
7 b: We just finished the thirty days mourning for him now
it's everybody's still in shock it's terrible what's
going on over here
31 b: What's the reaction in america really do people care [...]
34 a: Most I don't know what I mean like the jewish community
a lot all of us were very upset
PHRASAL:
4 b
", "num": null, "type_str": "table", "html": null, "text": ": it just worked ... it was a good place for the poor guy to die ... it was [...] 7 b: we just finished the thirty days mourning for him ... it's ... everybody's ... in shock it's ... going ... 31 b: 's the reaction in america ... do people care ... 34 a: i don't know ... mean like the jewish community a lot ..." }, "TABREF17": { "content": "", "num": null, "type_str": "table", "html": null, "text": "" }, "TABREF19": { "content": "
8E-CH | 0.463 | 0.545 | 0.597 | 0.709 (13.1)
DT-NH | 0.386 | 0.637 | 0.554 | 0.791 (20.9)
DT-XF | 0.516 | 0.595 | 0.541 | 0.764 (11.4)
DT-MTG | 0.488 | 0.594 | 0.606 | 0.705 (14.9)
4E-CH | 0.438 | 0.526 | 0.614 | 0.793 (12.9)
EVAL-NH | 0.692 | 0.526 | 0.506 | 0.850 (14.4)
EVAL-XF | 0.378 | 0.564 | 0.566 | 0.790 (13.9)
EVAL-MTG | 0.324 | 0.449 | 0.583 | 0.704 (16.0)
", "num": null, "type_str": "table", "html": null, "text": "" } } } }