{
"paper_id": "P02-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:30:21.132715Z"
},
"title": "An Unsupervised Approach to Recognizing Discourse Relations",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "marcu@isi.edu"
},
{
"first": "Abdessamad",
"middle": [],
"last": "Echihabi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "echihabi\u00a1@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an unsupervised approach to recognizing discourse relations of CONTRAST , EXPLANATION-EVIDENCE, CONDITION and ELABORATION that hold between arbitrary spans of texts. We show that discourse relation classifiers trained on examples that are automatically extracted from massive amounts of text can be used to distinguish between some of these relations with accuracies as high as 93%, even when the relations are not explicitly marked by cue phrases.",
"pdf_parse": {
"paper_id": "P02-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an unsupervised approach to recognizing discourse relations of CONTRAST , EXPLANATION-EVIDENCE, CONDITION and ELABORATION that hold between arbitrary spans of texts. We show that discourse relation classifiers trained on examples that are automatically extracted from massive amounts of text can be used to distinguish between some of these relations with accuracies as high as 93%, even when the relations are not explicitly marked by cue phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the field of discourse research, it is now widely agreed that sentences/clauses are usually not understood in isolation, but in relation to other sentences/clauses. Given the high level of interest in explaining the nature of these relations and in providing definitions for them (Mann and Thompson, 1988; Hobbs, 1990; Martin, 1992; Lascarides and Asher, 1993; Hovy and Maier, 1993; Knott and Sanders, 1998) , it is surprising that there are no robust programs capable of identifying discourse relations that hold between arbitrary spans of text. Consider, for example, the sentence/clause pairs below.",
"cite_spans": [
{
"start": 283,
"end": 308,
"text": "(Mann and Thompson, 1988;",
"ref_id": "BIBREF11"
},
{
"start": 309,
"end": 321,
"text": "Hobbs, 1990;",
"ref_id": "BIBREF6"
},
{
"start": 322,
"end": 335,
"text": "Martin, 1992;",
"ref_id": "BIBREF14"
},
{
"start": 336,
"end": 363,
"text": "Lascarides and Asher, 1993;",
"ref_id": "BIBREF10"
},
{
"start": 364,
"end": 385,
"text": "Hovy and Maier, 1993;",
"ref_id": "BIBREF7"
},
{
"start": 386,
"end": 410,
"text": "Knott and Sanders, 1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "a. Such standards would preclude arms sales to states like Libya, which is also currently subject to a U.N. embargo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "b. But states like Rwanda before its present crisis would still be able to legally buy arms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) a. South Africa can afford to forgo sales of guns and grenades b. because it actually makes most of its profits from the sale of expensive, high-technology systems like laser-designated missiles, aircraft electronic warfare systems, tactical radios, anti-radiation bombs and battlefield mobility systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In these examples, the discourse markers But and because help us figure out that a CONTRAST relation holds between the text spans in (1) and an EXPLANATION-EVIDENCE relation holds between the spans in (2). Unfortunately, cue phrases do not signal all relations in a text. In the corpus of Rhetorical Structure trees (www.isi.edu/\u00a2 marcu/discourse/) built by Carlson et al. (2001) , for example, we have observed that only 61 of 238 CONTRAST relations and 79 out of 307 EXPLANATION-EVIDENCE relations that hold between two adjacent clauses were marked by a cue phrase.",
"cite_spans": [
{
"start": 358,
"end": 379,
"text": "Carlson et al. (2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "So what shall we do when no discourse markers are used?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "If we had access to robust semantic interpreters, we could, for example, infer from sentence 1.a that \"cannot buy arms legally(libya)\", infer from sentence 1.b that \"can buy arms legally(rwanda)\", use our background knowledge in order to infer that \"similar(libya,rwanda)\", and apply Hobbs's (1990) definitions of discourse relations to arrive at the conclusion that a CONTRAST relation holds between the sentences in (1). Unfortunately, the state of the art in NLP does not provide us access to semantic interpreters and general purpose knowledge bases that would support these kinds of inferences. The discourse relation definitions proposed by others (Mann and Thompson, 1988; Lascarides and Asher, 1993; Knott and Sanders, 1998) are not easier to apply either because they assume the ability to automatically derive, in addition to the semantics of the text spans, the intentions and illocutions associated with them as well.",
"cite_spans": [
{
"start": 284,
"end": 298,
"text": "Hobbs's (1990)",
"ref_id": "BIBREF6"
},
{
"start": 654,
"end": 679,
"text": "(Mann and Thompson, 1988;",
"ref_id": "BIBREF11"
},
{
"start": 680,
"end": 707,
"text": "Lascarides and Asher, 1993;",
"ref_id": "BIBREF10"
},
{
"start": 708,
"end": 732,
"text": "Knott and Sanders, 1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In spite of the difficulty of determining the discourse relations that hold between arbitrary text spans, it is clear that such an ability is important in many applications. First, a discourse relation recognizer would enable the development of improved discourse parsers and, consequently, of high performance single document summarizers (Marcu, 2000) . In multidocument summarization (DUC, 2002) , it would enable the development of summarization programs capable of identifying contradictory statements both within and across documents and of producing summaries that reflect not only the similarities between various documents, but also their differences. In question-answering, it would enable the development of systems capable of answering sophisticated, non-factoid queries, such as \"what were the causes of X?\" or \"what contradicts Y?\", which are beyond the state of the art of current systems (TREC, 2001) .",
"cite_spans": [
{
"start": 339,
"end": 352,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF13"
},
{
"start": 386,
"end": 397,
"text": "(DUC, 2002)",
"ref_id": null
},
{
"start": 903,
"end": 915,
"text": "(TREC, 2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe experiments aimed at building robust discourse-relation classification systems. To build such systems, we train a family of Naive Bayes classifiers on a large set of examples that are generated automatically from two corpora: a corpus of 41,147,805 English sentences that have no annotations, and BLIPP, a corpus of 1,796,386 automatically parsed English sentences (Charniak, 2000) , which is available from the Linguistic Data Consortium (www.ldc.upenn.edu). We study empirically the adequacy of various features for the task of discourse relation classification and we show that some discourse relations can be correctly recognized with accuracies as high as 93%.",
"cite_spans": [
{
"start": 392,
"end": 408,
"text": "(Charniak, 2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to build a discourse relation classifier, one first needs to decide what relation definitions one is going to use. In Section 1, we simply relied on the reader's intuition when we claimed that a CON-TRAST relation holds between the sentences in (1). In reality though, associating a discourse relation with a text span pair is a choice that is clearly influenced by the theoretical framework one is willing to adopt. If we adopt, for example, Knott and Sanders's (1998) account, we would say that the relation between sentences 1.a and 1.b is ADDITIVE, because no causal connection exists between the two sentences, PRAGMATIC, because the relation pertains to illocutionary force and not to the propositional content of the sentences, and NEGATIVE, because the relation involves a CONTRAST between the two sentences. In the same framework, the relation between clauses 2.a and 2.b will be labeled as CAUSAL-SEMANTIC-POSITIVE-NONBASIC. In Lascarides and Asher's theory (1993) , we would label the relation between 2.a and 2.b as EXPLANATION because the event in 2.b explains why the event in 2.a happened (perhaps by CAUSING it). In Hobbs's theory (1990), we would also label the relation between 2.a and 2.b as EXPLANATION because the event asserted by 2.b CAUSED or could CAUSE the event asserted in 2.a. And in Mann and Thompson theory (1988), we would label sentence pairs 1.a, 1.b as CONTRAST because the situations presented in them are the same in many respects (the purchase of arms), because the situations are different in some respects (Libya cannot buy arms legally while Rwanda can), and because these situations are compared with respect to these differences. By a similar line of reasoning, we would label the relation between 2.a and 2.b as EVIDENCE.",
"cite_spans": [
{
"start": 452,
"end": 478,
"text": "Knott and Sanders's (1998)",
"ref_id": "BIBREF8"
},
{
"start": 947,
"end": 983,
"text": "Lascarides and Asher's theory (1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse relation definitions and generation of training data 2.1 Background",
"sec_num": "2"
},
{
"text": "The discussion above illustrates two points. First, it is clear that although current discourse theories are built on fundamentally different principles, they all share some common intuitions. Sure, some theories talk about \"negative polarity\" while others about \"contrast\". Some theories refer to \"causes\", some to \"potential causes\", and some to \"explanations\". But ultimately, all these theories acknowledge that there are such things as CONTRAST, CAUSE, and EXPLA-NATION relations. Second, given the complexity of the definitions these theories propose, it is clear why it is difficult to build programs that recognize such relations in unrestricted texts. Current NLP techniques do not enable us to reliably infer from sen-tence 1.a that \"cannot buy arms legally(libya)\" and do not give us access to general purpose knowledge bases that assert that \"similar(libya,rwanda)\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse relation definitions and generation of training data 2.1 Background",
"sec_num": "2"
},
{
"text": "The approach we advocate in this paper is in some respects less ambitious than current approaches to discourse relations because it relies upon a much smaller set of relations than those used by Mann and Thompson (1988) or Martin (1992) . In our work, we decide to focus only on four types of relations, which we call: CONTRAST, CAUSE-EXPLANATION-EVIDENCE (CEV), CONDITION, and ELABORA-TION. (We define these relations in Section 2.2.) In other respects though, our approach is more ambitious because it focuses on the problem of recognizing such discourse relations in unrestricted texts. In other words, given as input sentence pairs such as those shown in (1)-(2), we develop techniques and programs that label the relations that hold between these sentence pairs as CONTRAST, CAUSE-EXPLANATION-EVIDENCE, CONDITION, ELABO-RATION or NONE-OF-THE-ABOVE, even when the discourse relations are not explicitly signalled by discourse markers.",
"cite_spans": [
{
"start": 195,
"end": 219,
"text": "Mann and Thompson (1988)",
"ref_id": "BIBREF11"
},
{
"start": 223,
"end": 236,
"text": "Martin (1992)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse relation definitions and generation of training data 2.1 Background",
"sec_num": "2"
},
{
"text": "The discourse relations we focus on are defined at a much coarser level of granularity than in most discourse theories. For example, we consider that a CONTRAST relation holds between two text spans if one of the following relations holds: CONTRAST, ANTITHESIS, CONCESSION, or OTH-ERWISE, as defined by Mann and Thompson (1988) , CONTRAST or VIOLATED EXPECTATION, as defined by Hobbs (1990) , or any of the relations characterized by this regular expression of cognitive primitives, as defined by Knott and Sanders (1998) ",
"cite_spans": [
{
"start": 303,
"end": 327,
"text": "Mann and Thompson (1988)",
"ref_id": "BIBREF11"
},
{
"start": 378,
"end": 390,
"text": "Hobbs (1990)",
"ref_id": "BIBREF6"
},
{
"start": 497,
"end": 521,
"text": "Knott and Sanders (1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse relation definitions",
"sec_num": "2.2"
},
{
"text": ": (CAUSAL \u00a3 ADDITIVE) -(SEMANTIC \u00a3 PRAGMATIC)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse relation definitions",
"sec_num": "2.2"
},
{
"text": "-NEGATIVE. In other words, in our approach, we do not distinguish between contrasts of semantic and pragmatic nature, contrasts specific to violated expectations, etc. Table 1 shows the definitions of the relations we considered.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Discourse relation definitions",
"sec_num": "2.2"
},
{
"text": "The advantage of operating with coarsely defined discourse relations is that it enables us to automatically construct relatively low-noise datasets that can be used for learning. For example, by extracting sentence pairs that have the keyword \"But\" at the beginning of the second sentence, as the sen-tence pair shown in (1), we can automatically collect many examples of CONTRAST relations. And by extracting sentences that contain the keyword \"because\", we can automatically collect many examples of CAUSE-EXPLANATION-EVIDENCE relations. As previous research in linguistics (Halliday and Hasan, 1976; Schiffrin, 1987) and computational linguistics (Marcu, 2000) show, some occurrences of \"but\" and \"because\" do not have a discourse function; and others signal other relations than CONTRAST and CAUSE-EXPLANATION. So we can expect the examples we extract to be noisy. However, empirical work of Marcu (2000) and Carlson et al. (2001) suggests that the majority of occurrences of \"but\", for example, do signal CONTRAST relations. (In the RST corpus built by Carlson et al. (2001) , 89 out of the 106 occurrences of \"but\" that occur at the beginning of a sentence signal a CONTRAST relation that holds between the sentence that contains the word \"but\" and the sentence that precedes it.) Our hope is that simple extraction methods are sufficient for collecting low-noise training corpora.",
"cite_spans": [
{
"start": 576,
"end": 602,
"text": "(Halliday and Hasan, 1976;",
"ref_id": "BIBREF5"
},
{
"start": 603,
"end": 619,
"text": "Schiffrin, 1987)",
"ref_id": "BIBREF15"
},
{
"start": 650,
"end": 663,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF13"
},
{
"start": 896,
"end": 908,
"text": "Marcu (2000)",
"ref_id": "BIBREF13"
},
{
"start": 913,
"end": 934,
"text": "Carlson et al. (2001)",
"ref_id": "BIBREF1"
},
{
"start": 1058,
"end": 1079,
"text": "Carlson et al. (2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse relation definitions",
"sec_num": "2.2"
},
{
"text": "In order to collect training cases, we mined in an unsupervised manner two corpora. The first corpus, which we call Raw, is a corpus of 1 billion words of unannotated English (41,147,805 sentences) that we created by catenating various corpora made available over the years by the Linguistic Data Consortium. The second, called BLIPP, is a corpus of only 1,796,386 sentences that were parsed automatically by Charniak (2000) . We extracted from both corpora all adjacent sentence pairs that contained the cue phrase \"But\" at the beginning of the second sentence and we automatically labeled the relation between the two sentence pairs as CONTRAST. We also extracted all the sentences that contained the word \"but\" in the middle of a sentence; we split each extracted sentence into two spans, one containing the words from the beginning of the sentence to the occurrence of the keyword \"but\" and one containing the words from the occurrence of \"but\" to the end of the sentence; and we labeled the relation between the two resulting text spans as CONTRAST as well. Table 2 lists some of the cue phrases we used in order to extract CONTRAST, CAUSE-EXPLANATION-EVIDENCE, ELABORATION, and (M&T -(Mann and Thompson, 1988) ; Ho -(Hobbs, 1990); A&L - (Lascarides and Asher, 1993) ; K&S - (Knott and Sanders, 1998) ",
"cite_spans": [
{
"start": 409,
"end": 424,
"text": "Charniak (2000)",
"ref_id": "BIBREF2"
},
{
"start": 1184,
"end": 1215,
"text": "(M&T -(Mann and Thompson, 1988)",
"ref_id": null
},
{
"start": 1243,
"end": 1271,
"text": "(Lascarides and Asher, 1993)",
"ref_id": "BIBREF10"
},
{
"start": 1280,
"end": 1305,
"text": "(Knott and Sanders, 1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1063,
"end": 1070,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Generation of training data",
"sec_num": "2.3"
},
{
"text": "CONTRAST CAUSE-EXPLANATION-EVIDENCE ELABORATION CONDITION ANTITHESIS (M&T) EVIDENCE (M&T) ELABORATION (M&T) CONDITION (M&T) CONCESSION (M&T) VOLITIONAL-CAUSE (M&T) EXPANSION (Ho) OTHERWISE (M&T) NONVOLITIONAL-CAUSE (M&T) EXEMPLIFICATION (Ho) CONTRAST (M&T) VOLITIONAL-RESULT (M&T) ELABORATION (A&L) VIOLATED EXPECTATION (Ho) NONVOLITIONAL-RESULT (M&T) EXPLANATION (Ho) ( CAUSAL \u00a4 ADDITIVE ) - RESULT (A&L) ( SEMANTIC \u00a4 PRAGMATIC ) - EXPLANATION (A&L) NEGATIVE (K&S) CAUSAL - (SEMANTIC \u00a4 PRAGMATIC ) - POSITIVE (K&S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of training data",
"sec_num": "2.3"
},
{
"text": "). CONTRAST -3,881,588 examples [BOS \u00a5 \u00a6 \u00a5 \u00a7 \u00a5 EOS] [BOS But \u00a5 \u00a5 \u00a5 EOS] [BOS \u00a5 \u00a6 \u00a5 \u00a7 \u00a5 ] [but \u00a5 \u00a7 \u00a5 \u00a5 EOS] [BOS \u00a5 \u00a6 \u00a5 \u00a7 \u00a5 ] [although \u00a5 \u00a6 \u00a5 \u00a5 EOS] [BOS Although \u00a5 \u00a5 \u00a7 \u00a5 ,] [ \u00a5 \u00a5 \u00a5 EOS] CAUSE-EXPLANATION-EVIDENCE -889,946 examples [BOS \u00a5 \u00a6 \u00a5 \u00a7 \u00a5 ] [because \u00a5 \u00a6 \u00a5 \u00a5 EOS] [BOS Because \u00a5 \u00a5 \u00a5 ,] [ \u00a5 \u00a6 \u00a5 \u00a5 EOS] [BOS \u00a5 \u00a6 \u00a5 \u00a7 \u00a5 EOS] [BOS Thus, \u00a5 \u00a5 \u00a5 EOS] CONDITION -1,203,813 examples [BOS If \u00a5 \u00a5 \u00a5 ,] [ \u00a5 \u00a5 \u00a6 \u00a5 EOS] [BOS If \u00a5 \u00a5 \u00a5 ] [then \u00a5 \u00a5 \u00a6 \u00a5 EOS] [BOS \u00a5 \u00a6 \u00a5 \u00a7 \u00a5 ] [if \u00a5 \u00a5 \u00a5 EOS] ELABORATION -1,836,227 examples [BOS \u00a5 \u00a6 \u00a5 \u00a7 \u00a5 EOS] [BOS \u00a5 \u00a6 \u00a5 \u00a5 for example \u00a5 \u00a5 \u00a5 EOS] [BOS \u00a5 \u00a6 \u00a5 \u00a7 \u00a5 ] [which \u00a5 \u00a5 \u00a6 \u00a5 ,] NO-RELATION-SAME-TEXT -1,000,000 examples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of training data",
"sec_num": "2.3"
},
{
"text": "Randomly extract two sentences that are more than 3 sentences apart in a given text. NO-RELATION-DIFFERENT-TEXTS -1,000,000 examples Randomly extract two sentences from two different documents. \" stand for occurrences of any words and punctuation marks, the square brackets stand for text span boundaries, and the other words and punctuation marks stand for the cue phrases that we used in order to extract discourse relation examples. For example, the pattern [BOS Although EOS] is used in order to extract examples of CONTRAST relations that hold between a span of text delimited to the left by the cue phrase \"Although\" occurring in the beginning of a sentence and to the right by the first occurrence of a comma, and a span of text that contains the rest of the sentence to which \"Although\" belongs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of training data",
"sec_num": "2.3"
},
{
"text": "We also extracted automatically 1,000,000 examples of what we hypothesize to be non-relations, by randomly selecting non-adjacent sentence pairs that are at least 3 sentences apart in a given text. We label such examples NO-RELATION-SAME-TEXT. And we extracted automatically 1,000,000 examples of what we hypothesize to be cross-document nonrelations, by randomly selecting two sentences from distinct documents. As in the case of CONTRAST and CONDITION, the NO-RELATION examples are also noisy because long distance relations are common in well-written texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of training data",
"sec_num": "2.3"
},
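{
"text": "To make the extraction procedure concrete, the following minimal sketch (ours, not part of the paper's implementation; it assumes sentence-split input and covers only a few of the patterns in Table 2) shows how adjacent-sentence and intra-sentence examples can be harvested:\n\ndef extract_examples(sentences):\n    \"\"\"Harvest (span1, span2, relation) triples from a list of sentences,\n    using a few of the cue-phrase patterns in Table 2.\"\"\"\n    examples = []\n    # Adjacent-sentence patterns: [BOS ... EOS] [BOS But ... EOS] and [BOS ... EOS] [BOS Thus, ... EOS]\n    for prev, curr in zip(sentences, sentences[1:]):\n        if curr.startswith(\"But \"):\n            examples.append((prev, curr, \"CONTRAST\"))\n        if curr.startswith(\"Thus, \"):\n            examples.append((prev, curr, \"CAUSE-EXPLANATION-EVIDENCE\"))\n    # Intra-sentence patterns: split one sentence at the cue phrase.\n    for sent in sentences:\n        if \" because \" in sent:\n            i = sent.index(\" because \")\n            examples.append((sent[:i], sent[i + 1:], \"CAUSE-EXPLANATION-EVIDENCE\"))\n        if sent.startswith(\"If \") and \",\" in sent:\n            i = sent.index(\",\")\n            examples.append((sent[:i + 1], sent[i + 1:], \"CONDITION\"))\n    return examples\n\nScaling this up to the full pattern set and to tens of millions of sentences is an engineering matter; the point is that no annotation beyond sentence boundaries is required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of training data",
"sec_num": "2.3"
},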
{
"text": "We hypothesize that we can determine that a CON-TRAST relation holds between the sentences in (3) even if we cannot semantically interpret the two sentences, simply because our background knowledge tells us that good and fails are good indicators of contrastive statements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "John is good in math and sciences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "Paul fails almost every class he takes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "Similarly, we hypothesize that we can determine that a CONTRAST relation holds between the sentences in (1), because our background knowledge tells us that embargo and legally are likely to occur in contexts of opposite polarity. In general, we hypothesize that lexical item pairs can provide clues about the discourse relations that hold between the text spans in which the lexical items occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "To test this hypothesis, we need to solve two problems. First, we need a means to acquire vast amounts of background knowledge from which we can derive, for example, that the word pairs good -fails and embargo -legally are good indicators of CONTRAST relations. The extraction patterns described in Table 2 enable us to solve this problem. 1 Second, given vast amounts of training material, we need a means to learn which pairs of lexical items are likely to co-occur in conjunction with each discourse relation and a means to apply the learned parameters to any pair of text spans in order to determine the discourse relation that holds between them. We solve the second problem in a Bayesian probabilistic framework.",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "We assume that a discourse relation that holds between two text spans, \" !",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": ", is determined by the word pairs in the cartesian product defined over the words in the two text spans can \"signal\" any relation . We determine the most likely discourse relation that holds between two text spans and A ! by taking the maximum over",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "# % $ ' & \u00a6 \u00a7 $ )",
"eq_num": "( 1 0"
}
],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "B C 1 D C E F B H G 7 I P R Q S # % T U \u00a3 V 5 R \" ! 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": ", which according to Bayes rule, amounts to taking the maximum over",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "B H T D H E F B C G W I \u00a6 P C XY \u00e0 T D H Q S # b A ! c \u00a3 d 0 8 e @ Y \u00e0 D f Q S # % c 0 h g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": ". If we assume that the word pairs in the cartesian product are independent,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "Q S # b R A ! c \u00a3 i 0 is equivalent to p 5 q s r W t a ur H v x w y i \u00a7 u W Q S # \u00a7 # % $ ' & \u00a7 $ ) ( T 0 \u00a3 1 0 . The values Q S # \u00a7 # % $ ' & \u00a7 $ ) ( T 0 \u00a3 F i 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "are computed using maximum likelihood estimators, which are smoothed using the Laplace method (Manning and Sch\u00fctze, 1999) .",
"cite_spans": [
{
"start": 94,
"end": 121,
"text": "(Manning and Sch\u00fctze, 1999)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
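{
"text": "As a concrete illustration of this model (a minimal sketch of ours, not the authors' code; it assumes whitespace-tokenized spans and relation-labeled training pairs), a Laplace-smoothed Naive Bayes classifier over word pairs can be written as follows:\n\nimport math\nfrom collections import defaultdict\n\nclass WordPairNaiveBayes:\n    \"\"\"Naive Bayes over the word pairs (w1, w2) in the cartesian product of two spans.\"\"\"\n    def __init__(self):\n        self.pair_counts = defaultdict(lambda: defaultdict(int))  # relation -> (w1, w2) -> count\n        self.pair_totals = defaultdict(int)     # relation -> total number of word pairs seen\n        self.example_counts = defaultdict(int)  # relation -> number of training span pairs\n        self.vocab = set()                      # all (w1, w2) pairs seen in training\n\n    def train(self, span1, span2, relation):\n        for w1 in span1.split():\n            for w2 in span2.split():\n                self.pair_counts[relation][(w1, w2)] += 1\n                self.pair_totals[relation] += 1\n                self.vocab.add((w1, w2))\n        self.example_counts[relation] += 1\n\n    def classify(self, span1, span2):\n        total = sum(self.example_counts.values())\n        v = len(self.vocab)\n        best, best_score = None, -math.inf\n        for relation in self.example_counts:\n            score = math.log(self.example_counts[relation] / total)  # log P(relation)\n            for w1 in span1.split():\n                for w2 in span2.split():\n                    c = self.pair_counts[relation].get((w1, w2), 0)\n                    score += math.log((c + 1) / (self.pair_totals[relation] + v))  # Laplace smoothing\n            if score > best_score:\n                best, best_score = relation, score\n        return best\n\nTraining amounts to counting word-pair/relation co-occurrences over the automatically extracted span pairs; classification returns the relation that maximizes the smoothed product of the pair likelihoods times the relation prior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},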
{
"text": "For each discourse relation pair T c \u00a7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": ", we train a word-pair-based classifier using the automatically derived training examples in the Raw corpus, from which we first removed the cue-phrases used for extracting the examples. This ensures that our classi-1 Note that relying on the list of antonyms provided by Wordnet (Fellbaum, 1998) is not enough because the semantic relations in Wordnet are not defined across word class boundaries. For example, Wordnet does not list the \"antonymy\"-like relation between embargo and legally. fiers do not learn, for example, that the word pair if -then is a good indicator of a CONDITION relation, which would simply amount to learning to distinguish between the extraction patterns used to construct the corpus. We test each classifier on a test corpus of 5000 examples labeled with and 5000 examples labeled with , which ensures that the baseline is the same for all combinations and , namely 50%. Table 3 shows the performance of all discourse relation classifiers. As one can see, each classifier outperforms the 50% baseline, with some classifiers being as accurate as that that distinguishes between CAUSE-EXPLANATION-EVIDENCE and ELABORA-TION relations, which has an accuracy of 93%. We have also built a six-way classifier to distinguish between all six relation types. This classifier has a performance of 49.7%, with a baseline of 16.67%, which is achieved by labeling all relations as CON-",
"cite_spans": [
{
"start": 272,
"end": 296,
"text": "Wordnet (Fellbaum, 1998)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 900,
"end": 907,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Determining discourse relations using Naive Bayes classifiers",
"sec_num": "3"
},
{
"text": "We also examined the learning curves of various classifiers and noticed that, for some of them, the addition of training examples does not appear to have a significant impact on their performance. For example, the classifier that distinguishes between CON-TRAST and CAUSE-EXPLANATION-EVIDENCE relations has an accuracy of 87.1% when trained on 2,000,000 examples and an accuracy of 87.3% when trained on 4,771,534 examples. We hypothesized that the flattening of the learning curve is explained by the noise in our training data and the vast amount of word pairs that are not likely to be good predictors of discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRASTS.",
"sec_num": null
},
{
"text": "To test this hypothesis, we decided to carry out a second experiment that used as predictors only a subset of the word pairs in the cartesian product defined over the words in two given text spans. To achieve this, we used the patterns in Table 2 to extract examples of discourse relations from the BLIPP corpus. As expected, the BLIPP corpus yielded much fewer learning cases: 185,846 CON-TRAST; 44,776 CAUSE-EXPLANATION-EVIDENCE; 55,699 CONDITION; and 33,369 ELABORA-TION relations. To these examples, we added 58,000 NO-RELATION-SAME-TEXT and 58,000 NO-RELATION-DIFFERENT-TEXTS relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 239,
"end": 246,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "TRASTS.",
"sec_num": null
},
{
"text": "To each text span in the BLIPP corpus corresponds a parse tree (Charniak, 2000) . We wrote CONTRAST CEV COND ELAB NO-REL-SAME-TEXT NO-REL-DIFF-TEXTS CONTRAST -87 74 82 64 64 CEV 76 93 75 74 COND 89 69 71 ELAB 76 75 NO-REL-SAME-TEXT 64 Table 3 : Performances of classifiers trained on the Raw corpus. The baseline in all cases is 50%. Table 4 : Performances of classifiers trained on the BLIPP corpus. The baseline in all cases is 50%.",
"cite_spans": [
{
"start": 63,
"end": 79,
"text": "(Charniak, 2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 91,
"end": 279,
"text": "CONTRAST CEV COND ELAB NO-REL-SAME-TEXT NO-REL-DIFF-TEXTS CONTRAST -87 74 82 64 64 CEV 76 93 75 74 COND 89 69 71 ELAB 76 75 NO-REL-SAME-TEXT 64 Table 3",
"ref_id": "TABREF0"
},
{
"start": 371,
"end": 378,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "TRASTS.",
"sec_num": null
},
{
"text": "a simple program that extracted the nouns, verbs, and cue phrases in each sentence/clause. We call these the most representative words of a sentence/discourse unit. For example, the most representative words of the sentence in example (4), are those shown in italics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRASTS.",
"sec_num": null
},
{
"text": "Italy's unadjusted industrial production fell in January 3.4% from a year earlier but rose 0.4% from December, the government said",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRASTS.",
"sec_num": null
},
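{
"text": "A rough approximation of this filter (ours, for illustration only: the paper reads the nouns, verbs, and cue phrases off the Charniak parses that come with BLIPP, whereas this sketch substitutes an off-the-shelf part-of-speech tagger and a small hand-picked cue-phrase list):\n\nimport nltk  # assumes the punkt and averaged_perceptron_tagger models are installed\n\nCUE_PHRASES = {\"but\", \"although\", \"because\", \"thus\", \"if\", \"then\", \"which\", \"for\", \"example\"}  # illustrative subset\n\ndef representative_words(sentence):\n    \"\"\"Keep nouns, verbs, and cue phrases; drop everything else.\"\"\"\n    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))\n    return [w for w, tag in tagged\n            if tag.startswith(\"NN\") or tag.startswith(\"VB\") or w.lower() in CUE_PHRASES]\n\nApplied to example (4), such a filter keeps content words like production, fell, rose, and said, plus the cue phrase but, rather than every token in the sentence, so the cartesian product is taken over a handful of words instead of over all word pairs in the two spans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRASTS.",
"sec_num": null
},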
{
"text": "We repeated the experiment we carried out in conjunction with the Raw corpus on the data derived from the BLIPP corpus as well. Table 4 summarizes the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "TRASTS.",
"sec_num": null
},
{
"text": "Overall, the performance of the systems trained on the most representative word pairs in the BLIPP corpus is clearly lower than the performance of the systems trained on all the word pairs in the Raw corpus. But a direct comparison between two classifiers trained on different corpora is not fair because with just 100,000 examples per relation, the systems trained on the Raw corpus are much worse than those trained on the BLIPP data. The learning curves in Figure 1 are illuminating as they show that if one uses as features only the most representative word pairs, one needs only about 100,000 training examples to achieve the same level of performance one achieves using 1,000,000 training examples and features defined over all word pairs. Also, since the learning curve for the BLIPP corpus is steeper than the learning curve for the Raw corpus, this suggests that discourse relation classifiers trained on most representative word pairs and millions of training examples can achieve higher levels of performance than classifiers trained on all word pairs (unannotated data).",
"cite_spans": [],
"ref_spans": [
{
"start": 460,
"end": 468,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "TRASTS.",
"sec_num": null
},
{
"text": "The results in Section 3 indicate clearly that massive amounts of automatically generated data can be used to distinguish between discourse relations defined as discussed in Section 2. in Section 3 do not show is whether the classifiers built in this manner can be of any use in conjunction with some established discourse theory. To test this, we used the corpus of discourse trees built in the style of RST by Carlson et al. (2001) . We automatically extracted from this manually annotated corpus all CONTRAST, CAUSE-EXPLANATION-EVIDENCE, CONDITION and ELABORATION relations that hold between two adjacent elementary discourse units.",
"cite_spans": [
{
"start": 412,
"end": 433,
"text": "Carlson et al. (2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance to RST",
"sec_num": "4"
},
{
"text": "Since RST (Mann and Thompson, 1988 ) employs a finer grained taxonomy of relations than we used, we applied the definitions shown in Table 1 . That is, we considered that a CONTRAST relation held between two text spans if a human annotator labeled the relation between those spans as ANTITHESIS, CONCESSION, OTHERWISE or CONTRAST. We retrained then all classifiers on the Raw corpus, but this time without removing from the corpus the cue phrases that were used to generate the training examples. We did this because when trying to determine whether a CONTRAST relation holds between two spans of texts separated by the cue phrase \"but\", for example, we want to take advantage of the cue phrase occurrence as well. We employed our classifiers on the manually labeled examples extracted from Carlson et al.'s corpus (2001) . Table 5 displays the performance of our two way classifiers for relations defined over elementary discourse units. The table displays in the second row, for each discourse relation, the number of examples extracted from the RST corpus. For each binary classifier, the table lists in bold the accuracy of our classifier and in non-bold font the majority baseline associated with it.",
"cite_spans": [
{
"start": 6,
"end": 34,
"text": "RST (Mann and Thompson, 1988",
"ref_id": null
},
{
"start": 791,
"end": 821,
"text": "Carlson et al.'s corpus (2001)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 824,
"end": 831,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Relevance to RST",
"sec_num": "4"
},
{
"text": "The results in Table 5 show that the classifiers learned from automatically generated training data can be used to distinguish between certain types of RST relations. For example, the results show that the classifiers can be used to distinguish between CONTRAST and CAUSE-EXPLANATION-EVIDENCE relations, as defined in RST, but not so well between ELABORATION and any other relation. This result is consistent with the discourse model proposed by Knott et al. (2001) , who suggest that ELABORATION relations are too ill-defined to be part of any discourse theory.",
"cite_spans": [
{
"start": 446,
"end": 465,
"text": "Knott et al. (2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Relevance to RST",
"sec_num": "4"
},
{
"text": "The analysis above is informative only from a machine learning perspective. From a linguistic perspective though, this analysis is not very useful. If no cue phrases are used to signal the relation between two elementary discourse units, an automatic discourse labeler can at best guess that an ELABORATION relation holds between the units, because ELABORATION relations are the most frequently used relations (Carlson et al., 2001) . Fortunately, with the classifiers described here, one can label some of the unmarked discourse relations correctly.",
"cite_spans": [
{
"start": 410,
"end": 432,
"text": "(Carlson et al., 2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance to RST",
"sec_num": "4"
},
{
"text": "For example, the RST-annotated corpus of Carlson et al. (2001) contains 238 CONTRAST relations that hold between two adjacent elementary discourse units. Of these, only 61 are marked by a cue phrase, which means that a program trained only on Carlson et al.'s corpus could identify at most 61/238 of the CONTRAST relations correctly. Because Carlson et al.'s corpus is small, all unmarked relations will be likely labeled as ELABORATIONs. However, when we run our CONTRAST vs. ELAB-ORATION classifier on these examples, we can label correctly 60 of the 61 cue-phrase marked relations and, in addition, we can also label 123 of the 177 relations that are not marked explicitly with cue phrases. This means that our classifier contributes to an increase in accuracy from .",
"cite_spans": [
{
"start": 41,
"end": 62,
"text": "Carlson et al. (2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance to RST",
"sec_num": "4"
},
{
"text": "In a seminal paper, Banko and Brill (2001) have recently shown that massive amounts of data can be used to significantly increase the performance of confusion set disambiguators. In our paper, we show that massive amounts of data can have a major impact on discourse processing research as well.",
"cite_spans": [
{
"start": 20,
"end": 42,
"text": "Banko and Brill (2001)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Our experiments show that discourse relation classifiers that use very simple features achieve unexpectedly high levels of performance when trained on extremely large data sets. Developing lower-noise methods for automatically collecting training data and discovering features of higher predictive power for discourse relation classification than the features presented in this paper appear to be research avenues that are worthwhile to pursue. Over the last thirty years, the nature, number, and taxonomy of discourse relations have been among the most controversial issues in text/discourse linguistics. This paper does not settle the controversy. Rather, it raises some new, interesting questions because the lexical patterns learned by our algorithms can be interpreted as empirical proof of existence for discourse relations. If text production was not governed by any rules above the sentence level, we should have not been able to improve on any of the baselines in our experiments. Our results suggest that it may be possible to develop fully automatic techniques for defining empirically justified discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "Acknowledgments. This work was supported by the National Science Foundation under grant number IIS-0097846 and by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program under contract number MDA908-02-C-0007.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Scaling to very very large corpora for natural language disambiguation",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL'01)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambigua- tion. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL'01), Toulouse, France, July 6-11.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2nd SIGDIAL Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceed- ings of the 2nd SIGDIAL Workshop on Discourse and Dialogue, Eurospeech 2001, Aalborg, Denmark.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the First Annual Meeting of the North American Chapter of the Association for Computational Linguistics NAACL-2000",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the First Annual Meeting of the North American Chapter of the Association for Computational Linguistics NAACL-2000, pages 132- 139, Seattle, Washington, April 29 -May 3.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Proceedings of the Second Document Understanding Conference",
"authors": [
{
"first": "",
"middle": [],
"last": "Duc-2002",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DUC-2002. Proceedings of the Second Document Un- derstanding Conference, Philadelphia, PA, July.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Wordnet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. Wordnet: An Elec- tronic Lexical Database. The MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cohesion in English",
"authors": [
{
"first": "A",
"middle": [
"K"
],
"last": "Michael",
"suffix": ""
},
{
"first": "Ruqaiya",
"middle": [],
"last": "Halliday",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hasan",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A.K. Halliday and Ruqaiya Hasan. 1976. Cohe- sion in English. Longman.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Literature and Cognition. CSLI Lecture Notes Number 21",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jerry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R. Hobbs. 1990. Literature and Cognition. CSLI Lecture Notes Number 21.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Parsimonious or profligate: How many and which discourse structure relations?",
"authors": [
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Maier",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard H. Hovy and Elisabeth Maier. 1993. Parsimo- nious or profligate: How many and which discourse structure relations? Unpublished Manuscript.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The classification of coherence relations and their linguistic markers: An exploration of two languages",
"authors": [
{
"first": "Alistair",
"middle": [],
"last": "Knott",
"suffix": ""
},
{
"first": "Ted",
"middle": [
"J M"
],
"last": "Sanders",
"suffix": ""
}
],
"year": 1998,
"venue": "Journal of Pragmatics",
"volume": "30",
"issue": "",
"pages": "135--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair Knott and Ted J.M. Sanders. 1998. The clas- sification of coherence relations and their linguistic markers: An exploration of two languages. Journal of Pragmatics, 30:135-175.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Beyond elaboration: The interaction of relations and focus in coherent text",
"authors": [
{
"first": "Alistair",
"middle": [],
"last": "Knott",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Oberlander",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Mick",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Donnell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mellish",
"suffix": ""
}
],
"year": 2001,
"venue": "Text representation: linguistic and psycholinguistic aspects",
"volume": "",
"issue": "",
"pages": "181--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair Knott, Jon Oberlander, Mick O'Donnell, and Chris Mellish. 2001. Beyond elaboration: The in- teraction of relations and focus in coherent text. In T. Sanders, J. Schilperoord, and W. Spooren, editors, Text representation: linguistic and psycholinguistic aspects, pages 181-196. Benjamins.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Temporal interpretation, discourse relations, and common sense entailment",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 1993,
"venue": "Linguistics and Philosophy",
"volume": "16",
"issue": "5",
"pages": "437--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Lascarides and Nicholas Asher. 1993. Temporal interpretation, discourse relations, and common sense entailment. Linguistics and Philosophy, 16(5):437- 493.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Rhetorical structure theory: Toward a functional theory of text organization",
"authors": [
{
"first": "C",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Mann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. Text, 8(3):243-281.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Manning and Hinrich Sch\u00fctze. 1999. Foun- dations of Statistical Natural Language Processing. The MIT Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Theory and Practice of Discourse Parsing and Summarization",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu. 2000. The Theory and Practice of Dis- course Parsing and Summarization. The MIT Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "English Text. System and Structure",
"authors": [
{
"first": "R",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James R. Martin. 1992. English Text. System and Struc- ture. John Benjamin Publishing Company.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Discourse Markers",
"authors": [
{
"first": "Deborah",
"middle": [],
"last": "Schiffrin",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deborah Schiffrin. 1987. Discourse Markers. Cam- bridge University Press.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Proceedings of the Text Retrieval Conference, November. The Question-Answering Track",
"authors": [
{
"first": "",
"middle": [],
"last": "Trec-2001",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "TREC-2001. Proceedings of the Text Retrieval Confer- ence, November. The Question-Answering Track.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Learning curves for the ELABORATION vs. CAUSE-EXPLANATION-EVIDENCE classifiers, trained on the Raw and BLIPP corpora.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "of the 307 CAUSE-EXPLANATION-EVIDENCE relations that hold between two discourse units in Carlson et al.'s corpus, only 79 are explicitly marked. A program trained only on Carlson et al.'s corpus, would, therefore, identify at most 79 of the 307 relations correctly. When we run our CAUSE-EXPLANATION-EVIDENCE vs. ELABORATION classifier on these examples, we labeled correctly 73 of the 79 cue-phrase-marked relations and 102 of the 228 unmarked relations. This corresponds to an increase in accuracy from",
"uris": null
},
"TABREF0": {
"content": "
",
"type_str": "table",
"html": null,
"text": "Relation definitions as union of definitions proposed by other researchers",
"num": null
},
"TABREF1": {
"content": "",
"type_str": "table",
"html": null,
"text": "Patterns used to automatically construct a corpus of text span pairs labeled with discourse relations. relations and the number of examples extracted from the Raw corpus for each type of discourse relation. In the patterns inTable 2, the symbols BOS and EOS denote BeginningOfSentence and EndOfSentence boundaries, the \"\u00a9",
"num": null
},
"TABREF5": {
"content": ": Performances of Raw-trained classifiers on |
manually labeled RST relations that hold between |
elementary discourse units. Performance results are |
shown in bold; baselines are shown in normal fonts. |
",
"type_str": "table",
"html": null,
"text": "",
"num": null
}
}
}
}