{ "paper_id": "P97-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:15:51.980730Z" }, "title": "Learning Features that Predict Cue Usage", "authors": [ { "first": "Barbara", "middle": [], "last": "Di", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pittsburgh Pittsburgh", "location": { "postCode": "15260", "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Moore", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pittsburgh Pittsburgh", "location": { "postCode": "15260", "region": "PA", "country": "USA" } }, "email": "jmoore@cs.pitt.edu" }, { "first": "Massimo", "middle": [], "last": "Paolucci", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pittsburgh Pittsburgh", "location": { "postCode": "15260", "region": "PA", "country": "USA" } }, "email": "paolucci@cs.pitt.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Our goal is to identify the features that predict the occurrence and placement of discourse cues in tutorial explanations in order to aid in the automatic generation of explanations. Previous attempts to devise rules for text generation were based on intuition or small numbers of constructed examples. We apply a machine learning program, C4.5, to induce decision trees for cue occurrence and placement from a corpus of data coded for a variety of features previously thought to affect cue usage. Our experiments enable us to identify the features with most predictive power, and show that machine learning can be used to induce decision trees useful for text generation.", "pdf_parse": { "paper_id": "P97-1011", "_pdf_hash": "", "abstract": [ { "text": "Our goal is to identify the features that predict the occurrence and placement of discourse cues in tutorial explanations in order to aid in the automatic generation of explanations. Previous attempts to devise rules for text generation were based on intuition or small numbers of constructed examples. We apply a machine learning program, C4.5, to induce decision trees for cue occurrence and placement from a corpus of data coded for a variety of features previously thought to affect cue usage. Our experiments enable us to identify the features with most predictive power, and show that machine learning can be used to induce decision trees useful for text generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Discourse cues are words or phrases, such as because, first, and although, that mark structural and semantic relationships between discourse entities. They play a crucial role in many discourse processing tasks, including plan recognition (Litman and Allen, 1987 ), text comprehension (Cohen, 1984; Hobbs, 1985; Mann and Thompson, 1986; Reichman-Adar, 1984) , and anaphora resolution (Grosz and Sidner, 1986) . 
Moreover, research in reading comprehension indicates that felicitous use of cues improves comprehension and recall (Goldman, 1988) , but that their indiscriminate use may have detrimental effects on recall (Millis, Graesser, and Haberlandt, 1993) .", "cite_spans": [ { "start": 239, "end": 262, "text": "(Litman and Allen, 1987", "ref_id": "BIBREF7" }, { "start": 285, "end": 298, "text": "(Cohen, 1984;", "ref_id": "BIBREF0" }, { "start": 299, "end": 311, "text": "Hobbs, 1985;", "ref_id": "BIBREF4" }, { "start": 312, "end": 336, "text": "Mann and Thompson, 1986;", "ref_id": "BIBREF8" }, { "start": 337, "end": 357, "text": "Reichman-Adar, 1984)", "ref_id": "BIBREF18" }, { "start": 384, "end": 408, "text": "(Grosz and Sidner, 1986)", "ref_id": "BIBREF3" }, { "start": 527, "end": 542, "text": "(Goldman, 1988)", "ref_id": "BIBREF2" }, { "start": 618, "end": 658, "text": "(Millis, Graesser, and Haberlandt, 1993)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is to identify general strategies for cue usage that can be implemented for automatic text generation. From the generation perspective, cue usage consists of three distinct, but interrelated problems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) occurrence: whether or not to include a cue in the generated text, (2) placement: where the cue should be placed in the text, and (3) selection: what lexical item(s) should be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prior work in text generation has focused on cue selection (McKeown and Elhadad, 1991; Elhadad and McKeown, 1990) Souza, 1990; Vander Linden and Martin, 1995) . Other hypotheses about cue usage derive from work on discourse coherence and structure. Previous research (Hobbs, 1985; Grosz and Sidner, 1986; Schiffrin, 1987; Mann and Thompson, 1988; Elhadad and McKeown, 1990) , which has been largely descriptive, suggests factors such as structural features of the discourse (e.g., level of embedding and segment complexity), intentional and informational relations in that structure, ordering of relata, and syntactic form of discourse constituents. Moser and Moore (1995; 1997) coded a corpus of naturally occurring tutorial explanations for the range of features identified in prior work. Because they were also interested in the contrast between occurrence and non-occurrence of cues, they exhaustively coded for all of the factors thought to contribute to cue usage in all of the text. 
From their study, Moscr and Moore identified several interesting correlations between particular features and specific aspects of cue usage, and were able to test specific hypotheses from the hterature that were based on constructed examples.", "cite_spans": [ { "start": 59, "end": 86, "text": "(McKeown and Elhadad, 1991;", "ref_id": "BIBREF11" }, { "start": 87, "end": 113, "text": "Elhadad and McKeown, 1990)", "ref_id": "BIBREF1" }, { "start": 114, "end": 126, "text": "Souza, 1990;", "ref_id": "BIBREF21" }, { "start": 127, "end": 158, "text": "Vander Linden and Martin, 1995)", "ref_id": "BIBREF24" }, { "start": 267, "end": 280, "text": "(Hobbs, 1985;", "ref_id": "BIBREF4" }, { "start": 281, "end": 304, "text": "Grosz and Sidner, 1986;", "ref_id": "BIBREF3" }, { "start": 305, "end": 321, "text": "Schiffrin, 1987;", "ref_id": "BIBREF20" }, { "start": 322, "end": 346, "text": "Mann and Thompson, 1988;", "ref_id": "BIBREF9" }, { "start": 347, "end": 373, "text": "Elhadad and McKeown, 1990)", "ref_id": "BIBREF1" }, { "start": 650, "end": 672, "text": "Moser and Moore (1995;", "ref_id": "BIBREF14" }, { "start": 673, "end": 678, "text": "1997)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on cue occurrence and placement, and present an empirical study of the hypotheses provided by previous research, which have never been systematically evaluated with naturally occurring data. Wc use a machine learning program, C4.5 (Quinlan, 1993) , on the tagged corpus of Moser and Moore to induce decision trees. The number of coded features and their interactions makes the manual construction of rules that predict cue occurrence and placement an intractable task.", "cite_spans": [ { "start": 255, "end": 270, "text": "(Quinlan, 1993)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our results largely confirm the suggestions from the hterature, and clarify them by highhghting the most influential features for a particular task. Discourse structure, in terms of both segment structure and levels of embedding, affects cue occurrence the most; intentional relations also play an important role. For cue placement, the most important factors are syntactic structure and segment complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows. In Section 2 we discuss previous research in more detail. Section 3 provides an overview of Moser and Moore's coding scheme. In Section 4 we present our learning experiments, and in Section 5 we discuss our results and conclude.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "McKeown and Elhadad (1991; ) studied severai connectives (e.g., but, since, because), and include many insightful hypotheses about cue selection; their observation that the distinction between but and \u00a2lthoug/~ depends on the point of the move is related to the notion of core discussed below. 
However, they do not address the problem of cue occurrence.", "cite_spans": [ { "start": 12, "end": 26, "text": "Elhadad (1991;", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": null }, { "text": "Other researchers (R6sner and Stede, 1902; Scott and de Souza, 1990) are concerned with generating text from \"RST trees\", hierarchical structures where leaf nodes contain content and internal nodes indicate the rt~etorical relations, as defined in Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) , that exist between subtrees. They proposed heuristics for including and choosing cues based on the rhetorical relation between spans of text, the order of the relata, and the complexity of the related text spans. However, (Scott and de Souza, 1990) was based on a small number of constructed exampies, and (R6sner and Stede, 1992) focused on a small number of RST relations. (Litman, 1996) and (Siegel and McKeown, 1994 ) have applied machine learning to disambiguate between the discourse and sentcntial usages of cues; however, they do not consider the issues of occurrence and placement, and approach the problem from the point of view of interpretation. We closely follow the approach in (Litman, 1996) in two ways. First, we use C4.5. Second, we experiment first with each feature individually, and then with \"interesting\" subsets of features.", "cite_spans": [ { "start": 18, "end": 42, "text": "(R6sner and Stede, 1902;", "ref_id": null }, { "start": 43, "end": 68, "text": "Scott and de Souza, 1990)", "ref_id": "BIBREF21" }, { "start": 282, "end": 307, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF9" }, { "start": 532, "end": 558, "text": "(Scott and de Souza, 1990)", "ref_id": "BIBREF21" }, { "start": 685, "end": 699, "text": "(Litman, 1996)", "ref_id": "BIBREF6" }, { "start": 704, "end": 729, "text": "(Siegel and McKeown, 1994", "ref_id": "BIBREF22" }, { "start": 1002, "end": 1016, "text": "(Litman, 1996)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": null }, { "text": "This section briefly describes Relational Discourse Anal~tsis (RDA) (Moser, Moore, and Glendening, 1996) , the coding scheme used to tag the data for our machine learning experiments. 1 RDA is a scheme devised for analyzing tutorial explanations in the domain of electronics troubleshooting. It synthesizes ideas from (Grosz and Sidner, 1986) and from RST (Mann and Thompson, 1988) .", "cite_spans": [ { "start": 68, "end": 104, "text": "(Moser, Moore, and Glendening, 1996)", "ref_id": "BIBREF16" }, { "start": 318, "end": 342, "text": "(Grosz and Sidner, 1986)", "ref_id": "BIBREF3" }, { "start": 356, "end": 381, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Relational Discourse Analysis", "sec_num": null }, { "text": "Coders use RDA to exhaustively analyze each explanation in the corpus, i.e., every word in each explanation belongs to exactly one element in the analysis. An explanation may consist of multiple segments. Each segment originates with an intention of the speaker. Segments are internally structured and consist of a core, i.e., that element that most directly expresses the segment purpose, and any number of contributors, i.e. the remaining constituents. 
For each contributor, one analyzes its relation to the core from an intentional perspective, i.e., how it is intended to support the core, and from an informational perspective, i.e., how its content relates to that 1For more detail about the RDA coding scheme see (Moser and Moore, 1995; Moser and Moore, 1997) .", "cite_spans": [ { "start": 720, "end": 743, "text": "(Moser and Moore, 1995;", "ref_id": "BIBREF14" }, { "start": 744, "end": 766, "text": "Moser and Moore, 1997)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Relational Discourse Analysis", "sec_num": null }, { "text": "of the core. The set of intentional relations in RDA is a modification of the presentational relations of RST, while informational relations are similar to the subject matter relations in RST. Each segment constituent, both core and contributors, may itself be a segment with a core:contributor structure. In some cases the core is not explicit. This is often the case with the whole tutor's explanation, since its purpose is to answer the student's explicit question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational Discourse Analysis", "sec_num": null }, { "text": "As an example of the application of RDA, consider the partial tutor explanation in (1) 2 . The purpose of this segment is to inform the student that she made the strategy error of testing inside part3 too soon. The constituent that makes the purpose obvious, in this case (l-B), is the core of the segment. The other constituents help to serve the segment purpose by contributing to it. (1-C) is an example ofsubsegment with its own core:contributor structure; its purpose is to give a reason for testing part2 first.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational Discourse Analysis", "sec_num": null }, { "text": "The RDA analysis of (I) is shown schematically in Figure 1 . The core is depicted as the mother of all the relations it participates in. Each relation node is labeled with both its intentional and informational relation, with the order of relata in the label indicating the linear order in the discourse. Each relation node has up to two daughters: the cue, if any, and the contributor, in the order they appear in the discourse.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 58, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Relational Discourse Analysis", "sec_num": null }, { "text": "Coders analyze each explanation in the corpus and enter their analyses into a database. The corpus consists of 854 clauses comprising 668 segments, for a total of 780 relations. Table 1 summarizes the distribution of different relations, and the number of cued relations in each category. Joints are segments comprising more than one core, but no contributor; clusters are multiunit structures with no recognizable core:contributor relation. (l-B) is a cluster composed of two units (the two clauses), related only at the informational level by a temporal relation. Both clauses describe actions, with the first action description embedded in a matriz (\"You should\"). Cues are much more likely to occur in clusters, where only informational relations occur, than in core:contributor structures, where intentional and informational relations co-occur (X 2 = 33.367, p <.001, df = 1). 
In the following, we will not discuss joints and clusters any further.", "cite_spans": [], "ref_spans": [ { "start": 178, "end": 185, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Relational Discourse Analysis", "sec_num": null }, { "text": "An important result pointed out by (Moser and Moore, 1995) is that cue placement depends on core position. When the core is first and a cue is associated with the relation, the cue never occurs with the core. In contrast, when the core is second, if a cue occurs, it can occur either on the core or on the contributor.", "cite_spans": [ { "start": 35, "end": 58, "text": "(Moser and Moore, 1995)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Relational Discourse Analysis", "sec_num": null }, { "text": "aTo make the example more intelligible, we replaced references to parts of the circuit with the labels partl, part2 and part3. We chose the C4.5 learning algorithm (Quinlan, 1993) because it is well suited to a domain such as ours with discrete valued attributes. Moreover, C4.5 produces decision trees and rule sets, both often used in text generation to implement mappings from function features to forms? Finally, C4.5 is both readily available, and is a benchmark learning algorithm that has been extensively used in NLP applications, e.g. (Litman, 1996; Mooney, 1996; Vander Linden and Di Eugenio, 1996) . As our dataset is small, the results we report are based on cross-validation, which (Weiss and Kulikowski, 1091) recommends as the best method to evaluate decision trees on datasets whose cardinality is in the hundreds. Data for learning should be divided into training and test sets; however, for small datasets this has the disadvantage that a sizable portion of the data is not available for learning. Crossvalidation obviates this problem by running the algorithm N times (N=10 is a typical value): in each run, (N~l)th of the data, randomly chosen, is used as the training set, and the remaining ~th used as the test 3We will discuss only decision trees here. set. The error rate of a tree obtained by using the whole dataset for training is then assumed to be the average error rate on the test set over the N runs. Further, as C4.5 prunes the initial tree it obtains to avoid overfitting, it computes both actual and estimated error rates for the pruned tree; see (Quinlan, 1993, Ch. 4) for details. Thus, below we will report the average estimated error rate on the test set, as computed by 10-fold cross-validation experiments.", "cite_spans": [ { "start": 164, "end": 179, "text": "(Quinlan, 1993)", "ref_id": "BIBREF17" }, { "start": 544, "end": 558, "text": "(Litman, 1996;", "ref_id": "BIBREF6" }, { "start": 559, "end": 572, "text": "Mooney, 1996;", "ref_id": "BIBREF13" }, { "start": 573, "end": 608, "text": "Vander Linden and Di Eugenio, 1996)", "ref_id": "BIBREF23" }, { "start": 1582, "end": 1604, "text": "(Quinlan, 1993, Ch. 4)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Relational Discourse Analysis", "sec_num": null }, { "text": "Each data point in our dataset corresponds to a core:contributor relation, and is characterized by the following features, summarized in Table 2 . Segment Structure. 
Three features capture the global structure of the segment in which the current core:contributor relation appears.", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 144, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "The features", "sec_num": "4.2" }, { "text": "tion of a particular contributor within the larger segment in which it occurs, and encodes the structure of the segment in terms of how many contributors precede and follow the core. For example, contributor (l-D) in Figure 1 is labeled as BIA3-2after, as it is the second contributor following the core in a segment with 1 contributor before and 3 after the core. \u2022 /nten(tional)-structure indicates which contributors in the segment bear the same intentional relations to the core.", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 225, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "\u2022 Infor(mationalJ-structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "Similar to intentional structure, but applied to informational relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "Core:contributor relation. These features more specifically characterize the current core:contributor relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "\u2022 lnten (tionalJ-rel(ation) . One of concede, convince, enable.", "cite_spans": [ { "start": 8, "end": 27, "text": "(tionalJ-rel(ation)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "\u2022 Infor(maiional)-rel(ation). About 30 informational relations have been coded for. However, as preliminary experiments showed that using them individually results in overfitting the data, we classify them according to the four classes proposed in (Moser, Moore, and Glendening, 1996) : causality, similarity, elaboration, temporal. Temporal relations only appear in clusters, thus not in the data we discuss in this paper.", "cite_spans": [ { "start": 248, "end": 284, "text": "(Moser, Moore, and Glendening, 1996)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "\u2022 Syn(tactic)-rel(atiou). Captures whether the core and contributor are independent units (segments or sentences); whether they are coordinated clauses; or which of the two is subordinate to the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "\u2022 Adjacency. Whether core and contributor are adjacent in linear order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "Embedding. These features capture segment embedding, Core-type and Trib-type qualitatively, and A bore/Below quantitatively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "\u2022 Core-type/(ConJTrib(utor)-type. 
Whether the core/the contributor is a segment, or a minimal unit (further subdivided into action, state, matriz).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "\u2022 Above//Belozo encode the number of relations hierarchically above and below the current relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 (Con)Trib(utor)-pos(ition) captures the posi-", "sec_num": null }, { "text": "Initially, we performed learning on all 406 instances of core:contributor relations. We quickly determined that this approach would not lead to useful decision trees. First, the trees we obtained were extremely complex (at least 50 nodes). Second, some of the subtrees corresponded to clearly identifiable subclasses of the data, such as relations with an implicit core, which suggested that we should apply learning to these independently identifiable subclasses. Thus, we subdivided the data into three subsets:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The experiments", "sec_num": "4.3" }, { "text": "\u2022 Core/: core:contributor relations with the core in first position", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The experiments", "sec_num": "4.3" }, { "text": "\u2022 Core~: core:contributor relations with the core in second position", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The experiments", "sec_num": "4.3" }, { "text": "\u2022 Impl(icit)-core: core:contributor relations with an implicit core While this has the disadvantage of smaller training sets, the trees we obtain are more manageable and more meaningful. We ran four sets of experiments. In three of them we predict cue occurrence and in one cue placement. 4 Table 4 summarizes our main results concerning cue occurrence, and includes the error rates associated with different feature sets. We adopt Litman's approach (1906) to determine whether two error rates El and \u00a32 are significantly different. We compute 05% confidence intervals for the two error rates using a t-test. \u00a31 is significantly better than \u00a3~ if the upper bound of the 95% confidence interval for \u00a31 is lower than the lower bound of the 95% confidence interval for g2-~ For each set of experiments, we report the following:", "cite_spans": [ { "start": 450, "end": 456, "text": "(1906)", "ref_id": null } ], "ref_spans": [ { "start": 291, "end": 298, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "The experiments", "sec_num": "4.3" }, { "text": "1. A baseline measure obtained by choosing the majority class. E.g., for Corel 58.9% of the relations are not cued; thus, by deciding to never include a cue, one would be wrong 41.1% of the times.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cue Occurrence", "sec_num": "4.3.1" }, { "text": "power is better than the baseline: as Table 4 makes apparent, individual features do not have much predictive power. For neither Gorcl nor Impl-core does any individual feature perform better than the baseline, and for Core~ only one feature is sufficiently predictive.", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 45, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "3. (One of) the best induced tree(s). 
For each tree, we list the number of nodes, and up to six of the features that appear highest in the tree, with their levels of embedding. 5 Figure 2 shows the tree for Core~ (space constraints prevent us from including figures for each tree). In the figure, the numbers in parentheses indicate the number of cases correctly covered by the leaf, and the number of expected errors at that leaf.", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 187, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "Learning turns out to be most useful for Corel,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "where the error reduction (as percentage) from baseline to the upper bound of the best result is 32%;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "~AII our experiments are run with groupin 9 turned on, so that C4.5 groups values together rather than creating a branch per value. The latter choice always results in trees overfitted to the data in our domain. Using classes of informational relations, rather than individual informational relations, constitutes a sort of a priori grouping. SThe trees that C4.5 generates are right-branching, so this description is fairly adequate. error reduction is 19% for Core2 and only 3% for Impl-core.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "The best tree was obtained partly by informed choice, partly by trial and error. Automatically trying out all the 211 --2048 subsets of features would be possible, but it would require manual examination of about 2,000 sets of results, a daunting task. Thus, for each dataset wc considered only the following subsets of features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "1. All features. This always results in C4.5 selecting a few features (from 3 to 7) for the final tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "2. Subsets built out of the 2 to 4 attributes appearing highest in the tree obtained by running C4.5 on all features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "3. In Table 2 , three features --Trib-pos, In~e~struck, Infor-s~ructconcern segment structure, eight do not. We constructed three subsets by always including the eight features that do not concern segment structure, and adding one of those that does. The trees obtained by including Trib-pos, I~tert-struc~, Infor-struc~ at the same time are in general more complex, and not significantly better than other trees obtained by including only one of these three features. We attribute this to the fact that these features encode partly overlapping information.", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 13, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "Finally, the best tree was obtained as follows. 
We build the set of trees that are statistically equivalent to the tree with the best error rate (i.e., with the lowest error rate upper bound). Among these trees, we choose the one that we deem the most perspicuous in terms of features and of complexity. Namely, we pick the simplest tree with Trib-Pos as the root if one exists, otherwise the simplest tree. Trees that have Trib-Pos as the root are the most useful for text generation, because, given a complex segment, Trib-Pos is the only attribute that unambiguously identifies a specific contributor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "Our results make apparent that the structure of segments plays a fundamental role in determining cue occurrence. One of the three features concerning segment structure (Trib-Pos, Inten-Structure, Infor-StrucZure) appears as the root or just below the root in all trees in Table 4 ; more importantly, this same configuration occurs in all trees equivalent to the best tree (even if the specific feature encoding segment structure may change). The level of embedding in a", "cite_spans": [], "ref_spans": [ { "start": 272, "end": 279, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "The best individual features whose predictive", "sec_num": "2." }, { "text": "Impl-core InLen-rel appears in all trees, confirming the intuition that the speaker's purpose affects cue occurrence. More specifically, in Figure 2 , Inten-reldistinguishes two different speaker purposes, convince and enable. The same split occurs in some of the best trees induced on Core1, with the same outcome: i.e., convince directly correlates with the occurrence of a cue, whereas for enable other features must be taken into account. 6 Informational relations do not appear as often as intentional relations; their discriminatory power seems more relevant for clusters. Preliminary ewe can't draw any conclusions concerning concede,", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 148, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Core l Core2", "sec_num": null }, { "text": "as there are only 24 occurrences of concede out of 406 core:contributor relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core l Core2", "sec_num": null }, { "text": "experiments show that cue occurrence in clusters depends only on informational and syntactic relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core l Core2", "sec_num": null }, { "text": "Finally, Adjacency does not seem to play any substantial role.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core l Core2", "sec_num": null }, { "text": "While cue occurrence and placement are interrelated problems, we performed learning on them separately. First, the issue of placement arises only in the case of Core~; for Core1, cues only occur on the contributor. Second, we attempted experiments on", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cue Placement", "sec_num": "4.3.2" }, { "text": "Core2 that discriminated between occurrence and placement at the same time, and the derived trees were complex and not perspicuous. Thus, we ran an experiment on the 100 cued relations from Core~ to investigate which factors affect placing the cue on the contributor in first position or on the core in second; Table 5 . We ran the same trials discussed above on this dataset. 
In this case, the best tree --see Figure 3 --results from combining the two best individual features, and reduces the error rate by 50%. The most discriminant feature turns out to be the syntactic relation between the contributor and the core. However, segment structure still plays an important role, via Trib-pos.", "cite_spans": [], "ref_spans": [ { "start": 311, "end": 318, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 411, "end": 419, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Cue Placement", "sec_num": "4.3.2" }, { "text": "While the importance of S~ln-rel for placement seems clear, its role concerning occurrence requires further exploration. It is interesting to note that the tree induced on Gorel --the only case in which Synrel is relevant for occurrence --indudes the same distinction as in Figure 3 : namely, if the contributor depends on the core, the contributor must be marked, otherwise other features have to be taken into account. Scott and de Souza (1990) point out that \"there is a strong correlation between the syntactic specification of a complex sentence and its perceived rhetorical structure.\" It seems that certain syntactic structures function as a cue.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 282, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Cue Placement", "sec_num": "4.3.2" }, { "text": "Discussion and Conclusions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "We have presented the results of machine learning experiments concerning cue occurrence and placement. As (Litman, 1996) observes, this sort of empirical work supports the utility of machine learning techniques applied to coded corpora. As our study shows, individual features have no predictive power for cue occurrence. Moreover, it is hard to see how the best combination of individual features could be found by manual inspection.", "cite_spans": [ { "start": 106, "end": 120, "text": "(Litman, 1996)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "Our results also provide guidance for those building text generation systems. This study clearly in-dicates that segment structure, most notably the ordering of core and contributor, is crucial for determining cuc occurrence. Recall that it was only by considering Corel and Core~ relations in distinct datasets that we were able to obtain perspicuous decision trees that signifcantly reduce the error rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "This indicates that the representations produced by discourse planners should distinguish those elements that constitute the core of each discourse segment, in addition to representing the hierarchical structure of segments. 
Note that the notion of core is related to the notions of nucleus in RST, intended effect in (Young and Moore, 1994) , and of point of a move in (Elhadad and McKeown, 1990) , and that text generators representing these notions exist.", "cite_spans": [ { "start": 318, "end": 341, "text": "(Young and Moore, 1994)", "ref_id": "BIBREF26" }, { "start": 370, "end": 397, "text": "(Elhadad and McKeown, 1990)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "Moreover, in order to use the decision trees derived here, decisions about whether or not to make the core explicit and how to order the core and contributor(s) must be made before deciding cue occurrence, e.g., by exploiting other factors such as focus (McKeown, 1985) and a discourse history.", "cite_spans": [ { "start": 254, "end": 269, "text": "(McKeown, 1985)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "Once decisions about core:contributor ordering and cuc occurrence have been made, a generator must still determine where to place cues and select appropriate Icxical items. A major focus of our future research is to explore the relationship between the selection and placement decisions. Elsewhere, we have found that particular lexical items tend to have a preferred location, defined in terms of functional (i.e., core or contributor) and linear (i.e., first or second relatum) criteria (Moser and Moore, 1997) . Thus, if a generator uses decision trees such as the one shown in Figure 3 to determine where a cuc should bc placed, it can then select an appropriate cue from those that can mark the given intentional / informational relations, and are usually placed in that functional-linear location. To evaluate this strategy, we must do further work to understand whether there are important distinctions among cues (e.g., so, because) apart from their different preferred locations. The work of Elhadad (1990) and Knott (1996) will help in answering this question.", "cite_spans": [ { "start": 489, "end": 512, "text": "(Moser and Moore, 1997)", "ref_id": "BIBREF15" }, { "start": 1001, "end": 1015, "text": "Elhadad (1990)", "ref_id": "BIBREF1" }, { "start": 1020, "end": 1032, "text": "Knott (1996)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 581, "end": 589, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "Future work comprises further probing into machine learning techniques, in particular investigating whether other learning algorithms are more appropriate for our problem (Mooney, 1996) , especially algorithms that take into account some a priori knowledge about features and their dependencies.", "cite_spans": [ { "start": 171, "end": 185, "text": "(Mooney, 1996)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null } ], "back_matter": [ { "text": "This research is supported by the Office of Naval Research, Cognitive and Neural Sciences Division (Grants N00014-91-J-1694 and N00014-93-I-0812). 
Thanks to Megan Moser for her prior work on this project and for comments on this paper; to Erin Glendening and Liina Pylkkanen for their coding efforts; to Haiqin Wang for running many experiments; to Giuseppe Carenini and Stefll Briininghaus for discussions about machine learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A computational theory of the function of clue words in argument understanding", "authors": [ { "first": "Robin", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1984, "venue": "Proceedings of COLINGS~", "volume": "", "issue": "", "pages": "251--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Robin. 1984. A computational theory of the function of clue words in argument understand- ing. In Proceedings of COLINGS~, pages 251-258, Stanford, CA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Generating connectives", "authors": [ { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 1990, "venue": "Proceedings of COL-INGgO", "volume": "", "issue": "", "pages": "97--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elhadad, Michael and Kathleen McKeown. 1990. Generating connectives. In Proceedings of COL- INGgO, pages 97-101, Helsinki, Finland.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The role of sequence markers in reading and recall: Comparison of native and normative english speakers", "authors": [ { "first": "Susan", "middle": [ "R" ], "last": "Goldman", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goldman, Susan R. 1988. The role of sequence markers in reading and recall: Comparison of na- tive and normative english speakers. Technical re- port, University of California, Santa Barbara.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Attention, intention, and the structure of discourse", "authors": [ { "first": "Barbara", "middle": [ "J" ], "last": "Grosz", "suffix": "" }, { "first": "Candace", "middle": [ "L" ], "last": "Sidner", "suffix": "" } ], "year": 1986, "venue": "Computational Linguistics", "volume": "12", "issue": "3", "pages": "175--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grosz, Barbara J. and Candace L. Sidner. 1986. At- tention, intention, and the structure of discourse. Computational Linguistics, 12(3):175-204.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "On the coherence and structure of discourse", "authors": [ { "first": "Jerry", "middle": [ "R" ], "last": "Hobbs", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hobbs, Jerry R. 1985. On the coherence and struc- ture of discourse. Technical Report CSLI-85-37, Center for the Study of Language and Informa- tion, Stanford University.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Data-Driver, methodology for motivating a set of coherence relations", "authors": [ { "first": "Alistair", "middle": [], "last": "Knott", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knott, Alistair. 1996. A Data-Driver, methodology for motivating a set of coherence relations. Ph.D. 
thesis, University of Edinburgh.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Cue phrase classification using machine learning", "authors": [ { "first": "Diane", "middle": [ "J" ], "last": "Litman", "suffix": "" } ], "year": 1996, "venue": "Journal of Artificial Intelligence Research", "volume": "5", "issue": "", "pages": "53--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Litman, Diane J. 1996. Cue phrase classification using machine learning. Journal of Artificial In- telligence Research, 5:53-94.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A plan recognition model for subdialogues in conversations", "authors": [ { "first": "Diane", "middle": [ "J" ], "last": "Litman", "suffix": "" }, { "first": "James", "middle": [ "F" ], "last": "Allen", "suffix": "" } ], "year": 1987, "venue": "Cognitive Science", "volume": "11", "issue": "", "pages": "163--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Litman, Diane J. and James F. Allen. 1987. A plan recognition model for subdialogues in conver- sations. Cognitive Science, 11:163-200.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Relational propositions in discourse. Discourse Processes", "authors": [ { "first": "William", "middle": [ "C" ], "last": "Mann", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1986, "venue": "", "volume": "9", "issue": "", "pages": "57--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mann, William C. and Sandra A. Thompson. 1986. Relational propositions in discourse. Discourse Processes, 9:57-90.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Rhetorical Structure Theory: Towards a functional theory of text organization. TEXT", "authors": [ { "first": "William", "middle": [ "C" ], "last": "Mann", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "", "volume": "8", "issue": "", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mann, William C. and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Towards a functional theory of text organization. TEXT, 8(3):243-281.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Tezt Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Tezt", "authors": [ { "first": "Kathleen", "middle": [ "R" ], "last": "Mckeown", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McKeown, Kathleen R. 1985. Tezt Generation: Us- ing Discourse Strategies and Focus Constraints to Generate Natural Language Tezt. Cambridge Uni- versity Press, Cambridge, England.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A contrastive evaluation of functional unification grammar for surface language generation: A case study in the choice of connectives", "authors": [ { "first": "Kathleen", "middle": [ "R" ], "last": "Mckeown", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 1991, "venue": "Natural Language Generation in Artificial Intelligence and Computational Linguistics", "volume": "", "issue": "", "pages": "351--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "McKeown, Kathleen R. and Michael Elhadad. 1991. A contrastive evaluation of functional unification grammar for surface language generation: A case study in the choice of connectives. In C. L. Paris, W. R. 
Swartout, and W. C. Mann, eds., Natu- ral Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic Publishers, Boston, pages 351-396.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The impact of connectives on the memory for expository text", "authors": [ { "first": "Keith", "middle": [], "last": "Millis", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Graesser", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Haberlandt", "suffix": "" } ], "year": 1993, "venue": "Applied Cognitive Psychology", "volume": "7", "issue": "", "pages": "317--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Millis, Keith, Arthur Graesser, and Karl Haberlandt. 1993. The impact of connectives on the memory for expository text. Applied Cognitive Psychology, 7:317-339.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Comparative experiments on disambiguating word senses: An illustration of the role of bias in machine learning", "authors": [ { "first": "Raymond", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 1996, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mooney, Raymond J. 1996. Comparative experi- ments on disambiguating word senses: An illus- tration of the role of bias in machine learning. In Conference on Empirical Methods in Natural Lan- guage Processing.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Investigating cue selection and placement in tutorial discourse", "authors": [ { "first": "Megan", "middle": [], "last": "Moser", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Moore", "suffix": "" } ], "year": 1995, "venue": "Proceedings of ACLgS", "volume": "", "issue": "", "pages": "130--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moser, Megan and Johanna D. Moore. 1995. In- vestigating cue selection and placement in tutorial discourse. In Proceedings of ACLgS, pages 130- 135, Boston, MA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A corpus analysis of discourse cues and relational discourse structure", "authors": [ { "first": "Megan", "middle": [], "last": "Moser", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Moore", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moser, Megan and Johanna D. Moore. 1997. A cor- pus analysis of discourse cues and relational dis- course structure. Submitted for publication.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Instructions for Coding Explanations: Identifying Segments, Relations and Minireal Units", "authors": [ { "first": "Megan", "middle": [], "last": "Moser", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Moore", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Glendening", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moser, Megan, Johanna D. Moore, and Erin Glen- dening. 1996. Instructions for Coding Explana- tions: Identifying Segments, Relations and Mini- real Units. 
Technical Report 96-17, University of Pittsburgh, Department of Computer Science.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "C~.5: Programs for Machine Learning", "authors": [ { "first": "J", "middle": [], "last": "Quinlan", "suffix": "" }, { "first": "", "middle": [], "last": "Ross", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinlan, J. Ross. 1993. C~.5: Programs for Machine Learning. Morgan Kaufmann.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Extended person-machine interface", "authors": [ { "first": "Rachel", "middle": [], "last": "Reichman-Adar", "suffix": "" } ], "year": 1984, "venue": "Artificial Intelligence", "volume": "22", "issue": "2", "pages": "157--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reichman-Adar, Rachel. 1984. Extended person-machine interface. Artificial Intelligence, 22(2):157-218.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Customizing RST for the automatic production of technical manuals", "authors": [ { "first": "Dietmar", "middle": [], "last": "Rssner", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 1992, "venue": "6th International Workshop or* Natural Language Generation", "volume": "", "issue": "", "pages": "199--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "RSsner, Dietmar and Manfred Stede. 1992. Cus- tomizing RST for the automatic production of technical manuals. In R. Dale, E. Hovy, D. RSsner, and O. Stock, eds., 6th International Workshop or* Natural Language Generation, Springer-Verlag, Berlin, pages 199-215.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Discourse Markers. Cambridge University Press", "authors": [ { "first": "Deborah", "middle": [], "last": "Schiffrin", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schiffrin, Deborah. 1987. Discourse Markers. Cam- bridge University Press, New York.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Getting the message across in RST-based text generation", "authors": [ { "first": "Donia", "middle": [], "last": "Scott", "suffix": "" }, { "first": "Clarisse", "middle": [], "last": "Sieckenius De", "suffix": "" }, { "first": "", "middle": [], "last": "Souza", "suffix": "" } ], "year": 1990, "venue": "Current Research in Natural Language Generation", "volume": "", "issue": "", "pages": "47--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott, Donia and Clarisse Sieckenius de Souza. 1990. Getting the message across in RST-based text gen- eration. In R. Dale, C. Mellish, and M. Zock, eds., Current Research in Natural Language Gen- eration. Academic Press, New York, pages 47-73.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Emergent linguistic rules from inducing decision trees: Disambiguating discourse clue words", "authors": [ { "first": "Eric", "middle": [ "V" ], "last": "Siegel", "suffix": "" }, { "first": "Kathleen", "middle": [ "R" ], "last": "Mckeown", "suffix": "" } ], "year": 1994, "venue": "Proceedings of AAAI94", "volume": "", "issue": "", "pages": "820--826", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siegel, Eric V. and Kathleen R. McKeown. 1994. Emergent linguistic rules from inducing decision trees: Disambiguating discourse clue words. 
In Proceedings of AAAI94, pages 820-826.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning micro-planning rules for preventative expressions", "authors": [ { "first": "Keith", "middle": [], "last": "Vander Linden", "suffix": "" }, { "first": "Barbara", "middle": [ "Di" ], "last": "Eugenio", "suffix": "" } ], "year": 1996, "venue": "8th International Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vander Linden, Keith and Barbara Di Eugenio. 1996. Learning micro-planning rules for preven- tative expressions. In 8th International Workshop on Natural Language Generation, Sussex, UK.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Expressing rhetorical relations in instructional text: A case study of the purpose relation", "authors": [ { "first": "", "middle": [], "last": "Vander Linden", "suffix": "" }, { "first": "James", "middle": [ "H" ], "last": "Keith", "suffix": "" }, { "first": "", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "1", "pages": "29--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vander Linden, Keith and James H. Martin. 1995. Expressing rhetorical relations in instructional text: A case study of the purpose relation. Com- putational Linguistics, 21(1):29-58.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Computer Systems that learn: classification and prediction methods from statistics, neural nets, machine learning, and ezpert systems", "authors": [ { "first": "Sholom", "middle": [ "M" ], "last": "Weiss", "suffix": "" }, { "first": "Casimir", "middle": [], "last": "Kulikowski", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiss, Sholom M. and Casimir Kulikowski. 1991. Computer Systems that learn: classification and prediction methods from statistics, neural nets, machine learning, and ezpert systems. Morgan Kaufmann.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "DPOCL: A Principled Approach to Discourse Planning", "authors": [ { "first": "R", "middle": [], "last": "Young", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Michael", "suffix": "" }, { "first": "", "middle": [], "last": "Moore", "suffix": "" } ], "year": 1994, "venue": "7th International Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Young, R. Michael and Johanna D. Moore. 1994. DPOCL: A Principled Approach to Discourse Planning. 
In 7th International Workshop on Natu- ral Language Generation, Kennebunkport, Maine.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Figure 1: The RDA analysis of (1)", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Decision tree for Core2 -occurrence segment, as encoded by Core-type, Trib-type, Above and Below also figures prominently.", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Decision tree for Core~--placement see", "num": null, "type_str": "figure" }, "TABREF0": { "num": null, "type_str": "table", "content": "", "text": ", or on the relation between *Learning Research & Development Center tComputer Science Department, and Learning Research ~z Development Center tlntelllgent Systems Program cue occurrence and placement and specific rhetorical structures (RSsner and Stede, 1992; Scott and de", "html": null }, "TABREF3": { "num": null, "type_str": "table", "content": "
", "text": "", "html": null }, "TABREF4": { "num": null, "type_str": "table", "content": "
(truncated caption fragment: summarizes the cardinal-; table body not recovered in the parse)
", "text": "", "html": null }, "TABREF5": { "num": null, "type_str": "table", "content": "", "text": "Distributions of relations and cue occurrences", "html": null }, "TABREF7": { "num": null, "type_str": "table", "content": "
[Figure/table residue, not recoverable as a table: decision tree for Core2 cue occurrence. Recoverable structure: root split on Trib-Pos, then splits on Inten-Rel (convince vs. enable), Infor-Rel (causality/elaboration vs. similarity), Core-Type (segment vs. action/state/matrix), Below, and Trib-Pos again; leaves labeled Cue and No-Cue with case counts.]
", "text": "Summary of learning results", "html": null }, "TABREF9": { "num": null, "type_str": "table", "content": "
[Figure/table residue, not recoverable as a table: decision tree for Core2 cue placement. Recoverable structure: root split on Syn-Rel, with a legend distinguishing whether the Trib depends on the Core, the Core depends on the Trib, the two are independent clauses, or the two are coordinated phrases; a further split on Trib-Pos; leaves labeled Cue-on-Core and Cue-on-Trib.]
", "text": "Cue placement on Core2", "html": null } } } }