{
"paper_id": "P04-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:44:05.207215Z"
},
"title": "Trainable Sentence Planning for Complex Information Presentation in Spoken Dialog Systems",
"authors": [
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stony Brook University Stony Brook",
"location": {
"postCode": "11794",
"region": "NY",
"country": "U.S.A"
}
},
"email": "stent@cs.sunysb.edu"
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": {
"postCode": "19104",
"region": "PA",
"country": "U.S.A"
}
},
"email": "rjprasad@linc.cis.upenn.edu"
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"postCode": "S1 4DP",
"settlement": "Sheffield",
"country": "U.K"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. We show that trainable sentence planning can produce output comparable to that of MATCH's template-based generator even for quite complex information presentations.",
"pdf_parse": {
"paper_id": "P04-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. We show that trainable sentence planning can produce output comparable to that of MATCH's template-based generator even for quite complex information presentations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One very challenging problem for spoken dialog systems is the design of the utterance generation module. This challenge arises partly from the need for the generator to adapt to many features of the dialog domain, user population, and dialog context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are three possible approaches to generating system utterances. The first is template-based generation, used in most dialog systems today. Template-based generation enables a programmer without linguistic training to program a generator that can efficiently produce high quality output specific to different dialog situations. Its drawbacks include the need to (1) create templates anew by hand for each application; (2) design and maintain a set of templates that work well together in many dialog contexts; and (3) repeatedly encode linguistic constraints such as subject-verb agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second approach is natural language generation (NLG), which divides generation into:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) text (or content) planning, (2) sentence planning, and (3) surface realization. NLG promises portability across domains and dialog contexts by using general rules for each generation module. However, the quality of the output for a particular domain, or a particular dialog context, may be inferior to that of a template-based system unless domain-specific rules are developed or general rules are tuned for the particular domain. Furthermore, full NLG may be too slow for use in dialog systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A third, more recent, approach is trainable generation: techniques for automatically training NLG modules, or hybrid techniques that adapt NLG modules to particular domains or user groups, e.g. (Langkilde, 2000; Mellish, 1998; Walker, Rambow and Rogati, 2002). Open questions about the trainable approach include (1) whether the output quality is high enough, and (2) whether the techniques work well across domains. For example, the training method used in SPoT (Sentence Planner Trainable), as described in (Walker, Rambow and Rogati, 2002), was only shown to work in the travel domain, for the information gathering phase of the dialog, and with simple content plans involving no rhetorical relations.",
"cite_spans": [
{
"start": 194,
"end": 211,
"text": "(Langkilde, 2000;",
"ref_id": "BIBREF0"
},
{
"start": 212,
"end": 226,
"text": "Mellish, 1998;",
"ref_id": "BIBREF8"
},
{
"start": 227,
"end": 259,
"text": "Walker, Rambow and Rogati, 2002)",
"ref_id": "BIBREF16"
},
{
"start": 510,
"end": 543,
"text": "(Walker, Rambow and Rogati, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes trainable sentence planning for information presentation in the MATCH (Multimodal Access To City Help) dialog system. We provide evidence that the trainable approach is feasible by showing (1) that the training technique used for SPoT can be extended to a new domain (restaurant information); (2) that this technique, previously used for information-gathering utterances, can be used for information presentations, namely recommendations and comparisons; and (3) that the quality of the output is comparable to that of a template-based generator previously developed and experimentally evaluated with MATCH users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 2 describes SPaRKy (Sentence Planning with Rhetorical Knowledge), an extension of SPoT that uses rhetorical relations. SPaRKy consists of a randomized sentence plan generator (SPG) and a trainable sentence plan ranker (SPR); these are described in Sections 3 and 4.\n\nstrategy: recommend\nitems: Chanpen Thai\nrelations: justify(nuc:1;sat:2); justify(nuc:1;sat:3); justify(nuc:1;sat:4)\ncontent:\n1. assert(best(Chanpen Thai))\n2. assert(has-att(Chanpen Thai, decor(decent)))\n3. assert(has-att(Chanpen Thai, service(good)))\n4. assert(has-att(Chanpen Thai, cuisine(Thai)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: A content plan for a recommendation for a restaurant in midtown Manhattan.\n\nstrategy: compare3\nitems: Above, Carmine's\nrelations: elaboration(1;2); elaboration(1;3); elaboration(1;4); elaboration(1;5); elaboration(1;6); elaboration(1;7); contrast(2;3); contrast(4;5); contrast(6;7)\ncontent:\n1. assert(exceptional(Above, Carmine's))\n2. assert(has-att(Above, decor(good)))\n3. assert(has-att(Carmine's, decor(decent)))\n4. assert(has-att(Above, service(good)))\n5. assert(has-att(Carmine's, service(good)))\n6. assert(has-att(Above, cuisine(New American)))\n7. assert(has-att(Carmine's, cuisine(Italian)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 2: A content plan for a comparison between restaurants in midtown Manhattan.\n\nSection 5 presents the results of two experiments. The first experiment shows that given a content plan such as that in Figure 1, SPaRKy can select sentence plans that communicate the desired rhetorical relations, are significantly better than a randomly selected sentence plan, and are on average less than 10% worse than a sentence plan ranked highest by human judges. The second experiment shows that the quality of SPaRKy's output is comparable to that of MATCH's template-based generator. We sum up in Section 6.",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 218,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Information presentation in the MATCH system focuses on user-tailored recommendations and comparisons of restaurants. Following the bottom-up approach to text-planning described in (Marcu, 1997; Mellish, 1998), each presentation consists of a set of assertions about a set of restaurants and a specification of the rhetorical relations that hold between them.\n\nChanpen Thai, which is a Thai restaurant, has decent decor. It has good service. It has the best overall quality among the selected restaurants.",
"cite_spans": [
{
"start": 182,
"end": 195,
"text": "(Marcu, 1997;",
"ref_id": "BIBREF7"
},
{
"start": 196,
"end": 210,
"text": "Mellish, 1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SPaRKy Architecture",
"sec_num": "2"
},
{
"text": "Since Chanpen Thai is a Thai restaurant, with good service, and it has decent decor, it has the best overall quality among the selected restaurants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".28 5",
"sec_num": "3"
},
{
"text": "Chanpen Thai, which is a Thai restaurant, with decent decor and good service, has the best overall quality among the selected restaurants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".14 6",
"sec_num": "2.5"
},
{
"text": "Figure 3: Some alternative sentence plan realizations for the recommendation in Figure 1. H = Humans' score. SPR = SPR's score. Figure 4: Some of the alternative sentence plan realizations for the comparison in Figure 2. H = Humans' score. SPR = SPR's score. NR = Not generated or ranked.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 1",
"ref_id": null
},
{
"start": 129,
"end": 137,
"text": "Figure 4",
"ref_id": null
},
{
"start": 213,
"end": 221,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": ".70",
"sec_num": "4"
},
{
"text": "The architecture of the spoken language generation module in MATCH is shown in Figure 5. The dialog manager sends a high-level communicative goal to the SPUR text planner, which selects the content to be communicated using a user model and brevity constraints. SPaRKy, the sentence planner, gets the content plan, and then a sentence plan generator (SPG) generates one or more sentence plans (Figure 7) and a sentence plan ranker (SPR) ranks the generated plans. In order for the SPG to avoid generating sentence plans that are clearly bad, a content-structuring module first finds one or more ways to linearly order the input content plan using principles of entity-based coherence based on rhetorical relations (Knott et al., 2001). It outputs a set of text plan trees (tp-trees), consisting of a set of speech acts to be communicated and the rhetorical relations that hold between them. For example, the two tp-trees in Figure 6 are generated for the content plan in Figure 2. Sentence plans such as alternative 25 in Figure 4 are avoided; it is clearly worse than alternatives 12, 13 and 20 since it neither combines information based on a restaurant entity (e.g. Babbo) nor on an attribute (e.g. decor).",
"cite_spans": [
{
"start": 728,
"end": 747,
"text": "(Knott et al., 2001",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 79,
"end": 87,
"text": "Figure 5",
"ref_id": "FIGREF1"
},
{
"start": 407,
"end": 415,
"text": "Figure 7",
"ref_id": "FIGREF3"
},
{
"start": 939,
"end": 947,
"text": "Figure 6",
"ref_id": null
},
{
"start": 986,
"end": 994,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1038,
"end": 1046,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": ".70",
"sec_num": "4"
},
{
"text": "The top ranked sentence plan output by the SPR is input to the RealPro surface realizer which produces a surface linguistic utterance (Lavoie and Rambow, 1997). A prosody assignment module uses the prior levels of linguistic representation to determine the appropriate prosody for the utterance, and passes a marked-up string to the text-to-speech module.",
"cite_spans": [
{
"start": 134,
"end": 159,
"text": "(Lavoie and Rambow, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": ".70",
"sec_num": "4"
},
{
"text": "As in SPoT, the basis of the SPG is a set of clause-combining operations that operate on tp-trees and incrementally transform the elementary predicate-argument lexico-structural representations (called DSyntS (Melcuk, 1988)) associated with the speech acts on the leaves of the tree. The operations are applied in a bottom-up left-to-right fashion and the resulting representation may contain one or more sentences. The application of the operations yields two parallel structures: (1) a sentence plan tree (sp-tree), a binary tree with leaves labeled by the assertions from the input tp-tree, and interior nodes labeled with clause-combining operations; and (2) one or more DSyntS trees (d-trees) which reflect the parallel operations on the predicate-argument representations.",
"cite_spans": [
{
"start": 193,
"end": 222,
"text": "(called DSyntS (Melcuk, 1988)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Plan Generation",
"sec_num": "3"
},
{
"text": "We generate a random sample of possible sentence plans for each tp-tree, up to a prespecified number of sentence plans, by randomly selecting among the operations according to a probability distribution that favors preferred operations. The choice of operation is further constrained by the rhetorical relation that relates the assertions to be combined, as in other work, e.g. (Scott and de Souza, 1990). In the current work, three RST rhetorical relations (Mann and Thompson, 1987) are used in the content planning phase to express the relations between assertions: the justify relation for recommendations, and the contrast and elaboration relations for comparisons. We added another relation to be used during the content-structuring phase, called infer, which holds for combinations of speech acts for which there is no rhetorical relation expressed in the content plan, as in (Marcu, 1997). By explicitly representing the discourse structure of the information presentation, we can generate information presentations with considerably more internal complexity than those generated in (Walker, Rambow and Rogati, 2002) and eliminate those that violate certain coherence principles, as described in Section 2.",
"cite_spans": [
{
"start": 380,
"end": 405,
"text": "(Scott and de Souza, 1990",
"ref_id": "BIBREF13"
},
{
"start": 461,
"end": 486,
"text": "(Mann and Thompson, 1987)",
"ref_id": "BIBREF6"
},
{
"start": 885,
"end": 898,
"text": "(Marcu, 1997)",
"ref_id": "BIBREF7"
},
{
"start": 1094,
"end": 1127,
"text": "(Walker, Rambow and Rogati, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Plan Generation",
"sec_num": "3"
},
{
"text": "The clause-combining operations are general operations similar to aggregation operations used in other research (Rambow and Korelsky, 1992; Danlos, 2000); constraints on their use are described below. merge applies to two clauses with identical matrix verbs and all but one identical arguments. The clauses are combined and the non-identical arguments coordinated. For example, merge(Above has good service;Carmine's has good service) yields Above and Carmine's have good service. merge applies only for the relations infer and contrast.",
"cite_spans": [
{
"start": 112,
"end": 139,
"text": "(Rambow and Korelsky, 1992;",
"ref_id": "BIBREF10"
},
{
"start": 140,
"end": 153,
"text": "Danlos, 2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Plan Generation",
"sec_num": "3"
},
{
"text": "with-reduction is treated as a kind of \"verbless\" participial clause formation in which the participial clause is interpreted with the subject of the unreduced clause. For example, with-reduction(Above is a New American restaurant;Above has good decor) yields Above is a New American restaurant, with good decor. with-reduction uses two syntactic constraints: (a) the subjects of the clauses must be identical, and (b) the clause that undergoes the participial formation must have a have-possession predicate. In the example above, for instance, the Above is a New American restaurant clause cannot undergo participial formation since the predicate is not one of have-possession. with-reduction applies only for the relations infer and justify.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Plan Generation",
"sec_num": "3"
},
{
"text": "relative-clause combines two clauses with identical subjects, using the second clause to relativize the first clause's subject. For example, relative-clause(Chanpen Thai is a Thai restaurant, with decent decor and good service;Chanpen Thai has the best overall quality among the selected restaurants) yields Chanpen Thai, which is a Thai restaurant, with decent decor and good service, has the best overall quality among the selected restaurants. relative-clause also applies only for the relations infer and justify.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Plan Generation",
"sec_num": "3"
},
{
"text": "cue-word inserts a discourse connective (one of since, however, while, and, but, and on the other hand) between the two clauses to be combined. cue-word conjunction combines two distinct clauses into a single sentence with a coordinating or subordinating conjunction (e.g. Above has decent decor BUT Carmine's has good decor), while cue-word insertion inserts a cue word at the start of the second clause, producing two separate sentences (e.g. Carmine's is an Italian restaurant. HOWEVER, Above is a New American restaurant). The choice of cue word is dependent on the rhetorical relation holding between the clauses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Plan Generation",
"sec_num": "3"
},
{
"text": "Finally, period applies to two clauses to be treated as two independent sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Plan Generation",
"sec_num": "3"
},
{
"text": "Note that a tp-tree can have very different realizations, depending on the operations of the SPG. For example, the second tp-tree in Figure 6 yields both Alt 11 and Alt 13 in Figure 4. However, Alt 13 is more highly rated than Alt 11. The sp-tree and d-tree produced by the SPG for Alt 13 are shown in Figures 7 and 8. The composite labels on the interior nodes of the sp-tree indicate both the clause-combining operation that was applied and the rhetorical relation holding between the combined clauses. Figure 8 shows that the SPG treats the period operation as part of the lexico-structural representation for the d-tree. After sentence planning, the d-tree is split into multiple d-trees at period nodes; these are sent to the RealPro surface realizer. Separately, the SPG also handles referring expression generation by converting proper names to pronouns when they appear in the previous utterance. The rules are applied locally, across adjacent sequences of utterances (Brennan et al., 1987). Referring expressions are manipulated in the d-trees, either intrasententially during the creation of the sp-tree, or intersententially, if the full sp-tree contains any period operations. The third and fourth sentences for Alt 13 in Figure 4 show the conversion of a named restaurant (Carmine's) to a pronoun.",
"cite_spans": [
{
"start": 846,
"end": 868,
"text": "(Brennan et al., 1987)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 133,
"end": 139,
"text": "Figure",
"ref_id": null
},
{
"start": 175,
"end": 183,
"text": "Figure 4",
"ref_id": null
},
{
"start": 303,
"end": 318,
"text": "Figures 7 and 8",
"ref_id": "FIGREF3"
},
{
"start": 375,
"end": 383,
"text": "Figure 8",
"ref_id": "FIGREF4"
},
{
"start": 1105,
"end": 1113,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Plan Generation",
"sec_num": "3"
},
{
"text": "The SPR takes as input a set of sp-trees generated by the SPG and ranks them. The SPR's rules for ranking sp-trees are learned from a labeled set of sentence-plan training examples using the RankBoost algorithm (Schapire, 1999). Examples and Feedback: To apply RankBoost, a set of human-rated sp-trees are encoded in terms of a set of features. We started with a set of 30 representative content plans for each strategy. The SPG produced as many as 20 distinct sp-trees for each content plan. The sentences, realized by RealPro from these sp-trees, were then rated by two expert judges on a scale from 1 to 5, and the ratings averaged. Each sp-tree was an example input for RankBoost, with each corresponding rating its feedback.",
"cite_spans": [
{
"start": 211,
"end": 227,
"text": "(Schapire, 1999)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training the Sentence Plan Ranker",
"sec_num": "4"
},
{
"text": "Features used by RankBoost: RankBoost requires each example to be encoded as a set of real-valued features (binary features have values 0 and 1). A strength of RankBoost is that the set of features can be very large. We used 7024 features for training the SPR. These features count the number of occurrences of certain structural configurations in the sp-trees and the d-trees, in order to capture declaratively decisions made by the randomized SPG, as in (Walker, Rambow and Rogati, 2002). The features were automatically generated using feature templates. For this experiment, we used two classes of feature: (1) Rule-features: These features are derived from the sp-trees and represent the ways in which merge, infer and cue-word operations are applied to the tp-trees. These feature names start with \"rule\". (2) Sent-features: These features are derived from the DSyntSs, and describe the deep-syntactic structure of the utterance, including the chosen lexemes. As a result, some may be domain specific. These feature names are prefixed with \"sent\".",
"cite_spans": [
{
"start": 456,
"end": 489,
"text": "(Walker, Rambow and Rogati, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training the Sentence Plan Ranker",
"sec_num": "4"
},
{
"text": "We now describe the feature templates used in the discovery process. Three templates were used for both sp-tree and d-tree features; two were used only for sp-tree features. Local feature templates record structural configurations local to a particular node (its ancestors, daughters etc.). Global feature templates, which are used only for sp-tree features, record properties of the entire sp-tree. We discard features that occur fewer than 10 times to avoid those specific to particular text plans. Local feature templates are applied to all nodes in a sp-tree or d-tree (except that the leaf feature is not used for d-trees); the value of the resulting feature is the number of occurrences of the described configuration in the tree. For each node in the tree, traversal features record the preorder traversal of the subtree rooted at that node, for all subtrees of all depths. An example is the feature \"rule traversal assertcom-list exceptional\" (with value 1) of the tree in Figure 7 . Sister features record all consecutive sister nodes. An example is the feature \"rule sisters PERIOD infer RELATIVE CLAUSE infer\" (with value 1) of the tree in Figure 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 981,
"end": 989,
"text": "Figure 7",
"ref_id": "FIGREF3"
},
{
"start": 1151,
"end": 1159,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Training the Sentence Plan Ranker",
"sec_num": "4"
},
{
"text": "For each node in the tree, ancestor features record all the initial subpaths of the path from that node to the root.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the Sentence Plan Ranker",
"sec_num": "4"
},
{
"text": "An example is the feature \"rule ancestor PERIOD contrast*PERIOD infer\" (with value 1) of the tree in Figure 7. Finally, leaf features record all initial substrings of the frontier of the sp-tree. For example, the sp-tree of Figure 7 has value 1 for the feature \"leaf #assert-com-list exceptional#assert-com-cuisine\".",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 109,
"text": "Figure 7",
"ref_id": "FIGREF3"
},
{
"start": 225,
"end": 233,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Training the Sentence Plan Ranker",
"sec_num": "4"
},
{
"text": "Global features apply only to the sp-tree. They record, for each sp-tree and for each clause-combining operation labeling a non-frontier node, (1) the minimal number of leaves dominated by a node labeled with that operation in that tree (MIN); (2) the maximal number of leaves dominated by a node labeled with that operation (MAX); and (3) the average number of leaves dominated by a node labeled with that operation (AVG). For example, the sp-tree in Figure 7 has value 3 for \"PERIOD infer max\", value 2 for \"PERIOD infer min\" and value 2.5 for \"PERIOD infer avg\".",
"cite_spans": [],
"ref_spans": [
{
"start": 450,
"end": 458,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Training the Sentence Plan Ranker",
"sec_num": "4"
},
{
"text": "We report two sets of experiments. The first experiment tests the ability of the SPR to select a high quality sentence plan from a population of sentence plans randomly generated by the SPG. Because the discriminatory power of the SPR is best tested by the largest possible population of sentence plans, we use 2-fold cross validation for this experiment. The second experiment compares SPaRKy to template-based generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "Cross Validation Experiment: We repeatedly tested SPaRKy on the half of the corpus of 1756 sp-trees held out as test data for each fold. The evaluation metric is the humanassigned score for the variant that was rated highest by SPaRKy for each text plan for each task/user combination. We evaluated SPaRKy on the test sets by comparing three data points for each text plan: HUMAN (the score of the top-ranked sentence plan); SPARKY (the score of the SPR's selected sentence); and RANDOM (the score of a sentence plan randomly selected from the alternate sentence plans).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "We report results separately for comparisons between two entities and among three or more entities. These two types of comparison are generated using different strategies in the SPG, and can produce text that is very different both in terms of length and structure. Table 1 summarizes the difference between SPaRKy, HUMAN and RANDOM for recommendations, comparisons between two entities and comparisons between three or more entities. For all three presentation types, a paired t-test comparing SPaRKy to HUMAN and to RANDOM showed that SPaRKy was significantly better than RANDOM (df = 59, p < .001) and significantly worse than HUMAN (df = 59, p < .001). This demonstrates that the use of a trainable sentence planner can lead to sentence plans that are significantly better than baseline (RANDOM), with less human effort than programming templates.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "For each content plan input to SPaRKy, the judges also rated the output of a template-based generator for MATCH. This template-based generator performs text planning and sentence planning (the focus of the current paper), including some discourse cue insertion, clause combining and referring expression generation; the templates themselves are described in . Because the templates are highly tailored to this domain, this generator can be expected to perform well. Example template-based and SPaRKy outputs for a comparison between three or more items are shown in Figure 9. Table 2 shows the mean HUMAN scores for the template-based sentence planning. A paired t-test comparing HUMAN and template-based scores showed that HUMAN was significantly better than template-based sentence planning only for compare2 (df = 29, t = 6.2, p < .001). The judges evidently did not like the template for comparisons between two items. A paired t-test comparing SPaRKy and template-based sentence planning showed that template-based sentence planning was significantly better than SPaRKy only for recommendations (df = 29, t = 3.55, p < .01). These results demonstrate that trainable sentence planning shows promise for producing output comparable to that of a template-based generator, with less programming effort and more flexibility.",
"cite_spans": [],
"ref_spans": [
{
"start": 564,
"end": 572,
"text": "Figure 9",
"ref_id": null
},
{
"start": 575,
"end": 582,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Comparison with template generation:",
"sec_num": null
},
{
"text": "The standard deviation for all three template-based strategies was wider than for HUMAN or SPaRKy, indicating that there may be content-specific aspects to the sentence planning done by SPaRKy that contribute to output variation. The data show this to be correct; SPaRKy learned content-specific preferences about clause combining and discourse cue insertion that a template-based generator cannot easily model, but that a trainable sentence planner can.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with template generation:",
"sec_num": null
},
{
"text": "Realization H",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "Template Among the selected restaurants, the following offer exceptional overall value. Uguale's price is 33 dollars. It has good decor and very good service. It's a French, Italian restaurant. Da Andrea's price is 28 dollars. It has good decor and very good service. It's an Italian restaurant. John's Pizzeria's price is 20 dollars. It has mediocre decor and decent service. It's an Italian, Pizza restaurant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "Da Andrea, Uguale, and John's Pizzeria offer exceptional value among the selected restaurants. Da Andrea is an Italian restaurant, with very good service, it has good decor, and its price is 28 dollars. John's Pizzeria is an Italian, Pizza restaurant. It has decent service. It has mediocre decor. Its price is 20 dollars. Uguale is a French, Italian restaurant, with very good service. It has good decor, and its price is 33 dollars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SPaRKy",
"sec_num": null
},
{
"text": "Figure 9: Comparisons between 3 or more items. H = Humans' score.\n\nFor example, Table 3 shows the nine rules generated on the first test fold which have the largest negative impact on the final RankBoost score (above the double line) and the largest positive impact on the final RankBoost score (below the double line), for comparisons between three or more entities. The rule with the largest positive impact shows that SPaRKy learned to prefer that justifications involving price be merged with other information using a conjunction. These rules are also specific to presentation type. Averaging over both folds of the experiment, the number of unique features appearing in rules is 708, of which 66 appear in the rule sets for two presentation types and 9 appear in the rule sets for all three presentation types. There are on average 214 rule features, 428 sentence features and 26 leaf features. The majority of the features are ancestor features (319) followed by traversal features (264) and sister features (60). The remainder of the features (67) are for specific lexemes.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "To sum up, this experiment shows that the ability to model the interactions between domain content, task and presentation type is a strength of the trainable approach to sentence planning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "Table 3: The nine rules generated on the first test fold which have the largest negative impact on the final RankBoost score (above the double line) and the largest positive impact on the final RankBoost score (below the double line), for Compare3. α_s represents the increment or decrement associated with satisfying the condition. This paper shows that the training technique used in SPoT can be easily extended to a new",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "domain and used for information presentation as well as information gathering. Previous work on SPoT also compared trainable sentence planning to a template-based generator that had previously been developed for the same application (Rambow et al., 2001). The evaluation results for SPaRKy (1) support the results for SPoT by showing that trainable sentence generation can produce output comparable to template-based generation, even for complex information presentations such as extended comparisons;",
"cite_spans": [
{
"start": 233,
"end": 253,
"text": "(Rambow et al., 2001",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "(2) show that trainable sentence generation is sensitive to variations in domain application, presentation type, and even human preferences about the arrangement of particular types of information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Although the probability distribution here is handcrafted based on assumed preferences for operations such as merge, relative-clause and with-reduction, it might also be possible to learn this probability distribution from the data by training in two phases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank AT&T for supporting this research, and the anonymous reviewers for their helpful comments on this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Forest-based statistical sentence generation",
"authors": [
{
"first": "I",
"middle": [],
"last": "Langkilde",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. NAACL 2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Langkilde. Forest-based statistical sentence gen- eration. In Proc. NAACL 2000, 2000.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A centering approach to pronouns",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Brennan",
"suffix": ""
},
{
"first": "M",
"middle": [
"W"
],
"last": "Friedman",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Pollard",
"suffix": ""
}
],
"year": 1987,
"venue": "Proc. 25th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "155--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. E. Brennan, M. Walker Friedman, and C. J. Pol- lard. A centering approach to pronouns. In Proc. 25th Annual Meeting of the ACL, Stanford, pages 155-162, 1987.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "G-TAG: A lexicalized formalism for text generation inspired by tree adjoining grammar",
"authors": [
{
"first": "L",
"middle": [],
"last": "Danlos",
"suffix": ""
}
],
"year": 2000,
"venue": "Tree Adjoining Grammars: Formalisms, Linguistic Analysis, and Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Danlos. 2000. G-TAG: A lexicalized formal- ism for text generation inspired by tree ad- joining grammar. In Tree Adjoining Grammars: Formalisms, Linguistic Analysis, and Processing. CSLI Publications.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "MATCH: An architecture for multimodal dialogue systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vasireddy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ehlen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Whittaker",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Maloor",
"suffix": ""
}
],
"year": 2002,
"venue": "Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnston, S. Bangalore, G. Vasireddy, A. Stent, P. Ehlen, M. Walker, S. Whittaker, and P. Mal- oor. MATCH: An architecture for multimodal di- alogue systems. In Annual Meeting of the ACL, 2002.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Beyond Elaboration: the interaction of relations and focus in coherent text",
"authors": [
{
"first": "A",
"middle": [],
"last": "Knott",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Oberlander",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "O'Donnell",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mellish",
"suffix": ""
}
],
"year": 2001,
"venue": "Text Representation: linguistic and psycholinguistic aspects",
"volume": "",
"issue": "",
"pages": "181--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Knott, J. Oberlander, M. O'Donnell and C. Mel- lish. Beyond Elaboration: the interaction of rela- tions and focus in coherent text. In Text Repre- sentation: linguistic and psycholinguistic aspects, pages 181-196, 2001.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A fast and portable realizer for text generation systems",
"authors": [
{
"first": "B",
"middle": [],
"last": "Lavoie",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the 3rd Conference on Applied Natural Language Processing, ANLP97",
"volume": "",
"issue": "",
"pages": "265--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Lavoie and O. Rambow. A fast and portable re- alizer for text generation systems. In Proc. of the 3rd Conference on Applied Natural Language Pro- cessing, ANLP97, pages 265-268, 1997.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Rhetorical structure theory: A framework for the analysis of texts",
"authors": [
{
"first": "C",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Mann and S.A. Thompson. Rhetorical struc- ture theory: A framework for the analysis of texts. Technical Report RS-87-190, USC/Information Sciences Institute, 1987.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "From local to global coherence: a bottom-up approach to text planning",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the National Conference on Artificial Intelligence (AAAI'97)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu. From local to global coherence: a bottom-up approach to text planning. In Proceed- ings of the National Conference on Artificial In- telligence (AAAI'97), 1997.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Experiments using stochastic search for text planning",
"authors": [
{
"first": "C",
"middle": [],
"last": "Mellish",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Knott",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Oberlander",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "O'Donnell",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of INLG-98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Mellish, A. Knott, J. Oberlander, and M. O'Donnell. Experiments using stochastic search for text planning. In Proceedings of INLG-98. 1998.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dependency Syntax: Theory and Practice",
"authors": [
{
"first": "I",
"middle": [
"A"
],
"last": "Mel\u010duk",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. A. Mel\u010duk. Dependency Syntax: Theory and Prac- tice. SUNY, Albany, New York, 1988.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Applied text generation",
"authors": [
{
"first": "O",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Korelsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Third Conference on Applied Natural Language Processing",
"volume": "92",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Rambow and T. Korelsky. Applied text genera- tion. In Proceedings of the Third Conference on Applied Natural Language Processing, ANLP92, pages 40-47, 1992.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Evaluating a Trainable Sentence Planner for a Spoken Dialogue Travel System",
"authors": [
{
"first": "O",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rogati",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
}
],
],
"year": 2001,
"venue": "Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Rambow, M. Rogati, and M. A. Walker. Evaluating a trainable sentence planner for a spoken dialogue travel system. In Meeting of the ACL, 2001.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A brief introduction to boosting",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the 16th IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. E. Schapire. A brief introduction to boosting. In Proc. of the 16th IJCAI, 1999.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Getting the message across in RST-based text generation",
"authors": [
{
"first": "D",
"middle": [
"R"
],
"last": "Scott",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Sieckenius de Souza",
"suffix": ""
}
],
"year": 1990,
"venue": "Current Research in Natural Language Generation",
"volume": "",
"issue": "",
"pages": "47--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. R. Scott and C. Sieckenius de Souza. Getting the message across in RST-based text generation. In Current Research in Natural Language Gener- ation, pages 47-73, 1990.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "User-tailored generation for spoken dialogue: An experiment",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Whittaker",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Maloor",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICSLP 2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stent, M. Walker, S. Whittaker, and P. Maloor. User-tailored generation for spoken dialogue: An experiment. In Proceedings of ICSLP 2002., 2002.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Speech-Plans: Generating evaluative responses in spoken dialogue",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Whittaker",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Maloor",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vasireddy",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of INLG-02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. Walker, S. J. Whittaker, A. Stent, P. Mal- oor, J. D. Moore, M. Johnston, and G. Vasireddy. Speech-Plans: Generating evaluative responses in spoken dialogue. In Proceedings of INLG-02., 2002.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Training a sentence planner for spoken dialogue using boosting. Computer Speech and Language: Special Issue on Spoken Language Generation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rogati",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Walker, O. Rambow, and M. Rogati. Training a sentence planner for spoken dialogue using boost- ing. Computer Speech and Language: Special Is- sue on Spoken Language Generation, 2002.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"text": "A dialog system with a spoken language generator. The output is a content plan for a recommendation or comparison such as those in Figures 1 and 2.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Figure 6: Two tp-trees for alternative 13 in Figure 4.",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Sentence plan tree (sp-tree) for alternative 13 in Figure 4",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Dependency tree (d-tree) for alternative 13 in Figure 4. The interior nodes of the tree indicate the clause-combining relation selected to communicate the specified rhetorical relation.",
"num": null,
"type_str": "figure"
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td>Summary of Recommend, Compare2 and Compare3 results (N = 180)</td></tr><tr><td>There are four types of local feature template: traversal features, sister features, ancestor features, and leaf features.</td></tr></table>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td>Summary of template-based generation results (N = 180)</td></tr></table>",
"html": null
}
}
}
}