{ "paper_id": "P06-1034", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:24:15.667259Z" }, "title": "Learning to Generate Naturalistic Utterances Using Reviews in Spoken Dialogue Systems", "authors": [ { "first": "Ryuichiro", "middle": [], "last": "Higashinaka", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Corporation", "location": {} }, "email": "" }, { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": {} }, "email": "rjprasad@linc.cis.upenn.edu" }, { "first": "Marilyn", "middle": [ "A" ], "last": "Walker", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sheffield", "location": {} }, "email": "walker@dcs.shef.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Spoken language generation for dialogue systems requires a dictionary of mappings between semantic representations of concepts the system wants to express and realizations of those concepts. Dictionary creation is a costly process; it is currently done by hand for each dialogue domain. We propose a novel unsupervised method for learning such mappings from user reviews in the target domain, and test it on restaurant reviews. We test the hypothesis that user reviews that provide individual ratings for distinguished attributes of the domain entity make it possible to map review sentences to their semantic representation with high precision. Experimental analyses show that the mappings learned cover most of the domain ontology, and provide good linguistic variation. A subjective user evaluation shows that the consistency between the semantic representations and the learned realizations is high and that the naturalness of the realizations is higher than a hand-crafted baseline. An example user review (we8there.com) Ratings Food=5, Service=5, Atmosphere=5, Value=5, Overall=5 Review comment The best Spanish food in New York. I am from Spain and I had my 28th birthday there and we all had a great time. Salud! \u2193", "pdf_parse": { "paper_id": "P06-1034", "_pdf_hash": "", "abstract": [ { "text": "Spoken language generation for dialogue systems requires a dictionary of mappings between semantic representations of concepts the system wants to express and realizations of those concepts. Dictionary creation is a costly process; it is currently done by hand for each dialogue domain. We propose a novel unsupervised method for learning such mappings from user reviews in the target domain, and test it on restaurant reviews. We test the hypothesis that user reviews that provide individual ratings for distinguished attributes of the domain entity make it possible to map review sentences to their semantic representation with high precision. Experimental analyses show that the mappings learned cover most of the domain ontology, and provide good linguistic variation. A subjective user evaluation shows that the consistency between the semantic representations and the learned realizations is high and that the naturalness of the realizations is higher than a hand-crafted baseline. An example user review (we8there.com) Ratings Food=5, Service=5, Atmosphere=5, Value=5, Overall=5 Review comment The best Spanish food in New York. I am from Spain and I had my 28th birthday there and we all had a great time. Salud! 
\u2193", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "One obstacle to the widespread deployment of spoken dialogue systems is the cost involved with hand-crafting the spoken language generation module. Spoken language generation requires a dictionary of mappings between semantic representations of concepts the system wants to express and realizations of those concepts. Dictionary creation is a costly process: an automatic method for creating them would make dialogue technology more scalable. A secondary benefit is that a learned dictionary may produce more natural and colloquial utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a novel method for mining user reviews to automatically acquire a domain specific generation dictionary for information presentation in a dialogue system. Our hypothesis is that reviews that provide individual ratings for various distinguished attributes of review entities can be used to map review sentences to a semantic rep-\u2193 Mapping between a semantic representation (a set of relations) and a syntactic structure (DSyntS) Figure 1 : Example of procedure for acquiring a generation dictionary mapping.", "cite_spans": [], "ref_spans": [ { "start": 439, "end": 447, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "resentation. Figure 1 shows a user review in the restaurant domain, where we hypothesize that the user rating food=5 indicates that the semantic representation for the sentence \"The best Spanish food in New York\" includes the relation 'RESTAU-RANT has foodquality=5.'", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We apply the method to extract 451 mappings from restaurant reviews. Experimental analyses show that the mappings learned cover most of the domain ontology, and provide good linguistic variation. A subjective user evaluation indicates that the consistency between the semantic representations and the learned realizations is high and that the naturalness of the realizations is significantly higher than a hand-crafted baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Section 2 provides a step-by-step description of the method. Sections 3 and 4 present the evaluation results. Section 5 covers related work. Section 6 summarizes and discusses future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our automatically created generation dictionary consists of triples (U, R, S) representing a mapping between the original utterance U in the user review, its semantic representation R(U), and its syntactic structure S(U). 
Although templates are widely used in many practical systems (Seneff and Polifroni, 2000; Theune, 2003) , we derive syntactic structures to represent the potential realizations, in order to allow aggregation and other syntactic transformations of utterances, as well as context-specific prosody assignment (Walker et al., 2003; Moore et al., 2004) .", "cite_spans": [ { "start": 283, "end": 311, "text": "(Seneff and Polifroni, 2000;", "ref_id": "BIBREF15" }, { "start": 312, "end": 325, "text": "Theune, 2003)", "ref_id": "BIBREF16" }, { "start": 529, "end": 550, "text": "(Walker et al., 2003;", "ref_id": "BIBREF18" }, { "start": 551, "end": 570, "text": "Moore et al., 2004)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Learning a Generation Dictionary", "sec_num": "2" }, { "text": "The method is outlined briefly in Fig. 1 and described below. It comprises the following steps:", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 40, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Learning a Generation Dictionary", "sec_num": "2" }, { "text": "1. Collect user reviews on the web to create a population of utterances U.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning a Generation Dictionary", "sec_num": "2" }, { "text": "2. To derive semantic representations R(U): \u2022 Identify distinguished attributes and construct a domain ontology; \u2022 Specify lexicalizations of attributes; \u2022 Scrape webpages' structured data for named-entities; \u2022 Tag named-entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "To derive semantic representations R(U):", "sec_num": "2." }, { "text": "3. Derive syntactic representations S(U). 4. Filter inappropriate mappings. 5. Add mappings (U, R, S) to dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "To derive semantic representations R(U):", "sec_num": "2." }, { "text": "We created a corpus of restaurant reviews by scraping 3,004 user reviews of 1,810 restaurants posted at we8there.com (http://www.we8there.com/), where each individual review includes a 1-to-5 Likert-scale rating of different restaurant attributes. The corpus consists of 18,466 sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating the corpus", "sec_num": "2.1" }, { "text": "The distinguished attributes are extracted from the webpages for each restaurant entity. They include attributes that the users are asked to rate, i.e. food, service, atmosphere, value, and overall, which have scalar values. In addition, other attributes are extracted from the webpage, such as the name, foodtype and location of the restaurant, which have categorical values. The name attribute is assumed to correspond to the restaurant entity. [Table 1: Lexicalizations for distinguished attributes. food: food, meal; service: service, staff, waitstaff, wait staff, server, waiter, waitress; atmosphere: atmosphere, decor, ambience, decoration; value: value, price, overprice, pricey, expensive, inexpensive, cheap, affordable, afford; overall: recommend, place, experience, establishment.] Given the distinguished attributes, a", "cite_spans": [], "ref_spans": [ { "start": 496, "end": 759, "text": ". 
Lexicalization food food, meal service service, staff, waitstaff, wait staff, server, waiter, waitress atmosphere atmosphere, decor, ambience, decoration value value, price, overprice, pricey, expensive, inexpensive, cheap, affordable, afford overall", "ref_id": null }, { "start": 804, "end": 811, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Deriving semantic representations", "sec_num": "2.2" }, { "text": "simple domain ontology can be automatically derived by assuming that a meronymy relation, represented by the predicate 'has', holds between the entity type (RESTAURANT) and the distinguished attributes. Thus, the domain ontology consists of the relations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deriving semantic representations", "sec_num": "2.2" }, { "text": "RESTAURANT has foodquality; RESTAURANT has servicequality; RESTAURANT has valuequality; RESTAURANT has atmospherequality; RESTAURANT has overallquality; RESTAURANT has foodtype; RESTAURANT has location", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deriving semantic representations", "sec_num": "2.2" }, { "text": "We assume that, although users may discuss other attributes of the entity, at least some of the utterances in the reviews realize the relations specified in the ontology. Our problem then is to identify these utterances. We test the hypothesis that, if an utterance U contains named-entities corresponding to the distinguished attributes, then R for that utterance includes the relation concerning that attribute in the domain ontology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deriving semantic representations", "sec_num": "2.2" }, { "text": "We define named-entities for lexicalizations of the distinguished attributes, starting with the seed word for that attribute on the webpage (Table 1) . 1 For named-entity recognition, we use GATE (Cunningham et al., 2002) , augmented with named-entity lists for locations, food types, restaurant names, and food subtypes (e.g. pizza), scraped from the we8there webpages.", "cite_spans": [ { "start": 196, "end": 221, "text": "(Cunningham et al., 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 140, "end": 149, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Deriving semantic representations", "sec_num": "2.2" }, { "text": "We also hypothesize that the rating given for the distinguished attribute specifies the scalar value of the relation. For example, a sentence containing food or meal is assumed to realize the relation 'RESTAURANT has foodquality', and the value of the foodquality attribute is assumed to be the value specified in the user rating for that attribute, e.g. 'RESTAURANT has foodquality = 5' in Fig. 1 . Similarly, the other relations in Fig. 1 are derived from the ratings for the corresponding attributes.", "cite_spans": [], "ref_spans": [ { "start": 392, "end": 398, "text": "Fig. 1", "ref_id": null }, { "start": 435, "end": 441, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Deriving semantic representations", "sec_num": "2.2" }, { "text": "We adopt Deep Syntactic Structures (DSyntSs) as a format for syntactic structures because they can be realized by the fast portable realizer RealPro (Lavoie and Rambow, 1997) . 
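Before turning to parsing, the rating-to-relation step of Section 2.2 can be sketched schematically as follows (a simplified illustration with abbreviated lexicalization lists from Table 1 and plain token matching; the actual system uses GATE for named-entity recognition):

```python
# Illustrative sketch only: single-token matching misses multiword lexemes
# such as "wait staff", and the real pipeline tags named-entities with GATE.
LEXICALIZATIONS = {
    "foodquality": {"food", "meal"},
    "servicequality": {"service", "staff", "waitstaff", "server", "waiter", "waitress"},
    "atmospherequality": {"atmosphere", "decor", "ambience", "decoration"},
    "valuequality": {"value", "price", "pricey", "expensive", "inexpensive", "cheap", "affordable"},
    "overallquality": {"recommend", "place", "experience", "establishment"},
}

def derive_relations(sentence, ratings):
    """Map a review sentence to relations 'RESTAURANT has <attr> = <rating>'."""
    tokens = {tok.strip(".,!?").lower() for tok in sentence.split()}
    return {(attr, ratings[attr])
            for attr, lexemes in LEXICALIZATIONS.items()
            if tokens & lexemes}

ratings = {"foodquality": 5, "servicequality": 5, "atmospherequality": 5,
           "valuequality": 5, "overallquality": 5}
print(derive_relations("The best Spanish food in New York.", ratings))
# -> {('foodquality', 5)}
```
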
Since DSyntSs are a type of dependency structure, we first process the sentences with Minipar (Lin, 1998) , and then convert Minipar's representation into DSyntS. Since user reviews are different from the newspaper articles on which Minipar was trained, the output of Minipar can be inaccurate, leading to failures in conversion. We check whether conversion is successful in the filtering stage.", "cite_spans": [ { "start": 149, "end": 174, "text": "(Lavoie and Rambow, 1997)", "ref_id": "BIBREF8" }, { "start": 271, "end": 282, "text": "(Lin, 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Parsing and DSyntS conversion", "sec_num": "2.3" }, { "text": "The goal of filtering is to identify U that realize the distinguished attributes and to guarantee high precision for the learned mappings. Recall is less important since systems need to convey requested information as accurately as possible. Our procedure for deriving semantic representations is based on the hypothesis that if U contains named-entities that realize the distinguished attributes, then R will include the relevant relation in the domain ontology. We also assume that if U contains named-entities that are not covered by the domain ontology, or words indicating that the meaning of U depends on the surrounding context, then R will not completely characterize the meaning of U, and so U should be eliminated. We also require an accurate S for U. Therefore, the filters described below eliminate U that (1) realize semantic relations not in the ontology;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering", "sec_num": "2.4" }, { "text": "(2) contain words indicating that its meaning depends on the context; (3) contain unknown words; or (4) cannot be parsed accurately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering", "sec_num": "2.4" }, { "text": "No Relations Filter: The sentence does not contain any named-entities for the distinguished attributes. Other Relations Filter: The sentence contains named-entities for food subtypes, person names, country names, dates (e.g., today, tomorrow, Aug. 26th) or prices (e.g., 12 dollars), or the POS tag CD for numerals. These indicate relations not in the ontology. [Table 3: Domain coverage of single scalar-valued relation mappings; counts by rating 1-5, then row total: food 5, 8, 6, 18, 57 (94); service 15, 3, 6, 17, 56 (97); atmosphere 0, 3, 3, 8, 31 (45); value 0, 0, 1, 8, 12 (21); overall 3, 2, 5, 15, 45 (70); Total 23, 16, 21, 66, 201 (327).]", "cite_spans": [], "ref_spans": [ { "start": 191, "end": 394, "text": "1 2 3 4 5 Total food 5 8 6 18 57 94 service 15 3 6 17 56 97 atmosphere 0 3 3 8 31 45 value 0 0 1 8 12 21 overall 3 2 5 15 45 70 Total 23 15 21 64 201 327 Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Filtering", "sec_num": "2.4" }, { "text": "Contextual Filter: The sentence contains indexicals such as I, you, that, or cohesive markers of rhetorical relations that connect it to some part of the preceding text, which means that the sentence cannot be interpreted out of context. These include discourse markers, such as list item markers with LS as the POS tag, that signal the organizational structure of the text (Hirschberg and Litman, 1987) , as well as discourse connectives that signal semantic and pragmatic relations of the sentence with other parts of the text (Knott, 1996) , such as coordinating conjunctions at the beginning of the utterance like and and but, and conjunct adverbs such as however, also, and then. 
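As a toy illustration of this filter (NLTK is used here as a stand-in tagger, which the paper does not prescribe, and the word lists are abbreviated):

```python
import nltk  # requires the 'punkt' and 'averaged_perceptron_tagger' data

# Sketch of the Contextual Filter: eliminate sentences whose interpretation
# depends on the surrounding text (indexicals, list-item markers with the
# LS POS tag, and sentence-initial connectives).
INDEXICALS = {"i", "you", "that"}
INITIAL_CONNECTIVES = {"and", "but", "or", "however", "also", "then"}

def contextual_filter(sentence):
    """Return True if the sentence should be eliminated."""
    tokens = nltk.word_tokenize(sentence)
    if any(tag == "LS" for _, tag in nltk.pos_tag(tokens)):
        return True
    if tokens and tokens[0].lower() in INITIAL_CONNECTIVES:
        return True
    return any(tok.lower() in INDEXICALS for tok in tokens)

print(contextual_filter("However, the food was great."))  # True
print(contextual_filter("The food was great."))           # False
```
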
Unknown Words Filter: The sentence contains words not found in WordNet (Fellbaum, 1998) (this also catches typographical errors), or its POS tags include NN (noun), which may indicate an unknown named-entity, or the sentence exceeds a fixed length in words, 2 indicating that its meaning may not be estimated solely from named-entities. Parsing Filter: The sentence fails the parsing-to-DSyntS conversion. Failures are automatically detected by comparing the original sentence with the sentence that RealPro realizes from the converted DSyntS.", "cite_spans": [ { "start": 538, "end": 567, "text": "(Hirschberg and Litman, 1987)", "ref_id": "BIBREF5" }, { "start": 693, "end": 706, "text": "(Knott, 1996)", "ref_id": "BIBREF7" }, { "start": 915, "end": 931, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Filtering", "sec_num": "2.4" }, { "text": "We apply the filters, in a cascading manner, to the 18,466 sentences with semantic representations. As a result, we obtain 512 (2.8%) mappings of (U, R, S). After removing 61 duplicates, 451 distinct (2.4%) mappings remain. Table 2 shows the number of sentences eliminated by each filter.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Filtering", "sec_num": "2.4" }, { "text": "We evaluate the learned expressions with respect to domain coverage, linguistic variation and generativity. [Table 4: Counts for multi-relation mappings.]", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Objective Evaluation", "sec_num": "3" }, { "text": "To be usable for a dialogue system, the mappings must have good domain coverage. Table 3 shows the distribution of the 327 mappings realizing a single scalar-valued relation, categorized by the associated rating score. 3 For example, there are 57 mappings with R of 'RESTAURANT has foodquality=5,' and a large number of mappings for both the foodquality and servicequality relations. Although we could not obtain mappings for some relations such as price={1,2}, coverage for expressing a single relation is fairly complete. There are also mappings that express several relations. Table 4 shows the counts of mappings for multi-relation mappings, with those containing a food or service relation occurring more frequently, as in the single scalar-valued relation mappings. We found only 21 combinations of relations, which is surprising given the large potential number of combinations (there are 50 combinations if we treat relations with different scalar values differently). We also find that most of the mappings have two or three relations, perhaps suggesting that system utterances should not express too many relations in a single sentence.", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 88, "text": "Table 3", "ref_id": null }, { "start": 581, "end": 588, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Domain Coverage", "sec_num": "3.1" }, { "text": "We also wish to assess whether the linguistic variation of the learned mappings was greater than what we could easily have generated with a hand-crafted dictionary, or a hand-crafted dictionary augmented with aggregation operators, as in (Walker et al., 2003) . Thus, we first categorized the mappings by the patterns of the DSyntSs. 
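The flattening-and-counting analysis behind Table 5 can be sketched as follows, with POS sequences from an off-the-shelf tagger standing in for the DSyntS patterns (an assumption for illustration; the paper categorizes the actual DSyntSs):

```python
from collections import Counter

import nltk  # requires the 'punkt' and 'averaged_perceptron_tagger' data

def pos_pattern(sentence):
    """Flatten a sentence to a coarse POS sequence (e.g. 'DT NN VB JJ')."""
    tokens = nltk.word_tokenize(sentence)
    # Collapse fine-grained tags (VBZ, VBD, NNS, ...) to two-letter prefixes.
    return " ".join(tag[:2] for _, tag in nltk.pos_tag(tokens) if tag[0].isalpha())

utterances = [
    "The food is excellent.",        # 'X is ADJ'
    "The service was terrible.",     # same coarse pattern
    "The atmosphere is very nice.",  # 'X is RB ADJ'
]
for pattern, n in Counter(pos_pattern(u) for u in utterances).most_common():
    print(n, pattern)
```
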
Table 5 shows the most common syntactic patterns (more than 10 occurrences), indicating that 30% of the learned patterns consist of the simple form "X is ADJ" where ADJ is an adjective, or "X is RB ADJ," where RB is a degree modifier. Furthermore, up to 55% of the learned mappings could be generated from these basic patterns by the application of a combination operator that coordinates multiple adjectives, or coordinates predications over distinct attributes. However, there are 137 syntactic patterns in all, 97 with unique syntactic structures and 21 with two occurrences, accounting for 45% of the learned mappings. Table 6 shows examples of learned mappings with distinct syntactic structures. It would be surprising to see this type of variety in a hand-crafted generation dictionary. In addition, the learned mappings contain 275 distinct lexemes, with a minimum of 2, maximum of 15, and mean of 4.63 lexemes per DSyntS, indicating that the method extracts a wide variety of expressions of varying lengths.", "cite_spans": [ { "start": 238, "end": 259, "text": "(Walker et al., 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 334, "end": 341, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 957, "end": 964, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Linguistic Variation", "sec_num": "3.2" }, { "text": "Another interesting aspect of the learned mappings is the wide variety of adjectival phrases (APs) in the common patterns. Tables 7 and 8 show the APs in single scalar-valued relation mappings for food and service categorized by the associated ratings. Tables for atmosphere, value and overall can be found in the Appendix. Moreover, the meanings for some of the learned APs are very specific to the particular attribute, e.g. cold and burnt associated with foodquality of 1, attentive and prompt for servicequality of 5, silly and inattentive for servicequality of 1, and mellow for atmosphere of 5. In addition, our method places the APs in the common patterns on a more fine-grained scale of 1 to 5, similar to the strength classifications in (Wilson et al., 2004) , in contrast to other automatic methods that classify expressions into a binary positive or negative polarity (e.g. (Turney, 2002) ).", "cite_spans": [ { "start": 767, "end": 788, "text": "(Wilson et al., 2004)", "ref_id": "BIBREF20" }, { "start": 906, "end": 920, "text": "(Turney, 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 123, "end": 137, "text": "Tables 7 and 8", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Linguistic Variation", "sec_num": "3.2" }, { "text": "Our motivation for deriving syntactic representations for the learned expressions was the possibility of using an off-the-shelf sentence planner to derive new combinations of relations, and apply aggregation and other syntactic transformations. We examined how many of the learned DSyntSs can be combined with each other, by taking every pair of DSyntSs in the mappings and applying the built-in merge operation in the SPaRKy generator (Walker et al., 2003) . 
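A toy sketch of this pairwise test is shown below; each DSyntS is reduced to a (subject, verb) head pair, mirroring the identity condition explained next, whereas SPaRKy's real merge operates on full dependency trees:

```python
from itertools import combinations

# Illustrative structures only: each 'DSyntS' is abbreviated to its subject
# and verb heads, since the merge condition sketched here compares just those.
dsynts = [
    {"subject": "RESTAURANT", "verb": "have", "rest": "good food"},
    {"subject": "RESTAURANT", "verb": "have", "rest": "great service"},
    {"subject": "food", "verb": "be", "rest": "delicious"},
]

pairs = list(combinations(dsynts, 2))
mergeable = sum(1 for a, b in pairs
                if a["subject"] == b["subject"] and a["verb"] == b["verb"])
print(f"{mergeable} of {len(pairs)} pairs are merge-compatible")  # 1 of 3
```
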
We found that only 306 out of a potential 81,318 combinations (0.37%) were successful.", "cite_spans": [ { "start": 436, "end": 457, "text": "(Walker et al., 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Generativity", "sec_num": "3.3" }, { "text": "[food=5] The food is to die for.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generativity", "sec_num": "3.3" }, { "text": "[food=5] What incredible food.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generativity", "sec_num": "3.3" }, { "text": "[food=4] Very pleasantly surprised by the food.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generativity", "sec_num": "3.3" }, { "text": "[food=1] The food has gone downhill.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generativity", "sec_num": "3.3" }, { "text": "[atmosphere=5, overall=5] This is a quiet little place with great atmosphere.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generativity", "sec_num": "3.3" }, { "text": "[atmosphere=5, food=5, overall=5, service=5, value=5] The food, service and ambience of the place are all fabulous and the prices are downright cheap. [Table 6: Acquired generation patterns (with shorthand for relations in square brackets) whose syntactic patterns occurred only once.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generativity", "sec_num": "3.3" }, { "text": "This is because the merge operation in SPaRKy requires that the subjects and the verbs of the two DSyntSs be identical, e.g. the subject is RESTAURANT and the verb is has, whereas the learned DSyntSs often place the attribute in subject position as a definite noun phrase. However, the learned DSyntSs can be incorporated into SPaRKy by using the semantic representations to substitute them into nodes of the sentence plan tree. Figure 2 shows some example utterances generated by SPaRKy with its original dictionary and example utterances when the learned mappings are incorporated. The resulting utterances seem more natural and colloquial; we examine whether this is true in the next section.", "cite_spans": [], "ref_spans": [ { "start": 471, "end": 479, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Generativity", "sec_num": "3.3" }, { "text": "We evaluate the obtained mappings in two respects: the consistency between the automatically derived semantic representation and the realization, and the naturalness of the realization. [Table 7: Adjectival phrases (APs) in single scalar-valued relation mappings for food. food=1: awful, bad, burnt, cold, very ordinary. food=2: acceptable, bad, flavored, not enough, very bland, very good. food=3: adequate, bland and mediocre, flavorful but cold, pretty good, rather bland, very good. food=4: absolutely wonderful, awesome, decent, excellent, good, good and generous, great, outstanding, rather good, really good, traditional, very fresh and tasty, very good, very very good. food=5: absolutely delicious, absolutely fantastic, absolutely great, absolutely terrific, ample, well seasoned and hot, awesome, best, delectable and plentiful, delicious, delicious but simple, excellent, exquisite, fabulous, fancy but tasty, fantastic, fresh, good, great, hot, incredible, just fantastic, large and satisfying, outstanding, plentiful and outstanding, plentiful and tasty, quick and hot, simply great, so delicious, so very tasty, superb, terrific, tremendous, very good, wonderful.] 
For comparison, we used a baseline of handcrafted mappings from (Walker et al., 2003) except that we changed the word decor to atmosphere and added five mappings for overall. For scalar relations, this consists of the realization "RESTAURANT has ADJ LEX" where ADJ is mediocre, decent, good, very good, or excellent for rating values 1-5, and LEX is food quality, service, atmosphere, value, or overall depending on the relation. RESTAURANT is filled with the name of a restaurant at runtime. For example, 'RESTAURANT has foodquality=1' is realized as "RESTAURANT has mediocre food quality." The location and food type relations are mapped to "RESTAURANT is located in LOCATION" and "RESTAURANT is a FOODTYPE restaurant.\"", "cite_spans": [ { "start": 1147, "end": 1168, "text": "(Walker et al., 2003)", "ref_id": "BIBREF18" }, { "start": 1513, "end": 1523, "text": "RESTAURANT", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation", "sec_num": "4" }, { "text": "The learned mappings include 23 distinct semantic representations for single relations (22 for scalar-valued relations and one for location) and 50 for multi-relations. Therefore, using the handcrafted mappings, we first created 23 utterances for the single relations. We then created three utterances for each of the 50 multi-relations using different clause-combining operations from (Walker et al., 2003) . This gave a total of 173 baseline utterances, which together with the 451 learned mappings yielded 624 utterances for evaluation. [Table 8: Adjectival phrases (APs) in single scalar-valued relation mappings for service. service=1: awful, bad, great, horrendous, horrible, inattentive, forgetful and slow, marginal, really slow, silly and inattentive, still marginal, terrible, young. service=2: overly slow, very slow and inattentive. service=3: bad, bland and mediocre, friendly and knowledgeable, good, pleasant, prompt, very friendly. service=4: all very warm and welcoming, attentive, extremely friendly and good, extremely pleasant, fantastic, friendly, friendly and helpful, good, great, great and courteous, prompt and friendly, really friendly, so nice, swift and friendly, very friendly, very friendly and accommodating. service=5: all courteous, excellent, excellent and friendly, extremely friendly, fabulous, fantastic, friendly, friendly and helpful, friendly and very attentive, good, great, great, prompt and courteous, happy and friendly, impeccable, intrusive, legendary, outstanding, pleasant, polite, attentive and prompt, prompt and courteous, prompt and pleasant, quick and cheerful, stupendous, superb, the most attentive, unbelievable, very attentive, very congenial, very courteous, very friendly, very friendly and helpful, very friendly and pleasant, very friendly and totally personal, very friendly and welcoming, very good, very helpful, very timely, warm and friendly, wonderful.] Ten subjects, all native English speakers, evaluated the mappings by reading them from a webpage. For each system utterance, the subjects were asked to express their degree of agreement, on a scale of 1 (lowest) to 5 (highest), with the statement (a) The meaning of the utterance is consistent with the ratings expressing their semantics, and with the statement (b) The style of the utterance is very natural and colloquial. They were asked not to correct their decisions and also to rate each utterance on its own merit. Table 9 shows the means and standard deviations of the scores for baseline vs. learned utterances for consistency and naturalness. 
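The comparison reported next can be sketched as follows, with synthetic scores standing in for the judges' 1-5 ratings (the sample sizes and the Welch t-test variant are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins: 173 baseline and 451 learned utterances, 10 judges each.
baseline = rng.integers(3, 6, size=173 * 10).astype(float)
learned = rng.integers(3, 6, size=451 * 10).astype(float)

print(f"baseline: mean={baseline.mean():.3f} sd={baseline.std(ddof=1):.3f}")
print(f"learned:  mean={learned.mean():.3f} sd={learned.std(ddof=1):.3f}")

t, p = stats.ttest_ind(baseline, learned, equal_var=False)  # Welch's t-test
print(f"t={t:.3f} p={p:.4f}")
```
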
A t-test shows that the consistency of the learned expressions is significantly lower than the baseline (df=4712, p < .001) but that their naturalness is significantly higher than the baseline (df=3107, p < .001). However, consistency is still high. Only 14 of the learned utterances (shown in Table 10) have a mean consistency score lower than 3, which indicates that, by and large, the human judges felt that the inferred semantic representations were consistent with the meaning of the learned expressions. The correlation coefficient between consistency and naturalness scores is 0.42, which indicates that consistency does not greatly relate to naturalness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "[Figure 2, first panel] Original SPaRKy utterances: \u2022 Babbo has the best overall quality among the selected restaurants with excellent decor, excellent service and superb food quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "\u2022 Babbo has excellent decor and superb food quality with excellent service. It has the best overall quality among the selected restaurants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "\u2022 Because the food is excellent, the wait staff is professional and the decor is beautiful and very comfortable, Babbo has the best overall quality among the selected restaurants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of SPaRKy and learned DSyntS", "sec_num": null }, { "text": "\u2022 Babbo has the best overall quality among the selected restaurants because atmosphere is exceptionally nice, food is excellent and the service is superb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of SPaRKy and learned DSyntS", "sec_num": null }, { "text": "\u2022 Babbo has superb food quality, the service is exceptional and the atmosphere is very creative. It has the best overall quality among the selected restaurants. [Table 9: Consistency and naturalness scores averaged over 10 subjects.]", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Combination of SPaRKy and learned DSyntS", "sec_num": null }, { "text": "We also performed an ANOVA (Analysis of Variance) of the effect of each relation in R on naturalness and consistency. There were no significant effects except that mappings combining food, service, and atmosphere were significantly worse (df=1, F=7.79, p=0.005). However, there is a trend for mappings to be rated higher for the food attribute (df=1, F=3.14, p=0.08) and the value attribute (df=1, F=3.55, p=0.06) for consistency, suggesting that perhaps it is easier to learn some mappings than others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "Automatically finding sentences with the same meaning has been extensively studied in the field of automatic paraphrasing using parallel corpora and corpora with multiple descriptions of the same events (Barzilay and McKeown, 2001 ; Barzilay and Lee, 2003) . Other work finds predicates of similar meanings by using the similarity of contexts around the predicates (Lin and Pantel, 2001) . 
However, these studies find a set of sentences with the same meaning, but do not associate a specific meaning with the sentences. One exception is (Barzilay and Lee, 2002) , which derives mappings between semantic representations and realizations using a parallel (but unaligned) corpus consisting of both complex semantic input and corresponding natural language verbalizations for mathematical proofs.", "cite_spans": [ { "start": 203, "end": 230, "text": "(Barzilay and McKeown, 2001", "ref_id": "BIBREF2" }, { "start": 233, "end": 256, "text": "Barzilay and Lee, 2003)", "ref_id": "BIBREF1" }, { "start": 365, "end": 387, "text": "(Lin and Pantel, 2001)", "ref_id": "BIBREF9" }, { "start": 537, "end": 561, "text": "(Barzilay and Lee, 2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "[Table 10: The 14 utterances with consistency scores below 3 (shorthand for relations and utterance, followed by the mean consistency score); rows include: [food=4] The food is delicious and beautifully prepared. (2.9); [overall=4] A wonderful experience. (2.9); [service=3] The service is bland and mediocre. (2); [atmosphere=5, food=5, service=5] The atmosphere, food and service. (1.6); [overall=3] Overall, a great experience. (1.4); [service=1] The waiter is great. (1.4)]", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 12, "text": "Table 10", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "However, our technique does not require parallel corpora or previously existing semantic transcripts or labeling, and user reviews are widely available in many different domains (see http://www.epinions.com/). There is also significant previous work on mining user reviews. For example, Hu and Liu (2005) use reviews to find adjectives to describe products, and Popescu and Etzioni (2005) automatically find features of a product together with the polarity of adjectives used to describe them. They both aim at summarizing reviews so that users can make decisions easily. Our method is also capable of finding polarities of modifying expressions including adjectives, but on a more fine-grained scale of 1 to 5. However, it might be possible to use their approach to create rating information for raw review texts as in (Pang and Lee, 2005) , so that we can create mappings from reviews without ratings.", "cite_spans": [ { "start": 366, "end": 383, "text": "Hu and Liu (2005)", "ref_id": "BIBREF6" }, { "start": 441, "end": 467, "text": "Popescu and Etzioni (2005)", "ref_id": "BIBREF13" }, { "start": 899, "end": 919, "text": "(Pang and Lee, 2005)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We proposed automatically obtaining mappings between semantic representations and realizations from reviews with individual ratings. 
The results show that: (1) the learned mappings provide good coverage of the domain ontology and exhibit good linguistic variation; (2) the consistency between the semantic representations and realizations is high; and (3) the naturalness of the realizations is significantly higher than the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "6" }, { "text": "There are also limitations in our method. Even though consistency is rated highly by human subjects, this may actually be a judgement of whether the polarity of the learned mapping is correctly placed on the 1 to 5 rating scale. Thus, alternate ways of expressing, for example, foodquality=5, shown in Table 7 , cannot be guaranteed to be synonymous, which may be required for use in spoken language generation. Rather, an examination of the adjectival phrases in Table 7 shows that different aspects of the food are discussed. For example, ample and plentiful refer to the portion size, fancy may refer to the presentation, and delicious describes the flavors. This suggests that perhaps the ontology would benefit from representing these sub-attributes of the food attribute, and sub-attributes in general. Another problem with consistency is that the same AP, e.g. very good in Table 7 , may appear with multiple ratings. For example, very good is used for every foodquality rating from 2 to 5. Thus some further automatic or by-hand analysis is required to refine what is learned before actual use in spoken language generation. Still, our method could reduce the amount of time a system designer spends developing the spoken language generator, and increase the naturalness of spoken language generation.", "cite_spans": [], "ref_spans": [ { "start": 302, "end": 309, "text": "Table 7", "ref_id": "TABREF6" }, { "start": 464, "end": 471, "text": "Table 7", "ref_id": "TABREF6" }, { "start": 880, "end": 887, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "6" }, { "text": "Another issue is that the recall appears to be quite low given that all of the sentences concern the same domain: only 2.4% of the sentences could be used to create the mappings. One way to increase recall might be to automatically augment the list of distinguished attribute lexicalizations, using WordNet or work on automatic identification of synonyms, such as (Lin and Pantel, 2001 ). However, the method here has high precision, and automatic techniques may introduce noise. A related issue is that the filters are in some cases too strict. For example, the contextual filter is based on POS tags, so that even sentences that do not require the prior context for their interpretation are eliminated, such as sentences containing subordinating conjunctions like because, when, if, whose arguments are both given in the same sentence (Prasad et al., 2005) . In addition, recall is affected by the domain ontology, and the automatically constructed domain ontology from the review webpages may not cover all of the domain. In some review domains, the attributes that get individual ratings are a limited subset of the domain ontology. 
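A rough sketch of the WordNet-based augmentation suggested above (the helper below is illustrative; unvetted synonym expansion would reintroduce noise, so its output would still need filtering):

```python
from nltk.corpus import wordnet as wn  # requires the nltk 'wordnet' data

def expand_lexicalizations(seed):
    """Collect WordNet noun synonyms of a seed lexicalization from Table 1."""
    return sorted({
        lemma.name().replace("_", " ")
        for synset in wn.synsets(seed, pos=wn.NOUN)
        for lemma in synset.lemmas()
    })

for seed in ("food", "service", "atmosphere", "value"):
    print(seed, "->", expand_lexicalizations(seed)[:6])
```
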
Techniques for automatic feature identification (Hu and Liu, 2005; Popescu and Etzioni, 2005 ) could possibly help here, although these techniques currently have the limitation that they do not automatically identify different lexicalizations of the same feature.", "cite_spans": [ { "start": 364, "end": 385, "text": "(Lin and Pantel, 2001", "ref_id": "BIBREF9" }, { "start": 831, "end": 852, "text": "(Prasad et al., 2005)", "ref_id": "BIBREF14" }, { "start": 1179, "end": 1197, "text": "(Hu and Liu, 2005;", "ref_id": "BIBREF6" }, { "start": 1198, "end": 1223, "text": "Popescu and Etzioni, 2005", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "6" }, { "text": "A different type of limitation is that dialogue systems need to generate utterances for information gathering whereas the mappings we obtained can only be used for information presentation. Thus these would have to be constructed by hand, as in current practice, or perhaps other types of corpora or resources could be utilized. In addition, the utility of syntactic structures in the mappings should be further examined, especially given the failures in DSyntS conversion. An alternative would be to leave some sentences unparsed and use them as templates with hybrid generation techniques (White and Caldwell, 1998) . Finally, while we believe that this technique will apply across domains, it would be useful to test it on domains such as movie reviews or product reviews, which have more complex domain ontologies.", "cite_spans": [ { "start": 591, "end": 617, "text": "(White and Caldwell, 1998)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "6" }, { "text": "In future, we will investigate other techniques for bootstrapping these lexicalizations from the seed word on the webpage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used 20 as a threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "There are two other single-relation but not scalar-valued mappings that concern LOCATION in our mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their helpful comments. 
This work was supported by a Royal Society Wolfson award to Marilyn Walker and a research collaboration grant from NTT to the Cognitive Systems Group at the University of Sheffield.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Adjectival phrases (APs) in single scalar-valued relation mappings for atmosphere, value, and overall. atmosphere=2: eclectic, unique and pleasant. atmosphere=3: busy, pleasant but extremely hot. atmosphere=4: fantastic, great, quite nice and simple, typical, very casual, very trendy, wonderful. atmosphere=5: beautiful, comfortable, excellent, great, interior, lovely, mellow, nice, nice and comfortable, phenomenal, pleasant, quite pleasant, unbelievably beautiful, very comfortable, very cozy, very friendly, very intimate, very nice, very nice and relaxing, very pleasant, very relaxing, warm and contemporary, warm and very comfortable, wonderful. value=3: very reasonable. value=4: great, pretty good, reasonable, very good. value=5: best, extremely reasonable, good, great, reasonable, totally reasonable, very good, very reasonable. overall=1: just bad, nice, thoroughly humiliating. overall=2: great, really bad. overall=3: bad, decent, great, interesting, really fancy. overall=4: excellent, good, great, just great, never busy, not very busy, outstanding, recommended, wonderful. overall=5: amazing, awesome, capacious, delightful, extremely pleasant, fantastic, good, great, local, marvelous, neat, new, overall, overwhelmingly pleasant, pampering, peaceful but idyllic, really cool, really great, really neat, really nice, special, tasty, truly great, ultimate, unique and enjoyable, very enjoyable, very excellent, very good, very nice, very wonderful, warm and friendly, wonderful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Bootstrapping lexical choice via multiple-sequence alignment", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2002, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "164--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Lillian Lee. 2002. Bootstrapping lexical choice via multiple-sequence alignment. In Proc. EMNLP, pages 164-171.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning to paraphrase: An unsupervised approach using multiple-sequence alignment", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2003, "venue": "Proc. HLT/NAACL", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proc. HLT/NAACL, pages 16-23.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Extracting paraphrases from a parallel corpus", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "McKeown", "suffix": "" } ], "year": 2001, "venue": "Proc. 39th ACL", "volume": "", "issue": "", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proc. 
39th ACL, pages 50-57.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "GATE: A framework and graphical development environment for robust NLP tools and applications", "authors": [ { "first": "Hamish", "middle": [], "last": "Cunningham", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Maynard", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Valentin", "middle": [], "last": "Tablan", "suffix": "" } ], "year": 2002, "venue": "Proc. 40th ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hamish Cunningham, Diana Maynard, Kalina Bontcheva, and Valentin Tablan. 2002. GATE: A framework and graphical development environment for robust NLP tools and applications. In Proc. 40th ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "WordNet: An Electronic Lexical Database (Language, Speech, and Communication)", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Communication). The MIT Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Now let's talk about NOW: Identifying cue phrases intonationally", "authors": [ { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" }, { "first": "Diane", "middle": [ "J" ], "last": "Litman", "suffix": "" } ], "year": 1987, "venue": "Proc. 25th ACL", "volume": "", "issue": "", "pages": "163--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hirschberg and Diane J. Litman. 1987. Now let's talk about NOW: Identifying cue phrases intonationally. In Proc. 25th ACL, pages 163-171.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mining and summarizing customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2005, "venue": "Proc. KDD", "volume": "", "issue": "", "pages": "168--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2005. Mining and summarizing customer reviews. In Proc. KDD, pages 168-177.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A Data-Driven Methodology for Motivating a Set of Coherence Relations", "authors": [ { "first": "Alistair", "middle": [], "last": "Knott", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alistair Knott. 1996. A Data-Driven Methodology for Motivating a Set of Coherence Relations. Ph.D. thesis, University of Edinburgh, Edinburgh.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A fast and portable realizer for text generation systems", "authors": [ { "first": "Benoit", "middle": [], "last": "Lavoie", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 1997, "venue": "Proc. 5th Applied NLP", "volume": "", "issue": "", "pages": "265--268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benoit Lavoie and Owen Rambow. 1997. A fast and portable realizer for text generation systems. In Proc. 
5th Applied NLP, pages 265-268.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Discovery of inference rules for question answering", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "4", "pages": "343--360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question answering. Natural Language Engineering, 7(4):343-360.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Dependency-based evaluation of MINIPAR", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Workshop on the Evaluation of Parsing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. Dependency-based evaluation of MINIPAR. In Workshop on the Evaluation of Parsing Systems.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generating tailored, comparative descriptions in spoken dialogue", "authors": [ { "first": "Johanna", "middle": [ "D" ], "last": "Moore", "suffix": "" }, { "first": "Mary", "middle": [ "Ellen" ], "last": "Foster", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Lemon", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2004, "venue": "Proc. 7th FLAIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johanna D. Moore, Mary Ellen Foster, Oliver Lemon, and Michael White. 2004. Generating tailored, comparative descriptions in spoken dialogue. In Proc. 7th FLAIR.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proc. 43rd ACL", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proc. 43rd ACL, pages 115-124.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Extracting product features and opinions from reviews", "authors": [ { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2005, "venue": "Proc. HLT/EMNLP", "volume": "", "issue": "", "pages": "339--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proc. HLT/EMNLP, pages 339-346.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Penn Discourse TreeBank as a resource for natural language generation", "authors": [ { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Dinesh", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Eleni", "middle": [], "last": "Miltsakaki", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2005, "venue": "Proc. 
Corpus Linguistics Workshop on Using Corpora for NLG", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rashmi Prasad, Aravind Joshi, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, and Bonnie Webber. 2005. The Penn Discourse TreeBank as a resource for natural language generation. In Proc. Corpus Linguistics Workshop on Using Corpora for NLG.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Formal and natural language generation in the Mercury conversational system", "authors": [ { "first": "Stephanie", "middle": [], "last": "Seneff", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Polifroni", "suffix": "" } ], "year": 2000, "venue": "Proc. ICSLP", "volume": "2", "issue": "", "pages": "767--770", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephanie Seneff and Joseph Polifroni. 2000. Formal and natural language generation in the Mercury conversational system. In Proc. ICSLP, volume 2, pages 767-770.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "From monologue to dialogue: natural language generation in OVIS", "authors": [ { "first": "Mari\u00ebt", "middle": [], "last": "Theune", "suffix": "" } ], "year": 2003, "venue": "AAAI 2003 Spring Symposium on Natural Language Generation in Written and Spoken Dialogue", "volume": "", "issue": "", "pages": "141--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mari\u00ebt Theune. 2003. From monologue to dialogue: natural language generation in OVIS. In AAAI 2003 Spring Symposium on Natural Language Generation in Written and Spoken Dialogue, pages 141-150.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2002, "venue": "Proc. 40th ACL", "volume": "", "issue": "", "pages": "417--424", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proc. 40th ACL, pages 417-424.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A trainable generator for recommendations in multimodal dialog", "authors": [ { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Stent", "suffix": "" } ], "year": 2003, "venue": "Proc. Eurospeech", "volume": "", "issue": "", "pages": "1697--1700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marilyn Walker, Rashmi Prasad, and Amanda Stent. 2003. A trainable generator for recommendations in multimodal dialog. In Proc. Eurospeech, pages 1697-1700.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "EXEMPLARS: A practical, extensible framework for dynamic text generation", "authors": [ { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Caldwell", "suffix": "" } ], "year": 1998, "venue": "Proc. INLG", "volume": "", "issue": "", "pages": "266--275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael White and Ted Caldwell. 1998. EXEMPLARS: A practical, extensible framework for dynamic text generation. In Proc. INLG, pages 266-275.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Just how mad are you? 
Finding strong and weak opinion clauses", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 2004, "venue": "Proc. AAAI", "volume": "", "issue": "", "pages": "761--769", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Janyce Wiebe, and Rebecca Hwa. 2004. Just how mad are you? Finding strong and weak opinion clauses. In Proc. AAAI, pages 761-769.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "text": "Utterances incorporating learned DSyntSs (bold font) in SPaRKy. [Table 9: Consistency -- baseline mean 4.714 (sd 0.588), learned mean 4.459 (sd 0.890), sig. +; Naturalness -- baseline mean 4.227 (sd 0.852), learned mean 4.613 (sd 0.844), sig. +]", "type_str": "figure", "uris": null }, "TABREF2": { "html": null, "content": "
Table 2: Filtering statistics: the number of sentences filtered and retained by each filter.
one FOODTYPE named-entity and one LOCATION named-entity. Values of categorical attributes are replaced by variables representing their type before the learned mappings are added to the dictionary, as shown in Fig. 1.
", "text": "", "type_str": "table", "num": null }, "TABREF5": { "html": null, "content": "
[overall=1, value=2] Very disappointing experience for the money charged.
[food=5, value=5] The food is excellent and plentiful at a reasonable price.
[food=5, service=5] The food is exquisite as well as the service and setting.
[food=5, service=5] The food was spectacular and so was the service.
[food=5, foodtype, value=5] Best FOODTYPE food with a great value for money.
[food=5, foodtype, value=5] An absolutely outstanding value with fantastic FOODTYPE food.
[food=5, foodtype, location, overall=5] This is the best place to eat FOODTYPE food in LOCATION.
", "text": "Common syntactic patterns of DSyntSs, flattened to a POS sequence for readability. NN, VB, JJ, RB, CC stand for noun, verb, adjective, adverb, and conjunction, respectively.", "type_str": "table", "num": null }, "TABREF6": { "html": null, "content": "", "text": "", "type_str": "table", "num": null }, "TABREF7": { "html": null, "content": "
", "text": "", "type_str": "table", "num": null } } } }