{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:55:55.888409Z" }, "title": "Style Control for Schema-Guided Natural Language Generation", "authors": [ { "first": "Alicia", "middle": [ "Y" ], "last": "Tsai", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Berkeley" } }, "email": "aliciatsai@berkeley.edu" }, { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": {} }, "email": "orabys@amazon.com" }, { "first": "Vittorio", "middle": [], "last": "Perera", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": {} }, "email": "pererv@amazon.com" }, { "first": "Jiun-Yu", "middle": [], "last": "Kao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": {} }, "email": "" }, { "first": "Yuheng", "middle": [], "last": "Du", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": {} }, "email": "yuhendu@amazon.com" }, { "first": "Anjali", "middle": [], "last": "Narayan-Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": {} }, "email": "" }, { "first": "Tagyoung", "middle": [], "last": "Chung", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": {} }, "email": "tagyoung@amazon.com" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": {} }, "email": "hakkanit@amazon.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Natural Language Generation (NLG) for taskoriented dialogue systems focuses on communicating specific content accurately, fluently, and coherently. While these attributes are crucial for a successful dialogue, it is also desirable to simultaneously accomplish specific stylistic goals, such as response length, pointof-view, descriptiveness, sentiment, formality, and empathy. In this work, we focus on stylistic control and evaluation for schema-guided NLG, with joint goals of achieving both semantic and stylistic control. We experiment in detail with various controlled generation methods for large pretrained language models: specifically, conditional training, guided fine-tuning, and guided decoding. We discuss their advantages and limitations, and evaluate them with a broad range of automatic and human evaluation metrics. Our results show that while high style accuracy and semantic correctness are easier to achieve for more lexicallydefined styles with conditional training, stylistic control is also achievable for more semantically complex styles using discriminatorbased guided decoding methods. The results also suggest that methods that are more scalable (with less hyper-parameters tuning) and that disentangle content generation and stylistic variations are more effective at achieving semantic correctness and style accuracy. * Work done as an intern at Amazon Alexa AI.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Natural Language Generation (NLG) for taskoriented dialogue systems focuses on communicating specific content accurately, fluently, and coherently. 
While these attributes are crucial for a successful dialogue, it is also desirable to simultaneously accomplish specific stylistic goals, such as response length, pointof-view, descriptiveness, sentiment, formality, and empathy. In this work, we focus on stylistic control and evaluation for schema-guided NLG, with joint goals of achieving both semantic and stylistic control. We experiment in detail with various controlled generation methods for large pretrained language models: specifically, conditional training, guided fine-tuning, and guided decoding. We discuss their advantages and limitations, and evaluate them with a broad range of automatic and human evaluation metrics. Our results show that while high style accuracy and semantic correctness are easier to achieve for more lexicallydefined styles with conditional training, stylistic control is also achievable for more semantically complex styles using discriminatorbased guided decoding methods. The results also suggest that methods that are more scalable (with less hyper-parameters tuning) and that disentangle content generation and stylistic variations are more effective at achieving semantic correctness and style accuracy. * Work done as an intern at Amazon Alexa AI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural Language Generation (NLG) for taskoriented dialogue focuses on effectively generating responses based on inputs that are frequently in the form of a structured meaning representation (MR) (Moryossef et al., 2019; Du\u0161ek et al., 2018; Colin et al., 2016; Wen et al., 2015) . Recent work has suggested a schema-guided paradigm for taskoriented dialogue by adding descriptions in natural language form (Lin et al., 2021; Du et al., 2020; Rastogi et al., 2019; Bapna et al., 2017) . Compared to structured MRs, dialogue schemata contain much richer contextual information, leading to better generated outputs.", "cite_spans": [ { "start": 196, "end": 220, "text": "(Moryossef et al., 2019;", "ref_id": "BIBREF25" }, { "start": 221, "end": 240, "text": "Du\u0161ek et al., 2018;", "ref_id": "BIBREF8" }, { "start": 241, "end": 260, "text": "Colin et al., 2016;", "ref_id": "BIBREF5" }, { "start": 261, "end": 278, "text": "Wen et al., 2015)", "ref_id": "BIBREF42" }, { "start": 406, "end": 424, "text": "(Lin et al., 2021;", "ref_id": "BIBREF19" }, { "start": 425, "end": 441, "text": "Du et al., 2020;", "ref_id": null }, { "start": 442, "end": 463, "text": "Rastogi et al., 2019;", "ref_id": "BIBREF34" }, { "start": 464, "end": 483, "text": "Bapna et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the primary aim of task-oriented NLG is to effectively generate outputs that realize system dialogue actions and communicate their associated contents correctly, it is often desirable to control the stylistic variations of an output. For example, recognizing and reacting to emotions has been shown to enhance task outcomes and user engagement in task-oriented conversations (Fraser et al., 2018) . Language generation systems that use corpora and methods without awareness of emotions may generate callous, generic or even biased responses (Bender et al., 2021; Sheng et al., 2019) . 
Depending on the use case or type of system, it may be useful to stylistically vary responses, e.g., using shorter responses for spoken dialogue systems, longer responses if the system includes visual modality through a screen, or emotion-specific responses that appropriately address user sentiment.", "cite_spans": [ { "start": 384, "end": 405, "text": "(Fraser et al., 2018)", "ref_id": "BIBREF11" }, { "start": 550, "end": 571, "text": "(Bender et al., 2021;", "ref_id": "BIBREF1" }, { "start": 572, "end": 591, "text": "Sheng et al., 2019)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work on controlled text generation aimed at achieving stylistic goals has not focused on a schema-guided paradigm where specific content must be communicated correctly; instead, most work focuses more on unconstrained text-to-text generation without explicit meaning representations Krause et al., 2020; Keskar et al., 2019; Ghazvininejad et al., 2017) . Meanwhile, work on schema-guided NLG has primarily focused on generating fluent outputs that achieve low semantic error rather than achieving stylistic goals (Kale and Rastogi, 2020; Du et al., 2020) . In this paper, we hope to fill the gap on stylistic control and evaluation for schemaguided NLG. Our contributions in this paper are three-fold:", "cite_spans": [ { "start": 292, "end": 312, "text": "Krause et al., 2020;", "ref_id": "BIBREF18" }, { "start": 313, "end": 333, "text": "Keskar et al., 2019;", "ref_id": "BIBREF16" }, { "start": 334, "end": 361, "text": "Ghazvininejad et al., 2017)", "ref_id": "BIBREF12" }, { "start": 522, "end": 546, "text": "(Kale and Rastogi, 2020;", "ref_id": "BIBREF15" }, { "start": 547, "end": 563, "text": "Du et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We describe how we pre-process and annotate style parameters within the Schema-guided Dialogue (SGD) dataset (Rastogi et al., 2019) .", "cite_spans": [ { "start": 112, "end": 134, "text": "(Rastogi et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We experiment with controlling different styles with various controlled text generation methods that aim to preserve fluency and semantic correctness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. We present results with a broad range of evaluation methods, including a detailed human evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, we consider three types of methods: conditional training, guided fine-tuning, and guided decoding. We show that conditional training (CT) can be used for both very lexically-defined (e.g., point-of-view) and more semantically complex styles (e.g., empathy). However, CT introduces the need to re-train new models per style and is more effective at learning styles with strong lexical characteristics (e.g., specific language patterns or vocabulary). For guided fine-tuning, we explore the Plug-and-Play Language Model (PPLM) (Dathathri et al., 2020) , but show that it requires careful hyper-parameter turning and is prone to degeneration. For guided decoding, we evaluate the beam search weighted decoding (BSWD) method and show that it performs best overall on measures of style accuracy for semantically complex styles. 
The results suggest that unlike style control for unconstrained text generation where no specific content needs to be communicated, style control under the schema-guided paradigm has stronger restrictions on the degree of freedom allowed for content generation. We show that methods that disentangle content generation and style variations, especially for more semantically complex styles, result in better overall performance on semantic and stylistic control.", "cite_spans": [ { "start": 539, "end": 563, "text": "(Dathathri et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Controllable text generation is an emerging research field. Current methods for controlling styles in text generation involve learning a conditional generative model or designing an appropriate decoding strategy. There are many methods proposed for learning a good conditional generative model. These include conditional training (Kikuchi et al., 2016; Ficler and Goldberg, 2017; Keskar et al., 2019; See et al., 2019) , fine-tuning language models with external attribute models or side models (Dathathri et al., 2020; Zhang et al., 2020) , finetuning models with reinforcement learning and human feedback (Ziegler et al., 2019) , training generative adversarial models (Yu et al., 2017) , and training variational auto-encoders (Yu et al., 2017; Hu et al., 2017) .", "cite_spans": [ { "start": 330, "end": 352, "text": "(Kikuchi et al., 2016;", "ref_id": "BIBREF17" }, { "start": 353, "end": 379, "text": "Ficler and Goldberg, 2017;", "ref_id": "BIBREF10" }, { "start": 380, "end": 400, "text": "Keskar et al., 2019;", "ref_id": "BIBREF16" }, { "start": 401, "end": 418, "text": "See et al., 2019)", "ref_id": "BIBREF35" }, { "start": 495, "end": 519, "text": "(Dathathri et al., 2020;", "ref_id": "BIBREF6" }, { "start": 520, "end": 539, "text": "Zhang et al., 2020)", "ref_id": "BIBREF44" }, { "start": 607, "end": 629, "text": "(Ziegler et al., 2019)", "ref_id": "BIBREF47" }, { "start": 671, "end": 688, "text": "(Yu et al., 2017)", "ref_id": "BIBREF43" }, { "start": 730, "end": 747, "text": "(Yu et al., 2017;", "ref_id": "BIBREF43" }, { "start": 748, "end": 764, "text": "Hu et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Others have worked on designing a good decoding strategy to guide generation, where the decoding procedure is guided by a scoring function or discriminator. These include weighted decoding (Ghazvininejad et al., 2017; Holtzman et al., 2018; See et al., 2019) and guided generation (Krause et al., 2020) . Other lines of work include curating training data with rich style markup to facilitate training models with explicit stylistic supervision Oraby et al. (2019 Oraby et al. ( , 2018 . While this previous work does focus on controllable text generation, most work has been carried out in a text-to-text generation setting, without specific semantic constraints. Instead, we focus on the task-oriented dialogue framework where specific values must be communicated, and conduct a rigorous evaluation of different methods and their efficacy on different forms of style generation. 
Other recent work has explored adding additional information such as chit-chat data to task-oriented dialogue (Sun et al., 2021; and could potentially provide new opportunities for stylistic control.", "cite_spans": [ { "start": 189, "end": 217, "text": "(Ghazvininejad et al., 2017;", "ref_id": "BIBREF12" }, { "start": 218, "end": 240, "text": "Holtzman et al., 2018;", "ref_id": "BIBREF13" }, { "start": 241, "end": 258, "text": "See et al., 2019)", "ref_id": "BIBREF35" }, { "start": 281, "end": 302, "text": "(Krause et al., 2020)", "ref_id": "BIBREF18" }, { "start": 445, "end": 463, "text": "Oraby et al. (2019", "ref_id": "BIBREF27" }, { "start": 464, "end": 485, "text": "Oraby et al. ( , 2018", "ref_id": "BIBREF28" }, { "start": 991, "end": 1009, "text": "(Sun et al., 2021;", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We use the Schema-guided Dialogue (SGD) dataset 1 to create a rich corpus of schema-totemplate pairs. This dataset is one of the largest publicly available corpora of annotated multidomain, task-oriented dialogues (Rastogi et al., 2019) . Each dialogue in the data is represented as a list of user and system utterances. We use only the system-side utterances and annotations since we are focused on system-side generation. Table 1 shows an example pre-processed data instance and the final flattened input. To create schema-to-template pairs, we follow the preprocessing steps outlined in Du et al. (2020) with the following modifications: (1) we replace the slot values with generic slot values without any slot type by adding a $ prefix and appending a increasing index (e.g., San Jose \u2192 $slot1) for better generalization; and (2) we use only domain, meaning representations (MRs), and slot description as input data. Domain provides the context of the conversation (e.g., Restaurants); an MR contains a dialog act, a slot and a value (e.g., OFFER(city=$slot2)); and the slot description describes the meaning of the slot in natural language. Table 8 in Appendix A summarizes the full statistics for the final pre-processed SGD dataset. In summary, we have 1,698 MRs and 118,715 example templates in the training set and 1,137 MRs and 34,598 templates in the test set.", "cite_spans": [ { "start": 214, "end": 236, "text": "(Rastogi et al., 2019)", "ref_id": "BIBREF34" }, { "start": 590, "end": 606, "text": "Du et al. (2020)", "ref_id": null } ], "ref_spans": [ { "start": 424, "end": 431, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1146, "end": 1153, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Schema-to-Template pairs", "sec_num": "3.1" }, { "text": "For a single example, there are usually multiple dialogue acts as shown in Table 1 . An MR in the original data may also contain multiple values. In such cases, we flatten these MRs into multiple parts, each containing only one dialogue act. For instance, an MR that contains two values originally, e.g., REQUEST(cuisine=[Mexican, Italian] ) becomes two separate dialogue acts, e.g., REQUEST(cuisine=$slot1), REQUEST(cuisine=$slot2). Templates are obtained by delexicalizing the utterances with generic slot values (e.g., $slot1 is a good restaurant). 
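To make this flattening and delexicalization concrete, the following is a minimal sketch; the tuple-based MR format and helper names are illustrative stand-ins rather than the exact SGD pre-processing code.

```python
# Illustrative sketch of the MR flattening and delexicalization described
# above; the input/output formats are simplified stand-ins for the SGD data.

def flatten_mr(dialogue_acts):
    """Split acts with multiple values into one act per (act, slot, value)."""
    flat = []
    for act, slot, values in dialogue_acts:
        for value in values:
            flat.append((act, slot, value))
    return flat

def delexicalize(utterance, flat_acts):
    """Replace concrete slot values with generic, indexed $slotN placeholders."""
    slot_map, template = {}, utterance
    for i, (_, _, value) in enumerate(flat_acts, start=1):
        placeholder = f"$slot{i}"
        slot_map[placeholder] = value
        template = template.replace(value, placeholder)
    return template, slot_map

acts = [("REQUEST", "cuisine", ["Mexican", "Italian"])]
flat = flatten_mr(acts)
# -> [("REQUEST", "cuisine", "Mexican"), ("REQUEST", "cuisine", "Italian")]
template, slots = delexicalize("Do you prefer Mexican or Italian food?", flat)
# -> "Do you prefer $slot1 or $slot2 food?", {"$slot1": "Mexican", "$slot2": "Italian"}
print(flat, template, slots)
```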
Finally, we flatten the input data into flat natural language strings similar to Budzianowski and Vuli\u0107 (2019) .", "cite_spans": [ { "start": 305, "end": 339, "text": "REQUEST(cuisine=[Mexican, Italian]", "ref_id": null }, { "start": 633, "end": 662, "text": "Budzianowski and Vuli\u0107 (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Schema-to-Template pairs", "sec_num": "3.1" }, { "text": "In order to perform style control, we also need to annotate style parameters for the SGD dataset. Style parameters are features of text that are stylistically expressive. These parameters can be roughly identified at lexical (vocabulary and words), syntactic (sentence structure) and semantic (abstract meaning/emotion) levels (Verma and Srinivasan, 2019) . We focus primarily on lexical and semantic features. Specifically, we characterize lexical style parameters as low-level linguistic features that can be derived from the text directly such as word count and number of adjectives, and semantic style parameters as high-level styles such as sentiment that are more complex to characterize. Table 2 summarizes the lexical styles we annotate for the SGD data and the description of each parameter. In total, we automatically annotate six lexical style parameters for the SGD data. Similar to Zhang et al. (2018) , the parameter \"has rare word\" uses the maximum Normalized Inverse Document Frequency (NIDF) to determine whether or not a template contains words that are used less frequently in the corpus. 2 The complete data distribution for all the style parameters is included in Table 10 in Appendix A.", "cite_spans": [ { "start": 327, "end": 355, "text": "(Verma and Srinivasan, 2019)", "ref_id": "BIBREF40" }, { "start": 895, "end": 914, "text": "Zhang et al. (2018)", "ref_id": "BIBREF45" }, { "start": 1108, "end": 1109, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 695, "end": 702, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1185, "end": 1193, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Style Parameters", "sec_num": "3.2" }, { "text": "Semantic style parameters Unlike lexical parameters which consider explicit features such as vocabulary, semantic parameters of style are less lexically-defined. As a result, it is generally harder to annotate these parameters directly from the original data without auxiliary information. In this work, we consider the following semantic parameters: formality, negative sentiment, positive sentiment and empathy. Formality and sentiment are common stylistic parameters studied in the stylistic control NLG literature. We also include empathy as an interesting and complex style studied in recent work Majumder et al., 2020; Lin et al., 2019; Zhou and Wang, 2018) . We train a classifier for each of the four styles and annotate the utterances in SGD with these features. 
We include more details about the classifiers in Section 4 and show additional information about the dataset used to train each classifier in Table 9 of Appendix A.", "cite_spans": [ { "start": 602, "end": 624, "text": "Majumder et al., 2020;", "ref_id": "BIBREF24" }, { "start": 625, "end": 642, "text": "Lin et al., 2019;", "ref_id": "BIBREF20" }, { "start": 643, "end": 663, "text": "Zhou and Wang, 2018)", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 914, "end": 921, "text": "Table 9", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Lexical style parameters", "sec_num": null }, { "text": "Language model Given a sequence of tokens", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "x = x 1 , \u2022 \u2022 \u2022 , x t ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "the goal of the language model is to model the joint probability of the sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "p(x) = p(x 1 , \u2022 \u2022 \u2022 , x t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "The joint probability p(x) is often factorized in terms of the product of conditional probabilities using the chain rule of probability et al., 2001) . In recent years, transformer-based models have been used widely to model these conditional probabilities (Vaswani et al., 2017) . In this work, we use GPT-2 3 as our baseline language model and fine-tune the GPT-2 model with the processed SGD data using a flat representation with the beginning of sequence, separator, and end of sequence special tokens. 4 Semantic style classifiers The classifiers used to annotate semantic parameters are single layer classifiers. To train each classifier, we encode the input", "cite_spans": [ { "start": 136, "end": 149, "text": "et al., 2001)", "ref_id": null }, { "start": 257, "end": 279, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF39" }, { "start": 507, "end": 508, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "P (x) = T t=1 P (x T |x 0 , \u2022 \u2022 \u2022 , x T \u22121 ) (Bengio", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "x = x 1 , \u2022 \u2022 \u2022 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "x t of length t using the baseline GPT-2 model described above and obtain the last hidden layer o t for all time steps t. We then take the average representation across time, denoted\u014d t , and train the classifier to predict the target label (e.g., formal vs. 
informal) from the average repre- Template I see that at $slot1 there is a good restaurant which is in $slot2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "Flattened input after pre-processing restaurants offer restaurant_name name of the restaurant $slot1 offer city city where the restaurant is located $slot2 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "sentation: f (\u014d t ) = f T t=1 ot T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "The classifiers are used for annotating the semantic parameters of the SGD data and for the style control models described in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "In this work, we require a controlled generation method that is able to simultaneously render the semantic content given the schemata while achieving the desired stylistic goals. We also require the method to be both stable (preserve the fluency of the response even when the stylistic goals are not met) and general-purpose (can be applied to many styles). Under these requirements and constraints, we discuss three types of controlled generation methods to achieve these goals: conditional training, guided fine-tuning, and guided decoding, and compare their performance in Section 6. To our knowledge, our work is the first to systematically study the effectiveness of these control methods for schema-guided NLG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Style Controlled Text Generation", "sec_num": "5" }, { "text": "Controllable generation entails modeling p(x|a), where a is a control variable and x is the generated sequence. However, a pre-trained language model such as GPT-2 is only trained to learn p(x). On the other hand, conditional training (Kikuchi et al., 2016; Peng et al., 2018; See et al., 2019) refers to directly learning the conditional generative model p(x|a). The results are of high quality because the model is trained to directly maximize p(x|a), but this comes at the expense of fixing the control variable upfront and of re-training the entire model for each new control variable.", "cite_spans": [ { "start": 235, "end": 257, "text": "(Kikuchi et al., 2016;", "ref_id": "BIBREF17" }, { "start": 258, "end": 276, "text": "Peng et al., 2018;", "ref_id": "BIBREF32" }, { "start": 277, "end": 294, "text": "See et al., 2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Training (CT)", "sec_num": "5.1" }, { "text": "To perform style control, we fine-tune the baseline GPT-2 with the conditional training method. Specifically, each input in the training set is annotated with the variable a that we wish to control, e.g., the length (short, long) of the input. The value of the control variable a is then added to model vocabulary as a special token (e.g., [LENGTH_SHORT] ) and appended to the meaning representation after the [BOS] special token. 
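A minimal sketch of this input construction with the HuggingFace transformers API is shown below; the control token names and the flat serialization follow the format used in this paper, but the snippet is an illustration rather than the exact training code.

```python
# Illustrative sketch: add a style control token to the vocabulary and build
# a conditional-training example of the form
# "[BOS] [CONTROL] flattened-schema [SEP] template [EOS]".
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

special = {"bos_token": "[BOS]", "eos_token": "[EOS]", "sep_token": "[SEP]",
           "additional_special_tokens": ["[LENGTH_SHORT]", "[LENGTH_LONG]"]}
tokenizer.add_special_tokens(special)
model.resize_token_embeddings(len(tokenizer))  # adds embeddings for the new control codes

schema = "restaurants offer restaurant_name name of the restaurant $slot1"
template = "$slot1 is a nice restaurant."
example = f"[BOS] [LENGTH_SHORT] {schema} [SEP] {template} [EOS]"

input_ids = tokenizer(example, return_tensors="pt").input_ids
# Standard LM fine-tuning: the labels are the input itself (cross-entropy loss).
loss = model(input_ids, labels=input_ids).loss
```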
The model then learns an embedding for each value of a and learns to generate", "cite_spans": [ { "start": 340, "end": 354, "text": "[LENGTH_SHORT]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Training (CT)", "sec_num": "5.1" }, { "text": "x = x 1 , \u2022 \u2022 \u2022 , x t con-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Training (CT)", "sec_num": "5.1" }, { "text": "ditioned on a value of a and the given meaning representation by optimizing cross-entropy loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Training (CT)", "sec_num": "5.1" }, { "text": "Unlike conditional training that requires fine-tuning an entire GPT-2 model per style, guided fineturning refers to methods that require only finetuning a smaller set of the parameters while the majority of the base model stays fixed. In this paper, we consider the recent Plug-and-Play Language Model (PPLM) (Dathathri et al., 2020) . In guided fine-tuning methods, the conditional probability p(x|a) \u221d p(x)p(a|x) is obtained by finetuning the base language model (LM) using an auxiliary discriminator that explicitly models p(a|x).", "cite_spans": [ { "start": 309, "end": 333, "text": "(Dathathri et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Guided Fine-tuning", "sec_num": "5.2" }, { "text": "In our work, we use the semantic style classifiers described in Section 4 for the discriminator p(a|x) and the GPT-2 model for the base LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guided Fine-tuning", "sec_num": "5.2" }, { "text": "The major difficulty of the PPLM method is the problem of degeneration -output that is ungrammatical, incoherent, or repetitive. In practice, we observe that PPLM is prone to generating ungrammatical outputs or getting stuck in a repetitive loop if the hyper-parameters are not carefully tuned. We illustrate the effect of hyper-parameters tuning and the degeneration problem in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guided Fine-tuning", "sec_num": "5.2" }, { "text": "While conditional training and guided fine-tuning require fine-tuning the base language model, weighted decoding is applied only at decoding time, requiring no change to the base language model. To control the generation, it re-ranks the probability of words based on a scoring function or discriminator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guided Decoding", "sec_num": "5.3" }, { "text": "Weighted Decoding (WD) In weighted decoding (Ghazvininejad et al., 2017) , at time step t, the distribution of the next token x t+1 is re-weighted by a semantic style classifier that models p(a|x). 
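As a sketch, one decoding step under this re-weighting can be written as follows (the exact normalization is given in Eq. (1) just below); lm_probs and style_probs are assumed inputs standing in for the base language model's next-token distribution and the classifier scores p(a|w), not part of the actual implementation.

```python
# Illustrative weighted-decoding step; the probability arrays are assumed
# helpers, not the actual implementation used in the paper.
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def weighted_decoding_step(lm_probs, style_probs):
    """Re-rank the next-token distribution by the style classifier (Eq. 1)."""
    # lm_probs[w] = p(w) from the base LM; style_probs[w] = p(a | w).
    return softmax(lm_probs * style_probs)

# Greedy WD picks the argmax of the re-weighted distribution; BSWD instead
# keeps the top-B candidates at each step (B = 1 recovers plain WD).
vocab_size = 5
lm_probs = softmax(np.random.rand(vocab_size))
style_probs = np.random.rand(vocab_size)
next_token = int(np.argmax(weighted_decoding_step(lm_probs, style_probs)))
```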
The probability of each possible next word w in the vocabulary given the control variable a is then re-computed as p(w|a) = Softmax p(w) p(a|w) .", "cite_spans": [ { "start": 44, "end": 72, "text": "(Ghazvininejad et al., 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Guided Decoding", "sec_num": "5.3" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guided Decoding", "sec_num": "5.3" }, { "text": "Here p(w) is the probability of the word w calculated by the base language model as the next token given the generated sequence x 1 , \u2022 \u2022 \u2022 , x t , and p(a|w) is the probability of the word w associated with the control variable a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guided Decoding", "sec_num": "5.3" }, { "text": "Beam search weighted decoding (BSWD) The weighted decoding method described above takes the highest scoring item at each time step. While this approach is effective, it is often non-optimal and can limit the diversity of the generated text. To mitigate this limitation, we can increase the search space at generation time using the beam search algorithm. Given a fixed beam width parameter B, the beam search algorithm selects B best alternatives with the highest probability for an input sequence at each time step. Therefore, the original weighted decoding approach described above is a special case of the beam search algorithm with B = 1. Finally, we note that in Eq. 1, the style classifier is only conditioned on the next possible token w but not the entire past sequence, i.e., the next possible token w plus the text that has been generated", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guided Decoding", "sec_num": "5.3" }, { "text": "x 1 , \u2022 \u2022 \u2022 , x t , w .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guided Decoding", "sec_num": "5.3" }, { "text": "Empirically, in both WD and BSWD, we observe that maximizing the probability of the desired style by greedily considering only the next generated token, rather than the entire sequence of previously generated tokens, yielded better performance on the SGD data. When the entire sequence representation is used, we find that the re-weighting of the distribution is usually not strong enough to successfully match the desired stylistic goal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guided Decoding", "sec_num": "5.3" }, { "text": "In this section, we show the experimental results of the methods described in Section 5 for controlling the styles described in Section 3 on the SGD data. The baseline GPT-2 model is fine-tuned on the training set with no control variables, and the conditional training model is fine-tuned with control variable special tokens, e.g., LENGTH_SHORT. Our evaluation is tested on the test set of 1,137 MRs. 
We focus on controlling a single style at a time in this experiment; however, it is also possible to control for multiple styles -we include details on multiple-style control experiments in Appendix C (with sample outputs in Appendix Table 14) .", "cite_spans": [], "ref_spans": [ { "start": 637, "end": 646, "text": "Table 14)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We focus on evaluating three key dimensions: style accuracy, fluency, and semantic correctness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation Metrics", "sec_num": "6.1" }, { "text": "Style accuracy To evaluate how effective each controlled generation method is per style, we use the style accuracy metric, or the percentage of outputs that conform to the required input style. For lexical styles, this is simply computed using the conditions in Table 2 . For semantic styles, we classify the generated text using the corresponding style classifier and check if the predicted style matches the desired style value. For instance, if the predicted sentiment for generated text with the \"positive sentiment\" control code does not match the \"positive\" label, then it is considered incorrect.", "cite_spans": [], "ref_spans": [ { "start": 262, "end": 269, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Automatic Evaluation Metrics", "sec_num": "6.1" }, { "text": "Response fluency We use BLEU score (n-gram precision with brevity penalty) (Papineni et al., 2002) as a measurement of the response fluency. We acknowledge that lexical overlap metrics are poor measures of quality (Novikova et al., 2017) ; however, we include BLEU for completeness and further evaluate quality through human judgments.", "cite_spans": [ { "start": 75, "end": 98, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF29" }, { "start": 214, "end": 237, "text": "(Novikova et al., 2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation Metrics", "sec_num": "6.1" }, { "text": "We use slot error rate (SER) (Luong et al., 2015) to measure the semantic correctness of the generated response as compared to the given MR. SER measures the ratio of semantic errors that the model makes by finding the total number of slot mistakes (deletions, repetitions, and hallucinations) in the generated text (lower is better). SER here only considers slots that have explicit values that must be realized (e.g., $slotN).", "cite_spans": [ { "start": 29, "end": 49, "text": "(Luong et al., 2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic correctness", "sec_num": null }, { "text": "We evaluate lexical styles with only conditional training as the rest of the methods include semantic style classifiers and are thus not applicable. Table 3 summarizes the performance of conditional training for the six lexical styles. The style accuracy for most styles is generally high, between 80% to nearly 100%, especially for styles marked explicitly by specific words, such as first and second person pronouns. However, we observe that \"descriptive\" has a particularly low accuracy. First, the majority of the references in the training data (95%) have less than two adjectives, making it difficult for the model to learn this kind of style effectively. 
5 Secondly, we observe that conditional training is particularly effective when the style exhibits a clear syntactic characteristic (e.g., length) or a particular set of vocabulary (e.g., pronouns); however, this is not the case for the \"descriptive\" style.", "cite_spans": [ { "start": 662, "end": 663, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Style Automatic Evaluation", "sec_num": "6.2" }, { "text": "The fluency of the generated text with style control drops slightly as compared to no style control. Having style control often makes the generated text different from its matching responses, i.e., adding extra content or changing linguistic characteristics. Since BLEU score only considers lexical overlap, 5 The full data distribution can be found in Appendix A Table 10. this behavior is expected. Finally, we see that there is not much of a performance drop in semantic correctness with respect to SER. The experimental results show that for lexical styles with a clear syntactic pattern or vocabulary, CT can be quite effective. Table 4 illustrates example outputs using CT when controlling for \"short\", \"long\" and \"has rare word\" styles. 6 Interestingly, we see that when asked for a longer response, the model starts to hallucinate extra content (but not slots) not given in the MR in order to satisfy the control variable. This also translates to slightly lower BLEU scores and a higher SER. Methods to enforce better implicit constraints to increase fluency and semantic correctness are important points for future work. Table 5 summarizes the main results for semantic style evaluation using CT, PPLM, and BSWD. Since CT is trained to directly maximize conditional probability, it frequently has a higher BLEU score and a lower SER across different styles with the exception of \"formal\" BLEU. We note that formality is rather lexically-defined, exhibiting characteristic keywords such as \"please\", which are frequently picked up by the model, resulting in a particularly high style accuracy for \"formal\" responses and a lower BLEU score. For the three more semantically complex styles, \"positive sentiment\", \"negative sentiment\" and \"empathy\", we see that BSWD achieves a higher style accuracy than CT (at the cost of a lower BLEU and slightly higher semantic error). We note, however, that the drop in BLEU is expected: as the output is steered towards a certain style, its n-gram overlap with the references is more likely to decrease MR: OFFER(restaurant_name=$slot1), OFFER(city=$slot2) W/o Style Control $slot1 is a nice restaurant in $slot2 that serves curry. Short $slot1 is a nice restaurant in $slot2.", "cite_spans": [ { "start": 308, "end": 309, "text": "5", "ref_id": null }, { "start": 744, "end": 745, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 364, "end": 373, "text": "Table 10.", "ref_id": "TABREF1" }, { "start": 634, "end": 641, "text": "Table 4", "ref_id": null }, { "start": 1130, "end": 1137, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Lexical Style Automatic Evaluation", "sec_num": "6.2" }, { "text": "Okay! the restaurant, $slot1 located in $slot2 is a good one and serves Taiwanese dishes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Long", "sec_num": null }, { "text": "MR: OFFER(address=$slot1), OFFER(rating=$slot2) W/o Style Control There is a nice house at $slot1 with a $slot2 rating. Has Rare Word There is a lovely residence located at $slot1. It has a rating of $slot2. 
Table 4 : Example outputs for conditional training with lexical styles \"short\", \"long\" and \"has rare word\" (more content hallucinations for \"long\"). Example outputs for other lexical styles are included in Appendix C Table 12 .", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 215, "text": "Table 4", "ref_id": null }, { "start": 423, "end": 433, "text": "C Table 12", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Long", "sec_num": null }, { "text": "-thus, we use these automatic metrics as an evaluation guide, but leave a more rigorous evaluation to human judgment in the next section. Finally, with careful hyper-parameter tuning, PPLM can achieve similar performance to BSWD on BLEU and SER but does worse on style accuracy. Increasing the style accuracy for PPLM worsens the BLEU and SER score significantly and thus we do not consider it in our human evaluation. In summary, our automatic evaluation shows that for semantic styles, BSWD gives us a good trade-off between consistent style accuracy and semantic fidelity across styles. Table 6 illustrates example outputs for the three semantic styles using BSWD (with additional examples including combining multiple styles in Appendix C). In general, styles that encapsulate complex phenomena such as \"empathy\" are harder to generate as shown by their lower style accuracy; nevertheless, we are able to preserve fluency and semantic correctness in most cases. ", "cite_spans": [], "ref_spans": [ { "start": 590, "end": 597, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Long", "sec_num": null }, { "text": "We focus on human evaluation for our semantic styles since they are the most inherently subjec-tive. 7 We pick a subset of our semantic styles to evaluate, specifically, formal, negative and positive.", "cite_spans": [ { "start": 101, "end": 102, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "6.4" }, { "text": "We also focus on the evaluation of CT and BSWD only since they have an overall better performance in the automatic evaluation and are simpler in nature (e.g., less hyper-parameter tuning). To evaluate style, we ask three human annotators to rate:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "6.4" }, { "text": "\u2022 Style Rating (Sty. Rat.): How closely the response matches the given style (1 being not at all, 5 being very closely).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "6.4" }, { "text": "\u2022 Fluency (Flu.): The fluency of the generated response (1 being low, 5 being high).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "6.4" }, { "text": "\u2022 Semantic Error: Slot errors in the generated response using a 0/1 scale. For analysis purposes, we further break down the items marked \"0\" into four error types to understand the strengths and weaknesses of each method. For each type of error, the output is only marked \"1\" if there are no issues. compared to CT, confirming its ability to control semantic styles. We also observe that outputs from BSWD have a higher fluency score across all styles. This means that even though BSWD showed lower BLEU scores in automatic evaluation, its outputs are considered to be more natural and fluent in human evaluation. Finally, we see an interesting difference in error types when comparing CT and BSWD. 
In general, BSWD is more prone to deleting and hallucinating slots, while CT more frequently generates incorrect slot values. Since BSWD requires no change to the base language model, it is able to obtain a lower incorrect slot value error rate as compared to CT, which requires re-training the language model with a control code. On the other hand, it has a higher deletion and hallucination error rate since during decoding time, BSWD is free to insert or drop content in order to achieve the desired style. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "6.4" }, { "text": "In this work, we focus on stylistic control and evaluation of schema-guided NLG. We discuss three different types of methods for style controlled text generation: conditional training (CT), guided finetuning (PPLM), and guided decoding (BSWD). We present a rich set of evaluations to quantify each method's ability to achieve various styles while preserving language fluency and minimizing slot errors. Our analysis shows that, in general, styles that encapsulate abstract ideas are naturally harder to generate (e.g., empathy), and methods that require careful hyper-parameters tuning may run into the problems of instability and degeneration (e.g., PPLM) while under-performing in style accuracy. The automatic and human evaluations suggest that simultaneously achieving stylistic goals and realizing schema information requires methods that allow us to separate content generation and stylistic variations. We show that CT and BSWD overcome some of these challenges and are effective at controlling several styles while maintaining good fluency and semantic correctness in most cases. CT is effective for lexical styles with strong syntactic characteristics or distinctive vocabulary sets, while BSWD excels at semantic styles that are more complex to characterize. For future work, we are interested in extending our analysis to a larger number of styles, and in exploring techniques for understanding and representing styles that are more abstract, such as \"empathy\", as well as methods for generating those styles under a task-oriented dialogue framework. Additionally, since abstract styles may be characterized by multiple features (e.g., a combination of sentiment and descriptiveness), we are interested in studying how these underlying features can be represented and incorporated more accurately to improve overall semantic and stylistic control.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Formality corpus", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal", "sec_num": null }, { "text": "Stanford sentiment treebank (SST) Negative", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive", "sec_num": null }, { "text": "Empathetic reactions B Hyper-parameters Tuning for PPLM", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "There are several hyper-parameters in PPLM models. We refer the readers to Dathathri et al. (2020) for the full model details. From Section 5.2, the PPLM model can be written as an optimization problem of the form:", "cite_spans": [ { "start": 75, "end": 98, "text": "Dathathri et al. 
(2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min \u2206Ht L CE f T t=1 o t /T + \u03bbD KL ( o t o t ) (4) s.t. o t = LM(x t , H t )", "eq_num": "(5)" } ], "section": "Empathy", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H t = H t + \u2206H t (6) o t = LM(x t , H t )", "eq_num": "(7)" } ], "section": "Empathy", "sec_num": null }, { "text": "where \u03bb is a hyper-parameter that scales the KL divergence and f (\u2022) is the semantic style classifier learned to produce p(a|x). We minimize the crossentropy loss L CE (\u2022) of the attribute model and the Kullback-Leibler (KL) divergence D KL (\u2022 \u2022) of the language model. For each time step t, the GPT-2 model generates a history of weight matrices H t for all hidden layers. To control the generation, we shift the weight matrices H t in the direction that accomplishes two goals: 1) minimize crossentropy (CE) of the attribute a under the conditional attribute model p(a|x) and 2) minimize the KL divergence between itself and the unmodified language model p(x). Let \u2206H t be the small shift of H t such that H t = H t + \u2206H t can shift the last $slot4 $slot4 $slot4 there is $slot4 there is 0.9 0.01 1 Table 11 : Example outputs of PPLM for controlling \"positive sentiment\" with various hyper-parameters. Increasing \u03b3 gm and \u03b1 encourages style control but can lead to ungrammatical outputs or run into degeneration. Increasing \u03bb encourages sentence fluency and semantic accuracy; however, careful hyper-parameters fine-tuning is required to ensure reasonable output quality.", "cite_spans": [], "ref_spans": [ { "start": 801, "end": 809, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "layer (logit vector) to achieve the above two goals. The logit vector o t is obtained by a forward pass of the LM, i.e. o t = LM(x t , H t ). The next generated token x t+1 is sampled as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "x t+1 \u223c softmax(W o t )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "where W is a linear transformation that maps o t to a probability distribution of vocabulary size. The shift \u2206H t is then updated with gradient descent as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "\u2206H t \u2190\u2206H t \u2212 (8) \u03b1 \u2207 \u2206Ht L CE (\u2022) + \u03bb\u2207 \u2206Ht D KL (\u2022 \u2022)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "where \u03b1 is the step size and the update can be repeated until it converges or up to a certain time step to obtain the shift \u2206H t . 
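As a heavily simplified PyTorch sketch of this update, the toy linear LM head and attribute classifier below stand in for GPT-2 and the style discriminator; the actual PPLM implementation perturbs the cached key/value history H_t of the transformer rather than a single hidden vector.

```python
# Toy illustration of the PPLM-style update in Eq. (8): perturb a hidden
# state to lower the attribute loss while staying close to the original LM
# distribution. Real PPLM perturbs GPT-2's cached key/value history instead.
import torch
import torch.nn.functional as F

hidden_dim, vocab_size = 16, 50
lm_head = torch.nn.Linear(hidden_dim, vocab_size)   # stand-in for the LM logits
attr_head = torch.nn.Linear(hidden_dim, 2)          # stand-in for p(a|x) classifier
h = torch.randn(1, hidden_dim)                       # unmodified hidden state
target_attr = torch.tensor([1])                      # desired style label

alpha, lam, num_steps = 0.05, 0.1, 3
delta = torch.zeros_like(h, requires_grad=True)      # the shift Delta-H
p_orig = F.softmax(lm_head(h), dim=-1).detach()      # unmodified distribution

for _ in range(num_steps):
    h_shifted = h + delta
    ce_loss = F.cross_entropy(attr_head(h_shifted), target_attr)
    kl_loss = F.kl_div(F.log_softmax(lm_head(h_shifted), dim=-1),
                       p_orig, reduction="batchmean")
    loss = ce_loss + lam * kl_loss
    loss.backward()
    with torch.no_grad():
        delta -= alpha * delta.grad                   # gradient step (Eq. 8)
        delta.grad.zero_()

shifted_probs = F.softmax(lm_head(h + delta), dim=-1)  # used in the post-norm fusion
```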
At generation time, a postnorm fusion that samples the next token x t+1 based on the shifted distribution p t+1 and the unmodified distribution p t+1 is done to increase the language fluency:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "x t+1 \u223c 1 \u03b2 p \u03b3gm t+1 p 1\u2212\u03b3gm t+1 (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "\u03b2 is a normalizing factor such that it forms a valid distribution. As \u03b3 gm \u2192 1 this converges to the distribution from the shifted LM, and as \u03b3 gm \u2192 0 it converges to the unmodified unconditional LM distribution. We observe that the hyper-parameters \u03bb, \u03b1, \u03b3 gm can be tuned to affect the generation. In practice, we found that increasing \u03b3 gm led to nonsensical results very quickly as the generated text is no longer conformed to the given meaning representation. A larger step size \u03b1 also led to unstable and nonsensical results compared to a smaller step size as it moves H t further away from its original position. In practice, a larger \u03bb helps keep the fluency and the semantic of the generated text. Nevertheless, its effect is less influential than \u03b1 and \u03b3 gm . Table 11 illustrates the effect of hyper-parameters on the model outputs.", "cite_spans": [], "ref_spans": [ { "start": 770, "end": 778, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Empathy", "sec_num": null }, { "text": "We include additional example outputs for lexical styles in Table 12 .", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 68, "text": "Table 12", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "C Additional Experimental Results", "sec_num": null }, { "text": "Multiple Styles Control In this section, we demonstrate the possibility of controlling multiple styles with the methods described in Section 5. To control multiple styles with conditional training, we condition the CT model on multiple control variables P (x|a 1 , \u2022 \u2022 \u2022 , a n ). The control variables are concatenated and then added to the input as special tokens after the [BOS] token. To control multiple styles with PPLM, we simply add the cross-entropy loss of each semantic style classifier to the objective function. Extra hyper-parameters can be introduced to control the significance of each style. Similarly, for (beam search) weighted decoding, the distribution is re-weighted by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Additional Experimental Results", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w|a 1 , \u2022 \u2022 \u2022 , a n ) = Softmax p(w) n i=1 \u03bb i p(a i |w)", "eq_num": "(10)" } ], "section": "C Additional Experimental Results", "sec_num": null }, { "text": "where \u03bb i are hyper-parameters that allow us to determine the significance of each style a i . Finally, MR: OFFER(address=$slot1), OFFER(rating=$slot2) W/o Style Control There is a nice house at $slot1 with a $slot2 rating.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Additional Experimental Results", "sec_num": null }, { "text": "$slot1 is rated $slot2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Short", "sec_num": null }, { "text": "There is a house at $slot1 with a rating of $slot2. 
Would you like that one?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Long", "sec_num": null }, { "text": "Has Rare Word There is a lovely residence located at $slot1. It has a rating of $slot2. : Example outputs using conditional training for the six lexical styles described in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Long", "sec_num": null }, { "text": "conditional training and (beam search) weighted decoding can be used simultaneously by training a CT model and then applying WD or BSWD on the trained CT model during decoding time. Example outputs and the style accuracy of multiple style control using CT and BSWD are shown in Tables 13 and 14 . Table 13 : Style accuracy of multiple-style control using CT and BSWD with beam width B = 2. The parameter length is controlled by CT and the parameters formality and sentiment are controlled by the BSWD.", "cite_spans": [], "ref_spans": [ { "start": 278, "end": 295, "text": "Tables 13 and 14", "ref_id": "TABREF1" }, { "start": 298, "end": 306, "text": "Table 13", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Long", "sec_num": null }, { "text": "The goal of the annotation task is to evaluate the style of the automatically generated templates that express specific information to the user. The templates follow a certain style value such as \"formal\". The annotators are asked to imagine that they are having a conversation with the system, and that the template presented to them is an example of something that the system may say to them. The template may be in the form of a question, statement, confirmation, etc. Table 15 illustrates two sample MRs and four generated templates, which we lexicalize with a map of values (one per slot) to make the annotation task more intuitive.", "cite_spans": [], "ref_spans": [ { "start": 472, "end": 480, "text": "Table 15", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "D Human Evaluation Design", "sec_num": null }, { "text": "The Schema-guided Dialogue Dataset: https: //github.com/google-research-datasets/ dstc8-schema-guided-dialogue", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Appendix A includes the details of the NIDF calculation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "GPT-2 small from HuggingFace: https: //huggingface.co/transformers/model_ doc/gpt2.html 4 e.g., \"[BOS] flattened-schema-tokens [SEP] templatetokens [EOS]\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Example outputs for other lexical styles are included in Appendix CTable 12.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "More details on the human evaluation design are in Appendix D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.seas.upenn.edu/~nlp/ resources/formality-corpus.tgz 9 https://nlp.stanford.edu/sentiment/ 10 https://github.com/wwbp/empathic_ reactions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Sofia Scharfenberg, Jasmin Rehm, and the rest of the Alexa Data Services team for all of their help with preparing and performing the human evaluation study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Normalized inverse document frequency 
calculation For a word w in a template d, the Inverse Document Frequency (IDF) of w iswhere N is the total number of templates in the training data D, and |{d \u2208 D : w \u2208 d}| is the number of those templates d in the training data that contain w. The Normalized IDF (NIDF) is obtained via the min-max normalizationwhere max w \u2208D (IDF(w )) and min w \u2208D (IDF(w )) are the maximum and minimum IDF value of all the words in the training data. We use the maximum NIDF of all the words in a template to determine whether the template contains rarely used words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "The formality corpus 8 (Pavlick and Tetreault, 2016) , which contains sentence-level formality annotations, is used for training the formality classifier. The Stanford sentiment treebank (SST) 9 (Socher et al., 2013) is used for training the sentiment classifier. The SST data consists of 5 classes very negative, negative, neutral, positive, and very positive. We combine \"very negative\" and \"negative\" into one class and \"very positive\" and \"positive\" into one class. Finally, the empathetic reactions data 10 (Buechel et al., 2018) is used for training the empathy classifier. We use only the empathy label from the data. Okay! The $slot1 located in $slot2, as for the tofu, it has a pretty good rating. Would you like to try for a lunch or dinner reservation? Long W/o Style Control Okay! The $slot1 located in $slot2, as for the tofu, it has a pretty good rating. Would you like to try for a meal there? $slot1 is a nice restaurant located in $slot2.W/o Style Control Positive $slot1 might interest you, and it is located in $slot2. $slot1 is a good restaurant in $slot2.Short Positive $slot1 restaurant is also in $slot2.Okay! The restaurant $slot1 located right inside $slot2 is a good one and it has many vegetarian side dishes.Long Positive Okay! The restaurant $slot1 located right inside $slot2 is a good one. Do you wish to have your lunch here?There is a $slot2 rated house at $slot1. W/o Style Control W/o Style Control There is a house at $slot1 with a $slot2 rating.$slot1 has a $slot2 rating Short W/o Style Control $slot1 with a $slot2 rating There is a house at $slot1 with a $slot2 rating. would you like to stay here? Long W/o Style Control There is a house at $slot1 that you might be interested in. It has a $slot2 average rating.There is a lovely house at $slot1 with a rating of $slot2 W/o Style Control Positive There is a good house at $slot1 with a rating of $slot2 $slot1 has a rating of $slot2 Short Positive $slot1 with rating of $slot2There is a lovely residence located at $slot1 with a rating of $slot2Long Positive There is a nice house located at $slot1 with a rating of $slot2 Table 14 : Example outputs by combining CT and BSWD with beam width B = 2. Lexical style is controlled by CT and semantic style is controlled by the BSWD. Note how the lexical style \"long\" tends to yield outputs that include more hallucinated content in an attempt to fulfill the required style goal(s). 
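For reference, a minimal sketch of the NIDF computation described in this appendix (tokenization is simplified to whitespace splitting, and unseen words are treated as maximally rare):

```python
# Illustrative NIDF computation: IDF over templates, min-max normalized,
# with a template's rare-word score taken as its maximum NIDF.
import math
from collections import Counter

def compute_nidf(templates):
    n = len(templates)
    doc_freq = Counter()
    for t in templates:
        doc_freq.update(set(t.lower().split()))
    idf = {w: math.log(n / df) for w, df in doc_freq.items()}
    lo, hi = min(idf.values()), max(idf.values())
    return {w: (v - lo) / (hi - lo) if hi > lo else 0.0 for w, v in idf.items()}

def max_nidf(template, nidf):
    return max(nidf.get(w, 1.0) for w in template.lower().split())

corpus = ["$slot1 is a good restaurant.", "there is a lovely residence at $slot1."]
nidf = compute_nidf(corpus)
print(max_nidf("there is a lovely residence at $slot1.", nidf))
```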
", "cite_spans": [ { "start": 23, "end": 52, "text": "(Pavlick and Tetreault, 2016)", "ref_id": "BIBREF30" }, { "start": 195, "end": 216, "text": "(Socher et al., 2013)", "ref_id": "BIBREF37" } ], "ref_spans": [ { "start": 2114, "end": 2122, "text": "Table 14", "ref_id": null } ], "eq_spans": [], "section": "Semantic style parameter annotation details", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Towards zero shot frame semantic parsing for domain scaling", "authors": [ { "first": "Ankur", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "Gokhan", "middle": [], "last": "Tur", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2017. Towards zero shot frame seman- tic parsing for domain scaling. In Interspeech 2017.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "On the dangers of stochastic parrots: Can language models be too big?", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Gebru", "suffix": "" }, { "first": "Angelina", "middle": [], "last": "Mcmillan-Major", "suffix": "" }, { "first": "Shmargaret", "middle": [], "last": "Shmitchell", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21", "volume": "", "issue": "", "pages": "610--623", "other_ids": { "DOI": [ "10.1145/3442188.3445922" ] }, "num": null, "urls": [], "raw_text": "Emily M. Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Confer- ence on Fairness, Accountability, and Transparency, FAccT '21, page 610-623, New York, NY, USA. As- sociation for Computing Machinery.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2001, "venue": "Advances in Neural Information Processing Systems", "volume": "13", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, and Pascal Vincent. 2001. A neural probabilistic language model. In Advances in Neural Information Processing Systems, volume 13. MIT Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hello, it's GPT-2 -how can I help you? towards the use of pretrained language models for task-oriented dialogue systems", "authors": [ { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", "volume": "", "issue": "", "pages": "15--22", "other_ids": { "DOI": [ "10.18653/v1/D19-5602" ] }, "num": null, "urls": [], "raw_text": "Pawe\u0142 Budzianowski and Ivan Vuli\u0107. 2019. Hello, it's GPT-2 -how can I help you? towards the use of pre- trained language models for task-oriented dialogue systems. 
In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 15-22, Hong Kong. Association for Computational Linguis- tics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Modeling empathy and distress in reaction to news stories", "authors": [ { "first": "Sven", "middle": [], "last": "Buechel", "suffix": "" }, { "first": "Anneke", "middle": [], "last": "Buffone", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Slaff", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Un- gar, and Jo\u00e3o Sedoc. 2018. Modeling empathy and distress in reaction to news stories. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The WebNLG challenge: Generating text from DBPedia data", "authors": [ { "first": "Emilie", "middle": [], "last": "Colin", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "M'rabet", "middle": [], "last": "Yassine", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 9th International Natural Language Generation conference", "volume": "", "issue": "", "pages": "163--167", "other_ids": { "DOI": [ "10.18653/v1/W16-6626" ] }, "num": null, "urls": [], "raw_text": "Emilie Colin, Claire Gardent, Yassine M'rabet, Shashi Narayan, and Laura Perez-Beltrachini. 2016. The WebNLG challenge: Generating text from DBPedia data. In Proceedings of the 9th International Nat- ural Language Generation conference, pages 163- 167, Edinburgh, UK. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Plug and play language models: A simple approach to controlled text generation", "authors": [ { "first": "Sumanth", "middle": [], "last": "Dathathri", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Janice", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Hung", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Piero", "middle": [], "last": "Molino", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Yosinski", "suffix": "" }, { "first": "Rosanne", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language mod- els: A simple approach to controlled text generation. In International Conference on Learning Represen- tations.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Anushree Venkatesh, and Dilek Hakkani-Tur. 2020. 
Schema-guided natural language generation", "authors": [ { "first": "Yuheng", "middle": [], "last": "Du", "suffix": "" }, { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "" }, { "first": "Vittorio", "middle": [], "last": "Perera", "suffix": "" }, { "first": "Minmin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Anjali", "middle": [], "last": "Narayan-Chen", "suffix": "" }, { "first": "Tagyoung", "middle": [], "last": "Chung", "suffix": "" } ], "year": null, "venue": "Proceedings of the 13th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "283--295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuheng Du, Shereen Oraby, Vittorio Perera, Min- min Shen, Anjali Narayan-Chen, Tagyoung Chung, Anushree Venkatesh, and Dilek Hakkani-Tur. 2020. Schema-guided natural language generation. In Proceedings of the 13th International Conference on Natural Language Generation, pages 283-295, Dublin, Ireland. Association for Computational Lin- guistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Findings of the E2E NLG challenge", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "322--328", "other_ids": { "DOI": [ "10.18653/v1/W18-6539" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2018. Findings of the E2E NLG challenge. In Proceedings of the 11th International Conference on Natural Language Generation, pages 322-328, Tilburg University, The Netherlands. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Controllable Abstractive Summarization", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", "volume": "", "issue": "", "pages": "45--54", "other_ids": { "DOI": [ "10.18653/v1/W18-2706" ] }, "num": null, "urls": [], "raw_text": "Angela Fan, David Grangier, and Michael Auli. 2018. Controllable Abstractive Summarization. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 45-54, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Controlling Linguistic Style Aspects in Neural Language Generation", "authors": [ { "first": "Jessica", "middle": [], "last": "Ficler", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Workshop on Stylistic Variation", "volume": "", "issue": "", "pages": "94--104", "other_ids": { "DOI": [ "10.18653/v1/W17-4912" ] }, "num": null, "urls": [], "raw_text": "Jessica Ficler and Yoav Goldberg. 2017. Controlling Linguistic Style Aspects in Neural Language Gen- eration. In Proceedings of the Workshop on Stylis- tic Variation, pages 94-104, Copenhagen, Denmark. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Spoken Conversational AI in Video Games: Emotional Dialogue Management Increases User Engagement", "authors": [ { "first": "Jamie", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Papaioannou", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Lemon", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 18th International Conference on Intelligent Virtual Agents", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jamie Fraser, Ioannis Papaioannou, and Oliver Lemon. 2018. Spoken Conversational AI in Video Games: Emotional Dialogue Management Increases User Engagement. In Proceedings of the 18th Interna- tional Conference on Intelligent Virtual Agents.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Hafez: an interactive poetry generation system", "authors": [ { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Priyadarshi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL 2017, System Demonstrations", "volume": "", "issue": "", "pages": "43--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. 2017. Hafez: an interactive poetry generation system. In Proceedings of ACL 2017, System Demonstrations, pages 43-48, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning to write with cooperative discriminators", "authors": [ { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Buys", "suffix": "" }, { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "D", "middle": [], "last": "Golub", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2018, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, D. Golub, and Yejin Choi. 2018. Learn- ing to write with cooperative discriminators. ArXiv, abs/1805.06087.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Toward controlled generation of text", "authors": [ { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Liang", "suffix": "" }, { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "E", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2017, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, R. Salakhut- dinov, and E. Xing. 2017. Toward controlled gener- ation of text. 
In ICML.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Template guided text generation for task-oriented dialogue", "authors": [ { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6505--6520", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.527" ] }, "num": null, "urls": [], "raw_text": "Mihir Kale and Abhinav Rastogi. 2020. Template guided text generation for task-oriented dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6505-6520, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Ctrl: A conditional transformer language model for controllable generation", "authors": [ { "first": "Bryan", "middle": [], "last": "Nitish Shirish Keskar", "suffix": "" }, { "first": "Lav", "middle": [ "R" ], "last": "Mccann", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Varshney", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Shirish Keskar, Bryan McCann, Lav R. Varsh- ney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. ArXiv, abs/1909.05858.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Controlling output length in neural encoder-decoders", "authors": [ { "first": "Yuta", "middle": [], "last": "Kikuchi", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Ryohei", "middle": [], "last": "Sasano", "suffix": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1328--1338", "other_ids": { "DOI": [ "10.18653/v1/D16-1140" ] }, "num": null, "urls": [], "raw_text": "Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Control- ling output length in neural encoder-decoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1328-1338, Austin, Texas. Association for Compu- tational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Gedi: Generative discriminator guided sequence generation", "authors": [ { "first": "Ben", "middle": [], "last": "Krause", "suffix": "" }, { "first": "Akhilesh", "middle": [], "last": "Deepak Gotmare", "suffix": "" }, { "first": "B", "middle": [], "last": "Mc-Cann", "suffix": "" }, { "first": "N", "middle": [], "last": "Keskar", "suffix": "" }, { "first": "R", "middle": [], "last": "Shafiq", "suffix": "" }, { "first": "R", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Nazneen", "middle": [], "last": "Socher", "suffix": "" }, { "first": "", "middle": [], "last": "Rajani", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Krause, Akhilesh Deepak Gotmare, B. Mc- Cann, N. 
Keskar, Shafiq R. Joty, R. Socher, and Nazneen Rajani. 2020. Gedi: Generative dis- criminator guided sequence generation. ArXiv, abs/2009.06367.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Leveraging slot descriptions for zero-shot cross-domain dialogue StateTracking", "authors": [ { "first": "Zhaojiang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Seungwhan", "middle": [], "last": "Moon", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Crook", "suffix": "" }, { "first": "Zhenpeng", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhiguang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Eunjoon", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Subba", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "5640--5648", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.448" ] }, "num": null, "urls": [], "raw_text": "Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, and Rajen Subba. 2021. Leveraging slot descriptions for zero-shot cross-domain dialogue StateTracking. In Proceed- ings of the 2021 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5640-5648, Online. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "MoEL: Mixture of Empathetic Listeners", "authors": [ { "first": "Zhaojiang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Jamin", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "121--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, and Pascale Fung. 2019. MoEL: Mixture of Empa- thetic Listeners. 
In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 121-132, Hong Kong, China.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Dexperts: Decodingtime controlled text generation with experts and antiexperts", "authors": [ { "first": "Alisa", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Ximing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. Dexperts: Decoding- time controlled text generation with experts and anti- experts.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": { "DOI": [ "10.18653/v1/D15-1166" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. Association for Computational Lin- guistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Attention over parameters for dialogue systems", "authors": [ { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Zhaojiang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jamin", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2020, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, Jamin Shin, and Pascale Fung. 2020. Attention over parameters for dialogue systems. 
ArXiv, abs/2001.01871.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "MIME: MIMicking emotions for empathetic response generation", "authors": [ { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Shanshan", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Jiankun", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Deepanway", "middle": [], "last": "Ghosal", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "8968--8979", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.721" ] }, "num": null, "urls": [], "raw_text": "Navonil Majumder, Pengfei Hong, Shanshan Peng, Jiankun Lu, Deepanway Ghosal, Alexander Gel- bukh, Rada Mihalcea, and Soujanya Poria. 2020. MIME: MIMicking emotions for empathetic re- sponse generation. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 8968-8979, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Step-by-step: Separating planning from realization in neural data-to-text generation", "authors": [ { "first": "Amit", "middle": [], "last": "Moryossef", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2267--2277", "other_ids": { "DOI": [ "10.18653/v1/N19-1236" ] }, "num": null, "urls": [], "raw_text": "Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267-2277, Minneapolis, Minnesota. Association for Computational Linguis- tics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Why we need new evaluation metrics for NLG", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Amanda", "middle": [ "Cercas" ], "last": "Curry", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2241--2252", "other_ids": { "DOI": [ "10.18653/v1/D17-1238" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cer- cas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252, Copenhagen, Denmark. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Curate and Generate: A Corpus and Method for Joint Control of Semantics and Style in Neural NLG", "authors": [ { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "" }, { "first": "Vrindavan", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "Abteen", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5938--5951", "other_ids": { "DOI": [ "10.18653/v1/P19-1596" ] }, "num": null, "urls": [], "raw_text": "Shereen Oraby, Vrindavan Harrison, Abteen Ebrahimi, and Marilyn Walker. 2019. Curate and Generate: A Corpus and Method for Joint Control of Semantics and Style in Neural NLG. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5938-5951, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Controlling Personality-Based Stylistic Variation with Neural Natural Language Generators", "authors": [ { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "" }, { "first": "Lena", "middle": [], "last": "Reed", "suffix": "" }, { "first": "Shubhangi", "middle": [], "last": "Tandon", "suffix": "" }, { "first": "T", "middle": [ "S" ], "last": "Sharath", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Lukin", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "180--190", "other_ids": { "DOI": [ "10.18653/v1/W18-5019" ] }, "num": null, "urls": [], "raw_text": "Shereen Oraby, Lena Reed, Shubhangi Tandon, Sharath T.S., Stephanie Lukin, and Marilyn Walker. 2018. Controlling Personality-Based Stylistic Varia- tion with Neural Natural Language Generators. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 180-190, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "An Empirical Analysis of Formality in Online Communication", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "61--74", "other_ids": { "DOI": [ "10.1162/tacl_a_00083" ] }, "num": null, "urls": [], "raw_text": "Ellie Pavlick and Joel Tetreault. 2016. An Empiri- cal Analysis of Formality in Online Communication. Transactions of the Association for Computational Linguistics, 4:61-74.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Few-shot natural language generation for task-oriented dialog", "authors": [ { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jinchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Michael Zeng, and Jianfeng Gao. 2020. Few-shot natural language generation for task-oriented dialog. In EMNLP.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Towards Controllable Story Generation", "authors": [ { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Storytelling", "volume": "", "issue": "", "pages": "43--49", "other_ids": { "DOI": [ "10.18653/v1/W18-1505" ] }, "num": null, "urls": [], "raw_text": "Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. 2018. Towards Controllable Story Generation. In Proceedings of the First Work- shop on Storytelling, pages 43-49, New Orleans, Louisiana. Association for Computational Linguis- tics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "R", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language mod- els are unsupervised multitask learners.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset", "authors": [ { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Xiaoxue", "middle": [], "last": "Zang", "suffix": "" }, { "first": "Srinivas", "middle": [], "last": "Sunkara", "suffix": "" }, { "first": "Raghav", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Khaitan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.05855" ] }, "num": null, "urls": [], "raw_text": "Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. arXiv preprint arXiv:1909.05855.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "What makes a good conversation? How controllable attributes affect human judgments", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1702--1723", "other_ids": { "DOI": [ "10.18653/v1/N19-1170" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Stephen Roller, Douwe Kiela, and Ja- son Weston. 2019. What makes a good conver- sation? How controllable attributes affect human judgments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1702-1723, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "The woman worked as a babysitter: On biases in language generation", "authors": [ { "first": "Emily", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Premkumar", "middle": [], "last": "Natarajan", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3407--3412", "other_ids": { "DOI": [ "10.18653/v1/D19-1339" ] }, "num": null, "urls": [], "raw_text": "Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3407- 3412, Hong Kong, China. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Adding chit-chat to enhance task-oriented dialogues", "authors": [ { "first": "Kai", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Seungwhan", "middle": [], "last": "Moon", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Crook", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Becka", "middle": [], "last": "Silvert", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhiguang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Honglei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eunjoon", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1570--1583", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.124" ] }, "num": null, "urls": [], "raw_text": "Kai Sun, Seungwhan Moon, Paul Crook, Stephen Roller, Becka Silvert, Bing Liu, Zhiguang Wang, Honglei Liu, Eunjoon Cho, and Claire Cardie. 2021. Adding chit-chat to enhance task-oriented dialogues. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1570-1583, Online. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "A Lexical, Syntactic, and Semantic Perspective for Understanding Style in Text", "authors": [ { "first": "Gaurav", "middle": [], "last": "Verma", "suffix": "" }, { "first": "", "middle": [], "last": "Balaji Vasan Srinivasan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.08349" ] }, "num": null, "urls": [], "raw_text": "Gaurav Verma and Balaji Vasan Srinivasan. 2019. A Lexical, Syntactic, and Semantic Perspective for Un- derstanding Style in Text. arXiv:1909.08349 [cs].", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Transformer-based empathetic response generation using dialogue situation and advanced-level definition of empathy", "authors": [ { "first": "Yi-Hsuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jia-Hao", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Chung-Hsien", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2021, "venue": "2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP)", "volume": "", "issue": "", "pages": "1--5", "other_ids": { "DOI": [ "10.1109/ISCSLP49672.2021.9362067" ] }, "num": null, "urls": [], "raw_text": "Yi-Hsuan Wang, Jia-Hao Hsu, Chung-Hsien Wu, and Tsung-Hsien Yang. 2021. Transformer-based empa- thetic response generation using dialogue situation and advanced-level definition of empathy. 
In 2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP), pages 1-5.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Semantically conditioned LSTM-based natural language generation for spoken dialogue systems", "authors": [ { "first": "Milica", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1711--1721", "other_ids": { "DOI": [ "10.18653/v1/D15-1199" ] }, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural lan- guage generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721, Lisbon, Portugal. Association for Com- putational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Seqgan: Sequence generative adversarial nets with policy gradient", "authors": [ { "first": "Lantao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lantao Yu, W. Zhang, J. Wang, and Y. Yu. 2017. Seq- gan: Sequence generative adversarial nets with pol- icy gradient. In AAAI.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Side-tuning: A baseline for network adaptation via additive side networks", "authors": [ { "first": "J", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Sax", "suffix": "" }, { "first": "A", "middle": [], "last": "Zamir", "suffix": "" }, { "first": "L", "middle": [], "last": "Guibas", "suffix": "" }, { "first": "J", "middle": [], "last": "Malik", "suffix": "" } ], "year": 2020, "venue": "ECCV", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Zhang, Alexander Sax, A. Zamir, L. Guibas, and J. Malik. 2020. Side-tuning: A baseline for network adaptation via additive side networks. 
In ECCV.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Learning to Control the Specificity in Neural Response Generation", "authors": [ { "first": "Ruqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Yixing", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1108--1117", "other_ids": { "DOI": [ "10.18653/v1/P18-1102" ] }, "num": null, "urls": [], "raw_text": "Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018. Learning to Con- trol the Specificity in Neural Response Generation. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1108-1117, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "MojiTalk: Generating emotional responses at scale", "authors": [ { "first": "Xianda", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xianda Zhou and William Yang Wang. 2018. MojiTalk: Generating emotional responses at scale. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers).", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Fine-tuning language models from human preferences", "authors": [ { "first": "M", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Nisan", "middle": [], "last": "Ziegler", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Stiennon", "suffix": "" }, { "first": "T", "middle": [], "last": "Wu", "suffix": "" }, { "first": "A", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Christiano", "suffix": "" }, { "first": "", "middle": [], "last": "Irving", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, T. Brown, A. Radford, Dario Amodei, Paul Chris- tiano, and Geoffrey Irving. 2019. Fine-tuning lan- guage models from human preferences. ArXiv, abs/1909.08593.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Deletion (Del.): Whether the response drops any slot values from the MR. -Repetition (Rep.): Whether the response repeats any slot values from the MR. -Content Hallucination (Cont. Hal.): Whether the response includes extra content not given in the MR. -Incorrect Slot Values (Inc. Slot): Whether the response includes any slot values not given in the MR (a specific type of hallucination)." }, "TABREF1": { "text": "Sample system-side schema and the flat natural language strings.", "content": "
Style ParameterDescriptionCondition
", "type_str": "table", "num": null, "html": null }, "TABREF2": { "text": "", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF4": { "text": "Evaluation of lexical style parameters with conditional training.", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF6": { "text": "Evaluation of semantic styles.", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF7": { "text": "shows the aggregated evaluation (40 samples per style, with three judgments per sample). The results show that outputs generated by BSWD have a higher or comparable style rating MR: REQUEST(where_to=none) W/o style control What city are you staying? Formal What city are you planning to stay in? MR: OFFER(address=$slot1), OFFER(rating=$slot2)W/o style control There is a house at $slot1 with a rating of $slot2.", "content": "
PositiveThere is a nice house with a $slot2 rating located at $slot1.
MR: INFORM(rating=$slot1), NOTIFY_FAILURE(null=none)
W/o style control The rating is $slot1. I was unable to make a reservation.
EmpathyThe rating is $slot1. I'm sorry, but I couldn't make the reservation.
", "type_str": "table", "num": null, "html": null }, "TABREF8": { "text": "Example outputs of semantic styles using beam search weighted decoding (BSWD).", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF10": { "text": "Human evaluation results for selected semantic styles and methods.", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF11": { "text": "Semantic style parameters with their possible values and associated training datasets.", "content": "
Category | Style | Train | Dev | Test
Lexical styles | Short | 31.42% | 36.96% | 32.23%
Lexical styles | Long | 26.49% | 25.13% | 25.44%
Lexical styles | Has rare word | 35.83% | 37.76% | 42.04%
Lexical styles | First person pron. | 28.48% | 28.88% | 28.75%
Lexical styles | Second person pron. | 52.89% | 52.50% | 60.18%
Lexical styles | Descriptive | 5.00% | 5.32% | 3.88%
Semantic styles | Formal | 30.59% | 34.27% | 29.76%
Semantic styles | Negative | 6.55% | 4.33% | 4.95%
Semantic styles | Positive | 39.70% | 42.43% | 37.41%
Semantic styles | Empathy | 27.72% | 30.04% | 25.65%
", "type_str": "table", "num": null, "html": null }, "TABREF12": { "text": "Data distribution for all of the style parameters we annotate in SGD.", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF14": { "text": "First Person Pron. I found a house at $slot1 with a $slot2 rating. Second Person Pron. There is a house at $slot1 with a $slot2 rating. Would you like that one? Descriptive There is a nice house with a $slot2 rating available at $slot1.", "content": "
MR: OFFER_INTENT(intent=intent)
W/o Style Control | Would you like to make a reservation for this house?
Short | Do you want to reserve it?
Long | Do you want me to go ahead and reserve a room at the hotel?
Has Rare Word | Would you like to continue with the hotel reservation?
First Person Pron. | Shall I book the rooms in that hotel now?
Second Person Pron. | Would you like to make a reservation for this house?
Descriptive | Would you like to reserve a room at this hotel?
", "type_str": "table", "num": null, "html": null }, "TABREF15": { "text": "", "content": "", "type_str": "table", "num": null, "html": null } } } }