{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:22.490968Z" }, "title": "Low-Resource Adaptation of Open-Domain Generative Chatbots Greyson Gerhard-Young", "authors": [ { "first": "Raviteja", "middle": [], "last": "Anantha", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brown University \u2660 Amazon", "location": {} }, "email": "" }, { "first": "Srinivas", "middle": [], "last": "Chappidi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brown University \u2660 Amazon", "location": {} }, "email": "" }, { "first": "Bj\u00f6rn", "middle": [], "last": "Hoffmeister", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brown University \u2660 Amazon", "location": {} }, "email": "" }, { "first": "\u2660", "middle": [], "last": "Apple", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brown University \u2660 Amazon", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent work building open-domain chatbots has demonstrated that increasing model size improves performance (Adiwardana et al., 2020; Roller et al., 2020). On the other hand, latency and connectivity considerations dictate the move of digital assistants on the device (Verge, 2021). Giving a digital assistant like Siri, Alexa, or Google Assistant the ability to discuss just about anything leads to the need for reducing the chatbot model size such that it fits on the user's device. We demonstrate that low parameter models can simultaneously retain their general knowledge conversational abilities while improving in a specific domain. Additionally, we propose a generic framework that accounts for variety in question types, tracks reference throughout multiturn conversations, and removes inconsistent and potentially toxic responses. Our framework seamlessly transitions between chatting and performing transactional tasks, which will ultimately make interactions with digital assistants more human-like. We evaluate our framework on 1 internal and 4 public benchmark datasets using both automatic (Perplexity) and human (SSA-Sensibleness and Specificity Average) evaluation metrics and establish comparable performance while reducing model parameters by 90%.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Recent work building open-domain chatbots has demonstrated that increasing model size improves performance (Adiwardana et al., 2020; Roller et al., 2020). On the other hand, latency and connectivity considerations dictate the move of digital assistants on the device (Verge, 2021). Giving a digital assistant like Siri, Alexa, or Google Assistant the ability to discuss just about anything leads to the need for reducing the chatbot model size such that it fits on the user's device. We demonstrate that low parameter models can simultaneously retain their general knowledge conversational abilities while improving in a specific domain. Additionally, we propose a generic framework that accounts for variety in question types, tracks reference throughout multiturn conversations, and removes inconsistent and potentially toxic responses. Our framework seamlessly transitions between chatting and performing transactional tasks, which will ultimately make interactions with digital assistants more human-like. 
We evaluate our framework on 1 internal and 4 public benchmark datasets using both automatic (perplexity) and human (SSA: Sensibleness and Specificity Average) evaluation metrics and establish comparable performance while reducing model parameters by 90%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent progress on end-to-end neural approaches for building open-domain chatbots (Zhang et al., 2020; Adiwardana et al., 2020; Roller et al., 2020) has demonstrated that large-scale pre-training using heavy-weight models, combined with careful selection of datasets for fine-tuning to acquire specific skills, can deliver superior performance. However, there is still a long way to go before one model can reliably perform several tasks: dialogue state tracking or reference resolution, response generation, mitigating toxic responses, avoiding in-turn contradictions, and avoiding incorrect or \"I don't know\" responses due to lack of knowledge. Despite much research, these limitations of recently proposed approaches prevent their practical adoption. In addition, due to huge model sizes, these approaches lack practical utility in a low-resource setting. Some complex frameworks (Serban et al., 2017; Worswick, 2018) use a mix of templates and dialogue managers with rule-based systems. These frameworks often produce responses that are vague and generic, and they lack engagingness (Adiwardana et al., 2020) . Other complex frameworks address this issue by employing a modular design that assigns each conversational task to a specific component, which can help improve the overall performance of the dialogue system (Fang et al., 2017; Yu et al., 2019) . Prior works have shown that generative neural response models outperform template-based or hybrid response generation methods as measured using various human evaluation techniques (Adiwardana et al., 2020; Roller et al., 2020) .", "cite_spans": [ { "start": 82, "end": 102, "text": "(Zhang et al., 2020;", "ref_id": "BIBREF18" }, { "start": 103, "end": 127, "text": "Adiwardana et al., 2020;", "ref_id": "BIBREF0" }, { "start": 128, "end": 148, "text": "Roller et al., 2020)", "ref_id": "BIBREF10" }, { "start": 900, "end": 921, "text": "(Serban et al., 2017;", "ref_id": null }, { "start": 922, "end": 937, "text": "Worswick, 2018;", "ref_id": "BIBREF15" }, { "start": 1126, "end": 1151, "text": "(Adiwardana et al., 2020)", "ref_id": "BIBREF0" }, { "start": 1358, "end": 1377, "text": "(Fang et al., 2017;", "ref_id": "BIBREF6" }, { "start": 1378, "end": 1394, "text": "Yu et al., 2019)", "ref_id": "BIBREF16" }, { "start": 1577, "end": 1602, "text": "(Adiwardana et al., 2020;", "ref_id": "BIBREF0" }, { "start": 1603, "end": 1623, "text": "Roller et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose a generic, modular, and light-weight framework that blends the desired characteristics of both classes of methods. A snippet of a sample dialogue with our proposed framework is shown in Figure 1 . 
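To make this flow concrete before we detail each module, the control logic of the pipeline can be sketched as below. This is a minimal illustration: every helper function is a hypothetical stand-in for a model described in Section 2, not a released API.

```python
# Illustrative control flow of the LED pipeline (Section 2).
# All helpers (resolve_references, factual_probability, ...) are
# hypothetical stand-ins for the models described in this paper.

def led_respond(query: str, history: list[str]) -> str:
    # 1. Resolve anaphoric references against the conversation history.
    rewritten = resolve_references(query, history)

    # 2. Route: factual questions go to ExtractNParaphrase, subjective
    #    ones to the on-device generative model (0.8 is the threshold
    #    reported in Section 2.2).
    if factual_probability(rewritten) > 0.8:
        response = extract_n_paraphrase(rewritten)
    else:
        response = subjective_generator(rewritten, history)

    # 3. Gate: discard responses flagged by the inconsistency/toxicity
    #    predictor and fall back to a safe response.
    if is_inconsistent_or_toxic(response, history):
        response = fallback_response()
    return response
```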
Our contributions are as follows: (1) demonstrating that a light-weight response generation model in a modular framework achieves comparable performance to recent models (Adiwardana et al., 2020; Roller et al., 2020) that have billions of parameters; (2) providing evidence that adding a reference resolution component improves the quality of the generated response for multi-turn conversations, compared to previous approaches that track conversational state explicitly or use latent representations (Cervone et al., 2019; Roller et al., 2020) ; and (3) providing a generic end-to-end framework that can process both objective (factual) and subjective questions.", "cite_spans": [ { "start": 389, "end": 414, "text": "(Adiwardana et al., 2020;", "ref_id": "BIBREF0" }, { "start": 415, "end": 435, "text": "Roller et al., 2020)", "ref_id": "BIBREF10" }, { "start": 728, "end": 750, "text": "(Cervone et al., 2019;", "ref_id": "BIBREF2" }, { "start": 751, "end": 771, "text": "Roller et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 208, "end": 216, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Lightweight Entertainment Domain (LED) chatbot interacts with the user through a pipeline of models. The LED chatbot architecture is illustrated in Figure 2 . Each module in our pipeline handles a specific conversational task and passes its output to the downstream modules for further processing. In the following subsections, we describe these modules with their respective tasks and training details.", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 156, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Lightweight Entertainment Domain Chatbot", "sec_num": "2" }, { "text": "In a multi-turn dialogue, follow-up questions often contain implicit or explicit references to entities from the previous turns. It is well established that providing self-contained questions by resolving references improves the efficiency of language understanding systems (Elgohary et al., 2019; Anantha et al., 2021) .", "cite_spans": [ { "start": 286, "end": 309, "text": "(Elgohary et al., 2019;", "ref_id": "BIBREF5" }, { "start": 310, "end": 331, "text": "Anantha et al., 2021)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Reference Resolution", "sec_num": "2.1" }, { "text": "The input to the reference resolution component is the current turn query along with the conversation context, i.e., previous queries and responses. We follow the implementation of the CopyTransformer model (Anantha et al., 2021) . Our reference resolution model consists of 90M parameters. 
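Concretely, the rewriter consumes the serialized conversation history plus the current query and emits a self-contained question. A minimal sketch of that interface follows; the separator scheme and the model handle are illustrative assumptions, not our exact CopyTransformer implementation.

```python
# Hedged sketch of the reference resolution interface.
SEP = " [SEP] "  # separator token is an assumption, not our exact scheme

def build_rewriter_input(query: str, history: list[str]) -> str:
    # Conversation context = previous queries and responses, oldest first,
    # followed by the current turn's query.
    return SEP.join(history + [query])

def rewrite(query: str, history: list[str], rewriter) -> str:
    # `rewriter.generate` stands in for CopyTransformer inference.
    return rewriter.generate(build_rewriter_input(query, history))

# E.g. (cf. Figure 3), "Who sings it?" asked after a turn about Skyfall
# should come back as a self-contained "Who sings the song Skyfall?".
```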
A sample input and output is shown in Figure 3 .", "cite_spans": [ { "start": 207, "end": 229, "text": "(Anantha et al., 2021)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 332, "end": 340, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Reference Resolution", "sec_num": "2.1" }, { "text": "One of the goals in a low-latency setting is to process as much information as possible on the device, and to send requests to the server only when absolutely needed. This design provides faster responses by avoiding unnecessary round trips to the server. In order to determine whether a query can be processed on the device, it is important to predict whether the query needs information from external knowledge sources, such as the world wide web. We refer to questions that require general knowledge and are objective in type as \"Factual Questions,\" and to questions of a chit-chat type as \"Subjective Questions.\" We refer to the on-device classifier that predicts whether a question is factual or not (subjective) as the \"Factual Classifier\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factual Classifier", "sec_num": "2.2" }, { "text": "We use ALBERT (Lan et al., 2020) as our factual classifier. We initialize the factual classifier weights using the HuggingFace pre-trained ALBERT model (https://huggingface.co/albert-base-v2) and train using binary labels from our Internal Media dataset, where 1 represents a factual question and 0 a subjective question. Our factual classifier consists of 11M parameters. We observed the optimal decision threshold to be 0.8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factual Classifier", "sec_num": "2.2" }, { "text": "The subjective response generation component of our pipeline is a 90M parameter model with a conventional Seq2Seq Transformer architecture. Our work uses the optimized setup discussed in Blender to convert input sequences of dialogue to an output response (Roller et al., 2020) . However, there are a couple of core differences. First, our dialogue model was fine-tuned for a particular use case: subjective entertainment-domain questions. Additionally, our model has been trained on rewritten inputs (given the reference resolver earlier in the pipeline).", "cite_spans": [ { "start": 256, "end": 277, "text": "(Roller et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Subjective Response Generation", "sec_num": "2.3" }, { "text": "The core response generation model was trained using the ParlAI framework (https://github.com/facebookresearch/ParlAI), a platform designed specifically for dialogue models. We build upon the work of Blender's 90M generative model included in the broader ParlAI zoo (Roller et al., 2020) . The critical objective for this portion of the pipeline was to maintain general-domain performance while concurrently improving in our target domains: music and movies. As described in Section 3, our datasets contain human-rewritten questions where anaphoric references are resolved, and we use the rewritten questions as input for response generation. Our experimentation uses a variety of techniques, with the methodology behind each tactic covered in this section. In order to understand how our fine-tuned model performs on both explicit and implicit inputs, we run all trials on both original and rewritten questions before comparing performance. 
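All of these runs go through the ParlAI trainer. A representative invocation is sketched below; the task name and hyperparameter values are illustrative placeholders (our internal media task is not public), not our exact configuration.

```python
# Hedged sketch: fine-tuning the 90M Blender generator with ParlAI.
from parlai.scripts.train_model import TrainModel

TrainModel.main(
    task='blended_skill_talk',                   # stand-in for our task mix
    model='transformer/generator',
    init_model='zoo:blender/blender_90M/model',  # pre-trained 90M Blender
    dict_file='zoo:blender/blender_90M/model.dict',
    model_file='/tmp/led_subjective/model',
    batchsize=16,                                # illustrative values
    learningrate=1e-5,
    validation_metric='ppl',
    validation_metric_mode='min',
)
```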
The tests draw upon common tactics in transfer learning and dialogue modelling: freezing different numbers of layers, retaining the original datasets, and selecting a decoding algorithm. In all experiments, we freeze the encoder portion of Blender's architecture to maintain its well-tuned representations. We compare results between training the entire decoder and locking its first four layers. In separate automatic evaluations, we contrast using only internal media data with adding it as a fifth dataset. Finally, we look at the relative effect of the beam search and Top-K decoding algorithms on human evaluation. The validation perplexity and loss curves of the best run are shown in Figures 4 and 5 respectively.", "cite_spans": [ { "start": 309, "end": 330, "text": "(Roller et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Subjective Response Generation", "sec_num": "2.3" }, { "text": "In principle, any generative response module is bound to fail when a knowledge-based question is presented and the module does not have access to factual information. In our architecture, we route factual questions to the ExtractNParaphrase module, which extracts the answer spans and paraphrases the relevant text to generate a natural and engaging response. The response path for Turn-1 in Figure 2 illustrates the processing of such a question.", "cite_spans": [], "ref_spans": [ { "start": 405, "end": 413, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "ExtractNParaphrase", "sec_num": "2.4" }, { "text": "ExtractNParaphrase consists of three stages: (1) Passage Retrieval, (2) Reading Comprehension, and (3) Paraphrasing. The first two steps follow Anantha et al. (2021); for the third step, paraphrasing, we take motivation from the refine step of Weston et al. (2018). We use BM25 to retrieve Top-K passages and a light-weight BERT-based model to extract answer spans. The scores obtained from passage retrieval and answer span extraction are combined to produce the final score. The passage retrieval and answer extraction models comprise 50M parameters. We refer to (Anantha et al., 2021) for more details. Finally, we train a Transformer-based sentence paraphraser model, which comprises 24M parameters. The paraphrase labels are provided as part of the internal media dataset, which is described in Section 3.", "cite_spans": [ { "start": 545, "end": 567, "text": "(Anantha et al., 2021)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "ExtractNParaphrase", "sec_num": "2.4" }, { "text": "Logical consistency in dialogue and avoiding unnecessary or potentially toxic responses are critical factors to consider when developing open-domain chatbots. When interacting with chatbots, people expect coherent responses that, at a minimum, do not contradict the chatbot's earlier responses in the same conversation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inconsistency/Toxicity Predictor", "sec_num": "2.5" }, { "text": "We train a classifier that can detect inconsistent responses given the conversation context. We follow the training procedure described in (Nie et al., 2020 ) using the DECODE dataset (https://parl.ai/projects/contradiction/) and our internal media dataset. As with the factual classifier, we use the ALBERT model for the inconsistency/toxicity predictor. 
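At inference time, the predictor acts as a gate over candidate responses before they are surfaced. A minimal sketch of that gating follows; the fine-tuned checkpoint path and the 0.5 decision threshold are hypothetical placeholders.

```python
# Hedged sketch of response gating with the inconsistency/toxicity predictor.
import torch
from transformers import AlbertForSequenceClassification, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForSequenceClassification.from_pretrained(
    "path/to/finetuned-predictor", num_labels=2)  # hypothetical checkpoint

def should_block(context: str, candidate: str, threshold: float = 0.5) -> bool:
    # Score the candidate response against the conversation context.
    inputs = tokenizer(context, candidate, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    p_bad = torch.softmax(logits, dim=-1)[0, 1].item()  # P(inconsistent/toxic)
    return p_bad > threshold
```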
", "cite_spans": [ { "start": 139, "end": 156, "text": "(Nie et al., 2020", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inconsistency/Toxicity Predictor", "sec_num": "2.5" }, { "text": "We use various datasets, focused on different tasks, for training and evaluation. In this section, we describe each dataset along with the corresponding modules that use it for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3" }, { "text": "QReCC (Anantha et al., 2021) contains around 81,000 conversation turns. Every turn contains a question which may have anaphoric references, a rewritten version of the question with references resolved, an answer span to the question, and a corresponding web URL. QReCC data is used to train the reference resolution, passage retrieval, and answer span extraction models.", "cite_spans": [ { "start": 6, "end": 28, "text": "(Anantha et al., 2021)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3" }, { "text": "Wizard of Wikipedia (Dinan et al., 2019b) (WoW) contains 194,000 turns of dialogue distributed over 1,250 topics. Each conversation is predicated on discussing the relevant topic in depth, with the goal of displaying expert knowledge in that subject. Note that in our pipeline framework, we route objective questions to the ExtractNParaphrase component, so the subjective response generation model is not required to answer factual questions with a high degree of accuracy. Still, the WoW dataset helps our generative model maintain a breadth of knowledge to provide pertinent answers to subjective inputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3" }, { "text": "ConvAI2 is based on PersonaChat (Zhang et al., 2018; Dinan et al., 2019a) and was used at the NeurIPS 2018 ConvAI competition. This dataset is made up of 140,000 turns where crowdworkers are given a persona and tasked with learning about their counterpart. This helps open-domain agents ask questions and, perhaps more relevantly in our use case, respond in an engaging manner. We use the ConvAI2 dataset to train the subjective response generation model. Empathetic Dialogues (Rashkin et al., 2019) (ED) is a library of 50,000 turns where one speaker plays the role of a sympathetic listener. These skills translate well to our needs, as the subjective model must account for the previous dialogue history and attempt to match its response to the appropriate tone.", "cite_spans": [ { "start": 49, "end": 69, "text": "(Zhang et al., 2018;", "ref_id": "BIBREF17" }, { "start": 70, "end": 90, "text": "Dinan et al., 2019a)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3" }, { "text": "Blended Skill Talk (Smith et al., 2020) (BST) is a 76,000-turn compilation of the previous three datasets: WoW, ConvAI2, and ED. Guided human speakers were given the option to select between outputs from models trained on each of the individual tasks, which produces data that can teach the bot when a certain class of response should be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3" }, { "text": "DECODE (Nie et al., 2020 ) is a conversational dataset made up of 40,280 turns of contradictory dialogues, both human-human and human-bot. 
We use DECODE, along with our internal media dataset, to train the ALBERT-based inconsistency/toxicity detector model.", "cite_spans": [ { "start": 7, "end": 24, "text": "(Nie et al., 2020", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3" }, { "text": "The Internal Media dataset is composed of 100,000 movie-themed turns. Each turn contains a natural question without explicit reference to the movie being discussed, as well as a rewritten question that converts those references to specifics (akin to the reference resolution component of our pipeline). An answer span with its web URL, as well as a paraphrased variation that is natural and engaging, is also provided.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3" }, { "text": "The dataset is collected using crowd-sourced annotators. The goal of the annotators is to mimic the flow of a natural human conversation while maintaining a neutral persona. The responses were validated against guidelines requiring them to be non-controversial, free of profanity, neutral, engaging, and concise (with an upper bound of 30 words). Every conversation consists of 10 turns, and we collect 10,000 conversations. We give explicit instructions to add anaphoric references in follow-up turns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3" }, { "text": "We categorize our evaluation metrics into component-wise and end-to-end evaluation. The QReCC and DECODE datasets are only used for task-specific model training and are not used in establishing the chatbot's end-to-end metrics: perplexity and Sensibleness and Specificity Average (SSA). We establish the human evaluation metric, SSA, on our internal media dataset only, due to limited human annotators. We establish the automatic evaluation metric, perplexity, on all 5 datasets: WoW, ConvAI2, ED, BST, and our internal media dataset. Below we discuss the intrinsic (component-wise) and extrinsic (end-to-end) metrics used to evaluate our LED framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics and Results", "sec_num": "4" }, { "text": "Excluding the subjective response generation model, all components in LED have their own task-specific evaluation metrics. For the reference resolution (query rewriting) model and the paraphraser in the ExtractNParaphrase module, we use ROUGE, USE, and Recall@10 as described in (Anantha et al., 2021) . For the factual classifier and the inconsistency/toxicity predictor, we use F1 as the evaluation metric and obtain 0.94 and 0.61, respectively. For passage retrieval in the ExtractNParaphrase module we use MRR and Recall@k; similarly, for answer-span extraction we use F1 and exact match, as described in (Anantha et al., 2021) . For the subjective response generation model we use perplexity, which is also our extrinsic metric.", "cite_spans": [ { "start": 276, "end": 298, "text": "(Anantha et al., 2021)", "ref_id": "BIBREF1" }, { "start": 591, "end": 613, "text": "(Anantha et al., 2021)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Intrinsic Metrics", "sec_num": "4.1" }, { "text": "Our chatbot framework uses perplexity as its extrinsic metric for automatic evaluation. While there are a number of evaluation metrics that can serve to measure the quality of responses (see the other components of our pipeline), perplexity correlates well with human judgement (Adiwardana et al., 2020) . 
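For clarity, the perplexity we report is the standard exponentiated average negative log-likelihood of the reference response tokens under the model (lower is better):

\[
\mathrm{PPL} = \exp\Bigg( -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta\big(w_i \mid w_{<i}, \text{context}\big) \Bigg)
\]

where N is the number of tokens in the reference responses and p_\theta is the model's next-token distribution.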
We build on the work of Meena (Adiwardana et al., 2020) , which proposed SSA, the Sensibleness and Specificity Average; SSA is the simple average of a response's sensibleness and specificity scores. We use SSA as another extrinsic metric, for human evaluation. Adiwardana et al. also demonstrated a strong correlation between perplexity and SSA among numerous state-of-the-art chatbots. Table 1 shows perplexity metrics of the Blender models (both the 90M and 2.7B parameter models) and the LED framework (both with and without reference resolution) across all 5 datasets: 1 internal media dataset and 4 public datasets. Table 2 shows SSA metrics of the Blender models (both 90M and 2.7B parameter models) and the LED framework (both with and without reference resolution) on the internal media dataset.", "cite_spans": [ { "start": 278, "end": 303, "text": "(Adiwardana et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 614, "end": 621, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 835, "end": 842, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Extrinsic Metrics", "sec_num": "4.2" }, { "text": "Our work follows the objective of combining open-domain chatbots and transactional digital assistants. The factual classifier component of LED serves as the gatekeeper between these two categories, sending objective asks through the ExtractNParaphrase model and subjective inputs through our fine-tuned open-domain model. While our work broadly falls under the category of open-domain generative chatbots, because of the variety of models and their corresponding tasks, our work also covers multiple key areas in language understanding with a focus on low-resource adaptation design. Prior works (Zhang et al., 2020; Adiwardana et al., 2020; Roller et al., 2020) have shown that end-to-end neural approaches, where the responses are produced in a generative fashion, can result in engaging dialogue. However, the resultant models from these approaches are huge, with multiple billions of parameters, and are not on-device friendly. It has also been shown that end-to-end generative chatbots frequently generate responses with inconsistencies (Adiwardana et al., 2020; Roller et al., 2020) . There is a clear need for an additional module that can correct, or at least detect, these inconsistencies. Generalizing this approach of assigning a specific task to a module, modularization can lead to overall improvement in dialogue systems (Fang et al., 2017; Yu et al., 2019) . We adopt the modularization approach for our open-domain generative chatbot to minimize the total number of parameters while tackling some of the shortcomings of the end-to-end neural approaches.", "cite_spans": [ { "start": 148, "end": 168, "text": "(Zhang et al., 2020;", "ref_id": "BIBREF18" }, { "start": 169, "end": 193, "text": "Adiwardana et al., 2020;", "ref_id": "BIBREF0" }, { "start": 194, "end": 214, "text": "Roller et al., 2020)", "ref_id": "BIBREF10" }, { "start": 588, "end": 613, "text": "(Adiwardana et al., 2020;", "ref_id": "BIBREF0" }, { "start": 614, "end": 634, "text": "Roller et al., 2020)", "ref_id": "BIBREF10" }, { "start": 895, "end": 914, "text": "(Fang et al., 2017;", "ref_id": "BIBREF6" }, { "start": 915, "end": 931, "text": "Yu et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Blender (Roller et al., 2020) showed non-trivial improvement in response generation when evaluated using human side-by-side comparison. We adopt the Blender model as the basis for the core response generation functionality in subjective cases. We follow the Blender methodology of experimenting with multiple decoding algorithms for optimal human performance. However, we also differ from Blender's approach. Firstly, we place a larger emphasis on model size for better on-device compatibility. 
Secondly, we account for a wider variety of cases by using answer extraction and paraphrasing to accurately answer factual questions.", "cite_spans": [ { "start": 8, "end": 29, "text": "(Roller et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "And finally, we use the reference resolution component to track dialogue state, since this is helpful for multi-turn conversations (Anantha et al., 2021) , along with providing our fine-tuned model with a wider variety of training data (multi-turn conversations where questions are either rewritten or preserved).", "cite_spans": [ { "start": 128, "end": 150, "text": "(Anantha et al., 2021)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Meena (Adiwardana et al., 2020) proposed a new metric, the Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation. Additionally, they showed that perplexity is the automatic metric that best correlates with human judgement. We borrow SSA for human evaluation. It suits our use case, where the model is required not just to answer logically but should also be rewarded for referencing context from earlier in the conversation. One difference between our work and Meena is that we do not use Evolved Transformer layers, though that may be a basis for future work. A difference from both Blender and Meena is that we follow a modularized approach instead of using a single parameter-heavy model.", "cite_spans": [ { "start": 6, "end": 31, "text": "(Adiwardana et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Although we reduce the number of parameters by 90% and achieve comparable performance, we still notice shortcomings, which could possibly be mitigated through the inconsistency/toxicity classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "6.1" }, { "text": "LED often agrees with the user, which can make the conversation feel non-engaging. This behavior stems from the inclusion of the Empathetic Dialogues (Rashkin et al., 2019) dataset in the Subjective Response Generation component. Utilized in both the pre-trained Blender model and our fine-tuning process, the Empathetic Dialogues data incentivize the model to choose agreeable responses. An example of this behavior is shown in Figure 6 .", "cite_spans": [], "ref_spans": [ { "start": 432, "end": 440, "text": "Figure 6", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Consistent Agreement", "sec_num": "6.1.1" }, { "text": "LED responds to controversial questions with a non-neutral persona. These are instances where the inconsistency/toxicity predictor failed. While this class of responses was frequently present in the Subjective Response Generation component, we were able to significantly mitigate its overall prevalence through the inclusion of the inconsistency/toxicity predictor component. An example of such an instance is shown in Figure 7 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sensitive Issues", "sec_num": "6.1.2" }, { "text": "LED provides unnecessary or questionable advice to questions seeking advice. The root cause of these outputs lies in examples from the Wizard of Wikipedia (Dinan et al., 2019b) dataset, where the model is taught to display expert knowledge in a particular area. An example of unnecessary financial advice is shown in Figure 8 . 
", "cite_spans": [ { "start": 151, "end": 172, "text": "(Dinan et al., 2019b)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 313, "end": 321, "text": "Figure 8", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Questionable Advice", "sec_num": "6.1.3" }, { "text": "We plan to investigate solutions to mitigate the undesired patterns noticed in Section 6.1 by improving the inconsistency/toxicity predictor, as well as, investigate the feasibility of a common embedding layer for all modules in our framework in an effort to further minimize the number of parameters with minimum or no-drop in performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6.2" }, { "text": "Also, transactional requests have a stronger user feedback signal (e.g. if playing the wrong movie, then the user will stop the movie), which can help to learn whether a conversation was successful. The conversational models (i.e., natural language understanding) can learn from user feedback signals. We plan to investigate incorporating such feedback signals to improve task completion rate in a conversation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6.2" }, { "text": "https://parl.ai/projects/ contradiction/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Barry Theobald, Russ Webb, Alex Acero and John Giannandrea for their insightful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Towards a human-like opendomain chatbot", "authors": [ { "first": "Daniel", "middle": [], "last": "Adiwardana", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "So", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Fiedel", "suffix": "" }, { "first": "Romal", "middle": [], "last": "Thoppilan", "suffix": "" }, { "first": "Zi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Apoorv", "middle": [], "last": "Kulshreshtha", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Nemade", "suffix": "" }, { "first": "Yifeng", "middle": [], "last": "Lu", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2001.09977" ] }, "num": null, "urls": [], "raw_text": "Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. 2020. Towards a human-like open- domain chatbot. 
arXiv:2001.09977.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Open-domain question answering goes conversational via question rewriting", "authors": [ { "first": "Raviteja", "middle": [], "last": "Anantha", "suffix": "" }, { "first": "Svitlana", "middle": [], "last": "Vakulenko", "suffix": "" }, { "first": "Zhucheng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Shayne", "middle": [], "last": "Longpre", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Pulman", "suffix": "" }, { "first": "Srinivas", "middle": [], "last": "Chappidi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "520--534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raviteja Anantha, Svitlana Vakulenko, Zhucheng Tu, Shayne Longpre, Stephen Pulman, and Srinivas Chappidi. 2021. Open-domain question answering goes conversational via question rewriting. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, page 520-534.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Natural language generation at scale: A case study for open domain question answering", "authors": [ { "first": "Alessandra", "middle": [], "last": "Cervone", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Khatri", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Behnam", "middle": [], "last": "Hedayatnia", "suffix": "" }, { "first": "Anu", "middle": [], "last": "Venkatesh", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "Raefer", "middle": [], "last": "Gabriel", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandra Cervone, Chandra Khatri, Rahul Goel, Behnam Hedayatnia, Anu Venkatesh, Dilek Hakkani-T\u00fcr, and Raefer Gabriel. 2019. Natural language generation at scale: A case study for open domain question answering. 
In Proceedings of the 12th International Conference on Natural Language Generation.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The second conversational intelligence challenge (convai2)", "authors": [ { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Valentin", "middle": [], "last": "Malykh", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Urbanek", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Iulian", "middle": [], "last": "Serban", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Shrimai", "middle": [], "last": "Prabhumoye", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rudnicky", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Burtsev", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.00098" ] }, "num": null, "urls": [], "raw_text": "Emily Dinan, Varvara Logacheva, Valentin Ma- lykh, Alexander Miller, Kurt Shuster, Jack Ur- banek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2019a. The sec- ond conversational intelligence challenge (convai2). arXiv:1902.00098.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "authors": [ { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.01241" ] }, "num": null, "urls": [], "raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019b. Wiz- ard of wikipedia: Knowledge-powered conversa- tional agents. arXiv:1811.01241.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Can you unpack that? learning to rewrite questions-in-context", "authors": [ { "first": "Ahmed", "middle": [], "last": "Elgohary", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Peskov", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "5920--5926", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ahmed Elgohary, Denis Peskov, and Jordan Boyd- Graber. 2019. Can you unpack that? 
learning to rewrite questions-in-context. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5920-5926.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Sounding board -university of washington's alexa prize submission", "authors": [ { "first": "Hao", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Ariel", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Fang, Hao Cheng, Elizabeth Clark, Ariel Holtz- man, Maarten Sap, Mari Ostendorf, Yejin Choi, and Noah A Smith. 2017. Sounding board -university of washington's alexa prize submission.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Albert: A lite bert for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11942" ] }, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv:1909.11942.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "2020. I like fish, especially dolphins: Addressing contradictions in dialogue modelling", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Williamson", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Nie, Mary Williamson, Mohit Bansal, Douwe Kiela, and Jason Weston. 2020. 
I like fish, espe- cially dolphins: Addressing contradictions in dia- logue modelling.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Towards empathetic opendomain conversation models: a new benchmark and dataset", "authors": [ { "first": "Eric", "middle": [ "Michael" ], "last": "Hannah Rashkin", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Y-Lan", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Boureau", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5370--5381", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open- domain conversation models: a new benchmark and dataset. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, page 5370-5381.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Recipes for building an opendomain chatbot", "authors": [ { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Da", "middle": [], "last": "Ju", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Williamson", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Eric", "middle": [ "M" ], "last": "Smith", "suffix": "" }, { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.13637" ] }, "num": null, "urls": [], "raw_text": "Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, and Jason Weston. 2020. Recipes for building an open- domain chatbot. arXiv:2004.13637.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A deep reinforcement learning chatbot", "authors": [ { "first": "V", "middle": [], "last": "Iulian", "suffix": "" }, { "first": "Chinnadhurai", "middle": [], "last": "Serban", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Sankar", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Germain", "suffix": "" }, { "first": "Zhouhan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Taesup", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sarath", "middle": [], "last": "Pieper", "suffix": "" }, { "first": "", "middle": [], "last": "Chandar", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1709.02349" ] }, "num": null, "urls": [], "raw_text": "Iulian V. Serban, Chinnadhurai Sankar, Mathieu Ger- main, Saizheng Zhang, Zhouhan Lin, Sandeep Sub- ramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Rajeshwar, Alexan- dre de Brebisson, Jose M. R. 
Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau, and Yoshua Bengio. 2017. A deep reinforcement learning chatbot. arXiv:1709.02349.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Can you put it all together: Evaluating conversational agents' ability to blend skills", "authors": [ { "first": "Eric", "middle": [ "Michael" ], "last": "Smith", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Williamson", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2021--2030", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, page 2021-2030.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Apple's siri will finally work without an internet connection with on-device speech recognition", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The Verge. 2021. Apple's siri will finally work without an internet connection with on-device speech recog- nition.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Retrieve and refine: Improved sequence generation models for dialogue", "authors": [ { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Alexander", "middle": [ "H" ], "last": "Miller", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI", "volume": "", "issue": "", "pages": "978--979", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Weston, Emily Dinan, and Alexander H. Miller. 2018. Retrieve and refine: Improved sequence gen- eration models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd Interna- tional Workshop on Search-Oriented Conversational AI 978-1-948087-75-9.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mitsuku wins loebner", "authors": [ { "first": "Steve", "middle": [], "last": "Worswick", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steve Worswick. 2018. 
Mitsuku wins loebner prize 2018!", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Gunrock: A social bot for complex and engaging long conversations", "authors": [ { "first": "Dian", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Michelle", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Yi", "middle": [ "Mang" ], "last": "Yang", "suffix": "" }, { "first": "Chun-Yen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Weiming", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Jiaping", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Mingyang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Jesse", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Chau", "suffix": "" }, { "first": "Antara", "middle": [], "last": "Bhowmick", "suffix": "" }, { "first": "Shreenath", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Giritheja", "middle": [], "last": "Sreenivasulu", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Ashwin Bhandare Andd Zhou", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 EMNLP and the 9th IJCNLP (System Demonstrations)", "volume": "", "issue": "", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dian Yu, Michelle Cohn, Yi Mang Yang, Chun-Yen Chen, Weiming Wen, Jiaping Zhang, Mingyang Zhou, Kevin Jesse, Austin Chau, Antara Bhowmick, Shreenath Iyer, Giritheja Sreenivasulu, Sam David- son, and Ashwin Bhandare andd Zhou Yu. 2019. Gunrock: A social bot for complex and engaging long conversations. In Proceedings of the 2019 EMNLP and the 9th IJCNLP (System Demonstra- tions), page 79-84.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Personalizing dialogue agents: I have a dog", "authors": [ { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Urbanek", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1801.07243v5" ] }, "num": null, "urls": [], "raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? 
arXiv:1801.07243v5.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Dialogpt : Large-scale generative pre-training for conversational response generation", "authors": [ { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. Dialogpt : Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The design and implementation of xiaoice, an empathetic social chatbot", "authors": [ { "first": "Li", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Di", "middle": [], "last": "Li", "suffix": "" }, { "first": "Heung-Yeung", "middle": [], "last": "Shum", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1812.08989" ] }, "num": null, "urls": [], "raw_text": "Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2019. The design and implementation of xiaoice, an empathetic social chatbot. arXiv:1812.08989.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Equal contribution. \u2665 Work done while at Apple.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "A sample dialogue of paper author (left) conversing with our LED chatbot framework (right). The responses are from the pipeline of models: Reference Resolution, Factual Classifier, Subjective Response Generator, ExtractNParaphrase, Inconsistency/Toxicity Module.", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "LED Pipeline illustrating end-to-end processing of multi-turn requests and response generation.", "type_str": "figure", "num": null }, "FIGREF3": { "uris": null, "text": "Figure 3: A illustration of reference resolution where the entity reference (in bold) in the question (Q) is disambiguated (Skyfall song vs Skyfall movie) by adding the entity type (song). 
The rewritten question (R) is a self-contained version of the follow-up question that will be used for answering (A), where both the coreferences and ellipses (in bold) are resolved.", "type_str": "figure", "num": null }, "FIGREF4": { "uris": null, "text": "Validation perplexity of the subjective response generation model using all five datasets: Wizard of Wikipedia, ConvAI2, Empathetic Dialogues, Blended Skill Talk, and our internal media dataset, with rewritten questions as input.", "type_str": "figure", "num": null }, "FIGREF5": { "uris": null, "text": "Validation loss of the subjective response generation model using all five datasets: Wizard of Wikipedia, ConvAI2, Empathetic Dialogues, Blended Skill Talk, and our internal media dataset, with rewritten questions as input.", "type_str": "figure", "num": null }, "FIGREF6": { "uris": null, "text": "LED in agreement with the user the majority of the time.", "type_str": "figure", "num": null }, "FIGREF7": { "uris": null, "text": "LED responding to a controversial question in a non-neutral manner.", "type_str": "figure", "num": null }, "FIGREF8": { "uris": null, "text": "LED providing unnecessary or questionable financial advice.", "type_str": "figure", "num": null }, "TABREF0": { "text": "Comparison of the perplexity metric across various datasets for the Blender and LED chatbot frameworks with different parameter sizes.", "num": null, "type_str": "table", "html": null, "content": "
(Table body not recovered from the PDF. Per Section 4.2, the columns cover Blender 90M, Blender 2.7B, LED without reference resolution, and LED with reference resolution; the rows cover WoW, ConvAI2, ED, BST, and the internal media dataset.)
" }, "TABREF1": { "text": "Comparison of the SSA metric and the number of model parameters for the Blender and LED chatbot frameworks on the internal media dataset.", "num": null, "type_str": "table", "html": null, "content": "
Model (configuration)          Parameters  Sensibleness  Specificity  SSA
Blender                        90M         72.60         83.10        77.85
Blender                        2.7B        80.42         92.70        86.56
LED (without ref. resolution)  186M        78.28         89.12        83.70
LED (with ref. resolution)     276M        80.38         91.95        86.17
" } } } }