{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:26.956637Z" }, "title": "Can I Be of Further Assistance? Using Unstructured Knowledge Access to Improve Task-oriented Conversational Modeling", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Seokhwan", "middle": [], "last": "Kim", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Most prior work on task-oriented dialogue systems are restricted to limited coverage of domain APIs. However, users oftentimes have requests that are out of the scope of these APIs. This work focuses on responding to these beyond-API-coverage user turns by incorporating external, unstructured knowledge sources. Our approach works in a pipelined manner with knowledge-seeking turn detection, knowledge selection, and response generation in sequence. We introduce novel data augmentation methods for the first two steps and demonstrate that the use of information extracted from dialogue context improves the knowledge selection and end-to-end performances. Through experiments, we achieve state-of-theart performance for both automatic and human evaluation metrics on the DSTC9 Track 1 benchmark dataset, validating the effectiveness of our contributions.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Most prior work on task-oriented dialogue systems are restricted to limited coverage of domain APIs. However, users oftentimes have requests that are out of the scope of these APIs. This work focuses on responding to these beyond-API-coverage user turns by incorporating external, unstructured knowledge sources. Our approach works in a pipelined manner with knowledge-seeking turn detection, knowledge selection, and response generation in sequence. We introduce novel data augmentation methods for the first two steps and demonstrate that the use of information extracted from dialogue context improves the knowledge selection and end-to-end performances. Through experiments, we achieve state-of-theart performance for both automatic and human evaluation metrics on the DSTC9 Track 1 benchmark dataset, validating the effectiveness of our contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Driven by the fast progress of natural language processing techniques, we are now witnessing a variety of task-orientated dialogue systems being used in daily life. These agents traditionally rely on pre-defined APIs to complete the tasks that users request (Williams et al., 2017; Eric et al., 2017) ; however, some user requests are related to the task domain but beyond these APIs' coverage (Kim et al., 2020a) . For example, while task-oriented agents can help users book a hotel, they fall short of answering potential follow-up questions users may have, such as \"whether they can bring their pets to the hotel\". These beyond-API-coverage user requests frequently refer to the task or entities that were discussed in the prior conversation and can be addressed by interpreting them in context and retrieving relevant domain knowledge from web pages, for example, from textual descriptions and frequently asked questions (FAQs). 
Most task-oriented dialogue systems do not incorporate these external knowledge sources into dialogue modeling, making conversational interactions inefficient.", "cite_spans": [ { "start": 258, "end": 281, "text": "(Williams et al., 2017;", "ref_id": "BIBREF13" }, { "start": 282, "end": 300, "text": "Eric et al., 2017)", "ref_id": "BIBREF2" }, { "start": 394, "end": 413, "text": "(Kim et al., 2020a)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address this problem, Kim et al. (2020a) recently introduced a new challenge on task-oriented conversational modeling with unstructured knowledge access, and provided datasets that are annotated for three related sub-tasks: (1) knowledge-seeking turn detection, (2) knowledge selection, and (3) knowledge-grounded response generation (one data sample is in Section B.1 of Supplementary Material) . This problem was intensively studied as the main focus of the DSTC9 Track 1 (Kim et al., 2020b) , where a total of 105 systems developed by 24 participating teams were benchmarked.", "cite_spans": [ { "start": 25, "end": 43, "text": "Kim et al. (2020a)", "ref_id": "BIBREF4" }, { "start": 374, "end": 397, "text": "Supplementary Material)", "ref_id": null }, { "start": 476, "end": 495, "text": "(Kim et al., 2020b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we also follow a pipelined approach and present novel contributions for the three sub-tasks: (1) For knowledge-seeking turn detection, we propose a data augmentation strategy that makes use of available knowledge snippets. (2) For knowledge selection, we propose an approach that makes use of information extracted from the dialogue context via domain classification and entity tracking before knowledge ranking. (3) For the final response generation, we leverage powerful pre-trained models for knowledge-grounded response generation in order to obtain coherent and accurate responses. Using the challenge test set as a benchmark, our pipelined approach achieves state-of-the-art performance for all three sub-tasks, in both automated and manual evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach to task-oriented conversational modeling with unstructured knowledge access (Kim et al., 2020a) includes three successive sub-tasks, as illustrated in Figure 1 . First, knowledge-seeking turn detection aims to identify user requests that are beyond the coverage of the task API. Then, for detected queries, knowledge selection aims to find the most appropriate knowledge that can address the user queries from a provided knowledge base. Finally, knowledge-grounded response generation produces a response given the dialogue history and selected knowledge.", "cite_spans": [ { "start": 87, "end": 106, "text": "(Kim et al., 2020a)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 162, "end": 170, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "DSTC9 Track 1 (Kim et al., 2020b) organizers provided a baseline system that adopted a fine-tuned GPT2-small (Radford et al., 2019) for all three sub-tasks. The winning teams (Team 19 and Team 3) extensively utilized ensembling strategies to boost the performance of their submissions (He et al., 2021; Tang et al., 2021; Mi et al., 2021) . 
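Before detailing each component, the end-to-end flow of this pipeline can be sketched as follows. This is a schematic only: every function below is a hypothetical, toy stand-in for the corresponding fine-tuned model described in Sections 2.1-2.3.

```python
# Toy stand-ins for the three pipeline stages; the real modules are
# fine-tuned Transformer models, not these heuristics.
def is_knowledge_seeking(context):
    # Section 2.1: binary classification of the current user turn.
    return context[-1].rstrip().endswith('?')

def select_knowledge(context, knowledge_base):
    # Section 2.2: hierarchical filtering, then relevance ranking (best first).
    last_turn = set(context[-1].lower().split())
    return sorted(knowledge_base,
                  key=lambda snippet: len(last_turn & set(snippet.lower().split())),
                  reverse=True)

def generate_response(context, snippet):
    # Section 2.3: seq2seq generation grounded on the selected snippet.
    return 'Based on our records: ' + snippet

def respond(context, knowledge_base):
    if not is_knowledge_seeking(context):
        return '<answered via the domain APIs>'
    top1 = select_knowledge(context, knowledge_base)[0]  # only the top-1 snippet is used
    return generate_response(context, top1)

print(respond(['Can I bring my pet to the hotel?'],
              ['Pets are not allowed at the Lensfield Hotel.']))
```

Note that stages 2 and 3 run only for turns flagged in stage 1, so in-coverage requests still follow the usual API-driven path.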
We follow the pipelined architecture of the baseline system, but introduce innovations and improvements for each sub-task, outlined in detail below.", "cite_spans": [ { "start": 14, "end": 33, "text": "(Kim et al., 2020b)", "ref_id": "BIBREF5" }, { "start": 110, "end": 132, "text": "(Radford et al., 2019)", "ref_id": "BIBREF9" }, { "start": 286, "end": 303, "text": "(He et al., 2021;", "ref_id": "BIBREF3" }, { "start": 304, "end": 322, "text": "Tang et al., 2021;", "ref_id": "BIBREF11" }, { "start": 323, "end": 339, "text": "Mi et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "We treat knowledge-seeking turn detection as a binary classification task that takes the dialogue context as input, and we fine-tuned a pre-trained language model for this purpose. The knowledge provided in the knowledge base constitutes a set of FAQs. We augmented the available training sets by treating all questions in the knowledge base as new potential user queries. Furthermore, for all questions in this augmentation that contain an entity name, we created a new question by replacing this entity name with \"it\". In this way, we obtained 13,668 additional data samples. In contrast to the baseline, we found that replacing GPT2-small with RoBERTa-Large (Liu et al., 2019) improved the performance. The other changes we made include feeding only the last user utterance instead of the whole dialogue context into the model and fine-tuning the decision threshold $t_{ktd}$ (when the inferred probability score $p > t_{ktd}$, the prediction is positive; otherwise, negative) to optimize the F1 score on the validation set, both of which helped achieve better performance.", "cite_spans": [ { "start": 658, "end": 676, "text": "(Liu et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge-seeking Turn Detection", "sec_num": "2.1" }, { "text": "For knowledge selection, the baseline system predicts the relevance between a given dialogue context and every candidate in the whole knowledge base, which is very time-consuming, especially when the size of the knowledge base is substantially expanded. Instead, we propose a hierarchical filtering method to narrow down the candidate search space. Our proposed knowledge selection pipeline includes the following three modules: domain classification, entity tracking, and knowledge matching, as illustrated in Figure 1 . Specifications of each module are detailed below.", "cite_spans": [], "ref_spans": [ { "start": 506, "end": 514, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Knowledge Selection", "sec_num": "2.2" }, { "text": "In multi-domain conversations, if the system knows what domain a given turn belongs to, the search space for knowledge selection can be greatly reduced by considering only the domain-specific knowledge. The DSTC9 Track 1 data includes the augmented turns for the \"Train\", \"Taxi\", \"Hotel\", and \"Restaurant\" domains in its training set, where the first two domains have domain-level knowledge only, while the knowledge for the others is further subdivided by entity.
To improve the generalizability of our filtering mechanism for unseen domains, we merged the domains which require further entity-level analysis into an \"Others\" class and defined this task as a three-way classification: {\"Train\", \"Taxi\", and \"Others\"}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Classification", "sec_num": "2.2.1" }, { "text": "We implemented a domain classifier by fine-tuning the RoBERTa-Large model, which takes the whole dialogue context and outputs a domain label. Considering that a new domain (i.e., \"Attraction\") is introduced in the test set, we augmented the training data with 3,350 additional samples of the \"Attraction\" domain, which were obtained from MultiWOZ 2.1 (Eric et al., 2020) , the source of the DSTC9 Track 1 data (all augmented samples are labeled as \"Others\"). More specifically, we first identify the \"Attraction\" dialogues in the training set of the MultiWOZ 2.1 dataset (this dataset contains seven domains, including \"Attraction\") by selecting dialogue turns that contain \"Attraction\"-related slots. We then replace the original \"Attraction\"-related slots with entities of the \"Attraction\" domain in the knowledge base K. Meanwhile, we replace the last user utterances in the dialogues with the knowledge questions that belong to the new replacement entities. Table 1 gives an example. In this example, we replace the original entity \"funky fun house\" with a new entity, \"California Academy of Sciences\", randomly selected from the \"Attraction\" domain of the knowledge base. In addition, we replace the original last user utterance with a knowledge question randomly selected from the FAQs of this new entity.", "cite_spans": [ { "start": 353, "end": 372, "text": "(Eric et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 966, "end": 973, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Domain Classification", "sec_num": "2.2.1" }, { "text": "Once the domain classifier predicts the \"Others\" label for a given turn, the entity tracking module is executed to detect the entities mentioned in the dialogue context and align them to the entity-level candidates in the knowledge base. We adopt an unsupervised approach based on fuzzy n-gram matching, whose details are given in Section A.2 of the Supplementary Material. After extracting these entities, we determined the character-level start position of each entity in the dialogue context and selected the last three mentioned entities as the output of this module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Tracking", "sec_num": "2.2.2" }, { "text": "The knowledge matching module receives a list of knowledge candidates and ranks them in terms of relevance to the input dialogue context. We concatenated the dialogue context, domain/entity name, and each knowledge snippet into a long sequence, which is then sent to the fine-tuned RoBERTa-Large model to get a relevance score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Matching", "sec_num": "2.2.3" }, { "text": "To train the model, we adopted Hinge loss, which was reported to perform better for ranking problems (Wang et al., 2014; Elsayed et al., 2018) than the Cross-entropy loss used in the baseline system.
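As a minimal sketch of this objective (illustrative only; the margin value and tensor shapes here are assumptions, not our tuned settings), the pairwise hinge loss over a positive knowledge snippet and its sampled negatives can be written as:

```python
import torch
import torch.nn.functional as F

def hinge_ranking_loss(pos_scores, neg_scores, margin=1.0):
    # pos_scores: (batch,) relevance scores of the ground-truth snippets.
    # neg_scores: (batch, num_negatives) scores of the sampled negatives.
    # Penalize any negative that comes within `margin` of its positive.
    gaps = margin - (pos_scores.unsqueeze(1) - neg_scores)
    return F.relu(gaps).mean()

# Toy scores for 2 dialogue contexts with 4 sampled negatives each.
pos = torch.tensor([2.1, 1.4])
neg = torch.tensor([[0.3, 1.9, 0.8, 1.2],
                    [0.1, 0.5, 1.6, 0.9]])
print(hinge_ranking_loss(pos, neg))
```

Unlike Cross-entropy over a softmax, the hinge objective stops pushing once a positive outscores a negative by the margin, which fits a top-1 ranking task.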
For each positive instance, we drew four negative samples, each of which is randomly selected from one of four sources: 1) the whole knowledge base, 2) the knowledge snippets in the ground truth domain, 3) the knowledge snippets of the ground truth entity, and 4) the knowledge snippets of other entities mentioned in the same dialogue. At execution time, we feed in the knowledge candidates filtered by the predicted domain and entity from Sections 2.2.1 and 2.2.2, respectively. Then, the module outputs a list of the candidates ranked by relevance score.", "cite_spans": [ { "start": 105, "end": 124, "text": "(Wang et al., 2014;", "ref_id": "BIBREF12" }, { "start": 125, "end": 146, "text": "Elsayed et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Matching", "sec_num": "2.2.3" }, { "text": "For response generation, we compared the following three pre-trained sequence-to-sequence (seq2seq) models: T5-Base (Raffel et al., 2020) , BART-Large (Lewis et al., 2020) , and Pegasus-Large (Zhang et al., 2020) . Each model takes as input a concatenated sequence of the whole dialogue context and the knowledge answer and then outputs a response. The ground-truth knowledge answer is used in the training phase, while the top-1 candidate from the knowledge selection result is used in the test phase.", "cite_spans": [ { "start": 116, "end": 137, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF10" }, { "start": 151, "end": 171, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF6" }, { "start": 192, "end": 212, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Response Generation", "sec_num": "2.3" }, { "text": "We used the same data split and evaluation metrics as the official DSTC9 Track 1 challenge. All model training and dataset details are summarized in Section B of the Supplementary Material. Table 2 compares the knowledge-seeking turn detection performance between our proposed models and the best single-model and ensemble-based systems from the DSTC9 Track 1 official results. 1 The results show that our proposed data augmentation method helped to improve the recall of our detection model and led to the highest F1 score among all the single models in the challenge.", "cite_spans": [ { "start": 382, "end": 383, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 194, "end": 201, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "3" }, { "text": "Our domain classification and entity tracking modules achieved 99.5% in accuracy and 97.5% in recall, respectively. The data augmentation method helped to improve the domain classification accuracy from 97.1% to 99.5%. Table 3 summarizes the knowledge selection performance of our system based on the proposed hierarchical filtering mechanism, using the results from both the domain classification and entity tracking modules. Our proposed system outperformed the challenge baseline in all three metrics while drastically reducing execution time: processing the whole test set with a single V100 GPU takes less than half an hour, compared to more than 20 hours for the baseline. Compared with the best knowledge selection results from the challenge, our model outperformed the best single-model system in all metrics, and even surpassed the best ensemble model in recall@1.
Note that recall@1 is the most important metric, since response generation is grounded only on the top-1 result from knowledge selection.", "cite_spans": [], "ref_spans": [ { "start": 219, "end": 226, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Knowledge Selection", "sec_num": "3.2" }, { "text": "First, Table 5 summarizes the ablation results for two kinds of changes to our full knowledge matching model: instead of concatenating the dialogue context, domain name, entity name, and knowledge question and answer pair as the input to the model, we only concatenate the dialogue context and the knowledge question and answer pair (w/o entity names); and we replace the Hinge loss with Cross-entropy loss (w/o Hinge Loss). Again, the Recall@1 score in Table 5 deserves the most attention, as it is the most important metric. We can see that adding the domain and entity names is beneficial, and that Hinge loss is a better optimization objective than Cross-entropy for this ranking problem.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 21, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 509, "end": 516, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.2.1" }, { "text": "As mentioned above, training the knowledge matching module requires several negative samples for each positive sample; instead of using only one negative sampling strategy, we used a mixed strategy. More specifically, to draw each negative sample, we randomly adopted one of the following four strategies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.2.1" }, { "text": "1. Randomly select from all knowledge snippets; 2. Randomly select from the knowledge snippets of entities that are in the same domain as the ground truth one (i.e., the entity of the positive sample); 3. Randomly select from the knowledge snippets of the ground truth entity; 4. Randomly select from the knowledge snippets of other entities that are mentioned in the same dialogue as the ground truth one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.2.1" }, { "text": "Each strategy i \u2208 {1, 2, 3, 4} is sampled at a certain sampling ratio $p^i_{ns}$. We tuned this sampling ratio by trying several combinations, and the results are summarized in Table 6 . From it, we can see that: (1) Strategy 4 is the most effective of the four; (2) mixing the four strategies is better than using only one of them; and (3) allocating a higher ratio to strategy 4 is better than using uniform ratios. Table 4 summarizes the automated evaluation results (BLEU-1/2/3/4, METEOR, and ROUGE-1/2/L) for the responses generated with different seq2seq models. Our fine-tuned T5-Base model achieved lower BLEU scores than BART-Large and Pegasus-Large, while its METEOR score is substantially higher than the others. Note that our generation system does not use any model ensembling, yet it surpasses the best single system in the DSTC9 Track 1 for half of the metrics. Following the official evaluation protocol of the challenge, we performed human evaluation to compare our system with the top systems from the challenge 2 , as shown in Table 7 .
Specifically, we hired three crowd-workers for each instance, asked them to score each system output in terms of its \"accuracy\" and \"appropriateness\" on a five-point Likert scale, and reported the averaged scores. We have three findings: (1) T5 achieves higher accuracy, while Pegasus is slightly better for appropriateness;", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 181, "text": "Table 6", "ref_id": "TABREF8" }, { "start": 426, "end": 433, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 1075, "end": 1082, "text": "Table 7", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.2.1" }, { "text": "(2) our systems generate more accurate responses than the top DSTC9 systems, while the appropriateness scores are comparable (confirmed by significance testing in Section C.2 of Supplementary Material) ; (3) the final average scores of our systems rank the highest. We present several examples of the responses generated by our system compared against the baseline and top-2 systems in Section C.3 of Supplementary Material.", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 203, "text": "Supplementary Material)", "ref_id": null } ], "eq_spans": [], "section": "Response Generation", "sec_num": "3.3" }, { "text": "Accuracy, Appropriateness, Average (Kim et al., 2020b) . The symbol * means our score is significantly higher than the best previous system, while \u2020 means our score is not significantly different from the best previous system, according to a paired t-test with p < 0.05.", "cite_spans": [ { "start": 33, "end": 52, "text": "(Kim et al., 2020b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": null }, { "text": "In this work, we propose a comprehensive system that enables task-oriented dialogue models to answer user queries that are out of the scope of their APIs. We significantly improved the system's capability of finding the most relevant knowledge snippets, and consequently of providing high-quality responses, by introducing a novel data augmentation method, incorporating domain and entity identification modules for knowledge selection, and utilizing mixed negative sampling. To demonstrate the efficacy of our approach, we benchmark our system on the DSTC9 Track 1 challenge dataset and report state-of-the-art performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4" }, { "text": "Specifically, we first normalize the entity names in the knowledge base using a set of heuristic rules, such as replacing the punctuation \"&\" with \"and\". Table A.1 summarizes the full list of normalization rules, with an example for each rule as illustration. Then we perform fuzzy n-gram matching between an entity and a given piece of dialogue context. For example, the entity \"Alexander Bed and Breakfast\" is a four-gram; we therefore extract all four-grams from the dialogue context and match each of them against it. The matching process first finds the longest contiguous matching sub-sequence and then calculates the matching ratio $2M/T$, where $M$ is the length of the matched sub-sequence and $T$ is the total length of the two n-grams to be matched.
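Python's difflib provides this ratio directly: SequenceMatcher.ratio() returns exactly 2M/T. A minimal sketch, assuming (as the SequenceMatcher reference cited below suggests) that the matching is implemented this way; the example strings are illustrative:

```python
from difflib import SequenceMatcher

def match_ratio(ngram, entity):
    # Fuzzy matching ratio 2M/T between a dialogue n-gram and an
    # entity name, computed case-insensitively.
    return SequenceMatcher(None, ngram.lower(), entity.lower()).ratio()

print(match_ratio('alexander bed and breakfast',
                  'Alexander Bed and Breakfast'))  # 1.0
print(match_ratio('alexander bed & breakfast',
                  'Alexander Bed and Breakfast'))  # ~0.92, below the threshold
```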
If this ratio is higher than 0.95, we deem this pair of n-grams as matched (cf. Python's SequenceMatcher: https://towardsdatascience.com/sequencematcher-inpython-6b1e6f3915fc).", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 161, "text": "Table A", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "A.1 Entity Extraction", "sec_num": null }, { "text": "In this way, we can determine which entities in the knowledge base are mentioned in a given dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Extraction", "sec_num": null }, { "text": "B.1 Data Samples & Statistics Table B.2 shows an example conversation with unstructured knowledge access. The user utterance at turn t = 5 requests information about the gym facility, which is out of the coverage of the structured domain APIs. However, the relevant knowledge contents can be found in the external sources shown in the rightmost column, which includes the sampled QA snippets from the FAQ lists for each corresponding entity within domains such as train, hotel, or restaurant. With access to these unstructured external knowledge sources, the agent manages to continue the conversation with no friction by selecting the most appropriate knowledge. The data statistics are summarized in Table B.3 (the data can be downloaded from https://github.com/alexa/alexa-with-dstc9-track1-dataset). The main data is an augmented version of MultiWOZ 2.1 that includes newly introduced knowledge-seeking turns in the MultiWOZ conversations. A total of 22,834 utterance pairs were newly collected based on 2,900 knowledge candidates from the FAQ webpages about the domains and the entities in the MultiWOZ databases.", "cite_spans": [ { "start": 718, "end": 719, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 30, "end": 37, "text": "Table B", "ref_id": "TABREF13" }, { "start": 705, "end": 713, "text": "Table B", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "B Experiments", "sec_num": null }, { "text": "Note that for the test set, additional conversations collected from scratch about touristic information for San Francisco are added. To evaluate the generalizability of models, the new conversations cover knowledge, locales, and domains that are unseen in the train and validation sets. In addition, this test set includes not only written conversations, but also spoken dialogues to evaluate system performance across different modalities. Table B.4 gives the statistics of the knowledge base, which is a collection of frequently asked questions (FAQs). Note that there are no entities for the \"Train\" and \"Taxi\" domains, while for the \"Hotel\", \"Restaurant\", and \"Attraction\" domains, each entity has its corresponding list of FAQ pairs. In addition, the knowledge base for the test set covers that of the train and validation sets and is further expanded by adding one more domain (\"Attraction\") and more entities.", "cite_spans": [], "ref_spans": [ { "start": 572, "end": 579, "text": "Table B", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "B Experiments", "sec_num": null }, { "text": "We implemented our proposed system based on the DSTC9 Track 1 baseline provided by Kim et al. (2020b) and the transformers library (Wolf et al., 2020) . For all sub-tasks, the maximum sequence lengths for the dialogue context and the knowledge snippet are both 128. For the knowledge-seeking turn detection sub-task, the model is fine-tuned for 5 epochs with a batch size of 16, while for the other sub-tasks, 8 epochs and a batch size of 4 are used.
A model checkpoint is saved after each epoch, and the best checkpoint is picked based on the validation results. For decoding in the response generation model, we replaced the nucleus sampling used in the baseline with beam search (beam width of 5), which achieved higher performance on the validation set.", "cite_spans": [ { "start": 83, "end": 101, "text": "Kim et al. (2020b)", "ref_id": "BIBREF5" }, { "start": 131, "end": 150, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "B.2 Experimental Details", "sec_num": null }, { "text": "Since the human evaluation scores for response generation are quite close to each other, we resort to significance testing to confirm our system's superior performance. Table C.5 summarizes the significance-testing p-values between our systems and the top-2 submitted systems in the DSTC9 challenge for the accuracy, appropriateness, and average scores, respectively. From it, we can see that T5-Base is significantly higher than the competing systems in terms of accuracy (p < 0.05). Besides,", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 181, "text": "Table C", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "C.1 Significance Testing for Human Evaluation", "sec_num": null }, { "text": "Replace the punctuation \"&\" with \"and\": Bay Subs & Deli \u2192 Bay Subs and Deli. If the entity contains any of the symbols \"-\", \",\", or \"/\", split the entity at this symbol and remove the second part: Hard Knox Cafe - Potrero Hill \u2192 Hard Knox Cafe. Replace \"guesthouse\" with \"guest house\": ARBURY LODGE GUESTHOUSE \u2192 ARBURY LODGE GUEST HOUSE. If the entity ends with a place name such as \"Fisherman's Wharf\" or \"San Francisco\", remove it (since the entities in the knowledge base do not contain these place names). Table B.2: Examples of task-oriented conversations with unstructured knowledge access. Three sampled FAQ pairs for the entity \"Lensfield Hotel\" are listed in the rightmost column for turn 5, which is beyond the coverage of the API and needs external knowledge support. The most appropriate FAQ pair to address turn 5 is highlighted in bold font. Table B.4: Statistics of the knowledge base (the list of FAQs). The \"Train\" and \"Taxi\" domains do not have any entities, and there is no \"Attraction\" domain in the knowledge base for the train and validation sets.", "cite_spans": [], "ref_spans": [ { "start": 510, "end": 517, "text": "Table B", "ref_id": "TABREF13" }, { "start": 852, "end": 859, "text": "Table B", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Normalization rules Examples", "sec_num": null }, { "text": "Data is limited to 50MB per day with no option of additional data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Answer", "sec_num": null }, { "text": "Our systems: T5-Base Data is limited to 50MB per day with no option of additional data. Is there anything else I can do for you today or would you like to make a reservation?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Answer", "sec_num": null }, { "text": "Data is limited to 50MB per day with no option of additional data. 
Is there anything else I can help you with?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pegasus-Large", "sec_num": null }, { "text": "Top-2 submitted systems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pegasus-Large", "sec_num": null }, { "text": "Team 3 No, there is no additional data available to purchase. Anything else I can do for you?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pegasus-Large", "sec_num": null }, { "text": "No, the train does not have a data limit for wifi usage. Anything else I can do for you? Table C.6: Qualitative comparison between our system and previous strong competitors. Knowledge answer is the answer part of the ground-truth knowledge snippet. We are comparing against the top-2 systems submitted to the DSTC9 competition.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table C", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Team 19", "sec_num": null }, { "text": "T5-Base and Pegasus-Large are comparable to the best previous system in terms of appropriateness. Finally, with regard to the average score, our T5-Base significantly surpasses the previous best system. Table C.6 gives one qualitative example comparing our system's responses against those of the top-2 submitted systems in the DSTC9 competition (i.e., Teams 3 and 19) 5 . Overall, we can see that our system's responses are more accurate. In the example in Table C.6, our responses exactly answer the user query and align strictly with the ground-truth knowledge, while the response from Team 19 is factually wrong and that from Team 3 does not address the user query at all.", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 208, "text": "Table C", "ref_id": "TABREF13" }, { "start": 461, "end": 468, "text": "Table C", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Team 19", "sec_num": null }, { "text": "There are up to five entries submitted by each team in the competition, and we report only the best entries from single-model and ensemble-based systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/alexa/alexa-with-dstc9-track1-dataset/tree/master/results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/alexa/alexa-with-dstc9-track1-dataset/tree/master/results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Large margin deep networks for classification", "authors": [ { "first": "Gamaleldin", "middle": [], "last": "Elsayed", "suffix": "" }, { "first": "Dilip", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Hossein", "middle": [], "last": "Mobahi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Regan", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "31", "issue": "", "pages": "842--852", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gamaleldin Elsayed, Dilip Krishnan, Hossein Mobahi, Kevin Regan, and Samy Bengio. 2018. Large margin deep networks for classification. In Advances in Neural Information Processing Systems, volume 31, pages 842-852. 
Curran Associates, Inc.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines", "authors": [ { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Shachi", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Sethi", "suffix": "" }, { "first": "Sanchit", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Shuyang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Adarsh", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Anuj", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "422--428", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Goyal, Peter Ku, and Dilek Hakkani-Tur. 2020. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 422-428, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Key-value retrieval networks for task-oriented dialogue", "authors": [ { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Lakshmi", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Francois", "middle": [], "last": "Charette", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the SIGDIAL 2017 Conference", "volume": "", "issue": "", "pages": "37--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the SIGDIAL 2017 Conference, pages 37-49.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning to select external knowledge with multi-scale negative sampling", "authors": [ { "first": "Huang", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhengyu", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2102.02096" ] }, "num": null, "urls": [], "raw_text": "Huang He, Hua Lu, Siqi Bao, Fan Wang, Hua Wu, Zhengyu Niu, and Haifeng Wang. 2021. Learning to select external knowledge with multi-scale negative sampling. 
arXiv preprint arXiv:2102.02096.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Beyond domain APIs: Task-oriented conversational modeling with unstructured knowledge access", "authors": [ { "first": "Seokhwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Gopalakrishnan", "suffix": "" }, { "first": "Behnam", "middle": [], "last": "Hedayatnia", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "278--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seokhwan Kim, Mihail Eric, Karthik Gopalakrishnan, Behnam Hedayatnia, Yang Liu, and Dilek Hakkani-Tur. 2020a. Beyond domain APIs: Task-oriented conversational modeling with unstructured knowledge access. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 278-289, 1st virtual meeting. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Beyond domain apis: Task-oriented conversational modeling with unstructured knowledge access", "authors": [ { "first": "Seokhwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Gopalakrishnan", "suffix": "" }, { "first": "Behnam", "middle": [], "last": "Hedayatnia", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.03533" ] }, "num": null, "urls": [], "raw_text": "Seokhwan Kim, Mihail Eric, Karthik Gopalakrishnan, Behnam Hedayatnia, Yang Liu, and Dilek Hakkani-Tur. 2020b. Beyond domain apis: Task-oriented conversational modeling with unstructured knowledge access. arXiv preprint arXiv:2006.03533.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Roberta: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Towards generalized models for beyond domain api task-oriented dialogue", "authors": [ { "first": "Haitao", "middle": [], "last": "Mi", "suffix": "" }, { "first": "Qiyu", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Yinpei", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "He", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yongbin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2021, "venue": "AAAI-21 DSTC9 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haitao Mi, Qiyu Ren, Yinpei Dai, Yifan He, Jian Sun, Yongbin Li, Jing Zheng, and Peng Xu. 2021. Towards generalized models for beyond domain api task-oriented dialogue. AAAI-21 DSTC9 Workshop.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language models are unsupervised multitask learners.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Radge relevance learning and generation evaluating method for task-oriented conversational system-anonymous version", "authors": [ { "first": "Liang", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Qinghua", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Kaokao", "middle": [], "last": "Lv", "suffix": "" }, { "first": "Zixi", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Shijiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chuanming", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zhuo", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Tang, Qinghua Shang, Kaokao Lv, Zixi Fu, Shijiang Zhang, Chuanming Huang, and Zhuo Zhang. 2021. Radge relevance learning and generation evaluating method for task-oriented conversational system-anonymous version. AAAI-21 DSTC9 Workshop.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning fine-grained image similarity with deep ranking", "authors": [ { "first": "Jiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Song", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Leung", "suffix": "" }, { "first": "Chuck", "middle": [], "last": "Rosenberg", "suffix": "" }, { "first": "Jingbin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "James", "middle": [], "last": "Philbin", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "1386--1393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. 2014. Learning fine-grained image similarity with deep ranking. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1386-1393.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning", "authors": [ { "first": "Jason", "middle": [ "D" ], "last": "Williams", "suffix": "" }, { "first": "Kavosh", "middle": [], "last": "Asadi", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason D. Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization", "authors": [ { "first": "Jingqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 37th International Conference on Machine Learning", "volume": "119", "issue": "", "pages": "11328--11339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Task formulation and architecture of our knowledge-grounded dialog system.", "uris": null, "type_str": "figure" }, "TABREF1": { "html": null, "content": "
Speaker | Original Dialogue | New Dialogue
User | I was hoping to see local places while in Cambridge. Some entertainment would be great. | I was hoping to see local places while in Cambridge. Some entertainment would be great.
Agent | I got 5 options. which side is okay for you? | I got 5 options. which side is okay for you?
User | It doesn't matter. Can I have the address of a good one? | It doesn't matter. Can I have the address of a good one?
Agent | How about funky fun house, they are located at 8 mercers row, mercers row industrial estate. | How about California Academy of Sciences, they are located at 8 mercers row, mercers row industrial estate.
User | Could I also get the phone number and postcode? | Is WiFi available?
", "type_str": "table", "num": null, "text": "" }, "TABREF2": { "html": null, "content": "
Precision Recall F1
Our proposed model 0.9920 0.9344 0.9623
+ data augmentation 0.9903 0.9833 0.9868
DSTC9 Track 1 Systems:
Baseline 0.9933 0.9021 0.9455
Team 17 \u2020 0.9933 0.9748 0.9839
Team 3 \u2021 0.9964 0.9859 0.9911
", "type_str": "table", "num": null, "text": "An example of data augmentation for domain classification. The left dialogue is the original dialogue from the MultiWOZ 2.1 dataset while the right one is synthesized by replacing the original entity and last user utterance highlighted by red with a new entity and knowledge question from the knowledge base highlighted by blue." }, "TABREF3": { "html": null, "content": "
MRR@5 Recall@1 Recall@5
Our proposed model 0.9461 0.9251 0.9702
DSTC9 Track 1 Systems:
Baseline 0.7263 0.6201 0.8772
Team 7 \u2020 0.9309 0.8988 0.9666
Team 19 \u2021 0.9504 0.9235 0.9840
", "type_str": "table", "num": null, "text": "Test results on task 1: knowledge-seeking turn detection." }, "TABREF4": { "html": null, "content": "", "type_str": "table", "num": null, "text": "" }, "TABREF6": { "html": null, "content": "
Settings MRR@5 Recall@1 Recall@5
Original model 0.9811 0.9693 0.9936
w/o entity names 0.9788 0.9656 0.9933
w/o Hinge Loss 0.9734 0.9613 0.9905
", "type_str": "table", "num": null, "text": "Test results on task 3: knowledge grounded response generation." }, "TABREF7": { "html": null, "content": "
Sampling ratios MRR@5 Recall@1 Recall@5
Original model [0.1,0.1,0.1,0.7] 0.9811 0.9693 0.9936
[0.25,0.25,0.25,0.25] 0.9761 0.9615 0.9929
[1.0,0.0,0.0,0.0] 0.9712 0.9514 0.9933
[0.0,1.0,0.0,0.0] 0.9559 0.9248 0.9906
[0.0,0.0,1.0,0.0] 0.9728 0.9540 0.9933
[0.0,0.0,0.0,1.0] 0.9751 0.9596 0.9929
", "type_str": "table", "num": null, "text": "Ablation study of the knowledge matching module for knowledge selection by removing entities and hinge loss. Scores are reported on the validation set." }, "TABREF8": { "html": null, "content": "
Table 6: Ablation study of the knowledge matching module for knowledge selection by tuning the mixed negative sampling ratio. Scores are reported on the validation set. The sampling ratio is represented in the format of $[p^1_{ns}, p^2_{ns}, p^3_{ns}, p^4_{ns}]$.
", "type_str": "table", "num": null, "text": "" }, "TABREF10": { "html": null, "content": "", "type_str": "table", "num": null, "text": "Human evaluation results of the test set for response generation. Numbers within the parentheses are official scores from DSCT9" }, "TABREF11": { "html": null, "content": "
Turn | Speaker | Utterance | Sampled Knowledge Snippets from FAQs
1 | User | I'm looking for a place to stay in the south of town. It doesn't need to have free parking. |
2 | Agent | There are 4 hotels that are in the area you are looking for. Would you prefer a 3 or 4 star rated hotel? |
3 | User | I don't care about the star rating as long as it's expensive. |
4 | Agent | The Lensfield Hotel is the only expensive hotel in the south area. Would you like any more information on this location? |
5 | User | I'm interested in knowing, do they have a workout facility on the premises? |
6 | Agent | There are both a fitness center and gym available on the premises. Does this sound ok? | Q1: Do you have room service for your guests? A1: Yes, the Lensfield Hotel provides room services. Q2: Is there a gym available at your location? A2: There is both a fitness center and gym available on the premises. Q3: Can I bring my dog? A3: Pets are not allowed at the Lensfield Hotel.
7 | User | That is perfect can you book that for me please. |
8 | Agent | The Lensfield Hotel is located in the South. It has a 3 star rating and is expensive. There is free parking and internet. I have booked it for you. |
9 | User | Great, thank you! |
", "type_str": "table", "num": null, "text": "Bay Bridge Inn San Francisco \u2192 Bay Bridge Inn Replace numbers with corresponding English words Pho Huynh Hiep 2 \u2192 Pho Huynh Hiep Two Table A.1: Entity normalization rules and the corresponding examples." }, "TABREF13": { "html": null, "content": "
Domain | # Entities (Train & Val) | # Snippets (Train & Val) | # Entities (Test) | # Snippets (Test)
Train | - | 26 | - | 26
Taxi | - | 5 | - | 5
Hotel | 33 | 1,219 | 178 | 4,346
Restaurant | 110 | 1,650 | 391 | 7,155
Attraction | - | - | 97 | 507
Total | 143 | 2,900 | 666 | 12,309
Table B.3: Statistics of the data divided into training, validation, and test sets. The test set contains three sources of samples: MultiWOZ, San Francisco tourism in written English, and San Francisco tourism in spoken English, which is different from the train and validation sets.
", "type_str": "table", "num": null, "text": "" }, "TABREF14": { "html": null, "content": "
| Accuracy (T5-Base) | Accuracy (Pegasus-Large) | Appropriateness (T5-Base) | Appropriateness (Pegasus-Large) | Average (T5-Base) | Average (Pegasus-Large)
Team 19 | 1.86 \u00d7 10^\u22126 | 1.45 \u00d7 10^\u22121 | 6.07 \u00d7 10^\u22121 | 5.56 \u00d7 10^\u22121 | 1.96 \u00d7 10^\u22123 | 4.25 \u00d7 10^\u22121
Team 3 | 1.21 \u00d7 10^\u221216 | 8.01 \u00d7 10^\u22127 | 2.89 \u00d7 10^\u22124 | 5.68 \u00d7 10^\u22125 | 8.31 \u00d7 10^\u221215 | 2.48 \u00d7 10^\u22128
Example Dialogues | User: I need a train to Broxbourne, that arrives by 20:30. Agent: Certainly. Can I assume you're leaving from Cambridge? User: Yes, that is correct. Agent: There is a train leaving at 5:01, would you like a seat? User: Is that leaving on Sunday? Agent: I have the TR8991 leaving at 19:01 and arriving by 20:01. Would you like me to book that for you? User: Not yet but I also need a restaurant that serve mediterranean food and should be in the centre of town. Agent: I have a few restaurants that fit your criteria. The Gardenia, La Mimosa, and Shiraz Restaurant. Which one are you interested in hearing about? User: Tell me about The Gardenia. Agent: It's a cheap restaurant located at 2 Rose Crescent City Centre. Phone number and postcode are 01223356354 and cb23ll. Do you need anything else? User: Does the restaurant offer live music on the weekend? Agent: Sorry, there is no live music at The Gardenia. What else can I help you with? User: Does the train have a data limit for wifi usage?
", "type_str": "table", "num": null, "text": "Table C.5: Significance Testing p-values between our systems and the top-2 systems in the DSTC9 competition for the accuracy, appropriateness, and average scores of response generation. Paired t-test is used." } } } }