{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:40.057617Z" }, "title": "UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues", "authors": [ { "first": "Xinyan", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Science", "location": { "country": "Technology of China" } }, "email": "" }, { "first": "Yasheng", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "" }, { "first": "Yitong", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "" }, { "first": "Fei", "middle": [], "last": "Mi", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "" }, { "first": "Yajiao", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "", "affiliation": {}, "email": "qun.liu@huawei.com" }, { "first": "Huanhuan", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Science", "location": { "country": "Technology of China" } }, "email": "hchen@ustc.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "With the advances in deep learning, tremendous progress has been made with chitchat dialogue systems and task-oriented dialogue systems. However, these two systems are often tackled separately in current methods. To achieve more natural interaction with humans, dialogue systems need to be capable of both chatting and accomplishing tasks. To this end, we propose a unified dialogue system (UniDS) with the two aforementioned skills. In particular, we design a unified dialogue data schema, compatible for both chitchat and task-oriented dialogues. Besides, we propose a two-stage training method to train UniDS based on the unified dialogue data schema. UniDS does not need to adding extra parameters to existing chitchat dialogue systems. Experimental results demonstrate that the proposed UniDS works comparably well as the state-of-the-art chitchat dialogue systems and task-oriented dialogue systems. More importantly, UniDS achieves better robustness than pure dialogue systems and satisfactory switch ability between two types of dialogues. This work demonstrates the feasibility and potential of building a general dialogue system.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "With the advances in deep learning, tremendous progress has been made with chitchat dialogue systems and task-oriented dialogue systems. However, these two systems are often tackled separately in current methods. To achieve more natural interaction with humans, dialogue systems need to be capable of both chatting and accomplishing tasks. To this end, we propose a unified dialogue system (UniDS) with the two aforementioned skills. In particular, we design a unified dialogue data schema, compatible for both chitchat and task-oriented dialogues. Besides, we propose a two-stage training method to train UniDS based on the unified dialogue data schema. 
UniDS does not need to add extra parameters to existing chitchat dialogue systems. Experimental results demonstrate that the proposed UniDS performs comparably to state-of-the-art chitchat dialogue systems and task-oriented dialogue systems. More importantly, UniDS achieves better robustness than pure dialogue systems and a satisfactory ability to switch between the two types of dialogues. This work demonstrates the feasibility and potential of building a general dialogue system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dialogue systems are an important tool for intelligent user interaction, and they are actively studied by the NLP and other communities. Current research on dialogue systems focuses on task-oriented dialogue (TOD) systems (Hosseini-Asl et al., 2020; Peng et al., 2020; Yang et al., 2021) , which achieve functional goals, and chit-chat dialogue systems, which aim at entertainment (Zhou et al., 2018; Zhang et al., 2020; Zhao et al., 2020). Different methods are devised for these two types of dialogue systems separately. However, a more suitable way for users would be to have one dialogue agent that is able to handle both chit-chat and TOD in one conversation. (* This work was done during an internship at Huawei Noah's Ark Lab.)", "cite_spans": [ { "start": 218, "end": 245, "text": "(Hosseini-Asl et al., 2020;", "ref_id": "BIBREF3" }, { "start": 246, "end": 264, "text": "Peng et al., 2020;", "ref_id": "BIBREF11" }, { "start": 265, "end": 283, "text": "Yang et al., 2021)", "ref_id": "BIBREF17" }, { "start": 369, "end": 388, "text": "(Zhou et al., 2018;", "ref_id": "BIBREF22" }, { "start": 389, "end": 408, "text": "Zhang et al., 2020;", "ref_id": "BIBREF19" }, { "start": 409, "end": 427, "text": "Zhao et al., 2020;", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Figure 1 dialogue: User: \"Does money buy happiness?\" System: \"Depends how much money you spend on it.\" User: \"I don't have much money...\" User: \"I am looking for a place to stay that has cheap price range it should be in a type of hotel.\" System: \"Okay, do you have a specific area you want to stay in?\" User: \"I would like someone in the center.\"]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As illustrated in Figure 1 , users may have communication-oriented needs (e.g. chatting about money and happiness) and task-oriented needs (e.g. hotel reservation) when interacting with a dialogue agent. Furthermore, the inputs of dialogue systems are often interfered with by background noise, such as voices from other people or devices, picked up by the preceding automatic speech recognition (ASR) module. 
Therefore, the chit-chat ability may also improve the robustness of a task-oriented dialogue system (Zhao et al., 2017).", "cite_spans": [ { "start": 534, "end": 553, "text": "(Zhao et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 54, "end": 62, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "As shown in Table 1, there are many differences between chit-chat and task-oriented dialogues. Creating a single model for different tasks without performance degradation is challenging (Kaiser et al., 2017). Some works attempt to model different dialogue skills via different experts or adapters (Madotto et al., 2020; Lin et al., 2021). However, these methods increase the number of parameters and struggle to achieve satisfactory performance on both types of dialogues. Besides, previous works lack exploration of the ability to switch between different types of dialogues.", "cite_spans": [ { "start": 185, "end": 206, "text": "(Kaiser et al., 2017)", "ref_id": "BIBREF5" }, { "start": 297, "end": 319, "text": "(Madotto et al., 2020;", "ref_id": "BIBREF9" }, { "start": 320, "end": 337, "text": "Lin et al., 2021)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "This work proposes an auto-regressive language model based dialogue system to handle chit-chat", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System", "sec_num": null }, { "text": " | Diversity | Purpose | Turns | Mainstream method
Chit-chat | Strong | Entertainment | Long | End-to-end method
Task-oriented dialogue | Weak | Completing tasks | Short | Pipeline method*
Table 1: Differences between chit-chat and task-oriented dialogues. *: The model predicts the belief state and system act before giving a response; to this end, the training set needs to be annotated with belief states and system acts.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 154, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Diversity Purpose", "sec_num": null }, { "text": "and TOD in a unified framework (UniDS). Specifically, since chit-chat data do not have explicit belief states and agent actions, to unify the format of chit-chat and task-oriented dialogues, we devise belief states and agent acts for chit-chat dialogues, as in task-oriented dialogues. On the other hand, because of the diversity of chit-chat, chit-chat dialogue systems need more training data than task-oriented dialogue systems, e.g., 147,116,725 dialogues for DialoGPT (Radford et al., 2019) and 8,438 dialogues for UBAR (Yang et al., 2021). To overcome this difference, we propose to train UniDS in a two-stage way. A chit-chat model is first trained on a huge number of chit-chat dialogues, and then UniDS is trained from this chit-chat dialogue system on mixed dialogues based on our proposed unified dialogue data schema.", "cite_spans": [ { "start": 457, "end": 479, "text": "(Radford et al., 2019)", "ref_id": "BIBREF12" }, { "start": 509, "end": 528, "text": "(Yang et al., 2021)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Diversity Purpose", "sec_num": null }, { "text": "We evaluate UniDS on a public task-oriented dialogue dataset, MultiWOZ, and a chit-chat dataset extracted from Reddit 1 through both automatic and human evaluations. UniDS achieves performance comparable to the state-of-the-art chit-chat dialogue system DialoGPT and the TOD system UBAR. 
In addition, we empirically show that UniDS is more robust to noise in task-oriented dialogues, and that UniDS shows a desirable ability to switch between the two types of dialogues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity Purpose", "sec_num": null }, { "text": "The contributions of this work are summarised as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity Purpose", "sec_num": null }, { "text": "\u2022 To the best of our knowledge, this is the first work presenting a unified dialogue system to jointly handle chit-chat and task-oriented dialogues in an end-to-end way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity Purpose", "sec_num": null }, { "text": "\u2022 We design a unified dialogue data schema for chit-chat and TOD, allowing the training and inference of dialogue systems to be performed in a unified manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity Purpose", "sec_num": null }, { "text": "\u2022 To tackle the gap between chit-chat dialogue systems and task-oriented dialogue systems in the requirement of training data, a two-stage training method is proposed to train UniDS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity Purpose", "sec_num": null }, { "text": "\u2022 Extensive empirical results show that UniDS performs comparably to state-of-the-art chit-chat dialogue systems and task-oriented dialogue systems. Moreover, UniDS achieves better robustness to dialogue noise and a satisfactory ability to switch between the two types of dialogues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity Purpose", "sec_num": null }, { "text": "With the development of large-scale language models, chit-chat dialogue systems have achieved remarkable success. Based on GPT-2 (Radford et al., 2019), DialoGPT (Zhang et al., 2020) is further trained on large-scale dialogues extracted from Reddit. DialoGPT can generate more relevant, contentful, and fluent responses than previous methods. Afterwards, larger pre-trained LM based chit-chat dialogue systems (Adiwardana et al., 2020; Bao et al., 2020) were proposed and achieve even better performance. In the area of task-oriented dialogue systems, recent research (Hosseini-Asl et al., 2020; Peng et al., 2020; Yang et al., 2021) concatenated the elements of a dialogue into one sequence and utilized pre-trained LMs to generate the belief state, system act, and response in an end-to-end way, achieving promising results. Several works are related to the unified dialogue system. Zhao et al. (2017) insert one-turn chit-chat utterances into task-oriented dialogues to train a model with better out-of-domain recovery ability. Attention over Parameters (AoP) (Madotto et al., 2020) utilizes different decoders for different dialogue skills (e.g., hotel booking, restaurant booking, chit). However, the performance of AoP leaves room for improvement, and AoP largely increases the number of parameters compared with models that handle a single type of dialogue. ACCENTOR (Sun et al., 2021) adds chit-chat utterances at the beginning or end of task-oriented responses to make the conversation more engaging, but ACCENTOR is unable to chit-chat with users. 
Unlike the above works, UniDS does not add extra parameters to existing dialogue models, and it can handle chit-chat and task-oriented dialogues alternately in a seamless way.", "cite_spans": [ { "start": 123, "end": 145, "text": "(Radford et al., 2019)", "ref_id": "BIBREF12" }, { "start": 157, "end": 177, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF19" }, { "start": 406, "end": 431, "text": "(Adiwardana et al., 2020;", "ref_id": "BIBREF0" }, { "start": 432, "end": 449, "text": "Bao et al., 2020;", "ref_id": "BIBREF1" }, { "start": 563, "end": 590, "text": "(Hosseini-Asl et al., 2020;", "ref_id": "BIBREF3" }, { "start": 591, "end": 609, "text": "Peng et al., 2020;", "ref_id": "BIBREF11" }, { "start": 610, "end": 628, "text": "Yang et al., 2021)", "ref_id": "BIBREF17" }, { "start": 881, "end": 899, "text": "Zhao et al. (2017)", "ref_id": "BIBREF20" }, { "start": 1057, "end": 1079, "text": "(Madotto et al., 2020)", "ref_id": "BIBREF9" }, { "start": 1342, "end": 1360, "text": "(Sun et al., 2021)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As illustrated in Figure 2 , we formulate the unified dialogue system as an auto-regressive language model. A dialogue session at turn t has the following components: user input U_t, belief state B_t, database search result D_t, system act A_t, and response R_t. Each component consists of tokens from a fixed vocabulary. For turn t, the dialogue context C_t is the concatenation of all the components of the previous turns as well as the user input at turn t:", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Architecture of UniDS", "sec_num": "3.1" }, { "text": "C_t = [U_0, B_0, D_0, A_0, R_0, \\cdots, R_{t-1}, U_t].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture of UniDS", "sec_num": "3.1" }, { "text": "Given the dialogue context C_t, UniDS first generates the belief state B_t:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture of UniDS", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "B_t = UniDS(C_t),", "eq_num": "(1)" } ], "section": "Architecture of UniDS", "sec_num": "3.1" }, { "text": "and uses it to search the database to get the search result D_t. 
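To make this database step concrete, here is a minimal sketch of how a generated belief-state span could be parsed into constraints and mapped to a DB-result token such as <db_2>. The toy DB and the helper names (parse_belief_state, db_result_token) are illustrative assumptions based on the schema in Table 2, not the authors' released code.

```python
import re

# Toy database standing in for the MultiWOZ DB (illustrative assumption).
DB = {
    "hotel": [
        {"price": "cheap", "area": "west"},
        {"price": "cheap", "area": "centre"},
        {"price": "expensive", "area": "west"},
    ],
}

def parse_belief_state(span):
    """Parse a unified belief-state span such as '<hotel> price cheap'
    into {domain: {slot: value}} (hypothetical format, cf. Table 2)."""
    constraints, domain, tokens, i = {}, None, span.split(), 0
    while i < len(tokens):
        match = re.fullmatch(r"<(\w+)>", tokens[i])
        if match:                      # a <domain> marker opens a new block
            domain = match.group(1)
            constraints[domain] = {}
            i += 1
        elif domain is not None and i + 1 < len(tokens):
            constraints[domain][tokens[i]] = tokens[i + 1]  # slot-value pair
            i += 2
        else:
            i += 1
    return constraints

def db_result_token(constraints):
    """Count entities matching all constraints and emit a token like <db_2>;
    <db_nore> marks turns (e.g. chit-chat) where no DB domain applies."""
    matched = [
        sum(all(ent.get(s) == v for s, v in slots.items()) for ent in DB[d])
        for d, slots in constraints.items() if d in DB
    ]
    return f"<db_{sum(matched)}>" if matched else "<db_nore>"

print(db_result_token(parse_belief_state("<hotel> price cheap")))     # <db_2>
print(db_result_token(parse_belief_state("<chit> money happiness")))  # <db_nore>
```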
Then, UniDS generates the system act A_t conditioned on the updated context, obtained by extending C_t with B_t and D_t:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture of UniDS", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A_t = UniDS([C_t, B_t, D_t]).", "eq_num": "(2)" } ], "section": "Architecture of UniDS", "sec_num": "3.1" }, { "text": "Lastly, the response R_t is generated conditioned on the concatenation of all previous components:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture of UniDS", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R_t = UniDS([C_t, B_t, D_t, A_t]).", "eq_num": "(3)" } ], "section": "Architecture of UniDS", "sec_num": "3.1" }, { "text": "In the widely adopted task-oriented dialogue system pipeline, a dialogue session consists of a user input utterance, a belief state that represents the user intention, a database search result, a system act, and a system response (Young et al., 2013; Yang et al., 2021). However, due to the diversity of chit-chat and the cost of manual annotation, chit-chat dialogue systems do not assume the existence of a belief state or a system act (Bao et al., 2020; Zhang et al., 2020). The inconsistency of data format between chit-chat and TOD hinders the implementation of a unified model. To tackle this problem, we design a data schema with a belief state, a database result representation, and a system act for chit-chat. Table 2 illustrates the unified data schema with examples. The following sections explain each component in detail.", "cite_spans": [ { "start": 230, "end": 250, "text": "(Young et al., 2013;", "ref_id": "BIBREF18" }, { "start": 251, "end": 269, "text": "Yang et al., 2021)", "ref_id": "BIBREF17" }, { "start": 440, "end": 458, "text": "(Bao et al., 2020;", "ref_id": "BIBREF1" }, { "start": 459, "end": 478, "text": "Zhang et al., 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 715, "end": 722, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Unified Dialogue Data Schema", "sec_num": "3.2" }, { "text": "The unified belief state is represented in the form of \"<domain> slot [value]\". A belief state may contain several domains, each containing several slot-value pairs. As we can observe, extracting the belief state of a TOD may require copying some words from the user utterance. To let UniDS keep this copy mechanism, for chit-chat, nouns in the user utterance U_t are extracted as the slots or values of the belief state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Belief state", "sec_num": "3.2.1" }, { "text": "We use a special token (e.g., \"<db_2>\") to represent the number of matched entities under the constraints of the belief state in the current turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DB result", "sec_num": "3.2.2" }, { "text": "System acts are represented as \"<domain> <act> [slot]\" for TOD. The meaning of \"<domain>\" is the same as in belief states. \"<act>\" denotes the type of action the system needs to perform. Following the \"domain-act\" pair, slots are optional. For chit-chat, the token \"<chit_act>\" denotes that the dialogue system will chat with the user. 
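Putting Equations (1)-(3) and the schema together, a single auto-regressive LM can serve all three generation steps by decoding one segment at a time and re-feeding it into the context. The sketch below uses a public DialoGPT checkpoint only as a stand-in for the fine-tuned UniDS weights; the segment-end markers (<eos_b>, <eos_a>, <eos_r>) and the query_db callback are assumptions for illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# DialoGPT is the base model per Section 3.3; a released UniDS checkpoint
# is not assumed here, so this only illustrates the decoding flow.
tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

def decode_segment(context, stop, max_new=60):
    """Greedy-decode one segment (belief state, act, or response) and cut
    it at a hypothetical segment-end marker `stop`."""
    ids = tok(context, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    text = tok.decode(out[0, ids.shape[1]:], skip_special_tokens=False)
    return text.split(stop)[0].strip()

def one_turn(history, user_utt, query_db):
    ctx = history + " <user> " + user_utt        # dialogue context C_t
    belief = decode_segment(ctx, "<eos_b>")      # Eq. (1): belief state B_t
    db_token = query_db(belief)                  # DB result D_t, e.g. <db_2>
    ctx += " " + belief + " " + db_token
    act = decode_segment(ctx, "<eos_a>")         # Eq. (2): system act A_t
    ctx += " " + act
    return decode_segment(ctx, "<eos_r>")        # Eq. (3): response R_t
```

Because every segment comes from the same decoder over the growing context, no extra task-specific parameters are required, which matches the design goal stated above.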
Therefore, a processed dialogue sequence X_t at turn t, for either TOD or chit-chat, can be represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System act", "sec_num": "3.2.3" }, { "text": "X_t = [C_t, B_t, D_t, A_t, R_t].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System act", "sec_num": "3.2.3" }, { "text": "(4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System act", "sec_num": "3.2.3" }, { "text": "Due to the diversity of chit-chat in topics and terms, chit-chat dialogue systems need much more training data than task-oriented dialogue systems. If UniDS were directly trained on unified dialogue data containing far more chit-chat dialogues than task-oriented dialogues, the trained model might lose the ability to complete task-oriented dialogues. Therefore, this work proposes a two-stage method for training UniDS. As illustrated in Figure 3 , we first train a chit-chat dialogue model on a huge number of chit-chat dialogues, and then train UniDS from this chit-chat dialogue system on mixed dialogues. The mixed dialogue data are obtained by mixing, at a ratio of 1:1, chit-chat and TOD data pre-processed with the proposed unified data schema. Motivated by the recent success of applying GPT-2 to task-oriented dialogue systems (Hosseini-Asl et al., 2020; Peng et al., 2020; Yang et al., 2021) and chit-chat dialogue systems (Zhang et al., 2020), we use DialoGPT (Zhang et al., 2020) in an auto-regressive manner as:", "cite_spans": [ { "start": 858, "end": 885, "text": "(Hosseini-Asl et al., 2020;", "ref_id": "BIBREF3" }, { "start": 886, "end": 904, "text": "Peng et al., 2020;", "ref_id": "BIBREF11" }, { "start": 905, "end": 923, "text": "Yang et al., 2021)", "ref_id": "BIBREF17" }, { "start": 955, "end": 975, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF19" }, { "start": 994, "end": 1014, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 447, "end": 455, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Two-stage training method", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = \\sum_{i=1}^{N} -\\log P(x_i | x_{<i}) [Figure 2: The architecture of UniDS. Panels: dialogue history, belief state generation, DB result, system act generation, and response generation, shown for a chit-chat example and a task-oriented example.] [Table 2 body, recovered: Chit-chat example: user input 'does money buy happiness ?'; belief state '<chit> money happiness'; DB result '<db_nore>'; act '<chit> <chit_act>'; response 'depends on how much money you spend on it .'. Task-oriented example: user input 'i am looking for a cheap hotel .'; belief state '<hotel> price cheap'; DB result '<db_2>'; act '<hotel> <request> area'; response 'do you have a specific area you want to stay in ?'. Schema: belief state '<domain> slot [value]'; DB result: a token indicating the number of candidate entities; act '<domain> <act> [slot]'.]", "html": null }, "TABREF1": { "text": "Unified dialogue data schema (where tokens inside the square bracket are optional) and examples.", "type_str": "table", "num": null, "content": "
[Figure 3: Two-stage training. A chit-chat dialogue model is trained with mixed dialogues to obtain UniDS.]
", "html": null }, "TABREF4": { "text": "Automatic evaluations of UniDS with two model sizes over two types of dialogue datasets. All results are reported in percentage, except Combined and AvgLen. Best results are in bold. *: Results reported in original paper(Yang et al., 2021) is not obtained by end-to-end evaluation. This result is reported by authors of UBAR in https://github.com/TonyNemo/UBAR-MultiWOZ/issues/3.", "type_str": "table", "num": null, "content": "", "html": null }, "TABREF5": { "text": "", "type_str": "table", "num": null, "content": "
Table 3 presents the overall comparison results of automatic evaluation. The first block shows the results of UBAR. The following two blocks are various baselines trained on 12-layer or 24-layer DialoGPT, respectively. From these results, we have the following observations. i) For the chit-chat task, UniDS achieves comparable performance with DialoGPT. On the BLEU score, UniDS outperforms DialoGPT with both 12L and 24L; on other metrics, UniDS is comparable with DialoGPT. This demonstrates that UniDS can still keep a strong chit-chat ability.", "html": null }, "TABREF6": { "text": "Ablation studies of automatic evaluations for UniDS.", "type_str": "table", "num": null, "content": "
[Figure 4 content] User (to both systems): "Sure, give me their phone number. I would also like to find an expensive restaurant in west cambridge." Belief state of UniDS-24L w/o chit-chat BS: [attraction] area west. Belief state of UniDS-24L: [attraction] area west [restaurant] pricerange expensive area west. Generated acts: [attraction] [inform] phone name address; [train] [request] destination. Generated responses: "[value_name] is located at [value_address] and their phone number is [value_phone]."; "Here's the number for the [value_name], [value_phone]. How does the [value_name] sound for you?"
Figure 4: TOD examples from UniDS w/o chit-chat BS and UniDS. UniDS w/o chit-chat BS does not extract the user intent of searching restaurants, but UniDS extracts this intent successfully (highlighted in italics).
Metric | DialoGPT-24L (Win %) | Neutral (%) | UniDS-24L (Win %)
Relevance | 25.33 | 42.67 | 32.00
Informativeness | 29.33 | 33.33 | 37.34
Human-like | 26.67 | 43.33 | 30.00
", "html": null }, "TABREF7": { "text": "", "type_str": "table", "num": null, "content": "
Win rate [%] between UniDS-24L and DialoGPT-24L using three human evaluation metrics on chit-chat dialogues. \"Neutral\" means the generated responses of DialoGPT-24L and UniDS-24L are considered to have equal quality.
", "html": null }, "TABREF9": { "text": "Switching performance of UniDS when having 2 turns chit-chat dialogues before task-orientated dialogues. Numbers in brackets indicates the exactly switching rate at the 2nd turn.", "type_str": "table", "num": null, "content": "
UniDS | BLEU | Dist-1 | Dist-2 | AvgLen | Switch-1 | Switch-2
12L | 0.22 | 4 | 19 | 14.15 | 31.8 | 98.9 (+67.1)
24L | 0.34 | 6 | 31 | 16.18 | 37.0 | 96.6 (+59.6)
", "html": null }, "TABREF10": { "text": "", "type_str": "table", "num": null, "content": "
Switching performance of UniDS when prepending 2 turns of task-oriented dialogue before chit-chat.
", "html": null }, "TABREF12": { "text": "", "type_str": "table", "num": null, "content": "
Example of UniDS switching from task-oriented dialogue to chit-chat. UniDS gives a chatty response and thanks the user for using its services. Dialogue history is omitted.
Model | Base | 1 turn | 2 turns
UBAR-12L | 99.18 | 93.76 (-5.42) | 88.14 (-11.04)
UniDS-12L | 100.06 | 96.13 (-3.93) | 91.42 (-8.64)
UBAR-24L | 99.31 | 93.08 (-6.23) | 88.67 (-10.64)
UniDS-24L | 104.12 | 100.71 (-3.41) | 95.68 (-8.44)
", "html": null }, "TABREF13": { "text": "Combined score over TOD dataset for robustness test by inserting 1 and 2 turns of task-irrelavant utterances. Full results are presented in Appendix.", "type_str": "table", "num": null, "content": "", "html": null } } } }