{ "paper_id": "P19-1002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:30:47.025315Z" }, "title": "Incremental Transformer with Deliberation Decoder for Document Grounded Conversations", "authors": [ { "first": "Zekang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Dian Group", "institution": "Huazhong University of Science and Technology \u2021 Pattern Recognition Center", "location": { "addrLine": "WeChat AI" } }, "email": "zekangli97@gmail.com" }, { "first": "Cheng", "middle": [], "last": "Niu", "suffix": "", "affiliation": {}, "email": "chengniu@tencent.com" }, { "first": "Fandong", "middle": [], "last": "Meng", "suffix": "", "affiliation": {}, "email": "fandongmeng@tencent.com" }, { "first": "Yang", "middle": [], "last": "Feng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": { "country": "China" } }, "email": "fengyang@ict.ac.cn" }, { "first": "Qian", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "qianli@stumail.neu.edu.cn" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "", "affiliation": {}, "email": "jiezhou@tencent.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Document Grounded Conversations is a task to generate dialogue responses when chatting about the content of a given document. Obviously, document knowledge plays a critical role in Document Grounded Conversations, while existing dialogue models do not exploit this kind of knowledge effectively enough. In this paper, we propose a novel Transformerbased architecture for multi-turn document grounded conversations. In particular, we devise an Incremental Transformer to encode multi-turn utterances along with knowledge in related documents. Motivated by the human cognitive process, we design a two-pass decoder (Deliberation Decoder) to improve context coherence and knowledge correctness. Our empirical study on a real-world Document Grounded Dataset proves that responses generated by our model significantly outperform competitive baselines on both context coherence and knowledge relevance.", "pdf_parse": { "paper_id": "P19-1002", "_pdf_hash": "", "abstract": [ { "text": "Document Grounded Conversations is a task to generate dialogue responses when chatting about the content of a given document. Obviously, document knowledge plays a critical role in Document Grounded Conversations, while existing dialogue models do not exploit this kind of knowledge effectively enough. In this paper, we propose a novel Transformerbased architecture for multi-turn document grounded conversations. In particular, we devise an Incremental Transformer to encode multi-turn utterances along with knowledge in related documents. Motivated by the human cognitive process, we design a two-pass decoder (Deliberation Decoder) to improve context coherence and knowledge correctness. Our empirical study on a real-world Document Grounded Dataset proves that responses generated by our model significantly outperform competitive baselines on both context coherence and knowledge relevance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Past few years have witnessed the rapid development of dialogue systems. 
Based on the sequence-to-sequence framework (Sutskever et al., 2014), most models are trained in an end-to-end manner with large corpora of human-to-human dialogues and have achieved impressive success (Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016; Serban et al., 2016). However, there is still a long way to go before reaching the ultimate goal of dialogue systems, which is to talk like humans, and one of the essential capabilities for achieving this goal is the ability to make use of knowledge.", "cite_spans": [ { "start": 116, "end": 140, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF19" }, { "start": 275, "end": 295, "text": "(Shang et al., 2015;", "ref_id": "BIBREF18" }, { "start": 296, "end": 317, "text": "Vinyals and Le, 2015;", "ref_id": "BIBREF21" }, { "start": 318, "end": 334, "text": "Li et al., 2016;", "ref_id": "BIBREF8" }, { "start": 335, "end": 355, "text": "Serban et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are several works on dialogue systems exploiting knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Mem2Seq (Madotto et al., 2018) incorporates structured knowledge into end-to-end task-oriented dialogue. Liu et al. (2018) introduce fact-matching and knowledge-diffusion to generate meaningful, diverse and natural responses using structured knowledge triplets. Ghazvininejad et al. (2018), Parthasarathi and Pineau (2018), Yavuz et al. (2018), Dinan et al. (2018) and Lo and Chen (2019) apply unstructured text facts in open-domain dialogue systems. These works mainly focus on integrating factoid knowledge into dialogue systems, but factoid knowledge requires a lot of work to build up and is limited to expressing precise facts. Documents as a knowledge source provide a wide spectrum of knowledge, including but not limited to factoids, event updates, subjective opinions, etc. Recently, intensive research has been conducted on using documents as knowledge sources for Question Answering (Chen et al., 2017; Yu et al., 2018; Rajpurkar et al., 2018; Reddy et al., 2018).", "cite_spans": [ { "start": 12, "end": 34, "text": "(Madotto et al., 2018)", "ref_id": "BIBREF12" }, { "start": 252, "end": 279, "text": "Ghazvininejad et al. (2018)", "ref_id": "BIBREF2" }, { "start": 282, "end": 313, "text": "Parthasarathi and Pineau (2018)", "ref_id": "BIBREF14" }, { "start": 316, "end": 335, "text": "Yavuz et al. (2018)", "ref_id": "BIBREF24" }, { "start": 338, "end": 357, "text": "Dinan et al. (2018)", "ref_id": "BIBREF1" }, { "start": 362, "end": 380, "text": "Lo and Chen (2019)", "ref_id": "BIBREF10" }, { "start": 891, "end": 910, "text": "(Chen et al., 2017;", "ref_id": "BIBREF0" }, { "start": 911, "end": 927, "text": "Yu et al., 2018;", "ref_id": "BIBREF25" }, { "start": 928, "end": 951, "text": "Rajpurkar et al., 2018;", "ref_id": "BIBREF15" }, { "start": 952, "end": 971, "text": "Reddy et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Document Grounded Conversation task is to generate natural dialogue responses when chatting about the content of a specific document. This task requires integrating document knowledge with the multi-turn dialogue history. Different from previous knowledge-grounded dialogue systems, Document Grounded Conversations utilize documents as the knowledge source, and hence are able to employ a wide spectrum of knowledge. Document Grounded Conversations also differ from document QA in that a contextually consistent conversational response must be generated. To address the Document Grounded Conversation task, it is important to: 1) exploit document knowledge that is relevant to the conversation; and 2) develop a unified representation combining the multi-turn utterances with the relevant document knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a novel and effective Transformer-based (Vaswani et al., 2017) architecture for Document Grounded Conversations, named Incremental Transformer with Deliberation Decoder. The encoder employs a Transformer architecture to incrementally encode multi-turn history utterances and to incorporate document knowledge into the multi-turn context encoding process. The decoder is a two-pass decoder similar to the Deliberation Network in Neural Machine Translation (Xia et al., 2017), which is designed to improve the context coherence and knowledge correctness of the responses. The first-pass decoder focuses on contextual coherence, while the second-pass decoder refines the result of the first-pass decoder by consulting the relevant document knowledge, and hence increases the knowledge relevance and correctness. This design is motivated by the human cognitive process: in real-world conversations, people usually first draft a response to the previous utterance, and then polish the answer, or even raise questions, by consulting background knowledge.", "cite_spans": [ { "start": 66, "end": 88, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF20" }, { "start": 483, "end": 501, "text": "(Xia et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
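{ "text": "To make this encode-and-refine flow concrete, here is a minimal PyTorch-style sketch (ours, not the authors' released code) of the incremental encoding and two-pass decoding described above; the module names, layer counts, and the assumption that the last document grounds the response are all illustrative, and attention masks are omitted for brevity:\n\nimport torch\nimport torch.nn as nn\n\nclass IncrementalDeliberationSketch(nn.Module):\n    def __init__(self, vocab_size, d_model=512, nhead=8, layers=2):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, d_model)\n        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)\n        dec = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)\n        self.doc_encoder = nn.TransformerEncoder(enc, layers)   # self-attentive document encoder\n        self.ctx_encoder = nn.TransformerDecoder(dec, layers)   # incremental context encoder\n        self.first_pass = nn.TransformerDecoder(dec, layers)    # draft: context coherence\n        self.second_pass = nn.TransformerDecoder(dec, layers)   # refine: knowledge correctness\n        self.out = nn.Linear(d_model, vocab_size)\n\n    def forward(self, utterances, documents, response_in):\n        # utterances/documents: lists of (batch, seq_len) token-id tensors, one per turn\n        context = None\n        for u, s in zip(utterances, documents):\n            doc_mem = self.doc_encoder(self.embed(s))\n            mem = doc_mem if context is None else torch.cat([context, doc_mem], dim=1)\n            context = self.ctx_encoder(self.embed(u), mem)  # fuse turn with history + document\n        draft = self.first_pass(self.embed(response_in), context)\n        final = self.second_pass(draft, doc_mem)            # consult the grounding document\n        return self.out(draft), self.out(final)\n\nThe second pass here simply re-decodes the draft against the grounding document; it is meant only to convey the two-stage division of labor, not the exact attention wiring of the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },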
{ "text": "We test the effectiveness of our proposed model on the Document Grounded Conversations Dataset (Zhou et al., 2018). Experimental results show that our model is capable of generating responses with better context coherence and knowledge relevance. Sometimes document knowledge is even used to guide the subsequent conversation. Both automatic and manual evaluations show that our model substantially outperforms competitive baselines.", "cite_spans": [ { "start": 91, "end": 110, "text": "(Zhou et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We build a novel Incremental Transformer to incrementally encode multi-turn utterances together with document knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We are the first to apply a two-pass decoder to generate responses for document grounded conversations. The two decoders focus on context coherence and knowledge correctness, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Approach", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is to incorporate the relevant document knowledge into multi-turn conversations. Formally, let $U = \\mathbf{u}^{(1)}, \\ldots, \\mathbf{u}^{(k)}, \\ldots, \\mathbf{u}^{(K)}$ be a whole conversation composed of $K$ utterances. We use $\\mathbf{u}^{(k)} = u^{(k)}_1, \\ldots, u^{(k)}_i, \\ldots, u^{(k)}_I$ to denote the $k$-th utterance containing $I$ words, where $u^{(k)}_i$ denotes the $i$-th word in the $k$-th utterance. For each utterance $\\mathbf{u}^{(k)}$, likewise, there is a specified relevant document $\\mathbf{s}^{(k)} = s^{(k)}_1, \\ldots, s^{(k)}_j, \\ldots, s^{(k)}_J$, which represents the document related to the $k$-th utterance and contains $J$ words. We define the document grounded conversation task as generating a response $\\mathbf{u}^{(k+1)}$ given its related document $\\mathbf{s}^{(k+1)}$ and the previous $k$ utterances $U_{\\leq k}$ with their related documents $S_{\\leq k}$, where $U_{\\leq k} = \\mathbf{u}^{(1)}, \\ldots, \\mathbf{u}^{(k)}$ and $S_{\\leq k} = \\mathbf{s}^{(1)}, \\ldots, \\mathbf{s}^{(k)}$. Note that $\\mathbf{s}^{(k)}, \\mathbf{s}^{(k+1)}, \\ldots, \\mathbf{s}^{(k+n)}$ may be the same.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2.1" }, { "text": "Therefore, the probability of generating the response $\\mathbf{u}^{(k+1)}$ is computed as: $$P(\\mathbf{u}^{(k+1)} \\mid U_{\\leq k}, S_{\\leq k+1}; \\theta) = \\prod_{i=1}^{I} P(u^{(k+1)}_i \\mid U_{\\leq k}, S_{\\leq k+1}, u^{(k+1)}_{<i}; \\theta), \\quad (1)$$ where $u^{(k+1)}_{<i} = u^{(k+1)}_1, \\ldots, u^{(k+1)}_{i-1}$ denotes the partial response generated so far.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2.1" },
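{ "text": "Concretely, Eq. (1) is the standard autoregressive factorization, so scoring a response reduces to summing per-token log-probabilities. The sketch below is illustrative only (the tensor shapes and the helper name are our assumptions, not fixed by the paper) and applies to any conditional model that returns per-step logits:\n\nimport torch\nimport torch.nn.functional as F\n\ndef response_log_prob(logits: torch.Tensor, response: torch.Tensor) -> torch.Tensor:\n    # logits: (batch, I, vocab) per-step scores for P(u_i | U_<=k, S_<=k+1, u_<i)\n    # response: (batch, I) gold token ids of the response u^(k+1)\n    log_p = F.log_softmax(logits, dim=-1)\n    tok_log_p = log_p.gather(-1, response.unsqueeze(-1)).squeeze(-1)  # (batch, I)\n    return tok_log_p.sum(dim=-1)  # sum of logs = log of the product in Eq. (1)\n\nTraining then maximizes this quantity over the corpus, i.e., minimizes token-level cross-entropy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2.1" },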
{ "text": "Figure 1 shows the framework of the proposed Incremental Transformer with Deliberation Decoder. Please refer to Figure 2 for more details. It consists of three components: 1) Self-Attentive Encoder (SA) (in orange) is a Transformer encoder as described in (Vaswani et al., 2017), which encodes the document knowledge and the current utterance independently.", "cite_spans": [ { "start": 342, "end": 364, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 76, "end": 84, "text": "Figure 1", "ref_id": null }, { "start": 194, "end": 202, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Problem Statement", "sec_num": "2.1" },