{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:19.894190Z" }, "title": "Grounded Dialogue Generation with Cross-encoding Re-ranker, Grounding Span Prediction, and Passage Dropout", "authors": [ { "first": "Kun", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Chinese University of Hong Kong", "location": { "settlement": "Hong Kong SAR", "country": "China" } }, "email": "kunli@se.cuhk.edu.hk" }, { "first": "Tianhua", "middle": [], "last": "Zhang", "suffix": "", "affiliation": {}, "email": "thzhang@cpii.hk" }, { "first": "Liping", "middle": [], "last": "Tang", "suffix": "", "affiliation": {}, "email": "lptang@cpii.hk" }, { "first": "Junan", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "jali@cpii.hk" }, { "first": "Hongyuan", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Chinese University of Hong Kong", "location": { "settlement": "Hong Kong SAR", "country": "China" } }, "email": "hylu@se.cuhk.edu.hk" }, { "first": "Xixin", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Chinese University of Hong Kong", "location": { "settlement": "Hong Kong SAR", "country": "China" } }, "email": "xixinwu@cuhk.edu.hk" }, { "first": "Helen", "middle": [], "last": "Meng", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Chinese University of Hong Kong", "location": { "settlement": "Hong Kong SAR", "country": "China" } }, "email": "hmmeng@se.cuhk.edu.hk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "MultiDoc2Dial presents an important challenge on modeling dialogues grounded with multiple documents. This paper proposes a pipeline system of \"retrieve, re-rank, and generate\", where each component is individually optimized. This enables the passage re-ranker and response generator to fully exploit training with groundtruth data. Furthermore, we use a deep crossencoder trained with localized hard negative passages from the retriever. For the response generator, we use grounding span prediction as an auxiliary task to be jointly trained with the main task of response generation. We also adopt a passage dropout and regularization technique to improve response generation performance. Experimental results indicate that the system clearly surpasses the competitive baseline and our team CPII-NLP ranked 1st among the public submissions on ALL four leaderboards based on the sum of F1, SacreBLEU, METEOR and RougeL scores.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "MultiDoc2Dial presents an important challenge on modeling dialogues grounded with multiple documents. This paper proposes a pipeline system of \"retrieve, re-rank, and generate\", where each component is individually optimized. This enables the passage re-ranker and response generator to fully exploit training with groundtruth data. Furthermore, we use a deep crossencoder trained with localized hard negative passages from the retriever. For the response generator, we use grounding span prediction as an auxiliary task to be jointly trained with the main task of response generation. We also adopt a passage dropout and regularization technique to improve response generation performance. 
Experimental results indicate that the system clearly surpasses the competitive baseline, and our team CPII-NLP ranked 1st among the public submissions on ALL four leaderboards based on the sum of F1, SacreBLEU, METEOR and RougeL scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The task of developing information-seeking dialogue systems has seen many recent research advancements. The goal is to answer users' questions grounded on documents in a conversational manner. MultiDoc2Dial 1 is a realistic task proposed by Feng et al. (2021) to model goal-oriented information-seeking dialogues that are grounded on multiple documents, where participants are required to generate appropriate responses to users' utterances according to the documents. To facilitate this task, the authors also propose a new dataset that contains dialogues grounded in multiple documents from four domains. Unlike previous work that mostly describes document-grounded dialogue modeling as a machine reading comprehension task based on one particular document or passage, MultiDoc2Dial involves multiple topics within a conversation, and a dialogue is hence grounded on different documents. The task contains two sub-tasks: Grounding Span Prediction aims to find the most relevant span from multiple documents for the next agent response, and Agent Response Generation generates the next agent response. This paper focuses on our work on the second sub-task, and presents three major findings and contributions:", "cite_spans": [ { "start": 241, "end": 259, "text": "Feng et al. (2021)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 In order to fully leverage the ground-truth training data, we propose to individually optimize the retriever, re-ranker, and response generator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose to adopt a deep cross-encoder re-ranker that is trained with localized hard negatives sampled from the retriever results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose to use grounding span prediction as an auxiliary task for the generator and to use passage dropout as a regularization technique to improve generation performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experimental results indicate that our proposed system achieves marked improvement over the strong baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Open-domain Question Answering systems have evolved to adopt the popular \"Retriever-Reader (Generator)\" architecture since DrQA (Chen et al., 2017). Previous work (Guu et al., 2020) adopts an end-to-end training strategy to jointly learn the retriever and reader with question-answer pairs. Retrieval-augmented Generation (RAG) (Lewis et al., 2020b) uses Dense Passage Retriever (DPR) as the retriever to extract multiple documents related to the query and feeds them into a BART (Lewis et al., 2020a) generator for answer generation. Izacard and Grave (2021) proposed the Fusion-in-Decoder method, which processes passages individually in the encoder but jointly in the decoder, surpassing the performance of RAG.
Other work, such as QuAC (Choi et al., 2018), ShARC (Saeidi et al., 2018) and CoQA (Reddy et al., 2019), focuses on the machine reading comprehension task, which assumes that the associated document is given. In particular, Feng et al. (2020) proposed the Doc2Dial task, which aims to extract the related span from the given documents for generating the corresponding answer.", "cite_spans": [ { "start": 128, "end": 147, "text": "(Chen et al., 2017)", "ref_id": "BIBREF0" }, { "start": 164, "end": 182, "text": "(Guu et al., 2020)", "ref_id": "BIBREF8" }, { "start": 325, "end": 346, "text": "(Lewis et al., 2020b)", "ref_id": "BIBREF15" }, { "start": 476, "end": 497, "text": "(Lewis et al., 2020a)", "ref_id": "BIBREF14" }, { "start": 531, "end": 555, "text": "Izacard and Grave (2021)", "ref_id": "BIBREF9" }, { "start": 731, "end": 750, "text": "(Choi et al., 2018)", "ref_id": "BIBREF1" }, { "start": 759, "end": 780, "text": "(Saeidi et al., 2018)", "ref_id": "BIBREF22" }, { "start": 790, "end": 810, "text": "(Reddy et al., 2019)", "ref_id": "BIBREF21" }, { "start": 928, "end": 946, "text": "Feng et al. (2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The MultiDoc2Dial task aims to generate an appropriate response R based on an input query Q (the current user turn u_T and the concatenated dialogue history {u_1^{T-1}} := u_1, u_2, ..., u_{T-1}) and a collection of passages {P_i}_{i=1}^{M}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "The passages are extracted from documents based on document structural information indicated by markup tags in the original HTML file. The organizer splits the MultiDoc2Dial data into train, validation, development and test sets, and results on the latter two are evaluated through the leaderboard 2 . The validation, development and test sets contain two settings, seen and unseen, categorized by whether the dialogues are grounded on documents seen or unseen during training. We leave the detailed dataset description to Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "We propose a pipeline system of \"retrieve, re-rank, and generate\". Following previous work in Lewis et al. (2020b); Feng et al. (2021), we adopt DPR as the retriever (\u00a74.1) to efficiently filter out irrelevant passages and narrow the search space. We then refine the retrieval results with a deep cross-encoder (\u00a74.2) trained with localized negatives (Gao et al., 2021). We introduce a passage dropout and regularization technique to enhance the robustness of the generator (\u00a74.3) and use grounding span prediction as an auxiliary task. Furthermore, pipeline training is adopted, where each component is individually optimized to fully utilize the supervision. Experimental results (\u00a75.3) also indicate the effectiveness and merits of this training strategy, which we observed to be a key factor for the performance gain.", "cite_spans": [ { "start": 94, "end": 114, "text": "Lewis et al. (2020b)", "ref_id": "BIBREF15" }, { "start": 117, "end": 135, "text": "Feng et al. (2021)", "ref_id": "BIBREF5" },
(2021)", "ref_id": "BIBREF5" }, { "start": 355, "end": 373, "text": "(Gao et al., 2021)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "Following Feng et al. 2021, we adopt DPR as the retriever with a representation-based bi-encoder, that is, a dialogue query encoder q(\u2022) and a passage context encoder p(\u2022). Given an input query Q and a collection of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Retrieval", "sec_num": "4.1" }, { "text": "passages {P i } M i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Retrieval", "sec_num": "4.1" }, { "text": ", we extract the query encoding as q(Q) and the passage encoding as p(P i ). The similarity is defined as the dot product of the two vectors \u27e8q(Q), p(P i )\u27e9 and the model is trained to optimize the negative log likelihood of the positive passage among L in-batch and hard negatives. We then pre-compute the representations of all passages and index them offline. Maximum Inner Product Search (MIPS) with Faiss (Johnson et al., 2017 ) is adopted to retrieve the top-K passages during inference.", "cite_spans": [ { "start": 410, "end": 431, "text": "(Johnson et al., 2017", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Passage Retrieval", "sec_num": "4.1" }, { "text": "To re-rank the passages retrieved by DPR, we use a BERT-based cross-encoder that exploits localized negatives sampled from DPR results (Gao et al., 2021) . This means that the construction of the training set for the re-ranker is based on the top negative passages retrieved by the DPR. Specifically, given a query Q, its corresponding ground truth passage P + , and its top-N negative passages {P \u2212 j } N j=1 retrieved by DPR, we first calculate a deep distance function for each positive and negative passage against the query:", "cite_spans": [ { "start": 135, "end": 153, "text": "(Gao et al., 2021)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Passage Re-ranking", "sec_num": "4.2" }, { "text": "dist(Q, P) = v T cls(BERT(concat(Q, P))),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Re-ranking", "sec_num": "4.2" }, { "text": "(1) where v represents a trainable vector, cls extracts the [CLS] vector from BERT. Consequently, such a distance function is deeply cross-encoded, as we feed the concatenation of the query and the passage into the model instead of encoding them individually with a representation-based bi-encoder (Feng et al., 2021) . 
{ "text": "We then apply a contrastive loss:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Re-ranking", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_c = - \\log \\frac{\\exp(dist(Q, P^{+}))}{\\sum_{P \\in P^{\\pm}} \\exp(dist(Q, P))}", "eq_num": "(2)" } ], "section": "Passage Re-ranking", "sec_num": "4.2" }, { "text": "where P^{\u00b1} represents {P^+} \u222a {P^-_i}_{i=1}^{N}. Here, it is important to condition the gradient on the negative passages to learn to recognize the positive passage from the hard negatives retrieved by the DPR. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Re-ranking", "sec_num": "4.2" }, { "text": "Ensemble We create an ensemble of three pretrained models (Dietterich, 2000), namely, BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and ELECTRA (Clark et al., 2020) for re-ranking. We first calculate their distance functions with Equation 1, with the output scores denoted as O_B, O_R, and O_E. We define the final score O as the weighted summation of the above three scores:", "cite_spans": [ { "start": 218, "end": 236, "text": "(Dietterich, 2000)", "ref_id": "BIBREF4" }, { "start": 252, "end": 273, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 284, "end": 302, "text": "(Liu et al., 2019)", "ref_id": "BIBREF18" }, { "start": 317, "end": 337, "text": "(Clark et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Passage Re-ranking", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O = \\alpha O_B + \\beta O_R + \\gamma O_E", "eq_num": "(3)" } ], "section": "Passage Re-ranking", "sec_num": "4.2" }, { "text": "where \u03b1, \u03b2, and \u03b3 represent the weight hyperparameters for each model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Re-ranking", "sec_num": "4.2" },
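As an illustration of the weighted summation in Eq. (3), the sketch below combines per-model scores; the function name and the uniform example weights are our own, since the tuned values of \u03b1, \u03b2, and \u03b3 are not given here.

```python
# Sketch of the ensemble re-ranking score in Eq. (3): O = alpha*O_B + beta*O_R + gamma*O_E.
import torch

def ensemble_scores(o_bert, o_roberta, o_electra, alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of the three models' distance scores for the same candidates."""
    return alpha * o_bert + beta * o_roberta + gamma * o_electra

# Toy usage: scores for 5 candidate passages from each re-ranker (random stand-ins).
o_b, o_r, o_e = torch.randn(5), torch.randn(5), torch.randn(5)
final = ensemble_scores(o_b, o_r, o_e)
ranking = torch.argsort(final, descending=True)  # candidate indices, best first
```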
{ "text": "For response generation, we leverage the pretrained sequence-to-sequence model BART-large (Lewis et al., 2020a), where the encoder is fed the concatenation of a query and a passage [Q, P], and the decoder is then required to generate the corresponding response R. We use the ground-truth passage as P for training. The training process can be summarized as follows:", "cite_spans": [ { "start": 90, "end": 111, "text": "(Lewis et al., 2020a)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Response Generation", "sec_num": "4.3" }, { "text": "Joint Training with Grounding Prediction The grounding span in a passage is the supporting evidence for the response, which can provide helpful information for response generation. Therefore, we take grounding span prediction as the auxiliary task and apply multi-task learning for model training. Specifically, the passage is first encoded into a sequence of hidden representations h_i = Encoder([Q, P]), i \u2208 {1, ..., |P|}. Then a classifier outputs the probability of the i-th token of P lying within the grounding span as P(y_i | Q, P) = sigmoid(MLP(h_i)). We define this task's training objective as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Response Generation", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_G = - \\sum_{i=1}^{|P|} \\log P(y_i | Q, P)", "eq_num": "(4)" } ], "section": "Response Generation", "sec_num": "4.3" }, { "text": "Passage Dropout and Regularization Preliminary experiments indicate that the generator is prone to overfitting to some passages quoted frequently in the train set, which may cause generalization errors when applied to previously unseen passages. Hence, we apply passage dropout to enhance the robustness of the generator. In detail, for a training sample ([Q, P], R), a consecutive span of a specified length (25% of the passage in our experiments) in P is randomly selected and then dropped, which produces P\u2032. It is noteworthy that passage dropout is required to avoid truncating the content of grounding spans. 4 Furthermore, we repeat passage dropout twice for each sample in a batch, and obtain ([Q, P\u2032], R) as well as ([Q, P\u2032\u2032], R). Since the grounding span in a passage serves as the oracle for response generation, the two modified inputs should have similar prediction distributions, denoted as P(r_i | Q, P\u2032, r" }, "TABREF3": { "num": null, "html": null, "type_str": "table", "text": "Retrieval performance on the MultiDoc2Dial validation set. All models are fine-tuned using the training set only. * indicates the model trained on the official pre-processed data; others are trained on our preprocessed version. E(\u2022) denotes ensemble.", "content": "
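For concreteness, here is a small sketch of the passage dropout described in \u00a74.3: one contiguous span covering about 25% of the passage tokens is removed, while the grounding span is protected. The span-sampling details (e.g., sampling the start position uniformly) are our assumptions, not the authors' specification.

```python
# Illustrative sketch (our reading of Sec. 4.3, not the authors' code) of passage
# dropout: drop one random contiguous span of ~25% of the passage tokens without
# truncating the grounding span.
import random

def passage_dropout(tokens, grounding_start, grounding_end, ratio=0.25):
    """Return a copy of `tokens` with one contiguous span dropped, keeping the
    grounding span [grounding_start, grounding_end) intact."""
    span_len = max(1, int(len(tokens) * ratio))
    # Start positions whose dropped span would not overlap the grounding span.
    starts = [s for s in range(len(tokens) - span_len + 1)
              if s + span_len <= grounding_start or s >= grounding_end]
    if not starts:  # passage too short to drop a span safely
        return list(tokens)
    s = random.choice(starts)
    return tokens[:s] + tokens[s + span_len:]

# Applied twice per training sample to obtain the two views P' and P''.
passage = [f"tok{i}" for i in range(100)]
p_prime = passage_dropout(passage, grounding_start=40, grounding_end=60)
p_double_prime = passage_dropout(passage, grounding_start=40, grounding_end=60)
```

Per the (truncated) regularization description above, the generator's token-level prediction distributions on the two views would then be encouraged to agree.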