{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:29.329369Z" }, "title": "UGent-T2K at the 2nd DialDoc Shared Task: A Retrieval-Focused Dialog System Grounded in Multiple Documents", "authors": [ { "first": "Yiwei", "middle": [], "last": "Jiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ghent University -imec", "location": { "settlement": "IDLab Ghent", "country": "Belgium" } }, "email": "" }, { "first": "Amir", "middle": [], "last": "Hadifar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ghent University -imec", "location": { "settlement": "IDLab Ghent", "country": "Belgium" } }, "email": "" }, { "first": "Johannes", "middle": [], "last": "Deleu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ghent University -imec", "location": { "settlement": "IDLab Ghent", "country": "Belgium" } }, "email": "" }, { "first": "Thomas", "middle": [], "last": "Demeester", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ghent University -imec", "location": { "settlement": "IDLab Ghent", "country": "Belgium" } }, "email": "" }, { "first": "Chris", "middle": [], "last": "Develder", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ghent University -imec", "location": { "settlement": "IDLab Ghent", "country": "Belgium" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This work presents the contribution from the Text-to-Knowledge team of Ghent University (UGent-T2K) 1 to the MultiDoc2Dial shared task on modeling dialogs grounded in multiple documents. We propose a pipeline system, comprising (1) document retrieval, (2) passage retrieval, and (3) response generation. We engineered these individual components mainly by, for (1)-(2), combining multiple ranking models and adding a final LambdaMART reranker, and, for (3), by adopting a Fusion-in-Decoder (FiD) model. 
We thus significantly boost the baseline system's performance (over +10 points for both F1 and SacreBLEU). Further, error analysis reveals two major failure cases, to be addressed in future work: (i) in case of topic shift within the dialog, retrieval often fails to select the correct grounding document(s), and (ii) generation sometimes fails to use the correctly retrieved grounding passage. Our code is released at this link.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "This work presents the contribution from the Text-to-Knowledge team of Ghent University (UGent-T2K) 1 to the MultiDoc2Dial shared task on modeling dialogs grounded in multiple documents. We propose a pipeline system, comprising (1) document retrieval, (2) passage retrieval, and (3) response generation. We engineered these individual components mainly by, for (1)-(2), combining multiple ranking models and adding a final LambdaMART reranker, and, for (3), by adopting a Fusion-in-Decoder (FiD) model. We thus significantly boost the baseline system's performance (over +10 points for both F1 and SacreBLEU). Further, error analysis reveals two major failure cases, to be addressed in future work: (i) in case of topic shift within the dialog, retrieval often fails to select the correct grounding document(s), and (ii) generation sometimes fails to use the correctly retrieved grounding passage. Our code is released at this link.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Most prior research on document-grounded dialog systems assumes a single document for each dialog (Choi et al., 2018; Reddy et al., 2019; Feng et al., 2020) . There are relatively few works on Multi-Document Grounded (MDG) dialog modeling, which requires a dialog system to (i) retrieve grounded passages (or documents) given the user question, and then (ii) generate responses based on the retrieval results and dialog context. 
Real-world applications (e.g., administration question answering, travel booking assistance and procedural task guidance) for MDG are challenging because user behavior in such dialogs, which draw on diverse information sources, is more complex. In particular, for (i) retrieval of grounding text passage(s), the challenges pertain to keeping track of the dialog state, topic shifts (e.g., switching from driving license requirements to car insurance), vocabulary mismatch, vague question formulation, etc. Furthermore, (ii) response generation needs to phrase the answer appropriately so that it fits in a human(-like) dialog rather than simply copying a source document snippet.", "cite_spans": [ { "start": 98, "end": 117, "text": "(Choi et al., 2018;", "ref_id": "BIBREF3" }, { "start": 118, "end": 137, "text": "Reddy et al., 2019;", "ref_id": "BIBREF17" }, { "start": 138, "end": 156, "text": "Feng et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We leverage the recently released dialog dataset, MultiDoc2Dial (Feng et al., 2021) , to tackle the aforementioned challenges. We build a pipeline system ( Fig. 1) comprising (1) a document retriever, (2) a passage retriever, and (3) an answer generator fusing multiple grounding input passages. Given the dialog context (i.e., the dialog history and user question), a document retriever searches the given supporting documents to select the top-m related ones. Subsequently, these full documents are segmented into shorter passages, which are ranked by a passage retriever. For these retrieval components (1)-(2), we use an ensemble approach - combining BM25, cosine similarity, etc.; for passage retrieval, we included Dense Passage Retrieval (DPR; Karpukhin et al., 2020) - followed by a reranking step using LambdaMART (Burges, 2010) . The top-k passages are fused with the dialog context by a response generator to produce knowledge-grounded responses, based on Fusion-in-Decoder (FiD; Izacard and Grave, 2021) . 
Our contributions are: (i) a multi-stage pipeline system comprising, first, the grounding text retrieval stages, further split into document and subsequent passage retrieval components (both using a multi-feature ensemble system), and, second, an answer generation model fusing information from multiple passages; (ii) experiments demonstrating that our pipeline system outperforms the baseline method by a large margin (over +10 points for both F1 and SacreBLEU); (iii) an insightful error analysis, suggesting that the main shortcomings of the current system are failures of (a) the retrieval stages in case of topic shifts by the user, and (b) the answer generation stage to identify the correct grounding passage among its inputs. Our code is released at https://github.com/YiweiJiang2015/ugent-t2k-dialdoc", "cite_spans": [ { "start": 64, "end": 83, "text": "(Feng et al., 2021)", "ref_id": "BIBREF6" }, { "start": 791, "end": 816, "text": "Lam-baMART (Burges, 2010)", "ref_id": null }, { "start": 970, "end": 994, "text": "Izacard and Grave, 2021)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 152, "end": 159, "text": "Fig. 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The MultiDoc2Dial shared task comprises two subtasks: in the seen-domain (referenced by subscript S), the system can rely on training data comprising both exemplary dialogs and the corresponding document set from the domains it will be tested on, whereas in the unseen-domain (referenced by subscript U) no related dialogs or documents have been seen by the system before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "In general, for both subtasks, a system first retrieves relevant documents from a document pool (D_S or D_U) given the dialog context, i.e., a user's question Q_i (i is the turn number) and the full conversation history Q or

and its children nodes are treated as a passage prefixed by its hierarchical titles. We note that some passages produced in this way are too short (424 passages are shorter than 20 tokens, e.g., headers with empty content below) or too long (24 passages longer than 1,000 tokens) as shown in Fig. 3(a) , not to mention the repeated passages due to document duplicates. Given that common transformer-based generation models take input of up to 512 tokens, such a length distribution either wastes a generation model's capacity when short passages are padded, or loses a significant portion of information when long passages are truncated. To eliminate these extreme cases, we take three measures based on our cleaned document set: (i) We remove the 56 duplicate documents. (ii) For each of the remaining documents, we first split it using the structure-wise method, calling the results \"sections\" to differentiate them from the baseline's \"passages\". If a section has fewer than 150 tokens, it is directly added to the final passage list. Otherwise, it is further split into passages using a flexible sliding window, which allows a passage to have fewer tokens than the window size so as not to break sentences. 6 (iii) Next, a passage with fewer than 60 tokens is merged with the passage that follows it, unless it appears at the end of a section, in which case it is appended to the preceding one. Figure 3 (b) depicts the passage length distribution using our segmentation method. The long-tail problem of the baseline is largely resolved. As Table 5 shows, our new segmentation method reduces the total number of passages from 4,110 to 3,734 while increasing the average passage length from 130.4 to 154.1. Table 5 : Total number of passages and average passage length produced by the baseline method and ours. \"tokenizer\" and \"white space\" denote using the BART tokenizer and splitting words by white space, respectively. 
", "cite_spans": [], "ref_spans": [ { "start": 1099, "end": 1108, "text": "Fig. 3(a)", "ref_id": "FIGREF3" }, { "start": 2209, "end": 2217, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 2355, "end": 2362, "text": "Table 5", "ref_id": null }, { "start": 2522, "end": 2529, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "This section reports (i) the ablation study of BM25 for document retrieval revealing how different features affect the retrieval performance; (ii) domain classification that enhances document retrieval; (iii) passage retrieval experiments based on our new segmentation method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Experiments", "sec_num": null }, { "text": "B.1 Ablation study of BM25 for document retrieval Table 6 presents our results for BM25 tuned on document retrieval. The first row shows the simple BM25 model without any preprocessing on inputs (question and documents). The next four rows respectively represent: lower casing inputs, removing stop-words, removing punctuation, and stemming, which greatly improve the performance (over +10 points for R@25). We obtained slight improvement with a domain classifier that predicts the conversation domain (see Appendix B.2). We also observed that using n-grams (n = 1,2,3) features instead of unigrams brings a further improvement with additional 3.2 points of R@25.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 57, "text": "Table 6", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "B Experiments", "sec_num": null }, { "text": "In the training data of MultiDoc2Dial , the grounding documents were crawled from 4 U.S. government websites, 7 covering 4 domains: Social Security Administration, U.S. Department of Veterans Affairs, Department of Motor Vehicles (New York State) and Federal Student Aid, which are respectively noted as ssa, va, dmv and student. We applied the idea proposed by Han et al. 
(2021) to further improve BM25 performance by training a domain classifier, i.e., finetuning the RoBERTa-large model (Liu et al., 2019) to predict a domain label for a given dialog. The domain scores are multiplied with the BM25 scores, after which a weighted combination of the initial BM25 scores and the new scores is used to create the final ranked list. In our experiments, we simply assume equal weights (0.5) for the two scores. Table 7 presents different classifiers' accuracy for seen-domain prediction.", "cite_spans": [ { "start": 362, "end": 379, "text": "Han et al. (2021)", "ref_id": "BIBREF8" }, { "start": 489, "end": 507, "text": "(Liu et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 792, "end": 799, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "B.2 Domain Classifier", "sec_num": null }, { "text": "Model Accuracy SVM (Cortes and Vapnik, 1995) 96.7 BERT-large 97.0 RoBERTa-large (Liu et al., 2019) 98.2 Table 7 : Domain classifier accuracy on dev set. Table 8 presents the passage retrieval results based on our passage segmentation. We experiment with three models: DPR ranking all the passages, DPR ranking only the passages within the top-m documents, and the LambdaMART model based on the top-30 documents. 
Restricting DPR's search space to the top-5 documents increases R@15 from 80.1 to 87.1, which further grows to 90.4 with the LambdaMART model.", "cite_spans": [ { "start": 19, "end": 44, "text": "(Cortes and Vapnik, 1995)", "ref_id": "BIBREF4" }, { "start": 80, "end": 98, "text": "(Liu et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 7", "ref_id": null }, { "start": 153, "end": 160, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "B.2 Domain Classifier", "sec_num": null }, { "text": "FiD was finetuned from pretrained BART weights with the following hyperparameter settings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Hyperparameters", "sec_num": null }, { "text": "batch_size=4 total_epochs=15 max_source_length=400 max_target_length=64", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Hyperparameters", "sec_num": null }, { "text": "Model m R@1 R@5 R@10 R@15 Table 8 : Recall scores for passage retrieval on dev set. The passage set is produced by the method described in Appendix A.", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "C Hyperparameters", "sec_num": null }, { "text": "label_smoothing=0.1 optimizer=AdamW weight_decay=0.1 adam_epsilon=1e-08 max_grad_norm=1.0 lr_scheduler=linear learning_rate=5e-05 warmup_steps=500 gradient_accumulation_steps=2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Hyperparameters", "sec_num": null }, { "text": "msmarco-bert-base-dot-v5: available at https://bit.ly/3ID92fF3 http://terrier.org/ - Note that due to our limited time budget for the challenge, we did not properly analyze the contribution of the various Terrier features; therefore some of them may be unnecessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We select 30 documents, because at the document level, we find R@30 = 99.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The full list of duplicates can be found in https://bit.ly/376TxPX", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Window size \u2264 150, stride = 50. Since we rely on Spacy to extract sentences, some of them may be broken depending on the Spacy model's decision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "ssa.gov, va.gov, dmv.ny.gov, studentaid.gov", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research received funding from the Flemish Government under the \"Onderzoeksprogramma Artifici\u00eble Intelligentie (AI) Vlaanderen\" programme. The first author was supported by China Scholarship Council (201806020194). We thank the anonymous reviewers whose comments helped to improve our work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "The current version of the MultiDoc2Dial dataset provides 488 documents, among which we found 56 duplicate documents 5 . 
The baseline relies on a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Passage segmentation", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Frequentist and bayesian approach to information retrieval", "authors": [ { "first": "Giambattista", "middle": [], "last": "Amati", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ECIR", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/11735106_3" ] }, "num": null, "urls": [], "raw_text": "Giambattista Amati. 2006. Frequentist and bayesian approach to information retrieval. In Proceedings of ECIR.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A simple but tough-to-beat baseline for sentence embeddings", "authors": [ { "first": "Sanjeev", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Yingyu", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Tengyu", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In Proceedings of ICLR.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "From ranknet to lambdarank to lambdamart: An overview", "authors": [ { "first": "J", "middle": [ "C" ], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Burges", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher JC Burges. 2010. From ranknet to lambdarank to lambdamart: An overview. 
Microsoft Research Technical Report.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "QuAC: Question answering in context", "authors": [ { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Wen-tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D18-1241" ] }, "num": null, "urls": [], "raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of EMNLP.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Support-vector networks", "authors": [ { "first": "Corinna", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. 
Machine Learning.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "MultiDoc2Dial: Modeling dialogues grounded in multiple documents", "authors": [ { "first": "Song", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Sankalp", "middle": [], "last": "Siva", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Sachindra", "middle": [], "last": "Wan", "suffix": "" }, { "first": "", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2021.emnlp-main.498" ] }, "num": null, "urls": [], "raw_text": "Song Feng, Siva Sankalp Patel, Hui Wan, and Sachindra Joshi. 2021. MultiDoc2Dial: Modeling dialogues grounded in multiple documents. In Proceedings of EMNLP.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "2020. 
doc2dial: A goal-oriented document-grounded dialogue dataset", "authors": [ { "first": "Song", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Chulaka", "middle": [], "last": "Gunasekara", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Sachindra", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Lastras", "suffix": "" } ], "year": null, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.652" ] }, "num": null, "urls": [], "raw_text": "Song Feng, Hui Wan, Chulaka Gunasekara, Siva Patel, Sachindra Joshi, and Luis Lastras. 2020. doc2dial: A goal-oriented document-grounded dialogue dataset. In Proceedings of EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The simplest thing that can possibly work: (pseudo-)relevance feedback via text classification", "authors": [ { "first": "Xiao", "middle": [], "last": "Han", "suffix": "" }, { "first": "Yuqi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2021, "venue": "Proceedings of SIGIR-ICTIR", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3471158.3472261" ] }, "num": null, "urls": [], "raw_text": "Xiao Han, Yuqi Liu, and Jimmy Lin. 2021. The simplest thing that can possibly work: (pseudo-)relevance feedback via text classification. 
In Proceedings of SIGIR-ICTIR.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Leveraging passage retrieval with generative models for open domain question answering", "authors": [ { "first": "Gautier", "middle": [], "last": "Izacard", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" } ], "year": 2021, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2021.eacl-main.74" ] }, "num": null, "urls": [], "raw_text": "Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open do- main question answering. In Proceedings of EACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-1147" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehen- sion. 
In Proceedings of ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Dense passage retrieval for open-domain question answering", "authors": [ { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2020, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.550" ] }, "num": null, "urls": [], "raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of EMNLP.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Natural Questions: A Benchmark for Question Answering Research. 
Transactions of the Association for Computational Linguistics", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Jennimaria", "middle": [], "last": "Palomaki", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Redfield", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Danielle", "middle": [], "last": "Epstein", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1162/tacl_a_00276" ] }, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A Benchmark for Question Answering Research. 
Transactions of the Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. 
In Proceedings of ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Perez", "suffix": "" }, { "first": "Aleksandra", "middle": [], "last": "Piktus", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Heinrich", "middle": [], "last": "K\u00fcttler", "suffix": "" } ], "year": null, "venue": "Proceedings of NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rock- t\u00e4schel, et al. 2020b. Retrieval-augmented generation for knowledge-intensive nlp tasks. 
In Proceedings of NeurIPS.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "RoBERTa: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.48550/ARXIV.1907.11692" ], "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "From puppy to maturity: Experiences in developing Terrier", "authors": [ { "first": "Craig", "middle": [], "last": "Macdonald", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Mccreadie", "suffix": "" } ], "year": 2012, "venue": "Proceedings of SIGIR-OSIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Craig Macdonald, Richard McCreadie, Rodrygo LT San- tos, and Iadh Ounis. 2012. From puppy to maturity: Experiences in developing Terrier. In Proceedings of SIGIR-OSIR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "CoQA: A conversational question answering challenge. 
Transactions of the Association for Computational Linguistics", "authors": [ { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1162/tacl_a_00266" ] }, "num": null, "urls": [], "raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Improvements to bm25 and language models examined", "authors": [ { "first": "Andrew", "middle": [], "last": "Trotman", "suffix": "" }, { "first": "Antti", "middle": [], "last": "Puurula", "suffix": "" }, { "first": "Blake", "middle": [], "last": "Burgess", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ADCS", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/2682862.2682863" ] }, "num": null, "urls": [], "raw_text": "Andrew Trotman, Antti Puurula, and Blake Burgess. 2014. Improvements to bm25 and language models examined. In Proceedings of ADCS.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Our proposed pipeline dialog system.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Impact of the number of passages (N_p \u2265 1) on generation metrics. (seen-domain task; FiD-BART-base model; on dev set)", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "Passage length histograms of baseline and our passage segmentation. The length is the number of tokens processed by the BART tokenizer. (a) Baseline passages. The x-axis is truncated to 1,000 to make smaller value bins clearer. (b) Our passages after removing duplicate documents and merging short passages. 
No passage is omitted.", "uris": null, "num": null }, "TABREF1": { "text": "Recall scores for document retrieval on dev set.", "num": null, "type_str": "table", "html": null, "content": "" }, "TABREF2": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
compares our passage
" }, "TABREF3": { "text": "Recall scores for passage retrieval on dev set.", "num": null, "type_str": "table", "html": null, "content": "" }, "TABREF4": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
, with
" }, "TABREF5": { "text": "Generation performance of the baseline and our FiD-BART-base model (seen-domain task; on dev set). Row 1-3 list the upper-bound performance. A perfect-retriever assumes that the grounding passage is always ranked as the top 1. Row 4-5 use realistic retrievers. The baseline scores are our reproduction results.", "num": null, "type_str": "table", "html": null, "content": "
[figure residue: metric curves for F1_U, SacreBLEU, Meteor, and RougeL plotted against the number of passages]
" }, "TABREF7": { "text": "Submission results on the leaderboard (on test-test set).", "num": null, "type_str": "table", "html": null, "content": "" }, "TABREF10": { "text": "BM25 tuned recall scores for document retrieval on dev set.", "num": null, "type_str": "table", "html": null, "content": "
" } } } }