{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:52:33.994998Z" }, "title": "An Encoder Attribution Analysis for Dense Passage Retriever in Open-Domain Question Answering", "authors": [ { "first": "Minghan", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "" }, { "first": "Xueguang", "middle": [], "last": "Ma", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "jimmylin@uwaterloo.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The bi-encoder design of dense passage retriever (DPR) is a key factor to its success in open-domain question answering (QA), yet it is unclear how DPR's question encoder and passage encoder individually contributes to overall performance, which we refer to as the encoder attribution problem. The problem is important as it helps us identify the factors that affect individual encoders to further improve overall performance. In this paper, we formulate our analysis under a probabilistic framework called encoder marginalization, where we quantify the contribution of a single encoder by marginalizing other variables. First, we find that the passage encoder contributes more than the question encoder to indomain retrieval accuracy. Second, we demonstrate how to find the affecting factors for each encoder, where we train DPR with different amounts of data and use encoder marginalization to analyze the results. We find that positive passage overlap and corpus coverage of training data have big impacts on the passage encoder, while the question encoder is mainly affected by training sample complexity under this setting. 
Based on this framework, we can devise data-efficient training regimes: for example, we manage to train a passage encoder on SQuAD using 60% less training data without loss of accuracy.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "The bi-encoder design of dense passage retriever (DPR) is a key factor in its success in open-domain question answering (QA), yet it is unclear how DPR's question encoder and passage encoder individually contribute to overall performance, which we refer to as the encoder attribution problem. The problem is important as it helps us identify the factors that affect individual encoders to further improve overall performance. In this paper, we formulate our analysis under a probabilistic framework called encoder marginalization, where we quantify the contribution of a single encoder by marginalizing other variables. First, we find that the passage encoder contributes more than the question encoder to in-domain retrieval accuracy. Second, we demonstrate how to find the affecting factors for each encoder, where we train DPR with different amounts of data and use encoder marginalization to analyze the results. We find that positive passage overlap and corpus coverage of the training data have a large impact on the passage encoder, while the question encoder is mainly affected by training sample complexity under this setting. Based on this framework, we can devise data-efficient training regimes: for example, we manage to train a passage encoder on SQuAD using 60% less training data without loss of accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Attribution analysis, or credit assignment, concerns how individual components of a system contribute to its overall performance (Minsky, 1961) . 
In this paper, we are interested in the encoder attribution problem of dense passage retrievers (DPR) (Karpukhin et al., 2020; Zhan et al., 2020b) for open-domain question answering (Voorhees and Tice, 2000; Chen et al., 2017) . DPR leverages a bi-encoder structure that encodes questions and passages into low-dimensional vectors separately. In Fig. 1, \"*\" denotes the target encoder we want to evaluate, where we use the Q-encoder of DPR trained on NQ as an example. The Q-encoder is evaluated on NQ-test data and paired with different P-encoders, and its final contribution is determined by averaging across the scores of the different encoder pairings.", "cite_spans": [ { "start": 129, "end": 143, "text": "(Minsky, 1961)", "ref_id": "BIBREF38" }, { "start": 248, "end": 272, "text": "(Karpukhin et al., 2020;", "ref_id": "BIBREF16" }, { "start": 273, "end": 292, "text": "Zhan et al., 2020b)", "ref_id": "BIBREF57" }, { "start": 328, "end": 353, "text": "(Voorhees and Tice, 2000;", "ref_id": "BIBREF48" }, { "start": 354, "end": 372, "text": "Chen et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Follow-up work has proposed various methods to further improve and analyze DPR (Luan et al., 2021; Gao and Callan, 2021) . 
However, most of these methods only test the bi-encoder model in tandem, leaving two questions unanswered:", "cite_spans": [ { "start": 79, "end": 97, "text": "Luan et al., 2021;", "ref_id": "BIBREF31" }, { "start": 98, "end": 119, "text": "Gao and Callan, 2021)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) What are the individual contributions of each encoder of DPR?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) How to find the affecting factors for each encoder in different QA datasets?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The first problem, which we refer to as encoder attribution, is important as it helps us understand which part of the DPR model might go wrong and, for the second question, identify possible sources of error in the data. Therefore, it is important to separately inspect the individual encoders of DPR. In this paper, we perform an encoder attribution analysis of DPR under a probabilistic framework, where we model the evaluation function for DPR's predictions as a probabilistic distribution. The core component of our method is called encoder marginalization, where we target one encoder and marginalize over the other variables. We then use the expectation under the marginalized distribution as the encoder's contribution to the evaluation score. The marginalization can be approximated using Monte-Carlo, as illustrated in Fig. 1 , where encoders trained from different domains are used as empirical samples, which will be discussed in Section 3.2.", "cite_spans": [], "ref_spans": [ { "start": 823, "end": 829, "text": "Fig. 
1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For question (1), we introduce a technique we call encoder marginalization to compare the question encoder and passage encoder of the same DPR (Section 5.2). We find that in general, the passage encoder plays a more important role than the question encoder in terms of retrieval accuracy, as replacing the passage encoder generally causes a larger accuracy drop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For question (2), we perform a case study where we analyze DPR's individual encoders under a data efficiency setting. We evaluate different DPR models trained with different amounts of data. Under this setting, we find that positive passage overlap and corpus coverage of the training data might be the affecting factors for the passage encoder, while the question encoder seems to be affected by the sample complexity of training data. 
Based on the discovery of these affecting factors, we develop a data-efficient training regime, where we manage to train a passage encoder on SQuAD using 60% less training data with very little drop in accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work makes the following four main contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 To our knowledge, we are the first to perform an encoder attribution analysis for DPR under a probabilistic framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We find that the passage encoder plays a more important role than the question encoder in terms of in-domain retrieval accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Under a data efficiency setting, we identify that passage encoders are affected by positive passage overlap and corpus coverage of the training data, while question encoders are sensitive to the training sample complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Our framework enables the development of data-efficient training regimes where we are able to use up to 60% less training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Attribution analysis Attribution analysis, also known as credit assignment, has long been discussed in various areas and applications. In reinforcement learning (Sutton and Barto, 1998) , the accumulated reward from the environment needs to be distributed to the agent's historical decisions (Sutton, 1984; Harutyunyan et al., 2019; Arumugam et al., 2021) . 
In investment analysis (Binay, 2005) , it is used to explain why a portfolio's performance differed from its benchmark. Attribution analysis has also been used in NLP (Mudrakarta et al., 2018; Jiang et al., 2021) and CV (Schulz et al., 2020) to interpret models' decisions. Therefore, attribution analysis is an important topic for understanding a system's behavior, especially for black-box models like deep neural networks (Goodfellow et al., 2016) .", "cite_spans": [ { "start": 148, "end": 172, "text": "(Sutton and Barto, 1998)", "ref_id": "BIBREF45" }, { "start": 280, "end": 294, "text": "(Sutton, 1984;", "ref_id": "BIBREF46" }, { "start": 295, "end": 320, "text": "Harutyunyan et al., 2019;", "ref_id": "BIBREF11" }, { "start": 321, "end": 343, "text": "Arumugam et al., 2021)", "ref_id": null }, { "start": 360, "end": 373, "text": "(Binay, 2005)", "ref_id": "BIBREF3" }, { "start": 500, "end": 529, "text": "NLP (Mudrakarta et al., 2018;", "ref_id": null }, { "start": 530, "end": 549, "text": "Jiang et al., 2021)", "ref_id": "BIBREF13" }, { "start": 557, "end": 578, "text": "(Schulz et al., 2020)", "ref_id": "BIBREF42" }, { "start": 762, "end": 787, "text": "(Goodfellow et al., 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Retrieval for QA First-stage retrieval aims to efficiently find a set of candidate documents from a large corpus. Term-matching methods such as BM25 (Robertson and Zaragoza, 2009) have established strong baselines in the first-stage retrieval of various QA tasks (Chen et al., 2017; Yang et al., 2019; Min et al., 2019) . Recently, retrievers based on pre-trained language models (Devlin et al., 2019; Liu et al., 2019) have also made great advances (Seo et al., 2019; Guu et al., 2020; Khattab and Zaharia, 2020) . 
In particular, dense passage retrievers (DPR) (Karpukhin et al., 2020; Zhan et al., 2020b ) set a milestone by encoding questions and passages separately with a bi-encoder design. Based on DPR, multiple works on compression (Yamada et al., 2021; Izacard et al., 2020) , hard-negative mining (Zhan et al., 2021) , multi-vector encoding (Luan et al., 2021; Lee et al., 2021b) , and QA pre-training (Lu et al., 2021; Gao and Callan, 2021) expand the boundaries of dense retrieval.", "cite_spans": [ { "start": 149, "end": 179, "text": "(Robertson and Zaragoza, 2009;", "ref_id": "BIBREF41" }, { "start": 262, "end": 281, "text": "(Chen et al., 2017;", "ref_id": "BIBREF4" }, { "start": 282, "end": 300, "text": "Yang et al., 2019;", "ref_id": "BIBREF54" }, { "start": 301, "end": 318, "text": "Min et al., 2019)", "ref_id": "BIBREF37" }, { "start": 379, "end": 400, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF5" }, { "start": 401, "end": 418, "text": "Liu et al., 2019)", "ref_id": "BIBREF29" }, { "start": 448, "end": 466, "text": "(Seo et al., 2019;", "ref_id": "BIBREF44" }, { "start": 467, "end": 484, "text": "Guu et al., 2020;", "ref_id": "BIBREF10" }, { "start": 485, "end": 511, "text": "Khattab and Zaharia, 2020)", "ref_id": "BIBREF17" }, { "start": 559, "end": 583, "text": "(Karpukhin et al., 2020;", "ref_id": "BIBREF16" }, { "start": 584, "end": 602, "text": "Zhan et al., 2020b", "ref_id": "BIBREF57" }, { "start": 737, "end": 758, "text": "(Yamada et al., 2021;", "ref_id": "BIBREF53" }, { "start": 759, "end": 780, "text": "Izacard et al., 2020;", "ref_id": "BIBREF12" }, { "start": 803, "end": 821, "text": "Zhan et al., 2021)", "ref_id": "BIBREF55" }, { "start": 846, "end": 865, "text": "(Luan et al., 2021;", "ref_id": "BIBREF31" }, { "start": 866, "end": 884, "text": "Lee et al., 2021b)", "ref_id": "BIBREF23" }, { "start": 907, "end": 924, "text": "(Lu et al., 2021;", "ref_id": "BIBREF30" }, { "start": 925, "end": 946, "text": "Gao and Callan, 2021)", "ref_id": "BIBREF6" } 
], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Other Analyses of DPR BEIR investigates DPR's transferability to multiple domains and retrieval tasks (Thakur et al., 2021) , while Mr.TYDI evaluates DPR pre-trained on English for retrieval in a multi-lingual setting . find that most of the test answers also occur somewhere in the training data for most QA datasets. observe that neural retrievers fail to generalize to compositional questions and novel entities. Sciavolino et al. (2021) also find that dense models can only generalize to common question patterns.", "cite_spans": [ { "start": 102, "end": 123, "text": "(Thakur et al., 2021)", "ref_id": "BIBREF47" }, { "start": 416, "end": 440, "text": "Sciavolino et al. (2021)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Open-domain question answering requires finding answers to given questions from a large collection of documents (Voorhees and Tice, 2000) . For example, the question \"How many episodes in Season 2 Breaking Bad?\" is given and then the answer \"13\" will be either extracted from the retrieved passages or generated from a model. The goal of open-domain question answering is to learn a mapping from the questions to the answers, where the mapping could be a multi-stage pipeline that includes retrieval and extraction, or it could be a large language model that generates the answers directly given the questions. In this paper, we mainly discuss the retrieval component in a multi-stage system, which involves retrieving a set of candidate documents from a large text corpus. Based on the type of corpus, we could further divide opendomain question answering into textual QA and knowledge base QA. Textual QA mines answers from unstructured text documents (e.g., Wikipedia) while the other one searches through a structured knowledge base. 
We will mainly focus on textual QA in this paper.", "cite_spans": [ { "start": 112, "end": 137, "text": "(Voorhees and Tice, 2000)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Open-Domain Question Answering", "sec_num": "2.1" }, { "text": "Given a corpus of passages", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "C = {d_1, d_2, ..., d_n}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "and a query q, DPR (Karpukhin et al., 2020) leverages two encoders \u03b7 Q and \u03b7 D to encode the question and passages separately. The similarity between the question q and passage d is defined as the dot product of their output vectors:", "cite_spans": [ { "start": 19, "end": 43, "text": "(Karpukhin et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s = E_q^T E_d,", "eq_num": "(1)" } ], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "E_q = \u03b7_Q(q) and E_d = \u03b7_D(d).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "The similarity score s is used to rank the passages during retrieval. 
Both \u03b7 Q and \u03b7 D are initialized with a pre-trained BERT model (Devlin et al., 2019) and use its [CLS] vector as the representation.", "cite_spans": [ { "start": 116, "end": 137, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "Training As pointed out by Karpukhin et al. (2020) , training the encoders such that Eq. (1) becomes a good ranking function is essentially a metric learning problem (Kulis, 2012) . Given a specific question q, let d^+ be the positive context that contains the answer a for q and", "cite_spans": [ { "start": 27, "end": 50, "text": "Karpukhin et al. (2020)", "ref_id": "BIBREF16" }, { "start": 166, "end": 179, "text": "(Kulis, 2012)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "{d^-_1, d^-_2, ..., d^-_k}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "be the negative contexts, the contrastive learning objective with respect to q, d^+, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "{d^-_i}_{i=1}^k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "L(q, d^+, d^-_1, d^-_2, ..., d^-_k) = \u2212 log [ exp(E_q^T E_{d^+}) / ( exp(E_q^T E_{d^+}) + \u2211_{i=1}^{k} exp(E_q^T E_{d^-_i}) ) ] (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "The loss function in Eq. 
(2) encourages the representations of q and d^+ to be close and increases the distance between q and d^-.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "Retrieval/Inference The bi-encoder design enables DPR to perform an approximate nearest neighbour search (ANN) using tools like FAISS (Johnson et al., 2021) , where the representations of the corpus passages are indexed offline. It is typically used in first-stage retrieval, where the goal is to retrieve all potentially relevant documents from the large corpus. Therefore, we consider top-k accuracy as the evaluation metric in this paper, following Karpukhin et al. (2020) . Let R be an evaluation function (e.g., top-k accuracy) for first-stage retrieval. Given a question-answer pair (q, a) and a corpus C, we use \u03b7 Q and \u03b7 D to encode questions and retrieve passages separately. We define the evaluation score r_0 given the above inputs to be:", "cite_spans": [ { "start": 134, "end": 156, "text": "(Johnson et al., 2021)", "ref_id": "BIBREF14" }, { "start": 451, "end": 474, "text": "Karpukhin et al. 
(2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r 0 = R(q, a, C, \u03b7 Q , \u03b7 D )", "eq_num": "(3)" } ], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "For simplicity's sake, in the rest of the paper, we will omit the answer a and corpus C as they are held fixed during evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval", "sec_num": "2.2" }, { "text": "In this section, we propose a simple probabilistic method to evaluate the contributions of encoders \u03b7 Q and \u03b7 D , as well as to compare the same type of encoder across different datasets. The core idea is called encoder marginalization, where marginalization simply means summing over the probability of possible values of a random variable. Typically, the evaluation function R in Eq.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "(3) outputs a deterministic score r 0 . However, we could also view r 0 as a specific value of a continuous random variable r \u2208 R sampled from a Dirac", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "delta distribution p(r | q, \u03b7 Q , \u03b7 D ): p(r | q, \u03b7 Q , \u03b7 D ) . 
= \u03b4(r \u2212 r 0 ) = +\u221e, r = r 0 0, r = r 0 , s.t., +\u221e \u2212\u221e \u03b4(r \u2212 r 0 )dr = 1", "eq_num": "(4)" } ], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "r 0 = R(q, a, C, \u03b7 Q , \u03b7 D ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "Again, the answer a and corpus C are omitted for simplicity's sake. The expectation of the evaluation score r under the Dirac delta distribution \u03b4(r \u2212 r 0 ) is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "E p(r|q,\u03b7 Q ,\u03b7 D ) [r] = +\u221e \u2212\u221e r \u2022 \u03b4(r \u2212 r 0 )dr = r 0 (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "which is the score of the evaluation function in Eq. 3. This is also known as the sifting property 1 of the Dirac delta distribution (Mack, 2008) , where the delta function is said to \"sift out\" the value at r = r 0 . The reason for such a formalization is that now we can evaluate the contribution of a single encoder to the evaluation score r by marginalizing the other random variables. The contribution of an individual encoder \u03b7 Q or \u03b7 D to score r on a question q can be evaluated by marginalizing the other encoder of p(r | q, \u03b7 Q , \u03b7 D ) in Eq. (4). We assume that the question q is sampled from the training data distribution for learning \u03b7 Q and \u03b7 D . Let's take the question encoder \u03b7 Q as an example. 
The distribution of r after marginalizing over \u03b7 D is:", "cite_spans": [ { "start": 133, "end": 145, "text": "(Mack, 2008)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(r | q, \u03b7_Q) = \u222b p(r | q, \u03b7_Q, \u03b7_D) p(\u03b7_D) d\u03b7_D \u2248 (1/K) \u2211_{i=1}^{K} p(r | q, \u03b7_Q, \u03b7_D^{(i)}) = (1/K) \u2211_{i=1}^{K} \u03b4(r \u2212 r_0^{(i)})", "eq_num": "(6)" } ], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "where the superscript (i) means the tagged random variables belong to the i-th of the K QA datasets (e.g., \u03b7_D^{(i)} means the passage encoder trained on the i-th QA dataset). The second-to-last step uses the Monte-Carlo approximation, where we use \u03b7_D^{(i)} sampled from a prior distribution p(\u03b7_D), which will be discussed in Section 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "The integration step in Eq. (6) assumes independence between q, \u03b7 D , and \u03b7 Q . Although during the training of DPR, \u03b7 D and \u03b7 Q are usually learned together, the two encoders do not necessarily need to be evaluated together during inference. For example, a question encoder trained on NQ could be paired with a passage encoder trained on Curated and tested on the Trivia QA dataset, without assuming any dependency. 
Therefore, we assume no prior knowledge here about how \u03b7 D and \u03b7 Q are trained, but rather highlight their independence during evaluation to validate Eq. (6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "As for the contribution of \u03b7 Q , according to the expectation under the Dirac delta distribution in Eq. (5), the expectation of r under the marginalized distribution in Eq. (6) is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E_{p(r | q, \u03b7_Q)}[r] = \u222b_{\u2212\u221e}^{+\u221e} r \u2022 p(r | q, \u03b7_Q) dr \u2248 \u222b_{\u2212\u221e}^{+\u221e} r \u2022 (1/K) \u2211_{i=1}^{K} p(r | q, \u03b7_Q, \u03b7_D^{(i)}) dr = (1/K) \u2211_{i=1}^{K} \u222b_{\u2212\u221e}^{+\u221e} r \u2022 \u03b4(r \u2212 r_0^{(i)}) dr = (1/K) \u2211_{i=1}^{K} r_0^{(i)}", "eq_num": "(7)" } ], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "which corresponds to the in-domain encoder marginalization in Fig. 1 . In this way, we manage to calculate the contribution of a question encoder \u03b7 Q to the evaluation score r given a question q.", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 68, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Encoder Marginalization", "sec_num": "3.1" }, { "text": "In the previous section, we defined the contribution of a single encoder of DPR using encoder marginalization. However, to approximate the expectation under the marginalized distribution in Eq. (6), we need to sample the encoder \u03b7 D from a prior distribution p(\u03b7 D ). In practice, we do not have access to p(\u03b7 D ); instead, we need to train \u03b7 D on specific datasets as empirical samples. In addition, we cannot consider every possible function for the encoder. 
Therefore, we need to put constraints on the encoder prior distribution, so that p(\u03b7 D ) becomes p(\u03b7 D | \u03a6), which implicitly conditions on some constraints \u03a6. In this paper, \u03a6 could represent, for example, model structures, training schemes, optimizers, initialization, and so on. The (sampled) encoders we run in the experiments are initialized with the same pre-trained language model (e.g., bert-base-uncased) and optimized with the same scheme (e.g., 40 epochs, the Adam optimizer, etc.), to ensure that the constraints we impose are consistent across different DPR models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Prior Distribution, Sampling, and Approximation", "sec_num": "3.2" }, { "text": "In practice, we use empirical samples such as DPRs pre-trained on different QA datasets for the approximation in Eq. (7). Although the sample size is not big, as it is very expensive to train DPR and encode a large textual corpus, the samples themselves are statistically meaningful: they are carefully fine-tuned for the domains we want to evaluate, instead of being models with randomly initialized weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder Prior Distribution, Sampling, and Approximation", "sec_num": "3.2" }, { "text": "Train / Dev / Test: Natural Questions 58,880 / 8,757 / 3,610; TriviaQA 60,413 / 8,837 / 11,313; WebQuestions 2,474 / 361 / 2,032; CuratedTREC 1,125 / 133 / 694; SQuAD 70,096 / 8,886 / 10,570.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 149, "text": "Test Natural Questions 58,880 8,757 3,610 TriviaQA 60,413 8,837 11,313 WebQuestions 2,474 361 2,032 CuratedTREC 1,125 133 694 SQuAD", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Train", "sec_num": null }, { "text": "We follow the DPR paper (Karpukhin et al., 2020) to train and evaluate our dense retrievers. 
We reproduce their results on five benchmark datasets using Tevatron (Gao et al., 2022) , a toolkit for efficiently training dense retrievers with deep neural language models. Our reproduced results differ from their reported numbers by at most \u223c2%. We report the top-20 and top-100 accuracy for evaluation.", "cite_spans": [ { "start": 24, "end": 48, "text": "(Karpukhin et al., 2020)", "ref_id": "BIBREF16" }, { "start": 164, "end": 182, "text": "(Gao et al., 2022)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Datasets We train individual DPR models on five standard benchmark QA tasks, as shown in Tbl. 1: Natural Questions (NQ) (Kwiatkowski et al., 2019) , TriviaQA (Trivia) (Joshi et al., 2017) , WebQuestions (WQ) (Berant et al., 2013) , CuratedTREC (Curated) (Baudi\u0161 and \u0160ediv\u00fd, 2015), SQuAD-1.1 (SQuAD) (Rajpurkar et al., 2016) . We use the data provided in the DPR repository to reproduce their results. We evaluate the retriever models on the test sets of the aforementioned datasets. For retrieval, we chunk the Wikipedia collection (Guu et al., 2020) into passages of 100 words as in Wang et al. (2019) , which yields about 21 million passages in total. We follow Karpukhin et al. (2020) in using BM25 (Robertson and Zaragoza, 2009) to select the positive and negative passages as the initial training data for DPR.", "cite_spans": [ { "start": 120, "end": 146, "text": "(Kwiatkowski et al., 2019)", "ref_id": "BIBREF44" }, { "start": 167, "end": 187, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF15" }, { "start": 208, "end": 229, "text": "(Berant et al., 2013)", "ref_id": "BIBREF2" }, { "start": 299, "end": 323, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF40" }, { "start": 534, "end": 552, "text": "(Guu et al., 2020)", "ref_id": "BIBREF10" }, { "start": 586, "end": 604, "text": "Wang et al. 
(2019)", "ref_id": "BIBREF49" }, { "start": 665, "end": 714, "text": "Karpukhin et al. (2020) using BM25 (Robertson and", "ref_id": null }, { "start": 715, "end": 730, "text": "Zaragoza, 2009;", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Models and Training During training, each question is paired with 1 positive passage, 1 hard negative retrieved by BM25, and 2 \u00d7 (B \u2212 1) inbatch negatives where B is the batch size. We optimize the objective in Eq.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "(2) with a learning rate of 1e-05 using Adam (Kingma and Ba, 2015) for 40 epochs. The rest of the hyperparameters remain the same as described in Karpukhin et al. (2020) .", "cite_spans": [ { "start": 146, "end": 169, "text": "Karpukhin et al. (2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "This section aims to show the generalization ability of DPR's bi-encoder evaluated in tandem. Tbl. 2 shows the zero-shot retrieval accuracy of different DPR models and BM25 on five benchmark QA datasets. Each row represents one model's accuracy on five datasets and each column represents the accuracy of five different models on one dataset. Normally, the in-domain DPR model is expected to outperform the other DPR models trained using data from other domains, which is the situation we observe for most datasets, such as NQ, Trivia, and SQuAD. However, for Curated, the DPR trained on NQ and Trivia has better zero-shot retrieval accuracy than the in-domain one. We suspect it is because NQ and Trivia have much larger training data than Curated, as shown in Tbl. 1, which potentially covers some similar questions in Curated. 
Moreover, BM25 outperforms all DPR models on SQuAD as SQuAD mainly contains entity-centered questions, which favor term-matching algorithms. In addition, SQuAD is mainly a machine reading comprehension dataset, and therefore a passage can be used to answer multiple questions, which could cause potential conflicts in representation learning (Wu et al., 2021) .", "cite_spans": [ { "start": 1179, "end": 1196, "text": "(Wu et al., 2021)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Generalization of Tandem Encoders", "sec_num": "5.1" }, { "text": "In the following sections, we will perform encoder attribution analysis to examine each of DPR's encoders individually.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization of Tandem Encoders", "sec_num": "5.1" }, { "text": "This section aims to answer the question (1) \"What are the individual contributions of each encoder of DPR?\" from Section 1. To analyze the contributions of a single encoder on a specific QA dataset, we compare the marginalized top-20 retrieval accuracy of the encoder using in-domain encoder marginalization shown in Fig. 1 and Eq. (7). Fig. 2 shows the in-domain encoder marginalization results relative to the tandem DPR results. The blue bars show the question encoder's contributions, where we target the question encoder and marginalize over the passage encoders, and vice versa for the orange bars (passage encoder) on five datasets. We further divide those results by the in-domain DPR's top-20 accuracy, which is normalized to 100% (the horizontal line in Fig. 2) . We do not compare across different datasets, but rather compare the question encoder and the passage encoder for each domain.", "cite_spans": [], "ref_spans": [ { "start": 318, "end": 324, "text": "Fig. 1", "ref_id": "FIGREF0" }, { "start": 338, "end": 344, "text": "Fig. 2", "ref_id": "FIGREF2" }, { "start": 764, "end": 771, "text": "Fig. 
2)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "In-Domain Encoder Marginalization", "sec_num": "5.2" }, { "text": "We can see that in general, the passage encoder (orange bars) contributes more to the top-20 accuracy compared to the question encoder (blue bars) on all five datasets. Moreover, for the Curated dataset, marginalizing the out-of-domain question encoders even improves the marginalized accuracy of the passage encoder of Curated. Overall, we can see that the passage encoder plays a more vital role compared to the question encoder in terms of in-domain retrieval accuracy, which makes sense as the passage encoder needs to encode the entire corpus (in our case, 21M passages), while the question sets are much smaller.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "In-Domain Encoder Marginalization", "sec_num": "5.2" }, { "text": "In this section, our goal is to answer question (2), \"How to find the affecting factors for each encoder in different QA datasets?\" from Section 1. We will use the data efficiency test as an example and show how using encoder attribution in the data efficiency test can help us locate possible affecting factors in the dataset. Specifically, we will train DPR models with different amounts of training data. The reason we choose to change the size of the training data is that data sizes often have a large influence on a model's generalization ability, which could help reveal relevant affecting factors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affecting Factors for Encoders in QA Training Data", "sec_num": "5.3" }, { "text": "We train the DPR model with different amounts of data and test each encoder's in-domain marginalization accuracy with respect to the training data amount. 
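The sampling step of this data-efficiency test can be sketched as below. This is a minimal illustration under our own assumptions: the helper name is ours, the training data is held as a Python list of question-passage pairs, and the fractions are the ones used in our experiments; a fixed seed keeps the subsets reproducible.

```python
import random

def subsample(train_pairs, fractions=(0.10, 0.25, 0.40, 0.55, 0.70, 0.85), seed=0):
    """Uniformly sample each fraction of the training pairs without
    replacement (sizes rounded down); each subset is drawn
    independently from the full training data."""
    rng = random.Random(seed)
    return {f: rng.sample(train_pairs, int(len(train_pairs) * f)) for f in fractions}
```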
Since it is extremely resource-consuming to train different DPR models and encode the entire Wikipedia corpus into dense vectors, in this section, we mainly focus on NQ, Trivia, and SQuAD due to their relatively large dataset sizes. Fig. 3 shows the in-domain encoder marginalization results for both the question encoder and the passage encoder under a data efficiency setting, where we uniformly sample 10%, 25%, 40%, 55%, 70%, 85% of the training data of each dataset to train DPR. We use in-domain encoder marginalization to evaluate each encoder's accuracy with different amounts of data. Specifically, to provide a fair comparison, we use DPR's encoders trained with 100% data as the samples for all marginalization. For example, the question encoder trained with 10% of the data is paired with five passage encoders of DPR trained on five different domains with 100% data. This is to ensure that the comparison between different question encoders is not affected by different ways of marginalization. Figure 3 : In-domain encoder marginalization results under a data efficiency setting. We train DPR on NQ, Trivia, and SQuAD with different amounts of training data. The marginalized top-20/100 accuracy (%) for each encoder is normalized. Note that the y-axis is shared in each row. The horizontal line is the accuracy of an encoder trained with 100% data.", "cite_spans": [], "ref_spans": [ { "start": 388, "end": 394, "text": "Fig. 3", "ref_id": null }, { "start": 1152, "end": 1160, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "As we can see, the accuracy of the question encoder with respect to different training data amounts (left column in Fig. 3 ) on three datasets improves as the amount of training data increases. For the passage encoder (right column in Fig. 3 ), NQ's and Trivia's passage encoders behave similarly to the question encoder (blue and orange lines of the right column in Fig. 3) .
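The pairing-and-averaging procedure just described, a target encoder paired with a fixed set of fully trained partner encoders under a uniform prior, can be sketched as below. Here `top_k_accuracy` is a hypothetical callable (not part of our released code) that runs retrieval for a question/passage encoder pair and returns its top-k accuracy.

```python
def marginalized_accuracy(target_encoder, partner_encoders, top_k_accuracy):
    """Encoder marginalization under a uniform prior (a sketch of Eq. (7)):
    evaluate the target encoder in tandem with every partner encoder and
    average the resulting top-k retrieval accuracies."""
    scores = [top_k_accuracy(target_encoder, partner) for partner in partner_encoders]
    return sum(scores) / len(scores)
```

For a question encoder, the partners are the five passage encoders trained with 100% data (and vice versa), so encoders trained with different data fractions are compared under the same marginalization.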
However, the accuracy of SQuAD's passage encoder (green line of the right column in Fig. 3) shows non-monotonic behaviour with respect to training data size: the accuracy rises until 40% of the data and drops afterwards. This means that besides the training sample complexity, there are additional affecting factors that influence the accuracy of the passage encoder, which we further analyze below.", "cite_spans": [], "ref_spans": [ { "start": 116, "end": 122, "text": "Fig. 3", "ref_id": null }, { "start": 235, "end": 241, "text": "Fig. 3", "ref_id": null }, { "start": 350, "end": 357, "text": "Fig. 3)", "ref_id": null }, { "start": 444, "end": 451, "text": "Fig. 3)", "ref_id": null } ], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "Factor Analysis Based on the results in the previous section, we now propose two possible affecting factors in the training data for the question encoder and passage encoder: corpus coverage and positive passage overlap, defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "\u2022 Corpus coverage: Number of distinct positive passages in the training data (i.e., with different texts and titles in the Wikipedia corpus). \u2022 Positive passage overlap: Ratio between the number of positive passages that can answer more than two training questions and the total number of distinct positive passages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "In this paper, each question only has one positive passage.
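Both factors can be computed directly from the (question, positive passage) training pairs. A minimal sketch, with two assumptions of ours: passages are keyed by a hashable (title, text) identifier, and "overlap" is counted with a threshold of at least two questions per passage.

```python
from collections import Counter

def coverage_and_overlap(train_pairs):
    """Corpus coverage: number of distinct positive passages.
    Positive passage overlap: fraction of those distinct passages that
    answer multiple training questions (threshold >= 2 here is our
    reading; adjust if a different threshold is intended)."""
    counts = Counter(passage for _question, passage in train_pairs)
    coverage = len(counts)
    overlap = sum(1 for c in counts.values() if c >= 2) / coverage
    return coverage, overlap
```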
We further define an intermediate statistic called unique passage coverage:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "\u2022 Unique passage coverage: Corpus coverage \u00d7 (1 \u2212 positive passage overlap)^\u03b1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "where \u03b1 is an empirically chosen exponent that adjusts the weight between the coverage and the overlap. Although other statistics exist, we find that the statistics above reasonably reflect the features of each dataset, as well as their correlation with the cross-domain marginalization results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "Tbl. 3 shows the corpus coverage and positive passage overlap measures that we defined on three QA datasets, where we collect the aforementioned statistics for the training data of each dataset. We can see that despite having the most training data, SQuAD also has the largest positive passage overlap. Fig. 4 (right column) shows that the unique passage coverage of SQuAD (green line) also behaves similarly to the in-domain marginalization Table 4 : Top-20/100 (%) accuracy of passage encoders trained on all of SQuAD and 40% of SQuAD, paired with the question encoder trained on each domain and tested on each domain's test set. With only 40% of data, a better balance between the corpus coverage and positive passage overlap is achieved on SQuAD, and therefore these passage encoders are even better overall than the ones trained with 100% of SQuAD data.", "cite_spans": [], "ref_spans": [ { "start": 303, "end": 324, "text": "Fig. 
4 (right column)", "ref_id": "FIGREF4" }, { "start": 442, "end": 449, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "results of SQuAD's passage encoder (Fig. 3, right column) , which rises as the data amount increases and then drops after 40% of training data.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 58, "text": "(Fig. 3, right column)", "ref_id": null } ], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "To further verify the robustness of the passage encoder trained with only 40% of the training data of SQuAD, we test it on five QA test sets, pairing it with the in-domain question encoder trained with 100% data. Tbl. 4 shows the comparison between the passage encoders trained with full SQuAD and 40% of SQuAD, respectively. We can see that with only 40% of the training data, the passage encoders manage to achieve similar and in some cases even higher accuracy compared to the ones trained with all the data. Therefore, this analysis provides evidence that the unique passage coverage measure, which combines the corpus coverage and positive passage overlap of the training data, indeed strongly influences the passage encoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "In-Domain Data Efficiency Test", "sec_num": null }, { "text": "In the previous sections, we identified the importance of the passage encoder and its affecting factors such as positive passage overlap and corpus coverage of the training data. We find that our discoveries are consistent with the conclusions of some previous work. For example, Zhan et al. (2021, 2020a) and Sciavolino et al. 
(2021) all find that it is sufficient to achieve reasonable retrieval accuracy by just fine-tuning the question encoder with a fixed passage encoder, which demonstrates the importance of a robust passage encoder in domain adaptation and hard-negative mining.", "cite_spans": [ { "start": 283, "end": 300, "text": "Zhan et al. (2021", "ref_id": "BIBREF55" }, { "start": 301, "end": 322, "text": "Zhan et al. ( , 2020a", "ref_id": "BIBREF56" }, { "start": 325, "end": 349, "text": "Sciavolino et al. (2021)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Impact of Passage Encoders", "sec_num": "5.4" }, { "text": "However, how to learn such a robust passage encoder is challenging, as pre-training DPR on a single QA dataset will introduce biases. Multi-task dense retrieval (Maillard et al., 2021; Metzler et al., 2021) uses multiple experts learned in different domains to solve this problem. These solutions are effective but not efficient, as they build multiple indexes and perform a search for each expert, requiring considerable computation and storage space.", "cite_spans": [ { "start": 160, "end": 183, "text": "(Maillard et al., 2021;", "ref_id": "BIBREF34" }, { "start": 184, "end": 205, "text": "Metzler et al., 2021)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Impact of Passage Encoders", "sec_num": "5.4" }, { "text": "Another solution is to build a question-agnostic passage encoder so that the model is not biased towards particular QA tasks. DensePhrases (Lee et al., 2021a,b) pioneers this direction by building indexes using phrases instead of chunks of passages for multi-granularity retrieval. By breaking passages into finer-grained units, DensePhrases indeed improves the generalization of dense retrieval in different domains with query-side fine-tuning. However, similar to multi-task learning, this approach is not efficient, as the phrase index can be enormous for a corpus like Wikipedia. 
Although techniques such as product quantization (Gray and Neuhoff, 1998) can be applied to improve efficiency, this comes at the cost of effectiveness.", "cite_spans": [ { "start": 139, "end": 160, "text": "(Lee et al., 2021a,b)", "ref_id": null }, { "start": 620, "end": 644, "text": "(Gray and Neuhoff, 1998)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Impact of Passage Encoders", "sec_num": "5.4" }, { "text": "Overall, it is desirable to have a robust passage encoder for efficient dense retrieval according to previous work and our analysis, but challenges still remain in the effectiveness-efficiency trade-off.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impact of Passage Encoders", "sec_num": "5.4" }, { "text": "We propose an encoder attribution analysis of DPR using encoder marginalization to individually evaluate each of its encoders. We quantify the contribution of each encoder by marginalizing the other random variables under a probabilistic framework. We find that the passage encoder plays a more important role than the question encoder in terms of top-k retrieval accuracy. We also perform a case study under the data efficiency setting to demonstrate how to find possible affecting factors in the QA datasets for individual encoders. We identify that passage encoders are affected by positive passage overlap and corpus coverage of the training data, while question encoders are sensitive to the training sample complexity. 
Our framework is also very general and can be applied to other methods based on bi-encoders for encoder attribution analysis, but one needs to pay attention to the choice of the encoder prior distribution to ensure the marginalization is appropriate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "This property requires the sifted function g(r) (in this case, g(r) = r) to be Lipschitz continuous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/texttron/tevatron 3 https://github.com/facebookresearch/DPR", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada. Computational resources were provided by Compute Canada.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "2021. An information-theoretic perspective on credit assignment in reinforcement learning", "authors": [ { "first": "Dilip", "middle": [], "last": "Arumugam", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Pierre-Luc", "middle": [], "last": "Ba", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.06224" ] }, "num": null, "urls": [], "raw_text": "Dilip Arumugam, Peter Henderson, and Pierre-Luc Ba- con. 2021. An information-theoretic perspective on credit assignment in reinforcement learning. 
arXiv preprint arXiv:2103.06224.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Modeling of the question answering task in the YodaQA system", "authors": [ { "first": "Petr", "middle": [], "last": "Baudi\u0161", "suffix": "" } ], "year": 2015, "venue": "International Conference of the Cross-Language Evaluation Forum for European Languages", "volume": "", "issue": "", "pages": "222--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petr Baudi\u0161 and Jan \u0160ediv\u1ef3. 2015. Modeling of the question answering task in the YodaQA system. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 222-228.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semantic parsing on Freebase from question-answer pairs", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1533--1544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1533-1544, Seattle, Wash- ington, USA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Performance attribution of us institutional investors", "authors": [ { "first": "Murat", "middle": [], "last": "Binay", "suffix": "" } ], "year": 2005, "venue": "Financial Management", "volume": "34", "issue": "2", "pages": "127--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Murat Binay. 2005. Performance attribution of us institutional investors. 
Financial Management, 34(2):127-152.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Reading Wikipedia to answer opendomain questions", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1870--1879", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870- 1879, Vancouver, Canada.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised corpus aware language model pre-training for dense passage retrieval", "authors": [ { "first": "Luyu", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Callan", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2108.05540" ] }, "num": null, "urls": [], "raw_text": "Luyu Gao and Jamie Callan. 2021. Unsupervised cor- pus aware language model pre-training for dense passage retrieval. arXiv preprint arXiv:2108.05540.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Tevatron: An efficient and flexible toolkit for dense retrieval", "authors": [ { "first": "Luyu", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xueguang", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Callan", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2203.05765" ] }, "num": null, "urls": [], "raw_text": "Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2022. Tevatron: An efficient and flexible toolkit for dense retrieval. arXiv:2203.05765.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Deep Learning. Adaptive computation and machine learning", "authors": [ { "first": "Ian", "middle": [ "J" ], "last": "Goodfellow", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [ "C" ], "last": "Courville", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian J. 
Goodfellow, Yoshua Bengio, and Aaron C. Courville. 2016. Deep Learning. Adaptive compu- tation and machine learning. MIT Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Quantization", "authors": [ { "first": "M", "middle": [], "last": "Robert", "suffix": "" }, { "first": "David", "middle": [ "L" ], "last": "Gray", "suffix": "" }, { "first": "", "middle": [], "last": "Neuhoff", "suffix": "" } ], "year": 1998, "venue": "IEEE Transactions on Information Theory", "volume": "44", "issue": "6", "pages": "2325--2383", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert M. Gray and David L. Neuhoff. 1998. Quanti- zation. IEEE Transactions on Information Theory, 44(6):2325-2383.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Realm: Retrievalaugmented language model pre-training", "authors": [ { "first": "Kelvin", "middle": [], "last": "Guu", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Z", "middle": [], "last": "Tung", "suffix": "" }, { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.08909" ] }, "num": null, "urls": [], "raw_text": "Kelvin Guu, Kenton Lee, Z. Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: Retrieval- augmented language model pre-training. 
arXiv preprint arXiv:2002.08909.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Satinder Singh, Doina Precup, and R\u00e9mi Munos", "authors": [ { "first": "Anna", "middle": [], "last": "Harutyunyan", "suffix": "" }, { "first": "Will", "middle": [], "last": "Dabney", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mesnard", "suffix": "" }, { "first": "Mohammad", "middle": [ "Gheshlaghi" ], "last": "Azar", "suffix": "" }, { "first": "Bilal", "middle": [], "last": "Piot", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Heess", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Hado Van Hasselt", "suffix": "" }, { "first": "", "middle": [], "last": "Wayne", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "12467--12476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Harutyunyan, Will Dabney, Thomas Mesnard, Mohammad Gheshlaghi Azar, Bilal Piot, Nicolas Heess, Hado van Hasselt, Gregory Wayne, Satin- der Singh, Doina Precup, and R\u00e9mi Munos. 2019. Hindsight credit assignment. 
In Advances in Neural Information Processing Systems 32, pages 12467- 12476, Vancouver, BC, Canada.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A memory efficient baseline for open domain question answering", "authors": [ { "first": "Gautier", "middle": [], "last": "Izacard", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Hosseini", "suffix": "" }, { "first": "Nicola", "middle": [ "De" ], "last": "Cao", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.15156" ] }, "num": null, "urls": [], "raw_text": "Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Sebastian Riedel, and Edouard Grave. 2020. A memory efficient baseline for open domain question answering. arXiv preprint arXiv:2012.15156.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "How does BERT rerank passages? An attribution analysis with information bottlenecks", "authors": [ { "first": "Zhiying", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Xin", "suffix": "" }, { "first": "Jimmy", "middle": [ "Lin" ], "last": "", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "496--509", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiying Jiang, Raphael Tang, Ji Xin, and Jimmy Lin. 2021. How does BERT rerank passages? An at- tribution analysis with information bottlenecks. 
In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 496-509, Punta Cana, Dominican Re- public.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Billion-scale similarity search with GPUs", "authors": [ { "first": "Jeff", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Matthijs", "middle": [], "last": "Douze", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2021, "venue": "IEEE Transactions on Big Data", "volume": "7", "issue": "3", "pages": "535--547", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2021. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535-547.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1601--1611", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale dis- tantly supervised challenge dataset for reading com- prehension. 
In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancou- ver, Canada.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Dense passage retrieval for open-domain question answering", "authors": [ { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6769--6781", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 6769- 6781, Online.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "ColBERT: Efficient and effective passage search via contextualized late interaction over BERT", "authors": [ { "first": "Omar", "middle": [], "last": "Khattab", "suffix": "" }, { "first": "Matei", "middle": [], "last": "Zaharia", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2020)", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omar Khattab and Matei Zaharia. 2020. 
ColBERT: Ef- ficient and effective passage search via contextual- ized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2020), pages 39-48.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Metric learning: A survey. Foundations and Trends in Machine Learning", "authors": [ { "first": "Brian", "middle": [], "last": "Kulis", "suffix": "" } ], "year": 2012, "venue": "", "volume": "5", "issue": "", "pages": "287--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Kulis. 2012. Metric learning: A survey. 
Foun- dations and Trends in Machine Learning, 5(4):287- 364.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Natural Questions: A benchmark for question answering research", "authors": [ { "first": "Matthew", "middle": [], "last": "Kelcey", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Dai", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "452--466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A benchmark for question an- swering research. Transactions of the Association for Computational Linguistics, 7:452-466.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning dense representations of phrases at scale", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Mujeen", "middle": [], "last": "Sung", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "6634--6647", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021a. Learning dense representations of phrases at scale. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6634-6647, Online.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Phrase retrieval learns passage retrieval, too", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Wettig", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3661--3672", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Alexander Wettig, and Danqi Chen. 2021b. Phrase retrieval learns passage retrieval, too. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3661-3672, Online and Punta Cana, Dominican Republic.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Latent retrieval for weakly supervised open domain question answering", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6086--6096", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096, Florence, Italy.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Question and answer test-train overlap in open-domain question answering datasets", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "1000--1008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000-1008, Online.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Multi-task dense retrieval via model uncertainty fusion for open-domain question answering", "authors": [ { "first": "Minghan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2021", "volume": "", "issue": "", "pages": "274--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minghan Li, Ming Li, Kun Xiong, and Jimmy Lin. 2021. Multi-task dense retrieval via model uncertainty fusion for open-domain question answering.
In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 274-287, Punta Cana, Dominican Republic.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations", "authors": [ { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xueguang", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Sheng-Chieh", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jheng-Hong", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ronak", "middle": [], "last": "Pradeep", "suffix": "" }, { "first": "Rodrigo", "middle": [], "last": "Nogueira", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)", "volume": "", "issue": "", "pages": "2356--2362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations.
In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 2356-2362.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Challenges in generalization in open domain question answering", "authors": [ { "first": "Linqing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2109.01156" ] }, "num": null, "urls": [], "raw_text": "Linqing Liu, Patrick Lewis, Sebastian Riedel, and Pontus Stenetorp. 2021. Challenges in generalization in open domain question answering. arXiv preprint arXiv:2109.01156.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "RoBERTa: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov.
2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Less is more: Pretraining a strong siamese encoder using a weak decoder", "authors": [ { "first": "Shuqi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Chenyan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Di", "middle": [], "last": "He", "suffix": "" }, { "first": "Guolin", "middle": [], "last": "Ke", "suffix": "" }, { "first": "Waleed", "middle": [], "last": "Malik", "suffix": "" }, { "first": "Zhicheng", "middle": [], "last": "Dou", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Arnold", "middle": [], "last": "Overwijk", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2102.09206" ] }, "num": null, "urls": [], "raw_text": "Shuqi Lu, Chenyan Xiong, Di He, Guolin Ke, Waleed Malik, Zhicheng Dou, Paul Bennett, Tie-Yan Liu, and Arnold Overwijk. 2021. Less is more: Pretraining a strong siamese encoder using a weak decoder. arXiv preprint arXiv:2102.09206.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Sparse, dense, and attentional representations for text retrieval", "authors": [ { "first": "Yi", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2021, "venue": "Trans. Assoc. Comput. Linguistics", "volume": "9", "issue": "", "pages": "329--345", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, dense, and attentional representations for text retrieval. Trans. Assoc. Comput.
Linguistics, 9:329-345.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Simple and effective unsupervised redundancy elimination to compress dense vectors for passage retrieval", "authors": [ { "first": "Xueguang", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Minghan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Xin", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2854--2859", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xueguang Ma, Minghan Li, Kai Sun, Ji Xin, and Jimmy Lin. 2021. Simple and effective unsupervised redundancy elimination to compress dense vectors for passage retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2854-2859, Online and Punta Cana, Dominican Republic.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Appendix C: The Dirac delta function. Fundamental Principles of Optical Lithography", "authors": [ { "first": "Chris", "middle": [], "last": "Mack", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "495--500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Mack. 2008. Appendix C: The Dirac delta function.
Fundamental Principles of Optical Lithography, pages 495-500.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Multi-task retrieval for knowledge-intensive tasks", "authors": [ { "first": "Jean", "middle": [], "last": "Maillard", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Gargi", "middle": [], "last": "Ghosh", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1098--1111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Maillard, Vladimir Karpukhin, Fabio Petroni, Wen-tau Yih, Barlas Oguz, Veselin Stoyanov, and Gargi Ghosh. 2021. Multi-task retrieval for knowledge-intensive tasks.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1098-1111, Online.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Generation-augmented retrieval for open-domain question answering", "authors": [ { "first": "Yuning", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yelong", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "4089--4100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021. Generation-augmented retrieval for open-domain question answering.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4089-4100, Online.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Rethinking search: Making domain experts out of dilettantes", "authors": [ { "first": "Donald", "middle": [], "last": "Metzler", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Dara", "middle": [], "last": "Bahri", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Najork", "suffix": "" } ], "year": 2021, "venue": "SIGIR Forum", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Donald Metzler, Yi Tay, Dara Bahri, and Marc Najork. 2021. Rethinking search: Making domain experts out of dilettantes. SIGIR Forum, 55(1).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A discrete hard EM approach for weakly supervised question answering", "authors": [ { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2851--2864", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sewon Min, Danqi Chen, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. A discrete hard EM approach for weakly supervised question answering.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2851-2864, Hong Kong, China.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Steps toward artificial intelligence", "authors": [ { "first": "Marvin", "middle": [], "last": "Minsky", "suffix": "" } ], "year": 1961, "venue": "Proceedings of the IRE", "volume": "49", "issue": "", "pages": "8--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marvin Minsky. 1961. Steps toward artificial intelligence. Proceedings of the IRE, 49(1):8-30.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Did the model understand the question?", "authors": [ { "first": "Ankur", "middle": [], "last": "Pramod Kaushik Mudrakarta", "suffix": "" }, { "first": "Mukund", "middle": [], "last": "Taly", "suffix": "" }, { "first": "Kedar", "middle": [], "last": "Sundararajan", "suffix": "" }, { "first": "", "middle": [], "last": "Dhamdhere", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1896--1906", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 2018. Did the model understand the question?
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1896-1906, Melbourne, Australia.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "The probabilistic relevance framework: BM25 and beyond", "authors": [ { "first": "E", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "", "middle": [], "last": "Zaragoza", "suffix": "" } ], "year": 2009, "venue": "Foundations and Trends in Information Retrieval", "volume": "3", "issue": "4", "pages": "333--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond.
Foundations and Trends in Information Retrieval, 3(4):333-389.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Restricting the flow: Information bottlenecks for attribution", "authors": [ { "first": "Karl", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Sixt", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Tombari", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Landgraf", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Schulz, Leon Sixt, Federico Tombari, and Tim Landgraf. 2020. Restricting the flow: Information bottlenecks for attribution. In 8th International Conference on Learning Representations, ICLR 2020.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Simple entity-centric questions challenge dense retrievers", "authors": [ { "first": "Christopher", "middle": [], "last": "Sciavolino", "suffix": "" }, { "first": "Zexuan", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2109.08535" ] }, "num": null, "urls": [], "raw_text": "Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, and Danqi Chen. 2021. Simple entity-centric questions challenge dense retrievers.
arXiv preprint arXiv:2109.08535.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Real-time open-domain question answering with dense-sparse phrase index", "authors": [ { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4430--4441", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4430-4441, Florence, Italy.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Reinforcement learning: An introduction", "authors": [ { "first": "Richard", "middle": [ "S" ], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [ "G" ], "last": "Barto", "suffix": "" } ], "year": 1998, "venue": "IEEE Trans. Neural Networks", "volume": "9", "issue": "5", "pages": "1054--1054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement learning: An introduction. IEEE Trans.
Neural Networks, 9(5):1054-1054.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Temporal credit assignment in reinforcement learning", "authors": [ { "first": "Richard", "middle": [], "last": "Stuart", "suffix": "" }, { "first": "Sutton", "middle": [], "last": "", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Stuart Sutton. 1984. Temporal credit assignment in reinforcement learning. Ph.D. thesis, University of Massachusetts Amherst.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models", "authors": [ { "first": "Nandan", "middle": [], "last": "Thakur", "suffix": "" }, { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "R\u00fcckl\u00e9", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.08663" ] }, "num": null, "urls": [], "raw_text": "Nandan Thakur, Nils Reimers, Andreas R\u00fcckl\u00e9, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "The TREC-8 question answering track", "authors": [ { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" }, { "first": "Dawn", "middle": [ "M" ], "last": "Tice", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen M. Voorhees and Dawn M. Tice. 2000. The TREC-8 question answering track.
In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00), Athens, Greece.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Multi-passage BERT: A globally normalized BERT model for open-domain question answering", "authors": [ { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Xiaofei", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5878--5882", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5878-5882, Hong Kong, China.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Representation decoupling for open-domain passage retrieval", "authors": [ { "first": "Bohong", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhuosheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jinyuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2110.07524" ] }, "num": null, "urls": [], "raw_text": "Bohong Wu, Zhuosheng Zhang, Jinyuan Wang, and Hai Zhao. 2021. Representation decoupling for open-domain passage retrieval.
arXiv preprint arXiv:2110.07524.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Approximate nearest neighbor negative contrastive learning for dense text retrieval", "authors": [ { "first": "Arnold", "middle": [], "last": "Overwijk", "suffix": "" } ], "year": 2021, "venue": "9th International Conference on Learning Representations", "volume": "2021", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In 9th International Conference on Learning Representations, ICLR 2021.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Efficient passage retrieval with hashing for open-domain question answering", "authors": [ { "first": "Ikuya", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Akari", "middle": [], "last": "Asai", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "979--986", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 979-986, Online.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "End-to-end open-domain question answering with BERTserini", "authors": [ { "first": "Wei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yuqing", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Aileen", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xingyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Luchen", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", "volume": "", "issue": "", "pages": "72--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 72-77, Minneapolis, Minnesota.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Optimizing dense retrieval model training with hard negatives", "authors": [ { "first": "Jingtao", "middle": [], "last": "Zhan", "suffix": "" }, { "first": "Jiaxin", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Yiqun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoping", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)", "volume": "", "issue": "", "pages": "1503--1512", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. 2021. Optimizing dense retrieval model training with hard negatives. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 1503-1512.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Learning to retrieve: How to train a dense retrieval model effectively and efficiently", "authors": [ { "first": "Jingtao", "middle": [], "last": "Zhan", "suffix": "" }, { "first": "Jiaxin", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Yiqun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoping", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.10469" ] }, "num": null, "urls": [], "raw_text": "Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma.
2020a. Learning to retrieve: How to train a dense retrieval model effectively and efficiently. arXiv preprint arXiv:2010.10469.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "RepBERT: Contextualized text embeddings for first-stage retrieval", "authors": [ { "first": "Jingtao", "middle": [], "last": "Zhan", "suffix": "" }, { "first": "Jiaxin", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Yiqun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoping", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.15498" ] }, "num": null, "urls": [], "raw_text": "Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma. 2020b. RepBERT: Contextualized text embeddings for first-stage retrieval. arXiv preprint arXiv:2006.15498.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Mr. TyDi: A multi-lingual benchmark for dense retrieval", "authors": [ { "first": "Xinyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xueguang", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 1st Workshop on Multilingual Representation Learning", "volume": "", "issue": "", "pages": "127--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval.
In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 127-137, Punta Cana, Dominican Republic.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Encoder marginalization.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "In-domain marginalized top-20 accuracy (%) of each encoder relative to the in-domain DPR for each dataset using Eq. (7). Each in-domain DPR's top-20 accuracy is normalized to 100%.", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "Dataset statistics for different amounts of data. Left: Normalized corpus coverage. Right: Normalized unique passage coverage. Note that the y-axis is shared in both plots.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "html": null, "type_str": "table", "text": "The number of questions in each QA dataset fromKarpukhin et al. (2020). The \"Train\" column denotes the number of questions after filtering.", "content": "", "num": null }, "TABREF1": { "html": null, "type_str": "table", "text": "/78.3 62.4/75.5 76.4/83.2 80.7/89.9 71.1/81.8 70.7/81.7 DPR-NQ 79.8/86.9 73.2/81.7 68.8/79.3 86.7/92.7 54.5/70.2 72.6/82.2 DPR-Trivia 66.4/78.9 80.2/85.5 71.4/81.7 87.3/93.9 53.0/69.2 71.7/81.8", "content": "
Encoder \ Test set | NQ | Trivia | WQ | Curated | SQuAD | Average
BM25 62.9
DPR-WQ | 54.9/70.0 | 66.5/78.9 | 76.0/82.9 | 82.9/90.8 | 49.3/66.2 | 65.9/77.8
DPR-Curated | 68.5/72.7 | 66.5/77.7 | 65.5/77.5 | 84.0/90.7 | 51.3/67.5 | 67.2/77.2
DPR-SQuAD | 56.6/72.3 | 71.0/81.7 | 64.3/77.0 | 83.3/92.4 | 61.1/76.0 | 67.3/80.0
", "num": null }, "TABREF2": { "html": null, "type_str": "table", "text": "Zero-shot evaluation of DPR's bi-encoder in tandem. Top-20/Top-100 retrieval accuracy (%) on five benchmark QA test sets is reported. Each score represents the percentage of questions that have at least one correct answer in the top-20/100 retrieved passages.", "content": "", "num": null }, "TABREF4": { "html": null, "type_str": "table", "text": "Corpus coverage and positive passage overlap, as well as the unique passage coverage, which equals corpus coverage \u00d7 (1 \u2212 positive passage overlap) 1.3 for each dataset.", "content": "
", "num": null }, "TABREF5": { "html": null, "type_str": "table", "text": "/77.1 73.5/82.4 65.2/76.7 79.5/90.6 61.1/76.0 68.5/80.5 SQuAD-40% 62.8/76.4 72.8/82.3 65.9/77.4 81.3/91.1 62.3/76.8 69.2/80.8", "content": "
P-encoder | NQ | Trivia | WQ | Curated | SQuAD | Average
SQuAD-100% 63.3
", "num": null } } } }