|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:44:02.791717Z" |
|
}, |
|
"title": "Flexible retrieval with NMSLIB and FlexNeuART", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Boytsov", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Nyberg", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Our objective is to introduce to the NLP community an existing k-NN search library NMSLIB, a new retrieval toolkit FlexNeuART, as well as their integration capabilities. NMSLIB, while being one the fastest k-NN search libraries, is quite generic and supports a variety of distance/similarity functions. Because the library relies on the distance-based structure-agnostic algorithms, it can be further extended by adding new distances. FlexNeuART is a modular, extendible and flexible toolkit for candidate generation in IR and QA applications, which supports mixing of classic and neural ranking signals. FlexNeuART can efficiently retrieve mixed dense and sparse representations (with weights learned from training data), which is achieved by extending NMSLIB. In that, other retrieval systems work with purely sparse representations (e.g., Lucene), purely dense representations (e.g., FAISS and Annoy), or only perform mixing at the re-ranking stage.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Our objective is to introduce to the NLP community an existing k-NN search library NMSLIB, a new retrieval toolkit FlexNeuART, as well as their integration capabilities. NMSLIB, while being one the fastest k-NN search libraries, is quite generic and supports a variety of distance/similarity functions. Because the library relies on the distance-based structure-agnostic algorithms, it can be further extended by adding new distances. FlexNeuART is a modular, extendible and flexible toolkit for candidate generation in IR and QA applications, which supports mixing of classic and neural ranking signals. FlexNeuART can efficiently retrieve mixed dense and sparse representations (with weights learned from training data), which is achieved by extending NMSLIB. In that, other retrieval systems work with purely sparse representations (e.g., Lucene), purely dense representations (e.g., FAISS and Annoy), or only perform mixing at the re-ranking stage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Although there has been substantial progress on machine reading tasks using neural models such as BERT (Devlin et al., 2018) , these approaches have practical limitations for open-domain challenges, which typically require (1) a retrieval and (2) a re-scoring/re-ranking step to restrict the number of candidate documents. Otherwise, the application of state-of-the-art machine reading models to large document * Work done primarily while at CMU. collections would be impractical even with recent efficiency improvements (Khattab and Zaharia, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 124, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 521, |
|
"end": 548, |
|
"text": "(Khattab and Zaharia, 2020)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The first retrieval stage is commonly referred to as the candidate generation (i.e., we generate candidates for re-scoring). Until about 2019, the candidate generation would exclusively rely on a traditional search engine such as Lucene, 1 which indexes occurrences of individual terms, their lemmas or stems (Manning et al., 2010) . In that, there are several recent papers where promising results were achieved by generating dense embeddings and using a k-NN search library to retrieve them Karpukhin et al., 2020; Xiong et al., 2020) . However, these studies typically have at least one of the following flaws: (1) they compare against a weak baseline such as untuned BM25 or (2) they rely on exact k-NN search, thus, totally ignoring practical efficiencyeffectiveness and scalability trade-offs related to using k-NN search, see, e.g., \u00a73.3 in Boytsov (2018) . FlexNeuART implements some of the most effective non-neural ranking signals: It produced best non-neural runs in the TREC 2019 deep learning challenge (Craswell et al., 2020) and would be a good tool to verify these results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 331, |
|
"text": "(Manning et al., 2010)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 516, |
|
"text": "Karpukhin et al., 2020;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 536, |
|
"text": "Xiong et al., 2020)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 840, |
|
"end": 844, |
|
"text": "\u00a73.3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 862, |
|
"text": "Boytsov (2018)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1016, |
|
"end": 1039, |
|
"text": "(Craswell et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Furthermore, there is evidence that when dense representations perform well, even better results may be obtained by combining them with traditional sparse-vector models (Seo et al., 2019; Gysel et al., 2018; Karpukhin et al., 2020; Kuzi et al., 2020) . It is not straightforward to incorporate these representations into existing toolkits, but FlexNeuART supports dense and densesparse representations out of the box with the help of NMSLIB (Boytsov and Naidan, 2013a; Naidan et al., 2015a) . 2 NMSLIB is an efficient library for k-NN search on CPU, which supports a wide range of similarity functions and data formats. NMSLIB is a commonly used library 3 , which was recently adopted by Amazon. 4 Because NMSLIB algorithms are largely distance-agnostic, it is relatively easy to extend the library by adding new distances. In what follows we describe NMSLIB, FlexNeuART, and their integration in more detail. The code is publicly available:", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 187, |
|
"text": "(Seo et al., 2019;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 207, |
|
"text": "Gysel et al., 2018;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 231, |
|
"text": "Karpukhin et al., 2020;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 250, |
|
"text": "Kuzi et al., 2020)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 441, |
|
"end": 468, |
|
"text": "(Boytsov and Naidan, 2013a;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 469, |
|
"end": 490, |
|
"text": "Naidan et al., 2015a)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 494, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 696, |
|
"end": 697, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 https://github.com/oaqa/FlexNeuART", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 https://github.com/nmslib/nmslib", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Non-Metric Space Library (NMSLIB) is an efficient cross-platform similarity search library and a toolkit for evaluation of similarity search methods (Boytsov and Naidan, 2013a; Naidan et al., 2015a) , which is the first commonly used library with a principled support for non-metric space searching. 5 NMSLIB is an extendible library, which means that is possible to add new search methods and distance functions. NMSLIB can be used directly in C++ and Python (via Python bindings). In addition, it is also possible to build a query server, which can be used from Java (or other languages supported by Apache Thrift 6 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 176, |
|
"text": "(Boytsov and Naidan, 2013a;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 177, |
|
"end": 198, |
|
"text": "Naidan et al., 2015a)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 301, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "k-NN search is a conceptually simple procedure that consists in finding k data set elements that have highest similarity scores (or, alternatively, smallest distances) to another element called query. Despite its formulaic simplicity, k-NN search is a notoriously difficult problem, which is hard to do efficiently, i.e., faster than the brute-force scan of the data set, for high dimensional data and/or non-Euclidean distances. In particular, for some data sets exact search methods do not outperform the brute-force search in just a dozen of dimensions (see, e.g., a discussion in \u00a7 1 and \u00a7 2 of Boytsov 2018).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
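The definition above can be illustrated with a tiny brute-force scan (a minimal pure-Python sketch, not NMSLIB code; the function name is illustrative):

```python
import heapq
import math

def knn_brute_force(dataset, query, k, dist=None):
    """Return the k points of `dataset` closest to `query` (Euclidean by default).

    This is the exact baseline that approximate k-NN methods try to beat:
    it compares the query against every point, so its cost grows linearly
    with the data set size.
    """
    if dist is None:
        dist = math.dist  # Euclidean (L2) distance, Python 3.8+
    # heapq.nsmallest keeps only the best k candidates while scanning the set.
    return heapq.nsmallest(k, dataset, key=lambda p: dist(p, query))

points = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (0.5, 0.5)]
print(knn_brute_force(points, (0.0, 0.1), k=2))
```

Replacing `dist` with a non-Euclidean (or non-symmetric) function is exactly the kind of flexibility that makes efficient indexing hard.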
|
{ |
|
"text": "For sufficiently small data sets and simple similarities, e.g., L 2 , the brute-force search can be a feasible solution, especially when the data set fits into a memory of an AI accelerator. In particular, the Facebook library for k-NN search FAISS (Johnson et al., 2017) supports the brute-force search on GPU 7 . However, GPU memory is quite limited compared to the main RAM. For example, the latest A100 GPU has only 40 GB of memory 8 while some commodity servers have 1+ TB of main RAM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 271, |
|
"text": "(Johnson et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In addition, GPUs are designed primarily for dense-vector manipulations and have poor support for sparse vectors (Hong et al., 2018) . When data is very sparse, as in the case of traditional text indices, it is possible to efficiently retrieve data using search toolkits such as Lucene. Yet, for less sparse sets, more complex similarities, and large dense-vector data sets we have to resort to approximate k-NN search, which does not have accuracy guarantees.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 132, |
|
"text": "(Hong et al., 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "One particular efficient class of k-NN search methods relies on the construction of neighborhood graphs for data set points (see a recent survey by Shimomura et al. (2020) for a thorough description). Despite initial promising results were published nearly 30 years ago (Arya and Mount, 1993) , this approach has only recently become popular due to good performance of NMSLIB and KGraph (Dong et al., 2011) 9 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 292, |
|
"text": "(Arya and Mount, 1993)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Specifically, two successive ANN-Benchmarks challenges (Aum\u00fcller et al., 2019) were won first by our efficient implementation of the Navigable Small World (NSW) (Malkov et al., 2014) and then by the Hierarchical Navigable Small World (HNSW) contributed to NMSLIB by Yury Malkov (Malkov and Yashunin, 2018) . HNSW performance was particularly impressive. Unlike many other libraries for k-NN search, NMSLIB focuses on retrieval for generic similarities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 78, |
|
"text": "(Aum\u00fcller et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 182, |
|
"text": "(Malkov et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 305, |
|
"text": "Yury Malkov (Malkov and Yashunin, 2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The generality is achieved by relying largely on distancebased methods: NSW (Malkov et al., 2014) , HNSW (Malkov and Yashunin, 2018) , NAPP (Tellez et al., 2013; Boytsov et al., 2016) , and an extension of the VP-tree (Boytsov and Naidan, 2013b; Boytsov and Nyberg, 2019b) . Distance-based methods can only use values of the mutual data point distances, but cannot exploit the structure of the data, e.g., they have no direct access to vector elements or string characters. In addition, NMSLIB has a simple (no compression) implementation of a traditional inverted file, which can be used to carry out an exact maximum-inner product search on sparse vectors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 97, |
|
"text": "(Malkov et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 132, |
|
"text": "(Malkov and Yashunin, 2018)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 161, |
|
"text": "(Tellez et al., 2013;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 183, |
|
"text": "Boytsov et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 245, |
|
"text": "(Boytsov and Naidan, 2013b;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 272, |
|
"text": "Boytsov and Nyberg, 2019b)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Graph-based retrieval algorithms have been shown to work efficiently for a variety of non-metric and non-symmetric distances (Boytsov and Nyberg, 2019a; Boytsov, 2018; Naidan et al., 2015b) . This flexibility permits adding new distances/similarities with little effort (as we do not have to change the retrieval algorithms). However, this needs to be done in C++, which is one limitation. It is desirable to have an API where C++ code could call Python-implemented distances. NMSLIB supports only in-memory indices and with a single exception all indices are static, which is another (current) limitation of the library.", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 152, |
|
"text": "(Boytsov and Nyberg, 2019a;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 167, |
|
"text": "Boytsov, 2018;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 189, |
|
"text": "Naidan et al., 2015b)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There is a number of data format and distances-a combination which we call a space-supported by NMSLIB. A detailed description can be found online 10 . Most importantly, the library supports L p distances with the norm x p = i\u2208I |x i | p 1/p , the cosine similarity, and the inner product similarity. For all of these, the data can be both fixedsize \"dense\" and variable-size \"sparse\" vectors. Sparse vectors can have an unlimited number of non-zero elements and their processing is less efficient compared to dense vectors. On Intel CPUs the processing is 10 https://github.com/nmslib/nmslib/blob/ master/manual/spaces.md speed up using special SIMD operations. In addition, NMSLIB supports the Jaccard similarity, the Levenshtein distance (for ASCII strings), and the number of (more exotic) divergences (including the KL-divergence).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
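As a minimal sketch (plain Python, not NMSLIB's SIMD-optimized code), the L_p norm above and the corresponding operations for sparse vectors stored as dicts could look like:

```python
def lp_norm(x, p):
    """||x||_p = (sum_i |x_i|^p)^(1/p); x is a sparse vector: {index: value}."""
    return sum(abs(v) ** p for v in x.values()) ** (1.0 / p)

def lp_distance(x, y, p):
    """L_p distance between two sparse vectors given as {index: value} dicts."""
    keys = set(x) | set(y)
    return sum(abs(x.get(i, 0.0) - y.get(i, 0.0)) ** p for i in keys) ** (1.0 / p)

def inner_product(x, y):
    """Sparse inner product: only indices present in both vectors contribute."""
    small, big = (x, y) if len(x) <= len(y) else (y, x)
    return sum(v * big.get(i, 0.0) for i, v in small.items())
```

The dict representation makes the sparse-vector inefficiency visible: unlike a dense array, every access involves hashing, which is one reason dense vectors are processed faster.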
|
{ |
|
"text": "The library has substantial documentation and additional information can be found online 11 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMSLIB", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Flexible classic and NeurAl Retrieval Toolkit, or shortly FlexNeuART (intended pronunciation flex-noo-art) is a modular text retrieval toolkit, which incorporates some of the best classic, i.e., traditional, information retrieval (IR) signals and provides capabilities for integration with recent neural models. This toolkit supports all key stages of the retrieval pipeline, including indexing, generation of training data, training the models, candidate generation, and re-ranking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "FlexNeuART has been under active development for several years and has been used for our own projects, in particular, to investigate applicability of k-NN search for text retrieval (Boytsov et al., 2016) . It was also used in recent TREC evaluations (Craswell et al., 2020) as well as to produce strong runs on the MS MARCO document leaderboard. 12 The toolkit is geared towards TREC evaluations: For broader acceptance we would clearly need to implement Python bindings and experimentation code at the Python level.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 203, |
|
"text": "(Boytsov et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 273, |
|
"text": "(Craswell et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 348, |
|
"text": "12", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "FlexNeuART was created to fulfill the following needs:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Shallow integration with Lucene and state-of-the-art toolkits for k-NN search (i.e., the candidate generation component should be easy to change);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Efficient retrieval and efficient reranking with basic relevance signals;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 An out-of-the-box support for multi-field document ranking;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 An ease of implementation and/or use of most traditional ranking signals;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 An out-of-the-box support for learningto-rank (LETOR) and basic experimentation;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 A support for mixed dense-sparse retrieval and/or re-ranking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Packages most similar to ours in retrieval and LETOR capabilities are Anserini (Yang et al., 2018) , Terrier (Ounis et al., 2006) , and OpenNIR (MacAvaney, 2020). Anserini and Terrier are Java packages, which were recently enhanced with Python bindings through Pyserini 13 and PyTerrier (Macdonald and Tonellotto, 2020). OpenNIR implements re-ranking code on top of Anserini. These packages are tightly integrated with specific retrieval toolkits, which makes implementation of re-ranking components difficult, as these components need to access retrieval engine internals-which are frequently undocumented-to retrieve stored documents, term statistics, etc. Replacing the core retrieval component becomes problematic as well. In contrast, our system decouples retrieval and re-ranking modules by keeping an independent forward index, which enables plugable LETOR and IR modules. In addition to this, OpenNIR and Pyserini do not provide API for fusion of relevance signals and none of the toolkits incorporates a lexical translation model (Berger et al., 2000) , which can substantially boost accuracy for QA. 13 https://github.com/castorini/pyserini 1 { 2 \"DOCNO\" : \"0\" , 3 \" text \" : \" n f l team represent super bowl 50\" , 4 \"text_unlemm\" : \" n f l teams represented super bowl 50\" 5 } Figure 2 : Sample input for question \"Which NFL team represented the AFC at Super Bowl 50?\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 98, |
|
"text": "(Yang et al., 2018)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 129, |
|
"text": "(Ounis et al., 2006)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 1039, |
|
"end": 1060, |
|
"text": "(Berger et al., 2000)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1110, |
|
"end": 1112, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1289, |
|
"end": 1297, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The FlexNeuART system-outlined in Figure 1-implements a classic multi-stage retrieval pipeline, where documents flow through a series of \"funnels\" that discard unpromising candidates using increasingly more complex and accurate ranking components. In that, FlexNeuART supports one intermediate and one final re-ranker (both are optional). The initial ranked set of documents is provided by the so-called candidate generator (also known as the candidate provider).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 40, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "FlexNeuART is designed to work with plugable candidate generators and rerankers. Out-of-the-box it supports Apache Lucene 14 and NMSLIB, which we describe in \u00a7 2. NMSLIB works as a standalone multithreaded server implemented with Apache Thrift. 15 NMSLIB supports an efficient approximate (and in some cases exact) maximum inner-product search on sparse and sparse-dense representations. Sparse-dense retrieval is a recent addition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Lucene full-text search algorithms rely on classic term-level inverted files, which are stored in compressed formats (so Lucene is quite space-efficient). NMSLIB (see \u00a7 2) supports the classic (uncompressed) inverted files with document-at-at-time (DAAT) processing, the brute-force search, the graphbased retrieval algorithms HNSW (Malkov and Yashunin, 2018) and NSW (Malkov et al., 2014) , as well the pivoting algorithm NAPP (Tellez et al., 2013; Boytsov et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 332, |
|
"end": 359, |
|
"text": "(Malkov and Yashunin, 2018)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 389, |
|
"text": "(Malkov et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 449, |
|
"text": "(Tellez et al., 2013;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 450, |
|
"end": 471, |
|
"text": "Boytsov et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The indexing and querying pipelines ingest data (queries and documents) in the form of multi-field JSON entries, which are generated by external Java and/or Python code. Each field can be parsed or raw. The parsed field contains white-space separated tokens while the raw field can keep arbitrary text, which is tokenized directly by reranking components. In particular, BERT models rely on their own tokenizers (Devlin et al., 2018 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 432, |
|
"text": "(Devlin et al., 2018", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The core system does not directly incorporate any text processing code, instead, we assume that an external pipeline does all the processing: parsing, tokenization, stopping, and possibly stemming/lemmatization to produce a string of white-space separated tokens. This relieves the indexing code from the need to do complicated parsing and offers extra flexibility in choosing parsing tools.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "An example of a two-field input JSON entry for a SQuAD 1.1 (Rajpurkar et al., 2016) question is given in Fig. 2 . Document and query entries contain at least two mandatory fields: DOCNO and text, which represent the document identifier and indexable text. Queries and documents may have additional optional fields. For example, HTML documents commonly have a title field. In Fig. 2 , text _ unlemm consists of lowercased original words, and text contains word lemmas. Stop words are removed from both fields. From our prior TREC experiments we learned that it is beneficial to combine scores obtained for the lemmatized (or stemmed) and the original text (Boytsov and Belova, 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 83, |
|
"text": "(Rajpurkar et al., 2016)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 655, |
|
"end": 681, |
|
"text": "(Boytsov and Belova, 2011)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 111, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 381, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Retrieval requires a Lucene or an NMSLIB index, each of which can be created independently. To support re-ranking, we also need to create forward indices. There is one forward index for each data field. For parsed fields, it contains bag-of-word representations of documents (term IDs and frequencies) and (optionally) an ordered sequence of words. For raw fields, the index keeps unmodified text. A forward index is also required to create an NMSLIB index.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
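The forward index described above can be sketched as a per-field mapping from document ID to a bag-of-words representation (an illustrative pure-Python analog; FlexNeuART's actual on-disk index format differs):

```python
from collections import Counter

class ForwardIndex:
    """One index per parsed field: doc ID -> {term ID: frequency} (+ word order)."""
    def __init__(self):
        self.vocab = {}      # term -> term ID
        self.docs = {}       # doc ID -> Counter of term IDs (bag of words)
        self.word_seq = {}   # doc ID -> ordered list of term IDs (optional)

    def term_id(self, term):
        # Assign term IDs lazily, in order of first appearance.
        return self.vocab.setdefault(term, len(self.vocab))

    def add(self, doc_id, tokens, keep_order=True):
        ids = [self.term_id(t) for t in tokens]
        self.docs[doc_id] = Counter(ids)   # term IDs and their frequencies
        if keep_order:
            self.word_seq[doc_id] = ids

idx = ForwardIndex()
idx.add("0", "nfl team represent super bowl 50".split())
```

Because re-rankers read documents from this structure rather than from the retrieval engine, the candidate generator can be swapped without touching the re-ranking code.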
|
{ |
|
"text": "The FlexNeuART system has a configurable re-ranking module, which can combine results from several ranking components. A sample configuration file shown in Fig. 3 1 {\" extractors \" : [ 2 {\" type \" : \" TFIDFSimilarity \" , 3 \"params\" : { 4 \"indexFieldName\" : \" text \" , 5 \"queryFieldName\" : \" text \" , 6 \" similType \" : \"bm25\" , 7 \"k1\" : \"1 . 2\" , 8 \"b\" : \"0 . 75\"} 9 } , 10 {\" type \" : \"avgWordEmbed\" , 11 \"params\" : { 12 \"indexFieldName\" : \"text_unlemm\" , 13 \"queryFieldName\" : \"text_unlemm\" , 14 \"queryEmbedFile\" : \"embeds/ starspace_unlemm . query\" , 15 \"docEmbedFile\" : \"embeds/ starspace_unlemm . answer\" , contains an array of scoring sub-modules whose parameters are specified via nested dictionaries (in curly brackets). Each description contains the mandatory parameters type and params. Scoring modules are feature extractors, each of which produces one or more numerical feature that can be used by a LETOR component to train a ranking model or to score a candidate document.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 162, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
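The delegation of parameter parsing to each extractor's constructor can be sketched as follows (a hypothetical pure-Python analog; FlexNeuART's extractors are implemented in Java, and the class and registry names here are illustrative):

```python
import json

# Registry mapping the "type" value to an extractor class (illustrative).
EXTRACTOR_REGISTRY = {}

def register(type_name):
    def wrap(cls):
        EXTRACTOR_REGISTRY[type_name] = cls
        return cls
    return wrap

@register("TFIDFSimilarity")
class TFIDFSimilarity:
    def __init__(self, params):
        # Interpreting params is entirely up to this constructor.
        self.field = params["indexFieldName"]
        self.k1 = float(params.get("k1", 1.2))
        self.b = float(params.get("b", 0.75))

@register("avgWordEmbed")
class AvgWordEmbed:
    def __init__(self, params):
        self.field = params["indexFieldName"]

def build_composite_extractor(config_text):
    """Read a Fig. 3-style config and instantiate one extractor per description."""
    config = json.loads(config_text)
    return [EXTRACTOR_REGISTRY[d["type"]](d["params"]) for d in config["extractors"]]

conf = '''{"extractors": [
  {"type": "TFIDFSimilarity",
   "params": {"indexFieldName": "text", "queryFieldName": "text",
              "similType": "bm25", "k1": "1.2", "b": "0.75"}},
  {"type": "avgWordEmbed",
   "params": {"indexFieldName": "text_unlemm", "queryFieldName": "text_unlemm"}}
]}'''
extractors = build_composite_extractor(conf)
```

The registry pattern keeps the composite extractor agnostic of individual extractor types: adding a new scorer only requires registering a new class.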
|
{ |
|
"text": "The special composite feature extractor reads the configuration file and for each description of the extractor it creates an instance of the feature extractor whose type is defined by type. The value of params can be arbitrary: parsing and interpreting parameters is delegated to the constructor of the extractor object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "A sample configuration in Fig. 3 defines a BM25 (Robertson, 2004) scorer with parameters k 1 = 1.2 and b = 0.25 for the index field text (and query field text) as well as the averaged embedding generator for the fields text _ unlemm. The latter creates dense query and document representations using StarSpace embeddings (Wu et al., 2018) . There are separate sets of embeddings for queries and documents. Word embeddings are weighted using IDFs and subsequently L 2 normalized. Finally, this extractor produces a single feature equal to the L 2 distance between averaged embeddings of the query and the document.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 65, |
|
"text": "(Robertson, 2004)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 338, |
|
"text": "(Wu et al., 2018)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 32, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
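For reference, a BM25 scorer with the configured parameters could be sketched like this (a simplified, self-contained version; the actual Java implementation may differ in details such as IDF smoothing):

```python
import math

def bm25_score(query_terms, doc_terms, doc_freq, num_docs, avg_doc_len,
               k1=1.2, b=0.75):
    """Simplified BM25 (Robertson, 2004): sum of per-term IDF * normalized-TF."""
    doc_len = len(doc_terms)
    tf = {}
    for t in doc_terms:
        tf[t] = tf.get(t, 0) + 1
    score = 0.0
    for t in query_terms:
        if t not in tf:
            continue  # terms absent from the document contribute nothing
        idf = math.log(1.0 + (num_docs - doc_freq[t] + 0.5) / (doc_freq[t] + 0.5))
        # k1 saturates term frequency; b controls document-length normalization.
        norm_tf = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * norm_tf
    return score
```

With b = 0.75, longer-than-average documents are penalized; setting b = 0 disables length normalization entirely.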
|
{ |
|
"text": "From the forward indices, we can export data to NMSLIB and create an index for k-NN search. This is supported only for inner-product similarities. As discussed in the following subsection \u00a7 3.3, there are two scenarios. In the first scenario we export one vector per feature extractor. In particular, we generate a sparse vector for BM25 and a dense vector for the averaged embeddings. Then, NMSLIB combines these representations on its own using adjustable weights, which can be tweaked after data is exported. In the second scenario-which is more efficient but less flexible-we create one composite vector per document/query, where individual component weights cannot be changed further after export.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Design and Workflow", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Similarity scores between queries and documents are computed for a pair of query and a document field (typically these are the same fields). 16 Scores from various scorers are then combined into a single score by a learning-to-rank (LETOR) algorithm (Liu et al., 2009) . FlexNeuART use the LETOR library RankLib from which we use two particularly effective learning algorithms: a coordinate ascent (Metzler and Croft, 2007) and LambdaMART (Burges, 2010) . We have found a bug in RankLib implementation of the coordinate ascent: We, thus, use our own, bugfixed, version.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 143, |
|
"text": "16", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 268, |
|
"text": "(Liu et al., 2009)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 423, |
|
"text": "(Metzler and Croft, 2007)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 453, |
|
"text": "LambdaMART (Burges, 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Coordinate ascent produces a linear model. It is most effective when the number of features and/or the number of examples is small. LambdaMART is a boosted tree 16 There can be multiple scorers for each pair of fields. model, which, in our experience, is effective primarily when the number of features and training examples is quite large.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 163, |
|
"text": "16", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
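A minimal coordinate-ascent sketch for fitting a linear ranking model, in the spirit of Metzler and Croft (2007): weights are optimized one at a time against a rank metric. The grid search and the single-query reciprocal-rank objective here are simplifications for illustration; RankLib's implementation is considerably richer.

```python
import numpy as np

def reciprocal_rank(scores, labels):
    """Reciprocal rank of the first relevant document."""
    for rank, idx in enumerate(np.argsort(-scores), start=1):
        if labels[idx] > 0:
            return 1.0 / rank
    return 0.0

def coordinate_ascent(X, labels, n_passes=5, grid=np.linspace(-2, 2, 41)):
    """X: (n_docs, n_features) feature matrix for one query."""
    w = np.ones(X.shape[1])
    for _ in range(n_passes):
        for j in range(X.shape[1]):  # optimize one weight at a time
            best_v, best_m = w[j], reciprocal_rank(X @ w, labels)
            for v in grid:
                w[j] = v
                m = reciprocal_rank(X @ w, labels)
                if m > best_m:
                    best_v, best_m = v, m
            w[j] = best_v
    return w

# Two features, three documents for one query; document 1 is relevant
# but is not ranked first under uniform weights.
X = np.array([[0.1, 1.0], [0.9, 0.0], [0.2, 0.5]])
labels = np.array([0, 1, 0])
w = coordinate_ascent(X, labels)
```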
|
{ |
|
"text": "We provide basic experimentation support. An experiment is described via a JSON descriptor, which defines parameters of the candidate generating, re-ranking, and LETOR algorithms. Some experimentation parameters such as training and testing subsets can also be specified in the command line.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "A sample descriptor is shown in Fig. 4 . It uses an intermediate re-ranker which rescores 2000 entries with the highest Lucene scores. A given number of highly scored entries can be further re-scored using the \"final\" re-ranker. Note that the experimental descriptor references feature-extractor JSONs rather than defining everything in a single configuration file.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 38, |
|
"text": "Fig. 4", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Given an experimental descriptor, the training pipeline generates specified features, exports results to a special RankLib format and trains the model. Training of the LETOR model also requires a relevance file (a QREL file in the TREC NIST format), which lists known relevant documents. After training, the respective retrieval system is evaluated on another set of queries. The user can disable model training: This mode is used to tune BM25.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Based on our experience with TREC and community QA collections (Boytsov and Naidan, 2013b; Boytsov, 2018) , we support the following scoring approaches:", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 90, |
|
"text": "(Boytsov and Naidan, 2013b;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 91, |
|
"end": 105, |
|
"text": "Boytsov, 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 A proxy scorer that reads scores from one or more standalone scoring servers, which can be implemented in Python or any other language supported by Apache Thrift. 17 . Our system implements neural proxy scorers for CEDR (MacAvaney et al., 2019) and MatchZoo (Fan et al., 2017) . We have modified CEDR by providing a better parameterization of the training procedure, adding support for BERT large (Devlin et al., 2018) and multi-GPU training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 246, |
|
"text": "(MacAvaney et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 260, |
|
"end": 278, |
|
"text": "(Fan et al., 2017)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 420, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 The TF\u00d7IDF similarity BM25 (Robertson, 2004) , where logarithms of inverse document term frequencies (IDFs) are multiplied by normalized and smoothed term counts in a document (TFs).", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 46, |
|
"text": "(Robertson, 2004)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
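The BM25 score just described can be sketched as follows. The parameters k1 and b are BM25's usual free parameters; the values below are common defaults and the simple IDF variant is an illustrative choice, not necessarily the exact formula FlexNeuART uses.

```python
import math
from collections import Counter

def bm25(query, doc, docs, k1=1.2, b=0.75):
    """BM25 score of `doc` for `query` over a tokenized corpus `docs`."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        df = sum(1 for d in docs if term in d)          # document frequency
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))  # smoothed IDF
        f = tf[term]                                     # raw term count
        # Normalized, smoothed TF: saturates with f, normalized by length.
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["a", "b"], ["b", "c"], ["c", "d"]]
s_hit = bm25(["a"], docs[0], docs)   # "a" occurs in the document
s_miss = bm25(["a"], docs[1], docs)  # "a" does not occur
```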
|
{ |
|
"text": "\u2022 Sequential dependence model : our re-implementation is based on the one from Anserini.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 BM25-based proximity scorer, which treats ordered and unordered pairs of query terms as a single token. It is similar to the proximity scorer used in our prior work (Boytsov and Belova, 2011 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 192, |
|
"text": "(Boytsov and Belova, 2011", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
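A rough sketch of the pair-token idea behind this proximity scorer: nearby query terms are turned into ordered and unordered pair tokens, which can then be counted and scored like ordinary terms with BM25. The window size and the token format here are illustrative assumptions.

```python
def pair_tokens(tokens, window=3):
    """Generate ordered and unordered pair tokens within a small window."""
    pairs = []
    for i, t in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            u = tokens[j]
            pairs.append(f"{t}_{u}")                 # ordered pair
            pairs.append("_".join(sorted((t, u))))   # unordered pair
    return pairs

toks = pair_tokens(["fast", "knn", "search"])
```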
|
{ |
|
"text": "\u2022 Cosine/L 2 distance between averaged word embeddings. We first train word embeddings for the corpus, then construct a dense vector for a document (or query) by applying TF\u00d7IDF weighting to the individual word embeddings and summing them. Then we compare averaged embeddings using the cosine similarity (or L 2 distance).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
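A minimal sketch of this embedding scorer, with toy two-dimensional embeddings and IDF values standing in for trained ones:

```python
import numpy as np

def doc_vector(tokens, emb, idf):
    """Sum IDF-weighted word vectors for a document (or query)."""
    vecs = [idf.get(t, 1.0) * emb[t] for t in tokens if t in emb]
    return np.sum(vecs, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings and IDF values (illustrative, not trained).
emb = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.9, 0.1]),
       "car": np.array([0.0, 1.0])}
idf = {"cat": 2.0, "dog": 2.0, "car": 1.5}

q = doc_vector(["cat"], emb, idf)
d1 = doc_vector(["dog", "dog"], emb, idf)  # semantically close to query
d2 = doc_vector(["car"], emb, idf)         # unrelated
```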
|
{ |
|
"text": "\u2022 IBM Model 1 is a lexical translation model trained using expectation maximization. We use Model 1 to compute an alignment log-probability between queries and answer documents. Using Model 1 allows us to reduce the vocabulary gap between queries and documents (Berger et al., 2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 282, |
|
"text": "(Berger et al., 2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
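The Model 1 score described above can be sketched as follows. The translation table here is a toy stand-in for one trained with EM on a bitext, and the smoothing and self-translation terms are illustrative simplifications of what a production scorer would use.

```python
import math

def model1_logprob(query, doc, T, self_prob=0.5, smooth=1e-6):
    """Alignment log-probability of `query` given `doc` under a
    translation table T[q_term][d_term] = P(q_term | d_term)."""
    logp = 0.0
    for q in query:
        # P(q | doc): average over document terms (uniform alignment),
        # with a self-translation boost for exact matches.
        p = sum(T.get(q, {}).get(d, 0.0) + (self_prob if q == d else 0.0)
                for d in doc) / len(doc)
        logp += math.log(p + smooth)  # smoothing avoids log(0)
    return logp

# Toy translation table: "auto" translates to "car", etc.
T = {"auto": {"car": 0.4}, "feline": {"cat": 0.3}}
doc = ["car", "cat"]
s_rel = model1_logprob(["auto"], doc, T)   # bridges the vocabulary gap
s_irr = model1_logprob(["plane"], doc, T)  # no translation evidence
```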
|
{ |
|
"text": "\u2022 A proxy query-and document embedder, that produces fixed-size dense vectors for queries and documents. The similarity is the inner product between query and document embeddings. This scorer operates as an Apache Thrift server.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 A BM25-based pseudo-relevance feedback model RM3. Unlike a common approach where RM3 is used for queryexpansion, we use it in re-ranking mode (Diaz, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 156, |
|
"text": "(Diaz, 2015)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Although FlexNeuART supports complex scoring models, these can be computationally too expensive to be used directly for retrieval (Boytsov et al., 2016; Boytsov, 2018 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 152, |
|
"text": "(Boytsov et al., 2016;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 166, |
|
"text": "Boytsov, 2018", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Instead we should stick to a simple vectorspace model, where similarity is computed as the inner product between query and document vectors (Manning et al., 2010) . The respective retrieval procedure is a maximum inner-product search (a form of k-NN search). For example both BM25 and the cosine similarity between query and document embeddings belong to this class of scorers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 162, |
|
"text": "(Manning et al., 2010)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Under the vector-space framework we need to (1) generate/read a set of fieldspecific vectors for queries and documents, (2) compute field-specific scores using the inner product between query and document vectors, and (3) aggregate the scores using a linear model. Alternatively, we can create composite queries and document vectors, where we concatenate field-specified vectors multiplied by field weights. Then, the overall similarity score is computed as the inner product between composite query and document vectors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Our system supports both computation scenarios. To this end, all inner-product equivalent scorers should inherit from a specific abstract class and implement the functions to generate respective query and document vectors. This abstraction simplifies generation of sparse and sparse-dense query/document vectors, which can be subsequently indexed by NMSLIB.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring Modules", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We carry out experiments with two objectives: (1) measuring effectiveness of implemented ranking models; (2) demonstrating the value of a well-tuned traditional IR system. We use two recently released MS MARCO collections (Craswell et al., 2020; Nguyen et al., 2016 ) and a community question answering (CQA) collection Yahoo Answers Manner (Surdeanu et al., 2011) . Collection statistics is summarized in Table 1. MS MARCO has a document and a passage re-ranking task where all queries can be answered using a short text snippet. There are three sets of queries in each task. In addition to one large query set with sparse judgments, there are two small evaluation sets from the TREC 2019/2020 deep learning track (Craswell et al., 2020) . MS MARCO collections query sets were randomly split into training, development (to tune hyper parameters), and test sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 245, |
|
"text": "(Craswell et al., 2020;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 265, |
|
"text": "Nguyen et al., 2016", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 364, |
|
"text": "(Surdeanu et al., 2011)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 738, |
|
"text": "(Craswell et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 414, |
|
"text": "Table 1.", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Yahoo Answers Manner has a large number of paired question-answer pairs. We include it in our experiments, because Model 1 was shown to be effective for CQA data in the past (Jeon et al., 2005; Riezler et al., 2007; Surdeanu et al., 2011; Xue et al., 2008) . It was randomly split into the training and evaluation sets. Document text is processed using Spacy 2.2.3 (Honnibal and Montani, 2017) to extract tokens and lemmas. The frequently occurred tokens and lemmas are filtered out using Indri's list of stopwords (Strohman et al., 2005) , which is expanded to include a few contractions such as \"n't\" and \"'ll\". Lemmas are indexed using Lucene 7.6. In the case of MS MARCO documents, entries come in the HTML format. We extract HTML body and title (and store/index them separately).", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 193, |
|
"text": "(Jeon et al., 2005;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 215, |
|
"text": "Riezler et al., 2007;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 238, |
|
"text": "Surdeanu et al., 2011;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 239, |
|
"end": 256, |
|
"text": "Xue et al., 2008)", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 393, |
|
"text": "(Honnibal and Montani, 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 515, |
|
"end": 538, |
|
"text": "(Strohman et al., 2005)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In additional to traditional tokenizers, we also use the BERT tokenizer from the Hug-gingFace Transformers library (Wolf et al., 2019) . This tokenizer can split a single word into several sub-word pieces (Wu et al., 2016) . The stopword list is not applied to BERT tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 134, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 222, |
|
"text": "(Wu et al., 2016)", |
|
"ref_id": "BIBREF51" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Training Model 1, which is a translation model, requires a parallel corpus where queries are paired with respective relevant documents. The parallel corpus is also known as a bitext. In the case of MS MARCO collections documents are much longer than queries, which makes it impossible to compute translation probabilities using standard alignment tools (Och and Ney, 2003). 18 Hence, for each pair of query q and its relevant document d, we first split d into multiple", |
|
"cite_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 376, |
|
"text": "(Och and Ney, 2003). 18", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "short chunks d 1 , d 2 , . . . d n . Then, we replace the pair (q, d) with a set of pairs {(q, d i )}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
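The chunking step above can be sketched as follows (the chunk size is an illustrative choice; the actual value would be tuned to what the alignment tool can handle):

```python
def make_bitext_pairs(query_toks, doc_toks, chunk_size=16):
    """Split a long document into short chunks d_1..d_n and pair
    each chunk with the query to form bitext training pairs."""
    chunks = [doc_toks[i:i + chunk_size]
              for i in range(0, len(doc_toks), chunk_size)]
    return [(query_toks, c) for c in chunks]

# A 40-token document yields three chunks (16 + 16 + 8 tokens).
pairs = make_bitext_pairs(["q1", "q2"], [f"w{i}" for i in range(40)])
```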
|
{ |
|
"text": "We evaluate performance of several models and their combinations. Each model name is abbreviated as X (Y), where X is a type of the model (see \u00a73.3 for details) and Y is a type of the text field. Specifically, we index original tokens, lemmas, as well as BERT tokens extracted from the main document text. For MS MARCO documents, which come in HTML format, we also extract tokens and lemmas from the title field.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "First, we evaluate performance of the tuned BM25 (lemmas). Second, we evaluate fusion models that combine BM25 (lemmas) with BM25, proximity, and Model 1 scores (see \u00a73.3) computed for various fields. Note that our fusion models are linear. Third, we evaluate collection-specific combinations of manually-selected models: Except for minor changes these are the fusion models that we used in our TREC 2019 and 2020 submissions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "All models were trained and/or tuned using training and development sets listed in J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) , which the main metric in the TREC deep learning track (Craswell et al., 2020) . For subsets of MS MARCO collections, we use the mean reciprocal rank (MRR) as suggested by Craswell et al. (2020) . From the experiments in Table 3 , we can see that for all large query sets the fusion models outperform BM25 (lemmas). In particular, the best MS MARCO fusion models are 13-15% better than BM25 (lemmas). In the case of Yahoo Answers Manner, combining BM25 (lemmas) with Model 1 scores computed for BERT tokens also boost performance by about 15%. For small TREC 2019 and 2020 query sets the gains are marginal. However, our fusion models are still better than BM25 (lemmas) by 4-8%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 113, |
|
"text": "J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 193, |
|
"text": "(Craswell et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 309, |
|
"text": "Craswell et al. (2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 343, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
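The MRR metric used for the large query sets can be sketched as follows (the cutoff is an illustrative choice; MS MARCO evaluations commonly apply a fixed cutoff such as 10 or 100):

```python
def mrr(ranked_lists, relevant, cutoff=10):
    """Mean reciprocal rank over queries.
    ranked_lists: qid -> ranked list of doc ids.
    relevant: qid -> set of relevant doc ids."""
    total = 0.0
    for qid, ranking in ranked_lists.items():
        rr = 0.0
        for rank, doc in enumerate(ranking[:cutoff], start=1):
            if doc in relevant.get(qid, set()):
                rr = 1.0 / rank  # reciprocal rank of first relevant hit
                break
        total += rr
    return total / len(ranked_lists)

runs = {"q1": ["d2", "d1"], "q2": ["d9", "d8", "d3"]}
qrels = {"q1": {"d1"}, "q2": {"d3"}}
score = mrr(runs, qrels)  # (1/2 + 1/3) / 2
```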
|
{ |
|
"text": "We further compare the accuracy of the BERT-based re-ranker (Nogueira and Cho, 2019) applied to the output of the tuned traditional IR system with the accuracy of the same BERT-based re-ranker applied to the output of Lucene (with a BM25 scorer). The BERT scorer is used to re-rank 150 documents: Further increasing the number of candidates degraded performance on the TREC 2019 test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 84, |
|
"text": "(Nogueira and Cho, 2019)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "By mistake we used the same BM25 parameters for both passages and documents. As a result, MS MARCO documents candidate generator was suboptimal (passage retrieval did use the properly tuned BM25 scorer). However, we refrained from correcting this error to illustrate how a good fusion model can produce a strong ranker via a combination of suboptimal weak rankers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Indeed, as we can see from Table 2 , there is a substantial 4.5-7% loss in accuracy by re-ranking the output of BM25 compared to re-ranking the output of the well-tuned traditional pipeline. This degradation occurs in all four experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 34, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We present to the NLP community an existing k-NN search library NMSLIB, a new retrieval toolkit FlexNeuART, as well as their integration capabilities, which enable efficient retrieval of sparse and sparse-dense document representations. FlexNeuART implements a variety of effective traditional relevance signals, which we plan to use for a fairer comparison with recent neural retrieval systems based on representing queries and documents via fixed-size dense vectors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://lucene.apache.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/nmslib/nmslib 3 https://pypistats.org/packages/nmslib 4 https://amzn.to/3aDCMtC 5 https://github.com/nmslib/nmslib 6 https://thrift.apache.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/facebookresearch/faiss/ wiki/Running-on-GPUs 8 https://www.nvidia.com/en-us/data-center/a100/ 9 https://github.com/aaalgo/kgraph", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/nmslib/nmslib/tree/ master/manual 12 https://microsoft.github.io/msmarco/ #docranking", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://lucene.apache.org/ 15 https://thrift.apache.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://thrift.apache.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/moses-smt/mgiza/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was done primarily while Leonid Boytsov was a PhD student at CMU where he was supported by the NSF grant #1618159. We thank Sean MacAvaney for making CEDR (MacAvaney et al., 2019) publicly available and Igor Brigadir for suggesting to experiment with indexing of BERT word pieces.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "6" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Approximate nearest neighbor queries in fixed dimensions", |
|
"authors": [ |
|
{ |
|
"first": "Sunil", |
|
"middle": [], |
|
"last": "Arya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mount", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the fourth annual ACM-SIAM symposium on Discrete algorithms", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "271--280", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sunil Arya and David M Mount. 1993. Approx- imate nearest neighbor queries in fixed di- mensions. In Proceedings of the fourth an- nual ACM-SIAM symposium on Discrete algo- rithms, pages 271-280.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "ANN-benchmarks: A benchmarking tool for approximate nearest neighbor algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Aum\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Bernhardsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Faithfull", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Information Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Aum\u00fcller, Erik Bernhardsson, and Alexander Faithfull. 2019. ANN-benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. Information Systems.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Bridging the lexical chasm: statistical approaches to answer-finding", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Berger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dayne", |
|
"middle": [], |
|
"last": "Freitag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vibhu", |
|
"middle": [ |
|
"O" |
|
], |
|
"last": "Mittal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "SIGIR 2000: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "192--199", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/345508.345576" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam L. Berger, Rich Caruana, David Cohn, Dayne Freitag, and Vibhu O. Mittal. 2000. Bridging the lexical chasm: statistical ap- proaches to answer-finding. In SIGIR 2000: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and De- velopment in Information Retrieval, July 24- 28, 2000, Athens, Greece, pages 192-199.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Efficient and Accurate Non-Metric k-NN Search with Applications to Text Matching", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Boytsov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonid Boytsov. 2018. Efficient and Accurate Non-Metric k-NN Search with Applications to Text Matching. Ph.D. thesis, Carnegie Mellon University.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Evaluating learning-to-rank methods in the web track adhoc task", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Boytsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Belova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "TREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonid Boytsov and Anna Belova. 2011. Evaluat- ing learning-to-rank methods in the web track adhoc task. In TREC.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Engineering efficient and effective nonmetric space library", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Boytsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bilegsaikhan", |
|
"middle": [], |
|
"last": "Naidan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of SISAP 2013", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "280--293", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonid Boytsov and Bilegsaikhan Naidan. 2013a. Engineering efficient and effective non- metric space library. In Proceedings of SISAP 2013, pages 280-293. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Learning to prune in metric and non-metric spaces", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Boytsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bilegsaikhan", |
|
"middle": [], |
|
"last": "Naidan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1574--1582", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonid Boytsov and Bilegsaikhan Naidan. 2013b. Learning to prune in metric and non-metric spaces. In Advances in Neural Information Processing Systems, pages 1574-1582.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Off the beaten path: Let's replace term-based retrieval with k-NN search", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Boytsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Novak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yury", |
|
"middle": [], |
|
"last": "Malkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Nyberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of CIKM 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1099--1108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonid Boytsov, David Novak, Yury Malkov, and Eric Nyberg. 2016. Off the beaten path: Let's replace term-based retrieval with k-NN search. In Proceedings of CIKM 2016, pages 1099-1108. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Accurate and fast retrieval for complex non-metric data via neighborhood graphs", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Boytsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Nyberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Similarity Search and Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "128--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonid Boytsov and Eric Nyberg. 2019a. Accu- rate and fast retrieval for complex non-metric data via neighborhood graphs. In Interna- tional Conference on Similarity Search and Applications, pages 128-142. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Pruning algorithms for low-dimensional nonmetric k-nn search: A case study", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Boytsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Nyberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Similarity Search and Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonid Boytsov and Eric Nyberg. 2019b. Prun- ing algorithms for low-dimensional non- metric k-nn search: A case study. In Similar- ity Search and Applications.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "From RankNet to LambdaRank to LambdaMart: An overview", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Burges", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher JC Burges. 2010. From RankNet to LambdaRank to LambdaMart: An overview. Microsoft Technical Report MSR-TR-2010-82.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Overview of the trec 2019 deep learning track", |
|
"authors": [ |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Craswell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhaskar", |
|
"middle": [], |
|
"last": "Mitra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emine", |
|
"middle": [], |
|
"last": "Yilmaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Campos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellen", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.07820" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. 2020. Overview of the trec 2019 deep learning track. arXiv preprint arXiv:2003.07820.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Bert: Pretraining of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre- training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Condensed list relevance models", |
|
"authors": [ |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Diaz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 International Conference on The Theory of Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "313--316", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fernando Diaz. 2015. Condensed list relevance models. In Proceedings of the 2015 Interna- tional Conference on The Theory of Informa- tion Retrieval, pages 313-316.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Efficient k-nearest neighbor graph construction for generic similarity measures", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charikar", |
|
"middle": [], |
|
"last": "Moses", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 20th international conference on World wide web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "577--586", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Dong, Charikar Moses, and Kai Li. 2011. Ef- ficient k-nearest neighbor graph construction for generic similarity measures. In Proceed- ings of the 20th international conference on World wide web, pages 577-586.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Matchzoo: A toolkit for deep text matching", |
|
"authors": [ |
|
{ |
|
"first": "Yixing", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianpeng", |
|
"middle": [], |
|
"last": "Hou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiafeng", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanyan", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xueqi", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1707.07270" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yixing Fan, Liang Pang, JianPeng Hou, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2017. Matchzoo: A toolkit for deep text matching. arXiv preprint arXiv:1707.07270.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Neural vector spaces for unsupervised information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Christophe", |
|
"middle": [], |
|
"last": "Van Gysel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evangelos", |
|
"middle": [], |
|
"last": "Kanoulas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACM Transactions on Information Systems (TOIS)", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christophe Van Gysel, Maarten De Rijke, and Evangelos Kanoulas. 2018. Neural vector spaces for unsupervised information retrieval. ACM Transactions on Information Systems (TOIS), 36(4):38.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Efficient sparse-matrix multi-vector product on gpus", |
|
"authors": [ |
|
{ |
|
"first": "Changwan", |
|
"middle": [], |
|
"last": "Hong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Sukumaran-Rajam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bortik", |
|
"middle": [], |
|
"last": "Bandyopadhyay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinsung", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "S\u00fcreyya", |

"middle": [ |

"Emre" |

], |

"last": "Kurt", |

"suffix": "" |

}, |

{ |

"first": "Israt", |

"middle": [], |

"last": "Nisa", |

"suffix": "" |

}, |

{ |

"first": "Shivani", |

"middle": [], |

"last": "Sabhlok", |

"suffix": "" |

}, |

{ |

"first": "\u00dcmit", |

"middle": [ |

"V" |

], |

"last": "\u00c7ataly\u00fcrek", |

"suffix": "" |

}, |

{ |

"first": "Srinivasan", |

"middle": [], |

"last": "Parthasarathy", |

"suffix": "" |

}, |

{ |

"first": "P", |

"middle": [], |

"last": "Sadayappan", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Symposium on High-Performance Parallel and Distributed Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Changwan Hong, Aravind Sukumaran-Rajam, Bortik Bandyopadhyay, Jinsung Kim, S\u00fcreyya Emre Kurt, Israt Nisa, Shivani Sabhlok, \u00dcmit V \u00c7ataly\u00fcrek, Srinivasan Parthasarathy, and P Sadayappan. 2018. Efficient sparse-matrix multi-vector product on gpus. In Proceedings of the 27th Inter- national Symposium on High-Performance Parallel and Distributed Computing, pages 66-79.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Honnibal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ines", |
|
"middle": [], |
|
"last": "Montani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neu- ral networks and incremental parsing. To ap- pear.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Cumulated gain-based evaluation of ir techniques", |
|
"authors": [ |
|
{ |
|
"first": "Kalervo", |
|
"middle": [], |
|
"last": "J\u00e4rvelin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaana", |
|
"middle": [], |
|
"last": "Kek\u00e4l\u00e4inen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACM Transactions on Information Systems (TOIS)", |
|
"volume": "20", |
|
"issue": "4", |
|
"pages": "422--446", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumulated gain-based evaluation of ir tech- niques. ACM Transactions on Information Systems (TOIS), 20(4):422-446.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Finding similar questions in large question and answer archives", |
|
"authors": [ |
|
{ |

"first": "Jiwoon", |

"middle": [], |

"last": "Jeon", |

"suffix": "" |

}, |

{ |

"first": "W", |

"middle": [ |

"Bruce" |

], |

"last": "Croft", |

"suffix": "" |

}, |

{ |

"first": "Joon Ho", |

"middle": [], |

"last": "Lee", |

"suffix": "" |

} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 2005 ACM CIKM International Conference on Information and Knowledge Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "84--90", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1099554.1099572" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwoon Jeon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large ques- tion and answer archives. In Proceedings of the 2005 ACM CIKM International Confer- ence on Information and Knowledge Manage- ment, Bremen, Germany, October 31 -Novem- ber 5, 2005, pages 84-90.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2017. Billion-scale similarity search with gpus", |
|
"authors": [ |
|
{ |

"first": "Jeff", |

"middle": [], |

"last": "Johnson", |

"suffix": "" |

}, |

{ |

"first": "Matthijs", |

"middle": [], |

"last": "Douze", |

"suffix": "" |

}, |

{ |

"first": "Herv\u00e9", |

"middle": [], |

"last": "J\u00e9gou", |

"suffix": "" |

} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1702.08734" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9- gou. 2017. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Dense passage retrieval for open-domain question answering", |
|
"authors": [ |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Karpukhin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barlas", |
|
"middle": [], |
|
"last": "Oguz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sewon", |
|
"middle": [], |
|
"last": "Min", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ledell", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.04906" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Colbert: Efficient and effective passage search via contextualized late interaction over bert", |
|
"authors": [ |
|
{ |
|
"first": "Omar", |
|
"middle": [], |
|
"last": "Khattab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matei", |
|
"middle": [], |
|
"last": "Zaharia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "39--48", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3397271.3401075" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omar Khattab and Matei Zaharia. 2020. Col- bert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Devel- opment in Information Retrieval, SIGIR '20, page 39-48, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Leveraging semantic and lexical matching to improve the recall of document retrieval systems: A hybrid approach", |
|
"authors": [ |
|
{ |
|
"first": "Saar", |
|
"middle": [], |
|
"last": "Kuzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingyang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Bendersky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Najork", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.01195" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saar Kuzi, Mingyang Zhang, Cheng Li, Michael Bendersky, and Marc Najork. 2020. Lever- aging semantic and lexical matching to im- prove the recall of document retrieval sys- tems: A hybrid approach. arXiv preprint arXiv:2010.01195.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Latent retrieval for weakly supervised open domain question answering", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.00300" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. arXiv preprint arXiv:1906.00300.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Learning to rank for information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Foundations and Trends\u00ae in Information Retrieval", |
|
"volume": "3", |
|
"issue": "3", |
|
"pages": "225--331", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tie-Yan Liu et al. 2009. Learning to rank for information retrieval. Foundations and Trends\u00ae in Information Retrieval, 3(3):225- 331.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "OpenNIR: A complete neural ad-hoc ranking pipeline", |
|
"authors": [ |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Macavaney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "WSDM 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sean MacAvaney. 2020. OpenNIR: A complete neural ad-hoc ranking pipeline. In WSDM 2020.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Cedr: Contextualized embeddings for document ranking", |
|
"authors": [ |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Macavaney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Yates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arman", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nazli", |
|
"middle": [], |
|
"last": "Goharian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1101--1104", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. Cedr: Contextu- alized embeddings for document ranking. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, pages 1101- 1104.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Declarative experimentation in information retrieval using pyterrier", |
|
"authors": [ |
|
{ |
|
"first": "Craig", |
|
"middle": [], |
|
"last": "Macdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Tonellotto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.14271" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Craig Macdonald and Nicola Tonellotto. 2020. Declarative experimentation in information retrieval using pyterrier. arXiv preprint arXiv:2007.14271.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Approximate nearest neighbor algorithm based on navigable small world graphs", |
|
"authors": [ |
|
{ |
|
"first": "Yury", |
|
"middle": [], |
|
"last": "Malkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Ponomarenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrey", |
|
"middle": [], |
|
"last": "Logvinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Krylov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Information Systems", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "61--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yury Malkov, Alexander Ponomarenko, Andrey Logvinov, and Vladimir Krylov. 2014. Approx- imate nearest neighbor algorithm based on navigable small world graphs. Information Systems, 45:61-68.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs", |
|
"authors": [ |
|
{ |

"first": "Yury", |

"middle": [ |

"A" |

], |

"last": "Malkov", |

"suffix": "" |

}, |

{ |

"first": "Dmitry", |

"middle": [ |

"A" |

], |

"last": "Yashunin", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yury A Malkov and Dmitry A Yashunin. 2018. Ef- ficient and robust approximate nearest neigh- bor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Introduction to information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prabhakar", |
|
"middle": [], |
|
"last": "Raghavan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Natural Language Engineering", |
|
"volume": "16", |
|
"issue": "1", |
|
"pages": "100--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2010. Introduction to information retrieval. Natural Language En- gineering, 16(1):100-103.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "A markov random field model for term dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Metzler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "472--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Donald Metzler and W Bruce Croft. 2005. A markov random field model for term depen- dencies. In Proceedings of the 28th annual international ACM SIGIR conference on Re- search and development in information re- trieval, pages 472-479.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Linear feature-based models for information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Metzler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W. Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Inf. Retr", |
|
"volume": "10", |
|
"issue": "3", |
|
"pages": "257--274", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s10791-006-9019-z" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Donald Metzler and W. Bruce Croft. 2007. Lin- ear feature-based models for information re- trieval. Inf. Retr., 10(3):257-274.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Nonmetric space library manual", |
|
"authors": [ |
|
{ |

"first": "Bilegsaikhan", |

"middle": [], |

"last": "Naidan", |

"suffix": "" |

}, |

{ |

"first": "Leonid", |

"middle": [], |

"last": "Boytsov", |

"suffix": "" |

}, |

{ |

"first": "Yury", |

"middle": [], |

"last": "Malkov", |

"suffix": "" |

}, |

{ |

"first": "David", |

"middle": [], |

"last": "Novak", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.05470" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bilegsaikhan Naidan, Leonid Boytsov, Yury Malkov, and David Novak. 2015a. Non- metric space library manual. arXiv preprint arXiv:1508.05470.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Permutation search methods are efficient, yet faster search is possible", |
|
"authors": [ |
|
{ |

"first": "Bilegsaikhan", |

"middle": [], |

"last": "Naidan", |

"suffix": "" |

}, |

{ |

"first": "Leonid", |

"middle": [], |

"last": "Boytsov", |

"suffix": "" |

}, |

{ |

"first": "Eric", |

"middle": [], |

"last": "Nyberg", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the VLDB Endowment", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bilegsaikhan Naidan, Leonid Boytsov, and Eric Nyberg. 2015b. Permutation search meth- ods are efficient, yet faster search is possible. Proceedings of the VLDB Endowment, 8(12).", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Ms marco: A human generated machine reading comprehension dataset", |
|
"authors": [ |
|
{ |
|
"first": "Tri", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mir", |
|
"middle": [], |
|
"last": "Rosenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xia", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saurabh", |
|
"middle": [], |
|
"last": "Tiwary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rangan", |
|
"middle": [], |
|
"last": "Majumder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jian- feng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Passage re-ranking with bert", |
|
"authors": [ |
|
{ |
|
"first": "Rodrigo", |
|
"middle": [], |
|
"last": "Nogueira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1901.04085" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |

"first": "Franz", |

"middle": [ |

"Josef" |

], |

"last": "Och", |

"suffix": "" |

}, |

{ |

"first": "Hermann", |

"middle": [], |

"last": "Ney", |

"suffix": "" |

} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "19--51", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/089120103321337421" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Terrier: A high performance and scalable information retrieval platform", |
|
"authors": [ |
|
{ |
|
"first": "Iadh", |
|
"middle": [], |
|
"last": "Ounis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gianni", |
|
"middle": [], |
|
"last": "Amati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vassilis", |
|
"middle": [], |
|
"last": "Plachouras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Craig", |
|
"middle": [], |
|
"last": "Macdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christina", |
|
"middle": [], |
|
"last": "Lioma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the OSIR Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--25", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iadh Ounis, Gianni Amati, Vassilis Plachouras, Ben He, Craig Macdonald, and Christina Li- oma. 2006. Terrier: A high performance and scalable information retrieval platform. In Proceedings of the OSIR Workshop, pages 18- 25.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Squad: 100, 000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |

"first": "Pranav", |

"middle": [], |

"last": "Rajpurkar", |

"suffix": "" |

}, |

{ |

"first": "Jian", |

"middle": [], |

"last": "Zhang", |

"suffix": "" |

}, |

{ |

"first": "Konstantin", |

"middle": [], |

"last": "Lopyrev", |

"suffix": "" |

}, |

{ |

"first": "Percy", |

"middle": [], |

"last": "Liang", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of EMNLP 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2383--2392", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopy- rev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of EMNLP 2016, pages 2383- 2392.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Statistical machine translation for query expansion in answer retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Vasserman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Tsochantaridis", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Vibhu", |

"middle": [ |

"O" |

], |

"last": "Mittal", |

"suffix": "" |

}, |

{ |

"first": "Yi", |

"middle": [], |

"last": "Liu", |

"suffix": "" |

} |
|
], |
|
"year": 2007, |
|
"venue": "ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Riezler, Alexander Vasserman, Ioannis Tsochantaridis, Vibhu O. Mittal, and Yi Liu. 2007. Statistical machine translation for query expansion in answer retrieval. In ACL 2007, Proceedings of the 45th Annual Meet- ing of the Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Understanding inverse document frequency: on theoretical arguments for IDF", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Robertson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Journal of Documentation", |
|
"volume": "60", |
|
"issue": "5", |
|
"pages": "503--520", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1108/00220410410560582" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Robertson. 2004. Understanding in- verse document frequency: on theoretical ar- guments for IDF. Journal of Documentation, 60(5):503-520.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Real-time open-domain question answering with dense-sparse phrase index", |
|
"authors": [ |
|
{ |
|
"first": "Minjoon", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Ankur", |

"middle": [ |

"P" |

], |

"last": "Parikh", |

"suffix": "" |

}, |

{ |

"first": "Ali", |

"middle": [], |

"last": "Farhadi", |

"suffix": "" |

}, |

{ |

"first": "Hannaneh", |

"middle": [], |

"last": "Hajishirzi", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.05807" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur P Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. arXiv preprint arXiv:1906.05807.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "A survey on graph-based methods for similarity searches in metric spaces", |
|
"authors": [ |
|
{ |

"first": "Larissa", |

"middle": [ |

"C" |

], |

"last": "Shimomura", |

"suffix": "" |

}, |

{ |

"first": "Rafael", |

"middle": [], |

"last": "Seidi Oyamada", |

"suffix": "" |

}, |

{ |

"first": "Marcos", |

"middle": [ |

"R" |

], |

"last": "Vieira", |

"suffix": "" |

}, |

{ |

"first": "Daniel", |

"middle": [ |

"S" |

], |

"last": "Kaster", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Larissa C Shimomura, Rafael Seidi Oyamada, Marcos R Vieira, and Daniel S Kaster. 2020. A survey on graph-based methods for similarity searches in metric spaces. Information Sys- tems, page 101507.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Indri: A language-model based search engine for complex queries", |
|
"authors": [ |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Strohman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Metzler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Howard", |
|
"middle": [], |
|
"last": "Turtle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trevor Strohman, Donald Metzler, Howard Turtle, and W Bruce Croft. 2005. In- dri: A language-model based search engine for complex queries. http://ciir.cs.umass. edu/pubfiles/ir-407.pdf [Last Checked Apr 2017].", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Learning to rank answers to non-factoid questions from web collections", |
|
"authors": [ |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimiliano", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Zaragoza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Computational Linguistics", |
|
"volume": "37", |
|
"issue": "2", |
|
"pages": "351--383", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/COLI_a_00051" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2011. Learning to rank answers to non-factoid questions from web collections. Computational Linguistics, 37(2):351-383.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Succinct nearest neighbor search", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"Sadit" |
|
], |
|
"last": "Tellez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edgar", |
|
"middle": [], |
|
"last": "Ch\u00e1vez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gonzalo", |
|
"middle": [], |
|
"last": "Navarro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Inf. Syst", |
|
"volume": "38", |
|
"issue": "7", |
|
"pages": "1019--1030", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Sadit Tellez, Edgar Ch\u00e1vez, and Gonzalo Navarro. 2013. Succinct nearest neighbor search. Inf. Syst., 38(7):1019-1030.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Huggingface's transformers: State-ofthe-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Patrick", |

"middle": [ |

"von" |

], |

"last": "Platen", |

"suffix": "" |

}, |

{ |

"first": "Clara", |

"middle": [], |

"last": "Ma", |

"suffix": "" |

}, |

{ |

"first": "Yacine", |

"middle": [], |

"last": "Jernite", |

"suffix": "" |

}, |

{ |

"first": "Julien", |

"middle": [], |

"last": "Plu", |

"suffix": "" |

}, |

{ |

"first": "Canwen", |

"middle": [], |

"last": "Xu", |

"suffix": "" |

}, |

{ |

"first": "Teven", |

"middle": [ |

"Le" |

], |

"last": "Scao", |

"suffix": "" |

}, |

{ |

"first": "Sylvain", |

"middle": [], |

"last": "Gugger", |

"suffix": "" |

}, |

{ |

"first": "Mariama", |

"middle": [], |

"last": "Drame", |

"suffix": "" |

}, |

{ |

"first": "Quentin", |

"middle": [], |

"last": "Lhoest", |

"suffix": "" |

}, |

{ |

"first": "Alexander", |

"middle": [ |

"M" |

], |

"last": "Rush", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, An- thony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of- the-art natural language processing. ArXiv, abs/1910.03771.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Starspace: Embed all the things!", |
|
"authors": [ |
|
{ |

"first": "Ledell", |

"middle": [ |

"Yu" |

], |

"last": "Wu", |

"suffix": "" |

}, |

{ |

"first": "Adam", |

"middle": [], |

"last": "Fisch", |

"suffix": "" |

}, |

{ |

"first": "Sumit", |

"middle": [], |

"last": "Chopra", |

"suffix": "" |

}, |

{ |

"first": "Keith", |

"middle": [], |

"last": "Adams", |

"suffix": "" |

}, |

{ |

"first": "Antoine", |

"middle": [], |

"last": "Bordes", |

"suffix": "" |

}, |

{ |

"first": "Jason", |

"middle": [], |

"last": "Weston", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ledell Yu Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. 2018. Starspace: Embed all the things! In Proceedings of AAAI 2018.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Klingner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Apurva", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaobing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshikiyo", |
|
"middle": [], |
|
"last": "Kato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideto", |
|
"middle": [], |
|
"last": "Kazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Stevens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Kurian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nishant", |
|
"middle": [], |
|
"last": "Patil", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Wei", |

"middle": [], |

"last": "Wang", |

"suffix": "" |

}, |

{ |

"first": "Cliff", |

"middle": [], |

"last": "Young", |

"suffix": "" |

}, |

{ |

"first": "Jason", |

"middle": [], |

"last": "Smith", |

"suffix": "" |

}, |

{ |

"first": "Jason", |

"middle": [], |

"last": "Riesa", |

"suffix": "" |

}, |

{ |

"first": "Alex", |

"middle": [], |

"last": "Rudnick", |

"suffix": "" |

}, |

{ |

"first": "Oriol", |

"middle": [], |

"last": "Vinyals", |

"suffix": "" |

}, |

{ |

"first": "Greg", |

"middle": [], |

"last": "Corrado", |

"suffix": "" |

}, |

{ |

"first": "Macduff", |

"middle": [], |

"last": "Hughes", |

"suffix": "" |

}, |

{ |

"first": "Jeffrey", |

"middle": [], |

"last": "Dean", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "Oriol Vinyals", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural ma- chine translation system: Bridging the gap be- tween human and machine translation. CoRR, abs/1609.08144.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Approximate nearest neighbor negative contrastive learning for dense text retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Lee", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenyan", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ye", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kwok-Fung", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jialin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Bennett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junaid", |
|
"middle": [], |
|
"last": "Ahmed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arnold", |
|
"middle": [], |
|
"last": "Overwijk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.00808" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learn- ing for dense text retrieval. arXiv preprint arXiv:2007.00808.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Retrieval models for question and answer archives", |
|
"authors": [ |
|
{ |
|
"first": "Xiaobing", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiwoon", |
|
"middle": [], |
|
"last": "Jeon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Bruce" |
|
], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "475--482", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1390334.1390416" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaobing Xue, Jiwoon Jeon, and W. Bruce Croft. 2008. Retrieval models for question and an- swer archives. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2008, Singapore, July 20-24, 2008, pages 475-482.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Anserini: Reproducible ranking baselines using Lucene", |
|
"authors": [ |
|
{ |
|
"first": "Peilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "J. Data and Information Quality", |
|
"volume": "10", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3239571" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peilin Yang, Hui Fang, and Jimmy Lin. 2018. Anserini: Reproducible ranking baselines us- ing Lucene. J. Data and Information Quality, 10(4):16:1-16:20.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Figure 1: Retrieval Architecture and Workflow Overview", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Sample scoring configuration.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Sample experimental configuration.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"3\">: Data set statistics</td><td/></tr><tr><td>candidate</td><td colspan=\"2\">MS MARCO</td><td colspan=\"2\">MS MARCO</td></tr><tr><td>generator</td><td colspan=\"2\">documents</td><td>passages</td><td/></tr><tr><td/><td>TREC 2019</td><td>develop.</td><td>TREC 2019</td><td>develop.</td></tr><tr><td>BM25</td><td>0.647</td><td>0.443</td><td>0.707</td><td>0.452</td></tr><tr><td>Tuned system</td><td>0.693</td><td>0.472</td><td>0.739</td><td>0.480</td></tr><tr><td>Gain</td><td>7.08%</td><td>6.39%</td><td>4.57%</td><td>6.08%</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: The effect of using a more effec-</td></tr><tr><td>tive candidate generator (evaluation metric is</td></tr><tr><td>NDCG@10). BM25 is tuned for MS MARCO pas-</td></tr><tr><td>sages, but not documents.</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "Evaluation of various fusion models.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"text": "For TREC 2019 and 2020 query sets (as well as for Yahoo Answers Manner), the evaluation metric is NDCG@10 (", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |