diff --git "a/5NE3T4oBgHgl3EQfQgli/content/tmp_files/load_file.txt" "b/5NE3T4oBgHgl3EQfQgli/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/5NE3T4oBgHgl3EQfQgli/content/tmp_files/load_file.txt" @@ -0,0 +1,985 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf,len=984 +page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='04413v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='IR] 11 Jan 2023 CoSPLADE: Contextualizing SPLADE for Conversational Information Retrieval Nam Le Hai1[0000−0002−9020−8790], Thomas Gerald2, Thibault Formal1,3, Jian-Yun Nie4, Benjamin Piwowarski1[0000−0001−6792−3262], and Laure Soulier1,2[0000−0001−9827−7400] 1 Sorbonne Université, CNRS, ISIR, F-75005 Paris, France first.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='last @sorbonne-universite.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='fr 2 Université Paris-Saclay, CNRS, LISN, 91405 Orsay France first.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='last @lisn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='fr 3 Naver Labs Europe, Meylan, France first.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='last @naverlabs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='com 4 University of Montreal, Montreal, Canada nie@iro.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='umontreal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='ca Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' Conversational search is a difficult task as it aims at retriev- ing documents based not only on the current user query but also on the full conversation history.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' Most of the previous methods have focused on a multi-stage ranking approach relying on query reformulation, a criti- cal intermediate step that might lead to a sub-optimal retrieval.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' Other approaches have tried to use a fully neural IR first-stage, but are ei- ther zero-shot or rely on full learning-to-rank based on a dataset with pseudo-labels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' In this work, leveraging the CANARD dataset, we propose an innovative lightweight learning technique to train a first-stage ranker based on SPLADE.' 
By relying on SPLADE sparse representations, we show that, when combined with a second-stage ranker based on T5Mono, the results are competitive on the TREC CAsT 2020 and 2021 tracks.

Keywords: information retrieval · conversational search · first-stage ranking

1 Introduction

With the introduction of conversational assistants like Siri, Alexa or Cortana, conversational Information Retrieval, a variant of ad hoc IR, has emerged as an important research domain [4,6]. In conversational IR, a search is conducted within a session, and the user's information need is expressed through a sequence of queries, similarly to natural conversations, thus introducing complex inter-dependencies between queries and responses. Not surprisingly, neural IR models have been shown to perform best on conversational IR [5,7]. Most prior works rely on a Historical Query Expansion step [34], i.e. a query expansion mechanism that takes into account all past queries and their associated answers. Such a query expansion model is learned on the CANARD dataset [8], which is composed of series of questions and their associated answers, together with a disambiguated query, referred to as the gold query in this paper. However, relying on a reformulation step is computationally costly and might be sub-optimal, as underlined in [13,16]. Krasakis et al. [13] proposed to use ColBERT [12] in a zero-shot manner, replacing the query by the sequence of queries, without any training of the model. Lin et al.
[16] proposed to learn a dense contextualized representation of the query history, optimizing a learning-to-rank loss over a dataset composed of weak labels. This makes the training process complex (the labels are not reliable) and long. In this work, we follow this direction of research but propose a much lighter training process for the first-stage ranker, in which we focus on queries and do not make use of any passage, and thus of any learning-to-rank training. It moreover sidesteps the problem of having to derive weak labels from the CANARD dataset. Given this strong supervision, we can consider more context, i.e. we use the answers provided by the system the user is interacting with, which allows us to better contextualize the query, as shown in our experiments. The training loss we propose leverages the sparse representation of queries and documents provided by the SPLADE model [9]. In a nutshell, we require that the representation of the query matches that of the disambiguated query (i.e. the gold query). Our first-stage ranker achieves high performance, especially on recall, the most important measure in a multi-stage approach, comparable to the best systems in TREC CAsT [7], but also on precision-oriented measures, which shows the potential of our methodology. Finally, to perform well, the second-stage ranker (i.e. the re-ranker) needs to consider the conversation as well, which might require a set of heuristics to select some content and/or a query reformulation such as those used in [18].
Leveraging the fact that our first-stage ranker outputs weights over the (BERT) vocabulary, we propose a simple mechanism that provides a conversational context to the re-ranker in the form of keywords selected by SPLADE. In summary, our contributions are the following:

1. We propose a new loss to optimize a first-stage ranker, resulting in a lightweight training strategy and state-of-the-art results in terms of recall;
2. We show that, when combined with a second-stage ranker based on a context derived from the SPLADE query representation of the first stage, we obtain results on par with the best approaches in TREC CAsT 2020 and 2021.

2 Related Works

The first edition [5] of the TREC Conversational Assistance Track (CAsT) was organized in 2019, providing a new challenge on conversational search. The principle is the following: a user queries the system with questions in natural language and each time gets a response from the system. The challenge differs from classical search systems in that involving previous utterances (either queries or answers) is key to better comprehending the user intent. In conversational IR, and in TREC CAsT [6,5,7] in particular, the sheer size of the document collection calls for an efficient (and effective) search system. Conversational IR is closely related to conversational Question Answering [25,27,26] in the sense that they both include interaction turns in natural language. However, the objective is intrinsically different.
While the topic or the context (i.e., the passage containing the answers) is known in conversational QA, conversational IR aims to search among a huge collection of documents with potentially more exploratory topics. With this in mind, in the following we focus on the literature review of conversational IR. We can distinguish two lines of work in conversational search. The first one [29,30,32,3] focuses on Contextual Query Reformulation (CQR) to produce a (plain or bag-of-words) query, ideally representing the information need free of context, which is fed into a search model. One strategy of CQR consists in selecting keywords from previous utterances by relying on a graph weighted by either word2vec similarity [29], term-based importance using BM25 [19], or classification models [30]. Other approaches [14,19,18,33,28] leverage the potential of generative language models (e.g., GPT-2 or T5) to rewrite the query. Such approaches are particularly effective, reaching top performances in the TREC CAsT 2020 edition [5]. Query reformulation models also differ in the selected evidence sources. Models either focus on the early stage of the conversation [1], on a set of queries filtered either heuristically [2] or by a classification model [21], or on both previous queries and documents [31]. Finally, to avoid the problem of generating a single query, [14,20] have proposed to use several generated queries and aggregate the returned documents. The reformulation step is however a bottleneck, since there is no guarantee that the "gold query" is optimal and thus generalizes well [16,13]. Moreover, generating text is time-consuming.
To avoid these problems, the second line of work aims to directly integrate the conversation history into the retrieval model, bypassing the query reformulation step. As far as we know, only a few studies have followed this path in conversational search. Qu et al. [24] compute a query representation using the k last queries in the dialogue [15]. Similarly, Lin et al. [16] average contextualized token embeddings over the whole query history. The representation is learned by optimizing a learning-to-rank loss over a collection with weak labels, which requires much care to ensure good generalization. Finally, Krasakis et al. [13] use a more lexical neural model, i.e. ColBERT [12], to encode the query with its context, but they do not finetune it at all. In this work, we go further by using a sparse model, SPLADE [9], with a novel loss tailored to such sparse representations, and a lightweight training procedure that does not rely on passages, but only on a dataset containing reformulated queries.

3 Model

In TREC CAsT [5,7], each retrieval session contains around 10 turns of exchange. Each turn corresponds to a query, and its associated canonical answer5 is provided as context for future queries. Let us now introduce some notations that we use to describe our model. For each turn n ≤ N, where N is the last turn of the conversation, we denote by qn and an respectively the corresponding query and its response.
Finally, the context of a query qn at turn n corresponds to all the previous queries and answers, i.e. q1, a1, q2, a2, ..., qn−1, an−1. The main objective of the TREC CAsT challenges is to retrieve, for each query qn and its context, the relevant passages. In the next sections, we present our first-stage ranker and second-stage re-ranker, along with their training procedures, both based, directly or indirectly, on the SPLADE (v2) model described in [9]. SPLADE has shown results on par with dense approaches on in-domain collections while exhibiting stronger abilities to generalize in a zero-shot setting [9]. It outputs a sparse representation of a document or a query in the BERT vocabulary, which is key to our model during training and inference. The SPLADE model we use includes a contextual encoding function, followed by aggregation steps: ReLU, log saturation, and max pooling over each token in the text. The output of SPLADE is a sparse vector with only positive or zero components in the BERT vocabulary space R^|V|. In this work, we use several sets of parameters for the same SPLADE architecture and distinguish each version by its parameters θ, and the corresponding model by SPLADE(...; θ).

5 Selected by the organizers as the most relevant answer of a baseline system.
3.1 First stage

The original SPLADE model [9] scores a document using the dot product between the sparse representation of a document (d̂) and that of a query (q̂):

s(q̂, d̂) = q̂ · d̂    (1)

In our work, as in [16], we suppose that the document representation has been sufficiently well tuned on the standard ad hoc IR task. The document embedding d̂ is thus obtained using the pre-trained SPLADE model, i.e. d̂ = SPLADE([CLS] d; θSPLADE), where θSPLADE are the original SPLADE parameters obtained from HuggingFace6. These parameters are not fine-tuned during the training process. We can thus use standard indices built from the original SPLADE document representations to retrieve the top-k documents efficiently. In the following, we present how to contextualize the query representation using the conversation history. Then, we detail the training loss, which aims at reducing the gap between the representation of the gold query and the contextualized representation.

6 The weights can be found at https://huggingface.co/naver/splade-cocondenser-ensembledistil
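As an illustration of this first stage, the short sketch below (our own example, not the authors' released code) builds a SPLADE sparse representation with the public checkpoint from footnote 6 and scores a query against a passage with the dot product of Eq. (1); the pooling follows the ReLU / log-saturation / max-pooling description given above, and the helper name splade_encode is ours.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Public SPLADE checkpoint cited in footnote 6.
CKPT = "naver/splade-cocondenser-ensembledistil"
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForMaskedLM.from_pretrained(CKPT).eval()

def splade_encode(text: str) -> torch.Tensor:
    """Sparse |V|-dimensional representation: ReLU, log saturation, max pooling over tokens."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits                          # (1, seq_len, |V|)
    weights = torch.log1p(torch.relu(logits))                    # log(1 + ReLU(.))
    weights = weights * inputs["attention_mask"].unsqueeze(-1)   # ignore padding positions
    return weights.max(dim=1).values.squeeze(0)                  # max pooling -> (|V|,)

# Eq. (1): the relevance score is the dot product of the two sparse vectors.
q_hat = splade_encode("How old is Obama?")
d_hat = splade_encode("Barack Obama was born on August 4, 1961.")
print(float(q_hat @ d_hat))

In practice the document side is precomputed once and stored in a standard inverted index, as stated above.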
Query representation. Like state-of-the-art approaches for first-stage conversational ranking [16,13], we contextualize the query with the previous ones. Going further, we propose to include the answers in the query representation process, which is easier to do thanks to our lightweight training. To leverage both contexts, we use a simple model where the contextualized query representation at turn n, denoted q̂_n,k, is the combination of two representations: q̂^queries_n, which encodes the current query in the context of all the previous queries, and q̂^answers_n,k, which encodes the current query in the context of the k past answers7. Formally, the contextualized query representation q̂_n,k is:

q̂_n,k = q̂^queries_n + q̂^answers_n,k    (2)

where we use two versions of SPLADE, parameterized by θqueries for the full query history and θanswers,k for the answers. These parameters are learned by optimizing the loss defined in Eq. (8). Following [16], we define q̂^queries_n as the query representation produced by encoding the concatenation of the current query and all the previous ones:

q̂^queries_n = SPLADE([CLS] qn [SEP] q1 [SEP] ... [SEP] qn−1; θqueries)    (3)

using a set of specific parameters θqueries. To take into account the answers that the user had access to, we need to include them in the representation. Following prior work [2], we can consider a varying number of answers k; in particular, we can either choose k = 1 (the last answer) or k = n−1 (all the previous answers). Formally, the representation q̂^answers_n,k is computed as:

q̂^answers_n,k = (1/k) Σ_{i=n−k}^{n−1} SPLADE(qn [SEP] ai; θanswers,k)    (4)

7 In the experiments, we also explore an alternative model where answers and queries are considered at once.
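The sketch below (again our illustration) shows how Equations (2)-(4) can be assembled; encode_queries and encode_answers stand for two copies of the SPLADE query encoder carrying the parameters θqueries and θanswers,k, for instance the splade_encode function above bound to two separately fine-tuned models.

from typing import Callable, List
import torch

Encoder = Callable[[str], torch.Tensor]  # text -> sparse |V|-dimensional vector

def contextualized_query(queries: List[str], answers: List[str], k: int,
                         encode_queries: Encoder, encode_answers: Encoder) -> torch.Tensor:
    """Sketch of Eqs. (2)-(4) for the current query queries[-1] at turn n = len(queries)."""
    q_n, history = queries[-1], queries[:-1]

    # Eq. (3): encode the current query concatenated with all previous queries.
    q_hat_queries = encode_queries(" [SEP] ".join([q_n] + history))

    # Eq. (4): average the encodings of (current query [SEP] answer) over the k last answers.
    last_k = answers[len(answers) - k:]
    q_hat_answers = torch.stack(
        [encode_answers(f"{q_n} [SEP] {a}") for a in last_k]).mean(dim=0)

    # Eq. (2): the contextualized query representation is the sum of the two parts.
    return q_hat_queries + q_hat_answers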
Training. Based on the above, training aims at obtaining a good representation q̂_n,k for the last issued query qn, i.e. at contextualizing qn using the previous queries and answers. To do so, we can leverage the gold query q*_n, that is, a (hopefully) contextualized and unambiguous query. We can compute the representation q̂*_n of this query by using the original SPLADE model, i.e.

q̂*_n = SPLADE(q*_n; θSPLADE)    (5)

For example, for the query "How old is he?", the matching gold query could be "How old is Obama?". The representation of the latter given by SPLADE would look as follows:

[("Obama", 1.5), ("Barack", 1.2), ("age", 1.2), ("old", 1.0), ("president", 0.8), ...]

where the terms "Obama" and "Barack" clearly appear alongside other words related to the current query ("old" and the semantically related "age"). We can now define the goal of the training, which is to reduce the difference between the gold query representation q̂*_n and the representation q̂_n,k computed by our model.
An obvious choice of loss function is to match the predicted and gold representations using a cosine loss (since the ranking is invariant when scaling the query). However, as shown in the results section, we experimentally found better results with a modified MSE loss, whose first component is the standard MSE loss:

LossMSE(q̂_n,k, q̂*_n) = MSE(q̂_n,k, q̂*_n)    (6)

In our experiments, we observed that models trained with the direct MSE do not capture well words from the context, especially words from the answers. The reason is that the manually reformulated gold query usually contains only a few additional words from the previous turns, namely those directly implied by the last query. Other potentially useful words from the answers may not be included. This is a conservative expansion strategy, which may not be the best example to follow for an automatic query rewriting process. We thus added an asymmetric MSE, designed to encourage term expansion from past answers while avoiding the introduction of noise, by restricting the terms to those present in the gold query q*_n. Formally, our asymmetric loss is:

Lossasym(q̂^answers_n,k, q̂*_n) = (max(q̂*_n − q̂^answers_n,k, 0))^2    (7)

where the maximum is component-wise. This loss thus pushes the answer-biased representation q̂^answers_n,k to include tokens from the gold query. Contrary to the MSE, it does not (directly) impose an upper bound on the components of the q̂^answers_n,k representation; this is done indirectly through the final loss function described below. The final loss we optimize is a simple linear combination of the losses defined above, and only relies on computing two query representations:

Loss(q̂_n,k, q̂*_n) = LossMSE(q̂_n,k, q̂*_n) + Lossasym(q̂^answers_n,k, q̂*_n)    (8)
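A possible PyTorch formulation of this training objective is sketched below (our illustration; the reduction over the vocabulary dimension is our own choice, as the paper does not spell it out):

import torch

def cosplade_loss(q_pred: torch.Tensor,     # q_hat_{n,k} from Eq. (2)
                  q_answers: torch.Tensor,  # q_hat^answers_{n,k} from Eq. (4)
                  q_gold: torch.Tensor      # q_hat*_n from Eq. (5)
                  ) -> torch.Tensor:
    # Eq. (6): standard MSE between the predicted and gold sparse representations.
    loss_mse = torch.mean((q_pred - q_gold) ** 2)

    # Eq. (7): asymmetric MSE, penalizing only the gold-query terms that the
    # answer-conditioned representation fails to cover (component-wise max with 0).
    loss_asym = torch.mean(torch.clamp(q_gold - q_answers, min=0.0) ** 2)

    # Eq. (8): the final loss is the sum of the two components.
    return loss_mse + loss_asym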
There is an interplay between the two components of the global loss. More precisely, Lossasym pushes the q̂^answers_n,k representation to match the gold query representation q̂*_n if it can, and LossMSE pushes the queries-biased representation to compensate if not. It thus puts a strong focus on extracting information from past answers, which is shown to be beneficial in our experiments.

Implementation details. For the first stage, we initialize both encoders (one encoding the queries, the other encoding the previous answers) with pre-trained weights from the SPLADE model for ad hoc retrieval. We use the ADAM optimizer with a training batch size of 16 and a learning rate of 2e-5 for the first encoder and 3e-5 for the second. We fine-tune for only 1 epoch over the CANARD dataset.

3.2 Reranking

We perform reranking using a T5Mono [22] approach, in which we enrich the raw query qn with keywords identified by the first-stage ranker. Our motivation is that these words should capture the information needed to contextualize the raw query. The enriched query q+_n for conversational turn n is as follows:

q+_n = qn. Context: q1 q2 ... qn−1. Keywords: w1, w2, ..., wK    (9)

where the wi are the top-K most important words, selected by leveraging the first-stage ranker as follows. First, to reduce noise, we only consider words that appear either in any query qi or in the associated answers ai (for i ≤ n−1). Second, we order words by the maximum SPLADE weight over the tokens that compose each word.8

8 To improve coherence, we chose to make the keywords follow their order of appearance in the context, but did not vary this experimental setting.
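To make this keyword mechanism concrete, here is a minimal sketch (our own code; the function name, the de-duplication step and the tokenizer choice are assumptions) of how the top-K context words can be selected from the first-stage sparse query representation and inserted into the enriched query of Eq. (9):

from typing import List
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("naver/splade-cocondenser-ensembledistil")

def enriched_query(q_n: str, previous_queries: List[str], context_words: List[str],
                   q_rep: torch.Tensor, K: int = 10) -> str:
    """Sketch of Eq. (9). context_words are the words occurring in the previous queries and
    answers; q_rep is the first-stage sparse query representation over the BERT vocabulary."""
    context_words = list(dict.fromkeys(context_words))  # de-duplicate, keep order of appearance

    def weight(word: str) -> float:
        # A word is scored by the maximum SPLADE weight over the word pieces that compose it.
        ids = tokenizer.encode(word, add_special_tokens=False)
        return max((float(q_rep[i]) for i in ids), default=0.0)

    top_k = set(sorted(context_words, key=weight, reverse=True)[:K])
    # Footnote 8: the selected keywords keep their order of appearance in the context.
    keywords = [w for w in context_words if w in top_k]
    return f"{q_n}. Context: {' '.join(previous_queries)}. Keywords: {', '.join(keywords)}"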
We denote the T5 model fine-tuned for this input as T5+. As in the original paper [22], the relevance score of a document d for the query q+_n is the probability of generating the token "true" given the prompt pt(q+_n, d) = "Query: q+_n. Document: d. Relevant:":

score(q+_n, d; θ) = pT5(true | pt(q+_n, d); θ) / (pT5(true | pt(q+_n, d); θ) + pT5(false | pt(q+_n, d); θ))    (10)

where θ are the parameters of the T5Mono model. Differently from the first-stage training, we fine-tune the ranker by aligning the scores of documents, and not the weights of a query representation (which is obviously not possible with the T5 model). Here, the "gold" score of a document is computed using the original T5Mono with the gold query q*_n. The T5 model is initialized with the weights made public by the original authors9, denoted θT5.

9 We used the Huggingface checkpoint https://huggingface.co/castorini/monot5-base-msmarco
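For reference, Eq. (10) can be computed as in the following sketch with the checkpoint from footnote 9 (our illustration, not the authors' code; we assume that "true" and "false" correspond to single SentencePiece tokens for this tokenizer):

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# monoT5 checkpoint cited in footnote 9.
CKPT = "castorini/monot5-base-msmarco"
tokenizer = T5Tokenizer.from_pretrained(CKPT)
model = T5ForConditionalGeneration.from_pretrained(CKPT).eval()

TRUE_ID = tokenizer.encode("true", add_special_tokens=False)[0]
FALSE_ID = tokenizer.encode("false", add_special_tokens=False)[0]

def monot5_score(query: str, document: str) -> float:
    """Eq. (10): probability of 'true' renormalized over {'true', 'false'}."""
    prompt = f"Query: {query} Document: {document} Relevant:"
    enc = tokenizer(prompt, return_tensors="pt", truncation=True)
    decoder_start = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
    with torch.no_grad():
        # Distribution over the first generated token.
        logits = model(**enc, decoder_input_ids=decoder_start).logits[0, -1]
    probs = torch.softmax(logits[[TRUE_ID, FALSE_ID]], dim=0)
    return float(probs[0])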
More precisely, we finetune the pre-trained T5Mono model using the MSE-Margin loss [11]. The loss function for the re-ranker (at conversation turn n, given two documents d1 and d2) is calculated as follows:

LR = [ (s(q+_n, d1; θT5+) − s(q+_n, d2; θT5+)) − (s(q*_n, d1; θT5) − s(q*_n, d2; θT5)) ]^2

We optimize the θT5+ parameters while keeping the original θT5 frozen to compute the scores for the gold queries.

Implementation details. We initialize θT5+ with θT5, and fine-tune for 3 epochs with a batch size of 8 and a learning rate of 1e-4. We sample pairs (d1, d2) from the first-stage top-1000 documents: d1 is sampled among the top 3, and d2 among the remaining 997, to push the model to focus on important differences in scores.
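The MSE-Margin objective above can be written, for (batched) scores of document pairs, as in the short sketch below (our illustration; the scores are assumed to come from Eq. (10), the gold-query scores computed under torch.no_grad() since θT5 stays frozen):

import torch

def margin_mse_loss(s_plus_d1: torch.Tensor, s_plus_d2: torch.Tensor,
                    s_gold_d1: torch.Tensor, s_gold_d2: torch.Tensor) -> torch.Tensor:
    """Align the score margin of T5+ on the enriched query q+_n with the margin of the
    frozen T5Mono on the gold query q*_n (the LR loss above)."""
    student_margin = s_plus_d1 - s_plus_d2   # s(q+_n, d1; theta_T5+) - s(q+_n, d2; theta_T5+)
    teacher_margin = s_gold_d1 - s_gold_d2   # s(q*_n, d1; theta_T5)  - s(q*_n, d2; theta_T5)
    return torch.mean((student_margin - teacher_margin) ** 2)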
4 Experimental Protocol

We designed the evaluation protocol to satisfy two main evaluation objectives: (i) evaluating separately the effectiveness of the first-stage and second-stage ranking components of our CoSPLADE model; (ii) comparing the effectiveness of our CoSPLADE model with that of the TREC CAsT 2020 and 2021 participants.

4.1 Datasets

To train our model, we used the CANARD corpus10, a conversational dataset focusing on context-based query rewriting. More specifically, the CANARD dataset is a list of conversation histories, each being composed of a series of queries, short (human-written) answers, and reformulated (contextualized) queries. The training, development, and test sets include respectively 31,538, 3,418, and 5,571 contextual and reformulated queries. To evaluate our model, we used the TREC CAsT 2020 and 2021 datasets, which include respectively 25 and 26 information needs (topics) and a document collection composed of the MS MARCO dataset, an updated dump of Wikipedia from the KILT benchmark, and the Washington Post V4 collection. For each topic, a conversation is available, alternating questions and responses (manually selected passages from the collection, aka canonical answers). For each question (216 and 239 in total), the dataset provides its manually rewritten form as well as a set of about 20 relevant documents. We use the former to define an upper-bound baseline (SPLADE_goldQuery).

10 https://sites.google.com/view/qanta/projects/canard

4.2 Metrics and baselines

We used the official evaluation metrics of TREC CAsT 2020 and 2021, namely nDCG@3, MRR, Recall@X, MAP@X, and nDCG@X, where the cut-off X is set to 1000 for CAsT 2020 and 500 for CAsT 2021. For each metric, we report the mean and variance of performance across the different queries of the dataset. With this in mind, we present below the different baselines and scenarios used to evaluate each component of our model.
First-stage ranking scenarios. To evaluate the effectiveness of our first-stage ranking model (Section 3.1), we compare our approach CoSPLADE, based on the query representation of Eq. (2), with different variants (the document encoder is set to the original SPLADE encoder throughout our experiments):

- SPLADE_rawQuery (lower bound): SPLADE [10] using only the original, ambiguous user queries qn;
- SPLADE_goldQuery (a kind of upper bound): SPLADE using the manually rewritten queries q*_n;
- CQE [16]: a state-of-the-art dense contextualized query representation learned with learning-to-rank on a dataset with pseudo-labels.

To model answers when representing the query using q̂^answers_n,k, we used two historical ranges ("All" with k = n−1 answers and "Last" where we use only the last one, i.e. k = 1) and three types of answer inputs:

- Answer: the answers are the canonical answers;
- Answer-Short: sentences are filtered as in the best-performing TREC CAsT approach [18], which allows for a consistent input length at the expense of losing information;
- Answer-Long: as the answers from CANARD are short (a few sentences extracted from Wikipedia, contrary to CAsT ones), we expand them to reduce the discrepancy between training and inference. For each sentence, we find the Wikipedia passage it appears in (if it exists in ORConvQA [23]) and sample a short snippet of 3 adjacent sentences from it.
Finally, we also conducted ablation studies (on the best of the above variants) by modifying either the way the historical context is used or the training loss:

- flatContext: a one-encoder version of our SPLADE approach in which we concatenate all the information of the context and apply SPLADE once to obtain a single representation of the query (instead of the two representations q̂^queries_n and q̂^answers_n,k of Equations 2 and 3), trained with the MSE loss (Eq. 6) since there are no longer two representations;
- MSE: the version of our SPLADE approach trained with the MSE loss (Eq. 6) instead of the proposed one (Eq. 8);
- cosine: the version of our SPLADE approach trained with a cosine loss instead of the proposed loss (Eq. 8). The cosine loss is interesting because it is invariant to a scaling factor, which preserves the document ordering (Eq. 1).

Second-stage ranking scenarios. We consider different scenarios for our second-stage ranking model:

- T5Mono_RawQuery: the T5Mono ranking model [22] applied on raw queries;
- T5Mono_GoldQuery: the T5Mono ranking model applied on gold queries;
- T5Mono_CQR: the T5Mono ranking model applied on query reformulations generated with a pre-trained T5 (using the CANARD dataset);
- CoSPLADE_[context]_[number]: different versions of our second-stage ranking model input (Eq. 9), varying (1) the number K of keywords identified as relevant by the first-stage ranker (5, 10, or 20) and (2) the presence or absence of the past queries within the reformulation.
TREC participant baselines. For each evaluation campaign (2020 and 2021), we also compare our model with the best, median, and lowest TREC CAsT participants reported in the two overviews [5,7], where participants are ranked according to the nDCG@3 metric.

5 Results

5.1 First-stage ranking effectiveness

In this section, we focus on the first-stage ranking component of our CoSPLADE model. To do so, we experiment with different scenarios aiming at evaluating the impact of the designed loss (Eq. 8) and of the modeling/utility of the evidence sources (Equations 3 and 4). Results of these different baselines and scenarios on the TREC CAsT 2021 dataset are provided in Table 1. Similar trends are observed on CAsT 2020 but are not reported due to space limits. In general, one can see that all variants of our approach (the CoSPLADE_* models) outperform the scenarios applying the initial version of SPLADE on raw and, more importantly, on gold queries. This is very encouraging since the latter scenario might be considered an oracle, i.e. the query is manually disambiguated. Finally, we improve over CQE [16] on all metrics, showing that our simple learning mechanism, combined with SPLADE, allows for achieving SOTA performance.

                                 Recall@500  MAP@500   MRR       nDCG@500  nDCG@3
Baselines
SPLADE_rawQuery                  30.8±2.7    5.5±0.9   21.3±2.9  17.8±1.8  13.1±2.1
SPLADE_goldQuery                 68.8±2.0    16.1±1.2  55.5±3.3  42.8±1.7  38.3±2.8
CQE [17] (from [7])              79.1        28.9      60.3      55.7      43.8
Effect of answer processing: CoSPLADE_...
AllAnswers                       79.5±2.2    28.8±1.7  61.7±3.1  55.3±2.0  46.5±2.9
AllAnswers-short                 72.8±2.6    25.7±1.9  54.4±3.3  49.5±2.3  40.1±3.0
AllAnswers-long                  80.4±2.1    29.3±1.8  62.0±3.2  55.6±2.1  48.9±3.0
LastAnswer                       83.4±2.0    31.2±1.8  61.8±3.1  58.1±2.0  47.4±3.0
LastAnswer-short                 79.2±2.2    28.1±1.8  61.4±3.3  54.3±2.1  46.4±3.0
LastAnswer-long                  85.2±1.8    32.0±1.7  64.3±3.0  59.4±1.9  48.6±3.0
CoSPLADE_LastAnswer-long variants
flatContext                      77.0±2.0    26.0±2.0  55.0±3.0  52.0±2.0  42.0±3.0
MSE loss                         70.9±2.4    21.6±1.7  48.7±3.4  45.2±2.3  39.6±3.1
cosine loss                      70.4±2.5    22.6±1.7  52.5±3.3  46.9±2.2  39.0±3.0
Table 1. Effectiveness of different scenarios of our first-stage ranking model on TREC CAsT 2021.

Leveraging queries and answers history better contextualizes the current query. Comparing the flatContext scenario with SPLADE_goldQuery allows us to assess the impact of the evidence sources related to the conversation, since both use the same architecture (SPLADE).
We can observe that flatContext obtains better results than SPLADE_goldQuery (e.g., 77.0 vs. 68.8 for the Recall@500 metric), highlighting the usefulness of the context for better understanding the information need.

More detailed answers perform better. Since answers are more verbose than questions, including them is more complex, and we need to study the different possibilities (CoSPLADE_AllAnswers* and CoSPLADE_LastAnswer*). One can see that: 1) trimming answers (*-short) into a few keywords is less effective than considering the canonical answers, but 2) it can be somewhat effective when combined with the associated Wikipedia passage (*-long). Moreover, it seems more effective to consider only the last answer rather than the whole set of answers in the conversation history (this might be due to the simple way we use past answers, i.e. Eq. 4, but none of the other variations we tried performed better). Taken together, these observations highlight the importance of how information from answers is incorporated into the reformulation process.

Dual query representation with asymmetric loss leverages sparse query representations. The results of the flatContext scenario show that considering past queries and answers at once performs better (compared to the MSE loss scenario, which is directly comparable).
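To make the training objectives being compared here concrete, the sketch below spells out the plain MSE and the scale-invariant cosine loss used in the ablations, together with a generic one-sided variant shown only to illustrate what an asymmetric MSE can look like. The exact form of the paper's asymmetric loss (Eq. 7/8) is not reproduced in this section, so the last function is an assumption rather than the actual objective.

```python
import torch
import torch.nn.functional as F

def mse_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Eq. (6)-style objective: match the gold-query SPLADE representation exactly.
    return ((pred - target) ** 2).mean()

def cosine_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Invariant to positive rescaling of the prediction, which leaves the
    # dot-product document ordering of Eq. (1) unchanged.
    return 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()

def one_sided_mse(pred: torch.Tensor, target: torch.Tensor,
                  under_weight: float = 1.0, over_weight: float = 0.1) -> torch.Tensor:
    # Illustrative asymmetric variant (not the paper's Eq. 7/8): under-estimated
    # terms are penalized more than over-estimated ones, leaving room for extra
    # mass contributed by the answer representation.
    residual = target - pred
    under = torch.clamp(residual, min=0.0)    # mass missing w.r.t. the gold query
    over = torch.clamp(-residual, min=0.0)    # extra mass (e.g. terms from answers)
    return (under_weight * under ** 2 + over_weight * over ** 2).mean()
```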
However, if we separate the representations and use an asymmetric loss function, the conclusion changes. Moreover, comparing our best scenario, CoSPLADE_LastAnswer-long, with the same configuration trained simply with an MSE or a cosine loss reveals the effectiveness of our asymmetric MSE (Equation 7). Remember that this asymmetric loss encourages the consideration of previous answers in the query encoding. This reinforces our intuition that the conversation context, and particularly the verbose answers, is important for the conversational search task. It also reveals that the context should be included at different levels of the architecture (input and loss).

5.2 Second-stage ranking effectiveness

In this section, we rely on the CoSPLADE_LastAnswer-long model as the first-stage ranker and evaluate different variants of the second-stage ranking method relying on the T5Mono model. For a fair comparison, we also report the results obtained by a T5Mono ranking model applied to raw and gold queries, as well as to queries reformulated with a T5 generative model. Results on the TREC CAsT 2021 dataset are presented in Table 2.

                                 Recall@500  MAP@500   MRR       nDCG@500  nDCG@3
Baselines
T5Mono_RawQuery                  78.4±2.3    21.0±1.8  39.6±3.2  45.9±2.1  28.4±3.0
T5Mono_GoldQuery                 86.1±1.7    44.1±1.9  78.7±2.7  68.5±1.8  64.6±2.8
T5Mono_CQR                       80.4±2.2    30.0±1.9  58.2±3.4  55.3±2.1  44.6±3.2
CoSPLADE-based second-stage variants
CoSPLADE_NoContext_5             84.3±1.8    31.7±2.0  61.6±3.3  58.1±2.0  45.9±3.1
CoSPLADE_NoContext_10            83.1±1.9    32.0±1.7  66.0±3.1  59.1±1.9  49.8±2.9
CoSPLADE_NoContext_20            84.8±1.7    33.4±1.8  66.0±3.0  60.4±1.8  49.6±2.9
CoSPLADE_Context_5               85.0±1.7    35.0±1.8  68.4±3.0  61.7±1.9  51.5±2.9
CoSPLADE_Context_10              84.8±1.7    36.5±1.9  67.8±3.1  63.0±1.9  53.3±3.1
CoSPLADE_Context_20              84.9±1.7    35.5±1.8  69.8±3.0  62.2±1.9  54.4±2.9
Table 2. Effectiveness of different scenarios of our second-stage ranking model on TREC CAsT 2021.

The analysis of the CoSPLADE variants highlights different observations regarding the use of the context and the number of keywords added to the query. First, adding the previous questions to the current query in the prompt (i.e., "Context") seems to improve the query understanding and therefore positively impacts the retrieval effectiveness. For instance, when 5 keywords are added, the context allows reaching 51.5% nDCG@3 against 45.9% without context. Second, the effectiveness metrics tend to increase with the number of additional keywords, particularly for the scenarios without context, which is sensible. This trend is less noticeable for the scenarios with context, since the best metrics are alternately obtained by the scenarios adding either 10 or 20 keywords.
It is worth noting, however, that adding 10 or 20 keywords is more valuable than adding only 5 (e.g., 54.4% vs. 51.5% nDCG@3). It thus seems that 1) keywords help to reformulate the initial information need, 2) but they can lead to saturation when they are both numerous and combined with other information. By comparing the best model scenarios with the more basic scenarios applying the T5Mono second-stage ranker to raw and gold queries, we can observe that our method improves retrieval effectiveness over the initial queries but is not sufficient to reach the performance of T5Mono_GoldQuery. However, the results obtained when applying T5Mono to queries reformulated by T5 highlight that contextualizing an initial query is a difficult task: the T5Mono_CQR scenario is less effective than T5Mono_GoldQuery by between 6 and 20 points depending on the metric. Moreover, it is interesting to note that the SPLADE model applied to raw and gold queries (first-stage ranking in Table 1) obtains lower results than the T5Mono model on the same data (second-stage ranking in Table 2). This can be explained by the different purposes of the two architectures: SPLADE is a sparse model focusing on query/document expansion, while T5Mono is particularly devoted to increasing precision. However, it is worth noting that combining SPLADE and T5Mono as first- and second-stage rankers reaches the highest effectiveness in our experimental evaluation.
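To make this cascade concrete, here is a minimal sketch of how the two stages fit together in this evaluation: a sparse dot-product retrieval tuned for recall, followed by pointwise reranking for precision. `encode_query`, `doc_index` and `t5mono_score` are placeholders standing in for the CoSPLADE encoder, an inverted index, and the T5Mono scorer; they are not the actual APIs used in the paper.

```python
def two_stage_search(question, past_queries, encode_query, doc_index, t5mono_score,
                     n_candidates=500, k_keywords=10):
    """Illustrative SPLADE-then-T5Mono cascade.

    encode_query(question, past_queries) -> dict[str, float]: contextualized sparse
        query representation (placeholder for the CoSPLADE first-stage encoder).
    doc_index: dict[doc_id, dict[str, float]] of sparse document representations.
    t5mono_score(query_text, doc_id) -> float: pointwise reranker score (placeholder).
    """
    q = encode_query(question, past_queries)

    # Stage 1: sparse dot-product retrieval, aimed at recall.
    def sparse_score(doc_vec):
        return sum(w * doc_vec.get(term, 0.0) for term, w in q.items())
    candidates = sorted(doc_index,
                        key=lambda d: sparse_score(doc_index[d]),
                        reverse=True)[:n_candidates]

    # Stage 2: rerank for precision, feeding the reranker the current question
    # plus the top-K expansion keywords produced by stage 1.
    keywords = [t for t, _ in sorted(q.items(), key=lambda kv: kv[1],
                                     reverse=True)[:k_keywords]]
    rerank_text = question + " " + " ".join(keywords)
    return sorted(candidates, key=lambda d: t5mono_score(rerank_text, d), reverse=True)
```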
This combination shows the ability of CoSPLADE to both contextualize queries and effectively rank documents.

5.3 Effectiveness compared to TREC CAsT participants

We finally compare our approach with the TREC CAsT participants of the 2020 and 2021 evaluation campaigns (Table 3). For both years, we obtain effectiveness metrics that are very close to, or higher than, the ones reached by the best participants. Indeed, CoSPLADE surpasses the best TREC participant of the 2020 evaluation campaign on Recall@1000 and nDCG@1000. For 2021, our model obtains better results than the best participant on the MRR and nDCG@3 metrics. For both years, the best participant is the h2oloo team [18,7], which uses query reformulation techniques based either on AllenAI or on T5. Our results suggest that our approach, which focuses on a sparse first-stage ranking model, combines the benefits of query expansion and document ranking in a single model that eventually helps the final reranking step. In other words, simply rewriting the query without jointly learning document ranking can hinder the overall performance of the search task.

6 Conclusion

In this paper, we have shown how a sparse neural retrieval model, namely SPLADE [9], can be leveraged together with a lightweight learning process to obtain a state-of-the-art first-stage ranker. We further showed that this first-stage ranker can be used to provide context to the second-stage ranker, leading to results comparable with the best-performing systems. Future work may explore strategies to better capture the information from the context or to explicitly exploit the user feedback present in the evaluation dataset.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' CoSPLADE: Contextualizing SPLADE for Conversational IR 13 TREC CAsT 2020 Recall@1000 MAP@1000 MRR nDCG@1000 nDCG@3 TREC Participant (best) 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='3 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='2 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='3 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='6 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='8 TREC Participant (median) 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='1 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='1 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='2 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='4 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='4 TREC Participant (low) 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='9 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='0 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='9 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='1 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='2 CoSPLADE 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='4±2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='0 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='9±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='5 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='1±2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='9 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='2±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='8 44.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='0±2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='7 TREC CAsT 2021 Recall@500 MAP@500 MRR nDCG@500 nDCG@3 TREC Participants 1 (best) 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='0 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='6 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='9 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='6 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='6 TREC Participants 2 (median) 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='4 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='6 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='4 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='6 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='7 TREC Participants 3 (low) 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='9 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='6 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='0 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='4 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='4 CoSPLADE 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='9±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='7 35.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='5±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='8 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='8±3 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='2±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='9 54.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='4±2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='9 Table 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' TREC CAsT 2020 and 2021 performances regarding participants References 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' Aliannejadi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=', Chakraborty, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=', Ríssola, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=', Crestani, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=': Har- nessing evolution of multi-turn conversations for effective answer retrieval pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' 33–42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='1145/3343413.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='3377968, http://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='org/abs/1912.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='10554 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' Arabzadeh, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=', Clarke, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' : Waterlooclarke at the trec 2020 conversational assistant track (2020) 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content=' Clarke, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE3T4oBgHgl3EQfQgli/content/2301.04413v1.pdf'} +page_content='L.' 
3. Clarke, C.L.A.: Waterlooclarke at the TREC 2019 conversational assistant track. In: Voorhees, E.M., Ellis, A. (eds.) Proceedings of the Twenty-Eighth Text REtrieval Conference, TREC 2019, Gaithersburg, Maryland, USA, November 13-15, 2019. NIST Special Publication, vol. 1250. National Institute of Standards and Technology (NIST) (2019), https://trec.nist.gov/pubs/trec28/papers/WaterlooClarke.C.pdf
4. Culpepper, J.S., Diaz, F., Smucker, M.D.: Research frontiers in information retrieval: Report from the third strategic workshop on information retrieval in Lorne (SWIRL 2018). SIGIR Forum 52(1), 34–90 (2018). https://doi.org/10.1145/3274784.3274788
5. Dalton, J., Xiong, C., Callan, J.: CAsT 2020: The conversational assistance track overview. p. 10
6. Dalton, J., Xiong, C., Callan, J.: TREC CAsT 2019: The conversational assistance track overview. http://arxiv.org/abs/2003.13624
7. Dalton, J., Xiong, C., Callan, J.: TREC CAsT 2021: The conversational assistance track overview. p. 7 (2021)
8. Elgohary, A., Peskov, D., Boyd-Graber, J.: Can You Unpack That? Learning to Rewrite Questions-in-Context. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 5918–5924. Association for Computational Linguistics, Hong Kong, China (Nov 2019). https://doi.org/10.18653/v1/D19-1605, https://aclanthology.org/D19-1605
9. Formal, T., Lassance, C., Piwowarski, B., Clinchant, S.: From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 2353–2359. SIGIR '22, Association for Computing Machinery, New York, NY, USA (Jul 2022). https://doi.org/10.1145/3477495.3531857
10. Formal, T., Piwowarski, B., Clinchant, S.: SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking. In: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 2288–2292. SIGIR '21, Association for Computing Machinery, New York, NY, USA (Jul 2021). https://doi.org/10.1145/3404835.3463098
11. Hofstätter, S., Althammer, S., Schröder, M., Sertkan, M., Hanbury, A.: Improving efficient neural ranking models with cross-architecture knowledge distillation. ArXiv abs/2010.02666 (2020)
12. Khattab, O., Zaharia, M.: ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. http://arxiv.org/abs/2004.12832
13. Krasakis, A.M., Yates, A., Kanoulas, E.: Zero-shot Query Contextualization for Conversational Search. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 1880–1884. SIGIR '22, Association for Computing Machinery, New York, NY, USA (Jul 2022). https://doi.org/10.1145/3477495.3531769
14. Kumar, V., Callan, J.: Making information seeking easier: An improved pipeline for conversational search. p. 10
15. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: A lite BERT for self-supervised learning of language representations. In: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net (2020), https://openreview.net/forum?id=H1eA7AEtvS
16. Lin, S.C., Yang, J.H., Lin, J.: Contextualized query embeddings for conversational search. http://arxiv.org/abs/2104.08707
17. Lin, S.C., Yang, J.H., Lin, J.: In-batch negatives for knowledge distillation with tightly-coupled teachers for dense retrieval. In: Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021). pp. 163–173. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.repl4nlp-1.17, https://aclanthology.org/2021.repl4nlp-1.17
18. Lin, S.C., Yang, J.H., Lin, J.: TREC 2020 notebook: CAsT track. Tech. rep., TREC (Dec 2021)
19. Lin, S.C., Yang, J.H., Nogueira, R., Tsai, M.F., Wang, C.J., Lin, J.: Multi-stage conversational passage retrieval: An approach to fusing term importance estimation and neural query rewriting. http://arxiv.org/abs/2005.02230
20. Lin, S., Yang, J., Nogueira, R., Tsai, M., Wang, C., Lin, J.: Query reformulation using query history for passage retrieval in conversational search. CoRR abs/2005.02230 (2020), https://arxiv.org/abs/2005.02230
21. Mele, I., Muntean, C.I., Nardini, F.M., Perego, R., Tonellotto, N.: Finding Context through Utterance Dependencies in Search Conversations. Tech. rep. (2021)
22. Nogueira, R., Jiang, Z., Pradeep, R., Lin, J.: Document ranking with a pretrained sequence-to-sequence model. In: Findings of the Association for Computational Linguistics: EMNLP 2020. pp. 708–718. Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.63, https://www.aclweb.org/anthology/2020.findings-emnlp.63
23. Qu, C., Yang, L., Chen, C., Qiu, M., Croft, W.B., Iyyer, M.: Open-retrieval conversational question answering. pp. 539–548. https://doi.org/10.1145/3397271.3401110, http://arxiv.org/abs/2005.11364
24. Qu, C., Yang, L., Chen, C., Qiu, M., Croft, W.B., Iyyer, M.: Open-retrieval conversational question answering. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 539–548. SIGIR '20, Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3397271.3401110
25. Qu, C., Yang, L., Qiu, M., Croft, W.B., Zhang, Y., Iyyer, M.: BERT with history answer embedding for conversational question answering. In: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 1133–1136. SIGIR '19, Association for Computing Machinery, New York, NY, USA (2019). https://doi.org/10.1145/3331184.3331341
26. Qu, C., Yang, L., Qiu, M., Zhang, Y., Chen, C., Croft, W.B., Iyyer, M.: Attentive history selection for conversational question answering. In: Proceedings of the 28th ACM International Conference on Information and Knowledge Management. pp. 1391–1400 (2019)
27. Reddy, S., Chen, D., Manning, C.D.: CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics 7, 249–266 (2019). https://doi.org/10.1162/tacl_a_00266, https://aclanthology.org/Q19-1016
28. Vakulenko, S., Longpre, S., Tu, Z., Anantha, R.: Question rewriting for conversational question answering. In: Proceedings of the 14th ACM International Conference on Web Search and Data Mining. pp. 355–363. ACM. https://doi.org/10.1145/3437963.3441748
29. Voskarides, N., Li, D., Panteli, A., Ren, P.: ILPS at TREC 2019 conversational assistant track. p. 4
30. Voskarides, N., Li, D., Ren, P., Kanoulas, E., de Rijke, M.: Query resolution for conversational search with limited supervision. pp. 921–930. https://doi.org/10.1145/3397271.3401130, http://arxiv.org/abs/2005.11723
31. Yan, X., Clarke, C.L.A., Arabzadeh, N.: Waterlooclarke at the TREC 2021 conversational assistant track (2021)
32. Yang, J.H., Lin, S.C., Wang, C.J., Lin, J.J., Tsai, M.F.: Query and answer expansion from conversation history. In: TREC (2019)
33. Yu, S., Liu, J., Yang, J., Xiong, C., Bennett, P., Gao, J., Liu, Z.: Few-shot generative conversational query rewriting. http://arxiv.org/abs/2006.05009
34. Zamani, H., Trippas, J.R., Dalton, J., Radlinski, F.: Conversational Information Seeking (Jan 2022). https://doi.org/10.48550/arXiv.2201.08808, http://arxiv.org/abs/2201.08808