{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:16.896396Z" }, "title": "Conversational Search with Mixed-Initiative -Asking Good Clarification Questions backed-up by Passage Retrieval", "authors": [ { "first": "Yosi", "middle": [], "last": "Mass", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research AI Haifa University", "location": { "addrLine": "Mount Carmel", "postCode": "31905", "settlement": "Haifa", "region": "HA", "country": "Israel" } }, "email": "yosimass@il.ibm.com" }, { "first": "Doron", "middle": [], "last": "Cohen", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research AI Haifa University", "location": { "addrLine": "Mount Carmel", "postCode": "31905", "settlement": "Haifa", "region": "HA", "country": "Israel" } }, "email": "doronc@il.ibm.com" }, { "first": "Asaf", "middle": [], "last": "Yehudai", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research AI Haifa University", "location": { "addrLine": "Mount Carmel", "postCode": "31905", "settlement": "Haifa", "region": "HA", "country": "Israel" } }, "email": "asaf.yehudai@ibm.com" }, { "first": "David", "middle": [], "last": "Konopnicki", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research AI Haifa University", "location": { "addrLine": "Mount Carmel", "postCode": "31905", "settlement": "Haifa", "region": "HA", "country": "Israel" } }, "email": "davidko@il.ibm.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We deal with the scenario of conversational search, where user queries are under-specified or ambiguous. This calls for a mixed-initiative setup. User-asks (queries) and system-answers, as well as system-asks (clarification questions) and user response, in order to clarify her information needs. We focus on the task of selecting the next clarification question, given the conversation context. Our method leverages passage retrieval from a background content to fine-tune two deep-learning models for ranking candidate clarification questions. We evaluated our method on two different use-cases. The first is an open domain conversational search in a large web collection. The second is a taskoriented customer-support setup. We show that our method performs well on both use-cases.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "We deal with the scenario of conversational search, where user queries are under-specified or ambiguous. This calls for a mixed-initiative setup. User-asks (queries) and system-answers, as well as system-asks (clarification questions) and user response, in order to clarify her information needs. We focus on the task of selecting the next clarification question, given the conversation context. Our method leverages passage retrieval from a background content to fine-tune two deep-learning models for ranking candidate clarification questions. We evaluated our method on two different use-cases. The first is an open domain conversational search in a large web collection. The second is a taskoriented customer-support setup. We show that our method performs well on both use-cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A key task in information and knowledge discovery is the retrieval of relevant information given the user's information need (usually expressed by a query). 
With the abundance of textual knowledge sources and their diversity, it becomes more and more difficult for users, even expert ones, to query such sources and obtain valuable insights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus, users need to go beyond the traditional ad-hoc (one-shot) retrieval paradigm. This requires to support the new paradigm of conversational search -a sophisticated combination of various mechanisms for exploratory search, interactive IR, and response generation. In particular, the conversational paradigm can support mixed-initiative: namely, the traditional user asks -system answers interaction in addition to system-asks (clarification questions) and user-answers, to better guide the system and reach the information needed (Krasakis et al., 2020) .", "cite_spans": [ { "start": 533, "end": 556, "text": "(Krasakis et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing approaches for asking clarification questions include selection or generation. In the selection approach, the system selects clarification questions from a pool of pre-determined questions (Aliannejadi et al., 2019) . In the generation approach, the system generates clarification questions using rules or using neural generative models (Zamani et al., 2020) .", "cite_spans": [ { "start": 198, "end": 224, "text": "(Aliannejadi et al., 2019)", "ref_id": "BIBREF1" }, { "start": 346, "end": 367, "text": "(Zamani et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we focus on the selection task. While the latter (i.e., generation) may represent a more realistic use-case, still there is an interest in the former (i.e., selection) as evident by the Clarifying Questions for Open-Domain Dialogue Systems (ClariQ) challenge . Moreover, the selection task represents a controlled and less noisy scenario, where the pool of clarifications can be mined from e.g., query logs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we deal with content-grounded conversations. Thus, a conversation starts with an initial user query, continues with several rounds of conversation utterances (0 or more), and finally ends with one or more documents being returned to the user. Some of the agent utterances are marked as clarification questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task at hand is defined as follows. Given a conversation context up to (and not including) a clarification-question utterance, predict the next clarification question. A more formal definition is given in Section 3.2 below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Intuitively, clarification questions should be used to distinguish between several possible intents of the user. We approximate those possible intents through passages that are retrieved from a given corpus of documents. A motivating example from the challenge is given in Figure 1 . The user wants to get information about the topic all men are created equal. 
Through the retrieved passage, the system can ask the mentioned clarification questions.", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 281, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use two deep-learning models. The first one learns an association between conversation context and clarification questions. The second learns an association between conversation context, candidate passages and clarification questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Evaluation was done on two different use-cases. The first one is an open domain search in a large web corpus . The second is an internal task-oriented customer-support setup, where users ask technical questions. We show that our method performs well on both use-cases. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on works that deal with clarificationquestions selection. Aliannejadi et al. (2019) describes a setup very similar to ours for the aforementioned task. They apply a two-step process. In the first step, they use BERT (Devlin et al., 2019) to retrieve candidate clarification questions and, in the second step, they re-rank the candidates using multiple sources of information. Among them are the scores of retrieved documents using the clarification questions. However, they do not look at passage content as we do. The ClariQ 1 challenge organized a competition for selecting the best clarification questions in an open-domain conversational search. The system by NTES ALONG (Ou and Lin, 2020) was ranked first. They first retrieve candidate clarification questions and then re-rank them using a ROBERTA (Liu et al., 2019) model, that is fine-tuned on the relation between a query and a clarification question. Unlike our method, they do not exploit passage content.", "cite_spans": [ { "start": 67, "end": 92, "text": "Aliannejadi et al. (2019)", "ref_id": "BIBREF1" }, { "start": 225, "end": 246, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 684, "end": 702, "text": "(Ou and Lin, 2020)", "ref_id": "BIBREF10" }, { "start": 813, "end": 831, "text": "(Liu et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In Rao and Daum\u00e9 III (2018) , they select clarification questions using the expected value of perfect information, namely a good question is one whose expected answer will be useful. They do not assume a background corpus of documents.", "cite_spans": [ { "start": 3, "end": 27, "text": "Rao and Daum\u00e9 III (2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "3 Clarification-questions Selection", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "A conversation C is a list of utterances, C = {c 0 , ..., c n } where c 0 is the initial user query. Each 1 http://convai.io utterance has a speaker which is either a user or an agent. 
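For concreteness, here is a minimal sketch of how such a conversation and a context C_j could be represented; the class and field names are illustrative and not taken from the paper's code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Utterance:
    speaker: str                    # "user" or "agent"
    text: str
    is_clarification: bool = False  # set only on agent utterances (see below)

@dataclass
class Conversation:
    utterances: List[Utterance]     # utterances[0] is the initial user query

    def context(self, j: int) -> List[Utterance]:
        """Return the conversation context C_j = {c_0, ..., c_{j-1}}."""
        return self.utterances[:j]
```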
2 Since we deal with content-grounded conversations, the last utterance is an agent utterance, that points to a document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem definition", "sec_num": "3.1" }, { "text": "We further assume that agent utterances are tagged with a clarification flag where a value of 1 indicates that the utterance is a clarification question. This flag is either given as part of the dataset (e.g., in the open domain dataset, ClariQ) or is derived automatically by using a rule-based model or a classifier. We discuss such rules for the second task-oriented customer-support dataset (see Section 4.1 below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem definition", "sec_num": "3.1" }, { "text": "The Clarification-questions Selection task is defined as follows. Given a conversation context C j = {c 0 , ..., c j\u22121 }, predict a clarification question at the next utterance of the conversation. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem definition", "sec_num": "3.1" }, { "text": "The proposed run-time architecture is depicted in Figure 2 . It contains two indices and two fine-tuned BERT models. The Documents index contains the corpus of documents (recall that we deal with conversations that end with a document(s) being retrieved). This index supports passage retrieval. The Clarification-questions index contains the pool of clarification questions. The two BERT models are used for re-ranking of candidate clarification questions as described below.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 58, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Method", "sec_num": "3.2" }, { "text": "Given a conversation context C j , we first retrieve top-k passages from the Document index (See Section 3.3 below). We then use those passages, to retrieve candidate clarification questions from the Clarification-questions index (See Section 3.4 below). We thus have, for each passage, a list of candidate clarification questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.2" }, { "text": "The next step re-ranks those candidate clarification questions. Re-ranking is done by the fusion of ranking obtain through two BERT models. Each model re-ranks the clarification questions by their relevance to the given conversation context and the retrieved passages (see Section 3.5 below). The components of the architecture are described next in more details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.2" }, { "text": "Documents in the document index are represented using two fields. The first field contains the actual document content. The second field augments the document's representation with the text of all dialogs that link to it in the train-set (Amitay et al., 2005) . We refer to these two fields as text and anchor respectively. We also keep a third field anchor and text that contain the concatenation of the above two fields.", "cite_spans": [ { "start": 238, "end": 259, "text": "(Amitay et al., 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Conversation-based passage retrieval", "sec_num": "3.3" }, { "text": "Given a conversation context C j , Passage retrieval is performed in two steps. First, top-k documents are retrieved from the anchor and text field. using a disjunctive query over all words in the conversation C j . 
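A minimal sketch of forming this disjunctive query as a Lucene-style query string is given below; the field name, tokenization, and lower-casing are illustrative assumptions (in practice the index's English analyzer handles stemming and stop words).

```python
import re

def disjunctive_query(context_texts, field="anchor_and_text"):
    """Build a disjunctive (OR) query over all words of the conversation context C_j.

    context_texts: list of utterance texts c_0 .. c_{j-1}.
    Lucene's query parser treats space-separated terms as OR by default.
    """
    seen, terms = set(), []
    for text in context_texts:
        for tok in re.findall(r"\w+", text.lower()):
            if tok not in seen:
                seen.add(tok)
                terms.append(tok)
    return f"{field}:(" + " ".join(terms) + ")"
```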
Following (Ganhotra et al., 2020) , we treat the dialog query as a verbose query and apply the Fixed-Point (FP) method (Paik and Oard, 2014) for weighting its words. Yet, compared to \"traditional\" verbose queries, dialogs are further segmented into distinct utterances. Using this observation, we implement an utterance-biased extension for enhanced word-weighting. To this end, we first score the various utterances based on the initial FP weights of words they contain. We then propagate utterance scores back to their associated words.", "cite_spans": [ { "start": 226, "end": 249, "text": "(Ganhotra et al., 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conversation-based passage retrieval", "sec_num": "3.3" }, { "text": "In the second step, candidate passages are extracted from those top-k documents using a sliding window of fixed size with some overlap. Each candidate passage p is assigned an initial score based on the coverage of terms in C j by p. The coverage is defined as the sum over all terms in each utterance, using terms' global idf (inverse document frequency) and their (scaled) tf (term frequency). The final passage score is a linear combinations of its initial score and the score of the document it is extracted from. Details are given in appendix A.1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conversation-based passage retrieval", "sec_num": "3.3" }, { "text": "The pool of clarification questions is indexed into a Clarification index. We use the passages returned for a given conversation context C j , to extract an initial set of candidate clarification questions as follows. For each passage P , we concatenate its content to the text of all utterances in C j , and use it as a query to the Clarification index.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clarification-questions retrieval", "sec_num": "3.4" }, { "text": "We thus have, for each passage, a list of candidate clarification questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clarification-questions retrieval", "sec_num": "3.4" }, { "text": "The input to this step is a conversation context C j , a list of candidate passages, and a list of candidate clarification questions for each passage. We use two BERT (Devlin et al., 2019) models to rerank the candidate clarification questions. The first model, BERT-C-cq learns an association between conversation contexts and clarification questions. The second model, BERT-C-P-cq learns an association between conversation contexts, passages and clarification questions. Training and using the two models is described below. Fine-tuning of the models. The first model, BERT-C-cq, is fine-tuned through a triplet network (Hoffer and Ailon, 2015) that is adopted for BERT fine-tuning (Mass et al., 2019) . It uses triplets (C j , cq + , cq \u2212 ), where cq + is the clarification question of conversation C at utterance c j (as given in the conversations of the training set). Negative examples (cq \u2212 ) are randomly selected from the pool of clarification questions (not associated with C).", "cite_spans": [ { "start": 685, "end": 704, "text": "(Mass et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Clarification-questions re-ranking", "sec_num": "3.5" }, { "text": "For fine-tuning the second model, BERT-C-P-cq, we need to retrieve relevant passages. 
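The coverage-based passage scoring of Section 3.3 (detailed in Appendix A.1) can be sketched roughly as follows; the tf scaling, the utterance discounting, and the omitted score normalization are simplifications, and the helper names are illustrative.

```python
import math
from collections import Counter

def passage_score(passage_tokens, utterance_token_lists, doc_score, idf, lam=0.5):
    """Simplified coverage-based passage scoring.

    passage_tokens:        tokens of the candidate passage p
    utterance_token_lists: one token list per utterance of the context C_j
    doc_score:             (normalized) score of the enclosing document
    idf:                   dict mapping term -> inverse document frequency
    """
    tf = Counter(passage_tokens)
    n = len(utterance_token_lists)
    init = 0.0
    for i, utt in enumerate(utterance_token_lists):
        # coverage of the utterance's terms by the passage: idf * scaled tf
        cov = sum(idf.get(t, 0.0) * math.log1p(tf[t]) for t in set(utt) if t in tf)
        # discount so that later utterances have a greater effect on the score
        init += ((i + 1) / n) * cov
    # final score: linear combination of passage and document scores (lambda = 0.5)
    return lam * init + (1.0 - lam) * doc_score
```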
We use a weak-supervision assumption that all passages in a relevant document (i.e., a document returned for C), are relevant as well. A triplet for the second BERT model is thus (C j [SEP ] P, cq + , cq \u2212 ), where P is a passage retrieved for C j , [SEP ] is BERT's separator token, cq + and cq \u2212 are positive and negative clarification questions selected as described above for the first model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clarification-questions re-ranking", "sec_num": "3.5" }, { "text": "Due to the BERT limitation on max number of tokens (512), we represent a conversation context C j using the last m utterances whose total length is less than 512 characters. We also take the passage window size to be 512 characters. 4 Re-ranking with the models. Each candidate clarification question cq i is fed to the first model with the conversation context as (C j , cq i ), and to the second model as (C j [SEP ] P, cq i ), where P is the passage that was used to retrieve cq i . Final scores of the candidates is set by simple Comb-SUM (Wu, 2012) fusion of their scores from the two BERT models.", "cite_spans": [ { "start": 233, "end": 234, "text": "4", "ref_id": null }, { "start": 543, "end": 553, "text": "(Wu, 2012)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Clarification-questions re-ranking", "sec_num": "3.5" }, { "text": "We evaluated our method on two datasets. The first, ClariQ represents an information-seeking use-case. The second, Support contains conversations and technical documents of an internal customer support site. Statistics on the two datasets are given in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 259, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "The ClariQ dataset was built by crowd sourcing for the task of clarification-questions selection, thus it has high quality clarification questions. Each conversation has exactly three turns. Initial user query, an agent clarification question and the user response to the clarification question. The agent utterance is always a clarification question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "The Support dataset contains noisy logs of human-to-human conversations, that contain a lot of chit-chat utterances such as Thanks for your help or Are you still there? We thus applied the following rules to identify agent clarification questions. i) We consider only sentences in agent utterances that contain a question mark. ii) We look for question words in the text (e.g., what, how, where, did, etc.) and consider only the text between such a word and the question mark. iii) If no question words were found, we run the sentences with the question mark through Allennlp's constituency parser (Joshi et al., 2018) , and keep sentences with a Penn-Treebank clause type of SQ or SBARQ 5 .", "cite_spans": [ { "start": 371, "end": 406, "text": "(e.g., what, how, where, did, etc.)", "ref_id": null }, { "start": 598, "end": 618, "text": "(Joshi et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "The above rules can be used to detect questiontype sentences. However, we are interested in clarification questions that are related to the background collection of documents and not in chit-chat questions (such as e.g., how are you today?). 
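Rules (i)-(iii) can be sketched as below; the question-word list is illustrative, and clause_type stands in for the constituency parser's Penn-Treebank label. The corpus-grounding filter for chit-chat questions is described next.

```python
import re

QUESTION_WORDS = {"what", "how", "where", "when", "which", "who", "whom",
                  "why", "did", "do", "does", "is", "are", "can", "could", "would"}

def extract_candidate_question(agent_utterance, clause_type=None):
    """Apply rules (i)-(iii) to one agent utterance; return a question string or None.

    clause_type: optional callable returning the Penn-Treebank clause label of a
    sentence (e.g., obtained from a constituency parser).
    """
    for sent in re.split(r"(?<=[.!?])\s+", agent_utterance):
        if "?" not in sent:                # rule (i): must contain a question mark
            continue
        question = sent[: sent.index("?") + 1]
        words = question.split()
        # rule (ii): keep the text between a question word and the question mark
        for pos, w in enumerate(words):
            if w.lower().strip(",.;:'\"") in QUESTION_WORDS:
                return " ".join(words[pos:])
        # rule (iii): otherwise keep the sentence if its clause type is SQ or SBARQ
        if clause_type is not None and clause_type(question) in ("SQ", "SBARQ"):
            return question
    return None
```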
To filter out such chit-chat question types, we apply a fourth rule as follows. iv) Recall that each conversation ends with a document answer. We send the detected question and its answer (the next user's utterance) as a passage-retrieval query (see Section 3.3 above) to the Documents index and keep only those questions for which a passage from the conversation's document is returned among the top-3 results. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "We use Apache Lucene 6 for indexing the documents, with the English language analyzer and the default BM25 similarity (Robertson and Zaragoza, 2009). For the customer-support dataset (Support) we used the anchor and text field for initial document retrieval, since most documents in the dataset have training conversations.", "cite_spans": [ { "start": 112, "end": 142, "text": "(Robertson and Zaragoza, 2009)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Setup of the experiments", "sec_num": "4.2" }, { "text": "The open-domain dataset (ClariQ) contains a large number of documents (2.7M), but only a small portion of them have training conversations. Using the anchor and text field for retrieval would prefer that small subset of documents (since only they have anchor text). Thus, for this dataset, we used the text field for retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup of the experiments", "sec_num": "4.2" }, { "text": "For passage retrieval, we used a sliding window of 512 characters on the retrieved documents' content. We used common values for the hyperparameters, with \u03bb = 0.5 to combine document and passage scores, and \u00b5 = 2000 for the Dirichlet smoothing of the document LM used in the Fixed-Point reranking. Details of the passage retrieval are given in Appendix A.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup of the experiments", "sec_num": "4.2" }, { "text": "The full conversations were used to retrieve passages. For feeding to the BERT models, we concatenated the last m utterances whose total length was less than 512 characters (we take full utterances that fit within this size; we do not cut utterances).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup of the experiments", "sec_num": "4.2" }, { "text": "We used the pytorch huggingface implementation of BERT 7 . For the two BERT models we used bert-base-uncased (12 layers, 768 hidden units, 12 heads, 110M parameters). Fine-tuning was done with the following default hyperparameters: max seq len of 256 tokens 8 for the BERT-C-cq model and 384 for the BERT-C-P-cq model, a learning rate of 2e-5, and 3 training epochs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup of the experiments", "sec_num": "4.2" }, { "text": "We retrieved at most 1000 initial candidate clarifications for each passage. All experiments were run on 32GB V100 GPUs. Re-ranking 1000 clarification questions for each conversation took about 1 \u2212 2 sec. For evaluation metrics we followed the ClariQ leaderboard 9 and used Recall@30 as the main metric. Table 2 reports the results on the dev sets of the two datasets. 10 On both datasets, each of the BERT re-rankers showed a significant improvement over the initial retrieval from the Clarification-questions index (denoted by IR-Base). 
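For reference, a small sketch of the CombSUM fusion of the two models' scores and of the Recall@30 metric; the min-max normalization before summing is an assumption (the paper only states that simple CombSUM is used).

```python
def comb_sum(scores_a, scores_b):
    """Fuse two rankers' scores (dicts: candidate id -> score) with CombSUM."""
    def norm(s):
        lo, hi = min(s.values()), max(s.values())
        return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in s.items()}
    a, b = norm(scores_a), norm(scores_b)
    return {k: a.get(k, 0.0) + b.get(k, 0.0) for k in set(a) | set(b)}

def recall_at_k(ranked_ids, relevant_ids, k=30):
    """Fraction of the relevant clarification questions found in the top-k."""
    return len(set(ranked_ids[:k]) & set(relevant_ids)) / len(relevant_ids)
```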
For example, on Support, BERT-C-cq achieved R@30=0.538 compared to R@30=0.294 for IR-Base (an improvement of 82%).", "cite_spans": [], "ref_spans": [ { "start": 324, "end": 331, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Setup of the experiments", "sec_num": "4.2" }, { "text": "We can further see that the two BERT models (BERT-C-cq and BERT-C-P-cq) yield quite similar results on both datasets, but when fusing their scores (BERT-fusion), there is a further improvement of about 2.5% over each of the rankers separately. For example, on ClariQ, BERT-fusion achieved R@30=0.791, compared to R@30=0.77 for BERT-C-cq.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "This improvement can be attributed to the complementary matching that each of the two BERT models learns. The second model learns latent features that are revealed only through the retrieved passages, while the first model works better in cases where the retrieved passages are noisy. For example, for query 133 in ClariQ, all men are created equal (see Figure 1 above), BERT-C-P-cq found nine correct clarification questions out of 14 in its top-30 (including the two shown in the Figure), while BERT-C-cq found only three of them. Table 3 shows the official ClariQ leaderboard results on the test set. We can see that our method BERT-fusion 11 was ranked fourth, but was the second best as a team. We note that the top-performing system (NTES ALONG) gave preference to clarification questions from the test data, capitalizing on the specific ClariQ property that the test topics come from different domains than the train topics. This is not a valid assumption in general. In contrast, we treat all clarification questions in the given pool equally.", "cite_spans": [], "ref_spans": [ { "start": 350, "end": 365, "text": "Figure 1 above)", "ref_id": null }, { "start": 853, "end": 860, "text": "Figure)", "ref_id": null }, { "start": 905, "end": 912, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "We presented a method for clarification-questions selection in conversational-search scenarios that end with documents as answers. We showed that using passages, combined with deep-learning models, improves the quality of the selected clarification questions. We evaluated our method on two diverse datasets. On both datasets, the usage of passages for clarification-questions re-ranking achieved an improvement of 12% \u2212 87% over base IR retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "https://bit.ly/2Me0Gk1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "note that here we use tokens while for the passages and representation of conversation we use characters", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://convai.io", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We compare our methods on the dev sets since in ClariQ we had access only to the dev set. We note that in both datasets, the dev sets were not used during training, thus they can be regarded as a held-out test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "An agent can be either a human agent or a bot. 3 We always return clarification questions. 
We leave it for future work to decide whether a clarification is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "note that BERT uses tokens while for the passages and representation of conversation we use characters", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://gist.github.com/nlothian/9240750 6 https://lucene.apache.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our run was labeled CogIR in the official leaderboard", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We use Apache Lucene for indexing the documents, configured with English language analyzer and default BM25 similarity (Robertson and Zaragoza, 2009) .After retrieving top-k documents, candidate passages are extracted from those documents using a sliding window of fixed size with some overlap. Each retrieved passage p is assigned an initial score based on the coverage of terms in C j by p. The coverage is defined as the sum over all terms in each utterance, using terms' global idf (inverse document frequency) and their (scaled) tf (term frequency). Let c be a conversation with n utterances c = u1, ...un. Passage score is computed as a linear combination of its initial score scoreinit(p, c) and the score of its enclosing document. Both scores are normalized.We used lambda=0.5, i.e., fixed equal weights for the document and the passage scores. The initial passage score scoreinit(p, c) is computed as a weighted sum over its utterances scores scoreut(p, ui). Utterance scores are discounted such that later utterances have greater effect on the passage score.Utterance score scoreut(p, u) reflects utterance's terms coverage by the passage, considering terms' global idf (inverse document frequency) and their (scaled) tf (term frequency). Multiple coverage scorers are applied, which differ by their term frequency scaling schemes. Finally, the utterance score is a product of these coverage scorest p (terms appearing in both)t p , t u = (passage terms, utterance terms)Different scaling schemes provide different interpretations of terms' importance. We combine two tf scaling methods, one that scales by a BM25 term score, and another that scales by the minimum of tf (t) in the utterance and passage.The final passage score is a linear combinations of its initial score and the score of the document it is extracted from. Candidate passage ranking exploits a cascade of scorers.", "cite_spans": [ { "start": 119, "end": 149, "text": "(Robertson and Zaragoza, 2009)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "A.1 Passage Retrieval details", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Convai3: Generating clarifying questions for opendomain dialogue systems (clariq)", "authors": [ { "first": "Mohammad", "middle": [], "last": "Aliannejadi", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Kiseleva", "suffix": "" }, { "first": "Aleksandr", "middle": [], "last": "Chuklin", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dalton", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Burtsev", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeff Dalton, and Mikhail Burtsev. 2020. 
Convai3: Generating clarifying questions for open- domain dialogue systems (clariq).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Asking clarifying questions in open-domain information-seeking conversations", "authors": [ { "first": "Mohammad", "middle": [], "last": "Aliannejadi", "suffix": "" }, { "first": "Hamed", "middle": [], "last": "Zamani", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Crestani", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19", "volume": "", "issue": "", "pages": "475--484", "other_ids": { "DOI": [ "10.1145/3331184.3331265" ] }, "num": null, "urls": [], "raw_text": "Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, and W. Bruce Croft. 2019. Asking clar- ifying questions in open-domain information-seeking conversations. In Proceedings of the 42nd Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval, SIGIR'19, page 475-484, New York, NY, USA. Association for Com- puting Machinery.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Queries as anchors: selection by association", "authors": [ { "first": "Einat", "middle": [], "last": "Amitay", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Darlow", "suffix": "" }, { "first": "David", "middle": [], "last": "Konopnicki", "suffix": "" }, { "first": "Uri", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 16th ACM Conference on Hypertext and Hypermedia", "volume": "", "issue": "", "pages": "193--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Einat Amitay, Adam Darlow, David Konopnicki, and Uri Weiss. 2005. Queries as anchors: selection by association. In Proceedings of the 16th ACM Confer- ence on Hypertext and Hypermedia, pages 193-201.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Conversational document prediction to assist customer care agents", "authors": [ { "first": "Jatin", "middle": [], "last": "Ganhotra", "suffix": "" }, { "first": "Haggai", "middle": [], "last": "Roitman", "suffix": "" }, { "first": "Doron", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Nathaniel", "middle": [], "last": "Mills", "suffix": "" }, { "first": "R", "middle": [ "Chulaka" ], "last": "Gunasekara", "suffix": "" }, { "first": "Yosi", "middle": [], "last": "Mass", "suffix": "" }, { "first": "Sachindra", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Luis", "middle": [ "A" ], "last": "Lastras", "suffix": "" }, { "first": "David", "middle": [], "last": "Konopnicki", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "2020", "issue": "", "pages": "349--356", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jatin Ganhotra, Haggai Roitman, Doron Cohen, Nathaniel Mills, R. Chulaka Gunasekara, Yosi Mass, Sachindra Joshi, Luis A. Lastras, and David Konop- nicki. 2020. Conversational document prediction to assist customer care agents. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, Novem- ber 16-20, 2020, pages 349-356. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Deep metric learning using triplet network", "authors": [ { "first": "Elad", "middle": [], "last": "Hoffer", "suffix": "" }, { "first": "Nir", "middle": [], "last": "Ailon", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In 3rd International Confer- ence on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Extending a parser to distant domains using a few dozen partially annotated examples", "authors": [ { "first": "Vidur", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vidur Joshi, Matthew Peters, and Mark Hopkins. 2018. Extending a parser to distant domains using a few dozen partially annotated examples.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Analysing the effect of clarifying questions on document ranking in conversational search", "authors": [ { "first": "Antonios", "middle": [], "last": "Minas Krasakis", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Aliannejadi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 ACM SIGIR on International Conference on Theory of Information Retrieval", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3409256.3409817" ] }, "num": null, "urls": [], "raw_text": "Antonios Minas Krasakis, Mohammad Aliannejadi, Nikos Voskarides, and Evangelos Kanoulas. 2020. Analysing the effect of clarifying questions on docu- ment ranking in conversational search. 
Proceedings of the 2020 ACM SIGIR on International Conference on Theory of Information Retrieval.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A study of bert for non-factoid question-answering under passage length constraints", "authors": [ { "first": "Yosi", "middle": [], "last": "Mass", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Haggai Roitman", "suffix": "" }, { "first": "Or", "middle": [], "last": "Erera", "suffix": "" }, { "first": "Bar", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "David", "middle": [], "last": "Weiner", "suffix": "" }, { "first": "", "middle": [], "last": "Konopnicki", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yosi Mass, Haggai Roitman, Shai Erera, Or Rivlin, Bar Weiner, and David Konopnicki. 2019. A study of bert for non-factoid question-answering under passage length constraints. CoRR, abs/1908.06780.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A clarifying question selection system from ntes along in convai3 challenge", "authors": [ { "first": "Wenjie", "middle": [], "last": "Ou", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2020, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenjie Ou and Yue Lin. 2020. A clarifying question se- lection system from ntes along in convai3 challenge. CoRR, abs/2010.14202.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A fixed-point method for weighting terms in verbose informational queries", "authors": [ { "first": "H", "middle": [], "last": "Jiaul", "suffix": "" }, { "first": "Douglas", "middle": [ "W" ], "last": "Paik", "suffix": "" }, { "first": "", "middle": [], "last": "Oard", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM '14", "volume": "", "issue": "", "pages": "131--140", "other_ids": { "DOI": [ "10.1145/2661829.2661957" ] }, "num": null, "urls": [], "raw_text": "Jiaul H. Paik and Douglas W. Oard. 2014. A fixed-point method for weighting terms in verbose informational queries. In Proceedings of the 23rd ACM Interna- tional Conference on Conference on Information and Knowledge Management, CIKM '14, page 131-140, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information", "authors": [ { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2737--2746", "other_ids": { "DOI": [ "10.18653/v1/P18-1255" ] }, "num": null, "urls": [], "raw_text": "Sudha Rao and Hal Daum\u00e9 III. 2018. Learning to ask good questions: Ranking clarification questions us- ing neural expected value of perfect information. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2737-2746, Melbourne, Aus- tralia. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The probabilistic relevance framework: Bm25 and beyond", "authors": [ { "first": "Stephen", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Zaragoza", "suffix": "" } ], "year": 2009, "venue": "Found. Trends Inf. Retr", "volume": "3", "issue": "4", "pages": "333--389", "other_ids": { "DOI": [ "10.1561/1500000019" ] }, "num": null, "urls": [], "raw_text": "Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and be- yond. Found. Trends Inf. Retr., 3(4):333--389.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Data Fusion in Information Retrieval", "authors": [ { "first": "Shengli", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2012, "venue": "", "volume": "13", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-3-642-28866-1" ] }, "num": null, "urls": [], "raw_text": "Shengli Wu. 2012. Data Fusion in Information Re- trieval, volume 13.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Generating clarifying questions for information retrieval", "authors": [ { "first": "Hamed", "middle": [], "last": "Zamani", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Dumais", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Craswell", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "Gord", "middle": [], "last": "Lueck", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The Web Conference 2020, WWW '20", "volume": "", "issue": "", "pages": "418--428", "other_ids": { "DOI": [ "10.1145/3366423.3380126" ] }, "num": null, "urls": [], "raw_text": "Hamed Zamani, Susan Dumais, Nick Craswell, Paul Bennett, and Gord Lueck. 2020. Generating clar- ifying questions for information retrieval. In Pro- ceedings of The Web Conference 2020, WWW '20, page 418-428, New York, NY, USA. Association for Computing Machinery.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "All men are created equal\" is arguably the bestknown phrase in any of America's political documents, \u2026. Thomas Jefferson first used the phrase in the Declaration of Independence. Corpus Figure 1: A motivating example", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Clarification-questions selection run-time architecture", "uris": null, "num": null }, "TABREF1": { "text": "Datasets statistics", "num": null, "html": null, "content": "
                                 ClariQ      Support
#docs                            2.7M        520
#conversations (train/dev/test)  187/50/60   500/39/43
#total clarifications            3940        704
#avg/max turns per C             3/3         8.2/80.5
#avg/max clarifications per C    14/18       1.27/5
", "type_str": "table" }, "TABREF2": { "text": "Retrieval quality on the dev set of the two datasets", "num": null, "html": null, "content": "
ClariQ        R@5   R@10  R@20  R@30
IR-Base       .327  .575  .669  .706
BERT-C-cq     .352  .631  .743  .770
BERT-C-P-cq   .344  .615  .750  .774
BERT-fusion   .353  .639  .758  .791
Support       R@5   R@10  R@20  R@30
IR-Base       .102  .153  .269  .294
BERT-C-cq     .358  .410  .487  .538
BERT-C-P-cq   .217  .294  .487  .538
BERT-fusion   .294  .410  .500  .551
", "type_str": "table" }, "TABREF3": { "text": "Retrieval quality on the test set of the ClariQ dataset", "num": null, "html": null, "content": "
ClariQ        R@5   R@10  R@20  R@30
NTES ALONG    .340  .632  .833  .874
NTES ALONG    .341  .635  .831  .872
NTES ALONG    .338  .624  .817  .868
BERT-fusion   .338  .631  .807  .857
TAL-ML        .339  .625  .817  .856
Karl          .335  .623  .799  .849
Soda          .327  .606  .801  .843
", "type_str": "table" } } } }