{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:38:37.900770Z" }, "title": "Slice-Aware Neural Ranking", "authors": [ { "first": "Gustavo", "middle": [], "last": "Penha", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Claudia", "middle": [], "last": "Hauff", "suffix": "", "affiliation": {}, "email": "c.hauff@tudelft.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Understanding when and why neural ranking models fail for an IR task via error analysis is an important part of the research cycle. Here we focus on the challenges of (i) identifying categories of difficult instances (a pair of question and response candidates) for which a neural ranker is ineffective and (ii) improving neural ranking for such instances. To address both challenges, we resort to slice-based learning (Chen et al., 2019), whose goal is to improve the effectiveness of neural models for slices (subsets) of data. We address challenge (i) by proposing different slicing functions (SFs) that select slices of the dataset; based on prior work, we heuristically capture different failures of neural rankers. Then, for challenge (ii), we adapt a neural ranking model to learn slice-aware representations, i.e. the adapted model learns to represent the question and responses differently based on the model's prediction of which slices they belong to. Our experimental results 1 across three different ranking tasks and four corpora show that slice-based learning improves the effectiveness by an average of 2% over a neural ranker that is not slice-aware.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Understanding when and why neural ranking models fail for an IR task via error analysis is an important part of the research cycle. 
Here we focus on the challenges of (i) identifying categories of difficult instances (a pair of question and response candidates) for which a neural ranker is ineffective and (ii) improving neural ranking for such instances. To address both challenges, we resort to slice-based learning (Chen et al., 2019), whose goal is to improve the effectiveness of neural models for slices (subsets) of data. We address challenge (i) by proposing different slicing functions (SFs) that select slices of the dataset; based on prior work, we heuristically capture different failures of neural rankers. Then, for challenge (ii), we adapt a neural ranking model to learn slice-aware representations, i.e. the adapted model learns to represent the question and responses differently based on the model's prediction of which slices they belong to. Our experimental results 1 across three different ranking tasks and four corpora show that slice-based learning improves the effectiveness by an average of 2% over a neural ranker that is not slice-aware.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Retrieving text for a given information need is a fundamental task in Information Retrieval (IR). For a long time, neural networks failed to convincingly outperform traditional term matching approaches with pseudo-relevance feedback, e.g. RM3 (Abdul-Jaleel et al., 2004), for text retrieval tasks including the classic adhoc retrieval task (Yang et al., 2019a). However, with recent breakthroughs in natural language processing (NLP), neural approaches, prominently BERT (Devlin et al., 2019), are achieving state-of-the-art effectiveness across a 1 The source code and data are available at https://github.com/Guzpenha/slice_based_learning. 
def sf_long_question(x, t=5):", "cite_spans": [ { "start": 242, "end": 269, "text": "(Abdul-Jaleel et al., 2004)", "ref_id": "BIBREF0" }, { "start": 340, "end": 360, "text": "(Yang et al., 2019a)", "ref_id": "BIBREF26" }, { "start": 470, "end": 491, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 547, "end": 548, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "return len(x.question.split(\" \")) > t def sf_BERT_difficulty(x, t=0.1): p_rel = np.mean([BERT.pred(x.question, res) for res in x.rel_resp]) p_not_rel = np.mean([BERT.pred(x.question, res) for res in x.not_rel_resp]) return (p_rel - p_not_rel) < t SF 0 SF 1 Figure 1: Examples of slicing functions (SFs) to capture subsets of difficult tuples of question and response list. The SFs also have access to relevance labels for the training set, as they are not required at test time by the slice-aware neural ranker. SF 0 uses the question length as a proxy for question complexity, and SF 1 calculates how distinguishable relevant and non-relevant responses are based on BERT predictions.", "cite_spans": [], "ref_spans": [ { "start": 262, "end": 270, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "range of text retrieval tasks (Yang et al., 2019b; Nogueira and Cho, 2019).", "cite_spans": [ { "start": 30, "end": 50, "text": "(Yang et al., 2019b;", "ref_id": "BIBREF27" }, { "start": 51, "end": 74, "text": "Nogueira and Cho, 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Understanding when and why retrieval models fail is an important part of the research cycle. 
Even though we have clues about the failures of neural rankers, obtained for instance from the study of question performance prediction (He and Ounis, 2006), diagnostic datasets (C\u00e2mara and Hauff, 2020) and error analysis, automatically identifying difficult instances (tuples of question and response list) and improving the effectiveness of models for such difficult instances are still open challenges. We consider difficult instances to be questions and responses for which a given neural ranker's retrieval effectiveness is below average. A recent approach, referred to as slice-based learning, has been proposed to identify and improve the effectiveness of subsets of data (so-called slices), as opposed to focusing on all data equally. The core idea is that a slice-aware neural model will represent instances differently depending on the slices of data they come from. Slice-based learning has been applied to computer vision and NLP tasks, with overall effectiveness improvements of up to 3.5% over a model that is not slice-aware.", "cite_spans": [ { "start": 225, "end": 245, "text": "(He and Ounis, 2006)", "ref_id": "BIBREF11" }, { "start": 268, "end": 292, "text": "(C\u00e2mara and Hauff, 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we focus on the challenges of (i) detecting difficult instances for neural rankers and (ii) improving the retrieval effectiveness for such instances. We address the challenges by (i) creating slicing functions (SFs), i.e., functions that define whether an instance belongs to a slice, which heuristically capture different errors of rankers (cf. Figure 1 for examples of SFs); and (ii) employing a slice-aware neural ranker, i.e., a neural ranker that learns to represent each instance differently based on its prediction of which slice the input belongs to (cf. Figure 2 for a diagram of the slice-aware neural ranker). 
Our two main research questions are as follows. RQ1: To what extent can slice-based learning improve neural ranking models? RQ2: What are the underlying reasons for the effectiveness of slice-based learning? Our experimental results on three different conversational tasks show that slice-based learning is beneficial to IR, providing positive evidence for RQ1. The gains are observed for both overall effectiveness and the effectiveness for slices of the data. Concerning RQ2, we evaluate to what extent the effectiveness gains observed for the slice-aware model come from the effect of ensemble learning (Dietterich et al., 2002), a direction not explored empirically by previous work. We find that, when using random SFs, we can also significantly improve upon a non-slice-aware neural ranker. We note, though, that not all improvements of slice-based learning can be attributed to the effect of ensemble learning, and carefully implementing SFs is indeed advantageous.", "cite_spans": [], "ref_spans": [ { "start": 359, "end": 367, "text": "Figure 2", "ref_id": null }, { "start": 576, "end": 584, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Slice-based learning is an approach based on the engineering of SFs that capture slices of data. The SFs all follow the same format: they receive the instance as input (in our case a question and a list of candidate responses) and return a boolean variable indicating whether the instance belongs to the slice. Based on the SFs, a neural model is adapted to improve the effectiveness on such slices of data, for example, by having a different set of weights for each slice. Training a different model for each slice and combining their predictions is inefficient: training and maintaining a different neural ranking model for each slicing function amounts to a large number of parameters and an increased prediction time. As an efficient solution, Chen et al. 
Figure 2: Overview of the slice-aware neural ranker. For each SF we define, we have a SRAM module that learns a slice-expert representation; these representations are then combined with an attention mechanism into a slice-aware representation.", "cite_spans": [], "ref_spans": [ { "start": 765, "end": 773, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Slice-based Learning", "sec_num": "2" }, { "text": "(2019) propose a slice-aware approach for neural models that shares parameters in a similar manner to multi-task learning (Caruana, 1997).", "cite_spans": [ { "start": 105, "end": 120, "text": "(Caruana, 1997)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Slice-based Learning", "sec_num": "2" }, { "text": "3 Slice-based learning for IR", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slice-based Learning", "sec_num": "2" }, { "text": "We first introduce the SFs we defined to heuristically capture subsets of data containing different categories of errors, for which the effectiveness is lower than average, based on intuitions drawn from prior work (RQ1). We then introduce the random SFs we deploy to study the effect of ensemble learning in slice-based learning (RQ2). Finally, we describe the slice-aware neural ranker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slice-based Learning", "sec_num": "2" }, { "text": "We divide our SFs into two categories: those based only on the question text (question-based) and those that use both the question and the list of candidate responses (question-responses based). The relevance labels for the training instances are also inputs to the SFs; the SFs are not required at inference time, as the slice-aware neural ranker learns to predict slice membership.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slicing Functions", "sec_num": "3.1" }, { "text": "Question Length (QL): the number of question terms is higher than the threshold T_QL. 
QL was shown to correlate negatively with the effectiveness of retrieval methods in adhoc retrieval (Bendersky and Croft, 2009). Long questions (questions with high QL) provide a way of expressing complex information needs, as opposed to short questions (Phan et al., 2007). Context Length (CL) 2 : the number of turns in the dialogue context is higher than the threshold T_CL. CL was shown to correlate negatively with models' effectiveness for the conversation response ranking task when using different neural rankers (Tao et al., 2019).", "cite_spans": [ { "start": 187, "end": 214, "text": "(Bendersky and Croft, 2009)", "ref_id": "BIBREF1" }, { "start": 341, "end": 360, "text": "(Phan et al., 2007)", "ref_id": "BIBREF19" }, { "start": 610, "end": 628, "text": "(Tao et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Question-based SFs", "sec_num": "3.1.1" }, { "text": "Question Category (QC): the question is about a certain semantic category, e.g. QC = travel selects questions about travel. Knowing which topic a question belongs to can lead to retrieval effectiveness improvements, for instance by using federated search (Shokouhi and Si, 2011), intent-aware ranking (Glater et al., 2017) or multi-task learning (Liu et al., 2015). Instances from different categories could display different effectiveness values, e.g. questions about physics could be a potentially difficult category. Question type (5W1H): a categorization into types of question (who, what, where, when, why, how), e.g. 5W1H = what selects what questions. 5W1H has been used to inform dialogue management modules (Han et al., 2013). 
The effectiveness of models can vary with the type of question (Kim et al., 2019).", "cite_spans": [ { "start": 251, "end": 274, "text": "(Shokouhi and Si, 2011)", "ref_id": "BIBREF21" }, { "start": 298, "end": 319, "text": "(Glater et al., 2017)", "ref_id": "BIBREF7" }, { "start": 343, "end": 361, "text": "(Liu et al., 2015)", "ref_id": "BIBREF16" }, { "start": 713, "end": 731, "text": "(Han et al., 2013)", "ref_id": "BIBREF9" }, { "start": 797, "end": 815, "text": "(Kim et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Question-based SFs", "sec_num": "3.1.1" }, { "text": "Question Response Term Match (QDTM): the number of words that appear in both the question and a relevant response is smaller than the threshold T_QDTM. The difference in vocabulary, i.e. the lexical gap, between queries and documents has been shown to be a problem in IR (Lee et al., 2008) and has led to remedies such as query expansion (Voorhees, 1994) and the use of neural ranking models for semantic matching (Guo et al., 2019). Responses Lexical Similarity (DLS): the average TF-IDF similarity between the relevant response and the top-k most similar responses in the candidate list is higher than the threshold T_DLS. The amount of internal coherence, i.e. similarity between responses, has been used to predict query difficulty (He et al., 2008). The SFs can be easily extended for multiple relevant responses, e.g. 
by using the average or considering one representative relevant response.", "cite_spans": [ { "start": 264, "end": 282, "text": "(Lee et al., 2008)", "ref_id": "BIBREF15" }, { "start": 335, "end": 351, "text": "(Voorhees, 1994)", "ref_id": "BIBREF23" }, { "start": 411, "end": 429, "text": "(Guo et al., 2019)", "ref_id": "BIBREF8" }, { "start": 730, "end": 747, "text": "(He et al., 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Question-Responses based SFs", "sec_num": "3.1.2" }, { "text": "The random SF randomly samples X% of the training data, where X is a hyperparameter. Figure 2 displays a diagram of the slice-aware neural ranker. Based on a backbone (BERT) that learns a representation of the question and response concatenation, the slice-aware neural ranker learns to (1) predict whether each instance belongs to each of the k slices (supervision is based on the boolean output of the k SFs) 3 ; (2) learn k slice-expert representations, each with its own set of weights, trained using a shared prediction head that predicts relevance for the question and response combination using only instances of slice k; and (3) combine all representations from the SRAMs using attention into a single slice-aware representation that is used to make the final relevance prediction. The SFs are only used during training and thus are not needed at inference time. This is an adaptation of SRAMs, and the backbone could be replaced by any other neural ranker.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 93, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Random SFs", "sec_num": "3.1.3" }, { "text": "We employ four datasets and three retrieval tasks: MSDialog (Qu et al., 2018) and MANtIS (Penha et al., 2019) for conversation response ranking, Quora (Iyer et al., 2017) for similar question retrieval and ANTIQUE (Hashemi et al., 2019) for non-factoid question answering. 
We use the official train, validation and test sets provided by the datasets' creators. As a strong neural ranking baseline model, we fine-tune BERT 4 for sentence classification, using the CLS token to predict whether the concatenation of a question and response is relevant or not, following recent research in IR (Nogueira and Cho, 2019; Yang et al., 2019b). Using 512 input tokens (larger inputs are truncated) and a batch size of 8, we train each model for 5 epochs.", "cite_spans": [ { "start": 60, "end": 77, "text": "(Qu et al., 2018)", "ref_id": "BIBREF20" }, { "start": 89, "end": 109, "text": "(Penha et al., 2019)", "ref_id": "BIBREF18" }, { "start": 151, "end": 170, "text": "(Iyer et al., 2017)", "ref_id": "BIBREF13" }, { "start": 214, "end": 236, "text": "(Hashemi et al., 2019)", "ref_id": "BIBREF10" }, { "start": 588, "end": 612, "text": "(Nogueira and Cho, 2019;", "ref_id": "BIBREF17" }, { "start": 613, "end": 632, "text": "Yang et al., 2019b)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "When employing SRAMs with a BERT backbone for neural ranking using both the question-based and question-responses based SFs, we refer to the model as BERT-SA. When using random SFs, we refer to the model as BERT-SA-R. For the SFs that have a threshold value (e.g., QL), we choose thresholds that select less than 50% of the data to avoid selecting the majority of the training instances in each slice. For SFs that include a categorical value, e.g., question category (QC) physics, we add one slice per category in the dataset. For the random SFs we create 10 different slices 5 , to each of which 50% of randomly chosen instances from the training data belong 6 . We train each model 5 times with different random seeds and report the test set effectiveness using Mean Average Precision (MAP). 
\u2206MAP indicates the difference between BERT-SA(-R) and BERT.", "cite_spans": [ { "start": 655, "end": 656, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Let us first consider RQ1. We observe in Table 1 that, with the exception of MSDialog, BERT-SA significantly improves over the baseline (BERT) for both overall performance (column MAP) and per-slice performance (column slice \u2206MAP). This demonstrates that slice-based learning is useful for neural ranking, with gains up to 3.8% overall and up to 13% per slice in terms of MAP. To better understand which features of a slice correlate the most with the observed gains from BERT-SA, we study how three properties of the slices correlate with the slice \u2206MAP (i.e., the improvement over BERT): we consider (1) the size of the slice, (2) the classification accuracy of the slice-aware model in predicting slice membership, and (3) the BERT model effectiveness for each slice. The only property that has a statistically significant Pearson correlation (0.504 on average across the different datasets) with MAP gains is the BERT baseline performance, suggesting that focusing on failures of neural ranking models (slices for which BERT has low effectiveness) when implementing SFs is effective.", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 48, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "To provide insights into the underlying reasons for the effectiveness of slice-based learning (RQ2), we replace the SFs that capture error categories with random SFs, i.e. BERT-SA-R. We find that this model also achieves significantly better effectiveness than the BERT baseline, with the exception of Quora. 
This indicates that part of the gains provided by slice-based learning could be attributed to the effect of ensemble learning, since each slice-expert representation is trained on a random part of the data and the representations are then combined 7 . We note, however, that the slice gains of BERT-SA are higher than those of BERT-SA-R for ANTIQUE and Quora with statistical significance. This indicates that not all improvements of slice-based learning can be attributed to the effect of ensemble learning, and that carefully implementing SFs is advantageous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In this paper we demonstrated that a slice-aware neural ranker is an effective approach to IR, increasing the effectiveness of rankers by margins of up to 3.8% overall and up to 13% per slice in terms of MAP. As future work, we plan to study slice-aware neural rankers that perform listwise optimization; such a ranker could learn better representations, particularly for SFs that use several responses as input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This SF is only suited for QA tasks with multiple turns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The model has an extra SF that all instances belong to, so every instance will always belong to at least this slice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "bert-base-uncased with default hyperparameters (Wolf et al., 2019). 5 Initial experiments varying the number of SFs showed a validation plateau around 10. 6 Initial experiments varying X revealed that only small percentages, less than 20%, degraded the effectiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Another potential reason for the success of slice-based learning could be the capacity obtained by the additional number of 
weights compared to the baseline (e.g. from 110M to 116M for MANtIS).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research has been supported by NWO projects SearchX (639.022.722) and NWO Aspasia (015.013.027).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Umass at trec 2004: Novelty and hard", "authors": [ { "first": "Nasreen", "middle": [], "last": "Abdul-Jaleel", "suffix": "" }, { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Bruce", "middle": [], "last": "Croft", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Diaz", "suffix": "" }, { "first": "Leah", "middle": [], "last": "Larkey", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Li", "suffix": "" }, { "first": "D", "middle": [], "last": "Mark", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Smucker", "suffix": "" }, { "first": "", "middle": [], "last": "Wade", "suffix": "" } ], "year": 2004, "venue": "Computer Science Department Faculty Publication Series", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nasreen Abdul-Jaleel, James Allan, W Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Mark D Smucker, and Courtney Wade. 2004. Umass at trec 2004: Novelty and hard. Computer Science Department Faculty Publication Series, page 189.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analysis of long queries in a large scale search log", "authors": [ { "first": "Michael", "middle": [], "last": "Bendersky", "suffix": "" }, { "first": "Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2009, "venue": "Workshop on Web Search Click Data", "volume": "", "issue": "", "pages": "8--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Bendersky and W Bruce Croft. 2009. 
Analysis of long queries in a large scale search log. In Workshop on Web Search Click Data, pages 8-14.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Diagnosing bert with retrieval heuristics", "authors": [ { "first": "Arthur", "middle": [], "last": "C\u00e2mara", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Hauff", "suffix": "" } ], "year": 2020, "venue": "European Conference on Information Retrieval", "volume": "", "issue": "", "pages": "605--618", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur C\u00e2mara and Claudia Hauff. 2020. Diagnosing bert with retrieval heuristics. In European Conference on Information Retrieval, pages 605-618. Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multitask learning. Machine learning", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 1997, "venue": "", "volume": "28", "issue": "", "pages": "41--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Slice-based learning: A programming model for residual learning in critical data slices", "authors": [ { "first": "Vincent", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [ "J" ], "last": "Ratner", "suffix": "" }, { "first": "Jen", "middle": [], "last": "Weng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2019, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "9392--9402", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Chen, Sen Wu, Alexander J Ratner, Jen Weng, and Christopher R\u00e9. 2019. Slice-based learning: A programming model for residual learning in critical data slices. 
In NeurIPS, pages 9392-9402.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "NAACL", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL, pages 4171-4186.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Ensemble learning. The handbook of brain theory and neural networks", "authors": [ { "first": "G", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "", "middle": [], "last": "Dietterich", "suffix": "" } ], "year": 2002, "venue": "", "volume": "2", "issue": "", "pages": "110--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas G Dietterich et al. 2002. Ensemble learning. The handbook of brain theory and neural networks, 2:110-125.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Intent-aware semantic query annotation", "authors": [ { "first": "Rafael", "middle": [], "last": "Glater", "suffix": "" }, { "first": "L", "middle": [ "T" ], "last": "Rodrygo", "suffix": "" }, { "first": "Nivio", "middle": [], "last": "Santos", "suffix": "" }, { "first": "", "middle": [], "last": "Ziviani", "suffix": "" } ], "year": 2017, "venue": "SIGIR", "volume": "", "issue": "", "pages": "485--494", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rafael Glater, Rodrygo LT Santos, and Nivio Ziviani. 2017. Intent-aware semantic query annotation. 
In SIGIR, pages 485-494.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A deep look into neural ranking models for information retrieval", "authors": [ { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Yixing", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Qingyao", "middle": [], "last": "Ai", "suffix": "" }, { "first": "Hamed", "middle": [], "last": "Zamani", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Bruce", "middle": [], "last": "Croft", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.06902" ] }, "num": null, "urls": [], "raw_text": "Jiafeng Guo, Yixing Fan, Liang Pang, Liu Yang, Qingyao Ai, Hamed Zamani, Chen Wu, W Bruce Croft, and Xueqi Cheng. 2019. A deep look into neural ranking models for information retrieval. arXiv preprint arXiv:1903.06902.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Counseling dialog system with 5w1h extraction", "authors": [ { "first": "Sangdo", "middle": [], "last": "Han", "suffix": "" }, { "first": "Kyusong", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Gary Geunbae", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2013, "venue": "SIGDIAL", "volume": "", "issue": "", "pages": "349--353", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sangdo Han, Kyusong Lee, Donghyeon Lee, and Gary Geunbae Lee. 2013. Counseling dialog system with 5w1h extraction. 
In SIGDIAL, pages 349-353.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Antique: A non-factoid question answering benchmark", "authors": [ { "first": "Helia", "middle": [], "last": "Hashemi", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Aliannejadi", "suffix": "" }, { "first": "Hamed", "middle": [], "last": "Zamani", "suffix": "" }, { "first": "W Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.08957" ] }, "num": null, "urls": [], "raw_text": "Helia Hashemi, Mohammad Aliannejadi, Hamed Zamani, and W Bruce Croft. 2019. Antique: A non-factoid question answering benchmark. arXiv preprint arXiv:1905.08957.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Query performance prediction", "authors": [ { "first": "Ben", "middle": [], "last": "He", "suffix": "" }, { "first": "Iadh", "middle": [], "last": "Ounis", "suffix": "" } ], "year": 2006, "venue": "Information Systems", "volume": "31", "issue": "7", "pages": "585--594", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben He and Iadh Ounis. 2006. Query performance prediction. Information Systems, 31(7):585-594.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Using coherence-based measures to predict query difficulty", "authors": [ { "first": "Jiyin", "middle": [], "last": "He", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Larson", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2008, "venue": "ECIR", "volume": "", "issue": "", "pages": "689--694", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiyin He, Martha Larson, and Maarten De Rijke. 2008. Using coherence-based measures to predict query difficulty. In ECIR, pages 689-694. Springer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "First quora dataset release: Question pairs", "authors": [ { "first": "Shankar", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Dandekar", "suffix": "" }, { "first": "Korn\u00e9l", "middle": [], "last": "Csernai", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shankar Iyer, Nikhil Dandekar, and Korn\u00e9l Csernai. 2017. First quora dataset release: Question pairs. data.quora.com.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Probing what different nlp tasks teach machines about function word comprehension", "authors": [ { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.11544" ] }, "num": null, "urls": [], "raw_text": "Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, et al. 2019. Probing what different nlp tasks teach machines about function word comprehension. 
arXiv preprint arXiv:1904.11544.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bridging lexical gaps between queries and questions on large online q&a collections with compact translation models", "authors": [ { "first": "Jung-Tae", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sang-Bum", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "410--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jung-Tae Lee, Sang-Bum Kim, Young-In Song, and Hae-Chang Rim. 2008. Bridging lexical gaps between queries and questions on large online q&a collections with compact translation models. In EMNLP, pages 410-418.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Representation learning using multi-task deep neural networks for semantic classification and information retrieval", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Ye-Yi", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015.
Representation learning using multi-task deep neural networks for semantic classification and information retrieval.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Passage re-ranking with bert", "authors": [ { "first": "Rodrigo", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.04085" ] }, "num": null, "urls": [], "raw_text": "Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Introducing MANtIS: a novel Multi-Domain Information Seeking Dialogues Dataset", "authors": [ { "first": "Gustavo", "middle": [], "last": "Penha", "suffix": "" }, { "first": "Alexandru", "middle": [], "last": "Balan", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Hauff", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.04639" ] }, "num": null, "urls": [], "raw_text": "Gustavo Penha, Alexandru Balan, and Claudia Hauff. 2019. Introducing MANtIS: a novel Multi-Domain Information Seeking Dialogues Dataset. arXiv preprint arXiv:1912.04639.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Understanding the relationship of information need specificity to search query length", "authors": [ { "first": "Nina", "middle": [], "last": "Phan", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Bailey", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Wilkinson", "suffix": "" } ], "year": 2007, "venue": "SIGIR", "volume": "", "issue": "", "pages": "709--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nina Phan, Peter Bailey, and Ross Wilkinson. 2007. Understanding the relationship of information need specificity to search query length.
In SIGIR, pages 709-710.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Analyzing and characterizing user intent in information-seeking conversations", "authors": [ { "first": "Chen", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Bruce", "middle": [], "last": "Croft", "suffix": "" }, { "first": "Johanne", "middle": [ "R" ], "last": "Trippas", "suffix": "" }, { "first": "Yongfeng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Minghui", "middle": [], "last": "Qiu", "suffix": "" } ], "year": 2018, "venue": "SIGIR", "volume": "", "issue": "", "pages": "989--992", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Qu, Liu Yang, W Bruce Croft, Johanne R Trippas, Yongfeng Zhang, and Minghui Qiu. 2018. Analyzing and characterizing user intent in information-seeking conversations. In SIGIR, pages 989-992.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Federated search", "authors": [ { "first": "Milad", "middle": [], "last": "Shokouhi", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2011, "venue": "Foundations and Trends\u00ae in Information Retrieval", "volume": "5", "issue": "1", "pages": "1--102", "other_ids": { "DOI": [ "10.1561/1500000010" ] }, "num": null, "urls": [], "raw_text": "Milad Shokouhi and Luo Si. 2011. Federated search.
Foundations and Trends\u00ae in Information Retrieval, 5(1):1-102.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Multi-representation fusion network for multi-turn response selection in retrieval-based chatbots", "authors": [ { "first": "Chongyang", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Can", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2019, "venue": "WSDM", "volume": "", "issue": "", "pages": "267--275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Multi-representation fusion network for multi-turn response selection in retrieval-based chatbots. In WSDM, pages 267-275.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Query expansion using lexical-semantic relations", "authors": [ { "first": "M", "middle": [], "last": "Ellen", "suffix": "" }, { "first": "", "middle": [], "last": "Voorhees", "suffix": "" } ], "year": 1994, "venue": "SIGIR", "volume": "", "issue": "", "pages": "61--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen M Voorhees. 1994. Query expansion using lexical-semantic relations.
In SIGIR, pages 61-69.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Errudite: Scalable, reproducible, and testable error analysis", "authors": [ { "first": "Tongshuang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Marco", "middle": [ "Tulio" ], "last": "Ribeiro", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Heer", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "747--763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel S Weld. 2019.
Errudite: Scalable, reproducible, and testable error analysis. In ACL, pages 747-763.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Critically Examining the Neural Hype: Weak Baselines and the Additivity of Effectiveness Gains from Neural Ranking Models", "authors": [ { "first": "Wei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Kuang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Peilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "SIGIR", "volume": "", "issue": "", "pages": "1129--1132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Yang, Kuang Lu, Peilin Yang, and Jimmy Lin. 2019a. Critically Examining the Neural Hype: Weak Baselines and the Additivity of Effectiveness Gains from Neural Ranking Models. In SIGIR, pages 1129-1132, New York, NY, USA.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Simple applications of bert for ad hoc document retrieval", "authors": [ { "first": "Wei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Haotian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.10972" ] }, "num": null, "urls": [], "raw_text": "Wei Yang, Haotian Zhang, and Jimmy Lin. 2019b. Simple applications of bert for ad hoc document retrieval. arXiv preprint arXiv:1903.10972.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "proposed Slice-Residual-Attention Modules (SRAMs), which is a", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "type_str": "table", "text": "Average of 5 runs for slice-based learning. Superscript \u2020 denotes statistically significant improvements over the baseline (BERT), where no slice-based learning is applied, at a 95% confidence interval using Student's t-tests.
Bold indicates the highest MAP for each dataset.", "content": "
Dataset | Model | Dev MAP (std) | Test MAP (std) | slice \u2206MAP Avg. | slice \u2206MAP Max.
ANTIQUE | BERT | 0.853 (.026) | 0.850 (.015) | - | -
ANTIQUE | BERT-SA-R | 0.874 (.025) \u2020 | 0.877 (.005) \u2020 | 0.028 | 0.063
ANTIQUE | BERT-SA | 0.878 (.024) \u2020 | 0.883 (.005) \u2020 | 0.035 | 0.112
MANtIS 50 | BERT | 0.655 (.006) | 0.684 (.006) | - | -
MANtIS 50 | BERT-SA-R | 0.671 (.006) \u2020 | 0.690 (.014) \u2020 | 0.025 | 0.035
MANtIS 50 | BERT-SA | 0.702 (.006) \u2020 | 0.689 (.022) \u2020 | 0.025 | 0.034
MSDialog | BERT | 0.754 (.010) | 0.830 (.002) | - | -
MSDialog | BERT-SA-R | 0.815 (.009) \u2020 | 0.840 (.011) \u2020 | 0.028 | 0.084
MSDialog | BERT-SA | 0.810 (.009) \u2020 | 0.818 (.010) | -0.004 | 0.067
Quora | BERT | 0.799 (.037) | 0.819 (.008) | - | -
Quora | BERT-SA-R | 0.819 (.035) \u2020 | 0.837 (.004) | 0.011 | 0.038
Quora | BERT-SA | 0.834 (.034) \u2020 | 0.840 (.007) \u2020 | 0.019 | 0.065
slice \u2206MAP: average and maximum \u2206MAP for the slices defined by the SFs.
", "num": null, "html": null } } } }