{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:57:54.716498Z" }, "title": "TextGraphs 2021 Shared Task on Multi-Hop Inference for Explanation Regeneration", "authors": [ { "first": "Mokanarangan", "middle": [], "last": "Thayaparan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Manchester", "location": { "country": "UK" } }, "email": "mokanarangan.thayaparan@manchester.ac.uk" }, { "first": "Marco", "middle": [], "last": "Valentino", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Manchester", "location": { "country": "UK" } }, "email": "marco.valentino@manchester.ac.uk" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Arizona", "location": { "country": "USA" } }, "email": "pajansen@email.arizona.edu" }, { "first": "Dmitry", "middle": [], "last": "Ustalov", "suffix": "", "affiliation": { "laboratory": "Crowdsourcing Research Group Yandex", "institution": "", "location": { "country": "Russia" } }, "email": "dustalov@yandex-team.ru" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The Shared Task on Multi-Hop Inference for Explanation Regeneration asks participants to compose large multi-hop explanations to questions by assembling large chains of facts from a supporting knowledge base. While previous editions of this shared task aimed to evaluate explanatory completeness-finding a set of facts that form a complete inference chain, without gaps, to arrive from question to correct answer, this 2021 instantiation concentrates on the subtask of determining relevance in large multi-hop explanations. To this end, this edition of the shared task makes use of a large set of approximately 250k manual explanatory relevancy ratings that augment the 2020 shared task data. In this summary paper, we describe the details of the explanation regeneration task, the evaluation data, and the participating systems. Additionally, we perform a detailed analysis of participating systems, evaluating various aspects involved in the multi-hop inference process. The best performing system achieved an NDCG of 0.82 on this challenging task, substantially increasing performance over baseline methods by 32%, while also leaving significant room for future improvement.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "The Shared Task on Multi-Hop Inference for Explanation Regeneration asks participants to compose large multi-hop explanations to questions by assembling large chains of facts from a supporting knowledge base. While previous editions of this shared task aimed to evaluate explanatory completeness-finding a set of facts that form a complete inference chain, without gaps, to arrive from question to correct answer, this 2021 instantiation concentrates on the subtask of determining relevance in large multi-hop explanations. To this end, this edition of the shared task makes use of a large set of approximately 250k manual explanatory relevancy ratings that augment the 2020 shared task data. In this summary paper, we describe the details of the explanation regeneration task, the evaluation data, and the participating systems. Additionally, we perform a detailed analysis of participating systems, evaluating various aspects involved in the multi-hop inference process. 
The best performing system achieved an NDCG of 0.82 on this challenging task, substantially increasing performance over baseline methods by 32%, while also leaving significant room for future improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multi-hop inference is the task of aggregating more than one fact to perform an inference. In the context of natural language processing, multi-hop inference is typically evaluated using auxiliary tasks such as question answering, where multiple sentences from external corpora need to be retrieved and composed to form reasoning chains that support the correct answer (see Figure 1). As such, multi-hop inference represents a crucial step towards explainability in complex question answering, as the set of supporting facts can be interpreted as an explanation for the underlying inference process (Thayaparan et al., 2020).", "cite_spans": [ { "start": 288, "end": 313, "text": "(Thayaparan et al., 2020)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 312, "end": 320, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: The motivating example provided to participants. Given a question and correct answer (top), the explanation regeneration task requires participating models to find sets of facts that, taken together, provide a detailed chain-of-reasoning for the answer (bottom). This 2021 instantiation of the shared task focuses on the subtask of collecting the most relevant facts for building explanations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Constructing long inference chains can be extremely challenging for existing models, which generally exhibit a large drop in performance when composing explanations and inference chains requiring more than 2 inference steps (Fried et al., 2015; Jansen et al., 2018; Khashabi et al., 2019; Yadav et al., 2020). To this end, this Shared Task on Multi-hop Inference for Explanation Regeneration (Jansen and Ustalov, 2019, 2020) has focused on expanding the capacity of models to compose long inference chains, where participants are asked to develop systems capable of reconstructing detailed explanations for science exam questions drawn from the WorldTree explanation corpus (Xie et al., 2020; Jansen et al., 2018), which range in compositional complexity from 1 to 16 facts (with the average explanation including 6 facts).", "cite_spans": [ { "start": 224, "end": 244, "text": "(Fried et al., 2015;", "ref_id": "BIBREF10" }, { "start": 245, "end": 265, "text": "Jansen et al., 2018;", "ref_id": "BIBREF16" }, { "start": 267, "end": 289, "text": "Khashabi et al., 2019;", "ref_id": "BIBREF20" }, { "start": 290, "end": 309, "text": "Yadav et al., 2020)", "ref_id": "BIBREF38" }, { "start": 394, "end": 405, "text": "(Jansen and", "ref_id": "BIBREF12" }, { "start": 406, "end": 426, "text": "Ustalov, 2019, 2020)", "ref_id": null }, { "start": 676, "end": 694, "text": "(Xie et al., 2020;", "ref_id": "BIBREF37" }, { "start": 695, "end": 715, "text": "Jansen et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Large explanations are typically evaluated on two dimensions: relevance and completeness. 
Relevance refers to whether each fact in an explanation is relevant, topical, and required to complete the chain of inference that moves from question to correct answer. Conversely, completeness evaluates whether the entire set of facts in the explanation, together, composes a complete chain of inference from question to answer, without significant gaps. In practice, both of these are challenging to evaluate automatically (Buckley and Voorhees, 2004; Voorhees, 2002), given that multi-hop datasets typically include a single example of a complete explanation, in large part due to the time and expense associated with generating such annotation. Underscoring this difficulty, post-competition manual analyses of participating systems in the previous two iterations of this shared task showed that models may be performing up to 20% better at retrieving relevant explanatory facts than the automatic evaluation suggests, highlighting this significant methodological challenge.", "cite_spans": [ { "start": 516, "end": 544, "text": "(Buckley and Voorhees, 2004;", "ref_id": "BIBREF1" }, { "start": 545, "end": 560, "text": "Voorhees, 2002)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This 2021 instantiation of the Shared Task on Explanation Regeneration focuses on the theme of determining relevance in large multi-hop explanations. To this end, participants were given access to a large pre-release dataset of approximately 250k explanatory relevancy ratings that augment the 2020 shared task data (Jansen and Ustalov, 2020), and were tasked with ranking the facts most critical to assembling large explanations for a given question highest. Similarly to the previous instances of our competition, the shared task was organized on the CodaLab platform. 1 We released the train and development datasets, along with the baseline solution, in advance, to allow participants to familiarize themselves with the specifics of the task. 2 We ran the practice phase from February 15 till March 9, 2021. Then we released the test dataset without answers and ran the official evaluation phase from March 10 till March 24, 2021. After that, we established a post-competition phase to enable long-term evaluation of methods beyond our shared task. Participating systems substantially increased task performance compared to a supplied baseline system by 32%, while achieving moderate overall absolute task performance - highlighting both the success of this shared task, as well as the continued challenge of determining relevancy in large multi-hop inference problems.", "cite_spans": [ { "start": 316, "end": 342, "text": "(Jansen and Ustalov, 2020)", "ref_id": "BIBREF15" }, { "start": 577, "end": 578, "text": "1", "ref_id": null }, { "start": 714, "end": 715, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Semantic Drift. Multi-hop question answering systems suffer from a tendency to compose out-of-context inference chains as the number of required hops (aggregated facts) increases. This phenomenon, known as semantic drift, has been observed in a number of works (Fried et al., 2015; Jansen, 2017), which have empirically demonstrated that multi-hop inference models exhibit a substantial drop in performance when aggregating more than 2 facts or paragraphs. 
Semantic drift has been observed across a variety of representations and traversal methods, including the word and dependency level (Pan et al., 2017; Fried et al., 2015), the sentence level, and the paragraph level (Clark and Gardner, 2018). Khashabi et al. (2019) have demonstrated that ongoing efforts on \"very long\" multi-hop reasoning are unlikely to succeed without the adoption of a richer underlying representation that allows for reasoning with fewer hops.", "cite_spans": [ { "start": 265, "end": 285, "text": "(Fried et al., 2015;", "ref_id": "BIBREF10" }, { "start": 286, "end": 299, "text": "Jansen, 2017)", "ref_id": "BIBREF12" }, { "start": 590, "end": 608, "text": "(Pan et al., 2017;", "ref_id": "BIBREF26" }, { "start": 609, "end": 628, "text": "Fried et al., 2015)", "ref_id": "BIBREF10" }, { "start": 668, "end": 693, "text": "(Clark and Gardner, 2018)", "ref_id": "BIBREF6" }, { "start": 696, "end": 718, "text": "Khashabi et al. (2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Many-hop multi-hop training data. There has been a recent explosion of explanation-centred datasets for multi-hop question answering (Jhamtani and Clark, 2020; Xie et al., 2020; Jansen et al., 2018; Yang et al., 2018; Thayaparan et al., 2020; Wiegreffe and Marasovi\u0107, 2021). However, most of these datasets require the aggregation of only two sentences or paragraphs, making it hard to evaluate the robustness of the models in terms of semantic drift. On the other hand, the WorldTree corpus (Xie et al., 2020; Jansen et al., 2018) used in this shared task is explicitly designed to test multi-hop inference models on the reconstruction of long inference chains, requiring the aggregation of an average of 6 facts, and as many as 16 facts.", "cite_spans": [ { "start": 127, "end": 153, "text": "(Jhamtani and Clark, 2020;", "ref_id": "BIBREF18" }, { "start": 154, "end": 171, "text": "Xie et al., 2020;", "ref_id": "BIBREF37" }, { "start": 172, "end": 192, "text": "Jansen et al., 2018;", "ref_id": "BIBREF16" }, { "start": 193, "end": 211, "text": "Yang et al., 2018;", "ref_id": "BIBREF39" }, { "start": 212, "end": 236, "text": "Thayaparan et al., 2020;", "ref_id": "BIBREF30" }, { "start": 237, "end": 267, "text": "Wiegreffe and Marasovi\u0107, 2021)", "ref_id": "BIBREF35" }, { "start": 487, "end": 505, "text": "(Xie et al., 2020;", "ref_id": "BIBREF37" }, { "start": 506, "end": 525, "text": "Jansen et al., 2018", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Question: Which of the following best explains why the Sun appears to move across the sky every day?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Answer: Earth rotates on its axis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Table 1: Explanatory relevance ratings for the example question above.
# | Fact (Table Row) | Relevance
1 | The Earth rotating on its axis causes the Sun to appear to move across the sky during the day | 6
2 | If a human is on a rotating planet then other celestial bodies will appear to move from that human's perspective due to the rotation of that planet | 6
3 | The Earth rotates on its tilted axis | 6
4 | Diurnal motion is when objects in the sky appear to move due to Earth's rotation on its axis | 6
5 | Apparent motion is when an object appears to move relative to another object's perspective / another object's position | 5
6 | Earth rotating on its axis occurs once per day | 4
7 | Rotation is a kind of motion | 4
8 | A rotation is a kind of movement | 4
9 | The Sun sets in the west | 2
10 | The Sun is a kind of star | 2
11 | Earth is a kind of planet | 2
12 | Earth's angle of tilt causes the length of day and night to vary | 0
13 | The Earth being tilted on its rotating axis causes seasons | 0
14 | Revolving is a kind of motion | 0
15 | The Earth revolving around the Sun causes stars to appear in different areas in the sky at different times of year | 0", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 62, "text": "(Table Row) Relevance 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Explanation regeneration approaches on WorldTree. A number of approaches have been proposed for the explanation regeneration task on WorldTree, including those from previous iterations of this shared task. These approaches adopt a diverse set of techniques, ranging from graph-based learning, to Transformer-based language models (Cartuyvels et al., 2020; Das et al., 2019; Pawate et al., 2020; Chia et al., 2019), Integer Linear Programming (Gupta and Srinivasaraghavan, 2020), and sparse retrieval models (Valentino et al., 2021; Chia et al., 2019). The current state of the art on the explanation regeneration task is represented by a model that employs a combination of language models and Graph Neural Networks (GNNs), with the bulk of the performance contributed by the language model. Strong performance is also achieved by transformer models adapted to rank inference chains (Das et al., 2019) or operating in an iterative and recursive fashion (Cartuyvels et al., 2020). In contrast with neural-based models, recent works (Valentino et al., 2021) have shown that the explanatory patterns emerging in the WorldTree corpus can be leveraged to improve sparse retrieval models and provide a viable way to alleviate semantic drift.", "cite_spans": [ { "start": 708, "end": 733, "text": "(Cartuyvels et al., 2020;", "ref_id": "BIBREF2" }, { "start": 734, "end": 751, "text": "Das et al., 2019;", "ref_id": "BIBREF8" }, { "start": 752, "end": 772, "text": "Pawate et al., 2020;", "ref_id": "BIBREF28" }, { "start": 773, "end": 791, "text": "Chia et al., 2019)", "ref_id": "BIBREF4" }, { "start": 821, "end": 856, "text": "(Gupta and Srinivasaraghavan, 2020)", "ref_id": "BIBREF11" }, { "start": 887, "end": 911, "text": "(Valentino et al., 2021;", "ref_id": "BIBREF32" }, { "start": 912, "end": 930, "text": "Chia et al., 2019)", "ref_id": "BIBREF4" }, { "start": 1262, "end": 1280, "text": "(Das et al., 2019)", "ref_id": "BIBREF8" }, { "start": 1332, "end": 1357, "text": "(Cartuyvels et al., 2020)", "ref_id": "BIBREF2" }, { "start": 1411, "end": 1435, "text": "(Valentino et al., 2021)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Following the previous editions of the shared task, we frame explanation generation as a ranking problem. 
Specifically, for a given science question, a model is supplied both the question and correct answer text, and must then selectively rank all the atomic scientific and world knowledge facts in the knowledge base such that those that were labelled as most relevant to building an explanation by a human annotator are ranked the highest. Additional details on the ranking problem are described in the 2019 shared task summary paper (Jansen and Ustalov, 2019).", "cite_spans": [ { "start": 536, "end": 562, "text": "(Jansen and Ustalov, 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "Questions and Explanations: The 2021 shared task adopts the same set of questions and knowledge base included in the 2020 shared task (Jansen and Ustalov, 2020), with the additional relevance annotation described below. The questions and explanations are drawn from the WorldTree V2 explanation corpus (Xie et al., 2020), a set of detailed multi-fact explanations to standardized elementary and middle-school science exam questions drawn from the Aristo Reasoning Challenge (ARC) corpus (Clark et al., 2018).", "cite_spans": [ { "start": 134, "end": 160, "text": "(Jansen and Ustalov, 2020)", "ref_id": "BIBREF15" }, { "start": 299, "end": 317, "text": "(Xie et al., 2020)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Training and Evaluation Dataset", "sec_num": "4" }, { "text": "Relevancy Ratings: The WorldTree V2 dataset used in previous iterations of the shared task includes a single complete explanation per question, supplied as a list of binary classifications that describe which facts are included in the gold explanation for a given question. This 2021 edition of the shared task augments these original WorldTree explanations with a pre-release dataset 3 of approximately 250,000 manual relevancy ratings. Specifically, for each question in the corpus, the 30 facts determined most likely to be relevant to building an explanation were manually assigned relevancy ratings by annotators. Ratings are on a 7-point scale (0-6), where facts rated as a 6 are the most critical to building an explanation, while facts rated as 0 are unrelated to the question. An example of these relevance ratings is shown in Table 1.", "cite_spans": [], "ref_spans": [ { "start": 1336, "end": 1343, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Training and Evaluation Dataset", "sec_num": "4" }, { "text": "Evaluation Metrics: Historically, performance on the explanation regeneration task was evaluated using Mean Average Precision (MAP), using the binary ratings (gold or not gold) associated with each fact for a given explanation. To leverage the new graded annotation schema, here we switch to evaluating system performance using Normalized Discounted Cumulative Gain (NDCG) (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002; Wang et al., 2013).", "cite_spans": [ { "start": 372, "end": 403, "text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002;", "ref_id": "BIBREF17" }, { "start": 404, "end": 422, "text": "Wang et al., 2013)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Training and Evaluation Dataset", "sec_num": "4" }, { "text": "The 2021 shared task received 4 submissions, with 3 teams choosing to submit system description papers. The performance of the submitted systems is shown in Table 2. Overall, we observe that all participating teams substantially improved upon the NDCG score achieved by the baseline model, with increases of up to 32%. 
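For reference, the NDCG measure that drives these comparisons can be sketched in a few lines of Python. This is a minimal illustration of one standard linear-gain formulation over the 0-6 ratings, not the official scorer (whose exact gain and truncation choices may differ); the function and variable names are ours:

import math

def ndcg(ranked_ratings):
    # DCG over the gold ratings in the order the system ranked the facts,
    # using the standard log2 positional discount.
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(ranked_ratings))
    # Ideal DCG: the same ratings sorted from most to least relevant.
    ideal = sorted(ranked_ratings, reverse=True)
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Example: a system ranks five facts whose gold ratings are 6, 5, 6, 0, 4.
print(ndcg([6, 5, 6, 0, 4]))  # below 1.0, since a rating-6 fact is ranked third
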
In this section, we summarize the key features of the approaches proposed by the teams.", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "System Descriptions and Performance", "sec_num": "5" }, { "text": "Baseline (tf.idf). We adopt a term frequency-inverse document frequency (tf.idf) baseline (see, e.g., Manning et al., 2008, Ch. 6). Specifically, given a question and its correct answer, the baseline calculates the cosine similarity between a query vector (representing the question and correct answer) and document vectors (representing a given fact) for each fact in the knowledge base. The model then adopts the tf.idf weighting scheme to rank each fact in the knowledge base for a given question-answer pair. This baseline achieves an NDCG score of 0.501 on the test set.", "cite_spans": [ { "start": 100, "end": 128, "text": "Manning et al., 2008, Ch. 6)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions and Performance", "sec_num": "5" }, { "text": "DeepBlueAI. The model presented by Pan et al. (2021) represents the top-performing system in this edition of the shared task, with an NDCG score of 0.820 - a substantial 32% improvement over the tf.idf baseline. The model employs a two-step retrieval strategy. In the first step, a pre-trained language model is fine-tuned to retrieve the top-K (K > 100) relevant facts for each question and answer pair. Subsequently, the same architecture is adopted to build a re-ranking model that refines the list of top-K candidate facts. The authors propose the use of a triplet loss for fine-tuning: the triplet loss minimizes the distance between an anchor and a positive example, while maximizing the distance between the same anchor and a negative example. The team treats the question and correct answer as the anchor, while the facts annotated with high ratings are adopted as positive examples. Different experiments are conducted with three negative sampling strategies for retrieval and re-ranking; the best results are obtained when sampling negative examples from the same tables as the highly relevant facts. The authors find that the best performance is obtained when averaging the results from RoBERTa (Liu et al., 2019) and ERNIE 2.0 with different random seeds.", "cite_spans": [ { "start": 35, "end": 52, "text": "Pan et al. (2021)", "ref_id": "BIBREF27" }, { "start": 1251, "end": 1269, "text": "(Liu et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions and Performance", "sec_num": "5" }, { "text": "Table 4: Performance (NDCG) when restricted to examining facts with a given minimum relevance rating.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Relevance Ratings", "sec_num": null }, { "text": "RedDragonAI. The system developed by Kalyan et al. (2021) combines iterative information retrieval with an ensemble of language models, achieving an NDCG score of 0.771. The first step of the proposed approach retrieves a limited number of facts to be subsequently re-ranked by language models; this step is a modification of the approach proposed by Chia et al. (2020), where the model iteratively selects the closest n facts to the question using BM25 vectors and then updates the query vector via a max operation.
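As an illustration, this iterative step can be sketched as follows. This is our own minimal reconstruction, using scikit-learn's TfidfVectorizer as a stand-in for the BM25 vectors actually used by the team, with illustrative names and parameters:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def iterative_retrieval(question_answer, facts, n=5, K=200):
    # Vectorize the fact corpus (tf.idf here as a stand-in for BM25 weighting).
    vectorizer = TfidfVectorizer().fit(facts)
    fact_vecs = vectorizer.transform(facts).toarray()
    query_vec = vectorizer.transform([question_answer]).toarray()[0]
    selected, remaining = [], set(range(len(facts)))
    while len(selected) < K and remaining:
        # Select the n remaining facts closest to the current query vector.
        closest = sorted(remaining, key=lambda i: -(fact_vecs[i] @ query_vec))[:n]
        for i in closest:
            selected.append(i)
            remaining.remove(i)
            # Update the query vector via an element-wise max with the new fact.
            query_vec = np.maximum(query_vec, fact_vecs[i])
    return selected[:K]
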
The iterative retrieval step is performed until a list of K = 200 facts is selected from the knowledge base. Subsequently, the top-K explanation facts are re-ranked using language models. The best model consists of an ensemble of BERT (Devlin et al., 2019) and SciBERT (Beltagy et al., 2019). These models are fine-tuned to predict the target explanatory relevance ratings using the following input: Question + Answer [SEP] Explanation. Specifically, the authors frame the problem as a regression via a mean squared error loss. The ensemble is achieved by linearly combining the scores of the models. The authors reported two negative results, obtained using a two-stage approach and different negative sampling techniques. In the two-stage approach, the facts were first categorized using binary scores to discriminate between relevant and irrelevant sentences, and then re-ranked by predicting the target explanatory relevance rating. Regarding the negative sampling strategy, the authors noticed that the highest percentage of errors occurring at inference time was due to irrelevant facts that are lexically close to highly relevant explanation sentences. They attempted to alleviate this problem by randomly sampling facts from the knowledge base and retrieving close negative examples during training. Neither of these two methods resulted in significant improvements.", "cite_spans": [ { "start": 330, "end": 351, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 364, "end": 386, "text": "(Beltagy et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions and Performance", "sec_num": "5" }, { "text": "Google-BERT. Xiang et al. (2021) propose a framework composed of three main steps. In the first step, the model adopts a simple tf.idf model with cosine similarity to retrieve the top-K relevant explanation sentences (K = 50) for each question and correct answer pair. In the second step, the authors employ an autoregressive model that selects the most relevant facts in an iterative manner. Specifically, the authors propose the adoption of a BERT-based model (Devlin et al., 2019) that selects the facts at iteration n given the facts retrieved in the previous step. The model uses up to 4 iterations. Finally, the authors employ a re-ranking module to re-score the retrieved candidate explanations, computing the relevance between each fact and the question-answer pair. The re-ranking model is implemented using a BERT model for binary classification. The ablation study shows that the first two steps achieve a performance of 0.679 NDCG, which is improved to 0.700 NDCG using the re-ranking model. Moreover, the experiments show that the best performance is achieved when the re-ranking model is adopted to re-score the top K = 30 facts.", "cite_spans": [ { "start": 13, "end": 32, "text": "Xiang et al. (2021)", "ref_id": "BIBREF36" }, { "start": 462, "end": 483, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "System Descriptions and Performance", "sec_num": "5" }, { "text": "In order to better understand the behavior and contribution of the proposed systems, we perform a detailed analysis by grouping the explanatory facts in the supporting knowledge base into different categories. 
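One plausible way to compute such category-restricted scores is to filter each system's ranking down to the facts under analysis before scoring, reusing the ndcg helper sketched earlier; this is our own illustration rather than the organizers' exact procedure:

def ndcg_for_subset(ranked_fact_ids, gold_ratings, subset):
    # Keep only the facts belonging to the category (or rating bin) under
    # analysis, preserving the order in which the system ranked them.
    filtered = [f for f in ranked_fact_ids if f in subset]
    return ndcg([gold_ratings.get(f, 0) for f in filtered])
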
Specifically, we adopt categories that cover various aspects of the multi-hop inference process, ranging from different kinds of knowledge to different degrees of explanatory relevance and lexical overlap, to analyse the performance of each model beyond the overall explanation regeneration score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detailed Analysis", "sec_num": "6" }, { "text": "Table 6: Percentage of lexical overlap and respective NDCG scores for each model. In this experiment, we measure the performance of the systems considering only those facts that have a percentage of overlap \u2264 a given threshold T. The percentage of overlap is computed by dividing the number of shared terms between a question-answer pair and a fact by the total number of their unique terms. To evaluate the systems in the most challenging setting, we gradually decrease the value of T down to 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Lexical Overlap", "sec_num": "6.4" }, { "text": "Similarly to the previous editions of the shared task (Jansen and Ustalov, 2019, 2020), we present the results achieved by the systems considering different knowledge types in the knowledge base. The explanatory facts in the WorldTree corpus are stored in semi-structured tables that are broadly divided into three main categories:", "cite_spans": [ { "start": 54, "end": 65, "text": "(Jansen and", "ref_id": "BIBREF12" }, { "start": 66, "end": 86, "text": "Ustalov, 2019, 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Performance by Table Knowledge Types", "sec_num": "6.1" }, { "text": "\u2022 Retrieval: Facts that generally encode knowledge about taxonomic relations or properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Table Knowledge Types", "sec_num": "6.1" }, { "text": "\u2022 Inference-Supporting: Facts that include knowledge about actions, affordances, uses of materials or devices, sources of things, requirements, or affect relationships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Table Knowledge Types", "sec_num": "6.1" }, { "text": "\u2022 Complex Inference: Facts that encode knowledge of causality, processes, changes, coupled relationships, and if/then relationships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Table Knowledge Types", "sec_num": "6.1" }, { "text": "We break down the NDCG performance of each model across these knowledge types and report the results in Table 3.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Performance by Table Knowledge Types", "sec_num": "6.1" }, { "text": "In line with previous editions of the shared task, we observe that the performance of the models tends to be higher for the retrieval type, while decreasing for inference-supporting and complex inference facts. This can be explained by the fact that retrieval knowledge is generally specific to the concepts in the questions and therefore easier to rank, while inference-supporting and complex facts typically include more abstract scientific knowledge requiring multi-hop inference. These results are consistent across all the models except for Google-BERT, which exhibits the best performance on the inference-supporting type and more stable results in general. 
We attribute this outcome to the autoregressive component adopted by the system, which may facilitate the ranking of more challenging explanatory facts. With respect to the general performance of the models, we observe that DeepBlueAI consistently outperforms the other approaches across all knowledge categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Table Knowledge Types", "sec_num": "6.1" }, { "text": "As described in Section 4, the dataset for the 2021 shared task includes relevance ratings that range from 0 (not relevant) to 6 (highly relevant). To better understand the quality of the facts retrieved by each model, we calculated the NDCG score of each model broken down by relevance ratings. The results of this analysis are reported in Table 4.", "cite_spans": [], "ref_spans": [ { "start": 341, "end": 348, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Performance by Relevance Ratings", "sec_num": "6.2" }, { "text": "Similar to the results obtained on different knowledge types, we observe that DeepBlueAI consistently outperforms other approaches across all relevance rating bins. In contrast to other models, DeepBlueAI exhibits increasing performance for higher relevance ratings, confirming that the model is particularly suited for retrieving highly relevant facts (i.e., facts with relevance ratings > 4). We conjecture that these results are due to the particular training configuration adopted by the system, which employs a triplet loss to encourage the retrieval of highly relevant facts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Relevance Ratings", "sec_num": "6.2" }, { "text": "We compute Precision@k to complement the results obtained via the NDCG metric. In contrast to NDCG, which weights facts based on their relevancy ratings, for this evaluation we consider all the facts with a rating greater than 0 as gold. The results of the analysis are reported in Table 5. The results show that DeepBlueAI substantially outperforms other models for values of k \u2264 10. As k becomes large, other models overtake its performance, though the difference between models becomes small.", "cite_spans": [], "ref_spans": [ { "start": 284, "end": 291, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Precision@k", "sec_num": "6.3" }, { "text": "One of the crucial issues regarding the evaluation of multi-hop inference models is the possibility of achieving strong overall performance without using genuinely compositional methods (Min et al., 2019; Chen and Durrett, 2019; Trivedi et al., 2020). Therefore, in order to evaluate multi-hop inference more explicitly, we break down the performance of each model with respect to the difficulty of accessing specific facts in an explanation via direct lexical overlap. This comes from the assumption that facts sharing many terms with the question or answer are relatively easier to find and rank highly. 
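Concretely, the overlap measure used in this analysis (defined formally in the next paragraph) can be computed as a Jaccard-style percentage. The sketch below is an illustrative reimplementation with naive whitespace tokenization, not the exact preprocessing behind Table 6:

def lexical_overlap(question_answer, fact, stop_words):
    # Unique content terms in the question+answer text and in a single fact.
    qa_terms = set(question_answer.lower().split()) - stop_words
    fact_terms = set(fact.lower().split()) - stop_words
    union = qa_terms | fact_terms
    if not union:
        return 0.0
    # Percentage of shared terms over all unique terms.
    return 100.0 * len(qa_terms & fact_terms) / len(union)
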
Table 6 reports the performance of the systems when considering different percentages L of lexical overlap between question-answer pairs and facts, computed as follows:", "cite_spans": [ { "start": 179, "end": 197, "text": "(Min et al., 2019;", "ref_id": "BIBREF25" }, { "start": 198, "end": 221, "text": "Chen and Durrett, 2019;", "ref_id": "BIBREF3" }, { "start": 222, "end": 243, "text": "Trivedi et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 596, "end": 603, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Performance by Lexical Overlap", "sec_num": "6.4" }, { "text": "L = \\frac{|t(Q||A) \\cap t(F_i)|}{|t(Q||A) \\cup t(F_i)|} \\times 100", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Lexical Overlap", "sec_num": "6.4" }, { "text": "In the equation above, t(Q||A) represents the set of unique terms (without stop-words) in the question and correct answer, while t(F_i) is the set of unique terms in a given fact F_i. The percentage of overlap is then derived by dividing the number of shared terms between a question-answer pair and a fact by the number of their unique terms. Therefore, a value of L equal to 50%, for example, means that 50% of the unique terms in a question-answer pair and a fact are shared.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Lexical Overlap", "sec_num": "6.4" }, { "text": "Given a question and a value L computed for each fact annotated with relevance ratings, we measure the performance of the systems considering only those facts that have a percentage of overlap \u2264 a given threshold T. To evaluate the systems in the most challenging setting, we gradually decrease the value of T down to 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Lexical Overlap", "sec_num": "6.4" }, { "text": "Overall, we observe that DeepBlueAI consistently outperforms all the other models across all the considered categories. Interestingly, we observe that Google-BERT performs better than RedDragonAI when considering facts that have zero lexical overlap with the question or answer, confirming the importance of performing targeted analyses for the evaluation of multi-hop inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Lexical Overlap", "sec_num": "6.4" }, { "text": "Despite the substantial improvement over the baseline obtained by the competing models, we still observe a significant drop in performance at low degrees of lexical overlap. This drop indicates that the proposed models still struggle to retrieve abstract explanatory facts requiring multi-hop inference, leaving ample room for future improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance by Lexical Overlap", "sec_num": "6.4" }, { "text": "The 2021 edition of the Shared Task on Multi-Hop Inference for Explanation Regeneration was a success, with 4 participating teams each substantially improving performance over the baseline model. The best performing team, DeepBlueAI, produced a system that improves absolute performance by 32%, up to 0.820 NDCG, bringing overall state-of-the-art performance on this relevancy-ranking aspect of multi-hop inference to a moderate level. 
We hope that future systems for many-hop multi-hop inference that aim to build large, detailed explanations for question answering will be able to leverage these results to build strong relevancy retrieval subcomponents to augment their compositional inference algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "1 https://competitions.codalab.org/competitions/23615 2 https://github.com/cognitiveailab/tg2021task", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We thank the authors of this dataset for allowing it to be used anonymously for this shared task, while it is under consideration for publication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Peter Jansen's work on the shared task was supported by the National Science Foundation (NSF Award #1815948, \"Explainable Natural Language Inference\"). This edition of the shared task would not have been possible without the hard work of a number of relevance annotators, and their generous offer to let their data be used anonymously while their work is under review. A special thanks to Andr\u00e9 Freitas for the helpful discussions. Additionally, we would like to thank the Computational Shared Facility of the University of Manchester for providing the infrastructure to run our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "SciBERT: A Pretrained Language Model for Scientific Text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3615--3620", "other_ids": { "DOI": [ "10.18653/v1/D19-1371" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A Pretrained Language Model for Scientific Text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong. Association for Computa- tional Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Retrieval Evaluation with Incomplete Information", "authors": [ { "first": "Chris", "middle": [], "last": "Buckley", "suffix": "" }, { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '04", "volume": "", "issue": "", "pages": "25--32", "other_ids": { "DOI": [ "10.1145/1008992.1009000" ] }, "num": null, "urls": [], "raw_text": "Chris Buckley and Ellen M. Voorhees. 2004. Re- trieval Evaluation with Incomplete Information. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '04, pages 25-32, Sheffield, UK. 
Association for Computing Machin- ery.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Autoregressive Reasoning over Chains of Facts with Transformers", "authors": [ { "first": "Ruben", "middle": [], "last": "Cartuyvels", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Spinks", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020", "volume": "", "issue": "", "pages": "6916--6930", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.610" ] }, "num": null, "urls": [], "raw_text": "Ruben Cartuyvels, Graham Spinks, and Marie- Francine Moens. 2020. Autoregressive Reasoning over Chains of Facts with Transformers. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, COLING 2020, pages 6916- 6930, Barcelona, Spain (Online). International Com- mittee on Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Understanding Dataset Design Choices for Multi-hop Reasoning", "authors": [ { "first": "Jifan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4026--4032", "other_ids": { "DOI": [ "10.18653/v1/N19-1405" ] }, "num": null, "urls": [], "raw_text": "Jifan Chen and Greg Durrett. 2019. Understanding Dataset Design Choices for Multi-hop Reasoning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), NAACL- HLT 2019, pages 4026-4032, Minneapolis, MN, USA. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation", "authors": [ { "first": "Ken", "middle": [], "last": "Yew", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Chia", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Witteveen", "suffix": "" }, { "first": "", "middle": [], "last": "Andrews", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)", "volume": "", "issue": "", "pages": "85--89", "other_ids": { "DOI": [ "10.18653/v1/D19-5311" ] }, "num": null, "urls": [], "raw_text": "Yew Ken Chia, Sam Witteveen, and Martin Andrews. 2019. Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Gener- ation. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Pro- cessing (TextGraphs-13), pages 85-89, Hong Kong. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Red Dragon AI at TextGraphs 2020 Shared Task : LIT : LSTM-Interleaved Transformer for Multi-Hop Explanation Ranking", "authors": [ { "first": "Ken", "middle": [], "last": "Yew", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Chia", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Witteveen", "suffix": "" }, { "first": "", "middle": [], "last": "Andrews", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs). Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yew Ken Chia, Sam Witteveen, and Martin Andrews. 2020. Red Dragon AI at TextGraphs 2020 Shared Task : LIT : LSTM-Interleaved Transformer for Multi-Hop Explanation Ranking. In Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs). Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Simple and Effective Multi-Paragraph Reading Comprehension", "authors": [ { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "845--855", "other_ids": { "DOI": [ "10.18653/v1/P18-1078" ] }, "num": null, "urls": [], "raw_text": "Christopher Clark and Matt Gardner. 2018. Simple and Effective Multi-Paragraph Reading Comprehen- sion. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), ACL 2018, pages 845-855, Melbourne, VIC, Australia. Association for Compu- tational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge", "authors": [ { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Cowhey", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Carissa", "middle": [], "last": "Schoenick", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have Solved Question An- swering? 
Try ARC, the AI2 Reasoning Challenge.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Chains-of-Reasoning at TextGraphs 2019 Shared Task: Reasoning over Chains of Facts for Explainable Multi-hop Inference", "authors": [ { "first": "Rajarshi", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ameya", "middle": [], "last": "Godbole", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Shehzaad", "middle": [], "last": "Dhuliawala", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)", "volume": "", "issue": "", "pages": "101--117", "other_ids": { "DOI": [ "10.18653/v1/D19-5313" ] }, "num": null, "urls": [], "raw_text": "Rajarshi Das, Ameya Godbole, Manzil Zaheer, She- hzaad Dhuliawala, and Andrew McCallum. 2019. Chains-of-Reasoning at TextGraphs 2019 Shared Task: Reasoning over Chains of Facts for Ex- plainable Multi-hop Inference. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 101-117, Hong Kong. Association for Com- putational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), NAACL-HLT 2019, pages 4171-4186, Min- neapolis, MN, USA. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Higher-order Lexical Semantic Models for Non-factoid Answer Reranking", "authors": [ { "first": "Daniel", "middle": [], "last": "Fried", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Gustave", "middle": [], "last": "Hahn-Powell", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "197--210", "other_ids": { "DOI": [ "10.1162/tacl_a_00133" ] }, "num": null, "urls": [], "raw_text": "Daniel Fried, Peter Jansen, Gustave Hahn-Powell, Mi- hai Surdeanu, and Peter Clark. 2015. Higher-order Lexical Semantic Models for Non-factoid Answer Reranking. 
Transactions of the Association for Com- putational Linguistics, 3:197-210.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Explanation Regeneration via Multi-Hop ILP Inference over Knowledge Base", "authors": [ { "first": "Aayushee", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Gopalakrishnan", "middle": [], "last": "Srinivasaraghavan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)", "volume": "", "issue": "", "pages": "109--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aayushee Gupta and Gopalakrishnan Srinivasaragha- van. 2020. Explanation Regeneration via Multi-Hop ILP Inference over Knowledge Base. In Proceed- ings of the Graph-based Methods for Natural Lan- guage Processing (TextGraphs), pages 109-114. As- sociation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A Study of Automatically Acquiring Explanatory Inference Patterns from Corpora of Explanations: Lessons from Elementary Science Exams", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" } ], "year": 2017, "venue": "6th Workshop on Automated Knowledge Base Construction (AKBC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen. 2017. A Study of Automatically Acquir- ing Explanatory Inference Patterns from Corpora of Explanations: Lessons from Elementary Science Ex- ams. In 6th Workshop on Automated Knowledge Base Construction (AKBC) 2017.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Framing QA as Building and Ranking Intersentence Answer Justifications. Computational Linguistics", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Sharp", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2017, "venue": "", "volume": "43", "issue": "", "pages": "407--449", "other_ids": { "DOI": [ "10.1162/COLI_a_00287" ] }, "num": null, "urls": [], "raw_text": "Peter Jansen, Rebecca Sharp, Mihai Surdeanu, and Pe- ter Clark. 2017. Framing QA as Building and Rank- ing Intersentence Answer Justifications. Computa- tional Linguistics, 43(2):407-449.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Ustalov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)", "volume": "", "issue": "", "pages": "63--77", "other_ids": { "DOI": [ "10.18653/v1/D19-5309" ] }, "num": null, "urls": [], "raw_text": "Peter Jansen and Dmitry Ustalov. 2019. TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration. In Proceedings of the Thirteenth Workshop on Graph- Based Methods for Natural Language Processing (TextGraphs-13), pages 63-77, Hong Kong. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "TextGraphs 2020 Shared Task on Multi-Hop Inference for Explanation Regeneration", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Ustalov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)", "volume": "", "issue": "", "pages": "85--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen and Dmitry Ustalov. 2020. TextGraphs 2020 Shared Task on Multi-Hop Inference for Explanation Regeneration. In Pro- ceedings of the Graph-based Methods for Natural Language Processing (TextGraphs), pages 85- 97, Barcelona, Spain (Online). Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-hop Inference", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Wainwright", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Marmorstein", "suffix": "" }, { "first": "Clayton", "middle": [], "last": "Morrison", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018", "volume": "", "issue": "", "pages": "2732--2740", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen, Elizabeth Wainwright, Steven Mar- morstein, and Clayton Morrison. 2018. WorldTree: A Corpus of Explanation Graphs for Elemen- tary Science Questions supporting Multi-hop In- ference. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation, LREC 2018, pages 2732-2740, Miyazaki, Japan. European Language Resources Association (ELRA).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Cumulated Gain-Based Evaluation of IR Techniques", "authors": [ { "first": "Kalervo", "middle": [], "last": "J\u00e4rvelin", "suffix": "" }, { "first": "Jaana", "middle": [], "last": "Kek\u00e4l\u00e4inen", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on Information Systems", "volume": "20", "issue": "4", "pages": "422--446", "other_ids": { "DOI": [ "10.1145/582415.582418" ] }, "num": null, "urls": [], "raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cu- mulated Gain-Based Evaluation of IR Techniques. ACM Transactions on Information Systems, 20(4):422-446.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning to explain: Datasets and models for identifying valid reasoning chains in multihop question-answering", "authors": [ { "first": "Harsh", "middle": [], "last": "Jhamtani", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harsh Jhamtani and Peter Clark. 2020. 
Learning to ex- plain: Datasets and models for identifying valid rea- soning chains in multihop question-answering.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "TextGraphs-15 Shared Task System Description : Multi-Hop Inference Explanation Regeneration by Matching Expert Ratings", "authors": [ { "first": "Sam", "middle": [], "last": "Sureshkumar Vivek Kalyan", "suffix": "" }, { "first": "", "middle": [], "last": "Witteveen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of TextGraphs-15: Graph-based Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sureshkumar Vivek Kalyan, Sam Witteveen, and Mar- tin Andrews. 2021. TextGraphs-15 Shared Task Sys- tem Description : Multi-Hop Inference Explanation Regeneration by Matching Expert Ratings. In Pro- ceedings of TextGraphs-15: Graph-based Methods for Natural Language Processing. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "On the Possibilities and Limitations of Multi-hop Reasoning Under Linguistic Imperfections", "authors": [ { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Erfan Sadeqi Azer", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Khashabi, Erfan Sadeqi Azer, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2019. On the Possibilities and Limitations of Multi-hop Reason- ing Under Linguistic Imperfections.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "QASC: A Dataset for Question Answering via Sentence Composition", "authors": [ { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Guerquin", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2020, "venue": "The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)", "volume": "", "issue": "", "pages": "8082--8090", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6319" ] }, "num": null, "urls": [], "raw_text": "Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. QASC: A Dataset for Question Answering via Sentence Com- position. 
In The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), pages 8082-8090, New York, NY, USA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "PGL at TextGraphs 2020 Shared Task: Explanation Regeneration using Language and Graph Learning Methods", "authors": [ { "first": "Weibin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Zhengjie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Weiyue", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jiaxiang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)", "volume": "", "issue": "", "pages": "98--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weibin Li, Yuxiang Lu, Zhengjie Huang, Weiyue Su, Jiaxiang Liu, Shikun Feng, and Yu Sun. 2020. PGL at TextGraphs 2020 Shared Task: Explanation Regeneration using Language and Graph Learning Methods. In Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs), pages 98-102. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Introduction to Information Retrieval", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Prabhakar", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval.
Cambridge University Press.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Compositional Questions Do Not Necessitate Multi-hop Reasoning", "authors": [ { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019", "volume": "", "issue": "", "pages": "4249--4257", "other_ids": { "DOI": [ "10.18653/v1/P19-1416" ] }, "num": null, "urls": [], "raw_text": "Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional Questions Do Not Necessitate Multi-hop Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, pages 4249-4257, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension", "authors": [ { "first": "Boyuan", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Deng", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Xiaofei", "middle": [], "last": "He", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017. MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "DeepBlueAI at TextGraphs 2021 Shared Task: Treating Multi-Hop Inference Explanation Regeneration as A Ranking Problem", "authors": [ { "first": "Chunguang", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Bingyan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Zhipeng", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2021, "venue": "Proceedings of TextGraphs-15: Graph-based Methods for Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chunguang Pan, Bingyan Song, and Zhipeng Luo. 2021. DeepBlueAI at TextGraphs 2021 Shared Task: Treating Multi-Hop Inference Explanation Regeneration as A Ranking Problem. In Proceedings of TextGraphs-15: Graph-based Methods for Natural Language Processing. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "ChiSquareX at TextGraphs 2020 Shared Task: Leveraging Pretrained Language Models for Explanation Regeneration", "authors": [ { "first": "Aditya", "middle": [], "last": "Girish Pawate", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Madhavan", "suffix": "" }, { "first": "Devansh", "middle": [], "last": "Chandak", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)", "volume": "", "issue": "", "pages": "103--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Girish Pawate, Varun Madhavan, and Devansh Chandak. 2020.
ChiSquareX at TextGraphs 2020 Shared Task: Leveraging Pretrained Language Models for Explanation Regeneration. In Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs), pages 103-108. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding", "authors": [ { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yu-Kun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)", "volume": "", "issue": "", "pages": "8968--8975", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6428" ] }, "num": null, "urls": [], "raw_text": "Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding. In The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), pages 8968-8975, New York, NY, USA.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A Survey on Explainability in Machine Reading Comprehension", "authors": [ { "first": "Mokanarangan", "middle": [], "last": "Thayaparan", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Valentino", "suffix": "" }, { "first": "Andr\u00e9", "middle": [], "last": "Freitas", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mokanarangan Thayaparan, Marco Valentino, and Andr\u00e9 Freitas. 2020. A Survey on Explainability in Machine Reading Comprehension.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Is Multihop QA in DIRE Condition? Measuring and Reducing Disconnected Reasoning", "authors": [ { "first": "Harsh", "middle": [], "last": "Trivedi", "suffix": "" }, { "first": "Niranjan", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "2020", "issue": "", "pages": "8846--8863", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.712" ] }, "num": null, "urls": [], "raw_text": "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2020. Is Multihop QA in DIRE Condition? Measuring and Reducing Disconnected Reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, pages 8846-8863, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Unification-based Reconstruction of Multi-hop Explanations for Science Questions", "authors": [ { "first": "Marco", "middle": [], "last": "Valentino", "suffix": "" }, { "first": "Mokanarangan", "middle": [], "last": "Thayaparan", "suffix": "" }, { "first": "Andr\u00e9", "middle": [], "last": "Freitas", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021", "volume": "", "issue": "", "pages": "200--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Valentino, Mokanarangan Thayaparan, and Andr\u00e9 Freitas. 2021. Unification-based Reconstruction of Multi-hop Explanations for Science Questions. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, pages 200-211, Online. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "The Philosophy of Information Retrieval Evaluation", "authors": [ { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 2002, "venue": "Evaluation of Cross-Language Information Retrieval Systems", "volume": "", "issue": "", "pages": "355--370", "other_ids": { "DOI": [ "10.1007/3-540-45691-0_34" ] }, "num": null, "urls": [], "raw_text": "Ellen M. Voorhees. 2002. The Philosophy of Information Retrieval Evaluation. In Evaluation of Cross-Language Information Retrieval Systems, pages 355-370, Berlin, Heidelberg. Springer Berlin Heidelberg.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A Theoretical Analysis of NDCG Type Ranking Measures", "authors": [ { "first": "Yining", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yuanzhi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Di", "middle": [], "last": "He", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 26th Annual Conference on Learning Theory", "volume": "", "issue": "", "pages": "25--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yining Wang, Liwei Wang, Yuanzhi Li, Di He, and Tie-Yan Liu. 2013. A Theoretical Analysis of NDCG Type Ranking Measures. In Proceedings of the 26th Annual Conference on Learning Theory, volume 30 of Proceedings of Machine Learning Research, pages 25-54, Princeton, NJ, USA. PMLR.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Teach Me to Explain: A Review of Datasets for Explainable NLP", "authors": [ { "first": "Sarah", "middle": [], "last": "Wiegreffe", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Marasovi\u0107", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Wiegreffe and Ana Marasovi\u0107. 2021.
Teach Me to Explain: A Review of Datasets for Explainable NLP.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A Three-step Method for Multi-Hop Inference Explanation Regeneration", "authors": [ { "first": "Yuejia", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Yunyan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaoming", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wandi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of TextGraphs-15: Graph-based Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuejia Xiang, Yunyan Zhang, Xiaoming Shi, Bo Liu, Wandi Xu, and Xi Chen. 2021. A Three-step Method for Multi-Hop Inference Explanation Regeneration. In Proceedings of TextGraphs-15: Graph-based Methods for Natural Language Processing. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "WorldTree V2: A Corpus of Science-Domain Structured Explanations and Inference Patterns supporting Multi-Hop Inference", "authors": [ { "first": "Zhengnan", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Thiem", "suffix": "" }, { "first": "Jaycie", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Wainwright", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Marmorstein", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Conference on Language Resources and Evaluation", "volume": "2020", "issue": "", "pages": "5456--5473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengnan Xie, Sebastian Thiem, Jaycie Martin, Elizabeth Wainwright, Steven Marmorstein, and Peter Jansen. 2020. WorldTree V2: A Corpus of Science-Domain Structured Explanations and Inference Patterns supporting Multi-Hop Inference. In Proceedings of the 12th Conference on Language Resources and Evaluation, LREC 2020, pages 5456-5473, Marseille, France. European Language Resources Association (ELRA).", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering", "authors": [ { "first": "Vikas", "middle": [], "last": "Yadav", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020", "volume": "", "issue": "", "pages": "4514--4525", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.414" ] }, "num": null, "urls": [], "raw_text": "Vikas Yadav, Steven Bethard, and Mihai Surdeanu. 2020. Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, pages 4514-4525, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "HOTPOTQA: A Dataset for Diverse, Explainable Multi-hop Question Answering", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2369--2380", "other_ids": { "DOI": [ "10.18653/v1/D18-1259" ] }, "num": null, "urls": [], "raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HOTPOTQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "text": "", "html": null, "content": "", "num": null }, "TABREF2": { "type_str": "table", "text": "", "html": null, "content": "
Overall task performance of systems participating in the 2021 Shared Task on Multi-Hop Inference for Explanation Regeneration. Performance is measured using Normalized Discounted Cumulative Gain (NDCG).
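For intuition, the following is a minimal Python sketch of the NDCG computation, following the standard formulation of Järvelin and Kekäläinen (2002); the example ratings are hypothetical, and the official evaluation script may differ in its exact gain and discount choices.

import math

def dcg(ratings):
    # Discounted cumulative gain: each relevance rating is discounted by the
    # log2 of its 1-indexed rank plus one (rank 1 -> log2(2), rank 2 -> log2(3), ...).
    return sum(r / math.log2(i + 2) for i, r in enumerate(ratings))

def ndcg(system_ranking, gold_ratings):
    # Normalize the DCG of the system ranking by the DCG of the ideal ranking,
    # i.e. the gold ratings sorted in descending order, yielding a score in [0, 1].
    ideal = dcg(sorted(gold_ratings, reverse=True))
    return dcg(system_ranking) / ideal if ideal > 0 else 0.0

# Hypothetical example: five facts with expert ratings 3, 2, 3, 0, 1,
# returned by a system in exactly that order.
print(round(ndcg([3, 2, 3, 0, 1], [3, 2, 3, 0, 1]), 2))  # -> 0.97

Systems are thus rewarded for placing highly rated facts near the top of the ranking, rather than for predicting an exact explanation set.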
", "num": null }, "TABREF3": { "type_str": "table", "text": "RedDragonAI Google-BERT Baseline (tf.idf)", "html": null, "content": "
Retrieval0.7750.7360.6710.477
Inference-supporting0.7160.7120.6830.433
Complex inference0.7380.6880.6640.406
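The Baseline (tf.idf) column in these tables refers to ranking facts purely by lexical similarity to the question; a minimal sketch of such a ranker using scikit-learn is given below. The function name and preprocessing are illustrative assumptions and may differ from the official baseline implementation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_facts(question, facts):
    # Fit tf.idf over the fact knowledge base, project the question into the
    # same vector space, and order facts by descending cosine similarity.
    vectorizer = TfidfVectorizer()
    fact_matrix = vectorizer.fit_transform(facts)
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, fact_matrix)[0]
    return sorted(range(len(facts)), key=lambda i: scores[i], reverse=True)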
", "num": null }, "TABREF4": { "type_str": "table", "text": "Performance (NDCG) of the systems when considering different types of knowledge.", "html": null, "content": "
Relevance (>) DeepBlueAI RedDragonAI Google-BERT Baseline (tf.idf)
00.8200.7710.7000.501
20.8180.7640.6860.489
40.8310.6920.6010.416
", "num": null }, "TABREF5": { "type_str": "table", "text": "Precison@k DeepBlueAI RedDragonAI Google-BERT Baseline (tf.idf)", "html": null, "content": "
k = 10.9410.9180.8450.715
k = 30.8780.8490.7910.582
k = 50.8170.7840.7430.501
k = 100.6860.6610.6470.381
k = 200.5120.5070.5230.272
k = 500.2960.3030.3150.161
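Precision@k measures the fraction of the k highest-ranked facts that belong to the gold explanation; a minimal sketch, assuming binary gold labels and hypothetical fact identifiers:

def precision_at_k(ranked_fact_ids, gold_fact_ids, k):
    # Fraction of the top-k ranked facts that appear in the gold set.
    return sum(1 for fact_id in ranked_fact_ids[:k] if fact_id in gold_fact_ids) / k

# Hypothetical example: two of the top three predicted facts are gold facts.
print(precision_at_k(['f1', 'f7', 'f3'], {'f1', 'f3', 'f9'}, k=3))  # -> 0.666...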
", "num": null }, "TABREF6": { "type_str": "table", "text": "Precision@k for each model across varying values of k.", "html": null, "content": "
Overlaps (\u2264 T) DeepBlueAI RedDragonAI Google-BERT Baseline (tf.idf)
100.0%0.8200.7710.7000.501
90.0%0.8200.7710.7000.501
80.0%0.8200.7710.6990.501
70.0%0.8180.7690.6980.497
60.0%0.8160.7660.6950.493
50.0%0.8130.7630.6910.487
40.0%0.8040.7540.6790.471
30.0%0.7910.7380.6610.443
20.0%0.7510.7040.6280.382
10.0%0.6530.6030.5590.261
0.0%0.4670.3580.4250.134
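The threshold T above restricts evaluation to gold facts whose lexical overlap with the question is at most T; one plausible way such an overlap percentage could be computed is sketched below. The tokenization and formula are assumptions, not the official analysis code.

def lexical_overlap(question, fact):
    # Percentage of the fact's unique tokens that also occur in the question;
    # lowercasing and whitespace tokenization are simplifying assumptions.
    question_tokens = set(question.lower().split())
    fact_tokens = set(fact.lower().split())
    if not fact_tokens:
        return 0.0
    return 100.0 * len(fact_tokens & question_tokens) / len(fact_tokens)

Consistent with this reading, the purely lexical tf.idf baseline degrades most sharply as T decreases, while the neural systems retain a larger share of their performance.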
", "num": null } } } }