{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:57:56.897562Z" }, "title": "A Three-step Method for Multi-Hop Inference Explanation Regeneration", "authors": [ { "first": "Yuejia", "middle": [], "last": "Xiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": {} }, "email": "yuejiaxiang@tencent.com" }, { "first": "Yunyan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": {} }, "email": "yunyanzhang@tencent.com" }, { "first": "Xiaoming", "middle": [], "last": "Shi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": {} }, "email": "xiaomingshi@tencent.com" }, { "first": "Bo", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": {} }, "email": "" }, { "first": "Wandi", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": {} }, "email": "xuwandi@stumail.neu.edu.cn" }, { "first": "Chen", "middle": [], "last": "Xi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": {} }, "email": "" }, { "first": "Tencent", "middle": [ "Jarvis" ], "last": "Lab", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": {} }, "email": "" }, { "first": "Cetc", "middle": [], "last": "Scmsit", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "Scrxx", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Multi-hop inference for explanation generation is to combine two or more facts to make an inference. The task focuses on generating explanations for elementary science questions. In the task, the relevance between the explanations and the QA pairs is of vital importance. To address the task, a three-step framework is proposed. Firstly, vector distance between two texts is utilized to recall the top-K relevant explanations for each question, reducing the calculation consumption. Then, a selection module is employed to choose those most relative facts in an autoregressive manner, giving a preliminary order for the retrieved facts. Thirdly, we adopt a re-ranking module to re-rank the retrieved candidate explanations with relevance between each fact and the QA pairs. Experimental results illustrate the effectiveness of the proposed framework with an improvement of 39.78% in NDCG over the official baseline. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Multi-hop inference for explanation generation is to combine two or more facts to make an inference. The task focuses on generating explanations for elementary science questions. In the task, the relevance between the explanations and the QA pairs is of vital importance. To address the task, a three-step framework is proposed. Firstly, vector distance between two texts is utilized to recall the top-K relevant explanations for each question, reducing the calculation consumption. 
Second, a selection module is employed to choose the most relevant facts in an autoregressive manner, giving a preliminary order for the retrieved facts. Third, a re-ranking module re-ranks the retrieved candidate explanations according to the relevance between each fact and the QA pair. Experimental results illustrate the effectiveness of the proposed framework, with an improvement of 39.78% in NDCG over the official baseline. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multi-hop inference for explanation generation (Jansen and Ustalov, 2020) , which aims to combine two or more facts to make an inference and to provide users with human-readable explanations, has shown significant potential and technological value for improving medical and judicial systems. A typical application in natural language processing is question answering (QA). Multi-hop explanation generation for QA aims to retrieve multiple textual facts from pre-defined candidates (typically drawn from books, web pages, or other documents) for a given question-answer pair. Figure 1 shows an example: the input is a QA sample and candidate facts, and the task is to retrieve the facts f_1, f_2, f_3 that contribute most to inferring the answer.", "cite_spans": [ { "start": 47, "end": 73, "text": "(Jansen and Ustalov, 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 595, "end": 603, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multi-hop explanation generation for QA suffers from a key issue: prohibitive computational cost, caused by the unaffordable number of fact combinations, especially as the number of facts required to perform an inference increases. Empirically, this issue causes large drops in performance (Fried et al., 2015; Jansen et al., 2017) and limits inference capacity (Khashabi et al., 2019) . To mitigate it, previous works compute scores for facts in isolation, or severely limit the number of fact combinations (Das et al., 2019; Banerjee, 2019; Chia et al., 2019) . Cartuyvels et al. (2020) proposed a two-step inference algorithm for multi-hop explanation regeneration, with a relevant-fact recall step and an autoregressive fact selection step, which improves both efficiency and accuracy.", "cite_spans": [ { "start": 299, "end": 319, "text": "(Fried et al., 2015;", "ref_id": "BIBREF6" }, { "start": 320, "end": 340, "text": "Jansen et al., 2017)", "ref_id": "BIBREF7" }, { "start": 375, "end": 398, "text": "(Khashabi et al., 2019)", "ref_id": "BIBREF9" }, { "start": 535, "end": 553, "text": "(Das et al., 2019;", "ref_id": "BIBREF4" }, { "start": 554, "end": 569, "text": "Banerjee, 2019;", "ref_id": "BIBREF0" }, { "start": 570, "end": 588, "text": "Chia et al., 2019)", "ref_id": "BIBREF3" }, { "start": 591, "end": 615, "text": "Cartuyvels et al. (2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the TextGraphs 2021 Shared Task, the relevance between the explanations and the QA pairs is of vital importance. However, the autoregressive selection process may hinder the model's ability to recognize the relevance between each fact and the QA pair. 
The main reason is that the autoregressive selection process emphasizes the relevance between the QA pair and the already retrieved facts, paying more and more attention to the retrieved facts as their number grows. As in the example in Figure 2 , the two-step algorithm fails to recognize the correct order of the two retrieved facts 'form means kind' and 'ultraviolet rays means ultraviolet light'. To address the problem, we propose a reranking module that re-ranks the results of the two-step method according to the relevance between each fact and the QA pair. We therefore propose a three-step framework to solve the task: recall, selection and reranking, which iteratively recall facts, select core facts, and then rerank the retrieved core facts, respectively.", "cite_spans": [], "ref_spans": [ { "start": 457, "end": 465, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experiments on the 2021 version of the task demonstrate the effectiveness of the proposed method, which achieves an improvement of 39.78% in NDCG over the official baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The proposed framework is designed to predict a ranked list of facts that infer a QA sample, and includes three modules: a recall module, a selection module and a reranking module, as illustrated in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 185, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "We concatenate the question text and the answer text as q_a. We extract the roots of the words in q_a and in all candidate facts to reduce the number of different textual expressions caused by singular/plural forms and tenses. For example, 'cats' and 'made' are modified to 'cat' and 'make', respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recall Module", "sec_num": "2.1" }, { "text": "The recall module aims to iteratively recall facts with high relevance from the candidates. Formally, the recall module can be defined as a function f(q, a, f_1, ..., f_i, ...): T^L \u2192 R^{|C|}, where q denotes the question token sequence, a denotes the answer token sequence, f_i denotes the recalled facts, T denotes the token set, L denotes the length of the sequence [q, a, f_1, ..., f_i, ...], and C denotes the candidate set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recall Module", "sec_num": "2.1" }, { "text": "Specifically, we use the distances between tf-idf vectors to compute the distances between two texts. Let s_i = [q_a, f^*_1, ..., f^*_i], where f^*_i is the i-th best candidate selected from C_i by the selection module (refer to Section 2.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recall Module", "sec_num": "2.1" }
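To make the recall step concrete, the following is a minimal sketch of tf-idf top-K recall with word-root normalization, assuming scikit-learn for the tf-idf vectors and NLTK's WordNetLemmatizer for root extraction; the names to_roots and recall_topk are illustrative, not the authors' released code.

```python
# Sketch of the recall module: tf-idf distance between the growing query s_i
# and each candidate fact, after reducing words to their roots.
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

lemmatizer = WordNetLemmatizer()

def to_roots(text):
    # Noun pass then verb pass, e.g. "cats" -> "cat", "made" -> "make".
    return " ".join(lemmatizer.lemmatize(lemmatizer.lemmatize(t), "v")
                    for t in text.lower().split())

def recall_topk(s_i, facts, k):
    # Rank all candidate facts by tf-idf cosine distance to s_i and return
    # the indices of the k closest ones (the candidate set C_{i+1}).
    corpus = [to_roots(f) for f in facts]
    vec = TfidfVectorizer().fit(corpus + [to_roots(s_i)])
    dists = cosine_distances(vec.transform([to_roots(s_i)]),
                             vec.transform(corpus))[0]
    return dists.argsort()[:k].tolist()
```

After each selection step, the chosen fact f^*_i is appended to s_i and the recall is rerun, so the candidate set is refreshed at every hop.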
, { "text": "For convenience, we write q_a as s_0. First, we compute the top-K facts f_i with the smallest distance from q_a, forming C_1. Then we compute the top-K facts with the smallest distance from s_1 to form C_2, and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recall Module", "sec_num": "2.1" }, { "text": "We first normalize the score of each candidate fact to between 0 and 1: since the annotated relevancy score ranges from 0 to 6, we divide the score by 6. Then we use BERT's binary classification head to calculate the probability P(f_i|s_{i-1}) = BERT(f_i, s_{i-1}) of each candidate f_i given s_{i-1}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection Module", "sec_num": "2.2" }, { "text": "Eventually, we select the candidate with the highest probability as f^*_i. In the prediction process, we keep the top-B candidates for each f_i and, during iteration, treat the fact currently in use among the top-B as f^*_i. That is, for a q_a our method generates B^(m-1) * K fact chains of length m. The probability of each fact chain is obtained by chain decomposition: P(q_a, f_1, ..., f_m) = P(f_1|s_0) P(f_2|s_1) ... P(f_m|s_{m-1}). Our algorithm computes only sequences of length m < M. We finally sort the output sequences (f^{r_1}_1, f^{r_1}_2, ..., f^{r_1}_m, f^{r_2}_1, f^{r_2}_2, ..., f^{r_2}_m, ...), where f^{r_j}_i denotes the i-th fact of the j-th ranked fact chain. The output sequence is then de-duplicated by removing every non-first occurrence of a fact, obtaining the sequence O.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection Module", "sec_num": "2.2" }, { "text": "The selection module hypothesizes that the previously predicted facts are always true and predicts the next fact given them. Such a process tends to suffer from error propagation, since errors made in early steps cannot be corrected later. Furthermore, one QA pair may have 20-30 relevant facts on average; the selection module may pay attention to the QA pair at the beginning, but shifts its attention to the retrieved facts as their number grows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rerank Module", "sec_num": "2.3" }, { "text": "To relieve this problem, we introduce a rerank module, which computes the relevance between the QA pair and each fact. Unlike the selection module, the rerank module does not consider the correlations between facts, and is therefore complementary to the selection module. Inspired by the Natural Language Inference (NLI) task (Williams et al., 2017; Bowman et al., 2015), we cast QA pairs and candidate facts as premise-hypothesis pairs. We select the top N candidate facts from the predicted results of the selection module and assign every candidate fact a score according to its order. In the inference process, we calculate the relevance probability for each candidate fact; if the probability is above a threshold, a constant is added to the original score of that candidate. After that, we rerank the candidate facts according to the updated scores. In this way, the model obtains complementary results from both the selection module and the rerank module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rerank Module", "sec_num": "2.3" }
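As a concrete illustration of the selection and rerank steps, here is a minimal sketch that uses HuggingFace Transformers' BertForSequenceClassification as the binary relevance scorer; the beam width B, chain length M, threshold, and bonus constant are illustrative placeholders rather than values reported in the paper.

```python
# Sketch of autoregressive selection (beam search over fact chains) followed
# by threshold-based reranking; illustrative, not the authors' released code.
import math
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

def relevance_prob(context, fact):
    # P(f_i | s_{i-1}) from BERT's binary classification head.
    inputs = tokenizer(context, fact, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def select_facts(q_a, recall_fn, B=4, K=64, M=5):
    # Each beam is (context s_i, fact chain, accumulated log-probability).
    beams, finished = [(q_a, [], 0.0)], []
    for _ in range(M):
        expanded = []
        for context, chain, logp in beams:
            for fact in recall_fn(context, K):  # candidate set C_i
                p = relevance_prob(context, fact)
                expanded.append((context + " " + fact, chain + [fact],
                                 logp + math.log(p + 1e-12)))
        beams = sorted(expanded, key=lambda b: -b[2])[:B]
        finished.extend(beams)
    # Flatten chains in rank order, keeping only the first occurrence of a
    # fact (the de-duplicated sequence O).
    seen, ordered = set(), []
    for _, chain, _ in sorted(finished, key=lambda b: -b[2]):
        for fact in chain:
            if fact not in seen:
                seen.add(fact)
                ordered.append(fact)
    return ordered

def rerank(q_a, ordered, N=50, threshold=0.5, bonus=10.0):
    # Positional score for the top-N facts, plus a constant bonus whenever
    # the QA-fact relevance probability clears the threshold.
    rescored = [(N - i + (bonus if relevance_prob(q_a, f) > threshold else 0.0), f)
                for i, f in enumerate(ordered[:N])]
    return [f for _, f in sorted(rescored, key=lambda x: -x[0])]
```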
, { "text": "In the 2021 version of the task, some facts are marked as deleted, duplicated, or low quality. We removed these facts, leaving 8,983 facts. The training set contains 2,206 QA samples, the development set contains 496, and the test set contains 1,664.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Setting", "sec_num": "3.1" }, { "text": "We ran experiments on one 16GB Nvidia Tesla P100 GPU. The details of the experimental setup are shown in Table 1 ; parameters not mentioned in the table use the default settings of the BERT model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Setting", "sec_num": "3.1" }, { "text": "The evaluation uses NDCG, and the organizers provide a very large dataset of approximately 250,000 expert-annotated relevancy ratings for facts ranked highly by baseline language models from previous years (e.g. BERT, RoBERTa).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.2" }, { "text": "The shared task data distribution includes a baseline that uses a term frequency model (tf-idf) to rank how likely table-row sentences are to be part of a given explanation. The performance of this baseline on the development partition is 0.513 NDCG. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.3" }, { "text": "The experimental results show that our method is significantly better than the baseline model. The strategy of combining the recall module and the selection module is effective, scoring 17.79% higher than the baseline, and the rerank module brings a further improvement of 2.14%. The rerank module thus delivers the performance gain we expected, supporting the rationale of making the results more focused on the question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "3.4" }, { "text": "We show three cases in Table 3 . For each case, we show the top-10 facts before and after applying the rerank module. From these cases, we can see that after the recall module and the selection module, most of the top-10 facts are related to the question and the answer, but some irrelevant or less relevant facts are still ranked high. After applying the rerank module, the ranking of facts with high relevance ratings is generally improved.", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 30, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Case Study", "sec_num": "3.5" }
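For reference, the NDCG metric used in the evaluation can be computed as in the toy sketch below; the gold ratings and the predicted ranking are invented for illustration, and this is not the official scorer.

```python
# Toy NDCG computation for a predicted fact ranking.
import numpy as np

def dcg(relevances):
    # DCG = sum_i rel_i / log2(i + 1), with ranks starting at 1.
    rel = np.asarray(relevances, dtype=float)
    return float((rel / np.log2(np.arange(2, len(rel) + 2))).sum())

def ndcg(gold_ratings, predicted_facts):
    # gold_ratings maps fact -> expert relevancy (0-6); unrated facts score 0.
    gains = [gold_ratings.get(f, 0) for f in predicted_facts]
    ideal = sorted(gold_ratings.values(), reverse=True)
    return dcg(gains) / dcg(ideal) if ideal else 0.0

ratings = {"f1": 6, "f2": 4, "f3": 2}           # hypothetical expert ratings
print(ndcg(ratings, ["f1", "f3", "f2", "f4"]))  # ~0.97
```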
, { "text": "Recall+Selection: Fact (Top10), Ref. | Recall+Selection+Rerank: Fact (Top10), Ref.
the amount of daylight is greatest in the summer (6) | the amount of daylight is greatest in the summer (6)
summer is a kind of season (4) | summer is a kind of season (4)
daylight hours means time during daylight (0) | summer has the most sunlight (6)
the amount of daylight is least in the winter (2) | increase means more (0)
winter is a kind of season (2) | daylight means sunlight (0)
increase means more (0) | summer is hemisphere tilted towards the sun (5)
daylight means sunlight (0) | high is similar to increase (0)
summer is hemisphere tilted towards the sun (5) | greatest means largest; highest (1)
summer has the most sunlight (6) | receiving sunlight synonymous absorbing sunlight (0)
high is similar to increase (0) | amount of daylight means length of daylight (0)
(a) Question: About how long does it take Earth to make one revolution around the Sun? Answer: summer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "3.5" }, { "text": "Recall+Selection: Fact (Top10), Ref. | Recall+Selection+Rerank: Fact (Top10), Ref.
plucking; strumming a string cause that string to vibrate (6) | matter; molecules vibrating can cause sound (5)
a violin is a kind of musical instrument (4) | plucking; strumming a string cause that string to vibrate (6)
to cause means to be responsible for (0) | a violin is a kind of musical instrument (4)
musical instruments make sound when they are played (4) | musical instruments make sound when they are played (4)
matter; molecules vibrating can cause sound (5) | a string is a kind of object (3)
a string is a part of a guitar for producing sound (1) | to cause means to be responsible for (0)
a string is a kind of object (3) | a string is a part of a guitar for producing sound (1)
a guitar is a kind of musical instrument (0) | a musical instrument is a kind of object (3)
a musical instrument is a kind of object (3) | make means produce (0)
make means produce (0) | vibrating matter can produce sound (5)
(c) Question: Bruce plays his violin every Friday night for the symphony. Before he plays, he plucks each string to see if his violin is in tune. Which is most responsible for the generation of sound waves from his violin? Answer: vibrations of the string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "3.5" }, { "text": "Different values of the parameter N in the rerank module affect the performance to some extent, so we report the performance under different values of N. As shown in Table 4 , the model achieves its best performance, a 70.03% NDCG score, when N is 50. The NDCG score decreases when N is too low, since the rerank module then cannot play its due role. Furthermore, a larger N is not necessary.", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 176, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Parameters in Rerank Module", "sec_num": "3.6" }, { "text": "We proposed our approach to the shared task on "Multi-hop Inference Explanation Regeneration". Our framework consists of three modules: a recall module, a selection module and a reranking module. The recall module retrieves the top-K relevant facts using the distances between tf-idf vectors. Then an autoregressive fact selection module is applied to predict the next fact given the retrieved facts. Finally, a rerank module is applied to correct the order. The proposed framework achieved an improvement of 39.78% in NDCG over the official baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "https://github.com/cognitiveailab/tg2021task", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Asu at textgraphs 2019 shared task: Explanation regeneration using language models and iterative re-ranking"
, "authors": [ { "first": "Pratyay", "middle": [], "last": "Banerjee", "suffix": "" } ], "year": 2019, "venue": "ACL Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pratyay Banerjee. 2019. Asu at textgraphs 2019 shared task: Explanation regeneration using language models and iterative re-ranking. ACL Workshop.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.05326" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Autoregressive reasoning over chains of facts with transformers", "authors": [ { "first": "Ruben", "middle": [], "last": "Cartuyvels", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Spinks", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2020, "venue": "ACL Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruben Cartuyvels, Graham Spinks, and Marie-Francine Moens. 2020. Autoregressive reasoning over chains of facts with transformers. ACL Workshop.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Red dragon ai at textgraphs 2019 shared task: Language model assisted explanation generation", "authors": [ { "first": "Yew Ken", "middle": [], "last": "Chia", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Witteveen", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Andrews", "suffix": "" } ], "year": 2019, "venue": "ACL Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yew Ken Chia, Sam Witteveen, and Martin Andrews. 2019. Red dragon ai at textgraphs 2019 shared task: Language model assisted explanation generation. ACL Workshop.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Chains-of-reasoning at textgraphs 2019 shared task: Reasoning over chains of facts for explainable multihop inference", "authors": [ { "first": "Rajarshi", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ameya", "middle": [], "last": "Godbole", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Shehzaad", "middle": [], "last": "Dhuliawala", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)", "volume": "", "issue": "", "pages": "101--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajarshi Das, Ameya Godbole, Manzil Zaheer, Shehzaad Dhuliawala, and Andrew McCallum. 2019. Chains-of-reasoning at textgraphs 2019 shared task: Reasoning over chains of facts for explainable multi-hop inference. 
In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 101-117.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Higher-order lexical semantic models for non-factoid answer reranking", "authors": [ { "first": "Daniel", "middle": [], "last": "Fried", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Gustave", "middle": [], "last": "Hahn-Powell", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "197--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Fried, Peter Jansen, Gustave Hahn-Powell, Mihai Surdeanu, and Peter Clark. 2015. Higher-order lexical semantic models for non-factoid answer reranking. Transactions of the Association for Computational Linguistics, 3:197-210.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Framing qa as building and ranking intersentence answer justifications", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Sharp", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "2", "pages": "407--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen, Rebecca Sharp, Mihai Surdeanu, and Peter Clark. 2017. Framing qa as building and ranking intersentence answer justifications. Computational Linguistics, 43(2):407-449.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "TextGraphs 2020 Shared Task on Multi-Hop Inference for Explanation Regeneration", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Ustalov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)", "volume": "", "issue": "", "pages": "85--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen and Dmitry Ustalov. 2020. TextGraphs 2020 Shared Task on Multi-Hop Inference for Explanation Regeneration. In Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs), pages 85-97, Barcelona, Spain (Online). 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On the capabilities and limitations of reasoning for natural language understanding", "authors": [ { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" }, { "first": "Erfan", "middle": [], "last": "Sadeqi Azer", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.02522" ] }, "num": null, "urls": [], "raw_text": "Daniel Khashabi, Erfan Sadeqi Azer, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2019. On the capabilities and limitations of reasoning for natural language understanding. arXiv preprint arXiv:1901.02522.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.05426" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "An example of multi-hop inference for explanation generation.", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "An overview of our method.", "num": null, "uris": null }, "TABREF1": { "num": null, "text": "Hyperparameters", "html": null, "content": "
", "type_str": "table" }, "TABREF2": { "num": null, "text": "The details of the experimental setup are shown in the table 1. The parameters not men-", "html": null, "content": "
Method | NDCG
Baseline | 50.10%
Recall+Selection | 67.89%
Recall+Selection+Rerank | 70.03%
", "type_str": "table" }, "TABREF3": { "num": null, "text": "Main Results tioned in the table use the default parameter settings of the Bert model.", "html": null, "content": "", "type_str": "table" }, "TABREF4": { "num": null, "text": "Female seals usually return to the same beaches year after year to give birth. If they are repeatedly disturbed by humans at those beaches, how will the seals most likely respond? Answer: They will give birth at different beaches.", "html": null, "content": "
Recall+Selection: Fact (Top10), Ref. | Recall+Selection+Rerank: Fact (Top10), Ref.
seals return the same beaches to give birth (4) | if humans disturb animals; move to different location (6)
a seal is a kind of animal (4) | a seal is a kind of sea mammal (4)
if humans disturb animals; move to different location (6) | a seal is a kind of animal (4)
a seal is a kind of sea mammal (4) | seals return the same beaches to give birth (4)
mammals give birth to live young (0) | a mammal is a kind of animal (2)
a mammal is a kind of animal (2) | mammals give birth to live young (0)
a beach is a kind of habitat; environment (4) | a beach is a kind of location (4)
a beach is a kind of location (4) | a human is a kind of mammal (2)
if something moves; something in different location (0) | an environment is a kind of place (2)
a human is a kind of mammal (2) | an animal is a kind of living thing (2)
(b)
", "type_str": "table" }, "TABREF5": { "num": null, "text": "Some cases in evaluation dataset.", "html": null, "content": "", "type_str": "table" }, "TABREF7": { "num": null, "text": "Experiments on parameter of K", "html": null, "content": "
", "type_str": "table" } } } }