{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:40:31.090534Z" }, "title": "Generating Hypothetical Events for Abductive Inference", "authors": [ { "first": "Debjit", "middle": [], "last": "Paul", "suffix": "", "affiliation": { "laboratory": "Research Training Group", "institution": "AIPHES Institute for Computational Linguistics Heidelberg University", "location": {} }, "email": "paul@cl.uni-heidelberg.de" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "", "affiliation": { "laboratory": "Research Training Group AIPHES Institute for Computational Linguistics Heidelberg University", "institution": "", "location": {} }, "email": "frank@cl.uni-heidelberg.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Abductive reasoning starts from some observations and aims at finding the most plausible explanation for these observations. To perform abduction, humans often make use of temporal and causal inferences, and knowledge about how some hypothetical situation can result in different outcomes. This work offers the first study of how such knowledge impacts the Abductive \u03b1NLI task-which consists in choosing the more likely explanation for given observations. We train a specialized language model LM I that is tasked to generate what could happen next from a hypothetical scenario that evolves from a given event. We then propose a multi-task model MT L to solve the \u03b1NLI task, which predicts a plausible explanation by a) considering different possible events emerging from candidate hypothesesevents generated by LM I-and b) selecting the one that is most similar to the observed outcome. We show that our MT L model improves over prior vanilla pre-trained LMs finetuned on \u03b1NLI. Our manual evaluation and analysis suggest that learning about possible next events from different hypothetical scenarios supports abductive inference.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Abductive reasoning starts from some observations and aims at finding the most plausible explanation for these observations. To perform abduction, humans often make use of temporal and causal inferences, and knowledge about how some hypothetical situation can result in different outcomes. This work offers the first study of how such knowledge impacts the Abductive \u03b1NLI task-which consists in choosing the more likely explanation for given observations. We train a specialized language model LM I that is tasked to generate what could happen next from a hypothetical scenario that evolves from a given event. We then propose a multi-task model MT L to solve the \u03b1NLI task, which predicts a plausible explanation by a) considering different possible events emerging from candidate hypothesesevents generated by LM I-and b) selecting the one that is most similar to the observed outcome. We show that our MT L model improves over prior vanilla pre-trained LMs finetuned on \u03b1NLI. Our manual evaluation and analysis suggest that learning about possible next events from different hypothetical scenarios supports abductive inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Abductive reasoning (AR) is inference to the best explanation. 
It typically starts from an incomplete set of observations about everyday situations and comes up with what can be considered the most likely possible explanation given these observations (Pople, 1973; Douven, 2017) . One of the key characteristics that make abductive reasoning more challenging and distinct from other types of reasoning is its non-monotonic character (Strasser and Antonelli, 2019) i.e., even the most likely explanations are not necessarily correct. For example, in Figure 1 , the most likely explanation for Observation 1: \"wet grass outside my house\" is that \"it has been raining\". However, when a new piece of information (observation or evidence) becomes available, the explanation must possibly be retracted, showing the defeasible character of abduction. With the new observation (\"the sprinkler was switched on\") the most plausible explanation changes to \"Sprinkler caused the grass to be wet\". Humans, in such situations, could induce or validate such abductive inferences by performing hypothetical reasoning (such as \"What would happen if the sprinkler was switched on?\") to arrive at a plausible explanation for \"wet grass outside my house\".", "cite_spans": [ { "start": 251, "end": 264, "text": "(Pople, 1973;", "ref_id": "BIBREF29" }, { "start": 265, "end": 278, "text": "Douven, 2017)", "ref_id": null }, { "start": 433, "end": 463, "text": "(Strasser and Antonelli, 2019)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 549, "end": 558, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we focus on the \u03b1NLI task (Bhagavatula et al., 2020) , where given two observations (O 1 at time t 1 , O 2 at time t 2 , with t 1 < t 2 ) as an incomplete context, the task is to predict which of two given hypothesized events (H 1 or H 2 ) is more plausible to have happened between O 1 and O 2 . Figure 2 illustrates this with an example: given observations O 1 :\"Priya decided to try a new restaurant.\" and O 2 : \"Priya thought her food was delicious.\", the task is to predict whether H 1 or H 2 is the more plausible explanation given observations O 1 and O 2 . Both H 1 and H 2 are different plausible hypothetical situations that can evolve from the same observation (premise) O 1 .", "cite_spans": [ { "start": 40, "end": 66, "text": "(Bhagavatula et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 311, "end": 319, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we hypothesize that learning how different hypothetical scenarios (H 1 and H 2 ) can result in different outcomes (e.g., O H j 2 , Fig. 2 ) can help in performing abductive inference. In order to decide which H i , is more plausible given observa- Figure 2: Motivational example for \u03b1NLI : The top box (red) shows the observations and two callout clouds (green) contain the hypotheses. The implications (O Hi i ) -generated by the LM conditioned on each hypothesis and the observations -are given in pink colored boxes.", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 152, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "tions, we assume each H i to be true and generate a possible next event O H i 2 for each of them independently (e.g.: What will happen if Priya's ordered food was microwaved and precooked?). We then compare the generated sentences (O H 1 2 , O H 2 2 in Fig. 
2) to what has been observed (O 2 ) and choose as most plausible hypothesis the one whose implication is closest to observation O 2 .", "cite_spans": [], "ref_spans": [ { "start": 253, "end": 261, "text": "Fig. 2)", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We design a language model (LM I ) which, given observations and a hypothesis, generates a possible event that could happen next, given one hypothesis. In order to train this language model, we use the TIMETRAVEL (TT) corpus (Qin et al., 2019 ) (a subpart of the ROCStories corpus 1 ). We utilize the LM I model to generate a possible next event for each hypothesis, given the observations. We then propose a multi-task learning model MT L that jointly chooses from the generated possible next events (O H 1 2 or O H 2 2 ) the one most similar to the observation O 2 and predicts the most plausible hypothesis (H 1 or H 2 ).", "cite_spans": [ { "start": 225, "end": 242, "text": "(Qin et al., 2019", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are: i) To our best knowledge, we are the first to demonstrate that a model that learns to perform hypothetical reasoning can support and improve abductive tasks such as \u03b1NLI. We show that ii) for \u03b1NLI our multi-task model outperforms a strong BERT baseline (Bhagavatula et al., 2020) .", "cite_spans": [ { "start": 276, "end": 302, "text": "(Bhagavatula et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our code is made publicly available. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main idea is to learn to generate assumptions, in a given situation, about \"What could have hap- 1 We ensure that \u03b1NLI testing instances are held out.", "cite_spans": [ { "start": 101, "end": 102, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Learning about Counterfactual Scenarios", "sec_num": "2" }, { "text": "2 https://github.com/Heidelberg-NLP/ HYPEVENTS pened (next) if we had done X?\" or \"What could happen (next) if we do X?\" (Bhatt and Flanagan, 2010) . Figure 3 (a) depicts the \u03b1NLI task framework. We hypothesize that getting to know what will happen (next) if any of two hypotheses occurs, will help us verifying which of them is more plausible (see Fig. 3 (c)). Therefore, we encourage the model to learn how different hypothetical events (including counterfactual events) evolving from the same premise (s 1 ) can lead to different or similar outcomes (see Fig. 3(b) ). Accordingly, we teach a pre-trained GPT-2 (Radford et al., 2019) language model how to generate a sequence of possible subsequent events given different hypothetical situations in a narrative setting. Training such a model on narrative texts encourages it to learn causal and temporal relations between events. We train a conditional language model, LM I , which generates a possible event that could happen next, given some counterfactual scenarios for a given story. We train this model on the TIMETRAVEL (TT) dataset (Qin et al., 2019) , by fine-tuning GPT-2 to learn about possible next events emerging from a situation in a story, given some alternative, counterfactual event. 
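As a rough illustration of this fine-tuning step (the exact input layout and objective are given in Eq. (1) below), the following sketch shows how such an infilling example could be fed to GPT-2. It assumes the HuggingFace transformers library; the helper name, special-token handling and example sentences are ours, not the released code.

```python
# Hedged sketch of LM_I-style fine-tuning: the model reads the premise, a masked slot,
# the ending and a (counterfactual) second sentence, and is trained to generate the
# missing middle sentence(s). Library: HuggingFace transformers; details are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["[S]", "[M]", "[E]"]})
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))

def make_infilling_example(s1, s2_alt, ending, missing_middle):
    """Context: [S] s1 [M] ending [E] [S] s1 s2_alt ; target: the missing middle."""
    context = f"[S] {s1} [M] {ending} [E] [S] {s1} {s2_alt}"
    ctx = tokenizer(context, return_tensors="pt").input_ids
    tgt = tokenizer(" " + missing_middle, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx, tgt], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx.size(1)] = -100   # loss is computed only on the missing sentences
    return input_ids, labels

input_ids, labels = make_infilling_example(
    s1="Dotty was being very grumpy.",
    s2_alt="Dotty called some close friends to chat.",   # alternative second sentence
    ending="She felt much better afterwards.",
    missing_middle="They all tried to make her happy.",
)
loss = model(input_ids, labels=labels).loss   # maximize log-likelihood of the middle
loss.backward()
```

At inference time the same layout is reused with the observations and a hypothesis in place of the story sentences, and the model decodes the masked slot (cf. Eq. (2)).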
The TT dataset consists of five-sentence instances S=(s 1 ,s 2 ,..,s 5 ) 3 from the ROCStories corpus 1 plus additional crowd-sourced sentences s' 2:5 , where s' 2 is counterfactual 4 to s 2 from the original story 5 . There are two reasons for using the TT dataset for our purposes: a) the domains on which GPT-2 was pretrained are broad 6 and different from the domain of ROCStories, b) the model can see how alternative situations can occur starting from the same premise s 1 , resulting in similar or different outcomes. Note that, although intermediate situations may be counterfactual to each other, the future outcome can still be similar to the original ending due to causal invariance 7 .", "cite_spans": [ { "start": 121, "end": 147, "text": "(Bhatt and Flanagan, 2010)", "ref_id": "BIBREF2" }, { "start": 1091, "end": 1109, "text": "(Qin et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 150, "end": 158, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 349, "end": 355, "text": "Fig. 3", "ref_id": "FIGREF1" }, { "start": 558, "end": 567, "text": "Fig. 3(b)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Learning about Counterfactual Scenarios", "sec_num": "2" }, { "text": "Table 1 (example): O 1 : Dotty was being very grumpy. O 2 : She felt much better afterwards. H 1 : Dotty ate something bad. H 2 : Dotty call some close friends to chat. O H 1 2 : She started to feel sick. O H 2 2 : They all tried to make her happy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning about Counterfactual Scenarios", "sec_num": "2" }, { "text": "Concretely, the language model LM I reads the premise (s 1 ), the alternative event(s) (s 2 or s' 2 ), the masked token (serving as a placeholder for the missing possible next event(s) (s 3:i or s' 3:i )), then the rest of the story (s i+1:5 or s' i+1:5 ), and again the premise (s 1 ). We train the model to maximize the log-likelihood of the missing ground-truth sentence(s) (s 3:i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning about Counterfactual Scenarios", "sec_num": "2" }, { "text": "L LM I (\u03b2) = log p \u03b2 (s 3:i | [S], s 1 , [M ], s i+1:5 , [E], [S], s 1 , s 2 ) + log p \u03b2 (s' 3:i | [S], s 1 , [M ], s' i+1:5 , [E], [S], s 1 , s' 2 ) (1) where i \u2208 [3, 4], s i = {w s i 1 , .., w s i n } is a sequence of tokens, [S] = start-of-sentence token, [E] = end-of-sentence token, [M ] = mask token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning about Counterfactual Scenarios", "sec_num": "2" }, { "text": "3 Generating Hypothetical Events to support the \u03b1NLI task", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning about Counterfactual Scenarios", "sec_num": "2" }, { "text": "In this paper, we aim to investigate whether models perform better on the \u03b1NLI task when explicitly learning about events that could follow other events in a hypothetical scenario. We do so by introducing two methods, LM I + BERTScore and LM I + MT L, for the unsupervised and supervised settings, respectively. We first apply the trained model LM I to the \u03b1NLI task, where the given observations O 1 and O 2 and the alternative hypotheses H j are fed as shown", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning about Counterfactual Scenarios", "sec_num": "2" }, { "text": "in (2) below 8 : 
O H j 2 = \u03b2([S], O 1 , [M ], O 2 , [E], [S], O 1 , H j ) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning about Counterfactual Scenarios", "sec_num": "2" }, { "text": "We generate a possible next event for each hypothetical event H j , i.e., O H 1 2 and O H 2 2 (or: what will happen if some hypothesis H j occurs, given the observations), where j \u2208 [1, 2]. Table 1 illustrates an example where different O H j 2 are generated using LM I . One of the challenges when generating subsequent events given a hypothetical situation is that there can be an infinite number of possible next events. Therefore, to constrain this range, we chose to give the future event (O 2 ) as input, such that the model generates subsequent events in a constrained context.", "cite_spans": [], "ref_spans": [ { "start": 189, "end": 196, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Learning about Counterfactual Scenarios", "sec_num": "2" }, { "text": "In this setting, we do not train any supervised model to explicitly predict which hypothesis is more plausible given the observations. Instead, we apply the fine-tuned LM I model to the \u03b1NLI data, generate possible next events O H j 2 given O 1 and H j , as described above, and measure the similarity between these possible next events (O H j 2 ) and the observation (O 2 ) in an unsupervised way, using BERTScore (BS) (Zhang et al., 2020) 9 . We evaluate our hypothesis that the possible next event O H j 2 generated for the more plausible hypothesis H j should be more similar to observation O 2 . Table 1 illustrates an example where H 2 is the more plausible hypothesis. We impose the constraint that for a correctly predicted instance BS(O H + 2 , O 2 ) > BS(O H \u2212 2 , O 2 ) should hold, where H + , H \u2212 are the more plausible vs. implausible hypothesis, respectively.", "cite_spans": [], "ref_spans": [ { "start": 602, "end": 609, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Unsupervised Setting", "sec_num": "3.1" }, { "text": "In this setting, displayed in Figure 4 , we explore the benefits of training a multi-task MT L model that jointly predicts i) the most plausible hypothesis and ii) which possible next event (O H j 2 ) is more similar to the observation (O 2 ), in order to perform the \u03b1NLI task.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 38, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Supervised Setting", "sec_num": "3.2" }, { "text": "Figure 4 (overview): (a) GPT-2 (LM I ) generates the possible next events O H j 2 ; (b) BERT (MT L) with two linear layers, trained with the joint loss L \u03b1NLI + w * L similarity .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Setting", "sec_num": "3.2" }, { "text": "
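Before detailing the multi-task model, a minimal sketch of the BERTScore-based selection rule of the unsupervised setting (Section 3.1) follows; it assumes the bert-score package, and the generated next events are taken from the Table 1 example.

```python
# Hedged sketch of the unsupervised decision rule: choose the hypothesis whose
# generated next event O2^{Hj} obtains the higher BERTScore F1 against the observed O2.
# Assumes the bert-score package; not the authors' released code.
from bert_score import score

o2 = "She felt much better afterwards."          # observation O2 (Table 1)
generated = {
    "H1": "She started to feel sick.",           # O2^{H1}, generated by LM_I
    "H2": "They all tried to make her happy.",   # O2^{H2}, generated by LM_I
}

cands = list(generated.values())
refs = [o2] * len(cands)
_, _, f1 = score(cands, refs, lang="en", rescale_with_baseline=True)

# Pick the hypothesis whose implication is closest to O2; note that the paper's
# example (c) shows surface overlap can occasionally favour the wrong event.
predicted = max(zip(generated, f1.tolist()), key=lambda kv: kv[1])[0]
print(predicted)
```
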
Multi-task learning aims to improve the performance of a model on a task by utilizing the knowledge acquired from learning related tasks (Ruder, 2019) . We hypothesize that a) the possible next event O H j 2 of the more plausible hypothesis H j should be most similar to observation O 2 , and that b) learning which possible next event is more similar supports the model in the \u03b1NLI task (inductive transfer).", "cite_spans": [ { "start": 163, "end": 176, "text": "(Ruder, 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Supervised Setting", "sec_num": "3.2" }, { "text": "The architecture of the LM I + MT L model is shown in Figure 4 . The model marked (a) in Figure 4 depicts the LM I model as described in \u00a73. The outputs of the LM I model, which we obtain from Eq. (2) for both hypotheses, are incorporated as input to the MT L model. Concretely, we feed the MT L classifier a sequence of tokens as stated in part (b) of Figure 4 , and compute their contextualized representations using pre-trained BERT. The input format is described in Table 3 . Similar to (Devlin et al., 2019) , two additional tokens are added: [CLS] at the start of each input sequence and [SEP] at the end of each sentence. In the shared layers (see Fig 4(b) ), the model first transforms the input sequence into a sequence of embedding vectors. Then it applies an attention mechanism that learns contextual relations between words (or sub-words) in the input sequence.", "cite_spans": [ { "start": 493, "end": 514, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 549, "end": 554, "text": "[CLS]", "ref_id": null } ], "ref_spans": [ { "start": 50, "end": 58, "text": "Figure 4", "ref_id": null }, { "start": 85, "end": 93, "text": "Figure 4", "ref_id": null }, { "start": 348, "end": 356, "text": "Figure 4", "ref_id": null }, { "start": 472, "end": 479, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 656, "end": 664, "text": "Fig 4(b)", "ref_id": null } ], "eq_spans": [], "section": "Supervised Setting", "sec_num": "3.2" }, { "text": "For each instance we get four [CLS] embeddings (CLS H j , CLS O H j 2 ; j \u2208 [1, 2]), which are then passed through two linear layers, one for \u03b1NLI (the main task) and another for predicting which possible next event O H j 2 is more similar to O 2 (the auxiliary task).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Setting", "sec_num": "3.2" }, { "text": "Data. We conduct experiments on the ART (Bhagavatula et al., 2020) dataset. Data statistics are given in Table 2 . For evaluation, we measure accuracy for \u03b1NLI.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 112, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Hyperparameters. To train the LM I model we use a learning rate of 5e-05. We decay the learning rate linearly until the end of training; batch size: 12. In the supervised setting for the \u03b1NLI task, we use the following set of hyperparameters for our MT L model with integrated LM I model (LM I + MT L): batch size: {8, 16}; epochs: {3, 5}; learning rate: {2e-5, 5e-6}. For evaluation, we measure accuracy. We use the Adam optimizer and a dropout rate of 0.1. We ran experiments on GPUs with 11GB and 24GB of memory. Training is performed using cross-entropy loss. 
The loss function is L \u03b1N LI + w * L similarity , where w is a trainable parameter. During our experiment we initialize w = 1. The input format is depicted in Table 3 . We report performance by averaging results along with the variance obtained for 5 different seeds.", "cite_spans": [], "ref_spans": [ { "start": 706, "end": 713, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Baselines. We compare to the following baseline models that we apply to the \u03b1NLI task, training them on the training portion of the ART dataset (cf. Table 2 ).", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "\u2022 ESIM + ELMo is based on the ESIM model previously used for NLI (Chen et al., 2017) . We use (a) ELMo to encode the observations and hypothesis, followed by (b) an attention As baselines for using the MT L model, we replace LM I with alternative generative LMs:", "cite_spans": [ { "start": 65, "end": 84, "text": "(Chen et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "\u2022 GPT-2 + MT L. In this setup, we directly use the pretrained GPT-2 model and task it to generate a next sentence conditioned on each hypothesis (O H i 2 ) without finetuning it on the TIMETRAVEL data. We then use the supervised MT L model to predict the most plausible hypothesis and which of the generated observations is more similar to O 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "\u2022 COMET + MT L. In this setting, we make use of inferential if-then knowledge from ATOMIC (Sap et al., 2019a) as background knowledge. Specifically, we use COMET to generate objects with Effect 10 relations for each hypothesis as a textual phrase.", "cite_spans": [ { "start": 90, "end": 109, "text": "(Sap et al., 2019a)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "In Table 4 , we compare our models LM I + BERTScore and LM I + MT L against the models proposed in Bhagavatula et al. (2020) : a majority baseline, supervised models (Infersent and ESIM+ELMo), as well as BERT Large . Bhagavatula et al. (2020) re-train the ESIM+ELMo and Infersent models on the ART dataset and fine-tuned the BERT model on the \u03b1NLI task and report the results. We find that our unsupervised model with BERTScore (LM I + BERTScore) outperforms (by +9.28 pp. and +1.28 pp.) strong ESIM+ELMo and Infersent baseline models. Table 5 shows some examples of our generation model LM I along with the obtained BERTScores.", "cite_spans": [ { "start": 99, "end": 124, "text": "Bhagavatula et al. (2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": null }, { "start": 536, "end": 543, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Unlike the unsupervised LM I + BERTScore, our supervised LM I + MT L model also improves over the BERT Large baseline, by +3.3 pp. We can attribute the improvement to the model having been jointly trained to assess the similarity and dissimilarity of possible next events O H j 2 and observations (O 2 ) along with the \u03b1NLI task. 
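To make the weighted objective concrete, here is a minimal sketch of the two linear heads over the [CLS] embeddings and the joint loss L \u03b1NLI + w * L similarity with a trainable w initialized to 1; the variable names and tensor shapes are our assumptions, not the released implementation.

```python
# Hedged sketch of the multi-task objective L = L_alphaNLI + w * L_similarity.
# cls_h / cls_e stand for the BERT [CLS] embeddings of the two hypothesis inputs and
# the two generated-next-event inputs (cf. Table 3); names and shapes are assumptions.
import torch
import torch.nn as nn

hidden = 1024                                   # BERT-large hidden size
anli_head = nn.Linear(hidden, 1)                # scores each hypothesis H_j
sim_head = nn.Linear(hidden, 1)                 # scores each generated event O2^{Hj}
w = nn.Parameter(torch.tensor(1.0))             # trainable loss weight, initialized to 1
ce = nn.CrossEntropyLoss()

def joint_loss(cls_h, cls_e, label):
    # label: index (0 or 1) of the more plausible hypothesis; the same index marks
    # the generated next event that should be most similar to O2.
    anli_logits = anli_head(cls_h).squeeze(-1)  # (batch, 2)
    sim_logits = sim_head(cls_e).squeeze(-1)    # (batch, 2)
    return ce(anli_logits, label) + w * ce(sim_logits, label)

# toy usage with random embeddings
cls_h, cls_e = torch.randn(4, 2, hidden), torch.randn(4, 2, hidden)
label = torch.tensor([0, 1, 1, 0])
joint_loss(cls_h, cls_e, label).backward()
```

The single trainable weight w lets the model balance the main \u03b1NLI loss against the auxiliary similarity loss during training.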
One of the advantages of training our proposed multitask learning (MT L) model, instead of directly feeding the possible next events O H j 2 as knowledge inputs is that it adds an explainable component to the model. One can view the generated next events O H j 2 as natural language rationales and our multitask model explicitly chooses one of them. Hence, the multi-task framework makes the model more expressive. Finally, we compare, for the MT L model, our embedded generation model LM I to pre-trained GPT-2 and COMET. Table 4 shows that LM I + MT L yields better performance compared to both COMET + MT L (+3.1 pp.) and GPT-2 + MT L (+3.4 pp.) -the intuitive reason being that the next events generated by LM I are more helpful than events generated using pretrained GPT-2 and objects generated by COMET. Table 5 illustrates some examples where our MT L model not only chooses the correct hypothesis, but also a likely possible next event that is similar to the observation O 2 . Interestingly, during training of MT L we initialize w = 1, and after training the model we found the w value had been adjusted to a range between 0.85 and 0.75, which intuitively shows both the effectiveness of our LM I -generated possible next events, and their similarity to the given observations O 2 . (i) examples (a), (b) and (d) depicting the scenario where possible next events and observation pairs correctly achieve higher BERTscores 11 , (ii) example (c) depicting the scenario where an incorrect possible next event and observation pair achieves higher BERTscores than the correct one. Intuitive reasons for these scenarios are, for example, for (a): there is a higher word overlap and semantic similarity between a correct next event and observation O 2 , for example (b): there is higher semantic similarity; whereas for example (c): although there is a higher semantic dissimilarity, the word overlap between the wrong possible next event (\"She started to feel sick.\") and the observation (\"She felt much better afterwards.\") is much higher.", "cite_spans": [], "ref_spans": [ { "start": 853, "end": 860, "text": "Table 4", "ref_id": null }, { "start": 1140, "end": 1147, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Since the automatic scores only account for wordlevel similarity between observations and generated possible next events, we conduct a manual evaluation study, to assess the quality of sentences generated by our LM I model. Annotation Study on LM I generations. The annotation was performed by three annotators with computational linguistic background. We provide each of the three annotators with observations, hypotheses and sentences, as produced by our LM I model, for 50 randomly chosen instances from the \u03b1NLI task. 
They obtain i) generated sentences for a next possible event for the correct and incorrect hypothesis, as well as ii) the sentence stating observation O 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "6" }, { "text": "We ask each annotator to rate the sentences according to four quality aspects as stated below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "6" }, { "text": "Grammaticality: the sentence is i) grammatical, ii) not entirely grammatical but understandable, or iii) completely not understandable;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "6" }, { "text": "Redundancy: the sentence contains redundant or repeated information;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "6" }, { "text": "Contradiction: the sentence contains any pieces of information that are contradicting the given observation O 2 or not;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "6" }, { "text": "Relevance: the possible next event is i) relevant, ii) partially relevant, or iii) not relevant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "6" }, { "text": "For each aspect, they are asked to judge the sentence generated for the correct hypothesis 12 . Only for Contradiction, they are asked to judge both sentences, for correct and the incorrect hypotheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "6" }, { "text": "Results and Discussion. Figures 5, 7 , and 6 present the results of manual evaluations of the generation quality, according to the different criteria described above.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 36, "text": "Figures 5, 7", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "6" }, { "text": "12.0% Grammatical Understandable Gibberish Figure 5 : Human evaluation of the grammaticality of generated sentences: ratio of i) grammatical, ii) not entirely grammatical but understandable, iii) completely not understandable sentences. For measuring inter-annotator agreement, we computed Krippendorff's \u03b1 (Hayes and Krippendorff, 2007) for Grammaticality and Relevance, as it is suited for ordinal values, and Cohen's Kappa \u03ba for Redundancy and Contradiction. We found \u03b1 values are 0.587 and 0.462 for Grammaticality and Relevance, respectively (moderate agreement) and \u03ba values 0.61 and 0.74 for Redundancy and Contradiction (substantial agreement). We aggregated the annotations from the three annotators using majority vote. Figure 5 shows that the majority of sentences (96%) are grammatical or understandable. Figure 7 shows that most sentences for correct labels are non-redundant (84%) and noncontradictory (88%), whereas for incorrect labels 39 instances are found to be contradictory with the observation O 2 (78%). The manual evaluation supports our hypothesis that the generated sentences for correct labels should be more similar (less contradictory) compared to the sentences generated for incorrect labels. Figure 6 shows the ratio of sentences considered by humans as relevant, partially relevant, and irrelevant. The results show that 46% of cases are relevant (based on majority agreement) and 24% of cases are partially relevant. 
This yields that the generated sentences are (partially) relevant in most cases and thus should support abduction for both unsupervised (LM I + BERTScore) and supervised (LM I + MT L) models.", "cite_spans": [ { "start": 307, "end": 337, "text": "(Hayes and Krippendorff, 2007)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 43, "end": 51, "text": "Figure 5", "ref_id": null }, { "start": 730, "end": 738, "text": "Figure 5", "ref_id": null }, { "start": 817, "end": 825, "text": "Figure 7", "ref_id": "FIGREF3" }, { "start": 1223, "end": 1231, "text": "Figure 6", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "4.0%", "sec_num": null }, { "text": "Impact of Reasoning types. Finally, to better assess the performance of our model, we determine what types of reasoning underly the abductive reasoning tasks in our data, and examine to what extent our models capture or not these reasoning types. We consider again the 50 instances that were annotated by our previous annotators and manually classify them into different reasoning types. We broadly divided the data into 6 categories: (i) Motivation, (ii) Spatial-Temporal, (iii) Emotional, (iv) Negation, (v) Reaction, (vi) Situational fact. The most frequent type was Emotional (10), most infrequent was Spatial (7). We ask a new annotator to annotate the reasoning types for these 50 instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.0%", "sec_num": null }, { "text": "Considering the relevance and contradiction categories from the previous annotations we determine that for Negation (8), Emotional (10), and Reaction (8) all generated events for correct labels are partially or fully relevant and non-contradictory. An intuitive reason can be that we train our LM I model to learn how different counterfactual hypothetical events emerging from a single premise can lead to the same or different outcomes through a series of events. Some counterfactual events (s 2 ) are negations of the original event (s 2 ) in the TIME-TRAVEL dataset. This may support the reasoning class Negation. For the other categories: Motivation, Spatial-temporal, and Situational fact, we detect errors regarding (missing) Relevance in 21%, 14% and 28% of cases, respectively. Table 6 illustrates an example from the class Situational Fact, where our generated next event is irrelevant and redundant.", "cite_spans": [], "ref_spans": [ { "start": 786, "end": 793, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "4.0%", "sec_num": null }, { "text": "O 1 : Jenna hit the weight hard in the gym. O 2 : She took a cold bath in order to alleviate her pain. H 1 : Her neck pain stopped because of this. H 2 : Jenna pulled a muscle lifting weights. O H1 2 : She decided to take a break . O H2 2 : Jenna lost weight in the gym. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.0%", "sec_num": null }, { "text": "Commonsense Reasoning. There is growing interest in this research field, which led to the creation of several new resources on commonsense reasoning, in form of both datasets, such as So-cialIQA (Sap et al., 2019b) , CommonsenseQA (Talmor et al., 2019) , CosmosQA (Huang et al., 2019) and knowledge bases, e.g. ConceptNet (Speer et al., 2017) , ATOMIC (Sap et al., 2019a ), or Event2Mind (Rashkin et al., 2018 . Recently, many works proposed to utilize external static knowledge graphs (KGs) to address the bottleneck of obtaining relevant commonsense knowledge. Lin et al. 
2019proposed to utilize knowledge graph embeddings to rank and select relevant knowledge triples or paths. Paul and Frank (2019) proposed to extract subgraphs from KGs using graph-based ranking methods and further adopted the graph-based ranking method and proposed to dynamically extend the KG to combat sparsity. In concurrent work, Paul and Frank (2021) introduced a method to dynamically generate contextually relevant knowledge that guides a model while performing the narrative story completion task. Both hypothetical reasoning and abductive reasoning are understudied problems in NLP. Recently, Tandon et al. (2019) proposed a first large-scale dataset of \"What if...\" questions over procedural text. They introduced the dataset to study the effect of perturbations in procedural text. Related to our work, Qin et al. (2019) investigated the capabilities of state-of-the-art LMs to rewrite stories with counterfactual reasoning. In our work we utilize this dataset to model how to generate possible next events emerging from different hypothetical and counterfactual events. Mostafazadeh et al. (2016) designed the narrative cloze task, a task to choose the correct ending of a story. 13 Conversely, Bhagavatula et al. (2020) proposed a task that requires reasoning about plausible explanations for narrative omissions. Our research touches on the issue of hypothetical reasoning about alternative situations. We found that making language models learn how different hypothetical events can evolve from a premise and result in similar or different future events forming from a premise and how these events can result in similar or different future events helps models to perform better in abduction. Explainability. Despite the success of large pretrained language models, recent studies have raised some critical points such as: high accuracy scores do not necessarily reflect understanding (Min et al., 2019) , large pretrained models may exploit superficial clues and annotation artifacts (Gururangan et al., 2018; Kavumba et al., 2019) . Therefore, the ability of models to generate explanations has become desirable, as this enhances interpretability. Recently, there has been substantial effort to build datasets with natural language explanations (Camburu et al., 2018; Park et al., 2018; Thayaparan et al., 2020) . There have also been numerous research works proposing models that are interpretable or explainable (Rajani et al., 2019; Atanasova et al., 2020; Latcinnik and Berant, 2020; Wiegreffe and Marasovi\u0107, 2021) . Our work sheds light in this direction, as our MT L model not only predicts the plausible hypothesis H j but also generates possible next events O H j 2 and chooses the one that is closer to the given context, thereby making our model more expressive.", "cite_spans": [ { "start": 195, "end": 214, "text": "(Sap et al., 2019b)", "ref_id": "BIBREF36" }, { "start": 231, "end": 252, "text": "(Talmor et al., 2019)", "ref_id": "BIBREF40" }, { "start": 264, "end": 284, "text": "(Huang et al., 2019)", "ref_id": "BIBREF10" }, { "start": 322, "end": 342, "text": "(Speer et al., 2017)", "ref_id": "BIBREF38" }, { "start": 352, "end": 370, "text": "(Sap et al., 2019a", "ref_id": "BIBREF35" }, { "start": 371, "end": 409, "text": "), or Event2Mind (Rashkin et al., 2018", "ref_id": null }, { "start": 681, "end": 702, "text": "Paul and Frank (2019)", "ref_id": "BIBREF21" }, { "start": 909, "end": 930, "text": "Paul and Frank (2021)", "ref_id": "BIBREF23" }, { "start": 1177, "end": 1197, "text": "Tandon et al. 
(2019)", "ref_id": "BIBREF41" }, { "start": 1389, "end": 1406, "text": "Qin et al. (2019)", "ref_id": "BIBREF30" }, { "start": 1657, "end": 1683, "text": "Mostafazadeh et al. (2016)", "ref_id": "BIBREF19" }, { "start": 1767, "end": 1769, "text": "13", "ref_id": null }, { "start": 2474, "end": 2492, "text": "(Min et al., 2019)", "ref_id": "BIBREF18" }, { "start": 2574, "end": 2599, "text": "(Gururangan et al., 2018;", "ref_id": "BIBREF8" }, { "start": 2600, "end": 2621, "text": "Kavumba et al., 2019)", "ref_id": "BIBREF13" }, { "start": 2836, "end": 2858, "text": "(Camburu et al., 2018;", "ref_id": "BIBREF3" }, { "start": 2859, "end": 2877, "text": "Park et al., 2018;", "ref_id": "BIBREF20" }, { "start": 2878, "end": 2902, "text": "Thayaparan et al., 2020)", "ref_id": "BIBREF42" }, { "start": 3005, "end": 3026, "text": "(Rajani et al., 2019;", "ref_id": "BIBREF32" }, { "start": 3027, "end": 3050, "text": "Atanasova et al., 2020;", "ref_id": "BIBREF0" }, { "start": 3051, "end": 3078, "text": "Latcinnik and Berant, 2020;", "ref_id": "BIBREF16" }, { "start": 3079, "end": 3109, "text": "Wiegreffe and Marasovi\u0107, 2021)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Abductive Reasoning. There has been longstanding work on theories of abductive reasoning (Peirce, 1903 (Peirce, , 1965a Kuipers, 1992 Kuipers, , 2013 . Researchers have applied various frameworks, some focused on pure logical frameworks (Pople, 1973; Kakas et al., 1992) , some on probabilistic frameworks (Pearl, 1988) , and others on Markov Logics (Singla and Mooney, 2011) . Recently, moving away from logic-based abductive reasoning, Bhagavatula et al. (2020) proposed to study languagebased abductive reasoning. They introduced two tasks: Abductive Natural Language Inference (\u03b1NLI) and Generation (\u03b1NLG). They establish baseline performance based on state-of-the-art language models and make use of inferential structured knowledge from ATOMIC (Sap et al., 2019a) as background knowledge. Zhu et al. (2020) proposed to use a learning-to-rank framework to address the abductive reasoning task. Ji et al. (2020) proposed a model GRF that enables pre-trained models (GPT-2) with dynamic multi-hop reasoning on multi-relational paths extracted from the external ConceptNet commonsense knowledge graph for the \u03b1NLG task. have proposed a multi-head knowledge attention method to incorporate commonsense knowledge to tackle the \u03b1NLI task. Unlike our previous work in , which focused on leveraging structured knowledge, in this work, we focus on learning about what will happen next from different counterfactual situations in a story context through language model fine-tuning. 
Specifically, we study the impact of such forward inference on the \u03b1NLI task in a multi-task learning framework and show how it can improve performance over a strong BERT model.", "cite_spans": [ { "start": 89, "end": 102, "text": "(Peirce, 1903", "ref_id": "BIBREF26" }, { "start": 103, "end": 119, "text": "(Peirce, , 1965a", "ref_id": "BIBREF27" }, { "start": 120, "end": 133, "text": "Kuipers, 1992", "ref_id": "BIBREF14" }, { "start": 134, "end": 149, "text": "Kuipers, , 2013", "ref_id": "BIBREF15" }, { "start": 237, "end": 250, "text": "(Pople, 1973;", "ref_id": "BIBREF29" }, { "start": 251, "end": 270, "text": "Kakas et al., 1992)", "ref_id": "BIBREF12" }, { "start": 306, "end": 319, "text": "(Pearl, 1988)", "ref_id": "BIBREF25" }, { "start": 350, "end": 375, "text": "(Singla and Mooney, 2011)", "ref_id": "BIBREF37" }, { "start": 750, "end": 769, "text": "(Sap et al., 2019a)", "ref_id": "BIBREF35" }, { "start": 795, "end": 812, "text": "Zhu et al. (2020)", "ref_id": "BIBREF11" }, { "start": 899, "end": 915, "text": "Ji et al. (2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We have introduced a novel method for addressing the abductive reasoning task by explicitly learning what events could follow other events in a hypothetical scenario, and learning to generate such events, conditioned on a premise or hypothesis. We show how a language model -fine-tuned for this capability on a suitable narrative dataset -can be leveraged to support abductive reasoning in the \u03b1NLI tasks, in two settings: an unsupervised setting in combination with BertScore, to select the proper hypothesis, and a supervised setting in a MT L setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "The relatively strong performance of our proposed models demonstrates that learning to choose from generated hypothetical next events the one that is most similar to the observation, supports the prediction of the most plausible hypothesis. Our experiments show that our unsupervised LM I +BERTScore model outperforms some of the strong supervised baseline systems on \u03b1NLI. Our research thus offers new perspectives for training generative models in different ways for various complex reasoning tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "s1 = premise, s2 = initial context, s3:5 = original ending", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "a counterfactual s states something that is contrary to s 5 During our experiments we treat them as two separate instances: S1=(s1:5) and S2 = (s1,s 2:5 ).6 GPT-2 was trained on the WebText Corpus. 7 the future events that are invariant under the counterfactual conditions(Qin et al., 2019)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For definition of placeholders see (1). 
9 BERTScore is an automatic evaluation metric for text generation that leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "as a result PersonX feels; as a result PersonX wants; PersonX then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "BERTscore matches words in candidate and reference sentences by cosine similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The correct hypothesis was marked for the annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Their dataset, ROCStories, was later extended inQin et al. (2019) andBhagavatula et al. (2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been supported by the German Research Foundation as part of the Research Training Group \"Adaptive Preparation of Information from Heterogeneous Sources\" (AIPHES) under grant No. GRK 1994/1. We thank our annotators for their valuable annotations. We also thank NVIDIA Corporation for donating GPUs used in this research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Generating fact checking explanations", "authors": [ { "first": "Pepa", "middle": [], "last": "Atanasova", "suffix": "" }, { "first": "Jakob", "middle": [ "Grue" ], "last": "Simonsen", "suffix": "" }, { "first": "Christina", "middle": [], "last": "Lioma", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7352--7364", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.656" ] }, "num": null, "urls": [], "raw_text": "Pepa Atanasova, Jakob Grue Simonsen, Christina Li- oma, and Isabelle Augenstein. 2020. Generating fact checking explanations. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7352-7364, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Abductive commonsense reasoning", "authors": [ { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "Keisuke", "middle": [], "last": "Malaviya", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Downey", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Tau Yih", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Han- nah Rashkin, Doug Downey, Wen tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. 
In International Conference on Learning Representa- tions.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Spatio-temporal abduction for scenario and narrative completion ( a preliminary statement)", "authors": [ { "first": "M", "middle": [], "last": "Bhatt", "suffix": "" }, { "first": "G", "middle": [], "last": "Flanagan", "suffix": "" } ], "year": 2010, "venue": "ECAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Bhatt and G. Flanagan. 2010. Spatio-temporal ab- duction for scenario and narrative completion ( a pre- liminary statement). In ECAI.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "e-SNLI: Natural Language Inference with Natural Language Explanations", "authors": [ { "first": "Oana-Maria", "middle": [], "last": "Camburu", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lukasiewicz", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems 31", "volume": "", "issue": "", "pages": "9539--9549", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oana-Maria Camburu, Tim Rockt\u00e4schel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Nat- ural Language Inference with Natural Language Ex- planations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, ed- itors, Advances in Neural Information Processing Systems 31, pages 9539-9549. Curran Associates, Inc.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enhanced LSTM for natural language inference", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1657--1668", "other_ids": { "DOI": [ "10.18653/v1/P17-1152" ] }, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1657-1668, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Supervised learning of universal sentence representations from natural language inference data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "670--680", "other_ids": { "DOI": [ "10.18653/v1/D17-1070" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. 
Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Stanford Encyclopedia of Philosophy", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igor Douven. 2017. Abduction. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, summer 2017 edition. Metaphysics Research Lab, Stanford University.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Annotation artifacts in natural language inference data", "authors": [ { "first": "Swabha", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "107--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Answering the call for a standard reliability measure for coding data", "authors": [ { "first": "A", "middle": [], "last": "Hayes", "suffix": "" }, { "first": "K", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2007, "venue": "Communication Methods and Measures", "volume": "1", "issue": "", "pages": "77--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Hayes and K. Krippendorff. 2007. 
Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1:77 -89.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Cosmos QA: Machine reading comprehension with contextual commonsense reasoning", "authors": [ { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Le", "middle": [], "last": "Ronan", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bras", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2391--2401", "other_ids": { "DOI": [ "10.18653/v1/D19-1243" ] }, "num": null, "urls": [], "raw_text": "Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense rea- soning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2391-2401, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Language generation with multi-hop reasoning on commonsense knowledge graph", "authors": [ { "first": "Haozhe", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Pei", "middle": [], "last": "Ke", "suffix": "" }, { "first": "Shaohan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "725--736", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.54" ] }, "num": null, "urls": [], "raw_text": "Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Xiaoyan Zhu, and Minlie Huang. 2020. Language generation with multi-hop reasoning on commonsense knowl- edge graph. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 725-736, Online. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Abductive logic programming", "authors": [ { "first": "C", "middle": [], "last": "Antonis", "suffix": "" }, { "first": "Robert", "middle": [ "A" ], "last": "Kakas", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Kowalski", "suffix": "" }, { "first": "", "middle": [], "last": "Toni", "suffix": "" } ], "year": 1992, "venue": "Journal of logic and computation", "volume": "2", "issue": "6", "pages": "719--770", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antonis C Kakas, Robert A. Kowalski, and Francesca Toni. 1992. Abductive logic programming. 
Journal of logic and computation, 2(6):719-770.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "When Choosing Plausible Alternatives, Clever Hans can be Clever", "authors": [ { "first": "Pride", "middle": [], "last": "Kavumba", "suffix": "" }, { "first": "Naoya", "middle": [], "last": "Inoue", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Heinzerling", "suffix": "" }, { "first": "Keshav", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Reisert", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing", "volume": "", "issue": "", "pages": "33--42", "other_ids": { "DOI": [ "10.18653/v1/D19-6004" ] }, "num": null, "urls": [], "raw_text": "Pride Kavumba, Naoya Inoue, Benjamin Heinzerling, Keshav Singh, Paul Reisert, and Kentaro Inui. 2019. When Choosing Plausible Alternatives, Clever Hans can be Clever. In Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pages 33-42, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Naive and refined truth approximation", "authors": [ { "first": "A", "middle": [ "F" ], "last": "Theo", "suffix": "" }, { "first": "", "middle": [], "last": "Kuipers", "suffix": "" } ], "year": 1992, "venue": "Synthese", "volume": "93", "issue": "3", "pages": "299--341", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theo AF Kuipers. 1992. Naive and refined truth approximation. Synthese, 93(3):299-341.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "From instrumentalism to constructive realism: On some relations between confirmation, empirical progress, and truth approximation", "authors": [ { "first": "A", "middle": [ "F" ], "last": "Theo", "suffix": "" }, { "first": "", "middle": [], "last": "Kuipers", "suffix": "" } ], "year": 2013, "venue": "", "volume": "287", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theo AF Kuipers. 2013. From instrumentalism to constructive realism: On some relations between confirmation, empirical progress, and truth approximation, volume 287. Springer Science & Business Media.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Explaining question answering models through text generation", "authors": [ { "first": "Veronica", "middle": [], "last": "Latcinnik", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.05569" ] }, "num": null, "urls": [], "raw_text": "Veronica Latcinnik and Jonathan Berant. 2020. Explaining question answering models through text generation.
arXiv preprint arXiv:2004.05569.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Kagnet: Knowledge-aware graph networks for commonsense reasoning", "authors": [ { "first": "Xinyue", "middle": [], "last": "Bill Yuchen Lin", "suffix": "" }, { "first": "Jamin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2822--2832", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2822-2832.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Compositional questions do not necessitate multi-hop reasoning", "authors": [ { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4249--4257", "other_ids": { "DOI": [ "10.18653/v1/P19-1416" ] }, "num": null, "urls": [], "raw_text": "Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4249-4257, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A corpus and cloze evaluation for deeper understanding of commonsense stories", "authors": [ { "first": "Nasrin", "middle": [], "last": "Mostafazadeh", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "Pushmeet", "middle": [], "last": "Kohli", "suffix": "" }, { "first": "James", "middle": [], "last": "Allen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "839--849", "other_ids": { "DOI": [ "10.18653/v1/N16-1098" ] }, "num": null, "urls": [], "raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Multimodal explanations: Justifying decisions and pointing to the evidence", "authors": [ { "first": "L", "middle": [], "last": "Dong Huk Park", "suffix": "" }, { "first": "Zeynep", "middle": [], "last": "Hendricks", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Akata", "suffix": "" }, { "first": "B", "middle": [], "last": "Rohrbach", "suffix": "" }, { "first": "T", "middle": [], "last": "Schiele", "suffix": "" }, { "first": "M", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "", "middle": [], "last": "Rohrbach", "suffix": "" } ], "year": 2018, "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "8779--8788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dong Huk Park, L. Hendricks, Zeynep Akata, Anna Rohrbach, B. Schiele, T. Darrell, and M. Rohrbach. 2018. Multimodal explanations: Justifying decisions and pointing to the evidence. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8779-8788.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Ranking and selecting multi-hop knowledge paths to better predict human needs", "authors": [ { "first": "Debjit", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3671--3681", "other_ids": { "DOI": [ "10.18653/v1/N19-1368" ] }, "num": null, "urls": [], "raw_text": "Debjit Paul and Anette Frank. 2019. Ranking and selecting multi-hop knowledge paths to better predict human needs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3671-3681, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Social commonsense reasoning with multi-head knowledge attention", "authors": [ { "first": "Debjit", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2969--2980", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.267" ] }, "num": null, "urls": [], "raw_text": "Debjit Paul and Anette Frank. 2020. Social commonsense reasoning with multi-head knowledge attention. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2969-2980, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "COINS: Dynamically Generating COntextualized Inference Rules for Narrative Story Completion", "authors": [ { "first": "Debjit", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Online. Association for Computational Linguistics. Long Paper", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debjit Paul and Anette Frank. 2021. COINS: Dynamically Generating COntextualized Inference Rules for Narrative Story Completion. In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Online. Association for Computational Linguistics. Long Paper.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Argumentative Relation Classification with Background Knowledge", "authors": [ { "first": "Debjit", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Juri", "middle": [], "last": "Opitz", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Becker", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Kobbe", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 8th International Conference on Computational Models of Argument", "volume": "326", "issue": "", "pages": "319--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debjit Paul, Juri Opitz, Maria Becker, Jonathan Kobbe, Graeme Hirst, and Anette Frank. 2020. Argumentative Relation Classification with Background Knowledge. In Proceedings of the 8th International Conference on Computational Models of Argument (COMMA 2020), volume 326 of Frontiers in Artificial Intelligence and Applications, pages 319-330. Computational Models of Argument.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", "authors": [ { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judea Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc., CA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Pragmatism as the Logic of Abduction", "authors": [ { "first": "C", "middle": [ "S" ], "last": "Peirce", "suffix": "" } ], "year": 1903, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. S. Peirce. 1903. Pragmatism as the Logic of Abduction.
https://www.textlog.de/7663.html.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Collected papers of Charles Sanders Peirce", "authors": [ { "first": "Charles", "middle": [], "last": "Sanders", "suffix": "" }, { "first": "Peirce", "middle": [], "last": "", "suffix": "" } ], "year": 1965, "venue": "", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sanders Peirce. 1965a. Collected papers of Charles Sanders Peirce, volume 5. Harvard University Press. http://www.hup.harvard.edu/catalog.php?isbn=9780674138001.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Pragmatism and pragmaticism", "authors": [ { "first": "Charles", "middle": [], "last": "Sanders", "suffix": "" }, { "first": "Peirce", "middle": [], "last": "", "suffix": "" } ], "year": 1965, "venue": "", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sanders Peirce. 1965b. Pragmatism and pragmaticism, volume 5. Belknap Press of Harvard University Press. https://www.jstor.org/stable/224970.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "On the mechanization of abductive logic", "authors": [ { "first": "", "middle": [], "last": "Harry E Pople", "suffix": "" } ], "year": 1973, "venue": "Proceedings of the 3rd international joint conference on Artificial intelligence", "volume": "", "issue": "", "pages": "147--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harry E Pople. 1973. On the mechanization of abductive logic. In Proceedings of the 3rd international joint conference on Artificial intelligence, pages 147-152.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Counterfactual story reasoning and generation", "authors": [ { "first": "Lianhui", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "2019 Conference on Empirical Methods in Natural Language Processing., Hongkong, China. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and generation. In 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019.
Language models are unsupervised multitask learners.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Explain yourself! leveraging language models for commonsense reasoning", "authors": [ { "first": "Bryan", "middle": [], "last": "Nazneen Fatema Rajani", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4932--4942", "other_ids": { "DOI": [ "10.18653/v1/P19-1487" ] }, "num": null, "urls": [], "raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Event2Mind: Commonsense inference on events, intents, and reactions", "authors": [ { "first": "Maarten", "middle": [], "last": "Hannah Rashkin", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Allaway", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "463--473", "other_ids": { "DOI": [ "10.18653/v1/P18-1043" ] }, "num": null, "urls": [], "raw_text": "Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2Mind: Commonsense inference on events, intents, and reactions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 463-473, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Neural transfer learning for natural language processing", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2019. Neural transfer learning for natural language processing. Ph.D.
thesis, NUI Galway.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "ATOMIC: an atlas of machine commonsense for if-then reasoning", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Allaway", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Roof", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019", "volume": "", "issue": "", "pages": "3027--3035", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33013027" ] }, "num": null, "urls": [], "raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019a. ATOMIC: an atlas of machine commonsense for if-then reasoning. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 3027-3035.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Social IQa: Commonsense reasoning about social interactions", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4463--4473", "other_ids": { "DOI": [ "10.18653/v1/D19-1454" ] }, "num": null, "urls": [], "raw_text": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019b. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463-4473, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Abductive markov logic for plan recognition", "authors": [ { "first": "Parag", "middle": [], "last": "Singla", "suffix": "" }, { "first": "J", "middle": [], "last": "Raymond", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2011, "venue": "Twenty-Fifth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Parag Singla and Raymond J Mooney. 2011.
Abductive Markov logic for plan recognition. In Twenty-Fifth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "authors": [ { "first": "Robyn", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chin", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Nonmonotonic logic", "authors": [ { "first": "Christian", "middle": [], "last": "Strasser", "suffix": "" }, { "first": "G. Aldo", "middle": [], "last": "Antonelli", "suffix": "" } ], "year": 2019, "venue": "The Stanford Encyclopedia of Philosophy", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Strasser and G. Aldo Antonelli. 2019. Nonmonotonic logic. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, summer 2019 edition. Metaphysics Research Lab, Stanford University.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4149--4158", "other_ids": { "DOI": [ "10.18653/v1/N19-1421" ] }, "num": null, "urls": [], "raw_text": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "reasoning over procedural text", "authors": [ { "first": "Niket", "middle": [], "last": "Tandon", "suffix": "" }, { "first": "Bhavana", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Keisuke", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "6076--6085", "other_ids": { "DOI": [ "10.18653/v1/D19-1629" ] }, "num": null, "urls": [], "raw_text": "Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Peter Clark, and Antoine Bosselut. 2019. WIQA: A dataset for \"what if...\" reasoning over procedural text.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6076-6085, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "A survey on explainability in machine reading comprehension", "authors": [ { "first": "Mokanarangan", "middle": [], "last": "Thayaparan", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Valentino", "suffix": "" }, { "first": "Andr\u00e9", "middle": [], "last": "Freitas", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.00389" ] }, "num": null, "urls": [], "raw_text": "Mokanarangan Thayaparan, Marco Valentino, and Andr\u00e9 Freitas. 2020. A survey on explainability in machine reading comprehension. arXiv preprint arXiv:2010.00389.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Teach me to explain: A review of datasets for explainable nlp", "authors": [ { "first": "Sarah", "middle": [], "last": "Wiegreffe", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Marasovi\u0107", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Wiegreffe and Ana Marasovi\u0107. 2021. Teach me to explain: A review of datasets for explainable nlp. ArXiv:2102.12060.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "BERTScore: Evaluating Text Generation with BERT", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating Text Generation with BERT. In International Conference on Learning Representations.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "2020. l2r 2 : Leveraging ranking for abductive reasoning", "authors": [ { "first": "Yunchang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": null, "venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20", "volume": "", "issue": "", "pages": "1961--1964", "other_ids": { "DOI": [ "10.1145/3397271.3401332" ] }, "num": null, "urls": [], "raw_text": "Yunchang Zhu, Liang Pang, Yanyan Lan, and Xueqi Cheng. 2020. l2r 2 : Leveraging ranking for abductive reasoning. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, page 1961-1964, New York, NY, USA. Association for Computing Machinery.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Motivational example illustrating Abductive Reasoning and its non-monotonic character."
}, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Different reasoning schemes and settings for our task and approach. The arrows denote the direction (temporal flow) of the reasoning chain. The dotted arrow in (b) denotes the derivation of a counterfactual situation s 2 from a factual s 2 . In (c), the dotted arrows denote the learned inference; the dotted lines indicate the similarity between O 2 and O Hi 2 ." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Human evaluation of the Relevance of generated sentences for possible next events." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Human evaluation of Redundancy and Contradiction of generations for possible next events." }, "TABREF0": { "num": null, "text": "Priya decided to try a new restaurant. O 2 : Priya thought her food was delicious. Priya was disappointed in the quality of the food. She was excited to try them out.", "type_str": "table", "content": "
[Diagram/table body garbled in PDF extraction. Recoverable fragments: "Observations"; "What if ...?"; "Hypothesis"; "She ordered two shrimp dishes"; "LM"; "The food that Priya ordered was microwaved and precooked."]
", "html": null }, "TABREF1": { "num": null, "text": "Example of generated possible next events O", "type_str": "table", "content": "
Hj 2 using the LM I model. Bold hypothesis (H 2 ) is
more plausible.
", "html": null }, "TABREF2": { "num": null, "text": "Overview of our LM I + MT L model for \u03b1NLI: (a) language model LM I takes the input in a particular format to generate different possible next events, (b) the MT L model learns to predict the best explanation (H j ) and possible next events (O", "type_str": "table", "content": "
MN,OSimilarity
Shared Layers
What if CD EFGGHIJ?What if CL EFGGHIJ?
Figure 4: Hj
2
", "html": null }, "TABREF4": { "num": null, "text": "Dataset Statistics: nb. of instances O 1 [SEP] H i [SEP] O 2 [SEP] H 1 or H 2 [CLS] H i [SEP] O H i 2 [SEP] O 2 [SEP] O H1 2 or O H2 2", "type_str": "table", "content": "
Input FormatOutput
[CLS]
", "html": null }, "TABREF5": { "num": null, "text": "Input and output format for the \u03b1NLI task: We compute the joint loss function L = L \u03b1N LI + w * L similarity ; where w is a trainable parameter, L \u03b1N LI and L similarity are the loss function for the \u03b1N LI task and auxiliary task, respectively.", "type_str": "table", "content": "
[CLS] is a special token used for classification, [SEP]
a delimiter.
similarity (auxiliary task) between O 2 and O 2 . H j
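To make the two input formats and the joint objective above concrete, here is a minimal PyTorch-style sketch (illustrative only, not the authors' released code; all function and variable names are assumptions): it serializes the two inputs from the format table and combines the \u03b1NLI loss with the auxiliary similarity loss through a trainable weight w.

import torch
import torch.nn as nn

# Serialize the two inputs following the format table above (a real
# implementation would rely on the tokenizer's own special tokens).
def format_anli_input(o1, h, o2):
    return f"[CLS] {o1} [SEP] {h} [SEP] {o2} [SEP]"

def format_aux_input(h, o2_h, o2):
    return f"[CLS] {h} [SEP] {o2_h} [SEP] {o2} [SEP]"

class JointLoss(nn.Module):
    # L = L_anli + w * L_similarity, with w a trainable scalar.
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(1.0))
        self.ce = nn.CrossEntropyLoss()

    def forward(self, anli_logits, anli_labels, sim_logits, sim_labels):
        # anli_logits: [batch, 2] scores for H 1 vs. H 2 (main task)
        # sim_logits:  [batch, 2] scores for which generated next event
        #              (O H1 2 vs. O H2 2) is closer to the observed O 2
        loss_anli = self.ce(anli_logits, anli_labels)
        loss_sim = self.ce(sim_logits, sim_labels)
        return loss_anli + self.w * loss_sim

# Toy usage with random logits:
criterion = JointLoss()
labels = torch.randint(0, 2, (4,))
loss = criterion(torch.randn(4, 2), labels, torch.randn(4, 2), labels)
loss.backward()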
", "html": null }, "TABREF7": { "num": null, "text": "displays possible next events, generated by our LM I model -along with the BERTscore measured between the possible next events O", "type_str": "table", "content": "
H j 2 and
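The BERTScore similarity reported here (and in the caption below) is computed between each generated next event O Hj 2 and the observed outcome O 2. As an illustration only, assuming the public bert-score package, with made-up candidate sentences rather than dataset items:

from bert_score import score

o2 = "Priya thought her food was delicious."
candidates = [
    "Priya enjoyed her meal at the new restaurant.",       # e.g., next event generated from H 1
    "Priya was disappointed in the quality of the food.",  # e.g., next event generated from H 2
]
# Score every generated next event against the same observation O 2.
P, R, F1 = score(candidates, [o2] * len(candidates), lang="en")
# A higher F1 marks the generated next event that is more similar to O 2,
# which serves as evidence for the corresponding hypothesis.
print(F1.tolist())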
", "html": null }, "TABREF8": { "num": null, "text": "Examples of generated possible next events for solving \u03b1NLI using our LM I model. Column 3: Hypothesis and possible next events chosen by our LM I + MT L model; Column 4: Reasoning type between the hypothesis H j and O 2 ; Column 5: BERTScore between the O Hj 2 and O 2 ; Column5: Human evaluation of the possible next events with respect the observation O 2 .", "type_str": "table", "content": "", "html": null }, "TABREF9": { "num": null, "text": "", "type_str": "table", "content": "
Error Analysis: An example of a generated possible next event O Hj 2 from the Situational Fact category.
", "html": null } } } }