{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:35:09.976421Z" }, "title": "CURIE: An Iterative Querying Approach for Reasoning About Situations", "authors": [ { "first": "Dheeraj", "middle": [], "last": "Rajagopal", "suffix": "", "affiliation": {}, "email": "dheeraj@cs.cmu.edu" }, { "first": "Aman", "middle": [], "last": "Madaan", "suffix": "", "affiliation": {}, "email": "amadaan@cs.cmu.edu" }, { "first": "Niket", "middle": [], "last": "Tandon", "suffix": "", "affiliation": { "laboratory": "Allen Institute for Artificial Intelligence", "institution": "", "location": {} }, "email": "nikett@allenai.org" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "", "affiliation": {}, "email": "yiming@cs.cmu.edu" }, { "first": "Shrimai", "middle": [], "last": "Prabhumoye", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Abhilasha", "middle": [], "last": "Ravichander", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "", "affiliation": { "laboratory": "Allen Institute for Artificial Intelligence", "institution": "", "location": {} }, "email": "peterc@allenai.org" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "", "affiliation": {}, "email": "hovy@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Predicting the effects of unexpected situations is an important reasoning task, e.g., would cloudy skies help or hinder plant growth? Given a context, the goal of such situational reasoning is to elicit the consequences of a new situation (st) that arises in that context. We propose CURIE, a method to iteratively build a graph of relevant consequences explicitly in a structured situational graph (st graph) using natural language queries over a finetuned language model. Across multiple domains, CURIE generates st graphs that humans find relevant and meaningful in eliciting the consequences of a new situation (75% of the graphs were judged correct by humans). We present a case study of a situation reasoning end task (WIQA-QA), where simply augmenting their input with st graphs improves accuracy by 3 points. We show that these improvements mainly come from a hard subset of the data, that requires background knowledge and multi-hop reasoning.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Predicting the effects of unexpected situations is an important reasoning task, e.g., would cloudy skies help or hinder plant growth? Given a context, the goal of such situational reasoning is to elicit the consequences of a new situation (st) that arises in that context. We propose CURIE, a method to iteratively build a graph of relevant consequences explicitly in a structured situational graph (st graph) using natural language queries over a finetuned language model. Across multiple domains, CURIE generates st graphs that humans find relevant and meaningful in eliciting the consequences of a new situation (75% of the graphs were judged correct by humans). We present a case study of a situation reasoning end task (WIQA-QA), where simply augmenting their input with st graphs improves accuracy by 3 points. 
We show that these improvements mainly come from a hard subset of the data, that requires background knowledge and multi-hop reasoning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A long-standing challenge in reasoning is to model the consequences of an unseen situation in a context. In the real world unexpected situations are common. Machines capable of situational reasoning are crucial because they are expected to gracefully handle such unexpected situations. For example, when eating leftover food, would it be more safer from virus if we microwave the food? -answering this requires understanding the complex events virus contamination and effect of heat on virus. Much of this information remains implicit (by Grice's maxim of quantity (Grice, 1975) ), thus requiring inference.", "cite_spans": [ { "start": 565, "end": 578, "text": "(Grice, 1975)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, NLP literature has shown renewed interest in situational reasoning with applications in qualitative reasoning (Tandon et al., 2019; Figure 1: RQ1: CURIE generates situational graphs by iteratively querying a model, making explicit the model's knowledge of effects of influences (+ve / -ve). RQ2: Situational graphs improve situational reasoning QA when appended to the question context. , physical commonsense reasoning Bisk et al., 2020) , and defeasible inference (Rudinger et al., 2020) . These tasks take as input a context providing background information, a situation (st), and an ending, and predict the reachability from st to that ending. However, these systems have three limitations: (i) systems trained on these tasks are often domain specific, (ii) these tasks do not require a supporting structure that elicits the dynamics of the reasoning process, and (iii) these tasks are addressed as a classification problem restricting to a closed vocabulary setting.", "cite_spans": [ { "start": 120, "end": 141, "text": "(Tandon et al., 2019;", "ref_id": "BIBREF38" }, { "start": 142, "end": 142, "text": "", "ref_id": null }, { "start": 431, "end": 449, "text": "Bisk et al., 2020)", "ref_id": "BIBREF2" }, { "start": 477, "end": 500, "text": "(Rudinger et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address these limitations, we propose CURIEa system to iteratively query pretrained language models to generate an explicit structured graph of consequences, that we call a situational reasoning graph (st-graph). The task is illustrated in Figure 1 : given some context and situation st (short phrase), our system generates a st-graph based on the contextual knowledge. 
CURIE supports the following kinds of reasoning:", "cite_spans": [], "ref_spans": [ { "start": 243, "end": 251, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 If a situation st occurs, which event is more/less likely to happen imminently/ eventually?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Which event will support/ prevent situation st from happening imminently/ eventually?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As shown in Figure 1 , our approach to this task is to iteratively compile the answers to questions 1 and 2 to construct the st-graph where imminent/eventual capture multihop reasoning questions. Compared to a free-form text output obtained from an out-of-the-box sequence-to-sequence model, our approach gives more control and flexibility over the graph generation process, including arbitrarily reasoning for any particular node in the graph. The generated st-graphs are of high quality as judged by humans for correctness. In addition to human evaluation, we also show that a downstream task that requires reasoning about situations can compose natural language queries to construct a st-reasoning graph via CURIE. The resulting st-graph can be simply augmented to their input to achieve performance gains, specifically on the subset of hard questions that require background knowledge and multihop reasoning. In summary, this paper addresses the following research questions:", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "RQ1: Given a context and a situation, how can we generate a situational reasoning (st) graph?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To answer RQ1, we present CURIE, the first domain-agnostic situational reasoning system that takes as input a context and a situation st and iteratively generates a situational reasoning graph ( \u00a72). Our system is effective at situational reasoning across three datasets as validated by human evaluation and automated metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "RQ2: Can the st-graphs generated by CURIE improve performance of a downstream task? To answer RQ2, we show that st graphs generated by CURIE improve a st-reasoning task (WIQA-QA) by 3 points on accuracy by simply augmenting their input with our generated situational graphs, especially for a hard subset that requires background knowledge and multi-hop reasoning ( \u00a74).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "CURIE provides both a general framework for situational reasoning and a method for constructing streasoning graphs from pretrained language models. M st tasks model st-graph Figure 2 : CURIE framework consists of two components: (i) a formulation that adapts datasets that allow st-reasoning for pretraining (ii) a method to iteratively build structured st-graphs using natural language queries over a fine-tuned language model (M). Figure 2 shows the overall architecture of CURIE. 
CURIE framework consists of two components: (i) st-reasoning task formulation : a formulation that adapts datasets that allow situational reasoning (ii) st-graph construction : a method to fine-tune language model M to generate the consequences of a situation and iteratively construct structured situational graphs (shown in Figure 1 ). In this section, we present (i) our task formulation ( \u00a72.1), (ii) adapting existing datasets for CURIE task formulation ( \u00a72.2), (iii) the learning procedure ( \u00a72.3), and (iv) the st-graph generation process ( \u00a72.4).", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 182, "text": "Figure 2", "ref_id": null }, { "start": 433, "end": 441, "text": "Figure 2", "ref_id": null }, { "start": 809, "end": 817, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "CURIE for Situational Reasoning", "sec_num": "2" }, { "text": "We describe the general task formulation for adapting pretraining language models to the st-reasoning task. Given a context T = {s 1 , s 2 , . . . , s N } comprising of N sentences, and a situation st, our goal is to generate an st-graph G that captures the effects of situation st. An st-graph G(V, E) is an unweighted directed acyclic graph. A vertex v \u2208 V is an event or a state that describes a change to the original conditions in T . Each edge e ij \u2208 E is labeled with a relationship r ij , that indicates whether v i positively or negatively influences v j . Positive influences are represented via green edges comprising one of {entails, strengthens, helps} and negative influences represented via red edges that depict one of {contradicts, weakens, hurts}. Our relation set is general and can accommodate various st-reasoning tasks. Given two nodes v i , v k \u2208 V , if a path from v i to v k has more than one edge, we describe the effect c as eventual and a direct effect as imminent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "2.1" }, { "text": "We derive the training data by transforming a repository of (context T , st-graph G) tuples into a set of question-answer pairs. Each pair of vertices Given context and st: dog is a sheep dog Q1: What does st strengthen imminently ? A1: The men are farmers st: men are studying tour maps Q2: What does st weaken imminently? A2: The men are farmers Table 1 : The datasets used by CURIE and how we re-purpose them for st reasoning graph generation task. As explained in \u00a72.1, the green edges set depicts relation (r) (entail, strengthen, helps) and red edges depict one of (contradict, weaken, hurts). The { imminent, eventual } effects (c) are used to support multihop reasoning. DEFEAS = DEFEASIBLE, chain refers to reasoning chain. Some examples are cut to fit. The key insight is that an st-graph can be decomposed into a series of QA pairs, enabling us to leverage seq-to-seq approaches for st-reasoning.", "cite_spans": [], "ref_spans": [ { "start": 348, "end": 355, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Task Formulation", "sec_num": "2.1" }, { "text": "v s , v t \u2208 G", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "2.1" }, { "text": "one question-answer pair to the training data for CURIE, such that every question comprises of: i) context T , ii) a st-vertex v s , iii) a relation r, and iv) the nature of the effect c and the answer is the target node v t . An example is shown in Figure 1 . 
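To make this decomposition concrete, the following minimal Python sketch turns an st-graph into (query, answer) training pairs. The edge-list representation, the plain-space concatenation, and the sample edge are illustrative assumptions, not the authors' released preprocessing.

```python
from typing import List, Tuple

# Illustrative edge format: (source vertex v_s, relation r, effect type c, target vertex v_t)
Edge = Tuple[str, str, str, str]

def graph_to_qa_pairs(context: str, edges: List[Edge]) -> List[Tuple[str, str]]:
    """Decompose an st-graph into (query, answer) pairs for seq-to-seq training.

    Each edge yields one example: the query encodes (T, v_s, r, c) and the
    answer is the target vertex v_t, following Section 2.1.
    """
    pairs = []
    for v_s, relation, effect, v_t in edges:
        query = " ".join([context, v_s, relation, effect])  # concat(T, v_s, r, c)
        pairs.append((query, v_t))
    return pairs

# Hypothetical usage on a WIQA-style paragraph:
context = ("Grass and small plants grow in an area. These plants die. "
           "The soil gains organic material. The soil becomes more fertile.")
edges = [("less rain falls", "hurts", "imminent", "fewer plants grow")]
print(graph_to_qa_pairs(context, edges))
```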
Compared to an end-to-end approach to graph generation, our approach gives more flexibility over the generation process, enabling reasoning for any chosen node in the graph. Thus the training data consists of tuples (", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 259, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Task Formulation", "sec_num": "2.1" }, { "text": "x i , y i ), with x i = (T, v s , r, c) i and y i is the target situation v t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "2.1" }, { "text": "Despite theoretical advances, lack of a large-scale general situational reasoning dataset presents a challenge to train seq-to-seq language models. We describe how we generalize existing diverse datasets towards st-reasoning towards finetuning a language model M. If a reasoning dataset contains a context, a st-situation and can describe the influence of st in terms of green and/or red edges, it can be seamlessly adapted to CURIE framework. Due to the lack of existing datasets that directly support our task formulation, we adapt the following three diverse datasets -WIQA, QUAREL and DEFEASIBLE for CURIE (dataset statistics in Table 3 ).", "cite_spans": [], "ref_spans": [ { "start": 633, "end": 640, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Generalizing Existing Datasets", "sec_num": "2.2" }, { "text": "WIQA: WIQA task studies the effect of a perturbation in a procedural text (Tandon et al., 2019) . The context T is a procedural text describing a physical process, and st is a perturbation i.e., an external situation deviating from T , and the effect of st is either helps or hurts. See Table 1 for examples.", "cite_spans": [ { "start": 74, "end": 95, "text": "(Tandon et al., 2019)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 287, "end": 294, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Generalizing Existing Datasets", "sec_num": "2.2" }, { "text": "QUAREL: QUAREL dataset contains qualitative story questions where T is a narrative, and st is a qualitative statement. T and st are also expressed in a simpler, logical form, which we use as it highlights the reasoning challenge. The effect of st is entails or contradicts (see Table 1 ). 14.9k 15.4k Table 3 : Dataset wise statistics, we maintain the splits DEFEASIBLE: The DEFEASIBLE reasoning task (Rudinger et al., 2020) studies inference in the presence of a counterfactual. The context T is a premise describing an everyday context, and the situation st is an observed evidence which either strengthens or weakens the hypothesis. We adapt the original abductive setup as shown in Table 1 . 
In addition to commonsense situations, DEFEASIBLE-st also comprises of social situations, thereby contributing to the diversity of our datasets.", "cite_spans": [ { "start": 401, "end": 424, "text": "(Rudinger et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 278, "end": 285, "text": "Table 1", "ref_id": null }, { "start": 301, "end": 308, "text": "Table 3", "ref_id": null }, { "start": 686, "end": 693, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Generalizing Existing Datasets", "sec_num": "2.2" }, { "text": "To reiterate our task formulation ( \u00a72.1), for a given context and st, we first specify a set of questions and the resulting outputs for the questions is then compiled to form a st-graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Generate st-graphs", "sec_num": "2.3" }, { "text": "The training data consists of tuples (x i , y i ), with x i = (T, st , r, c) i where T denotes the context, st the situation, r is the edge (green or red), c indicates the nature of the effect (imminent or eventual), and y i is the output (a short sentence or a phrase depicting the effect). The output of N Q such questions is compiled into a graph G = {y i } 1:N Q (Fig. 1) .", "cite_spans": [], "ref_spans": [ { "start": 367, "end": 375, "text": "(Fig. 1)", "ref_id": null } ], "eq_spans": [], "section": "Learning to Generate st-graphs", "sec_num": "2.3" }, { "text": "We use a pretrained language model M to estimate the probability of generating an answer y i for an input x i . We first transform the tuple", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Generate st-graphs", "sec_num": "2.3" }, { "text": "x i = x 1 i , x 2 i , . . . , x N i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Generate st-graphs", "sec_num": "2.3" }, { "text": "into a single query sequence of tokens by concatenating its components i.e. x i = concat(T, st , r, c), where concat is string concatenation. Let the sequence of tokens representing the target event be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Generate st-graphs", "sec_num": "2.3" }, { "text": "y i = y 1 i , y 2 i , . . . , y M i ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Generate st-graphs", "sec_num": "2.3" }, { "text": "where N and M are the lengths of the query and the target event sequences. We model the conditional Given: CURIE language model M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Generate st-graphs", "sec_num": "2.3" }, { "text": "Given: Context T , situation st, a set R = {(r i , c i )} N Q i=1 of N Q (r, c) tuples. Result: st graph G: i th node is generated with relation r i , effect type c i . Init: G \u2190 \u2205 for i \u2190 1, 2, . . . , N Q do / * Create a query * / x i = concat(T, st, r i , c i ); / * Sample a node from M * / y i \u223c M(x i ); / * Add sampled node, edge * / G = G \u222a (r i , c i , y i ); end return G probability p \u03b8 (y i | x i ) as a series of conditional next token distributions parameterized by \u03b8: as p \u03b8 (y i | x i ) = M k=1 p \u03b8 (y k i | x i , y 1 i , .., y k\u22121 i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Generate st-graphs", "sec_num": "2.3" }, { "text": "The auto-regressive factorization of the language model p \u03b8 allows us to efficiently generate target event influences for a given test input x j . 
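At the graph level, this iterative querying (Algorithm 1) amounts to a simple loop. The sketch below is a minimal illustration: `model.generate` stands in for sampling y_i from the fine-tuned model M and is not a specific library call, and the query concatenation is simplified.

```python
# Illustrative sketch of Algorithm 1: iteratively query the fine-tuned
# model M to construct an st-graph.

def build_st_graph(model, context, situation, relation_effect_pairs):
    graph = []  # each element: (relation r_i, effect c_i, generated node y_i)
    for relation, effect in relation_effect_pairs:
        query = " ".join([context, situation, relation, effect])  # concat(T, st, r_i, c_i)
        node = model.generate(query)  # y_i ~ M(x_i)
        graph.append((relation, effect, node))
    return graph

def expand_st_graph(model, context, situation, relation_effect_pairs, depth=2):
    """Recursively treat generated nodes as new situations to deepen the graph."""
    if depth == 0:
        return []
    edges = build_st_graph(model, context, situation, relation_effect_pairs)
    for _, _, node in list(edges):
        edges += expand_st_graph(model, context, node, relation_effect_pairs, depth - 1)
    return edges
```

Inside `model.generate`, tokens are drawn one at a time from p_θ, as described next.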
The process of decoding begins by sampling the first token", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference to Decode st-graphs", "sec_num": "2.4" }, { "text": "y 1 j \u223c p \u03b8 (y | x j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference to Decode st-graphs", "sec_num": "2.4" }, { "text": "The next token is then drawn by sampling", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference to Decode st-graphs", "sec_num": "2.4" }, { "text": "y 2 j \u223c p \u03b8 (y | x j , y 1 j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference to Decode st-graphs", "sec_num": "2.4" }, { "text": "The process is repeated until a specified end-symbol token is drawn at the K th step. We use nucleus sampling in practice. The tokens y 1 j , y 2 j , . . . , y K\u22121 j are then returned as the generated answer. To generate the final streasoning graph G, we combine all the generated answers {y i } 1:N Q that had the same context and st pair (T, st ) over all (r, c) combinations. We can then use generated answer st \u2208 {y i } 1:N Q , as a new input to M as (T, st ) to recursively expand the st-graph to arbitrary depth and structures (Al-gorithm 1). One such instance of using CURIE st graphs for a downstream QA task is shown in \u00a74.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference to Decode st-graphs", "sec_num": "2.4" }, { "text": "This section reports on the quality of the generated st reasoning graphs and establishes strong baseline scores for st-graph generation. We use the datasets described in section \u00a72.2 for our experiments. Table 4 : Generation results for CURIE with baselines for language model M. We find that context is essential for performance (w/o T ). We provide these baseline scores as a reference for future research.", "cite_spans": [], "ref_spans": [ { "start": 204, "end": 211, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "RQ1: Establishing Baselines for st-graph Generation", "sec_num": "3" }, { "text": "To reiterate, CURIE is composed of (i) task formulation component and (ii) graph construction component, that uses a language model M to construct the st-graph. We want to emphasize that any language model architecture can be a candidate for M. Since our st-task formulation is novel, we establish strong baselines over the three datasets. Our experiments include large-scale language models (LSTM and pretrained transformer) with varying parameter sizes and pre-training, and the corresponding ablation studies. Our choices for M are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Language Models", "sec_num": "3.1" }, { "text": "LSTM Seq-to-Seq: We train an LSTM (Hochreiter and Schmidhuber, 1997) based sequence to sequence model (Bahdanau et al., 2015 ) which uses global attention described in (Luong et al., 2015) .", "cite_spans": [ { "start": 102, "end": 124, "text": "(Bahdanau et al., 2015", "ref_id": "BIBREF1" }, { "start": 168, "end": 188, "text": "(Luong et al., 2015)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Language Models", "sec_num": "3.1" }, { "text": "We initialize the embedding layer with pre-trained 300 dimensional Glove (Pennington et al., 2014) 1 . We use 2 layers of LSTM encoder and decoder with a hidden size of 500. 
The encoder is bidirectional.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Language Models", "sec_num": "3.1" }, { "text": "We use the original design of GPT (Radford et al., 2018) with 12 layers, 768-dimensional hidden states, and 12 attention heads.", "cite_spans": [ { "start": 34, "end": 56, "text": "(Radford et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "GPT:", "sec_num": null }, { "text": "We use the medium (355M) variant of GPT-2 (Radford et al., 2019) with 24 layers, 1024 hidden size, 16 attention heads. For both GPT and GPT-2, we initialize the model with the pre-trained weights and use the implementation provided by Wolf et al. (2019) . We use Adam (Kingma and Ba, 2014) for optimization with a learning rate of 5e \u2212 05. All the dropouts (Srivastava et al., 2014) were set to 0.1. We found the best hyperparameter settings by searching the space using the following hyperparameters.", "cite_spans": [ { "start": 42, "end": 64, "text": "(Radford et al., 2019)", "ref_id": "BIBREF28" }, { "start": 235, "end": 253, "text": "Wolf et al. (2019)", "ref_id": "BIBREF39" }, { "start": 357, "end": 382, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "GPT-2:", "sec_num": null }, { "text": "1. embedding dropout = {0.1, 0.2, 0.3} 2. learning rate = {1e-05, 2e-05, 5e-05, 1e-06}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GPT-2:", "sec_num": null }, { "text": "We compare the st-graphs generated by various language models with the gold-standard reference graphs. To compare the two graphs, we first flatten both the reference graph and the st-graph as text sequences and then compute the overlap between them. Due to a lack of strong automated metrics, we use the commonly used evaluation metrics for generation BLEU (Papineni et al., 2002) , and ROUGE (Lin, 2004) 2 . Our results shown in Table 4 indicate that the task of st generation is challenging, and suggests that incorporating st-reasoning specific inductive biases might be beneficial. At the same time, Table 4 shows that even strong models like GPT-2 achieve low BLEU and ROUGE scores (specifically on WIQA and DEFEASIBLE), leaving a lot of room for model improvements in the future.", "cite_spans": [ { "start": 357, "end": 380, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF23" }, { "start": 393, "end": 404, "text": "(Lin, 2004)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 430, "end": 437, "text": "Table 4", "ref_id": null }, { "start": 604, "end": 611, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "GPT-2:", "sec_num": null }, { "text": "We also show ablation results for the model with respect to the context T ( \u00a72.1), by fine-tuning without the context. We find that context is essential for performance for both GPT and GPT-2 (indicated with w/o T in Table 4 ). Further, we note that the gains achieved by adding context are higher for GPT-2, hinting that larger models can more effectively utilize the context 3 . ", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "GPT-2:", "sec_num": null }, { "text": "N-gram metrics such as BLEU and ROUGE are known to be limited, specifically for reasoning tasks. Further, we observe from Table 4 that context is crucial for generation quality. 
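For reference, the automated comparison described above (flattening both graphs to text and scoring n-gram overlap) can be sketched as follows. The paper uses the nlg-eval toolkit; nltk's sentence-level BLEU is shown here only as an illustrative stand-in, and the linearization order of the triples is an assumption.

```python
from nltk.translate.bleu_score import sentence_bleu

def flatten(graph):
    """Linearize a graph given as (source, relation, target) triples."""
    return " ".join(f"{s} {r} {t}" for s, r, t in graph)

def graph_bleu(reference_graph, generated_graph):
    ref_tokens = flatten(reference_graph).split()
    hyp_tokens = flatten(generated_graph).split()
    return sentence_bleu([ref_tokens], hyp_tokens)
```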
To better understand this effect, we perform human evaluation on a random sample from the dev set to compare GPT-2-w/o T and GPT-2 models. Our goal is to assess quality of generations, and the importance of grounding generations in context. Four human judges annotated 100 unique samples for correctness, relevance and reference, described next.", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "3.2" }, { "text": "We conducted a human evaluation to evaluate the correctness of the generated graphs where we aggregated nodes for a given st. The user interface for the annotation (shown in Figure 3 ) displayed the context T and the corresponding graph G generated by GPT-2 using Algorithm 1. The human judges were asked to annotate the nodes, edges, and the overall graph for correctness. A graph was labeled as correct if either a) all the nodes and edges were correct, or b) the graph had a minor issue that the judges deem not detrimental to the overall correctness. The inter-annotator agreement on graph correctness was substantial with a Fleiss' Kappa score (Fleiss and Cohen, 1973) of 0.69. Table 6 shows that human judges rated >75% of the graphs to be correct given the context, showing that CURIE generates high-quality graphs for a diverse set of contexts.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 182, "text": "Figure 3", "ref_id": null }, { "start": 683, "end": 690, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Correctness:", "sec_num": null }, { "text": "Relevance: The annotators are provided with the context T , the situation st, and the relational ques-Attribute Node Edge Graph % Correct 79.71 77.78 75.36 Table 6 : Human Analysis of Graph Correctness. About 75% of the graphs were deemed as correct.", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 163, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Correctness:", "sec_num": null }, { "text": "tions. The annotators were asked, \"Which system (A or B) is more accurate relative to the background information given in the context?\" They could also pick option C (no preference). The order of the references was randomized. Table 7 (row 1) shows that GPT-2 outperforms GPT-2 (w/o T ), confirming our hypothesis that context is important as GPT-2 generates target events that are grounded in the passage and source events. Reference: We measure how accurately each system-generated event reflects the reference (true) event. Here, the annotators saw only the reference sentence and the outputs of two systems (A and B) in a randomized order. We asked the annotators, \"Which system's output is closest in meaning to the reference?\" The annotators could pick the options A, B, or C (no preference). Table 7 (row 2) illus- Figure 3 : User interface for graph correctness evaluation. The human judges were asked to rate the if the the generated nodes, edges, and the overall graph are correct for the given context. The paragraph for this example was: Grass and small plants grow in an area. These plants die. The soil gains organic material. The soil becomes more fertile. Larger plants are able to be supported. 
Trees eventually grow.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 242, "text": "Table 7 (row 1)", "ref_id": "TABREF7" }, { "start": 799, "end": 806, "text": "Table 7", "ref_id": "TABREF7" }, { "start": 822, "end": 830, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Correctness:", "sec_num": null }, { "text": "Task GPT-2 (w/o T ) GPT-2 No", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness:", "sec_num": null }, { "text": "trates that the output generated by GPT-2 is closer in meaning to the reference compared to GPT-2 (w/o T ) reinforcing the importance of context. Both the models (with and without context) produced similarly grammatically fluent outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness:", "sec_num": null }, { "text": "The reference and relevance task scores together show that GPT-2 does not generate target events that are exactly similar to the reference target events, but are correct in the context of the passage and source event. To investigate this, we analyze a random sample of 100 points from the dev set. Out of the erroneous samples, we observe the following error categories (shown in Table 5 ):", "cite_spans": [], "ref_spans": [ { "start": 380, "end": 387, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "\u2022 Polarity (7%): Predicted polarity was wrong but the event was correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "\u2022 Linguistic Variability (27%): Output was a linguistic variant of the reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "\u2022 Related event (23%): Output was related but different reference expected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "\u2022 Wrong (40%): Output was fully unrelated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "\u2022 Erroneous reference (3%): Gold annotations themselves were erroneous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "Finally, we measure if the generated st-graphs are consistent. Consider a path of length two in the generated st-graph (say, A \u2192 B \u2192 C). A consistent graph would have identical answers to what does A help eventually i.e., \"C\", and what does B help imminently i.e., \"C\". To analyze consistency, we manually evaluated 50 random generated lengthtwo paths, selected from WIQA-st dev set. We observe that 58% samples had consistent output w.r.t the generated output. We also measure consistency w.r.t. the gold standard (the true outputs in the dev set), and observe that the system output is \u224848% consistent. Despite being trained on independent samples, st-graphs show reasonable consistency and improving consistency further is an interesting future research direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Consistency Analysis", "sec_num": "3.4" }, { "text": "In summary, CURIE allows adapting pretrained language models to generate st-graphs that humans meaningful and relevant with a high degree of correctness. We also perform an in-depth analysis of the errors of CURIE. We establish multiple baselines with diverse language models to guide future research. 
We show that context is more important than model size for st-reasoning tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.5" }, { "text": "In this section, we describe the approach for augmenting st graphs for downstream reasoning tasks. We first identify the choice of tasks (st-tasks) for domain adaptive pretraining (Gururangan et al., 2020) and obtain CURIE language model M (based on GPT-2). The downstream task then provides input context, st and (relation, type) tuples of interest, and obtains the st-graphs (see Algorithm 1) from CURIE. We describe one such instantiation in \u00a74.1.", "cite_spans": [ { "start": 180, "end": 205, "text": "(Gururangan et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "RQ2: CURIE for Downstream Tasks", "sec_num": "4" }, { "text": "We examine the utility of CURIE-generated graphs in the WIQA-QA (Tandon et al., 2019) downstream question answering benchmark. Input to this task is a context supplied in form of a passage T , a starting event c, an ending event e, and the output is a label {helps, hurts, or no_effect} depicting how the ending e is influenced by the event c. We hypothesize that CURIE can augment c and e with their influences, giving a more comprehensive scenario than the context alone. We use CURIE trained on WIQA-st to augment the event influences in each sample in the QA task as additional context. We obtain the influence graphs for c and e by defining R f wd = {(helps, imminent), (hurts, imminent) } and R rev = { (helped by, imminent), (hurt by, imminent)}, and using algorithm 1 as follows:", "cite_spans": [ { "start": 64, "end": 85, "text": "(Tandon et al., 2019)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "CURIE augmented WIQA-QA", "sec_num": "4.1" }, { "text": "G(c) = IGEN(T, c, R f wd )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CURIE augmented WIQA-QA", "sec_num": "4.1" }, { "text": "G(e) = IGEN(T, e, R rev )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CURIE augmented WIQA-QA", "sec_num": "4.1" }, { "text": "We hypothesize that WIQA-st graphs are able to generate reasoning chains that connect c to e, even if e is not an immediate consequence of c. Following Tandon et al. 2019, we encode the input sequence concat(T, c, e) using the BERT encoder E (Devlin et al., 2019) , and use the [CLS] token representation (\u0125 i ) as our sequence representation.", "cite_spans": [ { "start": 242, "end": 263, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "CURIE augmented WIQA-QA", "sec_num": "4.1" }, { "text": "We then use the same encoder E to encode the generated effects concat(G(c), G(e)), and use the [CLS] token to get a representation for augmented c and e (\u0125 a ). Following the encoded inputs, we compute the final loss as: l i = MLP 1 (\u0125 i ), and l a = MLP 1 (\u0125 a ) and L = \u03b1 \u00d7 L i + \u03b2 \u00d7 L a , where l i , l a represent the logits from\u0125 i and\u0125 a respectively, and L i and L a are their corresponding crossentropy losses. \u03b1 and \u03b2 are hyperparameters that decide the contribution of the generated influence graphs and the procedural text to the loss. We set \u03b1 = 1 and \u03b2 = 0.9 across experiments. Table 8 shows the accuracy of our method vs. the vanilla WIQA-BERT model by question type and number of hops between c and e. 
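A minimal PyTorch sketch of this augmented classifier is given below, assuming the HuggingFace transformers BERT encoder; the single linear head, the use of l_i as prediction logits, and all variable names are simplifying assumptions rather than the authors' implementation.

```python
import torch.nn as nn
from transformers import BertModel

class CurieAugmentedQA(nn.Module):
    """Shared BERT encoder over the original input and the generated st-graphs."""
    def __init__(self, num_labels=3, alpha=1.0, beta=0.9):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.mlp = nn.Linear(self.encoder.config.hidden_size, num_labels)
        self.alpha, self.beta = alpha, beta
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, input_ids, graph_input_ids, labels,
                attention_mask=None, graph_attention_mask=None):
        # h_i: [CLS] of concat(T, c, e); h_a: [CLS] of concat(G(c), G(e))
        h_i = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        h_a = self.encoder(graph_input_ids,
                           attention_mask=graph_attention_mask).last_hidden_state[:, 0]
        l_i, l_a = self.mlp(h_i), self.mlp(h_a)  # logits from both views
        loss = self.alpha * self.loss_fn(l_i, labels) + self.beta * self.loss_fn(l_a, labels)
        return loss, l_i  # l_i serves as prediction logits here (an assumption)
```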
We also observe from Table 8 that augmenting the context with generated influences from CURIE leads to considerable gains over WIQA-BERT based model, with the largest improvement seen in 3-hop questions (questions where the e and c are at a distance of three reasoning hops in the influence graphs). The strong performance on the 3-hop question supports our hypothesis that generated influences might be able to connect two event influences that are farther apart in the reasoning chain. We also show in Table 8 Table 8 : QA accuracy by number of hops, and question type. WIQA-BERT refers to the original WIQA-BERT results reported in Tandon et al. (2019) , and WIQA-BERT + CURIE are the results obtained by augmenting the QA dataset with the influences generated by CURIE.", "cite_spans": [ { "start": 1353, "end": 1373, "text": "Tandon et al. (2019)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 592, "end": 599, "text": "Table 8", "ref_id": null }, { "start": 739, "end": 746, "text": "Table 8", "ref_id": null }, { "start": 1222, "end": 1229, "text": "Table 8", "ref_id": null }, { "start": 1230, "end": 1237, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "CURIE augmented WIQA-QA", "sec_num": "4.1" }, { "text": "Out-of-para category of questions, which requires background knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QA Evaluation Results", "sec_num": null }, { "text": "Source of improved performance: st graphs? Since CURIE uses GPT-2 model to generate the graphs, we perform an additional experiment to verify whether simply using GPT-2 classifier for WIQA would achieve the same performance gains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QA Evaluation Results", "sec_num": null }, { "text": "To establish this, we train a GPT-2 classifier, and augment it with CURIE graphs to compare their relative performances on WIQA. Table 9 shows that augmenting CURIE graphs to both WIQA-BERT and GPT-2 classifiers provides consistent gains, suggesting the effectiveness of CURIE graphs.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 136, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "QA Evaluation Results", "sec_num": null }, { "text": "WIQA-BERT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Accuracy", "sec_num": null }, { "text": "76.92 * GPT-2 72.70 GPT-2 + CURIE 74.33 * Table 9 : WIQA-QA results for both WIQA-BERT and GPT-2 augmented with CURIE graphs. Across both classifiers, augmenting CURIE graphs shows performance gains. * -indicates statistical significance WIQA-BERT scores are slightly lower than the GPT-2 scores for WIQA classification despite having similar parameter size. We hypothesize that this is due to the pretrained classification token ([CLS]) in WIQA-BERT, while GPT-2 uses the pooling operation over the sequence for classification. In summary, the evaluation highlights the value of CURIE as a framework for improving performance on downstream tasks that require coun-terfactual reasoning and serves as an evaluation of the ability of CURIE to reason about st-scenarios.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "WIQA-BERT + CURIE", "sec_num": "73.80" }, { "text": "In summary, we show substantial gains when a generated st-graph is fed as an additional input to the QA model. 
Our approach forces the model to reason about influences within a context, and then answer the question, which proves to be better than answering the questions directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.2" }, { "text": "Language Models for Knowledge Generation: Using large scale neural networks to generate knowledge has been studied under various task settings Bosselut et al., 2021; . Another line of querying language models (LMs) aims to understand the type of knowledge LMs contain. Davison et al. (2019) explore whether BERT prefers true or fictitious statements over ConceptNet (Speer et al., 2017) . Logan et al. (2019) observe that the LM over-generalize to produce wrong facts, while Kassner and Sch\u00fctze 2019show that negated facts are also considered valid in an LM.", "cite_spans": [ { "start": 143, "end": 165, "text": "Bosselut et al., 2021;", "ref_id": "BIBREF3" }, { "start": 269, "end": 290, "text": "Davison et al. (2019)", "ref_id": "BIBREF6" }, { "start": 366, "end": 386, "text": "(Speer et al., 2017)", "ref_id": "BIBREF34" }, { "start": 389, "end": 408, "text": "Logan et al. (2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Our work closely aligns with Tandon et al. (2019) , , and Bosselut et al. (2021) . Compared to , CURIE gives a method that can naturally incorporate context and reason about situation via hops and nature of the influence. Additionally, any node can be arbitrarily expanded via the iterative procedure, producing complete graphs for situations. We reformulate the task of studying event influence from a QA task (Tandon et al., 2019) to a generation task. Our framework is similar in spirit to , but extend it for situational reasoning with LMs. Bosselut et al. (2021) aim to generate events that can aid commonsense tasks. In contrast, our focus is context-grounded st graph generation. To this end, our formulation includes multiple forward/backward reactions, imminent and eventual edges, and an algorithm to compile the individual nodes to a complete graph (Algorithm 1).", "cite_spans": [ { "start": 29, "end": 49, "text": "Tandon et al. (2019)", "ref_id": "BIBREF38" }, { "start": 58, "end": 80, "text": "Bosselut et al. (2021)", "ref_id": "BIBREF3" }, { "start": 411, "end": 432, "text": "(Tandon et al., 2019)", "ref_id": "BIBREF38" }, { "start": 545, "end": 567, "text": "Bosselut et al. (2021)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Situational reasoning : There has been immense interest in extracting event chains (as causal graphs) in stories and news corpora in both unsupervised (Chambers and Jurafsky, 2008) and supervised (Rudinger et al., 2015; Liu et al., 2018; Asghar, 2016; Dunietz et al., 2017; Nordon et al., 2019; Zhao et al., 2017) settings. 
Such approaches often depend on events that are explicitly mentioned in the input text, thereby unable to generate events beyond the input text.", "cite_spans": [ { "start": 151, "end": 180, "text": "(Chambers and Jurafsky, 2008)", "ref_id": "BIBREF5" }, { "start": 196, "end": 219, "text": "(Rudinger et al., 2015;", "ref_id": "BIBREF30" }, { "start": 220, "end": 237, "text": "Liu et al., 2018;", "ref_id": "BIBREF18" }, { "start": 238, "end": 251, "text": "Asghar, 2016;", "ref_id": "BIBREF0" }, { "start": 252, "end": 273, "text": "Dunietz et al., 2017;", "ref_id": "BIBREF8" }, { "start": 274, "end": 294, "text": "Nordon et al., 2019;", "ref_id": "BIBREF22" }, { "start": 295, "end": 313, "text": "Zhao et al., 2017)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Recently, there has been interest in st reasoning from a retrieval setting (Lin et al., 2019) and also generation setting, attributed partially to the rise of neural generation models (Yangfeng Ji and Celikyilmaz, 2020) as knowledge bases (Petroni et al., 2019; Roberts et al., 2020; Talmor et al., 2020; . Qin et al. (2019) present generation models to generate the path from a counterfactual to an ending in a story. Current systems make some simplifying assumptions, e.g. that the ending is known. Multiple st (e.g., more sunlight, more pollution) can happen at the same time, and these systems can only handle one situation at a time. All of these systems assume that st happens once in a context. Our framework strengthens this line of work by not assuming that the ending is given during deductive st reasoning.", "cite_spans": [ { "start": 75, "end": 93, "text": "(Lin et al., 2019)", "ref_id": "BIBREF17" }, { "start": 239, "end": 261, "text": "(Petroni et al., 2019;", "ref_id": "BIBREF25" }, { "start": 262, "end": 283, "text": "Roberts et al., 2020;", "ref_id": "BIBREF29" }, { "start": 284, "end": 304, "text": "Talmor et al., 2020;", "ref_id": null }, { "start": 307, "end": 324, "text": "Qin et al. (2019)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We present CURIE, a situational reasoning that: (i) is effective at generating st-reasoning graphs, validated by automated metrics and human evaluations, (ii) improves performance on two downstream tasks by simply augmenting their input with the generated st graphs. Further, our framework supports recursively querying for any node in the st-graph. Our future work is to design models that seek consistency, and study recursive st-reasoning as a bridge between dialog and reasoning. Table 12 : Sample Generations. Topic matches captures whether the topic of the generated event matches with the context. Path length = 1 refers to the immediate effects, and Path length > 1 refers to eventual effects. (section 3).", "cite_spans": [], "ref_spans": [ { "start": 484, "end": 492, "text": "Table 12", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://github.com/OpenNMT/OpenNMT-py 2 https://github.com/Maluuba/nlg-eval 3 More qualitative examples shown in appendix B", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Peter Clark for the thoughtful discussions and useful feedback on the draft. We also want to thank the anonymous reviewers for valuable feedback. 
This material is partly based on research sponsored in part by the Air Force Research Laboratory under agreement number FA8750-19-2-0200. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "To compute polarity for error analysis, we use the following words as guidelines.Increasing words helps, more, higher, increase, increases, stronger, faster, greater, longer, larger, helpingDecreasing words hurts, less, lower, decrease, decreases, weaker, slower, smaller, hurting, softer, fewer", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Polarity Words", "sec_num": null }, { "text": "In table 12, we show some qualitative QA examples from CURIE. Here, Topic Matches signifies whether the generated answers is relevant to the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Examples from CURIE", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automatic extraction of causal relations from natural language texts: A comprehensive survey", "authors": [ { "first": "Nabiha", "middle": [], "last": "Asghar", "suffix": "" } ], "year": 2016, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabiha Asghar. 2016. Automatic extraction of causal relations from natural language texts: A comprehen- sive survey. ArXiv, abs/1605.07895.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Piqa: Reasoning about physical commonsense in natural language", "authors": [ { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Lebras", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "7432--7439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Bisk, Rowan Zellers, Ronan LeBras, Jian- feng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. 
In AAAI, pages 7432-7439.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Dynamic neuro-symbolic knowledge graph construction for zero-shot commonsense question answering", "authors": [ { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bosselut, Ronan Le Bras, and Yejin Choi. 2021. Dynamic neuro-symbolic knowledge graph construction for zero-shot commonsense question answering. In Proceedings of the 35th AAAI Con- ference on Artificial Intelligence (AAAI).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Comet: Commonsense transformers for automatic knowledge graph construction", "authors": [ { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Malaviya", "suffix": "" }, { "first": "Asli", "middle": [], "last": "\u00c7elikyilmaz", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai- tanya Malaviya, Asli \u00c7elikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for auto- matic knowledge graph construction. In ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unsupervised learning of narrative event chains", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "789--797", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Unsuper- vised learning of narrative event chains. In Proceed- ings of ACL-08: HLT, pages 789-797, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Commonsense knowledge mining from pretrained models", "authors": [ { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1173--1178", "other_ids": { "DOI": [ "10.18653/v1/D19-1109" ] }, "num": null, "urls": [], "raw_text": "Joe Davison, Joshua Feldman, and Alexander Rush. 2019. Commonsense knowledge mining from pre- trained models. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1173-1178, Hong Kong, China. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatically tagging constructions of causation and their slot-fillers", "authors": [ { "first": "Jesse", "middle": [], "last": "Dunietz", "suffix": "" }, { "first": "Lori", "middle": [ "S" ], "last": "Levin", "suffix": "" }, { "first": "J", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "117--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesse Dunietz, Lori S. Levin, and J. Carbonell. 2017. Automatically tagging constructions of causation and their slot-fillers. Transactions of the Association for Computational Linguistics, 5:117-133.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement", "authors": [ { "first": "L", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Fleiss", "suffix": "" }, { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1973, "venue": "", "volume": "33", "issue": "", "pages": "613--619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph L Fleiss and Jacob Cohen. 1973. The equiv- alence of weighted kappa and the intraclass corre- lation coefficient as measures of reliability. Educa- tional and psychological measurement, 33(3):613- 619.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Logic and conversation syntax and semantics", "authors": [ { "first": "H", "middle": [], "last": "Grice", "suffix": "" } ], "year": 1975, "venue": "Logic and conversation Syntax and Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Grice. 1975. Logic and conversation syntax and se- mantics. In Logic and conversation Syntax and Se- mantics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "2020. 
Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.10964" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The curious case of neural text degeneration", "authors": [ { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Buys", "suffix": "" }, { "first": "Li", "middle": [], "last": "Du", "suffix": "" }, { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.09751" ] }, "num": null, "urls": [], "raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Negated lama: Birds cannot fly", "authors": [ { "first": "Nora", "middle": [], "last": "Kassner", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03343" ] }, "num": null, "urls": [], "raw_text": "Nora Kassner and Hinrich Sch\u00fctze. 2019. Negated lama: Birds cannot fly. arXiv preprint arXiv:1911.03343.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Rouge: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text summarization branches out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Reasoning over paragraph effects in situations", "authors": [ { "first": "Kevin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2019, "venue": "MRQA@EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gard- ner. 2019. Reasoning over paragraph effects in situ- ations. In MRQA@EMNLP.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Narrative modeling with memory chains and semantic supervision", "authors": [ { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "278--284", "other_ids": { "DOI": [ "10.18653/v1/P18-2045" ] }, "num": null, "urls": [], "raw_text": "Fei Liu, Trevor Cohn, and Timothy Baldwin. 2018. Narrative modeling with memory chains and seman- tic supervision. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 278- 284, Melbourne, Australia. Association for Compu- tational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Barack's wife hillary: Using knowledge graphs for fact-aware language modeling", "authors": [ { "first": "Robert", "middle": [], "last": "Logan", "suffix": "" }, { "first": "F", "middle": [], "last": "Nelson", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5962--5971", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Logan, Nelson F Liu, Matthew E Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife hillary: Using knowledge graphs for fact-aware lan- guage modeling. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 5962-5971.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Effective approaches to attentionbased neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Exploiting structural and semantic context for commonsense knowledge base completion", "authors": [ { "first": "Chaitanya", "middle": [], "last": "Malaviya", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.02915" ] }, "num": null, "urls": [], "raw_text": "Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. 2019. Exploiting structural and semantic context for commonsense knowledge base completion. arXiv preprint arXiv:1910.02915.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Building causal graphs from medical literature and electronic medical records", "authors": [ { "first": "Galia", "middle": [], "last": "Nordon", "suffix": "" }, { "first": "Gideon", "middle": [], "last": "Koren", "suffix": "" }, { "first": "Varda", "middle": [], "last": "Shalev", "suffix": "" }, { "first": "Benny", "middle": [], "last": "Kimelfeld", "suffix": "" }, { "first": "Uri", "middle": [], "last": "Shalit", "suffix": "" }, { "first": "Kira", "middle": [], "last": "Radinsky", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "1102--1109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galia Nordon, Gideon Koren, Varda Shalev, Benny Kimelfeld, Uri Shalit, and Kira Radinsky. 2019. Building causal graphs from medical literature and electronic medical records. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 33, pages 1102-1109.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. 
In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Language models as knowledge bases?", "authors": [ { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Bakhtin", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2463--2473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Counterfactual story reasoning and generation", "authors": [ { "first": "Lianhui", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chan- dra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and genera- tion. 
EMNLP.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "How much knowledge can you pack into the parameters of a language model", "authors": [ { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "5418--5426", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the param- eters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Script induction as language modeling", "authors": [ { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Ferraro", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1681--1686", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rachel Rudinger, Pushpendre Rastogi, Francis Ferraro, and Benjamin Van Durme. 2015. Script induction as language modeling. 
In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 1681-1686.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Thinking like a skeptic: Defeasible inference in natural language", "authors": [ { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Jena", "middle": [ "D" ], "last": "Hwang", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Le", "middle": [], "last": "Ronan", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Bras", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "4661--4675", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A. Smith, and Yejin Choi. 2020. Thinking like a skeptic: Defeasible inference in nat- ural language. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4661-4675, Online. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Atomic: An atlas of machine commonsense for ifthen reasoning", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Allaway", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "", "middle": [], "last": "Roof", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "3027--3035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chan- dra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for if- then reasoning. 
In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 3027-3035.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Unsupervised commonsense question answering with self-talk", "authors": [ { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Peter", "middle": [], "last": "West", "suffix": "" }, { "first": "Le", "middle": [], "last": "Ronan", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bras", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4615--4629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4615-4629.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "authors": [ { "first": "Robyn", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chin", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "", "volume": "15", "issue": "", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Quarel: A dataset and models for answering questions about qualitative relationships", "authors": [ { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "7063--7071", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal. 2019. 
Quarel: A dataset and models for answering questions about qualita- tive relationships. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 7063-7071.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "2020. olmpics-on what language model pre-training captures", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": null, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "743--758", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. olmpics-on what language model pre-training captures. Transactions of the As- sociation for Computational Linguistics, 8:743-758.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "reasoning over procedural text", "authors": [ { "first": "Niket", "middle": [], "last": "Tandon", "suffix": "" }, { "first": "Bhavana", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Keisuke", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "6078--6087", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Pe- ter Clark, and Antoine Bosselut. 2019. Wiqa: A dataset for \"what if...\" reasoning over procedural text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 6078-6087.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "", "middle": [], "last": "Debut", "suffix": "" }, { "first": "J", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "C", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "P", "middle": [], "last": "Moi", "suffix": "" }, { "first": "", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "", "middle": [], "last": "Rault", "suffix": "" }, { "first": "", "middle": [], "last": "Louf", "suffix": "" }, { "first": "", "middle": [], "last": "Funtowicz", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, L Debut, V Sanh, J Chaumond, C De- langue, A Moi, P Cistac, T Rault, R Louf, M Fun- towicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. 
ArXiv, abs/1910.03771.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "The amazing world of generation", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" } ], "year": 2020, "venue": "EMNLP tutorials", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf Yangfeng Ji, Antoine Bosselut and Asli Celikyilmaz. 2020. The amazing world of genera- tion. EMNLP tutorials.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Constructing and embedding abstract event causality networks from text snippets", "authors": [ { "first": "Sendong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Quan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Massung", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "T", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Tenth ACM International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sendong Zhao, Quan Wang, Sean Massung, Bing Qin, T. Liu, Bin Wang, and ChengXiang Zhai. 2017. Con- structing and embedding abstract event causality net- works from text snippets. Proceedings of the Tenth ACM International Conference on Web Search and Data Mining.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Algorithm 1: ITERATIVEGRAPHGEN (IGEN): generating st graphs with CURIE" }, "TABREF2": { "content": "
Dataset | train | dev | test
WIQA | 119.2k | 34.8k | 34.8k
QUAREL | 4.6k | 1.3k | 652
DEFEASIBLE | 200k
", "type_str": "table", "text": "Overview of experiments", "html": null, "num": null }, "TABREF5": { "content": "", "type_str": "table", "text": "", "html": null, "num": null }, "TABREF7": { "content": "
", "type_str": "table", "text": "Results of human evaluation. The numbers show the percentage(%) of times a particular option was selected for each metric.", "html": null, "num": null }, "TABREF10": { "content": "
Path Length | Question | Answer | Predicted Answer | Topic Matches
3 | hurts more conservation achieved | LESS flowers being pollinated | less nectar available. | Yes
2 | hurts the eggs become food for other fish | MORE fish | Less larvae eat and grow. | Yes
2 | helps more magma inside volcano | more magma changes in pressure | MORE/GREATER eruptions?. | Yes
2 | helps less commercial fishing | more fry emerge | LESS damage by acid rain.
2 | hurts more stormy weather occurs | less plant growth occurs | MORE vegetables. | Yes
2 | helps more pumpkin seeds planted | MORE or LARGER pumpkins | more water used for more flowers. | No
2 | hurts more Global warming causes extreme temperatures | Rains are plentiful and more regular | MORE vegetables?. | Yes
2 | helps warmer weather evaporates more water | a MORE INTENSE water cycle | MORE/STRONGER storms?. | Yes
2 | helps dry hot environment evaporates water | LESS frogs | MORE or LARGER frogs. | Yes
3 | helps stronger heat source | MORE evaporation | more heat causes the molecules to increase in energy. | Yes
2 | helps living in a rain forest | more water collects in the bodies of water | MORE salt being removed from the water. | No
2 | hurts there is no tadpole from the egg | MORE frogs | MORE ELABORATE swimming. | No
1 | helps more pulling and stretching of tetonic plates | more cracks in earths crust | MORE or STRONGER earthquakes. | Yes
2 | hurts less animals that hunt frogs | less tadpoles loses their tails | more fish grow bigger. | No
2 | hurts both kidneys are present and functioning | less waste is removed from the blood | less waste is removed in the blood. | Yes
", "type_str": "table", "text": "Sample Generations. Topic matches captures whether the topic of the generated event matches with the context. (section 3). Path length = 1 refers to the immediate effects, and Path length > 1 refers to eventual effects.", "html": null, "num": null }, "TABREF11": { "content": "
Path Length | Question | Answer | Predicted Answer | Topic Matches
2 | helps the bees have a very hairy leg gene | the bees would carry away from the flower more pollen | a LARGER nectar star. | Yes
2 | hurts If more eggs are layed | MORE frogs | the mouth will grow smaller. | No
1 | hurts bees are imported | fewer bees land on flowers | a SMALLER hive. | No
1 | hurts more adolescent fish grow to adulthood | fewer fish can lay more eggs | LESS damage by acid rain. | No
2 | helps the heat rises | greater precipitations will happen | MORE/STRONGER . | Yes
2 | helps All the eggs were eaten | There were few eggs laid | less eggs are laid.. | Yes
1 | hurts plates move away from each other | edges of plates crumple more | MORE or GREATER eruptions. | Yes
1 | hurts more proteins available | less help occurs | less endowment of nucleotides. | Yes
", "type_str": "table", "text": "Sample Generations. Topic matches captures whether the topic of the generated event matches with the context. (section 3). Path length = 1 refers to the immediate effects, and Path length > 1 refers to eventual effects. (section 3).", "html": null, "num": null } } } }