{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:35:10.704135Z" }, "title": "Knowledge-Augmented Language Models for Cause-Effect Relation Classification", "authors": [ { "first": "Pedram", "middle": [], "last": "Hosseini", "suffix": "", "affiliation": { "laboratory": "", "institution": "The George Washington University", "location": {} }, "email": "phosseini@gwu.edu" }, { "first": "David", "middle": [ "A" ], "last": "Broniatowski", "suffix": "", "affiliation": { "laboratory": "", "institution": "The George Washington University", "location": {} }, "email": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "", "affiliation": { "laboratory": "", "institution": "The George Washington University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Previous studies have shown the efficacy of knowledge augmentation methods in pretrained language models. However, these methods behave differently across domains and downstream tasks. In this work, we investigate the augmentation of pretrained language models with knowledge graph data in the causeeffect relation classification and commonsense causal reasoning tasks. After automatically verbalizing triples in ATOMIC 20 20 , a wide coverage commonsense reasoning knowledge graph, we continually pretrain BERT and evaluate the resulting model on cause-effect pair classification and answering commonsense causal reasoning questions. Our results show that a continually pretrained language model augmented with commonsense reasoning knowledge outperforms our baselines on two commonsense causal reasoning benchmarks, COPA and BCOPA-CE, and a Temporal and Causal Reasoning (TCR) dataset, without additional improvement in model architecture or using quality-enhanced data for fine-tuning.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Previous studies have shown the efficacy of knowledge augmentation methods in pretrained language models. However, these methods behave differently across domains and downstream tasks. In this work, we investigate the augmentation of pretrained language models with knowledge graph data in the causeeffect relation classification and commonsense causal reasoning tasks. After automatically verbalizing triples in ATOMIC 20 20 , a wide coverage commonsense reasoning knowledge graph, we continually pretrain BERT and evaluate the resulting model on cause-effect pair classification and answering commonsense causal reasoning questions. Our results show that a continually pretrained language model augmented with commonsense reasoning knowledge outperforms our baselines on two commonsense causal reasoning benchmarks, COPA and BCOPA-CE, and a Temporal and Causal Reasoning (TCR) dataset, without additional improvement in model architecture or using quality-enhanced data for fine-tuning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatic extraction and classification of causal relations in text has been an important yet challenging task in natural language understanding. Early methods in the 80s and 90s (Joskowicz et al., 1989; Kaplan and Berry-Rogghe, 1991; Garcia et al., 1997; Khoo et al., 1998) mainly relied on defining hand-crafted rules to find cause-effect relations. 
Starting in 2000, machine learning tools were utilized in building causal relation extraction models (Girju, 2003; Choi, 2004, 2006; Blanco et al., 2008; Do et al., 2011; Hashimoto et al., 2012; Hidey and McKeown, 2016). Word embeddings and Pretrained Language Models (PLMs) have also been leveraged in recent years in training models for understanding causality in language (Dunietz et al., 2018; Pennington et al., 2014; Dasgupta et al., 2018; Gao et al., 2019).", "cite_spans": [ { "start": 179, "end": 203, "text": "(Joskowicz et al., 1989;", "ref_id": "BIBREF20" }, { "start": 204, "end": 234, "text": "Kaplan and Berry-Rogghe, 1991;", "ref_id": "BIBREF21" }, { "start": 235, "end": 255, "text": "Garcia et al., 1997;", "ref_id": "BIBREF11" }, { "start": 256, "end": 274, "text": "Khoo et al., 1998)", "ref_id": "BIBREF23" }, { "start": 450, "end": 463, "text": "(Girju, 2003;", "ref_id": "BIBREF12" }, { "start": 464, "end": 481, "text": "Choi, 2004, 2006;", "ref_id": null }, { "start": 482, "end": 502, "text": "Blanco et al., 2008;", "ref_id": "BIBREF1" }, { "start": 503, "end": 519, "text": "Do et al., 2011;", "ref_id": "BIBREF8" }, { "start": 520, "end": 543, "text": "Hashimoto et al., 2012;", "ref_id": "BIBREF16" }, { "start": 544, "end": 568, "text": "Hidey and McKeown, 2016)", "ref_id": "BIBREF18" }, { "start": 725, "end": 747, "text": "(Dunietz et al., 2018;", "ref_id": "BIBREF9" }, { "start": 748, "end": 772, "text": "Pennington et al., 2014;", "ref_id": "BIBREF27" }, { "start": 773, "end": 795, "text": "Dasgupta et al., 2018;", "ref_id": "BIBREF5" }, { "start": 796, "end": 813, "text": "Gao et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Investigating the true capability of pretrained language models in understanding causality in text is still an open question. More recently, Knowledge Graphs (KGs) have been used in combination with pretrained language models to address commonsense reasoning. Two examples are Causal-BERT (Li et al., 2020) for guided generation of Cause and Effect and the model introduced by Guan et al. (2020) for commonsense story generation. Motivated by the success of continual pretraining of PLMs for downstream tasks (Gururangan et al., 2020), we explore the impact of commonsense knowledge injection as a form of continual pretraining for causal reasoning and cause-effect relation classification. It is worth highlighting that even though studies have shown the efficacy of knowledge injection with continual pretraining for commonsense reasoning (Guan et al., 2020), the performance of these techniques is highly dependent on the domain and downstream task (Gururangan et al., 2020). Moreover, to the best of our knowledge, there are few studies on the effect of commonsense knowledge injection with knowledge graph data on cause-effect relation classification (Dalal et al., 2021). Our contributions are as follows:", "cite_spans": [ { "start": 289, "end": 306, "text": "(Li et al., 2020)", "ref_id": "BIBREF25" }, { "start": 377, "end": 395, "text": "Guan et al. (2020)", "ref_id": "BIBREF13" }, { "start": 849, "end": 868, "text": "(Guan et al., 2020)", "ref_id": "BIBREF13" }, { "start": 956, "end": 981, "text": "(Gururangan et al., 2020)", "ref_id": "BIBREF14" }, { "start": 1160, "end": 1180, "text": "(Dalal et al., 2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We study the performance of PLMs augmented with knowledge graph data on the less investigated cause-effect relation classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We demonstrate that a simple masked language modeling framework using automatically verbalized knowledge graph triples, without any further model improvement (e.g., a new architecture or loss function) or quality-enhanced data for fine-tuning, can significantly boost performance in cause-effect pair classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We publicly release our knowledge graph verbalization code and continually pretrained models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An overview of our method is shown in Figure 1 . 1 We first convert triples in the ATOMIC 2020 (Hwang et al., 2021) knowledge graph to natural language text. Then we continually pretrain BERT using Masked Language Modeling (MLM) and evaluate the performance of the resulting model on different benchmarks. Samples in ATOMIC 2020 are stored as triples of the form (head/subject, relation, tail/target) in three splits: train, development, and test. ATOMIC 2020 has 23 relation types that are grouped into three categories: social-interaction commonsense relations, physical-entity commonsense relations, and event-centric commonsense relations. In the rest of the paper, we refer to these three categories as social, physical, and event, respectively.", "cite_spans": [ { "start": 50, "end": 51, "text": "1", "ref_id": null }, { "start": 93, "end": 113, "text": "(Hwang et al., 2021)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 39, "end": 47, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "We remove all duplicates and ignore all triples in which the target value is none. Moreover, we ignore all triples that include a blank. Since in masked language modeling we need to know the gold value of the masked tokens, a triple that already has a blank (masked token/word) in it may not help our pretraining. For instance, in the triple [PersonX affords another ___, xAttr, useful], it is hard to know why or understand what it means for a person to be useful without knowing what they afforded. This preprocessing step yields 782,848 triples, with 121,681, 177,706, and 483,461 from the event, physical, and social categories, respectively. The distribution of these relations is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 685, "end": 693, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Filtering Triples", "sec_num": "2.1" },
{ "text": "Each relation in ATOMIC 2020 is associated with a human-readable template. For example, xEffect's and HasPrerequisite's templates are as a result, PersonX will and to do this, one requires, respectively. We use these templates to convert triples in ATOMIC 2020 to natural language sentences by concatenating the subject, relation template, and target. Examples of converting triples to text are shown in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 409, "end": 417, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Converting Triples", "sec_num": "2.2" }, { "text": "When we convert triples to natural language text, ideally we want to have grammatically correct sentences. The human-readable templates provided by ATOMIC 2020 do not necessarily form error-free sentences when concatenated with the subject and target of a triple. To address this issue, we use an open-source grammar and spell checker, LanguageTool, 2 to double-check our converted triples and ensure they do not contain obvious grammatical mistakes or spelling errors. Similar approaches that include deterministic grammatical transformations were also previously used to convert KG triples to coherent sentences (Davison et al., 2019). It is worth pointing out that Data-to-Text generation (KG verbalization) is itself a separate task, and there have been efforts to address it (Agarwal et al., 2021). We leave investigating the effects of using other Data-to-Text and grammar-checking methods to future research.", "cite_spans": [ { "start": 628, "end": 650, "text": "(Davison et al., 2019)", "ref_id": "BIBREF6" }, { "start": 809, "end": 831, "text": "(Agarwal et al., 2021)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Checking Grammar", "sec_num": "2.3" }, { "text": "As mentioned earlier, we use MLM to continually pretrain our PLM, BERT-large-cased (Devlin et al., 2018). We follow the same procedure as BERT to create the input data for our pretraining (e.g., the number of tokens to mask in input examples). We run the pretraining using ATOMIC 2020's train and development splits as our training and evaluation sets, respectively, for 10 epochs on a Google Colab TPU v2 using the PyTorch/XLA package, with a maximum sequence length of 30 and a batch size of 128. 3 To avoid overfitting, we use early stopping with a patience of 3 on the evaluation loss. We select the best model based on the lowest evaluation loss at the end of training. 4", "cite_spans": [ { "start": 83, "end": 104, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" }, { "start": 488, "end": 489, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Continual Pretraining", "sec_num": "2.4" },
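To make the verbalization and pretraining-data preparation of Sections 2.2-2.4 concrete, here is a minimal Python sketch. The two templates and the example triples are illustrative (the paper maps all 23 ATOMIC 2020 relations and adds a LanguageTool pass, both omitted here), and the masking uses Huggingface's standard MLM collator rather than the authors' released code.

```python
# Sketch of template-based verbalization (Section 2.2) and MLM input creation (Section 2.4).
# Templates and triples below are illustrative examples, not the full ATOMIC 2020 mapping.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

TEMPLATES = {
    "xEffect": "as a result, PersonX will",
    "HasPrerequisite": "to do this, one requires",
}

def verbalize(head: str, relation: str, tail: str, name: str = "Tracy") -> str:
    # Concatenate subject, relation template, and target, then replace PersonX
    # with a name (the paper follows Sap et al. (2019) for this replacement).
    return f"{head} {TEMPLATES[relation]} {tail}.".replace("PersonX", name)

sentences = [
    verbalize("PersonX eats breakfast", "xEffect", "feel full"),
    verbalize("buy a ticket", "HasPrerequisite", "money"),
]

# Mask tokens the same way BERT's MLM pretraining does (15% by default),
# using the maximum sequence length of 30 from Section 2.4.
tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
batch = collator([tokenizer(s, truncation=True, max_length=30) for s in sentences])
print(batch["input_ids"].shape, batch["labels"].shape)
```

Feeding batches like this to BertForMaskedLM, whether through the Trainer or a PyTorch/XLA loop as in the paper's setup, is what the continual pretraining step amounts to.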
{ "text": "We chose multiple benchmarks of commonsense causal reasoning and cause-effect relation classification to ensure we thoroughly test the effects of our newly trained models. These benchmarks include: 1) the Temporal and Causal Reasoning (TCR) dataset (Ning et al., 2018), a benchmark for joint reasoning of temporal and causal relations; 2) the Choice Of Plausible Alternatives (COPA) dataset (Roemmele et al., 2011), a widely used and notable benchmark (Rogers et al., 2021) for commonsense causal reasoning; and 3) BCOPA-CE (Han and Wang, 2021), a new benchmark inspired by COPA that contains unbiased token distributions, which makes it more challenging. For COPA-related experiments, since COPA does not have a training set, we use COPA's development set for fine-tuning our models and testing them on COPA's test set (COPA-test) and BCOPA-CE. For hyperparameter tuning, we randomly split COPA's development set into train (90%) and dev (10%) and find the best learning rate, batch size, and number of training epochs based on the evaluation accuracy on the development set. Then, using COPA's original development set and the best set of hyperparameters, we fine-tune our models and evaluate them on the test set. In all experiments, we report the average performance of models using four different random seeds. For TCR, we fine-tune and evaluate our models on its train and test splits, respectively.", "cite_spans": [ { "start": 245, "end": 264, "text": "(Ning et al., 2018)", "ref_id": "BIBREF26" }, { "start": 453, "end": 474, "text": "(Rogers et al., 2021)", "ref_id": "BIBREF30" }, { "start": 525, "end": 545, "text": "(Han and Wang, 2021)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Benchmarks", "sec_num": "3.1" }, { "text": "We use the bert-large-cased pretrained model as our baseline in all experiments. For COPA and BCOPA-CE, we convert all instances to SWAG-formatted data (Zellers et al., 2018) and use Huggingface's BertForMultipleChoice, a BERT model with a multiple-choice classification head on top. For TCR, we convert every instance by adding special tokens to the input sequences as event boundaries and use the R-BERT 5 model (Wu and He, 2019). We chose R-BERT for relation classification since it not only leverages the pretrained embeddings but also transfers information about the target entities (e.g., the events in a relation) through the model's architecture and incorporates their encodings. Examples of COPA and TCR are shown in Figure 4 . BCOPA-CE has the same format as COPA.", "cite_spans": [ { "start": 151, "end": 173, "text": "(Zellers et al., 2018)", "ref_id": "BIBREF33" }, { "start": 413, "end": 430, "text": "(Wu and He, 2019)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 733, "end": 741, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Models and Baseline", "sec_num": "3.2" }, { "text": "[Figure 4 content] COPA example (asks-for=\"cause\"): Premise: The computer crashed. Alternative 1: I backed up my files. Alternative 2: I downloaded a virus. TCR example: The death toll climbed to 99 on Sunday after a suicide car bomb exploded Friday in the middle of a group of men playing volleyball in northwest Pakistan, police said.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "COPA TCR", "sec_num": null },
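The multiple-choice setup in Section 3.2 can be sketched as follows. This is a schematic of how a COPA instance (the Figure 4 example) is paired with its two alternatives and scored by BertForMultipleChoice; it is not the authors' fine-tuning code, and the checkpoint here is plain bert-large-cased rather than the continually pretrained model.

```python
# Schematic of the multiple-choice formulation in Section 3.2 (not the authors' code):
# each alternative is paired with the premise, and BertForMultipleChoice scores the pair.
import torch
from transformers import AutoTokenizer, BertForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")
model = BertForMultipleChoice.from_pretrained("bert-large-cased")

premise = "The computer crashed."
alternatives = ["I backed up my files.", "I downloaded a virus."]

# Encode (premise, alternative) pairs, then add a batch dimension so the tensors
# have shape (batch_size=1, num_choices=2, sequence_length).
encoded = tokenizer([premise] * len(alternatives), alternatives, return_tensors="pt", padding=True)
inputs = {name: tensor.unsqueeze(0) for name, tensor in encoded.items()}

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, 2), one score per alternative
predicted = logits.argmax(dim=-1).item()
print("Predicted alternative:", alternatives[predicted])
```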
{ "text": "Results of our experiments on TCR are shown in Table 1 . As can be seen, our model significantly outperforms both our baseline and the joint inference framework of Ning et al. (2018), which is formulated as an integer linear programming (ILP) problem.", "cite_spans": [ { "start": 165, "end": 183, "text": "Ning et al. (2018)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "[Table 1] Model / Acc (%): Joint system (Ning et al., 2018) 77.3; BERT-large (baseline) \u2748 75.0; ATOMIC-BERT-large MLM \u2748 91.0. Results of experiments on COPA-test are shown in Table 2 . We initially observed that a continually pretrained model using all three types of relations has lower performance than our baseline. By taking a closer look at each relation type, we decided to train another model, this time only using the event relations. The reason is that event-centric relations in ATOMIC 2020 specifically contain commonsense knowledge about event interactions for understanding likely causal relations between events in the world (Hwang et al., 2021). In addition, event relations have a relatively longer context (number of tokens) than the average of all three relation types combined, which means more context for a model to learn from. Our new pretrained model outperformed the baseline by nearly 5%, which shows the effect of augmenting a pretrained language model with commonsense reasoning knowledge.", "cite_spans": [ { "start": 27, "end": 46, "text": "(Ning et al., 2018)", "ref_id": "BIBREF26" }, { "start": 625, "end": 645, "text": "(Hwang et al., 2021)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 160, "end": 167, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "[Table 2] Model / Acc (%): PMI (Roemmele et al., 2011) 58.8; b-l-reg (Han and Wang, 2021) 71.1; Google T5-base (Raffel et al., 2019) 71.2; BERT-large (Kavumba et al., 2019) 76.5; CausalBERT (Li et al., 2020) 78.6; BERT-SocialIQA (Sap et al., 2019) * 80.1; BERT-large (baseline) \u2748 74.4; ATOMIC-BERT-large MLM \u2748 -Event only 79.2; Google T5-11B (Raffel et al., 2019) 94.8; DeBERTa-1.5B (He et al., 2020) 96.8. We further experiment on the Easy and Hard question splits of COPA-test separated by Kavumba et al. (2019) to see how our best model performs on harder questions that do not contain superficial cues. Results are shown in Table 3 . As can be seen, our ATOMIC-BERT model significantly outperforms both the baseline and prior models on the Easy and Hard questions.", "cite_spans": [ { "start": 12, "end": 35, "text": "(Roemmele et al., 2011)", "ref_id": "BIBREF29" }, { "start": 49, "end": 69, "text": "(Han and Wang, 2021)", "ref_id": "BIBREF15" }, { "start": 90, "end": 111, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF28" }, { "start": 128, "end": 150, "text": "(Kavumba et al., 2019)", "ref_id": "BIBREF22" }, { "start": 167, "end": 184, "text": "(Li et al., 2020)", "ref_id": "BIBREF25" }, { "start": 205, "end": 225, "text": "(Sap et al., 2019) *", "ref_id": "BIBREF31" }, { "start": 316, "end": 337, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF28" }, { "start": 356, "end": 373, "text": "(He et al., 2020)", "ref_id": "BIBREF17" }, { "start": 464, "end": 485, "text": "Kavumba et al. (2019)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 600, "end": 607, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "[Table 3] Easy \u2191 / Hard \u2191: (Han and Wang, 2021) - / 69.7; (Kavumba et al., 2019) 83.9 / 71.9; BERT-large (baseline) \u2748 83.0 / 69.2; ATOMIC-BERT-large \u2748 88.9 / 73.1. It is worth mentioning three points here. First, our model, BERT-large, has significantly fewer parameters than the state-of-the-art models, Google T5-11B (\u223c32x) and DeBERTa-1.5B (\u223c4x), which shows how smaller models can be competitive and benefit from continual pretraining. Second, we have not yet applied any model improvement methods, such as the margin-based loss introduced by Li et al. (2019) and used in Causal-BERT (Li et al., 2020), the extra regularization loss proposed by Han and Wang (2021), or fine-tuning with the quality-enhanced training data, BCOPA, introduced by Kavumba et al. (2019). As a result, there is still considerable room to improve the current models, which would be a natural next step. Third, we achieved performance on par with BERT-SocialIQA (Sap et al., 2019) 6 without using crowdsourcing or any manual re-writing/correction, which is expensive, to verbalize the KG triples for our pretraining data.", "cite_spans": [ { "start": 41, "end": 63, "text": "(Kavumba et al., 2019)", "ref_id": "BIBREF22" }, { "start": 553, "end": 570, "text": "(Li et al., 2020)", "ref_id": "BIBREF25" }, { "start": 614, "end": 633, "text": "Han and Wang (2021)", "ref_id": "BIBREF15" }, { "start": 709, "end": 730, "text": "Kavumba et al. (2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Acc (%) b-l-aug (Han and Wang, 2021) Table 4 : BCOPA-CE Accuracy results. \u2748 Our models.", "cite_spans": [ { "start": 16, "end": 36, "text": "(Han and Wang, 2021)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 37, "end": 44, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "* Base model in b-l-* is BERT-large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Results of experiments on BCOPA-CE are shown in Table 4 . As expected based on the results reported by Han and Wang (2021), we initially observed that our models perform close to the random baseline. Since we do not use the question type when encoding input sequences, we decided to see whether adding the question type as a prompt to the input sequences would improve performance. We added It is because and As a result, as prompts for asks-for=\"cause\" and asks-for=\"effect\", respectively. Interestingly, when question types are added as prompts to the input sequences of correct and incorrect answers in the test set, the new model outperforms both the baseline and Han and Wang (2021)'s b-l-aug model, which is fine-tuned with the same data as ours. We also ran a similar experiment on COPA-test (Table 5), in which adding the prompt did not improve performance.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 55, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "BCOPA-CE: Prompt vs. No Prompt", "sec_num": "4.1" }, { "text": "[Table 5] Train \u2717 Prompt: Test \u2717 Prompt 79.2, Test \u2713 Prompt 76.4; Train \u2713 Prompt: Test \u2717 Prompt 75.5, Test \u2713 Prompt 77.9. Table 5 : COPA-test Accuracy ablation study results for prompt vs. no prompt.", "cite_spans": [], "ref_spans": [ { "start": 69, "end": 76, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "BCOPA-CE: Prompt vs. No Prompt", "sec_num": "4.1" },
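The prompt construction in Section 4.1 is simple enough to show as a sketch. The field names below are illustrative rather than the authors' data schema; the sketch only shows how It is because and As a result, would be prepended to each alternative before the instance is converted to the multiple-choice format of Section 3.2.

```python
# Sketch of the question-type prompts from Section 4.1 (illustrative field names, not the authors' schema).
PROMPTS = {"cause": "It is because", "effect": "As a result,"}

def build_choices(premise, alternatives, asks_for, use_prompt=True):
    prompt = PROMPTS[asks_for] if use_prompt else ""
    # One (premise, alternative) pair per choice; the prompt is prepended to the alternative.
    return [(premise, f"{prompt} {alternative}".strip()) for alternative in alternatives]

# The COPA example from Figure 4, with asks-for="cause".
for context, continuation in build_choices(
    "The computer crashed.",
    ["I backed up my files.", "I downloaded a virus."],
    asks_for="cause",
):
    print(context, "->", continuation)
```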
{ "text": "We introduced a simple framework for augmenting PLMs with commonsense knowledge created by automatically verbalizing ATOMIC 2020. Our results show that commonsense knowledge-augmented PLMs outperform the original PLMs on cause-effect pair classification and answering commonsense causal reasoning questions. As the next step, it would be interesting to see how previously proposed model improvement methods or unbiased fine-tuning datasets can further enhance the performance of our knowledge-augmented models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Code and models are publicly available at https://github.com/phosseini/causal-reasoning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://tinyurl.com/yc77k3fb 3 99.99% of ATOMIC 2020 instances have 30 tokens or fewer. 4 We use Huggingface's BertForMaskedLM implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the following implementation of R-BERT: https://github.com/monologg/R-BERT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our best random seed run achieved 81.4% accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training", "authors": [ { "first": "Oshin", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Heming", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Siamak", "middle": [], "last": "Shakeri", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "3554--3565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3554-3565.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Causal relation extraction", "authors": [ { "first": "Eduardo", "middle": [], "last": "Blanco", "suffix": "" }, { "first": "Nuria", "middle": [], "last": "Castell", "suffix": "" }, { "first": "Dan", "middle": [ "I" ], "last": "Moldovan", "suffix": "" } ], "year": 2008, "venue": "Lrec", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduardo Blanco, Nuria Castell, and Dan I Moldovan. 2008. Causal relation extraction. In Lrec.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Causal relation extraction using cue phrase and lexical pair probabilities", "authors": [ { "first": "Du-Seong", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2004, "venue": "International Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "61--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Du-Seong Chang and Key-Sun Choi. 2004. Causal relation extraction using cue phrase and lexical pair probabilities. In International Conference on Natural Language Processing, pages 61-70. Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Incremental cue phrase learning and bootstrapping method for causality extraction using cue phrase and word pair probabilities. 
Information processing & management", "authors": [ { "first": "Du-Seong", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2006, "venue": "", "volume": "42", "issue": "", "pages": "662--678", "other_ids": {}, "num": null, "urls": [], "raw_text": "Du-Seong Chang and Key-Sun Choi. 2006. Incremen- tal cue phrase learning and bootstrapping method for causality extraction using cue phrase and word pair probabilities. Information processing & manage- ment, 42(3):662-678.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enhancing multiple-choice question answering with causal knowledge", "authors": [ { "first": "Dhairya", "middle": [], "last": "Dalal", "suffix": "" }, { "first": "Mihael", "middle": [], "last": "Arcan", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2021, "venue": "Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures", "volume": "", "issue": "", "pages": "70--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dhairya Dalal, Mihael Arcan, and Paul Buitelaar. 2021. Enhancing multiple-choice question answering with causal knowledge. In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowl- edge Extraction and Integration for Deep Learning Architectures, pages 70-80.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic extraction of causal relations from text using linguistically informed deep neural networks", "authors": [ { "first": "Tirthankar", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Rupsa", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Lipika", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Abir", "middle": [], "last": "Naskar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "306--316", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tirthankar Dasgupta, Rupsa Saha, Lipika Dey, and Abir Naskar. 2018. Automatic extraction of causal rela- tions from text using linguistically informed deep neural networks. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 306-316.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Commonsense knowledge mining from pretrained models", "authors": [ { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1173--1178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joe Davison, Joshua Feldman, and Alexander M Rush. 2019. Commonsense knowledge mining from pre- trained models. 
In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1173-1178.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Minimally supervised event causality identification", "authors": [ { "first": "Yee", "middle": [], "last": "Quang Xuan Do", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Seng Chan", "suffix": "" }, { "first": "", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "294--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quang Xuan Do, Yee Seng Chan, and Dan Roth. 2011. Minimally supervised event causality identification. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing, pages 294-303. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Deepcx: A transition-based approach for shallow semantic parsing with complex constructional triggers", "authors": [ { "first": "Jesse", "middle": [], "last": "Dunietz", "suffix": "" }, { "first": "G", "middle": [], "last": "Jaime", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "", "middle": [], "last": "Levin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1691--1701", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesse Dunietz, Jaime G Carbonell, and Lori Levin. 2018. Deepcx: A transition-based approach for shallow se- mantic parsing with complex constructional triggers. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 1691-1701.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Modeling document-level causal structures for event causal relation identification", "authors": [ { "first": "Lei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Kumar Choubey", "suffix": "" }, { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1808--1817", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Gao, Prafulla Kumar Choubey, and Ruihong Huang. 2019. Modeling document-level causal structures for event causal relation identification. 
In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1808-1817.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Coatis, an nlp system to locate expressions of actions connected by causality links", "authors": [ { "first": "Daniela", "middle": [], "last": "Garcia", "suffix": "" } ], "year": 1997, "venue": "International Conference on Knowledge Engineering and Knowledge Management", "volume": "", "issue": "", "pages": "347--352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniela Garcia et al. 1997. Coatis, an nlp system to locate expressions of actions connected by causality links. In International Conference on Knowledge En- gineering and Knowledge Management, pages 347- 352. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automatic detection of causal relations for question answering", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL 2003 workshop on Multilingual summarization and question answering", "volume": "12", "issue": "", "pages": "76--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Girju. 2003. Automatic detection of causal re- lations for question answering. In Proceedings of the ACL 2003 workshop on Multilingual summariza- tion and question answering-Volume 12, pages 76-83. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A knowledge-enhanced pretraining model for commonsense story generation", "authors": [ { "first": "Jian", "middle": [], "last": "Guan", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zhihao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "93--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A knowledge-enhanced pre- training model for commonsense story generation. Transactions of the Association for Computational Linguistics, 8:93-108.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8342--8360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Doing good or doing right? exploring the weakness of commonsense causal reasoning models", "authors": [ { "first": "Mingyue", "middle": [], "last": "Han", "suffix": "" }, { "first": "Yinglin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "151--157", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-short.20" ] }, "num": null, "urls": [], "raw_text": "Mingyue Han and Yinglin Wang. 2021. Doing good or doing right? exploring the weakness of common- sense causal reasoning models. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol- ume 2: Short Papers), pages 151-157, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Excitatory or inhibitory: A new semantic orientation extracts contradiction and causality from the web", "authors": [ { "first": "Chikara", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Torisawa", "suffix": "" }, { "first": "", "middle": [], "last": "Stijn De", "suffix": "" }, { "first": "Jong-Hoon", "middle": [], "last": "Saeger", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Oh", "suffix": "" }, { "first": "", "middle": [], "last": "Kazama", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "619--630", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chikara Hashimoto, Kentaro Torisawa, Stijn De Saeger, Jong-Hoon Oh, and Jun'ichi Kazama. 2012. Ex- citatory or inhibitory: A new semantic orientation extracts contradiction and causality from the web. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, pages 619-630. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deberta: Decoding-enhanced bert with disentangled attention", "authors": [ { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.03654" ] }, "num": null, "urls": [], "raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. 
arXiv preprint arXiv:2006.03654.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Identifying causal relations using parallel Wikipedia articles", "authors": [ { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1424--1433", "other_ids": { "DOI": [ "10.18653/v1/P16-1135" ] }, "num": null, "urls": [], "raw_text": "Christopher Hidey and Kathy McKeown. 2016. Identi- fying causal relations using parallel Wikipedia arti- cles. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1424-1433, Berlin, Ger- many. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Comet-atomic 2020: On symbolic and neural commonsense knowledge graphs", "authors": [ { "first": "Jena", "middle": [ "D" ], "last": "Hwang", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "Keisuke", "middle": [], "last": "Da", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2021, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. Comet-atomic 2020: On sym- bolic and neural commonsense knowledge graphs. In AAAI.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Deep domain models for discourse analysis", "authors": [ { "first": "Leo", "middle": [], "last": "Joskowicz", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Ksiezyck", "suffix": "" }, { "first": "", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1989, "venue": "Proceedings. The Annual AI Systems in Government Conference", "volume": "", "issue": "", "pages": "195--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leo Joskowicz, T Ksiezyck, and Ralph Grishman. 1989. Deep domain models for discourse analysis. In [1989] Proceedings. The Annual AI Systems in Gov- ernment Conference, pages 195-200. IEEE.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Knowledge-based acquisition of causal relationships in text", "authors": [ { "first": "M", "middle": [], "last": "Randy", "suffix": "" }, { "first": "Genevieve", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "", "middle": [], "last": "Berry-Rogghe", "suffix": "" } ], "year": 1991, "venue": "Knowledge Acquisition", "volume": "3", "issue": "3", "pages": "317--337", "other_ids": {}, "num": null, "urls": [], "raw_text": "Randy M Kaplan and Genevieve Berry-Rogghe. 1991. Knowledge-based acquisition of causal relationships in text. 
Knowledge Acquisition, 3(3):317-337.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "When choosing plausible alternatives, clever hans can be clever", "authors": [ { "first": "Pride", "middle": [], "last": "Kavumba", "suffix": "" }, { "first": "Naoya", "middle": [], "last": "Inoue", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Heinzerling", "suffix": "" }, { "first": "Keshav", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Reisert", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pride Kavumba, Naoya Inoue, Benjamin Heinzerling, Keshav Singh, Paul Reisert, and Kentaro Inui. 2019. When choosing plausible alternatives, clever hans can be clever. EMNLP 2019, page 33.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Automatic extraction of cause-effect information from newspaper text without knowledge-based inferencing", "authors": [ { "first": "S", "middle": [ "G" ], "last": "Christopher", "suffix": "" }, { "first": "Jaklin", "middle": [], "last": "Khoo", "suffix": "" }, { "first": "", "middle": [], "last": "Kornfilt", "suffix": "" }, { "first": "N", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Sung", "middle": [ "Hyon" ], "last": "Oddy", "suffix": "" }, { "first": "", "middle": [], "last": "Myaeng", "suffix": "" } ], "year": 1998, "venue": "Literary and Linguistic Computing", "volume": "13", "issue": "4", "pages": "177--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher SG Khoo, Jaklin Kornfilt, Robert N Oddy, and Sung Hyon Myaeng. 1998. Automatic extrac- tion of cause-effect information from newspaper text without knowledge-based inferencing. Literary and Linguistic Computing, 13(4):177-186.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Learning to rank for plausible plausibility", "authors": [ { "first": "Zhongyang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tongfei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4818--4823", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhongyang Li, Tongfei Chen, and Benjamin Van Durme. 2019. Learning to rank for plausible plausibility. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4818- 4823.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Guided generation of cause and effect", "authors": [ { "first": "Zhongyang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhongyang Li, Xiao Ding, Ting Liu, J Edward Hu, and Benjamin Van Durme. 2020. Guided generation of cause and effect. 
IJCAI.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Joint reasoning for temporal and causal relations", "authors": [ { "first": "Qiang", "middle": [], "last": "Ning", "suffix": "" }, { "first": "Zhili", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2278--2288", "other_ids": { "DOI": [ "10.18653/v1/P18-1212" ] }, "num": null, "urls": [], "raw_text": "Qiang Ning, Zhili Feng, Hao Wu, and Dan Roth. 2018. Joint reasoning for temporal and causal relations. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2278-2288, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.10683" ] }, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Choice of plausible alternatives: An evaluation of commonsense causal reasoning", "authors": [ { "first": "Melissa", "middle": [], "last": "Roemmele", "suffix": "" }, { "first": "Andrew S", "middle": [], "last": "Cosmin Adrian Bejan", "suffix": "" }, { "first": "", "middle": [], "last": "Gordon", "suffix": "" } ], "year": 2011, "venue": "2011 AAAI Spring Symposium Series", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melissa Roemmele, Cosmin Adrian Bejan, and An- drew S Gordon. 2011. 
Choice of plausible alter- natives: An evaluation of commonsense causal rea- soning. In 2011 AAAI Spring Symposium Series.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Qa dataset explosion: A taxonomy of nlp resources for question answering and reading comprehension", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2107.12708" ] }, "num": null, "urls": [], "raw_text": "Anna Rogers, Matt Gardner, and Isabelle Augenstein. 2021. Qa dataset explosion: A taxonomy of nlp resources for question answering and reading com- prehension. arXiv preprint arXiv:2107.12708.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Social IQa: Commonsense reasoning about social interactions", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4463--4473", "other_ids": { "DOI": [ "10.18653/v1/D19-1454" ] }, "num": null, "urls": [], "raw_text": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Com- monsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4463- 4473, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Enriching pretrained language model with entity information for relation classification", "authors": [ { "first": "Shanchan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "He", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 28th ACM international conference on information and knowledge management", "volume": "", "issue": "", "pages": "2361--2364", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shanchan Wu and Yifan He. 2019. Enriching pre- trained language model with entity information for relation classification. In Proceedings of the 28th ACM international conference on information and knowledge management, pages 2361-2364.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Swag: A large-scale adversarial dataset for grounded commonsense inference", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "93--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. 
Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Overview of our proposed framework to continually pretrain PLMs with commonsense reasoning knowledge.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Examples of converting two triples in ATOMIC 20 20 to natural language text using human readable templates. FollowingSap et al. (2019), we replace PersonX with a name.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "COPA and TCR examples. The COPA instance is converted to Multiple Choice format.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "51.1 b-l-reg (Han and Wang, 2021) 64.1 BERT-large (baseline) \u2748 55.8 ATOMIC-BERT-large M LM \u2748 -Event only 58.1", "uris": null, "type_str": "figure", "num": null }, "TABREF2": { "content": "", "text": "TCR Accuracy results. \u2748 Our models", "html": null, "type_str": "table", "num": null }, "TABREF3": { "content": "
For a fair comparison, we report BERT-SocialIQA's average performance.
", "text": "COPA-test Accuracy results. \u2748 Our models.", "html": null, "type_str": "table", "num": null }, "TABREF4": { "content": "", "text": "COPA-test Accuracy results on Easy and Hard question subsets. \u2748 Our models.", "html": null, "type_str": "table", "num": null } } } }