{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:35:10.704135Z"
},
"title": "Knowledge-Augmented Language Models for Cause-Effect Relation Classification",
"authors": [
{
"first": "Pedram",
"middle": [],
"last": "Hosseini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {}
},
"email": "phosseini@gwu.edu"
},
{
"first": "David",
"middle": [
"A"
],
"last": "Broniatowski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {}
},
"email": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Previous studies have shown the efficacy of knowledge augmentation methods in pretrained language models. However, these methods behave differently across domains and downstream tasks. In this work, we investigate the augmentation of pretrained language models with knowledge graph data in the causeeffect relation classification and commonsense causal reasoning tasks. After automatically verbalizing triples in ATOMIC 20 20 , a wide coverage commonsense reasoning knowledge graph, we continually pretrain BERT and evaluate the resulting model on cause-effect pair classification and answering commonsense causal reasoning questions. Our results show that a continually pretrained language model augmented with commonsense reasoning knowledge outperforms our baselines on two commonsense causal reasoning benchmarks, COPA and BCOPA-CE, and a Temporal and Causal Reasoning (TCR) dataset, without additional improvement in model architecture or using quality-enhanced data for fine-tuning.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Previous studies have shown the efficacy of knowledge augmentation methods in pretrained language models. However, these methods behave differently across domains and downstream tasks. In this work, we investigate the augmentation of pretrained language models with knowledge graph data in the causeeffect relation classification and commonsense causal reasoning tasks. After automatically verbalizing triples in ATOMIC 20 20 , a wide coverage commonsense reasoning knowledge graph, we continually pretrain BERT and evaluate the resulting model on cause-effect pair classification and answering commonsense causal reasoning questions. Our results show that a continually pretrained language model augmented with commonsense reasoning knowledge outperforms our baselines on two commonsense causal reasoning benchmarks, COPA and BCOPA-CE, and a Temporal and Causal Reasoning (TCR) dataset, without additional improvement in model architecture or using quality-enhanced data for fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic extraction and classification of causal relations in text has been an important yet challenging task in natural language understanding. Early methods in the 80s and 90s (Joskowicz et al., 1989; Kaplan and Berry-Rogghe, 1991; Garcia et al., 1997; Khoo et al., 1998) mainly relied on defining hand-crafted rules to find cause-effect relations. Starting 2000, machine learning tools were utilized in building causal relation extraction models (Girju, 2003; Choi, 2004, 2006; Blanco et al., 2008; Do et al., 2011; Hashimoto et al., 2012; Hidey and McKeown, 2016) . Word-embeddings and Pretrained Language Models (PLMs) have also been leveraged in training models for understanding causality in language in recent years (Dunietz et al., 2018; Pennington et al., 2014; Dasgupta et al., 2018; Gao et al., 2019) .",
"cite_spans": [
{
"start": 179,
"end": 203,
"text": "(Joskowicz et al., 1989;",
"ref_id": "BIBREF20"
},
{
"start": 204,
"end": 234,
"text": "Kaplan and Berry-Rogghe, 1991;",
"ref_id": "BIBREF21"
},
{
"start": 235,
"end": 255,
"text": "Garcia et al., 1997;",
"ref_id": "BIBREF11"
},
{
"start": 256,
"end": 274,
"text": "Khoo et al., 1998)",
"ref_id": "BIBREF23"
},
{
"start": 450,
"end": 463,
"text": "(Girju, 2003;",
"ref_id": "BIBREF12"
},
{
"start": 464,
"end": 481,
"text": "Choi, 2004, 2006;",
"ref_id": null
},
{
"start": 482,
"end": 502,
"text": "Blanco et al., 2008;",
"ref_id": "BIBREF1"
},
{
"start": 503,
"end": 519,
"text": "Do et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 520,
"end": 543,
"text": "Hashimoto et al., 2012;",
"ref_id": "BIBREF16"
},
{
"start": 544,
"end": 568,
"text": "Hidey and McKeown, 2016)",
"ref_id": "BIBREF18"
},
{
"start": 725,
"end": 747,
"text": "(Dunietz et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 748,
"end": 772,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF27"
},
{
"start": 773,
"end": 795,
"text": "Dasgupta et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 796,
"end": 813,
"text": "Gao et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Investigating the true capability of pretrained language models in understanding causality in text is still an open question. More recently, Knowledge Graphs (KGs) have been used in combination with pretrained language models to address commonsense reasoning. Two examples are Causal-BERT (Li et al., 2020) for guided generation of Cause and Effect and the model introduced by Guan et al. (2020) for commonsense story generation. Motivated by the success of continual pretraining of PLMs for downstream tasks (Gururangan et al., 2020), we explore the impact of common sense knowledge injection as a form of continual pretraining for causal reasoning and cause-effect relation classification. It is worth highlighting that even though there are studies to show the efficacy of knowledge injection with continual pretraining for commonsense reasoning (Guan et al., 2020) , performance of these techniques is very dependent on the domain and downstream tasks (Gururangan et al., 2020) . And, to the best of our knowledge, there are limited studies on the effect of commonsense knowledge injection with knowledge graph data on cause-effect relation classification (Dalal et al., 2021) . Our contributions are as follows:",
"cite_spans": [
{
"start": 289,
"end": 306,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 377,
"end": 395,
"text": "Guan et al. (2020)",
"ref_id": "BIBREF13"
},
{
"start": 849,
"end": 868,
"text": "(Guan et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 956,
"end": 981,
"text": "(Gururangan et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 1160,
"end": 1180,
"text": "(Dalal et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We study performance of PLMs augmented with knowledge graph data in the less investigated cause-effect relation classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We demonstrate that a simple masked language modeling framework using automatically verbalized knowledge graph triples, without any further model improvement (e.g., new architecture or loss function) or quality enhanced data for fine-tuning, can significantly boost the performance in cause-effect pair classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We publicly release our knowledge graph verbalization codes and continually pretrained models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The overview of our method is shown in Figure 1 . 1 We first convert triples in ATOMIC 20 20 (Hwang et al., 2021) knowledge graph to natural language texts. Then we continually pretrain BERT using Masked Language Modeling (MLM) and evaluate performance of the resulting model on different benchmarks. Samples in ATOMIC 20 20 are stored as triples in the form of (head/subject, relation, tail/target) in three splits including train, development, and test. ATOMIC 20 20 has 23 relation types that are classified into three categorical types including commonsense relations of social interactions, physicalentity commonsense relations, and event-centric commonsense relations. In the rest of the paper, we refer to these three categories as social, physical, and event, respectively.",
"cite_spans": [
{
"start": 50,
"end": 51,
"text": "1",
"ref_id": null
},
{
"start": 93,
"end": 113,
"text": "(Hwang et al., 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 39,
"end": 47,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
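To make the triple format above concrete, here is a minimal sketch of reading one ATOMIC 2020 split into (head, relation, tail) tuples. It assumes a tab-separated release with one triple per line and no header row; the local file path is illustrative and not prescribed by the paper.

```python
# Minimal sketch: load one ATOMIC-2020 split, assuming a tab-separated file
# with one (head, relation, tail) triple per line and no header row.
import csv

def load_triples(path):
    """Return a list of (head, relation, tail) triples from one split."""
    triples = []
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < 3:
                continue  # skip malformed lines
            triples.append((row[0].strip(), row[1].strip(), row[2].strip()))
    return triples

train_triples = load_triples("atomic2020/train.tsv")  # hypothetical local path
```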
{
"text": "We remove all duplicates and ignore all triples in which the target value is none. Moreover, we ignore all triples that include a blank. Since in masked language modeling we need to know the gold value of masked tokens, a triple that already has a blank (masked token/word) in it may not help our pretraining. For instance, in the triple: [PersonX affords another ___, xAttr, useful] it is hard to know why or understand what it means for a person to be useful without knowing what they afforded. This preprocessing step yields in 782,848 triples with 121,681, 177,706, and 483,461 from event, physical, and social categories, respectively. Distribution of these relations is shown in Figure 2 . We verbalize ATOMIC2020 knowledge graph ",
"cite_spans": [],
"ref_spans": [
{
"start": 685,
"end": 693,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Filtering Triples",
"sec_num": "2.1"
},
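The filtering described above can be sketched as follows; the blank marker "___" and the literal "none" check are inferred from the examples in the text rather than stated as implementation details.

```python
# Sketch of the filtering step: drop duplicates, triples whose tail is "none",
# and triples containing a blank ("___"); details are inferred from the paper.
def filter_triples(triples):
    seen, kept = set(), []
    for head, relation, tail in triples:
        if tail.strip().lower() == "none":
            continue  # no gold target to learn from
        if "___" in head or "___" in tail:
            continue  # a blank slot conflicts with MLM gold labels
        key = (head, relation, tail)
        if key in seen:
            continue  # duplicate triple
        seen.add(key)
        kept.append(key)
    return kept
```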
{
"text": "Each relation in ATOMIC 20 20 is associated with a human-readable template. For example, xEffect's and HasPrerequisite's templates are as a result, PersonX will and to do this, one requires, respectively. We use these templates to convert triples in ATOMIC 20 20 to sentences in natural language by concatenating the subject, relation template, and target. Examples of converting triples to text are shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 409,
"end": 417,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Converting Triples",
"sec_num": "2.2"
},
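A minimal sketch of this template-based conversion follows, using only the two templates quoted above; the example triple is illustrative rather than an actual ATOMIC 2020 entry.

```python
# Sketch of verbalizing a triple by concatenating head, relation template, and
# tail. Only the two templates quoted in the text are included; the example
# triple below is illustrative, not an actual ATOMIC-2020 entry.
TEMPLATES = {
    "xEffect": "as a result, PersonX will",
    "HasPrerequisite": "to do this, one requires",
    # ... templates for the remaining ATOMIC-2020 relations
}

def verbalize(head, relation, tail):
    return f"{head} {TEMPLATES[relation]} {tail}"

print(verbalize("PersonX drinks a coffee", "xEffect", "stay awake"))
# -> "PersonX drinks a coffee as a result, PersonX will stay awake"
```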
{
"text": "When we convert triples to natural language text, ideally we want to have grammatically correct sentences. Human readable templates provided by ATOMIC 20 20 are not necessarily rendered in a way to form error-free sentences when concatenated with subject and target in a triple. To address this issue, we use an open-source grammar and spell checker, LanguageTool, 2 to double-check our converted triples to ensure they do not contain obvious grammatical mistakes or spelling errors. Similar approaches that include deterministic grammatical transformations were also previously used to convert KG triples to coherent sentences (Davison et al., 2019) . It is worth pointing out that the Data-To-Text generation (KG verbalization) for itself is a separate task and there have been efforts to address this task (Agarwal et al., 2021) . We leave investigating the effects of using other Data-To-Text and grammar-checking methods to future research.",
"cite_spans": [
{
"start": 628,
"end": 650,
"text": "(Davison et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 809,
"end": 831,
"text": "(Agarwal et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Checking Grammar",
"sec_num": "2.3"
},
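The grammar and spelling pass might look like the sketch below, assuming the language_tool_python wrapper around LanguageTool; the paper does not state which interface to LanguageTool it uses.

```python
# A sketch of the grammar/spelling check, assuming the language_tool_python
# wrapper around LanguageTool (the paper does not specify the interface used).
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")

def clean_sentence(sentence):
    """Apply LanguageTool's suggested corrections to a verbalized triple."""
    return tool.correct(sentence)

print(clean_sentence("PersonX drink a coffee as a result, PersonX will stays awake"))
```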
{
"text": "As mentioned earlier, we use MLM to continually pretrain our PLM, BERT-large-cased (Devlin et al., 2018) . We follow the same procedure as BERT to create the input data to our pretraining (e.g., number of tokens to mask in input examples). We run the pretraining using ATOMIC 20 20 's train and development splits as our training and evaluation sets, respectively, for 10 epochs on Google Colab TPU v2 using PyTorch/XLA package with a maximum sequence length of 30 and batch size of 128. 3 To avoid overfitting, we use early stopping with the patience of 3 on evaluation loss. We select the best model based on the lowest evaluation loss at the end of training. 4",
"cite_spans": [
{
"start": 83,
"end": 104,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 488,
"end": 489,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Continual Pretraining",
"sec_num": "2.4"
},
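Under the hyperparameters quoted above (10 epochs, maximum length 30, batch size 128, early stopping with patience 3, best model by lowest evaluation loss), the continual pretraining could be set up roughly as below with the Hugging Face Trainer. This is a sketch, not the authors' exact script: the TPU/PyTorch-XLA plumbing is omitted, the input sentences are placeholders for the verbalized triples, and unquoted hyperparameters (e.g., the 15% masking rate) are assumptions.

```python
# Sketch of MLM continual pretraining of bert-large-cased on verbalized triples.
from datasets import Dataset
from transformers import (BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-large-cased")
model = BertForMaskedLM.from_pretrained("bert-large-cased")

# Placeholder inputs: in practice these are the verbalized ATOMIC-2020 triples.
train_sentences = ["PersonX drinks a coffee as a result, PersonX will stay awake"]
dev_sentences = ["PersonX drinks a coffee as a result, PersonX will stay awake"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=30,
                     padding="max_length")

train_ds = Dataset.from_dict({"text": train_sentences}).map(tokenize, batched=True)
dev_ds = Dataset.from_dict({"text": dev_sentences}).map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-large-cased-atomic2020",
    num_train_epochs=10,
    per_device_train_batch_size=128,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,          # keep the checkpoint with lowest eval loss
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=dev_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```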
{
"text": "We chose multiple benchmarks of commonsense causal reasoning and cause-effect relation classification to ensure we thoroughly test the effects of our newly trained models. These benchmarks include: 1) Temporal and Causal Reasoning (TCR) dataset (Ning et al., 2018) , a benchmark for joint reasoning of temporal and causal relations; 2) Choice Of Plausible Alternatives (COPA) (Roemmele et al., 2011) dataset which is a widely used and notable benchmark (Rogers et al., 2021) for commonsense causal reasoning; And 3) BCOPA-CE (Han and Wang, 2021) , a new benchmark inspired by COPA, that contains unbiased token distributions which makes it a more challenging benchmark. For COPA-related experiments, since COPA does not have a training set, we use COPA's development set for fine-tuning our models and testing them on COPA's test set (COPA-test) and BCOPA-CE. For hyperparameter tuning, we randomly split COPA's development set into train (%90) and dev (%10) and find the best learning rate, batch size, and number of train epochs based on the evaluation accuracy on the development set. Then using COPA's original development set and best set of hyperparameters, we fine-tune our models and evaluate them on the test set. In all experiments, we report the average performance of models using four different random seeds. For TCR, we fine-tune and evaluate our models on train and test splits, respectively.",
"cite_spans": [
{
"start": 245,
"end": 264,
"text": "(Ning et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 453,
"end": 474,
"text": "(Rogers et al., 2021)",
"ref_id": "BIBREF30"
},
{
"start": 525,
"end": 545,
"text": "(Han and Wang, 2021)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks",
"sec_num": "3.1"
},
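The hyperparameter-search split and seed averaging described above amount to the following sketch, where train_and_eval is a placeholder for the actual fine-tuning routine and the split/seed values are illustrative.

```python
# Sketch of the evaluation protocol: a 90/10 split of COPA's development set
# for hyperparameter search, then averaging test accuracy over four seeds.
import random
import statistics

def split_dev(examples, seed=42, dev_frac=0.10):
    """Shuffle and split the COPA dev set into 90% train / 10% dev portions."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - dev_frac))
    return shuffled[:cut], shuffled[cut:]

def average_over_seeds(train_and_eval, seeds=(1, 2, 3, 4)):
    """Fine-tune/evaluate once per seed and report the mean test accuracy."""
    return statistics.mean(train_and_eval(seed) for seed in seeds)
```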
{
"text": "We use bert-large-cased pre-trained model in all experiments as our baseline. For COPA and BCOPA-CE, we convert all instances to a SWAG-formatted data (Zellers et al., 2018) and use Huggingface's BertForMultipleChoice -a BERT model with a multiple-choice classification head on top. And for TCR, we convert every instance by adding special tokens to input sequences as event boundaries and use the R-BERT 5 model (Wu and He, 2019) . We chose R-BERT for our relation classification since it not only leverages the pretrained embeddings but also transfers information of target entities (e.g., events in a relation) through model's architecture and incorporates encodings of the targets entities. Examples of COPA and TCR are shown in Figure 4 . BCOPA-CE has the same format as COPA.",
"cite_spans": [
{
"start": 151,
"end": 173,
"text": "(Zellers et al., 2018)",
"ref_id": "BIBREF33"
},
{
"start": 413,
"end": 430,
"text": "(Wu and He, 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 733,
"end": 741,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Models and Baseline",
"sec_num": "3.2"
},
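For the COPA-style benchmarks, the multiple-choice scoring can be sketched as below with Hugging Face's BertForMultipleChoice, using the Figure 4 example instance; without fine-tuning, the classification head is randomly initialized, so the prediction is only illustrative.

```python
# Sketch of scoring one COPA instance with BertForMultipleChoice, assuming a
# SWAG-style encoding of (premise, alternative) pairs; sentences from Figure 4.
import torch
from transformers import BertForMultipleChoice, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-large-cased")
model = BertForMultipleChoice.from_pretrained("bert-large-cased")

premise = "The computer crashed."            # asks-for="cause"
choices = ["I backed up my files.", "I downloaded a virus."]

# One (premise, alternative) pair per choice -> shape (1, num_choices, seq_len).
enc = tokenizer([premise] * len(choices), choices, return_tensors="pt",
                padding=True, truncation=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits           # shape (1, num_choices)
print("predicted alternative:", logits.argmax(dim=-1).item())
```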
{
"text": "[Figure 4, COPA example: premise \"The computer crashed.\" (asks-for=\"cause\"); alternative 1: \"I backed up my files.\"; alternative 2: \"I downloaded a virus.\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Baseline",
"sec_num": "3.2"
},
{
"text": "The death toll
For a fair comparison, we report BERT-SocialIQA's average performance. |