{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:35:19.814537Z" }, "title": "Causal Augmentation for Causal Sentence Classification", "authors": [ { "first": "Fiona", "middle": [ "Anting" ], "last": "Tan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": {} }, "email": "tan.f@u.nus.edu" }, { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": {} }, "email": "hazarika@comp.nus.edu.sg" }, { "first": "See-Kiong", "middle": [], "last": "Ng", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": {} }, "email": "seekiong@nus.edu.sg" }, { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "", "affiliation": { "laboratory": "", "institution": "Singapore University of Technology", "location": {} }, "email": "sporia@sutd.edu.sg" }, { "first": "Roger", "middle": [], "last": "Zimmermann", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": {} }, "email": "rogerz@comp.nus.edu.sg" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Scarcity of annotated causal texts leads to poor robustness when training state-of-the-art language models for causal sentence classification. In particular, we found that models misclassify on augmented sentences that have been negated or strengthened with respect to its causal meaning. This is worrying since minor linguistic differences in causal sentences can have disparate meanings. Therefore, we propose the generation of counterfactual causal sentences by creating contrast sets (Gardner et al., 2020) to be included during model training. We experimented on two model architectures and predicted on two outof-domain corpora. While our strengthening schemes proved useful in improving model performance, for negation, regular edits were insufficient. Thus, we also introduce heuristics like shortening or multiplying root words of a sentence. By including a mixture of edits when training, we achieved performance improvements beyond the baseline across both models, and within and out of corpus' domain, suggesting that our proposed augmentation can also help models generalize.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Scarcity of annotated causal texts leads to poor robustness when training state-of-the-art language models for causal sentence classification. In particular, we found that models misclassify on augmented sentences that have been negated or strengthened with respect to its causal meaning. This is worrying since minor linguistic differences in causal sentences can have disparate meanings. Therefore, we propose the generation of counterfactual causal sentences by creating contrast sets (Gardner et al., 2020) to be included during model training. We experimented on two model architectures and predicted on two outof-domain corpora. While our strengthening schemes proved useful in improving model performance, for negation, regular edits were insufficient. Thus, we also introduce heuristics like shortening or multiplying root words of a sentence. 
By including a mixture of edits when training, we achieved performance improvements beyond the baseline across both models, and within and out of corpus' domain, suggesting that our proposed augmentation can also help models generalize.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Causality is an important concept for knowledge discovery as it conveys the idea of cause and effect. In the simplest sense, a causal relation exists between entities A and B through the statement \"A causes B\" or \"B is caused by A\". In recent years, causal relation extraction from text has garnered significant interest in Natural Language Processing (NLP) (Asghar, 2016; Xu et al., 2020; Yang et al., 2021) .", "cite_spans": [ { "start": 358, "end": 372, "text": "(Asghar, 2016;", "ref_id": "BIBREF1" }, { "start": 373, "end": 389, "text": "Xu et al., 2020;", "ref_id": null }, { "start": 390, "end": 408, "text": "Yang et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Causal sentence classification (CSC) is the task of identifying sentences that contain causal meaning (Yu et al., 2019; Sumner et al., 2014; Mariko et al., 2020) . Identification of causal sentences is often the first step in tasks like generating plot structures (Mirza and Tonelli, 2016a; 2017a) or constructing causal knowledge graphs (Heindorf et al., 2020) for further downstream Natural Language Understanding applications, like Question Answering (Dalal et al., 2021) . Figure 1 demonstrates examples where similar claims are categorized by their causal strengths. CSC is challenging because the syntax of causality varies in context. Thus, it is difficult to exhaustively capture causal expressions, especially for implicit occurrences (Asghar, 2016) . Negations and the absence of causality further complicate automatic causality identification (Heindorf et al., 2020) .", "cite_spans": [ { "start": 102, "end": 119, "text": "(Yu et al., 2019;", "ref_id": null }, { "start": 120, "end": 140, "text": "Sumner et al., 2014;", "ref_id": null }, { "start": 141, "end": 161, "text": "Mariko et al., 2020)", "ref_id": null }, { "start": 264, "end": 290, "text": "(Mirza and Tonelli, 2016a;", "ref_id": "BIBREF27" }, { "start": 338, "end": 361, "text": "(Heindorf et al., 2020)", "ref_id": "BIBREF12" }, { "start": 454, "end": 474, "text": "(Dalal et al., 2021)", "ref_id": "BIBREF6" }, { "start": 744, "end": 758, "text": "(Asghar, 2016)", "ref_id": "BIBREF1" }, { "start": 854, "end": 877, "text": "(Heindorf et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 477, "end": 485, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, there is a lack of good quality CSC datasets (Asghar, 2016; Xu et al., 2020) . Most NLP datasets typically treat causal relation extraction as a subtask of relation extraction, where \"Cause-Effect\" is one of the many relation labels. However, we think that causality is a complex relation best learned using dedicated causal relation datasets. Such corpora that exist are mostly small in size (< 5000 sentences), except for AltLex (Hidey and McKeown, 2016) that has over 40000 sentences. Datasets also tend to label causal relations in an overly simplistic binary level (as 'causal' or 'not causal'). 
Only some works classify text by causal strengths (Girju and Moldovan, 2002; Yu et al., 2019; Sumner et al., 2014) .", "cite_spans": [ { "start": 58, "end": 72, "text": "(Asghar, 2016;", "ref_id": "BIBREF1" }, { "start": 73, "end": 89, "text": "Xu et al., 2020)", "ref_id": null }, { "start": 444, "end": 469, "text": "(Hidey and McKeown, 2016)", "ref_id": "BIBREF14" }, { "start": 664, "end": 690, "text": "(Girju and Moldovan, 2002;", "ref_id": "BIBREF11" }, { "start": 691, "end": 707, "text": "Yu et al., 2019;", "ref_id": null }, { "start": 708, "end": 728, "text": "Sumner et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Data augmentation is a natural avenue for handling small-sized datasets. Augments created must be meaningful to explain representation gaps in the current datasets. In causality, both the causal direction and strength matter. As such, we believe that models should be sensitive towards negations and semantics of words to avoid misclassification. For example, in Figure 1 , the first three sentences include words related to \"help\". However, the context of its usage and inclusion of modal words like \"may\" easily alters the intended causal strength of the sentence. This observation motivates us to artificially construct meaningful counterfactuals that would reflect the model's decision boundaries. We do so by applying rule-based schemes that negate causal relations or strengthen conditionally causal sentences. However, for negations, we notice that introducing edits is insufficient to improve model performance. Thus, we also explore adding heuristic edits.", "cite_spans": [], "ref_spans": [ { "start": 363, "end": 371, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We find that state-of-the-art (SOTA) language models, such as BERT (Devlin et al., 2019) with MLP or SVM classifiers, achieve improvements in classification performance when trained with our created counterfactuals. In addition, our evaluation on cross-domain datasets shows that training on augmented datasets (original plus edits) improves model generalization to out-of-domain (OOD) contexts. This is consistent with findings from (Kaushik et al., 2020a,b) in sentiment analysis and natural language inference contexts. In summary, we make the following contributions:", "cite_spans": [ { "start": 67, "end": 88, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" }, { "start": 434, "end": 459, "text": "(Kaushik et al., 2020a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We show that current SOTA models are not robust to minimally perturbed sentences that differ in causal direction and strength. Therefore, we propose causal negation and strengthening schemes based on dependency and part-ofspeech (POS) tags to augment causal sentences. To our knowledge, we are the first to study the effects of counterfactual augmentation in the context of causal claims classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We observe that simple heuristic edits on negated counterfactuals improve model effectiveness for the CSC task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. 
We show that a mixture of counterfactuals improves performance in the trained domain and also generalizes better to OOD corpora such as SCITE (Li et al., 2021) and AltLex (Hidey and McKeown, 2016) .", "cite_spans": [ { "start": 145, "end": 162, "text": "(Li et al., 2021)", "ref_id": "BIBREF21" }, { "start": 174, "end": 199, "text": "(Hidey and McKeown, 2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Section 2 details related works in the literature and positions our work amongst them. Section 3 explains our methods for data augmentation, data processing and modeling. Section 4 presents and discusses our findings while Section 5 concludes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although causality is an important concept for knowledge discovery, benchmarking datasets and standardization of labeling rules have been limited, thus prohibiting empirical comparisons across methodologies (Asghar, 2016; Xu et al., 2020) . Most NLP benchmarking datasets define causal relations as just one out of many class labels (e.g. Part-Whole) (Jurgens et al., 2012; G\u00e1bor et al., 2018; Caselli and Vossen, 2017b; Mirza et al., 2014; Mirza and Tonelli, 2016b) . Others focus on causal relations and define such relations as a binary label (Li et al., 2021; Mariko et al., 2020; Hidey and McKeown, 2016) . However, causality may not always occur at extremes in real-life statements, and correlation can get confused for causation (Buhse et al., 2018) . As such, instead of using a binary model of causality, a better way is to classify varying \"strengths\" of causal relations in sentences. In fact, a seven-point scheme 1 was proposed by Sumner et al. (2014) to categorize causal statements from health-related news and academic press releases. Subsequently, Yu et al. (2019) adapted this for scientific texts into a four-level system. In this work, we adopt the four-level causality labeled corpus and classification model by Yu et al.", "cite_spans": [ { "start": 207, "end": 221, "text": "(Asghar, 2016;", "ref_id": "BIBREF1" }, { "start": 222, "end": 238, "text": "Xu et al., 2020)", "ref_id": null }, { "start": 351, "end": 373, "text": "(Jurgens et al., 2012;", "ref_id": "BIBREF16" }, { "start": 374, "end": 393, "text": "G\u00e1bor et al., 2018;", "ref_id": "BIBREF9" }, { "start": 394, "end": 420, "text": "Caselli and Vossen, 2017b;", "ref_id": "BIBREF4" }, { "start": 421, "end": 440, "text": "Mirza et al., 2014;", "ref_id": "BIBREF26" }, { "start": 441, "end": 466, "text": "Mirza and Tonelli, 2016b)", "ref_id": "BIBREF28" }, { "start": 546, "end": 563, "text": "(Li et al., 2021;", "ref_id": "BIBREF21" }, { "start": 564, "end": 584, "text": "Mariko et al., 2020;", "ref_id": null }, { "start": 585, "end": 609, "text": "Hidey and McKeown, 2016)", "ref_id": "BIBREF14" }, { "start": 736, "end": 756, "text": "(Buhse et al., 2018)", "ref_id": "BIBREF2" }, { "start": 944, "end": 964, "text": "Sumner et al. (2014)", "ref_id": null }, { "start": 1051, "end": 1081, "text": "Subsequently, Yu et al. 
(2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Causal Sentence Classification", "sec_num": "2.1" }, { "text": "(2019) 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Causal Sentence Classification", "sec_num": "2.1" }, { "text": "There is also an often observed issue that NLP systems that perform well on task datasets do not generalize to \"real-life scenarios\", thereby misleading and overstating the accuracies and usefulness of their models. Ensuring model generalizability to other domains can be challenging. For example, Ramesh et al. (2012) showed discourse triggers are 1 The seven levels of causal strengths are (1) no statement, (2) explicit statement of no relation, (3) correlational, (4) ambiguous (i.e., a relationship is present, but the direction and level is ambiguous), (5) conditional causal, (6) can cause, and (7) unconditionally causal. 2 We were unable to work on Sumner et al.'s dataset as it was not publicly available and had very limited samples per class label. different between the biomedical and general domains. More focus has been placed on ensuring sufficient data representativeness and transferability of results onto OOD settings in recent years. In this work, we will also evaluate the generalizability of our models to classify causal sentences from other domains.", "cite_spans": [ { "start": 285, "end": 318, "text": "For example, Ramesh et al. (2012)", "ref_id": null }, { "start": 349, "end": 350, "text": "1", "ref_id": null }, { "start": 630, "end": 631, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Causal Sentence Classification", "sec_num": "2.1" }, { "text": "Counterfactual generation is a popular strategy for NLP researchers to test and improve model robustness via adversarial learning and attacks (Morris et al., 2020; Mahler et al., 2017) or for mitigating bias (Kaushik et al., 2020a; Maudslay et al., 2019) . Gardner et al. (2020) proposed using counterfactuals to fill local theoretical gaps in a model's decision boundary. They relied on expert judgments to generate similar but meaningfully different sentences and showed that SOTA models struggle on contrast sets compared to original test sets across multiple tasks. Recently, Wu et al. 2021proposed a general-purpose counterfactual generator built on GPT-2 and also showed that the inclusion of realistic counterfactuals was useful across three different tasks. Their control codes included negation, delete, and restructure, amongst other options. 3 In our work, we generate counterfactuals purposefully for CSC, such as moving sentences across labels during Negation (causal \u2192 no relationship) and Strengthening (conditional causal \u2192 causal) strategies. We provide an automatic rule-based schema to negate and strengthen causal statements, focusing on precision over full coverage. 4 Kaushik et al. (2020a) manually revised documents that would correspond to a counterfactual target label for sentiment analysis and natural language inference tasks. They showed that training with similar quantities of augmented data compared to the original improves generalization ability to OOD datasets. In this paper, we have also found that counterfactuals can help to improve model generalizability for CSC. 
Unlike their work, our linguistics-based augments do not rely on human intervention.", "cite_spans": [ { "start": 142, "end": 163, "text": "(Morris et al., 2020;", "ref_id": "BIBREF29" }, { "start": 164, "end": 184, "text": "Mahler et al., 2017)", "ref_id": "BIBREF22" }, { "start": 208, "end": 231, "text": "(Kaushik et al., 2020a;", "ref_id": "BIBREF17" }, { "start": 232, "end": 254, "text": "Maudslay et al., 2019)", "ref_id": "BIBREF24" }, { "start": 257, "end": 278, "text": "Gardner et al. (2020)", "ref_id": "BIBREF10" }, { "start": 853, "end": 854, "text": "3", "ref_id": null }, { "start": 1188, "end": 1189, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Counterfactuals in NLP", "sec_num": "2.2" }, { "text": "Our CSC task involved classifying a span of text with a causal label based on its intended meaning. We used the PubMed-based CSci corpus (Yu et al., 2019) 5 comprising of 3061 sentences, annotated with four levels of causal relation: no relationship (c 0 ), causal (c 1 ), conditional causal (c 2 ), and correlational (c 3 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Details", "sec_num": "3.1" }, { "text": "In a low-resource setting, we propose creating counterfactuals that push causal sentences across labels to improve the robustness of models. Figure 2 demonstrates the two main strategies we employed to generate counterfactual examples for CSC: (1) Causal Negation (c 1 \u2192 c 0 ) and (2) Causal 4 Contemporaneously, we also contributed our rule-based algorithm to an open-source text augmentation effort at https://github.com/ GEM-benchmark/NL-Augmenter under the transformation negate_strengthen.", "cite_spans": [ { "start": 292, "end": 293, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 141, "end": 147, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Counterfactual Generation", "sec_num": "3.2" }, { "text": "5 https://github.com/junwang4/ causal-language-use-in-science Strengthening (c 2 \u2192 c 1 ). We discuss these strategies next. 6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Counterfactual Generation", "sec_num": "3.2" }, { "text": "In NEGATION, we negate the direction of causal statements from causal (c 1 ) to no relationship (c 0 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Causal Negation", "sec_num": "3.2.1" }, { "text": "After obtaining POS tags and root words based on dependency trees 7 , we performed negations around the root word. Our coding schema (Algorithm 1 in the Appendix) inserted negative words like 'no', 'not', 'nor' or 'did not' to negate the meaning of the sentence. 12 negation linguistic templates were used. Successfully negated sentences were termed as EDIT sentences. If no matching templates were found, the sentence was skipped. Of the 493 original (causal) sentences from the CSci corpus, 384 sentences had available negations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Causal Negation", "sec_num": "3.2.1" }, { "text": "To improve text flow, we used antonyms to replace negated edits where applicable. We did so by searching for antonyms of the original root word based on WordNet (Miller, 1995) and termed successful antonym edits as EDIT-ALT. To ensure a similar tense was used, we detected the original word's tense and applied the same tense onto the antonym word using the Pattern package (De Smedt and Daelemans, 2012 ). 
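A minimal sketch of this negation step is given below. It assumes spaCy for dependency parsing and NLTK's WordNet interface for antonym lookup, covers only two of the twelve templates in Algorithm 1, and omits the Pattern-based tense re-inflection, so it illustrates the idea rather than reproducing the full rule set.

```python
# Illustrative sketch of EDIT / EDIT-ALT generation around the root word.
# Assumptions: spaCy's en_core_web_sm model and NLTK WordNet are installed
# (nltk.download('wordnet')); only a copular and a plain-verb template are shown.
import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load("en_core_web_sm")

def wordnet_antonym(lemma, pos):
    """Return the first WordNet antonym of `lemma` with the given POS, if any."""
    for synset in wn.synsets(lemma, pos=pos):
        for l in synset.lemmas():
            if l.antonyms():
                return l.antonyms()[0].name().replace("_", " ")
    return None

def rebuild(doc, replacements):
    """Re-assemble the sentence, swapping token i for replacements[i]."""
    return "".join(replacements.get(t.i, t.text) + t.whitespace_ for t in doc)

def negate_root(sentence):
    doc = nlp(sentence)
    root = next(t for t in doc if t.dep_ == "ROOT")
    if root.lemma_ == "be":
        # copular template: "X is effective ..." -> "X is not effective ..."
        edit = rebuild(doc, {root.i: root.text + " not"})
        target = next((c for c in root.children if c.dep_ in ("acomp", "attr")), root)
    else:
        # plain-verb template: "X increased Y" -> "X did not increase Y"
        edit = rebuild(doc, {root.i: "did not " + root.lemma_})
        target = root
    # EDIT-ALT: replace the negated word with a WordNet antonym where one exists
    pos = wn.ADJ if target.pos_ == "ADJ" else wn.VERB
    antonym = wordnet_antonym(target.lemma_, pos)
    edit_alt = rebuild(doc, {target.i: antonym}) if antonym else None
    return edit, edit_alt

print(negate_root("TyG is effective to identify individuals at risk for NAFLD."))
# -> roughly: ("TyG is not effective to identify ...", "TyG is ineffective to identify ...")
```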
An example EDIT and EDIT-ALT sentence is shown in Table 1 .", "cite_spans": [ { "start": 161, "end": 175, "text": "(Miller, 1995)", "ref_id": "BIBREF25" }, { "start": 374, "end": 403, "text": "(De Smedt and Daelemans, 2012", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 457, "end": 464, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Causal Negation", "sec_num": "3.2.1" }, { "text": "To decide between EDIT and EDIT-ALT versions, we calculated the Levenshtein edit distance of the original word versus the antonym. We selected EDIT-ALT only if the edit distance is less than or equal to 30% of the length of the longer word, rounded to the nearest integer. This allowed us to keep conversions like 'able' \u2192 'unable' for more natural word flow, but discard bolder and more drastic changes like 'safe' \u2192 'dangerous' and 'had' \u2192 'refused' that suggested causality in the opposite direction (rather than no relationship) or were outright wrong. Finally, after dropping duplicates, we obtained 381 sentences that represented noncausality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Causal Negation", "sec_num": "3.2.1" }, { "text": "We were able to apply 11 out of the 12 linguistic templates to generate causal negation for the sentences in CSci. Most edits fell into the category where we negated the root verb or adjective of the sentence. Table A1 shows one randomly sampled example per available negation method when applied onto the CSci corpus. With respect to this table, Appendix A.1 briefly discusses the grammatical sanity of these sentences. We inspected these randomly sampled counterfactuals to verify that sentence flows were natural and desirable.", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 218, "text": "Table A1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Causal Negation", "sec_num": "3.2.1" }, { "text": "For STRENGTHEN, we increased the strength of causal statements from conditional causal (c 2 ) to causal (c 1 ) by exploiting modal words. Similar to negation, we first obtained the POS tags and dependency trees for each sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Causal Strengthening", "sec_num": "3.2.2" }, { "text": "Algorithm 2 in the Appendix outlines the rulebased pseudo-code. To summarize, the 5 linguistic templates created converted modals based on the dictionary: {'could', 'should', 'would'} \u2192 'would' and {'can', 'may', 'might', 'will'} \u2192 'will'. If modals interacted with verbs with the lemma 'be', we replaced 'modal+be' with 'was' instead to convey certainty in the causal meaning. For special cases where the modal terms interacted with 'have', thereby forming conditional perfect tense, we converted the examples into simple past tense by replacing 'modal+have' with 'had'. When a modal was followed by an adverb (E.g. \"can possibly\"), the adverb was removed to avoid any deviation of the causal meaning from certainty. Table A2 shows a randomly sampled example per causal strengthening method when applied onto the CSci corpus. Of the 213 available sentences, we successfully augmented 174 of them.", "cite_spans": [], "ref_spans": [ { "start": 718, "end": 726, "text": "Table A2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Causal Strengthening", "sec_num": "3.2.2" }, { "text": "7 duplicated examples existed in the original CSci corpus and surfaced when we appended the edits with the original sentences. 
For such scenarios, we applied de-duplication based on priority rules discussed in Appendix A.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Processing", "sec_num": "3.3" }, { "text": "As our augmentations would increase the sample size for particular class labels, we randomly selected sentences to maintain the original class distribution. Our primary analysis focuses on randomly sampled datasets to eliminate the concern that any improved performance might result from increased data size or advantageous train set distribution. 8 As a side note, since the final dataset size is always slightly smaller than the original baseline EditMoreover, TT genotype will reduce the risk of CAD in diabetic patients. due to the de-duplication step, the final distribution after random sampling slightly differs. The final sample counts across class labels per augmented dataset is reflected in Appendix Table A5 .", "cite_spans": [ { "start": 348, "end": 349, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 711, "end": 719, "text": "Table A5", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Dataset Processing", "sec_num": "3.3" }, { "text": "Later in results Section 4.4.2, we observed that simple edits which highlight the main counterfactual phrase improved performance. Although these heuristics resulted in non-grammatical sentences, we believe that these edits explicitly emphasize augmented keywords for the model to learn the local syntactic changes better. Since we still trained the model with the original sentences (in fact, the majority), the model will not memorize on only non-grammatical examples. An example sentence is detailed in Table 1 with the two heuristic options as follows:", "cite_spans": [], "ref_spans": [ { "start": 506, "end": 513, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Further Heuristics", "sec_num": "3.4" }, { "text": "\u2022 SHORTEN: We reduced the sentence length based on target/root word to cover a minimally interpretable phrase based on dependency parser. The final sentence might not be a consecutive slice from the original.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further Heuristics", "sec_num": "3.4" }, { "text": "\u2022 MULTIPLES: We defined a phrase as one word before and after the target/root word (i.e. P hraseLength = 3). Phrases were then duplicated by a multiple of OriginalSentenceLength/P hraseLength rounded to the nearest integer. This ensured that the final sentence was up to as long as the original length. Note that in the EDIT-ALT example of Table 1 , \"is ineffective\" represents \"is not effective\". Thus, although the actual phrase length was 2, the intended meaning is based off the latter phrase that had a length of 3. Hence, we maintained a fixed P hraseLength for all sentences.", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 347, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Further Heuristics", "sec_num": "3.4" }, { "text": "In addition to training and validating on the CSci corpus, we also applied our trained models on two other datasets to demonstrate that exposing models to meaningful counterfactuals during training helps in OOD settings. While the CSci corpus was constructed from scientific PubMed-based sentences, the SCITE (Li et al., 2021) 9 corpus comprised of general sentences extended from the SemEval 2010 Task 8 dataset (Hendrickx et al., 2010) . 
On the other hand, AltLex (Hidey and McKeown, 2016) 10 contained sentences from English Wikipedia that included causal relations signaled by lexical markers. In AltLex, sentences can be duplicated if they have multiple relation markers and entities. Thus, we had to revise the corpus such that if a sentence had any causal relation, the sentence was labeled as causal and only one example was retained.", "cite_spans": [ { "start": 309, "end": 326, "text": "(Li et al., 2021)", "ref_id": "BIBREF21" }, { "start": 413, "end": 437, "text": "(Hendrickx et al., 2010)", "ref_id": "BIBREF13" }, { "start": 466, "end": 491, "text": "(Hidey and McKeown, 2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Out-of-domain Testing", "sec_num": "3.5" }, { "text": "Additionally, because SCITE and AltLex have binary labels, we created two measures of accuracy. The first, 'Acc', considered only exact class labels (no relationship (c 0 ) and causal (c 1 )) (i.e. predicting the other two labels is a misclassification). The second, 'Acc Group ', calculated accuracy after grouping [no relationship, correlational] into no relationship (c 0 ) and [causal, conditional causal] into causal (c 1 ) to align with the binary labels.", "cite_spans": [ { "start": 381, "end": 409, "text": "[causal, conditional causal]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Out-of-domain Testing", "sec_num": "3.5" }, { "text": "In total, we tested on 4439 sentences from SCITE and 37677 sentences from AltLex.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Out-of-domain Testing", "sec_num": "3.5" }, { "text": "In each setting, we trained and validated using K = 5 folds, with 5 epochs per fold. In both neural network set-ups, we used the standard crossentropy loss for multi-class classification. For OOD testing, we took the majority prediction from the five trained models across the five folds. We implemented two models as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "3.6" }, { "text": "We replicated the best performing model on the CSci corpus (Yu et al., 2019) which was a BioBERT (Lee et al., 2020) plus multi-layer perceptron (MLP) pipeline. The default architecture used BioBERT embeddings fed through a single MLP layer serving as the classifier.", "cite_spans": [ { "start": 97, "end": 115, "text": "(Lee et al., 2020)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "BERT+MLP (MLP)", "sec_num": "3.6.1" }, { "text": "Instead of applying LinearSVM based off unigrams and bigrams like the original authors (Yu et al., 2019), we believe a fairer comparison would be to use BERT embeddings as inputs into an SVM model. To allow for representation updates, for each sentence (s), the BioBERT encoder was first applied. Next, the BERT pooled output 11 (z) ran through two MLP layers (M LP 1 and M LP 2 ) to predict the class labels. After training, the second layer was discarded, and the hidden representation (r) was fed as fixed inputs into the SVM classifier. 
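A minimal sketch of this two-stage procedure is shown below, and the equations that follow formalize the same flow. The sketch assumes the HuggingFace dmis-lab BioBERT checkpoint and scikit-learn's SVC; the fine-tuning loop, K-fold splitting, and batching are elided, the tanh activation on the first MLP layer is an illustrative choice not specified in the paper, and the two toy training sentences are placeholders.

```python
# Sketch of the BERT+MLP+SVM pipeline (cf. Eqs. 1-4 below). Assumptions:
# dmis-lab/biobert-base-cased-v1.1 via HuggingFace transformers, scikit-learn's
# SVC, and a tanh on MLP1 (the activation is not stated in the paper).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from sklearn.svm import SVC

MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"

class BertMlp(nn.Module):
    def __init__(self, h1=768, h2=24, num_classes=4):
        super().__init__()
        self.bert = AutoModel.from_pretrained(MODEL_NAME)
        self.mlp1 = nn.Linear(h1, h2)           # r = MLP1(z); kept for the SVM
        self.mlp2 = nn.Linear(h2, num_classes)  # o = MLP2(r); discarded after training

    def forward(self, **encoded):
        z = self.bert(**encoded).pooler_output  # Eq. (1): pooled output over the first token
        r = torch.tanh(self.mlp1(z))            # Eq. (2): hidden representation
        o = self.mlp2(r)                        # Eq. (3): class logits
        return o, r

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = BertMlp()
# toy placeholders for one training fold (real runs use the augmented CSci folds)
sentences = ["TyG is effective to identify individuals at risk for NAFLD.",
             "There was no effect on lumen volume."]
labels = [1, 0]  # causal, no relationship
# ... fine-tune `model` with cross-entropy on the fold here ...

# Eq. (4): freeze the encoder, extract r, and fit a linear SVM on it.
model.eval()
with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    _, r_train = model(**encoded)
svm = SVC(kernel="linear", C=1e-2)  # hyperparameters listed in Appendix A.4
svm.fit(r_train.numpy(), labels)
```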
The equations below outlines this pipeline,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT+MLP+SVM (SVM)", "sec_num": "3.6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z = BERT (s), z \u2208 R h 1 (1) r = M LP 1 (z), r \u2208 R h 2 (2) o = M LP 2 (r), o \u2208 R c (3) p = SV M (r), p \u2208 R 1 ,", "eq_num": "(4)" } ], "section": "BERT+MLP+SVM (SVM)", "sec_num": "3.6.2" }, { "text": "where, p represents the final predicted label, and h 1 = 768, h 2 = 24, and c = 4. Pooled output takes the hidden state from the first token. STRENGTHEN\u00d7REGULAR) during training returned the best performance across all metrics. Accuracy improved by 1.35% over our MLP baseline, achieving Acc Orig of 90.60%. 12 Notice that we found improvements of accuracy and F-score beyond the original reported scores, even though our replicated scores were lower. The SVM model also demonstrated that including a mixture of edits during training improves performance, but in this setting, NEGATION\u00d7MULTIPLES with STRENGTHEN\u00d7REGULAR performed the best on average across metrics.", "cite_spans": [ { "start": 308, "end": 310, "text": "12", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "BERT+MLP+SVM (SVM)", "sec_num": "3.6.2" }, { "text": "A possible explanation for our findings is that we successfully exposed our models to more sentence types of the real world. Furthermore, we intentionally created augments around label boundaries (i.e. the minor edits changes the sentences' labels). Therefore, the model learns better for the CSC task. Interestingly, for NEGATION, the heuristic edits improved performance against baseline more so than the REGULAR edits itself. Section 4.4.2 will expand on this finding. Table 3 highlights how current SOTA models are not robust to minimally altered sentences that changes in causal direction and strength.", "cite_spans": [], "ref_spans": [ { "start": 472, "end": 479, "text": "Table 3", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "BERT+MLP+SVM (SVM)", "sec_num": "3.6.2" }, { "text": "To conduct the experiment, we randomly split the available negated edits (n=381) by half, keeping 191 negated sentences for training and the remaining 190 for testing. The 190 original sentences that correspond to the negated test set were removed from the original CSci corpus to avoid exposing models to highly similar sentences during training. 13 Models trained with this base train set dangerously predicted 157 out of 190 test sentences in the opposite direction as causal instead of no relationship. A shockingly dismal test accuracy of 12.63% was attained at best, and prediction counts are available in Appendix Table A6 .", "cite_spans": [ { "start": 348, "end": 350, "text": "13", "ref_id": null } ], "ref_spans": [ { "start": 621, "end": 629, "text": "Table A6", "ref_id": "TABREF18" } ], "eq_spans": [], "section": "Robustness on Edits", "sec_num": "4.2" }, { "text": "Our finding surfaces the problem that the models were likely memorizing key causal terms instead of understanding sentence structure and flow. Therefore, they were unable to discern the negation involved. Inclusion of counterfactual examples helped to fill this representation gap. We created augmented sets by combining the base train set with the 191 negated train sentences for retraining. 
Once we exposed the models to these negated examples during training, the same models could predict the right label with up to 73.68% accuracy. We also tested the models' efficacy on strengthened sentences converted from conditional causal to causal. Once counterfactual examples were included in the train set, improvements on test accuracy was obtained to a significant, but smaller, extent of +13.79% improvement at best.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Robustness on Edits", "sec_num": "4.2" }, { "text": "In Table 4 , we show that inclusion of edits during training also helps to improve generalization in cross-domain applications. Although our train dataset was an academic and scientific-based text represented by a BioBERT language model, we show that when we applied the same model to the general-based SCITE and Wikipedia-based AltLex corpora, inclusion of edits improved classification performance. For SCITE, we found improvements in generalization for the SVM model but not the MLP model. This could be due to our limited edit schemes that might not complement SCITE's sentence types. Nevertheless, for AltLex, consistent improvements for almost all edit combinations were obtained across both models. Overall, the mixture of edits with both conversion types once again reported the best average performance, demonstrating how such augments can indeed aid help models generalize. of these edits during training thus proved useful in highlighting the syntax that makes a sentence causal or conditional causal to the models.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Improving Generalization", "sec_num": "4.3" }, { "text": "Earlier in Table 2 , we noted that models exposed to NEGATION\u00d7REGULAR edits were unable to effectively learn the label boundaries: Acc Orig fell by 0.95% for the MLP model and 1.28% for the SVM compared to our baselines. However, when we performed simple heuristics like MULTIPLES, accuracy improved by +0.49% and +0.32% respectively. As for SHORTEN, accuracy rose by +0.18% for the SVM model, while the MLP model had a negligible reduction of -0.04%. We study the net change in classification counts per model per label in Table 5 to explore this phenomenon. Given class labels i and j predicted by a model and our baseline respectively, we report the model's N etChange", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 18, "text": "Table 2", "ref_id": "TABREF5" }, { "start": 524, "end": 531, "text": "Table 5", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Need for Heuristic Edits", "sec_num": "4.4.2" }, { "text": "i = Right i \u2212 W rong i = j =i n (i=true)j \u2212 i =j n i(j=true)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Need for Heuristic Edits", "sec_num": "4.4.2" }, { "text": ", where i, j = c 0 , c 1 , c 2 , c 3 and n refers to the number of observations. Right i (W rong i ) is the number of observations where a model predicts correctly (wrongly) for class label i but baseline predicts wrongly (correctly). When either MLP or SVM model is trained with the augmented NEGATION\u00d7REGULAR dataset, the model became confused and predicted poorly for causal (c 1 ) and no relationship (c 0 ) classes. 
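For concreteness, the per-class net change defined above can be computed as in the short sketch below; the label arrays are hypothetical placeholders standing in for the gold labels and the two models' predictions.

```python
# Per-class NetChange_i = Right_i - Wrong_i, as defined in Section 4.4.2.
import numpy as np

def net_change(y_true, y_model, y_base, labels=(0, 1, 2, 3)):
    y_true, y_model, y_base = map(np.asarray, (y_true, y_model, y_base))
    out = {}
    for i in labels:
        is_i = y_true == i
        right = np.sum(is_i & (y_model == i) & (y_base != i))  # model fixes a baseline error
        wrong = np.sum(is_i & (y_model != i) & (y_base == i))  # model breaks a baseline success
        out[i] = int(right - wrong)
    return out

# toy usage with hypothetical labels (0 = no rel., 1 = causal, 2 = cond. causal, 3 = corr.)
print(net_change(y_true=[1, 1, 0, 2], y_model=[1, 0, 0, 2], y_base=[0, 1, 0, 1]))
# {0: 0, 1: 0, 2: 1, 3: 0}
```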
Once the edits were presented in the heuristic forms, this situation improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Need for Heuristic Edits", "sec_num": "4.4.2" }, { "text": "We offer two plausible explanations for our findings: (1) It could be the case that highlighting the model to the short spans of (non-)causality aids its identification of the exact borders it needs to be sensitive to. (2) In the REGULAR form, non-causal sentences are linguistically very similar to causal ones. As mentioned in Section 4.4.1, these noncausal sentences only represent one out of many possible sentence types from c 0 . Therefore, feeding some non-grammatical examples of c 0 might help make it more explicit to the model that c 0 can take a wide variety of sentences types. More work is needed to confirm either hypotheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Need for Heuristic Edits", "sec_num": "4.4.2" }, { "text": "Interestingly, we observed improvements in classification for labels we did not edit (c 3 ) in the majority of settings. This highlights the possibility that exposing models to minimally perturbed sentences around label boundaries might also improve comprehension beyond the introduced edits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Need for Heuristic Edits", "sec_num": "4.4.2" }, { "text": "One benefit of capitalizing on CSci's four-label format is that our methodology is now able to identify causal strengths in SCITE and AltLex corpora beyond the original binary labels. For SCITE, the baseline MLP model originally labeled five sentences as conditional causal. When training the model with STRENGTHEN\u00d7REGULAR edits, four remained as conditional causal (c 2 ) while one of the sentence 14 correctly switched label to causal (c 1 ). For the baseline SVM model, seven sentences were tagged as c 2 , of which four remained, and the same one as MLP's converted to c 1 . One 15 cor-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Causal Strengths", "sec_num": "4.4.3" }, { "text": "Edit Type rectly switched to no relationship (c 0 ) as labeled, while the last sentence 16 converted to correlational (c 3 ), which is surprising because we did not edit any sentences to or from class c 3 . Unfortunately, the authors of SCITE tagged this sentence as causal, which means this is considered to be mislabeled. However, the sentence contains signals like 'corresponds to', which we believe should be correlational, not causal. Our short qualitative analysis again supports the earlier quantitative study that exposing models to meaningfully augmented sentences across labels could improve classification even for the other uninvolved labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conversion", "sec_num": null }, { "text": "MLP SVM c 0 c 1 c 2 c 3 Total c 0 c 1 c 2 c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conversion", "sec_num": null }, { "text": "We also explored other popular methodologies but did not obtain consistent and significant improvements from baseline. These include, (i) creating more edit types (using masking, synonyms and paraphrasers), (ii) extending to a five-way classification problem (by labelling negated edits as a new class label representing not causal, separate from no relationship (c 0 )), and (iii) experimenting with some contrastive learning loss functions. 
Appendix Section A.3 details these experiments further for interested readers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Experiments", "sec_num": "4.4.4" }, { "text": "We explored the task of CSC in a low-resource setting. Following recent literature, we generated counterfactual sentences via rule-based edits that change sentences' causal direction and strength. We showed that SOTA CSC models worryingly misclassifies on such augmented sentences. This concern can be mitigated by including of our edits", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "5" }, { "text": "16 \"The increase of the signal might correspond to formation of the high-density excitons, while the reduction of the signal originates from the relaxation.\" during training. We demonstrated that our proposal improves classification performance both on original and edit sentences, and within and outside of the corpus' domain. However, for NEGATION, we found that the regular format was insufficient to teach effective decision boundaries given limited data size and augmentation templates. Therefore, proposed heuristic edits and found performance improvements for both training and OOD contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "5" }, { "text": "For future work, Yu et al. (2020)'s recent corpus using scientific press statements annotated with the same four class labels of causality is a promising dataset to replicate our findings upon. Additionally, we utilized rule-based augment schemes which have a finite number of working templates. Thus, our augmentations might not be lexically diverse. Therefore, our subsequent steps would be to explore SOTA NLP augmentation and generation tools, like from Wu et al. (2021) and Ross et al. (2021) . Furthermore, it might be worthwhile to find alternative models that can learn directly from the augmented datasets without the need for heuristics.", "cite_spans": [ { "start": 479, "end": 497, "text": "Ross et al. (2021)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "5" }, { "text": "Lastly, our work did not go beyond the \"correctness\" of the claims. However, in reality, one has to distinguish between causal effects as factual events of real-world or at the level of \"meta-causality\" (Andersson et al., 2020) . Hence, grounding the claims to world knowledge will be an important research avenue to pursue. ", "cite_spans": [ { "start": 203, "end": 227, "text": "(Andersson et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "5" }, { "text": "Appendix Table A1 shows one randomly sampled example per available negation method when applied onto the CSci corpus. As shown, most examples fell into 'VB_3.1', 'VB_5.1', 'JJ_1.3' and 'VB_1.2' types, for which the templates in Algorithm 1 worked well for 17 . For rarer method types, like 'VB_2.1', the templates seemed to work poorly. Further investigation shows that the error arose from the POS tagging step: \"Both\" was tagged as a VB but should have been a DT or CC, for which, we have no template for at the moment, so the example would have been correctly skipped. 
As for 'VB_4.1', the negated example was unnatural but not grammatically wrong.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Table A1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "A.1 Negation Examples", "sec_num": null }, { "text": "After appending original sentences with edits, we conducted de-duplication. Appendix Table A3 shows problematic duplicates that had differing labels. The original CSci corpus contained 7 duplicate sentences instances which were removed. 6 of them were exact duplicates (same label, same sentence), while the last one (sentence S/N 1) was duplicated with different labels (c 0 and c 2 ). We manually changed this to retain only the c 0 -labeled example. The total data size thus reduced from 3061 to 3054. This explains the differences in data size and distribution when comparing the original versus augmented sets shown in Table A5 . We also take this chance to highlight concerns that some sentences in CSci were labeled contrary to how we understood them. Subsequent duplicates were handled via rulebased removal. The motivation was to ensure identical sentences do not have different labels which adds noise to our training. Our assumption was that if an edit was performed but remained identical to the original, the original must have been mislabeled. We note that our rule-based de-duplication cannot accommodate multi-label cases, as there was one sentence (S/N 4) that correctly reflected both c 0 and c 1 labels in different parts of the sentence, but due to de-duplication, we only kept the c 0 label. 17 We highlight the main POS tags used and mentioned: VB (verbs, e.g. 'eating'), JJ (adjective, e.g. 'big'), IN (preposition or subordinating conjunction, e.g. 'by'), DT (determiner, e.g. 'he'), CC (coordinating conjunction, e.g. 'and'), MD (modal, e.g. 'should').", "cite_spans": [ { "start": 1313, "end": 1315, "text": "17", "ref_id": null } ], "ref_spans": [ { "start": 85, "end": 93, "text": "Table A3", "ref_id": "TABREF8" }, { "start": 624, "end": 632, "text": "Table A5", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "A.2 De-duplication", "sec_num": null }, { "text": "Other experiments that were conducted but did not produce significant improvements are mentioned here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Other Experiments", "sec_num": null }, { "text": "Other Edit Types Three were explored:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Other Experiments", "sec_num": null }, { "text": "\u2022 MASK: Based on POS, all nouns were replaced by the token \"[MASK]\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Other Experiments", "sec_num": null }, { "text": "\u2022 SYNONYMS: Using WordNet synonyms, we skipped common words 18 and randomly substituted up to 5 words. Synonyms matched the tense and plurarity of original words using Pattern package, which we note, had imperfections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Other Experiments", "sec_num": null }, { "text": "\u2022 T5PARA: We ran the sentence through a pretrained T5-paraphraser model 19 to generate paraphrased sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Other Experiments", "sec_num": null }, { "text": "Appendix Table A4 shows an example sentence with the above edits for the same causal sentence of Table 1 . 
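A rough sketch of the MASK and SYNONYMS edit types is given below. It assumes spaCy POS tags and NLTK WordNet, uses a simplified skip-list in place of the exact tag list in footnote 18, and leaves out the Pattern-based tense and plurality matching of synonyms.

```python
# Illustrative sketch of two alternative edit types from this appendix.
import random
import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load("en_core_web_sm")
SKIP_POS = {"DET", "ADP", "CCONJ", "AUX", "PRON", "PART", "PUNCT", "NUM"}  # simplified skip-list

def mask_edit(sentence):
    """MASK: replace every noun with the [MASK] token."""
    doc = nlp(sentence)
    return "".join(("[MASK]" if t.pos_ in ("NOUN", "PROPN") else t.text) + t.whitespace_
                   for t in doc)

def synonym_edit(sentence, max_swaps=5, seed=0):
    """SYNONYMS: substitute up to `max_swaps` content words with WordNet synonyms."""
    random.seed(seed)
    doc = nlp(sentence)
    tokens = [t.text for t in doc]
    candidates = [t for t in doc if t.is_alpha and t.pos_ not in SKIP_POS]
    for t in random.sample(candidates, min(max_swaps, len(candidates))):
        synonyms = {l.name().replace("_", " ")
                    for s in wn.synsets(t.lemma_) for l in s.lemmas()} - {t.lemma_, t.text.lower()}
        if synonyms:
            tokens[t.i] = random.choice(sorted(synonyms))
    return " ".join(tokens)

sent = "TyG is effective to identify individuals at risk for NAFLD."
print(mask_edit(sent))     # [MASK] is effective to identify [MASK] at [MASK] for [MASK].
print(synonym_edit(sent))
```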
With the SVM model, only STRENGTHEN\u00d7SYNONYMS appended with original increased accuracy on CSci by 1.01% while STRENGTHEN\u00d7T5PARA increased accuracy by 0.39%. However, these findings could not be replicated across to the MLP model nor for NEGATION.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Table A4", "ref_id": "TABREF16" }, { "start": 97, "end": 104, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "A.3 Other Experiments", "sec_num": null }, { "text": "Extending to a Five-way Classification In our main set up, we focused on edits that matched the original labels and were randomly sampled such that the unified train set matches base class distribution for fairer comparison to baseline. Successful NEGATION examples were labeled no relationship (c 0 ). However, to the extent that we believe negated causal statements deserve a class of their own, we also explore the event when negations were labeled with a new level not causal (c 4 ) instead. Based on the set up for Table 3 , we obtained even higher improvements in accuracy of +70.53% and +74.74% for the MLP and SVM model respectively. This could be due to the clearer distinction of a not causal sentence structure compared to if we were to combine them with other no relationship statements. When we extended the MLP and SVM model to work with such a five-way classification set up, we did observe improvements in Acc Orig for SHORTEN, MULTIPLES and SYNONYMS edit types. However, because we cannot truly balance the dataset (random sampling does not apply here because we have a whole new class), we cannot be certain if the improvements were due to the larger dataset or the model picking up on the boundaries. Furthermore, the improvements did not generalize on our OOD set ups.", "cite_spans": [], "ref_spans": [ { "start": 520, "end": 527, "text": "Table 3", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "A.3 Other Experiments", "sec_num": null }, { "text": "Other Training Setups In addition to standard cross-entropy based supervised learning, we also explored contrastive learning schemes. In particular, we trained with Supervised Contrastive Loss (SupCon) (Khosla et al., 2020; and Triplet Margin Loss (Paszke et al., 2019) . In the contrastive setup, we introduced counterfactuals as the negative examples for each anchor sentence. For positive samples, we used SHORTEN, SYNONYMS and T5PARA augmentation strategies derived from the original anchor sentence. However, our results did not provide performance improvements in either CSci or OOD datasets, highlighting the challenge in building a generalized scheme of counterfactual generations. 
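For readers who wish to replicate this setup, the sketch below shows how counterfactuals can slot into a triplet objective. It uses PyTorch's TripletMarginLoss and a hypothetical sentence encoder, and is only one of several configurations we tried.

```python
# Sketch of the triplet setup explored here: the original sentence is the anchor,
# a label-preserving augment (e.g. SYNONYMS or T5PARA) the positive, and the
# counterfactual edit the negative. `encode` is a hypothetical stand-in for any
# sentence encoder (e.g. pooled BioBERT states) returning (batch, dim) tensors.
import torch
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)

def contrastive_step(encode, anchors, positives, negatives):
    a = encode(anchors)     # original causal sentences
    p = encode(positives)   # meaning-preserving augments (same label)
    n = encode(negatives)   # negated counterfactuals (label flipped)
    return triplet_loss(a, p, n)

# toy check with random embeddings in place of a real encoder
fake_encode = lambda batch: torch.randn(len(batch), 768)
print(contrastive_step(fake_encode, ["s1", "s2"], ["s1 para", "s2 para"], ["not s1", "not s2"]))
```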
Exploring avenues in contrastive learning remains a critical future work.", "cite_spans": [ { "start": 202, "end": 223, "text": "(Khosla et al., 2020;", "ref_id": null }, { "start": 248, "end": 269, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "A.3 Other Experiments", "sec_num": null }, { "text": "We include additional details about our main experiment not highlighted in other parts of the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.4 Reproducibility Checklist", "sec_num": null }, { "text": "\u2022 Computing Infrastructure: Tesla V100 SXM2 32 GB", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.4 Reproducibility Checklist", "sec_num": null }, { "text": "\u2022 MLP Hyperparameters: \"atten-tion_probs_dropout_prob\": 0.1, \"hidden_act\": \"gelu\", \"hidden_dropout_prob\": 0.1, \"hid-den_size\": 768, \"initializer_range\": 0.02, \"intermediate_size\": 3072, \"layer_norm_eps\": 1e-12, \"max_position_embeddings\": 512, \"num_attention_heads\": 12, \"num_hidden_layers\": 12, \"type_vocab_size\": 2, \"vocab_size\": 28996", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.4 Reproducibility Checklist", "sec_num": null }, { "text": "\u2022 SVM Hyperparameters: kernel: \"linear\", \"C\": 1e-2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.4 Reproducibility Checklist", "sec_num": null }, { "text": "\u2022 Average Runtime: For 5 epochs and 5 folds, our baseline MLP model took approximately 22 minutes 51 seconds to train and validate for the CSci dataset. Method REGULAR (EDIT) REGULAR (EDIT-ALT) n VB_1.2 Eyes with better vision at baseline had no more favorable prognosis, whereas eyes with initial macular detachment, intraoperative iatrogenic break, or heavy SO showed more unfavorable outcomes.", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 174, "text": "Method REGULAR (EDIT)", "ref_id": null } ], "eq_spans": [], "section": "A.4 Reproducibility Checklist", "sec_num": null }, { "text": "Eyes with better vision at baseline abstained a more favorable prognosis, whereas eyes with initial macular detachment, intraoperative iatrogenic break, or heavy SO showed more unfavorable outcomes. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.5 Additional Figures & Tables", "sec_num": null }, { "text": "IN_1.1 Although further investigation of long-term and prospective studies is not needed, we identified four variables as predisposing factors for higher major amputation in diabetic patients through meta-analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "53", "sec_num": null }, { "text": "-1 Table A1 : Example negated causal sentences per method Notes. \"Method\" refers to NEGATION method label as per Algorithm 1. REGULAR (EDIT) refers to direct negation from this Algorithm. REGULAR (EDIT-ALT) refers to alternate intervention using same negation location, but based off antonyms from WordNet, if available. Interventions, excluding lemmatization or case-changes, are highlighted in green. \"n\" is the number of successful conversions applicable in CSci corpus. Table A2 : Example strengthened conditional causal sentences per method. Notes. \"Method\" refers to strengthening method label as per Algorithm 2, resulting in augments as per REGULAR (EDIT). Interventions, excluding lemmatization or case-changes, are highlighted in green. Words removed from original version are striked out and highlighted in red. 
\"n\" is the number of successful conversions applicable in CSci corpus.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Table A1", "ref_id": "TABREF3" }, { "start": 474, "end": 482, "text": "Table A2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "53", "sec_num": null }, { "text": "Label S/N Sentence c 0 c 1 c 2 c 3 Conversion 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "53", "sec_num": null }, { "text": "None the less, both artificially sweetened beverages and fruit juice were unlikely to be healthy alternatives to sugar sweetened beverages for the prevention of type 2 diabetes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "53", "sec_num": null }, { "text": "There was no effect on lumen volume, fibro-fatty and necrotic tissue volumes. In two randomized trials comparing the PCSK9 inhibitor bococizumab with placebo, bococizumab had no benefit with respect to major adverse cardiovascular events in the trial involving lowerrisk patients but did have a significant benefit in the trial involving higher-risk patients.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original 2", "sec_num": "1" }, { "text": "1 1 NEGATION 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original 2", "sec_num": "1" }, { "text": "Altering margin policies to follow either SSO-ASTRO or ABS guidelines would result in a modest reduction in the national reexcision rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original 2", "sec_num": "1" }, { "text": "1 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original 2", "sec_num": "1" }, { "text": "Adding an allowance for accumulation of thyroidal iodine stores would produce an EAR of 72 \u00c3\u017d\u00c2\u00bcg and a recommended dietary allowance of 80 \u00c3\u017d\u00c2\u00bcg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STRENGTHEN 6", "sec_num": null }, { "text": "1 1 STRENGTHEN 7 \" In a randomized controlled trial of 230 infants with genetic risk factors for celiac disease, we did not find evidence that weaning to a diet of extensively hydrolyzed formula compared with cows milk-based formula would decrease the risk for celiac disease later in life.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STRENGTHEN 6", "sec_num": null }, { "text": "1 1 STRENGTHEN Table A3 : Sentences that had duplicates with differing labels. Notes. Rule-based de-duplication was performed, with the final label kept highlighted in green. \"Conversion\" refers to the augmented edit dataset that when we merge with the original, the duplicate appears. Do note that Sentence S/N 7, to us, should be labeled as no relationship (c 0 ), but was labeled as conditional causal (c 2 ) by original authors.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 25, "text": "STRENGTHEN Table A3", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "STRENGTHEN 6", "sec_num": null }, { "text": "TyG is effective to identify individuals at risk for NAFLD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original", "sec_num": null }, { "text": "TyG is not effective to identify individuals at risk for NAFLD. REGULAR (EDIT-ALT) TyG is ineffective to identify individuals at risk for NAFLD. 
TyG exists inefficient to describe someone at take chances for NAFLD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REGULAR (EDIT)", "sec_num": null }, { "text": "Ineffective for identifying individuals at risk for NAFLD. Table A5 : Number of sentences per class label after appending edits with base corpus, de-duplication and random sampling. Note that the dataset corresponding to the first row did not undergo de-duplication (i.e. we used the original corpus as is). Table A7 : Performance of BERT+MLP on CSci corpus. Notes. BioBERT models trained on variations of CSci corpus (Original plus edits), with edits matching existing labels and randomly sampled to match base class distribution. Results are for test set when trained and predicted over 5-folds. Precision (P), Recall (R), macro F-score (F1) and accuracy (Acc) are reported in %. Columns with lowerscript \"Orig\" are calculated for base items only (i.e. Edits are ignored). Rows below \"Ours (Base)\" report relative changes to it. The best performance per column is bolded.", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 67, "text": "Table A5", "ref_id": "TABREF11" }, { "start": 308, "end": 316, "text": "Table A7", "ref_id": null } ], "eq_spans": [], "section": "T5PARA", "sec_num": null }, { "text": "Unfortunately, we did not investigate the negation and delete functions provided by Wu et al. (2021) but acknowledge this to be an important future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our edit schemes, model pipeline and augmented datasets are available at https://github.com/tanfiona/ CausalAugment.7 We used NLTK (Wagner, 2010) to obtain POS tags in PennTreeBank format and spaCy (Honnibal et al., 2020) for dependency tree extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The aim of our paper is to demonstrate that any improvements in our scores are due to increased variations of examples per class label. These variations must be meaningful for any improvement in scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/Das-Boot/scite 10 https://github.com/chridey/AltLex", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The full original set achieved 90.33% accuracy if we were to include the subset that is dropped out due to random sampling. To arrive at this value, we predicted the labels for this dropped-out subset like an OOD dataset, i.e. 
taken across 5-folds after training completes.13 In experiments not shown, the models trained on the full original CSci corpus almost certainly wrongly predicts the 190 negated sentences as causal To prove our point that models are memorizing causal terms, we removed the overlapping sentences to eliminate the possibility of the models memorizing similar sentences in train and test set instead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\"In the present recession, which has been triggered by a collapse in land prices, land-value taxation would reverse the collapse -not by re-inflating a temporary speculative bubble, but by inducing investment in infrastructure that permanently enhances the utility of the land.\"15 \"The glass tealight holder appears to float inside the metal spiral as it spins in the gentle breeze.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We do not try to find synonyms for common words with these POS types: 'DT','IN', 'EX', 'CC', 'MD', 'WP', 'WD', 'WR', 'UH', 'RP', 'SY', 'PO'19 https://huggingface.co/ ramsrigouthamg/t5_paraphraser", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "return text, method, edit_id", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research is supported by Singapore Ministry of Education Academic Research Fund Tier 1 under MOE's official grant number T1 251RES2029.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Algorithm 1: NegationRules -Causal negation scheme Input: edit_id, text_ids, text, pos, sentid2tid, max_try=2, curr_try=0 Output: text, method, edit_id 1 curr_try \u2190 curr_try + 1 2 curr_pos, curr_word \u2190 pos [edit_id] , text [edit_id] 3 prev_pos, prev_word \u2190 pos [ Table A8 : Performance of BERT+MLP+SVM on CSci corpus. Notes. Yu et al.'s SVM method does not use BERT inputs. Our BioBERT models are trained on variations of CSci corpus (Original plus edits), with edits matching existing labels and randomly sampled to match base class distribution. Results are for test set when trained and predicted over 5-folds. Precision (P), Recall (R), macro F-score (F1) and accuracy (Acc) are reported in %. Columns with lowerscript \"Orig\" are calculated for base items only (i.e. Edits are ignored). Rows below \"Ours (Base)\" report relative changes to it. 
The best performance per column is bolded.", "cite_spans": [ { "start": 51, "end": 66, "text": "Input: edit_id,", "ref_id": null }, { "start": 67, "end": 76, "text": "text_ids,", "ref_id": null }, { "start": 77, "end": 82, "text": "text,", "ref_id": null }, { "start": 83, "end": 87, "text": "pos,", "ref_id": null }, { "start": 88, "end": 99, "text": "sentid2tid,", "ref_id": null }, { "start": 100, "end": 110, "text": "max_try=2,", "ref_id": null }, { "start": 111, "end": 121, "text": "curr_try=0", "ref_id": null }, { "start": 206, "end": 215, "text": "[edit_id]", "ref_id": null }, { "start": 223, "end": 232, "text": "[edit_id]", "ref_id": null }, { "start": 261, "end": 262, "text": "[", "ref_id": null } ], "ref_spans": [ { "start": 263, "end": 271, "text": "Table A8", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A sentiment-annotated dataset of English causal connectives", "authors": [ { "first": "Marta", "middle": [], "last": "Andersson", "suffix": "" }, { "first": "Murathan", "middle": [], "last": "Kurfal\u0131", "suffix": "" }, { "first": "Robert", "middle": [], "last": "\u00d6stling", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 14th Linguistic Annotation Workshop", "volume": "", "issue": "", "pages": "24--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Andersson, Murathan Kurfal\u0131, and Robert \u00d6stling. 2020. A sentiment-annotated dataset of English causal connectives. In Proceedings of the 14th Linguistic Annotation Workshop, pages 24-33, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automatic extraction of causal relations from natural language texts: A comprehensive survey", "authors": [ { "first": "Nabiha", "middle": [], "last": "Asghar", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabiha Asghar. 2016. Automatic extraction of causal relations from natural language texts: A comprehen- sive survey. CoRR, abs/1605.07895.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Causal interpretation of correlational studies-analysis of medical news on the website of the official journal for german physicians", "authors": [ { "first": "Susanne", "middle": [], "last": "Buhse", "suffix": "" }, { "first": "Anne", "middle": [ "Christin" ], "last": "Rahn", "suffix": "" }, { "first": "Merle", "middle": [], "last": "Bock", "suffix": "" }, { "first": "Ingrid", "middle": [], "last": "M\u00fchlhauser", "suffix": "" } ], "year": 2018, "venue": "PLoS One", "volume": "13", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susanne Buhse, Anne Christin Rahn, Merle Bock, and Ingrid M\u00fchlhauser. 2018. Causal interpretation of correlational studies-analysis of medical news on the website of the official journal for german physi- cians. 
PLoS One, 13(5):e0196833.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The event StoryLine corpus: A new benchmark for causal and temporal relation extraction", "authors": [ { "first": "Tommaso", "middle": [], "last": "Caselli", "suffix": "" }, { "first": "Piek", "middle": [], "last": "Vossen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Events and Stories in the News Workshop", "volume": "", "issue": "", "pages": "77--86", "other_ids": { "DOI": [ "10.18653/v1/W17-2711" ] }, "num": null, "urls": [], "raw_text": "Tommaso Caselli and Piek Vossen. 2017a. The event StoryLine corpus: A new benchmark for causal and temporal relation extraction. In Proceedings of the Events and Stories in the News Workshop, pages 77- 86, Vancouver, Canada. Association for Computa- tional Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The event storyline corpus: A new benchmark for causal and temporal relation extraction", "authors": [ { "first": "Tommaso", "middle": [], "last": "Caselli", "suffix": "" }, { "first": "Piek", "middle": [], "last": "Vossen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Events and Stories in the News Workshop@ACL 2017", "volume": "", "issue": "", "pages": "77--86", "other_ids": { "DOI": [ "10.18653/v1/w17-2711" ] }, "num": null, "urls": [], "raw_text": "Tommaso Caselli and Piek Vossen. 2017b. The event storyline corpus: A new benchmark for causal and temporal relation extraction. In Proceedings of the Events and Stories in the News Workshop@ACL 2017, Vancouver, Canada, August 4, 2017, pages 77- 86. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A simple framework for contrastive learning of visual representations", "authors": [ { "first": "Ting", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Kornblith", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 37th International Conference on Machine Learning", "volume": "2020", "issue": "", "pages": "1597--1607", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Ma- chine Learning Research, pages 1597-1607. PMLR.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Enhancing multiple-choice question answering with causal knowledge", "authors": [ { "first": "Dhairya", "middle": [], "last": "Dalal", "suffix": "" }, { "first": "Mihael", "middle": [], "last": "Arcan", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2021, "venue": "Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures", "volume": "", "issue": "", "pages": "70--80", "other_ids": { "DOI": [ "10.18653/v1/2021.deelio-1.8" ] }, "num": null, "urls": [], "raw_text": "Dhairya Dalal, Mihael Arcan, and Paul Buitelaar. 2021. Enhancing multiple-choice question answering with causal knowledge. 
In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowl- edge Extraction and Integration for Deep Learning Architectures, pages 70-80, Online. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Pattern for python", "authors": [ { "first": "Tom", "middle": [], "last": "De", "suffix": "" }, { "first": "Smedt", "middle": [], "last": "", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2012, "venue": "J. Mach. Learn. Res", "volume": "13", "issue": "1", "pages": "2063--2067", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom De Smedt and Walter Daelemans. 2012. Pattern for python. J. Mach. Learn. Res., 13(1):2063-2067.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/n19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186. Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semeval-2018 task 7: Semantic relation extraction and classification in scientific papers", "authors": [ { "first": "Kata", "middle": [], "last": "G\u00e1bor", "suffix": "" }, { "first": "Davide", "middle": [], "last": "Buscaldi", "suffix": "" }, { "first": "Anne-Kathrin", "middle": [], "last": "Schumann", "suffix": "" }, { "first": "Behrang", "middle": [], "last": "Qasemizadeh", "suffix": "" }, { "first": "Ha\u00effa", "middle": [], "last": "Zargayouna", "suffix": "" }, { "first": "Thierry", "middle": [], "last": "Charnois", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2018", "volume": "", "issue": "", "pages": "679--688", "other_ids": { "DOI": [ "10.18653/v1/s18-1111" ] }, "num": null, "urls": [], "raw_text": "Kata G\u00e1bor, Davide Buscaldi, Anne-Kathrin Schu- mann, Behrang QasemiZadeh, Ha\u00effa Zargayouna, and Thierry Charnois. 2018. Semeval-2018 task 7: Semantic relation extraction and classifica- tion in scientific papers. In Proceedings of The 12th International Workshop on Semantic Evalua- tion, SemEval@NAACL-HLT 2018, New Orleans, Louisiana, USA, June 5-6, 2018, pages 679-688. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Evaluating models' local decision boundaries via contrast sets", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Basmova", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Bogin", "suffix": "" }, { "first": "Sihao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Dheeru", "middle": [], "last": "Dua", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Ananth", "middle": [], "last": "Gottumukkala", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Ilharco", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jiangming", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Phoebe", "middle": [], "last": "Mulcaire", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Ning", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Ally", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020", "volume": "", "issue": "", "pages": "1307--1323", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.117" ] }, "num": null, "urls": [], "raw_text": "Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nel- son F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, pages 1307-1323. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Text mining for causal relations", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" }, { "first": "Dan", "middle": [ "I" ], "last": "Moldovan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Fifteenth International Florida Artificial Intelligence Research Society Conference", "volume": "", "issue": "", "pages": "360--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Girju and Dan I. Moldovan. 2002. Text min- ing for causal relations. 
In Proceedings of the Fif- teenth International Florida Artificial Intelligence Research Society Conference, May 14-16, 2002, Pen- sacola Beach, Florida, USA, pages 360-364. AAAI Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Causenet: Towards a causality graph extracted from the web", "authors": [ { "first": "Stefan", "middle": [], "last": "Heindorf", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Scholten", "suffix": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Axel-Cyrille Ngonga", "middle": [], "last": "Ngomo", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" } ], "year": 2020, "venue": "CIKM '20: The 29th ACM International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "3023--3030", "other_ids": { "DOI": [ "10.1145/3340531.3412763" ] }, "num": null, "urls": [], "raw_text": "Stefan Heindorf, Yan Scholten, Henning Wachsmuth, Axel-Cyrille Ngonga Ngomo, and Martin Potthast. 2020. Causenet: Towards a causality graph ex- tracted from the web. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, Oc- tober 19-23, 2020, pages 3023-3030. ACM.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals", "authors": [ { "first": "Iris", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "\u00d3", "middle": [], "last": "Diarmuid", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Lorenza", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Romano", "suffix": "" }, { "first": "", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "33--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid \u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: Multi-way classification of semantic relations be- tween pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evalua- tion, pages 33-38, Uppsala, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Identifying causal relations using parallel wikipedia articles", "authors": [ { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016", "volume": "1", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/p16-1135" ] }, "num": null, "urls": [], "raw_text": "Christopher Hidey and Kathy McKeown. 2016. Iden- tifying causal relations using parallel wikipedia arti- cles. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Lin- guistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "spaCy: Industrial-strength Natural Language Processing in Python", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.1212303" ] }, "num": null, "urls": [], "raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Semeval-2012 task 2: Measuring degrees of relational similarity", "authors": [ { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" }, { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Keith", "middle": [ "J" ], "last": "Holyoak", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 6th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2012", "volume": "", "issue": "", "pages": "356--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Jurgens, Saif Mohammad, Peter D. Turney, and Keith J. Holyoak. 2012. Semeval-2012 task 2: Mea- suring degrees of relational similarity. In Proceed- ings of the 6th International Workshop on Seman- tic Evaluation, SemEval@NAACL-HLT 2012, Mon- tr\u00e9al, Canada, June 7-8, 2012, pages 356-364. The Association for Computer Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning the difference that makes A difference with counterfactuallyaugmented data", "authors": [ { "first": "Divyansh", "middle": [], "last": "Kaushik", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" }, { "first": "Zachary", "middle": [ "Chase" ], "last": "Lipton", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Divyansh Kaushik, Eduard H. Hovy, and Zachary Chase Lipton. 2020a. Learning the differ- ence that makes A difference with counterfactually- augmented data. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Explaining the efficacy of counterfactually-augmented data", "authors": [ { "first": "Divyansh", "middle": [], "last": "Kaushik", "suffix": "" }, { "first": "Amrith", "middle": [], "last": "Setlur", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" }, { "first": "Zachary", "middle": [ "C" ], "last": "Lipton", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Divyansh Kaushik, Amrith Setlur, Eduard H. Hovy, and Zachary C. Lipton. 2020b. Explaining the ef- ficacy of counterfactually-augmented data. CoRR, abs/2010.02114.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Ce Liu, and Dilip Krishnan. 2020. 
Supervised contrastive learning", "authors": [ { "first": "Prannay", "middle": [], "last": "Khosla", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Teterwak", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Sarna", "suffix": "" }, { "first": "Yonglong", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Phillip", "middle": [], "last": "Isola", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Maschinot", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.11362" ] }, "num": null, "urls": [], "raw_text": "Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. arXiv preprint arXiv:2004.11362.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2020, "venue": "Bioinform", "volume": "36", "issue": "4", "pages": "1234--1240", "other_ids": { "DOI": [ "10.1093/bioinformatics/btz682" ] }, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinform., 36(4):1234- 1240.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Causality extraction based on self-attentive bilstm-crf with transferred embeddings", "authors": [ { "first": "Zhaoning", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaotian", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Jiangtao", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2021, "venue": "Neurocomputing", "volume": "423", "issue": "", "pages": "207--219", "other_ids": { "DOI": [ "10.1016/j.neucom.2020.08.078" ] }, "num": null, "urls": [], "raw_text": "Zhaoning Li, Qi Li, Xiaotian Zou, and Jiangtao Ren. 2021. Causality extraction based on self-attentive bilstm-crf with transferred embeddings. 
Neurocom- puting, 423:207-219.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Breaking NLP: Using morphosyntax, semantics, pragmatics and world knowledge to fool sentiment analysis systems", "authors": [ { "first": "Taylor", "middle": [], "last": "Mahler", "suffix": "" }, { "first": "Willy", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Micha", "middle": [], "last": "Elsner", "suffix": "" }, { "first": "David", "middle": [], "last": "King", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Cory", "middle": [], "last": "Shain", "suffix": "" }, { "first": "Symon", "middle": [], "last": "Stevens-Guille", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems", "volume": "", "issue": "", "pages": "33--39", "other_ids": { "DOI": [ "10.18653/v1/W17-5405" ] }, "num": null, "urls": [], "raw_text": "Taylor Mahler, Willy Cheung, Micha Elsner, David King, Marie-Catherine de Marneffe, Cory Shain, Symon Stevens-Guille, and Michael White. 2017. Breaking NLP: Using morphosyntax, semantics, pragmatics and world knowledge to fool sentiment analysis systems. In Proceedings of the First Work- shop on Building Linguistically Generalizable NLP Systems, pages 33-39, Copenhagen, Denmark. As- sociation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Hugues De Mazancourt, and Mahmoud El-Haj. 2020. The financial document causality detection shared task (FinCausal 2020)", "authors": [ { "first": "Dominique", "middle": [], "last": "Mariko", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Abi-Akl", "suffix": "" }, { "first": "Estelle", "middle": [], "last": "Labidurie", "suffix": "" }, { "first": "Stephane", "middle": [], "last": "Durfort", "suffix": "" } ], "year": null, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "23--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominique Mariko, Hanna Abi-Akl, Estelle Labidurie, Stephane Durfort, Hugues De Mazancourt, and Mah- moud El-Haj. 2020. The financial document causal- ity detection shared task (FinCausal 2020). In Pro- ceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Sum- marisation, pages 23-32, Barcelona, Spain (Online). COLING.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "It's all in the name: Mitigating gender bias with name-based counterfactual data substitution", "authors": [ { "first": "Hila", "middle": [], "last": "Rowan Hall Maudslay", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Gonen", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "", "middle": [], "last": "Teufel", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "5266--5274", "other_ids": { "DOI": [ "10.18653/v1/D19-1530" ] }, "num": null, "urls": [], "raw_text": "Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mit- igating gender bias with name-based counterfactual data substitution. 
In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing, EMNLP- IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5266-5274. Association for Computa- tional Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Wordnet: A lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Commun. ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": { "DOI": [ "10.1145/219717.219748" ] }, "num": null, "urls": [], "raw_text": "George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Annotating causality in the TempEval-3 corpus", "authors": [ { "first": "Paramita", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Rachele", "middle": [], "last": "Sprugnoli", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Tonelli", "suffix": "" }, { "first": "Manuela", "middle": [], "last": "Speranza", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the EACL 2014 Workshop on Computational Approaches to Causality in Language (CAtoCL)", "volume": "", "issue": "", "pages": "10--19", "other_ids": { "DOI": [ "10.3115/v1/W14-0702" ] }, "num": null, "urls": [], "raw_text": "Paramita Mirza, Rachele Sprugnoli, Sara Tonelli, and Manuela Speranza. 2014. Annotating causality in the TempEval-3 corpus. In Proceedings of the EACL 2014 Workshop on Computational Ap- proaches to Causality in Language (CAtoCL), pages 10-19, Gothenburg, Sweden. Association for Com- putational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "CATENA: CAusal and TEmporal relation extraction from NAtural language texts", "authors": [ { "first": "Paramita", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Tonelli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "64--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paramita Mirza and Sara Tonelli. 2016a. CATENA: CAusal and TEmporal relation extraction from NAt- ural language texts. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 64-75, Osaka, Japan. The COLING 2016 Organizing Com- mittee.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "CATENA: causal and temporal relation extraction from natural language texts", "authors": [ { "first": "Paramita", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Tonelli", "suffix": "" } ], "year": 2016, "venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers", "volume": "", "issue": "", "pages": "64--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paramita Mirza and Sara Tonelli. 2016b. CATENA: causal and temporal relation extraction from natu- ral language texts. In COLING 2016, 26th Inter- national Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 64-75. 
ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP", "authors": [ { "first": "John", "middle": [ "X" ], "last": "Morris", "suffix": "" }, { "first": "Eli", "middle": [], "last": "Lifland", "suffix": "" }, { "first": "Jin", "middle": [ "Yong" ], "last": "Yoo", "suffix": "" }, { "first": "Jake", "middle": [], "last": "Grigsby", "suffix": "" }, { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Yanjun", "middle": [], "last": "Qi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 -Demos", "volume": "", "issue": "", "pages": "119--126", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.16" ] }, "num": null, "urls": [], "raw_text": "John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmenta- tion, and adversarial training in NLP. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demon- strations, EMNLP 2020 -Demos, Online, November 16-20, 2020, pages 119-126. Association for Com- putational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "K\u00f6pf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. 
In Advances in Neural Informa- tion Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Automatic discourse connective detection in biomedical text", "authors": [ { "first": "Rashmi", "middle": [], "last": "Balaji Polepalli Ramesh", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Harrington", "suffix": "" }, { "first": "", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2012, "venue": "J. Am. Medical Informatics Assoc", "volume": "19", "issue": "5", "pages": "800--808", "other_ids": { "DOI": [ "10.1136/amiajnl-2011-000775" ] }, "num": null, "urls": [], "raw_text": "Balaji Polepalli Ramesh, Rashmi Prasad, Tim Miller, Brian Harrington, and Hong Yu. 2012. Automatic discourse connective detection in biomedical text. J. Am. Medical Informatics Assoc., 19(5):800-808.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Tailor: Generating and perturbing text with semantic controls", "authors": [ { "first": "Alexis", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Tongshuang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, and Matt Gardner. 2021. Tailor: Generating and perturbing text with semantic controls. CoRR, abs/2107.07150.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Bethan Dalton, Fred Boy, and Christopher D Chambers. 2014. The association between exaggeration in health related science news and academic press releases: retrospective observational study", "authors": [ { "first": "Petroc", "middle": [], "last": "Sumner", "suffix": "" }, { "first": "Solveiga", "middle": [], "last": "Vivian-Griffiths", "suffix": "" }, { "first": "Jacky", "middle": [], "last": "Boivin", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Christos", "middle": [ "A" ], "last": "Venetis", "suffix": "" }, { "first": "Aim\u00e9e", "middle": [], "last": "Davies", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Ogden", "suffix": "" }, { "first": "Leanne", "middle": [], "last": "Whelan", "suffix": "" }, { "first": "Bethan", "middle": [], "last": "Hughes", "suffix": "" } ], "year": null, "venue": "BMJ", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1136/bmj.g7015" ] }, "num": null, "urls": [], "raw_text": "Petroc Sumner, Solveiga Vivian-Griffiths, Jacky Boivin, Andy Williams, Christos A Venetis, Aim\u00e9e Davies, Jack Ogden, Leanne Whelan, Bethan Hughes, Bethan Dalton, Fred Boy, and Christo- pher D Chambers. 2014. The association between exaggeration in health related science news and aca- demic press releases: retrospective observational study. 
BMJ, 349.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "Causal sentence classification classifies textual claims into various categories of causal strengths.", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "Strategies to generate counterfactual examples for CSC.", "uris": null }, "TABREF2": { "num": null, "type_str": "table", "content": "
Conversion   Edit Type   Sentence
", "text": "(EDIT) TyG is not effective to identify individuals at risk for NAFLD. REGULAR (EDIT-ALT) TyG is ineffective to identify individuals at risk for NAFLD.", "html": null }, "TABREF3": { "num": null, "type_str": "table", "content": "", "text": "Examples of counterfactual causal sentence augments. Notes. Interventions are highlighted in green. Causal Strengthening can also have SHORTEN and MULTIPLES edits but is excluded due to space constrains.", "html": null }, "TABREF5": { "num": null, "type_str": "table", "content": "
(2019)'s unigram and bigrams method, as we observed significant improvements in accuracy from 77.2% to 88.86% and in macro F-score from 72.2% to 86.95%. In our experiments, including a mixture of edits (NEGATION\u00d7SHORTEN with
11
", "text": "reports our performance on the CSci corpus. For the MLP baseline model, we were unable to exactly replicate the reported scores by Yu et al. (2019) of 90.1% accuracy and 88.1% macro Fscore: We achieved slightly lower scores of 89.15% and 87.01% respectively. For SVM, our proposed implementation using updated BERT embeddings with a detached head was superior over Yu et al.", "html": null }, "TABREF7": { "num": null, "type_str": "table", "content": "
Conversion   n     MLP      SVM
Original     190   12.63    10.53
NEGATION     190   +61.05   +62.63
Original     87    77.01    73.56
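For reference, the NEGATION (REGULAR) conversion evaluated above rewrites a causal claim into its negated counterpart, as in 'TyG is effective ...' becoming 'TyG is not effective ...'. Below is a minimal sketch of such an edit, assuming that inserting 'not' after the first copula is sufficient; it is an illustration using spaCy, not the authors' Algorithm 1 (NegationRules).

import spacy

nlp = spacy.load('en_core_web_sm')  # assumes the small English model is installed

def negate_copula(sentence: str) -> str:
    # Insert 'not' after the first copula/auxiliary form of 'be' ('is', 'was', 'are', ...).
    # This only covers the simplest REGULAR negation case shown in the examples.
    doc = nlp(sentence)
    tokens = [t.text for t in doc]
    for i, t in enumerate(doc):
        if t.lemma_ == 'be' and t.pos_ in ('AUX', 'VERB'):
            tokens.insert(i + 1, 'not')
            break
    return ' '.join(tokens)

print(negate_copula('TyG is effective to identify individuals at risk for NAFLD.'))
# -> 'TyG is not effective to identify individuals at risk for NAFLD .'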
", "text": "Performance on CSci corpus. Notes. BioBERT models trained on variations of CSci corpus (Original plus edits), with edits matching existing labels and randomly sampled to match base class distribution. Results are for validation set when trained and predicted over 5-folds. Macro F-score (F1) and accuracy (Acc) are in %. Columns with lowerscript \"Orig\" are calculated for original sentences only (i.e. Edits are ignored). Rows below \"Ours (Base)\" report relative changes to it. Best performance per column is bolded. Precision and Recall scores are available inAppendix Tables A7 and A8.", "html": null }, "TABREF8": { "num": null, "type_str": "table", "content": "", "text": "", "html": null }, "TABREF9": { "num": null, "type_str": "table", "content": "
                                                 SCITE                              AltLex
                                           MLP              SVM              MLP              SVM
Conversion            Edit Type        Acc   Acc_Group  Acc   Acc_Group  Acc   Acc_Group  Acc   Acc_Group
Ours (Base)                            86.28  85.83     85.04  84.50     85.57  84.64     85.91  84.68
NEGATION              REGULAR          -1.46  -1.67     -0.36  -0.41     -0.22  -0.44     +0.18  +0.41
NEGATION              SHORTEN          -0.20  -0.27     +0.02  +0.02     +0.61  +0.54     +0.74  +1.05
NEGATION              MULTIPLES        -0.18  -0.16     -0.38  -0.38     +0.89  +0.95     +1.19  +1.58
STRENGTHEN            REGULAR          -0.27  -0.14     +1.01  +1.10     +0.51  +0.69     +0.54  +0.84
STRENGTHEN            SHORTEN          -3.40  -3.36     -0.11  -0.05     +0.30  +0.37     +0.99  +1.38
STRENGTHEN            MULTIPLES        -1.31  -1.28     -0.90  -0.90     +0.88  +0.99     +0.07  +0.29
NEGATION\u00d7SHORT, STRENGTHEN\u00d7REGU    -0.02  -0.05     +0.79  +0.63     +0.94  +0.84     +0.31  +0.41
NEGATION\u00d7MULTI, STRENGTHEN\u00d7REGU    -0.18  -0.16     +0.56  +0.56     +0.74  +0.88     +1.11  +1.33
Table 4: Performance on OOD datasets. Notes. BioBERT models trained on variations of the CSci corpus (Original
plus edits), with edits matching existing labels and randomly sampled to match the base class distribution. For SCITE
and AltLex, predictions take the mode class over 5-folds. Accuracies (Acc) are reported in %. Columns 'Acc'
consider exact class labels, while 'Acc_Group' calculates accuracy after converting the four class labels into
binary labels. Rows below \"Ours (Base)\" report relative changes to it. The best performance per column is bolded.
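To make the distinction between 'Acc' and 'Acc_Group' concrete, the following is a minimal sketch of both metrics; the binary_map used here is a hypothetical grouping of the four class labels into causal vs. non-causal and is not taken from the paper.

from typing import Dict, List

def accuracy(y_true: List[str], y_pred: List[str]) -> float:
    # Exact-match accuracy over the four class labels (c0..c3), in %.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)

def grouped_accuracy(y_true: List[str], y_pred: List[str], binary_map: Dict[str, str]) -> float:
    # Accuracy after collapsing the four labels to binary groups before comparison, in %.
    correct = sum(binary_map[t] == binary_map[p] for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)

# Illustrative mapping only; the actual grouping is defined by the evaluation code.
binary_map = {'c0': 'non-causal', 'c3': 'non-causal', 'c1': 'causal', 'c2': 'causal'}
print(accuracy(['c1', 'c0'], ['c2', 'c0']))                       # 50.0
print(grouped_accuracy(['c1', 'c0'], ['c2', 'c0'], binary_map))   # 100.0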
", "text": "4.4.1 NEGATION vs. STRENGTHENWhile analyzing both resultTables 2 and 4, one might wonder why the REGULAR edit schemes helped improve performance for STRENGTHEN, but not for NEGATION conversions. One possible explanation for this phenomenon is as such -Since any sentence that did not represent any form of correlational or causal meaning falls under c 0 , sentences that could fall under no relationship are lexically diverse. In other words, it is challenging to create edits that exhaustively reflect all c 0 sentence types. By and large, our negation schemes only covered one category of no relationship sentences, namely, sentences that imply not causal. On the other hand, conditional causal sentences were relatively well-defined in the original corpus. Therefore, STRENGTHEN did successfully represent most of the sentence types under c 2 . Inclusion Group Acc Acc Group Acc Acc Group Acc Acc Group", "html": null }, "TABREF11": { "num": null, "type_str": "table", "content": "", "text": "Net change in correct classification counts on CSci corpus compared to \"Ours (Base)\" for original examples. Note that NEGATION is the conversion of c 1 \u2192c 0 and STRENGTHEN is the conversion of c 2 \u2192c 1 ;", "html": null }, "TABREF12": { "num": null, "type_str": "table", "content": "
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2021. Polyjuice: Generating counterfactuals for explaining, evaluating, and improving models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6707-6723, Online. Association for Computational Linguistics.
Jinghang Xu, Wanli Zuo, Shining Liang, and Xianglin Zuo. 2020. A review of dataset and labeling methods for causality extraction. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 1519-1531. International Committee on Computational Linguistics.
Jie Yang, Soyeon Caren Han, and Josiah Poon. 2021. A survey on extraction of causal relations from natural language text. CoRR, abs/2101.06426.
Bei Yu, Yingya Li, and Jun Wang. 2019. Detecting causal language use in science findings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4663-4673. Association for Computational Linguistics.
Bei Yu, Jun Wang, Lu Guo, and Yingya Li. 2020. Measuring correlation-to-causation exaggeration in press releases. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 4860-4872. International Committee on Computational Linguistics.
", "text": "Wiebke Wagner. 2010. Steven bird, ewan klein and edward loper: Natural language processing with python, analyzing text with the natural language toolkit -o'reilly media, beijing, 2009, ISBN 978-0-596-51649-9. Lang. Resour. Evaluation, 44(4):421-424.", "html": null }, "TABREF14": { "num": null, "type_str": "table", "content": "
7  if lemma(next_word) = 'be' then
8      Replace curr_word with *was*                  // Method 'MOD_1.2'
9      Replace next_word with empty string
10 else if lemma(next_word) = 'have' then
11     if lemma(nnext_word) = 'be' then
12         Replace curr_word with *was*              // Method 'MOD_3.2'
13         Replace next_word and nnext_word with empty string
14     else
15         Replace curr_word with *had*              // Method 'MOD_3.1'
                                                     // Method 'MOD_4.1'
19     Replace next_word with empty string
20 else
21     Replace curr_word with ModalDict[curr_word]   // Method 'MOD_1.1'
Method    REGULAR (EDIT) (replaced span shown as [original \u2192 strengthened])                                              n
MOD_1.1   Physical therapy in conjunction with nutritional therapy [may \u2192 will] help prevent weakness in HSCT recipients.   98
MOD_2.1   The rs7044343 polymorphism [could be \u2192 was] involved in regulating the production of IL-33.                        42
MOD_3.1   Increased titers of cows milk antibody before anti-TG2A and celiac disease indicates that subjects with celiac disease [might have \u2192 had] increased intestinal permeability in early life.   21
MOD_4.1   Physical rehabilitation aimed at improving exercise tolerance [can possibly \u2192 will] improve the long-term prognosis after operations for lung cancer.   13
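As a companion to the pseudocode and examples above, the following is a minimal Python sketch of the modal-replacement rules (MOD_1.x, MOD_3.x and MOD_4.1). MODAL_DICT is a hypothetical mapping from weak to strong modals (e.g. 'may' to 'will'); the sketch illustrates the rule structure rather than the released implementation.

# Hypothetical ModalDict; the actual dictionary is defined in the released code.
MODAL_DICT = {'may': 'will', 'might': 'will', 'could': 'will',
              'can': 'will', 'would': 'will', 'should': 'will'}

def strengthen(tokens, pos, i):
    # tokens: word list; pos: PennTreebank POS tags; i: index of the weak modal (MD).
    tok = list(tokens)
    nxt = tok[i + 1].lower() if i + 1 < len(tok) else None
    nnxt = tok[i + 2].lower() if i + 2 < len(tok) else None
    if nxt == 'be':                                   # 'could be' -> 'was'        (MOD_1.2)
        tok[i], tok[i + 1] = 'was', ''
    elif nxt == 'have':
        if nnxt in ('be', 'been'):                    # 'might have been' -> 'was' (MOD_3.2)
            tok[i], tok[i + 1], tok[i + 2] = 'was', '', ''
        else:                                         # 'might have' -> 'had'      (MOD_3.1)
            tok[i], tok[i + 1] = 'had', ''
    elif i + 1 < len(pos) and pos[i + 1] == 'RB':     # drop hedging adverb        (MOD_4.1)
        tok[i], tok[i + 1] = MODAL_DICT.get(tok[i].lower(), tok[i]), ''
    else:                                             # 'may' -> 'will'            (MOD_1.1)
        tok[i] = MODAL_DICT.get(tok[i].lower(), tok[i])
    return ' '.join(w for w in tok if w)

# Example (MOD_1.1): 'Physical therapy may help ...' -> 'Physical therapy will help ...'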
", "text": "Algorithm 2: StrengthenRules -Causal strengthening scheme Input: edit_id, text_ids, text, pos, sentid2tid, curr_try=0 Output: text, method, edit_id 1 Initialize M odalDict 2 curr_try \u2190 curr_try + 1 3 curr_pos, curr_word \u2190 pos[edit_id], text[edit_id] 4 next_pos, next_word \u2190 pos[edit_id + 1], text[edit_id + 1] if valid else None 5 nnext_pos, nnext_word \u2190 pos[edit_id + 2], text[edit_id + 2] if valid else None 6 while curr_try <= max_try do Replace next_word with empty string 17 else if curr_pos = M D & next_pos = RB then 18 Replace curr_word with M odalDict[curr_word] Define method as method name if applicable edit occurs 23 return text, method, edit_id", "html": null }, "TABREF16": { "num": null, "type_str": "table", "content": "
Conversion                          Edit Type     n_c0   n_c1   n_c2   n_c3   n
Original (Yu et al., 2019)                        1356   494    213    998    3061
NEGATION                            REGULAR       1356   491    212    995    3054
NEGATION                            SHORTEN       1356   491    212    995    3054
NEGATION                            MULTIPLES     1356   491    212    995    3054
STRENGTHEN                          REGULAR       1353   494    209    995    3051
STRENGTHEN                          SHORTEN       1353   494    209    995    3051
STRENGTHEN                          MULTIPLES     1353   494    209    995    3051
NEGATION\u00d7SHORT, STRENGTHEN\u00d7REGU               1356   494    209    995    3054
NEGATION\u00d7MULTI, STRENGTHEN\u00d7REGU               1356   494    210    995    3055
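The counts above come from appending the generated edits to the base corpus, de-duplicating, and randomly sampling so that the class distribution of the base corpus is preserved. A rough pandas sketch of that procedure follows; the column names and sampling details are illustrative assumptions rather than the released pipeline.

import pandas as pd

def build_augmented_set(base: pd.DataFrame, edits: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    # base / edits: DataFrames with columns ['sentence', 'label'] (illustrative schema).
    combined = pd.concat([base, edits], ignore_index=True)
    # De-duplicate on the sentence text; in the paper, conflicting labels of
    # duplicates are resolved by rule (here we simply keep the first occurrence).
    combined = combined.drop_duplicates(subset='sentence', keep='first')
    # Randomly sample per class so the final set follows the base class distribution.
    target = base['label'].value_counts()
    parts = []
    for lab, n in target.items():
        pool = combined[combined['label'] == lab]
        parts.append(pool.sample(n=min(n, len(pool)), random_state=seed))
    return pd.concat(parts, ignore_index=True)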
", "text": "Extended examples of counterfactual causal sentence augments Notes. Interventions are highlighted in green.", "html": null }, "TABREF18": { "num": null, "type_str": "table", "content": "
Conversion                          Edit Type     P      R      F1     Acc    P_Orig  R_Orig  F1_Orig  Acc_Orig
Yu et al. (2019)                                  87.80  88.60  88.10  90.10  87.80   88.60   88.10    90.10
Ours (Base)                                       86.02  88.13  87.01  89.15  86.02   88.13   87.01    89.15
NEGATION                            REGULAR       -1.81  -1.20  -1.55  -1.92  +0.29   -0.71   -0.19    -0.95
NEGATION                            SHORTEN       +0.76  +1.45  +1.06  +0.89  +0.46   +0.78   +0.57    -0.04
NEGATION                            MULTIPLES     +1.47  +1.44  +1.46  +1.45  +1.05   +0.81   +0.93    +0.49
STRENGTHEN                          REGULAR       +1.96  +1.51  +1.75  +1.14  +0.98   +0.58   +0.80    +0.84
STRENGTHEN                          SHORTEN       +1.54  +0.54  +1.08  +0.91  +0.52   -0.29   +0.16    +0.62
STRENGTHEN                          MULTIPLES     +1.51  +0.38  +0.98  +0.98  +0.53   -0.70   -0.05    +0.57
NEGATION\u00d7SHORT, STRENGTHEN\u00d7REGU               +2.98  +2.57  +2.80  +2.33  +1.90   +1.54   +1.73    +1.35
NEGATION\u00d7MULTI, STRENGTHEN\u00d7REGU               +1.72  +1.91  +1.81  +1.35  -0.02   +0.23   +0.09    -0.10
", "text": "Number of sentences predicted per class label for augmented dataset when trained on only original CSci corpus. Notes. Counts correspond to accuracy scores reported in Rows 1 and 3 of Table 3. Orig R Orig F1 Orig Acc Orig Yu et al. (2019) 87.80 88.60 88.10 90.10 87.80 88.60 88.10", "html": null } } } }