{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:10:57.707299Z" }, "title": "Increasing Sentence-Level Comprehension Through Text Classification of Epistemic Functions", "authors": [ { "first": "Maria", "middle": [], "last": "Berger", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ruhr University Bochum", "location": {} }, "email": "maria.berger-a2l@rub.de" }, { "first": "Elizabeth", "middle": [ "J" ], "last": "Goldstein", "suffix": "", "affiliation": {}, "email": "lizgoldstein15@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word embeddings capture semantic meaning of individual words. How to bridge word-level linguistic knowledge with sentence-level language representation is an open problem. This paper examines whether sentence-level representations can be achieved by building a custom sentence database focusing on one aspect of a sentence's meaning. Our three separate semantic aspects are whether the sentence: (1) communicates a causal relationship, (2) indicates that two things are correlated with each other, and (3) expresses information or knowledge. The three classifiers provide epistemic information about a sentence's content.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Word embeddings capture semantic meaning of individual words. How to bridge word-level linguistic knowledge with sentence-level language representation is an open problem. This paper examines whether sentence-level representations can be achieved by building a custom sentence database focusing on one aspect of a sentence's meaning. Our three separate semantic aspects are whether the sentence: (1) communicates a causal relationship, (2) indicates that two things are correlated with each other, and (3) expresses information or knowledge. 
The three classifiers provide epistemic information about a sentence's content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In reading comprehension, the sum is greater than the parts. A strong reader combines the reader's prior knowledge, reasoning ability, and the text's substance to reason about the events, entities, and their relations across a full document (Ko\u010disk\u00fd et al., 2018) . In essence, reading comprehension requires the reader to develop high levels of abstraction. (Ko\u010disk\u00fd et al., 2018) Each classification task introduced in the paper, if mastered by a computer model, allows a model to comprehend a sentence's meaning at a greater level of abstraction than comparing word meaning similarity between sentences. With this new ability, the model should be able to master more complex downstream tasks than a model limited to word-level comprehension (Kim et al., 2019) .", "cite_spans": [ { "start": 241, "end": 263, "text": "(Ko\u010disk\u00fd et al., 2018)", "ref_id": "BIBREF14" }, { "start": 359, "end": 381, "text": "(Ko\u010disk\u00fd et al., 2018)", "ref_id": "BIBREF14" }, { "start": 740, "end": 758, "text": "(Kim et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Causality: Causal relationships are a key to critical reasoning (Magliano and Pillow, 2021; Asghar, 2016) . Causal relationship knowledge allows people to control their environment and make predictions about the future (Magliano and Pillow, 2021) . It links one's actions with their probable consequences. 
Text causality recognition is crucial for natural language information retrieval, event prediction, question answering, generating future scenarios, decision processing, medical text mining, and behavior prediction (Li et al., 2021) .", "cite_spans": [ { "start": 64, "end": 91, "text": "(Magliano and Pillow, 2021;", "ref_id": "BIBREF16" }, { "start": 92, "end": 105, "text": "Asghar, 2016)", "ref_id": "BIBREF1" }, { "start": 219, "end": 246, "text": "(Magliano and Pillow, 2021)", "ref_id": "BIBREF16" }, { "start": 521, "end": 538, "text": "(Li et al., 2021)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Causal and Correlating Relations", "sec_num": "1.1" }, { "text": "For purposes of this paper, a sentence containing a causal relationship expresses a cause and effect relationship. Following (Li et al., 2021) , the expression can be explicit, such as \"Financial stress is one of the main causes of divorce.\" This paper also recognizes implicit cause and effect relationships, such as \"He could choose to go on a diet, but this would bring significant muscle loss.\" Correlation: Correlations are patterns where changes in one variable are associated with changes in a second variable. In other words, one variable's changes are statistically dependent on the other variable. For purposes of this paper, causal relationships are not considered correlations because we have developed a separate model for causal relationships. Examples of sentences containing correlations are \"Roosters always crow before the sun rises.\" and \"Chocolate sales were 30% higher in communities with a high number of Covid-cases\". Since there is no causal link, correlative relationship identification allows one to anticipate the future, but not to control one's surroundings. 
(Meehan, 1988) For example, prohibiting chocolate sales cannot reduce a community's Covid-19 rates.", "cite_spans": [ { "start": 125, "end": 142, "text": "(Li et al., 2021)", "ref_id": "BIBREF15" }, { "start": 1100, "end": 1114, "text": "(Meehan, 1988)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Causal and Correlating Relations", "sec_num": "1.1" }, { "text": "Knowledge: Humans are master pattern recognizers. Humans automatically, and mostly without conscious effort, turn experiences into simple patterns that they use to anticipate what will happen, to make things happen, and to choose among options. Spence (Spence, 2005; Meehan, 1988) writes that these simple patterns are generalized, meaning that they are assumed to transcend time. These patterns are what the authors define as knowledge. Information: Information provides knowledge's building blocks. It consists of specific instances (Spence, 2005; Meehan, 1988) of, say, descriptions of events, objects, and relations among things. (Spence, 2005; Meehan, 1988) Facts, when used to mean an actual occurrence at a specific time and place, fit within information's scope. Data, on the other hand, is a smaller unit contained within facts. Bouthillier and Shearer define information as data in context. They identify \"-10 degrees\" as data and \"it is -10 degrees outside\" as information. (Bouthillier and Shearer, 2002) John Dewey explains the concept of knowledge as follows: To run against a hard painful stone is not of itself, I should say, an act of knowing; but if running into a hard and painful thing is an outcome predicted after inspection of data and elaboration of a hypothesis, then the hardness and the painful bruise which define the thing as a stone also constitute it emphatically an object of knowledge. (Ratner et al., 1939, p. 932) In other words, data about past individual experiences with the pain caused by the skin's contact with particular stones at a particular instance is information. 
One can aggregate these individual data points into a hypothesis, which is referred to in this paper as knowledge.", "cite_spans": [ { "start": 250, "end": 264, "text": "(Spence, 2005;", "ref_id": "BIBREF28" }, { "start": 265, "end": 278, "text": "Meehan, 1988)", "ref_id": "BIBREF18" }, { "start": 527, "end": 541, "text": "(Spence, 2005;", "ref_id": "BIBREF28" }, { "start": 542, "end": 555, "text": "Meehan, 1988)", "ref_id": "BIBREF18" }, { "start": 626, "end": 640, "text": "(Spence, 2005;", "ref_id": "BIBREF28" }, { "start": 641, "end": 654, "text": "Meehan, 1988)", "ref_id": "BIBREF18" }, { "start": 976, "end": 1007, "text": "(Bouthillier and Shearer, 2002)", "ref_id": "BIBREF2" }, { "start": 1410, "end": 1439, "text": "(Ratner et al., 1939, p. 932)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge and Information", "sec_num": "1.2" }, { "text": "(Marco and Navarro, 1993) define epistemology as \"the study of the process of human knowledge, its logic, origins and basis\" and recognize that its study is essential to \"the design and implementation of better cognitive strategies for guiding the process of documentary analysis\". (Marco and Navarro, 1993, p. 126-132) The authors, following Meehan's applied, tactical approach to epistemology, define epistemic as describing the patterns used by humans to organize their experience, thereby allowing them to generate future expectations. (Meehan, 1988) Humans create patterns by aggregating specific experiences into general rules to address the future. These patterns include causation, correlation, and knowledge. Causal and correlative relationships derived from aggregating individual experiences allow humans to anticipate future events. Both causal and correlative relationships which are generalized from individual events fit within the scope of knowledge. Knowledge encompasses all generalized patterns, which humans use to create future expectations. 
Text classifications relating to epistemic distinctions apply universally. (Meehan, 1988) Thus, epistemic text classifications can be utilized by models across disciplines to solve problems presented by downstream tasks. (Meehan, 1988) introduces an epistemic framework for computational knowledge generation. Humans construct knowledge to serve as a tool to accomplish three basic purposes: anticipating the future, causing change, and choosing among options. (Meehan, 1988) Others refer to computational knowledge engineering as computational epistemology. (i Segura, 2009) This paper's classification tasks fit within (Meehan, 1988) 's epistemic framework.", "cite_spans": [ { "start": 11, "end": 25, "text": "Navarro, 1993)", "ref_id": "BIBREF17" }, { "start": 294, "end": 320, "text": "Navarro, 1993, p. 126-132)", "ref_id": null }, { "start": 540, "end": 554, "text": "(Meehan, 1988)", "ref_id": "BIBREF18" }, { "start": 1138, "end": 1152, "text": "(Meehan, 1988)", "ref_id": "BIBREF18" }, { "start": 1284, "end": 1298, "text": "(Meehan, 1988)", "ref_id": "BIBREF18" }, { "start": 1524, "end": 1538, "text": "(Meehan, 1988)", "ref_id": "BIBREF18" }, { "start": 1622, "end": 1638, "text": "(i Segura, 2009)", "ref_id": "BIBREF26" }, { "start": 1684, "end": 1698, "text": "(Meehan, 1988)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Epistemic Classification", "sec_num": "2.1" }, { "text": "Following (Petukhova and Bunt, 2011) , language can be divided into two main components: the communication's functions and its semantic content. While we have not found earlier research on classifying text based upon whether the text contains a correlative relationship or encodes knowledge, these tasks can be viewed as examining the sentence's function through an epistemic lens. 
Researchers have examined various algorithms' ability to detect communicative functions, such as identifying whether a speaker has posed a question (Zhang et al., 2015) or whether a sentence serves a certain rhetorical purpose in an academic paper (Iwatsuki et al., 2020). (Asghar, 2016) divides cause-effect extraction techniques into two main categories: (i) non-statistical, pattern-matching techniques (Khoo et al. (1998), Girju et al. (2002)) and (ii) machine learning techniques (Girju (2003), Girju et al. (2010), Sil et al. (2010), Rink et al. (2010), Zhao et al. (2016)). Each approach has its limitations. The non-statistical, rule-based technique cannot succeed across domains, produces significantly skewed precision and recall scores, and requires subject-matter experts to craft the pattern-recognition rules. In contrast, statistical and machine learning approaches require a significant amount of time for feature engineering through experimentation (Li et al., 2021) , and large, manually created, domain-specifically labeled data sets.", "cite_spans": [ { "start": 10, "end": 36, "text": "(Petukhova and Bunt, 2011)", "ref_id": "BIBREF21" }, { "start": 529, "end": 549, "text": "(Zhang et al., 2015)", "ref_id": "BIBREF29" }, { "start": 647, "end": 661, "text": "(Asghar, 2016)", "ref_id": "BIBREF1" }, { "start": 780, "end": 799, "text": "(Khoo et al. (1998)", "ref_id": "BIBREF12" }, { "start": 802, "end": 821, "text": "Girju et al. (2002)", "ref_id": "BIBREF10" }, { "start": 861, "end": 873, "text": "Girju (2003)", "ref_id": "BIBREF8" }, { "start": 876, "end": 895, "text": "Girju et al. (2010)", "ref_id": "BIBREF9" }, { "start": 898, "end": 915, "text": "Sil et al. (2010)", "ref_id": "BIBREF27" }, { "start": 918, "end": 936, "text": "Rink et al. (2010)", "ref_id": "BIBREF25" }, { "start": 939, "end": 958, "text": "(Zhao et al., 2016)", "ref_id": "BIBREF30" }, { "start": 1345, "end": 1362, "text": "(Li et al., 2021)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Comprehension Functions", "sec_num": "2.2" }, { "text": "Most research models approach causality extraction as a two-step classification problem requiring the model to first identify cause-effect pair candidates and then to remove those candidates that do not share a causal link. (Li et al., 2021) However, in recent work, (Li et al., 2021) represent causality extraction as a sequence labeling problem and utilize a BiLSTM-CRF model with Flair contextual embeddings (Akbik et al., 2018) to extract cause-effect pairs directly. Work by (Mirza and Tonelli, 2016) also fits into this group. The authors present CATENA, a sieve-based system for extracting temporal and causal relations. They evaluate their system on the TempEval-3 and TimeBank-Dense data and show that each of the sieves (the rule-based, the machine-learned, and the reasoning-based one) contributes to achieving state-of-the-art performance. An analysis of the interaction between the temporal and the causal components shows a tight connection between the temporal and the causal dimensions of texts.", "cite_spans": [ { "start": 224, "end": 241, "text": "(Li et al., 2021)", "ref_id": "BIBREF15" }, { "start": 267, "end": 284, "text": "(Li et al., 2021)", "ref_id": "BIBREF15" }, { "start": 467, "end": 488, "text": "(Akbik et al., 2018))", "ref_id": "BIBREF0" }, { "start": 497, "end": 522, "text": "(Mirza and Tonelli, 2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Causal Relation Extraction", "sec_num": "2.3" }, { "text": "Current annotated data sets are relatively limited. For cause and effect pair extraction, the largest data set is EventStoryLine (Caselli and Vossen, 2017) . 
(Zuo et al., 2021) It contains only 258 documents, 4,316 sentences, and 1,770 causal event pairs.", "cite_spans": [ { "start": 129, "end": 155, "text": "(Caselli and Vossen, 2017)", "ref_id": "BIBREF3" }, { "start": 158, "end": 176, "text": "(Zuo et al., 2021)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Limited Data Sets and Data Augmentation Technique Guidance", "sec_num": "2.4" }, { "text": "It remains an open question which data augmentation techniques are optimal. One part of our work is to experiment with enriching training data by generating noisy labels based on labeling functions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limited Data Sets and Data Augmentation Technique Guidance", "sec_num": "2.4" }, { "text": "3 Study Overview", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limited Data Sets and Data Augmentation Technique Guidance", "sec_num": "2.4" }, { "text": "In this paper, we examine whether machine and deep learning techniques can identify sentences which communicate causal or correlative relationships and distinguish between sentences that contain knowledge and information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Research Question", "sec_num": "3.1" }, { "text": "Our models do not seek to identify candidate pairs (e.g., prior candidate extraction). Pair selection can be difficult when a sentence contains a cause with multiple effects or implicit relationships. Instead, we use a gold-labeled data set and test a diverse set of classifiers, including a weakly supervised approach that enriches training data with noisy labels generated from labeling functions. For causality relationships, our technique has the potential to identify sentences that do not utilize traditional causal function words (e.g., \"because\") or impact verbs (e.g., \"to poison\"). 
An example of a sentence implicitly indicating a causal relationship without such words is, \"Other countries in Southeast Asia-Thailand in particular-wanted to take advantage of these expanding urban markets and jumped into the hot economic fray.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Research Question", "sec_num": "3.1" }, { "text": "Our contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contribution", "sec_num": "3.2" }, { "text": "1. a new, approximately 8,300-sentence data set (referred to as the Gold data set) derived from internet articles covering diverse subject matter, manually labeled for the following three categories: causation, correlation, and knowledge; 2. a new 500-sentence data set (referred to as WikiGold data) derived from Wiki News, manually labeled for the same three categories; 3. experiments assessing the ability of traditional classifiers and neural networks powered by a pre-trained language model to accomplish sentence-level classification for the three classification categories; and 4. experiments evaluating whether a well-known data augmentation approach (Snorkel, c.f., Sec. 6) can improve model training for discerning the three classification categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contribution", "sec_num": "3.2" }, { "text": "Our classification tasks are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Tasks", "sec_num": "3.3" }, { "text": "1. Does the sentence communicate a causal relationship or is none communicated? 2. Does the sentence communicate either a causal or correlative relationship or is neither communicated? 3. Does the sentence communicate a causal relationship or a correlative relationship? 4. 
Does the sentence contain information or knowledge?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Tasks", "sec_num": "3.3" }, { "text": "To run the experiments, we use established statistical algorithms and neural classifiers (c.f., Sec. 5). We also utilize Snorkel's weakly supervised data augmentation approach (c.f., Sec. 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Tasks", "sec_num": "3.3" }, { "text": "4 Text Data Used", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Tasks", "sec_num": "3.3" }, { "text": "To accomplish each of the classification tasks, we created a Gold data set by manually annotating articles from the internet (e.g., Reuters, The Guardian, etc.; see Appendix A for details) written for general readership on a wide range of topics, including artificial intelligence, books, finance, and Covid-19 (see Tab. 1). Fig.1 shows the topic-wise distribution of the Gold data by sentences, articles, and classification tasks. We can see that the topics are distributed similarly in all the tasks and that, for most of the topics, the class balance is comparable as well. The data set contains 8,327 labeled sentences. Each article's sentences were manually labeled for the three classification categories: causality, correlation, and knowledge. No pattern recognition rules were used to label the data. 
Thus, the labeling method captured a diverse range of sentences which met the labeling criteria.", "cite_spans": [], "ref_spans": [ { "start": 324, "end": 329, "text": "Fig.1", "ref_id": null } ], "eq_spans": [], "section": "Gold Data", "sec_num": "4.1" }, { "text": "We create task-specific balanced data sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Data", "sec_num": "4.1" }, { "text": "For the first two aspects, causality and correlation, sentences expressing that one such relationship existed between two things were labeled as positively fitting within the respective class. For example, \"Eating a cup of blueberries a day does not reduce your risk of cancer\" was labeled a causal sentence. The reason for this labeling rule is that sentences such as the foregoing one allow the reader to extract causation information from the sentence. If a sentence contained both a causal and a correlative relationship, the sentence was labeled as causal. One such sentence is, \"And a study we looked at in 2012 suggested people who owned cats had a higher risk of suicide, as their pets could make them vulnerable to a Toxoplasma gondii (T. gondii) infection.\" If a correlation related solely to time (i.e., a trend), it was not labeled as a correlation. An example of such a database sentence is, \"By the late 1990s, countries such as Indonesia and Brazil were increasing their commercial production by about 10 percent a year.\" We chose not to label trends because we sought to identify correlations that are time invariant. If a sentence contained both information and knowledge, it was labeled as fitting within the smaller class, knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Data Labeling Conventions", "sec_num": "4.2" }, { "text": "Snorkel is effective at increasing classification accuracy when a large, noisily labeled data set is combined with a small gold-labeled data set. 
As a big data set, we downloaded a recent version of the English Wikinews data from the web. 1 We filter out articles with titles that contain the prefixes \"Template:\", \"Portal:\", \"Wikinews:\", \"Category:\" or \"Comments:\". \"Template\" pages are empty placeholders. \"Portal\" pages provide specific information on the Wikinews portal and its usage. Articles with the \"Wikinews\" prefix often simply list short headlines of independent news, which are too short for our use cases. \"Category\" pages likewise indicate templates that allocate space for news articles with a certain function, such as future articles or single dates. Finally, \"Comments\" articles contain only legacy comments that have since been moved to a newly created comments namespace. We also filter out articles that start with a \"Redirect\" in their text bodies. From a total of 100,277 articles originally culled from Wikinews, 28,460 remained after the filtering-out process. Using NLTK's sentence parser, we derived 327,366 sentences from these articles. For our experiments we use a subset of this data holding 10,000 sentences. We refer to this data set as the \"Wiki\" data.", "cite_spans": [ { "start": 247, "end": 248, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Wikinews Big Data Set", "sec_num": "4.3" }, { "text": "From the Wiki data, we sampled 500 sentences that we annotated manually for causal and correlating relations. The task-specific, balanced data sets (Sec. 3.3) have sample sizes: 296 (causal versus none), 332 (causal-correlating versus neither), and 36 (causal versus correlating). We refer to this data set as the \"WikiGold\" data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikinews Gold Data Set", "sec_num": "4.4" }, { "text": "Data Pre-processing: We perform sentence pre-processing by removing punctuation and lowercasing the text. 
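The pre-processing step just described (punctuation removal and lowercasing, with stop words deliberately retained) can be sketched in plain Python. This is a minimal illustration under our own naming, not the authors' exact script:

```python
import re

def preprocess(sentence: str) -> str:
    # Strip punctuation and lowercase. Stop words are kept on purpose,
    # since cue words such as "because" often signal causal relationships.
    return re.sub(r"[^\w\s]", "", sentence).lower()

print(preprocess("Financial stress is one of the main causes of divorce."))
```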
We also enrich the text with part-of-speech (POS) tags using the NLTK library version 3.5. We did not remove stop words because such words include key function words such as \"because\" which often indicate causal relationships. 2 For the traditional classifiers, we compute the tf-idf matrix using the TfidfVectorizer of Python's scikit-learn module, and for the bag-of-words matrix, we use the CountVectorizer (c.f., Tab. 2 showing the classifiers used). [Tab. 1: topic-wise counts of articles, sentences, and per-task class sizes; flattened table content omitted.] BERT Model: We choose a BERT pre-trained model using the implementation for TensorFlow (bert-for-tf2). Specifically, we use the model known as \"BERT-base\". For the hyperparameters, we choose a maximum token length of 150, a batch size of 32, a learning rate of 1e-5, and a maximum of 5 epochs with a patience of 1. We utilize an Adam optimizer and a sparse categorical cross-entropy loss function. The BERT model's sentence representation vector is the hidden state of the first token (CLS) in the final BERT layer. (Devlin et al., 2018) Our model feeds this CLS token vector into a dropout layer (set to 0.8) to prevent overfitting. 
Then, the model utilizes a dense layer to output the probabilities for the binary classes.", "cite_spans": [ { "start": 332, "end": 333, "text": "2", "ref_id": null }, { "start": 1496, "end": 1517, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 546, "end": 961, "text": "4 390 19 3 89 122 86 animal 2 65 6 3 18 18 15 art protest 3 147 13 3 68 30 65 blm 3 483 25 18 115 142 97 books 6 251 7 4 54 91 121 china 1 176 21 8 74 49 66 covid 19 1398 82 82 398 424 316 creativity 8 617 41 61 232 171 171 culture 12 715 20 3 107 241 153 depression 10 724 57 77 319 164 242 economy 12 868 34 37 191 368", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Classifier Set Up", "sec_num": "5.1" }, { "text": "For the BERT model, there was a strong positive relationship between the training data size and accuracy. The BERT model was the most successful model for all classification tasks except correlation versus causation. This is not surprising since this task had significantly less training data than the other three tasks. The BERT model on average did 5.9% better than the next most successful model for the three classification tasks where there was a significant amount of training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "5.2" }, { "text": "On average, the BERT models had a wider dispersion between the precision and recall scores (9.3%) than the best simple models for each classification task (2.5%). There was an even split for both the BERT and simple models on whether they did better on accuracy or recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "5.2" }, { "text": "In examining the true positives generated by the BERT causal versus none model, we saw that four of the ten sentences BERT was most confident about (with 99.9% confidence level) all contained the word \"lead\". 
The remaining six sentences all contained verbs that indicate causation. For some sentences, BERT was confident but wrong when a word associated with causal relationships (such as \"worst\") was used without introducing any cause-effect pair, as in \"one of the worst behaviors sometimes exhibited by owl observers is feeding the bird\". BERT was most confident and correct in identifying sentences that were not causal when the sentence had a proper name in it. [Table 2: Results of the comparison experiments showing (p)recision, (r)ecall, (f1)-score, and (a)ccuracy; experiments run using (Bayes G)aussian, BERT, (Log)istic (Reg)ression, (Na\u00efve B)ayes and (R)andom (F)orest classifiers and models; classes are causal, (co)rrelating and none/neither, and knowledge and (info)rmation; for BERT, training data set sizes include validation data (80-20-split).]", "cite_spans": [ { "start": 1003, "end": 1016, "text": "(80-20-split)", "ref_id": null } ], "ref_spans": [ { "start": 629, "end": 636, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results & Discussion", "sec_num": "5.2" }, { "text": "For the cause-correlation versus neither task, BERT was most confident and correct when a verb strongly associated with causation or correlation appeared in the sentence. 4 The sentences which were correctly classified as neither causative nor correlative and for which the model had the highest confidence were ones which were either short or contained proper nouns.", "cite_spans": [ { "start": 171, "end": 172, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "5.2" }, { "text": "For the correlation versus causal task, we found that the three sentences that Naive Bayes, the most successful algorithm, most confidently labeled as causal (91% confidence level) all contained \"percent\" in them. 
5 30% of the top 10 sentences that the Na\u00efve Bayes algorithm correctly identified as causal, and for which Na\u00efve Bayes was most confident, contained function words associated with causation. 6 For the knowledge versus information task, the 20 sentences the BERT model was most confident and correct in classifying as knowledge mostly related to depression or other health issues, reflecting the database's subject matter. For the 20 sentences the model was most confident and correct in identifying as informational, the sentences contained either a pronoun or a proper noun, and a majority contained both of these parts of speech. 7", "cite_spans": [ { "start": 403, "end": 404, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "5.2" }, { "text": "To determine the effectiveness of data augmentation, we now compare neural classification through supervised learning to a mixed approach of weak supervision and neural classification. The experiments are performed on our three main tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Classification", "sec_num": "6" }, { "text": "We use the recently released programming library Snorkel (Ratner et al., 2020) 8 , which was developed to make better use of handcrafted rules when building training data. It works by detecting unidentified correlations and other dependencies among these rules (called \"labeling functions\"). Snorkel's learning approach is twofold: First, the LabelModel learns the parameters for accuracy and correlation structure based on the weak supervision represented by the labeling functions. The LabelModel generates noisy labels for the training data by weighting the labeling functions. Then, a final classification model (an LSTM in this work) is applied, which generalizes the information learned from the weighted labeling functions by using the noisily labeled data as training data (Ratner et al., 2020) . 
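The two-stage idea described here (labeling functions vote on each sentence, the votes are combined into noisy training labels, and a discriminative model is then trained on those labels) can be sketched as follows. This is a simplified stand-in that combines votes by majority; Snorkel's actual LabelModel learns function accuracies and correlations instead, and the function names and cue lists below are our own illustrations:

```python
ABSTAIN, NONE, CAUSAL = -1, 0, 1

def lf_causal_cue(sentence: str) -> int:
    # Vote CAUSAL if an (illustrative) causal cue word appears; otherwise abstain.
    cues = ("because", "cause", "lead", "result")
    return CAUSAL if any(c in sentence.lower() for c in cues) else ABSTAIN

def lf_short_sentence(sentence: str) -> int:
    # Heuristic: very short sentences rarely express a causal relation.
    return NONE if len(sentence.split()) < 5 else ABSTAIN

def noisy_label(sentence: str, lfs) -> int:
    votes = [lf(sentence) for lf in lfs]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN  # no function fired; the sentence stays unlabeled
    return max(set(votes), key=votes.count)  # majority vote over firing functions
```

The noisy labels produced this way would then serve as training data for the final classifier, in place of the weighted labels a real LabelModel would emit.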
5 An example sentence is, \"four in ten hispanics are members of the working class compared to 28 percent of blacks 25 percent of whites but just 16 percent of asians\"", "cite_spans": [ { "start": 57, "end": 78, "text": "(Ratner et al., 2020)", "ref_id": "BIBREF22" }, { "start": 776, "end": 797, "text": "(Ratner et al., 2020)", "ref_id": "BIBREF22" }, { "start": 800, "end": 801, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Snorkel Library", "sec_num": "6.1" }, { "text": "6 These function words were: because, by and so. 7 These sentences contained a pronoun and referred to a specific individual's experience, and thus contained information. One of the informational sentences the BERT model correctly and most confidently labeled was, \"when she was 17 in 2013 mary climbed out of her bedroom window and ran across a field\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Snorkel Library", "sec_num": "6.1" }, { "text": "We use the Wiki data (Sec. 4.3) for training Snorkel's LabelModel 9 and the Gold data (comprising our Gold (Sec. 4.1) and WikiGold (Sec. 4.4) data) for testing it. The neural model then uses the predicted labels from the LabelModel as training data, and, again, the Gold data for testing. Note that we combine 80% of our Gold/WikiGold data with the noisily labeled data to train the neural classifier model and use 20% of the Gold/WikiGold data for testing. Fig. illustrates this procedure.", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 468, "text": "Fig. illustrates", "ref_id": null } ], "eq_spans": [], "section": "Train-Test Data Splits", "sec_num": "6.2" }, { "text": "First, we run a baseline experiment using GloVe embeddings (Pennington et al., 2014) . 10 Our LSTM uses GloVe embeddings from the 100-dimensional file as features for the neural network. We choose a vocabulary length of 1,000, an output dimension of 100, and an input length of 100. 
Following the Embedding layer, we add an LSTM layer, followed by a 256-dimension Dense layer with ReLU activation and 0.5 dropout. For the output, a final 1-dimension Dense layer with sigmoid activation is added. When fitting the model, we use a batch size of 128, 10 epochs, and a min delta of 1e-4. 11
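The architecture above can be sketched in Keras as follows. This is a hypothetical reconstruction, not the authors' code: layer sizes and fitting hyperparameters follow the text, while the LSTM width, optimizer, loss, and the GloVe weight loading (omitted here) are assumptions.

```python
# Sketch of the baseline LSTM classifier described in the text.
# Assumptions: LSTM width, optimizer, and loss; the paper initializes
# the Embedding layer from 100-dimensional GloVe vectors (omitted here).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout

VOCAB_SIZE = 1000   # vocabulary length (as in the text)
EMBED_DIM = 100     # output dimension, matching the 100-d GloVe file
INPUT_LEN = 100     # padded input sequence length

model = Sequential([
    Embedding(VOCAB_SIZE, EMBED_DIM),   # in the paper, GloVe-initialized
    LSTM(100),                          # width is an assumption
    Dense(256, activation="relu"),
    Dropout(0.5),
    Dense(1, activation="sigmoid"),     # binary decision for the given task
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Fitting (sketch): batch size 128, 10 epochs, min delta 1e-4, e.g.
# model.fit(x_train, y_train, batch_size=128, epochs=10,
#           callbacks=[keras.callbacks.EarlyStopping(min_delta=1e-4)])
```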
After matrix completion, the resulting conditional probabilities of the functions serve as parameters to re-weight and combine the label output.", "cite_spans": [ { "start": 581, "end": 602, "text": "(Ratner et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Train on Wiki Data using Function Rules", "sec_num": "6.4" }, { "text": "10 Using GloVe gives us a comprehensive view on the problem. Being more straightforward, it better applies for testing against weak supervision and keeps computational costs low.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Train on Wiki Data using Function Rules", "sec_num": "6.4" }, { "text": "11 Presenting a comparable problem, we follow https://www.kaggle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Train on Wiki Data using Function Rules", "sec_num": "6.4" }, { "text": "This approach is comparable with work by (Girju, 2003) who uses specific verbs placed between two noun phrases to indicate causal relations. Depending on the task, we label a sentence 0 when none of the causal or causal and correlating words appear in the sentences. For causal versus correlating, we label -1 when neither appear.", "cite_spans": [ { "start": 41, "end": 54, "text": "(Girju, 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "com/kredy10/simple-lstm-for-text-classification", "sec_num": null }, { "text": "We also add labeling functions generated from POS tags. 12 Tab. 5 lists the POS tags used. 
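The overall data flow of keyword labeling functions can be illustrated with a simplified standard-library sketch. This is not Snorkel's API: the real LabelModel learns per-function accuracies and correlations rather than taking an unweighted vote, and the keyword sets below are small subsets of the lists given in Sec. 6.4.

```python
# Simplified sketch of keyword labeling functions plus an unweighted
# vote-based combination. Snorkel's LabelModel instead *learns* weights
# for each labeling function; this only illustrates the data flow.
from collections import Counter

CAUSAL, CORRELATING, ABSTAIN = 0, 1, -1

# Subsets of the causal / correlating keyword lists from Sec. 6.4.
CAUSAL_KEYWORDS = {"because", "cause", "due", "effect",
                   "lead", "result", "trigger"}
CORR_KEYWORDS = {"associate", "correlate", "link", "predict", "tend"}

def lf_causal_keywords(sentence: str) -> int:
    tokens = sentence.lower().split()
    return CAUSAL if any(t in CAUSAL_KEYWORDS for t in tokens) else ABSTAIN

def lf_corr_keywords(sentence: str) -> int:
    tokens = sentence.lower().split()
    return CORRELATING if any(t in CORR_KEYWORDS for t in tokens) else ABSTAIN

def combine(labels):
    """Unweighted majority vote over non-abstaining labeling functions."""
    votes = [l for l in labels if l != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

sentence = "smoking can cause lung damage"
label = combine([lf(sentence) for lf in (lf_causal_keywords, lf_corr_keywords)])
# label == CAUSAL here, since "cause" is in the causal keyword list
```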
Specifically, the rules applied are: (1) if any of the following POS tags appears in the sentence's POS tag sequence: IN, WDT, RB, MD, then the sentence is assigned to the causal category; and (2) if the CC tag is found in the sentence, then the sentence is assigned to the correlating category.
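The two rules above are straightforward to express over a sentence's POS tag sequence; a minimal sketch (tag names follow the Penn Treebank tagset, e.g. as produced by NLTK's pos_tag; the example sentence and its tags are illustrative):

```python
# The two POS-tag labeling rules described above, applied to a POS tag
# sequence (Penn Treebank tags). A sentence may satisfy both rules, in
# which case both labeling functions simply fire.
CAUSAL_TAGS = {"IN", "WDT", "RB", "MD"}   # rule (1)

def fits_causal(tags):
    """Rule (1): any of IN, WDT, RB, MD occurs in the tag sequence."""
    return any(tag in CAUSAL_TAGS for tag in tags)

def fits_correlating(tags):
    """Rule (2): a coordinating conjunction (CC) occurs in the sentence."""
    return "CC" in tags

# e.g. "it could worsen depression" -> [PRP, MD, VB, NN]
assert fits_causal(["PRP", "MD", "VB", "NN"])       # MD triggers rule (1)
assert not fits_correlating(["PRP", "MD", "VB", "NN"])
```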
This ability might help the model perform well at finding true positives (causal and correlating sentences). However, the many false positives also show that the rules (keywords and POS tags) are not specific enough, since they also match \"neither\" sentences. The causal versus none task performs comparably to the baseline, showing a slightly lower recall and f1-score.
These samples also tend to be relatively short, which is often more indicative of a causal sentence than of a correlating one.
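For reference, the Fisher Score test of Sec. 6.7 (Eq. 1) can be computed for such boolean POS-tag features in a few lines. This is a minimal sketch; the toy indicator values below are illustrative, not the paper's data.

```python
# Fisher Score (Eq. 1) for a boolean feature: mu_i is the feature's
# appearance rate in class i, mu the overall rate, sigma_i^2 the
# per-class variance. Two classes in our case, but written generically.
def fisher_score(feature_by_class):
    """feature_by_class: one list of 0/1 feature indicators per class."""
    all_vals = [v for cls in feature_by_class for v in cls]
    mu = sum(all_vals) / len(all_vals)
    num = den = 0.0
    for cls in feature_by_class:
        n_i = len(cls)
        mu_i = sum(cls) / n_i
        var_i = sum((v - mu_i) ** 2 for v in cls) / n_i
        num += n_i * (mu_i - mu) ** 2
        den += n_i * var_i
    # A feature constant within every class separates them perfectly.
    return num / den if den else float("inf")

# Toy example: feature present in all 4 causal samples and in half of
# the 4 correlating samples.
fs = fisher_score([[1, 1, 1, 1], [1, 0, 1, 0]])
# fs == 0.5 for this toy example
```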
The causal versus none task performs comparably to the baseline, the causal versus correlating task reaches a higher f1-score at lower accuracy, and the causal-correlating versus neither task reaches a 15% higher f1-score while its accuracy is on par with the baseline. The overall aim of improving classification accuracy using a large corpus of noisily labeled data was only partly achieved for our problem. Note that even though we ran some basic clean-up steps on our Wiki data, we are aware that special characters, prefixes, or mis-encodings of them can corrupt a significant portion of the data set's sentences, thereby introducing noise.
As gold data is generally the biggest bottleneck NLP practitioners face, a systematic approach to Snorkel could significantly reduce the time required to build future NLP models.
The Gold data was POS-tagged using NLTK 3.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "https://www.medicalnewstoday.com/articles/8933.php 1/6/2020 EJG causes of depression Google search Could depression be the result of a brain infection?https://www.nhs.uk/news/mental-health/could-depression-be-the-result-of-a-brain-infection/ 2/22/20 EJG causes of depression Google search Cannabis Lightens Mood, but May Worsen Depression Over Time https://www.psychcongress.com/article/cannabis-lightens-mood-may-worsen-depression-over-time 2/22/20 EJG causes of depression Google search Online mindfulness-based cognitive therapy to improve lingering depression https://www.medicalnewstoday.com/articles/online-mindfulness-based-cognitive-therapy-to-improve-lingering-depression 2/22/20 EJG causes of depression Google search The Link Between Migraine, Depression and Anxiety https://americanmigrainefoundation.org/resource-library/link-between-migraine-depression-anxiety/ 2/22/20 EJG causes of depression Google search Can Artificial Intelligence Help With Depression?https://www.verywellmind.com/can-artificial-intelligence-help-depression-4158330 2/22/20 EJG causes of depression Google search How Artificial Intelligence Can Help Pick the Best Depression Treatments for You https://time.com/5786081/depression-medication-treatment-artificial-intelligence/ 2/22/20 EJG causes of depression Google search Lifelong poverty increases heart disease risks https://www.reuters.com/article/us-lifelong-poverty/lifelong-poverty-increases-heart-disease-risks-idUSTRE52Q3S520090327 3/21/2020 EJG poverty & heart disease Google search Higher heart disease risk persists for low-income populations https://www.modernhealthcare.com/article/20170607/NEWS/170609924/higher-heart-disease-risk-persists-for-low-income-populations 3/21/2020 EJG poverty & heart disease Google search Understanding the connection between poverty, childhood trauma and 
heart disease https://www.heart.org/en/news/2019/08/27/understanding-the-connection-between-poverty-childhood-trauma-and-heart-disease 3/21/2020 EJG poverty & heart disease Google search Shining a light on poverty and heart disease https://blog.providence.org/archive/shining-a-light-on-poverty-and-heart-disease 3/22/2020 EJG poverty & heart disease Google search Poverty is the main predictor of heart disease, says Canadian report https://www.thelancet.com/pdfs/journals/lancet/PIIS0140673602086117.pdf 3/22/2020 EJG poverty & heart disease Google search Lower socioeconomic status linked with heart disease despite improvements in other risk factors https://health.ucdavis.edu/health-news/newsroom/lower-socioeconomic-status-linked-with-heart-disease-despite-improvements-in-otherrisk-factors/2011/08 https://getpocket.com/explore/item/steve-jobs-said-1-thing-separates-successful-people-from-everyone-else-and-will-make-all-the 7/4/2020 EJG success EJG 7/4/20 pocket feed While Statues Sleep https://www.lrb.co.uk/the-paper/v42/n12/thomas-laqueur/while-statues-sleep 7/4/2020 EJG black lives matter EJG hand-selected China cuts Uighur births with IUDs, abortion, sterilization https://apnews.com/269b3de1af34e17c1941a514f78d764c 7/4/2020 EJG china EJG Nuzzel Feed 6/30/20 A Warning from the Chickens of the World https://thewalrus.ca/a-warning-from-the-chickens-of-the-world/ 7/14/2020 EJG coronavirus EJG pocket feed 7/11/20 https://getpocket.com/explore/item/could-consciousness-all-come-down-to-the-way-things-vibrate 7/25/2020 EJG science EJG pocket feed 7/23/20 How You Feel Depends on Where You Are https://www.wsj.com/articles/how-you-feel-depends-on-where-you-are-11594311622? 
shareToken=st3e83d8abc09841fbbd5505af8ded165c&mod=pckt187 7/25/2020 EJG psychology EJG pocket feed 7/18/20 The Message Behind Gold's Rally: The World Economy Is in Trouble https://www.bloomberg.com/news/articles/2020-07-25/the-message-behind-gold-s-rally-the-world-economy-is-in-trouble 7/26/2020 EJG economy EJG pocket feed 7/26/20 SuperGLUE: The Slippery Benchmark with no Language Understanding https://medium.com/pat-inc/super-glue-the-slippery-benchmark-with-no-language-understanding-eb92680bfb14 8/3/2020 EJG AI EJG hand-selected Stevie Smith: \"Not Waving but Drowning https://www.poetryfoundation.org/poems/46479/not-waving-but-drowning 8/10/2020 EJG poetry EJG hand-selected Manhattan Apartment Rents Plunge 10% in Pandemic-Fueled Exodus https://www.bloomberg.com/news/articles/2020-08-13/manhattan-apartment-rents-plunge-10-in-pandemic-fueled-exodus#:~: text=Manhattan%20apartment%20rents%20plunged%20last,that's%20sparked%20an%20urban%20exodus 8/13/2010 EJG covid EJG hand-selected Obsession and Desire in an Ancient Assyrian Library https://lithub.com/obsession-and-desire-in-an-ancient-assyrian-library/ 9/2/2020 EJG history EJG hand-selected Pop diva Dana International turns literary agent https://www.israelhayom.com/2020/09/01/pop-diva-dana-international-turns-literary-agent/ 9/2/2020 EJG literature EJG hand-selected Helen Cullen on Jane Austen, Michael Cunningham, and Donna Tartt https://bookmarks.reviews/helen-cullen-on-jane-austen-michael-cunningham-and-donna-tartt/ 9/3/2020 EJG literature EJG hand-selected How to Read Aloud https://www.lrb.co.uk/the-paper/v42/n17/irina-dumitrescu/how-to-read-aloud 9/3/2020 EJG literature EJG hand-selected Maestro Bogomolny https://medium.com/incerto/maestro-bogomolny-8498f08c0f0c 11/22/2020 EJG math EJG hand-selected No Revenue Is No Problem in the 2020 Stock Market https://www.bloomberg.com/opinion/articles/2020-09-18/spac-deals-no-revenue-is-no-problem-in-the-2020-stock-market 11/22/2020 EJG economy EJG hand-selected Evolution 
Made Really Smart People Long to Be Loners https://www.inverse.com/article/24819-intelligent-people-friendships-satisfaction-savanna-theory 11/22/2020 EJG economy EJG hand-selected Reading Comprehension -Elephant Ivory Trade https://www.myenglishpages.com/site_php_files/reading-elephants-ivory-trade.php 9/19/2020 EJG psychology EJG hand-selected People's words and actions can actually shape your brain -a neuroscientist explains how https://ideas.ted.com/peoples-words-and-actions-can-actually-shape-your-brain-a-neuroscientist-explains-how/?utm_source=pocket&utm_medium=email&utm_campaign=pockethit 11/22/2020 EJG psychology EJG hand-selected Heirloom corn harvests produce restaurants' grits, tortillas, and Pennsylvania polenta https://www.inquirer.com/food/craig-laban/philadelphia-restaurants-corn-harvest-lancaster-farms-20201106.html 11/8/2020 EJG farming EJG hand-selected Motorists warned not to let moose lick their cars https://www.msn.com/en-us/news/technology/motorists-warned-not-to-let-moose-lick-their-cars/ar-BB1bfLI1?ocid=msedgdhp 11/22/2020 EJG animal EJG hand-selected UFC fighter accidentally gets Beyonc\u00e9 'Halo' as walk-up music, hilariously starts singing along https://www.msn.com/en-us/sports/mma-ufc/ufc-fighter-accidentally-gets-beyonc%C3%A9-halo-as-walk-up-music-hilariously-startssinging-along/ar-BB1bfOPH?ocid=msedgdhp 11/22/2020 EJG sports EJG hand-selected Five Reasons Vinyl Is Making a Comeback https://hub.yamaha.com/five-reasons-vinyl-is-making-a-comeback/ 11/28/2020 EJG music EJG hand-selected GIFTS THEY MIGHT ACTUALLY WANT https://nymag.com/strategist/article/best-cozy-gifts.html 11/28/2020 EJG christmas EJG hand-selected How to Make Baked Potatoes Fluffy and Crispy https://getpocket.com/explore/item/the-secret-to-better-baked-potatoes-cook-them-like-the-british-do 11/29/2020 EJG food EJG pocket feed 11/28/20 Raccoon Was Once a Thanksgiving Feast Fit for a President 
https://getpocket.com/explore/item/raccoon-was-once-a-thanksgiving-feast-fit-for-a-president 11/29/2020 EJG food EJG pocket feed 11/26/20 Neutrinos Lead to Unexpected Discovery in Basic Math https://www.quantamagazine.org/neutrinos-lead-to-unexpected-discovery-in-basic-math-20191113/ 11/29/2020 EJG math EJG hand-selected The Ten Best Books About Food of 2020 https://www.smithsonianmag.com/arts-culture/ten-best-books-about-food-2020-180976406/? utm_source=Sailthru&utm_medium=email&utm_campaign=Today%20in%20Books% 20112820&utm_content=Final&utm_term=BookRiot_TodayInBooks_DormantSuppress11/30/2020 EJG food EJG hand-selected L.A.'s dance crisis: Studios fight to survive the pandemic https://www.latimes.com/entertainment-arts/story/2020-09-02/la-dance-studio-classes-covid-closures 12/2/2020 EJG covid EJG hand-selected How to Finally Organize Your Kitchen Cabinets-For Good This Time https://getpocket.com/explore/item/how-to-finally-organize-your-kitchen-cabinets-for-good-this-time 12/2/2020 EJG home EJG pocket feed 12/1/20 Archaeologists uncover ancient street food shop in Pompeii https://www.reuters.com/article/italy-pompeii-idUSKBN2900D3 12/27/2020 EJG history EJG Nuzzel feed 12/27/20 Some notes on funniness https://www.newyorker.com/magazine/2020/12/28/some-notes-on-funniness?utm_source=pocket&utm_medium=email&utm_campaign=pockethits 12/27/2020 EJG entertainment EJG pocket feed 12/26/20 Gitanjali Rao Is Time Magazine's First \"Kid Of The Year\" https://www.dogonews.com/2020/12/14/gitanjali-rao-is-time-magazines-first-kid-of-the-year 12/27/2020 EJG science EJG hand-selected ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Contextual string embeddings for sequence labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Roland", "middle": [], "last": 
"Vollgraf", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th international conference on computational linguistics", "volume": "", "issue": "", "pages": "1638--1649", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence la- beling. In Proceedings of the 27th international con- ference on computational linguistics, pages 1638- 1649.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automatic extraction of causal relations from natural language texts: a comprehensive survey", "authors": [ { "first": "Nabiha", "middle": [], "last": "Asghar", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.07895" ] }, "num": null, "urls": [], "raw_text": "Nabiha Asghar. 2016. Automatic extraction of causal relations from natural language texts: a comprehen- sive survey. arXiv preprint arXiv:1605.07895.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Understanding knowledge management and information management: the need for an empirical perspective", "authors": [ { "first": "France", "middle": [], "last": "Bouthillier", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Shearer", "suffix": "" } ], "year": 2002, "venue": "Information research", "volume": "8", "issue": "1", "pages": "8--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "France Bouthillier and Kathleen Shearer. 2002. Under- standing knowledge management and information management: the need for an empirical perspective. 
Information research, 8(1):8-1.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The event storyline corpus: A new benchmark for causal and temporal relation extraction", "authors": [ { "first": "Tommaso", "middle": [], "last": "Caselli", "suffix": "" }, { "first": "Piek", "middle": [], "last": "Vossen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Events and Stories in the News Workshop", "volume": "", "issue": "", "pages": "77--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tommaso Caselli and Piek Vossen. 2017. The event storyline corpus: A new benchmark for causal and temporal relation extraction. In Proceedings of the Events and Stories in the News Workshop, pages 77- 86.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Discriminative frequent pattern analysis for effective classification", "authors": [ { "first": "Hong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Xifeng", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Chih-Wei", "middle": [], "last": "Hsu", "suffix": "" } ], "year": 2007, "venue": "2007 IEEE 23rd international conference on data engineering", "volume": "", "issue": "", "pages": "716--725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hong Cheng, Xifeng Yan, Jiawei Han, and Chih-Wei Hsu. 2007. Discriminative frequent pattern analysis for effective classification. In 2007 IEEE 23rd in- ternational conference on data engineering, pages 716-725. 
IEEE.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Pattern classification and scene analysis", "authors": [ { "first": "O", "middle": [], "last": "Richard", "suffix": "" }, { "first": "Peter", "middle": [ "E" ], "last": "Duda", "suffix": "" }, { "first": "", "middle": [], "last": "Hart", "suffix": "" } ], "year": 1973, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard O Duda, Peter E Hart, et al. 1973. Pattern classification and scene analysis, volume 3. Wiley New York.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Two new properties of mathematical likelihood", "authors": [ { "first": "Ronald Aylmer", "middle": [], "last": "Fisher", "suffix": "" } ], "year": 1934, "venue": "Proceedings of the Royal Society of London. Series A", "volume": "144", "issue": "852", "pages": "285--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronald Aylmer Fisher. 1934. Two new properties of mathematical likelihood. Proceedings of the Royal Society of London. 
Series A, Containing Pa- pers of a Mathematical and Physical Character, 144(852):285-307.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic detection of causal relations for question answering", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL 2003 workshop on Multilingual summarization and question answering", "volume": "", "issue": "", "pages": "76--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Girju. 2003. Automatic detection of causal re- lations for question answering. In Proceedings of the ACL 2003 workshop on Multilingual summariza- tion and question answering, pages 76-83.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A knowledgerich approach to identifying semantic relations between nominals. Information processing & management", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" }, { "first": "Brandon", "middle": [], "last": "Beamer", "suffix": "" }, { "first": "Alla", "middle": [], "last": "Rozovskaya", "suffix": "" } ], "year": 2010, "venue": "", "volume": "46", "issue": "", "pages": "589--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Girju, Brandon Beamer, Alla Rozovskaya, An- drew Fister, and Suma Bhat. 2010. A knowledge- rich approach to identifying semantic relations be- tween nominals. Information processing & manage- ment, 46(5):589-610.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Text mining for causal relations", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" }, { "first": "Dan", "middle": [ "I" ], "last": "Moldovan", "suffix": "" } ], "year": 2002, "venue": "FLAIRS conference", "volume": "", "issue": "", "pages": "360--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Girju, Dan I Moldovan, et al. 2002. Text mining for causal relations. 
In FLAIRS conference, pages 360-364.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An evaluation dataset for identifying communicative functions of sentences in english scholarly papers", "authors": [ { "first": "Kenichi", "middle": [], "last": "Iwatsuki", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Boudin", "suffix": "" }, { "first": "Akiko", "middle": [], "last": "Aizawa", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "1712--1720", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenichi Iwatsuki, Florian Boudin, and Akiko Aizawa. 2020. An evaluation dataset for identifying com- municative functions of sentences in english schol- arly papers. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1712- 1720.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automatic extraction of cause-effect information from newspaper text without knowledge-based inferencing", "authors": [ { "first": "S", "middle": [ "G" ], "last": "Christopher", "suffix": "" }, { "first": "Jaklin", "middle": [], "last": "Khoo", "suffix": "" }, { "first": "", "middle": [], "last": "Kornfilt", "suffix": "" }, { "first": "N", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Sung", "middle": [ "Hyon" ], "last": "Oddy", "suffix": "" }, { "first": "", "middle": [], "last": "Myaeng", "suffix": "" } ], "year": 1998, "venue": "Literary and Linguistic Computing", "volume": "13", "issue": "4", "pages": "177--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher SG Khoo, Jaklin Kornfilt, Robert N Oddy, and Sung Hyon Myaeng. 1998. Automatic extrac- tion of cause-effect information from newspaper text without knowledge-based inferencing. 
Literary and Linguistic Computing, 13(4):177-186.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Categorical metadata representation for customized text classification", "authors": [ { "first": "Jihyeok", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Reinald", "middle": [], "last": "Kim Amplayo", "suffix": "" }, { "first": "Kyungjae", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "201--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jihyeok Kim, Reinald Kim Amplayo, Kyungjae Lee, Sua Sung, Minji Seo, and Seung-won Hwang. 2019. Categorical metadata representation for customized text classification. Transactions of the Association for Computational Linguistics, 7:201-215.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The narrativeqa reading comprehension challenge", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Ko\u010disk\u1ef3", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Schwarz", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "G\u00e1bor", "middle": [], "last": "Melis", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "317--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Ko\u010disk\u1ef3, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. 
Transactions of the Association for Computational Linguistics, 6:317-328.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Causality extraction based on self-attentive BiLSTM-CRF with transferred embeddings", "authors": [ { "first": "Zhaoning", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaotian", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Jiangtao", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2021, "venue": "Neurocomputing", "volume": "423", "issue": "", "pages": "207--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhaoning Li, Qi Li, Xiaotian Zou, and Jiangtao Ren. 2021. Causality extraction based on self-attentive BiLSTM-CRF with transferred embeddings. Neurocomputing, 423:207-219.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning - causal reasoning - events, event, students, and effect", "authors": [ { "first": "Joseph", "middle": [ "P" ], "last": "Magliano", "suffix": "" }, { "first": "Bradford", "middle": [ "H" ], "last": "Pillow", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph P. Magliano and Bradford H. Pillow. 2021. Learning - causal reasoning - events, event, students, and effect. https://education.stateuniversity.com/pages/2163/Learning-CAUSAL-REASONING.html#ixzz6vc5QYKKa. 
Accessed: May 2021.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "On some contributions of the cognitive sciences and epistemology to a theory of classification", "authors": [ { "first": "Francisco", "middle": [ "Javier" ], "last": "Garcia Marco", "suffix": "" }, { "first": "Miguel", "middle": [ "Angel" ], "last": "Esteban Navarro", "suffix": "" } ], "year": 1993, "venue": "KO KNOWLEDGE ORGANIZATION", "volume": "20", "issue": "3", "pages": "126--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francisco Javier Garcia Marco and Miguel Angel Esteban Navarro. 1993. On some contributions of the cognitive sciences and epistemology to a theory of classification. KO KNOWLEDGE ORGANIZATION, 20(3):126-132.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Thinking Game: A Guide to Effective Study", "authors": [ { "first": "Eugene", "middle": [ "J" ], "last": "Meehan", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene J. Meehan. 1988. The Thinking Game: A Guide to Effective Study. Chatham House Publishers, Inc.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Catena: Causal and temporal relation extraction from natural language texts", "authors": [ { "first": "Paramita", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Tonelli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "64--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paramita Mirza and Sara Tonelli. 2016. Catena: Causal and temporal relation extraction from natural language texts. 
In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 64-75.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Incremental dialogue act understanding", "authors": [ { "first": "Volha", "middle": [], "last": "Petukhova", "suffix": "" }, { "first": "Harry", "middle": [], "last": "Bunt", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Ninth International Conference on Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Volha Petukhova and Harry Bunt. 2011. Incremental dialogue act understanding. 
In Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Snorkel: Rapid training data creation with weak supervision", "authors": [ { "first": "Alexander", "middle": [], "last": "Ratner", "suffix": "" }, { "first": "Stephen", "middle": [ "H" ], "last": "Bach", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Ehrenberg", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Fries", "suffix": "" }, { "first": "Sen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2020, "venue": "The VLDB Journal", "volume": "29", "issue": "", "pages": "709--730", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R\u00e9. 2020. Snorkel: Rapid training data creation with weak supervision. The VLDB Journal, 29:709-730.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Training complex models with multi-task weak supervision", "authors": [ { "first": "Alexander", "middle": [], "last": "Ratner", "suffix": "" }, { "first": "Braden", "middle": [], "last": "Hancock", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Dunnmon", "suffix": "" }, { "first": "Frederic", "middle": [], "last": "Sala", "suffix": "" }, { "first": "Shreyash", "middle": [], "last": "Pandey", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "4763--4771", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, and Christopher R\u00e9. 2019. Training complex models with multi-task weak supervision. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4763-4771.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Intelligence in the modern world. John Dewey's philosophy", "authors": [ { "first": "Joseph", "middle": [], "last": "Ratner", "suffix": "" } ], "year": 1939, "venue": "Journal of Philosophy", "volume": "36", "issue": "21", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Ratner et al. 1939. Intelligence in the modern world. John Dewey's philosophy. Journal of Philosophy, 36(21).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning textual graph patterns to detect causal event relations", "authors": [ { "first": "Bryan", "middle": [], "last": "Rink", "suffix": "" }, { "first": "Cosmin", "middle": [ "Adrian" ], "last": "Bejan", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2010, "venue": "Twenty-Third International FLAIRS Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan Rink, Cosmin Adrian Bejan, and Sanda Harabagiu. 2010. Learning textual graph patterns to detect causal event relations. In Twenty-Third International FLAIRS Conference.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Computational epistemology and e-science: A new way of thinking. Minds and Machines", "authors": [ { "first": "Jordi", "middle": [], "last": "Vallverd\u00fa I Segura", "suffix": "" } ], "year": 2009, "venue": "", "volume": "19", "issue": "", "pages": "557--567", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordi Vallverd\u00fa i Segura. 2009. Computational epistemology and e-science: A new way of thinking. 
Minds and Machines, 19(4):557-567.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Extracting action and event semantics from web text", "authors": [ { "first": "Avirup", "middle": [], "last": "Sil", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2010, "venue": "AAAI Fall Symposium: Commonsense Knowledge", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avirup Sil, Fei Huang, and Alexander Yates. 2010. Extracting action and event semantics from web text. In AAAI Fall Symposium: Commonsense Knowledge. Citeseer.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Knowledge versus information. Unpublished, on file with authors", "authors": [ { "first": "Larry", "middle": [], "last": "Spence", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Larry Spence. 2005. Knowledge versus information. Unpublished, on file with authors.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Multi-factor duplicate question detection in stack overflow", "authors": [ { "first": "Yun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "David", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Jian-Ling", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2015, "venue": "Journal of Computer Science and Technology", "volume": "30", "issue": "5", "pages": "981--997", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yun Zhang, David Lo, Xin Xia, and Jian-Ling Sun. 2015. Multi-factor duplicate question detection in stack overflow. 
Journal of Computer Science and Technology, 30(5):981-997.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Event causality extraction based on connectives analysis", "authors": [ { "first": "Sendong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Sicheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yiheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" } ], "year": 2016, "venue": "Neurocomputing", "volume": "173", "issue": "", "pages": "1943--1950", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sendong Zhao, Ting Liu, Sicheng Zhao, Yiheng Chen, and Jian-Yun Nie. 2016. Event causality extraction based on connectives analysis. Neurocomputing, 173:1943-1950.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "LearnDA: Learnable knowledge-guided data augmentation for event causality identification", "authors": [ { "first": "Xinyu", "middle": [], "last": "Zuo", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Weihua", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Yuguang", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2106.01649" ] }, "num": null, "urls": [], "raw_text": "Xinyu Zuo, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Weihua Peng, and Yuguang Chen. 2021. LearnDA: Learnable knowledge-guided data augmentation for event causality identification. 
arXiv preprint arXiv:2106.01649.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "text": "task-wise topic distributions in #sentences topic #articles #sentences causal vs. corr caus/corr vs. neither causal vs. none causal correlating causal/corr. neither causal none ai", "content": "", "num": null, "html": null }, "TABREF2": { "type_str": "table", "text": "", "content": "
", "num": null, "html": null }, "TABREF4": { "type_str": "table", "text": "Results of baseline experiment", "content": "
LabelModel (on keywords and POS tags)  LSTM (on textual representations)
tasktrain on test onr p f1 a train (validate) ontest onr p f1 a loss
ca vs noneWikiWikiGold+Gold .87 .60 .71 .64 Wiki+WikiGold+Gold WikiGold+Gold .92 .57 .70 .62 .79
sample size9929421810642 (2661)844
ca/co vs neither WikiWikiGold+Gold .85 .59 .70 .63 Wiki+WikiGold+Gold WikiGold+Gold .88 .61 .72 .64 .77
sample size9929518011258 (2815)1036
ca vs coWikiWikiGold+Gold .00 1.0 .00 .50 Wiki+WikiGold+Gold WikiGold+Gold .46 .62 .53 .60 .72
sample size99299627908 (1977)192
", "num": null, "html": null }, "TABREF5": { "type_str": "table", "text": "Results of mixed approach: Validation data on top of training data.", "content": "
Tables show (r)ecall, (p)recision, (f1)-score, and (a)ccuracy.
", "num": null, "html": null }, "TABREF7": { "type_str": "table", "text": "Fisher Score for our POS-tag 13 features calculated on our Gold dataset", "content": "", "num": null, "html": null } } } }