{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:14:11.975869Z" }, "title": "Extracting Events from Industrial Incident Reports", "authors": [ { "first": "Nitin", "middle": [], "last": "Ramrakhiyani", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "country": "India" } }, "email": "nitin.ramrakhiyani@tcs.com" }, { "first": "Swapnil", "middle": [], "last": "Hingmire", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "country": "India" } }, "email": "swapnil.hingmire@tcs.com" }, { "first": "Sangameshwar", "middle": [], "last": "Patil", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "country": "India" } }, "email": "sangameshwar.patil@tcs.com" }, { "first": "Alok", "middle": [], "last": "Kumar", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "country": "India" } }, "email": "" }, { "first": "Girish", "middle": [ "K" ], "last": "Palshikar", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "country": "India" } }, "email": "gk.palshikar@tcs.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Incidents in industries have huge social and political impact and minimizing the consequent damage has been a high priority. However, automated analysis of repositories of incident reports has remained a challenge. In this paper, we focus on automatically extracting events from incident reports. Due to absence of event annotated datasets for industrial incidents we employ a transfer learning based approach which is shown to outperform several baselines. We further provide detailed analysis regarding effect of increase in pre-training data and provide explainability of why pre-training improves the performance.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Incidents in industries have huge social and political impact and minimizing the consequent damage has been a high priority. However, automated analysis of repositories of incident reports has remained a challenge. In this paper, we focus on automatically extracting events from incident reports. Due to absence of event annotated datasets for industrial incidents we employ a transfer learning based approach which is shown to outperform several baselines. We further provide detailed analysis regarding effect of increase in pre-training data and provide explainability of why pre-training improves the performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The industrial revolution 1 has had a profound effect on the socio-political fabric of the world. Economic progress of societies has been highly correlated with their degree of industrialization. However, one of the flip sides of this progress has been the cost of large industrial accidents in terms of injuries to workers, damage to material and property as well as the irreparable loss of innocent human lives. Such major industrial incidents have had large social and political impacts and have prompted policy makers to devise multiple regulations towards prevention of such incidents. As an instance, the huge social uproar after the Bhopal Gas Leakage tragedy 2 had many political ramifications and resulted in creation of many new acts, rules and institutions in India and internationally. 
Governmental agencies in charge of industrial safety (OSHA; MINERVA) as well as the industrial enterprises themselves try to minimize the possibility of recurrence of industrial incidents. 1 https://en.wikipedia.org/wiki/Industrial_Revolution 2 https://en.wikipedia.org/wiki/Bhopal_disaster For this purpose, they carry out detailed investigations of incidents that have previously occurred to identify root causes and suggest preventive actions. In most cases, reports summarizing the incidents as well as their investigation are maintained in incident document repositories 3 . For example, Table 1 shows a sample incident report summary in the construction domain. However, most of these investigative studies are carried out manually. There is little work towards automated processing of repositories of incident reports. Automated processing of incident reports requires us to solve multiple sub-problems such as identification of domain-specific entities, events, different states or conditions, relations between the events, coreference resolution, etc. As an example, we show the entities, events and states marked in red, blue and green respectively in Table 1. In this paper, we focus on an important stage of the above pipeline - extraction of events from incident reports. Event identification is central to the automated processing of incident reports because events pithily capture what exactly happened during an incident. Identification of events is also an important task required for downstream applications such as narrative understanding and visualization through knowledge representations such as Message Sequence Charts (MSC) (Palshikar et al., 2019; Hingmire et al., 2020) and event timelines (Bedi et al., 2017). Further, most of the work in event detection has focused on events in the general domain, such as ACE (Linguistic Data Consortium, 2005) and ECB (Bejan and Harabagiu, 2010). Little attention has been paid in the literature to automated event extraction and analysis from industrial incident reports. To the best of our knowledge, there is no dataset of incident reports comprising annotations for event identification (spans and attributes). This motivates us to experiment with unsupervised or weakly supervised approaches. In addition to experimenting with unsupervised baselines, we propose a transfer learning approach to extract events, which first learns the nature of events in the general domain through pre-training and then requires post-training with minimal training data in the incident domain. We consider incident reports from two industries - civil aviation and construction - and focus on identifying events involving risk-prone machinery or vehicles, common causes, human injuries and casualties, and remedial measures, if any. We show that, on both domains, the proposed transfer learning based approach outperforms several unsupervised and weakly supervised baselines. We further supplement the results with a detailed analysis of the effect of increasing the pre-training data and explain the benefit of pre-training through a novel clustering based approach. We discuss relevant related work in Section 2. 
In Section 3, we cover the event extraction process, detailing the annotation guidelines and the proposed approach. In Section 4, we explain the experimental setup, evaluation and analysis. We finally conclude in Section 5.", "cite_spans": [ { "start": 997, "end": 998, "text": "1", "ref_id": null }, { "start": 2815, "end": 2839, "text": "(Palshikar et al., 2019;", "ref_id": "BIBREF18" }, { "start": 2840, "end": 2862, "text": "Hingmire et al., 2020)", "ref_id": null }, { "start": 2883, "end": 2902, "text": "(Bedi et al., 2017)", "ref_id": "BIBREF2" }, { "start": 3002, "end": 3036, "text": "(Linguistic Data Consortium, 2005)", "ref_id": null }, { "start": 3045, "end": 3072, "text": "(Bejan and Harabagiu, 2010)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1757, "end": 1764, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 2325, "end": 2332, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This section discusses related work on two important aspects - automated analysis of textual incident reports/descriptions, and unsupervised or weakly supervised event extraction approaches. To the best of our knowledge, this is the first work on labelling and predicting events (a token-level object) in incident report text. However, there are multiple papers which analyze incident reports at the document or sentence level for various tasks such as classification, cause-effect extraction and incident similarity. Tanguy et al. (2016) use NLP techniques to analyze aviation safety reports. The authors focus on classification of reports into different categories, and use probabilistic topic models to analyze different aspects of incidents. The authors also propose the timePlot system to identify similar incident reports. Similar to (Tanguy et al., 2016), (Pence et al., 2020) perform text classification of event reports in nuclear power plants. However, neither (Tanguy et al., 2016) nor (Pence et al., 2020) focuses on extraction of specific events from incident reports. Dasgupta et al. (2018) use neural network techniques to extract occupational health and safety related information from news articles related to industrial incidents. Specifically, they focus on extraction of the target organization, safety issues, the geographical location of the incident and the penalty mentioned in the article. In the context of event extraction approaches, multiple state-of-the-art supervised approaches have been proposed in the literature recently. However, their complex neural network architectures demand significant amounts of training data, which is not available in the current scenario of event extraction from incident reports. Hence, we discuss three event extraction approaches which are weakly supervised in nature. In (Palshikar et al., 2019), the authors propose a rule-based approach which considers all past-tense verbs as events, with a WordNet-based filter retaining only \"action\" or \"communication\" events. The authors do not propose support for extracting nominal events. (Araki and Mitamura, 2018) propose an Open Domain Event Extraction approach which uses linguistic resources like WordNet and Wikipedia to generate training data in a distantly supervised manner and then trains a BiLSTM-based supervised event detection model using this data. Wang et al. (2019) propose a weakly supervised approach for event detection. 
The authors first construct a large-scale event-related candidate set and then use an adversarial training mechanism to identify events. We use the first two approaches - (Palshikar et al., 2019) and (Araki and Mitamura, 2018) - as our baselines and discuss them in detail in Section 4. The third approach (Wang et al., 2019), based on adversarial training, is evaluated on closed-domain datasets and hence it would be difficult to tune it and use it as a baseline for an open-domain event extraction task like ours.", "cite_spans": [ { "start": 532, "end": 551, "text": "Tanguy et al.(2016)", "ref_id": "BIBREF21" }, { "start": 860, "end": 881, "text": "(Tanguy et al., 2016)", "ref_id": "BIBREF21" }, { "start": 884, "end": 904, "text": "(Pence et al., 2020)", "ref_id": "BIBREF19" }, { "start": 989, "end": 1010, "text": "(Tanguy et al., 2016)", "ref_id": "BIBREF21" }, { "start": 1015, "end": 1035, "text": "(Pence et al., 2020)", "ref_id": "BIBREF19" }, { "start": 1105, "end": 1127, "text": "Dasgupta et al. (2018)", "ref_id": "BIBREF4" }, { "start": 1842, "end": 1866, "text": "(Palshikar et al., 2019)", "ref_id": "BIBREF18" }, { "start": 2115, "end": 2141, "text": "(Araki and Mitamura, 2018)", "ref_id": "BIBREF1" }, { "start": 2391, "end": 2408, "text": "Wang et al.(2019)", "ref_id": "BIBREF22" }, { "start": 2638, "end": 2662, "text": "(Palshikar et al., 2019)", "ref_id": "BIBREF18" }, { "start": 2667, "end": 2693, "text": "(Araki and Mitamura, 2018)", "ref_id": "BIBREF1" }, { "start": 2771, "end": 2790, "text": "(Wang et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Events are specific occurrences that appear in the text to denote happenings or changes in the states of the involved participants. Multiple guidelines defining events and their extents in text have been proposed in the literature (Linguistic Data Consortium, 2005; Mitamura et al., 2017). It is important to note that no event-annotated data is available for any incident text dataset, and this compels us to consider event extraction approaches which are either unsupervised or involve minimal training data. We make a twofold contribution in this regard. Firstly, we annotate a moderately sized incident text dataset 4 for evaluation and weak supervision. Secondly, we propose a transfer learning approach based on the standard BiLSTM sequence labelling architecture and compare it with three baselines from the literature.", "cite_spans": [ { "start": 221, "end": 255, "text": "(Linguistic Data Consortium, 2005;", "ref_id": null }, { "start": 256, "end": 278, "text": "Mitamura et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Event Extraction in Incident Reports", "sec_num": "3" }, { "text": "For incident reports, we define events to be specific verbs and nouns which describe pre-incident, incident and post-incident happenings. Though the semantics of the events are specific to this domain, the nature and function of verbs and nouns representing events in standard domains are preserved. In this paper, we focus on extraction of event triggers, i.e., the primary verb/noun token indicative of an event, as against an event phrase spanning multiple tokens. Identification of the event triggers is pivotal to the event extraction problem, and once an event trigger is identified, it is straightforward to construct an event span by collecting specific dependency children of the trigger. 
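As an illustration of this span-construction step, the following is a minimal sketch (not our exact procedure) using spaCy; the particular set of dependency labels retained here (KEEP_DEPS) is an assumption for illustration.

import spacy

nlp = spacy.load("en_core_web_sm")

# Dependency relations whose children we fold into the event span (assumed set).
KEEP_DEPS = {"aux", "auxpass", "neg", "prt", "compound", "xcomp"}

def event_span(trigger):
    # Collect the trigger and its selected children, in sentence order.
    picked = [trigger] + [c for c in trigger.children if c.dep_ in KEEP_DEPS]
    return " ".join(t.text for t in sorted(picked, key=lambda t: t.i))

doc = nlp("The plane was scheduled to operate a sightseeing flight.")
for token in doc:
    if token.pos_ == "VERB":
        print(token.text, "->", event_span(token))
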
We present example sentences and the event triggers we focus on extracting in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 786, "end": 793, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Describing and Annotating Events in Incident Reports", "sec_num": "3.1" }, { "text": "Keeping in mind the domain-specific semantics of the events, we choose the Open Event extraction guidelines proposed by (Araki, 2018). We differ from these guidelines in a few places and suitably modify them before guiding our annotators for the task. The details of the differences are as follows:", "cite_spans": [ { "start": 336, "end": 349, "text": "(Araki, 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Describing and Annotating Events in Incident Reports", "sec_num": "3.1" }, { "text": "\u2022 (Araki, 2018) suggests labelling of individual adjectives and adverbs as events. Based on our observations of incident text data, we rarely find adjectives or adverbs being \"eventive\". Hence, we restrict our events to be either verbs (verb-based) or nouns (nominal).", "cite_spans": [ { "start": 2, "end": 15, "text": "(Araki, 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Describing and Annotating Events in Incident Reports", "sec_num": "3.1" }, { "text": "\u2022 (Araki, 2018) suggests labelling of states and conditions as events. In the current work, we only focus on extraction of instantaneous events and do not extract events describing long-running state-like situations or general factual information. For example, we do not extract had in the sentence The plane had three occupants as an event, as it only gives information about the plane, but we extract events such as crashed in the sentence The plane crashed in the sea.", "cite_spans": [ { "start": 2, "end": 15, "text": "(Araki, 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Describing and Annotating Events in Incident Reports", "sec_num": "3.1" }, { "text": "\u2022 (Araki, 2018) suggests considering light verb constructions (such as \"make a turn\") as a single combined event. However, we saw a need to consider more such combined verb formulations. As an example, consider the events scheduled and operate in the sentence The plane was scheduled to operate a sightseeing flight.", "cite_spans": [ { "start": 2, "end": 15, "text": "(Araki, 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Describing and Annotating Events in Incident Reports", "sec_num": "3.1" }, { "text": "To better capture the complete event semantics, we do not consider these words as separate events but as a single combined event scheduled to operate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Describing and Annotating Events in Incident Reports", "sec_num": "3.1" }, { "text": "Event extraction can be posed as a supervised sequence labelling problem, and a standard BiLSTM-CRF based sequence labeller (Lample et al., 2016) can be employed. However, we reiterate that, as a large event-annotated dataset specific to the domain of incident reports is not available, it would be difficult to train such a sequence labeller with high accuracy. 
We hypothesize that pre-training the BiLSTM-CRF sequence labeller with event-labelled data from the general domain would help the network learn the general nature of verb-based and nominal events (\"eventiveness\"). Then, as part of a transfer learning procedure (Yang et al., 2017), post-training the network on a small event-labelled dataset of incident reports provides an enriched incident event labeller. The proposed approach is based on this hypothesis; the transfer-learnt model is then used to predict event triggers at test time. We base our experimentation on incidents from two domains - AVIATION and CONSTRUCTION.", "cite_spans": [ { "start": 123, "end": 144, "text": "(Lample et al., 2016)", "ref_id": "BIBREF9" }, { "start": 627, "end": 646, "text": "(Yang et al., 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Transfer Learning approach", "sec_num": "3.2" }, { "text": "To develop the AVIATION dataset, we crawled all the 54 reports about civil aviation incidents 5 recorded in India between 2003 and 2011. For the CONSTRUCTION dataset, we crawled 67 incident report summaries 6 of some major construction incidents in New York (May 1990 to July 2019). We annotate 40 incident reports from AVIATION and 45 from the CONSTRUCTION dataset for both events and event temporal ordering. We treat 10 reports in AVIATION and 15 in CONSTRUCTION as a small labelled training dataset. The annotated dataset statistics are presented in Table 3.", "cite_spans": [], "ref_spans": [ { "start": 553, "end": 560, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "As the first baseline (B1), we consider the approach proposed in (Palshikar et al., 2019). The authors extract Message Sequence Charts (MSC) from textual narratives, which depict messages passing between actors (entities) in the narrative. Their message extraction approach forms the basis for this event extraction baseline. The approach first identifies past-tense verbs and then propagates the past tense to their present-tense child verbs.", "cite_spans": [ { "start": 65, "end": 89, "text": "(Palshikar et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "It then classifies all identified verbs as either \"action\" or \"communication\" using WordNet hypernyms of the verb itself or its nominal forms, and ignores all verbs which are neither actions nor communications (mental events such as thought, envisioned). The approach doesn't extract nominal events, so we supplement this baseline with a simple nominal event extraction technique. We first consider a NomBank (Meyers et al., 2004) based approach which checks each noun for its presence in NomBank and, if found, marks it as a nominal event. We also consider another approach based on the deverbal technique proposed by Gurevich et al. (2008), which checks if a candidate noun is the deverbal form of any verb in VerbNet (Palmer et al.). If such a verb is found, it tags the noun as a nominal event. We take a union of the outputs of the two approaches and filter it using WordNet to remove obvious false positives (such as entities), obtaining a final set of nominal events from the given incident report. 
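A minimal sketch of this nominal-event supplement is given below, assuming NLTK's WordNet interface; NOMBANK_NOUNS stands in for a NomBank predicate lexicon loaded separately, and the derivationally-related-forms check is a WordNet stand-in for the VerbNet deverbal lookup.

from nltk.corpus import wordnet as wn

# Placeholder for a NomBank predicate lexicon loaded separately.
NOMBANK_NOUNS = {"collapse", "installation", "upgrade"}

def is_deverbal(noun):
    # Deverbal check: the noun is derivationally related to some verb
    # (a WordNet proxy for the VerbNet lookup described above).
    for synset in wn.synsets(noun, pos=wn.NOUN):
        for lemma in synset.lemmas():
            if any(rel.synset().pos() == "v"
                   for rel in lemma.derivationally_related_forms()):
                return True
    return False

def is_eventive(noun):
    # WordNet filter: keep nouns whose first sense is an act/event/process.
    synsets = wn.synsets(noun, pos=wn.NOUN)
    return bool(synsets) and synsets[0].lexname() in {
        "noun.act", "noun.event", "noun.process"}

def nominal_events(nouns):
    candidates = {n for n in nouns if n in NOMBANK_NOUNS or is_deverbal(n)}
    return {n for n in candidates if is_eventive(n)}

print(nominal_events(["collapse", "tower", "upgrade", "employee"]))
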
", "cite_spans": [ { "start": 411, "end": 432, "text": "(Meyers et al., 2004)", "ref_id": "BIBREF13" }, { "start": 623, "end": 662, "text": "Gurevich et al. (Gurevich et al., 2008)", "ref_id": "BIBREF6" }, { "start": 741, "end": 756, "text": "(Palmer et al.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "As the second baseline (B2), we consider the Open Domain Event Extraction technique proposed in (Araki and Mitamura, 2018). Most prior work on extraction of events is restricted to (i) closed domains such as the ACE 2005 event ontology and (ii) limited syntactic types. In that paper, the authors highlight the need for open-domain event extraction, where events are not restricted to a domain or a syntactic type, and hence it is a suitable baseline. The authors propose a distant supervision method to identify events. The method comprises two steps: (i) training data generation, and (ii) event detection. In the first step of distantly supervised data creation, candidate events are identified and filtered using WordNet to disambiguate their eventiveness. Further, Wikipedia is used to identify events mentioned using proper nouns such as \"Hurricane Katrina\". Both these steps help generate large amounts of good-quality (but not gold) training data. In the second step, a BiLSTM-based supervised event detection model is trained on this distantly generated training data. The experimental results show that the distant supervision improves event detection performance in various domains, without any need for manual annotation of events. As the third baseline (B3), we use the standard BiLSTM-based sequence labelling neural network (Lample et al., 2016) employed frequently in information extraction tasks such as Named Entity Recognition (NER). We use the small labelled training dataset to train this BiLSTM-based sequence labeller for event identification and use it to extract events at test time.", "cite_spans": [ { "start": 95, "end": 121, "text": "(Araki and Mitamura, 2018)", "ref_id": "BIBREF1" }, { "start": 1337, "end": 1358, "text": "(Lample et al., 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "For representing the text tokens as input in the proposed neural network approaches, we experiment with standard static embeddings (GloVe (Pennington et al., 2014)) and the more recent contextual embeddings (BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019)). We consider 100-dimensional GloVe embeddings and 768-dimensional contextual BERT and RoBERTa representations for the experiments.", "cite_spans": [ { "start": 219, "end": 240, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Word Embeddings", "sec_num": "4.3.1" }, { "text": "The neural network architecture we use for baseline B3 and the proposed transfer learning approach is based on the BiLSTM-CRF architecture proposed by (Lample et al., 2016) for sequence labelling. It is shown in Figure 1a. As part of the input, we concatenate the word embeddings with 20-dimensional learnable POS and NER embeddings. We store these learnt embeddings along with the model and reload them during inference. An important aspect to note is that a large amount of training data is not available, and hence the number of parameters the network needs to learn should be as small as possible to avoid overfitting. In particular, the connection between the input layer, which is 140-dimensional (in the case of GloVe embeddings, 100 + 20 (POS) + 20 (NER)), and the BiLSTM layer (with 140 hidden units) contributes 140 \u00d7 140 \u00d7 2 weights. In the case of 768-dimensional BERT/RoBERTa based representations, this blows up to 768 \u00d7 768 \u00d7 2 (about 6 times larger in each dimension, i.e., roughly 30 times more weights), assuming the LSTM hidden units are also 768. The network fails to learn when trained on the limited data with 768-dimensional embeddings, so we devise a small change to the input layer to support learning in this case. We introduce a dense layer just after the 768-dimensional BERT/RoBERTa input with a linear activation function to map the 768-dimensional input into a lower-dimensional space, as shown in Figure 1b. Due to the linear activation, this layer behaves like a linear transformation of a high-dimensional input vector to a lower-dimensional input vector. Additionally, we concatenate the previously mentioned learnable POS and NER embeddings to the transformed input embeddings as the final input to the network. 
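The following is a structural sketch of the Figure 1b network in tf.keras, under stated assumptions: the sequence length, tag-set size and projection dimension are illustrative, and a softmax tag layer stands in for the CRF layer of the actual architecture.

import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, N_POS, N_NER, N_TAGS, PROJ_DIM = 100, 45, 20, 2, 100

tok_in = layers.Input(shape=(MAX_LEN, 768), name="bert_vectors")
pos_in = layers.Input(shape=(MAX_LEN,), name="pos_ids")
ner_in = layers.Input(shape=(MAX_LEN,), name="ner_ids")

# Linear dense layer: projects 768-d contextual vectors to a lower dimension.
proj = layers.Dense(PROJ_DIM, activation="linear")(tok_in)
pos_emb = layers.Embedding(N_POS, 20)(pos_in)  # learnable 20-d POS embeddings
ner_emb = layers.Embedding(N_NER, 20)(ner_in)  # learnable 20-d NER embeddings

x = layers.Concatenate()([proj, pos_emb, ner_emb])
x = layers.Bidirectional(layers.LSTM(PROJ_DIM + 40, return_sequences=True))(x)
# Softmax tag layer; the architecture in the paper uses a CRF layer here.
tags = layers.TimeDistributed(layers.Dense(N_TAGS, activation="softmax"))(x)

model = tf.keras.Model([tok_in, pos_in, ner_in], tags)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
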
", "cite_spans": [ { "start": 151, "end": 172, "text": "(Lample et al., 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 216, "end": 225, "text": "Figure 1a", "ref_id": null }, { "start": 1349, "end": 1358, "text": "Figure 1b", "ref_id": null } ], "eq_spans": [], "section": "Neural Network Design and Tuning", "sec_num": "4.3.2" }, { "text": "We employ 5-fold cross-validation on the small training dataset for tuning the hyperparameters of the neural network, separately for both domains and embedding types. We found minimal difference in hyperparameter values across the Aviation and Construction datasets and hence we use similar parameters in both cases. The tuned hyperparameters with their values are shown in Table 4.", "cite_spans": [], "ref_spans": [ { "start": 375, "end": 382, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 418, "end": 427, "text": "(Fig. 1a)", "ref_id": null }, { "start": 454, "end": 462, "text": "(Fig. 1b", "ref_id": null } ], "eq_spans": [], "section": "Neural Network Design and Tuning", "sec_num": "4.3.2" }, { "text": "Baseline B1 is unsupervised and is implemented and used directly. Code for baseline B2 is made available by the authors (https://bitbucket.org/junaraki/coling2018-event) and we install and use it without any change. The BiLSTM-CRF sequence labelling networks, used for baseline B3 and the transfer learning approach, are implemented using Keras in Python 3. These approaches are trained on the small training data shown in Table 3. To handle randomness in neural network weight initialization and to ensure robustness of the results, we run every neural network experiment (both hyperparameter tuning as well as final test experiments) five times and report an average of the five runs. The standard deviation of the precision, recall and F1 across these runs was as low as 1-2%. With respect to the pre-training data for the transfer learning approach, we use the event annotations from the ECB dataset (Bejan and Harabagiu, 2010), a dataset for event coreference tasks with comprehensive event annotations (about 8.8K labelled events in about 9.7K sentences). 
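A sketch of this pre-train/post-train recipe is given below, assuming a build_model() function returning a compiled model like the one sketched earlier, and already-encoded (inputs, tags) arrays for ECB and the incident domain; the epoch counts are illustrative, not the tuned values.

def transfer_learn(build_model, ecb_data, incident_data,
                   pretrain_epochs=20, posttrain_epochs=30):
    x_ecb, y_ecb = ecb_data
    x_inc, y_inc = incident_data
    model = build_model()
    # Stage 1: pre-train on general-domain event annotations (ECB).
    model.fit(x_ecb, y_ecb, epochs=pretrain_epochs, verbose=0)
    # Stage 2: post-train the same weights on the small incident-domain set.
    model.fit(x_inc, y_inc, epochs=posttrain_epochs, verbose=0)
    return model
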
", "cite_spans": [ { "start": 876, "end": 903, "text": "(Bejan and Harabagiu, 2010)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 376, "end": 383, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Implementation", "sec_num": "4.3.3" }, { "text": "As we can observe in Table 5, the proposed transfer learning approach (TL) outperforms the baselines (B1, B2 and B3) irrespective of whether static or contextual embeddings are used. Further, as expected, the BiLSTM-based baseline B3 shows lower recall than the transfer learning approach, which achieves significantly improved recall, particularly on the Construction dataset, for all embedding types. We observe a similar boost in recall for BERT representations on the Aviation dataset. An important point to note here is that the amount of pre-training data leading to the best results varies between 40% and 60% across combinations of dataset and embedding type. In Table 5, we report the performance for the best amount of pre-training data and present a detailed analysis of the effect of increasing the pre-training data in Section 4.4.1. As part of the analysis, we first measure the effect of increasing the amount of pre-training data in the transfer learning approach and find out what amount of pre-training leads to the best results. Secondly, we try to explain why the pre-training works, through a novel clustering methodology over the BiLSTM-learnt context representations of the input embeddings. And thirdly, we present an ensemble approach from the practical standpoint of using these systems in real-life use cases.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 5", "ref_id": null }, { "start": 681, "end": 688, "text": "Table 5", "ref_id": null }, { "start": 1298, "end": 1305, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation and Analysis", "sec_num": "4.4" }, { "text": "As an important part of the analysis, we measure the effect of increasing the pre-training data in the transfer learning approach. We hypothesize that the performance would rise up to a certain point with increasing pre-training data and would then stabilize and change minimally. This is based on the notion that pre-training positions the network weights in a better space from which the training on domain-specific data should begin. However, beyond a certain amount of pre-training, the initialization may not lead to any better initial values for the weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Amount of pre-training data", "sec_num": "4.4.1" }, { "text": "To check the validity of this hypothesis, we pre-trained the network with varied amounts of pre-training data (1%, 5%, 10%, 20%, 30%, ..., 100%) and checked the performance on test data. Figure 2 and Figure 3 show the obtained F1 curves for these pre-training settings for the Aviation and Construction datasets respectively. As with other experiments, each point in the graphs is an average over 5 runs of training and testing. It can be seen that with increasing pre-training data, the performance improves and reaches a peak between 30% and 70% of the available pre-training data, varying across input embedding types. We observe a small dip in performance when nearly all of the pre-training data is used. Interestingly, BERT-based representations start showing promise with even 1% of the pre-training data for the Aviation dataset. 
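A sketch of this experiment loop follows, reusing the hypothetical transfer_learn() above; test_f1 is a placeholder callback that evaluates a trained model on the test split.

import numpy as np

def pretraining_curve(build_model, ecb_data, incident_data, test_f1,
                      fractions=(0.01, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0),
                      runs=5):
    x_ecb, y_ecb = ecb_data
    curve = {}
    for frac in fractions:
        n = int(len(x_ecb) * frac)
        scores = []
        for _ in range(runs):  # average out random weight initialization
            model = transfer_learn(build_model, (x_ecb[:n], y_ecb[:n]),
                                   incident_data)
            scores.append(test_f1(model))
        curve[frac] = float(np.mean(scores))
    return curve
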
", "cite_spans": [], "ref_spans": [ { "start": 185, "end": 193, "text": "Figure 2", "ref_id": null }, { "start": 198, "end": 206, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Amount of pre-training data", "sec_num": "4.4.1" }, { "text": "To explain why the pre-training is helping, we need to have an understanding of what the network is learning about the input embeddings of the tokens and their context from the bidirectional LSTM. It would be helpful if one could analyze the token-wise output of the BiLSTM layer, which incorporates both the input embeddings and the context information and feeds these representations to the CRF layer as features for sequence learning/inference (see Figure 1a). However, internal representations in a neural network are a set of numbers not comprehensible in a straightforward manner and require indirect observation to decipher what is captured by them. One such indirect analysis of these internal representations involves clustering them and observing whether representations with similar semantics cluster together and rarely cluster with dissimilar representations. In this case, the desired semantics would mean capture of the \"eventiveness\" property in event tokens. We perform such a clustering based analysis on extractions in the Construction dataset.", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 266, "text": "Figure 1a)", "ref_id": null } ], "eq_spans": [], "section": "Explainability of Pre-training", "sec_num": "4.4.2" }, { "text": "We consider all tokens which are marked as events in the gold standard and are also correctly predicted as events by the transfer-learnt model (TL), such as tokens t1 and t2 in Table 6. We obtain the BiLSTM output representations for these tokens by passing their sentences through the TL model truncated at the input of the CRF layer, and collect these representations (r_TL^t1 and r_TL^t2) in a set R_TL. As observed from the results, the baseline model B3 has a lower recall than the TL model, and for tokens such as t1 and t2, we can categorize the predictions of the B3 model into either 'correctly predicted as events' or 'missed and marked as non-events'. We divide these tokens into correct and incorrect sets as per the baseline model predictions. We obtain the BiLSTM output representations for these tokens from the B3 model in a similar way as earlier and respectively collect these representations (r_B3^t1 and r_B3^t2) in two sets, R_B3C (B3 correct) and R_B3I (B3 incorrect). We hypothesize that all the representations which lead to a correct event prediction should belong to a subspace of \"eventive\" representations and should be far from the representations which lead to an incorrect prediction. Hence, representations in the sets R_TL and R_B3C should cluster differently from the representations in the set R_B3I. So, in the context of the example tokens of Table 6, the representations r_TL^t1, r_TL^t2 and r_B3^t1 should cluster differently from r_B3^t2. On performing agglomerative clustering on the above representations with a maximum distance of 0.3 (i.e., a similarity of 0.7), we find that the representations in R_TL and R_B3C belong to multiple clusters which are highly separate from the clusters housing the representations in R_B3I. 
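A sketch of this clustering check follows, assuming scikit-learn (version 1.2 or later for the metric parameter); representations holds the BiLSTM output vectors and labels marks whether each vector led to a correct (0) or incorrect (1) prediction.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_purity(representations, labels, max_distance=0.3):
    # Agglomerative clustering with a cosine-distance cutoff of 0.3.
    clusterer = AgglomerativeClustering(
        n_clusters=None, distance_threshold=max_distance,
        metric="cosine", linkage="average")
    assignments = clusterer.fit_predict(np.asarray(representations))
    labels = np.asarray(labels)
    # Purity: each cluster is credited with its majority class.
    majority = sum(np.bincount(labels[assignments == c]).max()
                   for c in np.unique(assignments))
    return majority / len(labels)
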
This validates our hypothesis and highlights that the R_TL and R_B3C representations lie closer to the required \"eventiveness\" subspace and far from the R_B3I representations, which lead to incorrect predictions. We further strengthen the claim by computing the purity (Manning et al., 2008) of the representation clusters. The purity of a clustering measures the extent to which clusters contain instances of a single class. For predictions of the GloVe embeddings based models, we observe a purity of 0.9781, and for the BERT embeddings based models, a purity of 0.9832.", "cite_spans": [ { "start": 2038, "end": 2060, "text": "(Manning et al., 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 167, "end": 174, "text": "Table 6", "ref_id": "TABREF8" }, { "start": 1387, "end": 1394, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Explainability of Pre-training", "sec_num": "4.4.2" }, { "text": "We also performed a detailed analysis of the errors in verb-based and nominal event predictions. It was observed that the deep learning approaches miss important verb-based events, leading to low recall on verb-based events in particular, but identify nominal events correctly in most cases. The rule-based baseline B1 captures most of the verb-based events, as it designates most past-tense verbs as events. However, it fails to identify nominal events correctly, as it doesn't observe the context of a noun while deciding its event nature. This observation prompted us to perform a novel ensemble where we create a union of all verb-based event predictions of the rule-based approach and all nominal event predictions of the transfer learning based approach using GloVe embeddings. We believe this ensemble approach holds value from a practical standpoint in two ways. Firstly, using GloVe embeddings eases compute and maintenance requirements in deployment environments, which are higher for handling BERT/RoBERTa based contextual models. Further, as seen from the results in Table 5, GloVe embeddings perform at par with contextual representations. Secondly, when showing a user predictions of events from an incident report, the user might be more perturbed by incorrect nominal events than by a few extra verb-based events. As seen in Table 5, this ensemble approach (row marked ENS) shows a respectable increase in precision over the transfer learning approach on both datasets and may be useful in real-life incident event identification systems. 
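A toy sketch of this union follows, where the two prediction sets are placeholders assumed to contain (token index, POS tag) pairs produced by the B1 and GloVe-based TL systems respectively.

def ensemble_events(b1_predictions, tl_glove_predictions):
    # Keep B1's verb-based events and the GloVe TL model's nominal events.
    verbal = {e for e in b1_predictions if e[1].startswith("VB")}
    nominal = {e for e in tl_glove_predictions if e[1].startswith("NN")}
    return verbal | nominal
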
", "cite_spans": [], "ref_spans": [ { "start": 1117, "end": 1124, "text": "Table 5", "ref_id": null }, { "start": 1376, "end": 1383, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Practical standpoint", "sec_num": "4.4.3" }, { "text": "In this paper, we focused on extracting events from reports on incidents in the Aviation and Construction domains. As there is no dataset of incident reports comprising annotations for event extraction, we contributed by proposing modifications to a set of existing event guidelines and accordingly preparing a small annotated dataset. Keeping in mind the limited data setting, we proposed a transfer learning approach over the existing BiLSTM-CRF based sequence labelling approach and experimented with different static and contextual embeddings. We observed that pre-training improves the performance of event extraction for all combinations of domains and embeddings. As part of the analysis, we showed the impact of employing varying amounts of pre-training data. We also performed a novel clustering based analysis to explain why pre-training improves the performance of event extraction, and proposed a novel ensemble approach motivated by a practical viewpoint. As future work, we plan to pursue other important stages of the incident report analysis pipeline, such as (i) entity/actor identification, which involves finding the important participants in an incident, (ii) event argument identification, which involves finding participants which are agents or experiencers of the event, (iii) state/condition identification, which involves finding expressions describing long-running state-like conditions, and (iv) event-event relation identification, which involves establishing relation links between events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "https://www.osha.gov/data", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "the dataset can be obtained through an email request to the authors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://dgca.gov.in/digigov-portal/?page=IncidentReports 6 https://www.osha.gov/construction/engineering", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Extraction of Event Structures from Text", "authors": [ { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Araki. 2018. Extraction of Event Structures from Text. Ph.D. thesis, Carnegie Mellon University.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Open-Domain Event Detection using Distant Supervision", "authors": [ { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "878--891", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Araki and Teruko Mitamura. 2018. Open-Domain Event Detection using Distant Supervision. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 878-891, Santa Fe, NM, USA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Event timeline generation from history textbooks", "authors": [ { "first": "Harsimran", "middle": [], "last": "Bedi", "suffix": "" }, { "first": "Sangameshwar", "middle": [], "last": "Patil", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA 2017)", "volume": "", "issue": "", "pages": "69--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harsimran Bedi, Sangameshwar Patil, Swapnil Hingmire, and Girish Palshikar. 2017. Event timeline generation from history textbooks. 
In Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA 2017), pages 69-77.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised event coreference resolution with rich linguistic features", "authors": [ { "first": "Cosmin", "middle": [], "last": "Bejan", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1412--1422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cosmin Bejan and Sanda Harabagiu. 2010. Unsupervised event coreference resolution with rich linguistic features. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1412-1422, Uppsala, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Extraction and visualization of occupational health and safety related information from open web", "authors": [ { "first": "Tirthankar", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Abir", "middle": [], "last": "Naskar", "suffix": "" }, { "first": "Rupsa", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Lipika", "middle": [], "last": "Dey", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2018", "volume": "", "issue": "", "pages": "434--439", "other_ids": { "DOI": [ "10.1109/WI.2018.00-56" ] }, "num": null, "urls": [], "raw_text": "Tirthankar Dasgupta, Abir Naskar, Rupsa Saha, and Lipika Dey. 2018. Extraction and visualization of occupational health and safety related information from open web. In 2018 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2018, Santiago, Chile, December 3-6, 2018, pages 434-439. IEEE Computer Society.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Deverbal nouns in knowledge representation", "authors": [ { "first": "Olga", "middle": [], "last": "Gurevich", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Crouch", "suffix": "" }, { "first": "Tracy", "middle": [ "Holloway" ], "last": "King", "suffix": "" }, { "first": "Valeria", "middle": [ "De" ], "last": "Paiva", "suffix": "" } ], "year": 2008, "venue": "Journal of Logic and Computation", "volume": "18", "issue": "3", "pages": "385--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olga Gurevich, Richard Crouch, Tracy Holloway King, and Valeria De Paiva. 2008. Deverbal nouns in knowledge representation. 
Journal of Logic and Computation, 18(3):385-404.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Girish Keshav Palshikar", "authors": [ { "first": "Swapnil", "middle": [], "last": "Hingmire", "suffix": "" }, { "first": "Nitin", "middle": [], "last": "Ramrakhiyani", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swapnil Hingmire, Nitin Ramrakhiyani, Avinash Kumar Singh, Sangameshwar Patil, Girish Keshav Palshikar, Pushpak Bhattacharyya, and Vasudeva Varma.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Extracting Message Sequence Charts from Hindi Narrative Text", "authors": [], "year": 2020, "venue": "Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events, NUSE@ACL 2020, Online", "volume": "", "issue": "", "pages": "87--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Extracting Message Sequence Charts from Hindi Narrative Text. In Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events, NUSE@ACL 2020, Online, July 9, 2020, pages 87-96. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--270", "other_ids": { "DOI": [ "10.18653/v1/N16-1030" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "ACE (Automatic Content Extraction) English Annotation Guidelines for Events", "authors": [], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linguistic Data Consortium. 2005. 
ACE (Automatic Content Extraction) English Annotation Guidelines for Events.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Introduction to Information Retrieval", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Prabhakar", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, USA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The NomBank Project: An Interim Report", "authors": [ { "first": "A", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "R", "middle": [], "last": "Reeves", "suffix": "" }, { "first": "C", "middle": [], "last": "Macleod", "suffix": "" }, { "first": "R", "middle": [], "last": "Szekely", "suffix": "" }, { "first": "V", "middle": [], "last": "Zielinska", "suffix": "" }, { "first": "B", "middle": [], "last": "Young", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation", "volume": "", "issue": "", "pages": "24--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Meyers, R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman. 2004. The NomBank Project: An Interim Report. In HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation, pages 24-31, Boston, Massachusetts, USA. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The MINERVA Portal of European Commission", "authors": [], "year": null, "venue": "Online; accessed 26-Apr", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MINERVA. The MINERVA Portal of European Commission. https://minerva.jrc.ec.europa.eu/en/minerva/about. [Online; accessed 26-Apr-", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Events detection, coreference and sequencing: What's next? 
Overview of the TAC KBP 2017 event track", "authors": [ { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" }, { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teruko Mitamura, Zhengzhong Liu, and Eduard H. Hovy. 2017. Events detection, coreference and sequencing: What's next? Overview of the TAC KBP 2017 event track. In Proceedings of the 2017 Text Analysis Conference, TAC 2017, Gaithersburg, Maryland, USA, November 13-14, 2017. NIST.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Occupational Safety and Health Administration", "authors": [], "year": null, "venue": "Online; accessed 26-Apr", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "OSHA. Occupational Safety and Health Administration. https://www.osha.gov/Publications/3439at-a-glance.pdf. [Online; accessed 26-Apr-", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Verbnet", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Bonial", "suffix": "" }, { "first": "Jena", "middle": [], "last": "Hwang", "suffix": "" } ], "year": null, "venue": "The Oxford Handbook of Cognitive Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer, Claire Bonial, and Jena Hwang. Verbnet. In The Oxford Handbook of Cognitive Science.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Extraction of message sequence charts from narrative history text", "authors": [ { "first": "Girish", "middle": [], "last": "Palshikar", "suffix": "" }, { "first": "Sachin", "middle": [], "last": "Pawar", "suffix": "" }, { "first": "Sangameshwar", "middle": [], "last": "Patil", "suffix": "" }, { "first": "Swapnil", "middle": [], "last": "Hingmire", "suffix": "" }, { "first": "Nitin", "middle": [], "last": "Ramrakhiyani", "suffix": "" }, { "first": "Harsimran", "middle": [], "last": "Bedi", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Vasudeva", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First Workshop on Narrative Understanding", "volume": "", "issue": "", "pages": "28--36", "other_ids": { "DOI": [ "10.18653/v1/W19-2404" ] }, "num": null, "urls": [], "raw_text": "Girish Palshikar, Sachin Pawar, Sangameshwar Patil, Swapnil Hingmire, Nitin Ramrakhiyani, Harsimran Bedi, Pushpak Bhattacharyya, and Vasudeva Varma. 2019. Extraction of message sequence charts from narrative history text. In Proceedings of the First Workshop on Narrative Understanding, pages 28-36, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Data-theoretic approach for socio-technical risk analysis: Text mining licensee event reports of U.S. 
nuclear power plants", "authors": [ { "first": "Justin", "middle": [], "last": "Pence", "suffix": "" }, { "first": "Pegah", "middle": [], "last": "Farshadmanesh", "suffix": "" }, { "first": "Jinmo", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Cathy", "middle": [], "last": "Blake", "suffix": "" }, { "first": "Zahra", "middle": [], "last": "Mohaghegh", "suffix": "" } ], "year": 2020, "venue": "Safety Science", "volume": "124", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1016/j.ssci.2019.104574" ] }, "num": null, "urls": [], "raw_text": "Justin Pence, Pegah Farshadmanesh, Jinmo Kim, Cathy Blake, and Zahra Mohaghegh. 2020. Data-theoretic approach for socio-technical risk analysis: Text mining licensee event reports of U.S. nuclear power plants. Safety Science, 124:104574.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "GloVe: Global Vectors for Word Representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Natural language processing for aviation safety reports: From classification to interactive analysis", "authors": [ { "first": "Ludovic", "middle": [], "last": "Tanguy", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Tulechki", "suffix": "" }, { "first": "Assaf", "middle": [], "last": "Urieli", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Hermann", "suffix": "" }, { "first": "C\u00e9line", "middle": [], "last": "Raynal", "suffix": "" } ], "year": 2016, "venue": "Natural Language Processing and Text Analytics in Industry", "volume": "78", "issue": "", "pages": "80--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ludovic Tanguy, Nikola Tulechki, Assaf Urieli, Eric Hermann, and C\u00e9line Raynal. 2016. Natural language processing for aviation safety reports: From classification to interactive analysis. Computers in Industry, 78:80-95. Natural Language Processing and Text Analytics in Industry.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Adversarial training for weakly supervised event detection", "authors": [ { "first": "Xiaozhi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "998--1008", "other_ids": { "DOI": [ "10.18653/v1/N19-1105" ] }, "num": null, "urls": [], "raw_text": "Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, and Peng Li. 2019. Adversarial training for weakly supervised event detection. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 998-1008, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Transfer learning for sequence tagging with hierarchical recurrent networks", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.06345" ] }, "num": null, "urls": [], "raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. arXiv preprint arXiv:1703.06345.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Figure 1: BiLSTM-CRF network models", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Increase in Pre-training Data -Construction", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "text": "Sample Incident Report summary from Construction Domain", "type_str": "table", "content": "On February 1, 2014, at approximately 11:37 a.m., a 340 ft.-high guyed telecommunication tower suddenly collapsed during upgrading activities. Four employees were working on the tower removing its diagonals. In the process, no temporary supports were installed. As a result of the tower's collapse, two employees were killed and two others were badly injured.", "num": null, "html": null }, "TABREF1": { "text": "", "type_str": "table", "content": "
The pilot pulled the collective to control the descent.
The helicopter crashed in the field and sustained substantial damage.
", "num": null, "html": null }, "TABREF3": { "text": "Annotated Dataset Statistics", "type_str": "table", "content": "
", "num": null, "html": null }, "TABREF5": { "text": "", "type_str": "table", "content": "", "num": null, "html": null }, "TABREF6": { "text": "GloVe100 0.83 0.83 0.83 0.87 0.69 0.77 TL GloVe100 0.86 0.84 0.85 0.91 0.75 0.82", "type_str": "table", "content": "
AVIATIONCONSTRUCTION
PRF1PRF1
B10.67 0.83 0.74 0.63 0.80.7
B20.71 0.89 0.79 0.64 0.95 0.77
B3 B3 BERT0.84 0.79 0.82 0.84 0.63 0.72
TL BERT0.87 0.83 0.85 0.90.73 0.81
B3 RoBERTa 0.87 0.83 0.85 0.80.63 0.71
TL RoBERTa 0.86 0.85 0.86 0.85 0.79 0.82
ENS0.90 0.83 0.86 0.95 0.75 0.84
", "num": null, "html": null }, "TABREF8": { "text": "Example Tokens and Predictions", "type_str": "table", "content": "", "num": null, "html": null } } } }