{ "paper_id": "U15-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:10:00.257346Z" }, "title": "A comparison and analysis of models for event trigger detection", "authors": [ { "first": "Shang", "middle": [], "last": "Chun", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sydney NSW 2006", "location": { "country": "Australia" } }, "email": "" }, { "first": "Sam", "middle": [], "last": "Wei", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sydney NSW 2006", "location": { "country": "Australia" } }, "email": "" }, { "first": "Ben", "middle": [], "last": "Hachey", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sydney", "location": { "postCode": "2006", "region": "NSW", "country": "Australia" } }, "email": "ben.hachey@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Interpreting event mentions in text is central to many tasks from scientific research to intelligence gathering. We present an event trigger detection system and explore baseline configurations. Specifically, we test whether it is better to use a single multi-class classifier or separate binary classifiers for each label. The results suggest that binary SVM classifiers outperform multi-class maximum entropy by 6.4 points F-score. Brown cluster and WordNet features are complementary, with more improvement from WordNet features.", "pdf_parse": { "paper_id": "U15-1016", "_pdf_hash": "", "abstract": [ { "text": "Interpreting event mentions in text is central to many tasks from scientific research to intelligence gathering. We present an event trigger detection system and explore baseline configurations. Specifically, we test whether it is better to use a single multi-class classifier or separate binary classifiers for each label. The results suggest that binary SVM classifiers outperform multi-class maximum entropy by 6.4 points F-score. 
Brown cluster and WordNet features are complementary, with more improvement from WordNet features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Events are frequently discussed in text, e.g., criminal activities such as violent attacks reported in police reports, corporate activities such as mergers reported in business news, and biological processes such as protein interactions reported in scientific research. Interpreting these mentions is central to tasks like intelligence gathering and scientific research. Event extraction automatically identifies the triggers and arguments that constitute a textual mention of an event in the world. Consider:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Bob bought the book from Alice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here, a trigger, \"bought\" (Transaction.Transfer-Ownership), predicates an interaction between the arguments \"Bob\" (Recipient), \"the book\" (Thing) and \"Alice\" (Giver). We focus on the trigger detection task, which is the first step in event detection and integration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many event extraction systems use a pipelined approach, comprising a binary classifier to detect event triggers followed by a separate multi-class classifier to label the type of event (Ahn, 2006). Our work is different in that we use a single classification step with sub-sampling to handle data skew. Chen and Ji (2009) use a Maximum Entropy (ME) classifier in their work. However, their approach is similar to (Ahn, 2006) in that they identify the trigger first and then classify it at a later stage. Kolya et al. 
(2011) employ a hybrid approach, using a Support Vector Machine (SVM) classifier and heuristics for event extraction.", "cite_spans": [ { "start": 185, "end": 196, "text": "(Ahn, 2006)", "ref_id": "BIBREF0" }, { "start": 304, "end": 322, "text": "Chen and Ji (2009)", "ref_id": "BIBREF2" }, { "start": 412, "end": 423, "text": "(Ahn, 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present an event trigger detection system that formulates the problem as a token-level classification task. Features include lexical and syntactic information from the current token and surrounding context. Features also include additional word class information from Brown clusters, WordNet and Nomlex to help generalise from a fairly small training set. Experiments explore whether multi-class or binary classification is better, using both SVM and ME.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contributions include: (1) A comparison of binary and multi-class versions of SVM and ME on the trigger detection task. Experimental results suggest binary SVMs outperform the other approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) Analysis showing that Brown cluster, Nomlex and WordNet features contribute nearly 10 points F-score; WordNet+Nomlex features contribute more than Brown cluster features; and improvements from these sources of word class information increase recall substantially, sometimes at the cost of precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We investigate the event trigger detection task from the 2015 Text Analysis Conference (TAC) shared task (Mitamura and Hovy, 2015). The task defines 9 event types and 38 subtypes such as Life.Die, Conflict.Attack and Contact.Meet. 
An event trigger is the smallest extent of text (usually a word or short phrase) that predicates the occurrence of an event (LDC, 2015).", "cite_spans": [ { "start": 105, "end": 130, "text": "(Mitamura and Hovy, 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Event Trigger Detection Task", "sec_num": "2" }, { "text": "In the following example, the words in bold trigger Life.Die and Life.Injure events respectively:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Trigger Detection Task", "sec_num": "2" }, { "text": "The explosion killed 7 and injured 20.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Trigger Detection Task", "sec_num": "2" }, { "text": "Note that an event mention can contain multiple events. Further, a single event trigger can predicate multiple events. Consider:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Trigger Detection Task", "sec_num": "2" }, { "text": "The murder of John.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Trigger Detection Task", "sec_num": "2" }, { "text": "where \"murder\" is the trigger for both a Conflict.Attack event and a Life.Die event. Table 1 shows the distribution of the event subtypes in the training and development datasets.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 92, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Event Trigger Detection Task", "sec_num": "2" }, { "text": "We formulate event trigger detection as a token-level classification task. Features include lexical and semantic information from the current token and surrounding context. Classifiers include binary and multi-class versions of SVM and ME.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "As triggers can be phrases, we experimented with Inside Outside Begin 1 (IOB1) and Inside Outside Begin 2 (IOB2) encodings (Sang and Veenstra, 1999). 
Table 2 contains an example illustrating the two schemes. Preliminary results showed little impact on accuracy. However, one of the issues with this task is data sparsity: some event subtypes have few observations in the corpus. IOB2 encoding increases the total number of categories for the dataset, making the data sparsity issue worse. Therefore we use the IOB1 encoding for the rest of the experiments.", "cite_spans": [ { "start": 124, "end": 149, "text": "(Sang and Veenstra, 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 152, "end": 159, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Another challenge is that the data is highly unbalanced: most tokens are not event triggers. To address this, we train on sub-sampled sets of negative observations. Randomly sampling 10% of the negative examples for training works well here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "All models use the same rich feature set. The features are divided into three groups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "3.1" }, { "text": "Feature set 1 (FS1): Basic features, including the following. (1) Current token: lemma, POS, named entity type, and whether the word is capitalised. (2) Within a window of size two: unigrams/bigrams of lemma, POS, and named entity type. (3) Dependency: governor/dependent type, governor/dependent type + lemma, governor/dependent type + POS, and governor/dependent type + named entity type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "3.1" }, { "text": "Feature set 2 (FS2): Brown cluster features for the current token. Feature set 3 (FS3): (1) WordNet features, including hypernyms and synonyms of the current token. (2) Base form of the current token extracted from Nomlex (Macleod et al., 1998). Table 2: IOB1 and IOB2 encoding comparison. \"B\" represents the first token of an event trigger. 
\"I\" represents a subsequent token of a multi-word trigger. \"O\" represents no event.", "cite_spans": [ { "start": 162, "end": 183, "text": "(Macleod et al., 1998", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 184, "end": 191, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Features", "sec_num": "3.1" }, { "text": "We train multi-class ME and SVM classifiers to detect and label events. L-BFGS (Liu and Nocedal, 1989) is used as the solver for ME. The SVM uses a linear kernel. We also compare binary versions of ME and SVM, building one classifier per event subtype.", "cite_spans": [ { "start": 79, "end": 101, "text": "(Liu and Nocedal, 1989", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "3.2" }, { "text": "4 Experimental setup", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "3.2" }, { "text": "The TAC 2015 training dataset (LDC2015E73) is used for the experiments. The corpus has a total of 158 documents from two genres: 81 newswire documents and 77 discussion forum documents. Preprocessing includes tokenisation, sentence splitting, POS tagging, named entity recognition, constituency parsing and dependency parsing using Stanford CoreNLP 3.5.2. 3 The dataset is split into 80% for training (126 documents) and 20% for development (32 documents; listed in Appendix A).", "cite_spans": [ { "start": 355, "end": 356, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Accuracy is measured using the TAC 2015 scorer. 
4 Precision, recall and F-score are defined as:", "cite_spans": [ { "start": 48, "end": 49, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation metric", "sec_num": "4.2" }, { "text": "P = TP / N_S; R = TP / N_G; F1 = 2PR / (P + R)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation metric", "sec_num": "4.2" }, { "text": "where TP is the number of correct triggers (true positives), N_S is the total number of predicted system mentions, and N_G is the total number of annotated gold mentions. An event trigger is counted as correct only if the boundary, the event type and the event subtype are all correctly identified. We report micro-averaged results. Table 3: System performance comparison.", "cite_spans": [], "ref_spans": [ { "start": 334, "end": 341, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation metric", "sec_num": "4.2" }, { "text": "We perform a cumulative analysis to quantify the contribution of different feature sets. Table 4 shows that feature set 2 (Brown cluster) helped recall, sometimes at the cost of precision. Recall is further boosted by feature set 3 (WordNet and Nomlex). However, precision dropped noticeably for the SVM models. Figure 1 shows how the classifiers perform on each event subtype. Binary SVM generally has better recall and only slightly lower precision, so its overall performance improves.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 4", "ref_id": null }, { "start": 320, "end": 328, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Feature set", "sec_num": "5.1" }, { "text": "We sampled 20 precision and 20 recall errors from the binary SVM classifier. 40% of precision errors require better modelling of grammatical relations, e.g., labelling \"focus has moved\" as a transport event. 
35% require better use of POS information, e.g., labelling \"said crime\" as a contact event. 65% of recall errors are tokens in multi-word phrases, e.g., \"going to jail\". 45% are triggers that likely were not seen in training and require better generalisation strategies. Several precision and recall errors appear, on inspection, to be correct system output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error analysis", "sec_num": "5.3" }, { "text": "We presented an exploration of TAC event trigger detection and labelling, comparing classifiers and rich features. Results suggest that SVM outperforms maximum entropy and that binary SVM gives the best results. Brown cluster information increases recall for all models, but sometimes at the cost of precision. WordNet and Nomlex features provide a bigger boost, improving F-score by 6 points for the best classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "http://metaoptimize.com/projects/wordreprs/ 2 http://nlp.cs.nyu.edu/nomlex/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://nlp.stanford.edu/software/corenlp.shtml 4 http://hunterhector.github.io/EvmEval/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Sam Wei is funded by the Capital Markets Cooperative Research Centre. 
Ben Hachey is the recipient of an Australian Research Council Discovery Early Career Researcher Award (DE120102900).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Appendix A: Development set document IDs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The stages of event extraction", "authors": [ { "first": "David", "middle": [], "last": "Ahn", "suffix": "" } ], "year": 2006, "venue": "COLING-ACL Workshop on Annotating and Reasoning About Time and Events", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Ahn. 2006. The stages of event extraction. In COLING-ACL Workshop on Annotating and Reasoning About Time and Events, pages 1-8.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Class-based n-gram models of natural language", "authors": [ { "first": "Peter", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "Peter", "middle": [ "V" ], "last": "Desouza", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "Jenifer", "middle": [ "C" ], "last": "Lai", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. 
Computational Linguistics, 18(4):467-479.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Language specific issue and feature exploration in Chinese event extraction", "authors": [ { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2009, "venue": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "209--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng Chen and Heng Ji. 2009. Language specific issue and feature exploration in Chinese event extraction. In Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 209-212.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A hybrid approach for event extraction and event actor identification", "authors": [ { "first": "Anup", "middle": [ "Kumar" ], "last": "Kolya", "suffix": "" }, { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "" }, { "first": "Sivaji", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2011, "venue": "International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "592--597", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anup Kumar Kolya, Asif Ekbal, and Sivaji Bandyopadhyay. 2011. A hybrid approach for event extraction and event actor identification. In International Conference on Recent Advances in Natural Language Processing, pages 592-597.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Rich ERE Annotation Guidelines Overview. Linguistic Data Consortium. Version 4.1", "authors": [], "year": 2015, "venue": "LDC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "LDC, 2015. Rich ERE Annotation Guidelines Overview. Linguistic Data Consortium. Version 4.1. 
Accessed 14 November 2015 from http://cairo.lti.cs.cmu.edu/kbp/2015/event/summary_rich_ere_v4.1.pdf.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On the limited memory BFGS method for large scale optimization", "authors": [ { "first": "Dong", "middle": [ "C" ], "last": "Liu", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "Nocedal", "suffix": "" } ], "year": 1989, "venue": "Mathematical Programming", "volume": "45", "issue": "", "pages": "503--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45:503-528.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Nomlex: A lexicon of nominalizations", "authors": [ { "first": "Catherine", "middle": [], "last": "Macleod", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "Leslie", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Ruth", "middle": [], "last": "Reeves", "suffix": "" } ], "year": 1998, "venue": "Euralex International Congress", "volume": "", "issue": "", "pages": "187--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Catherine Macleod, Ralph Grishman, Adam Meyers, Leslie Barrett, and Ruth Reeves. 1998. Nomlex: A lexicon of nominalizations. In Euralex International Congress, pages 187-193.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "TAC KBP Event Detection and Coreference Tasks for English. Version 1.0. Accessed", "authors": [ { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teruko Mitamura and Eduard Hovy, 2015. 
TAC KBP Event Detection and Coreference Tasks for English. Version 1.0. Accessed 14 November 2015 from http://cairo.lti.cs.cmu.edu/kbp/2015/event/Event_Mention_Detection_and_Coreference-2015-v1.1.pdf.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Representing text chunks", "authors": [ { "first": "Erik", "middle": [ "F" ], "last": "Tjong Kim Sang", "suffix": "" }, { "first": "Jorn", "middle": [], "last": "Veenstra", "suffix": "" } ], "year": 1999, "venue": "Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "173--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F. Tjong Kim Sang and Jorn Veenstra. 1999. Representing text chunks. In Conference of the European Chapter of the Association for Computational Linguistics, pages 173-179.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Word representations: A simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev-Arie", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Annual Meeting of the Association for Computational Linguistics, pages 384-394.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Performance by subtype.", "type_str": "figure" }, "TABREF2": { "type_str": "table", "html": null, "num": null, "text": "", "content": "
shows the results for each classifier. The
binary SVMs outperform all other models, with an
F-score of 55.7. The score for multi-class SVM is
2.5 points lower at 53.2. Multi-class and binary
ME come next, with binary ME performing worst.
SystemPRF1
Multi-class ME 62.2 40.8 49.2
Multi-class SVM 55.6 50.9 53.2
Binary ME77.8 28.2 41.4
Binary SVM64.7 48.9 55.7
" } } } }