{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:22:15.174134Z" }, "title": "Building an Event Extractor with Only a Few Examples", "authors": [ { "first": "Pengfei", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois Urbana-Champaign", "location": {} }, "email": "pengfei4@illinois.edu" }, { "first": "Zixuan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois Urbana-Champaign", "location": {} }, "email": "zixuan11@illinois.edu" }, { "first": "Clare", "middle": [], "last": "Voss", "suffix": "", "affiliation": { "laboratory": "U.S. Army Combat Capabilities Development Command Army Research Laboratory", "institution": "", "location": {} }, "email": "clare.r.voss.civ@army.mil" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California", "location": {} }, "email": "jonmay@isi.edu" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois Urbana-Champaign", "location": {} }, "email": "hengji@illinois.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Supervised event extraction models require a substantial amount of training data to perform well. However, event annotation demands considerable human effort and time, which limits the application of existing supervised approaches to new event types. In order to reduce manual labor and shorten the time needed to build an event extraction system for an arbitrary event ontology, we present a new framework that trains such systems far more efficiently, without large-scale annotation. Our event trigger labeling model uses a weak supervision approach, which only requires a set of keywords, a small number of examples, and an unlabeled corpus, on which our approach automatically collects weakly supervised annotations. Our argument role labeling component performs zero-shot learning, which only requires the names of the argument roles of new event types. The source code of our event trigger detection 1 and event argument extraction 2 models is publicly available for research purposes. We also release a dockerized system connecting the two models into a unified event extraction pipeline 3 .", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Supervised event extraction models require a substantial amount of training data to perform well. However, event annotation demands considerable human effort and time, which limits the application of existing supervised approaches to new event types. In order to reduce manual labor and shorten the time needed to build an event extraction system for an arbitrary event ontology, we present a new framework that trains such systems far more efficiently, without large-scale annotation. Our event trigger labeling model uses a weak supervision approach, which only requires a set of keywords, a small number of examples, and an unlabeled corpus, on which our approach automatically collects weakly supervised annotations. Our argument role labeling component performs zero-shot learning, which only requires the names of the argument roles of new event types. 
The source code of our event trigger detection 1 and event argument extraction 2 models is publicly available for research purposes. We also release a dockerized system connecting the two models into a unified event extraction pipeline 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Supervised event extraction models require sufficient training data to achieve good performance. However, event annotation is a challenging task that costs substantial time and manual effort, due to the sparsity of event mentions in natural language and the potentially large number of emergent event types that human annotators need to keep in mind during annotation. Therefore, annotation becomes a bottleneck that slows down the development of supervised event extraction systems whenever a new scenario of interest emerges with new event types in need of new data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To meet the need for fast development of event extraction systems for emergent new event types, we present a novel framework that can train event extraction systems with very few resources. Our proposed framework includes a weakly supervised approach to train an event trigger labeling model and a zero-shot model for argument role labeling. Our proposed weakly supervised event trigger labeling model only requires a few keywords and a small number of example event mentions. In our experiments on the ACE 2005 English dataset, 4 we use 4.9 keywords and 7.3 example mentions per event type on average, all extracted from the ACE annotation guidelines. We also propose a zero-shot argument role labeling model that only requires the argument role names of new event types to perform the task. Since such information is typically included in the target ontology and annotation guidelines, we believe this required input costs far less than full human annotation. Our framework can be applied to any new event type. Our trigger labeling component outperforms existing few-shot and zero-shot methods (Huang et al., 2018; Li et al., 2021; Feng et al., 2020) on the ACE 2005 English dataset.", "cite_spans": [ { "start": 537, "end": 538, "text": "4", "ref_id": null }, { "start": 1113, "end": 1133, "text": "(Huang et al., 2018;", "ref_id": "BIBREF10" }, { "start": 1134, "end": 1150, "text": "Li et al., 2021;", "ref_id": "BIBREF12" }, { "start": 1151, "end": 1169, "text": "Feng et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our framework includes two components: a trigger labeling model trained from a few keywords and example mentions for each new event type plus an unlabeled corpus; and a zero-shot argument role labeling model that only needs the corresponding argument role names for extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "As shown in Figure 1 , our framework requires a list of keywords {k_1, ..., k_M} for each target event type and a set of example event mentions as input. Our goal is to annotate an unlabeled corpus C = {s_1, s_2, ..., s_N}, which is a collection of sentences s_i, and train a model on the weakly supervised annotations. 
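For concreteness, the per-event-type input can be as small as the following sketch (a hypothetical Python structure; the keyword list and the example sentence are taken from Figure 1 for illustration and are not the exact guideline lists):

# Hypothetical input format; the field names are illustrative, not our exact data files.
event_type_inputs = {
    'Conflict:Attack': {
        # a handful of trigger keywords from the annotation guidelines
        'keywords': ['attack', 'invade', 'battle', 'war', 'violence', 'terrorism'],
        # a few example mentions with the token span of each trigger
        'examples': [
            {'sentence': 'U.S. forces continued to bomb Fallujah.',
             'trigger_token_span': (4, 5)},  # the token 'bomb'
        ],
    },
}
# The unlabeled corpus C = {s_1, ..., s_N} is simply a list of sentences.
unlabeled_corpus = ['A car bomb exploded Thursday ...', '...']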
The corpus for weak supervision is disjoint from the evaluation corpus.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Event Trigger Labeling", "sec_num": "2.1" }, { "text": "Keyword Representation For each keyword k_i, we first find all its occurrences (including morphological inflections) in the corpus and summarize the semantics of each keyword into a distributed representation by aggregating the hidden representations of its occurrences using a large-scale language model M, inspired by Meng et al. (2020) . M functions as a sentence encoder that transforms the tokens in a sentence into hidden representations. A keyword occurrence consists of a sentence s_j ∈ C and a token offset (b_ij, e_ij) indicating the starting and ending positions of k_i. We average the token hidden representations from the language model M within the token span as the representation of the j-th occurrence, and use the mean vector over all occurrences as the keyword representation k_i. This process is shown in the top right corner of Figure 1 .", "cite_spans": [ { "start": 325, "end": 343, "text": "Meng et al. (2020)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 848, "end": 856, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Event Trigger Labeling", "sec_num": "2.1" }, { "text": "Since some keywords have similar meanings, we propose an additional clustering step that groups similar keywords together to find mentions of novel trigger words not in the keyword list. We show an example in Figure 1 for the Attack event. We apply spherical KMeans (Lloyd, 1982) to acquire a set of cluster centers {c_1, c_2, ..., c_m} for each event type. Letting t denote the representation of a token in an unlabeled sentence according to M, we compute the score S(t) of the token being an event trigger as the cosine similarity with the closest cluster representation over all the event type's clusters:", "cite_spans": [ { "start": 263, "end": 276, "text": "(Lloyd, 1982)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 206, "end": 214, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Keyword Clustering and Annotation", "sec_num": null }, { "text": "S(t) = max_i cos_sim(c_i, t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyword Clustering and Annotation", "sec_num": null }, { "text": "We accept a token as an event trigger of this type if the score S(t) exceeds a threshold value. We select the threshold for which this annotation procedure achieves the best trigger labeling F1 score on the example sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyword Clustering and Annotation", "sec_num": null }, { "text": "Training with Example-based Denoising At each minibatch training step, let B_w be a batch sampled from the weakly supervised data. We further sample a batch B_e from the example mentions (from the human annotation guidelines). We compute the information consistency between B_w and B_e as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyword Clustering and Annotation", "sec_num": null }, { "text": "d = I(∇_θ L_{B_e} · ∇_θ L_{B_w} > 0),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyword Clustering and Annotation", "sec_num": null }, { "text": "where I is the indicator function, L_{B_e} and L_{B_w} are the losses on the example batch and the weakly supervised batch respectively, and θ is the set of model parameters. 
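As an illustration, a minimal PyTorch-style sketch of this consistency check (model and loss_fn are assumed to be defined elsewhere; this is a sketch of the idea, not our exact implementation):

import torch

def consistency_indicator(model, loss_fn, example_batch, weak_batch):
    # d = 1 if the weakly supervised gradient points in the same direction as the
    # example gradient (positive inner product over all parameters), else d = 0.
    params = [p for p in model.parameters() if p.requires_grad]
    grad_example = torch.autograd.grad(loss_fn(model, example_batch), params)
    grad_weak = torch.autograd.grad(loss_fn(model, weak_batch), params)
    inner = sum((ge * gw).sum() for ge, gw in zip(grad_example, grad_weak))
    return 1.0 if inner.item() > 0 else 0.0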
If d = 0, the weakly supervised gradient has deviated far from the example gradient, in which case we discard the weakly supervised batch from the loss computation. The overall loss is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyword Clustering and Annotation", "sec_num": null }, { "text": "L_B = λ d L_{B_w} + (1 - λ d) L_{B_e},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyword Clustering and Annotation", "sec_num": null }, { "text": "where λ is a hyperparameter that interpolates joint training on example data and weakly supervised data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyword Clustering and Annotation", "sec_num": null }, { "text": "Our zero-shot event argument extraction model only requires the event argument role names (usually single words or phrases) for each event type (e.g., the event argument role names Giver, Beneficiary, Recipient and Place for event type Transaction: Transfer-Money). Note that our model does not require any detailed information such as natural language descriptions, example annotations or external resources (Zhang et al., 2021) . Our model is trained on existing event argument roles with annotations, and uses zero-shot learning to generalize to new argument roles.", "cite_spans": [ { "start": 409, "end": 429, "text": "(Zhang et al., 2021)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Event Argument Role Labeling", "sec_num": "2.2" }, { "text": "Zero-shot Training and Classification Inspired by typical zero-shot learning tasks such as zero-shot image classification (Xian et al., 2018; Liu et al., 2018b) , we take a similar approach to build a shared embedding space for both role label semantics and the contextual text features between triggers and arguments.", "cite_spans": [ { "start": 129, "end": 148, "text": "(Xian et al., 2018;", "ref_id": "BIBREF28" }, { "start": 149, "end": 167, "text": "Liu et al., 2018b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Event Argument Role Labeling", "sec_num": "2.2" }, { "text": "Given an input sentence, we first perform named entity recognition (NER) with spaCy 5 to extract all entity mentions in the sentence. After that, given the event role names {r_1, r_2, ..., r_R} for a certain event type, we obtain their semantic embeddings {r_1, r_2, ..., r_R} using the pretrained language model BERT (Devlin et al., 2019) . We also use BERT to get the representation vectors for all extracted event triggers t_i and entity mentions e_i within the sentence, and concatenate the vectors as [t_i, e_i] to represent a trigger-entity pair. The intuition is to learn two separate neural network projection functions that map each role label and each trigger-entity pair into a single shared embedding space, where each trigger-entity pair stays near its correct role label and far from all other event argument roles. During training, we minimize the cosine distance between each [t_i, e_i] and its role label r_i, while maximizing the distance between [t_i, e_i] and all other role labels. 
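A minimal PyTorch-style sketch of this training objective and the nearest-label decision rule (the margin value and the encoders producing the vectors are assumptions; the loss is formalized next):

import torch
import torch.nn.functional as F

def role_hinge_loss(pair_vec, role_embs, gold, margin=0.5):
    # pair_vec: (D,) vector of one trigger-entity pair x_i = [t_i, e_i];
    # role_embs: (R, D) embeddings of the argument role names; gold: index of r_i.
    sims = F.cosine_similarity(pair_vec.unsqueeze(0), role_embs)  # (R,) cosine scores
    others = torch.ones_like(sims, dtype=torch.bool)
    others[gold] = False  # the hinge sums over the non-gold roles only
    return torch.clamp(margin - sims[gold] + sims[others], min=0.0).sum()

def predict_role(pair_vec, role_embs):
    # At test time, a pair is classified as its nearest role label.
    return int(F.cosine_similarity(pair_vec.unsqueeze(0), role_embs).argmax())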
Specifically, if we use R to represent the set of all argument role embeddings and use x_i = [t_i, e_i] to represent trigger-entity pairs, the training objective is to minimize the hinge loss", "cite_spans": [ { "start": 332, "end": 353, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Event Argument Role Labeling", "sec_num": "2.2" }, { "text": "L_i = Σ_{j≠i, r_j∈R} max(0, m - C(x_i, r_i) + C(x_i, r_j)),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Argument Role Labeling", "sec_num": "2.2" }, { "text": "where C(x, r) denotes the cosine similarity and m is a margin hyperparameter. In this way, the trigger-entity pair representations tend to be centered around their argument role labels. During testing, we directly classify each trigger-entity pair as its nearest role label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Argument Role Labeling", "sec_num": "2.2" }, { "text": "(Figure 1 caption: The weakly supervised event trigger labeling framework.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Argument Role Labeling", "sec_num": "2.2" }, { "text": "We evaluate our models on the English portion of the ACE 2005 dataset. It contains 33 event types with 22 event argument role types. We use the training split as the weak supervision corpus, while in zero-shot event argument role labeling, we follow previous work (Huang et al., 2018; Zhang et al., 2021) and use the 10 most frequent event types as training types and the other event types, along with their role types, for testing. ", "cite_spans": [ { "start": 266, "end": 286, "text": "(Huang et al., 2018;", "ref_id": "BIBREF10" }, { "start": 287, "end": 306, "text": "Zhang et al., 2021)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3.1" }, { "text": "Event Detection. We evaluate event detection performance on two tasks. The first is traditional trigger labeling: the model detects trigger spans in sentences and predicts an event type for each span. The second is sentence-level event detection (Feng et al., 2020) , where the model predicts whether a sentence contains a mention of each event type. We evaluate both tasks with the F1 score. To further evaluate the impact of weak supervision, we compare with the Example baseline, which uses the same architecture but is trained only on the example mentions from the human annotation guidelines. We also show ablation results for the keyword clustering step and the example-based denoising step. We also compare with other efficient zero-shot and few-shot methods for each task, as specified below. We provide more implementation details in the Appendix.", "cite_spans": [ { "start": 257, "end": 276, "text": "(Feng et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "We show the performance of our framework on trigger labeling in Table 2 . We compare with the reported performance of two zero-shot methods: ZSL (Huang et al., 2018) and TapKey (Li et al., 2021) . Our framework has the best performance among all the methods. We also show in Table 3 some inconsistent weakly supervised annotations (d = 0 in the denoising step in Section 2.1) rejected by the denoising component, demonstrating its effectiveness. 
To further understand the effect of weak supervision, we compare the weakly supervised results with supervised models trained on varying percentages of the training data in Figure 2 . (Table 2 caption: Trigger labeling performance (in %). Huang et al. (2018) evaluated on a 23-event-type subset of the complete ACE event ontology; we compute our model's performance on these types for comparison. Slots with \"-\" are unreported results.) For sentence-level detection, we compare with the best few-shot (9-shot) results (Feng et al., 2020) in Table 4 . Our weakly supervised approach outperforms this few-shot baseline. (Table 5 caption: Event argument role labeling performance on the ACE dataset. We report both overall scores and top-3 scores on specific event argument roles.)", "cite_spans": [ { "start": 148, "end": 168, "text": "(Huang et al., 2018)", "ref_id": "BIBREF10" }, { "start": 180, "end": 197, "text": "(Li et al., 2021)", "ref_id": "BIBREF12" }, { "start": 682, "end": 701, "text": "Huang et al. (2018)", "ref_id": "BIBREF10" }, { "start": 978, "end": 997, "text": "(Feng et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 2", "ref_id": null }, { "start": 400, "end": 407, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 635, "end": 642, "text": "Table 2", "ref_id": null }, { "start": 886, "end": 894, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 1001, "end": 1008, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 1072, "end": 1079, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "frequent event types in the ACE dataset for training and the other 23 types for testing. We report the precision, recall, and F1 scores on the test split of the ACE dataset, as shown in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 184, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "Supervised Event Detection Event detection under supervised settings has been widely studied (Ji and Grishman, 2008; Chen et al., 2015; Feng et al., 2016; Liu et al., 2018a; Liu et al., 2019a; Lu et al., 2019; Ding et al., 2019; Yan et al., 2019; Tong et al., 2020; Du and Cardie, 2020; Li et al., 2021) . Other methods on joint information extraction (Li et al., 2013; Wadden et al., 2019; Lin et al., 2020) also include event detection as a subtask. 
However, supervised methods heavily rely on human annotations to perform well.", "cite_spans": [ { "start": 93, "end": 116, "text": "(Ji and Grishman, 2008;", "ref_id": "BIBREF11" }, { "start": 117, "end": 135, "text": "Chen et al., 2015;", "ref_id": "BIBREF1" }, { "start": 136, "end": 154, "text": "Feng et al., 2016;", "ref_id": "BIBREF6" }, { "start": 155, "end": 174, "text": "Liu et al., 2018a;", "ref_id": "BIBREF15" }, { "start": 175, "end": 194, "text": "Liu et al., 2019a;", "ref_id": "BIBREF14" }, { "start": 195, "end": 211, "text": "Lu et al., 2019;", "ref_id": "BIBREF21" }, { "start": 212, "end": 230, "text": "Ding et al., 2019;", "ref_id": "BIBREF3" }, { "start": 231, "end": 248, "text": "Yan et al., 2019;", "ref_id": "BIBREF29" }, { "start": 249, "end": 267, "text": "Tong et al., 2020;", "ref_id": "BIBREF24" }, { "start": 268, "end": 288, "text": "Du and Cardie, 2020;", "ref_id": "BIBREF4" }, { "start": 289, "end": 305, "text": "Li et al., 2021)", "ref_id": "BIBREF12" }, { "start": 354, "end": 371, "text": "(Li et al., 2013;", "ref_id": null }, { "start": 372, "end": 392, "text": "Wadden et al., 2019;", "ref_id": "BIBREF25" }, { "start": 393, "end": 410, "text": "Lin et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Weakly Supervised Event Extraction Some previous weakly supervised event extraction methods aim at augmenting data for existing event types. Ferguson et al. (2018) propose a semi-supervised method that requires a strong supervised event extractor for data collection. Chen et al. (2017) propose a distant-supervision-based framework using Freebase Compound Value Types (CVTs). Wang et al. (2019) follow Chen et al. (2015) and introduce a novel adversarial training method to denoise the noisy training data for event extraction.", "cite_spans": [ { "start": 141, "end": 163, "text": "Ferguson et al. (2018)", "ref_id": "BIBREF7" }, { "start": 268, "end": 286, "text": "Chen et al. (2017)", "ref_id": "BIBREF0" }, { "start": 359, "end": 377, "text": "Wang et al. (2019)", "ref_id": "BIBREF26" }, { "start": 385, "end": 403, "text": "Chen et al. (2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Zero-shot Event Argument Extraction In zero-shot learning (Zhang and Saligrama, 2015; Romera-Paredes and Torr, 2015; Zhang et al., 2017) , the model is required to make predictions on types that are not observed during training. Such a problem setting has also been widely explored in computer vision, especially for zero-shot image classification (Gu et al., 2021; Hanouti and Borgne, 2022) . In terms of zero-shot event extraction, Huang et al. (2018) propose a semantic similarity based learning method, and more recently, Zhang et al. (2021) further use resources from external corpora as weakly supervised example annotations.", "cite_spans": [ { "start": 57, "end": 84, "text": "(Zhang and Saligrama, 2015;", "ref_id": "BIBREF32" }, { "start": 85, "end": 115, "text": "Romera-Paredes and Torr, 2015;", "ref_id": "BIBREF23" }, { "start": 116, "end": 135, "text": "Zhang et al., 2017)", "ref_id": "BIBREF31" }, { "start": 327, "end": 344, "text": "(Gu et al., 2021;", "ref_id": "BIBREF8" }, { "start": 345, "end": 370, "text": "Hanouti and Borgne, 2022)", "ref_id": "BIBREF9" }, { "start": 413, "end": 432, "text": "Huang et al. (2018)", "ref_id": "BIBREF10" }, { "start": 505, "end": 524, "text": "Zhang et al. 
(2021)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In this work, we present an efficient event extraction framework that can be trained with only a few keywords and example event mentions per new event type. We use weak supervision for trigger labeling and apply a zero-shot framework for argument role labeling. Our framework can collect training data and build models for emergent new event types in a significantly shortened time, without needing to acquire large-scale human annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "This research is based upon work supported in part by U.S. DARPA LORELEI Program No. HR0011-15-C-0115, U.S. DARPA AIDA Program No. FA8750-18-2-0014 and KAIROS Program No. FA8750-19-2-1004 . The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", "cite_spans": [ { "start": 108, "end": 187, "text": "DARPA AIDA Program No. FA8750-18-2-0014 and KAIROS Program No. FA8750-19-2-1004", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null }, { "text": "* These authors contributed equally to this work. 1 https://github.com/Perfec-Yu/efficient-event-extraction 2 https://github.com/zhangzx-uiuc/zero-shot-event-arguments 3 https://hub.docker.com/repository/docker/zixuan11/event-extractor", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://catalog.ldc.upenn.edu/LDC2006T06", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://spacy.io/api/entityrecognizer", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73-82, Sofia, Bulgaria. Association for Computational Linguistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Compared with traditional KMeans (Lloyd, 1982) , spherical KMeans makes two modifications. First, the cluster assignment at each iteration step is decided according to the cosine similarities to the cluster centers instead of the Euclidean distance. Second, after computing the cluster centers as the mean vectors of the keyword representations assigned to the corresponding clusters, we add an additional normalization step so that all cluster centers have unit norm. We use the implementation in https://github.com/jasonlaska/spherecluster for our experiments.", "cite_spans": [ { "start": 33, "end": 46, "text": "(Lloyd, 1982)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "A.1 Spherical KMeans for Keyword Clustering", "sec_num": null }, { "text": "We adopt a sequence labeling model for trigger labeling. Since we observe very few consecutive trigger spans, we use a simplified 'IO' tagging method instead of 'BIO' tagging. 
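For instance, under this scheme (hypothetical tokenization; the tags are defined precisely in the next sentence), the example sentence from Figure 1 would be tagged as:

# A hypothetical IO tagging of one example sentence, with one 'I-<type>' tag per event type.
tokens  = ['U.S.', 'forces', 'continued', 'to', 'bomb',     'Fallujah', '.']
io_tags = ['O',    'O',      'O',         'O',  'I-Attack', 'O',        'O']
# BIO tagging would additionally distinguish a 'B-Attack' tag for span-initial
# tokens; since consecutive trigger spans are rare, IO is sufficient here.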
Specifically, we assign each token in a sentence the label 'I-' of the corresponding event type if it lies within a trigger span, and 'O' otherwise. For the model architecture, we use RoBERTa-Large (Liu et al., 2019b) to encode each token in the sentence into a hidden representation. Then we adopt an additional linear layer to classify each token into one of the tags. We use a training batch size of 8 sentences and truncate sentences to at most 96 tokens. For optimization, we use the AdamW (Loshchilov and Hutter, 2019) optimizer with an initial learning rate of 10^-5 and a linear warmup schedule with 1200 warmup steps. We run experiments with 4 random seeds and report the average score.", "cite_spans": [ { "start": 359, "end": 378, "text": "(Liu et al., 2019b)", "ref_id": null }, { "start": 659, "end": 688, "text": "(Loshchilov and Hutter, 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "A.2 Implementation Details for Trigger Labeling", "sec_num": null }, { "text": "We use a RoBERTa-Large model finetuned on the MultiNLI (Williams et al., 2018) dataset for textual entailment. The input to the model consists of a candidate sentence and an event-type-specific entailment sentence, such as \"Agent attacked Target\" for the Attack event. The complete list of entailment sentences can be found in the supplementary materials. The model outputs scores for the three labels: s_e for entailment, s_n for neutral, and s_c for contradiction. We compute the probability of mentioning an event as P(Mention) = e^{s_e} / (e^{s_e} + e^{s_n} + e^{s_c}). We use the cross-entropy loss to train the model. For evaluation, we consider a candidate sentence to mention an event if the probability of entailment is greater than 0.5. We use the same training hyper-parameters as in trigger labeling. We run experiments with 4 random seeds and report the average score.", "cite_spans": [ { "start": 51, "end": 74, "text": "(Williams et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "A.3 Implementation Details for Sentence-level Event Detection", "sec_num": null }, { "text": "For the weak annotation, the threshold is chosen from 0.4 to 1.0 in increments of 0.05. We choose the threshold 0.65, which achieves the best F1 score on the example mentions. Since we use the ACE 2005 English training corpus for weak supervision, we can also compute the F1 score of the weakly supervised annotation directly; it is 0.46. For the example-based denoising, we choose the weight parameter λ = 0.7 for trigger labeling and λ = 0.5 for sentence-level event detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.4 Implementation Details for Weak Supervision", "sec_num": null }, { "text": "We show the keywords for each event type in Table 6 . We include the example mentions in the supplementary materials. We have a total of 173 sentences and 241 event mentions in the example data. Table 6 caption: Keywords used for each event type. Although we performed lemmatization for matching, there are some situations that lemmatization cannot handle perfectly. 
Therefore, we also include various tenses for some verbs.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 47, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "B Keywords and Example Mentions", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automatically labeled data generation for large scale event extraction", "authors": [ { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shulin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "409--419", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data generation for large scale event extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 409-419, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Event extraction via dynamic multi-pooling convolutional neural networks", "authors": [ { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Liheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "167--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167-176, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Event detection with trigger-aware lattice neural network", "authors": [ { "first": "Ning", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Ziran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Haitao", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Zibo", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "347--356", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ning Ding, Ziran Li, Zhiyuan Liu, Haitao Zheng, and Zibo Lin. 2019. Event detection with trigger-aware lattice neural network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 347-356, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Event extraction by answering (almost) natural questions", "authors": [ { "first": "Xinya", "middle": [], "last": "Du", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "671--683", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinya Du and Claire Cardie. 2020. Event extraction by answering (almost) natural questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 671-683, Online. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Probing and fine-tuning reading comprehension models for few-shot event extraction", "authors": [ { "first": "Rui", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.11325" ] }, "num": null, "urls": [], "raw_text": "Rui Feng, Jie Yuan, and Chao Zhang. 2020. Probing and fine-tuning reading comprehension models for few-shot event extraction. arXiv preprint arXiv:2010.11325.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A language-independent neural network for event detection", "authors": [ { "first": "Xiaocheng", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "66--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaocheng Feng, Lifu Huang, Duyu Tang, Heng Ji, Bing Qin, and Ting Liu. 2016. A language-independent neural network for event detection. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 66-71, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semi-supervised event extraction with paraphrase clusters", "authors": [ { "first": "James", "middle": [], "last": "Ferguson", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Lockard", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "359--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Ferguson, Colin Lockard, Daniel Weld, and Hannaneh Hajishirzi. 2018. Semi-supervised event extraction with paraphrase clusters. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 359-364, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Zero-shot detection via vision and language knowledge distillation", "authors": [ { "first": "Xiuye", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Tsung-Yi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Weicheng", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "Yin", "middle": [], "last": "Cui", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.13921" ] }, "num": null, "urls": [], "raw_text": "Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. 2021. Zero-shot detection via vision and language knowledge distillation. arXiv preprint arXiv:2104.13921.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning semantic ambiguities for zero-shot learning", "authors": [ { "first": "Celina", "middle": [], "last": "Hanouti", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "Le Borgne", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2201.01823" ] }, "num": null, "urls": [], "raw_text": "Celina Hanouti and Herv\u00e9 Le Borgne. 2022. Learning semantic ambiguities for zero-shot learning. arXiv preprint arXiv:2201.01823.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Zero-shot transfer learning for event extraction", "authors": [ { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Clare", "middle": [], "last": "Voss", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2160--2170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifu Huang, Heng Ji, Kyunghyun Cho, Ido Dagan, Sebastian Riedel, and Clare Voss. 2018. Zero-shot transfer learning for event extraction. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2160-2170, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Refining event extraction through cross-document inference", "authors": [ { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2008, "venue": "Columbus, Ohio. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "254--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL-08: HLT, pages 254-262, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Document-level event argument extraction by conditional generation", "authors": [ { "first": "Sha", "middle": [], "last": "Li", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sha Li, Heng Ji, and Jiawei Han. 2021. Document-level event argument extraction by conditional generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A joint neural model for information extraction with global features", "authors": [ { "first": "Ying", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lingfei", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7999--8009", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999-8009, Online. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural cross-lingual event detection with minimal parallel resources", "authors": [ { "first": "Jian", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "738--748", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2019a. Neural cross-lingual event detection with minimal parallel resources. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 738-748, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Exploiting contextual information via dynamic memory network for event detection", "authors": [ { "first": "Shaobo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Xiaoming", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1030--1035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaobo Liu, Rui Cheng, Xiaoming Yu, and Xueqi Cheng. 2018a. Exploiting contextual information via dynamic memory network for event detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1030-1035, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Generalized zero-shot learning with deep calibration network", "authors": [ { "first": "Shichen", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mingsheng", "middle": [], "last": "Long", "suffix": "" }, { "first": "Jianmin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2005--2015", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shichen Liu, Mingsheng Long, Jianmin Wang, and Michael I Jordan. 2018b. Generalized zero-shot learning with deep calibration network. In Advances in Neural Information Processing Systems, pages 2005-2015.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Exploiting argument information to improve event detection via supervised attention mechanisms", "authors": [ { "first": "Shulin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1789--1798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1789-1798, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Least squares quantization in PCM", "authors": [ { "first": "Stuart", "middle": [ "P" ], "last": "Lloyd", "suffix": "" } ], "year": 1982, "venue": "IEEE Trans. Inf. Theory", "volume": "28", "issue": "2", "pages": "129--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart P. Lloyd. 1982. Least squares quantization in PCM. IEEE Trans. Inf. 
Theory, 28(2):129-136.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Decoupled weight decay regularization", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations, ICLR 2019", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Distilling discrimination and generalization knowledge for event detection via delta-representation learning", "authors": [ { "first": "Yaojie", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hongyu", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xianpei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4366--4376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaojie Lu, Hongyu Lin, Xianpei Han, and Le Sun. 2019. Distilling discrimination and generalization knowledge for event detection via delta-representation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4366-4376, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Text classification using label names only: A language model self-training approach", "authors": [ { "first": "Yu", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Yunyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jiaxin", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Chenyan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "9006--9017", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, and Jiawei Han. 2020. Text classification using label names only: A language model self-training approach. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9006-9017, Online. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An embarrassingly simple approach to zero-shot learning", "authors": [ { "first": "Bernardino", "middle": [], "last": "Romera-Paredes", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Torr", "suffix": "" } ], "year": 2015, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "2152--2161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernardino Romera-Paredes and Philip Torr. 2015. An embarrassingly simple approach to zero-shot learning. 
In International Conference on Machine Learning, pages 2152-2161. PMLR.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Improving event detection via open-domain trigger knowledge", "authors": [ { "first": "Meihan", "middle": [], "last": "Tong", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Shuai", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5887--5897", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meihan Tong, Bin Xu, Shuai Wang, Yixin Cao, Lei Hou, Juanzi Li, and Jun Xie. 2020. Improving event detection via open-domain trigger knowledge. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5887-5897, Online. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Entity, relation, and event extraction with contextualized span representations", "authors": [ { "first": "David", "middle": [], "last": "Wadden", "suffix": "" }, { "first": "Ulme", "middle": [], "last": "Wennberg", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5784--5789", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Adversarial training for weakly supervised event detection", "authors": [ { "first": "Xiaozhi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "998--1008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, and Peng Li. 2019. Adversarial training for weakly supervised event detection. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 998-1008, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Feature generating networks for zero-shot learning", "authors": [ { "first": "Yongqin", "middle": [], "last": "Xian", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Lorenz", "suffix": "" }, { "first": "Bernt", "middle": [], "last": "Schiele", "suffix": "" }, { "first": "Zeynep", "middle": [], "last": "Akata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "5542--5551", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. 2018. Feature generating networks for zero-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5542-5551.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Event detection with multi-order graph convolution and aggregated attention", "authors": [ { "first": "Haoran", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Xiangbin", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5766--5770", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoran Yan, Xiaolong Jin, Xiangbin Meng, Jiafeng Guo, and Xueqi Cheng. 2019. Event detection with multi-order graph convolution and aggregated attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5766-5770, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Zero-shot Label-aware Event Trigger and Argument Classification", "authors": [ { "first": "Hongming", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Haoyu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "1331--1340", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongming Zhang, Haoyu Wang, and Dan Roth. 2021. Zero-shot Label-aware Event Trigger and Argument Classification. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1331-1340, Online. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning a deep embedding model for zero-shot learning", "authors": [ { "first": "Li", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Shaogang", "middle": [], "last": "Gong", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "2021--2030", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Zhang, Tao Xiang, and Shaogang Gong. 2017. Learning a deep embedding model for zero-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2021-2030.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Zero-shot learning via semantic similarity embedding", "authors": [ { "first": "Ziming", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Venkatesh", "middle": [], "last": "Saligrama", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "4166--4174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziming Zhang and Venkatesh Saligrama. 2015. Zero-shot learning via semantic similarity embedding. In Proceedings of the IEEE international conference on computer vision, pages 4166-4174.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Supervised performance with respect to training data portion. Dotted lines indicate the performance of the weakly supervised methods.", "num": null, "uris": null }, "TABREF0": { "content": "
[Figure 1 graphic omitted; panels: Keyword Occurrences in Unlabeled Corpus; Keyword Representations (keywords such as attack, invade, battle, war, violence, and terrorism encoded by a pretrained LM); Keyword Clustering; Corpus Annotation; Example Mentions (e.g., 'A car bomb exploded Thursday...', 'U.S. forces continued to bomb Fallujah.'); Example-based Denoising; Loss for Model Training.]
", "type_str": "table", "html": null, "num": null, "text": "one determined to undermine peace talks by supporting militant groups that attack Israelis.The test designed to measure the responsiveness of emergency workers to a terrorist attack.... air power remains an important part of the battle.We preview the potential battles ahead and the strategies in play ..." }, "TABREF2": { "content": "", "type_str": "table", "html": null, "num": null, "text": "Dataset statistics." }, "TABREF5": { "content": "
Method | P | R | F
9-shot (Feng et al., 2020) | 54.5 | 57.0 | 61.8
Example | 66.4 | 68.0 | 66.9
Ours | 66.2 | 74.2 | 69.9
", "type_str": "table", "html": null, "num": null, "text": "Inconsistent weakly supervised annotations from the denoising step." }, "TABREF6": { "content": "
Event Argument Extraction. In our event argument extraction experiments, we use the top 10
", "type_str": "table", "html": null, "num": null, "text": "Sentence level event detection result (%)." } } } }