{ "paper_id": "P17-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:16:00.542807Z" }, "title": "Joint Learning for Event Coreference Resolution", "authors": [ { "first": "Jing", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Research Institute University of Texas at Dallas Richardson", "location": { "postCode": "75083-0688", "region": "TX" } }, "email": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Research Institute University of Texas at Dallas Richardson", "location": { "postCode": "75083-0688", "region": "TX" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While joint models have been developed for many NLP tasks, the vast majority of event coreference resolvers, including the top-performing resolvers competing in the recent TAC KBP 2016 Event Nugget Detection and Coreference task, are pipelinebased, where the propagation of errors from the trigger detection component to the event coreference component is a major performance limiting factor. To address this problem, we propose a model for jointly learning event coreference, trigger detection, and event anaphoricity. Our joint model is novel in its choice of tasks and its features for capturing cross-task interactions. To our knowledge, this is the first attempt to train a mention-ranking model and employ event anaphoricity for event coreference. Our model achieves the best results to date on the KBP 2016 English and Chinese datasets.", "pdf_parse": { "paper_id": "P17-1009", "_pdf_hash": "", "abstract": [ { "text": "While joint models have been developed for many NLP tasks, the vast majority of event coreference resolvers, including the top-performing resolvers competing in the recent TAC KBP 2016 Event Nugget Detection and Coreference task, are pipelinebased, where the propagation of errors from the trigger detection component to the event coreference component is a major performance limiting factor. To address this problem, we propose a model for jointly learning event coreference, trigger detection, and event anaphoricity. Our joint model is novel in its choice of tasks and its features for capturing cross-task interactions. To our knowledge, this is the first attempt to train a mention-ranking model and employ event anaphoricity for event coreference. Our model achieves the best results to date on the KBP 2016 English and Chinese datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Within-document event coreference resolution is the task of determining which event mentions in a text refer to the same real-world event. Compared to entity coreference resolution, event coreference resolution is not only much less studied, but it is arguably more challenging. The challenge stems in part from the fact that an event coreference resolver typically lies towards the end of the standard information extraction pipeline, assuming as input the noisy outputs of its upstream components. One such component is the trigger detection system, which is responsible for identifying event triggers and determining their event subtypes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As is commonly known, trigger detection is another challenging task that is far from being solved. 
In fact, in the recent TAC KBP 2016 Event Nugget Detection and Coreference task, trigger detection (a.k.a. event nugget detection in KBP) is deliberately made more challenging by focusing only on detecting the 18 subtypes of triggers on which the KBP 2015 participating systems' performances were the poorest . The best-performing KBP 2016 system on English trigger detection achieved only an F-score of 47 . 1 Given the difficulty of trigger detection, it is conceivable that many errors will propagate from the trigger detection component to the event coreference component in any pipeline architecture where trigger detection precedes event coreference resolution. These trigger detection errors could severely harm event coreference performance. For instance, two event mentions could be wrongly posited as coreferent if the underlying triggers were wrongly predicted to have the same subtype. Nevertheless, the top-performing systems in the KBP 2016 event coreference task all adopted the aforementioned pipeline architecture Nguyen et al., 2016) . Their performances are not particularly impressive, however: the best English event coreference F-score (averaged over four scoring metrics) is only around 30%.", "cite_spans": [ { "start": 508, "end": 509, "text": "1", "ref_id": null }, { "start": 1130, "end": 1150, "text": "Nguyen et al., 2016)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address this error propagation problem, we describe a joint model of trigger detection, event coreference, and event anaphoricity in this paper. Our choice of these three tasks is motivated in part by their inter-dependencies. As mentioned above, it is well-known that trigger detection performance has a huge impact on event coreference performance. Though largely underinvestigated, event coreference could also improve trigger detection. For instance, if two event mentions are posited as coreferent, then the underlying triggers must have the same event subtype. While the use of anaphoricity information for entity coreference has been extensively studied (see Ng (2010) ), to our knowledge there has thus far been no attempt to explicitly model event anaphoricity for event coreference. 2 Although the mention-ranking model we employ for event coreference also allows an event mention to be posited as non-anaphoric (by resolving it to a null candidate antecedent), our decision to train a separate anaphoricity model and integrate it into our joint model is motivated in part by the recent successes of Wiseman et al. (2015) , who showed that there are benefits in jointly training a noun phrase anaphoricity model and a mention-ranking model for entity coreference resolution. Finally, event anaphoricity and trigger detection can also mutually benefit each other. For instance, any verb posited as a non-trigger cannot be anaphoric, and any verb posited as anaphoric must be a trigger. Note that in our joint model, anaphoricity serves as an auxiliary task: its intended use is to improve trigger detection and event coreference, potentially mediating the interaction between trigger detection and event coreference.", "cite_spans": [ { "start": 669, "end": 678, "text": "Ng (2010)", "ref_id": "BIBREF30" }, { "start": 796, "end": 797, "text": "2", "ref_id": null }, { "start": 1113, "end": 1134, "text": "Wiseman et al. 
(2015)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Being a structured conditional random field, our model encompasses two types of factors. Unary factors encode the features specific for each task. Binary and ternary factors capture the interaction between each pair of tasks in a soft manner, enabling the learner to learn which combinations of values of the output variables are more probable. For instance, the learner should learn that it is not a good idea to classify a verb both as anaphoric and as a non-trigger. Our model is similar in spirit to Durrett and Klein's (2014) joint model for entity analysis, which performs joint learning for entity coreference, entity linking and semantic typing via the use of interaction features.", "cite_spans": [ { "start": 504, "end": 530, "text": "Durrett and Klein's (2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are two-fold. First, we present a joint model of event coreference, trigger detection, and anaphoricity that is novel in terms of the choice of tasks and the features used to capture cross-task interactions. Second, our model achieves the best results to date on the KBP 2016 English and Chinese event coreference tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Following the entity coreference literature, we overload the term anaphoricity, saying that an event mention is anaphoric if it is coreferent with a preceding mention in the associated text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Definitions, Task, and Corpora", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We employ the following definitions in our discussion of trigger detection and event coreference:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions", "sec_num": "2.1" }, { "text": "\u2022 An event mention is an explicit occurrence of an event consisting of a textual trigger, arguments or participants (if any), and the event type/subtype.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions", "sec_num": "2.1" }, { "text": "\u2022 An event trigger is a string of text that most clearly expresses the occurrence of an event, usually a word or a multi-word phrase", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions", "sec_num": "2.1" }, { "text": "\u2022 An event argument is an argument filler that plays a certain role in an event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions", "sec_num": "2.1" }, { "text": "\u2022 An event coreference chain (a.k.a. an event hopper) is a group of event mentions that refer to the same real-world event. They must have the same event (sub)type. To understand these definitions, consider the example in Table 1 , which contains two coreferent event mentions, ev1 and ev2. left is the trigger for ev1 and departed is the trigger for ev2. Both triggers have subtype Movement.Transport-Person. ev1 has three arguments, Georges Cipriani, prison, and Wednesday with roles Person, Origin, and Time respectively. 
ev2 also has three arguments, He, Ensisheim, and police vehicle with roles Person, Origin, and Instrument respectively.", "cite_spans": [], "ref_spans": [ { "start": 222, "end": 229, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Definitions", "sec_num": "2.1" }, { "text": "The version of the event coreference task we focus on in this paper is the Event Nugget Detection and Coreference task in the TAC KBP 2016 Event Track. While we discuss the role played by event arguments in event coreference in the previous subsection, KBP 2016 addresses event argument detection as a separate shared task. In other words, the KBP 2016 Event Nugget Detection and Coreference task focuses solely on trigger detection and event coreference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "2.2" }, { "text": "It is worth mentioning that the KBP Event Nugget Detection and Coreference task, which started in 2015, aims to address a major weakness of the ACE 2005 event coreference task. Specifically, ACE 2005 adopts a strict notion of event identity, with which two event mentions were annotated as coreferent if and only if \"they had the same agent(s), patient(s), time, and location\" (Song et al., 2015) , and their event attributes (polarity, modality, genericity, and tense) were not incompatible. In contrast, KBP adopts a more relaxed definition of event coreference, allowing two event mentions to be coreferent as long as they intuitively refer to the same real-world event. Under this definition, two event mentions can be coreferent even if their time and location arguments are not coreferent. In our example in Table 1 , ev1 and ev2 are coreferent in KBP because they both refer to the same event of Cipriani leaving the prison. However, they are not coreferent in ACE because their Origin arguments are not coreferent (one Origin argument involves a prison in Ensisheim while the other involves the city Ensisheim).", "cite_spans": [ { "start": 377, "end": 396, "text": "(Song et al., 2015)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 814, "end": 821, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Task", "sec_num": "2.2" }, { "text": "Given our focus on the KBP 2016 Event Nugget Detection and Coreference task, we employ the English and Chinese corpora used in this task for evaluation, referring to these corpora as the KBP 2016 English and Chinese corpora for brevity. There are no official training sets: the task organizers simply made available a number of event coreference-annotated corpora for training. For English, we use LDC2015E29, E68, E73, and E94 for training. These corpora are composed of two types of documents, newswire documents and discussion forum documents. Together they contain 648 documents with 18739 event mentions distributed over 9955 event coreference chains. For Chinese, we use LDC2015E78, E105, and E112 for training. These corpora are composed of discussion forum documents only. Together they contain 383 documents with 4870 event mentions distributed over 3614 event coreference chains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "2.3" }, { "text": "The test set for English consists of 169 newswire and discussion forum documents with 4155 event mentions distributed over 3191 event coreference chains. The test set for Chinese consists of 167 newswire and discussion forum documents with 2518 event mentions distributed over 1912 event coreference chains. 
Note that these test sets contain only annotations for event triggers and event coreference (i.e., there are no event argument annotations). While some of the training sets additionally contain event argument annotations, we do not make use of event argument annotations in model training to ensure a fairer comparison to the teams participating in the KBP 2016 Event Nugget Detection and Coreference task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "2.3" }, { "text": "Our model, which is a structured conditional random field, operates at the document level. Specifically, given a test document, we first extract from it (1) all single-word nouns and verbs and (2) all words and phrases that have appeared at least once as a trigger in the training data. We treat each of these extracted words and phrases as a candidate event mention. 3 The goal of the model is to make joint predictions for the candidate event mentions in a document. Three predictions will be made for each candidate event mention that correspond to the three tasks in the model: its trigger subtype, its anaphoricity, and its antecedent.", "cite_spans": [ { "start": 368, "end": 369, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "Given this formulation, we define three types of output variables:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "\u2022 Event subtype variables t = (t 1 , . . . , t n ). Each t i takes a value in the set of 18 event subtypes defined in KBP 2016 or NONE, which indicates that the event mention is not a trigger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "\u2022 Anaphoricity variables a = (a 1 , . . . , a n ). Each a i is either ANAPHORIC or NOT ANAPHORIC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "\u2022 Coreference variables c = (c 1 , . . . , c n ), where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "c i \u2208 {1, . . . , i \u2212 1, NEW}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "In other words, the value of each c i is the id of its antecedent, which can be one of the preceding event mentions or NEW (if the event mention underlying c i starts a new cluster).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "Each candidate event mention is associated with exactly one coreference variable, one event subtype variable, and one anaphoricity variable. Our model induces the following log-linear probability distribution over these variables:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "p(t, a, c | x; \u0398) \u221d exp(\u2211_i \u03b8_i f_i(t, a, c, x)), where \u03b8_i \u2208 \u0398 is the weight associated with feature function f_i and x is the input document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "Given that our model is a structured conditional random field, the features can be divided into two types: (1) task-specific features, and (2) cross-task features, which capture the interactions between a pair of tasks. We express these two types of features in factor graph notation. 
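Concretely, one joint assignment and its unnormalized score can be sketched as follows. This is a minimal illustration with hypothetical names, not the authors' implementation; the feature functions here stand in for the unary and cross-task features described next.

```python
# A minimal sketch (hypothetical names, not the authors' implementation) of one
# joint assignment and its unnormalized log-linear score
#   p(t, a, c | x; Theta) proportional to exp(sum_i theta_i * f_i(t, a, c, x)).
from typing import Callable, List, Sequence, Union

NEW = "NEW"            # antecedent value for a mention that starts a new cluster
NONE_SUBTYPE = "NONE"  # subtype value for a candidate mention that is not a trigger

class JointAssignment:
    """One value per candidate event mention for each of the three tasks."""
    def __init__(self,
                 subtypes: List[str],                  # t_i: one of the 18 KBP subtypes or NONE
                 anaphoric: List[bool],                # a_i: ANAPHORIC (True) or NOT ANAPHORIC
                 antecedents: List[Union[int, str]]):  # c_i: index of a preceding mention or NEW
        assert len(subtypes) == len(anaphoric) == len(antecedents)
        self.t, self.a, self.c = subtypes, anaphoric, antecedents

def joint_score(assignment: JointAssignment, doc: Sequence[str],
                features: List[Callable], weights: List[float]) -> float:
    """Unnormalized log score; normalizing exp(score) over all assignments
    would give the CRF distribution p(t, a, c | x; Theta)."""
    return sum(w * f(assignment.t, assignment.a, assignment.c, doc)
               for f, w in zip(features, weights))
```

In practice the model never enumerates assignments explicitly; marginals over these variables are computed with belief propagation, as described in Section 3.4.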
The taskspecific features are encoded in unary factors, each of which is connected to the corresponding variable ( Figure 1 ). The cross-task features are encoded in binary or ternary factors, each of which couples the output variables from two tasks (Figure 2) . Next, we describe these two types of features. Each feature is used to train models for both English and Chinese unless otherwise stated.", "cite_spans": [], "ref_spans": [ { "start": 399, "end": 407, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 535, "end": 545, "text": "(Figure 2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "We begin by describing the task-specific features, which are encoded in unary factors, as well as each of the three independent models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task-Specific Features", "sec_num": "3.2.1" }, { "text": "When applied in isolation, our trigger detection model returns a distribution over possible subtypes given a candidate trigger. Each candidate trigger t is represented using t's word, t's lemma, word bigrams formed with a window size of three from t, as well as feature conjunctions created by pairing t's lemma with each of the following features: the head word of the entity syntactically closest to t, the head word of the entity textually closest to t, the entity type of the entity that is syntactically closest to t, and the entity type of the entity that is textually closest to t. 4 In addition, for event mentions with verb triggers, we use the head words and the entity types of their subjects and objects as features, where the subjects and objects are extracted from the dependency parse trees obtained using Stanford CoreNLP (Manning et al., 2014) . For event mentions with noun triggers, we create the same features that we did for verb triggers, except that we replace the subjects and verbs with heuristically extracted agents and patients. Finally, for the Chinese trigger detector, we additionally create two features from each character in t, one encoding the character itself and the other encoding the entry number of the corresponding character in a Chinese synonym dictionary. 5", "cite_spans": [ { "start": 589, "end": 590, "text": "4", "ref_id": null }, { "start": 838, "end": 860, "text": "(Manning et al., 2014)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Trigger Detection", "sec_num": "3.2.1.1" }, { "text": "We employ a mention-ranking model for event coreference that selects the most probable antecedent for a mention to be resolved (or NEW if the mention is non-anaphoric) from its set of candidate antecedents. When applied in isolation, the model is trained to maximize the condi-tional likelihood of collectively resolving the mentions to their correct antecedents in the training texts (Durrett and Klein, 2013 ). Below we describe the features used to represent the candidate antecedents for the mention to be resolved, m j . Features representing the NULL candidate antecedent: Besides m j 's word and m j 's lemma, we employ feature conjunctions given their usefulness in entity coreference (Fernandes et al., 2014) . Specifically, we create a conjunction between m j 's lemma and the number of sentences preceding m j , as well as a conjunction between m j 's lemma and the number of mentions preceding m j in the document. 
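As a concrete illustration of this conjunction scheme, the following sketch (hypothetical function and feature names, not the authors' code) builds the NULL-candidate representation as sparse string-valued features:

```python
# Illustrative sketch (hypothetical function and feature names, not the authors'
# code) of the NULL-candidate feature conjunctions described above. A real system
# would map these strings to indices in a sparse weight vector.
def null_antecedent_features(word: str, lemma: str,
                             num_preceding_sentences: int,
                             num_preceding_mentions: int) -> list:
    return [
        f"word={word}",
        f"lemma={lemma}",
        # lemma conjoined with the number of sentences preceding the mention
        f"lemma={lemma}&prev_sents={num_preceding_sentences}",
        # lemma conjoined with the number of preceding mentions in the document
        f"lemma={lemma}&prev_mentions={num_preceding_mentions}",
    ]

# Example: the NULL-candidate representation for a mention of "departed"
# (the counts are made up for illustration).
print(null_antecedent_features("departed", "depart", 7, 4))
```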
", "cite_spans": [ { "start": 385, "end": 409, "text": "(Durrett and Klein, 2013", "ref_id": "BIBREF13" }, { "start": 693, "end": 717, "text": "(Fernandes et al., 2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Event Coreference", "sec_num": "3.2.1.2" }, { "text": "When used in isolation, the anaphoricity model returns the probability that the given event mention is anaphoric. To train the model, we represent each event mention m j using the following features: (1) the head word of each candidate antecedent paired with m j 's word, (2) whether at least one candidate antecedent has the same lemma as that of m j , and (3) the probability that m j is anaphoric in the training data (if m j never appears in the training data, this probability is set to 0.5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphoricity Determination", "sec_num": "3.2.1.3" }, { "text": "Cross-task interaction features are associated with the binary and ternary factors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Task Interaction Features", "sec_num": "3.2.2" }, { "text": "We fire features that conjoin each candidate event mention's event subtype, the lemma of its trigger and its anaphoricity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger Detection and Anaphoricity", "sec_num": "3.2.2.1" }, { "text": "We define our joint coreference and trigger detection factors such that the features defined on subtype variables t i and t j are fired only if current mention m j is coreferent with preceding mention m i . These features are: (1) the pair of m i and m j 's subtypes, (2) the pair of m j 's subtype and m i 's word, and (3) the pair of m i 's subtype and m j 's word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger Detection and Coreference", "sec_num": "3.2.2.2" }, { "text": "We fire a feature that conjoins event mention m j 's anaphoricity and whether or not a non-NULL antecedent is selected for m j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference and Anaphoricity", "sec_num": "3.2.2.3" }, { "text": "We learn the model parameters \u0398 from a set of d training documents, where document i contains content x i , gold triggers t * i and gold event coreference partition C * i . Before learning, there are a couple of issues we need to address.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "First, we need to derive gold anaphoricity labels a * i from C * i . This is straightforward: the first mention of each coreference chain is NOT ANAPHORIC, whereas the rest are ANAPHORIC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "Second, we employ gold event mentions for model training, but training models only on gold mentions is not sufficient: for instance, a trigger detector trained solely on gold mentions will not be able to classify a candidate event mention as NONE during testing. To address this issue, we additionally train the models on candidate event mentions corresponding to non-triggers. We create these candidate event mentions as follows. 
For each word w that appears as a true trigger at least once in the training data, we create a candidate event mention from each occurrence of w in the training data that is not annotated as a true trigger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "Third, since our model produces event coreference output in the form of an antecedent vector (with one antecedent per event mention), it needs to be trained on antecedent vectors. However, since the coreference annotation for each document i is provided in the form of a clustering C*_i, we follow previous work on entity coreference resolution (Durrett and Klein, 2013): we sum over all antecedent structures A(C*_i) that are consistent with C*_i (i.e., the first mention of a cluster has antecedent NEW, whereas each of the subsequent mentions can select any of the preceding mentions in the same cluster as its antecedent).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "Next, we learn the model parameters to maximize the following conditional likelihood of the training data with L1 regularization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "L(\u0398) = \u2211_{i=1}^{d} log \u2211_{c* \u2208 A(C*_i)} p\u2032(t*_i, a*_i, c* | x_i; \u0398) + \u03bb \u2016\u0398\u2016_1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "In this objective, p\u2032 is obtained by augmenting the distribution p (defined in Section 3.1) with task-specific parameterized loss functions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "p\u2032(t, a, c | x_i; \u0398) \u221d p(t, a, c | x_i; \u0398) exp[\u03b1_t l_t(t, t*) + \u03b1_a l_a(a, a*) + \u03b1_c l_c(c, C*)]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "where l_t, l_a and l_c are task-specific loss functions, and \u03b1_t, \u03b1_a and \u03b1_c are the associated weight parameters that specify the relative importance of the three tasks in the objective function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "Softmax-margin, the technique of integrating task-specific loss functions into the objective function, was introduced by Gimpel and Smith (2010) and subsequently used by Durrett and Klein (2013, 2014). By encoding task-specific knowledge, these loss functions can help train a model that places less probability mass on less desirable output configurations.", "cite_spans": [ { "start": 121, "end": 144, "text": "Gimpel and Smith (2010)", "ref_id": "BIBREF17" }, { "start": 170, "end": 188, "text": "Klein (2013, 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "Our loss function for event coreference, l_c, is motivated by the one Durrett and Klein (2013) developed for entity coreference. 
It is a weighted sum of the counts of three error types:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "l_c(c, C*) = \u03b1_{c,FA} FA(c, C*) + \u03b1_{c,FN} FN(c, C*) + \u03b1_{c,WL} WL(c, C*)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "where FA(c, C*) is the number of non-anaphoric mentions misclassified as anaphoric, FN(c, C*) is the number of anaphoric mentions misclassified as non-anaphoric, and WL(c, C*) is the number of incorrectly resolved anaphoric mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "Our loss function for trigger detection, l_t, is parameterized in a similar way, having three parameters associated with three error types: \u03b1_{t,FT} is associated with the number of non-triggers misclassified as triggers, \u03b1_{t,FN} is associated with the number of triggers misclassified as non-triggers, and \u03b1_{t,WL} is associated with the number of triggers labeled with the wrong subtype.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "Finally, our loss function for anaphoricity determination, l_a, is also similarly parameterized, having two parameters: \u03b1_{a,FA} and \u03b1_{a,FN} are associated with the number of false anaphors and the number of false non-anaphors, respectively. Following Durrett and Klein (2014), we use AdaGrad (Duchi et al., 2011) to optimize our objective with \u03bb = 0.001 in our experiments.", "cite_spans": [ { "start": 293, "end": 313, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "Inference, which is performed during training and decoding, involves computing the marginals for a variable or a set of variables to which a factor connects. For efficiency, we perform approximate inference using belief propagation rather than exact inference. Given that convergence can typically be reached within five iterations of belief propagation, we employ five iterations in all experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.4" }, { "text": "Performing inference using belief propagation on the full factor graph defined in Section 3.1 can still be computationally expensive, however. One reason is that the number of ternary factors grows quadratically with the number of event mentions in a document. To improve scalability, we restrict the domains of the coreference variables. Rather than allow the domain of coreference variable c j to be of size j, we allow a preceding mention m i to be a candidate antecedent of mention m j if (1) the sentence distance between the two mentions is less than an empirically determined threshold and (2) either they are coreferent at least once in the training data or their head words have the same lemma. Doing so effectively enables us to prune the unlikely candidate antecedents for each event mention. As Durrett and Klein (2014) point out, such pruning has the additional benefit of reducing \"the memory footprint and time needed to build a factor graph\", as we do not need to create any factor between m i and m j and its associated features if m i is pruned. To further reduce the memory footprint, we additionally restrict the domains of the event subtype variables. 
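The candidate-antecedent pruning described in the preceding paragraph can be sketched as follows; the names are hypothetical, and representing the training-data coreference record as a set of head-lemma pairs is an assumption of this sketch rather than a detail specified above.

```python
# Sketch (hypothetical names, not the authors' code) of the antecedent pruning
# rule above: a preceding mention m_i stays in the candidate antecedent domain
# of m_j only if the two mentions are close enough and look potentially coreferent.
from dataclasses import dataclass

@dataclass
class Mention:
    sentence_index: int
    head_lemma: str

def keep_candidate_antecedent(m_i: Mention, m_j: Mention,
                              max_sentence_distance: int,
                              coreferent_lemma_pairs: set) -> bool:
    close_enough = (m_j.sentence_index - m_i.sentence_index) < max_sentence_distance
    seen_coreferent = (m_i.head_lemma, m_j.head_lemma) in coreferent_lemma_pairs
    return close_enough and (seen_coreferent or m_i.head_lemma == m_j.head_lemma)
```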
Given a candidate event mention created from word w, we allow the domain of its subtype variable to include only NONE as well as those subtypes that w is labeled with at least once in the training data.", "cite_spans": [ { "start": 807, "end": 831, "text": "Durrett and Klein (2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.4" }, { "text": "For decoding, we employ minimum Bayes risk, which computes the marginals of each variable w.r.t. the joint model and derives the most probable assignment to each variable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.4" }, { "text": "We perform training and evaluation on the KBP 2016 English and Chinese corpora. For English, we train models on 509 of the training documents, tune parameters on 139 training documents, and report results on the official KBP 2016 English test set. 6 For Chinese, we train models on 302 of the training documents, tune parameters on 81 training documents, and report results on the official Results of event coreference and trigger detection are obtained using version 1.7.2 of the official scorer provided by the KBP 2016 organizers. To evaluate event coreference performance, the scorer employs four scoring measures, namely MUC (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998) , CEAF e (Luo, 2005) and BLANC (Recasens and Hovy, 2011) , as well as the unweighted average of their F-scores (AVG-F). The scorer reports event mention detection performance in terms of F-score, considering a mention correctly detected if it has an exact match with a gold mention in terms of boundary, event type, and event subtype. In addition, we report anaphoricity determination performance in terms of the F-score computed over anaphoric mentions, counting an extracted anaphoric mention as a true positive if it has an exact match with a gold anaphoric mention in terms of boundary.", "cite_spans": [ { "start": 630, "end": 651, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF36" }, { "start": 658, "end": 683, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF3" }, { "start": 693, "end": 704, "text": "(Luo, 2005)", "ref_id": "BIBREF26" }, { "start": 715, "end": 740, "text": "(Recasens and Hovy, 2011)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Results are shown in Table 2 where performance on all three tasks (event coreference, trigger detection, and anaphoricity determination) is expressed in terms of F-score. The top half of the table shows the results on the English evaluation set. Specifically, row 1 shows the performance of the best event coreference system participating in KBP 2016 . This system adopts a pipeline architecture. It first uses an ensemble of one-nearest-neighbor classifiers for trigger detection. Using the extracted triggers, it then applies a pipeline of three sieves, each of which is a one-nearest-neighbor classifier, for event coreference. As we can see, this system achieves an AVG-F of 30.08 for event coreference and an F-score of 46.99 for trigger detection.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.2" }, { "text": "Row 2 shows the performance of the independent models, each of which is trained independently of the other models. 
Specifically, each independent model is trained using only the unary factors associated with it. As we can see, the independent models outperform the top KBP 2016 system by 1.2 points in AVG-F for event coreference and 1.83 points for trigger detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.2" }, { "text": "Results of our joint model are shown in row 3. The absolute performance differences between the joint model and the independent models are shown in row 4. As we can see, the joint model outperforms the independent models for all three tasks: by 1.80 points for event coreference, 0.48 points for trigger detection and 4.59 points for anaphoricity determination. Most encouragingly, the joint model outperforms the top KBP 2016 system for both event coreference and trigger detection. For event coreference, it outperforms the top KBP system w.r.t. all scoring metrics, yielding an improvement of 3 points in AVG-F. For trigger detection, it outperforms the top KBP system by 2.31 points. Table 2 shows the results on the Chinese evaluation set. The top KBP 2016 event coreference system on Chinese is also the system. While the top KBP system outperforms the independent models for both tasks (by 0.59 points in AVG-F for event coreference and 0.19 points for trigger detection), our joint model outperforms the independent models for all three tasks: by 1.95 points for event coreference, 4.02 points for anaphoricity determination, and 0.71 points for trigger detection. Like its English counterpart, our Chinese joint model outperforms the top KBP system for both event coreference and trigger detection. For event coreference, it outperforms the top KBP system w.r.t. all but the CEAF e metric, yielding an absolute improvement of 1.36 points in AVG-F. For trigger detection, it outperforms the top KBP system by 0.52 points. For both datasets, the joint model's superior performance to the independent coreference model stems primarily from considerable improvements in MUC F-score. As MUC is a link-based measure, these results provide suggestive evidence that joint modeling has enabled more event coreference links to be discovered.", "cite_spans": [], "ref_spans": [ { "start": 688, "end": 695, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.2" }, { "text": "To evaluate the importance of each of the three types of joint factors in the joint model, we perform ablation experiments. 7 Table 3 shows the results on the English and Chinese datasets when we add each type of joint factors to the independent model and remove each type of joint factors from the full joint model. The results of each task are expressed in terms of changes to the corresponding independent model's F-score.", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 133, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Model Ablations", "sec_num": "4.3" }, { "text": "Coref-Trigger interactions. Among the three types of factors, this one contributes the most to coreference performance, regardless of whether it is applied in isolation or in combination with the other two types of factors to the independent coreference model. In addition, it is the most effective type of factor for improving trigger detection. 
When applied in combination, it also improves anaphoricity determination, although less effectively than the other two types of factors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Ablations", "sec_num": "4.3" }, { "text": "Coref-Anaphoricity interactions. When applied in isolation to the independent models, this type of factor improves coreference resolution but has a mixed impact on anaphoricity determination. When applied in combination with other types of factors, it improves both tasks, particularly anaphoricity determination. Its impact on trigger detection, however, is generally negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Ablations", "sec_num": "4.3" }, { "text": "Trigger-Anaphoricity interactions. When applied in isolation to the independent models, this type of factor improves both trigger detection and anaphoricity determination. When applied in combination with other types of factors, it still improves anaphoricity determination (particularly on Chinese), but has a mixed effect on trigger detection. Among the three types of factors, it has the least impact on coreference resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Ablations", "sec_num": "4.3" }, { "text": "Next, we conduct an analysis of the major sources of error made by our joint coreference model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.4" }, { "text": "Erroneous and mistyped triggers. Our trigger model tends to assign the same subtype to event mentions triggered by the same word. As a result, it often assigns the wrong subtype to triggers that possess different subtypes in different contexts. For the same reason, words that are only sometimes used as triggers are often wrongly posited as triggers when they are not. These two types of triggers have in turn led to the establishment of incorrect coreference links. 8 Failure to extract arguments. In the absence of an annotated corpus for training an argument classifier, we exploit dependency relations for argument extraction. Doing so proves inadequate, particularly for noun triggers, owing to the absence of dependency relations that can be used to reliably extract their arguments. Moreover, using dependency relations does not allow the extraction of arguments that do not appear in the same sentence as their trigger. Since the presence of incompatible arguments is an important indicator of noncoreference, our model's failure to extract arguments has resulted in incorrect coreference links.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two Major Types of Precision Error", "sec_num": "4.4.1" }, { "text": "Missing triggers. Our trigger model fails to identify trigger words that are unseen or rarelyoccurring in the training data. As a result, many coreference links cannot be established. Lack of entity coreference information. Entity coreference information is useful for event coreference because the corresponding arguments of two event mentions are typically coreferent. Since our model does not exploit entity coreference information, it treats two lexically different event arguments as non-coreferent/unrelated. This in turn weakens its ability to determine whether two event mentions are coreferent. This issue is particularly serious in discussion forum documents, where it is not uncommon to see pronouns serve as subjects and objects of event mentions. 
The situation is further aggravated in Chinese documents, where zero pronouns are prevalent. Lack of contextual understanding. Our model only extracts features from the sentence in which an event mention appears. However, additional contextual information present in neighboring sentences may be needed for correct coreference resolution. This is particularly true in discussion forum documents, where the same event may be described differently by different people. For exam-ple, when describing the fact that Tim Cook will attend Apple's Istanbul store opening, one person said \"Cook is expected to return to Turkey for the store opening\", and another person described this event as \"Tim travels abroad YET AGAIN to be feted by the not-so-high-and-mighty\". It is by no means easy to determine that return and travel trigger two coreferent mentions in these sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three Major Types of Recall Error", "sec_num": "4.4.2" }, { "text": "Existing event coreference resolvers have been evaluated on different corpora, such as MUC (e.g., Humphreys et al. (1997) ), ACE (e.g., Ahn (2006) , Ji (2009), McConky et al. (2012) , Sangeetha and Arock (2012) , Ng (2015, 2016) , Krause et al. (2016) ), OntoNotes (e.g., Chen et al. (2011) ), the Intelligence Community corpus (e.g., Cybulska and Vossen (2012) , , ), the ECB corpus (e.g., Lee et al. (2012) , Bejan and Harabagiu (2014)) and its extension ECB+ (e.g., Yang et al. (2015) ), and ProcessBank (e.g., Araki and Mitamura (2015) ). The newest event coreference corpora are the ones used in the KBP 2015 and 2016 Event Nugget Detection and Coreference shared tasks, in which the best performers in 2015 and 2016 are RPI's system (Hong et al., 2015 ) and UTD's system , respectively. The KBP 2015 corpus has recently been used to evaluate Peng et al.'s (2016) minimally supervised approach and Lu et al.'s (2016) joint inference approach to event coreference. With the rarest exceptions (e.g., ), existing resolvers have adopted a pipeline architecture in which trigger detection is performed prior to coreference resolution.", "cite_spans": [ { "start": 98, "end": 121, "text": "Humphreys et al. (1997)", "ref_id": "BIBREF19" }, { "start": 136, "end": 146, "text": "Ahn (2006)", "ref_id": "BIBREF0" }, { "start": 149, "end": 181, "text": "Ji (2009), McConky et al. (2012)", "ref_id": null }, { "start": 184, "end": 210, "text": "Sangeetha and Arock (2012)", "ref_id": "BIBREF34" }, { "start": 213, "end": 228, "text": "Ng (2015, 2016)", "ref_id": null }, { "start": 231, "end": 251, "text": "Krause et al. (2016)", "ref_id": "BIBREF20" }, { "start": 272, "end": 290, "text": "Chen et al. (2011)", "ref_id": "BIBREF5" }, { "start": 335, "end": 361, "text": "Cybulska and Vossen (2012)", "ref_id": "BIBREF11" }, { "start": 391, "end": 408, "text": "Lee et al. (2012)", "ref_id": "BIBREF21" }, { "start": 469, "end": 487, "text": "Yang et al. (2015)", "ref_id": "BIBREF38" }, { "start": 514, "end": 539, "text": "Araki and Mitamura (2015)", "ref_id": "BIBREF2" }, { "start": 739, "end": 757, "text": "(Hong et al., 2015", "ref_id": "BIBREF18" }, { "start": 848, "end": 868, "text": "Peng et al.'s (2016)", "ref_id": null }, { "start": 903, "end": 921, "text": "Lu et al.'s (2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We proposed a joint model of event coreference resolution, trigger detection, and event anaphoricity determination. 
The model is novel in its choice of tasks and the cross-task interaction features. When evaluated on the KBP 2016 English and Chinese corpora, our model not only outperforms the independent models but also achieves the best results to date on these corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "To train our joint model, however, the trigger annotations and the event coreference annotations in the training data must be consistent. Since we modified the trigger annotations (by merging event mentions and allowing combined subtypes), we make two modifications to the event coreference annotations to ensure consistency between the two sets of annotations. First, let C 1 and C 2 be two event coreference chains in a training document such that the set of words triggering the event mentions in C 1 (with subtype t 1 ) is the same as that triggering the event mentions in C 2 (with subtype t 2 ). If each of the event mentions in C 1 was merged with the corresponding event mention in C 2 during the aforementioned preprocessing of the trigger annotations (because combining t 1 and t 2 results in one of the three most frequent combined subtypes), then we delete one of the two coreference chains, and assign the combined subtype to the remaining chain. Finally, we remove any remaining event mentions that were merged during the preprocessing of trigger annotations from their respective coreference chains and create a singleton cluster for each of the merged mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This is the best English nugget type result in KBP 2016. In this paper, we will not be concerned with realis classification, as it does not play any role in event coreference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "According to the KBP annotation guidelines, each word may trigger multiple event mentions (e.g., murder can trigger two event mentions with subtypes Life.Die and Conflict.Attack). Hence, our treating each extracted word as a candidate event mention effectively prevents a word from triggering multiple event mentions. Rather than complicate model design by relaxing this simplifying assumption, we present an alternative, though partial, solution to this problem wherein we allow each event mention to be associated with multiple event subtypes. See the Appendix for details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We train a CRF-based entity extraction model for jointly identifying the entity mentions and their types. Details can be found in.5 The dictionary is available from http://ir.hit.edu.cn/. An entry number in this dictionary conceptually resembles a synset id in WordNet(Fellbaum, 1998).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The parameters to be tuned are the \u03b1's multiplying the loss functions and those inside the loss functions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Chen and Ng (2013) also performed ablation on their ACE-style Chinese event coreference resolver. 
However, given the differences in the tasks involved (e.g., they did not model event anaphoricity, but included tasks such as event argument extraction and role classification, entity coreference, and event mention attribute value computation) and the ablation setup (e.g., they ablated individual tasks/components in their pipeline-based system in an incremental fashion, whereas we ablate interaction factors rather than tasks), a direct comparison of their observations and ours is difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our joint model, mentions that are posited as coreferent are encouraged to have the same subtype. While it can potentially fix the errors involving coreferent mentions that have different subtypes, it cannot fix the errors in which the two mentions involved have the same erroneous subtype.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the three anonymous reviewers for their detailed comments. This work was supported in part by NSF Grants IIS-1219142 and IIS-1528037.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "In KBP, a word can trigger multiple event mentions. However, since we create exactly one candidate event mention from each extracted word in each test document, our model effectively prevents a word from triggering multiple event mentions. This poses a problem: each word cannot be associated with more than one event subtype. This appendix describes how we (partially) address this issue that involves allowing each event mention to be associated with multiple event subtypes.To address this problem, we preprocess the gold trigger annotations in the training data as follows. First, for each word triggering multiple event mentions (with different event subtypes), we merge their event mentions into one event mention having the combined subtype. In principle, we can add each of these combined subtypes into our event subtype inventory and allow our model to make predictions using them. However, to avoid over-complicating the prediction task (by having a large subtype inventory), we only add the three most frequently occurring combined subtypes in the training data to the inventory. Merged mentions whose combined subtype is not among the most frequent three will be unmerged in order to recover the original mentions so that the model can still be trained on them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix: Handling Words that Trigger Multiple Event Mentions", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The stages of event extraction", "authors": [ { "first": "David", "middle": [], "last": "Ahn", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL Workshop on Annotating and Reasoning about Time and Events", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Ahn. 2006. The stages of event extraction. In Proceedings of the COLING/ACL Workshop on Annotating and Reasoning about Time and Events. 
pages 1-8.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Detecting subevent structure for event coreference resolution", "authors": [ { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "4553--4558", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Araki, Zhengzhong Liu, Eduard Hovy, and Teruko Mitamura. 2014. Detecting subevent structure for event coreference resolution. In Proceedings of the Ninth International Conference on Language Re- sources and Evaluation, pages 4553-4558.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Joint event trigger identification and event coreference resolution with structured perceptron", "authors": [ { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2074--2080", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Araki and Teruko Mitamura. 2015. Joint event trig- ger identification and event coreference resolution with structured perceptron. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2074-2080.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Algorithms for scoring coreference chains", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Linguistic Coreference Workshop at The First International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "563--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the Linguistic Coreference Workshop at The First In- ternational Conference on Language Resources and Evaluation, pages 563-566.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unsupervised event coreference resolution", "authors": [ { "first": "Adrian", "middle": [], "last": "Cosmin", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Bejan", "suffix": "" }, { "first": "", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2014, "venue": "Computational Linguistics", "volume": "40", "issue": "2", "pages": "311--347", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cosmin Adrian Bejan and Sanda Harabagiu. 2014. Un- supervised event coreference resolution. Computa- tional Linguistics 40(2):311-347.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A unified event coreference resolution by integrating multiple resolvers", "authors": [ { "first": "Bin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifth International Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "102--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bin Chen, Jian Su, Sinno Jialin Pan, and Chew Lim Tan. 2011. 
A unified event coreference resolution by integrating multiple resolvers. In Proceedings of the Fifth International Conference on Natural Language Processing. pages 102-110.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Chinese event coreference resolution: Understanding the state of the art", "authors": [ { "first": "Chen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 6th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "822--828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Chen and Vincent Ng. 2013. Chinese event coreference resolution: Understanding the state of the art. In Proceedings of the 6th International Joint Conference on Natural Language Processing. pages 822-828.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Chinese event coreference resolution: An unsupervised probabilistic model rivaling supervised resolvers", "authors": [ { "first": "Chen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Human Language Technologies: The", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Chen and Vincent Ng. 2015. Chinese event coreference resolution: An unsupervised probabilis- tic model rivaling supervised resolvers. In Proceed- ings of Human Language Technologies: The 2015", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1097--1107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics. pages 1097-1107.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Joint inference over a lightly supervised information extraction pipeline: Towards event coreference resolution for resourcescarce languages", "authors": [ { "first": "Chen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 30th AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2913--2920", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Chen and Vincent Ng. 2016. Joint inference over a lightly supervised information extraction pipeline: Towards event coreference resolution for resource- scarce languages. In Proceedings of the 30th AAAI Conference on Artificial Intelligence. pages 2913- 2920.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Graph-based event coreference resolution", "authors": [ { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4)", "volume": "", "issue": "", "pages": "54--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng Chen and Heng Ji. 2009. Graph-based event coreference resolution. 
In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4), pages 54-57.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Using semantic relations to solve event coreference in text", "authors": [ { "first": "Agata", "middle": [], "last": "Cybulska", "suffix": "" }, { "first": "Piek", "middle": [], "last": "Vossen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the LREC Workshop on Semantic Relations-II Enhancing Resources and Applications", "volume": "", "issue": "", "pages": "60--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agata Cybulska and Piek Vossen. 2012. Using se- mantic relations to solve event coreference in text. In Proceedings of the LREC Workshop on Semantic Relations-II Enhancing Resources and Applications, pages 60-67.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12:2121-2159.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Easy victories and uphill battles in coreference resolution", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1971--1982", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971-1982.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A joint model for entity analysis: Coreference, typing, and linking", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "477--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics 2:477-490.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "WordNet: An Electronical Lexical Database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronical Lexical Database. 
MIT Press, Cambridge, MA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "C\u00edcero Nogueira dos Santos, and Ruy Luiz Milidiu", "authors": [ { "first": "", "middle": [], "last": "Eraldo Rezende Fernandes", "suffix": "" } ], "year": 2014, "venue": "Computational Linguistics", "volume": "40", "issue": "4", "pages": "801--835", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eraldo Rezende Fernandes, C\u00edcero Nogueira dos San- tos, and Ruy Luiz Milidiu. 2014. Latent trees for coreference resolution. Computational Linguistics 40(4):801-835.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Softmaxmargin CRFs: Training log-linear models with cost functions", "authors": [ { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "733--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Gimpel and Noah A Smith. 2010. Softmax- margin CRFs: Training log-linear models with cost functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, pages 733-736.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "RPI BLENDER TAC-KBP2015 system description", "authors": [ { "first": "Yu", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Di", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Dian", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaoman", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Xiaobin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yadong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Eighth Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Hong, Di Lu, Dian Yu, Xiaoman Pan, Xiaobin Wang, Yadong Chen, Lifu Huang, and Heng Ji. 2015. RPI BLENDER TAC-KBP2015 system de- scription. In Proceedings of the Eighth Text Analysis Conference.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Event coreference for information extraction", "authors": [ { "first": "Kevin", "middle": [], "last": "Humphreys", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "Saliha", "middle": [], "last": "Azzam", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ACL/EACL Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts", "volume": "", "issue": "", "pages": "75--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Humphreys, Robert Gaizauskas, and Saliha Az- zam. 1997. Event coreference for information ex- traction. 
In Proceedings of the ACL/EACL Work- shop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, pages 75-81.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Event linking with sentential features from convolutional neural networks", "authors": [ { "first": "Sebastian", "middle": [], "last": "Krause", "suffix": "" }, { "first": "Feiyu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Weissenborn", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "239--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Krause, Feiyu Xu, Hans Uszkoreit, and Dirk Weissenborn. 2016. Event linking with sentential features from convolutional neural networks. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 239-249.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Joint entity and event coreference resolution across documents", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "489--500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489-500.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Supervised within-document event coreference using information propagation", "authors": [ { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "4539--4544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengzhong Liu, Jun Araki, Eduard Hovy, and Teruko Mitamura. 2014. Supervised within-document event coreference using information propagation. 
In Pro- ceedings of the Ninth International Conference on Language Resources and Evaluation, pages 4539- 4544.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "CMU-LTI at KBP 2016 event nugget track", "authors": [ { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Ninth Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengzhong Liu, Jun Araki, Teruko Mitamura, and Ed- uard Hovy. 2016. CMU-LTI at KBP 2016 event nugget track. In Proceedings of the Ninth Text Anal- ysis Conference.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "UTD's event nugget detection and coreference system at KBP", "authors": [ { "first": "Jing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Ninth Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Lu and Vincent Ng. 2016. UTD's event nugget detection and coreference system at KBP 2016. In Proceedings of the Ninth Text Analysis Conference.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Joint inference for event coreference resolution", "authors": [ { "first": "Jing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Venugopal", "suffix": "" }, { "first": "Vibhav", "middle": [], "last": "Gogate", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 26th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3264--3275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Lu, Deepak Venugopal, Vibhav Gogate, and Vin- cent Ng. 2016. Joint inference for event corefer- ence resolution. In Proceedings of the 26th Inter- national Conference on Computational Linguistics, pages 3264-3275.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "On coreference resolution performance metrics", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. 
In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Process- ing, pages 25-32.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Improving event coreference by context extraction and dynamic feature weighting", "authors": [ { "first": "Katie", "middle": [], "last": "Mcconky", "suffix": "" }, { "first": "Rakesh", "middle": [], "last": "Nagi", "suffix": "" }, { "first": "Moises", "middle": [], "last": "Sudit", "suffix": "" }, { "first": "William", "middle": [], "last": "Hughes", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support", "volume": "", "issue": "", "pages": "38--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katie McConky, Rakesh Nagi, Moises Sudit, and William Hughes. 2012. Improving event co- reference by context extraction and dynamic feature weighting. In Proceedings of the 2012 IEEE Inter- national Multi-Disciplinary Conference on Cogni- tive Methods in Situation Awareness and Decision Support, pages 38-43.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Overview of TAC-KBP 2016 event nugget track", "authors": [ { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" }, { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Ninth Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teruko Mitamura, Zhengzhong Liu, and Eduard Hovy. 2016. Overview of TAC-KBP 2016 event nugget track. In Proceedings of the Ninth Text Analysis Conference.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Supervised noun phrase coreference research: The first fifteen years", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1396--1411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the 48th Annual Meeting of the Association for Com- putational Linguistics. 
pages 1396-1411.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "New York University 2016 system for KBP event nugget: A deep learning approach", "authors": [ { "first": "Adam", "middle": [], "last": "Thien Huu Nguyen", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of Ninth Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen, Adam Meyers, and Ralph Grish- man. 2016. New York University 2016 system for KBP event nugget: A deep learning approach. In Proceedings of Ninth Text Analysis Conference.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Event detection and co-reference with minimal supervision", "authors": [ { "first": "Haoruo", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Yangqiu", "middle": [], "last": "Song", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "392--402", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoruo Peng, Yangqiu Song, and Dan Roth. 2016. Event detection and co-reference with minimal su- pervision. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Pro- cessing. pages 392-402.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "BLANC: Implementing the Rand Index for coreference evaluation", "authors": [ { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2011, "venue": "Natural Language Engineering", "volume": "17", "issue": "4", "pages": "485--510", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Recasens and Eduard Hovy. 2011. BLANC: Implementing the Rand Index for coreference eval- uation. Natural Language Engineering 17(4):485- 510.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Event coreference resolution using mincut based graph clustering", "authors": [ { "first": "S", "middle": [], "last": "Sangeetha", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Arock", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Fourth International Workshop on Computer Networks & Communications pages", "volume": "", "issue": "", "pages": "253--260", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Sangeetha and Michael Arock. 2012. Event coref- erence resolution using mincut based graph clus- tering. 
In Proceedings of the Fourth International Workshop on Computer Networks & Communica- tions pages 253-260.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "From light to rich ERE: Annotation of entities, relations, and events", "authors": [ { "first": "Zhiyi", "middle": [], "last": "Song", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Bies", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Strassel", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Riese", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Mott", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Ellis", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Wright", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Kulick", "suffix": "" }, { "first": "Neville", "middle": [], "last": "Ryant", "suffix": "" }, { "first": "Xiaoyi", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd Workshop on EVENTS", "volume": "", "issue": "", "pages": "89--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ERE: Annotation of entities, relations, and events. In Proceedings of the 3rd Workshop on EVENTS, pages 89-98.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A modeltheoretic coreference scoring scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth Message Understanding Conference", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceed- ings of the Sixth Message Understanding Confer- ence, pages 45-52.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Learning anaphoricity and antecedent ranking features for coreference resolution", "authors": [ { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1416--1426", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and an- tecedent ranking features for coreference resolution. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1416-1426.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A hierarchical distance-dependent Bayesian model for event coreference resolution", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Frazier", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "517--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishan Yang, Claire Cardie, and Peter Frazier. 2015. A hierarchical distance-dependent Bayesian model for event coreference resolution. Transactions of the Association for Computational Linguistics 3:517- 528.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Unary factors for the three tasks, the variables they are connected to, and the possible values of the variables. Unary factors encode taskspecific features. Each factor is connected to the corresponding output node. The features associated with a factor are used to predict the value of the output node it is connected to when a model is run independently of other models.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Binary and ternary factors. These higherorder factors capture cross-task interactions. The binary anaphoricity and trigger factors encourage anaphoric mentions to be triggers. The binary anaphoricity and coreference factors encourage non-anaphoric mentions to start a NEW coreference cluster. The ternary trigger and coreference factors encourage coreferent mentions to be triggers.", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "Features representing a non-NULL candidate antecedent, m i : m i 's word, m i 's lemma, whether m i and m j have the same lemma, and feature conjunctions including: (1) m i 's word paired with m j 's word, (2) m i 's lemma paired with m j 's lemma, (3) the sentence distance between m i and m j paired with m i 's lemma and m j 's lemma, (4) the mention distance between m i and m j paired with m i 's lemma and m j 's lemma, (5) a quadruple consisting of m i and m j 's subjects and their lemmas, and (6) a quadruple consisting of m i and m j 's objects and their lemmas.", "type_str": "figure", "uris": null }, "TABREF0": { "num": null, "content": "", "html": null, "text": "Georges Cipriani [P erson] , {left}ev1 the prison [Origin] in Ensisheim in northern France on parole on Wednesday [T ime] . He [P erson] {departed}ev2 Ensisheim [Origin] in a police vehicle [Instrument] bound for an open prison near Strasbourg.", "type_str": "table" }, "TABREF1": { "num": null, "content": "
", "html": null, "text": "Event coreference resolution example.", "type_str": "table" }, "TABREF3": { "num": null, "content": "
", "html": null, "text": "Results of all three tasks on the KBP 2016 evaluation sets. The KBP2016 results are those achieved by the best-performing coreference resolver in the official KBP 2016 evaluation. \u2206 is the performance difference between the JOINT model and the corresponding INDEP. model. All results are expressed in terms of F-score.", "type_str": "table" }, "TABREF4": { "num": null, "content": "
English Chinese
Coref Trigger Anaph Coref Trigger Anaph
INDEP. 31.28 48.82 27.35 25.84 39.82 19.31
INDEP.+CorefTrigger +0.39 +0.42 \u22120.05 +0.02
JOINT\u2212CorefTrigger +0.56 \u22120.06 +4.41 +0.19 +0.35
JOINT\u2212CorefAnaph +0.63 +0.66 +1.46 +1.50 +0.88 +3.34 +0.28
JOINT\u2212TriggerAnaph +1.89 +0.50 +4.01 +1.65 +0.50 +1.79
JOINT +1.80 +0.48 +4.59 +1.95 +0.71 +4.02
", "html": null, "text": "+0.95 +0.56 \u22120.37 INDEP.+CorefAnaph +0.40 \u22120.08 +3.45 +0.37 +0.04 \u22120.11 INDEP.+TriggerAnaph +0.11 +0.38 +1.35 +0.14 +0.52", "type_str": "table" }, "TABREF5": { "num": null, "content": "", "html": null, "text": "Results of model ablations on the KBP 2016 evaluation sets. Each row of ablation results is obtained by either adding one type of interaction factor to the INDEP. model or deleting one type of interaction factor from the JOINT model. For each column, the results are expressed in terms of changes to the INDEP. model's F-score shown in row 1.", "type_str": "table" } } } }