{ "paper_id": "P08-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:34:44.244248Z" }, "title": "Refining Event Extraction through Cross-document Inference", "authors": [ { "first": "Heng", "middle": [], "last": "Ji", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University New York", "location": { "postCode": "10003", "region": "NY", "country": "USA" } }, "email": "hengji@cs.nyu.edu" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University New York", "location": { "postCode": "10003", "region": "NY", "country": "USA" } }, "email": "grishman@cs.nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We apply the hypothesis of \"One Sense Per Discourse\" (Yarowsky, 1995) to information extraction (IE), and extend the scope of \"discourse\" from one single document to a cluster of topically-related documents. We employ a similar approach to propagate consistent event arguments across sentences and documents. Combining global evidence from related documents with local decisions, we design a simple scheme to conduct cross-document inference for improving the ACE event extraction task 1. Without using any additional labeled data this new approach obtained 7.6% higher F-Measure in trigger labeling and 6% higher F-Measure in argument labeling over a state-of-the-art IE system which extracts events independently for each sentence.", "pdf_parse": { "paper_id": "P08-1030", "_pdf_hash": "", "abstract": [ { "text": "We apply the hypothesis of \"One Sense Per Discourse\" (Yarowsky, 1995) to information extraction (IE), and extend the scope of \"discourse\" from one single document to a cluster of topically-related documents. We employ a similar approach to propagate consistent event arguments across sentences and documents. Combining global evidence from related documents with local decisions, we design a simple scheme to conduct cross-document inference for improving the ACE event extraction task 1. Without using any additional labeled data this new approach obtained 7.6% higher F-Measure in trigger labeling and 6% higher F-Measure in argument labeling over a state-of-the-art IE system which extracts events independently for each sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Identifying events of a particular type within individual documents -'classical' information extraction -remains a difficult task. Recognizing the different forms in which an event may be expressed, distinguishing events of different types, and finding the arguments of an event are all challenging tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Fortunately, many of these events will be reported multiple times, in different forms, both within the same document and within topicallyrelated documents (i.e. a collection of documents sharing participants in potential events). We can 1 http://www.nist.gov/speech/tests/ace/ take advantage of these alternate descriptions to improve event extraction in the original document, by favoring consistency of interpretation across sentences and documents. 
Several recent studies involving specific event types have stressed the benefits of going beyond traditional singledocument extraction; in particular, Yangarber (2006) has emphasized this potential in his work on medical information extraction. In this paper we demonstrate that appreciable improvements are possible over the variety of event types in the ACE (Automatic Content Extraction) evaluation through the use of cross-sentence and cross-document evidence.", "cite_spans": [ { "start": 603, "end": 619, "text": "Yangarber (2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As we shall describe below, we can make use of consistency at several levels: consistency of word sense across different instances of the same word in related documents, and consistency of arguments and roles across different mentions of the same or related events. Such methods allow us to build dynamic background knowledge as required to interpret a document and can compensate for the limited annotated training data which can be provided for each event type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The event extraction task we are addressing is that of the Automatic Content Extraction (ACE) evaluations 2 . ACE defines the following terminology: entity: an object or a set of objects in one of the semantic categories of interest mention: a reference to an entity (typically, a noun phrase) event trigger: the main word which most clearly expresses an event occurrence event arguments: the mentions that are involved in an event (participants) event mention: a phrase or sentence within which an event is described, including trigger and arguments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACE Event Extraction Task", "sec_num": "2.1" }, { "text": "The 2005 ACE evaluation had 8 types of events, with 33 subtypes; for the purpose of this paper, we will treat these simply as 33 distinct event types. For example, for a sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACE Event Extraction Task", "sec_num": "2.1" }, { "text": "Barry Diller on Wednesday quit as chief of Vivendi Universal Entertainment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACE Event Extraction Task", "sec_num": "2.1" }, { "text": "the event extractor should detect a \"Person-nel_End-Position\" event mention, with the trigger word, the position, the person who quit the position, the organization, and the time during which the event happened: We define the following standards to determine the correctness of an event mention:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACE Event Extraction Task", "sec_num": "2.1" }, { "text": "\u2022 A trigger is correctly labeled if its event type and offsets match a reference trigger. \u2022 An argument is correctly identified if its event type and offsets match any of the reference argument mentions. \u2022 An argument is correctly identified and classified if its event type, offsets, and role match any of the reference argument mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACE Event Extraction Task", "sec_num": "2.1" }, { "text": "We use a state-of-the-art English IE system as our baseline (Grishman et al., 2005) . This system extracts events independently for each sentence. 
Its training and test procedures are as follows.", "cite_spans": [ { "start": 60, "end": 83, "text": "(Grishman et al., 2005)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "A Baseline Within-Sentence Event Tagger", "sec_num": "2.2" }, { "text": "The system combines pattern matching with statistical models. For every event mention in the ACE training corpus, patterns are constructed based on the sequences of constituent heads separating the trigger and arguments. In addition, a set of Maximum Entropy based classifiers are trained:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Baseline Within-Sentence Event Tagger", "sec_num": "2.2" }, { "text": "\u2022 Trigger Labeling: to distinguish event mentions from non-event-mentions, to classify event mentions by type; \u2022 Argument Classifier: to distinguish arguments from non-arguments; \u2022 Role Classifier: to classify arguments by argument role.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Baseline Within-Sentence Event Tagger", "sec_num": "2.2" }, { "text": "\u2022 Reportable-Event Classifier: Given a trigger, an event type, and a set of arguments, to determine whether there is a reportable event mention. In the test procedure, each document is scanned for instances of triggers from the training corpus. When an instance is found, the system tries to match the environment of the trigger against the set of patterns associated with that trigger. This pattern-matching process, if successful, will assign some of the mentions in the sentence as arguments of a potential event mention. The argument classifier is applied to the remaining mentions in the sentence; for any argument passing that classifier, the role classifier is used to assign a role to it. Finally, once all arguments have been assigned, the reportable-event classifier is applied to the potential event mention; if the result is successful, this event mention is reported.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Baseline Within-Sentence Event Tagger", "sec_num": "2.2" }, { "text": "In this section we shall present our motivations based on error analysis for the baseline event tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivations", "sec_num": "3" }, { "text": "Across a heterogeneous document corpus, a particular verb can sometimes be trigger and sometimes not, and can represent different event types. However, for a collection of topically-related documents, the distribution may be much more convergent. We investigate this hypothesis by automatically obtaining 25 related documents for each test text. As we can see from the table, the likelihood of a candidate word being an event trigger in the test document is closer to its distribution in the collection of related documents than the uniform training corpora. So if we can determine the sense (event type) of a word in the related documents, this will allow us to infer its sense in the test document. 
In this way related documents can help recover event mentions missed by within-sentence extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "One Trigger Sense Per Cluster", "sec_num": "3.1" }, { "text": "For example, in a document about \"the advance into Baghdad\":", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "One Trigger Sense Per Cluster", "sec_num": "3.1" }, { "text": "Example 1: [Test Sentence]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "One Trigger Sense Per Cluster", "sec_num": "3.1" }, { "text": "Most US army commanders believe it is critical to pause the breakneck advance towards Baghdad to secure the supply lines and make sure weapons are operable and troops resupplied\u2026.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "One Trigger Sense Per Cluster", "sec_num": "3.1" }, { "text": "British and US forces report gains in the advance on Baghdad and take control of Umm Qasr, despite a fierce sandstorm which slows another flank. \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Sentences from Related Documents]", "sec_num": null }, { "text": "The baseline event tagger is not able to detect \"advance\" as a \"Movement_Transport\" event trigger because there is no pattern \"advance towards [Place]\" in the ACE training corpora (\"advance\" by itself is too ambiguous). The training data, however, does include the pattern \"advance on [Place]\", which allows the instance of \"advance\" in the related documents to be successfully identified with high confidence by pattern matching as an event. This provides us much stronger \"feedback\" confidence in tagging 'advance' in the test sentence as a correct trigger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Sentences from Related Documents]", "sec_num": null }, { "text": "On the other hand, if a word is not tagged as an event trigger in most related documents, then it's less likely to be correct in the test sentence despite its high local confidence. For example, in a document about \"assessment of Russian president Putin\":", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Sentences from Related Documents]", "sec_num": null }, { "text": "Example 2: [Test Sentence]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Sentences from Related Documents]", "sec_num": null }, { "text": "But few at the Kremlin forum suggested that Putin's own standing among voters will be hurt by Russia's apparent diplomacy failures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Sentences from Related Documents]", "sec_num": null }, { "text": "Putin boosted ties with the United States by throwing his support behind its war on terrorism after the Sept. 11 attacks, but the Iraq war has hurt the relationship. \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Sentences from Related Documents]", "sec_num": null }, { "text": "The word \"hurt\" in the test sentence is mistakenly identified as a \"Life_Injure\" trigger with high local confidence (because the within-sentence extractor misanalyzes \"voters\" as the object of \"hurt\" and so matches the pattern \"[Person] be hurt\"). 
Based on the fact that many other instances of \"hurt\" are not \"Life_Injure\" triggers in the related documents, we can successfully remove this wrong event mention in the test document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Sentences from Related Documents]", "sec_num": null }, { "text": "Inspired by the observation about trigger distribution, we propose a similar hypothesis -one argument role per cluster for event arguments. In other words, each entity plays the same argument role, or no role, for events with the same type in a collection of related documents. For example, The above test sentence doesn't include an explicit trigger word to indicate \"Vivendi\" as a \"seller\" of a \"Transaction_Transfer-Ownership\" event mention, but \"Vivendi\" is correctly identified as \"seller\" in many other related sentences (by matching patterns \"[Seller] sell\" and \"buy [Seller]'s\"). So we can incorporate such additional information to enhance the confidence of \"Vivendi\" as a \"seller\" in the test sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "One Argument Role Per Cluster", "sec_num": "3.2" }, { "text": "On the other hand, we can remove spurious arguments with low cross-document frequency and confidence. In the following example,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "One Argument Role Per Cluster", "sec_num": "3.2" }, { "text": "The Davao Medical Center, a regional government hospital, recorded 19 deaths with 50 wounded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 4: [Test Sentence]", "sec_num": null }, { "text": "\"the Davao Medical Center\" is mistakenly tagged as \"Place\" for a \"Life_Die\" event mention. But the same annotation for this mention doesn't appear again in the related documents, so we can determine it's a spurious argument.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 4: [Test Sentence]", "sec_num": null }, { "text": "Based on the above motivations we propose to incorporate global evidence from a cluster of related documents to refine local decisions. This section gives more details about the baseline withinsentence event tagger, and the information retrieval system we use to obtain related documents. In the next section we shall focus on describing the inference procedure. Figure 1 depicts the general procedure of our approach. EMSet represents a set of event mentions which is gradually updated. ", "cite_spans": [], "ref_spans": [ { "start": 363, "end": 371, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "System Approach Overview", "sec_num": "4" }, { "text": "For each event mention in a test document t , the baseline Maximum Entropy based classifiers produce three types of confidence values:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Within-Sentence Event Extraction", "sec_num": "4.2" }, { "text": "\u2022 LConf(trigger,etype): The probability of a string trigger indicating an event mention with type etype; if the event mention is produced by pattern matching then assign confidence 1. \u2022 LConf(arg, etype): The probability that a mention arg is an argument of some particular event type etype. 
\u2022 LConf(arg, etype, role): If arg is an argument with event type etype, the probability of arg having some particular role.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Within-Sentence Event Extraction", "sec_num": "4.2" }, { "text": "We apply within-sentence event extraction to get an initial set of event mentions 0 t EMSet , and conduct cross-sentence inference (details will be presented in section 5) to get an updated set of event mentions 1 t EMSet .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Within-Sentence Event Extraction", "sec_num": "4.2" }, { "text": "We then use the INDRI retrieval system (Strohman et al., 2005) ", "cite_spans": [ { "start": 39, "end": 62, "text": "(Strohman et al., 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval", "sec_num": "4.3" }, { "text": "The central idea of inference is to obtain document-wide and cluster-wide statistics about the frequency with which triggers and arguments are associated with particular types of events, and then use this information to correct event and argument identification and classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Inference", "sec_num": "5" }, { "text": "For a set of event mentions we tabulate the following document-wide and cluster-wide confidence-weighted frequencies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Inference", "sec_num": "5" }, { "text": "\u2022 for each trigger string, the frequency with which it appears as the trigger of an event of a particular type;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Inference", "sec_num": "5" }, { "text": "\u2022 for each event argument string and the names coreferential with or related to the argument, the frequency of the event type;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Inference", "sec_num": "5" }, { "text": "\u2022 for each event argument string and the names coreferential with or related to the argument, the frequency of the event type and role. Besides these frequencies, we also define the following margin metric to compute the confidence of the best (most frequent) event type or role:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Inference", "sec_num": "5" }, { "text": "A large margin indicates greater confidence in the most frequent value. We summarize the frequency and confidence metrics in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 132, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Margin = (WeightedFrequency (most frequent value) -WeightedFrequency (second most freq value))/ WeightedFrequency (second most freq value)", "sec_num": null }, { "text": "Based on these confidence metrics, we designed the inference rules in Table 4 . These rules are applied in the order (1) to (9) based on the principle of improving 'local' information before global 3 We tested different N \u2208 [10, 75] on dev set; and N=25 achieved best gains.", "cite_spans": [ { "start": 198, "end": 199, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Margin = (WeightedFrequency (most frequent value) -WeightedFrequency (second most freq value))/ WeightedFrequency (second most freq value)", "sec_num": null }, { "text": "propagation. 
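As an illustration only (the paper publishes no code), the short Python sketch below shows one plausible way to tabulate the cluster-wide, confidence-weighted trigger frequencies and the margin metric defined above; the tuple layout, function names and toy values are our own assumptions, not the authors' implementation.

from collections import defaultdict

# Illustrative only: the (trigger, etype, local_confidence) layout is assumed.
def trigger_freq(event_mentions):
    # Tabulate a cluster-wide, confidence-weighted frequency table
    # (in the spirit of XDoc-Trigger-Freq): trigger -> event type -> summed confidence.
    freq = defaultdict(lambda: defaultdict(float))
    for trigger, etype, lconf in event_mentions:
        freq[trigger][etype] += lconf
    return freq

def margin(freq_by_etype):
    # Margin = (weight of most frequent value - weight of second most
    # frequent value) / weight of second most frequent value.
    weights = sorted(freq_by_etype.values(), reverse=True)
    if len(weights) < 2 or weights[1] == 0.0:
        return float('inf')  # a single candidate value is maximally confident
    return (weights[0] - weights[1]) / weights[1]

# Toy example: 'advance' is tagged as Movement_Transport in most related documents.
mentions = [('advance', 'Movement_Transport', 1.0),
            ('advance', 'Movement_Transport', 0.8),
            ('advance', 'Conflict_Attack', 0.3)]
stats = trigger_freq(mentions)
print(margin(stats['advance']))  # large margin -> safe to propagate Movement_Transport

When such a margin exceeds its threshold, the classification-adjusting rules (2), (6) and (8) in Table 4 propagate the most frequent event type or role to the other mentions.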
Although the rules may seem complex, they basically serve two functions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Margin = (WeightedFrequency (most frequent value) -WeightedFrequency (second most freq value))/ WeightedFrequency (second most freq value)", "sec_num": null }, { "text": "\u2022 to remove triggers and arguments with low (local or cluster-wide) confidence;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Margin = (WeightedFrequency (most frequent value) -WeightedFrequency (second most freq value))/ WeightedFrequency (second most freq value)", "sec_num": null }, { "text": "\u2022 to adjust trigger and argument identification and classification to achieve (document-wide or cluster-wide) consistency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Margin = (WeightedFrequency (most frequent value) -WeightedFrequency (second most freq value))/ WeightedFrequency (second most freq value)", "sec_num": null }, { "text": "In this section we present the results of applying this inference method to improve ACE event extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results and Analysis", "sec_num": "6" }, { "text": "We used 10 newswire texts from ACE 2005 training corpora (from March to May of 2003) as our development set, and then conduct blind test on a separate set of 40 ACE 2005 newswire texts. For each test text we retrieved 25 related texts from English TDT5 corpus which in total consists of 278,108 texts (from April to September of 2003).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "6.1" }, { "text": "We select the thresholds (\u03b4 k with k=1~13) for various confidence metrics by optimizing the Fmeasure score of each rule on the development set, as shown in Figure 2 and 3 as follows.", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 164, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Confidence Metric Thresholding", "sec_num": "6.2" }, { "text": "Each curve in Figure 2 and 3 shows the effect on precision and recall of varying the threshold for an individual rule. The labeled point on each curve shows the best F-measure that can be obtained on the development set by adjusting the threshold for that rule. The gain obtained by applying successive rules can be seen in the progression of successive points towards higher recall and, for argument labeling, precision 4 . Table 5 shows the overall Precision (P), Recall (R) and F-Measure (F) scores for the blind test set. In addition, we also measured the performance of two human annotators who prepared the ACE 2005 training data on 28 newswire texts (a subset of the blind test set). The final key was produced by review and adjudication of the two annotations.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 22, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 425, "end": 432, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Confidence Metric Thresholding", "sec_num": "6.2" }, { "text": "Both cross-sentence and cross-document inferences provided significant improvement over the baseline with local confidence thresholds controlled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Performance", "sec_num": "6.3" }, { "text": "We conducted the Wilcoxon Matched-Pairs Signed-Ranks Test on a document basis. 
The results show that the improvement using crosssentence inference is significant at a 99.9% confidence level for both trigger and argument labeling; adding cross-document inference is significant at a 99.9% confidence level for trigger labeling and 93.4% confidence level for argument labeling. 4 We didn't show the classification adjusting rules (2), (6) and (8) here because of their relatively small impact on dev set.", "cite_spans": [ { "start": 376, "end": 377, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overall Performance", "sec_num": "6.3" }, { "text": "From table 5 we can see that for trigger labeling our approach dramatically enhanced recall (22.9% improvement) with some loss (7.4%) in precision. This precision loss was much larger than that for the development set (0.3%). This indicates that the trigger propagation thresholds optimized on the development set were too low for the blind test set and thus more spurious triggers got propagated. The improved trigger labeling is better than one human annotator and only 4.7% worse than another.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "For argument labeling we can see that crosssentence inference improved both identification (3.7% higher F-Measure) and classification (6.1% higher accuracy); and cross-document inference mainly provided further gains (1.9%) in classification. This shows that identification consistency may be achieved within a narrower context while the classification task favors more global background knowledge in order to solve some difficult cases. This matches the situation of human annotation as well: we may decide whether a mention is involved in some particular event or not by reading and analyzing the target sentence itself; but in order to decide the argument's role we may need to frequently refer to wider discourse in order to infer and confirm our decision. In fact sometimes it requires us to check more similar web pages or even wikipedia databases. This was exactly the intuition of our approach. We should also note that human annotators label arguments based on perfect entity mentions, but our system used the output from the IE system. So the gap was also partially due to worse entity detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "Error analysis on the inference procedure shows that the propagation rules (3), (4), (7) and (9) produced a few extra false alarms. For trigger labeling, most of these errors appear for support verbs such as \"take\" and \"get\" which can only represent an event mention together with other verbs or nouns. Some other errors happen on nouns and adjectives. These are difficult tasks even for human annotators. As shown in table 5 the inter-annotator agreement on trigger identification is only about 40%. Besides some obvious overlooked cases (it's probably difficult for a human to remember 33 different event types during annotation), most difficulties were caused by judging generic verbs, nouns and adjectives. In fact, compared to a statistical tagger trained on the corpus after expert adjudication, a human annotator tends to make more mistakes in trigger classification. 
For example it's hard to decide whether \"named\" represents a \"Person-nel_Nominate\" or \"Personnel_Start-Position\" event mention; \"hacked to death\" represents a \"Life_Die\" or \"Conflict_Attack\" event mention without following more specific annotation guidelines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "The trigger labeling task described in this paper is in part a task of word sense disambiguation (WSD), so we have used the idea of sense consistency introduced in (Yarowsky, 1995) , extending it to operate across related documents. Almost all the current event extraction systems focus on processing single documents and, except for coreference resolution, operate a sentence at a time (Grishman et al., 2005; Ahn, 2006; Hardy et al., 2006) .", "cite_spans": [ { "start": 164, "end": 180, "text": "(Yarowsky, 1995)", "ref_id": "BIBREF9" }, { "start": 387, "end": 410, "text": "(Grishman et al., 2005;", "ref_id": "BIBREF1" }, { "start": 411, "end": 421, "text": "Ahn, 2006;", "ref_id": "BIBREF0" }, { "start": 422, "end": 441, "text": "Hardy et al., 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We share the view of using global inference to improve event extraction with some recent research. Yangarber et al. (Yangarber and Jokipii, 2005; Yangarber, 2006; Yangarber et al., 2007) applied cross-document inference to correct local extraction results for disease name, location and start/end time. Mann (2007) encoded specific inference rules to improve extraction of CEO (name, start year, end year) in the MUC management succession task. In addition, Patwardhan and Riloff (2007) also demonstrated that selectively applying event patterns to relevant regions can improve MUC event extraction. We expand the idea to more general event types and use informa-tion retrieval techniques to obtain wider background knowledge from related documents.", "cite_spans": [ { "start": 116, "end": 145, "text": "(Yangarber and Jokipii, 2005;", "ref_id": "BIBREF8" }, { "start": 146, "end": 162, "text": "Yangarber, 2006;", "ref_id": "BIBREF7" }, { "start": 163, "end": 186, "text": "Yangarber et al., 2007)", "ref_id": "BIBREF6" }, { "start": 303, "end": 314, "text": "Mann (2007)", "ref_id": "BIBREF3" }, { "start": 458, "end": 486, "text": "Patwardhan and Riloff (2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "One of the initial goals for IE was to create a database of relations and events from the entire input corpus, and allow further logical reasoning on the database. The artificial constraint that extraction should be done independently for each document was introduced in part to simplify the task and its evaluation. In this paper we propose a new approach to break down the document boundaries for event extraction. We gather together event extraction results from a set of related documents, and then apply inference and constraints to enhance IE performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "8" }, { "text": "In the short term, the approach provides a platform for many byproducts. 
For example, we can naturally get an event-driven summary for the collection of related documents; the sentences including high-confidence events can be used as additional training data to bootstrap the event tagger; from related events in different timeframes we can derive entailment rules; the refined consistent events can serve better for other NLP tasks such as template based question-answering. The aggregation approach described here can be easily extended to improve relation detection and coreference resolution (two argument mentions referring to the same role of related events are likely to corefer). Ultimately we would like to extend the system to perform essential, although probably lightweight, event prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "8" }, { "text": "In this paper we don't consider event mention coreference resolution and so don't distinguish event mentions and events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This material is based upon work supported by the Defense Advanced Research Projects Agency under Contract No. HR0011-06-C-0023, and the Na-tional Science Foundation under Grant IIS-00325657. Any opinions, findings and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the U. S. Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": " XSent-Trigger-Freq(trigger, etype) The weighted frequency of string trigger appearing as the trigger of an event of type etype across all sentences within a document XDoc-Trigger-Freq (trigger, etype) The weighted frequency of string trigger appearing as the trigger of an event of type etype across all documents in a cluster XDoc-Trigger-BestFreq (trigger) Maximum over all etypes of XDoc-Trigger-Freq (trigger, etype) XDoc- Arg-Freq(arg, etype) The weighted frequency of arg appearing as an argument of an event of type etype across all documents in a cluster XDoc- Role-Freq(arg, etype, role) The weighted frequency of arg appearing as an argument of an event of type etype with role role across all documents in a cluster XDoc- Role-BestFreq(arg) Maximum over all etypes and roles of XDoc- Role-Freq(arg, etype, role) XSent-Trigger-Margin(trigger) The margin value of trigger in XSent- Trigger-Freq XDoc-Trigger-Margin(trigger) The margin value of trigger in XDoc-Trigger-Freq XDoc- Role-Margin(arg) The margin value of arg in XDoc-Role-Freq If XSent-Trigger-Margin(trigger) >\u03b4 4 , then propagate the most frequent etype to all event mentions with trigger in the document; and correct roles for corresponding arguments.", "cite_spans": [ { "start": 1, "end": 35, "text": "XSent-Trigger-Freq(trigger, etype)", "ref_id": null }, { "start": 167, "end": 201, "text": "XDoc-Trigger-Freq (trigger, etype)", "ref_id": null }, { "start": 328, "end": 359, "text": "XDoc-Trigger-BestFreq (trigger)", "ref_id": null }, { "start": 387, "end": 421, "text": "XDoc-Trigger-Freq (trigger, etype)", "ref_id": null }, { "start": 428, "end": 448, "text": "Arg-Freq(arg, etype)", "ref_id": null }, { "start": 570, "end": 597, "text": "Role-Freq(arg, etype, role)", "ref_id": null }, { "start": 734, "end": 752, "text": "Role-BestFreq(arg)", "ref_id": null }, { "start": 796, "end": 853, "text": "Role-Freq(arg, etype, role) XSent-Trigger-Margin(trigger)", "ref_id": null }, { "start": 892, "end": 933, 
"text": "Trigger-Freq XDoc-Trigger-Margin(trigger)", "ref_id": null }, { "start": 989, "end": 1005, "text": "Role-Margin(arg)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "If LConf(trigger, etype) > \u03b4 5 , then propagate etype to all unlabeled strings trigger in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule (3): Adjust Trigger Identification to Achieve Document-wide Consistency", "sec_num": null }, { "text": "If LConf(arg, etype) > \u03b4 6 , then in the document, for each sentence containing an event mention EM with etype, add any unlabeled mention in that sentence with the same head as arg as an argument of EM with role.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule (4): Adjust Argument Identification to Achieve Document-wide Consistency", "sec_num": null }, { "text": "If XDoc-Trigger-Freq (trigger, etype) < \u03b4 7 , then delete EM;If XDoc-Arg-Freq(arg, etype) < \u03b4 8 or XDoc-Role-Freq(arg, etype, role) < \u03b4 9 , then delete arg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule (5): Remove Triggers and Arguments with Low Cluster-wide Confidence", "sec_num": null }, { "text": "If XDoc-Trigger-Margin(trigger) >\u03b4 10 , then propagate most frequent etype to all event mentions with trigger in the cluster; and correct roles for corresponding arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule (6): Adjust Trigger Classification to Achieve Cluster-wide Consistency", "sec_num": null }, { "text": "If XDoc-Trigger-BestFreq (trigger) >\u03b4 11 , then propagate etype to all unlabeled strings trigger in the cluster, override the results of Rule (3) if conflict.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule (7): Adjust Trigger Identification to Achieve Cluster-wide Consistency", "sec_num": null }, { "text": "If XDoc-Role-Margin(arg) >\u03b4 12 , then propagate the most frequent etype and role to all arguments with the same head as arg in the entire cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule (8): Adjust Argument Classification to Achieve Cluster-wide Consistency", "sec_num": null }, { "text": "If XDoc-Role-BestFreq(arg) > \u03b4 13 , then in the cluster, for each sentence containing an event mention EM with etype, add any unlabeled mention in that sentence with the same head as arg as an argument of EM with role. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule (9): Adjust Argument Identification to Achieve Cluster-wide Consistency", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The stages of event extraction", "authors": [ { "first": "David", "middle": [], "last": "Ahn", "suffix": "" } ], "year": 2006, "venue": "Proc. COLING/ACL 2006 Workshop on Annotating and Reasoning about Time and Events. Sydney, Australia", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Ahn. 2006. The stages of event extraction. Proc. COLING/ACL 2006 Workshop on Annotating and Reasoning about Time and Events. Sydney, Aus- tralia.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "NYU's English ACE 2005 System Description", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "David", "middle": [], "last": "Westbrook", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Meyers", "suffix": "" } ], "year": 2005, "venue": "Proc. 
ACE 2005 Evaluation Workshop. Washington", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Grishman, David Westbrook and Adam Meyers. 2005. NYU's English ACE 2005 System Descrip- tion. Proc. ACE 2005 Evaluation Workshop. Wash- ington, US.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic Event Classification Using Surface Text Features", "authors": [], "year": 2006, "venue": "Proc. AAAI06 Workshop on Event Extraction and Synthesis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hilda Hardy, Vika Kanchakouskaya and Tomek Strzal- kowski. 2006. Automatic Event Classification Us- ing Surface Text Features. Proc. AAAI06 Workshop on Event Extraction and Synthesis. Boston, Massa- chusetts. US.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multi-document Relationship Fusion via Constraints on Probabilistic Databases", "authors": [ { "first": "Gideon", "middle": [], "last": "Mann", "suffix": "" } ], "year": 2007, "venue": "Proc. HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gideon Mann. 2007. Multi-document Relationship Fu- sion via Constraints on Probabilistic Databases. Proc. HLT/NAACL 2007. Rochester, NY, US.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Effective Information Extraction with Semantic Affinity Patterns and Relevant Regions", "authors": [ { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2007, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharth Patwardhan and Ellen Riloff. 2007. Effective Information Extraction with Semantic Affinity Pat- terns and Relevant Regions. Proc. EMNLP 2007. Prague, Czech Republic.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Indri: A Language-model based Search Engine for Complex Queries (extended version)", "authors": [ { "first": "Trevor", "middle": [], "last": "Strohman", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Metzler", "suffix": "" }, { "first": "Howard", "middle": [], "last": "Turtle", "suffix": "" }, { "first": "W. Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor Strohman, Donald Metzler, Howard Turtle and W. Bruce Croft. 2005. Indri: A Language-model based Search Engine for Complex Queries (ex- tended version). Technical Report IR-407, CIIR, Umass Amherst, US.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Combining Information about Epidemic Threats from Multiple Sources", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" }, { "first": "Clive", "middle": [], "last": "Best", "suffix": "" }, { "first": "Flavio", "middle": [], "last": "Peter Von Etter", "suffix": "" }, { "first": "David", "middle": [], "last": "Fuart", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Horby", "suffix": "" }, { "first": "", "middle": [], "last": "Steinberger", "suffix": "" } ], "year": 2007, "venue": "Proc. 
RANLP 2007 workshop on Multi-source, Multilingual Information Extraction and Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roman Yangarber, Clive Best, Peter von Etter, Flavio Fuart, David Horby and Ralf Steinberger. 2007. Combining Information about Epidemic Threats from Multiple Sources. Proc. RANLP 2007 work- shop on Multi-source, Multilingual Information Ex- traction and Summarization. Borovets, Bulgaria.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Verification of Facts across Document Boundaries", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" } ], "year": 2006, "venue": "Proc. International Workshop on Intelligent Information Access", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roman Yangarber. 2006. Verification of Facts across Document Boundaries. Proc. International Work- shop on Intelligent Information Access. Helsinki, Finland.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Redundancy-based Correction of Automatically Extracted Facts", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" }, { "first": "Lauri", "middle": [], "last": "Jokipii", "suffix": "" } ], "year": 2005, "venue": "Proc. HLT/EMNLP 2005. Vancouver, Canada", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roman Yangarber and Lauri Jokipii. 2005. Redundan- cy-based Correction of Automatically Extracted Facts. Proc. HLT/EMNLP 2005. Vancouver, Cana- da.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised Word Sense Disambiguation Rivaling Supervised Methods", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1995. Unsupervised Word Sense Dis- ambiguation Rivaling Supervised Methods. Proc. ACL 1995. Cambridge, MA, US.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Cross-doc Inference for Event Extraction" }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Trigger Labeling Performance with Confidence Thresholding on Dev Set" }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Argument Labeling Performance with Confidence Thresholding on Dev Set" }, "TABREF1": { "content": "
The statistics of some trigger examples are presented in table 2.
", "num": null, "html": null, "text": "", "type_str": "table" }, "TABREF2": { "content": "
Example 3:
[Test Sentence]
Vivendi earlier this week confirmed months of press speculation that it planned to shed its entertainment assets by the end of the year.
[Sentences from Related Documents]
Vivendi has been trying to sell assets to pay off huge debt, estimated at the end of last month at more than $13 billion.
Under the reported plans, Blackstone Group would buy Vivendi's theme park division, including Universal Studios Hollywood, Universal Orlando in Florida...
\u2026
", "num": null, "html": null, "text": "Vivendi earlier this week confirmed months of press speculation that it planned to shed its entertainment assets by the end of the year.", "type_str": "table" }, "TABREF3": { "content": "
to obtain the top N (N=25 in this paper) related documents.
For each related document r returned by INDRI, we repeat the within-sentence event extraction and cross-sentence inference procedure, and get an expanded event mention set EMSet^1_{t+r}. Then we apply cross-document inference to EMSet^1_{t+r} and get the final event mention output EMSet^2_t.
[Figure 1 (Cross-doc Inference for Event Extraction): Test doc -> Within-sent Event Extraction -> EMSet^0_t -> Cross-sent Inference -> EMSet^1_t -> Query Construction -> Information Retrieval over Unlabeled Corpora -> Related docs -> Within-sent Event Extraction -> EMSet^0_r -> Cross-sent Inference -> EMSet^1_r -> Cross-doc Inference -> EMSet^2_t]
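As a purely illustrative sketch (not code from the paper), the pipeline above can be rendered in Python roughly as follows; the callables passed in are hypothetical stand-ins for the components described in Sections 4 and 5.

# Hypothetical rendering of the Figure 1 pipeline; the callable parameters
# stand in for components the paper describes but does not publish.
def extract_with_cross_doc_inference(test_doc, related_docs,
                                     within_sentence_extract,
                                     cross_sentence_inference,
                                     cross_document_inference):
    # EMSet^0_t -> EMSet^1_t on the test document.
    emset_t1 = cross_sentence_inference(within_sentence_extract(test_doc))
    # Pool event mentions from every related document r into EMSet^1_{t+r}.
    pooled = list(emset_t1)
    for doc in related_docs:
        emset_r1 = cross_sentence_inference(within_sentence_extract(doc))
        pooled.extend(emset_r1)
    # Cross-document inference over the pooled set yields the final EMSet^2_t.
    return cross_document_inference(pooled, test_doc)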
", "num": null, "html": null, "text": ") related documents. We construct an INDRI query from the triggers and arguments, each weighted by local confidence and frequency in the test document. For each argument we also add other names coreferential with or bearing some ACE relation to the argument.", "type_str": "table" } } } }