{ "paper_id": "P15-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:11:26.572978Z" }, "title": "Generative Event Schema Induction with Entity Disambiguation", "authors": [ { "first": "Kiem-Hieu", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "LIMSI-CNRS", "location": {} }, "email": "" }, { "first": "Xavier", "middle": [], "last": "Tannier", "suffix": "", "affiliation": { "laboratory": "", "institution": "LIMSI-CNRS", "location": {} }, "email": "xtannier@limsi.fr" }, { "first": "Olivier", "middle": [], "last": "Ferret", "suffix": "", "affiliation": { "laboratory": "Laboratoire Vision et Ingnierie des Contenus", "institution": "LIST", "location": { "postCode": "F-91191", "settlement": "Gif-sur-Yvette" } }, "email": "" }, { "first": "Romaric", "middle": [], "last": "Besan\u00e7on", "suffix": "", "affiliation": { "laboratory": "Laboratoire Vision et Ingnierie des Contenus", "institution": "LIST", "location": { "postCode": "F-91191", "settlement": "Gif-sur-Yvette" } }, "email": "romaric.besancon@cea.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a generative model to event schema induction. Previous methods in the literature only use head words to represent entities. However, elements other than head words contain useful information. For instance, an armed man is more discriminative than man. Our model takes into account this information and precisely represents it using probabilistic topic distributions. We illustrate that such information plays an important role in parameter estimation. Mostly, it makes topic distributions more coherent and more discriminative. Experimental results on benchmark dataset empirically confirm this enhancement.", "pdf_parse": { "paper_id": "P15-1019", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a generative model to event schema induction. 
Previous methods in the literature only use head words to represent entities. However, elements other than head words contain useful information. For instance, an armed man is more discriminative than man. Our model takes this information into account and represents it precisely using probabilistic topic distributions. We illustrate that such information plays an important role in parameter estimation. Mostly, it makes topic distributions more coherent and more discriminative. Experimental results on a benchmark dataset empirically confirm this enhancement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Information Extraction was initially defined (and is still defined) by the MUC evaluations (Grishman and Sundheim, 1996) and more specifically by the task of template filling. The objective of this task is to assign event roles to individual textual mentions. A template defines a specific type of event (e.g. earthquakes), associated with semantic roles (or slots) held by entities (for earthquakes, their location, date, magnitude and the damage they caused (Jean-Louis et al., 2011)). Schema induction is the task of learning these templates from unlabeled text, with no supervision. We focus here on event schema induction and continue the trend of generative models proposed earlier for this task. The idea is to group together entities corresponding to the same role in an event template, based on the similarity of the relations that these entities hold with predicates. For example, in a corpus about terrorist attacks, entities that are objects of the verbs to kill or to attack can be grouped together and characterized by a role named VICTIM. The output of this identification operation is a set of clusters whose members are both words and relations, associated with their probability (see an example later in Figure 4 ). 
These clusters are not labeled but each of them represents an event slot.", "cite_spans": [ { "start": 105, "end": 120, "text": "Sundheim, 1996)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 1219, "end": 1227, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach here is to improve this initial idea by entity disambiguation. Some ambiguous entities, such as man or soldier, can match two different slots (victim or perpetrator). An entity such as terrorist can be mixed up with victims when articles relate that a terrorist has been killed by police (and is thus the object of to kill). Our hypothesis is that the immediate context of entities is helpful for disambiguating them. For example, the fact that man is associated with armed, dangerous, heroic or innocent can lead to a better attribution and definition of roles. We therefore introduce relations between entities and their attributes into the model by means of syntactic relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The document level, which is generally a central notion in topic modeling, is not used in our generative model. This results in a simpler, more intuitive model, where observations are generated from slots, which are defined by probabilistic distributions over entities, predicates and syntactic attributes. This model offers room for further extensions, since multiple observations on an entity can be represented in the same manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Model parameters are estimated by Gibbs sampling. 
We evaluate the performance of this approach through an automatic, empirical mapping between system slots and reference slots, in a way similar to previous work in the domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows: Section 2 briefly presents previous work; in Section 3, we detail our entity and relation representation; we describe our generative model in Section 4, before presenting our experiments and evaluations in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite efforts to make template filling as generic as possible, it still depends heavily on the type of events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Mixing generic processes with a restricted number of domain-specific rules (Freedman et al., 2011) or examples (Grishman and He, 2014) is a way to reduce the amount of effort needed to adapt a system to another domain. 
The approaches of On-demand information extraction (Hasegawa et al., 2004; Sekine, 2006) and Preemptive Information Extraction (Shinyama and Sekine, 2006) tried to overcome this difficulty in another way by exploiting templates induced from representative documents selected by queries.", "cite_spans": [ { "start": 75, "end": 98, "text": "(Freedman et al., 2011)", "ref_id": "BIBREF15" }, { "start": 111, "end": 134, "text": "(Grishman and He, 2014)", "ref_id": "BIBREF18" }, { "start": 273, "end": 296, "text": "(Hasegawa et al., 2004;", "ref_id": "BIBREF21" }, { "start": 297, "end": 310, "text": "Sekine, 2006)", "ref_id": "BIBREF35" }, { "start": 349, "end": 376, "text": "(Shinyama and Sekine, 2006)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Event schema induction takes root in work on the acquisition of knowledge structures from text, such as the Memory Organization Packets (Schank, 1980) , used by early text understanding systems (DeJong, 1982) and more recently by Ferret and Grau (1997) . 
The first attempts at applying such processes to schema induction were made in the fields of Information Extraction (Collier, 1998) , Automatic Summarization (Harabagiu, 2004) and event Question-Answering (Filatova et al., 2006; Filatova, 2008) .", "cite_spans": [ { "start": 128, "end": 150, "text": "Packets (Schank, 1980)", "ref_id": null }, { "start": 194, "end": 208, "text": "(DeJong, 1982)", "ref_id": "BIBREF10" }, { "start": 230, "end": 252, "text": "Ferret and Grau (1997)", "ref_id": "BIBREF12" }, { "start": 373, "end": 388, "text": "(Collier, 1998)", "ref_id": "BIBREF9" }, { "start": 415, "end": 432, "text": "(Harabagiu, 2004)", "ref_id": "BIBREF20" }, { "start": 462, "end": 485, "text": "(Filatova et al., 2006;", "ref_id": "BIBREF13" }, { "start": 486, "end": 501, "text": "Filatova, 2008)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "More recently, work following (Hasegawa et al., 2004) has developed weakly supervised forms of Information Extraction that include schema induction among their objectives. However, these have mainly been applied to binary relation extraction in practice (Eichler et al., 2008; Rosenfeld and Feldman, 2007; Min et al., 2012) . In parallel, several approaches were proposed for performing schema induction specifically in already existing frameworks: clause graph clustering (Qiu et al., 2008) , event sequence alignment (Regneri et al., 2010) or an LDA-based approach relying on FrameNet-like semantic frames (Bejan, 2008) . More event-specific generative models were proposed by Chambers (2013) and Cheung et al. (2013) . Finally, Chambers and Jurafsky (2008) , Chambers and Jurafsky (2009) , Chambers and Jurafsky (2011), improved by Balasubramanian et al. 
(2013) , and Chambers (2013) focused specifically on the induction of event roles and the identification of chains of events for building representations from texts by exploiting coreference resolution or the temporal ordering of events. All this work is also linked to work about the induction of scripts from texts, more or less closely linked to events, such as (Frermann et al., 2014) , (Pichotta and Mooney, 2014) or (Modi and Titov, 2014) .", "cite_spans": [ { "start": 26, "end": 49, "text": "(Hasegawa et al., 2004)", "ref_id": "BIBREF21" }, { "start": 243, "end": 265, "text": "(Eichler et al., 2008;", "ref_id": "BIBREF11" }, { "start": 266, "end": 294, "text": "Rosenfeld and Feldman, 2007;", "ref_id": "BIBREF33" }, { "start": 295, "end": 312, "text": "Min et al., 2012)", "ref_id": "BIBREF24" }, { "start": 462, "end": 480, "text": "(Qiu et al., 2008)", "ref_id": "BIBREF31" }, { "start": 508, "end": 530, "text": "(Regneri et al., 2010)", "ref_id": "BIBREF32" }, { "start": 594, "end": 607, "text": "(Bejan, 2008)", "ref_id": "BIBREF1" }, { "start": 665, "end": 680, "text": "Chambers (2013)", "ref_id": "BIBREF7" }, { "start": 685, "end": 705, "text": "Cheung et al. (2013)", "ref_id": "BIBREF8" }, { "start": 717, "end": 745, "text": "Chambers and Jurafsky (2008)", "ref_id": "BIBREF3" }, { "start": 748, "end": 776, "text": "Chambers and Jurafsky (2009)", "ref_id": "BIBREF4" }, { "start": 821, "end": 850, "text": "Balasubramanian et al. 
(2013)", "ref_id": "BIBREF0" }, { "start": 1209, "end": 1232, "text": "(Frermann et al., 2014)", "ref_id": "BIBREF16" }, { "start": 1235, "end": 1262, "text": "(Pichotta and Mooney, 2014)", "ref_id": "BIBREF30" }, { "start": 1266, "end": 1288, "text": "(Modi and Titov, 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The work we present in this article is in line with Chambers (2013), which will be described in more details in Section 5, together with a quantitative and qualitative comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "An entity is represented as a triple containing: a head word h, a list A of attribute relations and a list T of trigger relations. Consider the following example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Representation", "sec_num": "3" }, { "text": "(1) Two armed men attacked the police station and killed a policeman. An innocent young man was also wounded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Representation", "sec_num": "3" }, { "text": "As illustrated in Figure 1 , four entities, equivalent to four separated triples, are generated from the text above. Head words are extracted from noun phrases. A trigger relation is composed of a predicate (attack, kill, wound) and a dependency type (subject, object). An attribute relation is composed of an argument (armed, police, young) and a dependency type (adjectival, nominal or verbal modifier). In the relationship to triggers, a head word is argument, but in the relationship to attributes, it is predicate. 
We use the Stanford NLP toolkit (Manning et al., 2014) for parsing and coreference resolution.", "cite_spans": [ { "start": 548, "end": 570, "text": "(Manning et al., 2014)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Entity Representation", "sec_num": "3" }, { "text": "A head word is extracted if it is a common noun or a proper noun and is related to at least one trigger; pronouns are omitted. A trigger of a head word is extracted if it is a verb or an eventive noun and the head word serves as its subject, its object, or the object of one of its prepositions. We use the categories noun.EVENT and noun.ACT in WordNet as a list of eventive nouns. A head word can have more than one trigger. These multiple relations can come from a syntactic coordination inside a single sentence, as is the case in the first sentence of the illustrative example. They can also represent a coreference chain across sentences, as we use coreference resolution to merge the triggers of mentions coreferring to the same entity in a document. Coreference is a useful source of information for event induction (Chambers and Jurafsky, 2011; Chambers, 2013) . Finally, an attribute is extracted if it is an adjective, a noun or a verb and serves as an adjectival, verbal or nominal modifier of a head word. If there are several modifiers, only the closest to the head word is selected. This \"best selection\" heuristic makes it possible to omit non-discriminative attributes of the entity. Figure 2 shows the plate notation of our model. For each triple representing an entity e, the model first assigns the entity a slot s drawn from a uniform distribution uni(1, K). Its head word h is then generated from a multinomial distribution \u03c0 s . Each t i of the event trigger relations T e is generated from a multinomial distribution \u03c6 s . Each a j of the attribute relations A e is similarly generated from a multinomial distribution \u03b8 s . 
The distributions \u03b8, \u03c0, and \u03c6 are generated from the Dirichlet priors dir(\u03b1), dir(\u03b2) and dir(\u03b3) respectively.", "cite_spans": [ { "start": 779, "end": 808, "text": "(Chambers and Jurafsky, 2011;", "ref_id": null }, { "start": 809, "end": 824, "text": "Chambers, 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 1145, "end": 1153, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Entity Representation", "sec_num": "3" }, { "text": "Given a set of entities E, our model (\u03c0, \u03c6, \u03b8) is defined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_{\\pi,\\phi,\\theta}(E) = \\prod_{e \\in E} P_{\\pi,\\phi,\\theta}(e)", "eq_num": "(2)" } ], "section": "Model Description", "sec_num": "4.1" }, { "text": "where the probability of each entity e is defined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_{\\pi,\\phi,\\theta}(e) = P(s) \\times P(h|s) \\times \\prod_{t \\in T_e} P(t|s) \\times \\prod_{a \\in A_e} P(a|s)", "eq_num": "(3)" } ], "section": "Model Description", "sec_num": "4.1" }, { "text": "The generative story is as follows: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "for each slot s \u2190 1 to K: draw \u03c0 s from dir(\u03b2), \u03c6 s from dir(\u03b3) and \u03b8 s from dir(\u03b1); for each entity e: draw s from uni(1, K), h from \u03c0 s , each t in T e from \u03c6 s , and each a in A e from \u03b8 s", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "For parameter estimation, we use the Gibbs sampling method (Griffiths, 2002) . 
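As an illustration of this estimation procedure, the collapsed update for the slot variable can be sketched as follows (a toy sketch with invented entities, not the authors' implementation; the symmetric prior values reuse the tuned hyper-parameters reported in Section 5.1.3, and the count tables stand in for the integrated-out multinomials):

```python
import random
from collections import defaultdict

random.seed(0)
K = 2                                   # number of slots (toy value; the paper uses K = 35)
PRIOR = {'h': 1.0, 't': 0.1, 'a': 0.1}  # symmetric Dirichlet priors per observation type

# Each entity is a triple: (head word, trigger relations, attribute relations).
entities = [
    ('man',       ['attack:nsubj', 'kill:nsubj'], ['armed:amod']),
    ('man',       ['wound:dobj'],                 ['innocent:amod']),
    ('policeman', ['kill:dobj'],                  []),
    ('terrorist', ['attack:nsubj'],               ['armed:amod']),
]

def fields(e):
    # Pair each observation type with its observations for entity e.
    return (('h', [e[0]]), ('t', e[1]), ('a', e[2]))

vocab = {f: {o for e in entities for g, obs in fields(e) if g == f for o in obs} for f in 'hta'}
slot = [random.randrange(K) for _ in entities]
count = {f: [defaultdict(int) for _ in range(K)] for f in 'hta'}
total = {f: [0] * K for f in 'hta'}

def update(i, delta):
    # Add (delta = +1) or remove (delta = -1) entity i from the count tables.
    for f, obs in fields(entities[i]):
        for o in obs:
            count[f][slot[i]][o] += delta
            total[f][slot[i]] += delta

for i in range(len(entities)):
    update(i, +1)

for sweep in range(200):
    for i in range(len(entities)):
        update(i, -1)
        weights = []
        for s in range(K):  # uniform slot prior, so P(s|e) reduces to the likelihood
            w = 1.0
            for f, obs in fields(entities[i]):
                for o in obs:
                    w *= (count[f][s][o] + PRIOR[f]) / (total[f][s] + PRIOR[f] * len(vocab[f]))
            weights.append(w)
        r = random.random() * sum(weights)
        acc = 0.0
        for s in range(K):
            acc += weights[s]
            if r <= acc:
                slot[i] = s
                break
        update(i, +1)
```

Each sweep removes an entity from the counts, scores every slot in proportion to the smoothed probabilities of its head, triggers and attributes, and re-adds the entity under the sampled slot.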
The slot variable s is sampled by integrating out all the other variables.", "cite_spans": [ { "start": 59, "end": 76, "text": "(Griffiths, 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "4.2" }, { "text": "Previous models (Cheung et al., 2013; Chambers, 2013) are based on document-level topic modeling, which originated from models such as Latent Dirichlet Allocation (Blei et al., 2003) . Our model is, instead, independent of document contexts. Its input is a sequence of entity triples. Document boundaries are only used in a post-processing filtering step (see Section 5.3 for more details). There is a single universal slot distribution instead of one slot distribution per document. Furthermore, the slot prior is ignored by using a uniform distribution, a particular case of the categorical distribution. Sampling-based slot assignment can depend on initial states and random seeds. In our implementation of Gibbs sampling, we use 2,000 burn-in iterations out of 10,000 iterations overall. The purpose of burn-in is to ensure that the parameters converge to a stable state before estimating the probability distributions. Moreover, an interval of 100 iterations is kept between consecutive samples in order to avoid overly correlated samples.", "cite_spans": [ { "start": 163, "end": 182, "text": "(Blei et al., 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "4.2" }, { "text": "In particular, to track the changes in probabilities resulting from attribute relations, we first ran a specific burn-in with only heads and trigger relations. This stable state was then used as the initialization for a second burn-in in which attributes, heads, and triggers were used together. This specific experimental setting helped us understand how the attributes modified the distributions. We observed that unambiguous words or relations (i.e. 
explode, murder:nsubj) were only slightly modified whereas probabilities of ambiguous words such as man, soldier or triggers such as kill:dobj or attack:nsubj converged smoothly to a different stable state that was semantically more coherent. For instance, the model interestingly realized that even if a terrorist was killed (e.g. by police), he was not actually a real victim of an attack. Figure 3 shows probability convergences of terrorist and kill:dobj given ATTACK victim and ATTACK perpetrator.", "cite_spans": [], "ref_spans": [ { "start": 853, "end": 861, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "4.2" }, { "text": "In order to compare with related work, we evaluated our method on the Message Understanding Conference (MUC-4) corpus (Sundheim, 1991) using precision, recall and F-score as conventional metrics for template extraction.", "cite_spans": [ { "start": 118, "end": 134, "text": "(Sundheim, 1991)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluations", "sec_num": "5" }, { "text": "In what follows, we first introduce the MUC-4 corpus (Section 5.1.1), we detail the mapping technique between learned slots and reference slots (5.1.2) as well as the hyper-parameters of our model (5.1.3). Next, we present a first experiment (Section 5.2) showing how using attribute relations improves overall results. The second experiment (Section 5.3) studies the impact of document classification. We then compare our results with previous approaches, more particularly with Chambers (2013), from both quantitative and qualitative points of view (Section 5.4). Finally, Section 5.5 is dedicated to error analysis, with a special emphasis on sources of false positives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluations", "sec_num": "5" }, { "text": "The MUC-4 corpus contains 1,700 news articles about terrorist incidents happening in Latin America. 
The corpus is divided into 1,300 documents for the development set and four test sets, each containing 100 documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1.1" }, { "text": "We follow the rules in the literature to guarantee comparable results (Patwardhan and Riloff, 2007; Chambers and Jurafsky, 2011) . The evaluation focuses on four template types -ARSON, ATTACK, BOMBING, KIDNAPPING -and four slots -Perpetrator, Instrument, Target, and Victim. Perpetrator is merged from Perpetrator Individual and Perpetrator Organization. The matching between system answers and references is based on head word matching. A head word is defined as the rightmost word of the phrase or, if the phrase contains an 'of', as the rightmost word before the first 'of'. Optional templates and slots are ignored when calculating recall. Template types are ignored in evaluation: this means that a perpetrator of BOMBING in the answers could be compared to a perpetrator of ARSON, ATTACK, BOMBING or KIDNAPPING in the reference.", "cite_spans": [ { "start": 70, "end": 99, "text": "(Patwardhan and Riloff, 2007;", "ref_id": "BIBREF27" }, { "start": 100, "end": 128, "text": "Chambers and Jurafsky, 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1.1" }, { "text": "The model learns K slots and assigns each entity in a document to one of the learned slots. Slot mapping consists of matching each reference slot to an equivalent learned slot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot Mapping", "sec_num": "5.1.2" }, { "text": "Note that among the K learned slots, some are irrelevant while others, sometimes of high quality, contain entities that are not part of the reference (spatio-temporal information, protagonist context, etc.). 
For this reason, it makes sense to have many more learned slots than expected event slots.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot Mapping", "sec_num": "5.1.2" }, { "text": "Similarly to previous work in the literature, we implemented an automatic, empirically driven slot mapping. Each reference slot was mapped to the learned slot that performed the best on the task of template extraction according to the F-score metric. Here, two identical slots of two different templates, such as ATTACK victim and KIDNAPPING victim, must be mapped separately. Figure 4 shows the most common words of two learned slots which were mapped to BOMBING instrument and KIDNAPPING victim. This mapping is then kept for testing.", "cite_spans": [], "ref_spans": [ { "start": 376, "end": 384, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Slot Mapping", "sec_num": "5.1.2" }, { "text": "We first tuned the hyper-parameters of the models on the development set. The number of slots was set to K = 35. Dirichlet priors were set to \u03b1 = 0.1, \u03b2 = 1 and \u03b3 = 0.1. The model was learned from the whole dataset. Slot mapping was done on tst1 and tst2. Outputs from tst3 and tst4 were evaluated using references and were averaged across ten runs.", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 298, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Parameter Tuning", "sec_num": "5.1.3" }, { "text": "Figure 4: Attribute, head and trigger distributions learned by the model HT+A for learned slots that were mapped to BOMBING instrument and KIDNAPPING victim.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Tuning", "sec_num": "5.1.3" }, { "text": "In this experiment, two versions of our model are compared: HT+A uses entity heads, event trigger relations and entity attribute relations. 
HT uses only entity heads and event triggers and omits attributes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Using Entity Attributes", "sec_num": "5.2" }, { "text": "We studied the gain brought by attribute relations, with a focus on their effect when coreference information was available or missing. The variations on the model input are named single, multi and coref. Single input has only one event trigger for each entity. A text like an armed man attacked the police station and killed a policeman results in two triples for the entity man: (armed:amod, man, attack:nsubj) and (armed:amod, man, kill:nsubj). In multi input, one entity can have several event triggers, leading, for the text above, to the triple (armed:amod, man, [attack:nsubj, kill:nsubj]). The coref input is richer than multi in that, in addition to triggers from the same sentence, triggers linked to the same coreferred entity are merged together. For instance, if man in the above example corefers with he in He was arrested three hours later, the merged triple becomes (armed:amod, man, [attack:nsubj, kill:nsubj, arrest:dobj]). The plate notations of these model+data combinations are given in Figure 5: 5a) this model is equivalent to 5b) with T=1; 5b) the HT model run on multi data; 5c) the HT+A model run on single data; 5d) the HT+A model run on multi data. Table 1 shows a consistent improvement when using attributes, both with and without coreferences. The best performance of 40.62 F-score is obtained by the full model on inputs with coreferences. 
Using both attributes in the model and coreference to generate input data results in a gain of 3 F-score points.", "cite_spans": [], "ref_spans": [ { "start": 1008, "end": 1016, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 1019, "end": 1026, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiment 1: Using Entity Attributes", "sec_num": "5.2" }, { "text": "In the second experiment, we evaluated our model with a post-processing step of document classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Document Classification", "sec_num": "5.3" }, { "text": "The MUC-4 corpus contains many \"irrelevant\" documents. A document is irrelevant if it contains no template. Among the 1,300 documents in the development set, 567 are irrelevant. The most challenging part is that many terrorist entities, e.g. bomb, force, guerrilla, occur in irrelevant documents. That makes filtering out those documents important, but difficult. As document classification is not explicitly performed by our model, a post-processing step is needed. Document classification is expected to reduce false positives in irrelevant documents while not dramatically reducing recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Document Classification", "sec_num": "5.3" }, { "text": "Given a document d with slot-assigned entities and a set of mapped slots S m resulting from slot mapping, we have to decide whether this document is relevant or not. We define the relevance score of a document as: relevance(d) = \\frac{\\sum_{e \\in d : s_e \\in S_m} \\sum_{t \\in T_e} P(t|s_e)}{\\sum_{e \\in d} \\sum_{t \\in T_e} P(t|s_e)} (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Document Classification", "sec_num": "5.3" }, { "text": "where e is an entity in the document d; s_e is the slot value assigned to e; and t is an event trigger in the list of triggers T_e. 
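As a concrete reading of this relevance score, the following sketch computes it for a toy document (the slot names, trigger probabilities and document here are made-up illustrations, not values learned by the model):

```python
# relevance(d): the summed trigger probabilities P(t|s_e) of the entities
# assigned to mapped slots, divided by the same sum over all entities in d.
def relevance(doc, mapped_slots, p_trigger_given_slot):
    num = den = 0.0
    for slot, triggers in doc:          # doc: list of (assigned slot, trigger list)
        score = sum(p_trigger_given_slot[(t, slot)] for t in triggers)
        den += score
        if slot in mapped_slots:
            num += score
    return num / den if den else 0.0

# Toy document: one entity in a mapped slot, one in an unmapped context slot.
P = {('kill:dobj', 'victim'): 0.3, ('say:nsubj', 'context'): 0.2}
doc = [('victim', ['kill:dobj']), ('context', ['say:nsubj'])]
print(relevance(doc, {'victim'}, P))    # 0.3 / (0.3 + 0.2) = 0.6
```

With the threshold \u03bb = 0.02 used in the paper, this toy document would be kept as relevant.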
Equation (4) defines the score of an entity as the sum of the conditional probabilities of its triggers given a slot. The relevance score of the document is proportional to the score of the entities assigned to mapped slots. If this relevance score is higher than a threshold \u03bb, then the document is considered relevant. The value of \u03bb = 0.02 was tuned on the development set by maximizing the F-score of document classification (Table 2: Improvement from document classification as post-processing). Table 2 shows the improvement when applying document classification. The precision increases as false positives from irrelevant documents are filtered out. The loss of recall comes from relevant documents that are mistakenly filtered out. However, this loss is not significant and the overall F-score finally increases by 5%. We also compare our results to an \"oracle\" classifier that would remove all irrelevant documents while preserving all relevant ones. The performance of this oracle classification shows that there is still some room for further improvement from document classification.", "cite_spans": [], "ref_spans": [ { "start": 489, "end": 496, "text": "Table 2", "ref_id": null }, { "start": 637, "end": 644, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiment 2: Document Classification", "sec_num": "5.3" }, { "text": "Irrelevant document filtering is a technique applied by most supervised and unsupervised approaches. Supervised methods prefer relevance detection at the sentence or phrase level (Patwardhan and Riloff, 2009; Patwardhan and Riloff, 2007) . Among unsupervised methods, Chambers (2013) includes document classification in his topic model. Chambers and Jurafsky (2011) and Cheung et al. 
(2013) use the learned clusters to classify documents by estimating the relevance of a document with respect to a template from post-hoc statistics about event triggers.", "cite_spans": [ { "start": 175, "end": 204, "text": "(Patwardhan and Riloff, 2009;", "ref_id": "BIBREF28" }, { "start": 205, "end": 233, "text": "Patwardhan and Riloff, 2007)", "ref_id": "BIBREF27" }, { "start": 273, "end": 288, "text": "Chambers (2013)", "ref_id": "BIBREF7" }, { "start": 342, "end": 370, "text": "Chambers and Jurafsky (2011)", "ref_id": null }, { "start": 375, "end": 395, "text": "Cheung et al. (2013)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Document Classification", "sec_num": "5.3" }, { "text": "To compare our results in more depth with the state of the art, we reimplemented the method proposed in Chambers (2013) and integrated our attribute distributions into his model (as shown in Figure 6 ).", "cite_spans": [ { "start": 123, "end": 138, "text": "Chambers (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 210, "end": 218, "text": "Figure 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Comparison to State-of-the-Art", "sec_num": "5.4" }, { "text": "The main differences between this model and ours are the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to State-of-the-Art", "sec_num": "5.4" }, { "text": "1. The full template model of Chambers (2013) adds a distribution \u03c8 linking events to documents. This makes the model more complex and perhaps less intuitive, since there is no reason to connect documents and slots (a document may contain references to several templates and slot mapping does not depend on the document level). A benefit of this document distribution is that it leads to a free classification of irrelevant documents, thus avoiding a pre- or post-processing step for classification. 
However, this issue of document relevance is very specific to the MUC corpus and the evaluation method; in a more general use case, there would be no \"irrelevant\" documents, only documents on various topics.", "cite_spans": [ { "start": 30, "end": 45, "text": "Chambers (2013)", "ref_id": "BIBREF7" }, { "start": 388, "end": 416, "text": "Chambers and Jurafsky (2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to State-of-the-Art", "sec_num": "5.4" }, { "text": "2. Each entity is linked to an event variable e. This event generates a predicate for each entity mention (recall that mentions of an entity are all occurrences of this entity in the documents, for example in a coreference chain). Our work instead focuses on the fact that a probabilistic model can have multiple observations at the same position. Multiple triggers and multiple attributes are treated equally. The sources of multiple attributes and multiple triggers come not only from document-level coreference but also from dependency relations (or even from domain-level entity coreference if available). Hence, our model arguably generalizes better in terms of both modeling and input data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to State-of-the-Art", "sec_num": "5.4" }, { "text": "3. Chambers (2013) applies a heuristic constraint during the sampling process, imposing that the subject and object of the same predicate (e.g. kill:nsubj and kill:dobj) are not distributed into the same slot. Our model does not require this heuristic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to State-of-the-Art", "sec_num": "5.4" }, { "text": "Some details concerning data preprocessing and model parameters are not fully specified by Chambers (2013) ; for this reason, our implementation of the model (applied on the same data) leads to slightly different results from those published. 
That is why we present both sets of results here (paper values in Table 3, reimplementation values in Table 4). Table 3 shows that our model outperforms the others on recall by a large margin. It also achieves the best overall F-score. In addition, as our experiments indicate, precision could be further improved by more sophisticated document classification. Interestingly, using attributes also proves useful in the model proposed by Chambers (2013) (as shown in Table 4).", "cite_spans": [ { "start": 91, "end": 106, "text": "Chambers (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 304, "end": 311, "text": "Table 3", "ref_id": "TABREF7" }, { "start": 341, "end": 348, "text": "Table 4", "ref_id": "TABREF9" }, { "start": 352, "end": 359, "text": "Table 3", "ref_id": "TABREF7" }, { "start": 708, "end": 715, "text": "Table 4", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Comparison to State-of-the-Art", "sec_num": "5.4" }, { "text": "We performed an error analysis on the output of HT+A + doc. classification to identify the origin of false positives (FPs). 38% of FPs are mentions that never occur in the reference. Within this 38%, attacker and killer are among the most frequent errors. These words can refer to the perpetrator of an attack. These mentions, however, do not occur in the reference, possibly because the human annotators considered them too generic. Apart from such generic terms, other assignments are obvious errors of the system, e.g. window, door or wall as physical target; action or massacre as perpetrator; explosion or shooting as instrument. 
These kinds of errors are due to the fact that in our model, as in that of Chambers (2013), the number of slots is fixed and does not match the actual number of reference slots.", "cite_spans": [ { "start": 714, "end": 729, "text": "Chambers (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.5" }, { "text": "On the other hand, 62% of FPs are mentions of entities that occur at least once in the reference. At the top of the list are perpetrators such as guerrilla, group and rebel. The model is capable of assigning guerrilla to the attribution slot if it is accompanied by a trigger like announce:nsubj. However, triggers that describe quasi-terrorism events (e.g. menace, threatening, military conflict) are also grouped into perpetrator slots. Similarly, mentions of frequent words such as bomb (instrument), building, house, office (targets) tend to be systematically grouped into these slots, regardless of their relations. Increasing the number of slots (to sharpen their content) does not help overall, because the MUC corpus is very small and biased towards terrorism events. Adding a higher level of template type, as in Chambers (2013), partially solves the problem but decreases recall (as shown in Table 3).", "cite_spans": [], "ref_spans": [ { "start": 921, "end": 928, "text": "Table 3", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.5" }, { "text": "We presented a generative model for representing the roles played by entities in an event template. We focused on using the immediate contexts of entities and proposed a simpler and more effective model than those of previous work. We evaluated this model on the MUC-4 corpus. Although our results outperform other unsupervised approaches, they are still far from those obtained by supervised systems. Improvements can be obtained in several ways. 
First, the characteristics of the MUC-4 corpus are a limiting factor. The corpus is small, and roles are similar from one template to another, which does not reflect reality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Perspectives", "sec_num": "6" }, { "text": "A bigger corpus, even partially annotated but presenting a better variety of templates, could lead to very different approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Perspectives", "sec_num": "6" }, { "text": "As we showed, our model comes with a unified representation of all types of relations. This opens the way to the use of multiple types of relations (syntactic, semantic, thematic, etc.) to refine the clusters.", "cite_spans": [ { "start": 148, "end": 185, "text": "(syntactic, semantic, thematic, etc.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Perspectives", "sec_num": "6" }, { "text": "Last but not least, the evaluation protocol, which has become a de facto standard, is far from perfect. Most notably, the way learned slots are finally mapped to reference slots can have a great influence on the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Perspectives", "sec_num": "6" } ], "back_matter": [ { "text": "This work was partially financed by the Foundation for Scientific Cooperation \"Campus Paris-Saclay\" (FSC) under the project Digiteo ASTRE No. 
2013-0774D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Generating Coherent Event Schemas at Scale", "authors": [ { "first": "Niranjan", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Mausam", "middle": [], "last": "", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2013, "venue": "2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013)", "volume": "", "issue": "", "pages": "1721--1731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niranjan Balasubramanian, Stephen Soderland, Mausam, and Oren Etzioni. 2013. Generating Coherent Event Schemas at Scale. In 2013 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP 2013), pages 1721-1731, Seattle, Washington, USA, October.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised Discovery of Event Scenarios from Texts", "authors": [ { "first": "", "middle": [], "last": "Cosmin Adrian Bejan", "suffix": "" } ], "year": 2008, "venue": "Twenty-First International Florida Artificial Intelligence Research Society Conference (FLAIRS 2008)", "volume": "", "issue": "", "pages": "124--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cosmin Adrian Bejan. 2008. Unsupervised Discovery of Event Scenarios from Texts. 
In Twenty-First In- ternational Florida Artificial Intelligence Research Society Conference (FLAIRS 2008), pages 124-129, Coconut Grove, Florida.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Latent Dirichlet Allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Ma- chine Learning Research, 3:993-1022, March.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised Learning of Narrative Event Chains", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2008, "venue": "ACL-08: HLT", "volume": "", "issue": "", "pages": "789--797", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Unsu- pervised Learning of Narrative Event Chains. In ACL-08: HLT, pages 789-797, Columbus, Ohio, June.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unsupervised Learning of Narrative Schemas and their Participants", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACL-IJCNLP'09)", "volume": "", "issue": "", "pages": "602--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathanael Chambers and Dan Jurafsky. 2009. 
Unsu- pervised Learning of Narrative Schemas and their Participants. In Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACL-IJCNLP'09), pages 602-610, Suntec, Singapore, August.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Template-Based Information Extraction without the Templates", "authors": [], "year": null, "venue": "49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL 2011)", "volume": "", "issue": "", "pages": "976--986", "other_ids": {}, "num": null, "urls": [], "raw_text": "Template-Based Information Extraction without the Templates. In 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies (ACL 2011), pages 976-986, Portland, Oregon, USA, June.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Event Schema Induction with a Probabilistic Entity-Driven Model", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1797--1807", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathanael Chambers. 2013. Event Schema Induction with a Probabilistic Entity-Driven Model. 
In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 1797- 1807, Seattle, Washington, USA, October.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Probabilistic Frame Induction", "authors": [ { "first": "Kit", "middle": [ "Jackie" ], "last": "", "suffix": "" }, { "first": "Chi", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "837--846", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kit Jackie Chi Cheung, Hoifung Poon, and Lucy Van- derwende. 2013. Probabilistic Frame Induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 837-846.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic Template Creation for Information Extraction", "authors": [ { "first": "R", "middle": [], "last": "Collier", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Collier. 1998. Automatic Template Creation for Information Extraction. Ph.D. thesis, University of Sheffield.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An overview of the FRUMP system", "authors": [ { "first": "Gerald", "middle": [], "last": "Dejong", "suffix": "" } ], "year": 1982, "venue": "Strategies for natural language processing", "volume": "", "issue": "", "pages": "149--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerald DeJong. 1982. An overview of the FRUMP system. In W. Lehnert and M. Ringle, editors, Strategies for natural language processing, pages 149-176. 
Lawrence Erlbaum Associates.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised Relation Extraction From Web Documents", "authors": [ { "first": "Kathrin", "middle": [], "last": "Eichler", "suffix": "" }, { "first": "Holmer", "middle": [], "last": "Hemsen", "suffix": "" }, { "first": "G\u00fcnter", "middle": [], "last": "Neumann", "suffix": "" } ], "year": 2008, "venue": "6 th Conference on Language Resources and Evaluation (LREC'08), Marrakech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathrin Eichler, Holmer Hemsen, and G\u00fcnter Neu- mann. 2008. Unsupervised Relation Extraction From Web Documents. In 6 th Conference on Lan- guage Resources and Evaluation (LREC'08), Mar- rakech, Morocco.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An Aggregation Procedure for Building Episodic Memory", "authors": [ { "first": "Olivier", "middle": [], "last": "Ferret", "suffix": "" }, { "first": "Brigitte", "middle": [], "last": "Grau", "suffix": "" } ], "year": 1997, "venue": "15 th International Joint Conference on Artificial Intelligence (IJCAI-97)", "volume": "", "issue": "", "pages": "280--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olivier Ferret and Brigitte Grau. 1997. An Aggre- gation Procedure for Building Episodic Memory. 
In 15 th International Joint Conference on Artificial Intelligence (IJCAI-97), pages 280-285, Nagoya, Japan.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic Creation of Domain Templates", "authors": [ { "first": "Elena", "middle": [], "last": "Filatova", "suffix": "" }, { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2006, "venue": "21 st International Conference on Computational Linguistics and 44 th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006)", "volume": "", "issue": "", "pages": "207--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Filatova, Vasileios Hatzivassiloglou, and Kath- leen McKeown. 2006. Automatic Creation of Domain Templates. In 21 st International Confer- ence on Computational Linguistics and 44 th Annual Meeting of the Association for Computational Lin- guistics (COLING-ACL 2006), pages 207-214, Syd- ney, Australia.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised Relation Learning for Event-Focused Question-Answering and Domain Modelling", "authors": [ { "first": "Elena", "middle": [], "last": "Filatova", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Filatova. 2008. Unsupervised Relation Learning for Event-Focused Question-Answering and Domain Modelling. Ph.D. 
thesis, Columbia University.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Extreme Extraction -Machine Reading in a Week", "authors": [ { "first": "Marjorie", "middle": [], "last": "Freedman", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Boschee", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Gabbard", "suffix": "" }, { "first": "Gary", "middle": [], "last": "Kratkiewicz", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2011, "venue": "2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1437--1446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marjorie Freedman, Lance Ramshaw, Elizabeth Boschee, Ryan Gabbard, Gary Kratkiewicz, Nico- las Ward, and Ralph Weischedel. 2011. Ex- treme Extraction -Machine Reading in a Week. In 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 1437- 1446, Edinburgh, Scotland, UK., July.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge", "authors": [ { "first": "Lea", "middle": [], "last": "Frermann", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2014, "venue": "14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014)", "volume": "", "issue": "", "pages": "49--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lea Frermann, Ivan Titov, and Manfred Pinkal. 2014. A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge. 
In 14th Conference of the European Chapter of the Association for Com- putational Linguistics (EACL 2014), pages 49-57, Gothenburg, Sweden, April.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Gibbs sampling in the generative model of Latent Dirichlet Allocation", "authors": [ { "first": "Tom", "middle": [], "last": "Griffiths", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Griffiths. 2002. Gibbs sampling in the genera- tive model of Latent Dirichlet Allocation. Technical report, Stanford University.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An Information Extraction Customizer", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "He", "suffix": "" } ], "year": 2014, "venue": "17th International Conference on Text", "volume": "8655", "issue": "", "pages": "3--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Grishman and Yifan He. 2014. An Informa- tion Extraction Customizer. In Petr Sojka, Ale Hork, Ivan Kopeek, and Karel Pala, editors, 17th Inter- national Conference on Text, Speech and Dialogue (TSD 2014), volume 8655 of Lecture Notes in Com- puter Science, pages 3-10. Springer International Publishing.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Message Understanding Conference-6: A Brief History", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Beth", "middle": [], "last": "Sundheim", "suffix": "" } ], "year": 1996, "venue": "16 th International Conference on Computational linguistics (COLING'96)", "volume": "", "issue": "", "pages": "466--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Grishman and Beth Sundheim. 1996. Mes- sage Understanding Conference-6: A Brief History. 
In 16 th International Conference on Computational linguistics (COLING'96), pages 466-471, Copen- hagen, Denmark.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Incremental Topic Representation", "authors": [ { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics (COL-ING'04)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanda Harabagiu. 2004. Incremental Topic Repre- sentation. In Proceedings of the 20th International Conference on Computational Linguistics (COL- ING'04), Geneva, Switzerland, August.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Discovering Relations among Named Entities from Large Corpora", "authors": [ { "first": "Takaaki", "middle": [], "last": "Hasegawa", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "42 nd Meeting of the Association for Computational Linguistics (ACL'04)", "volume": "", "issue": "", "pages": "415--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takaaki Hasegawa, Satoshi Sekine, and Ralph Grish- man. 2004. Discovering Relations among Named Entities from Large Corpora. 
In 42 nd Meeting of the Association for Computational Linguistics (ACL'04), pages 415-422, Barcelona, Spain.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Text Segmentation and Graph-based Method for Template Filling in Information Extraction", "authors": [ { "first": "Ludovic", "middle": [], "last": "Jean-Louis", "suffix": "" }, { "first": "Romaric", "middle": [], "last": "Besanon", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Ferret", "suffix": "" } ], "year": 2011, "venue": "5 th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "723--731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ludovic Jean-Louis, Romaric Besanon, and Olivier Ferret. 2011. Text Segmentation and Graph-based Method for Template Filling in Information Extrac- tion. In 5 th International Joint Conference on Nat- ural Language Processing (IJCNLP 2011), pages 723-731, Chiang Mai, Thailand.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The Stanford CoreNLP Natural Language Processing Toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP Natural Lan- guage Processing Toolkit. 
In Proceedings of 52nd Annual Meeting of the Association for Computa- tional Linguistics: System Demonstrations, pages 55-60, Baltimore, USA, jun.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Ensemble Semantics for Largescale Unsupervised Relation Extraction", "authors": [ { "first": "Shuming", "middle": [], "last": "Bonan Min", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonan Min, Shuming Shi, Ralph Grishman, and Chin- Yew Lin. 2012. Ensemble Semantics for Large- scale Unsupervised Relation Extraction. In 2012", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1027--1037", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012, pages 1027-1037, Jeju Island, Korea.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Inducing neural models of script knowledge", "authors": [ { "first": "Ashutosh", "middle": [], "last": "Modi", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2014, "venue": "Eighteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "49--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashutosh Modi and Ivan Titov. 2014. Inducing neural models of script knowledge. 
In Eighteenth Confer- ence on Computational Natural Language Learning (CoNLL 2014), pages 49-57, Ann Arbor, Michigan.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Effective Information Extraction with Semantic Affinity Patterns and Relevant Regions", "authors": [ { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "717--727", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharth Patwardhan and Ellen Riloff. 2007. Ef- fective Information Extraction with Semantic Affin- ity Patterns and Relevant Regions. In Proceedings of the 2007 Joint Conference on Empirical Meth- ods in Natural Language Processing and Computa- tional Natural Language Learning (EMNLP-CoNLL 2007), pages 717-727, Prague, Czech Republic, June.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A Unified Model of Phrasal and Sentential Evidence for Information Extraction", "authors": [ { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharth Patwardhan and Ellen Riloff. 2009. A Uni- fied Model of Phrasal and Sentential Evidence for Information Extraction. 
In Proceedings of the 2009", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Conference on Empirical Methods in Natural Language Processing", "authors": [], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "151--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP 2009), pages 151-160.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Statistical script learning with multi-argument events", "authors": [ { "first": "Karl", "middle": [], "last": "Pichotta", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2014, "venue": "14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014)", "volume": "", "issue": "", "pages": "220--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Pichotta and Raymond Mooney. 2014. Statistical script learning with multi-argument events. In 14th Conference of the European Chapter of the Associ- ation for Computational Linguistics (EACL 2014), pages 220-229, Gothenburg, Sweden.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Modeling Context in Scenario Template Creation", "authors": [ { "first": "Long", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2008, "venue": "Third International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "157--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Qiu, Min-Yen Kan, and Tat-Seng Chua. 2008. Modeling Context in Scenario Template Creation. 
In Third International Joint Conference on Natural Language Processing (IJCNLP 2008), pages 157- 164, Hyderabad, India.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Learning Script Knowledge with Web Experiments", "authors": [ { "first": "Michaela", "middle": [], "last": "Regneri", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2010, "venue": "48th Annual Meeting of the Association for Computational Linguistics (ACL 2010)", "volume": "", "issue": "", "pages": "979--988", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michaela Regneri, Alexander Koller, and Manfred Pinkal. 2010. Learning Script Knowledge with Web Experiments. In 48th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL 2010), pages 979-988, Uppsala, Sweden, July.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Clustering for unsupervised relation identification", "authors": [ { "first": "Benjamin", "middle": [], "last": "Rosenfeld", "suffix": "" }, { "first": "Ronen", "middle": [], "last": "Feldman", "suffix": "" } ], "year": 2007, "venue": "Sixteenth ACM conference on Conference on information and knowledge management (CIKM'07)", "volume": "", "issue": "", "pages": "411--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Rosenfeld and Ronen Feldman. 2007. Clus- tering for unsupervised relation identification. In Sixteenth ACM conference on Conference on in- formation and knowledge management (CIKM'07), pages 411-418, Lisbon, Portugal.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Language and memory", "authors": [ { "first": "C", "middle": [], "last": "Roger", "suffix": "" }, { "first": "", "middle": [], "last": "Schank", "suffix": "" } ], "year": 1980, "venue": "Cognitive Science", "volume": "4", "issue": "", "pages": "243--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger C. 
Schank. 1980. Language and memory. Cog- nitive Science, 4:243-284.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "On-demand information extraction", "authors": [ { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" } ], "year": 2006, "venue": "21 st International Conference on Computational Linguistics and 44 th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006)", "volume": "", "issue": "", "pages": "731--738", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satoshi Sekine. 2006. On-demand information extraction. In 21 st International Conference on Computational Linguistics and 44 th Annual Meet- ing of the Association for Computational Linguis- tics (COLING-ACL 2006), pages 731-738, Sydney, Australia.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Preemptive Information Extraction using Unrestricted Relation Discovery", "authors": [ { "first": "Yusuke", "middle": [], "last": "Shinyama", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" } ], "year": 2006, "venue": "HLT-NAACL 2006", "volume": "", "issue": "", "pages": "304--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Shinyama and Satoshi Sekine. 2006. Preemp- tive Information Extraction using Unrestricted Rela- tion Discovery. In HLT-NAACL 2006, pages 304- 311, New York City, USA.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Third Message Understanding Evaluation and Conference", "authors": [ { "first": "Beth", "middle": [ "M" ], "last": "Sundheim", "suffix": "" } ], "year": 1991, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beth M. Sundheim. 1991. 
Third Message Understand- ing Evaluation and Conference (MUC-3): Phase 1", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Status Report", "authors": [], "year": null, "venue": "Proceedings of the Workshop on Speech and Natural Language, HLT '91", "volume": "", "issue": "", "pages": "301--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Status Report. In Proceedings of the Workshop on Speech and Natural Language, HLT '91, pages 301- 305.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Entity representation as tuples of ([attributes], head, [triggers])." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Generative model for event induction." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Probability convergence when using attributes in sampling. The use of attributes is started at point 50 (i.e., 50% of burn-in phase). The dotted line shows convergence without attributes; the continuous line shows convergence with attributes." }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Model variants (Dirichlet priors are omitted for simplicity): 5a) HT model ran on single data." }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "Variation of Chambers (2013) model: 6a) Original model; 6b) Original model + attribute distributions." }, "TABREF1": { "type_str": "table", "html": null, "content": "
Generate a trigger distribution \u03c6s from a Dirichlet prior dir(\u03b3);
end
for entity e \u2208 E do
Generate a slot s from a uniform distribution uni(1, K);
Generate a head h from a multinomial distribution \u03c0s;
for i \u2190 1 to |Te| do
Generate a trigger ti from a multinomial distribution \u03c6s;
end
for j \u2190 1 to |Ae| do
Generate an attribute aj from a multinomial distribution \u03b8s;
end
end
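The generative story above can be sketched with numpy as a sanity check; the vocabulary sizes, hyperparameter values, and variable names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5                                  # number of slots (illustrative)
V_attr, V_head, V_trig = 30, 50, 40    # vocabulary sizes (illustrative)
alpha, beta, gamma = 0.1, 0.1, 0.1     # Dirichlet hyperparameters (illustrative)

# Per-slot distributions: attributes (theta), heads (pi), triggers (phi)
theta = rng.dirichlet([alpha] * V_attr, size=K)
pi = rng.dirichlet([beta] * V_head, size=K)
phi = rng.dirichlet([gamma] * V_trig, size=K)

def generate_entity(n_triggers, n_attrs):
    # One entity tuple ([attributes], head, [triggers])
    s = rng.integers(K)                                 # slot ~ uni(1, K)
    h = rng.choice(V_head, p=pi[s])                     # head ~ pi_s
    ts = rng.choice(V_trig, size=n_triggers, p=phi[s])  # triggers ~ phi_s
    az = rng.choice(V_attr, size=n_attrs, p=theta[s])   # attributes ~ theta_s
    return s, h, list(ts), list(az)

slot, head, triggers, attrs = generate_entity(n_triggers=2, n_attrs=1)
```

Note that heads, triggers, and attributes are all conditioned on the same slot variable s, which is what ties an entity's multiple observations together in this model.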
", "num": null, "text": "Generate an attribute distribution \u03b8s from a Dirichlet prior dir(\u03b1); Generate a head distribution \u03c0s from a Dirichlet prior dir(\u03b2);" }, "TABREF5": { "type_str": "table", "html": null, "content": "
SystemPRF
HT+A 32.42 54.59 40.62
HT+A + doc. classification 35.57 53.89 42.79
HT+A + oracle classification 44.58 54.59 49.08
", "num": null, "text": "" }, "TABREF7": { "type_str": "table", "html": null, "content": "
System P R F
Cheung et al. (2013) 32 37 34
: Comparison to state-of-the-art unsupervised systems.
", "num": null, "text": "" }, "TABREF9": { "type_str": "table", "html": null, "content": "", "num": null, "text": "Performance on reimplementation ofChambers (2013)." } } } }