{ "paper_id": "D16-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:36:55.179451Z" }, "title": "Distinguishing Past, On-going, and Future Events: The EventStatus Corpus", "authors": [ { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Texas A&M University", "location": {} }, "email": "huangrh@cse.tamu.edu" }, { "first": "Ignacio", "middle": [], "last": "Cases", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "cases@stanford.edu" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "jurafsky@stanford.edu" }, { "first": "Cleo", "middle": [], "last": "Condoravdi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "cleoc@stanford.edu" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Utah", "location": {} }, "email": "riloff@cs.utah.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Determining whether a major societal event has already happened, is still ongoing , or may occur in the future is crucial for event prediction, timeline generation, and news summarization. We introduce a new task and a new corpus, EventStatus, which has 4500 English and Spanish articles about civil unrest events labeled as PAST, ONGOING , or FUTURE. We show that the temporal status of these events is difficult to classify because local tense and aspect cues are often lacking, time expressions are insufficient, and the linguistic contexts have rich semantic compositionality. We explore two approaches for event status classification: (1) a feature-based SVM classifier augmented with a novel induced lexicon of future-oriented verbs, such as \"threatened\" and \"planned\", and (2) a convolutional neural net. Both types of classifiers improve event status recognition over a state-of-the-art TempEval model, and our analysis offers linguistic insights into the semantic compositionality challenges for this new task.", "pdf_parse": { "paper_id": "D16-1005", "_pdf_hash": "", "abstract": [ { "text": "Determining whether a major societal event has already happened, is still ongoing , or may occur in the future is crucial for event prediction, timeline generation, and news summarization. We introduce a new task and a new corpus, EventStatus, which has 4500 English and Spanish articles about civil unrest events labeled as PAST, ONGOING , or FUTURE. We show that the temporal status of these events is difficult to classify because local tense and aspect cues are often lacking, time expressions are insufficient, and the linguistic contexts have rich semantic compositionality. We explore two approaches for event status classification: (1) a feature-based SVM classifier augmented with a novel induced lexicon of future-oriented verbs, such as \"threatened\" and \"planned\", and (2) a convolutional neural net. 
Both types of classifiers improve event status recognition over a state-of-the-art TempEval model, and our analysis offers linguistic insights into the semantic compositionality challenges for this new task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "When a major societal event is mentioned in the news (e.g., civil unrest, terrorism, natural disaster), it is important to understand whether the event has already happened (PAST), is currently happening (ON-GOING), or may happen in the future (FUTURE). We introduce a new task and corpus for studying the temporal/aspectual properties of major events. The EventStatus corpus consists of 4500 English and Spanish news articles about civil unrest events, such as protests, demonstrations, marches, and strikes, in which each event is annotated as PAST, ON-GOING, or FUTURE (sublabeled as PLANNED, ALERT or POSSIBLE) . This task bridges event extraction research and temporal research in the tradition of TIMEBANK (Pustejovsky et al., 2003) and TempEval (Verhagen et al., 2007; Verhagen et al., 2010; UzZaman et al., 2013) . Previous corpora have begun this association: TIMEBANK, for example, includes temporal relations linking events with Document Creation Times (DCT). But the EventStatus task and corpus offers several new research directions.", "cite_spans": [ { "start": 552, "end": 614, "text": "ON-GOING, or FUTURE (sublabeled as PLANNED, ALERT or POSSIBLE)", "ref_id": null }, { "start": 712, "end": 738, "text": "(Pustejovsky et al., 2003)", "ref_id": "BIBREF30" }, { "start": 743, "end": 775, "text": "TempEval (Verhagen et al., 2007;", "ref_id": null }, { "start": 776, "end": 798, "text": "Verhagen et al., 2010;", "ref_id": "BIBREF40" }, { "start": 799, "end": 820, "text": "UzZaman et al., 2013)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "First, major societal events are often discussed before they happen, or while they are still happening, because they have the potential to impact a large number of people. News outlets frequently report on impending natural disasters (e.g., hurricanes), anticipated disease outbreaks (e.g., Zika virus), threats of terrorism, and plans or warnings of potential civil unrest (e.g., strikes and protests). Traditional event extraction research has focused primarily on recognizing events that have already happened. Furthermore, the linguistic contexts of on-going and future events involve complex compositionality, and features like explicit time expressions are less useful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our results demonstrate that a state-of-the-art Tem-pEval system has difficulty identifying on-going and future events, mislabeling examples like these:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) The metro workers' strike in Bucharest has entered the fifth day. (On-Going) (2) BBC unions demand more talks amid threat of new strikes. (Future) (3) Pro-reform groups have called for nationwide protests on polling day. (Future)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second, we intentionally created the EventStatus corpus to concentrate on one particular event frame (class of events): civil unrest. 
In contrast, previous temporally annotated corpora focus on a wide variety of events. Focusing on one frame (semantic depth instead of breadth) makes this corpus analogous to domain-specific event extraction data sets, and therefore appropriate for evaluating rich tasks like event extraction and temporal question answering, which require more knowledge about event frames and schemata than might be represented in large broad corpora like TIMEBANK (UzZaman et al., 2012; Llorens et al., 2015).", "cite_spans": [ { "start": 584, "end": 606, "text": "(UzZaman et al., 2012;", "ref_id": "BIBREF37" }, { "start": 607, "end": 628, "text": "Llorens et al., 2015)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Third, the EventStatus corpus focuses on specific instances of high-level events, in contrast to the low-level and often non-specific or generic events that dominate other temporal datasets. 1 Mentions of specific events are much more likely to be realized in non-finite form (as nouns or infinitives, such as \"the strike\" or \"to protest\") than randomly selected event keywords. In breadth-based corpora like the EventCorefBank (ECB) corpus (Bejan and Harabagiu, 2008), 34% of the events have non-finite realization; in TIMEBANK, 45% of the events have non-finite realization. By contrast, in a frame-based corpus like ACE2005 (ACE, 2005), 59% of the events have non-finite forms. In the EventStatus corpus, 80% of the events have non-finite forms. Whether this is due to differences in labeling or to intrinsic properties of these events, the result is that they are much harder to label because tense and aspect are less available than for events realized as finite verbs.", "cite_spans": [ { "start": 441, "end": 468, "text": "(Bejan and Harabagiu, 2008)", "ref_id": "BIBREF6" }, { "start": 628, "end": 639, "text": "(ACE, 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Fourth, the EventStatus data set is multilingual: we collected data from both English and Spanish texts, allowing us to compare events representing the same event frame across two languages that are known to differ in their typological properties for describing events (Talmy, 1985).", "cite_spans": [ { "start": 269, "end": 282, "text": "(Talmy, 1985)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Using the new EventStatus corpus, we investigate two approaches for recognizing the temporal status of events. We create an SVM classifier that incorporates features drawn from prior TempEval work (Bethard, 2013; Chambers et al., 2014; Llorens et al., 2010) as well as a new automatically induced lexicon of 411 English and 348 Spanish \"future-oriented\" matrix verbs: verbs like \"threaten\" and \"fear\" whose complement clause or nominal direct object argument is likely to describe a future event. We show that the SVM outperforms a state-of-the-art TempEval system and that the induced lexicon further improves performance for both English and Spanish. We also introduce a Convolutional Neural Network (CNN) to detect the temporal status of events. Our analysis shows that it successfully models semantic compositionality for some challenging temporal contexts. 
The CNN model again improves performance in both English and Spanish, providing strong initial results for this new task and corpus.", "cite_spans": [ { "start": 196, "end": 211, "text": "(Bethard, 2013;", "ref_id": "BIBREF7" }, { "start": 212, "end": 234, "text": "Chambers et al., 2014;", "ref_id": "BIBREF10" }, { "start": 235, "end": 256, "text": "Llorens et al., 2010)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For major societal events, it can be very important to know whether the event has ended or if it is still in progress (e.g., are people still rioting in the streets?). And sometimes events are anticipated before they actually happen, such as labor strikes, marches and parades, social demonstrations, political events (e.g., debates and elections), and acts of war. The EventStatus corpus represents the temporal status of an event as one of five categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The EventStatus Corpus", "sec_num": "2" }, { "text": "Past: An event that has started and has ended. There should be no reason to believe that it may still be in progress. On-going: An event that has started and is still in progress or likely to resume 2 in the immediate future. There should be no reason to believe that it has ended. Future Planned: An event that has not yet started, but a person or group has planned for or explicitly committed to an instance of the event in the future. There should be near certainty it will happen. Future Alert: An event that has not yet started, but a person or group has been threatening, warning, or advocating for a future instance of the event. Future Possible: An event that has not yet started, but the context suggests that its occurrence is a live possibility (e.g., it is anticipated, feared, hinted at, or is mentioned conditionally). in marking not just temporal status but also what we might call predictive status. Events very likely to occur are distinguished from events whose occurrence depends on other contingencies (Future Planned vs. Alert/Possible). Warnings or mentions of a potential event by a likely actor are further distinguished from events whose occurrence is more open-ended (Future Alert vs. Possible). The status of future events is not due just to lexical semantics or local context but also other qualifiers in the sentence (e.g. \"may\"), the larger discourse context, and world knowledge. The annotation guidelines are formulated with that in mind. The categories for future events are not incompatible with one another but are meant to be informationally ordered (e.g. \"future alert\" implies \"future possible\"). Annotators are instructed to go for the strongest implication supported by the overall context. Table 1 presents examples of each category in news reports about civil unrest events, with the event keywords in italics.", "cite_spans": [], "ref_spans": [ { "start": 1731, "end": 1738, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The EventStatus Corpus", "sec_num": "2" }, { "text": "The EventStatus dataset consists of English and Spanish news articles. We manually identified 6 English words 3 and 13 Spanish words 4 and phrases associated with civil unrest events, and added their morphological variants. We then randomly selected 2954 and 1491 5 news stories from the English Gigaword 5th Ed. (Parker et al., 2011) and Spanish Gigaword 3rd Ed. 
(Mendon et al., 2011) corpora, respectively, that contain at least one civil unrest phrase. Events of a specific type are very sparsely distributed in a large corpus like the Gigaword, so we used keyword matching just as a first pass to identify candidate event mentions. Because many keyword instances don't refer to a specific event, primarily due to lexical ambiguity and generic descriptions (e.g., \"Protests are often facilitated by ...\"), we used a two-stage annotation process. First, we extracted sentences containing at least one key phrase, and had three human annotators judge whether the sentence describes a specific civil unrest event. Next, for each sentence that mentions a specific event, the annotators assigned an event status to every civil unrest key phrase in that sentence. In both annotation phases, we asked the annotators to consider the context of the entire article.", "cite_spans": [ { "start": 313, "end": 334, "text": "(Parker et al., 2011)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "EventStatus Annotations", "sec_num": "2.1" }, { "text": "In the first annotation phase, the average pairwise inter-annotator agreement (Cohen's \u03ba) among the annotators was \u03ba = 0.84 on the English data and 0.70 on the Spanish data. We then assigned the majority label among the three annotators to each sentence. In the English data, of the 5085 sentences with at least one key phrase, 2492 (49%) were judged to be about a specific civil unrest event. In the Spanish data, 3249 sentences contained at least one key phrase and 2466 (76%) described a specific event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EventStatus Annotations", "sec_num": "2.1" }, { "text": "In the second phase, the annotators assigned one of the five temporal status categories listed in Section 2 to each event keyword in a relevant sentence. In addition, we provided a Not Event label. 6 Occasionally, a single instance of a keyword can refer to multiple events (e.g., \"Both last week's and today's protests...\"), so we permitted multiple labels to be assigned to an event phrase. However this happened for only 28 cases in English and 21 cases in Spanish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EventStatus Annotations", "sec_num": "2.1" }, { "text": "The average pairwise inter-annotator agreement among the three human annotators for the temporal status labels was \u03ba=.78 for English and \u03ba=.80 for Spanish. We used the majority label among the three annotators as the gold status. In total, 2907 English and 2807 Spanish event phrases exist in the relevant sentences and were annotated. However there were 83 English cases (\u22482.9%) and 70 Spanish cases (\u22482.5%) where the labels among the three annotators were all different, so we discarded these cases. Table 2 shows the final distribution of labels in the EventStatus corpus. The EventStatus corpus 7 is available through the LDC.", "cite_spans": [], "ref_spans": [ { "start": 502, "end": 509, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "EventStatus Annotations", "sec_num": "2.1" }, { "text": "Next, we investigated the linguistic properties of the event status categories, lumping together the 3 future subcategories. Table 3 shows the distribution of syntactic forms of the event mentions in two commonly used event datasets, ACE2005 (ACE, 2005) and EventCorefBank (Bejan and Harabagiu, 2008) , and our new EventStatus corpus. 
In the introduction, we mentioned the high frequency of non-finite event expressions; Table 3 provides the evidence: nonfinite forms (nouns and infinitives) constitute 59% in ACE2005, 34% in EventCorefBank, and a very high 80% of the events in the EventStatus dataset. The distribution is even more skewed for future events, which are 95% (English) and 96% (Spanish) realized by non-finite surface forms. ", "cite_spans": [ { "start": 242, "end": 253, "text": "(ACE, 2005)", "ref_id": null }, { "start": 273, "end": 300, "text": "(Bejan and Harabagiu, 2008)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 125, "end": 132, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 421, "end": 428, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Linguistic Properties of Event Mentions", "sec_num": "2.2" }, { "text": "We observed that many future event mentions are preceded by a set of lexical (non-aux) verbs that we call future oriented verbs, such as \"threatened\" in (4) and \"fear\" in (5). These verbs project the events in the lower clause into the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Oriented Verbs", "sec_num": "2.3" }, { "text": "(4) They threatened to protest if Kmart does not acknowledge their request for a meeting. (5) People fear renewed rioting during the coming days. Categories of future oriented verbs include mental activity (\"anticipate\", \"expect\"), affective (\"fear\", \"worry\"), planning (\"plan\", \"prepare\", \"schedule\"), threatening (\"threaten\", \"advocate\", \"warn\"), and inchoative verbs (\"start\", \"initiate\", and \"launch\"). We found that these categories correlate with the predictive status of the events they embed. We drew on these insights to induce a lexicon of future oriented verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Oriented Verbs", "sec_num": "2.3" }, { "text": "We harvested matrix verbs whose complement unambiguously describes a future event using two heuristics. One heuristic looks for examples with a tense conflict between the matrix verb and its complement: a matrix verb in the past tense (like \"planned\" below) whose complement event is an infinitive verb or deverbal noun modified by a future time expression (like \"tomorrow\" or \"next week\"), hence in the future (e.g., \"strike\" below): 8 (6) The union planned to strike next week. Future events are often marked by conditional clauses, so the second heuristic considers an event to be future if it was post-modified by a conditional clause (beginning with \"if\" or \"unless\"): (7) The union threatened to strike if their appeal was rejected. Finally, to increase precision, we only harvested a verb as future-oriented if it functioned as a matrix both in sentences with an embedded future time expression and in sentences with a conditional clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Oriented Verbs", "sec_num": "2.3" }, { "text": "Future Oriented Verb Categories: We ran the algorithm on the English and Spanish Gigaword corpora (Parker et al., 2011; Mendon et al., 2011) , obtaining 411 English verbs and 348 Spanish verbs. To better understand the structure of the learned lexicon, we mapped each English verb to Framenet (Baker et al., 1998) ; 86% (355) of the English verbs occurred in Framenet, in 306 unique frames. We clustered these into 102 frames 9 and grouped the Spanish verbs following English Framenet, identifying 67 categories. 
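For concreteness, the following is a schematic sketch (ours, not the authors' code) of the two-heuristic harvesting procedure described above, with the parser-dependent extraction of candidate pairs abstracted away; the function and evidence-type names are assumptions made for illustration.

```python
from collections import defaultdict

# Schematic sketch of the lexicon-harvesting step. An upstream, parser-based
# pass is assumed to emit candidate (matrix_verb_lemma, evidence) pairs, where
# evidence is "future_time" (heuristic 1: a past-tense matrix verb whose
# complement carries a future time expression) or "conditional" (heuristic 2:
# a complement post-modified by an "if"/"unless" clause).

def harvest_future_oriented_verbs(candidates):
    evidence = defaultdict(set)
    for verb, kind in candidates:
        evidence[verb].add(kind)
    # Precision filter from the paper: keep a verb only if it appears as a
    # matrix verb in BOTH configurations.
    return sorted(v for v, kinds in evidence.items()
                  if {"future_time", "conditional"} <= kinds)

# Example: only "plan" survives the intersection filter.
# harvest_future_oriented_verbs([("plan", "future_time"),
#                                ("plan", "conditional"),
#                                ("say", "future_time")])  # -> ["plan"]
```
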
(Some learned verbs, such as \"poise\" , \"slate\" , \"compel\" and \"hesitate\", had a clear future orientation but didn't exist in Framenet.) Table 4 shows examples of learned verbs for English and their categories.", "cite_spans": [ { "start": 98, "end": 119, "text": "(Parker et al., 2011;", "ref_id": "BIBREF29" }, { "start": 120, "end": 140, "text": "Mendon et al., 2011)", "ref_id": "BIBREF24" }, { "start": 293, "end": 313, "text": "(Baker et al., 1998)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 649, "end": 656, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Future Oriented Verbs", "sec_num": "2.3" }, { "text": "Commitment: threaten, vow, promise, pledge, commit, declare, claim, volunteer, anticipate Coming to be: enter, emerge, plunge, kick, mount reach, edge, soar, promote, increase, climb, double Purpose: plan, intend, project, aim, object, target Permitting: allow, permit, approve, subpoena Experiencer subj: fear, scare, hate Waiting: expect, wait Scheduling: arrange, schedule Deciding: decide, opt, elect, pick, select, settle Request: ask, urge, order, encourage, demand, appeal, request, summon, implore, advise, invite Evoking: raise, press, back, recall, pressure, force, rush, pull, drag, respond In the next sections we propose two classifiers, an SVM classifier using standard TempEval features plus our new future-oriented lexicon, and a Convolutional Neural Net, as a pilot exploration of what features and architecture work well for the EventStatus task. For these studies we combine the Future Planned, Future Alert and Future Possible categories into a single Future event status because we first wanted to establish how well classifiers can detect the primary temporal distinctions between Past vs. Ongoing vs. Future. The future subcategories are, of course, relatively smaller and we expect that the most effective approach will be to design a classifier that sits on top of the primary classifier to further subcategorize the Future instances. We leave the task of subcategorizing future events for later work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Oriented Verbs", "sec_num": "2.3" }, { "text": "Our first classifier is a linear SVM classifier. 10 We trained three binary classifiers (one per class) using one-vs.-rest, and label an event mention with the class that assigned the highest score to the mention. We used features inspired by prior TempEval work and by the previous analysis, including words, tense and aspect features, time expressions, and the new future-oriented verb lexicon. We also experimented with other features used by TempEval systems (including bigrams, POS tags, and two-hop dependency features), but they did not improve performance. 11 Bag-Of-Words Features: For bag-of-words unigram features we used a window size of 7 (7 left and 7 right) for the English data and 6 for the Spanish data; this size was optimized on the tuning sets.", "cite_spans": [ { "start": 49, "end": 51, "text": "10", "ref_id": null }, { "start": 565, "end": 567, "text": "11", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "SVM Event Status Model", "sec_num": "3" }, { "text": "Tense, Aspect and Time Expressions: Because these features are known to be the most important for relating events to document creation time (Bethard, 2013; Llorens et al., 2010) , we used TIPSem (Llorens et al., 2010) to generate the tense and aspect of events and find time expressions in both languages. 
TIPSem infers the tense and aspect of nominal and infinitival event mentions using heuristics without relying on syntactic dependencies. For the English data set, we also generated syntactic dependencies using Stanford CoreNLP (Marneffe et al., 2006) and applied several rules to create additional tense and aspect features based on the governing words of event mentions 12 . Time indication features are created by comparing document creation time to time expressions linked to an event mention detected by TIPSem. If TIPSem detects no linked time expressions for an event mention, we take the nearest time expression in the same sentence.", "cite_spans": [ { "start": 140, "end": 155, "text": "(Bethard, 2013;", "ref_id": "BIBREF7" }, { "start": 156, "end": 177, "text": "Llorens et al., 2010)", "ref_id": "BIBREF20" }, { "start": 195, "end": 217, "text": "(Llorens et al., 2010)", "ref_id": "BIBREF20" }, { "start": 533, "end": 556, "text": "(Marneffe et al., 2006)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "SVM Event Status Model", "sec_num": "3" }, { "text": "Governing Words: Governing words have been useful in prior work. Our version of the feature 10 Trained using LIBSVM (Chang and Lin, 2011) with linear kernels (polynomial kernels yielded worse performance).", "cite_spans": [ { "start": 116, "end": 137, "text": "(Chang and Lin, 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "SVM Event Status Model", "sec_num": "3" }, { "text": "11 Previous TempEval work reported that those additional features were useful when computing temporal relations between two events but not when relating an event to the Document Creation Time, for which tense, aspect, and time expression features were the most useful (Llorens et al., 2010; Bethard, 2013) . 12 We did not imitate this procedure for Spanish because the quality of our generated Spanish dependencies is poor. pairs the governing word of an event mention with the dependency relation in between. We used Stanford CoreNLP (Marneffe et al., 2006) to generate dependencies for the English data. For the Spanish data, we used Stanford CoreNLP to generate Partof-Speech tags 13 and then applied the MaltParser (Nivre et al., 2004) to generate dependencies.", "cite_spans": [ { "start": 268, "end": 290, "text": "(Llorens et al., 2010;", "ref_id": "BIBREF20" }, { "start": 291, "end": 305, "text": "Bethard, 2013)", "ref_id": "BIBREF7" }, { "start": 308, "end": 310, "text": "12", "ref_id": null }, { "start": 518, "end": 558, "text": "Stanford CoreNLP (Marneffe et al., 2006)", "ref_id": null }, { "start": 684, "end": 686, "text": "13", "ref_id": null }, { "start": 719, "end": 739, "text": "(Nivre et al., 2004)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "SVM Event Status Model", "sec_num": "3" }, { "text": "Convolutional neural networks (CNNs) have been shown to be effective in modeling natural language semantics (Collobert et al., 2011) . We were especially keen to find out whether the convolution operations of CNNs can model the semantic compositionality needed to detect temporal-aspectual status. 
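As a concrete reference for the architecture detailed in the next paragraphs, here is a minimal sketch in PyTorch; the framework choice, class name, and exact dropout placement are our assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class EventStatusCNN(nn.Module):
    """Minimal sketch: one convolution layer (300 filters over 5 consecutive
    words, ReLU), max-over-time pooling, dropout, and a softmax classifier
    over Past / Ongoing / Future. Illustrative only."""

    def __init__(self, pretrained_vectors, num_classes=3,
                 num_filters=300, filter_width=5, dropout=0.5):
        super().__init__()
        # Word vectors initialized from word2vec and fine-tuned during training.
        self.embed = nn.Embedding.from_pretrained(pretrained_vectors, freeze=False)
        self.conv = nn.Conv1d(pretrained_vectors.size(1), num_filters,
                              kernel_size=filter_width)
        self.dropout = nn.Dropout(dropout)        # drop rate 0.5 = keep rate 0.5
        self.out = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2) # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x))              # one affine + ReLU per filter
        pooled = h.max(dim=2).values              # max over time, per filter
        return self.out(self.dropout(pooled))     # softmax applied in the loss

# Usage sketch: train with nn.CrossEntropyLoss on windows of token ids around
# each event mention (window length must be at least filter_width).
```
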
For our experiments, we trained a simple CNN with one convolution layer followed by one max pooling layer (Kim, 2014; Collobert et al., 2011).", "cite_spans": [ { "start": 108, "end": 132, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF14" }, { "start": 404, "end": 415, "text": "(Kim, 2014;", "ref_id": "BIBREF18" }, { "start": 416, "end": 439, "text": "Collobert et al., 2011)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Convolutional Neural Network Model", "sec_num": "4" }, { "text": "The convolution layer has 300 hidden units. In each unit, the same affine transformation is applied to every consecutive 5 words (a filter instance) in the input sequence of words. A different affine transformation is applied to each hidden unit. After each affine transformation, a Rectified Linear Unit (ReLU) (Nair and Hinton, 2010) non-linearity is applied. For each hidden unit, the max pooling layer selects the maximum value from the pool of real values generated from each filter instance.", "cite_spans": [ { "start": 313, "end": 336, "text": "(Nair and Hinton, 2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Convolutional Neural Network Model", "sec_num": "4" }, { "text": "After the max pooling layer, a softmax classifier predicts probabilities for each of the three classes: Past, Ongoing, and Future. To alleviate overfitting of the CNN model, we applied dropout (Hinton et al., 2012) on the convolution layer and the following pooling layer with a keep probability of 0.5.", "cite_spans": [ { "start": 191, "end": 212, "text": "(Hinton et al., 2012)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Convolutional Neural Network Model", "sec_num": "4" }, { "text": "Our experiments used the 300-dimension English word2vec embeddings 14 trained on 100 billion words of Google News. We trained our own 300-dimension Spanish embeddings, running word2vec (Mikolov et al., 2013) over both the Spanish Gigaword (Mendon et al., 2011), tokenized using the Stanford CoreNLP SpanishTokenizer (Manning et al., 2014), and the pre-tokenized Spanish Wikipedia dump (Al-Rfou et al., 2013). The vectors were then tuned during backpropagation for our specific task.", "cite_spans": [ { "start": 184, "end": 206, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF25" }, { "start": 225, "end": 254, "text": "Gigaword (Mendon et al., 2011", "ref_id": null }, { "start": 307, "end": 329, "text": "(Manning et al., 2014)", "ref_id": "BIBREF22" }, { "start": 376, "end": 398, "text": "(Al-Rfou et al., 2013)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Convolutional Neural Network Model", "sec_num": "4" }, { "text": "For all subsequent evaluations, we use gold event mentions. We randomly sampled around 20% of the annotated documents as the parameter tuning set and used the rest as the test set. Rather than training once on a distinct training set, all our experimental results are based on 10-fold cross validation on the test set (1191 Spanish documents and 2364 English documents; see Table 5 for the distribution of event mentions).", "cite_spans": [], "ref_spans": [ { "start": 370, "end": 377, "text": "Table 5", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Evaluations", "sec_num": "5" }, { "text": "We begin with a baseline: applying a TempEval system to classify each event. Most of our features are already drawn from TempEval, but our goal was to see if an off-the-shelf system could be directly applied to our task. 
We chose TIPSem (Llorens et al., 2010 ), a CRF system trained on TimeBank that uses linguistic features, has achieved top performance in TempEval competitions for both English and Spanish (Verhagen et al., 2010) , and can compute the relation of each event with the Document Creation Time. We applied TIPSem to our test set, mapping the DCT relations to our three event status classes 15 .", "cite_spans": [ { "start": 237, "end": 258, "text": "(Llorens et al., 2010", "ref_id": "BIBREF20" }, { "start": 409, "end": 432, "text": "(Verhagen et al., 2010)", "ref_id": "BIBREF40" }, { "start": 606, "end": 608, "text": "15", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comparing with a TempEval System", "sec_num": "5.1" }, { "text": "Row 1 of Tables 6 and 7 shows TIPSem results. The columns show results for each category separately, as well as macro-average and microaverage results across the three categories. Each cell shows the Recall/Precision/F-score numbers. Since TIPSem linked relatively few event mentions to the DCT, we next leveraged the transitivity of temporal relations (UzZaman et al., 2012; Llorens et al., 2015) , linking an event to a DCT if the temporal relation between another event in the same sentence and the DCT is transferable. For instance, if event A is AFTER its DCT, and event B is AFTER event A, then event B is also AFTER the DCT. 16 Row 2 shows the results of TIPSem with temporal transitivity.", "cite_spans": [ { "start": 353, "end": 375, "text": "(UzZaman et al., 2012;", "ref_id": "BIBREF37" }, { "start": 376, "end": 397, "text": "Llorens et al., 2015)", "ref_id": "BIBREF21" }, { "start": 632, "end": 634, "text": "16", "ref_id": null } ], "ref_spans": [ { "start": 9, "end": 23, "text": "Tables 6 and 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Comparing with a TempEval System", "sec_num": "5.1" }, { "text": "Even augmented by transitivity, TIPSem fails to detect many Ongoing (OG) and Future (FU) events; most mislabeled OG and FU events were nominal. Confusion matrices (Table 8) show that most of the missed OG events were labeled as Past (PA) while FU events were commonly mislabeled as both PA and OG. Below are some examples of OG and FU events mislabeled as PA:", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 172, "text": "(Table 8)", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Comparing with a TempEval System", "sec_num": "5.1" }, { "text": "(8) Jego said Sunday on arriving in Guadeloupe that he would stay as long as it took to bring an end to the strike organised by the Collective against Extreme Exploitation (LKP). (OG) SVM Results Next, we compare TIPSem's results with our SVM classifier. An issue is that TIPSem identifies only 72% and 78% of the gold event mentions, for English and Spanish respectively 17 . To have a fair comparison, we applied the SVM to only the event mentions that TipSem recognized. Row 3 shows these results for the SVM classifier using its full feature set. The SVM outperforms TipSem on all three categories, for both languages, with the largest improvements on Future events. Next, we ran ablation experiments with the SVM to evaluate the impact of different subsets of its features. For these experiments, we applied the SVM to all gold event mentions, thus Rows 1-3 of Tables 6 and 7 report on fewer event mentions than rows 4-8. Row 4 shows results using only bag-of-words features 18 . 
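Before continuing with the remaining ablation rows, the following is a toy sketch of the temporal-transitivity augmentation used for Row 2 above; the data structures are our own illustration, and only the unambiguous rule quoted in the text is implemented.

```python
def propagate_dct_links(dct_relations, event_links):
    """dct_relations: {event_id: relation to the DCT} produced by TIPSem.
    event_links: iterable of (e1, "AFTER", e2) event-event links within a
    sentence, meaning e1 is AFTER e2. Implements only the unambiguous rule
    from the text: if A is AFTER the DCT and B is AFTER A, then B is also
    AFTER the DCT. (Further, ambiguous rules were chosen empirically in the
    paper; see footnote 16.) Illustrative sketch, not the authors' code."""
    augmented = dict(dct_relations)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for e1, rel, e2 in event_links:
            if (rel == "AFTER" and augmented.get(e2) == "AFTER"
                    and e1 not in augmented):
                augmented[e1] = "AFTER"
                changed = True
    return augmented
```
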
Row 5 shows results when additionally including the tense, aspect, and time features provided by TIPSem (Llorens et al., 2010) . Unsurprisingly, in both languages 19 these features improve over just bag-of-word features.", "cite_spans": [ { "start": 1089, "end": 1111, "text": "(Llorens et al., 2010)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Comparing with a TempEval System", "sec_num": "5.1" }, { "text": "Row 6 further adds governing word features. These improve English performance, especially for On-Going events. For Spanish, governing word fea-tures slightly decrease performance, likely due to the poor quality of the Spanish dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing with a TempEval System", "sec_num": "5.1" }, { "text": "Row 7 adds the future oriented lexicon features 20 . For both English and Spanish, the future oriented lexicon increased overall performance, and (as expected) especially for Future events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing with a TempEval System", "sec_num": "5.1" }, { "text": "CNN Results Row 8 shows the results using CNN models. For English and Spanish, the same window (7 words for English, 6 words for Spanish) was used to compute bag-of-word features for SVMs as for training the CNN models. For English, the CNN model further increased recall and precision across all three classes. The CNN improved Spanish performance on both Past and On-going events, but the SVM outperformed the CNN for Future events when the future oriented lexicon features were included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing with a TempEval System", "sec_num": "5.1" }, { "text": "To better understand whether the CNN model's strong performance was related to handling compositionality, we examined some English examples that were correctly recognized by the CNN model but mislabeled by the SVM classifier with bag-ofwords features. The examples below (event mentions are in italics) suggest that the CNN may be capturing the compositional impact of local cues like \"possibility\" or \"since\": (10) Raising the possibility of a strike on New Year's Eve, the president of New York City's largest union is calling for a 30 percent raise over three years. (FU) (11) The lockout was announced in the wake of a go-slow and partial strike by the union since July 12 after management turned down its demand. (OG)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "6" }, { "text": "We also conducted an error analysis by randomly sampling and then analyzing 50 of the 473 errors by the CNN model. Many cases (26/50) are ambiguous from the sentence alone, requiring discourse information. 
The first example below is caused by the well-known \"double access\" ambiguity of the complement of a communication verb (Smith, 1978; Abusch, 1997; Giorgi, 2010) .", "cite_spans": [ { "start": 326, "end": 339, "text": "(Smith, 1978;", "ref_id": "BIBREF35" }, { "start": 340, "end": 353, "text": "Abusch, 1997;", "ref_id": "BIBREF0" }, { "start": 354, "end": 367, "text": "Giorgi, 2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "6" }, { "text": "(12) Chavez also said he discussed the strike with UN Secretary General Kofi Annan and told him the strike organizers were \"terrorists.\" (OG) (13) Students and teachers protest over education budget (PA) In 9/50 cases, the contexts that imply temporal status are complex and fall out of our \u00b17 word range, e.g.,:", "cite_spans": [ { "start": 199, "end": 203, "text": "(PA)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "6" }, { "text": "(14) Protesters on Saturday also occupied two gymnastics halls near Gorleben which are to be used as accommodation for police. They were later forcibly dispersed by policemen. (PA)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "6" }, { "text": "The remaining 15/50 cases contain enough local cues to be solvable by humans, but both the CNN and SVM models nonetheless failed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "6" }, { "text": "(15) Eastern leaders have grown weary of the protest movement led mostly by Aymara. (OG)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "6" }, { "text": "Our work overlaps with two communities of tasks and corpora: the task of classifying temporal order between event mentions and Document Creation Time (DCT) in TempEval (Verhagen et al., 2007; Verhagen et al., 2010; UzZaman et al., 2013) , and the task of extracting events, associated with corpora such as ACE2005 (ACE, 2005) and the Event-CorefBank (ECB) (Bejan and Harabagiu, 2008) . By studying the events in a particular frame (civil unrest), but focusing on their temporal status, our work has the potential to draw these communities together. Most event extraction work (Freitag, 1998; Appelt et al., 1993; Ciravegna, 2001; Chieu and Ng, 2002; Riloff and Jones, 1999; Roth and Yih, 2001; Zelenko et al., 2003; Bunescu and Mooney, 2007) has focused on extracting event slots or frames for past events and assigning dates. The TempEval task of linking events to DCT has not focused on events that tend to have non-finite realizations, nor has it focused on subtypes of future events. 
Our work, including the corpus and the future-oriented verb lexicon, has the potential to benefit related tasks like generating event timelines from news articles (Allan et al., 2000; Yan et al., 2011) or social media sources (Li and Cardie, 2014; Ritter et al., 2012) , or exploring the psychological implications of future oriented language (Nie et al., 2015; Schwartz et al., 2015) .", "cite_spans": [ { "start": 168, "end": 191, "text": "(Verhagen et al., 2007;", "ref_id": "BIBREF39" }, { "start": 192, "end": 214, "text": "Verhagen et al., 2010;", "ref_id": "BIBREF40" }, { "start": 215, "end": 236, "text": "UzZaman et al., 2013)", "ref_id": "BIBREF38" }, { "start": 314, "end": 325, "text": "(ACE, 2005)", "ref_id": null }, { "start": 356, "end": 383, "text": "(Bejan and Harabagiu, 2008)", "ref_id": "BIBREF6" }, { "start": 576, "end": 591, "text": "(Freitag, 1998;", "ref_id": "BIBREF15" }, { "start": 592, "end": 612, "text": "Appelt et al., 1993;", "ref_id": "BIBREF4" }, { "start": 613, "end": 629, "text": "Ciravegna, 2001;", "ref_id": "BIBREF13" }, { "start": 630, "end": 649, "text": "Chieu and Ng, 2002;", "ref_id": "BIBREF12" }, { "start": 650, "end": 673, "text": "Riloff and Jones, 1999;", "ref_id": "BIBREF31" }, { "start": 674, "end": 693, "text": "Roth and Yih, 2001;", "ref_id": "BIBREF33" }, { "start": 694, "end": 715, "text": "Zelenko et al., 2003;", "ref_id": "BIBREF42" }, { "start": 716, "end": 741, "text": "Bunescu and Mooney, 2007)", "ref_id": "BIBREF8" }, { "start": 1151, "end": 1171, "text": "(Allan et al., 2000;", "ref_id": "BIBREF3" }, { "start": 1172, "end": 1189, "text": "Yan et al., 2011)", "ref_id": "BIBREF41" }, { "start": 1214, "end": 1235, "text": "(Li and Cardie, 2014;", "ref_id": "BIBREF19" }, { "start": 1236, "end": 1256, "text": "Ritter et al., 2012)", "ref_id": "BIBREF32" }, { "start": 1331, "end": 1349, "text": "(Nie et al., 2015;", "ref_id": "BIBREF27" }, { "start": 1350, "end": 1372, "text": "Schwartz et al., 2015)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We have proposed a new task of recognizing the past, on-going, or future temporal status of major events, introducing a new resource for study-ing events in two languages. Besides its importance for studying time and aspectuality, the EventStatus dataset offers a rich resource for any future investigation of information extraction from major societal events. The strong performance of the convolutional net system suggests the power of latent representations to model temporal compositionality, and points to extensions of our work using deeper and more powerful networks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "Finally, our investigation of the role of context and semantic composition in conveying temporal information also has implications for our understanding of temporality and aspectuality and their linguistic expression. Many of the errors made by our CNN system are complex ambiguities, like the double access readings, that cannot be solved without information from the wider discourse context. 
Our work can thus also be seen as a call for the further use of rich discourse information in the computational study of temporal processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "For example in TIMEBANK almost half the annotated events (3720 of 7935) are hypothetical or generic, i.e., PERCEP-TION, REPORTING, ASPECTUAL, I ACTION, STATE or I STATE rather than the specific OCCURRENCE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For example, demonstrators have gone home for the day but are expected to return in the morning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The English keywords are \"protest\", \"strike\", \"march\", \"rally\", \"riot\" and \"occupy\". These correspond to the most frequent words in the relevant frame in the Media Frames corpus(Card et al., 2015). Because \"march\" most commonly refers to the month, we removed the word itself and only kept its other morphological variations.4 Spanish keywords: \"marchar\", \"protestar\", \"amotinar(se)\", \"manifestar(se)\", \"huelga\", \"manifestaci\u00f3n\", \"disturbio\", \"mot\u00edn\", \"ocupar * la calle\", \"tomar * la calle\", \"salir * las calles\", \"lanzarse a las calles\", \"cacerolas vac\u00edas\", \"cacerolazo\", \"cacerolada\". Asterisks could be replaced by up to 4 words. The last three terms are common expressions for protest marches in many countries of Latin America and Spain.5 46 (out of 3000) and 9 (out of 1500) stories were removed due to keyword errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A sentence can contain multiple keyword instances. So even in a relevant sentence, some instances may not refer to a specific event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://faculty.cse.tamu.edu/huangrh/ EventStatus_corpus.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For English, we extract events linked by the \"xcomp\" dependency using the Stanford dependency parser(Marneffe et al., 2006), with a future time expression attached to the second event with the \"tmod\" relation. For Spanish, we consider two events related if they are at most 5 words apart, and the second event is modified by a time expression, at most 5 words apart.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "By merging frames that share frame elements (e.g., \"Purpose\" and \"Project\" share the frame element \"plan\")", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Stanford CoreNLP has no support for generating syntactic dependencies for Spanish.14 docs.google.com/uc?id=0B7XkCwpI5KDYNlNUTTlSS21pQmM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the obvious mappings from TIPSem relations: \"BEFORE\" to \"PA\", \"AFTER\" to \"FU\" , and \"INCLUDES\" (for English) and \"OVERLAP\" (for Spanish) to \"OG\".16 Some transitivity rules are ambiguous: if event A is AF-TER DCT, event B INCLUDES event A, event B can be AFTER or INCLUDES DCT. 
We ran experiments and chose rules that improved performance the most for TipSem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We were not able to decouple TipSem's event recognition component and force it to process all event mentions.18 Replacing each word feature with a word2vec embedding resulted in slightly worse performance.19 We always obtain even recall and precision for the micro average metric because we only apply classifiers to event mentions that refer to a civil unrest event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For Spanish, we removed the governing word features because of the poor quality of the Spanish dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We want to thank the Stanford NLP group and especially Danqi Chen for valuable inputs, and Michael Zeleznik for helping us refine the categories and for masterfully orchestrating the annotation efforts. We also thank Luke Zettlemoyer and all our reviewers for providing useful comments. This work was partially supported by the National Science Foundation via NSF Award IIS-1514268, by the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, and by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center (DoI/NBC) contract number D12PC00337. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, NSF, IARPA, DoI/NBC, or the U.S. Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "9" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sequence of tense and temporal de re", "authors": [ { "first": "Dorit", "middle": [], "last": "Abusch", "suffix": "" } ], "year": 1997, "venue": "Linguistics & Philosophy", "volume": "20", "issue": "", "pages": "1--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorit Abusch. 1997. Sequence of tense and temporal de re. Linguistics & Philosophy, 20:1-50.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "NIST ACE evaluation website", "authors": [], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ACE. 2005. NIST ACE evaluation website. In http://www.nist.gov/speech/tests/ace/2005.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Polyglot: Distributed word representations for multilingual nlp", "authors": [ { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "183--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multi- lingual nlp. 
In Proceedings of the Seventeenth Confer- ence on Computational Natural Language Learning, pages 183-192, Sofia, Bulgaria, August. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Detections, Bounds, and Timelines: Umass and TDT-3", "authors": [ { "first": "J", "middle": [], "last": "Allan", "suffix": "" }, { "first": "V", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "D", "middle": [], "last": "Malin", "suffix": "" }, { "first": "R", "middle": [], "last": "Swan", "suffix": "" } ], "year": 2000, "venue": "Proceedings of Topic Detection and Tracking Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Allan, V. Lavrenko, D. Malin, and R. Swan. 2000. De- tections, Bounds, and Timelines: Umass and TDT-3. In Proceedings of Topic Detection and Tracking Work- shop.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "FASTUS: a Finite-state Processor for Information Extraction from Real-world Text", "authors": [ { "first": "D", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "J", "middle": [], "last": "Hobbs", "suffix": "" }, { "first": "J", "middle": [], "last": "Bear", "suffix": "" }, { "first": "D", "middle": [], "last": "Israel", "suffix": "" }, { "first": "M", "middle": [], "last": "Tyson", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Appelt, J. Hobbs, J. Bear, D. Israel, and M. Tyson. 1993. FASTUS: a Finite-state Processor for Informa- tion Extraction from Real-world Text. In Proceedings of the Thirteenth International Joint Conference on Ar- tificial Intelligence (IJCAI).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Berkeley FrameNet Project", "authors": [ { "first": "Collin", "middle": [ "F" ], "last": "Baker", "suffix": "" }, { "first": "Charles", "middle": [ "J" ], "last": "Fillmore", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Lowe", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLING/ACL", "volume": "", "issue": "", "pages": "86--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In In Proceed- ings of COLING/ACL, pages 86-90.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Linguistic Resource for Discovering Event Structures and Resolving Event Coreference", "authors": [ { "first": "C", "middle": [], "last": "Bejan", "suffix": "" }, { "first": "S", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Bejan and S. Harabagiu. 2008. A Linguistic Resource for Discovering Event Structures and Resolving Event Coreference. 
In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "ClearTK-TimeML: A minimalist approach to TempEval", "authors": [ { "first": "S", "middle": [], "last": "Bethard", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Second Joint Conference on Lexical and Computational Semantics (*SEM)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bethard. 2013. ClearTK-TimeML: A minimalist ap- proach to TempEval 2013. In Proceedings of Second Joint Conference on Lexical and Computational Se- mantics (*SEM).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning to Extract Relations from the Web using Minimal Supervision", "authors": [ { "first": "R", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "R", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bunescu and R. Mooney. 2007. Learning to Extract Relations from the Web using Minimal Supervision. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The media frames corpus: Annotations of frames across issues", "authors": [ { "first": "Dallas", "middle": [], "last": "Card", "suffix": "" }, { "first": "Amber", "middle": [ "E" ], "last": "Boydstun", "suffix": "" }, { "first": "Justin", "middle": [ "H" ], "last": "Gross", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dallas Card, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015. The media frames corpus: Annotations of frames across issues. In Pro- ceedings of the 53rd Annual Meeting of the Associa- tion for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Dense event ordering with a multi-pass architecture", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Mcdowell", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Cassidy", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Bethard", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathanael Chambers, Bill McDowell, Taylor Cassidy, and Steve Bethard. 2014. Dense event ordering with a multi-pass architecture. 
In Transactions of the Associ- ation for Computational Linguistics (TACL).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "LIBSVM: A library for support vector machines", "authors": [ { "first": "Chih-Chung", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Chih-Jen", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2011, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transac- tions on Intelligent Systems and Technology, 2:27:1- 27:27. Software available at http://www.csie. ntu.edu.tw/\u02dccjlin/libsvm.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A Maximum Entropy Approach to Information Extraction from Semi-Structured and Free Text", "authors": [ { "first": "H", "middle": [ "L" ], "last": "Chieu", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 18th National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H.L. Chieu and H.T. Ng. 2002. A Maximum En- tropy Approach to Information Extraction from Semi- Structured and Free Text. In Proceedings of the 18th National Conference on Artificial Intelligence.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Adaptive Information Extraction from Text by Rule Induction and Generalisation", "authors": [ { "first": "F", "middle": [], "last": "Ciravegna", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 17th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Ciravegna. 2001. Adaptive Information Extraction from Text by Rule Induction and Generalisation. In Proceedings of the 17th International Joint Confer- ence on Artificial Intelligence.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Natural Language Processing (Almost) from Scratch", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "M", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "K", "middle": [], "last": "Kavukcuglu", "suffix": "" }, { "first": "P", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "In Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuglu, and P. Kuksa. 2011. Natural Lan- guage Processing (Almost) from Scratch. In Journal of Machine Learning Research.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Toward General-Purpose Learning for Information Extraction", "authors": [ { "first": "Dayne", "middle": [], "last": "Freitag", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dayne Freitag. 1998. Toward General-Purpose Learning for Information Extraction. 
In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "About the speaker: towards a syntax of indexicality", "authors": [ { "first": "Alessandra", "middle": [ "Giorgi" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandra Giorgi. 2010. About the speaker: towards a syntax of indexicality. Oxford University Press, Ox- ford.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improving Neural Networks by Preventing Co-adaptation of Feature Detectors", "authors": [ { "first": "G", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "N", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "A", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "R", "middle": [ "R" ], "last": "Salakhutdinov", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXivpreprintarXiv:1207.0580" ] }, "num": null, "urls": [], "raw_text": "G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. 2012. Improving Neural Networks by Preventing Co-adaptation of Feature De- tectors. In arXiv preprint arXiv:1207.0580.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Y", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 2014 the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Kim. 2014. Convolutional neural networks for sen- tence classification. In Proceedings of 2014 the Con- ference on Empirical Methods in Natural Language Processing (EMNLP-2014).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Timeline Generation: Tracking Individuals on Twitter", "authors": [ { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 23rd International Conference on World Wide Web", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Li and C. Cardie. 2014. Timeline Generation: Track- ing Individuals on Twitter. In Proceedings of the 23rd International Conference on World Wide Web.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Evaluating CRFs and Semantic Roles in TempEval-2", "authors": [ { "first": "H", "middle": [], "last": "Llorens", "suffix": "" }, { "first": "E", "middle": [], "last": "Saquete", "suffix": "" }, { "first": "B", "middle": [], "last": "Navarro", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Llorens, E. Saquete, and B. Navarro. 2010. TIPSem (English and Spanish): Evaluating CRFs and Seman- tic Roles in TempEval-2. 
In Proceedings of the 5th International Workshop on Semantic Evaluation.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Semeval-2015 Task 5: QA TempEval -Evaluating Temporal Information Understanding with Question Answering", "authors": [ { "first": "H", "middle": [], "last": "Llorens", "suffix": "" }, { "first": "N", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "N", "middle": [], "last": "Uzzaman", "suffix": "" }, { "first": "N", "middle": [], "last": "Mostafazadeh", "suffix": "" }, { "first": "J", "middle": [], "last": "Allen", "suffix": "" }, { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Llorens, N. Chambers, N. UzZaman, Mostafazadeh N., J. Allen, and J. Pustejovsky. 2015. Semeval-2015 Task 5: QA TempEval -Evaluating Temporal Infor- mation Understanding with Question Answering. In Proceedings of the 9th International Workshop on Se- mantic Evaluation (SemEval 2015).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The stanford corenlp natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The stanford corenlp natural language process- ing toolkit. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (ACL), pages 55-60.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Generating Typed Dependency Parses from Phrase Structure Parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Fifth Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. 
In Proceedings of the Fifth Conference on Language Re- sources and Evaluation (LREC-2006).", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Spanish Gigaword Third Edition", "authors": [ { "first": "Daniel", "middle": [], "last": "Angelo Mendon Ca", "suffix": "" }, { "first": "David", "middle": [], "last": "Jaquette", "suffix": "" }, { "first": "Denise", "middle": [], "last": "Graff", "suffix": "" }, { "first": "", "middle": [], "last": "Dipersio", "suffix": "" } ], "year": 2011, "venue": "Linguistic Data Consortium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelo Mendon ca, Daniel Jaquette, David Graff, and Denise DiPersio. 2011. Spanish Gigaword Third Edi- tion. In Linguistic Data Consortium.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Distributed Representations of Words and Phrases and their Compositionality", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Proceed- ings of NIPS.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Rectified Linear Units Improve Restricted Boltzmann Machines", "authors": [ { "first": "V", "middle": [], "last": "Nair", "suffix": "" }, { "first": "G", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2010, "venue": "Proceedings of 27th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Nair and G. E. Hinton. 2010. Rectified Linear Units Improve Restricted Boltzmann Machines. In Pro- ceedings of 27th International Conference on Machine Learning.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Computational Exploration of the Linguistic Structures of Future-Oriented Expression: Classification and Categorization", "authors": [ { "first": "A", "middle": [], "last": "Nie", "suffix": "" }, { "first": "J", "middle": [], "last": "Shepard", "suffix": "" }, { "first": "J", "middle": [], "last": "Choi", "suffix": "" }, { "first": "B", "middle": [], "last": "Copley", "suffix": "" }, { "first": "P", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the NAACL Student Research Workshop", "volume": "15", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Nie, J. Shepard, J. Choi, B. Copley, and P. Wolff. 2015. Computational Exploration of the Linguistic Structures of Future-Oriented Expression: Classifica- tion and Categorization. 
In Proceedings of the NAACL Student Research Workshop (NAACL-SRW'15).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Memory-Based Dependency Parsing", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "J", "middle": [], "last": "Hall", "suffix": "" }, { "first": "J", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nivre, J. Hall, and J. Nilsson. 2004. Memory- Based Dependency Parsing. In Proceedings of the Eighth Conference on Computational Natural Lan- guage Learning (CoNLL).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Linguistic Data Consortium", "authors": [ { "first": "Robert", "middle": [], "last": "Parker", "suffix": "" }, { "first": "David", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kazuaki", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English Gigaword. In Lin- guistic Data Consortium.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The TIMEBANK Corpus", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "P", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "R", "middle": [], "last": "Saur", "suffix": "" }, { "first": "A", "middle": [], "last": "See", "suffix": "" }, { "first": "R", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "A", "middle": [], "last": "Setzer", "suffix": "" }, { "first": "D", "middle": [], "last": "Radev", "suffix": "" }, { "first": "B", "middle": [], "last": "Sundheim", "suffix": "" }, { "first": "D", "middle": [], "last": "Day", "suffix": "" }, { "first": "L", "middle": [], "last": "Ferro", "suffix": "" }, { "first": "M", "middle": [], "last": "Lazo", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Corpus Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pustejovsky, P. Hanks, R. Saur, A. See, R. Gaizauskas, A. Setzer, D. Radev, B. Sundheim, D. Day, L. Ferro, and M. Lazo. 2003. The TIMEBANK Corpus. In Proceedings of Corpus Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning Dictionaries for Information Extraction by Multi-Level Bootstrapping", "authors": [ { "first": "E", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "R", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Sixteenth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Riloff and R. Jones. 1999. Learning Dictionaries for Information Extraction by Multi-Level Bootstrapping. 
In Proceedings of the Sixteenth National Conference on Artificial Intelligence.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Open Domain Event Extraction from Twitter", "authors": [ { "first": "A", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "O", "middle": [], "last": "Mausam", "suffix": "" }, { "first": "S", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2012, "venue": "The 18th ACM SIGKDD Knowledge Discovery and Data Mining Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Ritter, Mausam, O. Etzioni, and S. Clark. 2012. Open Domain Event Extraction from Twitter. In The 18th ACM SIGKDD Knowledge Discovery and Data Min- ing Conference.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Relational Learning via Propositional Algorithms: An Information Extraction Case Study", "authors": [ { "first": "D", "middle": [], "last": "Roth", "suffix": "" }, { "first": "W", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1257--1263", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Roth and W. Yih. 2001. Relational Learning via Propositional Algorithms: An Information Extraction Case Study. In Proceedings of the Seventeenth In- ternational Joint Conference on Artificial Intelligence, pages 1257-1263, Seattle, WA, August.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Extracting Human Temporal Orientation in Facebook Language", "authors": [ { "first": "A", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "G", "middle": [], "last": "Park", "suffix": "" }, { "first": "M", "middle": [], "last": "Sap", "suffix": "" }, { "first": "E", "middle": [], "last": "Weingarten", "suffix": "" }, { "first": "J", "middle": [], "last": "Eichstaedt", "suffix": "" }, { "first": "M", "middle": [], "last": "Kern", "suffix": "" }, { "first": "J", "middle": [], "last": "Berger", "suffix": "" }, { "first": "M", "middle": [], "last": "Seligman", "suffix": "" }, { "first": "L", "middle": [], "last": "Ungar", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the The 2015 Conference of the North American Chapter of the Association for Computational Linguistics -Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Schwartz, G. Park, M. Sap, E. Weingarten, J. Eich- staedt, M. Kern, J. Berger, M. Seligman, and L. Un- gar. 2015. Extracting Human Temporal Orientation in Facebook Language. In Proceedings of the The 2015 Conference of the North American Chapter of the As- sociation for Computational Linguistics -Human Lan- guage Technologies.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "The syntax and interpretation of temporal expressions in English", "authors": [ { "first": "Carlota", "middle": [], "last": "Smith", "suffix": "" } ], "year": 1978, "venue": "Linguistics & Philosophy", "volume": "2", "issue": "", "pages": "43--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlota Smith. 1978. The syntax and interpretation of temporal expressions in English. 
Linguistics & Phi- losophy, 2:43-99.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Lexicalization patterns: Semantic structure in lexical forms", "authors": [ { "first": "Leonard", "middle": [], "last": "Talmy", "suffix": "" } ], "year": 1985, "venue": "Language Typology and Syntactic Description", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leonard Talmy. 1985. Lexicalization patterns: Semantic structure in lexical forms. In Timothy Shopen, editor, Language Typology and Syntactic Description, Volume 3. Cambridge University Press.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Evaluating Temporal Information Understanding with Temporal Question Answering", "authors": [ { "first": "N", "middle": [], "last": "Uzzaman", "suffix": "" }, { "first": "H", "middle": [], "last": "Llorens", "suffix": "" }, { "first": "J", "middle": [], "last": "Allen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of IEEE International Conference on Semantic Computing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. UzZaman, H. Llorens, and J. Allen. 2012. Evaluating Temporal Information Understanding with Temporal Question Answering. In Proceedings of IEEE Inter- national Conference on Semantic Computing.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "SemEval-2013 task 1: TempEval-3 evaluating time expressions, events, and temporal relations", "authors": [ { "first": "N", "middle": [], "last": "Uzzaman", "suffix": "" }, { "first": "H", "middle": [], "last": "Llorens", "suffix": "" }, { "first": "J", "middle": [], "last": "Allen", "suffix": "" }, { "first": "L", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "M", "middle": [], "last": "Verhagen", "suffix": "" }, { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. UzZaman, H. Llorens, J. Allen, L. Derczynski, M. Verhagen, and J. Pustejovsky. 2013. SemEval- 2013 task 1: TempEval-3 evaluating time expressions, events, and temporal relations. In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013).", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "SemEval-2007 task 15: TempEval temporal relation identification", "authors": [ { "first": "M", "middle": [], "last": "Verhagen", "suffix": "" }, { "first": "R", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "F", "middle": [], "last": "Schilder", "suffix": "" }, { "first": "M", "middle": [], "last": "Hepple", "suffix": "" }, { "first": "G", "middle": [], "last": "Katz", "suffix": "" }, { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Verhagen, R. Gaizauskas, F. Schilder, M. Hepple, G. Katz, and J. Pustejovsky. 2007. SemEval-2007 task 15: TempEval temporal relation identification. 
In Proceedings of the 4th International Workshop on Se- mantic Evaluations.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Proceedings of the 5th International Workshop on Semantic Evaluation", "authors": [ { "first": "M", "middle": [], "last": "Verhagen", "suffix": "" }, { "first": "R", "middle": [], "last": "Sauri", "suffix": "" }, { "first": "T", "middle": [], "last": "Caselli", "suffix": "" }, { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Verhagen, R. Sauri, T. Caselli, and J. Pustejovsky. 2010. SemEval-2010 task 13: TempEval-2. In Pro- ceedings of the 5th International Workshop on Seman- tic Evaluation.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Timeline Generation through Evolutionary Trans-temporal Summarization", "authors": [ { "first": "R", "middle": [], "last": "Yan", "suffix": "" }, { "first": "L", "middle": [], "last": "Kong", "suffix": "" }, { "first": "C", "middle": [], "last": "Huang", "suffix": "" }, { "first": "X", "middle": [], "last": "Wan", "suffix": "" }, { "first": "X", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Yan, L. Kong, C. Huang, X. Wan, X. Li, and Y. Zhang. 2011. Timeline Generation through Evolutionary Trans-temporal Summarization. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Kernel Methods for Relation Extraction", "authors": [ { "first": "Dmitry", "middle": [], "last": "Zelenko", "suffix": "" }, { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Richardella", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel Methods for Relation Extraction. Journal of Machine Learning Research, 3.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "", "uris": null }, "TABREF0": { "num": null, "type_str": "table", "text": "The three subtypes of future events are importantPast[EN] Today's demonstration ended without violence.An estimated 2,000 people protested against the government in Peru.[SP] Termin\u00f3 la manifestaci\u00f3n de los kurdos en la UNESCO de Par\u00eds.On-going [EN] Negotiations continue with no end in sight for the 2 week old strike.Yesterday's rallies have caused police to fear more today.[SP] Pacifistas latinoamericanos no cesan sus protestas contra guerra en Irak.", "content": "
Future Planned [EN] 77 percent of German steelworkers voted to strike to raise their wages.
Peace groups have already started organizing mass protests in Sydney.
[SP] Miedo en la City en v\u00edspera de masivas protestas que la toman por blanco.
Future Alert [EN] Farmers have threatened to hold demonstrations on Monday.
Nurses are warning they intend to walkout if conditions don't improve.
[SP] Indigenas hondure\u00f1os amenazan con declararse en huelga de hambre.
Future Possible [EN] Residents fear riots if the policeman who killed the boy is acquitted.
The military is preparing for possible protests at the G8 summit.
[SP] Polic\u00eda Militar analiza la posibilidad de decretar una huelga nacional.
", "html": null }, "TABREF1": { "num": null, "type_str": "table", "text": "Examples of event status categories for civil unrest events, showing two examples in English [EN] and one in Spanish [SP].", "content": "", "html": null }, "TABREF3": { "num": null, "type_str": "table", "text": "Counts of Temporal Status Labels in EventStatus.", "content": "
", "html": null }, "TABREF5": { "num": null, "type_str": "table", "text": "Number and % (in parentheses) of event mentions by syntactic form. PA = Past; OG = On-going; FU = Future", "content": "
", "html": null }, "TABREF6": { "num": null, "type_str": "table", "text": "Examples from Future Oriented Verb Lexicon", "content": "
", "html": null }, "TABREF8": { "num": null, "type_str": "table", "text": "Experimental Results on English Data. Each cell shows Recall/Precision/F-score.", "content": "
Row Method PA OG FU Macro Micro
1 TIPSem 19/84/31 14/38/20 4/53/8 12/58/20 16/65/25
2 TIPSem with transitivity 69/70/70 40/35/37 12/62/20 40/56/47 54/59/56
3 SVM with all features 84/77/80 48/51/49 42/57/48 58/62/60 69/69/69
4 SVM with BOW features only 82/75/78 53/56/54 34/52/41 56/61/59 68/68/68
5 +Tense/Aspect/Time 82/77/79 55/57/56 45/61/52 61/65/63 70/70/70
6 +Governing Word 83/75/79 51/56/53 42/58/49 59/63/61 69/69/69
7 +Future Oriented Lexicon 82/77/79 55/57/56 47/63/54 61/65/63 70/70/70
8 Convolutional Neural Net 84/80/82 60/58/59 44/59/50 62/66/64 72/72/72
", "html": null }, "TABREF9": { "num": null, "type_str": "table", "text": "Experimental Results on Spanish Data. Each cell shows Recall/Precision/F-score.", "content": "
", "html": null }, "TABREF10": { "num": null, "type_str": "table", "text": "Label Distributions in the Test Set", "content": "PA OG FU
English 1385 (68%) 427 (21%) 233 (11%)
Spanish 1251 (59%) 589 (28%) 280 (13%)
", "html": null }, "TABREF12": { "num": null, "type_str": "table", "text": "Confusion Matrices for TIPSem (with transitivity).", "content": "
", "html": null } } } }