{
"paper_id": "N15-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:33:27.401673Z"
},
"title": "Inferring Temporally-Anchored Spatial Knowledge from Semantic Roles",
"authors": [
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": "",
"affiliation": {
"laboratory": "Human Intelligence and Language Technologies Lab",
"institution": "University of North Texas Denton",
"location": {
"postCode": "76203",
"region": "TX"
}
},
"email": "eduardo.blanco@unt.edu"
},
{
"first": "Alakananda",
"middle": [],
"last": "Vempala",
"suffix": "",
"affiliation": {
"laboratory": "Human Intelligence and Language Technologies Lab",
"institution": "University of North Texas Denton",
"location": {
"postCode": "76203",
"region": "TX"
}
},
"email": "alakanandavempala@my.unt.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a framework to infer spatial knowledge from verbal semantic role representations. First, we generate potential spatial knowledge deterministically. Second, we determine whether it can be inferred and a degree of certainty. Inferences capture that something is located or is not located somewhere, and temporally anchor this information. An annotation effort shows that inferences are ubiquitous and intuitive to humans.",
"pdf_parse": {
"paper_id": "N15-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a framework to infer spatial knowledge from verbal semantic role representations. First, we generate potential spatial knowledge deterministically. Second, we determine whether it can be inferred and a degree of certainty. Inferences capture that something is located or is not located somewhere, and temporally anchor this information. An annotation effort shows that inferences are ubiquitous and intuitive to humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Extracting semantic relations from text is at the core of text understanding. Semantic relations encode semantic connections between words. For example, from (1) Bill couldn't handle the pressure and quit yesterday, one could extract that the CAUSE of quit was the pressure. Doing so would help answering question Why did Bill quit? and determining that the pressure started before Bill quit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the past years, computational semantics has received a significant boost. But extracting all semantic relations in text-even in single sentences-is still an elusive goal. Most existing approaches target either a single relation, e.g., PART-WHOLE (Girju et al., 2006) , or relations that hold between arguments following some syntactic construction, e.g., possessives (Tratz and Hovy, 2013) . Among the latter kind, the task of verbal semantic role labeling focuses on extracting semantic links exclusively between verbs and their arguments. PropBank (Palmer et al., 2005) is a popular corpus for this task, and tools to extract verbal semantic roles have been proposed for years (Carreras and M\u00e0rquez, 2005) .",
"cite_spans": [
{
"start": 238,
"end": 248,
"text": "PART-WHOLE",
"ref_id": null
},
{
"start": 249,
"end": 269,
"text": "(Girju et al., 2006)",
"ref_id": "BIBREF11"
},
{
"start": 370,
"end": 392,
"text": "(Tratz and Hovy, 2013)",
"ref_id": null
},
{
"start": 553,
"end": 574,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF19"
},
{
"start": 682,
"end": 710,
"text": "(Carreras and M\u00e0rquez, 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some semantic relations hold forever, e.g., the CAUSE of event quit in example (1) above is pressure. Discussing when this CAUSE holds is somewhat artificial: at some point Bill quit, and he did so because of the pressure. But LOCATION and other semantic relations often do not hold forever. For example, while buildings typically have one location during their existence, people and objects such as cars and books do not: they participate in events and as a result their locations change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a framework to infer temporally-anchored spatial knowledge from verbal semantic roles. Specifically, our goal is to infer whether something is located somewhere or not located somewhere, and temporally anchor this spatial information. Consider sentence (2) John was incarcerated at Shawshank prison and its semantic roles (Figure 1 , solid arrows). Given these roles, we aim at inferring that John had LOCATION Shawshank prison during event incarcerated, and that he (probably) did not have this LOCATION before and after (discontinuous arrow). Our intuition is that knowing that incarcerated has THEME John and LO-CATION Shawshank prison will help making these inferences. As we shall discuss, sometimes we have evidence that something is (or is not) located somewhere, but cannot completely commit.",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 351,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We target temporally-anchored spatial knowledge between intra-sentential arguments of verbs, not only between arguments of the same verb as exemplified in Figure 1 . The main contributions are:",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) analysis of spatial knowledge inferable from PropBank-style semantic roles; (2) annotations of temporally-anchored LOCATION relations on top of OntoNotes; 1 (3) supervised models to infer the additional spatial knowledge; and (4) experiments detailing results using lexical, syntactic and semantic features. The framework presented here infers over 44% spatial knowledge on top of the PropBank-style semantic roles annotated in OntoNotes (certYES and certNO labels, Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We denote a semantic relation R between x and y as R(x, y). R(x, y) could be read \"x has R y\", e.g., AGENT(moved, John) could be read \"moved has AGENT John\". Semantic roles 2 are semantic relations R(x, y) such that x is a verb and y is an argument of x. We refer to any spatial relation LO-CATION(x, y) where 1x is not a verb, or 2x is a verb but y is not a argument of x, as additional spatial knowledge. As we shall see, we target additional spatial knowledge beyond plain LOCATION(x, y) relations, which only specify the location y of x. Namely, we consider polarity, i.e., whether something is or is not located somewhere, and temporally anchor this information. This paper complements semantic role representations with additional spatial knowledge. We follow a practical approach by inferring spatial knowledge from PropBank-style semantic roles. We believe this is an advantage since PropBank is wellknown in the field and several tools to predict Prop-Bank roles are documented and publicly available. 3 The work presented here could be incorporated into any NLP pipeline after role labeling without modifications to other components.",
"cite_spans": [
{
"start": 1011,
"end": 1012,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Roles and Additional Spatial Knowledge",
"sec_num": "2"
},
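{
"text": "To make the target representation concrete, here is a minimal sketch (ours, not from the paper) of the objects discussed above, written in Python; the class and field names are illustrative assumptions.\n\nfrom dataclasses import dataclass\n\n@dataclass\nclass SemanticRole:\n    label: str  # e.g., 'ARG0' or 'ARGM-LOC'\n    pred: str   # the verb x, e.g., 'moved'\n    arg: str    # the argument y, e.g., 'John'\n\n@dataclass\nclass SpatialInference:\n    x: str          # the entity being located\n    y: str          # the candidate location\n    anchor: str     # 'before', 'during' or 'after' the verb y attaches to\n    polarity: bool  # True: x is located at y; False: x is not located at y\n    certainty: str  # 'certain' or 'probable'\n\n# AGENT(moved, John) reads 'moved has AGENT John':\nrole = SemanticRole('ARG0', 'moved', 'John')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Roles and Additional Spatial Knowledge",
"sec_num": "2"
},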
{
"text": "PropBank (Palmer et al., 2005) Treebank. It uses a set of numbered arguments 4 (ARG 0 , ARG 1 , etc.) and modifiers (ARGM-TMP, ARGM-MNR, etc.). Numbered arguments do not share a common meaning across verbs, they are defined on verb-specific framesets. For example, ARG 2 is used to indicate \"employer\" with verb work.01 and \"expected terminus of sleep\" with verb sleep.01 (Table 1) . Unlike numbered arguments, modifiers have the same meaning across verbs ( Table 2) .",
"cite_spans": [
{
"start": 9,
"end": 30,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 372,
"end": 381,
"text": "(Table 1)",
"ref_id": "TABREF0"
},
{
"start": 458,
"end": 466,
"text": "Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "PropBank and OntoNotes",
"sec_num": "2.1"
},
{
"text": "The original PropBank corpus consists of (1) 3,327 framesets, each frameset defines the numbered roles for a verb, and (2) actual semantic role annotations (numbered arguments and modifiers) for 112,917 verbs. On average, each verb has 1.93 numbered arguments and 0.66 modifiers annotated. Only 7,198 verbs have an ARGM-LOC annotated, i.e., location information is present in 6.37% of verbs. For more information about PropBank and examples, refer to the annotation guidelines. 5 OntoNotes (Hovy et al., 2006 ) is a more recent corpus that includes POS tags, word senses, parse trees, speaker information, named entities, PropBank-style semantic roles and coreference. While the original PropBank annotations were done exclusively in the news domain, OntoNotes includes other genres as well: broadcast and telephone conversations, weblogs, etc. Because of the additional annotation layers and genres, we work with OntoNotes instead of PropBank. ",
"cite_spans": [
{
"start": 478,
"end": 479,
"text": "5",
"ref_id": null
},
{
"start": 490,
"end": 508,
"text": "(Hovy et al., 2006",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PropBank and OntoNotes",
"sec_num": "2.1"
},
{
"text": "Sentences contain spatial information beyond ARGM-LOC semantic role, i.e., beyond links between verbs and their arguments. There are two main types of additional LOCATION(x, y) relations: 6 (1) those whose arguments x and y are semantic roles of a verb, and (2) those whose arguments x and y are not semantic roles of a verb.",
"cite_spans": [
{
"start": 188,
"end": 189,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Spatial Knowledge",
"sec_num": "2.2"
},
{
"text": "The first kind can be further divided into (1a) those whose arguments are semantic roles of the same verb ( Figure 1) , and (1b) those whose arguments are semantic roles of different verbs. Fig In this paper, we focus on extracting additional spatial knowledge of type (1), and reserve type (2) for future work. More specifically, we infer spatial knowledge between x and y, where the following semantic roles exist: ARG i (x pred , x) and ARGM-LOC(y pred , y). ARG i indicates any numbered argument (ARG 0 , ARG 1 , ARG 2 , etc.) and x pred (y pred ) indicates the verbal predicate to which x (y) attaches. Targeting additional spatial knowledge exclusively for numbered arguments is not a significant limitation: most semantic roles annotated in OntoNotes (75%) are numbered arguments, and it is pointless to infer spatial knowledge for most modifiers, e.g., ARGM-EXT, ARGM-DIS, ARGM-ADV, ARGM-MOD, ARGM-NEG, ARGM-DIR.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 117,
"text": "Figure 1)",
"ref_id": "FIGREF0"
},
{
"start": 190,
"end": 198,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Additional Spatial Knowledge",
"sec_num": "2.2"
},
{
"text": "Annotating all additional spatial knowledge in OntoNotes inferable from semantic roles is a daunting task. OntoNotes is a large corpus with 63,918 sentences and 9,924 ARGM-LOC semantic roles annotated. Our goal is not to present an extensive annotation effort, but rather show that additional temporally-anchored spatial knowledge can be (1) annotated reliably by non-experts following simple guidelines, and (2) inferred automatically using supervised machine learning. Thus, we focus on 200 sentences from OntoNotes that have at least one ARGM-LOC role annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Spatial Knowledge",
"sec_num": "3"
},
{
"text": "foreach sentence s do foreach sem. role ARGM-LOC(y pred , y) \u2208 s do foreach sem. role ARG i (x pred , x) \u2208 s do if is valid(x, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Spatial Knowledge",
"sec_num": "3"
},
{
"text": "then Is x located at y before y pred ? Is x located at y during y pred ? Is x located at y after y pred ?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Spatial Knowledge",
"sec_num": "3"
},
{
"text": "Algorithm 1: Procedure to generate potential additional spatial knowledge of type (1) (Section 2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Spatial Knowledge",
"sec_num": "3"
},
{
"text": "Obviously, [the pilot All potential additional spatial knowledge is generated with Algorithm 1, and a manual annotation effort determines whether spatial knowledge should be inferred. Algorithm 1 loops over all ARGM-LOC roles, and generates questions regarding whether spatial knowledge can be inferred for any numbered argument within the same sentence. is valid(x, y) returns True if (1) x is not contained in y and (2) y is not contained in x. Considering invalid pairs would be trivial or nonsensical, e.g., pair (x: about what was happening on the ground, y: on the ground) is invalid in the sentence depicted in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 618,
"end": 626,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Annotating Spatial Knowledge",
"sec_num": "3"
},
{
"text": "]ARG 0 , v 1 did[n't]ARGM-NEG, v 1 [think]v 1 [too much]ARGM-EXT, v 1 [about [what]ARG 1 , v 2 was [happening]v 2 [on the ground]ARGM-LOC, v 2 , or . . . ]ARG 1 , v 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Spatial Knowledge",
"sec_num": "3"
},
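{
"text": "A minimal executable sketch of Algorithm 1 follows (our illustration, not the paper's code; the tuple-based role encoding and the substring-based is_valid are simplifying assumptions, since the paper operates on OntoNotes annotations).\n\ndef is_valid(x, y):\n    # A pair is valid iff neither argument is contained in the other.\n    return x not in y and y not in x\n\ndef is_numbered(label):\n    # True for ARG0, ARG1, ARG2, ... but not for ARGM-* modifiers.\n    return label.startswith('ARG') and label[3:].isdigit()\n\ndef generate_questions(sentence_roles):\n    # sentence_roles: list of (label, pred, arg) tuples for one sentence.\n    questions = []\n    for loc_label, y_pred, y in sentence_roles:\n        if loc_label != 'ARGM-LOC':\n            continue\n        for arg_label, x_pred, x in sentence_roles:\n            if is_numbered(arg_label) and is_valid(x, y):\n                for anchor in ('before', 'during', 'after'):\n                    questions.append((x, y, anchor, y_pred))\n    return questions\n\n# The sentence of Figure 1 (role labels are illustrative):\nroles = [('ARG1', 'incarcerated', 'John'),\n         ('ARGM-LOC', 'incarcerated', 'Shawshank prison')]\nfor x, y, anchor, y_pred in generate_questions(roles):\n    print('Is %s located at %s %s %s?' % (x, y, anchor, y_pred))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Spatial Knowledge",
"sec_num": "3"
},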
{
"text": "In a first batch of annotations, two annotators were asked questions generated by Algorithm 1 and required to answer YES or NO. The only information they had available was the source sentence without semantic role information. Feedback from this first attempt revealed that (1) because of the nature of x or y, sometimes questions are pointless, and (2) because of uncertainty, sometimes it is not correct to answer YES or NO, even tough there is some evidence that makes either answer likely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process and Guidelines",
"sec_num": "3.1"
},
{
"text": "Based on this feedback, and inspired by previous annotation guidelines (Saur\u00ed and Pustejovsky, 2012), in a second batch we allowed five answers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process and Guidelines",
"sec_num": "3.1"
},
{
"text": "\u2022 certYES: I am certain that the answer is yes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process and Guidelines",
"sec_num": "3.1"
},
{
"text": "\u2022 probYES: It is probable that the answer is yes, but it is not guaranteed. \u2022 certNO: I am certain that the answer is no.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process and Guidelines",
"sec_num": "3.1"
},
{
"text": "\u2022 probNO: It is probable that the answer is no, but it is not guaranteed. \u2022 UNK: There is not enough information to answer, I can't tell the location of x. The goal is to infer spatial knowledge as gathered by humans when reading text. Thus, annotators were encouraged to use commonsense and world knowledge. While simple and somewhat open to interpretation, these guidelines allowed as to gather annotations with \"good reliability\" (Section 3.3.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process and Guidelines",
"sec_num": "3.1"
},
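{
"text": "A minimal sketch (an assumed encoding, not part of the paper) of the five-way answer scale as a Python enum; later sections treat certYES/certNO as certain inferences and probYES/probNO as probable ones.\n\nfrom enum import Enum\n\nclass Answer(Enum):\n    CERT_YES = 'certYES'  # certain that x is located at y\n    PROB_YES = 'probYES'  # probably located, not guaranteed\n    PROB_NO = 'probNO'    # probably not located, not guaranteed\n    CERT_NO = 'certNO'    # certain that x is not located at y\n    UNK = 'UNK'           # not enough information to tell",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process and Guidelines",
"sec_num": "3.1"
},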
{
"text": "In this section, we present annotation examples after resolving conflicts (Figure 4 ). These examples show that ambiguity is common and sentences must be fully interpreted before annotating.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 83,
"text": "(Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Annotation Examples",
"sec_num": "3.2"
},
{
"text": "Sentence 4(a) has four semantic roles for verb collecting (solid arrows), and annotators are asked to decide whether ARG 0 and ARG 1 of collecting are located at the ARGM-LOC before, during or after collecting (discontinuous arrows). Annotators interpreted that the FBI agents and divers (ARG 0 ) and evidence (ARG 1 ) were located at Lake Logan (ARGM-LOC) during collecting (certYES). They also annotated that the FBI agents and divers were likely to be located at Lake Logan before and after (probYES). Finally, they determined that the evidence was located at Lake Logan before the collecting (certYES), but probably not after (probNO). These annotations reflect the natural reading of sentence 4(a): (1) people and whatever they collect are located where the collecting takes place during the event, (2) people collecting are likely to be at that location before and after (i.e., presumably they do not arrive immediately before and leave immediately after), and (3) the objects being collected are located at that location before collecting, but probably not after.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Examples",
"sec_num": "3.2"
},
{
"text": "Sentence 4(b) is more complex. First, potential relation LOCATION(in sight, at the intersection) is annotated UNK: it is nonsensical to ask for the location of sight. Second, the Disney symbols are never located at the intersection (certNO). Third, both the car and security guard were located at the intersection during the stop for sure (certYES). Fourth, annotators interpreted that the car was not at the intersection before (certNO), but they were not sure about after (probNO). Fifth, they considered that the security guard was probably located at the intersec- tion before and after. In other words, annotators understood that (1) the car was moving down a road and arrived at the intersection;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Examples",
"sec_num": "3.2"
},
{
"text": "(2) then, it was pulled over by a security guard who is probably stationed at the intersection; and (3) after the stop, the car probably continued with its route but the guard probably stayed at the intersection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Examples",
"sec_num": "3.2"
},
{
"text": "Each annotator answered 1,995 questions generated with Algorithm 1. Basic label counts after resolving conflicts are shown in Table 3 . First, it is worth noting that annotators used UNK to answer only 5.26% of questions. Thus, over 94% of times ARGM-LOC semantic role is found, additional spatial knowledge can be inferred with some degree of certainty. Second, annotators were certain about the additional spatial knowledge, i.e., labels certYES and certNO, 35.94% and 8.72% of times respectively. Thus, 44% of times one encounters ARGM-LOC seman- tic role, additional spatial knowledge can be inferred with certainty. Finally, annotators answered around 50% of questions with probYES or probNO. In other words, they found it likely that spatial information can be inferred, but were not completely certain. Table 4 presents observed agreements, i.e., raw percentage of equal annotations, and Cohen Kappa scores (Cohen, 1960) per temporal anchor and for all questions. Kappa scores are above 0.80, indicating \"good reliability\" (Artstein and Poesio, 2008) . We believe the high Kappa scores are due to the fact that we start from PropBank-style roles instead of plain text, and questions asked are intuitive. Note that not all disagreements are equal, e.g., the difference between certYES and certNO is much larger than the difference between certYES and probYES.",
"cite_spans": [
{
"start": 914,
"end": 927,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF6"
},
{
"start": 1030,
"end": 1057,
"text": "(Artstein and Poesio, 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 810,
"end": 817,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Annotation Analysis",
"sec_num": "3.3"
},
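{
"text": "A minimal sketch of the agreement computations reported above, on made-up annotator outputs (the actual label sequences are not reproduced here); observed agreement is the raw fraction of identical answers, and Cohen's kappa corrects it for chance agreement.\n\nfrom sklearn.metrics import cohen_kappa_score\n\nann1 = ['certYES', 'probYES', 'certNO', 'probYES', 'UNK', 'certYES']\nann2 = ['certYES', 'probYES', 'certNO', 'probNO', 'UNK', 'certYES']\n\n# Raw percentage of equal annotations (observed agreement).\nobserved = sum(a == b for a, b in zip(ann1, ann2)) / len(ann1)\n# Chance-corrected agreement (Cohen, 1960).\nkappa = cohen_kappa_score(ann1, ann2)\nprint('observed: %.2f, kappa: %.2f' % (observed, kappa))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Analysis",
"sec_num": "3.3"
},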
{
"text": "We follow a standard supervised machine learning approach. The 200 sentences were divided into train (80%) and test (20%), and the corresponding instances assigned to the train and test sets. 8 We trained an SVM with RBF kernel using scikit-learn (Pedregosa et al., 2011) . Parameters C and \u03b3 were tuned using 10-fold cross-validation with the training set, and results are calculated with test instances.",
"cite_spans": [
{
"start": 192,
"end": 193,
"text": "8",
"ref_id": null
},
{
"start": 247,
"end": 271,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Spatial Knowledge",
"sec_num": "4"
},
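{
"text": "A minimal sketch of this setup (toy data instead of the paper's features; the grid values are illustrative assumptions): an RBF-kernel SVM whose C and gamma are tuned by 10-fold cross-validation on the training portion only.\n\nfrom sklearn.datasets import make_classification\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.svm import SVC\n\n# Stand-in for the real feature matrix and labels.\nX, y = make_classification(n_samples=200, n_features=20, random_state=0)\nX_train, y_train = X[:160], y[:160]  # 80% train\nX_test, y_test = X[160:], y[160:]    # 20% test\n\n# Tune C and gamma with 10-fold cross-validation on the training set.\nparam_grid = {'C': [0.1, 1, 10, 100], 'gamma': [0.001, 0.01, 0.1, 1]}\nsearch = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=10)\nsearch.fit(X_train, y_train)\n\n# Results are reported on the held-out test instances.\nprint(search.score(X_test, y_test))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Spatial Knowledge",
"sec_num": "4"
},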
{
"text": "Selected features (Table 5 ) are a mix of lexical, syntactic and semantic features, and are extracted from tokens (words and POS tags), full parse trees and semantic roles. Lexical and syntactic features are standard in semantic role labeling (Gildea and Jurafsky, 2002) and we do not elaborate on them. Hereafter 8 Splitting instances randomly would be unfair, as instances from the same sentence would be assigned to the train and test sets. Thank you to an anonymous reviewer for pointing this out. we describe semantic features, which include any feature derived from semantic role representations.",
"cite_spans": [
{
"start": 243,
"end": 270,
"text": "(Gildea and Jurafsky, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "(Table 5",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "4.1"
},
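{
"text": "A minimal sketch (ours) of the sentence-level split described in footnote 8: instances are grouped by their source sentence so that no sentence contributes to both train and test; array contents are toy stand-ins.\n\nimport numpy as np\nfrom sklearn.model_selection import GroupShuffleSplit\n\nX = np.arange(40).reshape(20, 2)            # toy feature rows, one per instance\ny = np.zeros(20, dtype=int)                 # toy labels\nsentence_ids = np.repeat(np.arange(10), 2)  # two instances per sentence\n\n# 80/20 split over sentences, not over individual instances.\nsplitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)\ntrain_idx, test_idx = next(splitter.split(X, y, groups=sentence_ids))\nassert set(sentence_ids[train_idx]).isdisjoint(sentence_ids[test_idx])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "4.1"
},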
{
"text": "Features 30-33 correspond to the surface form and POS tag of the verbs to which x and y attach to. Feature 34 indicates the semantic role between x pred and x; note that the semantic role between y pred and y is always ARGM-LOC (Algorithm 1). Feature 35 distinguishes inferences of type (1a) from (1b) (Section 2.2): it indicates whether both x and y attach to the same verb, as in Figure 1 , or not, as in Figure 2 . Features 36-39 encode the first and last semantic role of x pred and y pred by order of appearance. Features 40-59 are binary flags signalling which se-Before",
"cite_spans": [],
"ref_spans": [
{
"start": 382,
"end": 390,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 407,
"end": 416,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "4.1"
},
{
"text": "After All P R F P R F P R F P R F most frequent baseline certYES 0.11 1.00 0.20 0.74 1.00 0.85 0.26 1.00 0.42 0.37 1.00 0.54 other labels 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 weighted avg. 0.01 0.11 0.02 0.54 0.74 0.63 0.07 0.26 0.11 0.14 0.37 0.20 most frequent per temporal anchor baseline certYES 0.00 0.00 0.00 0.75 1.00 0.86 0.00 0.00 0.00 0.75 0.62 0.68 probYES 0.00 0.00 0.00 0.00 0.00 0.00 0.45 1.00 0.62 0.45 0.56 0.50 probNO 0.38 1.00 0.55 0.00 0.00 0.00 0.00 0.00 0.00 0.38 0.62 0.47 other labels 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 weighted avg. 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 probNO 0.39 0.53 0.45 0.00 0.00 0.00 0.00 0.00 0.00 0.39 0.37 0.38 UNK 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 weighted avg. 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 probNO 0.35 0.51 0.41 0.00 0.00 0.00 0.00 0.00 0.00 0.35 0.35 0.35 UNK 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 weighted avg. Table 6 : Results obtained with two baselines, and training with several feature combinations. Models are trained with all instances (before, during and after).",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 45,
"text": "All P R F P R F P R F P R F",
"ref_id": null
},
{
"start": 1027,
"end": 1034,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "During",
"sec_num": null
},
{
"text": "mantic roles x pred and y pred have, and features 60-99 capture the index of each role (first, second, third, etc.) and its syntactic node (NP, PP, SBAR, etc.). Finally, features 100 and 101 capture the semantic role of x pred and y pred which fully contain y and x respectively, if such roles exists. These features are especially designed for our inference task and are exemplified in Figure 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 387,
"end": 395,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "During",
"sec_num": null
},
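{
"text": "A minimal sketch (ours, with made-up character offsets) of features 100 and 101: the feature fires with the label of the role that fully contains the other argument, mirroring the containedIn pairs of Figure 5.\n\ndef contained_in_role(span, roles):\n    # Return the label of the first role whose span strictly contains span.\n    s, e = span\n    for label, (rs, re) in roles:\n        if rs <= s and e <= re and (rs, re) != (s, e):\n            return label\n    return None\n\n# First sentence of Figure 5: x = '20 service members who' lies inside the\n# ARG1 of v1 ('by the remains of ...'), so x containedIn y role = ARG1.\nroles_of_v1 = [('ARGM-LOC', (0, 18)), ('ARG0', (20, 21)), ('ARG1', (38, 110))]\nx_span = (56, 78)\nprint(contained_in_role(x_span, roles_of_v1))  # -> ARG1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "4.1"
},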
{
"text": "Results obtained with the test set using two baselines and models trained with several feature combinations are presented in Table 6 . The most frequent baseline always predicts certYES, and the most frequent per temporal anchor baseline pre-dicts probNO, certYES and probYES for instances with temporal anchor before, during and after respectively. The most frequent baseline obtains a weighted F-measure of 0.20, and most frequent per temporal anchor baseline 0.50. Results with supervised models are better, but we note that always predicting certYES for during instances obtains the same F-measure than using all features (0.65). Table 6 presents results using all features. The weighted F-measure is 0.55, and the highest F-measures are obtained with labels certYES (0.71) and probYES (0.60). Results with certNO and probNO are lower (0.05 and 0.44), we believe this is due to the fact that few instances are annotated with this labels (8.72% and 19.75%, Ta-ble 3). Results are higher (0.65) with during instances than with before and after instances (0.41 and 0.45). These results are intuitive: certain events such as press and write require participants to be located where the event occurs only during the event.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 6",
"ref_id": null
},
{
"start": 634,
"end": 641,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
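{
"text": "A minimal sketch of the weighted scoring used throughout Table 6 (label sequences are made up, not the system's output): per-label precision, recall and F-measure are averaged, weighted by label frequency.\n\nfrom sklearn.metrics import precision_recall_fscore_support\n\ngold = ['certYES', 'probYES', 'probNO', 'certYES', 'UNK', 'probYES']\npred = ['certYES', 'probYES', 'certYES', 'certYES', 'probNO', 'probYES']\n\np, r, f, _ = precision_recall_fscore_support(\n    gold, pred, average='weighted', zero_division=0)\nprint('weighted P=%.2f R=%.2f F=%.2f' % (p, r, f))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},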
{
"text": "The weighted F-measure using lexical features is the same than with the most frequent per temporal anchor baseline (0.50). F-measures go up with before (0.21 vs. 0.32, 52.38%) and after (0.28 vs. 0.47, 67.85%) instances, but slightly down with during instances (0.65 vs. 0.63, \u22123.08%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Ablation and Detailed Results",
"sec_num": "5.1"
},
{
"text": "Complementing lexical features with syntactic and semantic features brings the overall weighted Fmeasure slightly up: 0.53 with syntactic and 0.52 with semantic features (+0.03 and +0.02, 6% and 4%). Before instances benefit the most from syntactic features (0.32 vs. 0.41, 28.13%), and after instances benefit from semantic features (0.47 vs. 0.49, 4.26%). During instances do not benefit from semantic features, and only gain 0.01 F-measure (1.59%) with syntactic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Ablation and Detailed Results",
"sec_num": "5.1"
},
{
"text": "Finally, combining lexical, syntactic and semantic features obtains the best overall results (weighted F-measure: 0.55 vs. 0.53 and 0.52, 3.77% and 5.77%). We note, however, that before instances do not benefit from including semantic features (same F-measure, 0.41), and the best results for after instances are obtained with lexical and semantic features (0.49 vs. 0.45, 8.16%),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Ablation and Detailed Results",
"sec_num": "5.1"
},
{
"text": "Tools to extract the PropBank semantic roles we infer from have been studied for years (Carreras and M\u00e0rquez, 2005; Haji\u010d et al., 2009; Lang and Lapata, 2010) . These systems only extract semantic links between predicates and their arguments, not between arguments of predicates. In contrast, this paper complements semantic role representations with spatial knowledge for numbered arguments.",
"cite_spans": [
{
"start": 87,
"end": 115,
"text": "(Carreras and M\u00e0rquez, 2005;",
"ref_id": "BIBREF5"
},
{
"start": 116,
"end": 135,
"text": "Haji\u010d et al., 2009;",
"ref_id": null
},
{
"start": 136,
"end": 158,
"text": "Lang and Lapata, 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "There have been several proposals to extract semantic links not annotated in well-known corpora such as PropBank (Palmer et al., 2005) , FrameNet (Baker et al., 1998) or NomBank (Meyers et al., 2004) . Gerber and Chai (2010) augment Nom-Bank annotations with additional numbered argu-ments appearing in the same or previous sentences; posterior work obtained better results for the same task (Gerber and Chai, 2012; Laparra and Rigau, 2013) . The SemEval-2010 Task 10: Linking Events and their Participants in Discourse (Ruppenhofer et al., 2009) targeted cross-sentence missing numbered arguments in PropBank and FrameNet. We have previously proposed an unsupervised framework to compose semantic relations out of previously extracted relations (Blanco and Moldovan, 2011; Blanco and Moldovan, 2014a) , and a supervised approach to infer additional argument modifiers (ARGM) for verbs in PropBank (Blanco and Moldovan, 2014b) . Unlike the current work, these previous efforts (1) improve the semantic representation of verbal and nominal predicates, or (2) infer relations between arguments of the same predicate. None of them target temporally-anchored spatial knowledge or account for uncertainty.",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF19"
},
{
"start": 146,
"end": 166,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF1"
},
{
"start": 178,
"end": 199,
"text": "(Meyers et al., 2004)",
"ref_id": "BIBREF18"
},
{
"start": 202,
"end": 224,
"text": "Gerber and Chai (2010)",
"ref_id": "BIBREF8"
},
{
"start": 392,
"end": 415,
"text": "(Gerber and Chai, 2012;",
"ref_id": "BIBREF9"
},
{
"start": 416,
"end": 440,
"text": "Laparra and Rigau, 2013)",
"ref_id": "BIBREF17"
},
{
"start": 520,
"end": 546,
"text": "(Ruppenhofer et al., 2009)",
"ref_id": "BIBREF21"
},
{
"start": 746,
"end": 773,
"text": "(Blanco and Moldovan, 2011;",
"ref_id": "BIBREF2"
},
{
"start": 774,
"end": 801,
"text": "Blanco and Moldovan, 2014a)",
"ref_id": "BIBREF3"
},
{
"start": 898,
"end": 926,
"text": "(Blanco and Moldovan, 2014b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Attaching temporal information to semantic relations is uncommon. In the context of the TAC KBP temporal slot filling track (Garrido et al., 2012; Surdeanu, 2013) , relations common in information extraction (e.g., SPOUSE, COUNTRY OF RESIDENCY) are assigned a temporal interval indicating when they hold. The task proved very difficult, and the best system achieved 48% of human performance. Unlike this line of work, the approach presented in this paper starts from semantic role representations, targets temporally-anchored LOCATION relations, and accounts for degrees of uncertainty (certYES / certNO vs. probYES / probNO).",
"cite_spans": [
{
"start": 124,
"end": 146,
"text": "(Garrido et al., 2012;",
"ref_id": "BIBREF7"
},
{
"start": 147,
"end": 162,
"text": "Surdeanu, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "The task of spatial role labeling (Haji\u010d et al., 2009; Kolomiyets et al., 2013) aims at thoroughly representing spatial information with so-called spatial roles, i.e., trajector, landmark, spatial and motion indicators, path, direction, distance, and spatial relations. Unlike us, the task does not consider temporal spans nor certainty. But as the examples throughout this paper show, doing so is useful because (1) spatial information for most objects changes over time, and (2) humans sometimes can only state that an object is probably located somewhere. In contrast to this task, we infer temporally-anchored spatial knowledge as humans intuitively understand it, and purposely avoid following any formalism.",
"cite_spans": [
{
"start": 34,
"end": 54,
"text": "(Haji\u010d et al., 2009;",
"ref_id": null
},
{
"start": 55,
"end": 79,
"text": "Kolomiyets et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Semantic roles encode semantic links between a verb and its arguments. Among other role labels, PropBank uses numbered arguments (ARG 0 , ARG 1 , etc.) to encode the core arguments of a verb, and ARGM-LOC to encode the location. This paper exploits these numbered arguments and ARGM-LOC in order to infer temporally-anchored spatial knowledge. This knowledge encodes whether a numbered argument x is or is not located in a location y, and temporally anchors this information with respect to the verb to which y attaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "An annotation effort with 200 sentences from OntoNotes has been presented. First, potential additional spatial knowledge is generated automatically (Algorithm 1). Then, annotators following straightforward guidelines answer questions asking for intuitive spatial information, including uncertainty. The result is annotations with high inter-annotator agreements that encode spatial knowledge as understood by humans when reading text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Experimental results show that inferring additional spatial knowledge can be done with a modest weighted F-measure of 0.55. Results are higher for certYES and probYES (0.71 and 0.60), the labels that indicate that something is certainly or probably located somewhere. Simple majority baselines provide strong results, but combining lexical, syntactic and semantic features yields the best results (0.50 vs. 0.55). Inferring spatial knowledge for numeric arguments before and after an event occurs is harder than during the event (0.41 and 0.45 vs. 0.65).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The most important conclusion of this work is the fact that given an ARGM-LOC semantic role, temporally-anchored spatial knowledge can be inferred for numbered arguments in the same sentence. Indeed, annotators answered 44% of questions with certYES or certNO, and 50% of questions with probYES or probNO. Another important observation is that spatial knowledge can be inferred from most verbs, not only motion verbs. While it is fairly obvious to infer from John went to Paris that he had LOCATION Paris after went but not before or during, we have shown that verbs such as incarcerated (Figure 1 ) also grant spatial inferences.",
"cite_spans": [],
"ref_spans": [
{
"start": 588,
"end": 597,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Available at http://hilt.cse.unt.edu/ 2 We use semantic role to refer to PropBank-style (verbal) semantic roles. NomBank(Meyers et al., 2004) and FrameNet(Baker et al., 1998) also annotate semantic roles.3 E.g.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Numbered arguments are also referred to as core.5 http://verbs.colorado.edu/\u02dcmpalmer/projects/ ace/PBguidelines.pdf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Both ARGM-LOC(x, y) and LOCATION(x, y) encode the same meaning, but we use ARGM-LOC for the PropBank semantic role and LOCATION for additional spatial knowledge.7 Note that the head of ARG0 is residents, not the apartments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Directions (SEW-2009) ",
"cite_spans": [
{
"start": 11,
"end": 21,
"text": "(SEW-2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Inter-coder agreement for computational linguistics",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "555--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computa- tional Linguistics, 34(4):555-596, December.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Berkeley FrameNet Project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceed- ings of the 17th international conference on Computa- tional Linguistics, Montreal, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised learning of semantic relation composition",
"authors": [
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011)",
"volume": "",
"issue": "",
"pages": "1456--1465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduardo Blanco and Dan Moldovan. 2011. Unsuper- vised learning of semantic relation composition. In Proceedings of the 49th Annual Meeting of the As- sociation for Computational Linguistics (ACL 2011), pages 1456-1465, Portland, Oregon.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Composition of semantic relations: Theoretical framework and case study",
"authors": [
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2014,
"venue": "ACM Trans. Speech Lang. Process",
"volume": "10",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduardo Blanco and Dan Moldovan. 2014a. Compo- sition of semantic relations: Theoretical framework and case study. ACM Trans. Speech Lang. Process., 10(4):17:1-17:36, January.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Leveraging verb-argument structures to infer semantic relations",
"authors": [
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014)",
"volume": "",
"issue": "",
"pages": "145--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduardo Blanco and Dan Moldovan. 2014b. Leveraging verb-argument structures to infer semantic relations. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguis- tics (EACL 2014), pages 145-154, Gothenburg, Swe- den.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Introduction to the CoNLL-2005 shared task: semantic role labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2005,
"venue": "CONLL '05: Proceedings of the Ninth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "152--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2005. Introduction to the CoNLL-2005 shared task: semantic role label- ing. In CONLL '05: Proceedings of the Ninth Confer- ence on Computational Natural Language Learning, pages 152-164.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Coefficient of Agreement for Nominal Scales",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educational and Psychological Measurement",
"volume": "20",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Temporally anchored relation extraction",
"authors": [
{
"first": "Guillermo",
"middle": [],
"last": "Garrido",
"suffix": ""
},
{
"first": "Anselmo",
"middle": [],
"last": "Pe\u00f1as",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Cabaleiro",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "107--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillermo Garrido, Anselmo Pe\u00f1as, Bernardo Cabaleiro, and\u00c1lvaro Rodrigo. 2012. Temporally anchored re- lation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguis- tics: Long Papers -Volume 1, ACL '12, pages 107- 116.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Beyond Nom-Bank: A Study of Implicit Arguments for Nominal Predicates",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1583--1592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Gerber and Joyce Chai. 2010. Beyond Nom- Bank: A Study of Implicit Arguments for Nominal Predicates. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguistics, pages 1583-1592, Uppsala, Sweden, July.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semantic role labeling of implicit arguments for nominal predicates",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "",
"pages": "755--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Gerber and Joyce Chai. 2012. Semantic role labeling of implicit arguments for nominal predicates. Computational Linguistics, 38:755-798, 2012.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic la- beling of semantic roles. Computational Linguistics, 28(3):245-288, September.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic discovery of part-whole relations",
"authors": [
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Badulescu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "1",
"pages": "83--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roxana Girju, Adriana Badulescu, and Dan Moldovan. 2006. Automatic discovery of part-whole relations. Computational Linguistics, 32(1):83-135, March.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Jan\u0161t\u011bp\u00e1nek",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Stra\u0148\u00e1k",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, CoNLL '09",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, Pavel Stra\u0148\u00e1k, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The conll- 2009 shared task: Syntactic and semantic dependen- cies in multiple languages. In Proceedings of the Thir- teenth Conference on Computational Natural Lan- guage Learning: Shared Task, CoNLL '09, pages 1- 18.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "OntoNotes: the 90% Solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "NAACL'06: Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: the 90% Solution. In NAACL'06: Proceedings of the Human Language Technology Conference of the NAACL, pages 57-60, Morristown, NJ, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semeval-2013 task 3: Spatial role labeling",
"authors": [
{
"first": "Oleksandr",
"middle": [],
"last": "Kolomiyets",
"suffix": ""
},
{
"first": "Parisa",
"middle": [],
"last": "Kordjamshidi",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"volume": "2",
"issue": "",
"pages": "255--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oleksandr Kolomiyets, Parisa Kordjamshidi, Marie- Francine Moens, and Steven Bethard. 2013. Semeval- 2013 task 3: Spatial role labeling. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh Inter- national Workshop on Semantic Evaluation (SemEval 2013), pages 255-262.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised induction of semantic roles",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "939--947",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Lang and Mirella Lapata. 2010. Unsupervised in- duction of semantic roles. In Human Language Tech- nologies: The Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics, HLT '10, pages 939-947.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Impar: A deterministic algorithm for implicit semantic role labelling",
"authors": [
{
"first": "Egoitz",
"middle": [],
"last": "Laparra",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1180--1189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Egoitz Laparra and German Rigau. 2013. Impar: A deterministic algorithm for implicit semantic role la- belling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1180-1189, Sofia, Bul- garia, August.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Nom-Bank Project: An Interim Report",
"authors": [
{
"first": "A",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reeves",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Macleod",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Szekely",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zielinska",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation",
"volume": "",
"issue": "",
"pages": "24--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Meyers, R. Reeves, C. Macleod, R. Szekely, V. Zielin- ska, B. Young, and R. Grishman. 2004. The Nom- Bank Project: An Interim Report. In A. Meyers, ed- itor, HLT-NAACL 2004 Workshop: Frontiers in Cor- pus Annotation, pages 24-31, Boston, Massachusetts, USA, May.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The Proposition Bank: An Annotated Corpus of Semantic Roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Cor- pus of Semantic Roles. Computational Linguistics, 31(1):71-106.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duches- nay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825- 2830.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "SemEval-2010 Task 10: Linking Events and Their Participants in Discourse",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2009. SemEval- 2010 Task 10: Linking Events and Their Participants in Discourse. In Proceedings of the Workshop on Se- mantic Evaluations: Recent Achievements and Future",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Semantic roles (solid arrows) and additional spatial knowledge (discontinuous arrow)."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Semantic roles (solid arrows) and additional spatial knowledge (discontinuous arrow) of type (1b). The additional LOCATION(a Royal Caribbean ship, in the Mediterranean) of type (1a) is not shown."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Sample sentence and semantic roles. Pair (x: about what was happening on the ground, y: on the ground) is invalid because x contains y."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Examples of semantic role representations (solid arrows), potential additional spatial knowledge (discontinuous arrows) and annotations with respect to the verb to which y attaches (collecting or stopped)."
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "1 [by the remains of [20 service members who]ARG 1 , v 2 are in the process of being [identified]v 2 ]ARG 1 , v 1 Potential additional spatial knowledge: x: 20 service members who, y: In this laboratory; x containedIn y role = ARG1 Sentence: [Children]ARG 0 , v 1 can get to [know]v 1 [different animals and plants, and [even some crops that]ARG 1 , v 2 are [rarely]ARGM-ADV, v 2 [seen]v 2 [in our daily life]ARGM-LOC, v 2 ]ARG 1 , v 1 . Potential additional spatial knowledge: x: Children, y: in our daily life; y containedIn x role = ARG1 Figure 5: Pairs (x, y) for which x containedIn y role and y containedIn x role features have a value."
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"content": "
ARGM-LOC: location | ARGM-CAU: cause |
ARGM-EXT: extent | ARGM-TMP: time |
ARGM-DIS: discourse connective | ARGM-PNC: purpose |
ARGM-ADV: general-purpose | ARGM-MNR: manner |
ARGM-NEG: negation marker | ARGM-DIR: direction |
ARGM-MOD: modal verb | |
",
"text": "adds semantic role annotations on top of the parse trees of the Penn http://cogcomp.cs.illinois.edu/page/ software, http://ml.nec-labs.com/senna/; [Mr. Cray] ARG0 [will] ARGM-MOD [work] verb [for the Colorado Springs CO company] ARG2 [as an independent contractor] ARG1 . [I] ARG0 'd [slept] verb [through my only previous brush with natural disaster] ARG2 , [. . . ] Examples of PropBank annotations."
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "",
"text": "Argument modifiers in PropBank."
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"content": "- |
ure 2 illustrates type (1b). Semantic roles indicate |
ARG 1 and ARGM-DIR of vanished, and ARG 0 and |
ARGM-LOC of cruising. In this example, one can |
infer that twenty-six year old George Smith (ARG 1 |
of vanished) has LOCATION in the Mediterranean |
(ARGM-LOC of cruising) during the cruising event. |
The second kind of additional LOCATION(x, y) is |
exemplified in the following sentence: [Residents |
of Biddeford apartments] |
",
"text": "ARG 0 can [enjoy] verb [the recreational center] ARG 1 [free of charge] MANNER . LOCATION(recreational center, Biddeford apartments) could be inferred yet Biddeford apartments is not a semantic role of a verb. 7 Inferring this kind of relations would require splitting semantic roles; one could also extract that the residents have LOCA-"
},
"TABREF6": {
"type_str": "table",
"html": null,
"num": null,
"content": "",
"text": ""
},
"TABREF8": {
"type_str": "table",
"html": null,
"num": null,
"content": "",
"text": ""
},
"TABREF10": {
"type_str": "table",
"html": null,
"num": null,
"content": "",
"text": ""
}
}
}
}