{ "paper_id": "J08-2005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:19:38.073184Z" }, "title": "The Importance of Syntactic Parsing and Inference in Semantic Role Labeling", "authors": [ { "first": "Vasin", "middle": [], "last": "Punyakanok", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "", "affiliation": {}, "email": "danr@uiuc.edu.\u2020" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a general framework for semantic role labeling. The framework combines a machinelearning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stage-the pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F 1 score among 19 participants.", "pdf_parse": { "paper_id": "J08-2005", "_pdf_hash": "", "abstract": [ { "text": "We present a general framework for semantic role labeling. The framework combines a machinelearning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stage-the pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F 1 score among 19 participants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic parsing of sentences is believed to be an important task on the road to natural language understanding, and has immediate applications in tasks such as information extraction and question answering. Semantic Role Labeling (SRL) is a shallow semantic parsing task, in which for each predicate in a sentence, the goal is to identify all constituents that fill a semantic role, and to determine their roles (Agent, Patient, Instrument, etc.) and their adjuncts (Locative, Temporal, Manner, etc.) 
.", "cite_spans": [ { "start": 413, "end": 447, "text": "(Agent, Patient, Instrument, etc.)", "ref_id": null }, { "start": 467, "end": 501, "text": "(Locative, Temporal, Manner, etc.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The PropBank project (Kingsbury and Palmer 2002; Palmer, Gildea, and Kingsbury 2005) , which provides a large human-annotated corpus of verb predicates and their arguments, has enabled researchers to apply machine learning techniques to develop SRL systems (Gildea and Palmer 2002; Chen and Rambow 2003; Gildea and Hockenmaier 2003; Pradhan et al. 2003; Surdeanu et al. 2003; Pradhan et al. 2004; Xue and Palmer 2004; Koomen et al. 2005) . However, most systems rely heavily on full syntactic parse trees. Therefore, the overall performance of the system is largely determined by the quality of the automatic syntactic parsers of which the state of the art (Collins 1999; Charniak 2001 ) is still far from perfect.", "cite_spans": [ { "start": 21, "end": 48, "text": "(Kingsbury and Palmer 2002;", "ref_id": "BIBREF22" }, { "start": 49, "end": 84, "text": "Palmer, Gildea, and Kingsbury 2005)", "ref_id": "BIBREF31" }, { "start": 257, "end": 281, "text": "(Gildea and Palmer 2002;", "ref_id": "BIBREF15" }, { "start": 282, "end": 303, "text": "Chen and Rambow 2003;", "ref_id": "BIBREF8" }, { "start": 304, "end": 332, "text": "Gildea and Hockenmaier 2003;", "ref_id": "BIBREF13" }, { "start": 333, "end": 353, "text": "Pradhan et al. 2003;", "ref_id": "BIBREF33" }, { "start": 354, "end": 375, "text": "Surdeanu et al. 2003;", "ref_id": "BIBREF42" }, { "start": 376, "end": 396, "text": "Pradhan et al. 2004;", "ref_id": "BIBREF35" }, { "start": 397, "end": 417, "text": "Xue and Palmer 2004;", "ref_id": "BIBREF44" }, { "start": 418, "end": 437, "text": "Koomen et al. 2005)", "ref_id": "BIBREF24" }, { "start": 657, "end": 671, "text": "(Collins 1999;", "ref_id": "BIBREF9" }, { "start": 672, "end": 685, "text": "Charniak 2001", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Alternatively, shallow syntactic parsers (i.e., chunkers and clausers), although they do not provide as much information as a full syntactic parser, have been shown to be more robust in their specific tasks (Li and Roth 2001) . This raises the very natural and interesting question of quantifying the importance of full parsing information to semantic parsing and whether it is possible to use only shallow syntactic information to build an outstanding SRL system.", "cite_spans": [ { "start": 207, "end": 225, "text": "(Li and Roth 2001)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Although PropBank is built by adding semantic annotations to the constituents in the Penn Treebank syntactic parse trees, it is not clear how important syntactic parsing is for an SRL system. To the best of our knowledge, this problem was first addressed by Gildea and Palmer (2002) . In their attempt to use limited syntactic information, the parser they used was very shallow-clauses were not available and only chunks were used. Moreover, the pruning stage there was very strict-only chunks were considered as argument candidates. This results in over 60% of the actual arguments being ignored. 
Consequently, the overall recall in their approach was very low.", "cite_spans": [ { "start": 258, "end": 282, "text": "Gildea and Palmer (2002)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The use of only shallow parsing information in an SRL system had largely been ignored until the recent CoNLL-2004 shared task competition (Carreras and M\u00e0rquez 2004). In that competition, participants were restricted to using only shallow parsing information, which included part-of-speech tags, chunks, and clauses (the definitions of chunks and clauses can be found in Tjong Kim Sang and Buchholz [2000] and Carreras et al. [2002], respectively). As a result, the performance of the best shallow parsing-based system (Hacioglu et al. 2004) in the competition is about 10 points in F 1 below that of the best system that uses full parsing information (Koomen et al. 2005). However, this is not the outcome of a true and fair quantitative comparison. The CoNLL-2004 shared task used only a subset of the data for training, which potentially makes the problem harder. Furthermore, an SRL system is usually complicated and consists of several stages. It was still unclear how much syntactic information helps and precisely where it helps the most.", "cite_spans": [ { "start": 138, "end": 165, "text": "(Carreras and M\u00e0rquez 2004)", "ref_id": "BIBREF4" }, { "start": 400, "end": 406, "text": "[2000]", "ref_id": null }, { "start": 411, "end": 433, "text": "Carreras et al. [2002]", "ref_id": "BIBREF6" }, { "start": 520, "end": 542, "text": "(Hacioglu et al. 2004)", "ref_id": "BIBREF20" }, { "start": 645, "end": 665, "text": "(Koomen et al. 2005)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The goal of this paper is threefold. First, we describe an architecture for an SRL system that incorporates a level of global inference on top of the relatively common processing steps. This inference step allows us to easily incorporate structural and linguistic constraints over the possible outcomes of the argument classifier. The inference procedure is formalized via an Integer Linear Programming framework and is shown to yield state-of-the-art results on this task. Second, we provide a fair comparison between SRL systems that use full parse trees and systems that use only shallow syntactic information. Like our full syntactic parse-based SRL system (Koomen et al. 2005), our shallow parsing-based SRL system is based on a system that achieved very competitive results and was one of the top systems in the CoNLL-2004 shared task competition (Carreras and M\u00e0rquez 2004). This comparison brings forward a careful analysis of the significance of full parsing information in the SRL task, and provides an understanding of the stages in the process in which this information makes the most difference. Finally, to reduce the dependence of the SRL system on the quality of automatic parsers, we suggest a way to improve semantic role labeling significantly by developing a global inference algorithm, which is used to combine several SRL systems based on different state-of-the-art full parsers. The combination process is done through a joint inference stage, which takes the output of each individual system as input and generates the best predictions, subject to various structural and linguistic constraints.", "cite_spans": [ { "start": 672, "end": 692, "text": "(Koomen et al. 2005)", "ref_id": "BIBREF24" }, { "start": 867, "end": 894, "text": "(Carreras and M\u00e0rquez 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The underlying system architecture can largely affect the outcome of our study. Therefore, to make the conclusions of our experimental study as applicable as possible to general SRL systems, the architecture of our SRL system follows the most widely used two-step design. In the first step, the system is trained to identify argument candidates for a given verb predicate. In the second step, the system classifies the argument candidates into their types. In addition, a simple procedure can be used to prune obvious non-candidates before the first step, and post-processing inference can fix inconsistent predictions after the second step. These two additional steps are also employed by our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Our study of shallow and full syntactic information-based SRL systems was done by comparing their impact at each stage of the process. Specifically, our goal is to investigate at what stage full parsing information is most helpful relative to a shallow parsing-based system. Therefore, our experiments were designed so that the compared systems are as similar as possible, and the addition of the full parse tree-based features is the only difference. The most interesting result of this comparison is that although each step of the shallow parsing information-based system exhibits very good performance, the overall performance is significantly inferior to that of the system that uses full parsing information. Our explanation is that understanding this outcome requires looking at the chaining of the multiple processing stages that produces the final SRL analysis. Specifically, the quality of the information passed from one stage to the next is a decisive issue, and it is not necessarily judged simply by considering the F-measure. We conclude that, for the system architecture used in our study, the significance of full parsing information comes into play mostly at the pruning stage, where the candidates to be processed later are determined. In addition, we produce a state-of-the-art SRL system by combining different SRL systems based on two automatic full parsers (Collins 1999; Charniak 2001), which achieves the best result in the CoNLL-2005 shared task.", "cite_spans": [ { "start": 1352, "end": 1366, "text": "(Collins 1999;", "ref_id": "BIBREF9" }, { "start": 1367, "end": 1381, "text": "Charniak 2001)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of this paper is organized as follows. Section 2 introduces the task of semantic role labeling in more detail. Section 3 describes the four-stage architecture of our SRL system, which includes pruning, argument identification, argument classification, and inference. The features used for building the classifiers and the learning algorithm applied are also explained there. Section 4 explains why and where full parsing information contributes to SRL by conducting a series of carefully designed experiments. Inspired by the result, we examine the effect of inference in a single system and propose an approach that combines different SRL systems based on joint inference in Section 5. Section 6 presents the empirical evaluation of our system in the CoNLL-2005 shared task competition.
After that, we discuss the related work in Section 7 and conclude this paper in Section 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The goal of the semantic role labeling task is to discover the predicate-argument structure of each predicate in a given input sentence. In this work, we focus only on the verb predicate. For example, given a sentence I left my pearls to my daughter-in-law in my will, the goal is to identify the different arguments of the verb predicate left and produce the output:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "[ A0 I] [ V left ] [ A1 my pearls] [ A2 to my daughter-in-law] [ AM-LOC in my will].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "Here A0 represents the leaver, A1 represents the thing left, A2 represents the beneficiary, AM-LOC is an adjunct indicating the location of the action, and V determines the boundaries of the predicate, which is important when a predicate contains many words, for example, a phrasal verb. In addition, each argument can be mapped to a constituent in its corresponding full syntactic parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "Following the definition of the PropBank and CoNLL-2004 and 2005 shared tasks, there are six different types of arguments labeled as A0-A5 and AA. These labels have different semantics for each verb and each of its senses as specified in the PropBank Frame files. In addition, there are also 13 types of adjuncts labeled as AM-adj where adj specifies the adjunct type. For simplicity in our presentation, we will also refer to these adjuncts as arguments. In some cases, an argument may span over different parts of a sentence; the label C-arg is then used to specify the continuity of the arguments, as shown in this example:", "cite_spans": [ { "start": 32, "end": 44, "text": "PropBank and", "ref_id": null }, { "start": 45, "end": 59, "text": "CoNLL-2004 and", "ref_id": null }, { "start": 60, "end": 64, "text": "2005", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "[ A1 The pearls] , [ A0 I] [ V said] , [ C-A1 were left to my daughter-in-law].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "In some other cases, an argument might be a relative pronoun that in fact refers to the actual agent outside the clause. In this case, the actual agent is labeled as the appropriate argument type, arg, while the relative pronoun is instead labeled as R-arg. For example,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "[ A1 The pearls] [ R-A1 which] [ A0 I] [ V left] [ A2 to my daughter-in-law] are fake.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "Because each verb may have different senses producing different semantic roles for the same labels, the task of discovering the complete set of semantic roles should involve not only identifying these labels, but also the underlying sense for a given verb. 
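To fix ideas, the target output can be pictured as plain labeled word spans; the following is a minimal sketch of the example proposition above (a hypothetical representation for illustration only, not the CoNLL column format used in the experiments):

```python
# Minimal sketch (hypothetical representation, not the CoNLL column format):
# an SRL analysis is a set of labeled word spans for each verb predicate,
# with (label, start, end) giving inclusive word indices into the sentence.

sentence = 'I left my pearls to my daughter-in-law in my will'.split()

left_proposition = [
    ('A0', 0, 0),        # I                      (the leaver)
    ('V', 1, 1),         # left                   (the predicate)
    ('A1', 2, 3),        # my pearls              (the thing left)
    ('A2', 4, 6),        # to my daughter-in-law  (the beneficiary)
    ('AM-LOC', 7, 9),    # in my will             (location adjunct)
]
```
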
However, as in all current SRL work, this article focuses only on identifying the boundaries and the labels of the arguments, and ignores the verb sense disambiguation problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "The distribution of these argument labels is fairly unbalanced. In the official release of PropBank I, core arguments (A0-A5 and AA) occupy 71.26% of the arguments, of which the largest parts are A0 (25.39%) and A1 (35.19%). The rest mostly consists of adjunct arguments (24.90%). The continued (C-arg) and referential (R-arg) arguments are relatively few, occupying 1.22% and 2.63%, respectively. For more information on PropBank and the semantic role labeling task, readers can refer to Kingsbury and Palmer (2002) and Carreras and M\u00e0rquez (2004, 2005).", "cite_spans": [ { "start": 209, "end": 220, "text": "A1 (35.19%)", "ref_id": null }, { "start": 487, "end": 514, "text": "Kingsbury and Palmer (2002)", "ref_id": "BIBREF22" }, { "start": 519, "end": 539, "text": "Carreras and M\u00e0rquez (2004, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "Note that the semantic arguments of the same verb do not overlap. We define overlapping arguments to be those that share some of their parts. An argument is considered embedded in another argument if the second argument completely covers the first one. Arguments are exclusively overlapping if they are overlapping but are not embedded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Role Labeling (SRL) Task", "sec_num": "2." }, { "text": "Adhering to the most common architecture for SRL systems, our SRL system consists of four stages: pruning, argument identification, argument classification, and inference. In particular, the goal of pruning and argument identification is to identify argument candidates for a given verb predicate. In the first three stages, however, decisions are made independently for each argument, and information across arguments is not incorporated. The final inference stage allows us to use this type of information along with linguistic and structural constraints in order to make consistent global predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SRL System Architecture", "sec_num": "3." }, { "text": "This system architecture remains unchanged when used for studying the importance of syntactic parsing in SRL, although different information and features are used. Throughout this article, when full parsing information is available, we assume that the system is presented with the full phrase-structure parse tree as defined in the Penn Treebank (Marcus, Marcinkiewicz, and Santorini 1993) but without trace and functional tags. On the other hand, when only shallow parsing information is available, the full parse tree is reduced to only the chunks and the clause constituents.", "cite_spans": [ { "start": 346, "end": 389, "text": "(Marcus, Marcinkiewicz, and Santorini 1993)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "SRL System Architecture", "sec_num": "3." }, { "text": "A chunk is a phrase containing syntactically related words. Roughly speaking, chunks are obtained by projecting the full parse tree onto a flat tree; hence, they are closely related to the base phrases.
Chunks were not directly defined as part of the standard annotation of the treebank; rather, their definition was introduced in the CoNLL-2000 shared task on text chunking (Tjong Kim Sang and Buchholz 2000), which aimed to discover such phrases in order to facilitate full parsing. A clause, on the other hand, is the clausal constituent as defined by the treebank standard. An example of chunks and clauses is shown in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 628, "end": 636, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "SRL System Architecture", "sec_num": "3." }, { "text": "When the full parse tree of a sentence is available, only the constituents in the parse tree are considered as argument candidates. Our system exploits the heuristic rules introduced by Xue and Palmer (2004) to filter out simple constituents that are very unlikely to be arguments. This pruning method is a recursive process starting from the target verb. It first returns the siblings of the verb as candidates; then it moves to the parent of the verb, and collects the siblings again. The process goes on until it reaches the root. In addition, if a constituent is a PP (prepositional phrase), its children are also collected. For example, in Figure 1, ", "cite_spans": [ { "start": 186, "end": 207, "text": "Xue and Palmer (2004)", "ref_id": "BIBREF44" } ], "ref_spans": [ { "start": 645, "end": 654, "text": "Figure 1,", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Pruning", "sec_num": "3.1" }, { "text": "The argument identification stage utilizes binary classification to identify whether a candidate is an argument or not. When full parsing is available, we train and apply the binary classifiers on the constituents supplied by the pruning stage. When only shallow parsing is available, the system does not have a pruning stage, and also does not have constituents to begin with. Therefore, conceptually, the system has to consider all possible subsequences (i.e., consecutive words) in a sentence as potential argument candidates. We avoid this by using a learning scheme that utilizes two classifiers, one to predict the beginnings of possible arguments, and the other the ends. The predictions are combined to form argument candidates. However, we can employ a simple heuristic to filter out some candidates that are obviously not arguments. The final predictions include those candidates that do not violate the following constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Identification", "sec_num": "3.2" }, { "text": "Arguments cannot overlap with the predicate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "If a predicate is outside a clause, its arguments cannot be embedded in that clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Arguments cannot exclusively overlap with the clauses. The first constraint comes from the definition of the task: a predicate cannot take itself, or any constituent that contains it, as an argument. The other two constraints are due to the fact that a clause can be treated as a unit that has its own verb-argument structure (a small sketch of this candidate filtering follows).
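The sketch below illustrates this candidate generation and filtering; it is not the authors' code, and the helper names, span convention, and fixed probability threshold are our own assumptions:

```python
# Minimal sketch (hypothetical helper names, not the authors' code):
# spans are inclusive (start, end) word-index pairs.

def overlaps(a, b):
    return not (a[1] < b[0] or b[1] < a[0])

def embedded(a, b):
    # True if span a lies completely inside span b.
    return b[0] <= a[0] and a[1] <= b[1]

def exclusively_overlaps(a, b):
    return overlaps(a, b) and not embedded(a, b) and not embedded(b, a)

def argument_candidates(begin_probs, end_probs, predicate, clauses, theta=0.5):
    # Pair each predicted argument beginning with each later predicted end,
    # then apply constraints 1-3 from the text.
    begins = [i for i, p in enumerate(begin_probs) if p >= theta]
    ends = [i for i, p in enumerate(end_probs) if p >= theta]
    kept = []
    for span in ((b, e) for b in begins for e in ends if b <= e):
        if overlaps(span, predicate):                                # constraint 1
            continue
        if any(span != c and embedded(span, c) and not embedded(predicate, c)
               for c in clauses):                                    # constraint 2
            continue
        if any(exclusively_overlaps(span, c) for c in clauses):      # constraint 3
            continue
        kept.append(span)
    return kept
```

The actual decision rule for combining the begin/end classifiers is not spelled out at this level of detail in the text; the threshold here is only meant to make the span constraints concrete.
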
If a verb predicate is outside a clause, then its argument can only be the whole clause, but may not be embedded in or exclusively overlap with the clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "For the argument identification classifier, the features used in the full parsing and shallow parsing settings are all binary features, which are described subsequently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Most of the features used in our system are common features for the SRL task. The creation of PropBank was inspired by the works of Levin (1993) and Levin and Hovav (1996), which discuss the relation between syntactic and semantic information. Following this philosophy, the features aim to indicate the properties of the predicate, of the constituent that is an argument candidate, and of the relationship between them, through the available syntactic information. We explain these features below. For further discussion of these features, we refer the readers to the article by Gildea and Jurafsky (2002), which introduced these features.", "cite_spans": [ { "start": 132, "end": 144, "text": "Levin (1993)", "ref_id": "BIBREF25" }, { "start": 149, "end": 171, "text": "Levin and Hovav (1996)", "ref_id": "BIBREF26" }, { "start": 576, "end": 602, "text": "Gildea and Jurafsky (2002)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Predicate and POS tag of predicate: indicate the lemma of the predicate verb and its POS tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Voice: indicates passive/active voice of the predicate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Phrase type: provides the phrase type of the constituent, which is the tag of the corresponding constituent in the parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Head word and POS tag of the head word: provides the head word of the constituent and its POS tag. We use the rules introduced by Collins (1999) to extract this feature.", "cite_spans": [ { "start": 132, "end": 146, "text": "Collins (1999)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Position: indicates whether the constituent appears before or after the predicate in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Path: records the tags of the parse tree nodes on the traversal path from the constituent to the predicate.
For example, in Figure 1, if the predicate is assume and the constituent is [ S who has been elected deputy chairman], the path is S\u2191NP\u2191PP\u2191VP\u2193VBN, where \u2191 and \u2193 indicate the traversal direction in the path.", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 130, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Subcategorization: describes the phrase structure around the predicate's parent. It records the immediate structure in the parse tree that expands to its parent. As an example, if the predicate is elect in Figure 1, its subcategorization is VP\u2192(VBN)-NP, while the subcategorization of the predicate assume is VP\u2192(VBN)-PP. Parentheses indicate the position of the predicate.", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 216, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "Generally speaking, we consider only the arguments that correspond to some constituents in parse trees. However, in some cases, we need to consider an argument that does not exactly correspond to a constituent, for example, in our experiment in Section 4.2, where the gold-standard boundaries are used with the parse trees generated by an automatic parser. In such cases, if information on the constituent, such as the phrase type, needs to be extracted, the deepest constituent that covers the whole argument is used. For example, in Figure 1, the phrase type for by John Smith is PP, and its path feature to the predicate assume is PP\u2191VP\u2193VBN.", "cite_spans": [], "ref_spans": [ { "start": 538, "end": 546, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "We also use the following additional features. These features have been shown to be useful for systems that exploit other information in the absence of the full parse tree information (Punyakanok et al. 2004), and, hence, can be helpful in conjunction with the features extracted from a full parse tree. They also aim to encode the properties of the predicate, the constituent to be classified, and their relationship in the sentence.", "cite_spans": [ { "start": 189, "end": 213, "text": "(Punyakanok et al. 2004)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Context words and POS tags of the context words: the feature includes the two words before and after the constituent, and their POS tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Verb class: the feature is the VerbNet (Kipper, Palmer, and Rambow 2002) class of the predicate as described in PropBank Frames.
Note that a verb may belong to many classes, and we collect all of these classes as features, regardless of the context-specific sense, which we do not attempt to resolve.", "cite_spans": [ { "start": 41, "end": 74, "text": "(Kipper, Palmer, and Rambow 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Lengths: the lengths of the constituent, measured in numbers of words and of chunks separately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Chunk: tells if the constituent \"is,\" \"embeds,\" \"exclusively overlaps,\" or \"is embedded in\" a chunk with its type (for an instance, see Figure 1). \u2022 Clause relative position: encodes the position of the constituent relative to the predicate in the pseudo-parse tree constructed only from clause constituents, chunks, and part-of-speech tags. In addition, we label the clause with the type of chunk that immediately precedes the clause. This is a simple rule to distinguish the type of clause, based on the intuition that a subordinate clause often modifies the part of the sentence immediately before it. Figure 2 shows the pseudo-parse tree of the parse tree in Figure 1. If the chunks are disregarded, there are four configurations: \"target constituent and predicate are siblings,\" \"target constituent's parent is an ancestor of predicate,\" \"predicate's parent is an ancestor of target word,\" or \"otherwise.\" This feature can be viewed as a generalization of the Path feature described earlier.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 142, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 600, "end": 608, "text": "Figure 2", "ref_id": null }, { "start": 658, "end": 666, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 Clause coverage: describes how much of the local clause around the predicate is covered by the target argument.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 NEG: the feature is active if the target verb chunk contains not or n't.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "\u2022 MOD: the feature is active when there is a modal verb in the verb chunk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "The rules of the NEG and MOD features are those used in the baseline SRL system developed by Erik Tjong Kim Sang (Carreras and M\u00e0rquez 2004).", "cite_spans": [ { "start": 105, "end": 132, "text": "(Carreras and M\u00e0rquez 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Features Used When Full Parsing is Available.", "sec_num": "3.2.1" }, { "text": "The pseudo-parse tree generated from the parse tree in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "In addition, we also use conjunctions of features, which conjoin any two features into a new feature (see the sketch below).
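To make the tree-derived Path feature and this feature conjunction concrete, here is a minimal sketch; the node class and helper names are hypothetical, not the authors' feature extractor:

```python
# Minimal sketch (hypothetical tree encoding, not the authors' extractor):
# the Path feature climbs from the argument constituent to the lowest
# common ancestor, then descends to the predicate's POS node.

class Node:
    def __init__(self, label, parent=None):
        self.label, self.parent = label, parent

def ancestors(node):
    chain = [node]
    while chain[-1].parent is not None:
        chain.append(chain[-1].parent)
    return chain

def path_feature(arg_node, pred_node):
    up, down = ancestors(arg_node), ancestors(pred_node)
    common = next(n for n in up if n in down)      # lowest common ancestor
    up_labels = [n.label for n in up[: up.index(common) + 1]]
    down_labels = [n.label for n in reversed(down[: down.index(common)])]
    return '↑'.join(up_labels) + ('↓' + '↓'.join(down_labels) if down_labels else '')

def conjoin(feature_a, feature_b):
    # A conjunction of two features is itself a new sparse binary feature.
    return '(' + feature_a + ', ' + feature_b + ')'

# With nodes mirroring Figure 1, path_feature(s_node, vbn_node) yields
# 'S↑NP↑PP↑VP↓VBN', and conjoining it with the predicate lemma gives the
# example discussed next.
```
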
For example, the conjunction of the predicate and path features for the predicate assume and the constituent [ S who has been elected deputy chairman] in Figure 1 is (S\u2191NP\u2191PP\u2191VP\u2193VBN, assume).", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 267, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Most features used here are similar to those used by the system with full parsing information. However, for features that need full parse trees in their extraction procedures, we either try to mimic them with some heuristic rules or discard them. The details of these features are as follows. \u2022 Shallow-Path: records the traversal path in the pseudo-parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Only Shallow Parsing is Available.", "sec_num": "3.2.2" }, { "text": "This aims to approximate the Path features extracted from the full parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Only Shallow Parsing is Available.", "sec_num": "3.2.2" }, { "text": "\u2022 Shallow-Subcategorization: describes the chunk and clause structure around the predicate's parent in the pseudo-parse tree. This aims to approximate the Subcategorization feature extracted from the full parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Used When Only Shallow Parsing is Available.", "sec_num": "3.2.2" }, { "text": "This stage assigns labels to the argument candidates identified in the previous stage. A multi-class classifier is trained to predict the types of the argument candidates. In addition, to reduce the excessive candidates mistakenly output by the previous stage, the classifier can also label an argument as \"null\" (meaning \"not an argument\") to discard it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Classification", "sec_num": "3.3" }, { "text": "The features used here are the same as those used in the argument identification stage. However, when full parsing is available, an additional feature introduced by Xue and Palmer (2004) is used.", "cite_spans": [ { "start": 165, "end": 186, "text": "Xue and Palmer (2004)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Argument Classification", "sec_num": "3.3" }, { "text": "\u2022 Syntactic frame: describes the sequential pattern of the noun phrases and the predicate in the sentence, which aims to complement the Path and Subcategorization features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Classification", "sec_num": "3.3" }, { "text": "The learning algorithm used for training the argument classifier and the argument identifier is a variation of the Winnow update rule incorporated in SNoW (Roth 1998; Carlson et al. 1999), a multi-class classifier that is tailored for large-scale learning tasks. SNoW learns a sparse network of linear functions, in which the targets (argument border predictions or argument type predictions, in this case) are represented as linear functions over a common feature space; multi-class decisions are made via a winner-take-all mechanism.
It improves the basic Winnow multiplicative update rule with a regularization term, which has the effect of separating the data with a large margin separator (Dagan, Karov, and Roth 1997; Grove and Roth 2001; Zhang, Damerau, and Johnson 2002), and with a voted (averaged) weight vector (Freund and Schapire 1999; Golding and Roth 1999).", "cite_spans": [ { "start": 151, "end": 162, "text": "(Roth 1998;", "ref_id": "BIBREF39" }, { "start": 163, "end": 182, "text": "Carlson et al. 1999", "ref_id": "BIBREF3" }, { "start": 690, "end": 719, "text": "(Dagan, Karov, and Roth 1997;", "ref_id": "BIBREF10" }, { "start": 720, "end": 740, "text": "Grove and Roth 2001;", "ref_id": "BIBREF17" }, { "start": 741, "end": 774, "text": "Zhang, Damerau, and Johnson 2002)", "ref_id": "BIBREF46" }, { "start": 810, "end": 836, "text": "(Freund and Schapire 1999;", "ref_id": "BIBREF12" }, { "start": 837, "end": 859, "text": "Golding and Roth 1999)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Argument Classification", "sec_num": "3.3" }, { "text": "The softmax function (Bishop 1995) is used to convert the raw activations to conditional probabilities. If there are n classes and the raw activation of class i is act_i, the posterior estimate for class i is", "cite_spans": [ { "start": 21, "end": 33, "text": "(Bishop 1995", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Argument Classification", "sec_num": "3.3" }, { "text": "\\mathrm{Prob}(i) = \\frac{e^{act_i}}{\\sum_{1 \\leq j \\leq n} e^{act_j}}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Classification", "sec_num": "3.3" }, { "text": "Note that in training this classifier, unless specified otherwise, the argument candidates used to generate the training examples are obtained from the output of the argument identifier, not directly from the gold-standard corpus. In this case, we automatically obtain the necessary examples to learn the class \"null.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Classification", "sec_num": "3.3" }, { "text": "In the previous stages, decisions were always made for each argument independently, ignoring the global information across arguments in the final output. The purpose of the inference stage is to incorporate such information, including both linguistic and structural knowledge, such as \"arguments do not overlap\" or \"each verb takes at most one argument of each type.\" This knowledge is useful to resolve any inconsistencies of argument classification in order to generate final legitimate predictions. We design an inference procedure that is formalized as a constrained optimization problem, represented as an integer linear program (Roth and Yih 2004). It takes as input the argument classifiers' confidence scores for each type of argument, along with a list of constraints. The output is the optimal solution that maximizes the linear sum of the confidence scores, subject to the constraints that encode the domain knowledge.", "cite_spans": [ { "start": 634, "end": 653, "text": "(Roth and Yih 2004)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.4" }, { "text": "The inference stage can be naturally extended to combine the output of several different SRL systems, as we will show in Section 5.
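Before turning to the inference formulation, here is a minimal sketch of the softmax conversion just described (illustrative, not the authors' implementation; the shift by the maximum activation is a standard numerical-stability trick that the text does not mention):

```python
import numpy as np

def softmax(activations):
    # Convert raw activations act_1..act_n into Prob(i) = e^{act_i} / sum_j e^{act_j}.
    act = np.asarray(activations, dtype=float)
    act -= act.max()            # numerical-stability shift; cancels in the ratio
    exp = np.exp(act)
    return exp / exp.sum()

# e.g., softmax([2.0, 1.0, -0.5]) returns three probabilities summing to 1
```
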
In this section we first introduce the constraints and formalize the inference problem for the semantic role labeling task. We then demonstrate how we apply integer linear programming (ILP) to generate the global label assignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.4" }, { "text": "Formally, the argument classifiers attempt to assign labels to a set of arguments, S_{1:M}, indexed from 1 to M. Each argument S_i can take any label from a set of argument labels, P, and the indexed set of arguments can take a set of labels, c_{1:M} \u2208 P^M. If we assume that the classifiers return a score, score(S_i = c_i), corresponding to the likelihood of argument S_i being labeled c_i, then, given a sentence, the unaltered inference task is solved by maximizing the overall score of the arguments,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraints over Argument Labeling.", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{c}_{1:M} = \\mathop{\\mathrm{argmax}}_{c_{1:M} \\in P^M} \\mathrm{score}(S_{1:M} = c_{1:M}) = \\mathop{\\mathrm{argmax}}_{c_{1:M} \\in P^M} \\sum_{i=1}^{M} \\mathrm{score}(S_i = c_i)", "eq_num": "(1)" } ], "section": "Constraints over Argument Labeling.", "sec_num": "3.4.1" }, { "text": "In the presence of global constraints derived from linguistic information and structural considerations, our system seeks to output a legitimate labeling that maximizes this score. Specifically, it can be thought of as if the solution space is limited through the use of a filter function, F, which eliminates many argument labelings from consideration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraints over Argument Labeling.", "sec_num": "3.4.1" }, { "text": "Here, we are concerned with global constraints as well as constraints on the arguments. Therefore, the final labeling becomes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraints over Argument Labeling.", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{c}_{1:M} = \\mathop{\\mathrm{argmax}}_{c_{1:M} \\in F(P^M)} \\sum_{i=1}^{M} \\mathrm{score}(S_i = c_i)", "eq_num": "(2)" } ], "section": "Constraints over Argument Labeling.", "sec_num": "3.4.1" }, { "text": "When the confidence scores correspond to the conditional probabilities estimated by the argument classifiers, the value of the objective function represents the expected number of correct argument predictions. Hence, the solution of Equation (2) is the one that maximizes this expected value among all legitimate outputs. The filter function used considers the following constraints: 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraints over Argument Labeling.", "sec_num": "3.4.1" }, { "text": "1. Arguments cannot overlap with the predicate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraints over Argument Labeling.", "sec_num": "3.4.1" }, { "text": "Arguments cannot exclusively overlap with the clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "If a predicate is outside a clause, its arguments cannot be embedded in that clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "No overlapping or embedding arguments.
This constraint holds because semantic arguments are labeled on non-embedding constituents in the syntactic parse tree. In addition, as defined in the CoNLL-2004 and 2005 shared tasks, the legitimate output of an SRL system must satisfy this constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "No duplicate argument classes for core arguments, such as A0-A5 and AA. The only exception is when there is a conjunction in the sentence, as in the following example. Despite this exception, we treat this as a hard constraint because it almost always holds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "[ A0 I] [ V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "If there is an R-arg argument, then there has to be an arg argument. That is, if an argument is a reference to some other argument arg, then this referenced argument must exist in the sentence. This constraint is directly derived from the definition of R-arg arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.", "sec_num": null }, { "text": "If there is a C-arg argument, then there has to be an arg argument; in addition, the C-arg argument must occur after arg. This is stricter than the previous rule because the order of appearance also needs to be considered. Similarly, this constraint is directly derived from the definition of C-arg arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "7.", "sec_num": null }, { "text": "Given the predicate, some argument classes are illegal (e.g., the predicate stalk can take only A0 or A1). This information can be found in the PropBank Frames.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8.", "sec_num": null }, { "text": "This constraint comes from the fact that different predicates take different types and numbers of arguments. By checking the PropBank Frame file of the target verb, we can exclude some core argument labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8.", "sec_num": null }, { "text": "Note that constraints 1, 2, and 3 are actually implemented in the argument identification stage (see Section 3.2). In addition, they need to be explicitly enforced only when full parsing information is not available, because the output of the pruning heuristics never violates these constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8.", "sec_num": null }, { "text": "The optimization problem (Equation (2)) can be solved using an ILP solver by reformulating the constraints as linear (in)equalities over the indicator variables that represent the truth values of statements of the form [argument i takes label j], as described in detail next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8.", "sec_num": null }, { "text": "As discussed previously, a collection of potential arguments is not necessarily a valid semantic labeling because it may not satisfy all of the constraints. We enforce a legitimate solution using the following inference algorithm. In our context, inference is the process of finding the best (according to Equation (1)) valid semantic labels that satisfy all of the specified constraints.
We take a similar approach to the one previously used for entity/relation recognition (Roth and Yih 2004), and model this inference procedure as solving an ILP problem.", "cite_spans": [ { "start": 475, "end": 494, "text": "(Roth and Yih 2004)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "An integer linear program is a linear program with integral variables. That is, the cost function and the (in)equality constraints are all linear in terms of the variables. The only difference in an integer linear program is that the variables can only take integers as their values. In our inference problem, the variables are in fact binary. A general binary integer linear programming problem can be stated as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Given a cost vector p \u2208 \\mathbb{R}^d, a collection of variables u = (u_1, \\ldots, u_d), and cost matrices C_1 \u2208 \\mathbb{R}^{c_1 \\times d} and C_2 \u2208 \\mathbb{R}^{c_2 \\times d}, where c_1 and c_2 are the numbers of inequality and equality constraints and d is the number of binary variables, the ILP solution u^* is the vector that maximizes the cost function,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "u^* = \\mathop{\\mathrm{argmax}}_{u \\in \\{0,1\\}^d} \\; p \\cdot u, \\quad \\text{subject to} \\quad C_1 u \\geq b_1 \\text{ and } C_2 u = b_2,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "where b_1 \u2208 \\mathbb{R}^{c_1} and b_2 \u2208 \\mathbb{R}^{c_2}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "To solve the problem of Equation (2) in this setting, we first reformulate the original cost function \\sum_{i=1}^{M} \\mathrm{score}(S_i = c_i) as a linear function over several binary variables, and then represent the filter function F using linear inequalities and equalities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "We set up a bijection from the semantic labeling to the variable set u. This is done by setting u to be a set of indicator variables that correspond to the labels assigned to the arguments. Specifically, let u_{ic} = [S_i = c] be the indicator variable that represents whether or not the argument type c is assigned to S_i, and let p_{ic} = score(S_i = c). Equation (1) can then be written as the ILP cost function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "\\mathop{\\mathrm{argmax}}_{u_{ic} \\in \\{0,1\\}} \\; \\sum_{i=1}^{M} \\sum_{c \\in P} p_{ic} u_{ic} \\quad \\text{subject to} \\quad \\sum_{c \\in P} u_{ic} = 1 \\;\\; \\forall i \\in [1, M],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "which means that each argument can take only one type.
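To ground this formulation, here is a minimal sketch using the open-source PuLP modeler (an illustrative stand-in chosen by us; the authors used the commercial Xpress-MP package). It builds the objective, the one-label-per-argument equalities above, and, for concreteness, the no-duplicate inequality for core labels from constraint 5:

```python
import pulp

def srl_inference(scores, labels, core_labels=('A0', 'A1')):
    # scores[i][c]: estimated probability that argument i takes label c;
    # `labels` must include the special class 'null'.
    M = len(scores)
    prob = pulp.LpProblem('srl_inference', pulp.LpMaximize)
    # u[i][c] = 1 iff argument S_i is assigned label c
    u = [{c: pulp.LpVariable('u_%d_%s' % (i, c), cat='Binary') for c in labels}
         for i in range(M)]
    # objective: sum_i sum_c p_ic * u_ic
    prob += pulp.lpSum(scores[i][c] * u[i][c] for i in range(M) for c in labels)
    # each argument takes exactly one label (possibly 'null')
    for i in range(M):
        prob += pulp.lpSum(u[i][c] for c in labels) == 1
    # constraint 5: no duplicate core argument labels
    for c in core_labels:
        prob += pulp.lpSum(u[i][c] for i in range(M)) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [next(c for c in labels if u[i][c].value() > 0.5) for i in range(M)]
```

The remaining constraints linearize in the same way; for example, the R-A0 implication of constraint 6 would add, for each argument m, the inequality lpSum(u[i]['A0'] for i in range(M)) >= u[m]['R-A0'].
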
Note that this new constraint comes from the variable transformation, and is not one of the constraints used in the filter function F.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Of the constraints listed earlier, constraints 1 through 3 can be evaluated on a per-argument basis and, for the sake of efficiency, arguments that violate these constraints are eliminated even before being given to the argument classifier. Next, we show how to transform the constraints in the filter function into the form of linear (in)equalities over u and use them in this ILP setting. For a more complete example of this ILP formulation, please see Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Constraint 4: No overlapping or embedding. If arguments S_{j_1}, \\ldots, S_{j_k} cover the same word in a sentence, then this constraint ensures that at most one of the arguments is assigned an argument type. In other words, at least k \u2212 1 of the arguments will be assigned the special class null. If the special class null is represented by the symbol \u03c6, then for every set of such arguments, the following linear inequality represents this constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "\\sum_{i=1}^{k} u_{j_i \\phi} \\geq k - 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Constraint 5: No duplicate argument classes. Within the same clause, several types of arguments cannot appear more than once. For example, a predicate can only take one A0. This constraint can be represented using the following inequality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "\\sum_{i=1}^{M} u_{i \\mathrm{A0}} \\leq 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Constraint 6: R-arg arguments. Suppose the referenced argument type is A0 and the referential type is R-A0. The linear inequalities that represent this constraint are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "\\forall m \\in \\{1, \\ldots, M\\}: \\quad \\sum_{i=1}^{M} u_{i \\mathrm{A0}} \\geq u_{m \\mathrm{R-A0}}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "If there are \u03b3 referential types, then the total number of inequalities needed is \u03b3M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Constraint 7: C-arg arguments. This constraint is similar to the reference argument constraints. The difference is that the continued argument arg has to occur before C-arg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Assume that the argument pair is A0 and C-A0, and that arguments are sorted by their beginning positions, i.e., if i < k, the position of the beginning of S_{j_k} is not before that of the beginning of S_{j_i}.
The linear inequalities that represent this constraint are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "\\forall m \\in \\{2, \\ldots, M\\}: \\quad \\sum_{i=1}^{m-1} u_{j_i \\mathrm{A0}} \\geq u_{j_m \\mathrm{C-A0}}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Constraint 8: Illegal argument types. Given a specific verb, some argument types should never occur. For example, most verbs do not have an A5 argument. This constraint is represented by constraining the sum of all the corresponding indicator variables to be 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "\\sum_{i=1}^{M} u_{i \\mathrm{A5}} = 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Using ILP to solve this inference problem enjoys several advantages. Linear constraints are very general, and are able to represent any Boolean constraint (Gu\u00e9ret, Prins, and Sevaux 2002). Table 1 summarizes the transformations of common constraints (most are Boolean), which are revised from Gu\u00e9ret, Prins, and Sevaux (2002), and can be used for constructing complicated rules.", "cite_spans": [ { "start": 155, "end": 187, "text": "(Gu\u00e9ret, Prins, and Sevaux 2002)", "ref_id": "BIBREF18" }, { "start": 294, "end": 326, "text": "Gu\u00e9ret, Prins, and Sevaux (2002)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 190, "end": 197, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Previous approaches usually rely on dynamic programming to resolve non-overlapping/embedding constraints (i.e., Constraint 4) when the constraint structure is sequential. However, they are not able to handle more expressive constraints, such as those that take long-distance dependencies and counting dependencies into account (Roth and Yih 2005). The ILP approach, on the other hand, is flexible enough to handle more expressive and general constraints. Although solving an ILP problem is NP-hard in the worst case, with the help of today's numerical packages, this problem can usually be solved very quickly in practice.
For instance, in our experiments it only took about 10 minutes to solve the inference problem for 4,305 sentences, using Xpress-MP (2004) running on a Pentium-III 800 MHz machine. Note that ordinary search methods (e.g., beam search) are not necessarily faster than solving an ILP problem and do not guarantee the optimal solution.", "cite_spans": [ { "start": 325, "end": 344, "text": "(Roth and Yih 2005)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "Table 1: Rules of mapping constraints to linear (in)equalities for Boolean variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "exactly k of x_1, ..., x_n: x_1 + x_2 + ... + x_n = k; at most k of x_1, ..., x_n: x_1 + x_2 + ... + x_n \u2264 k; at least k of x_1, ..., x_n: x_1 + x_2 + ... + x_n \u2265 k; a \u2192 b: a \u2264 b; a = \u00acb: a = 1 \u2212 b; a \u2192 \u00acb: a + b \u2264 1; \u00aca \u2192 b: a + b \u2265 1; a \u2194 b: a = b; a \u2192 b \u2227 c: a \u2264 b and a \u2264 c; a \u2192 b \u2228 c: a \u2264 b + c; b \u2227 c \u2192 a: a \u2265 b + c \u2212 1; b \u2228 c \u2192 a: a \u2265 (b + c)/2; a \u2192 at least k of x_1, ..., x_n: a \u2264 (x_1 + x_2 + ... + x_n)/k; at least k of x_1, ..., x_n \u2192 a: a \u2265 (x_1 + x_2 + ... + x_n \u2212 (k \u2212 1))/(n \u2212 (k \u2212 1))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Integer Linear Programming.", "sec_num": "3.4.2" }, { "text": "We experimentally study the significance of syntactic parsing by observing the effects of using full parsing and shallow parsing information at each stage of an SRL system. We first describe, in Section 4.1, how we prepare the data. The comparison of full parsing and shallow parsing on the first three stages of the process is presented in reverse order (Sections 4.2, 4.3, 4.4). Note that in the following sections, in addition to the performance comparison at the various stages, we also present the overall system performance for the different scenarios. In all cases, the overall system performance is derived after the inference stage.", "cite_spans": [], "ref_spans": [ { "start": 359, "end": 373, "text": "(Sections 4.2,", "ref_id": null }, { "start": 374, "end": 378, "text": "4.3,", "ref_id": null }, { "start": 379, "end": 383, "text": "4.4)", "ref_id": null } ], "eq_spans": [], "section": "The Importance of Syntactic Parsing", "sec_num": "4." }, { "text": "We use PropBank Sections 02 through 21 as training data, Section 23 as test data, and Section 24 as a validation set when necessary. In order to apply the standard CoNLL shared task evaluation script, our system conforms to both the input and output formats defined in the shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4.1" }, { "text": "The goal of the experiments in this section is to understand the effective contribution of full parsing information versus shallow parsing information (i.e., using only the part-of-speech tags, chunks, and clauses).
In addition, we also compare performance when using the correct (gold-standard) data versus using automatic parse data. The performance is measured in terms of precision, recall, and the F 1 measure. Note that none of the numbers reported here takes the V arguments into account: predicting V is quite trivial, and including it would give an overoptimistic picture of the overall performance. When doing the comparison, we also compute the 95% confidence interval of F 1 using the bootstrap resampling method (Noreen 1989) , and a difference is considered significant if the compared F 1 lies outside this interval. The automatic full parse trees are derived using Charniak's parser (2001) (version 0.4). For automatic shallow parsing, the information is generated by state-of-the-art components, including a POS tagger (Even-Zohar and Roth 2001), a chunker (Punyakanok and Roth 2001) , and a clauser (Carreras, M\u00e0rquez, and Castro 2005) .", "cite_spans": [ { "start": 720, "end": 733, "text": "(Noreen 1989)", "ref_id": "BIBREF30" }, { "start": 1080, "end": 1106, "text": "(Punyakanok and Roth 2001)", "ref_id": "BIBREF37" }, { "start": 1123, "end": 1159, "text": "(Carreras, M\u00e0rquez, and Castro 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4.1" }, { "text": "To evaluate the performance gap between full parsing and shallow parsing in argument classification, we assume the argument boundaries are known, and train classifiers only to label these arguments. In this stage, the only difference between the uses of full parsing and shallow parsing information is the construction of the phrase type, head word, POS tag of the head word, path, subcategorization, and syntactic frame features. As described in Section 3.2.2, most of these features can be approximated using chunks and clauses, with the exception of the syntactic frame feature. It is unclear how this feature can be mimicked because it relies on the internal structure of a full parse tree. Therefore, it does not have a corresponding feature in the shallow parsing case. Table 2 reports the experimental results of argument classification when argument boundaries are known. In this case, because the argument classifier of our SRL system does not overpredict or miss any arguments, we do not need to train with a null class, and we can simply measure the performance using accuracy instead of F 1 . The training examples include 90,352 propositions with a total of 332,381 arguments. The test data contain 5,246 propositions and 19,511 arguments. As shown in the table, although the full-parsing features are more helpful than the shallow-parsing features, the performance gap is quite small (0.75% on gold-standard data and 0.61% with the automatic parsers). The rather small difference in performance between argument classifiers using full parsing and shallow parsing information almost disappears when their output is processed by the inference stage. Table 3 shows the final results in recall, precision, and F 1 when the argument boundaries are known. In all cases, the differences in F 1 between the full parsing-based and the shallow parsing-based systems are not statistically significant.", "cite_spans": [], "ref_spans": [ { "start": 789, "end": 796, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1679, "end": 1686, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Argument Classification", "sec_num": "4.2" },
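For reference, here is a minimal sketch of this bootstrap significance test (our own illustration, not the actual evaluation script; `gold` and `pred` are hypothetical per-sentence sets of labeled argument spans):

```python
# A minimal sketch of a bootstrap 95% confidence interval for F1,
# following the resampling method of Noreen (1989).
import random

def f1(gold, pred):
    correct = sum(len(g & p) for g, p in zip(gold, pred))
    n_gold = sum(len(g) for g in gold)
    n_pred = sum(len(p) for p in pred)
    prec = correct / n_pred if n_pred else 0.0
    rec = correct / n_gold if n_gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def bootstrap_ci(gold, pred, n_resamples=1000, alpha=0.05):
    n = len(gold)
    samples = []
    for _ in range(n_resamples):
        # Resample sentences with replacement and rescore.
        idx = [random.randrange(n) for _ in range(n)]
        samples.append(f1([gold[i] for i in idx], [pred[i] for i in idx]))
    samples.sort()
    lo = samples[int(n_resamples * alpha / 2)]
    hi = samples[int(n_resamples * (1 - alpha / 2)) - 1]
    return lo, hi
```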
{ "text": "When the argument boundaries are known, the performance of the full parsing-based SRL system is about the same as that of the shallow parsing-based SRL system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion.", "sec_num": null }, { "text": "Argument identification is an important stage that effectively reduces the number of argument candidates after the pruning stage. Given an argument candidate, an argument identifier is a binary classifier that decides whether or not the candidate should be considered as an argument. To evaluate the influence of full parsing information in this stage, the candidate list used here is the output of the pruning heuristics applied to the gold-standard parse trees. The heuristics result in a total of 323,155 positive and 686,887 negative examples in the training set, and 18,988 positive and 39,585 negative examples in the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Identification", "sec_num": "4.3" }, { "text": "Similar to the argument classification stage, the only difference between the full parsing- and shallow parsing-based systems is in the construction of some features. Specifically, the phrase type, head word, POS tag of the head word, path, and subcategorization features are approximated using chunks and clauses when the binary classifier is trained using shallow parsing information. Table 4 reports the performance of the argument identifier on the test set using the direct predictions of the trained binary classifier. The recall and precision of the full parsing-based system are around 2 to 3 percentage points higher than those of the shallow parsing-based system on the gold-standard data. As a result, the F 1 score is 2.5 percentage points higher. The performance on automatic parse data is unsurprisingly lower, but the difference between the full parsing- and the shallow parsing-based systems is as observed previously. In terms of filtering efficiency, around 25% of the examples are predicted as positive. In other words, both argument identifiers filter out around 75% of the argument candidates after pruning. Because the recall of the argument identification stage sets the upper bound on the recall of argument classification, the threshold that determines when examples are predicted to be positive is usually lowered to allow more positive predictions. That is, a candidate is predicted as positive when its probability estimate is larger than the threshold. Table 5 shows the performance of the argument identifiers when the threshold is 0.1. 2 Because argument identification is just an intermediate step in a complete system, a more realistic evaluation method is to see how each final system performs. Using an argument identifier with threshold = 0.1 (i.e., Table 5 ), Table 6 reports the final results in recall, precision, and F 1 . The F 1 difference is 1.5 points when using the gold-standard data. However, when automatic parsers are used, the shallow parsing-based system is, in fact, slightly better, although the difference is not statistically significant. This may be due to the fact that chunk and clause predictions are very important here, and shallow parsers are more accurate in chunk or clause predictions than a full parser (Li and Roth 2001) .", "cite_spans": [ { "start": 1546, "end": 1547, "text": "2", "ref_id": null }, { "start": 2248, "end": 2266, "text": "(Li and Roth 2001)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 378, "end": 385, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 1461, "end": 1468, "text": "Table 5", "ref_id": null }, { "start": 1765, "end": 1772, "text": "Table 5", "ref_id": null }, { "start": 1776, "end": 1783, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Argument Identification", "sec_num": "4.3" },
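The thresholding step itself is simple; a minimal sketch follows, assuming each candidate carries a hypothetical `prob` field with the identifier's probability estimate:

```python
# A minimal sketch of the thresholded argument identifier: keep every
# candidate whose estimated probability of being an argument exceeds the
# threshold. Lowering the threshold (e.g., from 0.5 to 0.1) trades
# precision for recall, raising the recall ceiling of the later stages.
def identify_arguments(candidates, threshold=0.1):
    return [c for c in candidates if c["prob"] > threshold]
```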
{ "text": "Full parsing information helps in argument identification. However, when the automatic parsers are used, using the full parsing information may not yield better overall results than using shallow parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion.", "sec_num": null }, { "text": "As shown in the previous two sections, the overall performance gaps between full parsing and shallow parsing are small. When automatic parsers are used, the difference is less than 1 point in F 1 or accuracy. Therefore, we conclude that the main contribution of full parsing is in the pruning stage. Because the shallow parsing system does not have enough information for the pruning heuristics, we train two word-based classifiers to replace the pruning stage. One classifier is trained to predict whether a given word is the start (S) of an argument; the other is trained to predict the end (E) of an argument. If the product of the probabilities of a pair of S and E predictions is larger than a predefined threshold, then this pair is considered as an argument candidate (see the sketch below). The threshold used here was tuned on the validation set. Both classifiers use features very similar to those used by the argument identifier, as explained in Section 3.2, treating the target word as a constituent. Specifically, the features are the predicate, POS tag of the predicate, voice, context words, POS tags of the context words, chunk pattern, clause relative position, and shallow-path. The head word and its POS tag are replaced by the target word and its POS tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning", "sec_num": "4.4" }, { "text": "Table 5: The performance of argument identification after pruning (based on the gold-standard full parse trees) and with threshold = 0.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 5", "sec_num": null }, { "text": "Table 6: The overall system performance using the output from the pruning heuristics, applied on the gold-standard full parse trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "The comparison between the classifiers and the heuristics is shown in Table 7 . Even without knowledge of the constituent boundaries, the classifiers are, surprisingly, better than the pruning heuristics. Using either the gold-standard data set or the output of automatic parsers, the classifiers achieve higher F 1 scores. One possible reason for this phenomenon is that the accuracy of the pruning strategy is limited by the number of agreements between the correct arguments and the constituents of the parse trees. Table 8 summarizes the statistics of the examples seen by both strategies. The pruning strategy needs to decide which are the potential arguments among all constituents, and is therefore upper-bounded by the number of correct arguments that agree with some constituent. On the other hand, the classifiers do not have this limitation. The number of examples they observe is the total number of words to be processed, and the positive examples are those arguments that are annotated as such in the data set.", "cite_spans": [], "ref_spans": [ { "start": 911, "end": 918, "text": "Table 7", "ref_id": null }, { "start": 1363, "end": 1370, "text": "Table 8", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Pruning", "sec_num": "4.4" },
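The sketch referenced above is a minimal illustration of this word-based pruning; the probability arrays are hypothetical classifier outputs, not the system's internals:

```python
# A minimal sketch of word-based pruning: two classifiers score each word
# as a possible argument start (S) or end (E); a span [i, j] becomes a
# candidate when the product of the two probabilities clears a threshold
# tuned on the validation set (0.04 in Table 7).
def word_based_candidates(p_start, p_end, threshold=0.04):
    n = len(p_start)
    return [(i, j)
            for i in range(n)
            for j in range(i, n)
            if p_start[i] * p_end[j] > threshold]
```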
{ "text": "Table 7: The performance of pruning using heuristics and classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 7", "sec_num": null }, { "text": "Table 8: Statistics of the examples seen by both pruning strategies: the heuristics based on full parsing, and the word-based classifier with threshold = 0.04. The Agreements column shows the number of arguments that match the boundaries of some constituents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 8", "sec_num": null }, { "text": "Note that because each verb is processed independently, a sentence is processed once for each verb in the sentence. Therefore, the words and constituents in each sentence are counted as many times as the number of verbs to be processed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning", "sec_num": "4.4" }, { "text": "As before, in order to compare the systems that use full parsing and shallow parsing information, we need to see the impact on the overall performance. Therefore, we built two semantic role labeling systems based on full parsing and shallow parsing information. The full parsing-based system follows the pruning, argument identification, argument classification, and inference stages, as described earlier. For the shallow parsing system, the pruning heuristics are replaced by the word-based pruning classifiers, and the remaining stages are designed to use only shallow parsing, as described in previous sections. Table 9 shows the overall performance of the two evaluation systems.", "cite_spans": [], "ref_spans": [ { "start": 604, "end": 611, "text": "Table 9", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Pruning", "sec_num": "4.4" }, { "text": "As indicated in the tables, the gap in F 1 between the full parsing- and shallow parsing-based systems widens to more than 11 points on the gold-standard data. At first glance, this result seems to contradict our conclusion in Section 4.3. After all, if the pruning stage of the shallow parsing SRL system performs equally well or even better, the overall performance gap in F 1 should be small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning", "sec_num": "4.4" }, { "text": "After we carefully examined the output of the word-based classifier, we realized that it filters out the easy candidates and leaves the difficult examples to the later stages. Specifically, these argument candidates often overlap and differ only in one or two words. On the other hand, the pruning heuristics based on full parsing never output overlapping candidates and consequently provide input that is easier for the next stage to handle. Indeed, the following argument identification stage turns out to be good at discriminating these non-overlapping candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning", "sec_num": "4.4" }, { "text": "The most crucial contribution of full parsing is in the pruning stage. The internal tree structure significantly helps in discriminating argument candidates, which makes the work done by the following stages easier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion.", "sec_num": null },
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion.", "sec_num": null }, { "text": "Our inference procedure plays an important role in improving accuracy when the local predictions violate the constraints among argument labels. In this section, we first present the overall system performance when most constraints are not used. We then demonstrate how the inference procedure can be used to combine the output of several systems to yield better performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Effect of Inference", "sec_num": "5." }, { "text": "The inference stage in our system architecture provides a principled way to resolve conflicting local predictions. It is interesting to see whether this procedure improves the performance differently for the full parsing-vs. the shallow parsing-based system, as well as gold-standard vs. automatic parsing input. Table 10 shows the results of using only constraints 1, 2, 3, and 4. As mentioned previously, the first three constraints are handled before the argument classification stage. Constraint 4, which forbids overlapping or embedding arguments, is required in order to use the official CoNLL-2005 evaluation script and is therefore kept.", "cite_spans": [], "ref_spans": [ { "start": 313, "end": 321, "text": "Table 10", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Inference with Limited Constraints", "sec_num": "5.1" }, { "text": "By comparing Table 9 with Table 10 , we can see that the effect of adding more constraints is quite consistent over the four settings. Precision is improved by 1 to 2 percentage points but recall is decreased a little. As a result, the gain in F 1 is about 0.5 to 1 point. It is not surprising to see this lower recall and higher precision phenomenon after the constraints described in Section 3.4.1 are examined. Most constraints punish false non-null output, but do not regulate false null predictions. For example, an assignment that has two A1 arguments clearly violates the non-duplication constraint. However, if an assignment has no predicted arguments at all, it still satisfies all the constraints.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 9", "ref_id": "TABREF9" }, { "start": 26, "end": 34, "text": "Table 10", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Inference with Limited Constraints", "sec_num": "5.1" }, { "text": "The empirical study in Section 4 indicates that the performance of an SRL system primarily depends on the very first stage-pruning, which is directly derived from the full parse trees. This also means that in practice the quality of the syntactic parser is decisive to the quality of the SRL system. To improve semantic role labeling, one possible way is to combine different SRL systems through a joint inference stage, given that the systems are derived using different full parse trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "To test this idea, we first build two SRL systems that use Collins's parser (Collins 1999) 3 and Charniak's parser (Charniak 2001) , respectively. In fact, these two parsers have noticeably different outputs. Applying the pruning heuristics on the output of Collins's parser produces a list of candidates with 81.05% recall. Although this number is significantly lower than the 86.08% recall produced by Charniak's parser, the union of the two candidate lists still significantly improves recall to 91.37%. 
We construct the two systems by implementing the first three stages, namely, pruning, argument identification, and argument classification. When a test sentence is given, a joint inference stage is used to resolve the inconsistencies among the argument classification outputs of these two systems.", "cite_spans": [ { "start": 76, "end": 90, "text": "(Collins 1999)", "ref_id": "BIBREF9" }, { "start": 115, "end": 130, "text": "(Charniak 2001)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "We first briefly review the objective function used in the inference procedure introduced in Section 3.4. Formally speaking, the argument classifiers attempt to assign labels to a set of arguments, S 1:M , indexed from 1 to M. Each argument S i can take any label from a set of argument labels, P, and the indexed set of arguments can take a set of labels, c 1:M \u2208 P M . If we assume that the argument classifier returns an estimated conditional probability distribution, Prob(S i = c i ), then, given a sentence, the inference procedure seeks a global assignment that maximizes the objective function denoted by Equation 2, which can be rewritten as follows,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\hat{c}_{1:M} = \arg\max_{c_{1:M} \in \mathcal{F}(\mathcal{P}^M)} \sum_{i=1}^{M} \mathrm{Prob}(S_i = c_i)", "eq_num": "(3)" } ], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "where the linguistic and structural constraints are represented by the filter F. In other words, this objective function reflects the expected number of correct argument predictions, subject to the constraints. When there are two or more argument classifiers from different SRL systems, a joint inference procedure can take the estimated probabilities output for all these candidates as input, although some candidates may refer to the same phrases in the sentence. For example, Figure 3 shows the two candidate sets for a fragment of a sentence, ..., traders say, unable to cool the selling panic in both stocks and futures. In this example, system A has two argument candidates, a 1 = traders and a 4 = the selling panic in both stocks and futures; system B has three argument candidates, b 1 = traders, b 2 = the selling panic, and b 3 = in both stocks and futures.", "cite_spans": [], "ref_spans": [ { "start": 479, "end": 487, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "A straightforward solution to the combination is to treat each argument produced by a system as a possible output. Each possible labeling of the argument is associated with a variable, which is then used to set up the inference procedure. However, the final prediction will likely be dominated by the system that produces more candidates, which is system B in this example. The reason is that our objective function is the sum of the probabilities of all the candidate assignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "This bias can be corrected by the following observation. Although system A only has two candidates, a 1 and a 4 , it can be treated as if it also has two additional phantom candidates, a 2 and a 3 , where a 2 and b 2 refer to the same phrase, and so do a 3 and b 3 . Similarly, system B has a phantom candidate b 4 that corresponds to a 4 . Because system A does not really generate a 2 and a 3 , we can assume that these two phantom candidates are predicted by it as \"null\" (i.e., not an argument). We assign the same prior distribution to each phantom candidate. In particular, the probability of the \"null\" class is set to 0.55 based on empirical tests, and the probabilities of the remaining classes are set based on their occurrence frequencies in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" },
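A minimal sketch of this phantom-candidate construction follows (hypothetical data layout, not the system's internals): every span proposed by either system becomes a shared candidate, a system that did not propose a span contributes the fixed prior with Prob(null) = 0.55, and the per-label scores handed to the joint inference are averaged as in Equation (5) below.

```python
# A minimal sketch: combine k systems' outputs with phantom candidates.
# system_outputs: list of dicts mapping span -> {label: probability}.
# null_prior: the prior used for phantom candidates, e.g. {"null": 0.55,
# "A0": ..., ...} with the rest set from training-data frequencies.
def combine_systems(system_outputs, labels, null_prior):
    all_spans = set()
    for output in system_outputs:
        all_spans.update(output)          # union of candidates over systems
    k = len(system_outputs)
    combined = {}
    for span in all_spans:
        combined[span] = {
            c: sum(out.get(span, null_prior)[c] for out in system_outputs) / k
            for c in labels
        }
    return combined  # scores for all N candidates, fed to the joint ILP
```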
{ "text": "Then, we treat each possible final argument output as a single unit. Each probability estimate made by a system can be viewed as evidence for the final probability estimate and, therefore, we can simply average the estimates. Formally, let S i be the argument set output by system i, and let S = \bigcup_{i=1}^{k} S_i be the set of all arguments, where k is the number of systems; let N be the cardinality of S. Our augmented objective function is then:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\hat{c}_{1:N} = \arg\max_{c_{1:N} \in \mathcal{F}(\mathcal{P}^N)} \sum_{i=1}^{N} \mathrm{Prob}(S_i = c_i)", "eq_num": "(4)" } ], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "where S i \u2208 S, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\mathrm{Prob}(S_i = c_i) = \frac{1}{k} \sum_{j=1}^{k} \mathrm{Prob}_j(S_i = c_i)", "eq_num": "(5)" } ], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "where Prob j is the probability output by system j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" }, { "text": "Figure 3: The output of two SRL systems: system A has two candidates, a 1 = traders and a 4 = the selling panic in both stocks and futures; system B has three argument candidates, b 1 = traders, b 2 = the selling panic, and b 3 = in both stocks and futures. In addition, we create two phantom candidates a 2 and a 3 for system A that correspond to b 2 and b 3 respectively, and b 4 for system B that corresponds to a 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "Note that we may also treat the individual systems differently by applying different priors (i.e., weights) to the estimated probabilities of the argument candidates. For example, if the performance of system A is much better than that of system B, then we may want to trust system A's output more by multiplying its output probabilities by a larger weight; a small worked example is sketched below. Table 11 reports the performance of the two individual systems based on Collins's parser and Charniak's parser, as well as the joint system, where the two individual systems are equally weighted. The joint system based on this straightforward strategy significantly improves the performance over the two original SRL systems in both recall and precision, and thus achieves a much higher F 1 .", "cite_spans": [], "ref_spans": [ { "start": 350, "end": 358, "text": "Table 11", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Joint Inference", "sec_num": "5.2" },
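The worked example referenced above uses hypothetical numbers; with equal weights it reduces to the plain average of Equation (5) used for Table 11:

```python
# A small worked example of weighted combination (hypothetical numbers).
# probs[j] is Prob_j(S_i = c_i) from system j; weights should sum to 1.
def weighted_prob(probs, weights):
    return sum(w * p for w, p in zip(weights, probs))

# Two systems score the same candidate as A0 with 0.8 and 0.6; the equally
# weighted estimate is 0.7, while trusting the first system more raises it:
assert abs(weighted_prob([0.8, 0.6], [0.5, 0.5]) - 0.70) < 1e-9
assert abs(weighted_prob([0.8, 0.6], [0.7, 0.3]) - 0.74) < 1e-9
```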
{ "text": "In this section, we present the detailed evaluation of our SRL system in the competition on semantic role labeling-the CoNLL-2005 shared task (Carreras and M\u00e0rquez 2005). The setting of this shared task was basically the same as in 2004, with some extensions. First, it allowed much richer syntactic information. In particular, full parse trees generated using Collins's parser (Collins 1999) and Charniak's parser (Charniak 2001) were provided. Second, the standard full parsing partition was used: the training set was enlarged and covered Sections 02-21, the development set was Section 24, and the test set was Section 23. Finally, in addition to the Wall Street Journal (WSJ) data, three sections of the Brown corpus were used to provide cross-corpora evaluation.", "cite_spans": [ { "start": 384, "end": 397, "text": "(Collins 1999)", "ref_id": "BIBREF9" }, { "start": 422, "end": 437, "text": "(Charniak 2001)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation-CoNLL Shared Task 2005", "sec_num": "6." }, { "text": "The system we used to participate in the CoNLL-2005 shared task is an enhanced version of the system described in Sections 3 and 5. The main difference was that the joint-inference stage was extended to combine six basic SRL systems instead of two. Specifically for this implementation, we first trained two SRL systems that use Collins's parser and Charniak's parser, respectively, because of their noticeably different outputs. In evaluation, we ran the system that was trained with Charniak's parser five times, once for each of the top-5 parse trees output by Charniak's parser. Together, this gives six different outputs per predicate. For each parse tree output, we ran the first three stages, namely, pruning, argument identification, and argument classification. Then, a joint-inference stage, where each individual system is weighted equally, was used to resolve the inconsistencies among the argument classification outputs of these systems. Table 12 shows the overall results on the development set and the different test sets; the detailed results on WSJ Section 23 are shown in Table 13 . Table 14 shows the results of the individual systems and the improvement gained by the joint inference procedure on the development set.", "cite_spans": [], "ref_spans": [ { "start": 933, "end": 941, "text": "Table 12", "ref_id": "TABREF2" }, { "start": 1068, "end": 1076, "text": "Table 13", "ref_id": "TABREF3" }, { "start": 1079, "end": 1087, "text": "Table 14", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Empirical Evaluation-CoNLL Shared Task 2005", "sec_num": "6." }, { "text": "Our system reached the highest F 1 scores on all the test sets and was the best system among the 19 participating teams. After the competition, we improved the system slightly by tuning the weights of the individual systems in the joint inference procedure, after which the F 1 scores on the WSJ test section and the Brown test set are 79.59 and 67.98 points, respectively. 
In terms of computation time, for both the argument identifier and the argument classifier, the training of each model, excluding feature extraction, takes 50-70 minutes using less than 1 GB of memory on a 2.6 GHz AMD machine. On the same machine, the average test time for each stage, excluding feature extraction, is around 2 minutes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation-CoNLL Shared Task 2005", "sec_num": "6." }, { "text": "Table 14: The results of individual systems and the result with joint inference on the development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 14", "sec_num": null }, { "text": "The pioneering work on building an automatic semantic role labeler was proposed by Gildea and Jurafsky (2002) . In their setting, semantic role labeling was treated as a tagging problem on each constituent in a parse tree, solved by a two-stage architecture consisting of an argument identifier and an argument classifier. This is similar to our main architecture with the exclusion of the pruning and inference stages. There are two additional key differences between their system and ours. First, their system used a back-off probabilistic model as its main engine. Second, it was trained on FrameNet (Baker, Fillmore, and Lowe 1998)-another large corpus, besides PropBank, that contains selected examples of semantically labeled sentences.", "cite_spans": [ { "start": 83, "end": 109, "text": "Gildea and Jurafsky (2002)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7." }, { "text": "Later that year, the same approach was applied to PropBank by Gildea and Palmer (2002) . Their system achieved 57.7% precision and 50.0% recall with automatic parse trees, and 71.1% precision and 64.4% recall with gold-standard parse trees. It is worth noting that at that time the PropBank project was not finished, and the available data set was only a fraction of its current size. Since these pioneering works, the task has gained increasing popularity and created a new line of research. The two-step constituent-by-constituent architecture became a common blueprint for many systems that followed.", "cite_spans": [ { "start": 62, "end": 86, "text": "Gildea and Palmer (2002)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7." }, { "text": "Partly due to the expansion of the PropBank data set, researchers have gradually improved the performance of automatic SRL systems by using new techniques and new features. Some of the early systems are described in Chen and Rambow (2003) , Gildea and Hockenmaier (2003) , and Surdeanu et al. (2003) . All are based on a two-stage architecture similar to the one proposed by Gildea and Palmer (2002) , with differences in the machine-learning techniques and the features used. The first breakthrough in terms of performance was due to Pradhan et al. (2003) , who first viewed the task as a massive classification problem and applied multiple SVMs to it. Their final result (after a few more improvements), reported in Pradhan et al. (2004) , achieved 84% and 75% in precision and recall, respectively.", "cite_spans": [ { "start": 226, "end": 248, "text": "Chen and Rambow (2003)", "ref_id": "BIBREF8" }, { "start": 251, "end": 280, "text": "Gildea and Hockenmaier (2003)", "ref_id": "BIBREF13" }, { "start": 287, "end": 309, "text": "Surdeanu et al. (2003)", "ref_id": "BIBREF42" }, { "start": 385, "end": 409, "text": "Gildea and Palmer (2002)", "ref_id": "BIBREF15" }, { "start": 547, "end": 568, "text": "Pradhan et al. (2003)", "ref_id": "BIBREF33" }, { "start": 729, "end": 750, "text": "Pradhan et al. (2004)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7." }, { "text": "A second significant contribution beyond the two-stage architecture is due to Xue and Palmer (2004) , who introduced the pruning heuristics to the two-stage architecture and remarkably reduced the number of candidate arguments a system needs to consider; this approach was adopted by many systems. Another significant advancement was the realization that global information can be exploited and benefits the results significantly. Inference based on an integer linear programming technique, which was originally introduced by Roth and Yih (2004) for a relation extraction problem, was first applied to the SRL problem by Punyakanok et al. (2004) . It showed that domain knowledge can be easily encoded and contributes significantly through inference over the output of classifiers. The idea of exploiting global information, which is detailed in this paper, was pursued later by other researchers, in different forms.", "cite_spans": [ { "start": 78, "end": 99, "text": "Xue and Palmer (2004)", "ref_id": "BIBREF44" }, { "start": 624, "end": 648, "text": "Punyakanok et al. (2004)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7." }, { "text": "Besides the constituent-by-constituent based architecture, others have also been explored. The alternative frameworks include representing semantic role labeling as a sequence-tagging problem (M\u00e0rquez, Pere Comas, and Catal\u00e0 2005) and tagging the edges in the corresponding dependency trees (Hacioglu 2004) . However, the most popular architecture by far is the constituent-by-constituent based multi-stage architecture, perhaps due to its conceptual simplicity and its success. In the CoNLL-2005 shared task competition, the majority of the systems followed the constituent-by-constituent based two-stage architecture, and the use of the pruning heuristics was also fairly common.", "cite_spans": [ { "start": 192, "end": 230, "text": "(M\u00e0rquez, Pere Comas, and Catal\u00e0 2005)", "ref_id": null }, { "start": 291, "end": 306, "text": "(Hacioglu 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7." }, { "text": "The CoNLL-2005 shared task also highlighted the importance of system combination, such as our ILP technique when used in joint inference, for achieving superior performance. The top four systems, which produced significantly better results than the rest, all used some scheme to combine the output of several SRL systems, ranging from using a fixed combination function (Haghighi, Toutanova, and Manning 2005; Koomen et al. 2005) to using a machine-learned combination strategy (M\u00e0rquez, Pere Comas, and Catal\u00e0 2005).", "cite_spans": [ { "start": 377, "end": 416, "text": "(Haghighi, Toutanova, and Manning 2005;", "ref_id": "BIBREF21" }, { "start": 417, "end": 436, "text": "Koomen et al. 2005)", "ref_id": "BIBREF24" }, { "start": 485, "end": 523, "text": "(M\u00e0rquez, Pere Comas, and Catal\u00e0 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7."
}, { "text": "The work of Gildea and Palmer (2002) pioneered not only the fundamental architecture of SRL, but was also the first to investigate the interesting question regarding the significance of using full parsing for high quality SRL. They compared their full system with another system that only used chunking, and found that the chunk-based system performed much worse. The precision and recall dropped from 57.7% and 50.0% to 27.6% and 22.0%, respectively. That led to the conclusion that full parsing information is necessary to solving the SRL problem, especially at the stage of argument identification-a finding that is quite similar to ours in this article. However, their chunk-based approach was very weak-only chunks were considered as possible candidates; hence, it is not very surprising that the boundaries of the arguments could not be reliably found. In contrast, our shallow parse-based system does not have these restrictions on the argument boundaries and therefore performs much better at this stage, providing a more fair comparison.", "cite_spans": [ { "start": 12, "end": 36, "text": "Gildea and Palmer (2002)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7." }, { "text": "A related comparison can be found also in the work by Pradhan, Hacioglu, Krugler et al. (2005) (their earlier version appeared in Pradhan et al. [2003] ), which reported the performance on several systems using different information sources and system architectures. Their shallow parse-based system is modeled as a sequence tagging problem while the full system is a constituent-by-constituent based two-stage system. Due to technical difficulties, though, they reported the results of the chunk-based systems only on a subset of the full data set. Their shallow parse-based system achieved 60.4% precision and 51.4% recall and their full system achieved 80.6% precision and 67.1% recall on the same data set (but 84% precision and 75% recall with the full data set). Therefore, due to the use of different architectures and data set sizes, the questions of \"how much one can gain from full parsing over shallow parsing when using the full PropBank data set\" and \"what are the sources of the performance gain\" were left open.", "cite_spans": [ { "start": 54, "end": 94, "text": "Pradhan, Hacioglu, Krugler et al. (2005)", "ref_id": "BIBREF32" }, { "start": 130, "end": 151, "text": "Pradhan et al. [2003]", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7." }, { "text": "Similarly, in the CoNLL-2004 shared task (Carreras and M\u00e0rquez 2004) , participants were asked to develop SRL systems with the restriction that only shallow parsing information (i.e., chunks and clauses) were allowed. The performance of the best system was at 72.43% precision and 66.77% recall, which was about 10 points in F 1 lower than the best system based on full parsing in the literature. However, the training examples were derived from only 5 sections and not all the 19 sections usually used in the standard setting. Hence, the question was not yet fully answered.", "cite_spans": [ { "start": 41, "end": 68, "text": "(Carreras and M\u00e0rquez 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7." 
}, { "text": "Our experimental study, on the other hand, is done with a consistent architecture, by considering each stage in a controlled manner, and using the full data set, allowing one to draw direct conclusions regarding the impact of this information source.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7." }, { "text": "This paper studies the important task of semantic role labeling. We presented an approach to SRL and a principled and general approach to incorporating global information in natural language decisions. Beyond presenting this approach which leads to a state-of-the-art SRL system, we focused on investigating the significance of using full parse tree information as input to an SRL system adhering to the most common system architecture, and the stages in the process where this information has the most impact. We performed a detailed and fair experimental comparison between shallow and full parsing information and concluded that, indeed, full syntactic information can improve the performance of an SRL system. In particular, we have shown that this information is most crucial in the pruning stage of the system, and relatively less important in the following stages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." }, { "text": "In addition, we showed the importance of global inference to good performance in this task, characterized by rich structural and linguistic constraints among the predicted labels of the arguments. Our integer linear programming-based inference procedure is a powerful and flexible optimization strategy that finds the best solution subject to these constraints. As we have shown, it can be used to resolve conflicting argument predictions in an individual system but can also serve as an effective and simple approach to combining different SRL systems, resulting in a significant improvement in performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." }, { "text": "In the future, we plan to extend our work in several directions. By adding more constraints to the inference procedure, an SRL system may be further improved. Currently, the constraints are provided by human experts in advance. Learning both hard and statistical constraints from the data will be our top priority. Some work on combining statistical and declarative constraints has already started and is reported in Roth and Yih (2005) . Another issue we want to address is domain adaptation. It has been clearly shown in the CoNLL-2005 shared task that the performance of current SRL systems degrades significantly when tested on a corpus different from the one used in training. This may be due to the underlying components, especially the syntactic parsers which are very sensitive to changes in data genre. Developing a better model that more robustly combines these components could be a promising direction. In addition, although the shallow parsing-based system was shown here to be inferior, shallow parsers were shown to be more robust than full parsers (Li and Roth 2001) . Therefore, combining these two systems may bring forward both of their advantages.", "cite_spans": [ { "start": 417, "end": 436, "text": "Roth and Yih (2005)", "ref_id": "BIBREF41" }, { "start": 1064, "end": 1082, "text": "(Li and Roth 2001)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." 
}, { "text": "Constraint 8: Illegal argument types", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." }, { "text": "u 1A3 + u 2A3 + . . . + u 5A3 = 0 u 1A4 + u 2A4 + . . . + u 5A4 = 0 u 1A5 + u 2A5 + . . . + u 5A5 = 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." }, { "text": "There are other constraints such as \"exactly one V argument per class,\" or \"V-A1-C-V pattern\" as introduced byPunyakanok et al. (2004). However, we did not find them particularly helpful in our experiments. Therefore, we exclude those constraints in the presentation here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The value was determined by experimenting with the complete system using automatic full parse trees, on the development set. In our tests, lowering the threshold in argument identification always leads to higher overall recall and lower overall precision. As a result, the gain in F 1 is limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the Collins parser implemented byBikel (2004).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Xavier Carreras and Llu\u00eds M\u00e0rquez for the data and scripts, Szu-ting Yi for her help in improving our joint inference procedure, and Nick Rizzolo as well as the anonymous reviewers for their comments and suggestions. We are also grateful to Dash Optimization for the free academic use of Xpress-MP and AMD for their equipment donation. This research is supported by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program, a DOI grant under the Reflex program, NSF grants ITR-IIS-0085836, ITR-IIS-0085980, and IIS-9984168, EIA-0224453, and an ONR MURI Award.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "In this section, we show a complete example of the ILP formulation formulated to solve the inference problem as described in Section 3.4.Example. Assume the sentence is four words long with the following argument candidates, and the following illegal argument types for the predicate of interest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A: An ILP Formulation for SRL", "sec_num": null }, { "text": "Indicator Variables and Their Costs. The followings are the indicator variables and their associated costs set up for the example.Indicator Variables:Costs:Objective Function. The objective function can be written as the following.argmax u ic \u2208{0,1}:\u2200i\u2208 [1, 5] Additional Constraints. The rest of the constraints can be formulated as the following.Constraint 4: No overlapping or embeddingConstraint 5: No duplicate argument classesConstraint 6: R-arg argumentsConstraint 7: C-arg arguments. . 
.", "cite_spans": [ { "start": 254, "end": 257, "text": "[1,", "ref_id": null }, { "start": 258, "end": 260, "text": "5]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Sentence", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Berkeley Framenet project", "authors": [ { "first": "Collin", "middle": [ "F" ], "last": "Baker", "suffix": "" }, { "first": "J", "middle": [], "last": "Charles", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Fillmore", "suffix": "" }, { "first": "", "middle": [], "last": "Lowe", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLING-ACL", "volume": "", "issue": "", "pages": "86--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baker, Collin F., Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley Framenet project. In Proceedings of COLING-ACL, pages 86-90, Montreal, Canada.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Intricacies of Collins' parsing model", "authors": [ { "first": "Daniel", "middle": [ "M" ], "last": "Bikel", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "479--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bikel, Daniel M. 2004. Intricacies of Collins' parsing model. Computational Linguistics, 30(4):479-511.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural Networks for Pattern Recognition, chapter 6.4: Modelling conditional distributions, page 215", "authors": [ { "first": "Christopher", "middle": [ "M" ], "last": "Bishop", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishop, Christopher M., 1995. Neural Networks for Pattern Recognition, chapter 6.4: Modelling conditional distributions, page 215. Oxford University Press, Oxford, UK.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The SNoW learning architecture", "authors": [ { "first": "Andrew", "middle": [ "J" ], "last": "Carlson", "suffix": "" }, { "first": "Chad", "middle": [ "M" ], "last": "Cumby", "suffix": "" }, { "first": "Jeff", "middle": [ "L" ], "last": "Rosen", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlson, Andrew J., Chad M. Cumby, Jeff L. Rosen, and Dan Roth. 1999. The SNoW learning architecture. Technical Report UIUCDCS-R-99-2101, UIUC Computer Science Department.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Introduction to the CoNLL-2004 shared tasks: Semantic role labeling", "authors": [ { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Ll\u00fais", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": ";", "middle": [], "last": "", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Carreras", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "", "suffix": "" }, { "first": "Ll\u00fais", "middle": [], "last": "M\u00e0rquez", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)", "volume": "", "issue": "", "pages": "152--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carreras, Xavier and Ll\u00fais M\u00e0rquez. 2004. Introduction to the CoNLL-2004 shared tasks: Semantic role labeling. In Proceedings of CoNLL-2004, pages 89-97, Boston, MA. 
Carreras, Xavier and Ll\u00fais M\u00e0rquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 152-164, Ann Arbor, MI.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Filtering-ranking perceptron learning for partial parsing", "authors": [ { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Ll\u00fais", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "Castro", "suffix": "" } ], "year": 2005, "venue": "Machine Learning", "volume": "60", "issue": "", "pages": "41--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carreras, Xavier, Ll\u00fais M\u00e0rquez, and Jorge Castro. 2005. Filtering-ranking perceptron learning for partial parsing. Machine Learning, 60:41-71.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning and inference for clause identification", "authors": [ { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Ll\u00fais", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Vasin", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 13th European Conference on Machine Learning (ECML-2002)", "volume": "", "issue": "", "pages": "35--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carreras, Xavier, Ll\u00fais M\u00e0rquez, Vasin Punyakanok, and Dan Roth. 2002. Learning and inference for clause identification. In Proceedings of the 13th European Conference on Machine Learning (ECML-2002), pages 35-47, Helsinki, Finland.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Immediate-head parsing for language models", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th Annual Meeting of the Association of Computational Linguistics (ACL-2001)", "volume": "", "issue": "", "pages": "116--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, Eugene. 2001. Immediate-head parsing for language models. In Proceedings of the 39th Annual Meeting of the Association of Computational Linguistics (ACL-2001), pages 116-123, Toulouse, France.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Use of deep linguistic features for the recognition and labeling of semantic arguments", "authors": [ { "first": "John", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP-2003)", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, John and Owen Rambow. 2003. Use of deep linguistic features for the recognition and labeling of semantic arguments. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP-2003), pages 41-48, Sapporo, Japan.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Head-driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, Michael. 1999. 
Head-driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Computer Science Department, University of Pennsylvania, Philadelphia, PA.
Dagan, Ido, Yael Karov, and Dan Roth. 1997. Mistake-driven learning in text categorization. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing (EMNLP-1997), pages 55-63, Providence, RI.
Even-Zohar, Yair and Dan Roth. 2001. A sequential model for multi-class classification. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing (EMNLP-2001), pages 10-19, Pittsburgh, PA.
Freund, Yoav and Robert E. Schapire. 1999. Large margin classification using the Perceptron algorithm. Machine Learning, 37(3):277-296.
Gildea, Daniel and Julia Hockenmaier. 2003. Identifying semantic roles using combinatory categorial grammar. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP-2003), pages 57-64, Sapporo, Japan.
Gildea, Daniel and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.
Gildea, Daniel and Martha Palmer. 2002. The necessity of parsing for predicate argument recognition. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), pages 239-246, Philadelphia, PA.
Golding, Andrew R. and Dan Roth. 1999. A Winnow-based approach to context-sensitive spelling correction. Machine Learning, 34(1-3):107-130.
Grove, Adam J. and Dan Roth. 2001. Linear concepts and hidden variables. Machine Learning, 42(1-2):123-141.
Guéret, Christelle, Christian Prins, and Marc Sevaux. 2002. Applications of Optimization with Xpress-MP. Dash Optimization. Translated and revised by Susanne Heipcke. http://www.dashoptimization.com/home/downloads/book/booka4.pdf.
Hacioglu, Kadri. 2004. Semantic role labeling using dependency trees. In Proceedings of the 20th International Conference on Computational Linguistics (COLING), Geneva, Switzerland.
Hacioglu, Kadri, Sameer Pradhan, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2004. Semantic role labeling by tagging syntactic chunks. In Proceedings of CoNLL-2004, pages 110-113, Boston, MA.
Haghighi, Aria, Kristina Toutanova, and Christopher D. Manning. 2005. A joint model for semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 173-176, Ann Arbor, MI.
Kingsbury, Paul and Martha Palmer. 2002. From Treebank to PropBank. In Proceedings of LREC-2002, Las Palmas, Canary Islands, Spain.
Kipper, Karin, Martha Palmer, and Owen Rambow. 2002. Extending PropBank with VerbNet semantic predicates. In Proceedings of the Workshop on Applied Interlinguas, Tiburon, CA.
Koomen, Peter, Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2005. Generalized inference with multiple semantic role labeling systems. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 181-184, Ann Arbor, MI.
Levin, Beth. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press, Chicago.
Levin, Beth and Malka R. Hovav. 1996. From lexical semantics to argument realization. Unpublished manuscript.
Li, Xin and Dan Roth. 2001. Exploring evidence for shallow parsing. In Proceedings of CoNLL-2001, pages 107-110, Toulouse, France.
Marcus, Mitchell P., Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Màrquez, Lluís, Pere Comas, Jesús Giménez, and Neus Català. 2005. Semantic role labeling as sequential tagging. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 193-196, Ann Arbor, MI.
Noreen, Eric W. 1989. Computer-Intensive Methods for Testing Hypotheses. John Wiley & Sons, New York.
Palmer, Martha, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.
Pradhan, Sameer, Kadri Hacioglu, Valerie Krugler, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005. Support vector learning for semantic argument classification. Machine Learning, 60:11-39.
Pradhan, Sameer, Kadri Hacioglu, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2003. Semantic role parsing: Adding semantic structure to unstructured text. In Proceedings of the 3rd IEEE International Conference on Data Mining (ICDM 2003), pages 629-632, Melbourne, FL.
Pradhan, Sameer, Kadri Hacioglu, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005. Semantic role chunking combining complementary syntactic views. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 217-220, Ann Arbor, MI.
Pradhan, Sameer, Wayne Ward, Kadri Hacioglu, James H. Martin, and Daniel Jurafsky. 2004. Shallow semantic parsing using support vector machines. In Proceedings of NAACL-HLT 2004, pages 233-240, Boston, MA.
Punyakanok, Vasin, Dan Roth, Wen-tau Yih, and Dav Zimak. 2004. Semantic role labeling via integer linear programming inference. In Proceedings of the 20th International Conference on Computational Linguistics (COLING), pages 1346-1352, Geneva, Switzerland.
Punyakanok, Vasin and Dan Roth. 2001. The use of classifiers in sequential inference. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13, pages 995-1001. MIT Press.
Roth, Dan. 1998. Learning to resolve natural language ambiguities: A unified approach. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), pages 806-813, Madison, WI.
Roth, Dan and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of CoNLL-2004, pages 1-8, Boston, MA.
Roth, Dan and Wen-tau Yih. 2005. Integer linear programming inference for conditional random fields. In Proceedings of the 22nd International Conference on Machine Learning (ICML-2005), pages 737-744, Bonn, Germany.
Surdeanu, Mihai, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 8-15, Sapporo, Japan.
Tjong Kim Sang, Erik F. and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proceedings of CoNLL-2000 and LLL-2000, pages 127-132, Lisbon, Portugal.
Xpress-MP. 2004. Dash Optimization. http://www.dashoptimization.com.
Xue, Nianwen and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP-2004), pages 88-94, Barcelona, Spain.
Zhang, Tong, Fred Damerau, and David Johnson. 2002. Text chunking based on a generalization of Winnow. Journal of Machine Learning Research, 2:615-637.

Figure 1: An example of a parse tree and its predicate-argument structure.

Features derived from shallow parsing, illustrated on Figure 1 (a code sketch follows the list):
- Chunk: for example, if the constituents are [NP His duties], [PP by John Smith], and [VBN elected], then their chunk features are "is-NP," "embed-PP & embed-NP," and "embedded-in-VP," respectively.
- Chunk pattern: encodes the sequence of chunks from the constituent to the predicate. For example, in Figure 1 the chunk sequence from [NP His duties] to the predicate elect is VP-PP-NP-NP-VP.
- Chunk pattern length: counts the number of chunks in the chunk pattern feature.
- Phrase type: uses a simple heuristic to identify the type of the argument candidate as VP, PP, or NP.
- Head word and POS tag of the head word: taken from the rightmost word for NP and from the leftmost word for VP and PP.
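Since these definitions are purely positional, a short sketch makes them concrete. The following is our own illustration, not the system's code, assuming a CoNLL-2000-style chunker that yields (label, start, end) token spans:

```python
# Minimal sketch (an assumption, not the paper's implementation) of the
# chunk-based features above.  Tokens are indexed 0..n-1, chunks are
# (label, start, end) spans with exclusive ends, candidates are (start, end).
from typing import List, Tuple

Chunk = Tuple[str, int, int]

def chunk_feature(cand: Tuple[int, int], chunks: List[Chunk]) -> str:
    """Is the candidate a chunk, does it embed one, or is it embedded in one?"""
    cs, ce = cand
    feats = []
    for label, s, e in chunks:
        if (s, e) == (cs, ce):
            feats.append("is-" + label)
        elif cs <= s and e <= ce:
            feats.append("embed-" + label)
        elif s <= cs and ce <= e:
            feats.append("embedded-in-" + label)
    return " & ".join(feats) if feats else "out-of-chunk"

def chunk_pattern(cand: Tuple[int, int], pred: int, chunks: List[Chunk]) -> str:
    """Sequence of chunk labels spanning the candidate through the predicate."""
    lo = min(cand[0], pred)
    hi = max(cand[1], pred + 1)
    return "-".join(label for label, s, e in chunks if lo <= s and e <= hi)

# "His duties will be assumed by John Smith": NP VP PP NP, predicate at index 4
chunks = [("NP", 0, 2), ("VP", 2, 5), ("PP", 5, 6), ("NP", 6, 8)]
print(chunk_feature((0, 2), chunks))                     # is-NP
print(chunk_feature((5, 8), chunks))                     # embed-PP & embed-NP
print(chunk_pattern((0, 2), 4, chunks))                  # NP-VP
print(len(chunk_pattern((0, 2), 4, chunks).split("-")))  # chunk pattern length: 2
```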
}, "TABREF0": { "text": "if the predicate (target verb) is assume, the pruning heuristic will output: [ PP by John Smith who has been elected deputy chairman], [ NP John Smith who has been elected deputy chairman], [ VB be], [ MD will], and [ NP His duties].", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF1": { "text": "left ] [ A1 my pearls] [ A2 to my daughter] and [ A1 my gold] [ A2 to my son].", "type_str": "table", "html": null, "num": null, "content": "
" }, "TABREF2": { "text": "The accuracy of argument classification when argument boundaries are known.", "type_str": "table", "html": null, "num": null, "content": "
Full Parsing Shallow Parsing
Gold 91.50 \u00b1 0.4890.75 \u00b1 0.45
Auto 90.32 \u00b1 0.4889.71 \u00b1 0.50
" }, "TABREF3": { "text": "The overall system performance when argument boundaries are known. Gold 91.58 91.90 91.74 \u00b1 0.51 91.14 91.48 91.31 \u00b1 0.51 Auto 90.71 91.14 90.93 \u00b1 0.53 90.50 90.88 90.69 \u00b1 0.53", "type_str": "table", "html": null, "num": null, "content": "
Full ParsingShallow Parsing
PrecRecF 1PrecRecF 1
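In all of these tables, F1 is the harmonic mean of precision and recall; the gold, full-parsing row above, for instance, checks out as:

```latex
F_1 = \frac{2 \cdot \mathrm{Prec} \cdot \mathrm{Rec}}{\mathrm{Prec} + \mathrm{Rec}}
    = \frac{2 \times 91.58 \times 91.90}{91.58 + 91.90} \approx 91.74
```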
" }, "TABREF4": { "text": "The performance of argument identification after pruning (based on the gold standard full parse trees). Gold 96.53 93.57 95.03 \u00b1 0.32 93.66 91.72 92.68 \u00b1 0.38 Auto 94.68 90.60 92.59 \u00b1 0.39 92.31 88.36 90.29 \u00b1 0.43", "type_str": "table", "html": null, "num": null, "content": "
Full ParsingShallow Parsing
PrecRecF 1PrecRecF 1
" }, "TABREF5": { "text": "Gold 92.13 95.62 93.84 \u00b1 0.37 88.54 94.81 91.57 \u00b1 0.42 Auto 89.48 94.14 91.75 \u00b1 0.41 86.14 93.21 89.54 \u00b1 0.47", "type_str": "table", "html": null, "num": null, "content": "
ParsingShallow Parsing
PrecRecF 1PrecRecF 1
" }, "TABREF7": { "text": "Gold 25.94 97.27 40.96 \u00b1 0.51 29.58 97.18 45.35 \u00b1 0.83 Auto 22.79 86.08 36.04 \u00b1 0.52 24.68 94.80 39.17 \u00b1 0.79", "type_str": "table", "html": null, "num": null, "content": "
PrecRecF 1PrecRecF 1
" }, "TABREF8": { "text": "Statistics of the training and test examples for the pruning stage.", "type_str": "table", "html": null, "num": null, "content": "
WordsArgumentsConstituentsAgreements
GoldAutoGoldAuto
Train2,575,665332,3814,664,9544,263,831327,603319,768
Test147,98119,511268,678268,48219,26618,301
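To make the Constituents and Agreements columns concrete: per sentence, one can count the pruning candidates proposed from both the gold and the automatic parse. The sketch below uses our own (start, end) span representation and helper name, not the paper's bookkeeping:

```python
# Toy sketch (our assumption of the bookkeeping): overlap statistics between
# gold-tree and auto-tree pruning output; TABREF8 reports such corpus totals.
def span_stats(gold_cands, auto_cands, gold_args):
    gold_cands, auto_cands = set(gold_cands), set(auto_cands)
    return {
        "constituents_gold": len(gold_cands),
        "constituents_auto": len(auto_cands),
        "agreements": len(gold_cands & auto_cands),
        "argument_coverage": len(auto_cands & set(gold_args)) / len(gold_args),
    }

# spans are (start, end) token offsets within one sentence
print(span_stats(gold_cands={(0, 2), (3, 8), (5, 8)},
                 auto_cands={(0, 2), (3, 8), (4, 8)},
                 gold_args=[(0, 2), (5, 8)]))
# {'constituents_gold': 3, 'constituents_auto': 3,
#  'agreements': 2, 'argument_coverage': 0.5}
```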
" }, "TABREF9": { "text": "The overall system performance. Gold 86.22 87.40 86.81 \u00b1 0.59 75.34 75.28 75.31 \u00b1 0.76 Auto 77.09 75.51 76.29 \u00b1 0.76 75.48 67.13 71.06 \u00b1 0.80", "type_str": "table", "html": null, "num": null, "content": "
Full ParsingShallow Parsing
PrecRecF 1PrecRecF 1
" }, "TABREF10": { "text": "The impact of removing most constraints in overall system performance. Gold 85.07 87.50 86.27 \u00b1 0.58 73.19 75.63 74.39 \u00b1 0.75 Auto 75.88 75.81 75.84 \u00b1 0.75 73.56 67.45 70.37 \u00b1 0.80", "type_str": "table", "html": null, "num": null, "content": "
Full ParsingShallow Parsing
PrecRecF 1PrecRecF 1
" }, "TABREF11": { "text": "The performance of individual and combined SRL systems.", "type_str": "table", "html": null, "num": null, "content": "
PrecRecF 1
Collins' parser75.92 71.45 73.62 \u00b1 0.79
Charniak's parser 77.09 75.51 76.29 \u00b1 0.76
Combined result80.53 76.94 78.69 \u00b1 0.71
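The combined row reflects joint inference over the union of both systems' scored candidates. As a stand-in for the integer-linear-programming formulation, here is a brute-force toy with two of the structural constraints (no overlapping arguments, no duplicate core argument); the spans, labels, and scores are invented for illustration:

```python
from itertools import combinations

# Toy joint inference: choose the highest-scoring consistent subset of
# candidates pooled from two systems.  A brute-force stand-in for an ILP.
CORE = {"A0", "A1", "A2", "A3", "A4", "A5"}

def consistent(sel):
    spans = [(s, e) for s, e, _, _ in sel]
    for (s1, e1), (s2, e2) in combinations(spans, 2):
        if s1 < e2 and s2 < e1:                 # overlapping spans
            return False
    core = [lab for _, _, lab, _ in sel if lab in CORE]
    return len(core) == len(set(core))          # each core label at most once

def best_assignment(cands):
    best, best_score = (), 0.0
    for r in range(1, len(cands) + 1):
        for sel in combinations(cands, r):
            score = sum(c[3] for c in sel)
            if score > best_score and consistent(sel):
                best, best_score = sel, score
    return best

pool = [(0, 2, "A0", 0.90), (3, 8, "A1", 0.70),   # from system 1
        (0, 2, "A0", 0.80), (4, 8, "A1", 0.75)]   # from system 2
print(best_assignment(pool))  # ((0, 2, 'A0', 0.9), (4, 8, 'A1', 0.75))
```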
" }, "TABREF12": { "text": "Overall CoNLL-2005 shared task results. Detailed CoNLL-2005 shared task results on the WSJ test set.", "type_str": "table", "html": null, "num": null, "content": "
Prec.Rec.F 1
Development80.05 74.83 77.35
Test WSJ82.28 76.78 79.44
Test Brown73.38 62.93 67.75
Test WSJ+Brown 81.18 74.92 77.92
" } } } }