{ "paper_id": "P09-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:54:01.285227Z" }, "title": "Brutus: A Semantic Role Labeling System Incorporating CCG, CFG, and Dependency Features", "authors": [ { "first": "Stephen", "middle": [ "A" ], "last": "Boxwell", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Ohio State University", "location": {} }, "email": "boxwe11@1ing.ohio-state.edu" }, { "first": "Dennis", "middle": [], "last": "Mehay", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Ohio State University", "location": {} }, "email": "mehay@1ing.ohio-state.edu" }, { "first": "Chris", "middle": [], "last": "Brew", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Ohio State University", "location": {} }, "email": "cbrew@1ing.ohio-state.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe a semantic role labeling system that makes primary use of CCG-based features. Most previously developed systems are CFG-based and make extensive use of a treepath feature, which suffers from data sparsity due to its use of explicit tree configurations. CCG affords ways to augment treepathbased features to overcome these data sparsity issues. By adding features over CCG wordword dependencies and lexicalized verbal subcategorization frames (\"supertags\"), we can obtain an F-score that is substantially better than a previous CCG-based SRL system and competitive with the current state of the art. A manual error analysis reveals that parser errors account for many of the errors of our system. This analysis also suggests that simultaneous incremental parsing and semantic role labeling may lead to performance gains in both tasks.", "pdf_parse": { "paper_id": "P09-1005", "_pdf_hash": "", "abstract": [ { "text": "We describe a semantic role labeling system that makes primary use of CCG-based features. Most previously developed systems are CFG-based and make extensive use of a treepath feature, which suffers from data sparsity due to its use of explicit tree configurations. CCG affords ways to augment treepathbased features to overcome these data sparsity issues. By adding features over CCG wordword dependencies and lexicalized verbal subcategorization frames (\"supertags\"), we can obtain an F-score that is substantially better than a previous CCG-based SRL system and competitive with the current state of the art. A manual error analysis reveals that parser errors account for many of the errors of our system. This analysis also suggests that simultaneous incremental parsing and semantic role labeling may lead to performance gains in both tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic Role Labeling (SRL) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence. The task is difficult because the relationship between syntactic relations like \"subject\" and \"object\" do not always correspond to semantic relations like \"agent\" and \"patient\". An effective semantic role labeling system must recognize the differences between different configurations: We use Propbank (Palmer et al., 2005) , a corpus of newswire text annotated with verb predicate semantic role information that is widely used in the SRL literature (M\u00e0rquez et al., 2008) . 
Rather than describe semantic roles in terms of \"agent\" or \"patient\", Propbank defines semantic roles on a verb-by-verb basis. For example, the verb open encodes the OPENER as Arg0, the OPENEE as Arg1, and the beneficiary of the OPENING action as Arg3. Propbank also defines a set of adjunct roles, denoted by the letter M instead of a number. For example, ArgM-TMP denotes a temporal role, like \"today\". By using verb-specific roles, Propbank avoids specific claims about parallels between the roles of different verbs.", "cite_spans": [ { "start": 494, "end": 515, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF17" }, { "start": 642, "end": 664, "text": "(M\u00e0rquez et al., 2008)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We follow the approach in (Punyakanok et al., 2008) in framing the SRL problem as a two-stage pipeline: identification followed by labeling. During identification, every word in the sentence is labeled either as bearing some (as yet undetermined) semantic role or not. This is done for each verb. Next, during labeling, the precise verb-specific roles for each word are determined. In contrast to the approach in (Punyakanok et al., 2008), which tags constituents directly, we tag headwords and then associate them with a constituent, as in a previous CCG-based approach (Gildea and Hockenmaier, 2003). Another difference is our choice of parsers. Brutus uses the CCG parser of (Clark and Curran, 2007, henceforth the C&C parser), Charniak's parser (Charniak, 2001) for additional CFG-based features, and the MALT parser (Nivre et al., 2007) for dependency features, while (Punyakanok et al., 2008) use results from an ensemble of parses from Charniak's parser and a Collins parser (Collins, 2003; Bikel, 2004). Finally, the system described in (Punyakanok et al., 2008) uses a joint inference model to resolve discrepancies between multiple automatic parses. We do not employ a similar strategy due to the differing notions of constituency represented in our parsers (CCG having a much more fluid notion of constituency and the MALT parser using a different approach entirely).", "cite_spans": [ { "start": 26, "end": 51, "text": "(Punyakanok et al., 2008)", "ref_id": "BIBREF18" }, { "start": 414, "end": 439, "text": "(Punyakanok et al., 2008)", "ref_id": "BIBREF18" }, { "start": 573, "end": 603, "text": "(Gildea and Hockenmaier, 2003)", "ref_id": "BIBREF9" }, { "start": 681, "end": 704, "text": "(Clark and Curran, 2007", "ref_id": "BIBREF6" }, { "start": 753, "end": 769, "text": "(Charniak, 2001)", "ref_id": "BIBREF5" }, { "start": 821, "end": 841, "text": "(Nivre et al., 2007)", "ref_id": "BIBREF16" }, { "start": 873, "end": 898, "text": "(Punyakanok et al., 2008)", "ref_id": "BIBREF18" }, { "start": 982, "end": 997, "text": "(Collins, 2003;", "ref_id": "BIBREF8" }, { "start": 998, "end": 1010, "text": "Bikel, 2004)", "ref_id": "BIBREF2" }, { "start": 1046, "end": 1071, "text": "(Punyakanok et al., 2008)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the identification and labeling steps, we train a maximum entropy classifier (Berger et al., 1996) over sections 02-21 of a version of the CCGbank corpus (Hockenmaier and Steedman, 2007) that has been augmented by projecting the Propbank semantic annotations (Boxwell and White, 2008). 
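To make the two-stage setup concrete, the following is a minimal sketch, not the authors' implementation: scikit-learn's LogisticRegression (a maximum entropy model; its lbfgs solver with an L2 penalty mirrors L-BFGS training under a Gaussian prior) stands in for the maxent toolkit actually used, and the word/POS window features, toy data, and helper names (window_features, predict_role) are illustrative assumptions only.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def window_features(words, pos, i):
    """Binary indicator features from a 3-word window around position i."""
    feats = {}
    for off in (-1, 0, 1):
        j = i + off
        if 0 <= j < len(words):
            feats["w[%+d]=%s" % (off, words[j])] = 1
            feats["p[%+d]=%s" % (off, pos[j])] = 1
    return feats

# Stage 1 (identification): does this word bear some role for the verb?
identifier = make_pipeline(DictVectorizer(),
                           LogisticRegression(solver="lbfgs", max_iter=500))
# Stage 2 (labeling): which verb-specific role does it bear?
labeler = make_pipeline(DictVectorizer(),
                        LogisticRegression(solver="lbfgs", max_iter=500))

# Toy training data: (words, POS tags, target index, Propbank role or None).
train = [(["John", "opened", "the", "door"], ["NNP", "VBD", "DT", "NN"], 0, "Arg0"),
         (["John", "opened", "the", "door"], ["NNP", "VBD", "DT", "NN"], 3, "Arg1"),
         (["John", "opened", "the", "door"], ["NNP", "VBD", "DT", "NN"], 2, None)]
X = [window_features(w, p, i) for w, p, i, _ in train]
identifier.fit(X, [r is not None for _, _, _, r in train])
# The labeler is trained only on the role-bearing words.
roled = [(x, r) for x, (_, _, _, r) in zip(X, train) if r is not None]
labeler.fit([x for x, _ in roled], [r for _, r in roled])

def predict_role(words, pos, i):
    """Identify first; label only words identified as role-bearing."""
    x = window_features(words, pos, i)
    return labeler.predict([x])[0] if identifier.predict([x])[0] else None
```

In this sketch, the Gaussian prior variances the paper reports (1 for the identifier, 5 for the labeler) would correspond to per-stage settings of the inverse regularization strength C, with a larger variance meaning weaker regularization.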
We evaluate our SRL system's argument predictions at the word string level, making our results directly comparable for each argument labeling. 1 In the following, we briefly introduce the CCG grammatical formalism and motivate its use in SRL (Sections 2-3). Our main contribution is to demonstrate that CCG - arguably a more expressive and linguistically appealing syntactic framework than vanilla CFGs - is a viable basis for the SRL task. This is supported by our experimental results, the setup and details of which we give in Sections 4-10. In particular, using CCG enables us to map semantic roles directly onto verbal categories, an innovation of our approach that leads to performance gains (Section 7). We conclude with an error analysis (Section 11), which motivates our discussion of future research for computational semantics with CCG (Section 12).", "cite_spans": [ { "start": 81, "end": 102, "text": "(Berger et al., 1996)", "ref_id": "BIBREF1" }, { "start": 158, "end": 190, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": "BIBREF10" }, { "start": 263, "end": 288, "text": "(Boxwell and White, 2008)", "ref_id": "BIBREF3" }, { "start": 434, "end": 435, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Combinatory Categorial Grammar (Steedman, 2000) is a grammatical framework that describes syntactic structure in terms of the combinatory potential of the lexical (word-level) items. Rather than using standard part-of-speech tags and grammatical rules, CCG encodes much of the combinatory potential of each word by assigning it a syntactically informative category. For example, the verb loves has the category (s\\np)/np, which could be read as \"the kind of word that would be a sentence if it could combine with a noun phrase on the right and a noun phrase on the left\". Further, CCG has the advantage of a transparent interface between the way the words combine and their dependencies with other words. Word-word dependencies in the CCGbank are encoded using predicate-argument (PARG) relations. PARG relations are defined by the functor word, the argument word, the category of the functor word and which argument slot of the functor category is being filled. For example, in the sentence John loves Mary (figure 1), there are two slots on the verbal category to be filled by NP arguments. The first argument (the subject) fills slot 1. This can be encoded as \u27e8loves, John, (s[dcl]\\np)/np, 1\u27e9, indicating the head of the functor, the head of the argument, the functor category and the argument slot. The second argument (the direct object) fills slot 2. This can be encoded as \u27e8loves, Mary, (s[dcl]\\np)/np, 2\u27e9. One of the potential advantages to using CCGbank-style PARG relations is that they uniformly encode both local and long-range dependencies - e.g., the noun phrase the Mary that John loves expresses the same set of two dependencies. We will show this to be a valuable tool for semantic role prediction.", "cite_spans": [ { "start": 31, "end": 47, "text": "(Steedman, 2000)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "There are many potential advantages to using the CCG formalism in SRL. One is the uniformity with which CCG can express equivalence classes of local and long-range (including unbounded) dependencies. CFG-based approaches often rely on examining potentially long sequences of categories (or treepaths) between the verb and the target word. 
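The contrast can be sketched as follows (a schematic Python illustration, not CCGbank tooling: the treepath strings are invented stand-ins for the paths traced in figure 2, while the PARG tuple follows the encoding given in Section 2).

```python
# Schematic contrast between treepath features and CCG PARG relations
# for the Arg1 relation between 'fixed' and 'car' (cf. figures 2-3).
from collections import namedtuple

PARG = namedtuple("PARG", ["functor", "argument", "category", "slot"])

# Treepaths vary with the surface configuration ('>' = up, '<' = down;
# the exact paths here are illustrative), so each configuration yields
# a distinct, and therefore sparse, feature value.
treepaths = {
    "Robin fixed the car": "V>VP<NP",
    "the car that Robin fixed": "V>VP>S>RC>N<N",
}
assert len(set(treepaths.values())) == 2  # two values for one relation

# The PARG relation is the same in both: 'car' fills slot 2 (the direct
# object) of the category of 'fixed' -- a single feature value.
dep = PARG("fixed", "car", r"(s[dcl]\np)/np", 2)
print(dep)
```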
Because there are a number of different treepaths that correspond to a single relation (figure 2), this approach can suffer from data sparsity. CCG, however, can encode all treepath-distinct expressions of a single grammatical relation into a single predicate-argument relationship (figure 3). This feature has been shown (Gildea and Hockenmaier, 2003) to be an effective substitute for treepath-based features. But while predicate-argument-based features are very effective, they are still vulnerable both to parser errors and to cases where the semantics of a sentence do not correspond directly to syntactic dependencies. To counteract this, we use both kinds of features, with the expectation that the treepath feature will provide low-level detail to compensate for missed, incorrect or syntactically impossible dependencies.", "cite_spans": [ { "start": 659, "end": 689, "text": "(Gildea and Hockenmaier, 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Potential Advantages to using CCG", "sec_num": "3" }, { "text": "Another advantage of a CCG-based approach (and lexicalist approaches in general) is the ability to encode verb-specific argument mappings. An argument mapping is a link between the CCG category and the semantic roles that are likely to go with each of its arguments. The projection of argument mappings onto CCG verbal categories is explored in (Boxwell and White, 2008). We describe this feature in more detail in Section 7.", "cite_spans": [ { "start": 345, "end": 370, "text": "(Boxwell and White, 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Potential Advantages to using CCG", "sec_num": "3" }, { "text": "As in previous approaches to SRL, Brutus uses a two-stage pipeline of maximum entropy classifiers. In addition, we train an argument mapping classifier (described in more detail below) whose predictions are used as features for the labeling model. The same features are extracted for both treebank and automatic parses. Automatic parses were generated using the C&C CCG parser (Clark and Curran, 2007) with its derivation output format converted to resemble that of the CCGbank. This involved following the derivational bracketings of the C&C parser's output and reconstructing the backpointers to the lexical heads using an in-house implementation of the basic CCG combinatory operations. All classifiers were trained to 500 iterations of L-BFGS training - a quasi-Newton method from the numerical optimization literature (Liu and Nocedal, 1989) - using Zhang Le's maxent toolkit. 2 To prevent overfitting we used Gaussian priors with global variances of 1 and 5 for the identifier and labeler, respectively. 3 The Gaussian priors were determined empirically by testing on the development set.", "cite_spans": [ { "start": 376, "end": 400, "text": "(Clark and Curran, 2007)", "ref_id": "BIBREF6" }, { "start": 821, "end": 843, "text": "(Liu and Nocedal, 1989", "ref_id": "BIBREF12" }, { "start": 1008, "end": 1009, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Identification and Labeling Models", "sec_num": "4" }, { "text": "Both the identifier and the labeler use the following features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification and Labeling Models", "sec_num": "4" }, { "text": "(1) Words. 
Words drawn from a 3 word window around the target word, 4 with each word associated with a binary indicator feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification and Labeling Models", "sec_num": "4" }, { "text": "(2) Part of Speech. Part of Speech tags drawn from a 3 word window around the target word, with each tag associated with a binary indicator feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification and Labeling Models", "sec_num": "4" }, { "text": "Figure 1 (derivation): John := np; loves := (s[dcl]\\np)/np; Mary := np; loves Mary combines by forward application (>) to s[dcl]\\np; John loves Mary combines by backward application (<) to s[dcl].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification and Labeling Models", "sec_num": "4" }, { "text": "Figure 1: This sentence has two dependencies: \u27e8loves, John, (s[dcl]\\np)/np, 1\u27e9 and \u27e8loves, Mary, (s[dcl]\\np)/np, 2\u27e9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification and Labeling Models", "sec_num": "4" }, { "text": "Figure 2: The semantic relation (Arg1) between 'car' and 'fixed' in both phrases is the same, but the treepaths - traced with arrows above - are different.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification and Labeling Models", "sec_num": "4" }, { "text": "(3) Treepath. The path through the tree between the verb and the target word, with > and < indicating movement up and down the tree, respectively. (4) Short Treepath. Similar to the above treepath feature, except the path stops at the highest node under the least common subsumer that is headed by the target word (this is the constituent that the role would be marked on if we identified this terminal as a role-bearing word). Again, for the relationship between fixed and car in the first sentence of figure 3, the short treepath is (s[dcl]\\np)/np>s[dcl]\\np. (5) Verb Form. The form of the verb, which can be read off the verb category: declarative for eats: (s[dcl]\\np)/np or progressive for running: s[ng]\\np. (6) Before/After. A binary indicator variable indicating whether the target word is before or after the verb. (7) \u2026 construction is s/(s\\np),(np\\np)/(s[dcl]/np),np." }, "TABREF4": { "type_str": "table", "text": "Table 2: Accuracy of semantic role prediction using CCG, CFG, and MALT based features.", "html": null, "num": null, "content": "
                      P        R        F
P. et al (treebank)   86.22%   87.40%   86.81%
Brutus (treebank)     88.29%   86.39%   87.33%
P. et al (automatic)  77.09%   75.51%   76.29%
Brutus (automatic)    76.73%   70.45%   73.45%

                      P        R        F
Headword (treebank)   88.94%   86.98%   87.95%
Boundary (treebank)   88.29%   86.39%   87.33%
Headword (automatic)  82.36%   75.97%   79.04%
Boundary (automatic)  76.33%   70.59%   73.35%
" }, "TABREF7": { "type_str": "table", "text": "An example of how incorrect PP attachment can cause an incorrect labeling. Stop.Arg1 should cover using asbestos rather than using asbestos in 1956. This sentence is based on wsj 0003.3, with the structure simplified for clarity.", "html": null, "num": null, "content": "
the company   stopped                  using          asbestos   in 1956
np            (s[dcl]\\np)/(s[ng]\\np)  (s[ng]\\np)/np  np         (s\\np)\\(s\\np)
using asbestos => s[ng]\\np (>)
using asbestos in 1956 => s[ng]\\np \u2212 stop.Arg1 (<)
stopped using asbestos in 1956 => s[dcl]\\np (>)
the company stopped using asbestos in 1956 => s[dcl] (<)
(Figure 7)

a group   of           workers              exposed to asbestos
np        (np\\np)/np   np \u2212 exposed.Arg1   np\\np
" } } } }