{ "paper_id": "Q15-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:07:15.527528Z" }, "title": "Context-aware Frame-Semantic Role Labeling", "authors": [ { "first": "Michael", "middle": [], "last": "Roth", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "addrLine": "10 Crichton Street", "postCode": "EH8 9AB", "settlement": "Edinburgh" } }, "email": "mroth@inf.ed.ac.uk" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "addrLine": "10 Crichton Street", "postCode": "EH8 9AB", "settlement": "Edinburgh" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Frame semantic representations have been useful in several applications ranging from text-to-scene generation, to question answering and social network analysis. Predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. In this paper, we present a semantic role labeling system that takes into account sentence and discourse context. We introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in FrameNet-based semantic role labeling.", "pdf_parse": { "paper_id": "Q15-1032", "_pdf_hash": "", "abstract": [ { "text": "Frame semantic representations have been useful in several applications ranging from text-to-scene generation, to question answering and social network analysis. Predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. In this paper, we present a semantic role labeling system that takes into account sentence and discourse context. We introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in FrameNet-based semantic role labeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The goal of semantic role labeling (SRL) is to identify and label the arguments of semantic predicates in a sentence according to a set of predefined relations (e.g., \"who\" did \"what\" to \"whom\"). In addition to providing definitions and examples of role labeled text, resources like FrameNet (Ruppenhofer et al., 2010) group semantic predicates into socalled frames, i.e., conceptual structures describing the background knowledge necessary to understand a situation, event or entity as a whole as well as the roles participating in it. Accordingly, semantic roles are defined on a per-frame basis and are shared among predicates.", "cite_spans": [ { "start": 292, "end": 318, "text": "(Ruppenhofer et al., 2010)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, frame representations have been successfully applied in a range of downstream tasks, including question answering (Shen and Lapata, 2007) , text-to-scene generation (Coyne et al., 2012) , stock price prediction (Xie et al., 2013) , and social network extraction (Agarwal et al., 2014) . 
Whereas some tasks directly utilize information encoded in the FrameNet resource, others make use of FrameNet indirectly through the output of SRL systems that are trained on data annotated with frame-semantic representations. While advances in machine learning have recently given rise to increasingly powerful SRL systems following the FrameNet paradigm (Hermann et al., 2014; T\u00e4ckstr\u00f6m et al., 2015) , little effort has been devoted to improve such models from a linguistic perspective.", "cite_spans": [ { "start": 131, "end": 154, "text": "(Shen and Lapata, 2007)", "ref_id": "BIBREF31" }, { "start": 182, "end": 202, "text": "(Coyne et al., 2012)", "ref_id": "BIBREF2" }, { "start": 228, "end": 246, "text": "(Xie et al., 2013)", "ref_id": "BIBREF36" }, { "start": 279, "end": 301, "text": "(Agarwal et al., 2014)", "ref_id": "BIBREF0" }, { "start": 660, "end": 682, "text": "(Hermann et al., 2014;", "ref_id": "BIBREF15" }, { "start": 683, "end": 706, "text": "T\u00e4ckstr\u00f6m et al., 2015)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we explore insights from the linguistic literature suggesting a connection between discourse and role labeling decisions and show how to incorporate these in an SRL system. Although early theoretical work (Fillmore, 1976) has recognized the importance of discourse context for the assignment of semantic roles, most computational approaches have shied away from such considerations. To see how context can be useful, consider as an example the DELIVERY frame, which states that a THEME can be handed off to either a RECIPIENT or \"more indirectly\" to a GOAL. While the distinction between the latter two roles might be clear for some fillers (e.g., people vs. locations), there are others where both roles are equally plausible and additional information is required to resolve the ambiguity (e.g., countries). If we hear about a letter being delivered to Greece, for instance, reliable cues might be whether the sender is a person or a country and whether Greece refers to the geographic region or to the Greek government.", "cite_spans": [ { "start": 220, "end": 236, "text": "(Fillmore, 1976)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The example shows that context can generally influence the choice of correct role label. Accordingly, we assume that modeling contextual information, such as the meaning of a word in a given situation, can improve semantic role labeling performance. To validate this assumption, we explore different ways of incorporating contextual cues in a SRL model and provide experimental support that demonstrates the usefulness of such additional information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is structured as follows. In Section 2, we present related work on semantic role labeling and the various features applied in traditional SRL systems. In Section 3, we provide additional background on the FrameNet resource. Sections 4 and 5 describe our baseline system and contextual extensions, respectively, and Section 6 presents our experimental results. 
We conclude the paper by discussing in more detail the output of our system and highlighting avenues for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Early work in SRL dates back to Gildea and Jurafsky (2002) , who were the first to model role assignment to verb arguments based on FrameNet. Their model makes use of lexical and syntactic features, including binary indicators for the words involved, syntactic categories, dependency paths as well as position and voice in a given sentence. Most subsequent work in SRL builds on Gildea and Jurafsky's feature set, often with the addition of features that describe relevant syntactic structures in more detail, e.g., the argument's leftmost/rightmost dependent (Johansson and Nugues, 2008) .", "cite_spans": [ { "start": 32, "end": 58, "text": "Gildea and Jurafsky (2002)", "ref_id": "BIBREF13" }, { "start": 560, "end": 588, "text": "(Johansson and Nugues, 2008)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "More sophisticated features include the use of convolution kernels (Moschitti, 2004; Croce et al., 2011) in order to represent predicate-argument structures and their lexical similarities more accurately. Beyond lexical and syntactic information, a few approaches employ additional semantic features based on annotated word senses (Che et al., 2010) and selectional preferences (Zapirain et al., 2013) . Deschacht and Moens (2009) and Huang and Yates (2010) use sentence-internal sequence information, in the form of latent states in a hidden markov model. More recently, a few approaches (Roth and Woodsend, 2014; Lei et al., 2015; Foland and Martin, 2015) explore ways of using low-rank vector and tensor approximations to represent lexical and syntactic features as well as combinations thereof.", "cite_spans": [ { "start": 67, "end": 84, "text": "(Moschitti, 2004;", "ref_id": "BIBREF22" }, { "start": 85, "end": 104, "text": "Croce et al., 2011)", "ref_id": "BIBREF4" }, { "start": 331, "end": 349, "text": "(Che et al., 2010)", "ref_id": null }, { "start": 378, "end": 401, "text": "(Zapirain et al., 2013)", "ref_id": "BIBREF40" }, { "start": 404, "end": 430, "text": "Deschacht and Moens (2009)", "ref_id": "BIBREF8" }, { "start": 435, "end": 457, "text": "Huang and Yates (2010)", "ref_id": "BIBREF16" }, { "start": 589, "end": 614, "text": "(Roth and Woodsend, 2014;", "ref_id": "BIBREF27" }, { "start": 615, "end": 632, "text": "Lei et al., 2015;", "ref_id": "BIBREF20" }, { "start": 633, "end": 657, "text": "Foland and Martin, 2015)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To the best of our knowledge, there exists no prior work where features based on discourse context are used to assign roles on the sentence level. Discourse-like features have been previously applied in models that deal with so-called implicit arguments, i.e., roles which are not locally realized but resolvable within the greater discourse context (Ruppenhofer et al., 2010; Gerber and Chai, 2012) . Successful features for resolving implicit arguments include the distance between mentions and any discourse relations occurring between them (Gerber and Chai, 2012) , roles assigned to mentions in the previous context, the discourse prominence of the denoted entity (Silberer and Frank, 2012) , and its centering status (Laparra and Rigau, 2013) . 
None of these features have been used in a standard SRL system to date (and trivially, not all of them will be helpful as, for example, the number of sentences between a predicate and an argument is always zero within a sentence). In this paper, we extend the contextual features used for resolving implicit arguments to the SRL task and show how a set of discourse-level enhancements can be added to a traditional sentence-level SRL model.", "cite_spans": [ { "start": 350, "end": 376, "text": "(Ruppenhofer et al., 2010;", "ref_id": "BIBREF28" }, { "start": 377, "end": 399, "text": "Gerber and Chai, 2012)", "ref_id": "BIBREF12" }, { "start": 544, "end": 567, "text": "(Gerber and Chai, 2012)", "ref_id": "BIBREF12" }, { "start": 669, "end": 695, "text": "(Silberer and Frank, 2012)", "ref_id": "BIBREF32" }, { "start": 723, "end": 748, "text": "(Laparra and Rigau, 2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The Berkeley FrameNet project (Ruppenhofer et al., 2010 ) develops a semantic lexicon and an annotated example corpus based on Fillmore's (1976) theory of frame semantics. Annotations consist of frameevoking elements (i.e., words in a sentence that are associated with a conceptual frame) and frame elements (i.e., instantiations of semantic roles, which are defined per frame and filled by words or word sequences in a given sentence). For example, the DELIVERY frame describes a scene or situation in which a DELIVERER hands off a THEME to a RE-CIPIENT or a GOAL. 1 In total, there are 1,019 frames and 8,886 frame elements defined in the lat-est publicly available version of FrameNet. 2 An average number of 11.6 different frame-evoking elements are provided for each frame (11,829 in total). Following previous work on FrameNet-based SRL, we use the full text annotation data set, which contains 23,087 frame instances.", "cite_spans": [ { "start": 30, "end": 55, "text": "(Ruppenhofer et al., 2010", "ref_id": "BIBREF28" }, { "start": 127, "end": 144, "text": "Fillmore's (1976)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "FrameNet", "sec_num": "3" }, { "text": "Semantic annotations for frame instances and fillers of frame elements are generally provided at the level of word sequences, which can be single words, complete or incomplete phrases, and entire clauses (Ruppenhofer et al., 2010, Chapter 4 ). An instance of the DELIVERY frame, with annotations of the frame-evoking element (underlined) and instantiated frame elements (in brackets), is given in the example below:", "cite_spans": [ { "start": 204, "end": 240, "text": "(Ruppenhofer et al., 2010, Chapter 4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "FrameNet", "sec_num": "3" }, { "text": "(1) The Soviet Union agreed to speed up [oil] THEME deliveries DELIVERY [to Yugoslavia] RECIPIENT .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FrameNet", "sec_num": "3" }, { "text": "Note that the oil deliveries here concern Yugoslavia as a geopolitical entity and hence the RECIPIENT role is assigned. If Yugoslavia was referred to as the location of a delivery, the GOAL role would be assigned instead. In general, roles can be restricted by so-called semantic types (e.g., every filler of the THEME element in the DELIVERY frame needs to be a physical object). 
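For illustration, the annotated instance in (1) and the semantic-type restriction just mentioned can be written out schematically as follows; this is a deliberately simplified, hypothetical Python structure used only for exposition, not the XML format in which FrameNet actually distributes its annotations.

# Simplified, hypothetical rendering of example (1); FrameNet itself stores
# such annotations as XML with character offsets into the sentence.
delivery_instance = {
    "sentence": "The Soviet Union agreed to speed up oil deliveries to Yugoslavia.",
    "frame": "Delivery",
    "target": "deliveries",  # frame-evoking element
    "frame_elements": [
        {"role": "Theme", "span": "oil", "semantic_type": "Physical_object"},
        {"role": "Recipient", "span": "to Yugoslavia", "semantic_type": None},  # no type restriction assumed here
    ],
}

# Role labels are defined per frame, so they are only interpretable together
# with the frame of the instance they belong to.
for fe in delivery_instance["frame_elements"]:
    print(delivery_instance["frame"], fe["role"], "<-", fe["span"])
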
However, not all roles are typed and whether a specific phrase is a suitable filler largely depends on context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FrameNet", "sec_num": "3" }, { "text": "As a baseline for implementing contextual enhancements to an SRL model, we use the semantic role labeling components provided by the mate-tools (Bj\u00f6rkelund et al., 2010) . Given a frame-evoking element in a sentence and its associated frame (i.e., a predicate and its sense), the mate-tools form a pipeline of logistic regression classifiers that identify and label frame elements which are instantiated within the same sentence (i.e., a given predicate's arguments). The adopted SRL system has been developed for PropBank/NomBank-style role labeling and we make several changes to adapt it to FrameNet. Specifically, we change the argument labeling procedure from predicate-specific to frame-specific roles and implement I/O methods to read and generate FrameNet XML files. For direct comparison with the previous state-of-the-art for FrameNetbased SRL, we further implement additional features used in the SEMAFOR system and combine the role labeling components of mate-tools with SEMAFOR's preprocessing toolchain. 3 All features used in our system are listed in Table 1 .", "cite_spans": [ { "start": 144, "end": 169, "text": "(Bj\u00f6rkelund et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 1066, "end": 1073, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "The main differences between our adaptation of mate-tools and SEMAFOR are as follows: whereas the latter implements identification and labeling of role fillers in one step, mate-tools follow the insight that these two steps are conceptually different (Xue and Palmer, 2004) and should be modeled separately. Accordingly, mate-tools contain a global reranking component which takes into account identification and labeling decisions while SEMAFOR only uses reranking techniques to filter overlapping argument predictions and other constraints (see Das et al., 2014 for details) . We discuss the advantage of a global reranker for our setting in Section 5.", "cite_spans": [ { "start": 251, "end": 273, "text": "(Xue and Palmer, 2004)", "ref_id": "BIBREF38" }, { "start": 547, "end": 576, "text": "Das et al., 2014 for details)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "4" }, { "text": "Context can be relevant for semantic role labeling in various different ways. In this section, we motivate and describe four extensions over previous approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensions based on Context", "sec_num": "5" }, { "text": "The first extension is a set of features that model document-specific aspects of word meaning using distributional semantics. The motivation for this feature class stems from the insight that the meaning of a word in context can influence correct role assignment. While concepts such as polysemy, homonymy and metonymy are all relevant here, the scarce training data available for FrameNet-based SRL calls for a light-weight model that can be applied without large amounts of labeled data. We therefore employ distributional word representations which we critically adapt based on document content. 
We describe our contribution in Section 5.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensions based on Context", "sec_num": "5" }, { "text": "Entities that fill semantic roles are sometimes mentioned in discourse. Given a specific mention for which a role is to be predicted, we can also directly use previous role assignments as classification cues. We describe our implementation of this feature in Section 5.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensions based on Context", "sec_num": "5" }, { "text": "The filler of a semantic role is often a word or phrase which occurs only once or a few times in a document. If neither syntax nor aspects of lexical meaning provide cues indicating a unique role, useful information can still be derived from the discourse salience of the denoted entity. Our model makes use of a simple salience indicator that can be reliably derived from automatically computed coreference chains. We describe the motivation and actual implementation of this feature in Section 5.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensions based on Context", "sec_num": "5" }, { "text": "The aforementioned features will influence role labeling decisions directly, however, further improvements can be gained by considering interactions between labeling decisions. As discussed in , role annotations in FrameNet are unique with respect to a frame instance in more than 96% of cases. This means that even if a feature is not a positive indicator for a candidate role filler, knowing it would be a better cue for another candidate can also prevent a hypothetical model from assigning a frame element label incorrectly. While this kind of knowledge has been successfully implemented as constraints in recent FrameNet-based SRL models (Hermann et al., 2014; T\u00e4ckstr\u00f6m et al., 2015) , earlier work on PropBank-based role labeling suggests that better performance can be achieved with a re-ranking component which has the potential to learn such constraints and other interactions implicitly (Toutanova et al., 2005; Bj\u00f6rkelund et al., 2010) . In our model, we adopt the latter method and extend it with additional frame-based features. We describe this approach in more detail in Section 5.4.", "cite_spans": [ { "start": 643, "end": 665, "text": "(Hermann et al., 2014;", "ref_id": "BIBREF15" }, { "start": 666, "end": 689, "text": "T\u00e4ckstr\u00f6m et al., 2015)", "ref_id": "BIBREF33" }, { "start": 898, "end": 922, "text": "(Toutanova et al., 2005;", "ref_id": "BIBREF35" }, { "start": 923, "end": 947, "text": "Bj\u00f6rkelund et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Extensions based on Context", "sec_num": "5" }, { "text": "The underlying idea of distributional models of semantics is that meaning can be acquired based on distributional properties (typically represented by co-occurrence counts) of linguistic entities such as words and phrases (Sahlgren, 2008) . Although the absolute meaning of distributional representations remains unclear, they have proven highly successful for modeling relative aspects of meaning, as required for instance in word similarity tasks (Mikolov et al., 2013; Pennington et al., 2014) . 
Given their ability to model lexical similarity, it is not surprising that such representations are also successful at representing similar words in semantic tasks related to role labeling Croce et al., 2010; Zapirain et al., 2013) .", "cite_spans": [ { "start": 222, "end": 238, "text": "(Sahlgren, 2008)", "ref_id": "BIBREF30" }, { "start": 449, "end": 471, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF21" }, { "start": 472, "end": 496, "text": "Pennington et al., 2014)", "ref_id": "BIBREF25" }, { "start": 688, "end": 707, "text": "Croce et al., 2010;", "ref_id": "BIBREF3" }, { "start": 708, "end": 730, "text": "Zapirain et al., 2013)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Meaning in Context", "sec_num": "5.1" }, { "text": "Although distributional representations can be used directly as features for role labeling (Pad\u00f3 et al., 2008; Gorinski et al., 2013; Roth and Woodsend, 2014, inter alia) , further gains should be possible when considering document-specific properties such as genre and context. This is particularly true in the context of FrameNet, where different senses are observed across a diverse range of texts including spoken dialogue and debate transcripts as well as travel guides and newspaper articles. Country names, for example, can be observed as fillers for different roles depending on the text genre and its perspective. Whereas some text may talk about a country as an interesting holiday destination (e.g., \"Berlitz Intro to Jamaica\"), others may discuss what a country is good at or interested in (e.g., \"Iran [Nuclear] Introduction\"). A list of the most frequent roles assigned to different country names are displayed in Table 2 . Previous approaches model word meaning in context (Thater et al., 2010; Dinu and Lapata, 2010 , inter alia) using sentence-level information which is already available in traditional SRL systems in the form of explicit features. Here, we go one step further and define a simple model in which word meaning representations are adapted to each document. As a starting point, we use the GloVe toolkit (Pennington et al., 2014) for learning representations 4 and apply it to the Wikipedia corpus made available by the Westbury Lab. 5 The learned representations can be seen as word vectors whose components encode basic bits of related encyclopaedic knowledge. 
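Before turning to the details, the following minimal numpy sketch illustrates the adaptation procedure described in the remainder of this subsection (Equations (2) and (3)): word vectors are first fit to Wikipedia co-occurrence counts and then updated for further iterations on document-specific counts. The sketch makes several simplifying assumptions that are not part of the actual system, which uses the GloVe toolkit directly: it drops the bias terms of the full GloVe objective, optimizes with plain stochastic gradient descent rather than AdaGrad, and operates on toy co-occurrence matrices.

import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    # GloVe weighting function f(x); only non-zero counts are visited, and
    # frequent pairs are capped at weight 1 so they are not overweighted.
    return (x / x_max) ** alpha if x < x_max else 1.0

def adapt_vectors(C_wiki, C_d, dim=50, t_d=50, n=100, lr=0.05, seed=0):
    # Fit vectors to Wikipedia co-occurrences for the first t_d iterations,
    # then keep optimizing the same vectors on document-specific counts.
    rng = np.random.RandomState(seed)
    vocab = C_wiki.shape[0]
    W = rng.randn(vocab, dim) * 0.01  # target word vectors
    V = rng.randn(vocab, dim) * 0.01  # context word vectors
    for t in range(1, n + 1):
        X = C_wiki if t < t_d else C_d
        for i, j in zip(*np.nonzero(X)):
            err = W[i].dot(V[j]) - np.log(X[i, j])
            g = glove_weight(X[i, j]) * err
            grad_w, grad_v = g * V[j], g * W[i]
            W[i] -= lr * grad_w
            V[j] -= lr * grad_v
    return W  # the components of W[i] serve as features for word i

# Toy usage with a four-word vocabulary; in the actual setting the counts come
# from Wikipedia and from the document at hand.
C_wiki = np.array([[0, 8, 2, 0], [8, 0, 1, 0], [2, 1, 0, 5], [0, 0, 5, 0]], dtype=float)
C_d = np.array([[0, 1, 6, 0], [1, 0, 0, 0], [6, 0, 0, 2], [0, 0, 2, 0]], dtype=float)
vectors = adapt_vectors(C_wiki, C_d, dim=10, t_d=5, n=10)
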
We adapt these general representations to the actual meaning of a word in a particular text by running additional iterations of the GloVe toolkit using document-specific co-occurrences as input and Wikipedia-based representations for initialization.", "cite_spans": [ { "start": 91, "end": 110, "text": "(Pad\u00f3 et al., 2008;", "ref_id": "BIBREF23" }, { "start": 111, "end": 133, "text": "Gorinski et al., 2013;", "ref_id": "BIBREF14" }, { "start": 134, "end": 170, "text": "Roth and Woodsend, 2014, inter alia)", "ref_id": null }, { "start": 988, "end": 1009, "text": "(Thater et al., 2010;", "ref_id": "BIBREF34" }, { "start": 1010, "end": 1031, "text": "Dinu and Lapata, 2010", "ref_id": "BIBREF9" }, { "start": 1466, "end": 1467, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 928, "end": 935, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Modeling Word Meaning in Context", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f (X ij )( w T i w j \u2212 logX ij ) 2 ,", "eq_num": "(2)" } ], "section": "Modeling Word Meaning in Context", "sec_num": "5.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Meaning in Context", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X = C wiki if t < t d C d otherwise", "eq_num": "(3)" } ], "section": "Modeling Word Meaning in Context", "sec_num": "5.1" }, { "text": "The weighting function f scales the impact of each word pair such that unseen pairs do not contribute to the overall objective and frequent co-occurrences are not overweighted. In our experiments, we use the same weighting function and parametrization as defined in Pennington et al. (2014) . We further set the number of iterations to be performed on each co-occurrence matrix following results of an initial cross-validation experiment on our training data (t d = 50, n = 100).", "cite_spans": [ { "start": 266, "end": 290, "text": "Pennington et al. (2014)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Meaning in Context", "sec_num": "5.1" }, { "text": "If an entity is mentioned several times in discourse, it is likely that it also fills several roles. Whereas the distributional model described in Section 5.1 provides us with information regarding the role assignments suitable for an entity given co-occurring words, we can also can explicitly consider previous role assignments to the same entity. As shown in Table 2 , a country that fills the SUPPLIER role is more likely to also fill the role of a SELLER than that of a BUYER. Given the high number of different frame elements in FrameNet, only a small fraction of pairs can be found in the training data, which entails that directly utilizing role co-occurrences might not be helpful. In order to benefit from previous role assignments in discourse, we follow related work on resolving implicit arguments (Ruppenhofer et al., 2011; Silberer and Frank, 2012) and consider the semantic types of role assignments (see Section 3) as features instead of the role labels themselves. 
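The sketch below anticipates the precise definition given in the next paragraph and illustrates how such semantic-type features can be derived from coreference chains and earlier role assignments; the data structures and the example values are hypothetical simplifications of the actual system's representations.

def semantic_type_features(filler_head, coref_chains, prior_types, observed_types):
    # filler_head: head word of the candidate role filler
    # coref_chains: list of chains, each a list of mention head words
    # prior_types: maps a mention head word to the semantic type of the frame
    #   element it was assigned earlier in the discourse
    # observed_types: semantic types seen for frame elements at training time
    features = {s: 0 for s in observed_types}
    for chain in coref_chains:
        if filler_head not in chain:
            continue
        for mention in chain:
            s = prior_types.get(mention)
            if s in features:
                features[s] = 1  # a coreferent mention filled a role of type s
    return features

# Toy usage: "Greece" corefers with "the country", which earlier filled a role
# whose frame element carries the (here assumed) semantic type Sentient.
chains = [["Greece", "it", "the country"]]
prior = {"the country": "Sentient"}
print(semantic_type_features("Greece", chains, prior, ["Sentient", "Physical_object"]))
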
This tremendously reduces the feature space from more than 8,000 options (number of defined frame elements) to just 27 (number of semantic types observed for frame elements in the training data).", "cite_spans": [ { "start": 811, "end": 837, "text": "(Ruppenhofer et al., 2011;", "ref_id": "BIBREF29" }, { "start": 838, "end": 863, "text": "Silberer and Frank, 2012)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 362, "end": 369, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Co-occurring Roles", "sec_num": "5.2" }, { "text": "In practice, we define one binary indicator feature f s for each semantic type s observed at training time. Given a potential filler, we set the feature value of f s to 1 (otherwise 0) if and only if there exists a co-referent entity mention annotated as a frame element filler with semantic type s. Since texts in FrameNet do not contain any manual mark-up of coreference relations, we rely on entity mentions and coreference chains predicted by the Stanford Coreference Resolution system (Lee et al., 2013) .", "cite_spans": [ { "start": 490, "end": 508, "text": "(Lee et al., 2013)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Co-occurring Roles", "sec_num": "5.2" }, { "text": "Our third contextual feature type is based on the observation that the salience of a discourse entity and its semantic prominence are interrelated. Previous work (Rose, 2011) showed that semantic prominence, as signal-led by semantic roles, can better explain subsequent phenomena related to discourse salience (such as pronominalization) than syntactic indicators. Our question here is whether this insight can be also applied in reverse. Can information on discourse salience be useful as an indicator for semantic roles?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Newness", "sec_num": "5.3" }, { "text": "For this feature, we make use of the same coreference chains as predicted for determining cooccurring roles. Unfortunately, automatically predicted mentions and coreference chains are noisy. To identify particularly reliable indicators for discourse salience, we inspected held-out development data. One such indicator is whether an entity is mentioned for the first time (discourse-new) or has been mentioned before (discourse-old). Let w denote an entity and R 1 ...R n the set of all co-reference chains with mentions r 1 ...r m \u2208 R i (1 \u2264 i \u2264 n) ordered by their appearance in text. We define discourse newness based on head words r.head as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Newness", "sec_num": "5.3" }, { "text": "(4) new(w) = 0 if \u2203r j \u2208 R i : j > 1 \u2227 r j .head \u2261 w 1 else", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Newness", "sec_num": "5.3" }, { "text": "Although this feature is a simple binary indicator, it can be very useful for distinguishing between roles that are more or less likely to be assigned to new entities. For example, it is easy to imagine that the RESULT of a CAUSATION is more likely to be discourse-new than the EFFECT that caused it. 
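Assuming, as above, that predicted coreference chains are available as lists of mention head words in textual order, the newness indicator in (4) amounts to a few lines of code; the chains in the usage example below are made up for illustration.

def discourse_new(head, chains):
    # An entity is discourse-old (value 0) if its head word matches the head
    # of a non-first mention in some predicted coreference chain.
    for chain in chains:
        if any(mention == head for mention in chain[1:]):
            return 0
    return 1

chains = [["letter", "it"], ["Greece", "Greece"]]
print(discourse_new("letter", chains))  # 1: only a first mention, discourse-new
print(discourse_new("it", chains))      # 0: non-first mention, discourse-old
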
Table 3 provides an overview of frames found in the training and development data which have roles with substantially different likelihoods for discourse-new fillers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Newness", "sec_num": "5.3" }, { "text": "Our goal is to learn a better model for FrameNetbased semantic role labeling using linguistically inspired features such as those described in the previous sections. To do this, we need a framework for representing single role assignments and a model of how such assignments depend on each other within a frame instance. Inspired by previous work on reranking in SRL, we assume that we can find the correct filler of a frame element based on the top k roles predicted for each candidate word sequence. We leverage this assumption to train a reranking model that considers the top predictions for each candidate and uses all relevant features to select the best overall structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frame-based Reranking", "sec_num": "5.4" }, { "text": "Our implementation of the reranking model is an adaptation of the reranker made available in the mate-tools (see Section 4), which we extend to deal with frame-specific features and arbitrary role labels. As features for the global component, we apply all local features and additionally use the following two types of indicator features on the whole frame structure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frame-based Reranking", "sec_num": "5.4" }, { "text": "\u2022 Total number of roles in the predicted structure At test time, the reranker takes as input the n-best labels for the m-best fillers of a frame structure, computes a global score for each of the n \u00d7 m possible combinations and returns the structure with the highest overall score as its prediction output. Based on initial experiments on our training data, we set these parameters to m = 8 and n = 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frame-based Reranking", "sec_num": "5.4" }, { "text": "In this section, we demonstrate the usefulness of contextual features for FrameNet-based SRL models. Our hypothesis is that contextual information can considerably improve an existing semantic role labeling system. Accordingly, we test this hypothesis based on the output of three different systems. The first system, henceforth called Framat (short for FrameNet-adapted mate-tools) is the baseline system described in Section 4. The second system, henceforth Framat +context , is an enhanced version of the baseline that additionally uses all extensions described in Section 5. Finally, we also consider the output of SEMAFOR , a state-of-the-art model for frame-semantic role labeling. Although all systems are provided with entire documents as input, SEMAFOR and Framat process each document sentence-by-sentence whereas Framat +context also uses features over all sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "For evaluation, we use the same FrameNet training and evaluation texts as established in Das and Smith (2011) . We compute precision, recall and F 1 -score using the modified SemEval-2007 scorer from the SEMAFOR website. 
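As a concrete illustration of the frame-based reranking step described in Section 5.4, the sketch below enumerates candidate frame structures and returns the one with the highest global score. It is a simplified rendering under several assumptions that differ from the actual system: every candidate filler comes with a small set of locally scored label options (including the option of filling no role), the enumeration is exhaustive rather than restricted to the m- and n-best lists, and the only global feature is the number of realized roles with a made-up weight.

from itertools import product

def rerank(candidates, global_weights):
    # candidates: one list per candidate filler, each containing
    #   (label, local_score) pairs; label None means "fills no role".
    best_labels, best_score = None, float("-inf")
    for assignment in product(*candidates):  # one label choice per filler
        score = sum(s for _, s in assignment)
        labels = [l for l, _ in assignment]
        n_roles = sum(1 for l in labels if l is not None)
        score += global_weights.get(("num_roles", n_roles), 0.0)
        if score > best_score:
            best_labels, best_score = labels, score
    return best_labels

# Toy usage with two candidate fillers of a Delivery instance.
cands = [[("Theme", 1.2), ("Goal", 0.9), (None, 0.1)],
         [("Recipient", 0.8), ("Goal", 0.7), (None, 0.2)]]
weights = {("num_roles", 2): 0.5}  # hypothetical learned weight
print(rerank(cands, weights))      # ['Theme', 'Recipient']
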
6 6 http://www.ark.cs.cmu.edu/SEMAFOR/eval/ 7 Results produced by running SEMAFOR on the exact same Model/added feature P R F 1", "cite_spans": [ { "start": 89, "end": 109, "text": "Das and Smith (2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Framat w/o reranker 77.5 72.5 74.9 +discourse newness 77.6 72.3 74.9 +word meaning vectors 77.9 72.7 75.2 +cooccurring roles 77.9 72.8 75.3 +reranker 80.6 72.7 76.4 +frame structure 80.4 73.0 76.5 Results Table 4 summarizes our results with Framat, Framat +context , and SEMAFOR using gold and predicted frames (see the upper and lower half of the table, respectively). Although differences in system architecture lead to different precision/recall trade-offs for Framat and SEMAFOR, both systems achieve comparable F 1 (for both gold and predicted frames). Compared to Framat, we can see that the contextual enhancements implemented in our Framat +context model lead to immediate gains of 1.3 points in recall, corresponding to a significant increase of 0.7 points in F 1 . Framat +context 's recall is slightly below that of SEMAFOR (73.0% vs. 73.1%), however, it achieves a much higher level of precision (80.4% vs. 78.4%). We examined whether differences in performance among the three systems are significant using an approximate randomization test over sentences (Yeh, 2000) . SEMAFOR and Framat perform significantly worse (p<0.05) compared to Framat +context both when gold and predicted frames are used. In the remainder of this section we discuss results based on gold frames, since the focus of this work lies primarily on the role labeling task.", "cite_spans": [ { "start": 1069, "end": 1080, "text": "(Yeh, 2000)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 205, "end": 212, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We demonstrate the effect of adding individual context-based features to the Framat model in a separate experiment. Whereas all models in the previous experiment used a reranker for direct comparability, here we start with the Framat baseline (without a reranker) and add each enhancement described in Section 5 incrementally. As summarized in Table 5 , the baseline without a reranker achieves a precision and frame instances for training and testing as our own models. recall of 77.5% and 72.5%, respectively. Addition of our discourse new feature increases precision (+0.1%), but also reduces recall (\u22120.2%). Adding word meaning vectors compensates for the loss in recall (+0.4%) and further increases precision (+0.3%). Information about role assignments to coreferring mentions increases recall (+0.1%) while retaining the same level of precision. Finally, we can see that jointly considering role labeling decisions in a global reranker with additional features on frame structure leads to the strongest boost in performance, with combined additional gains in precision and recall of +2.5% and +0.2%, respectively. Interestingly, the gains realized here are much higher compared to when adding the reranker to the Framat model without contextual features, which corresponds to a +2.8% increase in precision but a \u22120.8% reduction in recall.", "cite_spans": [], "ref_spans": [ { "start": 344, "end": 351, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Impact of Individual Features", "sec_num": null }, { "text": "General vs. 
Document-specific Vectors We also assessed the impact of adapting vectors to documents (see Table 6 ). Specifically, we compared a version of the Framat +context model without any vectors against a model using the adaptation technique presented in Section 5.1 and a simpler alternative which obtains GloVe representations trained on the Wikipedia corpus and FrameNet texts. The latter model does not explicitly take document information into account, but it should be able to yield vectors representative of the FrameNet domains, merely by being trained on them. As shown in Table 6, our adaptation technique is superior to learning word representations based on Wikipedia and all FrameNet texts at once. Using the components of document-specific vectors as features improves precision and recall by +0.7 percentage points over Framat +context without vectors. Word representations trained on Wikipedia and FrameNet improve precision by +0.2 percentage points and recall by +0.6.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Impact of Individual Features", "sec_num": null }, { "text": "Qualitative Improvements In addition to quantitative gains, we also observe qualitative improvements when considering contextual features. A set of example predictions by different models are listed in Table 7 . The annotations show that Framat and SEMAFOR mislabel several cases that are correctly classified by Framat +context .", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Impact of Individual Features", "sec_num": null }, { "text": "In the first example, only Framat +context is able to predict that on Dec. 1 fills the frame element Model/word representations P R F 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impact of Individual Features", "sec_num": null }, { "text": "Framat +context without vectors 79.7 72.2 75.8 +document-specific vectors 80.4 73.0 76.5 +general (Wiki+FN) vectors 79.9 72.8 76.2 Table 6 : Full structure prediction results using gold frames, Framat +context and different vector representations. All numbers are percentages.", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Impact of Individual Features", "sec_num": null }, { "text": "TIME. This may seem trivial at first glance but is actually remarkable as the word token Dec neither occurs in the training data nor is well represented as a time expression in Wikipedia. The only way the model is able to label this phrase correctly is by finding that corresponding word tokens are similarly distributed across the test document as other time expressions are in the training data. In the second and third examples, correct assignments require some form of world knowledge which is not expressed within the respective sentences but might be approximated based on context. For example, knowing that aunt, uncle and grandmother are role fillers of a KINSHIP frame means that they are of the semantic type human and thus only compatible with the frame element RECIPIENT, not with GOAL. Similarly, correctly classifying the relation between Clinton and stooge in the last example is only possible if the model has access to some information that makes Clinton a likely filler of the SUPERIOR role. 
We conjecture that document-specific word vector representations provide such information given that Clinton co-occurs in the document with words such as president, chief, and claim. Overall, we find that the features introduced in Section 5 model a fair amount of contextual information which can help a semantic role labeling model to perform better decisions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impact of Individual Features", "sec_num": null }, { "text": "In this section, we discuss the extent to which our model leverages the full potential of contextual features for semantic role labeling. We manually examine role assignments to frame elements which seem particularly sensitive to context. We analyze such frame elements based on differences in label assignment between Framat and Framat +context that can be traced back to factors such as agency in dis- course and word sense in context. We investigate whether our model captures these factors successfully and showcase examples while reporting absolute changes in precision and recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Many frame elements in FrameNet indicate agency, a property that we expect to highly correlate with contextual features on semantic types of assigned roles (see Section 5.2) and discourse salience (see Section 5.3). Analysis of system output revealed that such features indeed affect and generally improve role labeling. Considering all AGENT elements across frames, we observe absolute improvements of 4% in precision and 3% in recall. In the following, we provide a more detailed analysis of two specific frame elements: the low frequent AGENT element of the PROJECT frame and the highly frequent SPEAKER element in the STATEMENT frame.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agency and Discourse", "sec_num": "7.1" }, { "text": "The AGENT of a PROJECT is defined as the \"individual or organization that carries out the PROJECT\". The main difficulty in identifying instances of this frame element is that the frameevoking target word is typically a noun such as project, plan, or program and hence syntactic features on word-word dependencies do not provide sufficient cues. We found several cases where context provided missing cues, leading to an increase in recall from 56% to 78%. In cases where additional features did not help, we identified two types of errors: firstly, the filler was too far from the target word and therefore could not be identified as a filler at all (\"[North Korea] AGENT is developing ... program PROJECT \"), and secondly, earlier mentions indicating agency were not detected by the coreference resolution system (\"The IAEA assisted Syria (...) This study was part of an IAEA AGENT .. program PROJECT ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agency and Discourse", "sec_num": "7.1" }, { "text": "The SPEAKER of a STATEMENT is defined as \"the sentient entity that produces [a] MESSAGE\". Instances of the STATEMENT frame are frequently evoked by verbs such as say, mention, and claim. The SPEAKER role can be hard to identify in subject position as an unknown entity could also fill the MEDIUM role. For example, \"a report claims that ...\" should be analyzed differently from \"a person claims\". Our contextual features improve role labeling in cases where the subject can be classified based on previous role assignments. 
On the negative side, we found our model to be too conservative in some cases where a subject is discourse new. Additional gains would be possible with improved coreference chains that include pronouns such as some and I. Such chains could be established through a better preprocessing pipeline or by utilizing additional linguistic resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agency and Discourse", "sec_num": "7.1" }, { "text": "As discussed earlier, we expect that the meaning of a word in context provides valuable cues regarding potential frame elements. Two types of words are of particular interest here: ambiguous words, for which different senses might apply depending on context, and out-of-vocabulary words, for which no clear sense could be established during training. In the following, we take a closer look at differences in role assignment between Framat and Framat +context for such fillers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Meaning and Context", "sec_num": "7.2" }, { "text": "Ambiguous words that occur as fillers of different frame elements in the test set include party, power, program, and view. We find occurrences of these words in two broad types of contexts: political and non-political. Within political contexts, party and power fill frame elements such as POS-SESSION and LEADER. Outwith political contexts, we find frame elements such as ELECTRICITY and SOCIAL EVENT to be far more likely. The Framat model exhibits a general bias towards the political domain, often missing instances of frame elements that are more common in non-political contexts (e.g., \"the six-[party] INTERLOCUTORS talks DISCUSSION \"). Framat +context , in contrast, shows less of a bias and provides better classification based on context features for all frame elements. Overall, precision for the four ambiguous words is improved from 86% to 93%, with a few errors remaining due to rare dependency paths (e.g., [program] ACT NMOD \u2190 \u2212\u2212 \u2212 which SBAR \u2190\u2212\u2212 is PRD \u2190 \u2212 \u2212 violation COMPLIANCE ) and differences between frame elements that depend on factors such as number (COGNIZER vs. COGNIZER 1).", "cite_spans": [ { "start": 922, "end": 931, "text": "[program]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word Meaning and Context", "sec_num": "7.2" }, { "text": "A frequently observed error by the baseline model is to assign peripheral frame elements such as TIME to role fillers that actually are not time expressions. This happens because words which have not been seen frequently during training but appear in adverbial positions are generally likely to fill the frame element TIME. We find that the use of document-specific word vector representations drastically reduces the number of such errors (e.g., \"to give GIVING [generously] MANNER vs. *TIME \"), with absolute gains in precision and recall of 14% and 9%, respectively, presumably because non-time expressions are often distributed differently across a document than time expressions. Documentspecific word vector representations also improve recall for out-of-vocabulary words, as seen with the example of Dec discussed in Section 6. However, such representations by themselves might be insufficient to determine which aspects of a word sense are applicable across a document as occurrences in specific contexts may also be misleading (e.g., \". . . changes [throughout the community]\" vs. 
\"...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Meaning and Context", "sec_num": "7.2" }, { "text": "[throughout the ages] TIME \"). Some of these cases could be resolved using higher level features that explicitly model interactions between (predicted) word meaning in context and other factors, however we leave this to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Meaning and Context", "sec_num": "7.2" }, { "text": "In this paper, we enriched a traditional semantic role labeling model with additional information from context. The corresponding features we defined can be grouped into three categories: (1) discourse-level features that directly utilize discourse knowledge in the form of coreference chains (newness, prior role assignments), (2) sentence-level features that model properties of a frame structure as a whole, and (3) lexical features that can be computed using methods from distributional semantics and an adaptation to model document-specific word meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "To implement our discourse-level enhancements, we modified a semantic role labeling system developed for PropBank/NomBank which we found to achieve competitive performance on FrameNetbased annotations. Our main contribution lies in extending this system to the discourse level. Our experiments revealed that discourse aware features can significantly improve semantic role labeling performance, leading to gains of over +2.0 percentage points in precision and state-of-the-art results in terms of F 1 . Analysis of system output revealed two reasons for improvement. Firstly, contextual features provide necessary additional information to understand and assign roles on the sentence level, and secondly, some of our discourse-level features generalize better than traditional lexical and syntactic features. We further found that additional gains can be achieved using improved preprocessing tools and a more sophisticated model for feature interactions. In the future, we are planning to assess whether discourse-level features generalize crosslinguistically. We would also like to investigate whether semantic role labeling can benefit from recognizing textual entailment and high-level discourse relations. Our code is publicly available under http://github.com/microth/mateplus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "See https://framenet2.icsi.berkeley.edu/ for a comprehensive list of frames and their definitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Version 1.5, released September 2010.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We note that better results have been reported in Hermann et al.(2014)andT\u00e4ckstr\u00f6m et al. (2015). 
However, both of these more recent approaches rely on a custom frame identification component as well as proprietary tools and models for tagging and parsing which are not publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We selected this toolkit in our work due to its flexibility: as it directly operates over co-occurrence matrices, we can manipulate counts prior to word vector computation and easily take into account multiple matrices.5 http://www.psych.ualberta.ca/\u02dcwestburylab/ downloads/westburylab.wikicorp.download.htmlTo make up for the large difference in data size between the Wikipedia corpus and a single document, we normalize co-occurrence counts based on the ratio between the absolute numbers of co-occurrences in both resources.Given co-occurrence matrices C wiki and C d , and the vocabulary V , we formally define the features of our SRL model as the components of the vector space w i of words w i (1 \u2264 i \u2264 |V |) occurring in document d. The representations are learned by applying GloVe to optimize the following objective for n iterations (1 \u2264 t \u2264 n):J t = i,j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to Diana McCarthy and three anonymous referees whose feedback helped to substantially improve the present paper. The research presented in this paper was funded by a DFG Research Fellowship (RO 4848/1-1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Frame semantic tree kernels for social network extraction from text", "authors": [ { "first": "Apoorv", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Sriramkumar", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "Anup", "middle": [], "last": "Kotalwar", "suffix": "" }, { "first": "Jiehan", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "26--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Apoorv Agarwal, Sriramkumar Balasubramanian, Anup Kotalwar, Jiehan Zheng, and Owen Rambow. 2014. Frame semantic tree kernels for social network extrac- tion from text. In Proceedings of the 14th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 211-219, Gothen- burg, Sweden, 26-30 April 2014.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A high-performance syntactic and semantic dependency parser", "authors": [ { "first": "Anders", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "" }, { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Love", "middle": [], "last": "Hafdell", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Bj\u00f6rkelund, Bernd Bohnet, Love Hafdell, and Pierre Nugues. 2010. A high-performance syntac- tic and semantic dependency parser. 
In Coling 2010: Demonstration Volume, pages 33-36, Beijing, China. Wanxiang Che, Ting Liu, and Yongqiang Li. 2010. Im- proving semantic role labeling with word sense. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics, pages 246- 249, Los Angeles, California, 1-6 June 2010.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Annotation tools and knowledge representation for a text-to-scene system", "authors": [ { "first": "Bob", "middle": [], "last": "Coyne", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Klapheke", "suffix": "" }, { "first": "Masoud", "middle": [], "last": "Rouhizadeh", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Bauer", "suffix": "" } ], "year": 2012, "venue": "Proceedings of 24th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "8--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bob Coyne, Alex Klapheke, Masoud Rouhizadeh, Richard Sproat, and Daniel Bauer. 2012. Annotation tools and knowledge representation for a text-to-scene system. In Proceedings of 24th International Con- ference on Computational Linguistics, pages 679-694, Mumbai, India, 8-15 December 2012.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Towards open-domain semantic role labeling", "authors": [ { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Giannone", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Annesi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "11--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Croce, Cristina Giannone, Paolo Annesi, and Roberto Basili. 2010. Towards open-domain semantic role labeling. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguistics, pages 237-246, Uppsala, Sweden, 11-16 July 2010.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Structured lexical similarity via convolution kernels on dependency trees", "authors": [ { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1034--1046", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via convolution kernels on dependency trees. 
In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1034-1046, Edinburgh, United Kingdom.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Semisupervised frame-semantic parsing for unknown predicates", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "19--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dipanjan Das and Noah A. Smith. 2011. Semi- supervised frame-semantic parsing for unknown pred- icates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, Portland, Oregon, 19-24 June 2011.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Semi-supervised semantic role labeling using the Latent Words Language Model", "authors": [ { "first": "Koen", "middle": [], "last": "Deschacht", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "21--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koen Deschacht and Marie-Francine Moens. 2009. Semi-supervised semantic role labeling using the La- tent Words Language Model. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 21-29, Singapore, 2-7 August 2009.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Measuring distributional similarity in context", "authors": [ { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "9--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgiana Dinu and Mirella Lapata. 2010. Measuring distributional similarity in context. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1162-1172, Cambridge, Massachusetts, 9-11 October 2010.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Frame semantics and the nature of language", "authors": [ { "first": "Charles", "middle": [ "J" ], "last": "Fillmore", "suffix": "" } ], "year": 1976, "venue": "Annals of the New York Academy of Sciences: Conference on the Origin and Development of Language and Speech", "volume": "280", "issue": "", "pages": "20--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles J. Fillmore. 1976. Frame semantics and the na- ture of language. 
In Annals of the New York Academy of Sciences: Conference on the Origin and Develop- ment of Language and Speech, volume 280, pages 20- 32.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Dependencybased semantic role labeling using convolutional neural networks", "authors": [ { "first": "William", "middle": [], "last": "Foland", "suffix": "" }, { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "279--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Foland and James Martin. 2015. Dependency- based semantic role labeling using convolutional neu- ral networks. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 279-288, Denver, Colorado.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semantic Role Labeling of Implicit Arguments for Nominal Predicates", "authors": [ { "first": "Matthew", "middle": [], "last": "Gerber", "suffix": "" }, { "first": "Joyce", "middle": [], "last": "Chai", "suffix": "" } ], "year": 2012, "venue": "Computational Linguistics", "volume": "38", "issue": "4", "pages": "755--798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Gerber and Joyce Chai. 2012. Semantic Role Labeling of Implicit Arguments for Nominal Predi- cates. Computational Linguistics, 38(4):755-798.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic labeling of semantic roles", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "3", "pages": "245--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic la- beling of semantic roles. Computational Linguistics, 28(3):245-288.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Towards weakly supervised resolution of null instantiations", "authors": [ { "first": "Philip", "middle": [], "last": "Gorinski", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers", "volume": "", "issue": "", "pages": "19--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Gorinski, Josef Ruppenhofer, and Caroline Sporleder. 2013. Towards weakly supervised resolu- tion of null instantiations. 
In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers, pages 119-130, Potsdam, Germany, 19-22 March 2013.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Semantic frame identification with distributed word representations", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "23--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identifica- tion with distributed word representations. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics, pages 1448-1458, Baltimore, Maryland, 23-25 June 2014.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Open-domain semantic role labeling by modeling word spans", "authors": [ { "first": "Fei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "11--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Huang and Alexander Yates. 2010. Open-domain semantic role labeling by modeling word spans. In Proceedings of the 48th Annual Meeting of the Associ- ation for Computational Linguistics, pages 968-978, Uppsala, Sweden, 11-16 July 2010.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The effect of syntactic representation on semantic role labeling", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "18--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Pierre Nugues. 2008. The ef- fect of syntactic representation on semantic role label- ing. In Proceedings of the 22nd International Con- ference on Computational Linguistics, pages 393-400, Manchester, United Kingdom, 18-22 August 2008.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Sources of evidence for implicit argument resolution", "authors": [ { "first": "Egoitz", "middle": [], "last": "Laparra", "suffix": "" }, { "first": "German", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers", "volume": "", "issue": "", "pages": "19--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Egoitz Laparra and German Rigau. 2013. Sources of ev- idence for implicit argument resolution. 
In Proceed- ings of the 10th International Conference on Compu- tational Semantics (IWCS 2013) -Long Papers, pages 155-166, Potsdam, Germany, 19-22 March 2013.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deterministic coreference resolution based on entitycentric, precision-ranked rules", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "4", "pages": "885--916", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity- centric, precision-ranked rules. Computational Lin- guistics, 39(4):885-916.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "High-order lowrank tensors for semantic role labeling", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1150--1160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Lei, Yuan Zhang, Llu\u00eds M\u00e0rquez, Alessandro Mos- chitti, and Regina Barzilay. 2015. High-order low- rank tensors for semantic role labeling. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1150-1160, Den- ver, Colorado.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Linguistic regularities in continuous space word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "9--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. 
In Proceedings of the 2013 Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 746-751, Atlanta, Georgia, 9-15 June 2013.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A study on convolution kernels for shallow statistic parsing", "authors": [ { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume", "volume": "", "issue": "", "pages": "335--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Moschitti. 2004. A study on convolution kernels for shallow statistic parsing. In Proceedings of the 42nd Meeting of the Association for Computa- tional Linguistics (ACL'04), Main Volume, pages 335- 342, Barcelona, Spain.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semantic role assignment for event nominalisations by leveraging verbal data", "authors": [ { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "665--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Pad\u00f3, Marco Pennacchiotti, and Caroline Sporleder. 2008. Semantic role assignment for event nominalisations by leveraging verbal data. In Pro- ceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 665- 672, Manchester, United Kingdom.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automatic induction of FrameNet lexical units", "authors": [ { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Diego", "middle": [ "De" ], "last": "Cao", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "25--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Pennacchiotti, Diego De Cao, Roberto Basili, Danilo Croce, and Michael Roth. 2008. Automatic induction of FrameNet lexical units. In Proceedings of the 2008 Conference on Empirical Methods in Nat- ural Language Processing, pages 457-465, Honolulu, Hawaii, USA, 25-27 October 2008.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "25--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543, Doha, Qatar, 25-29 October 2014.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Joint information value of syntactic and semantic prominence for subsequent pronominal reference", "authors": [ { "first": "", "middle": [], "last": "Ralph L Rose", "suffix": "" } ], "year": 2011, "venue": "Salience: Multidisciplinary Perspectives on Its Function in Discourse", "volume": "227", "issue": "", "pages": "81--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph L Rose. 2011. Joint information value of syntactic and semantic prominence for subsequent pronominal reference. Salience: Multidisciplinary Perspectives on Its Function in Discourse, 227:81-103.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Composition of word representations improves semantic role labelling", "authors": [ { "first": "Michael", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Kristian", "middle": [], "last": "Woodsend", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "25--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Roth and Kristian Woodsend. 2014. Compo- sition of word representations improves semantic role labelling. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 407-413, Doha, Qatar, 25-29 October 2014.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "FrameNet II: Extended Theory and Practice", "authors": [ { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Ellsworth", "suffix": "" }, { "first": "Miriam", "middle": [ "R L" ], "last": "Petruck", "suffix": "" }, { "first": "Christopher", "middle": [ "R" ], "last": "Johnson", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josef Ruppenhofer, Michael Ellsworth, Miriam R. L. Petruck, Christopher R. Johnson, and Jan Scheffczyk. 2010. FrameNet II: Extended Theory and Practice. Technical report, International Computer Science In- stitute, 14 September 2010.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "In search of missing arguments: A linguistic approach", "authors": [ { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Gorinski", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "12--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josef Ruppenhofer, Philip Gorinski, and Caroline Sporleder. 2011. In search of missing arguments: A linguistic approach. In Proceedings of the Inter- national Conference Recent Advances in Natural Lan- guage Processing 2011, pages 331-338, Hissar, Bul- garia, 12-14 September 2011.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The distributional hypothesis", "authors": [ { "first": "Magnus", "middle": [], "last": "Sahlgren", "suffix": "" } ], "year": 2008, "venue": "Italian Journal of Linguistics", "volume": "20", "issue": "1", "pages": "33--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Magnus Sahlgren. 2008. 
The distributional hypothesis. Italian Journal of Linguistics, 20(1):33-54.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Using semantic roles to improve question answering", "authors": [ { "first": "Dan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "volume": "", "issue": "", "pages": "12--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 12-21, Prague, Czech Republic.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Casting implicit role linking as an anaphora resolution task", "authors": [ { "first": "Carina", "middle": [], "last": "Silberer", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012)", "volume": "", "issue": "", "pages": "7--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carina Silberer and Anette Frank. 2012. Casting implicit role linking as an anaphora resolution task. In Pro- ceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012), pages 1-10, Montr\u00e9al, Canada, 7-8 June.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Efficient inference and structured learning for semantic role labeling", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "29--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Kuzman Ganchev, and Dipanjan Das. 2015. Efficient inference and structured learning for semantic role labeling. Transactions of the Associa- tion for Computational Linguistics, 3:29-41.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Contextualizing semantic representations using syntactically enriched vector models", "authors": [ { "first": "Stefan", "middle": [], "last": "Thater", "suffix": "" }, { "first": "Hagen", "middle": [], "last": "F\u00fcrstenau", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "11--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Thater, Hagen F\u00fcrstenau, and Manfred Pinkal. 2010. Contextualizing semantic representations us- ing syntactically enriched vector models. 
In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 948-957, Uppsala, Sweden, 11-16 July 2010.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Joint learning improves semantic role labeling", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "29--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Aria Haghighi, and Christopher Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of the 43rd Annual Meet- ing of the Association for Computational Linguistics, pages 589-596, Ann Arbor, Michigan, 29-30 June 2005.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Semantic frames to predict stock price movement", "authors": [ { "first": "Boyi", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Rebecca", "middle": [ "J" ], "last": "Passonneau", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Germ\u00e1n", "middle": [ "G" ], "last": "Creamer", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boyi Xie, Rebecca J. Passonneau, Leon Wu, and Germ\u00e1n G. Creamer. 2013. Semantic frames to pre- dict stock price movement. In Proceedings of the 51st", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "873--883", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 873-883, Sofia, Bulgaria, 4-9 Au- gust 2013.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Calibrating features for semantic role labeling", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "88--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 88-94, Barcelona, Spain, July.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "More accurate tests for the statistical significance of result differences", "authors": [ { "first": "Alexander", "middle": [], "last": "Yeh", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "947--953", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Yeh. 2000. More accurate tests for the sta- tistical significance of result differences. 
In Proceed- ings of the 18th International Conference on Computa- tional Linguistics, pages 947-953, Saarbr\u00fccken, Ger- many.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Selectional preferences for semantic role classification", "authors": [ { "first": "Eneko", "middle": [], "last": "Be\u00f1at Zapirain", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "", "middle": [], "last": "Surdeanu", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "3", "pages": "631--663", "other_ids": {}, "num": null, "urls": [], "raw_text": "Be\u00f1at Zapirain, Eneko Agirre, Llu\u00eds M\u00e0rquez, and Mi- hai Surdeanu. 2013. Selectional preferences for se- mantic role classification. Computational Linguistics, 39(3):631-663.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Argument identification and classification Lemma form of f POS tag of f Any syntactic dependents of f * Subcat frame of f * Voice of a* Any lemma in a* Number of words in a First word and POS tag in a Second word and POS tag in a Last word and POS tag in a Relation from first word in a to its parent Relation from second word in a to its parent Relation from last word in a to its parent Relative position of a with respect to p Voice of a and relative position with respect to p* Identification only Lemma form of the first word in a Lemma form of the syntactic head of a Lemma form of the last word in a POS tag of the first word in a POS tag of the syntactic head of a POS tag of the last word in a Relation from syntactic head of a to its parent Dependency path from a to f Length of dependency path from a to f Number of words between a and f" }, "TABREF0": { "num": null, "text": "Features from which we adopt in our model; a denotes the argument span under consideration, f refers to the corresponding frame evoking element. Identification features are instantiated as binary indicator features. Features marked with an asterisk are role specific. All other features apply to combinations of role and frame.", "content": "", "type_str": "table", "html": null }, "TABREF2": { "num": null, "text": "Most frequent roles assigned to country names appearing FrameNet texts: whereas Iran and China are mostly mentioned in an economic context, references to Iraq are mainly found in a news article about a politician's visit to the country.", "content": "
", "type_str": "table", "html": null }, "TABREF4": { "num": null, "text": "Frequent frames that have elements with different likelihoods of discourse-new vs. discourse-old fillers; new/old ratios as observed on the development set.", "content": "
", "type_str": "table", "html": null }, "TABREF6": { "num": null, "text": "", "content": "
Full structure prediction results using gold (top) and predicted frames (bottom). All numbers are percentages. * Significantly different (p<0.05) from Framat +context.
", "type_str": "table", "html": null }, "TABREF7": { "num": null, "text": "Full structure prediction results using gold frames, Framat and different sets of context features. All numbers are percentages.", "content": "", "type_str": "table", "html": null }, "TABREF8": { "num": null, "text": "SEMAFOR *Can [he] THEME go MOTION [to Paris] GOAL on Dec. 1 ? Framat *Can [he] THEME go MOTION [to Paris on Dec. 1] GOAL ? Framat +context Can [he] THEME go MOTION [to Paris] GOAL [on Dec. 1] TIME ? SEMAFOR *Send SENDING [my regards] THEME to my aunt , uncle and grandmother . Framat *Send SENDING [my regards] THEME [to my aunt , uncle and grandmother] GOAL . Framat +context Send SENDING [my regards] THEME [to my aunt , uncle and grandmother] RECIPIENT .", "content": "
SEMAFOR *Stephanopoulos does n't want to seem a Clinton stooge SUBORDINATES AND SUPERIORS
Framat *Stephanopoulos does n't want to seem a [Clinton] DESCRIPTOR stooge SUBORDINATES AND SUPERIORS
Framat +context Stephanopoulos does n't want to seem a [Clinton] SUPERIOR stooge SUBORDINATES AND SUPERIORS
", "type_str": "table", "html": null }, "TABREF9": { "num": null, "text": "Examples of frame structures that are labeled incorrectly (marked by asterisks) without contextual features.", "content": "", "type_str": "table", "html": null } } } }