{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:18:01.666191Z" }, "title": "Syntax Role for Neural Semantic Role Labeling", "authors": [ { "first": "Zuchao", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "", "affiliation": {}, "email": "zhaohai@cs.sjtu.edu.cn" }, { "first": "Jiaxun", "middle": [], "last": "Cai", "suffix": "", "affiliation": {}, "email": "caijiaxun@sjtu.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence. Previous studies in terms of traditional models have shown syntactic information can make remarkable contributions to SRL performance; however, the necessity of syntactic information was challenged by a few recent neural SRL studies that demonstrate impressive performance without syntactic backbones and suggest that syntax information becomes much less important for neural semantic role labeling, especially when paired with recent deep neural network and large-scale pre-trained language models. Despite this notion, the neural SRL field still lacks a systematic and full investigation on the relevance of syntactic information in SRL, for", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence. Previous studies in terms of traditional models have shown syntactic information can make remarkable contributions to SRL performance; however, the necessity of syntactic information was challenged by a few recent neural SRL studies that demonstrate impressive performance without syntactic backbones and suggest that syntax information becomes much less important for neural semantic role labeling, especially when paired with recent deep neural network and large-scale pre-trained language models. Despite this notion, the neural SRL field still lacks a systematic and full investigation on the relevance of syntactic information in SRL, for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic role labeling (SRL), namely, semantic parsing, is a shallow semantic parsing task that aims to recognize the predicate-argument structure of each predicate in a sentence, such as who did what to whom, where and when, and so forth. Specifically, SRL seeks to identify arguments and label their semantic roles given a predicate. SRL is an important method for obtaining semantic information that is beneficial to a wide range of natural language processing (NLP) tasks, including machine translation (Shi et al. 2016) , question answering (Berant et al. 2013; Yih et al. 2016) , discourse relation sense classification (Mihaylov and Frank 2016) , and relation extraction (Lin, Liu, and Sun 2017) .", "cite_spans": [ { "start": 507, "end": 524, "text": "(Shi et al. 2016)", "ref_id": "BIBREF72" }, { "start": 546, "end": 566, "text": "(Berant et al. 2013;", "ref_id": "BIBREF1" }, { "start": 567, "end": 583, "text": "Yih et al. 2016)", "ref_id": "BIBREF85" }, { "start": 626, "end": 651, "text": "(Mihaylov and Frank 2016)", "ref_id": "BIBREF55" }, { "start": 678, "end": 702, "text": "(Lin, Liu, and Sun 2017)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "SRL can be split into four subtasks: predicate detection, predicate disambiguation, argument identification, and argument classification. For argument annotation, there are two formulations (styles). One is based on constituents (i.e., phrase or span), and the other is based on dependencies. The latter, proposed by the CoNLL-2008 shared task (Surdeanu et al. 2008) , is also called semantic dependency parsing and annotates the heads of arguments rather than phrasal arguments. Figure 1 shows example annotations.", "cite_spans": [ { "start": 344, "end": 366, "text": "(Surdeanu et al. 2008)", "ref_id": "BIBREF75" } ], "ref_spans": [ { "start": 480, "end": 488, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In prior SRL work, considerable attention has been paid to feature engineering, which struggles to capture sufficient discriminative information compared to neural network models, which are capable of extracting features automatically. In particular, syntactic information, including syntactic tree features, has been known to be extremely beneficial to SRL since the large scale of empirical verification of Punyakanok, Roth, and Yih (2008) . Despite their success, their work suffered from erroneous syntactic input, leading to an unsatisfactory performance.", "cite_spans": [ { "start": 409, "end": 441, "text": "Punyakanok, Roth, and Yih (2008)", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To alleviate these issues, Marcheggiani, Frolov, and Titov (2017) and He et al. (2017) proposed a simple but effective neural model for SRL without syntactic input. Their work suggested that neural SRL does not have to rely on syntactic features, contradicting the belief that syntax is a necessary prerequisite for SRL, which was believed as early as Gildea and Palmer (2002) . This dramatic contradiction motivated us to make a thorough exploration on syntactic contribution to SRL. Examples of annotations in span (above) and dependency (below) SRL.", "cite_spans": [ { "start": 27, "end": 65, "text": "Marcheggiani, Frolov, and Titov (2017)", "ref_id": "BIBREF52" }, { "start": 70, "end": 86, "text": "He et al. (2017)", "ref_id": "BIBREF26" }, { "start": 352, "end": 376, "text": "Gildea and Palmer (2002)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A chronicle of related work for span and dependency SRL. SA represents a syntax-aware system (no + indicates a syntax-agnostic system). F 1 is the result of a single model on the official test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "As shown in Table 1 , span and dependency are effective formal representations for semantics, though it has been unknown for a long time which form, span, or dependency would be better for the convenience and effectiveness of semantic machine learning and later applications. This topic has been roughly discussed in Johansson and Nugues (2008a) and Li et al. (2019a) , who both concluded that the (best) dependency SRL system at that time clearly outperformed the span-based (best) system through gold syntactic structure transformation; however, due to the different requirements of downstream task applications, span and dependency both remain focuses of research. 
Additionally, the two forms of SRL may benefit from each other's joint (rather than separate) development. We therefore revisit the role of syntax in SRL on a more solid empirical basis and investigate the role of syntax 1 for the two SRL styles by supplying syntax knowledge of varying quality.", "cite_spans": [ { "start": 317, "end": 345, "text": "Johansson and Nugues (2008a)", "ref_id": "BIBREF31" }, { "start": 350, "end": 367, "text": "Li et al. (2019a)", "ref_id": "BIBREF45" } ], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "Recent work on syntax contributions has been limited to individual models and to particular ways in which syntax is utilized, so the conclusions drawn about the role of syntax have some limitations. To reduce these limitations, we explore three typical and strong baseline models and two categories of syntax utilization methods. In addition, pre-trained language models, such as ELMo (Peters et al. 2018 ) and BERT (Devlin et al. 2019) , which build contextualized representations, continue to provide gains on NLP benchmarks, and Hewitt and Manning (2019) showed that syntactic structure emerges in the word representation spaces of deep models. Whether neural SRL models can further benefit from explicit syntax information in addition to this implicit syntax information, however, is another issue we consider.", "cite_spans": [ { "start": 393, "end": 412, "text": "(Peters et al. 2018", "ref_id": "BIBREF65" }, { "start": 424, "end": 444, "text": "(Devlin et al. 2019)", "ref_id": "BIBREF13" }, { "start": 539, "end": 564, "text": "Hewitt and Manning (2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "Moreover, most of the SRL literature is dedicated to impressive performance gains on English, while other languages receive relatively little attention. Although human languages share some basic commonalities in syntactic structure and even in different levels of grammar, their differences are also substantial. The role of syntax therefore needs to be examined across multiple languages to verify its effectiveness and applicability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "To quantitatively evaluate the contribution of syntax to SRL, we adopt the ratio between the labeled F1 score for semantic dependencies (Sem-F1) and the labeled attachment score (LAS) for syntactic dependencies, as well as the corresponding ratio between Sem-F1 and the F1 score for syntactic constituents. This ratio was first introduced by the CoNLL-2008 shared task (Surdeanu et al. 2008) as an evaluation metric. Because different syntactic parsers contribute syntactic inputs of varying quality, different syntactically driven SRL systems rest on different syntactic foundations. The proposed ratio therefore offers a fairer comparison between different syntactically driven SRL systems, which our empirical study surveys.", "cite_spans": [ { "start": 310, "end": 332, "text": "(Surdeanu et al. 2008)", "ref_id": "BIBREF75" } ], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "SRL was pioneered by Gildea and Jurafsky (2000) based on the FrameNet semantic labeling project (Baker, Fillmore, and Lowe 1998) . PropBank (Palmer, Gildea, and Kingsbury 2005) is one of the most commonly used labeling schemes for this task. 
This involves two variants: span-based labeling (span SRL), where arguments are characterized as word spans (Carreras and M\u00e0rquez 2005; Pradhan et al. 2012) , and head-based labeling (dependency SRL), which only labels head words and relies on syntactic parse trees (Haji\u010d et al. 2009) .", "cite_spans": [ { "start": 27, "end": 53, "text": "Gildea and Jurafsky (2000)", "ref_id": "BIBREF20" }, { "start": 110, "end": 134, "text": "Fillmore, and Lowe 1998)", "ref_id": "BIBREF0" }, { "start": 146, "end": 182, "text": "(Palmer, Gildea, and Kingsbury 2005)", "ref_id": "BIBREF62" }, { "start": 356, "end": 383, "text": "(Carreras and M\u00e0rquez 2005;", "ref_id": "BIBREF6" }, { "start": 384, "end": 404, "text": "Pradhan et al. 2012)", "ref_id": "BIBREF67" }, { "start": 514, "end": 533, "text": "(Haji\u010d et al. 2009)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "Conventionally, once predicates are identified, span SRL decomposes into two subtasks: argument identification and argument classification. The former identifies the arguments of a predicate, and the latter assigns them semantic role labels, determining the relations between arguments and predicates. PropBank defines a set of semantic roles for labeling arguments. These roles fall into two categories: core and non-core roles. The core roles (A0-A5 and AA) indicate different semantics in predicate-argument structure, while the non-core roles are modifiers (AM-adj), where adj specifies the adjunct type, such as in temporal (AM-TMP) and locative (AM-LOC) adjuncts. For the example shown in Figure 1 , A0 is a proto-agent, representing the borrower.", "cite_spans": [], "ref_spans": [ { "start": 690, "end": 698, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "Slightly different from span SRL in argument annotation, dependency SRL labels the head words 2 of arguments rather than entire phrases, a practice popularized by the CoNLL-2008 and CoNLL-2009 shared tasks 3 (Surdeanu et al. 2008; Haji\u010d et al. 2009) . Furthermore, when no predicate is given, two other indispensable subtasks of dependency SRL are required: predicate identification and predicate disambiguation. The former identifies all predicates in a sentence, and the latter determines the word senses, that is, the specific contextual meanings, of predicates. In the example shown in Figure 1 , 01 indicates the first sense from the PropBank sense repository for the predicate borrowed in the sentence. Johansson and Nugues (2008c) demonstrated that in conventional SRL models, syntactic trees provide a good form of representation for assigning semantic role labels. The successful application of neural networks to SRL (Zhou and Xu 2015; He et al. 2017; Marcheggiani, Frolov, and Titov 2017; Cai et al. 2018 ) mitigated conventional SRL models' need for comprehensive feature engineering based on syntax trees (Zhao et al. 2009a ) and resulted in syntax-agnostic neural SRL models that achieved competitive performance. Recent work has built on this and explored the inclusion of syntax in neural SRL. Including syntax in SRL has three main benefits that have been common motivations for recent work:", "cite_spans": [ { "start": 170, "end": 184, "text": "CoNLL-2008 and", "ref_id": null }, { "start": 185, "end": 195, "text": "CoNLL-2009", "ref_id": null }, { "start": 211, "end": 233, "text": "(Surdeanu et al. 
2008;", "ref_id": "BIBREF75" }, { "start": 234, "end": 252, "text": "Haji\u010d et al. 2009)", "ref_id": "BIBREF24" }, { "start": 699, "end": 727, "text": "Johansson and Nugues (2008c)", "ref_id": "BIBREF32" }, { "start": 924, "end": 942, "text": "(Zhou and Xu 2015;", "ref_id": "BIBREF92" }, { "start": 943, "end": 958, "text": "He et al. 2017;", "ref_id": "BIBREF26" }, { "start": 959, "end": 996, "text": "Marcheggiani, Frolov, and Titov 2017;", "ref_id": "BIBREF52" }, { "start": 997, "end": 1012, "text": "Cai et al. 2018", "ref_id": "BIBREF3" }, { "start": 1115, "end": 1133, "text": "(Zhao et al. 2009a", "ref_id": "BIBREF87" } ], "ref_spans": [ { "start": 584, "end": 592, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "\u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "Arguments are often dispersed around the predicates in syntax trees (Xue and Palmer 2004; Zhao and Kit 2008; He et al. 2018b; He, Li, and Zhao 2019) .", "cite_spans": [ { "start": 68, "end": 89, "text": "(Xue and Palmer 2004;", "ref_id": "BIBREF82" }, { "start": 90, "end": 108, "text": "Zhao and Kit 2008;", "ref_id": "BIBREF89" }, { "start": 109, "end": 125, "text": "He et al. 2018b;", "ref_id": "BIBREF28" }, { "start": 126, "end": 148, "text": "He, Li, and Zhao 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "\u2022 Some predicate-argument arcs in semantic dependency graphs are mirrored by head-dependent arcs in their corresponding dependency parse trees, and there is a deterministic mapping between these syntactic relationships and semantic role labels (Surdeanu et al. 2008; Lang and Lapata 2010; Li et al. 2018; Cai and Lapata 2019b; Marcheggiani and Titov 2020) .", "cite_spans": [ { "start": 244, "end": 266, "text": "(Surdeanu et al. 2008;", "ref_id": "BIBREF75" }, { "start": 267, "end": 288, "text": "Lang and Lapata 2010;", "ref_id": "BIBREF40" }, { "start": 289, "end": 304, "text": "Li et al. 2018;", "ref_id": "BIBREF44" }, { "start": 305, "end": 326, "text": "Cai and Lapata 2019b;", "ref_id": "BIBREF5" }, { "start": 327, "end": 355, "text": "Marcheggiani and Titov 2020)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "\u2022 Syntax parse trees can strengthen language representations (Johansson and Nugues 2008c; Strubell et al. 2018; Kasai et al. 2019) .", "cite_spans": [ { "start": 61, "end": 89, "text": "(Johansson and Nugues 2008c;", "ref_id": "BIBREF32" }, { "start": 90, "end": 111, "text": "Strubell et al. 2018;", "ref_id": "BIBREF74" }, { "start": 112, "end": 130, "text": "Kasai et al. 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "In this paper, since the third benefit is a general improvement for downstream tasks and not limited to SRL, we explore the exploitation of the first two benefits for use in neural SRL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2." }, { "text": "To fully disclose the predicate-argument structure, typical SRL systems have to perform four subtasks step-by-step or jointly learn and predict the four targets. In order to research the role of syntax, we evaluate our systems in two separate settings: being given the predicate and not being given the predicate. 
For the first setting, our backbone models all focus only on the identification and labeling of arguments. We use the pre-identified predicate information when the predicate is provided in the corpus and adopt a sequence tagging model to perform predicate disambiguation. In the second setting, we perform predicate identification and disambiguation with a single sequence tagging model. In summary, we focus on three backbone models for argument identification and classification and feed the predicates into the models as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3." }, { "text": "We summarize and present three typical baseline models, which are based on different strategies for factorizing and modeling semantic graphs in SRL: sequence-based, tree-based, and graph-based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorization and Modeling", "sec_num": "3.1" }, { "text": "Formalization. Given a sequence of tokens X = (w_1, w_2, . . . , w_n), a span SRL graph can be defined as a collection of labeled predicate-argument pairs over these tokens:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorization and Modeling", "sec_num": "3.1" }, { "text": "S = {(p, i, j, r_s), 1 \u2264 p \u2264 n, 1 \u2264 i \u2264 j \u2264 n, r_s \u2208 R_s},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorization and Modeling", "sec_num": "3.1" }, { "text": "where each element of S represents a labeled predicate-argument pair consisting of predicate p and the argument span located between sentence fencepost positions i and j, with label r_s. A dependency SRL semantic graph for the sentence can be defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorization and Modeling", "sec_num": "3.1" }, { "text": "D = {(p, a, r_d), 1 \u2264 p \u2264 n, 1 \u2264 a \u2264 n, r_d \u2208 R_d}, where (p, a, r_d)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorization and Modeling", "sec_num": "3.1" }, { "text": "consists of a predicate (x_p), an argument (x_a), and the type of the semantic role r_d, which is in label set R_d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorization and Modeling", "sec_num": "3.1" }, { "text": "An example of sequence-based factorization: for a given predicate (marked Y in the N N N Y N N indicator row), the six-token example carries (1) dependency-style labels _ A0 _ _ A1 _ and (2) span-style labels B-A0 I-A0 I-A0 O B-A1 I-A1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Sequence-based. As shown in Figure 2 , the semantic dependency graph of SRL is decomposed by predicates. The arguments for each predicate form a label sequence in either dependency style or span style. Notably, an extra Begin-Inside-Outside (BIO) conversion step is required for span-style argument labels. This decomposition is very simple and efficient. In the baseline model of this factorization, the predicate needs to be input as a source feature, which allows the model to produce different inputs for different target argument sequences. Predicate-specific embeddings are usually used for this reason. In our previous work (He et al. 2018b; Li et al. 2018; Munir, Zhao, and Li 2021) , we presented models that recognized and classified arguments as in a sequence labeling task. The predicate-argument pairs were then constructed by performing multiple rounds of sequence labeling according to the number of predicates to obtain a final semantic graph. In these models, the identification and classification of predicates and the recognition and classification of arguments are separated into two processes. Formally, the model first identifies the predicates (if not given) and obtains a predicate set P = {p_1, p_2, . . . , p_m}. Then, for each p_i \u2208 P, a sequence labeling model is adopted to predict the argument label of each token:", "cite_spans": [ { "start": 631, "end": 648, "text": "(He et al. 2018b;", "ref_id": "BIBREF28" }, { "start": 649, "end": 664, "text": "Li et al. 2018;", "ref_id": "BIBREF44" }, { "start": 665, "end": 690, "text": "Munir, Zhao, and Li 2021)", "ref_id": "BIBREF58" } ], "ref_spans": [ { "start": 28, "end": 36, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "r_1, r_2, . . . , r_n = arg max_{r \u2208 R} P(r | w_1, w_2, . . . , w_n; p_i; \u03b8),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "where \u03b8 represents the model parameters, r and R stand for r_s and R_s in span SRL or r_d and R_d in dependency SRL, and the empty label \u03c6 (\u03c6 = null in dependency SRL, \u03c6 = O in span SRL) is used to indicate non-arguments. In dependency SRL, a and r_d are obtained after removing the empty labels, while in span SRL, after removing the empty labels, the BIO-converted labels are decoded to recover the start and end positions i and j and the span's role label r_s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }
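, { "text": "As an illustration of the BIO conversion step described above, the following minimal Python sketch converts span-style arguments to a BIO label sequence and back; the function names and the inclusive, 0-based span indices are our own illustrative assumptions rather than part of any released implementation.\n\ndef spans_to_bio(n_tokens, arguments):\n    # arguments: list of (start, end, role) with inclusive token indices\n    labels = ['O'] * n_tokens\n    for start, end, role in arguments:\n        labels[start] = 'B-' + role\n        for t in range(start + 1, end + 1):\n            labels[t] = 'I-' + role\n    return labels\n\ndef bio_to_spans(labels):\n    # recover (start, end, role) triples from a BIO sequence\n    spans, start, role = [], None, None\n    for t, tag in enumerate(labels + ['O']):  # sentinel closes open spans\n        if tag == 'O' or tag.startswith('B-'):\n            if role is not None:\n                spans.append((start, t - 1, role))\n                start, role = None, None\n            if tag.startswith('B-'):\n                start, role = t, tag[2:]\n        # 'I-' tags simply extend the currently open span\n    return spans\n\n# spans_to_bio(6, [(0, 2, 'A0'), (4, 5, 'A1')]) yields the Figure 2 example:\n# ['B-A0', 'I-A0', 'I-A0', 'O', 'B-A1', 'I-A1']", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }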
, { "text": "Tree-based. Differentiating embeddings by relying on predicate-indicating inputs imposes only a soft constraint and prompt. This feature may be lost due to forgetting mechanisms such as dropout in the encoder, which potentially limits SRL model performance. To further help integrate predicate clues, the tree-based method decomposes the semantic dependency graph into trees with a depth of 2, in which the predicate is the child node of ROOT and all other nodes are child nodes of the predicate, as shown in Figure 3 . An empty relation \u03c6 = null is set between the non-arguments and the predicate in order to fill the tree.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "An example of tree-based factorization, with panels (1) dependency-style and (2) span-style: every token attaches to the predicate P, carrying a role label (null, A0, A1) in the dependency style or a BIO label (B-A0, I-A0, O, B-A1, I-A1) in the span style, and P itself attaches to ROOT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "The tree-based factorization can be thought of as an enhanced version of the sequence-based factorization, as the predicate is more prominent and easier to specify. To emphasize the given predicate being handled, predicate-specific embeddings are also applied. In our previous work (Cai et al. 2018) , the identification and classification of predicates and the recognition and classification of arguments are still viewed as two separate processes. Predicate-argument pairs for each identified predicate were scored using the following equation, which follows dependency parsers' head-dependent scoring model rather than scoring the likelihood of a position being an argument:", "cite_spans": [ { "start": 405, "end": 422, "text": "(Cai et al. 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "r_1, r_2, . . . , r_n = arg max_{r \u2208 R} P(r | {w_1, w_2, . . . , w_n} \u2297 p_i; \u03b8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "In tree-based modeling, progressive decoding is performed to output all possible arguments for each predicate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }
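, { "text": "For concreteness, the following minimal sketch builds the depth-2 tree of this factorization for one predicate; the nested-dict encoding is our own illustration, not the paper's data structure.\n\ndef to_depth2_tree(n_tokens, predicate, arguments):\n    # arguments: {token_index: role_label} for this predicate;\n    # unannotated tokens receive the empty relation 'null'\n    tree = {'ROOT': [predicate]}\n    tree[predicate] = [\n        (t, arguments.get(t, 'null'))\n        for t in range(n_tokens) if t != predicate\n    ]\n    return tree\n\n# to_depth2_tree(6, 3, {0: 'A0', 4: 'A1'}) attaches tokens 0-2 and 4-5 to\n# predicate 3, labeled A0/A1 where annotated and 'null' elsewhere.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }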
, { "text": "An example of graph-based factorization, with panels (1) dependency-style and (2) span-style: predicates and their arguments (words or argument spans) are connected directly by arcs labeled with roles such as A0-A3. We omit the dashed line between non-predicates and non-arguments, i.e., the empty relation null, here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "Graph-based. Sequence-based and tree-based models score the (argument, label) tuple structure for a determined predicate. The graph-based method further extends this mode; specifically, it accommodates undetermined predicates and models the semantic dependency graph directly to output a (predicate, argument, label) triple structure, allowing the model to label multiple predicates and their arguments at the same time (as shown in Figure 4 ). This mode not only handles instances without given predicates but also allows instances with given predicates to be enhanced by predicate-specific embeddings. Using the dependency style, the graph-based method is a trivial extension of the tree-based method. The span style is not so simple because of its argument structure. To account for this, graph-based models enumerate and sort all possible spans, take them as candidate arguments, and score them against the candidate set of predicates. In our previous work (Li et al. 2019a , 2020) , we considered the predicates and arguments jointly and explicitly scored the predicate-argument pairs before classifying their relationships. This modeling objective can be represented as:", "cite_spans": [ { "start": 947, "end": 963, "text": "(Li et al. 2019a", "ref_id": "BIBREF45" }, { "start": 964, "end": 981, "text": "(Li et al. , 2020", "ref_id": "BIBREF47" } ], "ref_spans": [ { "start": 429, "end": 437, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Figure 4", "sec_num": null }
, 2020", "ref_id": "BIBREF47" } ], "ref_spans": [ { "start": 429, "end": 437, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "{(p, a, r)} = arg max p\u2208P,a\u2208A,r\u2208R (P((p, a, r)|w 1 , w 2 , . . . , w n ; \u03b8))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "where P = {w 1 , w 2 , . . . , w n } is the set of all predicate candidates, which is used in graph-based modeling instead of relying on the predictions of an additional predicate identification model. In dependency SRL, the argument candidates set A consists of all words in the sentence, A = {w 1 , w 2 , . . . , w n }, and in span SRL, A consists of all possible spans,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "A = {(w i , w j ), 1 \u2264 i \u2264 j \u2264 n}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "The above methods cover most mainstream neural SRL models based on semantic dependency graph modeling to the best of our knowledge. There are some modeling approaches, such as transition-based SRL (Choi and Palmer 2011; Fei et al. 2021) , that are not based on semantic dependency graphs and hence not the focus of this paper. In the sequence-based and tree-based methods, the BIO conversion is adopted when using span-style, and some works use Conditional Random Fields (CRFs) to model this constraint.", "cite_spans": [ { "start": 220, "end": 236, "text": "Fei et al. 2021)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "This subsection presents basic neural SRL models under the three previous aforementioned methods. In order to make fair comparisons with our experiments, we make the architectures of these models as similar as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Implementation", "sec_num": "3.2" }, { "text": "Word Representation. We produce a predicate-specific word representation e i for each word w i in the sequence w = {w 1 , \u2022 \u2022 \u2022 , w n }, where i stands for the word position in an input sequence, and n is the length of this sequence, following Marcheggiani, Frolov, and Titov (2017) . In this work, word representation e i is the concatenation of four types of features: a predicate-specific feature and character-level, word-level, and linguistic features. Since previous works demonstrated that the predicate-specific feature is helpful in promoting the role labeling process, we leverage a predicate-specific indicator embedding e ie i to indicate whether a word is a predicate when predicting and labeling the arguments for each given predicate. At the character level, we exploit a convolutional neural network (CNN) with a bidirectional LSTM (BiLSTM) to learn character embedding e ce i . As shown in Figure 5 , the representation calculated by the CNN is fed as input to the BiLSTM. At the word level, we use a randomly initialized word embedding e re i and a pre-trained word embedding e ", "cite_spans": [ { "start": 244, "end": 282, "text": "Marcheggiani, Frolov, and Titov (2017)", "ref_id": "BIBREF52" } ], "ref_spans": [ { "start": 907, "end": 915, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Baseline Implementation", "sec_num": "3.2" }, { "text": "The sequence-based argument labeling baseline model. 
, { "text": "The above methods cover, to the best of our knowledge, most mainstream neural SRL models based on semantic dependency graph modeling. There are some modeling approaches, such as transition-based SRL (Choi and Palmer 2011; Fei et al. 2021) , that are not based on semantic dependency graphs and hence are not the focus of this paper. In the sequence-based and tree-based methods, the BIO conversion is adopted when using span-style labels, and some works use Conditional Random Fields (CRFs) to model this constraint.", "cite_spans": [ { "start": 220, "end": 236, "text": "Fei et al. 2021)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "This subsection presents basic neural SRL models under the three aforementioned methods. To make fair comparisons in our experiments, we keep the architectures of these models as similar as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Implementation", "sec_num": "3.2" }, { "text": "Word Representation. We produce a predicate-specific word representation e_i for each word w_i in the sequence w = {w_1, \u2022 \u2022 \u2022 , w_n}, where i stands for the word position in an input sequence and n is the length of this sequence, following Marcheggiani, Frolov, and Titov (2017) . In this work, the word representation e_i is the concatenation of four types of features: a predicate-specific feature and character-level, word-level, and linguistic features. Since previous works demonstrated that the predicate-specific feature is helpful in promoting the role labeling process, we leverage a predicate-specific indicator embedding e^{ie}_i to indicate whether a word is a predicate when predicting and labeling the arguments for each given predicate. At the character level, we exploit a convolutional neural network (CNN) with a bidirectional LSTM (BiLSTM) to learn a character embedding e^{ce}_i. As shown in Figure 5 , the representation calculated by the CNN is fed as input to the BiLSTM. At the word level, we use a randomly initialized word embedding e^{re}_i and a pre-trained word embedding e^{pe}_i.", "cite_spans": [ { "start": 244, "end": 282, "text": "Marcheggiani, Frolov, and Titov (2017)", "ref_id": "BIBREF52" } ], "ref_spans": [ { "start": 907, "end": 915, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Baseline Implementation", "sec_num": "3.2" }, { "text": "The sequence-based argument labeling baseline model. Notably, the Word Representation and Softmax parts are specific to a single input word/output prediction, and the BiLSTM Encoder and Hidden Layer parts are used across all time steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "As linguistic features, we employ a randomly initialized lemma embedding e^{le}_i and a randomly initialized POS tag embedding e^{pos}_i. Sequence Encoder. As Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber 1997) , specifically BiLSTMs, have shown significant representational effectiveness for NLP tasks (Sutskever, Vinyals, and Le 2014; Vinyals et al. 2015) , we use a BiLSTM as the sentence encoder. Given a sequence of word representations x = {e_1, e_2, \u2022 \u2022 \u2022 , e_n} as input, the i-th hidden state g_i is encoded as follows:", "cite_spans": [ { "start": 163, "end": 196, "text": "(Hochreiter and Schmidhuber 1997)", "ref_id": "BIBREF30" }, { "start": 289, "end": 322, "text": "(Sutskever, Vinyals, and Le 2014;", "ref_id": "BIBREF76" }, { "start": 323, "end": 343, "text": "Vinyals et al. 2015)", "ref_id": "BIBREF80" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "g^f_i = LSTM^F(e_i, g^f_{i-1}), g^b_i = LSTM^B(e_i, g^b_{i+1}), g_i = g^f_i \u2295 g^b_i,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "where LSTM^F denotes the forward LSTM transformation and LSTM^B denotes the backward LSTM transformation. g^f_i and g^b_i are the hidden state vectors of the forward LSTM and backward LSTM, respectively. Specifically, we initialize the hidden states g_0 and g_{n+1} as zero tensors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }
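, { "text": "A minimal PyTorch sketch of this encoder is given below: g_i is the concatenation of the forward and backward hidden states, and the boundary states default to zeros, matching the initialization above. Dimensions are illustrative, not our actual hyperparameters.\n\nimport torch\nimport torch.nn as nn\n\nclass BiLSTMEncoder(nn.Module):\n    def __init__(self, input_dim=300, hidden_dim=256, layers=1):\n        super().__init__()\n        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=layers,\n                            batch_first=True, bidirectional=True)\n\n    def forward(self, e):       # e: (batch, n, input_dim)\n        g, _ = self.lstm(e)      # g: (batch, n, 2 * hidden_dim)\n        return g                 # g[i] = forward_i concat backward_i\n\n# g = BiLSTMEncoder()(torch.randn(2, 10, 300))  # shape (2, 10, 512)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }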
, { "text": "Scorer in the Sequence-based Model. In the sequence-based model, namely, the sequence tagging model, stacked multilayer perceptron (MLP) layers are usually exploited on top of the BiLSTM network to obtain the final predicted semantic roles; they take as input the hidden representation h_i at each time step and employ ReLU activations between the hidden layers. Finally, a softmax layer is used over the outputs to maximize the likelihood of the labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "Scorer in the Tree-based Model. As in the sequence-based model, to predict and label arguments for a given predicate, a role classifier is employed on top of the BiLSTM encoder. Some work, such as Marcheggiani, Frolov, and Titov (2017) , shows that incorporating the predicate's hidden state in the role classifier enhances model performance, while we argue that a more natural way to incorporate the information carried by the predicate is to use an attentional mechanism. We adopt the recently introduced biaffine attention (Dozat and Manning 2017) to enhance our role scorer. Biaffine attention is a natural extension of bilinear attention (Luong, Pham, and Manning 2015) , which is widely used in neural machine translation (NMT).", "cite_spans": [ { "start": 193, "end": 231, "text": "Marcheggiani, Frolov, and Titov (2017)", "ref_id": "BIBREF52" }, { "start": 654, "end": 685, "text": "(Luong, Pham, and Manning 2015)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "Nonlinear Affine Transformation. Usually, the concatenation g_i of the forward and backward hidden state vectors is taken as the encoder output for each time step; however, in the SRL context, the encoder is supposed to distinguish the currently considered predicate from its candidate arguments. As noted in Dozat and Manning (2017) , applying an MLP to the recurrent output states before the classifier has the advantage of stripping away information irrelevant to the current decision. Therefore, to distinguish the currently considered predicate from its candidate arguments in the SRL context, we perform two distinct affine transformations with a nonlinear activation on the hidden state g_i, mapping it to vectors of smaller dimensionality:", "cite_spans": [ { "start": 293, "end": 317, "text": "Dozat and Manning (2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "h^{(pred)}_i = ReLU(W^{(pred)} g_i + b^{(pred)}), h^{(arg)}_i = ReLU(W^{(arg)} g_i + b^{(arg)}),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "where ReLU is the rectified linear activation function (Nair and Hinton 2010) , h^{(pred)}_i is the hidden representation for the predicate, and h^{(arg)}_i is the hidden representation for the candidate arguments.", "cite_spans": [ { "start": 50, "end": 72, "text": "(Nair and Hinton 2010)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "By performing such transformations over the encoder output before feeding the scorer, the scorer may benefit from deeper feature extraction. This leads to two benefits. First, instead of keeping both features learned by the two distinct LSTMs, the scorer is now ideally able to learn features composed from both recurrent states with reduced dimensionality. Second, it provides the ability to map the predicates and the arguments into two distinct vector spaces, which is essential for our tasks, since some words can be labeled as predicates and arguments simultaneously. Mapping a word into two different vectors can help the model disambiguate its role in different contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "Biaffine Scoring. In the standard NMT context, given a target recurrent output vector h^{(t)}_i and a source vector h^{(s)}_j, a bilinear transformation calculates a score s_{ij} for the alignment:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "s_{ij} = h^{(t)\u22a4}_i W h^{(s)}_j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "However, in a traditional classification task, the distribution of classes is often uneven, and the output layer of the model normally includes a bias term designed to capture the prior probability P(y_i = c) of each class, with the rest of the model focusing on learning the likelihood of each class given the data P(y_i = c | x_i). Dozat and Manning (2017) incorporated bias terms into the bilinear attention to address this unevenness, resulting in a biaffine transformation, a natural extension of the bilinear transformation and the affine transformation. 
In the SRL task, the distribution of the role labels is similarly uneven, and the problem worsens after introducing the additional ROOT node and null label; directly applying the primitive form of bilinear attention would fail to capture the prior probability P(y_i = c_k) of each class. Thus, introducing biaffine attention into our model is extremely helpful for semantic role prediction.", "cite_spans": [ { "start": 333, "end": 357, "text": "Dozat and Manning (2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "It is worth noting that in our model, the scorer aims to assign a score to each specific semantic role. Besides learning the prior distribution for each label, we wish to further capture the preferences for the label that a specific predicate-argument pair can take. Thus, our biaffine attention contains two distinct bias terms:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s_{i,j} = \\mathrm{Biaffine}(h_j^{(pred)}, h_i^{(arg)}) = {h_i^{(arg)}}^{\\top} W^{(role)} h_j^{(pred)} \\; (1) \\; + \\; U^{(role)} (h_i^{(arg)} \\oplus h_j^{(pred)}) \\; (2) \\; + \\; b^{(role)}", "eq_num": "(3)" } ], "section": "Figure 5", "sec_num": null }, { "text": "where W^{(role)}, U^{(role)}, and b^{(role)} are parameters that will be updated by gradient descent in the learning process. There are several points that should be noted in the above biaffine transformation. First, because our goal is to predict the label for each pair of h^{(arg)}_i and h^{(pred)}_j, the output of our biaffine transformation should be a vector of dimensionality N_r instead of a real value, where N_r is the number of all candidate semantic labels. Thus, the bilinear transformation in Equation (1) maps two input vectors into another vector; this can be accomplished by setting W^{(role)} as a (d_h \u00d7 N_r \u00d7 d_h) tensor, where d_h is the dimensionality of the hidden state vectors. Similarly, the output of the linear transformation in Equation (2) is also a vector, obtained by setting U^{(role)} as an (N_r \u00d7 2d_h) matrix. Second, Equation (2) captures the preference of each role (or sense) label conditioned on taking the j-th word as a predicate and the i-th word as an argument. Third, the last term b^{(role)} captures the prior probability of each class P(y_i = c_k). Notice that Equations (2) and (3) capture different kinds of bias for the latent distribution of the label set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }
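, { "text": "A minimal PyTorch sketch of this biaffine role scorer, including the two ReLU affine projections, is given below; dimensions and initialization are illustrative assumptions, not our actual configuration.\n\nimport torch\nimport torch.nn as nn\n\nclass BiaffineScorer(nn.Module):\n    def __init__(self, enc_dim=512, hid=256, n_roles=40):\n        super().__init__()\n        self.mlp_pred = nn.Sequential(nn.Linear(enc_dim, hid), nn.ReLU())\n        self.mlp_arg = nn.Sequential(nn.Linear(enc_dim, hid), nn.ReLU())\n        self.W = nn.Parameter(torch.randn(hid, n_roles, hid) * 0.01)  # W^(role)\n        self.U = nn.Linear(2 * hid, n_roles, bias=False)             # U^(role)\n        self.b = nn.Parameter(torch.zeros(n_roles))                   # b^(role)\n\n    def forward(self, g_arg, g_pred):  # encoder states for one (i, j) pair\n        h_arg, h_pred = self.mlp_arg(g_arg), self.mlp_pred(g_pred)\n        bilinear = torch.einsum('i,irj,j->r', h_arg, self.W, h_pred)   # (1)\n        affine = self.U(torch.cat([h_arg, h_pred]))                    # (2)\n        return bilinear + affine + self.b                              # (3)\n\n# scores = BiaffineScorer()(torch.randn(512), torch.randn(512))  # (40,)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }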
, { "text": "Given a sentence of length n, for one of its predicates w_j, the scorer outputs a score vector {s_{1,j}, s_{2,j}, \u2022 \u2022 \u2022 , s_{n,j}}. Then, our model picks as its output the label with the highest score from each score vector: y_{i,j} = arg max_{1 \u2264 k \u2264 N_r} s_{i,j}[k], where s_{i,j}[k] denotes the score of the k-th candidate in the semantic label vocabulary of size N_r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "Scorer in the Graph-based Model. As in the scorer of the tree-based model (from the full model shown in Figure 6 ), the graph-based model (shown in Figure 7 ) also uses the biaffine scorer to score the predicate-argument structure. Similarly, we also use a nonlinear affine transformation on top of the BiLSTM encoder. In the sequence-based and tree-based models, dependency- and span-style arguments are converted into a consistent label sequence, while the graph-based model treats arguments as independent graph nodes. In order to unify the two styles of models, we introduce a unified argument representation that can handle both styles of SRL tasks.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 112, "text": "Figure 6", "ref_id": null }, { "start": 148, "end": 156, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "The tree-based argument labeling baseline model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "The graph-based argument labeling baseline model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "In the sentence w_1, w_2, . . . , w_n, the model aims to predict a set of predicate-argument-relation tuples Y \u2286 P \u00d7 A \u00d7 R, where P = {w_1, w_2, . . . , w_n} is the set of all possible predicate tokens, A = {(w_i, . . . , w_j) | 1 \u2264 i \u2264 j \u2264 n} includes all the candidate argument spans or dependencies, 4 and R is the set of the semantic roles. For dependency SRL, we assume single-word argument spans and thus limit the length of candidate arguments to 1, so our model uses the token representation to construct the final argument representation h^{(arg)} directly. For span SRL, we utilize the span representation from prior work. Each candidate span representation h^{(arg)} is built by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }
, { "text": "h^{(arg)} = [h^{(arg)}_{START}, h^{(arg)}_{END}, h_{\u03bb}, size(\u03bb)],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "4 When i = j, span reduces to dependency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "where h^{(arg)}_{START} and h^{(arg)}_{END} are boundary representations, \u03bb indicates a span, size(\u03bb) is a feature vector encoding the size of the span, and h_{\u03bb} is the specific notion of headedness learned by the attention mechanism (Bahdanau, Cho, and Bengio 2015) over the words in each span (where t is a position inside the span) as follows:", "cite_spans": [ { "start": 217, "end": 249, "text": "(Bahdanau, Cho, and Bengio 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "540", "sec_num": null }, { "text": "\u03bc^a_t = w_{attn} \u00b7 MLP_{attn}(h^{(arg)}_t), \u03bd_t = exp(\u03bc^a_t) / \u03a3^{END}_{k=START} exp(\u03bc^a_k), h_{\u03bb} = \u03a3^{END}_{t=START} \u03bd_t \u00b7 h^{(arg)}_t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "540", "sec_num": null }
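, { "text": "A minimal PyTorch sketch of this span representation is given below, combining the boundary states, the attention-weighted head h_{\u03bb}, and a span-size embedding; the module layout and dimensions are illustrative assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass SpanRepr(nn.Module):\n    def __init__(self, dim=512, attn_hid=128, max_width=30, size_dim=20):\n        super().__init__()\n        self.mlp_attn = nn.Sequential(nn.Linear(dim, attn_hid), nn.ReLU())\n        self.w_attn = nn.Linear(attn_hid, 1, bias=False)\n        self.size_emb = nn.Embedding(max_width + 1, size_dim)\n        self.max_width = max_width\n\n    def forward(self, h, start, end):   # h: (n, dim), inclusive boundaries\n        span = h[start:end + 1]\n        mu = self.w_attn(self.mlp_attn(span)).squeeze(-1)  # scores mu^a_t\n        nu = torch.softmax(mu, dim=0)                      # weights nu_t\n        h_lam = nu @ span                                  # headedness h_lambda\n        width = min(end - start + 1, self.max_width)\n        size = self.size_emb(torch.tensor(width))\n        return torch.cat([h[start], h[end], h_lam, size])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "540", "sec_num": null }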
(2018a)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "540", "sec_num": null }, { "text": "\u03c6 p = w p MLP s p (g p ), \u03c6 a = w a MLP s a (g a f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "540", "sec_num": null }, { "text": "After pruning, we also adopt the biaffine scorer as in the tree-based models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "540", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a6 r (p, a) = Biaffine(h (pred) , h (arg) )", "eq_num": "(4)" } ], "section": "540", "sec_num": null }, { "text": "In this section, we present two types of syntax utilization: syntax-based argument pruning and syntax feature integration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Utilization", "sec_num": "4." }, { "text": "The k-order argument pruning algorithm. Input: A predicate p, the root node r given a syntactic dependency tree T, the order k Output: The set of argument candidates S 1: initialization set p as current node c, c = p 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1", "sec_num": null }, { "text": "for each descendant n i of c in T do 3: if D(c, n i ) \u2264 k and n i / \u2208 S then 4: S = S + n i 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1", "sec_num": null }, { "text": "end if 6: end for 7: find the syntactic head c h of c, and let c = c h 8: if c = r then 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1", "sec_num": null }, { "text": "S = S + r 10: else 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1", "sec_num": null }, { "text": "goto step 2 12: end if 13: return argument candidates set S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1", "sec_num": null }, { "text": "Hard Pruning. 5 The argument structure for each known predicate will be discovered by our argument labeler using the possible arguments (candidates) set. Most SRL works (Xue and Palmer 2004; Zhao and Kit 2008) in the pre-NN era selected words surrounding the predicate word in a syntactic parse tree and pruned these words. We refer to this strategy as hard pruning. In the NN model, we can also borrow this hard pruning strategy to enhance the SRL baseline, and it is one way of using syntax information. Specifically, before inputting to the model, we use the argument pruning algorithm to get a filtered sequence w f = {w 1 , . . . , w f } for each predicate. Then, we replace the original sequence with this one and input it to the SRL model.", "cite_spans": [ { "start": 169, "end": 190, "text": "(Xue and Palmer 2004;", "ref_id": "BIBREF82" }, { "start": 191, "end": 209, "text": "Zhao and Kit 2008)", "ref_id": "BIBREF89" } ], "ref_spans": [], "eq_spans": [], "section": "Syntax-based Argument Pruning", "sec_num": "4.1" }, { "text": "As noted by Punyakanok, Roth, and Yih (2008) , syntactic information is most relevant in identifying the arguments, and the most crucial contribution of full parsing is in the pruning stage. In this paper, we propose a k-order argument hard pruning algorithm inspired by . First, for node n and its descendant n d in a syntactic dependency tree, we define the order to be the distance between the two nodes, denoted as D(n, n d ). 
, { "text": "An example of first-order, second-order, and third-order argument pruning. The shaded part indicates the given predicate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "An example of a syntactic dependency tree for the sentence She began to trade the art for money is shown in Figure 8 . The main reasons for applying the extended k-order argument pruning algorithm are two-fold. First, previous standard pruning algorithms may impede argument coverage too much, even though arguments usually do tend to surround their predicates at a close distance. Because a sequence tagging model is applied, the algorithm can effectively handle the imbalanced distribution between arguments and non-arguments, which would be poorly handled by the early argument classification models that commonly adopted the standard pruning algorithm. Second, the extended pruning algorithm provides a better trade-off between computational cost and performance by carefully tuning k.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Soft Pruning. For word pair classification modeling, one major performance bottleneck is caused by unbalanced data. This is especially pertinent for SRL, where more than 90% of argument candidates are non-arguments. The syntax-based hard pruning methods were thus proposed to alleviate the imbalanced distribution; however, they do not extend well to other baselines and languages and even hinder syntax-agnostic SRL models, as Cai et al. (2018) demonstrated using different k values on English. This hindrance might arise because hard pruning breaks up the whole sentence, leading the BiLSTM encoder to take an incomplete sentence as input and fail to learn the sentence representation sufficiently.", "cite_spans": [ { "start": 428, "end": 445, "text": "Cai et al. (2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "To alleviate such a drawback of the previous syntax-based pruning methods, we propose a novel pruning rule extraction method based on syntactic parse trees that generally suits diverse baselines at the same time. 
In detail, we add an argument pruning layer guided by syntactic rules following the BiLSTM layers, which can absorb the syntactic clues simply and effectively. Syntactic Rule. All arguments are specific to a particular predicate. Researchers have found that in syntax trees, the distance between predicates and their arguments generally falls within a certain range for each language; in other words, the arguments of a predicate are typically close to their predicate in their syntactic parse tree (Xue and Palmer 2004; Zhao and Kit 2008; He et al. 2018b; He, Li, and Zhao 2019) . Therefore, we introduce a language-specific rule based on syntactic dependency parses to prune unlikely arguments. We call this rule the syntactic rule. Specifically, given a predicate p and a candidate argument a, we define d_p and d_a to be the distances from p and a, respectively, to their nearest common ancestor node (namely, the root of the minimal subtree that includes both p and a). For example, 0 denotes that the predicate or argument itself is their nearest common ancestor, while 1 indicates that their nearest common ancestor is the parent of the predicate or argument. Then, we use the distance tuple (d_p, d_a) as their relative position representation inside the parse tree. Finally, we make a list of all tuples ordered according to how many times each distance tuple occurs in the training data, which is counted for each language independently.", "cite_spans": [ { "start": 710, "end": 731, "text": "(Xue and Palmer 2004;", "ref_id": "BIBREF82" }, { "start": 732, "end": 750, "text": "Zhao and Kit 2008;", "ref_id": "BIBREF89" }, { "start": 751, "end": 767, "text": "He et al. 2018b;", "ref_id": "BIBREF28" }, { "start": 768, "end": 790, "text": "He, Li, and Zhao 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "It is worth noting that our syntactic rule is determined by the top-k most frequent distance tuples. During training and inference, the syntactic rule takes effect by excluding all candidate arguments whose predicate-argument relative positions in the parse tree are not in the list of top-k frequent tuples. Figure 9 shows simplified examples of a syntactic dependency tree. Given the English sentence in Figure 9 (a), the current predicate is likes, whose arguments are cat and fish. For likes and cat, the predicate (likes) is their common ancestor (denoted as Root_{arg}) according to the syntax tree. Therefore, the relative position representation of the predicate and argument is (0, 1), and it is the same for likes and fish. As for the right side of Figure 9 , suppose the marked predicate has two arguments, arg1 and arg2. The common ancestors of the predicate and its arguments are, respectively, Root_{arg1} and Root_{arg2}. In this case, the relative position representations are (0, 1) and (1, 2).", "cite_spans": [], "ref_spans": [ { "start": 304, "end": 312, "text": "Figure 9", "ref_id": null }, { "start": 400, "end": 408, "text": "Figure 9", "ref_id": null }, { "start": 752, "end": 760, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "Figure 8", "sec_num": null }
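, { "text": "A minimal Python sketch of this rule extraction is given below: it computes the (d_p, d_a) relative position of a predicate-argument pair in the dependency tree and keeps the top-k most frequent tuples observed in training. The head-array tree encoding (with -1 marking the root) is our own illustrative assumption.\n\nfrom collections import Counter\n\ndef ancestors(head, node):\n    path = [node]                      # node, its head, ..., the root\n    while head[path[-1]] >= 0:\n        path.append(head[path[-1]])\n    return path\n\ndef relative_position(head, p, a):\n    anc_p, anc_a = ancestors(head, p), ancestors(head, a)\n    common = set(anc_p) & set(anc_a)\n    d_p = min(i for i, x in enumerate(anc_p) if x in common)\n    d_a = min(i for i, x in enumerate(anc_a) if x in common)\n    return d_p, d_a                    # 0: the node itself is the ancestor\n\ndef extract_rule(training_pairs, k):\n    # training_pairs: iterable of (head, predicate, argument) instances\n    counts = Counter(relative_position(h, p, a) for h, p, a in training_pairs)\n    return {tup for tup, _ in counts.most_common(k)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }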
Our soft pruning strategy is straightforward. In the argument pruning layer, our model drops those candidate arguments (more precisely, their BiLSTM representations) that do not comply with the syntactic rule. In other words, only the predicates and arguments that satisfy the syntactic rule are passed to the next layer. Notably, whereas hard pruning removes some of the words from each sentence and tasks the model with processing an incomplete sentence, soft pruning gives the model the full original sentence and applies a mask instead of discarding part of the inputs. While we use a \"hard\" 0/1 binary mask for our \"soft\" pruning, this step can also be softened to other preset probabilities, such as 0.1/0.9, so that the pruned parts can still pass some information. We leave this as an exploration for future work.", "cite_spans": [ { "start": 189, "end": 210, "text": "(Xue and Palmer 2004;", "ref_id": "BIBREF82" }, { "start": 211, "end": 229, "text": "Zhao et al. 2009a;", "ref_id": "BIBREF87" }, { "start": 230, "end": 245, "text": "He et al. 2018b", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Constituent Pruning. In dependency SRL, argument candidates are pruned by a heuristic search over the dependency syntax tree. Constituent syntax trees, which represent the phrasal composition of sentences, relate to span SRL differently from how dependency syntax trees relate to dependency SRL, where dependency arcs and dependency semantic relations are directly parallel. Because the argument span boundaries in span SRL coincide with phrase boundaries in a constituent syntactic tree, we adopt a new constituent-based argument pruning method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Constituency syntax breaks a sentence into constituents (i.e., phrases or spans), which naturally form a constituency tree in a top-down fashion. In contrast with a dependency syntax tree, words can only be terminals in a constituency tree, while the non-terminals are typed phrases. In span SRL, each argument corresponds to a constituent in a constituency tree, which can thus be used to generate span argument candidates given the predicates (Xue and Palmer 2004; Carreras and M\u00e0rquez 2005) . Punyakanok, Roth, and Yih (2005) showed that constituency trees offer high-quality argument boundaries.", "cite_spans": [ { "start": 457, "end": 478, "text": "(Xue and Palmer 2004;", "ref_id": "BIBREF82" }, { "start": 479, "end": 505, "text": "Carreras and M\u00e0rquez 2005)", "ref_id": "BIBREF6" }, { "start": 508, "end": 540, "text": "Punyakanok, Roth, and Yih (2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Considering that span SRL models only occasionally violate the syntactic constraints (some candidate arguments may not be constituents), we attempt to prune unlikely arguments based on these constraints, essentially ruling out candidates that are almost certainly impossible, albeit at the cost of missing some of the rare violating arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "To utilize the constituent boundaries in the constituency tree for deciding argument candidates, we extract the boundaries of every constituent c to form a set boundary_set = {(START(c), END(c))}.
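The following sketch illustrates both pruning devices of this subsection, reusing distance_tuple from the earlier sketch. The mask-based formulation, the keep_prob knob, and the nested-tuple tree format are our own assumptions for illustration, not the authors' implementation.

```python
import torch

def soft_prune(hidden, heads, predicate, rule, keep_prob=0.0):
    """Mask the BiLSTM state of every word whose (d_p, d_a) tuple is outside
    the syntactic rule; the full sentence is still encoded beforehand.
    keep_prob > 0 (e.g., 0.1) softens the 0/1 mask so that pruned words
    still pass some information."""
    mask = torch.full((hidden.size(0), 1), keep_prob)
    for a in range(hidden.size(0)):
        if distance_tuple(heads, predicate, a) in rule:
            mask[a] = 1.0
    return hidden * mask

def constituent_boundaries(tree, start=0, bset=None):
    """Collect boundary_set = {(START(c), END(c))} over a constituency tree
    given as nested (label, [children...]) tuples with word strings as leaves."""
    if bset is None:
        bset = set()
    if isinstance(tree, str):          # terminal: a single word
        return start + 1, bset
    end = start
    for child in tree[1]:
        end, _ = constituent_boundaries(child, end, bset)
    bset.add((start, end - 1))         # inclusive word-index span of constituent
    return end, bset
```

Candidate spans whose (start, end) pair is absent from the returned boundary set are then the ones dropped by the constituent-based pruning layer described next.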
We also define an argument pruning layer that drops candidate arguments whose boundaries are not in this set. It is worth noting that because span arguments are converted to BIO labels under the sequence-based and tree-based modeling approaches of span SRL, there is no explicit correspondence between the existing arguments and the constituents, so constituent-based argument pruning is not applicable to the sequence-based and tree-based modeling approaches. We only consider this syntax enhancement when using graph-based modeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "In addition to guiding argument pruning, another major use of syntactic information is to serve as a syntax-aware feature alongside the contextualized representation, thereby enhancing the argument labeler. To integrate the syntactic information into sequential neural networks, we use a syntactic encoder on top of the BiLSTM encoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "Specifically, given a syntactic dependency tree T, for each node n_k in T, let C(k) denote the set of syntactic children of n_k, H(k) denote the syntactic head of n_k, and L(k, \u2022) denote the dependency relations between node n_k and the nodes that have a direct arc from or to n_k. Then, we formulate the syntactic encoder as a transformation f_\u03c4 over the node n_k, which may take some of C(k), H(k), and L(k, \u2022) as input and computes a syntactic representation v_k for node n_k; namely, v_k = f_\u03c4(C(k), H(k), L(k, \u2022), x_k).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "When not otherwise specified, x_k denotes the input feature representation of n_k, which may be either the word representation e_k or the BiLSTM output h_k. \u03c3 denotes the logistic sigmoid function, and \u2299 denotes the element-wise multiplication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "In practice, the transformation f_\u03c4 can be any syntax encoding method. In this paper, we consider three types of syntactic encoders: the syntactic graph convolutional network (syntactic GCN), the syntax-aware LSTM (SA-LSTM), and the tree-structured LSTM (Tree-LSTM).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "Syntactic GCN. The GCN (Kipf and Welling 2017) was proposed to induce the representations of nodes in a graph based on the properties of their neighbors. Given its effectiveness, Marcheggiani and Titov (2017) introduced a generalized version for the SRL task, namely, the syntactic GCN, and showed that the syntactic GCN is effective in incorporating syntactic information into neural models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "The syntactic GCN captures syntactic information flowing in two directions: one from heads to dependents (along), and the other from dependents to heads (opposite). It also models information flowing from a node to itself; that is, it assumes that a syntactic graph contains a self-loop for each node.
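Before the formal equations that follow, a minimal PyTorch sketch of one such syntactic GCN layer may help; it instantiates f_\u03c4 with the direction-typed weights, label-specific biases, and edge-wise gates described above. The edge-list input format and class name are our own illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SyntacticGCNLayer(nn.Module):
    """Gated, direction- and label-aware aggregation over each node's
    neighborhood: its head, its dependents, and a self-loop."""
    ALONG, OPPOSITE, SELF = 0, 1, 2    # edge direction types

    def __init__(self, dim, n_labels):
        super().__init__()
        self.W = nn.ModuleList([nn.Linear(dim, dim, bias=False) for _ in range(3)])
        self.b = nn.Parameter(torch.zeros(n_labels, dim))   # label-specific bias
        self.W_g = nn.ModuleList([nn.Linear(dim, 1, bias=False) for _ in range(3)])
        self.b_g = nn.Parameter(torch.zeros(n_labels, 1))   # label-specific gate bias

    def forward(self, x, edges):
        # x: (seq_len, dim) node inputs; edges: (k, j, direction, label) tuples,
        # including one self-loop edge (k, k, SELF, label) per node.
        v = torch.zeros_like(x)
        for k, j, d, l in edges:
            u = self.W[d](x[j]) + self.b[l]                       # u_{k,j}
            g = torch.sigmoid(self.W_g[d](x[k]) + self.b_g[l])    # gate g_{k,j}
            v[k] = v[k] + g * u                                   # gated sum over N(k)
        return torch.relu(v)
```

For a dependency arc head -> dep with label l, the edge list would contain (dep, head, ALONG, l) and (head, dep, OPPOSITE, l), so information flows in both directions.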
Thus, the syntactic GCN transformation of a node n_k is defined on its neighborhood N(k) = C(k) \u222a H(k) \u222a {n_k}. For each edge that connects n_k and its neighbor n_j, we can compute a vector representation,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "u_{k,j} = W_{dir(k,j)} x_j + b_{L(k,j)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "where dir(k, j) denotes the direction type (along, opposite, or self-loop) of the edge from n_k to n_j, W_{dir(k,j)} is the direction-specific parameter, and b_{L(k,j)} is the label-specific parameter. Considering that syntactic information from all the neighboring nodes may make different contributions to semantic role labeling, the syntactic GCN introduces an additional edge-wise gate for each node pair (n_k, n_j) as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "g_{k,j} = \u03c3(W^g_{dir(k,j)} x_k + b^g_{L(k,j)}).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "The syntactic representation v_k for a node n_k can then be computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "v_k = ReLU(\u2211_{j \u2208 N(k)} g_{k,j} u_{k,j}).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "SA-LSTM. The SA-LSTM (Qian et al. 2017) is an extension of the standard BiLSTM architecture that aims to simultaneously encode the syntactic and contextual information for a given word. On the one hand, the SA-LSTM calculates the hidden state in timestep order, as does the standard LSTM,", "cite_spans": [ { "start": 21, "end": 39, "text": "(Qian et al. 2017)", "ref_id": "BIBREF70" } ], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "i_g = \u03c3(W^{(i)} x_k + U^{(i)} h_{k-1} + b^{(i)}), f_g = \u03c3(W^{(f)} x_k + U^{(f)} h_{k-1} + b^{(f)}), o_g = \u03c3(W^{(o)} x_k + U^{(o)} h_{k-1} + b^{(o)}), u = f(W^{(u)} x_k + U^{(u)} h_{k-1} + b^{(u)}), c_k = i_g \u2299 u + f_g \u2299 c_{k-1}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "On the other hand, it further incorporates the syntactic information into the representation of each word by introducing an additional gate,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "s_g = \u03c3(W^{(s)} x_k + U^{(s)} h_{k-1} + b^{(s)}), h_k = o_g \u2299 f(c_k) + s_g \u2299 h\u0303_k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Feature Integration", "sec_num": "4.2" }, { "text": "where f(\u2022) and \u03c3(\u2022) represent the tanh and sigmoid activation functions, and h\u0303_k = f(\u2211_{t_j