{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:13:19.066121Z" }, "title": "High-order Refining for End-to-end Chinese Semantic Role Labeling", "authors": [ { "first": "Hao", "middle": [], "last": "Fei", "suffix": "", "affiliation": { "laboratory": "", "institution": "Wuhan University", "location": { "country": "China" } }, "email": "hao.fei@whu.edu.cn" }, { "first": "Yafeng", "middle": [], "last": "Ren", "suffix": "", "affiliation": { "laboratory": "", "institution": "Guangdong University of Foreign Studies", "location": { "country": "China" } }, "email": "renyafeng@whu.edu.cn" }, { "first": "Donghong", "middle": [], "last": "Ji", "suffix": "", "affiliation": { "laboratory": "", "institution": "Wuhan University", "location": { "country": "China" } }, "email": "dhji@whu.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Current end-to-end semantic role labeling is mostly accomplished via graph-based neural models. However, these are all first-order models, where each decision for detecting a predicate-argument pair is made in isolation with local features. In this paper, we present a high-order refining mechanism that performs interaction between all predicate-argument pairs. Based on the baseline graph model, our high-order refining module learns higher-order features between all candidate pairs via attention calculation, which are then used to update the original token representations. After several iterations of refinement, the underlying token representations are enriched with globally interacted features. Our high-order model achieves state-of-the-art results on Chinese SRL data, including CoNLL09 and the Universal Proposition Bank, while relieving long-range dependency issues.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Current end-to-end semantic role labeling is mostly accomplished via graph-based neural models. 
However, these are all first-order models, where each decision for detecting a predicate-argument pair is made in isolation with local features. In this paper, we present a high-order refining mechanism that performs interaction between all predicate-argument pairs. Based on the baseline graph model, our high-order refining module learns higher-order features between all candidate pairs via attention calculation, which are then used to update the original token representations. After several iterations of refinement, the underlying token representations are enriched with globally interacted features. Our high-order model achieves state-of-the-art results on Chinese SRL data, including CoNLL09 and the Universal Proposition Bank, while relieving long-range dependency issues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic role labeling (SRL), a shallow semantic parsing task that aims to detect the semantic predicates and their argument roles in text, plays a core role in the natural language processing (NLP) community (Pradhan et al., 2005; Zhao et al., 2009; Lei et al., 2015; Xia et al., 2019b). SRL has traditionally been handled in two pipeline steps: predicate identification (Scheible, 2010) and argument role labeling (Pradhan et al., 2005). 
More recently, growing interest has been devoted to developing end-to-end SRL, accomplishing both subtasks, i.e., jointly recognizing all possible predicates and their arguments, with one single model (He et al., 2018a).", "cite_spans": [ { "start": 203, "end": 225, "text": "(Pradhan et al., 2005;", "ref_id": "BIBREF20" }, { "start": 226, "end": 244, "text": "Zhao et al., 2009;", "ref_id": "BIBREF28" }, { "start": 245, "end": 262, "text": "Lei et al., 2015;", "ref_id": "BIBREF15" }, { "start": 263, "end": 281, "text": "Xia et al., 2019b)", "ref_id": "BIBREF26" }, { "start": 361, "end": 377, "text": "(Scheible, 2010)", "ref_id": "BIBREF23" }, { "start": 405, "end": 427, "text": "(Pradhan et al., 2005)", "ref_id": "BIBREF20" }, { "start": 634, "end": 652, "text": "(He et al., 2018a)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The end-to-end joint architecture can greatly alleviate the error propagation problem, thus helping to achieve better task performance. Currently, end-to-end SRL methods are largely graph-based neural models, which enumerate all possible predicates and their arguments exhaustively (He et al., 2018a; Cai et al., 2018; Li et al., 2019). However, these first-order models, which only consider one predicate-argument pair at a time, are limited to short-term features and local decisions, and are thus susceptible to long-range dependency issues when there are large surface distances between arguments (Chen et al., 2019). 
This makes it imperative to capture the global interactions between multiple predicates and arguments.", "cite_spans": [ { "start": 282, "end": 300, "text": "(He et al., 2018a;", "ref_id": "BIBREF12" }, { "start": 301, "end": 318, "text": "Cai et al., 2018;", "ref_id": "BIBREF3" }, { "start": 319, "end": 335, "text": "Li et al., 2019)", "ref_id": "BIBREF16" }, { "start": 595, "end": 614, "text": "(Chen et al., 2019;", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, based on the graph-based model architecture, we propose to further learn the higher-order interaction between all predicate-argument pairs by iteratively refining the underlying token representations. Figure 1 illustrates the overall framework of our method. The BiLSTM encoder (Hochreiter and Schmidhuber, 1997) first encodes the inputs into initial token representations, from which predicate and argument representations are produced. The biaffine attention then exhaustively calculates the score representations for all candidate predicate-argument pairs. Based on all these score representations, our high-order refining module generates a high-order feature for each corresponding token via an attention mechanism, which is then used to update the raw token representation. 
After N iterations of the above refining procedure, the information between the predicates and their associated arguments is fully exchanged, which results in global consistency for SRL.", "cite_spans": [ { "start": 306, "end": 340, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, most existing SRL studies focus on the English language, while there is little work on Chinese, mainly due to the limited amount of annotated data. In this study, we focus on Chinese SRL. We show that our proposed high-order refining mechanism can be especially beneficial for such a lower-resource language. Meanwhile, our proposed refining process is fully We conduct experiments on the dependency-based Chinese SRL datasets, including CoNLL09 (Haji\u010d et al., 2009) and the Universal Proposition Bank (Akbik et al., 2015; Akbik and Li, 2016). Results show that the graph-based end-to-end model with our proposed high-order refining consistently brings task improvements over baselines, achieving state-of-the-art results for Chinese end-to-end SRL.", "cite_spans": [ { "start": 473, "end": 493, "text": "(Haji\u010d et al., 2009)", "ref_id": "BIBREF11" }, { "start": 527, "end": 547, "text": "(Akbik et al., 2015;", "ref_id": "BIBREF0" }, { "start": 548, "end": 567, "text": "Akbik and Li, 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Gildea and Jurafsky (2000) pioneer the task of semantic role labeling as a form of shallow semantic parsing. Earlier efforts focus on designing hand-crafted discrete features with machine learning classifiers (Pradhan et al., 2005; Punyakanok et al., 2008; Zhao et al., 2009). 
Later, a great deal of work takes advantage of neural networks with distributed features (FitzGerald et al., 2015; Roth and Lapata, 2016; Strubell et al., 2018). On the other hand, much previous work shows that integrating syntactic tree structure can greatly facilitate SRL (He et al., 2018b; Fei et al., 2020b).", "cite_spans": [ { "start": 365, "end": 390, "text": "(FitzGerald et al., 2015;", "ref_id": "BIBREF9" }, { "start": 391, "end": 413, "text": "Roth and Lapata, 2016;", "ref_id": "BIBREF22" }, { "start": 414, "end": 436, "text": "Strubell et al., 2018)", "ref_id": "BIBREF24" }, { "start": 552, "end": 569, "text": "He et al., 2018b;", "ref_id": "BIBREF13" }, { "start": 570, "end": 588, "text": "Fei et al., 2020b)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Prior studies traditionally separate SRL into two individual subtasks, i.e., predicate disambiguation and argument role labeling, mostly conducting only argument role labeling based on pre-identified predicates (Pradhan et al., 2005; Zhao et al., 2009; FitzGerald et al., 2015; He et al., 2018b; Fei et al., 2020a). More recently, several studies consider an end-to-end solution that handles both subtasks with one single model. All of them employ graph-based neural models, exhaustively enumerating all possible predicate and argument mentions, as well as their relations (He et al., 2018a; Cai et al., 2018; Li et al., 2019; Xia et al., 2019a). Most of these end-to-end models, however, are first-order, considering merely one predicate-argument pair at a time. 
In this work, we propose a high-order refining mechanism to reinforce the graph-based end-to-end method.", "cite_spans": [ { "start": 218, "end": 240, "text": "(Pradhan et al., 2005;", "ref_id": "BIBREF20" }, { "start": 241, "end": 259, "text": "Zhao et al., 2009;", "ref_id": "BIBREF28" }, { "start": 260, "end": 284, "text": "FitzGerald et al., 2015;", "ref_id": "BIBREF9" }, { "start": 285, "end": 302, "text": "He et al., 2018b;", "ref_id": "BIBREF13" }, { "start": 303, "end": 321, "text": "Fei et al., 2020a)", "ref_id": "BIBREF7" }, { "start": 591, "end": 609, "text": "(He et al., 2018a;", "ref_id": "BIBREF12" }, { "start": 610, "end": 627, "text": "Cai et al., 2018;", "ref_id": "BIBREF3" }, { "start": 628, "end": 644, "text": "Li et al., 2019;", "ref_id": "BIBREF16" }, { "start": 645, "end": 663, "text": "Xia et al., 2019a)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Note that most of the existing SRL work focuses on the English language, with much less for Chinese, mainly due to the limited amount of annotated data (Xia et al., 2019a). In this paper, we aim to improve Chinese SRL and compensate for the data scarcity with our proposed high-order model.", "cite_spans": [ { "start": 147, "end": 166, "text": "(Xia et al., 2019a)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Task formulation. 
Following prior end-to-end SRL work (He et al., 2018a; Li et al., 2019), we treat the task as predicate-argument-role triplet prediction.", "cite_spans": [ { "start": 54, "end": 72, "text": "(He et al., 2018a;", "ref_id": "BIBREF12" }, { "start": 73, "end": 89, "text": "Li et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "3" }, { "text": "Given an input sentence S = {w_1, \u2022\u2022\u2022, w_n}, the system is expected to output a set of triplets Y \u2208 P \u00d7 A \u00d7 R, where ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "3" }, { "text": "P = {p_1, \u2022\u2022\u2022, p_m}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "3" }, { "text": "Our baseline SRL model is mostly from He et al. (2018a). First, we obtain the vector representation x^w_t of each word w_t from pre-trained embeddings. We then make use of the part-of-speech (POS) tag of each word, and use its embedding x^pos_t. A convolutional neural network (CNN) is used to encode the Chinese characters inside a word into x^c_t. We concatenate them as the input representation:", "cite_spans": [ { "start": 38, "end": 55, "text": "He et al. 
(2018a)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "x_t = [x^w_t; x^pos_t; x^c_t].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "Thereafter, a multi-layer bidirectional LSTM (BiLSTM) is used to encode the input representations into contextualized token representations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "h_1, \u2022\u2022\u2022, h_n = BiLSTM(x_1, \u2022\u2022\u2022, x_n).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "Based on the token representations, we further generate separate predicate representations and argument representations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "v^p_t = FFN_p(h_t), v^a_t = FFN_a(h_t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "Then, a biaffine attention (Dozat and Manning, 2016) is used to score the semantic relationships exhaustively over all predicate-argument pairs:", "cite_spans": [ { "start": 27, "end": 52, "text": "(Dozat and Manning, 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "v^s(p_i, a_j) = v^p_i \u2022 W_1 \u2022 v^a_j + W_2 \u2022 [v^p_i; v^a_j] + b, (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "where W_1, W_2 and b are parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "Decoding and learning. 
Once a predicate-argument pair (p_i, a_j) with its role label r is determined by a softmax classifier, based on the score representation v^s(p_i, a_j), the model outputs the triplet (p, a, r).", "cite_spans": [], "ref_spans": [ { "start": 171, "end": 183, "text": "(p i , a j )", "ref_id": null }, { "start": 215, "end": 224, "text": "(p, a, r)", "ref_id": null } ], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "During training, we optimize the probability P_\u03b8(y|S) of the tuples y_(p,a,r) over a sentence S:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_\u03b8(y|S) = \u220f_{p\u2208P, a\u2208A, r\u2208R} P_\u03b8(y_(p,a,r)|S) = \u220f_{p\u2208P, a\u2208A, r\u2208R} \u03c6(p, a, r) / \u03a3_{r'\u2208R} \u03c6(p, a, r'),", "eq_num": "(2)" } ], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "where \u03b8 denotes the model parameters and \u03c6(p, a, r) represents the total unary score:", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 98, "text": "p, a)", "ref_id": null } ], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "\u03c6(p, a, r) = W_p ReLU(v^p) + W_a ReLU(v^a) + W_s ReLU(v^s(p, a)). (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "The final objective is to minimize the negative log-likelihood of the gold structure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = \u2212log P_\u03b8(y|S).", "eq_num": "(4)" } ], 
"section": "Baseline Graph-based SRL Model", "sec_num": "3.1" }, { "text": "The baseline graph model is a first-order model, since it only considers one predicate-argument pair (as in Eq. 3) at a time. This makes it limited to short-term and local decisions, and thus susceptible to the long-distance dependency problem when there are large surface distances between arguments. We here propose a higher-order refining mechanism to allow deep interaction between all predicate-argument pairs. Our high-order model is shown in Figure 1. Compared with the baseline model, the main difference lies in the high-order refining module. Our motivation is to inform each predicate-argument pair with the information of all the other pairs from a global viewpoint. We achieve this by replacing the underlying token representations h_t with refined ones that carry high-order interacted features.", "cite_spans": [], "ref_spans": [ { "start": 456, "end": 464, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Higher-order Refining", "sec_num": "3.2" }, { "text": "Concretely, we take the baseline as the initialization, performing refinement iteratively. At the i-th refining iteration, we collect the score representations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Higher-order Refining", "sec_num": "3.2" }, { "text": "V^{i,s} = {v^{i,s}_1, \u2022\u2022\u2022, v^{i,s}_K} of all candidate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Higher-order Refining", "sec_num": "3.2" }, { "text": "predicate-argument pairs, where K (i.e., n^2) is the total number of such pairs. 
Based on V^{i,s}, we then generate the high-order feature vector o^i_t using an attention mechanism guided by the current token representation h^{i\u22121}_t of word w_t from the last turn, i.e., the (i\u22121)-th iteration:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Higher-order Refining", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u^i_k = tanh(W_3 h^{i\u22121}_t + W_4 v^{i,s}_k), \u03b1^i_k = softmax(u^i_k), o^i_t = \u03a3_{k=1}^{K} \u03b1^i_k v^{i,s}_k,", "eq_num": "(5)" } ], "section": "Higher-order Refining", "sec_num": "3.2" }, { "text": "where W_3 and W_4 are parameters. We then concatenate the raw token representation and the high-order feature representation together, and obtain the refined token representation after a non-linear projection:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Higher-order Refining", "sec_num": "3.2" }, { "text": "\u0125^i_t = FFN([o^i_t; h^{i\u22121}_t]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Higher-order Refining", "sec_num": "3.2" }, { "text": "Finally, we use \u0125^i_t to replace the old h^{i\u22121}_t. After N iterations of high-order refinement, we expect the model to capture more informative features at a global scope and achieve global consistency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Higher-order Refining", "sec_num": "3.2" }, { "text": "Our method is evaluated on the Chinese SRL benchmarks, including CoNLL09 1 and the Universal Proposition Bank (UPB) 2 . Each dataset comes with its own training, development and test sets. Precision, recall and F1 score are used as the metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "4.1" }, { "text": "We use the pre-trained Chinese fasttext embeddings 3 . The BiLSTM has a hidden size of 350, with three layers. 
The kernel sizes of the CNN are [3, 4, 5] . We adopt the Adam optimizer with an initial learning rate of 1e-5. We train the model with a mini-batch size in [16, 32] and an early-stop strategy. We also use contextualized Chinese word representations, i.e., ELMo 4 and BERT (Chinese base version) 5 .", "cite_spans": [ { "start": 137, "end": 140, "text": "[3,", "ref_id": null }, { "start": 141, "end": 143, "text": "4,", "ref_id": null }, { "start": 144, "end": 146, "text": "5]", "ref_id": null }, { "start": 254, "end": 258, "text": "[16,", "ref_id": null }, { "start": 259, "end": 262, "text": "32]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "4.1" }, { "text": "We mainly make comparisons with recent end-to-end SRL models, as well as pipeline methods for standalone argument role labeling given the gold predicates. Table 1 shows the results on the Chinese CoNLL09. We first find that the joint detection of predicates and arguments can be more beneficial 1 https://catalog.ldc.upenn.edu/", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Main Results", "sec_num": "4.2" }, { "text": "2 https://github.com/System-T/UniversalPropositions 3 https://fasttext.cc/ 4 https://github.com/HIT-SCIR/ELMoForManyLangs 5 https://github.com/google-research/bert Arg. Prd.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LDC2012T03", "sec_num": null }, { "text": "\u2022 Pipeline method Zhao et al. (2009) 80.4 75.2 77.7 - Bj\u00f6rkelund et al. (2009) 82.4 75.1 78.6 - Roth and Lapata (2016) 83.2 75.9 79.4 - Titov (2017) 84.6 80.4 82.5 - He et al. (2018b) 84.2 81.5 82.8 - Cai and Lapata (2019) \u202185.4 84.6 85.0 -", "cite_spans": [ { "start": 18, "end": 36, "text": "Zhao et al. (2009)", "ref_id": "BIBREF28" }, { "start": 54, "end": 78, "text": "Bj\u00f6rkelund et al. 
(2009)", "ref_id": "BIBREF2" }, { "start": 96, "end": 118, "text": "Roth and Lapata (2016)", "ref_id": "BIBREF22" }, { "start": 136, "end": 182, "text": "Titov (2017) 84.6 80.4 82.5 -He et al. (2018b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "P R F1 F1", "sec_num": null }, { "text": "\u2022 End-to-end method He et al. (2018a) 82.6 83.6 83.0 85.7 Cai et al. (2018) 84.7 84.0 84.3 86.0 Li et al. (2019) 84.9 84.6 84.8 86.9 Xia et al. (2019a) 84 than the pipeline detection of SRL, notably with an 85.1% F1 score on argument detection by Xia et al. (2019a). Most importantly, our high-order end-to-end model outperforms all these baselines on both subtasks, with an 85.9% F1 score for argument role labeling and an 88.6% F1 score for predicate detection. When contextualized word embeddings are available, our model achieves further improvements, i.e., 88.8% and 90.3% F1 scores for the two subtasks, respectively. Table 2 shows the performances on UPB. Overall, similar trends hold as on CoNLL09. Our high-order model still performs best, yielding a 67.9% F1 score on argument role labeling, verifying its capability on the SRL task. With BERT embeddings, our model again achieves considerable performance gains.", "cite_spans": [ { "start": 20, "end": 37, "text": "He et al. (2018a)", "ref_id": "BIBREF12" }, { "start": 58, "end": 75, "text": "Cai et al. (2018)", "ref_id": "BIBREF3" }, { "start": 96, "end": 112, "text": "Li et al. (2019)", "ref_id": "BIBREF16" }, { "start": 133, "end": 151, "text": "Xia et al. (2019a)", "ref_id": "BIBREF25" }, { "start": 244, "end": 262, "text": "Xia et al. (2019a)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 635, "end": 642, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "P R F1 F1", "sec_num": null }, { "text": "High-order refinement. We take a further step and look into our proposed high-order refining mechanism. 
We examine the performances under varying numbers of refining iterations in Figure 2. Compared with the first-order baseline model of He et al. (2018a), our high-order model achieves better performances on both subtasks. We find that our model reaches its peak for predicate detection with 2 iterations of refinement, while the best iteration number is 4 for argument labeling.", "cite_spans": [ { "start": 228, "end": 245, "text": "He et al. (2018a)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 169, "end": 177, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "4.3" }, { "text": "Long-distance dependencies. Figure 3 shows the performances of argument recognition at different surface distances between predicates and arguments. The overall results decrease when arguments are farther away from the predicates. Nevertheless, our high-order model counters such drops significantly. Especially when the distance grows larger, e.g., distance \u2265 7, the margin of our model becomes even more notable.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 36, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Analysis", "sec_num": "4.3" }, { "text": "We proposed a high-order end-to-end model for Chinese SRL. Based on the baseline graph-based model, our high-order refining module performed interactive learning between all predicate-argument pairs via attention calculation. The generated higher-order token representations were then used to update the original ones. After N iterations of refinement, we enriched the underlying token representations with global interactions and made the learned features more informative. 
Our high-order model brought state-of-the-art results on Chinese SRL data, i.e., CoNLL09 and the Universal Proposition Bank, while relieving long-range dependency issues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their valuable and detailed comments. This work is supported by the National Natural Science Foundation of China (No. 61772378, No. 61702121 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Generating high quality proposition banks for multilingual semantic role labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Chiticariu", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Danilevsky", "suffix": "" }, { "first": "Yunyao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" }, { "first": "Huaiyu", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "397--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Laura Chiticariu, Marina Danilevsky, Yun- yao Li, Shivakumar Vaithyanathan, and Huaiyu Zhu. 2015. Generating high quality proposition banks for multilingual semantic role labeling. In Proceedings of the ACL, pages 397-407.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Polyglot: Multilingual semantic role labeling with unified labels", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Yunyao", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik and Yunyao Li. 2016. 
Polyglot: Multilin- gual semantic role labeling with unified labels. In Proceedings of the ACL, pages 1-6.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multilingual semantic role labeling", "authors": [ { "first": "Anders", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "" }, { "first": "Love", "middle": [], "last": "Hafdell", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the CoNLL", "volume": "", "issue": "", "pages": "43--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Bj\u00f6rkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual semantic role labeling. In Pro- ceedings of the CoNLL, pages 43-48.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A full end-to-end semantic role labeler, syntacticagnostic over syntactic-aware?", "authors": [ { "first": "Jiaxun", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Shexia", "middle": [], "last": "He", "suffix": "" }, { "first": "Zuchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoLing", "volume": "", "issue": "", "pages": "2753--2765", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A full end-to-end semantic role labeler, syntactic- agnostic over syntactic-aware? In Proceedings of the CoLing, pages 2753-2765.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Semi-supervised semantic role labeling with cross-view training", "authors": [ { "first": "Rui", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "1018--1027", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Cai and Mirella Lapata. 2019. 
Semi-supervised semantic role labeling with cross-view training. In Proceedings of the EMNLP, pages 1018-1027.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Capturing argument interaction in semantic role labeling with capsule networks", "authors": [ { "first": "Xinchi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chunchuan", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "5415--5425", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinchi Chen, Chunchuan Lyu, and Ivan Titov. 2019. Capturing argument interaction in semantic role la- beling with capsule networks. In Proceedings of the EMNLP, pages 5415-5425.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Deep biaffine attention for neural dependency parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.01734" ] }, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency pars- ing. arXiv preprint arXiv:1611.01734.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Cross-lingual semantic role labeling with highquality translated training corpus", "authors": [ { "first": "Meishan", "middle": [], "last": "Hao Fei", "suffix": "" }, { "first": "Donghong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "7014--7026", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Fei, Meishan Zhang, and Donghong Ji. 2020a. 
Cross-lingual semantic role labeling with high- quality translated training corpus. In Proceedings of the ACL, pages 7014-7026.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Cross-lingual semantic role labeling with model transfer", "authors": [ { "first": "Meishan", "middle": [], "last": "Hao Fei", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Donghong", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2020, "venue": "Speech, and Language Processing", "volume": "28", "issue": "", "pages": "2427--2437", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Fei, Meishan Zhang, Fei Li, and Donghong Ji. 2020b. Cross-lingual semantic role labeling with model transfer. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:2427-2437.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semantic role labeling with neural network factors", "authors": [ { "first": "Nicholas", "middle": [], "last": "Fitzgerald", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "960--970", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicholas FitzGerald, Oscar T\u00e4ckstr\u00f6m, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role la- beling with neural network factors. 
In Proceedings of the EMNLP, pages 960-970.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic labeling of semantic roles", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "512--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea and Daniel Jurafsky. 2000. Automatic labeling of semantic roles. In Proceedings of the ACL, pages 512-520.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages", "authors": [ { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Maria", "middle": [ "Ant\u00f2nia" ], "last": "Mart\u00ed", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Jan", "middle": [], "last": "\u0160t\u011bp\u00e1nek", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Stra\u0148\u00e1k", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the CoNLL", "volume": "", "issue": "", "pages": "1--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, 
Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan \u0160t\u011bp\u00e1nek, Pavel Stra\u0148\u00e1k, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the CoNLL, pages 1-18.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Jointly predicting predicates and arguments in neural semantic role labeling", "authors": [ { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "364--369", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018a. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the ACL, pages 364-369.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Syntax for semantic role labeling, to be, or not to be", "authors": [ { "first": "Shexia", "middle": [], "last": "He", "suffix": "" }, { "first": "Zuchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Hongxiao", "middle": [], "last": "Bai", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "2061--2071", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018b. Syntax for semantic role labeling, to be, or not to be. 
In Proceedings of the ACL, pages 2061-2071.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "High-order low-rank tensors for semantic role labeling", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the NAACL", "volume": "", "issue": "", "pages": "1150--1160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Lei, Yuan Zhang, Llu\u00eds M\u00e0rquez, Alessandro Moschitti, and Regina Barzilay. 2015. High-order low-rank tensors for semantic role labeling. 
In Proceedings of the NAACL, pages 1150-1160.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Dependency or span, end-to-end uniform semantic role labeling", "authors": [ { "first": "Zuchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shexia", "middle": [], "last": "He", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhuosheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI", "volume": "", "issue": "", "pages": "6730--6737", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. Dependency or span, end-to-end uniform semantic role labeling. In Proceedings of the AAAI, pages 6730-6737.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semantic role labeling with iterative structure refinement", "authors": [ { "first": "Chunchuan", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1071--1082", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chunchuan Lyu, Shay B. Cohen, and Ivan Titov. 2019. Semantic role labeling with iterative structure refinement. 
In Proceedings of EMNLP, pages 1071-1082.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Frolov", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the CoNLL", "volume": "", "issue": "", "pages": "411--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Marcheggiani, Anton Frolov, and Ivan Titov. 2017. A simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling. In Proceedings of the CoNLL, pages 411-420.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Encoding sentences with graph convolutional networks for semantic role labeling", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "1506--1515", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. 
In Proceedings of the EMNLP, pages 1506-1515.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Semantic role labeling using different syntactic views", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Wayne", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Kadri", "middle": [], "last": "Hacioglu", "suffix": "" }, { "first": "James", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "581--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James Martin, and Daniel Jurafsky. 2005. Semantic role labeling using different syntactic views. In Proceedings of the ACL, pages 581-588.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The importance of syntactic parsing and inference in semantic role labeling", "authors": [ { "first": "Vasin", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "2", "pages": "257--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. 
Computational Linguistics, 34(2):257-287.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural semantic role labeling with dependency path embeddings", "authors": [ { "first": "Michael", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "1192--1202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proceedings of the ACL, pages 1192-1202.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An evaluation of predicate argument clustering using pseudo-disambiguation", "authors": [ { "first": "Christian", "middle": [], "last": "Scheible", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC", "volume": "", "issue": "", "pages": "1187--1194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Scheible. 2010. An evaluation of predicate argument clustering using pseudo-disambiguation. In Proceedings of the LREC, pages 1187-1194.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Linguistically-informed self-attention for semantic role labeling", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Andor", "suffix": "" }, { "first": "David", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "5027--5038", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. 
In Proceedings of the EMNLP, pages 5027-5038.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A syntax-aware multi-task learning framework for Chinese semantic role labeling", "authors": [ { "first": "Qingrong", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "5382--5392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingrong Xia, Zhenghua Li, and Min Zhang. 2019a. A syntax-aware multi-task learning framework for Chinese semantic role labeling. In Proceedings of the EMNLP, pages 5382-5392.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Syntax-aware neural semantic role labeling", "authors": [ { "first": "Qingrong", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guohong", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI", "volume": "", "issue": "", "pages": "7305--7313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingrong Xia, Zhenghua Li, Min Zhang, Meishan Zhang, Guohong Fu, Rui Wang, and Luo Si. 2019b. Syntax-aware neural semantic role labeling. 
In Proceedings of the AAAI, pages 7305-7313.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Syntax-enhanced self-attention-based semantic role labeling", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "616--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang, Rui Wang, and Luo Si. 2019. Syntax-enhanced self-attention-based semantic role labeling. In Proceedings of the EMNLP, pages 616-626.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Semantic dependency parsing of NomBank and PropBank: An efficient integrated approach via a large-scale feature selection", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "30--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao, Wenliang Chen, and Chunyu Kit. 2009. Semantic dependency parsing of NomBank and PropBank: An efficient integrated approach via a large-scale feature selection. In Proceedings of the EMNLP, pages 30-39.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "The overview of the graph-based high-order model for end-to-end SRL. The dotted-line green box is our proposed high-order refining module. 
parallel and differentiable.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "are all possible predicate tokens, A = {a 1 , \u2022 \u2022 \u2022 , a l } are all associated argument tokens, and R are the corresponding role labels for each a i , including a null label indicating no relation between a pair of predicate argument.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Argument recognition under varying surface distance between predicates and arguments.", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "type_str": "table", "num": null, "text": "Performances on CoNLL09. Results with \u2021 indicates the additional resources are used.", "content": "
P R F1
He et al. (2018a) 64.8 65.3 64.9
Cai et al. (2018) 65.0 66.4 65.8
Li et al. (2019) 65.4 67.2 66.0
Xia et al. (2019a) 65.2 67.6 66.1
Ours 67.5 68.8 67.9
+ELMo 68.0 70.6 68.8
+BERT 70.0 73.0 72.4
", "html": null }, "TABREF2": { "type_str": "table", "num": null, "text": "Performances by end-to-end models for the argument role labeling on UPB.", "content": "", "html": null } } } }