{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:12:36.621478Z"
},
"title": "Second-Order Neural Dependency Parsing with Message Passing and End-to-End Training",
"authors": [
{
"first": "Xinyu",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences University of Chinese Academy of Sciences",
"location": {}
},
"email": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Tu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences University of Chinese Academy of Sciences",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose second-order graphbased neural dependency parsing using message passing and end-to-end neural networks. We empirically show that our approaches match the accuracy of very recent state-ofthe-art second-order graph-based neural dependency parsers and have significantly faster speed in both training and testing. We also empirically show the advantage of second-order parsing over first-order parsing and observe that the usefulness of the head-selection structured constraint vanishes when using BERT embedding.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose second-order graphbased neural dependency parsing using message passing and end-to-end neural networks. We empirically show that our approaches match the accuracy of very recent state-ofthe-art second-order graph-based neural dependency parsers and have significantly faster speed in both training and testing. We also empirically show the advantage of second-order parsing over first-order parsing and observe that the usefulness of the head-selection structured constraint vanishes when using BERT embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Graph-based dependency parsing is a popular approach to dependency parsing that scores parse components of a sentence and then finds the highest scoring tree through inference. First-order graphbased dependency parsing takes individual dependency edges as the components of a parse tree, while higher-order dependency parsing considers more complex components consisting of multiple edges. There exist both exact inference algorithms (Carreras, 2007; Koo and Collins, 2010; Ma and Zhao, 2012) and approximate inference algorithms (McDonald and Pereira, 2006; Smith and Eisner, 2008; Gormley et al., 2015) to find the best parse tree. Recent work focused on neural network based graph dependency parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Cheng et al., 2016; Kuncoro et al., 2016; Ma and Hovy, 2017; . proposed a first-order graph-based neural dependency parsing approach with a simple headselection training objective. It uses a biaffine function to score dependency edges and has high efficiency and good performance. Subsequent work * Kewei Tu is the corresponding author.",
"cite_spans": [
{
"start": 434,
"end": 450,
"text": "(Carreras, 2007;",
"ref_id": "BIBREF2"
},
{
"start": 451,
"end": 473,
"text": "Koo and Collins, 2010;",
"ref_id": "BIBREF15"
},
{
"start": 474,
"end": 492,
"text": "Ma and Zhao, 2012)",
"ref_id": "BIBREF20"
},
{
"start": 530,
"end": 558,
"text": "(McDonald and Pereira, 2006;",
"ref_id": "BIBREF24"
},
{
"start": 559,
"end": 582,
"text": "Smith and Eisner, 2008;",
"ref_id": "BIBREF30"
},
{
"start": 583,
"end": 604,
"text": "Gormley et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 703,
"end": 735,
"text": "(Kiperwasser and Goldberg, 2016;",
"ref_id": "BIBREF14"
},
{
"start": 736,
"end": 757,
"text": "Wang and Chang, 2016;",
"ref_id": "BIBREF31"
},
{
"start": 758,
"end": 777,
"text": "Cheng et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 778,
"end": 799,
"text": "Kuncoro et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 800,
"end": 818,
"text": "Ma and Hovy, 2017;",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "introduced second-order inference into their parser. Ji et al. (2019) proposed a graph neural network that captures second-order information in token representations, which are then used for first-order parsing. Very recently, Zhang et al. (2020) proposed an efficient second-order tree CRF model for dependency parsing and achieved state-of-the-art performance.",
"cite_spans": [
{
"start": 53,
"end": 69,
"text": "Ji et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 227,
"end": 246,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we first show how a previously proposed second-order semantic dependency parser (Wang et al., 2019) can be applied to syntactic dependency parsing with simple modifications. The parser is an end-to-end neural network derived from message passing inference on a conditional random field that encodes the second-order parsing problem. We then propose an alternative conditional random field that incorporates the head-selection constraint of syntactic dependency parsing, and derive a novel second-order dependency parser. We empirically compare the two second-order approaches and the first-order baselines on English Penn Tree Bank 3.0 (PTB), Chinese Penn Tree Bank 5.1 (CTB) and datasets of 12 languages in Universal Dependencies (UD). We show that our approaches achieve state-of-the-art performance on both PTB and CTB and our approaches are significantly faster than recently proposed second-order parsers.",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also make two interesting observations from our empirical study. First, it is a common belief that contextual word embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) already conveys sufficient high-order information that renders high-order parsing less useful, but we find that second-order decoding is still helpful even with strong contextual embeddings like BERT. Second, while Zhang et al. (2019) previously found that incoperating the head-selection constraint is helpful in first-order parsing, we find that with a better loss function design and hyper-parameter tun-ing both first-and second-order parsers without the head-selection constraint can match the accuracy of parsers with the head-selection constraint and can even outperform the latter when using BERT embedding.",
"cite_spans": [
{
"start": 142,
"end": 163,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 173,
"end": 194,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 410,
"end": 429,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approaches are closely related to the work of Gormley et al. (2015) , which proposed a nonneural second-order parser based on Loopy Belief Propagation (LBP). Our work differs from theirs in that: 1) we use Mean Field Variational Inference (MFVI) instead of LBP, which Wang et al. (2019) found is faster and equally accurate in practice; 2) we add the head-selection constraint and do not include the global tree constraint that is shown to produce only slight improvement (Zhang et al., 2019) but would complicate our neural network design and implementation; 3) we employ modern neural encoders and achieve much better parsing accuracy. Our approaches are also closely related to the very recent work of Fonseca and Martins (2020) . The main difference is that we use MFVI while they use the dual decomposition algorithm AD 3 (Martins et al., 2011 (Martins et al., , 2013 for approximate inference.",
"cite_spans": [
{
"start": 50,
"end": 71,
"text": "Gormley et al. (2015)",
"ref_id": "BIBREF11"
},
{
"start": 272,
"end": 290,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF32"
},
{
"start": 476,
"end": 496,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 709,
"end": 735,
"text": "Fonseca and Martins (2020)",
"ref_id": "BIBREF10"
},
{
"start": 831,
"end": 852,
"text": "(Martins et al., 2011",
"ref_id": "BIBREF23"
},
{
"start": 853,
"end": 876,
"text": "(Martins et al., , 2013",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Approach Zhang et al. (2019) categorized different kinds of graph-based dependency parsers based on their structured output constraints according to the normalization for output scores. A Local approach views dependency parsing as a head-selection problem, in which each word selects exactly one dependency head. A Single approach places no structured constraint, viewing the existence of each possible dependency edge as an independent binary classification problem.",
"cite_spans": [
{
"start": 11,
"end": 30,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second-order semantic dependency parser of Wang et al. (2019) is an end-to-end neural network derived from message passing inference on a conditional random field that encodes the second-order parsing problem. It is clearly a Single approach because of the lack of structured constraints in semantic dependency parsing. We can apply this approach to syntactic dependency parsing with two minor modifications. First, co-parents, one of the three types of second-order parts, become invalid and hence are removed. Second, for the approach to output valid parse trees during testing, we run maximum spanning tree (MST) (McDonald et al., 2005 ) based on the posterior edge probabilities predicted by the approach.",
"cite_spans": [
{
"start": 47,
"end": 65,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF32"
},
{
"start": 620,
"end": 642,
"text": "(McDonald et al., 2005",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Inspired by Wang et al. (2019) , below we propose a Local second-order parsing approach. While the Single approach uses Boolean random variables to represent existence of possible dependency edges, our Local approach defines a discrete random variable for each word specifying its dependency head, thus enforcing the head-selection constraint and leading to different formulation of the message passing inference steps.",
"cite_spans": [
{
"start": 12,
"end": 30,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Following Dozat and Manning 2017, we predict edge existence and edge labels separately. Suppose the input sentence is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "2.1"
},
{
"text": "w = [w 0 , w 1 , w 2 , . . . , w n ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "2.1"
},
{
"text": "where w_0 is a dummy root. We feed the word representations output by the BiLSTM encoder into a biaffine function to assign a score s^{(edge)}_{ij} to each edge w_i \u2192 w_j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "2.1"
},
{
"text": "We use a Trilinear function to assign score s (sib) ij,ik to the siblings part consisting of edges w i \u2192 w j and w i \u2192 w k , and another Trilinear function to assign score s (gp) ij,jk to the grandparent part consisting of edges w i \u2192 w j and w j \u2192 w k . For edge labels, we use a biaffine function to predict label scores of each potential edge and use a softmax function to compute the label distribution P (y (label) ij |w), where y (label) ij represents the possible label for edge w i \u2192 w j .",
"cite_spans": [
{
"start": 46,
"end": 51,
"text": "(sib)",
"ref_id": null
},
{
"start": 174,
"end": 178,
"text": "(gp)",
"ref_id": null
},
{
"start": 412,
"end": 419,
"text": "(label)",
"ref_id": null
},
{
"start": 436,
"end": 443,
"text": "(label)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "2.1"
},
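{
"text": "As a concrete illustration, the biaffine edge scorer and a trilinear part scorer can be sketched in a few lines of NumPy. This is a minimal sketch with random parameters, not the authors' implementation; the variable names and the rank-reduced form of the trilinear function are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8                       # n words plus a dummy root; hidden size d
H = rng.normal(size=(n + 1, d))   # encoder output vectors, one per token

# Biaffine edge scoring: s_edge[i, j] scores the edge w_i -> w_j.
# A bias dimension of ones lets the same product include linear terms.
Hb = np.concatenate([H, np.ones((n + 1, 1))], axis=1)
W_edge = rng.normal(size=(d + 1, d + 1))
s_edge = Hb @ W_edge @ Hb.T       # shape (n+1, n+1)

# Trilinear sibling scoring: s_sib[i, j, k] scores the part made of
# edges w_i -> w_j and w_i -> w_k (shared head w_i).
U = rng.normal(size=(d, d))
V = rng.normal(size=(d, d))
s_sib = np.einsum('ia,ja,ka->ijk', H @ U, H, H @ V)   # head, dep, sibling

print(s_edge.shape, s_sib.shape)
```

A grandparent scorer s_gp[i, j, k] for edges w_i \u2192 w_j and w_j \u2192 w_k would be built the same way with its own parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "2.1"
},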
{
"text": "The head-selection structured constraint requires that each word except the root has exactly one head. We define variable X j \u2208 {0, 1, 2, . . . , n} to indicate the head of word w j . We then define a conditional random field (CRF) over [X 1 , . . . , X n ]. For each variable X j , the unary potential is defined by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message Passing",
"sec_num": "2.2"
},
{
"text": "\u03c6 u (X j = i) = exp(s (edge) ij )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message Passing",
"sec_num": "2.2"
},
{
"text": "Given two variables X j and X l , the binary potential is defined by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message Passing",
"sec_num": "2.2"
},
{
"text": "\u03c6 p (X j = i, X l = k) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 exp(s (sib) ij,kl ) k = i exp(s (gp) ij,kl ) k = j 1 Otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message Passing",
"sec_num": "2.2"
},
{
"text": "We use MFVI for approximate inference on this CRF. The algorithm updates the factorized poste-rior distribution Q j (X j ) of each word iteratively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message Passing",
"sec_num": "2.2"
},
{
"text": "M (t\u22121) j (i) = k =i,j Q (t\u22121) k (i)s (sib) ij,ik +Q (t\u22121) k (j)s (gp) ij,jk + Q (t\u22121) i (k)s (gp) ki,ij Q (t) j (i) = exp{s (edge) ij + M (t\u22121) j (i)} n k=0 exp{s (edge) kj + M (t\u22121) j (k)} At t = 0, Q (t) j (X j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message Passing",
"sec_num": "2.2"
},
{
"text": "is initialized by normalizing the unary potential. The iterative update steps can be unfolded as recurrent neural network layers parameterized by part scores, thus forming an end-to-end neural network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message Passing",
"sec_num": "2.2"
},
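{
"text": "The update equations above can be written directly as a small NumPy routine. This is an unvectorized sketch for clarity (an unfolded network would use batched tensor operations instead); the score-tensor layout is our assumption:

```python
import numpy as np

def mfvi(s_edge, s_sib, s_gp, iters=3):
    """Mean field updates for head selection. s_edge[i, j] scores edge
    i -> j; s_sib[i, j, k] scores edges i->j and i->k; s_gp[i, j, k]
    scores edges i->j and j->k. Returns Q with Q[j, i] ~ P(head of j = i)."""
    n = s_edge.shape[0]

    def head_softmax(logits):          # normalize over all candidate heads
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    Q = head_softmax(s_edge.T)         # t = 0: normalized unary potentials
    for _ in range(iters):
        M = np.zeros((n, n))           # M[j, i]: message for "head of j is i"
        for j in range(n):
            for i in range(n):
                for k in range(n):
                    if k == i or k == j:
                        continue
                    M[j, i] += Q[k, i] * s_sib[i, j, k]   # sibling part
                    M[j, i] += Q[k, j] * s_gp[i, j, k]    # j as middle word
                    M[j, i] += Q[i, k] * s_gp[k, i, j]    # i as middle word
        Q = head_softmax(s_edge.T + M)
    return Q

rng = np.random.default_rng(0)
n = 4
Q = mfvi(rng.normal(size=(n, n)),
         rng.normal(size=(n, n, n)),
         rng.normal(size=(n, n, n)))
print(Q.sum(axis=1))   # each word's head distribution sums to 1
```

Each of the three message terms matches one term of M^{(t\u22121)}_j(i) above; the final softmax over heads realizes the head-selection normalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message Passing",
"sec_num": "2.2"
},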
{
"text": "Compared with the update formula in the Single approach, here the posterior distributions are defined over head-selections and are normalized over all possible heads. The computational complexity remains the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message Passing",
"sec_num": "2.2"
},
{
"text": "We define the cross entropy losses by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "2.3"
},
{
"text": "L (edge) = \u2212 i log[Q i (y * (edge) i |w)] L (label) = \u2212 i,j 1(y * (edge) j = i) log(P (y * (label) ij |w)) L =\u03bbL (label) + (1 \u2212 \u03bb)L (edge)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "2.3"
},
{
"text": "where y * (edge) i is the head of word w i and y * (label) ij is the label of edge w i \u2192 w j in the golden parse tree, \u03bb is a hyper-parameter and 1(x) is an indicator function that returns 1 when x is true and 0 otherwise.",
"cite_spans": [
{
"start": 49,
"end": 58,
"text": "* (label)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "2.3"
},
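{
"text": "The interpolated loss is straightforward to compute from the MFVI posteriors. A minimal sketch; the index conventions, tensor names, and the default \u03bb = 0.1 (the paper tunes \u03bb per dataset) are our assumptions, and word 0 is the dummy root and is skipped:

```python
import numpy as np

def parsing_loss(Q, P_label, gold_heads, gold_labels, lam=0.1):
    """Interpolated cross-entropy loss. Q[j, i] ~ P(head of w_j = w_i),
    P_label[i, j, r] ~ P(label r | edge w_i -> w_j)."""
    words = range(1, Q.shape[0])               # skip the dummy root w_0
    loss_edge = -sum(np.log(Q[j, gold_heads[j]]) for j in words)
    loss_label = -sum(np.log(P_label[gold_heads[j], j, gold_labels[j]])
                      for j in words)
    return lam * loss_label + (1 - lam) * loss_edge

# Toy example: 2 real words, word 1 headed by the root 0, word 2 headed by 1.
Q = np.array([[1.0, 0.0, 0.0],
              [0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1]])
P_label = np.full((3, 3, 2), 0.5)              # 2 labels, uniform for the demo
loss = parsing_loss(Q, P_label, gold_heads=[0, 0, 1], gold_labels=[0, 0, 1])
print(round(loss, 4))   # prints 0.5403
```

Only gold edges contribute to the label loss, matching the indicator 1(y*^{(edge)}_j = i) in the formula above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "2.3"
},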
{
"text": "Following previous work Ma et al., 2018) , we use PTB 3.0 (Marcus et al., 1993) , CTB 5.1 (Xue et al., 2002) and 12 languages in Universal Dependencies (Nivre et al., 2018 ) (UD) 2.2 to evaluate our parser. Punctuation is ignored in all the evaluations. We use the same treebanks and preprocessing as Ma et al. (2018) for PTB, CTB, and UD. For all the datasets, we remove sentences longer than 90 words in training sets for faster computation. We use GNN, Local1O, Single1O, Local2O and Single2O to represent the approaches of Ji et al. 2019 and Manning (2018), and our two second-order approaches respectively. For all the approaches, we use the MST algorithm to guarantee treestructured output in testing. We use the concatenation of word embeddings, character-level embeddings and part-of-speech (POS) tag embeddings to represent words and additionally concatenate BERT embeddings for experiments with BERT. For a fair comparison with previous work, we use GloVe (Pennington et al., 2014) and BERT-Large-Uncased model for PTB, and structuredskipgram (Ling et al., 2015) and BERT-Base-Chinese model for CTB. For UD, we use fastText embeddings (Bojanowski et al., 2017) and BERT-Base-Multilingual-Cased model for different languages. We set the default iteration number for our approaches to 3 because we find no improvement on more or less iterations. For GNN 1 , we rerun the code based on the official release of Ji et al. (2019) . For Single1O, Local1O 2 , Single2O 3 , we implement these ap- Table 2 : Comparison of our approaches and the previous state-of-the-art approaches on PTB and CTB. We report our results averaged over 5 runs. \u2020 : These approaches perform model selection based on the score on the development set. \u2021 : These approaches do not use POS tags as input. : uses semisupervised multi-task learning with ELMo embeddings. \u2660 : These approaches use structured-skipgram embeddings instead of GloVe embeddings for PTB. 
\u2663 : For reference, Zhou and Zhao (2019) utilized both dependency and constituency information in their approach. Therefore, the results are not comparable to our results.",
"cite_spans": [
{
"start": 24,
"end": 40,
"text": "Ma et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 58,
"end": 79,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF21"
},
{
"start": 90,
"end": 108,
"text": "(Xue et al., 2002)",
"ref_id": "BIBREF33"
},
{
"start": 152,
"end": 171,
"text": "(Nivre et al., 2018",
"ref_id": "BIBREF0"
},
{
"start": 301,
"end": 317,
"text": "Ma et al. (2018)",
"ref_id": "BIBREF19"
},
{
"start": 966,
"end": 991,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 1053,
"end": 1072,
"text": "(Ling et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 1145,
"end": 1170,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 1417,
"end": 1433,
"text": "Ji et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 1957,
"end": 1977,
"text": "Zhou and Zhao (2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 1498,
"end": 1505,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setups",
"sec_num": "3.1"
},
{
"text": "proaches based on the official release code of Wang et al. (2019) and we implement Local2O based on this code. In speed comparison, we implement the second-order approaches based on an PyTorch implementation biaffine parser 4 implemented by Zhang et al. (2020) for a fair speed comparison with their approach 5 . Since we find that the accuracy of our approaches based on PyTorch implementation on PTB does not change, we only report scores based on Wang et al. (2019) .",
"cite_spans": [
{
"start": 47,
"end": 65,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF32"
},
{
"start": 241,
"end": 260,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF34"
},
{
"start": 450,
"end": 468,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setups",
"sec_num": "3.1"
},
{
"text": "The hyper-parameters we used in our experiments is shown in Table 1 . We tune the the hidden size for calculating s (Unary Arc in the table) separately for PTB and CTB. Following Qi et al. (2018) , we switch to AMSGrad (Reddi et al., 2018) after 5,000 iterations without improvement. We train models for 75,000 iterations with batch sizes of 6000 tokens and stopped the training early after 10,000 iterations without improvements on development sets. Different from previous approaches such as and Ji et al. (2019) , we use Adam (Kingma and Ba, 2015) with a learning rate of 0.01 and anneal the learning rate by 0.85 for every 500 iterations without improvement on the development set for optimization. For GNN, we train the models with the same setting as in Ji et al. (2019) . We do not use character embeddings and our optimization settings for GNN because we find they do not improve the accuracy.",
"cite_spans": [
{
"start": 179,
"end": 195,
"text": "Qi et al. (2018)",
"ref_id": "BIBREF28"
},
{
"start": 498,
"end": 514,
"text": "Ji et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 760,
"end": 776,
"text": "Ji et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Hyper-parameters",
"sec_num": "3.2"
},
{
"text": "For the edge loss of Single approaches, Zhang et al. (2019) proposed to sample a subset of the negative edges to balance positive and negative examples, but we find that using a relatively small interpolation \u03bb (shown in Table 1 ) on label loss can improve the accuracy and the sampling does not help further improve the accuracy. Table 2 shows the Unlabeled Attachment Score (UAS) and Labeled Attachment Score (LAS) of all the approaches as well as the reported scores of previous state-of-the-art approaches on PTB and CTB. It can be seen that without BERT, our Local2O achieves state-of-the-art performance on CTB and has almost the same accuracy as the very recent work of Zhang et al. (2020) on PTB. With BERT embeddings, Local2O performs the best on PTB while Single2O has the best accuracy on CTB. Table 3 shows the results of the five approaches on UD in addition to PTB and CTB. We make the following observations. First, our second-order approaches outperform GNN and the first-order approaches both with and without BERT embeddings, showing that second-order decoders are still helpful in neural parsing even with strong contextual embeddings. Second, without BERT, Local slightly outperforms Single, although the difference between the two is quite small 6 ; when BERT is used, however, Single clearly outperforms Local, which is quite interesting and warrants further investigation in the future. Third, the relative strength of Local and Single approaches varies over treebanks, suggesting varying importance of the headselection constraint. Table 3 : LAS and standard deviations on test sets. We report results averaged over 5 runs. We use ISO 639-1 codes to represent languages from UD. \u2020 means that the model is statistically significantly better than the Local1O model by Wilcoxon rank-sum test with a significance level of p < 0.05. We use \u2021 to represent winner of the significant test between the Single2O and Local2O models. ",
"cite_spans": [
{
"start": 40,
"end": 59,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 331,
"end": 338,
"text": "Table 2",
"ref_id": null
},
{
"start": 805,
"end": 812,
"text": "Table 3",
"ref_id": null
},
{
"start": 1556,
"end": 1563,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hyper-parameters",
"sec_num": "3.2"
},
{
"text": "We evaluate the speed of different approaches on a single GeForce GTX 1080 Ti GPU following the setting of Zhang et al. (2020) . As shown in Table 4 , our Local approach and Single approach have almost the same speed. Our second-order approaches only slow down the training and testing speed in comparison with the first-order approaches by 23% and 12% respectively. They are also significantly faster than previous state-of-theart approaches. Our Local approach is 1.2 and 2.3 times faster than GNN in training and testing respectively and is 2.4 and 2.9 times faster than the second-order tree CRF approach of Zhang et al. (2020) . In terms of time complexity, our second-order decoders have a time complexity of O(n 3 ) 7 ; while the time complexity of GNN is O(n 2 d), the hidden size d (500 by default) is typically much larger than sentence length n; and the decoder of Zhang et al. (2020) has a time complexity of O(n 3 ) as well, but it requires sequential computation over the input sentence while our decoders can be parallelized 7 The MST algorithm has a time complexity of O(n 2 ) and we follow only using the MST algorithm when the argmax predictions of structured output are not trees. over words of the input sentence.",
"cite_spans": [
{
"start": 107,
"end": 126,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF34"
},
{
"start": 612,
"end": 631,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF34"
},
{
"start": 876,
"end": 895,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Speed Comparison",
"sec_num": "3.4"
},
{
"text": "We propose second-order graph-based dependency parsing based on message passing and end-toend neural networks. We modify a previous approach that predicts dependency edges independently and also design a new approach that incorporates the head-selection structured constraint. Our experiments show that our second-order approaches have better overall performance than the first-order baselines; they achieve competitive accuracy with very recent start-of-the-art second-order graph-based parsers and are significantly faster. Our empirical comparisons also show that secondorder decoders still outperform first-order decoders even with BERT embeddings, and that the usefulness of the head-selection constraint is limited, especially when using BERT embeddings. Our code is publicly avilable at https://github.com/ wangxinyu0922/Second_Order_Parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://github.com/AntNLP/ gnn-dep-parsing 2 https://github.com/tdozat/Parser-v3 3 https://github.com/wangxinyu0922/ Second_Order_SDP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/yzhangcs/parser 5 At the time we finished the paper, the official code for the second-order tree CRF parser have not release yet. We believe it is a fair comparison since we use the same settings and GPU asZhang et al. (2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note thatZhang et al. (2019) reports higher difference in accuracy between first-order Local and Single approaches. The discrepancy is most likely caused by our better designed loss function and tuned hyper-parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the National Natural Science Foundation of China (61976139).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Universal dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2018,
"venue": "Faculty of Mathematics and Physics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre et al. 2018. Universal dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Experiments with a higherorder projective dependency parser",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras. 2007. Experiments with a higher- order projective dependency parser. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bi-directional attention with agreement for dependency parsing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2204--2214",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1238"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. 2016. Bi-directional attention with agreement for dependency parsing. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2204-2214, Austin, Texas. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semi-supervised sequence modeling with cross-view training",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1914--1925",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1217"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Christopher D. Man- ning, and Quoc Le. 2018. Semi-supervised se- quence modeling with cross-view training. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1914- 1925, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Simpler but more accurate semantic dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "484--490",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2077"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 2: Short Papers), pages 484-490, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "20--30",
"other_ids": {
"DOI": [
"10.18653/v1/K17-3002"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceed- ings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20-30, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Left-to-right dependency parsing with pointer networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fern\u00e1ndez-Gonz\u00e1lez",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "710--716",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1076"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Carlos G\u00f3mez- Rodr\u00edguez. 2019. Left-to-right dependency parsing with pointer networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 710-716, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Revisiting higher-order dependency parsers",
"authors": [
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erick Fonseca and Andr\u00e9 F. T. Martins. 2020. Revis- iting higher-order dependency parsers. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Approximation-aware dependency parsing by belief propagation",
"authors": [
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Gormley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "489--501",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00153"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew R. Gormley, Mark Dredze, and Jason Eisner. 2015. Approximation-aware dependency parsing by belief propagation. Transactions of the Association for Computational Linguistics, 3:489-501.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Graphbased dependency parsing with graph neural networks",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Yuanbin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2475--2485",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1237"
]
},
"num": null,
"urls": [],
"raw_text": "Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph- based dependency parsing with graph neural net- works. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 2475-2485, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00101"
]
},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient thirdorder dependency parsers",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo and Michael Collins. 2010. Efficient third- order dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics, pages 1-11, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distilling an ensemble of greedy dependency parsers into one MST parser",
"authors": [
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1744--1753",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1180"
]
},
"num": null,
"urls": [],
"raw_text": "Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distill- ing an ensemble of greedy dependency parsers into one MST parser. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 1744-1753, Austin, Texas. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Two/too simple adaptations of Word2Vec for syntax problems",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1299--1304",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1142"
]
},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015. Two/too simple adaptations of Word2Vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies, pages 1299- 1304, Denver, Colorado. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural probabilistic model for non-projective MST parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "59--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2017. Neural proba- bilistic model for non-projective MST parsing. In Proceedings of the Eighth International Joint Con- ference on Natural Language Processing (Volume 1: Long Papers), pages 59-69, Taipei, Taiwan. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Stackpointer networks for dependency parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zecong",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jingzhou",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1403--1414",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1130"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack- pointer networks for dependency parsing. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1403-1414, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Fourth-order dependency parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012: Posters",
"volume": "",
"issue": "",
"pages": "785--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Hai Zhao. 2012. Fourth-order depen- dency parsing. In Proceedings of COLING 2012: Posters, pages 785-796, Mumbai, India. The COL- ING 2012 Organizing Committee.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Turning on the turbo: Fast third-order nonprojective turbo parsers",
"authors": [
{
"first": "Andr\u00e9",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Almeida",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "617--622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order non- projective turbo parsers. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 617-622, Sofia, Bulgaria. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Dual decomposition with many overlapping components",
"authors": [
{
"first": "Andr\u00e9",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "M\u00e1rio",
"middle": [],
"last": "Figueiredo",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Aguiar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "238--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 Martins, Noah Smith, M\u00e1rio Figueiredo, and Pe- dro Aguiar. 2011. Dual decomposition with many overlapping components. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 238-249, Edinburgh, Scotland, UK. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "11th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algo- rithms. In 11th Conference of the European Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Non-projective dependency parsing using spanning tree algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Ribarov",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005. Non-projective dependency pars- ing using spanning tree algorithms. In Proceed- ings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523-530, Vancouver, British Columbia, Canada. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Universal dependency parsing from scratch",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "160--170",
"other_ids": {
"DOI": [
"10.18653/v1/K18-2016"
]
},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Timothy Dozat, Yuhao Zhang, and Christo- pher D. Manning. 2018. Universal dependency pars- ing from scratch. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 160-170, Brus- sels, Belgium. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "On the convergence of Adam and beyond",
"authors": [
{
"first": "Sashank",
"middle": [
"J"
],
"last": "Reddi",
"suffix": ""
},
{
"first": "Satyen",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Sanjiv",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. 2018. On the convergence of adam and beyond. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Dependency parsing by belief propagation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "145--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In Proceedings of the 2008 Conference on Empirical Methods in Natu- ral Language Processing, pages 145-156, Honolulu, Hawaii. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Graph-based dependency parsing with bidirectional LSTM",
"authors": [
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2306--2315",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1218"
]
},
"num": null,
"urls": [],
"raw_text": "Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2306-2315, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Second-order semantic dependency parsing with end-to-end neural networks",
"authors": [
{
"first": "Xinyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jingxian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Tu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4609--4618",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1454"
]
},
"num": null,
"urls": [],
"raw_text": "Xinyu Wang, Jingxian Huang, and Kewei Tu. 2019. Second-order semantic dependency parsing with end-to-end neural networks. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4609-4618, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Building a large-scale annotated Chinese corpus",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Fu-Dong",
"middle": [],
"last": "Chiou",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "COLING 2002: The 19th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Fu-Dong Chiou, and Martha Palmer. 2002. Building a large-scale annotated Chinese cor- pus. In COLING 2002: The 19th International Con- ference on Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Efficient second-order TreeCRF for neural dependency parsing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00975"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Zhang, Zhenghua Li, and Min Zhang. 2020. Ef- ficient second-order treecrf for neural dependency parsing. arXiv preprint arXiv:2005.00975.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "An empirical investigation of structured output modeling for graph-based neural dependency parsing",
"authors": [
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5592--5598",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1562"
]
},
"num": null,
"urls": [],
"raw_text": "Zhisong Zhang, Xuezhe Ma, and Eduard Hovy. 2019. An empirical investigation of structured output mod- eling for graph-based neural dependency parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5592-5598, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Head-driven phrase structure grammar parsing on Penn treebank",
"authors": [
{
"first": "Junru",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2396--2408",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1230"
]
},
"num": null,
"urls": [],
"raw_text": "Junru Zhou and Hai Zhao. 2019. Head-driven phrase structure grammar parsing on Penn treebank. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2396-2408, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"text": ",, Dozat",
"content": "
Hidden Layer | Hidden Sizes |
Word/GloVe/Char | 100 |
POS | 50 |
GloVe Linear | 125 |
BERT Linear | 125 |
BiLSTM | 3*600 |
Char LSTM | 1*400 |
Unary Arc (UD) | 500 |
Local1O/Local2O Unary Arc (Others) | 450 |
Single1O/Single2O Unary Arc (Others) | 550 |
Label | 150 |
Binary Arc | 150 |
Dropouts | Dropout Prob. |
Word/GloVe/POS | 20% |
Char LSTM (FF/recur) | 33% |
Char Linear | 33% |
BiLSTM (FF/recur) | 45%/25% |
Unary Arc/Label | 25%/33% |
Binary Arc | 25% |
Optimizer & Loss | Value |
Local1O/Local2O Interpolation (\u03bb) | 0.40 |
Single1O/Single2O Interpolation (\u03bb) | 0.07 |
Adam \u03b21 | 0 |
Adam \u03b22 | 0.95 |
Decay Rate | 0.85 |
Decay Step (without dev improvement) | 500 |
Weight Initialization | Mean/Stddev |
Unary weight | 0.0/1.0 |
Binary weight | 0.0/0.25 |
",
"type_str": "table",
"num": null
},
"TABREF1": {
"html": null,
"text": "",
"content": "",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"text": "",
"content": "<table><tr><td>
 | PTB | CTB | bg | ca | cs | de | en | es | fr | it | nl | no | ro | ru | Avg. |
GNN | 94.15 | 89.50\u2020 | 90.33 | 92.39 | 90.95 | 79.73 | 88.43 | 91.56 | 87.23 | 92.44 | 88.57 | 89.38 | 85.26 | 91.20 | 89.37 |
Single1O | 94.04 | 89.28 | 90.05 | 92.72\u2020 | 92.07 | 81.73 | 89.55 | 92.10 | 88.27 | 92.64 | 89.57 | 91.81 | 85.39 | 92.60 | 90.13 |
Local1O | 94.23 | 89.28 | 90.30 | 92.56 | 92.15 | 81.42 | 89.43 | 91.99 | 88.26 | 92.49 | 89.76 | 91.91 | 85.27 | 92.72 | 90.13 |
Single2O | 94.19 | 89.55\u2020 | 90.24 | 92.82\u2020 | 92.13 | 81.99\u2020 | 89.64\u2020 | 92.17\u2020 | 88.69 | 92.83\u2020 | 89.97\u2020 | 91.90 | 85.53\u2020 | 92.58 | 90.30\u2020 |
Local2O | 94.34\u2020\u2021 | 89.57\u2020 | 90.53\u2020 | 92.83\u2020 | 92.12 | 81.73 | 89.72\u2020 | 92.07 | 88.53 | 92.78 | 90.19\u2020 | 91.88 | 85.88\u2020\u2021 | 92.67 | 90.35\u2020 |
+BERT |
Single1O | 95.20 | 91.64\u2020 | 90.87 | 93.55\u2020 | 92.01 | 81.95\u2020 | 90.44\u2020 | 92.56\u2020 | 89.35 | 93.44\u2020 | 90.89 | 91.78 | 86.13\u2020 | 92.51 | 90.88\u2020 |
Local1O | 95.32 | 91.30 | 91.03 | 93.17 | 91.93 | 81.66 | 90.09 | 92.32 | 89.26 | 93.05 | 90.93 | 91.62 | 85.67 | 92.51 | 90.70 |
Single2O | 95.31 | 91.69\u2020\u2021 | 91.30\u2020 | 93.60\u2020\u2021 | 92.09\u2020 | 82.00\u2020\u2021 | 90.75\u2020\u2021 | 92.62\u2020\u2021 | 89.32 | 93.66\u2020 | 91.21 | 91.74 | 86.40\u2020 | 92.61 | 91.02\u2020\u2021 |
Local2O | 95.34 | 91.38 | 91.13 | 93.34\u2020 | 92.07\u2020 | 81.67 | 90.43\u2020 | 92.45\u2020 | 89.26 | 93.50\u2020 | 90.99 | 91.66 | 86.09\u2020 | 92.66 | 90.86\u2020 |
",
"type_str": "table",
"num": null
},
"TABREF5": {
"html": null,
"text": "Comparison of training and testing speed (sentences per second) and the time complexity of the decoders of different approaches on PTB.",
"content": "",
"type_str": "table",
"num": null
}
}
}
}