{
"paper_id": "N19-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:58:12.189007Z"
},
"title": "Recurrent Models and Lower Bounds for Projective Syntactic Decoding",
"authors": [
{
"first": "Natalie",
"middle": [],
"last": "Schluter",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IT University of Copenhagen Copenhagen",
"location": {
"country": "Denmark"
}
},
"email": "natschluter@itu.dk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The current state-of-the-art in neural graphbased parsing uses only approximate decoding at the training phase. In this paper aim to understand this result better. We show how recurrent models can carry out projective maximum spanning tree decoding. This result holds for both current state-of-the-art models for shiftreduce and graph-based parsers, projective or not. We also provide the first proof on the lower bounds of projective maximum spanning tree, DAG, and digraph decoding. 1 For the remainder of this paper, all decoding algorithms discussed are first-order. current components can intrinsically simulate exact projective decoding.",
"pdf_parse": {
"paper_id": "N19-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "The current state-of-the-art in neural graphbased parsing uses only approximate decoding at the training phase. In this paper aim to understand this result better. We show how recurrent models can carry out projective maximum spanning tree decoding. This result holds for both current state-of-the-art models for shiftreduce and graph-based parsers, projective or not. We also provide the first proof on the lower bounds of projective maximum spanning tree, DAG, and digraph decoding. 1 For the remainder of this paper, all decoding algorithms discussed are first-order. current components can intrinsically simulate exact projective decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For several years, the NLP field has seen widespread investigation into the application of Neural Networks to NLP tasks, and with this, much, rather inexplicable progress. A string of very recent work (for example, Chen et al. (2018) ; Weiss et al. (2018) ; Peng et al. (2018) ), has attempted to delve into the formal properties of neural network topology choices, in attempts to both motivate, predict, and explain associated research in the field. This paper aims to further contribute along this line of research.",
"cite_spans": [
{
"start": 215,
"end": 233,
"text": "Chen et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 236,
"end": 255,
"text": "Weiss et al. (2018)",
"ref_id": "BIBREF20"
},
{
"start": 258,
"end": 276,
"text": "Peng et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present the results of our study into the ability of state-of-the-art first-order neural graphbased parsers, with seemingly simple architectures, to explicitly forego structured learning and prediction. 1 In particular, this is not due to a significantly faster, simpler, algorithm for projective maximum spanning tree (MST) decoding than Eisner (1996) 's algorithm, which we formally prove to be impossible, given the Exponential Time Hypothesis. But rather, this is due to the capacity of recurrent components of these architectures to implicitly discover a projective MST. We prove this formally by showing how these re-",
"cite_spans": [
{
"start": 342,
"end": 355,
"text": "Eisner (1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The context. The current state-of-the-art for graph-based syntactic dependency parsing is a seemingly basic neural model by Dozat and Manning (2017) . The parser's performance is an improvement on the first, even simpler, rather engineering-free, neural graph-based parser by Kiperwasser and Goldberg (2016) . This latter parser updates with respect to an output structure: projective decoding over a matrix of arc scores coupled with hinge loss between predicted and gold arcs, reporting parser performance of, for example, 93.32% UAS and 91.2% LAS on the converted Penn Treebank. 2 Remarkably, the former parser by Dozat and Manning (2017) forgoes entirely any structural learning, employing simple cross-entropy at training time, and saving (unconstrained) maximum spanning tree decoding for test time.",
"cite_spans": [
{
"start": 124,
"end": 148,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF4"
},
{
"start": 276,
"end": 307,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF10"
},
{
"start": 582,
"end": 583,
"text": "2",
"ref_id": null
},
{
"start": 617,
"end": 641,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We further optimised Kiperwasser and Goldberg (2016)'s parser (Varab and Schluter, 2018) and extended it for cross-entropy learning, as is done by Dozat and Manning (2017) . At test time, instead of any explicit decoding algorithm over the arc score matrix, we simply take the maximum weighted incoming arc for each word; that is, the parser is highly streamlined, without any heavy neural network engineering, but now also without any structured learning, nor without any structural decoding at test time. The resulting neural parser still achieves an impressively competitive UAS of 92.61% evaluated on the converted Penn Treebank data, without recourse to any pre-trained embeddings, unlike the systems by Kiperwasser and Goldberg (2016) and Dozat and Manning (2017) . Using GloVe 100-dimensional Wikipedia and Gigaword corpus (6 billion tokens) pretrained embeddings, without updates, but linearly projected through a single linear dense layer to the same dimension, the structure-less parser achieves 93.18% UAS. 3 With this paper, we shed light on these surprising results from seemingly simple architectures. The insights we present here apply to any neural architecture that first encodes input words of a sentence using some type of recurrent neural network-i.e., all current state-ofthe-art graph-based or shift reduce neural parsers. Our contributions. This paper presents results for understanding the surprisingly superior performance of structure-free learning and prediction in syntactic (tree) dependency parsing. 1. We provide a formal proof that there will never be an algorithm that carries out projective MST decoding in sub-cubic time, unless a widely believed assumption in computational complexity theory, the Exponential Time Hypothesis (ETH), is false. Hence, computationally, we provide convincing evidence that these neural parsing architectures cannot be as simple as they appear. These results are then extended to projective maximum spanning DAG and digraph decoding. 2. In particular, we then show how to simulate Eisner's algorithm using a single recurrent neural network. This shows how, in particular, the LSTM stacked architectures for graph-based parsing by Dozat and Manning (2017) , Cheng et al. (2016) , Hashimoto et al. (2017) , Zhang et al. (2017) , and Kiperwasser and Goldberg (2016) , are capable of intrinsically decoding over arc scores. This therefore provides one practical application where RNNs do not need supplementary approximation considerations (Chen et al., 2018) .",
"cite_spans": [
{
"start": 62,
"end": 88,
"text": "(Varab and Schluter, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 147,
"end": 171,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF4"
},
{
"start": 709,
"end": 740,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF10"
},
{
"start": 745,
"end": 769,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF4"
},
{
"start": 1018,
"end": 1019,
"text": "3",
"ref_id": null
},
{
"start": 2194,
"end": 2218,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF4"
},
{
"start": 2221,
"end": 2240,
"text": "Cheng et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 2243,
"end": 2266,
"text": "Hashimoto et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 2269,
"end": 2288,
"text": "Zhang et al. (2017)",
"ref_id": "BIBREF21"
},
{
"start": 2295,
"end": 2326,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF10"
},
{
"start": 2500,
"end": 2519,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
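To make the structure-free test-time strategy just described concrete, the following is a minimal sketch of per-word head selection by taking the maximum-weighted incoming arc, ignoring all tree constraints. The function name, the toy score matrix, and the use of NumPy are our own illustrative choices, not the released parser code.

```python
import numpy as np

def argmax_head_decode(scores):
    """Structure-free 'decoding': for every non-root word, pick the
    highest-scoring incoming arc, ignoring tree constraints.

    scores[h, d] is the score of the arc from head h to dependent d;
    index 0 is the artificial root. (A sketch of the strategy described
    above, not the authors' implementation.)"""
    n = scores.shape[0]
    heads = np.full(n, -1, dtype=int)      # heads[0] stays -1 for the root
    for d in range(1, n):
        column = scores[:, d].copy()
        column[d] = -np.inf                # a word cannot head itself
        heads[d] = int(np.argmax(column))  # may yield cycles or several roots
    return heads

# toy example: 1 root symbol + 3 words
S = np.array([[0., 5., 1., 2.],
              [0., 0., 4., 1.],
              [0., 2., 0., 6.],
              [0., 1., 3., 0.]])
print(argmax_head_decode(S))   # e.g. [-1  0  1  2]
```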
{
"text": "The Exponential Time Hypothesis (ETH) and k-Clique. The Exponential Time Hypothesis is a 3 Our structure-less but optimised implementation of the (Kiperwasser and Goldberg, 2016) graph-based parser, with 100 dimensional generated word embeddings, 50 dimensional generated POS-tag embeddings, a stack of 3 BiLSTMs with an output dimension of 225 each (total 450 concatenated), no dropout, MLP mappings for arc nodes of 400 dimension, and for labels of 100 dimensions. We use DyNet 2.1 (Neubig et al., 2017) , and the parser code is freely available at https://github.com/ natschluter/MaxDecodeParser. widely held though unproven computational hardness assumption stating that 3-SAT (or any of the several related NP-complete problems) cannot be solved in sub-exponential time in the worst case (Impagliazzo and Paturi, 1999) . According to ETH, if 3-SAT were solvable in sub-exponential time, then also P = N P . But the ETH assumption is stronger than the assumption that P = N P , so the converse is not necessarily true. ETH can be used to show that many computational problems are equivalent in complexity, in the sense that if one of them has a subexponential time algorithm then they all do.",
"cite_spans": [
{
"start": 146,
"end": 178,
"text": "(Kiperwasser and Goldberg, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 484,
"end": 505,
"text": "(Neubig et al., 2017)",
"ref_id": null
},
{
"start": 793,
"end": 823,
"text": "(Impagliazzo and Paturi, 1999)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The k-Clique problem is the parameterised version of the NP-hard Max-Clique problem. This canonical intractable problem in parameterised complexity asks, given an input graph, whether there exists a clique of size k. A na\u00efve algorithm for this problem running in O(n k ) time checks all n k combinations of nodes and verifies each combination in O(k 2 ) time to see if they form a clique. However, Chen et al. (2006) showed that the problem has no n o(k) time algorithm-that is, the problem has no algorithm that runs in time subexponential in the exponent k assuming ETH. 4 . Recurrent neural networks. Recurrent neural networks (Rumelhart et al., 1986 ), as we generally use them in practise in NLP, take as input a matrix x, containing a sequence of n vectors x = x 1 , x 2 , . . . , x n , and apply the following set of equations recursively, with h 0 the initial state:",
"cite_spans": [
{
"start": 398,
"end": 416,
"text": "Chen et al. (2006)",
"ref_id": "BIBREF1"
},
{
"start": 573,
"end": 574,
"text": "4",
"ref_id": null
},
{
"start": 630,
"end": 653,
"text": "(Rumelhart et al., 1986",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
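For illustration, here is a minimal sketch of the naïve k-Clique check described above, enumerating all n-choose-k node subsets and verifying each in O(k^2) pair checks. The function name and the edge representation are our own illustrative choices.

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Naive k-Clique check: enumerate every k-subset of nodes and verify
    that all pairs are adjacent, as in the O(n^k) algorithm described above.
    `adj` is a set of frozenset edges of an undirected graph (a sketch)."""
    nodes = {v for e in adj for v in e}
    for subset in combinations(sorted(nodes), k):
        if all(frozenset((u, v)) in adj for u, v in combinations(subset, 2)):
            return True
    return False

# triangle plus a pendant node: has a 3-clique, no 4-clique
edges = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3), (3, 4)]}
print(has_k_clique(edges, 3), has_k_clique(edges, 4))  # True False
```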
{
"text": "h t =g(b + Wh (t\u22121) + Ux t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
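As a small illustration of this recurrence, the following sketch unrolls h_t = g(b + W h_{t-1} + U x_t) with g = ReLU over a toy input sequence. The shapes and parameter initialisation are arbitrary choices of ours.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def rnn_forward(xs, W, U, b, h0):
    """Elman-style recurrence h_t = g(b + W h_{t-1} + U x_t) with g = ReLU,
    as in the equation above (a sketch; dimensions are ours)."""
    h = h0
    states = []
    for x in xs:                      # xs: sequence of input vectors x_1..x_n
        h = relu(b + W @ h + U @ x)
        states.append(h)
    return states

rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4, 4)), rng.normal(size=(4, 3)), np.zeros(4)
hs = rnn_forward([rng.normal(size=3) for _ in range(5)], W, U, b, np.zeros(4))
print(len(hs), hs[-1].shape)          # 5 (4,)
```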
{
"text": "Here, g is the activation function. Typically this activation function is tanh, however the computational power of the model is theoretically maintained with any so-called \"squashing\" function (Siegelmann, 1996) .",
"cite_spans": [
{
"start": 193,
"end": 211,
"text": "(Siegelmann, 1996)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The choice of g, on the other hand, has been shown to affect the power of the recurrent model in general, depending on the restrictions involved in the formal investigation. For the purposes of this paper, the activation function is a rectified linear unit, or ReLU. The general computational power of such RNNs has recently been formally explored by Chen et al. (2018) LSTMs (Hochreiter and Schmidhuber, 1997) are RNNs with weighted self-loops (so-called gates). The recurrence equations take the form:",
"cite_spans": [
{
"start": 351,
"end": 369,
"text": "Chen et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 376,
"end": 410,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "f t =g 1 (b f + W f h (t\u22121) + U f x t ) i t =g 1 (b i + W i h (t\u22121) + U i x t ) o t =g 1 (b o + W o h (t\u22121) + U o x t ) c t =f t \u2022 c t\u22121 + i t \u2022 g 1 (b c + W c h (t\u22121) + U c x t ) h t =o t \u2022 g 2 (c t ) where g 1 , g 2 are activation functions. Setting all W f , W i , W c , U f , U i , U c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "to be zero matrices, b f to be a 0 vector, b i , b c to be 1 vectors, and the activation function g 2 to be ReLU we see that, in terms of hidden states, the LSTM model includes that of the RNN. In this paper, all activation functions are ReLUs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
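The gate-fixing argument above can be checked numerically: with W_f, W_i, W_c, U_f, U_i, U_c zero, b_f = 0, b_i = b_c = 1, and ReLU activations, the LSTM hidden state coincides with the plain ReLU RNN driven by (b_o, W_o, U_o). The following sketch (our own, with arbitrary dimensions) verifies this step by step.

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def lstm_step(x, h, c, P):
    # one LSTM step with ReLU activations, following the equations above
    f = relu(P["bf"] + P["Wf"] @ h + P["Uf"] @ x)
    i = relu(P["bi"] + P["Wi"] @ h + P["Ui"] @ x)
    o = relu(P["bo"] + P["Wo"] @ h + P["Uo"] @ x)
    c = f * c + i * relu(P["bc"] + P["Wc"] @ h + P["Uc"] @ x)
    return o * relu(c), c

d, dx = 4, 3
rng = np.random.default_rng(1)
Zh, Zx = np.zeros((d, d)), np.zeros((d, dx))
Wo, Uo, bo = rng.normal(size=(d, d)), rng.normal(size=(d, dx)), rng.normal(size=d)
P = dict(Wf=Zh, Uf=Zx, bf=np.zeros(d),   # forget gate shut off
         Wi=Zh, Ui=Zx, bi=np.ones(d),    # input gate fixed to 1
         Wc=Zh, Uc=Zx, bc=np.ones(d),    # cell candidate fixed to 1
         Wo=Wo, Uo=Uo, bo=bo)            # output gate carries the RNN
h = c = np.zeros(d)
for _ in range(5):
    x = rng.normal(size=dx)
    h_lstm, c = lstm_step(x, h, c, P)
    h_rnn = relu(bo + Wo @ h + Uo @ x)   # plain ReLU RNN step from the same h
    assert np.allclose(h_lstm, h_rnn)
    h = h_lstm
print("LSTM with gates fixed reproduces the ReLU RNN")
```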
{
"text": "State-of-the-art in neural syntactic dependency parsing. The graph-based neural architectures we refer to here have important commonalities. We focus our discussion on the key contributions by Kiperwasser and Goldberg (2016) (the simplest architecture, and the first), and by Dozat and Manning (2017) (the state-of-the-art).",
"cite_spans": [
{
"start": 193,
"end": 224,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "The architectures can be partitioned into three general components:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "1. Word representation generation: Both architectures generate word embeddings and POS-tag embeddings. Pretrained embeddings, if they are being used, are added to the trained embeddings, and concatenated to corresponding POS-tag embedding. The embeddings are sent through a stacked BiLSTM. Output embeddings are projected to two further vector representations: as head node or as dependent node (specialised representations). 2. Arc scoring: All (head, dependent) combinations are scored. 3. Decoding: By some decoding process, the arc score matrix yields a (possibly disconnected) graph representation of the input sentence: (n-1) arcs, where no word has more than one head, as well as their probabilities. We show in this paper how the second and third components can be carried out implicitly within the BiLSTM layers of the first component. Since currently state-of-the-art shift-reduce parsers also encode input words of a sentence using some type of recurrent neural network, this insight also applies to these non-graph-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
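As a rough illustration of the second component, the following sketch scores all (head, dependent) pairs with a Kiperwasser-and-Goldberg-style MLP over head and dependent specialisations of the BiLSTM outputs. The parameter names and dimensions are made up for the example; the actual parsers differ in detail (Dozat and Manning, for instance, use a biaffine scorer).

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def score_all_arcs(H, params):
    """Component 2 in the breakdown above: score every (head, dependent)
    pair from the BiLSTM outputs H (one row per word, root included).
    Sketch with made-up parameter names, not the authors' exact scorer."""
    Wh, Wd, w, b = params["Wh"], params["Wd"], params["w"], params["b"]
    head_repr = H @ Wh.T                 # specialised "as head" vectors
    dep_repr = H @ Wd.T                  # specialised "as dependent" vectors
    n = H.shape[0]
    S = np.empty((n, n))
    for h in range(n):
        for d in range(n):
            S[h, d] = w @ relu(head_repr[h] + dep_repr[d] + b)
    return S                             # S[h, d]: score of arc h -> d

rng = np.random.default_rng(2)
H = rng.normal(size=(5, 8))              # 1 root + 4 words, BiLSTM dim 8
params = dict(Wh=rng.normal(size=(16, 8)), Wd=rng.normal(size=(16, 8)),
              w=rng.normal(size=16), b=np.zeros(16))
print(score_all_arcs(H, params).shape)   # (5, 5)
```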
{
"text": "Related computational hardness results. To date there is no known truly sub-cubic algorithm for Boolean Matrix Multiplication (BMM), nor for Context-Free Grammar (CFG) parsing. Adapting Satta (1994) 's lower bound proof for Tree Adjoining Grammar parsing, Lee (1997) proved that BMM can be reduced to finding a valid derivation a string of length O(n 1 3 ) with respect to a CFG of size \u0398(n 2 ). Lee (1997)'s reduction shows that there can be a no O(|G|n 3\u2212 ) for some constant > 0 (sub-cubic-time) algorithm for CFG-parsing without implying a significant breakthrough in BMM, which is widely believed not to be possible. However, the construction required the grammar size |G| = \u0398(n 6 ) to be dependent on the the input size n, which, as Lee (1997) points out, is unrealistic in most applications.",
"cite_spans": [
{
"start": 186,
"end": 198,
"text": "Satta (1994)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Abboud et al. 2015, on the other hand, present a proof of the unlikelihood of a sub-cubic algorithm for CFG-parsing using ETH and specifically the k-Clique problem. Given an instance of the 3k-Clique problem (i.e., an undirected graph and the parameter 3k), they construct a string w of length n k and a CFG, G of constant size (for any 3k) such that if G derives w in sub-cubic time, then there is an algorithm running in time n o(3k) for the 3k-Clique problem, which, as we explained in Section 2, is impossible, assuming ETH.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "To date, no truly sub-cubic algorithm for projective maximum spanning tree decoding is known. In the next section, we present a proof similar in spirit to Abboud et al. (2015)'s that also shows that such an algorithm most likely cannot be found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Current state-of-the-art neural graph-based parsers forego structural learning and do not even seem to require structured prediction. In this section, we provide evidence that this is indeed not because the parsers are so seemingly simple. Computationally it is unlikely that some simpler and faster decoding method alone is achieving such a competitive performance. We show this with the following theorem. Theorem 1. Under the assumption of ETH, there is no algorithm that carries out projective MST decoding in time significantly faster than O(n 3 ); that is, there is no sub-cubic (O(n 3\u2212 ) for some constant > 0) time algorithm for finding the maximally weighted projective spanning tree, T * , over a weighted digraph input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Notation and special remarks. We denote by [n] the set {1, . . . , n}. For lack of a better symbol, we use here to signify iterative string concatenation, which otherwise is signified by just writing symbols beside each other, or by the symbol \u2022. Rather than working over words of a sentence, given the formal nature of the proof, the projective MST algorithm must work over symbols of the input word w. Hence the input is a weighted digraph over the symbols of w and the output is a projective MST, T * , over these symbols. The reduction, makes use of the weight of T * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Proof (of Theorem 1). Let G = (V, E) be an arbitrary simple undirected graph. We place an arbitrary order on the nodes from V and fix it, so",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "V := {v 1 , . . . , v n }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "As in Abboud et al. 2015's reduction from 3kclique to CFG-parsing, we first generate a string of length O(n k ) to represent the graph for the task at hand; we do so in O(n k ) time. The string contains a representation of all of the possible k-cliques in the graph. We can create a listing of all of these k-cliques using exhaustive search in at most O(n k ) time and space. Let K :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "= {{v i 1 , . . . , v i k } a k-clique in G | i j \u2208 [n], v i j \u2208 V }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "correspond to the set of k-cliques from G, and place an arbitrary order on K := {k 1 , . . . , k |K| }. So, |K| \u2208 O(n k ). We define 6 \u2022 k \u2022 |K| sets of symbols with respect to V , each with n(= |V |) elements:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "\u2022 Unmarked symbols:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "A i,t := {a i,j,t | j \u2208 [n]} for i \u2208 [k], t \u2208 [|K|]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "where a i,j,t corresponds to node v j \u2208 V . Similarly for the sets B i,t and C i,t . \u2022 Marked symbols:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "A i,t := {a i,j,t | a i,j,t \u2208 A i }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Similarly for the sets B i,t and C i,t . We let A = \u222a i\u2208[k],t\u2208[|K|] (A i,t \u222a A i,t ) and similarly for B and C. Then the vocabulary for constructing our input word is U := A \u222a B \u222a C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Constructing the input word w. We now construct a word w over the vocabulary U such that if the projective maximum spanning tree has weight |w| + 2k \u2212 2 + |K|, then the graph G has a 3kclique. We do this by defining the weights of possible arcs between carefully selected pairs of symbols from the vocabulary. The entire construction of the word w takes time O(n k ) (coinciding with the upper bound on the word's length).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "The input word is made up of a series of gadgets. For each k-clique, we have three types of gadgets: A-, B-, and C-gadgets. A-and C-gadgets each correspond both to a particular k-clique in G, as well as all k-cliques in G. B-gadgets, on the other hand, only correspond to particular k-cliques in G. Let k t = {v (t,1) , . . . , v (t,k) } \u2208 K be the tth k-clique. Even if each v (t,q) is a node in V (G) the notation for indices is useful to refer to the qth node of the tth k-clique. Also, in what follows, we use the middle index of symbols to simultaneously refer to the k-clique membership: j (t,q) \u2208 [n], and simultaneously allows us to refer to the qth node in the tth k-clique from K, for q \u2208 [k], t \u2208 [|K|]. A-gadgets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "A(t) := i\u2208[k] (a i,j (t,1) ,t a i,j (t,2) ,t \u2022 \u2022 \u2022 a i,j (t,k) ,t )\u2022 i\u2208[k] (a i,j (t,1) ,t \u2022 a i,j (t,2) ,t \u2022 \u2022 \u2022 a i,j (t,k) ,t ) C-gadgets: C(t) :=( i\u2208[k] c i,j (t,1) ,t c i,j (t,2) ,t \u2022 \u2022 \u2022 c i,j (t,k) ,t ) \u2022c k,j (t,k) ,t \u2022 c k\u22121,j (t,k\u22121) ,t \u2022 \u2022 \u2022 c 1,j (t,1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": ") ,t and B-gadgets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "B(t) :=L t b k,j (t,k) ,t \u2022 \u2022 \u2022 b 2,j (t,2) ,t b 1,j (t,1) ,t H t b k,j (t,k) ,t \u2022b k\u22121,j (t,k\u22121) ,t \u2022 \u2022 \u2022 b 2,j (t,2) ,t b 1,j (t,1) ,t R t ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "We call the symbol H t the head of the gadget B(t), and L t and R t the gadget's left and right boundary symbols respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "We then set the word w to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "t\u2208[|K|] A(t) \u2022 t\u2208[|K|] B(t) \u2022 t\u2208[|K|] C(t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "consisting of an A-gadget region followed by a Bgadget region, and then a C-gadget region.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Idea of the proof. The idea of the proof is to allow an optimal projective MST, T * , to be built that matches up one distinct (with respect to the kclique) gadget from each region, each representing different k-cliques whenever there is a 3kclique in G. We will deduce the existence of such a clique by the weight of T * . Essentially, a projective spanning tree of weight |w| \u2212 1 will always be present, but T * having weight superior to this will indicate a matching up of gadgets. Now, suppose we have a sub-cubic projective MST algorithm A. By our construction, if A returns a T * with weight |w| + 2k \u2212 2 + |K|, then there is a 3kclique. Otherwise, there is no 3k-clique. On input of length n k , the sub-cubic time algorithm runs in time O((n k ) 3\u2212 ) = O(n 3k\u2212k ) \u2208 n o(3k) for some constant > 0. Thus A will have solved 3k-clique in time n o(3k) , which is impossible under the ETH assumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Note that by the definition of a 3k-clique, a 3kclique can be partitioned arbitrarily into 3 equal sized sub-graphs over k nodes that must each form a k-clique. So, if |K| < k, then there trivially cannot be any 3k-clique in G. We therefore only consider without loss of generality the argumentation for the case where |K| \u2265 k, since our algorithm can simply return a negative answer about the existence of a 3k-clique in G after enumerating the set K and before computing any projective MST. The projective MST algorithm takes as input the description of a weighted digraph, D, whose nodes are defined by symbols of the input word w. The digraph need not be explicitly constructed, since the algorithm can simply use the description of the digraph that follows instead to check for the existence of arcs between symbols. This description has constant length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "A description of the input weighted graph D over w. For the input digraph, arcs can (1) be missing from the fully complete digraph, (2) have weight 1, or (3) have weight 2. To construct D, weights are assigned to arcs by the following rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Weight 1 arcs. The following arcs of our input graph have weight 1. 1. Region connectivity arcs. These arcs ensure connectivity is possible within respective gadget regions. (a) All arcs (a 1,j ,t , a i,j,t\u22121 ) and (a 1,j ,t , a i,j,t\u22121 ), i.e., the first symbol of the tth A-gadget attaches to all symbols of the previous (t \u2212 1th) A-gadget. (b) All arcs (c 1,j ,t , c i,j,t+1 ) and (c 1,j ,t , c i,j,t+1 ), i.e., the last symbol of the tth C-gadget attaches to all symbols of the next (t + 1th) C-gadget gadget. (c) All",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "arcs (b i,j,t , b i+1,j ,t ) and (b i+1,j,t , b i,j ,t ) for i \u2208 [k \u2212 1]. (d) All arcs (b k,j,t , L t ), (b 1,j ,t , R t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "(e) All arcs (H t , b 1,j,t ) and (H t , b k,j ,t ) making H t a possible head of the respective B-gadget (B-gadget heads) for any MST. j (t,k) ,t , c k,j,t ), i.e., arcs from the last nonmarked symbol to the first marked symbol, in every C-gadget. Also, all arcs (c i+1,j,t , c i,j,t ) for i \u2208 [k \u2212 1], i.e., together forming a path of marked symbols within each C-gadget. The following arcs are the reversals of (1c) through (1e). (h) All arcs (b i+1,j,t , b i,j ,t ) and",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "j (t,k)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "(b i,j,t , b i+1,j ,t ) for i \u2208 [k \u2212 1]. (i) All arcs (L t , b k,j,t ), (R t , b 1,j ,t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "(j) All arcs (b 1,j,t , H t ) and (b k,j ,t , H t ) making H t the head of the respective B-gadget (B-gadget heads) for any MST. 2. Boundary connectivity arcs. These arcs ensure that the boundaries of regions are connected. (a) The arcs (L 1 , a i,j,|K| ) and (L 1 , a i,j,|K| ), i.e., all symbols from the last of the Agadgets attach to the first symbol of the Bgadget region. (b) The arcs (R |K| , c i,j,1 ) and (R |K| , c i,j,1 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "i.e., all symbols from the first of the Cgadgets attach to the last symbol of the Bgadget region. 3. G-induced arcs. These arcs reflect the connections of the original graph G, and ultimately the existence of a 3k-clique. (a) All arcs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "(b i,j,t , a i,j ,t ), for each i \u2208 [k \u2212 1], t = t , if v j v j \u2208 E(G) (i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": ", not for i = k, which has a weight of 2 rather).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "(b) All arcs (b i,j,t , c i,j ,t ), for each i \u2208 {2, . . . , k}, t = t , if v j v j \u2208 E(G) (i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": ", not for i = 1, which has a weight of 2 rather). (c) All",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "arcs (c i,j,t , a i,j ,t ) for all i \u2208 [k], t = t , if v j v j \u2208 E(G) (i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "e., this time also for i = 1). As we show in Lemma 1.1, with the region connectivity arcs (1a-1g) and boundary connectivity arcs (2), we ensure that the algorithm can always return a projective MST with weight at least |w| \u2212 1. The G-induced arcs and region connectivity arcs (1h-1j, 4) on the other hand will be triggered to use by the algorithm's prioritisation of the following arcs. Weight 2 arcs. We have the following arcs of weight 2. 4. Region connectivity arcs. (L t+1 , L t ) and Proof. The A-region, together with the symbol L 1 from the B-region can form a tree rooted in L 1 using region connectivity arcs (1a) with boundary connectivity arcs (2a)-all weight 1 arcs. Similarly for the C-region with the symbol R |K| from the B-region (arcs (1b) and (2b)). Moreover, these regional sub-trees are trivially projective. If we construct a projective subtree out of the B-region, in which L 1 and R |K| are leaf nodes, then we have the result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "(R t , R t+1 ) for t \u2208 [|K| \u2212 1]. 5. G-induced arcs. (a) All arcs (b k,j,t , a k,j ,t ), for each t = t , if v j v j \u2208 E(G) (b) All arcs (b 1,j,t , c 1,j ,t ), for each t = t , if v j v j \u2208 E(G)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "The combination of weight-1 arcs from (1c), (1d), and (1e) results in each B-gadget B(t) being a projective subtree headed by its head node H t made up of a combination of two paths H t , b 1,j 1 ,t , . . . , b k,j k ,t , L t and H t , b k,j k ,t , . . . , b 1,j 1 ,t , R t . To make a projective subtree out of the entire B-region, we choose some arbitrary H t node as the root and take further weight-1 arcs described in (1f):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "(L p , H p\u22121 ) if p \u2264 i and (R p , H p + 1) otherwise, for i \u2208 [2, |K| \u2212 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "In all these possible B-regional projective subtrees, both L 1 and R |K| are leaf nodes, which gives the result. Lemma 1.2. Let T * be a projective MST over D. There are at most 2k+(|K|\u22121) arcs of weight 2 in T * : k from the B-to the A-region, k from the B-to the C-region, and the rest internal to the B-region.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "The number of arcs of weight 2, internal to or originating from the B-region, will be maximised if arcs exiting the B-region all originate from the same B-gadget (instead of 2+ distinct ones).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Moreover, suppose distinct t 1 , t 2 , t 3 \u2208 [|K|]. If T * includes an arc of weight 2 from gadget B(t 2 ) to gadget A(t 1 ) and from gadget B(t 2 ) to gadget C(t 3 ), then T * must also include arcs characterised by the following 1. all non-marked nodes in A(t 1 ) have nonmarked heads in B(t 2 ), 2. all non-marked nodes in C(t 3 ) have marked heads in B(t 2 ), and 3. all marked nodes in A(t 1 ) have marked heads in C(t 3 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Proof. There are only arcs of weight 2 in D from the B-region to both the A-and the C-regions, and internally in the B-region. We show that there are at most k weight 2 arcs connecting the A-and Bregions and C-and B-regions. Then we show that the maximal number of weight 2 edges internal to the B-region is (|K| \u2212 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Suppose there are more than k arcs of weight 2 from the B-region to the A-region in T * . Then there are at least two of these arcs entering different A-gadgets: (b 1,j ,t , a 1,j,t ) and (b 1,i ,p , a 1,i,p ), with p < t. Consider the barred symbols in the tth A-gadget. There are only two possible heads:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "(1) the symbol following the gadget (region connectivity arcs (1a) or boundary connectivity arcs (2a)), which by projectivity is excluded because these arcs would cross (b 1,j ,t , a 1,j,t ), or (2) symbols from the C-region, which by projectivity is also excluded because they would cross the arc (b 1,i ,p , a 1,i,p ) .",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 319,
"text": "(b 1,i ,p , a 1,i,p )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "The proof that there are at most k arcs of weight 2 from the B-region to the C-region in T * is analogous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "For the maximal number of arcs of weight 2, internal to the B-region, we first consider the maximum number of weight 2 region connectivity arcs (4). By projectivity, a B-gadget with arcs entering an A-or C-gadget cannot have any entering weight 2 region connectivity arc. Also, by projectivity, a single B-gadget can have at most 1 weight 2 region connectivity arc. Thus, the number of weight 2 arcs would be maximised by ensuring arcs exiting the B-region originate from the same B-gadget, so only one B-gadget does not have weight 2 entering arcs. Since there are |K| B-gadgets in total, this means there are at most |K| \u2212 1 weight 2 Bregion internal arcs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "The rest of the proof follows by the similar projectivity arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Lemma 1.3. T * has weight |w|+2k\u22121+(|K|\u22121) if and only if there is a 3k-clique in G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Proof. (\u21d0) Suppose there is a 3k-clique in G consisting of the three k-cliques k 1 , k 2 , and k 3 , and such that k 1 \u222a k 2 \u222a k 3 is a 3k-clique. In w, there must be corresponding gadgets in each of its gadget regions. We consider A(1) the gadget for k 1 in A, by B(2) the gadget for k 2 in B, and by C(3) the gadget for k 3 in C. We build up a set S of arcs based on these three gadgets. The set S consists of all the possible G-induced (weight 1 and 2) arcs between these three regions-a disconnected set where no two arcs cross, by Lemma 1.2. By the same lemma, S includes exactly 2k + (|K| \u2212 1) arcs of weight 2. We will add arcs to S to connect the rest of the symbols in w until we form a tree, and by Lemma 1.2 again we cannot add any further weight 2 arcs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "We must now supplement S to make a tree. We first connect the B-region. For t < 2, we connect B-gadgets internally by making the path from the R t to L t , using weight 1 region connectivity arcs. We make paths in the opposite direction, from L t to R t for t > 2. We then add all possible weight 2 region connectivity arcs. This makes A(1) and B-region connected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "All other A-gadgets are connected as in the proof of Lemma 1.1. Similarly for the C-gadgets before and after C(3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "The only nodes that still lack a head node are the marked nodes from C(3). We connect these using region connectivity arcs from (1g).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "We have now constructed a projective tree of weight |w| \u2212 1 + 2k + (|K| \u2212 1). We cannot have a higher weighted projective tree by Lemma 1.2. Hence tree is an optimal T * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "(\u21d2) Suppose T * has weight |w| \u2212 1 + 2k + (|K| \u2212 1). By Lemma 1.2, T * has exactly k arcs of weight 2 from the B-region to the A-region, and k from the B-region to the C-region, and that in this case all possible G-induced arcs between the three corresponding gadgets are in T * . Moreover, internally to the B-region, there are |K|\u22121 weight 2 edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Let the gadgets be w.l.o.g., A(1), B(2), and C(3). Each unmarked b symbol in B(2) corresponds to a node in V , and is the head of an unmarked symbol from A(1) corresponding to every node in k-clique k 1 . This means that in G, all possible connections between nodes in k 1 and k 2 exist. The same holds for B(2) with C(3) and C(3) with A(1). Hence there is a 3k-clique in G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "With Theorem 1, we have shown that the nonstructural graph-based neural parsing systems cannot be carrying out explicit exact decoding in with a significantly simpler algorithm. As we show in the next section, in fact, the LSTM stacks of these systems alone are powerful enough to simulate all components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "In our proof, the algorithm consistently makes a choice between edges of weight 1 and edges of weight 2 for the result to preserve projectivity. Possibly more edges of weight 1 may end up in a maximum spanning projective DAG or digraph, so we cannot necessarily use the weight in the same way to deduce the result. The number of edges in D is less than n 2 . Hence if we replace the weights of weight 1 arcs in D by weight 1/(n 2 ), then an output maximum spanning projective digraph or DAG with weight superior to 2k+(|K|\u22121) would indicate a 3k-clique. By the algorithms to do this from (Schluter, 2015) in cubic time, we therefore have the same lower bound for finding a maximum spanning projective DAG or digraph. Corollary 1.1. Under the assumption of ETH, there is no algorithm that carries out projective maximum spanning DAG or digraph decoding in sub-cubic time.",
"cite_spans": [
{
"start": 588,
"end": 604,
"text": "(Schluter, 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lower Bounds for First-Order Projective Dependency Decoding",
"sec_num": "4"
},
{
"text": "Eisner (1996) 's algorithm on an input sentence of length n uses an n \u00d7 n table M and dynamic programming to compute for table cell M i,j the highest weighted sub-trees over the span (i, j) of the input sentence. The algorithm iterates over spans of increasing length. For M i,j , the weights of all possible combinations of sub-spans are considered as candidate sub-trees over the span, and the maximum of these is retained in M i,j .",
"cite_spans": [
{
"start": 7,
"end": 13,
"text": "(1996)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
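For reference, here is a compact sketch of the textbook O(n^3) Eisner recurrences (weight only, no backpointers) over complete and incomplete spans; the code and the toy score matrix are ours and are not the streaming variant developed below.

```python
import numpy as np

def eisner_weight(S):
    """Weight of the best projective tree under arc scores S[h, d]
    (index 0 = artificial root), via the standard O(n^3) Eisner DP.
    Returns the weight only; backtracking is omitted in this sketch."""
    n = S.shape[0]
    # complete / incomplete spans; direction 1 = head on the left, 0 = head on the right
    C = np.zeros((n, n, 2))
    I = np.zeros((n, n, 2))
    for length in range(1, n):
        for i in range(n - length):
            j = i + length
            # incomplete spans: add the arc between the endpoints
            best = max(C[i, k, 1] + C[k + 1, j, 0] for k in range(i, j))
            I[i, j, 0] = best + S[j, i]       # j is the head
            I[i, j, 1] = best + S[i, j]       # i is the head
            # complete spans: absorb a finished sub-tree
            C[i, j, 0] = max(C[i, k, 0] + I[k, j, 0] for k in range(i, j))
            C[i, j, 1] = max(I[i, k, 1] + C[k, j, 1] for k in range(i + 1, j + 1))
    return C[0, n - 1, 1]

S = np.array([[0., 9., 10., 9.],
              [0., 0., 20., 3.],
              [0., 30., 0., 30.],
              [0., 11., 0., 0.]])
print(eisner_weight(S))   # best projective tree weight for this toy matrix
```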
{
"text": "For our purposes, the problem with this version of the algorithm is that the RNN cannot compute the maximum of the corresponding O(n) values in either constant space nor in one time-step, and the corresponding sub-tree weight is required in the computation of maximum sub-trees over the span j \u2212 i + 1 at the next recursive step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "In Algorithm 1, we precompute enough of the comparisons required for finding the maximum spanning sub-tree combination before the algorithm arrives in that table cell (from line 5). Thus, instead of taking the maximum across k \u2208 O(n) values, we only ever take the maximum across 2 values at a time. We now explain this algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "A sub-tree over the span (i, j) is said to be complete if it includes some arc between i and j. Otherwise the sub-tree is called incomplete. We use seven weight matrices (which we extend to 25 matrices later):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "\u2022 S an n \u00d7 n matrix of arc scores, where S[i, j] is the score of the (i, j) arc. \u2022 I an n \u00d7 n matrix of incomplete sub-tree scores,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "where I[i, j, h] is the incomplete sub-tree for the span (i, j) with head h \u2208 0, 1. If h = 0, then i is the root of the sub-tree, and if h = 1, then j is the root. \u2022 C is defined in the same way as I but for complete sub-trees. \u2022 I r [i, j, h] (resp. C r ) stores the current \"row\"maximum value for I[i, j, h] across the span combinations (i, k), (k + 1, j) for k \u2212 i > (k + 1) \u2212 j (resp. (i, k), (k, j) for k \u2212 i > k \u2212 j). These are the cases where the span (i, k) is the largest of the two sub-spans (i, j). These table values are adjusted while the algorithm visits cells (i, k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "\u2022 I c [i, j, h] (resp. C c ) stores the current \"column\"-maximum value for I[i, j, h] across the span combinations (i, k), (k + 1, j) for k \u2212 i \u2264 k + 1 \u2212 j (resp. (i, k), (k, j) for k \u2212 i > k \u2212 j). These are the cases where the span (k + 1, j) (resp. (k, j)) is larger or equal to the other sub-span of the partitioned span (i, j). These table values are adjusted while the algorithm visits cells (k + 1, j) ( resp. (k, j) ). The pseudocode for this algorithm, which we refer to as streaming-max-eisner is presented in Algorithm 1. The main difference with the original version is that the internal loop partitioning of a span is separated in Algorithm 1 over several previous iterations of the loop, so that once the algorithm visits cell (i, j), all that needs to be computed is the maximum of the two row-and column-maximum values, from I r and I c , or from C r and C c .",
"cite_spans": [],
"ref_spans": [
{
"start": 410,
"end": 422,
"text": "resp. (k, j)",
"ref_id": null
}
],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "It is straightforward to show the correctness of this algorithm, which we state as Theorem 2. We omit the proof due to space constraints. The algorithm can also be easily adapted for backtracking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "Theorem 2. Algorithm 1 returns the weight of T * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "We make a final adjustment to the algorithm before stating the simulation construction. For the simulation, we only have RNN operations at our disposal: linear combinations and a ReLU activation function, but no explicit max operation. In order to use only RNN operations, we replace the explicit max function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "Replacing the explicit max function. We note that to find the maximum of the two positive numbers a and b, we can use the ReLU function. Without loss of generality, suppose that a > b, then ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
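The identity reconstructed above, max(a, b) = (1/2)(ReLU(a − b) + ReLU(b − a) + ReLU(a) + ReLU(b)) for non-negative a and b, can be checked directly; the snippet below is only a sanity check of that equation, written by us.

```python
def relu(x):
    return max(x, 0.0)

def relu_max(a, b):
    """max(a, b) for non-negative a, b, written only with additions,
    a constant scaling, and ReLUs, as in Equation 2 above."""
    return 0.5 * (relu(a - b) + relu(b - a) + relu(a) + relu(b))

for a, b in [(3.0, 1.5), (0.0, 2.0), (4.0, 4.0), (0.0, 0.0)]:
    assert relu_max(a, b) == max(a, b)
print("ReLU-based max agrees with max on non-negative inputs")
```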
{
"text": "We therefore make a final adjustment to the original Eisner algorithm, over the version Algorithm 1, replacing all max functions using Equation 2. Instead of storing only one value for each matrix I r , I c , C r , C c , I, C, we store four, denoted by the fields a, b, ab, ba corresponding the four values we need to store: ReLU(a), ReLU(b), ReLU(a \u2212 b), and ReLU(b \u2212 a) respectively. For instance, for the matrix I, we have I a , I b , I ab , I ba . Then, for example, line 6 becomes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "a \u2190 ReLU( 1 2 * (I r [i, j, 0].a+I r [i, j, 0].b +I r [i, j, 0].ab+I r [i, j, 0].ba))(3) b \u2190 ReLU( 1 2 * (I c [i, j, 0].a+I c [i, j, 0].b +I c [i, j, 0].ab+I c [i, j, 0].ba))(4) I[i, j, 0].a \u2190 ReLU(a) I[i, j, 0].b \u2190 ReLU(b) I[i, j, 0].ab \u2190 ReLU(a \u2212 b) I[i, j, 0].ba \u2190 ReLU(b \u2212 a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "where Equations 3 and 4 are wrapped in an extra ReLU operation which yields no difference to the parameter, but which will be convenient for our simulation in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "Lines 11-14 and 16-19 are adapted in the same way. We provide the adaption of line 11 to make this precise: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "a \u2190ReLU( 1 2 * (I[i, p, 1].a + I[i, p, 1].b +I[i, p, 1].ab + I[i, p, 1].ba)) b \u2190ReLU( 1 2 * (C[i, j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "+S[p, i])) I r [i, p, 1].a \u2190 ReLU(a) I r [i, p, 1].b \u2190 ReLU(b) I r [i, p, 1].ab\u2190 ReLU(a \u2212 b) I r [i, p, 1].ba\u2190 ReLU(b \u2212 a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": ". Algorithm 1 therefore uses 25 matrices on input of length n-hence, still O(n 2 ) space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "Simulating Algorithm 1. The projective dependency parsing architecture M to be simulated first sends word embeddings x i , i \u2208 through two unrelated nonlinear dense layers: one for dependents and one for heads. Then all resulting pairs (dependent,head) of word representations are sent through a scoring function to generate a score matrix as input to projective MST decoding (Kiperwasser and Goldberg, 2016) .",
"cite_spans": [
{
"start": 376,
"end": 408,
"text": "(Kiperwasser and Goldberg, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "The architecture M to simulate M consists of two components, each being a recurrent layer: a BiLSTM (for contextual word representations and word specialisations) and an RNN (for scoring and to simulate Algorithm 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "M starts by feeding word embeddings x i into its first component, the BiLSTM. In the forward direction, at the tth time step, the contextual representation \u2212 \u2192 o t is generated, \u2212 \u2192 o t\u22121 is specialised to \u2212 \u2192 o h t\u22121 (head) and \u2212 \u2192 o d t\u22121 (dependent), and the previously specialised word representations in \u2212 \u2192 o t (i.e., corresponding to \u2212 \u2192 o 1 , . . . , \u2212 \u2192 o t\u22122 ) are copied over. We add a single extra (n + 1)th time step to each direction, so M can finish specialising contextualised word representations within this first component. Similarly for the backward direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "There is one single input to M 's second component, an RNN, which also works in n + 1 time steps. We refer to the inputs for this component as z 1 , . . . , z (n+1) , where z 2 . . . z (n+1) are all dummy inputs. z 1 is the concatenation of the final output vectors from each direction of M 's BiLSTM. In the first time step of this component, M computes the score matrix and stores it in the hidden state h 1 . The hidden state has a dimension large enough to house the 25 tables (O(n 2 ) space) required by Algorithm 1 for subtree score bookkeeping and computing the maximum of two values using linear combinations and a ReLU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "The outer loop (the span for-loop with variable t) of the algorithm corresponds to each time-step t of the RNN. For the first internal for-loop (the diagonal for-loop with variable i), we note that, in lines 6-9, no cells (i, i + t) whose values are being computed require information from each other at this time-step t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "The streaming-row and streaming-column for loops (lines 11-14, 16-19) on the other hand sometimes requires maximal values (i, i + t) from lines 6-9 to be computed. This problem is simply solved by replacing the corresponding expressions appearing as left-hand sides in lines 6-9 by the righthand sides.",
"cite_spans": [
{
"start": 49,
"end": 69,
"text": "(lines 11-14, 16-19)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "The output h n+1 contains the desired maximum value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN Simulation of Eisner's Algorithm",
"sec_num": "5"
},
{
"text": "Recent state-of-the art neural graph-based parsers comprising, among other components, a short stack of BiLSTMs, seem to obviate any explicit structural learning or prediction. In this paper, under the assumption of ETH, we showed that this is not due to any possible indirect discovery of a faster algorithm for finding a projective maximum spanning tree and extended the result to projective maximimum spanning DAGs and digraphs. We further showed how these architectures allow for simulating decoding, implying that they are indeed carrying out implicit structured learning and prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "6"
},
{
"text": "Training is on Sections 2-21, development on Section 22 and testing on Section 23), converted to dependency format following the default configuration of the Stanford Dependency Converter (version \u2265 3.5.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The problem is said to be W [1]-complete(Flum and Grohe, 2006)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We gratefully acknowledge the careful remarks of the anonymous NAACL reviewers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "7: I[i, j, 1] \u2190 max(Ir[i, j, 0], Ic[i, j, 1]) 8: C[i, j, 0] \u2190 max(Cr[i, j, 0], Cc[i, j, 0]) 9: C[i, j, 1] \u2190 max(Cr[i, j, 0], Cc[i, j, 1]) 10: for p \u2190 j + 1 up to min(j + t + 1, n) do streaming-row for-loop, i.e., while p \u2212 (j + 1) \u2264 t 11: Ir[i, p, 1] = max",
"authors": [
{
"first": "S",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Ir",
"suffix": ""
},
{
"first": "Cr",
"middle": [],
"last": "Ic",
"suffix": ""
}
],
"year": null,
"venue": "Cc all initialised to 0 matrices 3: for t \u2190 1 to n-1 do span for-loop 4: for i \u2190 1 to n \u2212 t do diagonal for-loop 5: j \u2190 i + t the algorithm is visiting cells (i, j) 6: I[i, j, 0] \u2190 max",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Algorithm 1 Projective MST algorithm computing the maximum over at most 2 arguments. 1: procedure STREAMING-MAX-EISNER 2: S, I, C, Ir, Ic, Cr, Cc all initialised to 0 matrices 3: for t \u2190 1 to n-1 do span for-loop 4: for i \u2190 1 to n \u2212 t do diagonal for-loop 5: j \u2190 i + t the algorithm is visiting cells (i, j) 6: I[i, j, 0] \u2190 max(Ir[i, j, 0], Ic[i, j, 0]) 7: I[i, j, 1] \u2190 max(Ir[i, j, 0], Ic[i, j, 1]) 8: C[i, j, 0] \u2190 max(Cr[i, j, 0], Cc[i, j, 0]) 9: C[i, j, 1] \u2190 max(Cr[i, j, 0], Cc[i, j, 1]) 10: for p \u2190 j + 1 up to min(j + t + 1, n) do streaming-row for-loop, i.e., while p \u2212 (j + 1) \u2264 t 11: Ir[i, p, 1] = max(I[i, p, 1], C[i, j, 0] + C[j + 1, p, 1] + S[p, i]) 12: Ir[i, p, 0] = max(I[i, p, 0], C[i, j, 0] + C[j + 1, p, 1] + S[i, p]) 13: Cr[i, p, 1] = max(C[i, p, 1], C[i, j, 1] + I[j + 1, p, 1]) 14: Cr[i, p, 0] = max(C[i, p, 0], I[i, j, 0] + C[j + 1, p, 0]) 15: for p \u2190 i \u2212 1 down to max(i \u2212 1 \u2212 t, 1) do streaming-column for-loop, i.e., while (i \u2212 1) \u2212 p \u2264 t 16: Ic[p, j, 1] = max(I[p, j, 1], C[p, i \u2212 1, 0] + C[i \u2212 1, j, 1] + S[p, i]) 17: Ic[i, p, 0] = max(I[i, p, 0], C[i, j, 0] + C[j + 1, p, 1] + S[i, p]) 18: Cc[i, p, 1] = max(C[i, p, 1], C[i, j, 1] + I[j + 1, p, 1]) 19: Cc[i, p, 0] = max(C[i, p, 0], I[i, j, 0] + C[j + 1, p, 0]) 20: Return I, C References Amir Abboud, Arturs Backurs, and Vanessa Vas- silevska Williams. 2015. If the current clique algo- rithms are optimal, so is Valiant's parser. In Pro- ceedings of FOCS.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Strong computational lower bounds via parameterized complexity",
"authors": [
{
"first": "Jianer",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiuzhen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Iyad",
"middle": [
"A"
],
"last": "Kanj",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Computer and System Sciences",
"volume": "8",
"issue": "",
"pages": "1346--1367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianer Chen, Xiuzhen Huang, Iyad A. Kanj, and Ge Xia. 2006. Strong computational lower bounds via parameterized complexity. Journal of Computer and System Sciences, 8:1346-1367.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Recurrent neural networks as weighted language recognizers",
"authors": [
{
"first": "Yining",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sorcha",
"middle": [],
"last": "Gilroy",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Maletti",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2261--2271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yining Chen, Sorcha Gilroy, Andreas Maletti, Jonathan May, and Kevin Knight. 2018. Recurrent neu- ral networks as weighted language recognizers. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2261-2271, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bi-directional attention with agreement for dependency parsing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2204--2214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. 2016. Bi-directional attention with agreement for dependency parsing. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2204-2214, Austin, Texas. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher M. Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In Proceedings of ICLR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Three new probabilistic models for dependency parsing: An exploration",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "340--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Pro- ceedings of the 16th Conference on Computational Linguistics -Volume 1, COLING '96, pages 340- 345, Stroudsburg, PA, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Parameterized Complexity Theory",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Flum",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Grohe",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Flum and Martin Grohe. 2006. Parameterized Complexity Theory. Springer-Verlag.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A joint many-task model: Growing a neural network for multiple nlp tasks",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1923--1933",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, caiming xiong, Yoshimasa Tsu- ruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 1923-1933, Copenhagen, Denmark. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735- 1780.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The complexity of k-SAT",
"authors": [
{
"first": "Russell",
"middle": [],
"last": "Impagliazzo",
"suffix": ""
},
{
"first": "Ramamohan",
"middle": [],
"last": "Paturi",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 14th IEEE Conference on Computational Complexity",
"volume": "",
"issue": "",
"pages": "237--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Russell Impagliazzo and Ramamohan Paturi. 1999. The complexity of k-SAT. In Proceedings of the 14th IEEE Conference on Computational Complex- ity, pages 237-240.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Simple and accurate dependency parsing using bidirectional lstm feature representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the ACL",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional lstm feature representations. Transactions of the ACL, 4:313-327.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A simple way to initialize recurrent networks of rectified linear units",
"authors": [
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hinton. 2015. A simple way to initialize recurrent networks of rectified linear units. CoRR, abs/1504.00941.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Fast context-free parsing requires fast boolean matrix multiplication",
"authors": [
{
"first": "Lillian",
"middle": [
"Lee"
],
"last": "",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lillian Lee. 1997. Fast context-free parsing requires fast boolean matrix multiplication. In Proceed- ings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 9-15, Madrid, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Clothiaux",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Saphra",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.03980"
]
},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopou- los, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Ku- mar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Rational recurrences",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1203--1214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Rational recurrences. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1203-1214, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning representations by backpropagating errors",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1986,
"venue": "Nature",
"volume": "323",
"issue": "",
"pages": "533--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back- propagating errors. Nature, 323:533-536.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Tree-adjoining grammar parsing and Boolean matrix multiplication",
"authors": [
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "",
"pages": "173--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giorgio Satta. 1994. Tree-adjoining grammar parsing and Boolean matrix multiplication. Computational Linguistics, 20:173-191.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The complexity of finding the maximum spanning DAG and other restrictions for DAG parsing of natural language",
"authors": [
{
"first": "Natalie",
"middle": [],
"last": "Schluter",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "259--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalie Schluter. 2015. The complexity of finding the maximum spanning DAG and other restrictions for DAG parsing of natural language. In Proceedings of the Fourth Joint Conference on Lexical and Com- putational Semantics, pages 259-268, Denver, Col- orado. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Recurrent neural networks and finite automata",
"authors": [
{
"first": "Hava",
"middle": [
"T"
],
"last": "Siegelmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Intelligence",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hava T. Siegelmann. 1996. Recurrent neural networks and finite automata. Computational Intelligence, 12:567574.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Uniparse: A universal graph-based parsing toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Varab",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "Schluter",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Varab and Natalie Schluter. 2018. Uniparse: A universal graph-based parsing toolkit. CoRR, abs/1807.04053.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On the practical computational power of finite precision rnns for language recognition",
"authors": [
{
"first": "Gail",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Eran",
"middle": [],
"last": "Yahav",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "740--745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite pre- cision rnns for language recognition. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 740-745, Melbourne, Australia. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dependency parsing as head selection",
"authors": [
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "665--676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, pages 665-676, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(given infinite precision) and Weiss et al. (2018) (given finite precision), and empirically investigated for practical considerations of convergence under training by Le et al. (2015)."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "There are no other arcs in the input digraph D.Lemma 1.1. There always exists a projective MST in D of weight |w| \u2212 1."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "ReLU(a \u2212 b) + ReLU(b \u2212 a) + a + b) a \u2212 b) + ReLU(b \u2212 a) + ReLU(a) + ReLU(b))."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": ", 0].a + C[i, j, 0].b +C[i, j, 0].ab + C[i, j, 0].ba +C[j + 1, p, 1].a + C[j + 1, p, 1].b +C[j + 1, p, 1].ab + C[j + 1, p, 1].ba"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "[n] through a forward (and backward) LSTM with output word representations \u2212 \u2192 o i (and \u2190 \u2212 o i ) of dimension d. The concatenated result [ \u2212 \u2192 o i ; \u2190 \u2212 o i ] is further specialised"
}
}
}
}