{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:57:50.300169Z" }, "title": "Structural Realization with GGNNs", "authors": [ { "first": "Jinman", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toronto", "location": {} }, "email": "jzhao@cs.toronto.edu" }, { "first": "Gerald", "middle": [], "last": "Penn", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toronto", "location": {} }, "email": "gpenn@cs.toronto.edu" }, { "first": "Huan", "middle": [], "last": "Ling", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toronto", "location": {} }, "email": "linghuan@cs.toronto.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we define an abstract task called structural realization that generates words given a prefix of words and a partial representation of a parse tree. We also present a method for solving instances of this task using a Gated Graph Neural Network (GGNN). We evaluate it with standard accuracy measures, as well as with respect to perplexity, in which its comparison to previous work on language modelling serves to quantify the information added to a lexical selection task by the presence of syntactic knowledge. That the addition of parsetree-internal nodes to this neural model should improve the model, with respect both to accuracy and to more conventional measures such as perplexity, may seem unsurprising, but previous attempts have not met with nearly as much success. We have also learned that transverse links through the parse tree compromise the model's accuracy at generating adjectival and nominal parts of speech.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we define an abstract task called structural realization that generates words given a prefix of words and a partial representation of a parse tree. We also present a method for solving instances of this task using a Gated Graph Neural Network (GGNN). We evaluate it with standard accuracy measures, as well as with respect to perplexity, in which its comparison to previous work on language modelling serves to quantify the information added to a lexical selection task by the presence of syntactic knowledge. That the addition of parsetree-internal nodes to this neural model should improve the model, with respect both to accuracy and to more conventional measures such as perplexity, may seem unsurprising, but previous attempts have not met with nearly as much success. We have also learned that transverse links through the parse tree compromise the model's accuracy at generating adjectival and nominal parts of speech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We conjecture that this may be an opportune time to reassess the extent to which syntax is capable of contributing to a word prediction task. Structured realization is a generalization of language modelling in which we receive n \u2212 j words as input, together with a syntactic structure that has a yield of n word positions and spans the input, plus an \"overhang\" of j unrealized word positions. Our task is to fill in the most likely missing j words. Language modelling generally possesses only the trivial annotation that consists of the words themselves and has historically assumed that j = 1, constituting an n-gram. 
Notable exceptions date back to the work of Chelba (2000) on structured language modelling, in which the syntactic annotation is partial, in that there is no overhang (j = 0), but structurally non-trivial, although often sparing relative to corpora that parsers are trained upon. 1 The most thorough exploration of this direction is probably that of K\u00f6hn and Baumann (2016) , who equip a variety of language models with a pretrained dependency parser, which they use to predict the part of speech (POS) of the next word and some overarching syntactic structure, and then predict the next word from its POS plus an n-gram word history. They report a roughly 6% perplexity reduction across the different models.", "cite_spans": [ { "start": 664, "end": 677, "text": "Chelba (2000)", "ref_id": "BIBREF6" }, { "start": 900, "end": 901, "text": "1", "ref_id": null }, { "start": 970, "end": 993, "text": "K\u00f6hn and Baumann (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the specific case where a complete, spanning, syntactic representation is provided, but the model is evaluated solely from a zero-prefix initialization (i.e., n = j), this generalization can be viewed as a simple purely syntactic surface-realization problem, as one would find in a generation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With no fanfare whatsoever in CL circles, the machine learning community proposed an evaluation task seven years ago called \"MadLibs\" Kiros et al. (2014) . In our terminology, the syntactic annotation provided is merely n \u2212 j words followed by a string of j POS tags. While it may be difficult to imagine that someone would be in possession of this POS information without also knowing how the POS tags connected together, the authors were interested in testing a new multiplicative neural language model, in which attributes (such as POS tags) can be attached to input words.", "cite_spans": [ { "start": 134, "end": 153, "text": "Kiros et al. (2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In a neural setting, parse trees can be encoded with a generalization of recurrent neural networks (RNNs) called Graph Neural Networks (GNNs). GNNs have been used as encoders to deal with a variety of different NLP problems (see related work section later). Gated GNNs (GGNNs) are an improvement over GNNs that is analogous to that of GRUs over RNNs. They train faster, and they address problems with vanishing gradients.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We shall compare two modes of our model here using GGNN-encoded parse trees: one with parse trees from OntoNotes 5.0 (Hovy et al., 2006; Weischedel et al., 2013) , and one with vestigial transitions between pre-terminal categories in sequence, which resembles the syntactic annotation selected by Kiros et al. (2014) , although here the word prefix is also POS-annotated. We also test the combination of the two: a syntactic tree augmented by a linear pipeline of transitions between pre-terminals. 
We compute sentence-level accuracy by measuring how many words in the generated strings legitimately belong to their assigned POS categories, and compute word-level accuracy scores in three ways: accuracy at choosing a word of the appropriate part of speech (this time with the prefix of words corrected to what the corpus says, as necessary), rank of the corpus sentences by data likelihood, and word-guessing accuracy, relative to what appears in the corpus.", "cite_spans": [ { "start": 117, "end": 136, "text": "(Hovy et al., 2006;", "ref_id": "BIBREF11" }, { "start": 137, "end": 161, "text": "Weischedel et al., 2013)", "ref_id": "BIBREF36" }, { "start": 297, "end": 316, "text": "Kiros et al. (2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we exploit a Gated Graph Neural Network (GGNN) (Li et al., 2016) as a parse tree encoder. GNN encoders have been shown to be efficient for neural machine translation (Beck et al., 2018; Bastings et al., 2017) whereas in our case, we focus on structured realization. GGNNs define a propagation model that extends RNNs to arbitrary graphs and learn propagation rules between nodes. We aim to encode syntactic trees by propagating category labels throughout the tree's structure.", "cite_spans": [ { "start": 62, "end": 79, "text": "(Li et al., 2016)", "ref_id": "BIBREF20" }, { "start": 181, "end": 200, "text": "(Beck et al., 2018;", "ref_id": "BIBREF2" }, { "start": 201, "end": 223, "text": "Bastings et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "For completeness, we briefly summarize the GGNN model (Li et al., 2016) . A GGNN uses a directed graph {V, E} where V and E are the sets of nodes and edges. We represent the initial state of a node v as s v and the hidden state of node v at propagation time step t as h t v . The adjacency matrix A \u2208 R |V |\u00d7N |V | determines how the nodes in the graph propagate information to each other, where N represents the number of different edge types. Figure 1 is the visual representation of a GGNN; it starts with h 0 v = s v , then follows a propagation model which unrolls T steps and generates h T v at the end. 
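Written over the whole node matrix at once, one such propagation step has roughly the following shape (a minimal NumPy sketch of our reading of the update rule in Eq. 1 below; the function names and the per-edge-type summation are ours, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(H, A_list, params):
    """One propagation step over all nodes at once (a sketch of our reading of
    Eq. 1, not the authors' code).  H is |V| x D, one row per tree node;
    A_list holds one |V| x |V| adjacency matrix per edge type; params holds
    the shared trainable weights b, W, U, W_r, U_r, W_z, U_z."""
    b, W, U, Wr, Ur, Wz, Uz = (params[k] for k in
                               ("b", "W", "U", "W_r", "U_r", "W_z", "U_z"))
    a = sum(A @ H for A in A_list) + b            # neighbourhood aggregation a_v
    r = sigmoid(a @ Wr.T + H @ Ur.T)              # reset gate r_v
    z = sigmoid(a @ Wz.T + H @ Uz.T)              # update gate z_v
    h_tilde = np.tanh(a @ W.T + (r * H) @ U.T)    # candidate state
    return (1.0 - z) * H + z * h_tilde            # gated interpolation giving h_v^t

# Unrolled for T steps:  for _ in range(T): H = ggnn_step(H, A_list, params)
```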
Each unroll step follows the same", "cite_spans": [ { "start": 54, "end": 71, "text": "(Li et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 445, "end": 453, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Gated Graph Neural Networks", "sec_num": "2.1" }, { "text": "rule to compute h t v from h (t\u22121) v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gated Graph Neural Networks", "sec_num": "2.1" }, { "text": "and A:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gated Graph Neural Networks", "sec_num": "2.1" }, { "text": "a t v = A v [h t\u22121 1 , ..., h t\u22121 |V | ] + b r t v = \u03c3(W r a t v + U r h (t\u22121) v ) z t v = \u03c3(W z a t v + U z h (t\u22121) v ) h t v = tanh(W a t v + U (r t v h (t\u22121) v )) h t v = (1 \u2212 z t v ) h (t\u22121) v + z t v h t v .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gated Graph Neural Networks", "sec_num": "2.1" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gated Graph Neural Networks", "sec_num": "2.1" }, { "text": "b, W, W r , W z , U, U r , U z above are trainable pa- rameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gated Graph Neural Networks", "sec_num": "2.1" }, { "text": "After information is propagated for T time steps, each node's hidden state collectively represents a message about itself and its neighbourhood, which is then passed to its neighbours. Finally there is the output model. For example, Acuna et al. (2018) implemented their output model by:", "cite_spans": [ { "start": 233, "end": 252, "text": "Acuna et al. (2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Gated Graph Neural Networks", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h v = tanh(F C 1 (h T v )) out v = F C 2 (h v )", "eq_num": "(2)" } ], "section": "Gated Graph Neural Networks", "sec_num": "2.1" }, { "text": "where F C 1 and F C 2 are two fully connected layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gated Graph Neural Networks", "sec_num": "2.1" }, { "text": "In this part, we will describe how we use GGNNs and parse trees to build our three experimental models. Figure 2 depicts example trees for these models.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 112, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Gated Graph Neural Network Models", "sec_num": "2.2" }, { "text": "Since we are using GGNNs, we first need to construct the graph by giving the parse tree. We build three different models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Tree Construction", "sec_num": "2.2.1" }, { "text": "Model 1: For a given parse tree, let N be the number of nodes in the parse tree. Then the adjacency matrix of the tree, denoted as A, is an N \u00d7 2N matrix, concatenating two N \u00d7 N matrices. A[:N,:] is the forward adjacency matrix of the tree and A[N:,:] is the backward adjacency matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Tree Construction", "sec_num": "2.2.1" }, { "text": "Model 2: The input does not consider interior parse tree nodes, but instead works more like a conventional language model. 
For each parse tree, and given a sequence of words (w 1 , w 2 , ..., w n\u22121 ), we retain all and only the pre-terminal parse tree nodes, and then attempt to predict the next word w n . This is the model of Kiros et al. (2014) . Note that, while it is essentially a language model, the nodes of this Model are a subset of the nodes of Model 1, although the edges are completely different, encoding only transitions between the pre-terminals (1) (2)", "cite_spans": [ { "start": 328, "end": 347, "text": "Kiros et al. (2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Input Tree Construction", "sec_num": "2.2.1" }, { "text": "Figure 2: Example of an input parse tree with a given input word prefix (w 1 = Germany) and a completion consisting of POS/pre-terminal categories. The number near each node represents its index within the adjacency matrix of the tree (Figure 3 ). Red arrows are forward edges, green dashed arrows are backward edges and black arrows represent a transverse edge between pre-terminals. The partial trees each contain 1 terminal(\"Germany\"), 4 pre-terminals (\"NP\",\"MD\",\"VB\",\".\") and, in Models (1) and (3), 3 other interior categories (\"S\",\"N\",\"V\"). We want to predict the word after Germany, which will be the child of pre-terminal \"MD\". The input of Model 1 considers tree nodes and forward/backward edges, but not transverse pre-terminal edges. The input of Model 2 does not include other parts of the tree except pre-terminals and the given terminals, yet it contains all three kinds of edges. The input of Model 3 which contains all tree nodes and all edges.", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 244, "text": "(Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Input Tree Construction", "sec_num": "2.2.1" }, { "text": "in sequence. This time, the adjacency matrix A is N \u00d7 3N which is a concatenation of three N \u00d7 N matrices: A f orward , A backward and A pre\u2212terminal .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Tree Construction", "sec_num": "2.2.1" }, { "text": "Model 3: This one is the combination of the above two. The number of nodes is the same as for Model 1. The adjacency matrix is the addition of each respective pair of A f orward , A backward and A pre\u2212terminal , concatenated together. Figure 2 depicts an example for each model. By comparing the results for different models later, we will understand how essential inner nodes and edges between pre-terminals are for word prediction. Also note that Model 1 and Model 3 have the same number of nodes, but the number of nodes in Model 2 is smaller. Nevertheless, in all three models, the input may contain a prefix of n \u2212 j words. As mentioned above, when this prefix is zero-length, we have three classical surface realization models. 
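For concreteness, the concatenated adjacency matrices of the three models might be assembled along the following lines (a sketch consistent with the description above; the edge-list interface and helper name are our own, not the paper's code):

```python
import numpy as np

def build_adjacency(num_nodes, tree_edges, preterminal_order, use_chain=True):
    """Assemble the concatenated adjacency blocks of Section 2.2.1 (a sketch;
    the edge-list interface is ours).  tree_edges is a list of
    (parent, child) node-index pairs from the parse tree, and
    preterminal_order lists the pre-terminal node indices left to right."""
    fwd = np.zeros((num_nodes, num_nodes))
    bwd = np.zeros((num_nodes, num_nodes))
    chain = np.zeros((num_nodes, num_nodes))
    for parent, child in tree_edges:
        fwd[parent, child] = 1.0          # forward (parent -> child) edge
        bwd[child, parent] = 1.0          # backward (child -> parent) edge
    for left, right in zip(preterminal_order, preterminal_order[1:]):
        chain[left, right] = 1.0          # transverse edge between adjacent pre-terminals
    blocks = [fwd, bwd] + ([chain] if use_chain else [])
    return np.concatenate(blocks, axis=1)  # N x 2N (Model 1) or N x 3N (Models 2 and 3)

# Model 1: all tree nodes as input, use_chain=False.
# Model 2: only pre-terminals and the given words as nodes, use_chain=True.
# Model 3: all tree nodes, use_chain=True.
```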
But we can also view all three models as generalizations of language models, in which:", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 243, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Input Tree Construction", "sec_num": "2.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (W ) = P (w 1 w 2 ...w n ) = n i=1 P (w i |tree i\u22121 )", "eq_num": "(3)" } ], "section": "Input Tree Construction", "sec_num": "2.2.1" }, { "text": "and tree i\u22121 is the parse tree with the 1 th , 2 th ...(i \u2212 1) th word tokens in place.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Tree Construction", "sec_num": "2.2.1" }, { "text": "Once we have constructed the graph, we need to construct input for the model. Let D = 100 be the dimension of a set of word-embedding vectors over a fixed lexicon. The input to each model is an N \u00d7 D matrix. All three types of nodes need to be represented in same-dimensional vectors: 1) terminals, i.e. words, 2) pre-terminals (nodes that only appear as a parent of a leaf), and 3) interior node tags (nodes that are neither leaves nor preterminals). We then need to normalize all vectors. We associate each terminal (word) with its GloVe (Pennington et al., 2014) pre-trained word vector (trained from Wikipedia 2014 + Gigaword 5, containing a 400K-word vocabulary).", "cite_spans": [ { "start": 540, "end": 565, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Terminal, Pre-terminal and Interior Tags", "sec_num": "2.2.2" }, { "text": "For pre-terminals, we gather sequences (e.g., \"NP MD VB .\" in Figure 1 ) for each input sentence, prepare a corpus consisting only of these tags, and train embedding vectors directly on the POS tags by using the GloVe algorithms (Pennington et al., 2014) . We then associate each pre-terminal with its corresponding vector.", "cite_spans": [ { "start": 229, "end": 254, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Terminal, Pre-terminal and Interior Tags", "sec_num": "2.2.2" }, { "text": "The number of interior tags is larger than D, however, so one-hot is not appropriate in this case. For each interior node, we randomly generate a D-dimensional vector, sampling each entry of the vector from a standard Gaussian distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Terminal, Pre-terminal and Interior Tags", "sec_num": "2.2.2" }, { "text": "After the input presentation, the propagation step and a fully connected layer, the model will generate an N \u00d7 D output matrix. In other words, all N nodes in the parse tree will have D-dimensional output. In language modelling mode, we would not care about any output except the one generated by the pre-terminal dominating the position of w n\u2212j+1 . Letv denote this normalized Ddimensional output. 
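In code, the selection rule made precise in Eq. 4 just below amounts to a scaled softmax over the pre-trained lexicon (a minimal sketch; the helper name and the numerical-stability shift are ours):

```python
import numpy as np

def predict_next_word(v_hat, glove, c=6.0):
    """Turn the normalized output vector v_hat into a distribution over the
    pre-trained lexicon, as in Eq. 4 (a sketch; glove is a V x D matrix of
    L2-normalized word vectors, and c is the scaling constant tuned later in
    this section)."""
    logits = (c ** 2) * (glove @ v_hat)            # c^2 * (v_hat . v_i) for every word i
    logits -= logits.max()                         # shift for numerical stability only
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the whole lexicon
    return int(np.argmax(probs)), probs            # argmax = nearest normalized word vector
```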
The probability of w n\u2212j+1 given the tree, P (w n\u2212j+1 = i|tree n\u2212j ) =:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predict Words", "sec_num": "2.2.3" }, { "text": "exp(c 2 \u00d7 (v T \u2022 v i )) V j=0 exp(c 2 \u00d7 (v T \u2022 v j ))) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predict Words", "sec_num": "2.2.3" }, { "text": "where V is the size of the pre-trained lexicon, v i and v j are vector representations for the ith and jth word types. We choose i with maximum conditional probability. This is equivalent to choosing the i for which v i is the closest word vectorv.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predict Words", "sec_num": "2.2.3" }, { "text": "When c = 1 and tree n\u2212j consists only of the sequence of input words (w 1 , w 2 , ..., w n\u2212j ), Eq 4 would correspond to a standard language model. The interval [e \u22121 , e 1 ] is too small as the range of the numerator to distinguish between good predictions and bad predictions. So instead of only normalizing them, we also multiply by a constant c. Thus the range of the numerator becomes [e \u2212c 2 , e c 2 ]. We tuned c manually from 1 to 15 based on model 1. Figure 4 shows that c = 6 is the best, as it has the lowest cross entropy compared with other values. We will assume c = 6 in Section 3. ", "cite_spans": [], "ref_spans": [ { "start": 460, "end": 468, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Predict Words", "sec_num": "2.2.3" }, { "text": "We train and test all models on OntoNotes 5.0, which contains 110,000+ English sentences from print publications. We also train and evaluate the perplexity of all models on the Penn Treebank (Marcus et al., 1993) , as this has become a standard among syntax-driven language models. PTB $2-21 are used as training data, $24 is for validation, and $23 is used for testing.", "cite_spans": [ { "start": 191, "end": 212, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "We excluded those trees/sentences with words that are not in GloVe's (Pennington et al., 2014) pre-trained vocabulary from both the training and validation data. The test set and validation set were excluded from our development cycle. Dataset statistics are provided in Table 1 .", "cite_spans": [ { "start": 69, "end": 94, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 271, "end": 278, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "We used 100-dimensional pre-trained GloVe vectors to represent different kinds of leaves in the tree. As the loss function, we use cross entropy loss which is calculated based on Equation 4. For each complete parse tree in the training set, let n be the number of leaves/terminals in this tree. So this tree has n different possible prefixes of known words(w 1 ...w i , 0 \u2264 i \u2264 n \u2212 1), each with a parse tree as training input. In addition, since the number of nodes in different graphs is distinct, we use stochastic gradient descent with a learning rate of 0.01 to train the model (i.e. 
batch size = 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "3.2" }, { "text": "Perplexity relates to cross-entropy loss:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P P L = e \u2212 1 N i p(x) log y(x)", "eq_num": "(5)" } ], "section": "Training Details", "sec_num": "3.2" }, { "text": "This is corpus-level perplexity, where x is an arbitrary word and p(x) is the function we discussed in Eq 4 and N is the number of predictions. The magnitude c = 6 also has lowest perplexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "3.2" }, { "text": "A simple way to evaluate the accuracy of the models as implementations of the structured realization task is to consider their sentence output in terms of POS accuracy. If we simply remove the yields of corpus trees and attempt to regenerate them from the trees, the resulting strings will often differ from the original yields, but they may still be grammatical in the sense of the first j tokens having the appropriate POS tag sequence. Table 2 shows the sentence-level word and POS accuracies on the OntoNotes test set. Both OntoNotes and PTB provide gold-standard (human labeled) syntactic constituency parse trees. We trained our model on these trees. These trees are expensive, however, so we also evaluated on trees obtained from the Berkeley neural parser (Kitaev and Klein, 2018) a state-of-the-art constituency parser with an F 1 = 95 score on the PTB.", "cite_spans": [], "ref_spans": [ { "start": 439, "end": 446, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Realization Accuracy", "sec_num": "3.3" }, { "text": "Some trees have the same unlabelled tree structure, although they may have different nodes. We can randomly pick two such isomorphic constituency trees T 1 and T 2 , delete their leaves then linearly interpolate between their corresponding nodes and generate. For an arbitrary node of the i th intermediate tree, the vector representation would be:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuity of Latent Spaces", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "node = (1 \u2212 \u03bb) \u00d7 T 1 (node) + \u03bb \u00d7 T 2 (node),", "eq_num": "(6)" } ], "section": "Continuity of Latent Spaces", "sec_num": "3.4" }, { "text": "for some value of \u03bb \u2208 [0, 1]. Table 3 demonstrates sentences generated from trees for various values of \u03bb. This kind of \"semantic continuity\" has been demonstrated before on vector encodings, but, to our knowledge, not on structured spaces such as parallel trees of vectors.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Continuity of Latent Spaces", "sec_num": "3.4" }, { "text": "Perplexity is perhaps the most common evaluation measure in the language modelling literature. The formula of perplexity was shown in Eq 5. We trained and evaluated our Models on the different datasets listed in Table 1 . The perplexities of the test data sets are listed in Table 4 . RNNG (Dyer et al., 2016 ) is a state-of-the-art syntax-aware model. 
LSTM-256 LM is our self implemented language model using 2-layer LSTM cell with sequence length 20 and hidden state size 256. Our three models have lower perplexities across the board compared with RNNG on both OntoNotes and PTB. Model 3 on gold parse trees has the lowest perplexity overall, although it is important to remember that our models benefit from distributions from Wikipedia that are implicitly encoded in the GloVe vectors. LSTMs that use GloVe perform worse than the LSTMs with trainable word embeddings shown here. 2 In addition, for comparion, we trained our models on PTB $2-21 excluding those trees that contain words that are not in GloVe, but tested on the entire PTB $23 with gold syntactic constituency parsing trees. For those words not in GloVe, we followed the method in RNNG (Dyer et al., 2016) . First, we replace them by (e.g. ,). Then, for each UNK to-2 Kruskal-Wallis and post-hoc Mann-Whitney tests with Bonferroni correction reveal that M1-3 with benepar trees are statistically significantly different (p < 10 \u221210 ) from RNNG at the sentence level (H=56.84 PTB; 65.54 OntoNotes), and from LSTM at the word level (H=1485.94 PTB; 2561.44 OntoNotes), on both corpora, except that there was no significant difference found between M1 and RNNG with OntoNotes. All effect sizes were small (df=3, V=0.05). With OntoNotes, no significance was found between M1 and M2; with PTB, none was found between M2 and M3. 3: Sentences generated between two random trees that have the same unlabelled tree structure. ken, we use the average of the vector representations of words labelled as XXX in the training set to obtain the vector representation of this token. The perplexity of the entire PTB $23 is listed in Table 5. RNNG, SO-RNNG, GA-RNNG and NVLM are all syntax-aware models. Our Models achieve very good perplexity. Note that while Transformer-XL does perform better, it uses roughly 2.4 \u00d7 10 7 parameters whereas ours uses 9 \u00d7 10 5 . We have a larger vocabulary size because we retain words that appear in GloVe regardless of frequency. Larger vocabulary sizes generally increase perplexity. (Kneser and Ney, 1995) 169.3 LSTM-128 (Zaremba et al., 2014) 113.4 GRU-256", "cite_spans": [ { "start": 290, "end": 308, "text": "(Dyer et al., 2016", "ref_id": "BIBREF9" }, { "start": 884, "end": 885, "text": "2", "ref_id": null }, { "start": 1155, "end": 1174, "text": "(Dyer et al., 2016)", "ref_id": "BIBREF9" }, { "start": 2501, "end": 2523, "text": "(Kneser and Ney, 1995)", "ref_id": "BIBREF16" }, { "start": 2539, "end": 2561, "text": "(Zaremba et al., 2014)", "ref_id": null } ], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 275, "end": 282, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Perplexity", "sec_num": "3.5" }, { "text": "112.3 RNNG (Dyer et al., 2016) 102.4 SO-RNNG (Kuncoro et al., 2017) 101.2 GA-RNNG (Kuncoro et al., 2017) 100.9 NVLM (Zhang and Song, 2019) 91. RNNG, SO-RNNG, GA-RNNG and NVLM use the same method to preprocess data, keeping only vocabulary that appear more than once in the training set. For hapaxes in the training set and words in the validation/test sets that occur once in the training set, they replace them with tokens. Their models only contain 24 000 word types, whereas ours contain 31 000. 
In some other language modelling settings, the vocabulary size can be as small as 10 000.", "cite_spans": [ { "start": 11, "end": 30, "text": "(Dyer et al., 2016)", "ref_id": "BIBREF9" }, { "start": 45, "end": 67, "text": "(Kuncoro et al., 2017)", "ref_id": "BIBREF18" }, { "start": 82, "end": 104, "text": "(Kuncoro et al., 2017)", "ref_id": "BIBREF18" }, { "start": 116, "end": 138, "text": "(Zhang and Song, 2019)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "Given a parse along with the prefix w 1 , ...w n\u2212j , we can remove the leaves (w n\u2212j+1 , w n+1 , ..., w n ) from the parse tree, and predict w n\u2212j+1 , where 1 \u2264 j \u2264 n. Thus, for a tree with n word positions, we can perform word prediction up to n times. Unlike the structured realization accuracies above, conventional practice in language modelling evaluation is to restore the integrity of w n\u2212j according to the corpus before predicting w n\u2212j+1 when the previous prediction step was unsuccessful. Word accuracies according to this regimen are given in Table 8 , along with accuracy at predicting any word with the required part of speech. To better evaluate the results, we also compute the rank for each predicted word. Let v be the vector representation of the true w n\u2212j+1 andv denote the output vector as discussed earlier. For each vector representation of a word in the pre-trained GloVe vocabulary set, compute the Euclidean distance between it andv. Rank r means ||v \u2212v|| is the r th smallest distance in comparison to the other words in the vocabulary set. If the rank is small, then the model is capable of finding a close prediction. Small rank also means the model is able to learn the relation between the next word and the given partial parse tree. Table 6 and Table 7 show the overall, median, and mean rank distributions of the different models, compared to LSTM-256 within the ranges 10 \u03c6 to 10 \u03c6+1 , 0 \u2264 \u03c6 \u2264 4. Most of the ranks are \u2264 10 and the median ranks for all models are less than 5. Our GGNN based models have more predictions that rank less than or equal to 10 compared with LSTM-256. Model 1 and Model 2 have similar ranks; Model 3's are slightly better. Model 3 has the lowest median rank. Although LSTM-256 has the lowest mean rank, LSTM-256's vocabulary size is much smaller than our GGNN based models.'", "cite_spans": [], "ref_spans": [ { "start": 555, "end": 562, "text": "Table 8", "ref_id": "TABREF12" }, { "start": 1266, "end": 1273, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 1278, "end": 1285, "text": "Table 7", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Word-prediction Accuracy and Rank", "sec_num": "3.6" }, { "text": "Sometimes a model has an output vector located very far from the vector representation of the true word (i.e. its rank is very large), but the predicted word can at least be assigned the correct pre-terminal POS. This means the prediction is in some sense correct, because it is more likely to be grammatically and semantically acceptable. 
For example, given a sequence \"within three days she had,\" and a gold-standard next word of \"worked,\" with parent \"VBN,\" \"turned\" could be a good prediction even though it is far from \"worked\", because \"turned\" also belongs to \"VBN.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating words of a specific POS", "sec_num": "3.7" }, { "text": "Since we train terminals and pre-terminals separately, there is no prior connection defined between them. For example, given a tag \"NN,\" we do not know which words belong to \"NN\" when training the vectors for the words, or when choosing the vector for \"NN.\" So this is a learned ability. Let us denote the true i th word as t and the predicted i th word as p. To evaluate this capability, every time the model predicts a word p, we count it as a correct prediction if: (1) p occurs somewhere in the training data, dominated by a category c, and (2) c also dominates this occurrence of t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating words of a specific POS", "sec_num": "3.7" }, { "text": "In Table 8 , we present this accuracy rate in the second column for each of the different models. On the OntoNotes test data, Models 1 and 3 have higher rates than Model 2, while Model 2 has the highest POS accuracy on the PTB test data. Alongside this, we also compute the overall accuracy of selecting the correct word (i.e., when the true word has rank 1), as well as the macro-averaged and macro-median accuracy of selecting the correct word, broken down by the pre-terminal dominating the position to be predicted.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 8", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Generating words of a specific POS", "sec_num": "3.7" }, { "text": "All three models have high POS accuracies in general (medians: 99.90, 99.93 and 99.7, respectively), but Models 2 and 3 have very bad accuracies for some POSs such as 'NN ' (60.68-67.45) , and ).", "cite_spans": [ { "start": 171, "end": 186, "text": "' (60.68-67.45)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Generating words of a specific POS", "sec_num": "3.7" }, { "text": "Graph Neural Networks as Graph Encoders GNNs were first proposed by Scarselli et al. (2009) . Li et al. (2016) added gating mechanisms for recurrent networks on graphs. In parallel, (Bruna et al., 2013) proposed Graph Convolutional Networks. GCNs differ from GGNNs in their graph propagation model. GGNNs exploit recurrent neural networks to learn propagation weights through time steps. Each step shares the same set of parameters. On the other hand, GCNs train unshared CNN layers through time steps. In this paper, we employed GGNNs as a design choice. Similar to our model architecture, Bastings et al. (2017) ; Beck et al. (2018) used graphs to incorporate syntax into neural machine translation and Surface Realization Song et al. (2018) introduced a graph-to-sequence LSTM for AMR-totext generation that can encode AMR structures directly. The model takes multiple recurrent transition steps in order to propagate information beyond local neighbourhoods. But this method must maintain the entire graph state at each time step. Our models also simultaneously update every node in the tree at every time step. The encoder of Trisedya et al. (2018) takes input RDF triples rendered as a graph and builds a dynamic recurrent structure that traverses the adjacency matrix of the graph one node at a time. 
Marcheggiani and Perez-Beltrachini (2018), again using a GCN, take only the nodes of the RDF graph as input, using the edges directly as a weight matrix. They, too, must update the entire graph at every time step.", "cite_spans": [ { "start": 68, "end": 91, "text": "Scarselli et al. (2009)", "ref_id": "BIBREF28" }, { "start": 94, "end": 110, "text": "Li et al. (2016)", "ref_id": "BIBREF20" }, { "start": 182, "end": 202, "text": "(Bruna et al., 2013)", "ref_id": null }, { "start": 591, "end": 613, "text": "Bastings et al. (2017)", "ref_id": "BIBREF1" }, { "start": 725, "end": 743, "text": "Song et al. (2018)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Language Modelling The task of language modelling has a long and distinguished history. Although the term itself was not coined until Jelinek et al. (1975) , the earliest work of Shannon (1948) on entropy presents what are effectively characterlevel language models as a motivating example. In both cases, given a prefix of characters/words or classes (Brown et al., 1992) , the aim of the task is to predict the next such event. n-gram language models factor any dependency of the next event on the prefix through its dependency on the final n \u2212 1 events in the prefix. This long remained the dominant type of language model, but the advent of neural language models (Bengio et al., 2003) , and particularly vector-space embeddings of certain lexical-semantic relations, has drastically changed that landscape. See, e.g., models using recurrent networks (Mikolov et al., 2010) , year (Mikolov et al., 2011) , LSTMs (Sundermeyer et al., 2012) , sequence-to-sequence LSTMs models , and convolutional networks (Gehring et al., 2017) and transformers (Devlin et al., 2019) . An earlier, but ultimately unsuccessful attempt at dislodging n-gram language models was that of Chelba (2000) , who augmented this prefix with syntactic information. Chelba (2000) did not use conventional parse trees from any of the then-common parse-annotated corpora, nor from linguistic theory, because these degraded rather than enhanced language modelling performance. Instead, he had to remain very sparing in order to realize an empirical improvement. The present model not only shares information at the dimensional level, but projects syntactic structure over the words to be predicted. While this makes structured realization a very different task from structured language modelling, this not only appears to improve perplexity, but does so without having to change the conventional representation of trees found in syntactic corpora. The present model could therefore be used to evaluate competing syntactic representations in a controlled way that quantifies their ability to assist with word prediction, as we have here.", "cite_spans": [ { "start": 134, "end": 155, "text": "Jelinek et al. 
(1975)", "ref_id": "BIBREF12" }, { "start": 179, "end": 193, "text": "Shannon (1948)", "ref_id": "BIBREF29" }, { "start": 352, "end": 372, "text": "(Brown et al., 1992)", "ref_id": "BIBREF4" }, { "start": 668, "end": 689, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF3" }, { "start": 855, "end": 877, "text": "(Mikolov et al., 2010)", "ref_id": "BIBREF24" }, { "start": 885, "end": 907, "text": "(Mikolov et al., 2011)", "ref_id": "BIBREF25" }, { "start": 916, "end": 942, "text": "(Sundermeyer et al., 2012)", "ref_id": "BIBREF32" }, { "start": 1008, "end": 1030, "text": "(Gehring et al., 2017)", "ref_id": "BIBREF10" }, { "start": 1048, "end": 1069, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 1169, "end": 1182, "text": "Chelba (2000)", "ref_id": "BIBREF6" }, { "start": 1239, "end": 1252, "text": "Chelba (2000)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "GGNNs have proved to be effective as encoders of constituent parse trees from a variety of perspectives, including realization accuracy, perplexity, word-level prediction accuracy, categorical cohesion of predictions, and novel lexical selection. A limitation of this study is the comparatively modest size of its corpora, which is due to the requirement for properly curated parse-annotated data. Finding ways to scale up to larger training and test sets without the bias introduced by automated parsers remains an important issue to investigate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Chelba (2000) proposes that, in order to iteratively predict one word at a time, a structured language model should predict syntactic structure over every word that it has predicted, but in his evaluation, it is very clear that he is more concerned with the first stage of word prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Efficient interactive annotation of segmentation datasets with polygon-rnn++", "authors": [ { "first": "David", "middle": [], "last": "Acuna", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Amlan", "middle": [], "last": "Kar", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2018, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Acuna, Huan Ling, Amlan Kar, and Sanja Fidler. 2018. Efficient interactive annotation of segmenta- tion datasets with polygon-rnn++. In CVPR.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Graph convolutional encoders for syntax-aware neural machine translation", "authors": [ { "first": "Joost", "middle": [], "last": "Bastings", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Khalil", "middle": [], "last": "Sima", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. 2017. Graph convolutional encoders for syntax-aware neural ma- chine translation. 
In EMNLP.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Graph-to-sequence learning using gated graph neural networks", "authors": [ { "first": "Daniel", "middle": [ "Edward" ], "last": "", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Edward Robert Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning us- ing gated graph neural networks. In ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A neural probabilistic language model", "authors": [ { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "P", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "C", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. 2003. A neural probabilistic language model. Jour- nal of Machine Learning Research, 3:1137-1155.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Classbased n-gram models of natural language", "authors": [ { "first": "", "middle": [], "last": "Peter F Brown", "suffix": "" }, { "first": "V", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Desouza", "suffix": "" }, { "first": "L", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Vincent J Della", "middle": [], "last": "Mercer", "suffix": "" }, { "first": "Jenifer C", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Lai", "suffix": "" } ], "year": 1992, "venue": "Computational linguistics", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F Brown, Peter V Desouza, Robert L Mercer, Vin- cent J Della Pietra, and Jenifer C Lai. 1992. Class- based n-gram models of natural language. Compu- tational linguistics, 18(4):467-479.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Exploiting Syntactic Structure for Natural Language Modeling", "authors": [ { "first": "C", "middle": [], "last": "Chelba", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Chelba. 2000. Exploiting Syntactic Structure for Natural Language Modeling. Ph.D. 
thesis, Johns Hopkins University.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Transitionbased dependency parsing with stack long shortterm memory", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short- term memory. In ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Recurrent neural network grammars", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2016, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proc. of NAACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Convolutional sequence to sequence learning", "authors": [ { "first": "Jonas", "middle": [], "last": "Gehring", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Yarats", "suffix": "" }, { "first": "Yann N", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "1243--1252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243-1252. JMLR. 
org.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Ontonotes: The 90% solution", "authors": [ { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06", "volume": "", "issue": "", "pages": "57--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06, pages 57-60, Stroudsburg, PA, USA. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Design of a linguistic statistical decoder for the recognition of continuous speech", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "L", "middle": [ "R" ], "last": "Bahl", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1975, "venue": "IEEE Transactions on Information Theory", "volume": "", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Jelinek, L.R. Bahl, and R.L. Mercer. 1975. Design of a linguistic statistical decoder for the recognition of continuous speech. IEEE Transactions on Infor- mation Theory, IT-21(3).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A multiplicative model for learning distributed text-based attribute representations", "authors": [ { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Ruslan R", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Kiros, Richard Zemel, and Ruslan R Salakhut- dinov. 2014. A multiplicative model for learn- ing distributed text-based attribute representations. In Z. Ghahramani, M. Welling, C. Cortes, N. D.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Advances in Neural Information Processing Systems", "authors": [ { "first": "K", "middle": [ "Q" ], "last": "Lawrence", "suffix": "" }, { "first": "", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": null, "venue": "", "volume": "27", "issue": "", "pages": "2348--2356", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2348-2356. Curran Associates, Inc.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Constituency parsing with a self-attentive encoder", "authors": [ { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.01052" ] }, "num": null, "urls": [], "raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. 
arXiv preprint arXiv:1805.01052.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improved backing-off for m-gram language modeling", "authors": [ { "first": "R", "middle": [], "last": "Kneser", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1995, "venue": "International Conference on Acoustics, Speech, and Signal Processing", "volume": "1", "issue": "", "pages": "181--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Kneser and H. Ney. 1995. Improved backing-off for m-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Pro- cessing, volume 1, pages 181-184 vol.1.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Predictive incremental parsing helps language modeling", "authors": [ { "first": "A", "middle": [], "last": "K\u00f6hn", "suffix": "" }, { "first": "T", "middle": [], "last": "Baumann", "suffix": "" } ], "year": 2016, "venue": "Proc. 26th COLING", "volume": "", "issue": "", "pages": "268--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. K\u00f6hn and T. Baumann. 2016. Predictive incremen- tal parsing helps language modeling. In Proc. 26th COLING, pages 268-277.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "What do recurrent neural network grammars learn about syntax?", "authors": [ { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Lingpeng", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "1", "issue": "", "pages": "1249--1258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1249-1258, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The insideoutside recursive neural network model for dependency parsing", "authors": [ { "first": "Phong", "middle": [], "last": "Le", "suffix": "" }, { "first": "Willem", "middle": [], "last": "Zuidema", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "729--739", "other_ids": { "DOI": [ "10.3115/v1/D14-1081" ] }, "num": null, "urls": [], "raw_text": "Phong Le and Willem Zuidema. 2014. The inside- outside recursive neural network model for depen- dency parsing. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 729-739. 
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Gated graph sequence neural networks", "authors": [ { "first": "Yujia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Tarlow", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brockschmidt", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neu- ral networks. CoRR, abs/1511.05493.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deep graph convolutional encoders for structured data to text generation", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "1--9", "other_ids": { "DOI": [ "10.18653/v1/W18-6501" ] }, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep graph convolutional encoders for struc- tured data to text generation. In Proceedings of the 11th International Conference on Natural Lan- guage Generation, pages 1-9, Tilburg University, The Netherlands. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Encoding sentences with graph convolutional networks for semantic role labeling", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for se- mantic role labeling. In EMNLP.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. 
Computa- tional Linguistics, 19(2):313-330.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Recurrent neural network based language model", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "Luk\u00e1\u0161", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Ja\u0148", "middle": [], "last": "Cernock\u1ef3", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2010, "venue": "Eleventh annual conference of the international speech communication association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech com- munication association.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Extensions of recurrent neural network language model", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Kombrink", "suffix": "" }, { "first": "Luk\u00e1\u0161", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Ja\u0148", "middle": [], "last": "Cernock\u1ef3", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2011, "venue": "2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "5528--5531", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov, Stefan Kombrink, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u1ef3, and Sanjeev Khudanpur. 2011. Exten- sions of recurrent neural network language model. In 2011 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 5528-5531. IEEE.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. 
In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Towards robust linguistic analysis using OntoNotes", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sameer Pradhan", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Zhong", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "143--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using OntoNotes. In Pro- ceedings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 143-152, Sofia, Bulgaria. Association for Computational Lin- guistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The graph neural network model", "authors": [ { "first": "F", "middle": [], "last": "Scarselli", "suffix": "" }, { "first": "M", "middle": [], "last": "Gori", "suffix": "" }, { "first": "A", "middle": [ "C" ], "last": "Tsoi", "suffix": "" }, { "first": "M", "middle": [], "last": "Hagenbuchner", "suffix": "" }, { "first": "G", "middle": [], "last": "Monfardini", "suffix": "" } ], "year": 2009, "venue": "IEEE Transactions on Neural Networks", "volume": "20", "issue": "1", "pages": "61--80", "other_ids": { "DOI": [ "10.1109/TNN.2008.2005605" ] }, "num": null, "urls": [], "raw_text": "F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. 2009. The graph neural net- work model. IEEE Transactions on Neural Net- works, 20(1):61-80.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A mathematical theory of communication", "authors": [ { "first": "C", "middle": [ "E" ], "last": "Shannon", "suffix": "" } ], "year": 1948, "venue": "Bell System Technical Journal", "volume": "27", "issue": "", "pages": "623--656", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.E. Shannon. 1948. A mathematical theory of com- munication. Bell System Technical Journal, 27:379- 423 (July), 623-656 (October).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. 
Recursive deep mod- els for semantic compositionality over a sentiment treebank. In EMNLP.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A graph-to-sequence model for AMRto-text generation", "authors": [ { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1616--1626", "other_ids": { "DOI": [ "10.18653/v1/P18-1150" ] }, "num": null, "urls": [], "raw_text": "Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for AMR- to-text generation. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616- 1626, Melbourne, Australia. Association for Compu- tational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Lstm neural networks for language modeling", "authors": [ { "first": "Martin", "middle": [], "last": "Sundermeyer", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2012, "venue": "Thirteenth annual conference of the international speech communication association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Sundermeyer, Ralf Schl\u00fcter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In Thirteenth annual conference of the international speech communication association.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104-3112.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Improved semantic representations from tree-structured long short-term memory networks", "authors": [ { "first": "Kai Sheng", "middle": [], "last": "Tai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. 
In ACL.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "GTR-LSTM: A triple encoder for sentence generation from RDF data", "authors": [ { "first": "Jianzhong", "middle": [], "last": "Bayu Distiawan Trisedya", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1627--1637", "other_ids": { "DOI": [ "10.18653/v1/P18-1151" ] }, "num": null, "urls": [], "raw_text": "Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. GTR-LSTM: A triple encoder for sentence generation from RDF data. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1627-1637, Melbourne, Australia. As- sociation for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia", "authors": [ { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Kaufman", "suffix": "" }, { "first": "Michelle", "middle": [], "last": "Franchini", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Ni- anwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadel- phia, PA, 23.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Language modeling with shared grammar", "authors": [ { "first": "Yuyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Le", "middle": [], "last": "Song", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4442--4453", "other_ids": { "DOI": [ "10.18653/v1/P19-1437" ] }, "num": null, "urls": [], "raw_text": "Yuyu Zhang and Le Song. 2019. Language modeling with shared grammar. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4442-4453, Florence, Italy. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Long short-term memory over tree structures", "authors": [ { "first": "Xiao-Dan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Hongyu", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiao-Dan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over tree structures. 
CoRR, abs/1503.04881.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "An example GGNN. The GGNN generates output from a directed graph. It consists of a propagation model and an output model. During the propagation step, there are two different edge types in this graph. Black arrows are the OUT edges while red arrows are the IN edges.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "The adjacency matrix of the above input example (1) inFigure 2. Blank slots represent 0, meaning no edge between two nodes. A = [A f orward ,A backward ].", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Average cross entropy loss of validation set using Model 1 with different magnitude of vectors.", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "T 1 = \"(S (NP (NNP)) (VP (VBD) (ADJP (ADJP (RB ) (JJ )) (PP (IN ) (NP (PRP ))))) (. .))\", T 2 = \"(S (NP (PRP )) (VP (VBD ) (S (NP (PRP$ ) (NNS )) (VP (VBG ) (ADVP (RB ))))) (. .)).\"", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "html": null, "type_str": "table", "num": null, "text": "Statistics of the datasets used in this project. Max_s/Ave_s are the maximum/average lengths of sentences. Max_t/Ave_t are the maximum/average numbers of nodes in trees.", "content": "
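To make the propagation step described in the GGNN figure caption above concrete, the following is a minimal NumPy sketch of one GGNN update in the style of Li et al. (2016): messages are gathered separately along the OUT edges (A_forward) and the IN edges (A_backward), concatenated as in A = [A_forward, A_backward], and folded into the previous node states through a GRU-style gate. The function names, parameter shapes, and toy graph below are illustrative assumptions, not the implementation used in this paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, A_fwd, A_bwd, params):
    """One GGNN propagation step in the style of Li et al. (2016).

    h      : (n_nodes, d) current node states
    A_fwd  : (n_nodes, n_nodes) adjacency over OUT edges
    A_bwd  : (n_nodes, n_nodes) adjacency over IN edges
    params : dict of weight matrices (illustrative shapes only)
    """
    # Gather messages separately along the two edge types, then
    # concatenate them, mirroring A = [A_forward, A_backward].
    a = np.concatenate([A_fwd @ h @ params["W_out"],
                        A_bwd @ h @ params["W_in"]], axis=1)

    # GRU-style gating folds the aggregated messages into the node states.
    z = sigmoid(a @ params["W_z"] + h @ params["U_z"])        # update gate
    r = sigmoid(a @ params["W_r"] + h @ params["U_r"])        # reset gate
    h_cand = np.tanh(a @ params["W_h"] + (r * h) @ params["U_h"])
    return (1.0 - z) * h + z * h_cand

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 5, 8                                  # toy graph: 5 nodes, 8-dim states
    shapes = {"W_out": (d, d), "W_in": (d, d),
              "W_z": (2 * d, d), "U_z": (d, d),
              "W_r": (2 * d, d), "U_r": (d, d),
              "W_h": (2 * d, d), "U_h": (d, d)}
    params = {k: rng.normal(scale=0.1, size=s) for k, s in shapes.items()}
    h = rng.normal(size=(n, d))
    A_fwd = np.triu(np.ones((n, n)), k=1)        # toy OUT edges
    A_bwd = A_fwd.T                              # corresponding IN edges
    print(ggnn_step(h, A_fwd, A_bwd, params).shape)  # -> (5, 8)
```

Stacking several such steps and decoding the final states of the unrealized leaf positions through an output layer would, in principle, yield the word distributions evaluated in the tables that follow; the sketch is only meant to fix the bookkeeping of the two edge types.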
Accuracy       M1      M2      M3
word gold      32.34   32.09   34.64
word benepar   31.47   31.93   33.96
POS gold       94.39   89.47   92.3
POS benepar    93.6    89.19   91.9
" }, "TABREF2": { "html": null, "type_str": "table", "num": null, "text": "Sentence-level accuracies of the models on the OntoNotes test set. \"Benepar\" is the Berkeley neural parser(Kitaev and Klein, 2018).", "content": "
T1 god was very good to me .
jesus was very happy for him .
god said accusatory ones while it .
i made my people talking alone .
he had their people talking again .
T2 he told their people coming again .
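The realizations above are generated for the bracketed trees T1 and T2 shown in the figure earlier. As a further illustration of the input representation, the sketch below shows one hypothetical way to turn such a tree into a node list plus the concatenated adjacency matrix A = [A_forward, A_backward] from the figure caption; the nested-tuple encoding, the pre-order numbering, and the function name tree_to_adjacency are assumptions made here for illustration, not the paper's actual preprocessing.

```python
import numpy as np

# A toy constituency tree given as nested tuples: (label, children...).
# Pre-terminals with unrealized leaves simply have no children, matching
# the "overhang" positions in trees such as T1 above.
TREE = ("S",
        ("NP", ("NNP",)),
        ("VP", ("VBD",), ("ADJP", ("RB",), ("JJ",))),
        (".",))

def tree_to_adjacency(tree):
    """Number the nodes in pre-order and build A = [A_forward, A_backward]."""
    labels, edges = [], []

    def visit(node, parent):
        idx = len(labels)
        labels.append(node[0])
        if parent is not None:
            edges.append((parent, idx))          # OUT edge parent -> child
        for child in node[1:]:
            visit(child, idx)

    visit(tree, None)
    n = len(labels)
    A_fwd = np.zeros((n, n))
    for p, c in edges:
        A_fwd[p, c] = 1.0
    A_bwd = A_fwd.T                              # IN edges reverse the OUT edges
    return labels, np.concatenate([A_fwd, A_bwd], axis=1)

labels, A = tree_to_adjacency(TREE)
print(labels)      # ['S', 'NP', 'NNP', 'VP', 'VBD', 'ADJP', 'RB', 'JJ', '.']
print(A.shape)     # (9, 18), i.e. [A_forward, A_backward] side by side
```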
" }, "TABREF3": { "html": null, "type_str": "table", "num": null, "text": "", "content": "" }, "TABREF5": { "html": null, "type_str": "table", "num": null, "text": "Perplexities of the OntoNotes/PTB test trees in which all words have GloVe vectors.", "content": "
Model           Test ppl
KN-5-gram
" }, "TABREF7": { "html": null, "type_str": "table", "num": null, "text": "", "content": "" }, "TABREF9": { "html": null, "type_str": "table", "num": null, "text": "Rank distributions for models on the OntoNotes Test Set.", "content": "
model          \u2264 10   \u2264 100   \u2264 1000   \u2264 10000   >10000   Med   Mean
Model 1        55.7%    13.8%     17.6%      11.3%       1.4%     4     678
Model 2        57.0%    14.5%     16.5%      10.7%       1.3%     4     614
Model 3        57.2%    13.9%     16.7%      10.8%       1.3%     4     624
LSTM-256 LM    50.9%    22.3%     17.1%      8.8%        0.9%     10    470
" }, "TABREF10": { "html": null, "type_str": "table", "num": null, "text": "Rank distributions for models on the PTB Test Set.", "content": "" }, "TABREF11": { "html": null, "type_str": "table", "num": null, "text": "used ERS graph convolutional networks as dependency tree encoders for semantic role labelling. Even before graph neural networks", "content": "
                 OntoNotes                           PTB
Model            POS acc   Word acc                  POS acc   Word acc
                           (\u00b5avg / macavg / med)           (\u00b5avg / macavg / med)
Model 1          94.35     32.4 / 26.6 / 38.6        94.28     35.47 / 41.74 / 47.46
Model 2          89.4      32.0 / 37.39 / 43.52      97.66     37.12 / 43.74 / 49.22
Model 3          92.33     34.7 / 37.9 / 44.3        95.73     37.27 / 41.69 / 44.37
LSTM-256 LM                22.4                                23.57
" }, "TABREF12": { "html": null, "type_str": "table", "num": null, "text": "Percentage POS prediction accuracies and word prediction accuracies, for each model.", "content": "
became popular, there were attempts akin to graph encoders. Dyer et al. (2015), Socher et al. (2013), Tai et al. (2015), Zhu et al. (2015), and Le and Zuidema (2014) encoded tree structures with recursive neural networks or Tree-LSTMs.
" } } } }