{ "paper_id": "Q19-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:09:34.666025Z" }, "title": "Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning", "authors": [ { "first": "Zhijiang", "middle": [], "last": "Guo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Singapore University of Technology", "location": { "addrLine": "Design 8 Somapah Road", "postCode": "487372", "settlement": "Singapore" } }, "email": "zhijiangguo@mymail.sutd.edu.sg" }, { "first": "Yan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Singapore University of Technology", "location": { "addrLine": "Design 8 Somapah Road", "postCode": "487372", "settlement": "Singapore" } }, "email": "yanzhang@mymail.sutd.edu.sg" }, { "first": "Zhiyang", "middle": [], "last": "Teng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Singapore University of Technology", "location": { "addrLine": "Design 8 Somapah Road", "postCode": "487372", "settlement": "Singapore" } }, "email": "zhiyangteng@mymail.sutd.edu.sg" }, { "first": "Wei", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Singapore University of Technology", "location": { "addrLine": "Design 8 Somapah Road", "postCode": "487372", "settlement": "Singapore" } }, "email": "luwei@sutd.edu.sg" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We focus on graph-to-sequence learning, which can be framed as transducing graph structures to sequences for text generation. To capture structural information associated with graphs, we investigate the problem of encoding graphs using graph convolutional networks (GCNs). Unlike various existing approaches where shallow architectures were used for capturing local structural information only, we introduce a dense connection strategy, proposing a novel Densely Connected Graph Convolutional Network (DCGCN). Such a deep architecture is able to integrate both local and non-local features to learn a better structural representation of a graph. Our model outperforms the state-of-the-art neural models significantly on AMR-to-text generation and syntax-based neural machine translation.", "pdf_parse": { "paper_id": "Q19-1019", "_pdf_hash": "", "abstract": [ { "text": "We focus on graph-to-sequence learning, which can be framed as transducing graph structures to sequences for text generation. To capture structural information associated with graphs, we investigate the problem of encoding graphs using graph convolutional networks (GCNs). Unlike various existing approaches where shallow architectures were used for capturing local structural information only, we introduce a dense connection strategy, proposing a novel Densely Connected Graph Convolutional Network (DCGCN). Such a deep architecture is able to integrate both local and non-local features to learn a better structural representation of a graph. Our model outperforms the state-of-the-art neural models significantly on AMR-to-text generation and syntax-based neural machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Graphs play an important role in natural language processing (NLP) as they are able to capture richer structural information than sequences and trees. Generally, semantics of sentences can be encoded as graphs. 
For example, the abstract meaning representation (AMR) (Banarescu et al., 2013 ) is a directed, labeled graph as shown in Figure 1 , where nodes in the graph denote semantic concepts and edges denote relations between concepts. Such graph representations can capture rich semanticlevel structural information, and are attractive representations useful for semantics-related tasks such as semantic parsing (Guo and Lu, 2018) and natural language generation (Beck et al., 2018) . In this paper, we focus on the graph-to-sequence * Contributed equally.", "cite_spans": [ { "start": 266, "end": 289, "text": "(Banarescu et al., 2013", "ref_id": "BIBREF3" }, { "start": 616, "end": 634, "text": "(Guo and Lu, 2018)", "ref_id": "BIBREF16" }, { "start": 667, "end": 686, "text": "(Beck et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 333, "end": 341, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "learning tasks, where we aim to learn representations for graphs that are useful for text generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Graph convolutional networks (GCNs) (Kipf and Welling, 2017) are variants of convolutional neural networks (CNNs) that operate directly on graphs, where the representation of each node is iteratively updated based on those of its adjacent nodes in the graph through an information propagation scheme. For example, the first layer of GCNs can only capture the graph's adjacency information between immediate neighbors, while with the second layer one will be able to capture second-order proximity information (neighborhood information two hops away from one node) as shown in Figure 1 . Formally, L layers will be needed in order to capture neighborhood information that is L hops away.", "cite_spans": [], "ref_spans": [ { "start": 576, "end": 584, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "GCNs have been successfully applied to many NLP tasks (Bastings et al., 2017; Zhang et al., 2018b) . Interestingly, although deeper GCNs with more layers will be able to capture richer neighborhood information of a graph, empirically it has been observed that the best performance is achieved with a 2-layer model .", "cite_spans": [ { "start": 54, "end": 77, "text": "(Bastings et al., 2017;", "ref_id": "BIBREF4" }, { "start": 78, "end": 98, "text": "Zhang et al., 2018b)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Therefore, recent efforts that leverage recurrencebased graph neural networks have been explored as the alternatives to encode the structural information of graphs. Examples include graph-state long short-term memory (LSTM) networks (Song et al., 2018) and gated graph neural networks (GGNNs) (Beck et al., 2018) . 
Deep architectures based on such recurrence-based models have been successfully built for tasks such as language generation, where rich neighborhood information captured was shown useful.", "cite_spans": [ { "start": 233, "end": 252, "text": "(Song et al., 2018)", "ref_id": "BIBREF43" }, { "start": 293, "end": 312, "text": "(Beck et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Compared with recurrent neural networks, convolutional architectures are highly parallelizable and are more amenable to hardware acceleration Figure 1 : A 3-layer densely connected graph convolutional network. The example AMR graph here corresponds to the sentence ''You guys know what I mean.'' Every layer encodes information about immediate neighbors and 3 layers are needed to capture thirdorder neighborhood information (nodes that are 3 hops away from the current node). Each layer concatenates all preceding outputs as the input. (Gehring et al., 2017) . It is therefore worthwhile to explore the possibility of applying deeper GCNs that are able to capture more non-local information associated with the graph for graphto-sequence learning. Prior efforts have tried to train deep GCNs by incorporating residual connections (Bastings et al., 2017) . Xu et al. (2018) show that vanilla residual connections proposed by He et al. (2016) are not effective for graph neural networks. They next attempt to resolve this issue by adding additional recurrent layers on top of graph convolutional layers. However, they are still confined to relatively shallow GCNs architectures (at most 6 layers in their experiments), which may not be able to capture the rich nonlocal interactions for larger graphs.", "cite_spans": [ { "start": 537, "end": 559, "text": "(Gehring et al., 2017)", "ref_id": "BIBREF12" }, { "start": 831, "end": 854, "text": "(Bastings et al., 2017)", "ref_id": "BIBREF4" }, { "start": 857, "end": 873, "text": "Xu et al. (2018)", "ref_id": "BIBREF49" }, { "start": 925, "end": 941, "text": "He et al. (2016)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 142, "end": 150, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, to better address the issue of learning deeper GCNs, we introduce dense connectivity to GCNs and propose the novel densely connected graph convolutional networks (DCGCNs), inspired by DenseNets (Huang et al., 2017 ) that distill insights from residual connections. The dense connectivity strategy is illustrated in Figure 1 schematically. Direct connections are introduced from any layer to all its preceding layers. For example, the third layer receives the outputs of the first layer and the second layer, capturing the first-order, the second-order, and the third-order neighborhood information. With the help of dense connections, we are able to train multi-layer GCN models with a large depth, allowing rich local and non-local information to be captured for learning a better graph representation than those learned from the shallower GCN models.", "cite_spans": [ { "start": 209, "end": 228, "text": "(Huang et al., 2017", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 330, "end": 338, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experiments show that our model is able to achieve better performance for graph-to-sequence learning tasks. 
For the AMR-to-text generation task, our model surpasses the current state-ofthe-art neural models trained on LDC2015E86 and LDC2017T10 by 2 and 4.3 BLEU points, respectively. For the syntax-based neural machine translation task, our model is also consistently better than others, showing the effectiveness of the model on a large training set. Our code is available at https://github.com/Cartus/ DCGCN. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we will present the basic components used for constructing our DCGCN model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Densely Connected GCNs", "sec_num": "2" }, { "text": "GCNs are neural networks that operate directly on graph structures (Kipf and Welling, 2017). Here we mathematically illustrate how multi-layer GCNs work on an undirected graph G = (V, E), where V and E are the set of nodes and edges, respectively. The convolution computation for node v at the l-th layer, which takes the input feature representation h (l\u22121) as input and outputs the induced representation h (l) v , can be defined as", "cite_spans": [ { "start": 409, "end": 412, "text": "(l)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "GCNs", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h (l) v = \u03c1 u\u2208N (v) W (l) h (l\u22121) u + b (l)", "eq_num": "(1)" } ], "section": "GCNs", "sec_num": "2.1" }, { "text": "where W (l) is the weight matrix, b (l) is the bias vector, N (v) is the set of one-hop neighbors of node v, and \u03c1 is an activation function (e.g., RELU [Nair and Hinton, 2010] ). h", "cite_spans": [ { "start": 36, "end": 39, "text": "(l)", "ref_id": null }, { "start": 153, "end": 176, "text": "[Nair and Hinton, 2010]", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "GCNs", "sec_num": "2.1" }, { "text": "(0) v is the initial input x v , where x v \u2208 R d and d is the input feature dimension.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCNs", "sec_num": "2.1" }, { "text": "GCNs with Residual Connections. Bastings et al. (2017) integrate residual connections (He et al., 2016) into GCNs to help information propagation. Specifically, each node is updated according to Equation (1) first and then the resulting representation is combined with the node's representation from the last iteration:", "cite_spans": [ { "start": 32, "end": 54, "text": "Bastings et al. (2017)", "ref_id": "BIBREF4" }, { "start": 86, "end": 103, "text": "(He et al., 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "GCNs", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h (l) v = \u03c1 u\u2208N (v) W (l) h (l\u22121) u + b (l) + h (l\u22121) v", "eq_num": "(2)" } ], "section": "GCNs", "sec_num": "2.1" }, { "text": "GCNs with Layer Aggregations. Xu et al. (2018) propose layer aggregations for GCNs, in which the final representation of each node is computed by combining the node's representations from all GCN layers:", "cite_spans": [ { "start": 30, "end": 46, "text": "Xu et al. (2018)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "GCNs", "sec_num": "2.1" }, { "text": "h final v = LA(h (l) v , h (l\u22121) v , . . . . 
, h (1) v ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCNs", "sec_num": "2.1" }, { "text": "where the LA function can be concatenation, maxpooling, or LSTM-attention operations as defined in Xu et al. (2018) .", "cite_spans": [ { "start": 99, "end": 115, "text": "Xu et al. (2018)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "GCNs", "sec_num": "2.1" }, { "text": "Dense connectivity is the core component of the proposed DCGCN. With dense connectivity, node v in the l-th layer not only takes inputs from h (l\u22121) , but also receives information from all the preceding layers, as shown in Figure 2 . Mathematically, we first define g", "cite_spans": [ { "start": 143, "end": 148, "text": "(l\u22121)", "ref_id": null } ], "ref_spans": [ { "start": 224, "end": 232, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "(l)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "u as the concatenation of the initial node representation and the node representations produced in layers 1,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2022 \u2022 \u2022 , l \u2212 1: g (l) u = [x u ; h (1) u ; . . . ; h (l\u22121) u ].", "eq_num": "(4)" } ], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "Such a mechanism allows deeper layers to capture all previous information to alleviate the problem discussed in Section 1 in graph neural networks. Similar strategies are also proposed in previous work (He et al., 2016; Huang et al., 2017) . While dense connectivity allows training deeper neural networks, every intermediate layer is designated to be of very small size, allowing adding only a small set of feature-maps at each layer. The final classifier makes predictions based on all feature-maps, which is called ''collective knowledge'' (Huang et al., 2017) . Such a strategy improves the parameter efficiency. In practice, the dimensions of these small hidden layers d hidden are decided by the number of layers L and the input feature dimension d. In DCGCN, we use", "cite_spans": [ { "start": 202, "end": 219, "text": "(He et al., 2016;", "ref_id": "BIBREF18" }, { "start": 220, "end": 239, "text": "Huang et al., 2017)", "ref_id": "BIBREF21" }, { "start": 543, "end": 563, "text": "(Huang et al., 2017)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "d hidden = d/L.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "For example, if we have a 3-layer (L = 3) DCGCN model and input dimension is 300 (d = 300), the hidden dimension of each layer will be d hidden = d/L = 300/3 = 100. Then Figure 2 : Each DCGCN block has two sub-blocks. Both of them are densely connected graph convolutional layers with different numbers of layers. A linear transformation is used between two sub-blocks, followed by a residual connection.", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 178, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "we concatenate the output of each layer to form the new representation. 
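To make this concatenation scheme and the dimension bookkeeping concrete, the following is a minimal PyTorch-style sketch of one densely connected sub-block. It is an illustration under simplifying assumptions (a single edge type, no attention, a plain dense adjacency matrix), and all class and variable names are ours rather than those of the released implementation:

```python
import torch
import torch.nn as nn

class DenselyConnectedGCN(nn.Module):
    """Sketch of one densely connected sub-block (Equations 4-5).

    Layer l reads the concatenation of the block input and all previous
    layer outputs (dimension d + (l - 1) * d_hidden) and emits a small
    d_hidden = d // L vector per node; the block output concatenates the
    L layer outputs, restoring dimension d.
    """
    def __init__(self, d: int, num_layers: int):
        super().__init__()
        assert d % num_layers == 0, "d_hidden = d / L must be an integer"
        self.d_hidden = d // num_layers
        self.layers = nn.ModuleList(
            nn.Linear(d + l * self.d_hidden, self.d_hidden)
            for l in range(num_layers)
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, d)           initial node features
        # adj: (num_nodes, num_nodes)   adj[v, u] = 1 iff u is a neighbour of v
        outputs, cache = [], [x]
        for layer in self.layers:
            g = torch.cat(cache, dim=-1)                               # Eq. (4)
            h = torch.relu(adj @ (g @ layer.weight.t()) + layer.bias)  # Eq. (5)
            outputs.append(h)
            cache.append(h)
        return torch.cat(outputs, dim=-1)            # concatenate the L layer outputs


# The worked example from the text: L = 3 layers, d = 300, so d_hidden = 100
# and the concatenated block output is 3 x 100 = 300 dimensional again.
block = DenselyConnectedGCN(d=300, num_layers=3)
nodes = torch.randn(5, 300)
adj = torch.eye(5)                      # 5 toy nodes with self-loops only
print(block(nodes, adj).shape)          # torch.Size([5, 300])
```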
We have 3 layers so the output dimension is 300 (3 \u00d7 100). Different from the GCN model whose hidden dimension is larger than or equal to the input dimension, the DCGCN model shrinks the hidden dimension as the number of layers increases in order to improve the parameter efficiency similar to DenseNets (Huang et al., 2017) .", "cite_spans": [ { "start": 376, "end": 396, "text": "(Huang et al., 2017)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "Accordingly, we modify the convolution computation of each layer as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h (l) v = \u03c1 u\u2208N (v) W (l) g (l) u + b (l)", "eq_num": "(5)" } ], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "The column dimension of the weight matrix increases by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "d hidden per layer, that is, W (l) \u2208 R d hidden \u00d7d (l) , where d (l) = d + d hidden \u00d7 (l \u2212 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dense Connectivity", "sec_num": "2.2" }, { "text": "Attention mechanisms have become almost a de facto standard in many sequence-based tasks (Vaswani et al., 2017) . In DCGCNs, we also incorporate the self-attention strategy by implicitly specifying different weights to different nodes in a neighborhood similar to graph attention networks (Velickovic et al., 2018) .", "cite_spans": [ { "start": 89, "end": 111, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF47" }, { "start": 289, "end": 314, "text": "(Velickovic et al., 2018)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "In order to perform self-attention on nodes, attention coefficients are required. The input for the calculation is a set of vectors,g (l) ", "cite_spans": [ { "start": 134, "end": 137, "text": "(l)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "= {g (l) 1 ,g (l) 2 , . . . ,g (l)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "n }, after node-wise feature transformationg", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "(l) u = W (l) g (l)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "u . As an initial step, a shared linear projection parameterized by a weight matrix, W a \u2208 R d hidden \u00d7d hidden , is applied to nodes in the graph. Attention coefficients can be computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 (l) ij = exp \u03c6 a [W ag (l) i ; W ag (l) j ] k\u2208N i exp \u03c6 a [W ag (l) i ; W ag (l) k ]", "eq_num": "(6)" } ], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "where a \u2208 R 2d hidden is a weight vector, \u03c6 is the activation function (here we use LeakyReLU [Girshick et al., 2014] ). 
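The coefficients of Equation (6) can be computed for all neighborhoods at once. The sketch below is illustrative only: it assumes a dense adjacency mask and uses the standard decomposition of a^T[z_i; z_j] into two dot products; the function and variable names are ours, not those of the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_coefficients(g: torch.Tensor, adj: torch.Tensor,
                           W_a: nn.Linear, a: torch.Tensor) -> torch.Tensor:
    """Illustrative computation of the attention coefficients in Eq. (6).

    g:   (n, d_hidden)   node vectors after the node-wise transformation
    adj: (n, n)          adj[i, j] = 1 iff j is a neighbour of i
    W_a: shared linear projection (d_hidden -> d_hidden)
    a:   (2 * d_hidden,) attention weight vector
    Returns alpha, where alpha[i, j] is node i's attention over neighbour j.
    """
    z = W_a(g)                                     # shared projection, (n, d_hidden)
    d = z.size(-1)
    # a^T [z_i ; z_j] = a_left^T z_i + a_right^T z_j
    left, right = z @ a[:d], z @ a[d:]             # each of shape (n,)
    scores = F.leaky_relu(left.unsqueeze(1) + right.unsqueeze(0))   # (n, n)
    scores = scores.masked_fill(adj == 0, float("-inf"))            # restrict to N(i)
    return torch.softmax(scores, dim=1)            # normalise over each neighbourhood


# Toy usage: 4 nodes on a chain with self-loops, hidden size 8.
n, d_hidden = 4, 8
adj = torch.eye(n) + torch.diag(torch.ones(n - 1), 1) + torch.diag(torch.ones(n - 1), -1)
alpha = attention_coefficients(torch.randn(n, d_hidden), adj,
                               nn.Linear(d_hidden, d_hidden, bias=False),
                               torch.randn(2 * d_hidden))
print(alpha.sum(dim=1))                            # each row sums to 1
```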
These coefficients are used to compute a linear combination of the node representations. Modifying the convolution computation for attention, we arrive at:", "cite_spans": [ { "start": 94, "end": 117, "text": "[Girshick et al., 2014]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h (l) v = \u03c1 u\u2208N (v) \u03b1 (l) vu W (l) g (l) u + b (l)", "eq_num": "(7)" } ], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "\u03b1 (l)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "vu are normalized attention coefficients computed by the attention mechanism at l-th layer. Note that these coefficients will not change the dimension of the output representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "3 Graph-to-Sequence Model In the following we will explain the model architecture of the graph-to-sequence model. We leverage DCGCNs as the graph encoder, which directly models the graph structure without linearization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Attention", "sec_num": "2.3" }, { "text": "The graph encoder is composed of DCGCN blocks, as shown in Figure 3 . Within each DCGCN block, we design two types of multi-layer DCGCNs as two sub-blocks to capture graph structure at different abstract levels. As Figure 2 shows, in each block, the first sub-block has n-layers and the second sub-block has m-layers. This prototype shares the same spirit with the usage of two different-sized filters in DenseNets (Huang et al., 2017) .", "cite_spans": [ { "start": 415, "end": 435, "text": "(Huang et al., 2017)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 59, "end": 67, "text": "Figure 3", "ref_id": null }, { "start": 215, "end": 223, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Graph Encoder", "sec_num": "3.1" }, { "text": "Linear Combination Layer. In addition to densely connected layers, we include a linear Figure 3 : The model concatenates node embeddings and positional embeddings as inputs. The encoder contains a stack of N identical blocks. The linear transformation layer combines output of all blocks into hidden representations. These are fed into an attention mechanism, generating the context vector. The decoder, a 2-layer LSTM (Hochreiter and Schmidhuber, 1997) , makes predictions based on hidden representations and the context vector.", "cite_spans": [ { "start": 419, "end": 453, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Graph Encoder", "sec_num": "3.1" }, { "text": "combination layer between multi-layer DCGCNs to filter the representations from different DCGCN layers, reaching a more expressive representation. This strategy is inspired by ELMo (Peters et al., 2018) , which combines the hidden states from different LSTM layers. We also use a residual connection (He et al., 2016) to incorporate the initial inputs of multi-layer GCNs into the linear combination layer, see Figure 3 . 
Formally, the output of the linear combination layer is defined as:", "cite_spans": [ { "start": 181, "end": 202, "text": "(Peters et al., 2018)", "ref_id": "BIBREF36" }, { "start": 300, "end": 317, "text": "(He et al., 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 411, "end": 419, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Graph Encoder", "sec_num": "3.1" }, { "text": "h comb = W comb h out + x v + b comb (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Encoder", "sec_num": "3.1" }, { "text": "where h out is the output of the densely connected layers by concatenating outputs from all previous L layers h out = [h (1) ; . . . ; h (L) ] and h out \u2208 R d .", "cite_spans": [ { "start": 121, "end": 124, "text": "(1)", "ref_id": null }, { "start": 137, "end": 140, "text": "(L)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Graph Encoder", "sec_num": "3.1" }, { "text": "x v is the input of the DCGCN layer. h out and x v share the same dimension d. W comb \u2208 R d\u00d7d is a weight matrix and b comb is a bias vector for the linear transformation. Both W comb and b comb are different according to different DCGCN layers. In addition, another linear combination layer is added to obtain the final representations as shown in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 349, "end": 357, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Graph Encoder", "sec_num": "3.1" }, { "text": "In order to improve the information propagation process in graph structures such as AMR graphs and dependency trees, previous researchers enrich the original input graphs with additional transformations. add reverse edges as well as self-loop edges for each node to the original graph. This strategy is similar to the bidirectional recurrent neural networks (RNNs) (Elman, 1990) , which can enjoy the information propagation from two directions. Beck et al. (2018) adapt this approach and additionally transform the directed input graphs into Levi graphs (Gross et al., 2013) . Basically, edges in the original graphs are turned into additional nodes in Levi graphs. With this approach, we can encode the original edge labels and node inputs in the same way. Specifically, Beck et al. (2018) define three types of edge labels on the Levi graph: default, reverse, and self, which refer to the original edges, the new virtual edges that are reverse to the original edges, and the self-loop edges. Scarselli et al. (2009) add another node that is connected to all other nodes. Zhang et al. (2018a) use a global sentence-level node to assemble and back-distribute information. Motivated by these works, we propose an extended Levi graph, which adds a global node in the Levi graph. For every node x in the original Levi graph, there is a new edge (global) from the global node to x. Figure 4 shows an example AMR graph and its corresponding extended Levi graph. The edge type vocabulary for the extended Levi graph of the AMR graph now becomes T = { default, reverse, self, global}. Our motivations are three-fold. First, the global node gives each node a global view of the input graph, which can make each node more aware of the non-local information. Second, the global node can serve as a hub to help node communications, which can facilitate the node information propagation process. 
Third, the output vectors of the global node in the encoder can be used as the initial states of the decoder, which are crucial for sequence-to-sequence learning tasks. Prior efforts average representations of all nodes as the graph embedding to initialize the decoder. Instead, we directly use the learned representation of the global nodes, which captures the information from all nodes in the whole graph.", "cite_spans": [ { "start": 365, "end": 378, "text": "(Elman, 1990)", "ref_id": "BIBREF9" }, { "start": 446, "end": 464, "text": "Beck et al. (2018)", "ref_id": "BIBREF5" }, { "start": 555, "end": 575, "text": "(Gross et al., 2013)", "ref_id": "BIBREF15" }, { "start": 773, "end": 791, "text": "Beck et al. (2018)", "ref_id": "BIBREF5" }, { "start": 995, "end": 1018, "text": "Scarselli et al. (2009)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 1379, "end": 1387, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Extended Levi Graph", "sec_num": "3.2" }, { "text": "The input to the syntax-based neural machine translation task is the dependency tree. Unlike the AMR graph, the sentence contains significant sequential information. Beck et al. (2018) inject this information by adding sequential connections to each token. In our model, we also add forward and backward sequential connections, as illustrated in Figure 5 . Therefore, the edge type vocabulary for the extended Levi graph of the dependency tree becomes T = {default, reverse, self, global, forward, backward}.", "cite_spans": [ { "start": 166, "end": 184, "text": "Beck et al. (2018)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 346, "end": 354, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Extended Levi Graph", "sec_num": "3.2" }, { "text": "Positional encodings about the relative or absolute position of the tokens have been proved beneficial for sequence learning (Gehring et al., 2017) . We also include positional encodings by concatenating them with the learned word embeddings. The positional encodings are indexed by integer values representing the minimum distance from the root node. For example, come-01 in Figure 4 is the root node of the AMR graph, so its index should be 0, where and is the child node of come-01, its index is 1. Notice that we denote the index of the global node as \u22121. ", "cite_spans": [ { "start": 125, "end": 147, "text": "(Gehring et al., 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 376, "end": 384, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Extended Levi Graph", "sec_num": "3.2" }, { "text": "Directionality and edge labels play an important role in linguistic structures. Information from incoming edges, outgoing edges, and self edges should be treated differently by using separate weight matrices. Moreover, information from incoming edges that have different labels should have different weight matrices, too. Following this motivation, we incorporate the directionality of an edge directly in its label. For example, node learn-01 in Figure 4 has three incoming edges, these edges have three different types: default (from node op2), self (from node learn-01), and global (from node gnode). For the AMR graph we have four types of edges while for dependency trees we have six as mentioned in Section 3.2. 
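To make the construction concrete, the following sketch builds an extended Levi graph for a small AMR fragment and groups its edges by type. It is purely illustrative (the data structures and naming are ours, not the released code); for dependency trees, forward and backward sequential edges would be added in the same way. The per-type edge lists it produces are exactly what the direction-aware convolution below operates on.

```python
from collections import defaultdict

def extended_levi_graph(nodes, edges):
    """Sketch of the extended Levi graph of Section 3.2.

    nodes: original concept nodes, e.g. ["know-01", "you", "thing"]
    edges: (head, label, dependent) triples of the original graph
    Returns (levi_nodes, typed_edges), where typed_edges maps each edge type
    in {default, reverse, self, global} to a list of (src, dst) pairs.
    """
    levi_nodes = list(nodes)
    typed = defaultdict(list)

    for i, (head, label, dep) in enumerate(edges):
        lab = f"{label}/{i}"                            # every edge label becomes its own node
        levi_nodes.append(lab)
        typed["default"] += [(head, lab), (lab, dep)]   # original direction
        typed["reverse"] += [(lab, head), (dep, lab)]   # reversed virtual edges

    levi_nodes.append("gnode")                          # the added global node
    for v in levi_nodes:
        typed["self"].append((v, v))                    # self-loop for every node
    typed["global"] += [("gnode", v) for v in levi_nodes if v != "gnode"]
    return levi_nodes, typed


# Toy AMR fragment: (know-01 :ARG0 you :ARG1 thing)
levi_nodes, typed = extended_levi_graph(
    ["know-01", "you", "thing"],
    [("know-01", "ARG0", "you"), ("know-01", "ARG1", "thing")])
print(sorted(typed))                                    # ['default', 'global', 'reverse', 'self']
```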
Thus, considering different type of edges, we modify the convolution computation as:", "cite_spans": [], "ref_spans": [ { "start": 447, "end": 455, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Direction Aggregation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v (l) t = \u03c1 u\u2208N (v) dir(u,v)=t \u03b1 (l) vu W (l) t g (l) u + b (l) t", "eq_num": "(9)" } ], "section": "Direction Aggregation", "sec_num": "3.3" }, { "text": "where dir (u, v) selects the weight matrix and bias term associated with the edge type t. For example, in the AMR generation task, there are four edge types: default, reverse, self, and global. Each type corresponds to a separate weight matrix and a separate bias term. Now we need to aggregate representations learned from different types of edges. A simple way to do this is averaging them to get the final representations. However, Hamilton et al. (2017) show that using a mean-based function to aggregate feature information from different nodes may not be satisfactory, since information from different sources should not be treated equally. Thus we assign different weights to information from different types of edges to integrate such information. Specifically, we concatenate the learned representations from all types of edges and perform a linear transformation, mathematically represented as:", "cite_spans": [ { "start": 10, "end": 16, "text": "(u, v)", "ref_id": null }, { "start": 435, "end": 457, "text": "Hamilton et al. (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Direction Aggregation", "sec_num": "3.3" }, { "text": "f ([v (l) 1 ; \u2022 \u2022 \u2022 ; v (l) T ]) = W f [v (l) 1 ; \u2022 \u2022 \u2022 ; v (l) T ] + b f (10) where W f \u2208 R d \u00d7d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direction Aggregation", "sec_num": "3.3" }, { "text": "hidden is the weight matrix and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direction Aggregation", "sec_num": "3.3" }, { "text": "d = T \u00d7 d hidden .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direction Aggregation", "sec_num": "3.3" }, { "text": "T is the size of the edge type vocabulary and d hidden is the hidden dimension in DCGCN layers as described in Section 2.2. b f \u2208 R d hidden is a bias vector. Finally, the convolution computation becomes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direction Aggregation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h (l) v = \u03c1 f ([v (l) 1 ; \u2022 \u2022 \u2022 ; v (l) T ])", "eq_num": "(11)" } ], "section": "Direction Aggregation", "sec_num": "3.3" }, { "text": "We use an attention-based LSTM decoder (Bahdanau et al., 2015) . The initial state of the decoder is the representation of the global node described in Section 3.2. The decoder yields the natural language sequence by calculating a sequence of hidden states sequentially. Here we also include the coverage mechanism (Tu et al., 2016) . 
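A single decoding step can be sketched as follows. This is a deliberately simplified, illustrative implementation: the attention scoring and the way the coverage vector enters it are our own simplifications rather than the exact formulation of Tu et al. (2016), and all names and signatures are assumptions rather than the released code.

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """Simplified sketch of one step of the attention-based LSTM decoder.

    `memory` holds the encoder's node representations; `coverage`
    accumulates the attention weights of all previous steps.
    """
    def __init__(self, d_word: int, d_hid: int, vocab_size: int):
        super().__init__()
        self.cell = nn.LSTMCell(d_word + d_hid, d_hid)
        self.score = nn.Linear(2 * d_hid + 1, 1)   # scores from [memory; state; coverage]
        self.out = nn.Linear(2 * d_hid, vocab_size)

    def forward(self, prev_emb, prev_state, prev_context, coverage, memory):
        # prev_emb: (B, d_word)   embedding of the previously generated token
        # prev_state: (h, c), each (B, d_hid); prev_context: (B, d_hid)
        # coverage: (B, N); memory: (B, N, d_hid)
        h, c = self.cell(torch.cat([prev_emb, prev_context], dim=-1), prev_state)
        query = h.unsqueeze(1).expand_as(memory)
        e = self.score(torch.cat([memory, query, coverage.unsqueeze(-1)], dim=-1))
        alpha = torch.softmax(e.squeeze(-1), dim=1)           # attention over nodes
        context = (alpha.unsqueeze(-1) * memory).sum(dim=1)   # new context vector
        logits = self.out(torch.cat([h, context], dim=-1))    # distribution over vocab
        return logits, (h, c), context, coverage + alpha      # coverage is accumulated


# Shape check with toy sizes (batch 2, graph of 7 nodes).
step = DecoderStep(d_word=300, d_hid=360, vocab_size=8000)
B, N = 2, 7
logits, state, ctx, cov = step(torch.randn(B, 300),
                               (torch.zeros(B, 360), torch.zeros(B, 360)),
                               torch.zeros(B, 360), torch.zeros(B, N),
                               torch.randn(B, N, 360))
```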
Therefore, when generating the t-th token, the decoder considers five factors: the attention memory, the word embedding of the (t \u2212 1)-th token, the previous hidden state of LSTM, the previous context vector, and the previous coverage vector.", "cite_spans": [ { "start": 39, "end": 62, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF2" }, { "start": 315, "end": 332, "text": "(Tu et al., 2016)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.4" }, { "text": "We assess the effectiveness of our models on two typical graph-to-sequence learning tasks, including AMR-to-text generation and syntaxbased neural machine translation (NMT). For the AMR-to-text generation task, we use two benchmarks-the LDC2015E86 dataset (AMR15) and the LDC2017T10 dataset (AMR17). In these datasets, each instance contains a sentence and an AMR graph. We follow Konstas et al. (2017) to apply entity simplification in the preprocessing steps. We then transform each preprocessed AMR graph into its extended Levi graph as described in Section 3.2. For the syntax-based NMT task, we evaluate our model on both the En-De and the En-Cs News Commentary v11 dataset from the WMT16 translation task. 2 We parse English sentences after tokenization to generate the dependency trees on the source side using SyntaxNet (Alberti et al., 2017) . 3 We tokenize Czech and German using the Moses tokenizer. 4 On the target side, we use byte-pair encodings (Sennrich et al., 2016) with 8,000 merge operations to obtain subwords. We transform the labelled dependency trees into their corresponding extended Levi graphs as described in Section 3.2. Table 1 shows the statistics of these four datasets. The AMR-to-text datasets contain about 16 K \u223c 36 K training instances. The NMT datasets are relatively large, consisting of around 200 K training instances. We tune model hyper-parameters using random layouts based on the results of the development set. We choose the number of DCGCN blocks (Block) from {1, 2, 3, 4}. We select the feature dimension d from {180, 240, 300, 360, 420}. We do not use pretrained embeddings. The encoder and the decoder share the training vocabulary. We adopt Adam (Kingma and Ba, 2015) with an initial learning rate of 0.0003 as the optimizer. The 2 http://www.statmt.org/wmt16/translationtask.html.", "cite_spans": [ { "start": 381, "end": 402, "text": "Konstas et al. (2017)", "ref_id": "BIBREF26" }, { "start": 712, "end": 713, "text": "2", "ref_id": null }, { "start": 828, "end": 850, "text": "(Alberti et al., 2017)", "ref_id": "BIBREF1" }, { "start": 853, "end": 854, "text": "3", "ref_id": null }, { "start": 911, "end": 912, "text": "4", "ref_id": null }, { "start": 960, "end": 983, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 1150, "end": 1157, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "3 https://github.com/tensorflow/models/tree/ master/research/syntaxnet. 4 https://github.com/moses-smt/mosesdecoder. We determine when to stop training based on the perplexity change in the development set. For decoding, we use beam search with beam size 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Through preliminary experiments, we find that the combinations (Block = 4, d = 360, Batch = 16) and (Block = 2, d = 360, Batch = 24) give best results on AMR and NMT tasks, respectively. 
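For reference, the settings reported in this subsection can be collected into a single configuration sketch; the field names are ours and purely illustrative.

```python
# Settings reported above, gathered in one place; field names are illustrative.
COMMON = dict(optimizer="Adam", learning_rate=3e-4, beam_size=10,
              pretrained_embeddings=False, shared_vocab=True)
AMR_CONFIG = dict(COMMON, dcgcn_blocks=4, feature_dim=360, batch_size=16)
NMT_CONFIG = dict(COMMON, dcgcn_blocks=2, feature_dim=360, batch_size=24,
                  target_bpe_merges=8000)
```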
Following previous work, we evaluate the results in terms of both BLEU (B) scores (Papineni et al., 2002) and sentence-level CHRF++ (C) scores (Popovic, 2017; Beck et al., 2018) . Particularly, we use case-insensitive BLEU scores for AMR and case sensitive BLEU scores for NMT. For ensemble models, we train five models with different random seeds and then use Sockeye (Felix et al., 2017) to perform default ensemble decoding.", "cite_spans": [ { "start": 269, "end": 292, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF35" }, { "start": 330, "end": 345, "text": "(Popovic, 2017;", "ref_id": "BIBREF37" }, { "start": 346, "end": 364, "text": "Beck et al., 2018)", "ref_id": "BIBREF5" }, { "start": 556, "end": 576, "text": "(Felix et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "We compare the performance of DCGCNs with the other three kinds of models: (1) sequence-tosequence (Seq2Seq) models, which use linearized graphs as inputs;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results on AMR-to-text Generation", "sec_num": "4.2" }, { "text": "(2) recurrent graph encoders (GGNN2Seq, GraphLSTM); (3) models trained with external resources. For convenience, we denote the LSTM-based Seq2Seq models of Konstas et al. (2017) and Beck et al. (2018) as Seq2SeqK and Seq2SeqB, respectively. GGNN2Seq (Beck et al., 2018) is the model that leverages GGNNs as graph encoders. Table 2 shows the results on AMR17. Our single model achieves 27.6 BLEU points, which is the new state-of-the-art result for single models. In particular, our single DCGCN model consistently outperforms Seq2Seq models by a significant margin when trained without external resources. For example, the single DCGCN model gains 5.9 more BLEU points than the single models of Seq2SeqB on AMR17. These results demonstrate the importance of explicitly capturing the graph structure in the encoder.", "cite_spans": [ { "start": 156, "end": 177, "text": "Konstas et al. (2017)", "ref_id": "BIBREF26" }, { "start": 182, "end": 200, "text": "Beck et al. (2018)", "ref_id": "BIBREF5" }, { "start": 250, "end": 269, "text": "(Beck et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 323, "end": 330, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Main Results on AMR-to-text Generation", "sec_num": "4.2" }, { "text": "In addition, our single DCGCN model obtains better results than previous ensemble models. For example, on AMR17, the single DCGCN model is 1 BLEU point higher than the ensemble model of Seq2SeqB. Our model requires substantially fewer parameters (e.g., the parameter size is only 3/5 and 1/9 of those in GGNN2Seq and Seq2SeqB, respectively). The ensemble approach based on combining five DCGCN models initialized with different random seeds achieves a BLEU score of 30.4 and a CHRF++ score of 59.6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results on AMR-to-text Generation", "sec_num": "4.2" }, { "text": "Under the same setting, our model also consistently outperforms graph encoders based on recurrent neural networks or gating mechanisms. For GGNN2Seq, our single model is 3.3 and 0.1 BLEU points higher than their single and ensemble models, respectively. We also have similar observations in terms of CHRF++ scores for sentence-level evaluations. DCGCN also outperforms GraphLSTM by 2.0 BLEU points in the fully supervised setting as shown in Table 3 . 
Note that GraphLSTM uses char-level neural representations and pretrained word embeddings, whereas our model solely relies on word-level representations with random initializations. This empirically shows that compared with recurrent graph encoders, DCGCNs can learn better representations for graphs.", "cite_spans": [], "ref_spans": [ { "start": 442, "end": 449, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Main Results on AMR-to-text Generation", "sec_num": "4.2" }, { "text": "Moreover, we compare our results with the state-of-the-art semi-supervised models on the AMR15 test set (Table 3) , including non-neural methods such as TSP (Song et al., 2016) , PBMT (Pourdamghani et al., 2016) , Tree2Str (Flanigan et al., 2016) , and SNRG (Song et al., 2017) . All these non-neural models train language models on the whole Gigaword corpus. Our ensemble model gives 28.2 BLEU points without external data, which is better than these other methods.", "cite_spans": [ { "start": 157, "end": 176, "text": "(Song et al., 2016)", "ref_id": "BIBREF42" }, { "start": 184, "end": 211, "text": "(Pourdamghani et al., 2016)", "ref_id": "BIBREF38" }, { "start": 223, "end": 246, "text": "(Flanigan et al., 2016)", "ref_id": "BIBREF11" }, { "start": 258, "end": 277, "text": "(Song et al., 2017)", "ref_id": "BIBREF41" } ], "ref_spans": [ { "start": 104, "end": 113, "text": "(Table 3)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Main Results on AMR-to-text Generation", "sec_num": "4.2" }, { "text": "Following Konstas et al. (2017) and Song et al. (2018) , we also evaluate our model using external Gigaword sentences as training data. We first use the additional data to pretrain the model, then fine tune it on the gold data. Using additional 0.1M data, the single DCGCN model achieves a BLEU score of 29.0, which is higher than Seq2SeqK (Konstas et al., 2017) and GraphLSTM (Song et al., 2018) trained with 0.2M additional data. When using the same amount of 0.2M data, the performance of DCGCN is 4.2 and 3.4 BLEU points higher than Seq2SeqK and GraphLSTM, respectively. The DCGCN model is able to achieve competitive BLEU points (33.2) by using 0.3M external data, while GraphLSTM achieves a score of 33.6 by using 2M data and Seq2SeqK achieves a score of 33.8 by using 20M data. These results show that our model is more effective in terms of using automatically generated AMR graphs. Using 0.3M additional data, our ensemble model achieves the new state-of-the-art result of 35.3 BLEU points. Table 4 shows the results for the English-German (En-De) and English-Czech (En-Cs) translation tasks. BoW+GCN, CNN+GCN, and BiRNN+GCN refer to utilizing the following encoders with a GCN layer on top respectively: 1) a bag-ofwords encoder, 2) a one-layer CNN, and 3) a bidirectional RNN. PB-SMT is the phrase-based statistical machine translation model using Moses (Koehn et al., 2007 with the best GCN-based model (BiRNN+GCN), our single DCGCN model surpasses it by 2.7 and 2.5 BLEU points on the En-De and En-Cs tasks, respectively. Our models consist of full GCN layers, removing the burden of using a recurrent encoder to extract non-local contextual information in the bottom layers. Compared with non-GCN models, our single DCGCN model is 2.2 and 1.9 BLEU points higher than the current state-of-the-art single model (GGNN2Seq) on the En-De and En-Cs translation tasks, respectively. 
In addition, our single model is comparable to the ensemble results of Seq2SeqB and GGNN2Seq, whereas the number of parameters of our models is only about 1/6 of theirs. Additionally, the ensemble DCGCN models achieve 20.5 and 13.1 BLEU points on the En-De and En-Cs tasks, respectively. Our ensemble results are significantly higher than those of the state-of-the-art syntax-based ensemble models reported by GGNN2Seq (En-De: 20.5 vs. 19.6; En-Cs: 13.1 vs. 11.7 in terms of BLEU).", "cite_spans": [ { "start": 10, "end": 31, "text": "Konstas et al. (2017)", "ref_id": "BIBREF26" }, { "start": 36, "end": 54, "text": "Song et al. (2018)", "ref_id": "BIBREF43" }, { "start": 340, "end": 362, "text": "(Konstas et al., 2017)", "ref_id": "BIBREF26" }, { "start": 377, "end": 396, "text": "(Song et al., 2018)", "ref_id": "BIBREF43" }, { "start": 1365, "end": 1384, "text": "(Koehn et al., 2007", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 1000, "end": 1007, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Main Results on AMR-to-text Generation", "sec_num": "4.2" }, { "text": "Layers in the Sub-block. Table 5 shows the effect of the number of layers of each subblock on the AMR15 development set. DenseNets (Huang et al., 2017) use two kinds of convolution filters: 1 \u00d7 1 and 3 \u00d7 3. Similar to DenseNets, we choose the values of n and m for layers from [1, 2, 3, 6] . We choose this value range by considering the scale of non-local nodes, the abstract information at different level, and the calculation efficiency. For brevity, we only show representative configurations. We first investigate DCGCN with one block. In general, the performance increases when we gradually enlarge n and m. For example, when n = 1 and m = 1, the BLEU score is 17.6; when n = 6 and m = 6, the BLEU score becomes 22.0. We observe that the three settings (n = 6, m = 3), (n = 3, m = 6), and (n = 6, m = 6) give similar results for both 1 DCGCN block and 2 DCGCN blocks. Because the first two settings contain fewer parameters than the third setting, it is reasonable to choose either (n = 6, m = 3) or (n = 3, m = 6).", "cite_spans": [ { "start": 131, "end": 151, "text": "(Huang et al., 2017)", "ref_id": "BIBREF21" }, { "start": 277, "end": 280, "text": "[1,", "ref_id": null }, { "start": 281, "end": 283, "text": "2,", "ref_id": null }, { "start": 284, "end": 286, "text": "3,", "ref_id": null }, { "start": 287, "end": 289, "text": "6]", "ref_id": null } ], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "For later experiments, we use (n = 6, m = 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "Comparisons with Baselines. The first block in Table 6 shows the performance of our two baseline models: multi-layer GCNs with residual connections (GCN+RC) and multi-layer GCNs with both residual connections and layer aggregations (GCN+RC+LA). In general, increasing the number of GCN layers from 2 to 9 boosts the model performance. However, when the layer number exceeds 10, the performance of both baseline models start to drop. 
For example, GCN+RC+LA (10) achieves a BLEU score of 21.2, which is worse than GCN+RC+LA (9).", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "In preliminary experiments, we cannot manage to train very deep GCN+RC and GCN+RC+LA models. In contrast, our DCGCN models can be trained using a large number of layers. For example, DCGCN4 contains 36 layers. When we increase the DCGCN blocks from 1 to 4, the model performance continues increasing on the AMR15 development set. We therefore choose DCGCN4 for the AMR experiments. Using a similar method, DCGCN2 is selected for the NMT tasks. When the layer numbers are 9, DCGCN1 is better than GCN+RC in term of B/C scores (21.7/51.5 vs. 21.1/50.5). GCN+RC+LA (9) is sightly better than DCGCN1. However, when we set the number to 18, GCN+RC+LA achieves a BLEU score of 19.4, which is significantly worse than the BLEU score obtained by DCGCN2 (23.3). We also try GCN+RC+LA (27), but it does not converge. In conclusion, these results show the robustness and effectiveness of our DCGCN models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "Performance vs. Parameter Budget. We also evaluate the performance of DCGCN model against different number of parameters on the AMR generation task. Results are shown in Figure 6 . Specifically, we try four parameter budgets, including 11.8M, 14.0M, 16.2M, and 18.4M. These numbers correspond to the model size (in terms of number of parameters) of DCGCN1, DCGCN2, DCGCN3, and DCGCN4, respectively. For each budget, we vary both the depth of GCN models and the hidden vector dimensions of each node in GCNs in order to exhaust the entire budget. For example, GCN (2) \u2212 512, GCN (3) \u2212 426, GCN (4)\u2212372, and GCN (5)\u2212336 contain about 11.8M parameters, where GCN (i) \u2212 d indicates a GCN model with i layers and the hidden size for each node is d. We compare DCGCN1 with these four models. DCGCN1 gives 22.9 BLEU points. For the GCN models, the best result is obtained by GCN (5) \u2212 336, which falls behind DCGCN1 by 2.0 BLEU points. We compare DCGCN2, DCGCN3, and DCGCN4 with their equal-sized GCN models in a similar way. The results show that DCGCN consistently outperforms GCN under the same parameter budget. When the parameter budget becomes larger, we can observe that the performance difference becomes more prominent.", "cite_spans": [ { "start": 559, "end": 562, "text": "GCN", "ref_id": null } ], "ref_spans": [ { "start": 170, "end": 178, "text": "Figure 6", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "In particular, the BLEU margins between DCGCN models and their best GCN models are 2.0, 2.7, 2.7, and 3.4, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "Performance vs. Layers. We compare DCGCN models with different layers under the same parameter budget. Table 7 shows the results. For example, when both DCGCN1 and DCGCN2 are limited to 10.9M parameters, DCGCN2 obtains 22.2 BLEU points, which is higher than DCGCN1 (20.9). Similarly, when DCGCN3 and DCGCN4 contain 18.6M and 18.4M parameters, DCGCN4 outperforms DCGCN3 by 1 BLEU point with a slightly smaller model. 
In general, we found when the parameter budget is the same, deeper DCGCN models can obtain better results than the shallower ones.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 110, "text": "Table 7", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "Level of Density. Table 8 shows the ablation study of the level of density of our model. We use DCGCNs with 4 dense blocks as the full model. Then we remove dense connections gradually from the last block to the first block. In general, the performance of the model drops substantially as we remove more dense connections until it cannot converge without dense connections. The full model gives 25.5 BLEU points on the AMR15 dev set. After removing the dense connections in the last block, the BLEU score becomes 24.8. Without using the dense connections in the last two blocks, the score drops to 23.8. Furthermore, excluding the dense connections in the last three blocks only gives 23.2 BLEU points. Although these four 2018, we conduct a further ablation study for modules used in the graph encoder and LSTM decoder on the AMR15 dev set, including linear combination, global node, direction aggregation, graph attention mechanism, and coverage mechanism using the 4-block models by always keeping the dense connections. Table 9 shows the results. For the encoder, we find that the linear combination and the global node have more contributions in terms of B/C Table 9 : Ablation study for modules used in the graph encoder and the LSTM decoder.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 8", "ref_id": "TABREF15" }, { "start": 1024, "end": 1031, "text": "Table 9", "ref_id": null }, { "start": 1164, "end": 1171, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "scores. The results drop by 2/2.2 and 1.3/1.2 points, respectively, after removing them. Without these two components, our model gives a BLEU score of 22.6, which is still better than the best GCN+RC model (21.1) and the best GCN+RC+LA model (22.1). Adding either the global node or the linear combination improves the baseline models with only dense connections. This suggests that enriching input graphs with the global node and including the linear combination can facilitate GCNs to learn better information aggregations, producing more expressive graph representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "Results also show the linear combination is more effective than the global node. Considering them together further enhances the model performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "After removing the graph attention module, our model gives 24.9 BLEU points. Similarly, excluding the direction aggregation module leads to a performance drop to 24.6 BLEU points. The coverage mechanism is also effective in our models. Without the coverage mechanism, the result drops by 1.7/2.4 points for B/C scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Experiments", "sec_num": "4.4" }, { "text": "Graph Size. Following Bastings et al. (2017) , we show in Figure 7 the CHRF++ score variations according to the graph size |G| on the AMR2015 development set, where |G| refers to the number of nodes in the extended Levi graph. 
We bin the graph size into five classes (\u2264 30, (30, 40] , (40, 50] , (50, 60] , > 60). We average the sentencelevel CHRF++ scores of the sentences in the same bin to plot Figure 7 . For small graphs (i.e., |G| \u2264 30), DCGCN obtains similar results as the baselines. For large graphs, DCGCN significantly outperforms the two baselines. In general, as the graph size increases, the gap between DCGCN and the two baselines becomes larger. In addition, we can also notice that the margin between GCN and GCN+LA is quite stable, while the margin between DCGCN and GCN+LA varies according to the graph size. The trend for BLEU scores is similar to CHRF++ scores. This suggests that DCGCN can perform better for larger graphs as its deeper architecture can capture the longdistance dependencies. Dense connections facilitate information propagation in large graphs, while", "cite_spans": [ { "start": 22, "end": 44, "text": "Bastings et al. (2017)", "ref_id": "BIBREF4" }, { "start": 267, "end": 282, "text": "(\u2264 30, (30, 40]", "ref_id": null }, { "start": 285, "end": 289, "text": "(40,", "ref_id": null }, { "start": 290, "end": 293, "text": "50]", "ref_id": null }, { "start": 296, "end": 300, "text": "(50,", "ref_id": null }, { "start": 301, "end": 304, "text": "60]", "ref_id": null } ], "ref_spans": [ { "start": 58, "end": 66, "text": "Figure 7", "ref_id": "FIGREF3" }, { "start": 398, "end": 406, "text": "Figure 7", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Analysis and Discussion", "sec_num": "4.5" }, { "text": "Reference: u.s. intelligence officials stated that north korean officials are continuing global trade in technology for weapons of mass destruction including instructions for making advanced missiles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and Discussion", "sec_num": "4.5" }, { "text": "GCN+RC: a u.s. intelligence official stated that north korea officials continued the global trade for weapons of mass destruction by making advanced missiles to make advanced missiles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and Discussion", "sec_num": "4.5" }, { "text": "GCN+RC+LA: a u.s. intelligence official stated that north korea officials continued global trade with weapons of mass destruction including making advanced missiles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and Discussion", "sec_num": "4.5" }, { "text": "DCGCN: a u.s. intelligence official stated that north korea officials continue global trade on technology for weapons of mass destruction including instructions to make advanced missiles. shallow GCNs might struggle to capture such dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and Discussion", "sec_num": "4.5" }, { "text": "Example Output. Table 10 shows example outputs from three models for the AMR-to-text task, together with the corresponding AMR graph as well as the text reference. The word ''technology'' in the reference acts as a link between ''global trade'' and ''weapons of mass destruction'', offering the background knowledge to help understand the context. The word ''instructions'' also plays a crucial role in the generated sentence -without the word the sentence will have a significantly different meaning. Both GCN+RC and GCN+RC+LA fail to successfully generate these two important words. The output from GCN+RC does not even appear to be grammatically correct. In contrast, DCGCN manages to generate both words. 
We believe this is because DCGCN is able to learn richer semantic information by capturing complex long dependencies. GCN+RC+LA does generate an output that looks similar to the reference at the token level. However, the conveyed semantic information in the generated sentence largely differs from that of the reference. DCGCNs do not have this problem.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Analysis and Discussion", "sec_num": "4.5" }, { "text": "Our work builds on a rich line of recent efforts on graph-to-sequence models, graph convolutional networks, and densely connected convolutional networks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Graph-to-Sequence Learning. Early research efforts for graph-to-sequence learning are based on statistical methods. Lu et al. (2009) present a language generation model using the tree-structured meaning representation based on tree conditional random fields. Lu and Ng (2011) propose a model for language generation from lambda calculus expressions that can be represented as forest structures. Lapata (2012, 2013) leverage hypergraphs for concept-to-text generation. Flanigan et al. (2016) transform a given AMR graph into a spanning tree, before translating it into a sentence using a tree-to-string transducer. Pourdamghani et al. (2016) adopt a phrase-based model for machine translation (Koehn et al., 2003) based on a linearized AMR graph. Song et al. (2017) leverage a synchronous node replacement grammar. Konstas et al. (2017) also linearize the input graph and feed it to the Seq2Seq model (Sutskever et al., 2014) . Sequence-based neural networks may lose structural information from the original graph because they require linearization of the input graph. Recent research efforts consider developing encoders with graph neural networks. Beck et al. (2018) use GGNNs as the encoder and introduce the Levi graph that allows nodes and edges to have their own hidden representations. Song et al. (2018) propose the graph-state LSTM to directly encode graph-level semantics. In order to capture non-local information, the encoder performs graph state transition by information exchange between connected nodes. Their work belongs to the family of RNNs. Our graph encoder is built based on GCNs. Recurrent graph neural networks Song et al., 2018) use gated operations to update node states whereas graph convolutional networks use linear transformation. The contrast between our model and theirs is reminiscent of the contrast between CNN and RNN.", "cite_spans": [ { "start": 116, "end": 132, "text": "Lu et al. (2009)", "ref_id": "BIBREF32" }, { "start": 259, "end": 275, "text": "Lu and Ng (2011)", "ref_id": "BIBREF31" }, { "start": 395, "end": 414, "text": "Lapata (2012, 2013)", "ref_id": null }, { "start": 468, "end": 490, "text": "Flanigan et al. (2016)", "ref_id": "BIBREF11" }, { "start": 614, "end": 640, "text": "Pourdamghani et al. (2016)", "ref_id": "BIBREF38" }, { "start": 692, "end": 712, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF25" }, { "start": 746, "end": 764, "text": "Song et al. (2017)", "ref_id": "BIBREF41" }, { "start": 814, "end": 835, "text": "Konstas et al. (2017)", "ref_id": "BIBREF26" }, { "start": 900, "end": 924, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF45" }, { "start": 1150, "end": 1168, "text": "Beck et al. 
(2018)", "ref_id": "BIBREF5" }, { "start": 1293, "end": 1311, "text": "Song et al. (2018)", "ref_id": "BIBREF43" }, { "start": 1635, "end": 1653, "text": "Song et al., 2018)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Closest to our work, Bastings et al. (2017) stack GCNs upon a RNN or CNN encoder because 2-layer GCNs may not be able to capture nonlocal information, especially when the graph is large. Our graph encoder solely relies on the DCGCN model, whose deep network structure encodes richer local and non-local information for learning better graph representations.", "cite_spans": [ { "start": 21, "end": 43, "text": "Bastings et al. (2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Densely Connected Convolutional Networks. Intuitively, neural networks should be able to learn rich representations by stacking a large number of layers. However, empirical results often do not support such an intuition-useful information captured in earlier layers may get lost after passing through subsequent layers. Many recent efforts focus on resolving such an issue. Highway Networks (Srivastava et al., 2015) use bypassing paths along with gating units to train networks. ResNets (He et al., 2016) , in which identity mappings are used as bypassing paths, have achieved impressive performance on various tasks. DenseNets (Huang et al., 2017) refine this insight and propose a dense connectivity strategy, which connects all layers directly with each other to ensure maximum information flow between layers.", "cite_spans": [ { "start": 488, "end": 505, "text": "(He et al., 2016)", "ref_id": "BIBREF18" }, { "start": 629, "end": 649, "text": "(Huang et al., 2017)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Graph Convolutional Networks. Early efforts that attempt to extend neural networks to deal with arbitrary structured graphs are introduced by Gori et al. (2005) and Scarselli et al. (2009) , where the states of nodes are updated based on the states of their neighbors. Bruna (2014) then applies the convolution operation on graph Laplacians to construct efficient architectures in the spectral domain. Subsequent efforts improve its computational efficiency with local spectral convolution techniques (Henaff et al., 2015; Defferrard et al., 2016; Kipf and Welling, 2017) .", "cite_spans": [ { "start": 142, "end": 160, "text": "Gori et al. (2005)", "ref_id": "BIBREF14" }, { "start": 165, "end": 188, "text": "Scarselli et al. (2009)", "ref_id": "BIBREF39" }, { "start": 269, "end": 281, "text": "Bruna (2014)", "ref_id": "BIBREF6" }, { "start": 501, "end": 522, "text": "(Henaff et al., 2015;", "ref_id": "BIBREF19" }, { "start": 523, "end": 547, "text": "Defferrard et al., 2016;", "ref_id": "BIBREF8" }, { "start": 548, "end": 571, "text": "Kipf and Welling, 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Our approach is closely related to GCNs (Kipf and Welling, 2017), which restrict the filters to operate on a first-order neighborhood around each node. 
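To make this propagation scheme and the dense connectivity strategy discussed above concrete, the following is a minimal Python sketch of a first-order GCN layer and of a densely connected block in which each layer receives the concatenation of the block input and all preceding layer outputs. It is an illustration under simplified assumptions (plain NumPy, a single undirected adjacency matrix with self-loops, and no direction aggregation, linear combination, or global node) rather than our MXNet/Sockeye implementation.

```python
import numpy as np

def gcn_layer(h, adj, w):
    # One first-order GCN layer: aggregate immediate neighbours
    # (with self-loops, row-normalised), then apply a linear map and ReLU.
    a = adj + np.eye(adj.shape[0])
    a = a / a.sum(axis=1, keepdims=True)
    return np.maximum(0.0, a @ h @ w)

def dense_gcn_block(h, adj, weights):
    # DenseNet-style block: each layer takes the concatenation of the block
    # input and the outputs of all preceding layers as its input.
    outputs = [h]
    for w in weights:
        layer_input = np.concatenate(outputs, axis=-1)  # dense connections
        outputs.append(gcn_layer(layer_input, adj, w))
    # The block output concatenates the features produced by every layer.
    return np.concatenate(outputs[1:], axis=-1)

# Toy usage: 4 nodes, 8-dimensional features, a 3-layer dense block.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
h0 = rng.normal(size=(4, 8))
weights = [rng.normal(size=(d, 8)) for d in (8, 16, 24)]  # input widths grow
print(dense_gcn_block(h0, adj, weights).shape)  # (4, 24)
```

In the full model, such blocks are further divided into sub-blocks and combined with the direction aggregation, linear combination, global node, and attention modules analysed in the ablation studies above.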
Recent improvements and extensions of GCNs include using additional aggregation methods such as vertex attention (Velickovic et al., 2018) or pooling mechanism (Hamilton et al., 2017) to better summarize neighborhood states.", "cite_spans": [ { "start": 265, "end": 290, "text": "(Velickovic et al., 2018)", "ref_id": "BIBREF48" }, { "start": 312, "end": 335, "text": "(Hamilton et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "However, the best performance of GCNs is achieved with a 2-layer model, while deeper models perform worse though they can potentially have access to more non-local information. show that this issue is due to the over-smoothed output representations that impede distinguishing nodes from different clusters. Recent attempts that try to address this issue includes the use of layer-aggregation functions (Xu et al., 2018) , which combine learned features from all layers, and the use of co-training and self-training mechanisms that encourage exploration on the entire graph .", "cite_spans": [ { "start": 402, "end": 419, "text": "(Xu et al., 2018)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We introduce the novel densely connected graph convolutional networks to learn structural graph representations. Experimental results show that DCGCNs can outperform state-of-the-art models in two tasks: AMR-to-text generation and syntaxbased neural machine translation. Unlike previous designs of GCNs, DCGCNs scale naturally to significantly more layers without suffering from performance degradation and optimization difficulties, thanks to the introduced dense connectivity mechanism. Such a deep architecture allows the encoder to better capture the rich structural information of a graph, especially when it is large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "There are multiple venues for future work. One natural question we would like to ask is how to make use of the proposed framework to perform improved graph representation learning for various graph related tasks (Xu et al., 2018) . On the other hand, we would also like to investigate how other NLP applications such as relation extraction (Zhang et al., 2018b) and semantic role labeling can potentially benefit from our proposed approach.", "cite_spans": [ { "start": 212, "end": 229, "text": "(Xu et al., 2018)", "ref_id": "BIBREF49" }, { "start": 340, "end": 361, "text": "(Zhang et al., 2018b)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Transactions of the Association for Computational Linguistics, vol. 7, pp. 297-312, 2019. Action Editor: Stefan Reizler. Submission batch: 11/2018; Revision batch: 2/2019; Published 6/2019. c 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our implementation is based on MXNET(Chen et al., 2015) and the Sockeye(Felix et al., 2017) toolkit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers and our Action Editor Stefan Riezler for their comments and suggestions on this work. 
We would also like to thank Daniel Beck, Linfeng Song, Joost Bastings, Zuozhu Liu, and Yiluan Guo for their helpful suggestions. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156. This work is also partially supported by SUTD project PIE-SGP-AI-2018-01.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "degree (m / mass)))) 000000 :mod (g / globe)", "authors": [], "year": null, "venue": "korea\")))))", "volume": "000000", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "s / state-01 00 :ARG0 (p / person 0000 :ARG0-of (h / have-org-role-91 000000 :ARG1 (i / intelligence 00000000 :mod (c / country :wiki \"united states\" 0000000000 :name (n / name :op1 \"u.s.\"))) 000000 :ARG2 (o / official))) 00 :ARG1 (c2 / continue-01 0000 :ARG0 (p2 / person 000000 :ARG0-of (h2 / have-org-role-91 00000000 :ARG2 (o2 / official 0000000000 :mod (c3 / country :wiki \"north korea\" 000000000000 :name (n2 / name :op1 \"north\" :op2 000000000000 \"korea\"))))) 0000 :ARG1 (t / trade-01 000000 :ARG1 (t2 / technology 00000000 :purpose (w / weapon 0000000000 :ARG2-of (d / destroy-01 000000000000 :degree (m / mass)))) 000000 :mod (g / globe)) 0000 :ARG2-of (i2 / include-01 000000 :ARG1 (i3 / instruct-01 00000000 :ARG3 (m2 / make-01 00000000000 :ARG1 (m3 / missile 0000000000000 :ARG1-of (a / advanced-02))", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Syntaxnet models for the conll 2017 shared task", "authors": [ { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Andor", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Bogatyy", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gillick", "suffix": "" }, { "first": "Lingpeng", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Alberti, Daniel Andor, Ivan Bogatyy, Michael Collins, Daniel Gillick, Lingpeng Kong, Terry Koo, Ji Ma, Mark Omernick, Slav Petrov, Chayut Thanapirom, Zora Tung, and David Weiss. 2017. Syntaxnet models for the conll 2017 shared task. arXiv preprint arxiv:1703.04929.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
In Proceedings of ICLR.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Abstract meaning representation for sembanking", "authors": [ { "first": "Laura", "middle": [], "last": "Banarescu", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Bonial", "suffix": "" }, { "first": "Shu", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Madalina", "middle": [], "last": "Georgescu", "suffix": "" }, { "first": "Kira", "middle": [], "last": "Griffitt", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" } ], "year": 2013, "venue": "Proceedings of LAW@ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of LAW@ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Graph convolutional encoders for syntax-aware neural machine translation", "authors": [ { "first": "Joost", "middle": [], "last": "Bastings", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Khalil", "middle": [], "last": "Sima", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of EMNLP.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Graph-to-sequence learning using gated graph neural networks", "authors": [ { "first": "Daniel", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of ACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Spectral networks and deep locally connected networks on graphs", "authors": [ { "first": "Joan", "middle": [], "last": "Bruna", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Bruna. 2014. Spectral networks and deep locally connected networks on graphs. 
In Proceedings of ICLR.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems", "authors": [ { "first": "Tianqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yutian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Naiyan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Minjie", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tianjun", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chiyuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arxiv:1512.01274.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "authors": [ { "first": "Micha\u00ebl", "middle": [], "last": "Defferrard", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Bresson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Vandergheynst", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of NIPS.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Finding structure in time", "authors": [ { "first": "Jeffrey", "middle": [ "L" ], "last": "Elman", "suffix": "" } ], "year": 1990, "venue": "Cognitive Science", "volume": "14", "issue": "2", "pages": "179--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sockeye: A toolkit for neural machine translation", "authors": [ { "first": "Hieber", "middle": [], "last": "Felix", "suffix": "" }, { "first": "Domhan", "middle": [], "last": "Tobias", "suffix": "" }, { "first": "Denkowski", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Vilar", "middle": [], "last": "David", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hieber Felix, Domhan Tobias, Denkowski Michael, Vilar David, Sokolov Artem, Clifton Ann, and Post Matt. 2017. Sockeye: A toolkit for neural machine translation. 
arXiv preprint arxiv:1712.05690.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generation from abstract meaning representation using tree transducers", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Flanigan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime G. Carbonell. 2016. Generation from abstract meaning representation using tree transducers. In Proceedings of NAACL-HLT.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Convolutional sequence to sequence learning", "authors": [ { "first": "Jonas", "middle": [], "last": "Gehring", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Yarats", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Con- volutional sequence to sequence learning. In Proceedings of ICML.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "authors": [ { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Donahue", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Jitendra", "middle": [], "last": "Malik", "suffix": "" } ], "year": 2014, "venue": "Proceedings of CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of CVPR.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A new model for learning in graph domains", "authors": [ { "first": "Michele", "middle": [], "last": "Gori", "suffix": "" }, { "first": "Gabriele", "middle": [], "last": "Monfardini", "suffix": "" }, { "first": "Franco", "middle": [], "last": "Scarselli", "suffix": "" } ], "year": 2005, "venue": "Proceedings of IJCNN", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Gori, Gabriele Monfardini, and Franco Scarselli. 2005. A new model for learning in graph domains. In Proceedings of IJCNN.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Handbook of Graph Theory", "authors": [ { "first": "Jonathan", "middle": [ "L" ], "last": "Gross", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Yellen", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan L. Gross, Jay Yellen, and Ping Zhang. 2013. Handbook of Graph Theory, Second Edition. 
Chapman & Hall/CRC.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Better transitionbased AMR parsing with a refined search space", "authors": [ { "first": "Zhijiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhijiang Guo and Wei Lu. 2018. Better transition- based AMR parsing with a refined search space. In Proceedings of EMNLP.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Inductive representation learning on large graphs", "authors": [ { "first": "William", "middle": [ "L" ], "last": "Hamilton", "suffix": "" }, { "first": "Rex", "middle": [], "last": "Ying", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2017, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William L. Hamilton, Rex Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Proceedings of NIPS.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of CVPR.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deep convolutional networks on graph-structured data", "authors": [ { "first": "Mikael", "middle": [], "last": "Henaff", "suffix": "" }, { "first": "Joan", "middle": [], "last": "Bruna", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikael Henaff, Joan Bruna, and Yann LeCun. 2015. Deep convolutional networks on graph-structured data. arXiv preprint arxiv:1506.05163.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Densely connected convolutional networks", "authors": [ { "first": "Gao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zhuang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" } ], "year": 2017, "venue": "Proceedings of CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. 2017. 
Densely connected convolutional networks. In Proceedings of CVPR.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semisupervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolu- tional networks. In Proceedings of ICLR.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL(Demo)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL(Demo).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based transla- tion. 
In Proceedings of NAACL-HLT.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Neural amr: Sequence-to-sequence models for parsing and generation", "authors": [ { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Srinivasan", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke S. Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and generation. In Proceedings of ACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Unsupervised concept-to-text generation with hypergraphs", "authors": [ { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2012, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ioannis Konstas and Mirella Lapata. 2012. Unsupervised concept-to-text generation with hypergraphs. In Proceedings of NAACL-HLT.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Inducing document plans for concept-to-text generation", "authors": [ { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ioannis Konstas and Mirella Lapata. 2013. Inducing document plans for concept-to-text generation. In Proceedings of EMNLP.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "authors": [ { "first": "Qimai", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhichao", "middle": [], "last": "Han", "suffix": "" }, { "first": "Xiao-Ming", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qimai Li, Zhichao Han, and Xiao-Ming Wu. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of AAAI.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Gated graph sequence neural networks", "authors": [ { "first": "Yujia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Tarlow", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brockschmidt", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neural networks. 
In Proceedings of ICLR.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A probabilistic forest-to-string model for language generation from typed lambda calculus expressions", "authors": [ { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Lu and Hwee Tou Ng. 2011. A probabilistic forest-to-string model for language generation from typed lambda calculus expressions. In Proceedings of EMNLP.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Natural language generation with tree conditional random fields", "authors": [ { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Wee Sun", "middle": [], "last": "Hwee Tou Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Lu, Hwee Tou Ng, and Wee Sun Lee. 2009. Natural language generation with tree conditional random fields. In Proceedings of EMNLP.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Encoding sentences with graph convolutional networks for semantic role labeling", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of EMNLP.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Rectified linear units improve restricted boltzmann machines", "authors": [ { "first": "Vinod", "middle": [], "last": "Nair", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltz- mann machines. In Proceedings of ICML.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. 
In Proceedings of ACL.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In Proceed- ings of NAACL-HLT.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "chrf++: Words helping character n-grams", "authors": [ { "first": "Maja", "middle": [], "last": "Popovic", "suffix": "" } ], "year": 2017, "venue": "Proceedings of WMT@ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maja Popovic. 2017. chrf++: Words helping character n-grams. In Proceedings of WMT@ACL.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Generating english from abstract meaning representations", "authors": [ { "first": "Nima", "middle": [], "last": "Pourdamghani", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" } ], "year": 2016, "venue": "Proceedings of INLG", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nima Pourdamghani, Kevin Knight, and Ulf Hermjakob. 2016. Generating english from abstract meaning representations. In Proceedings of INLG.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The graph neural network model", "authors": [ { "first": "Franco", "middle": [], "last": "Scarselli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Gori", "suffix": "" }, { "first": "Ah", "middle": [], "last": "Chung Tsoi", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Hagenbuchner", "suffix": "" }, { "first": "Gabriele", "middle": [], "last": "Monfardini", "suffix": "" } ], "year": 2009, "venue": "IEEE Transactions on Neural Networks", "volume": "20", "issue": "1", "pages": "61--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. 
In Proceedings of ACL.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "AMRto-text generation with synchronous node replacement grammar", "authors": [ { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xiaochang", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linfeng Song, Xiaochang Peng, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2017. AMR- to-text generation with synchronous node replacement grammar. In Proceedings of ACL.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "AMRto-text generation as a traveling salesman problem", "authors": [ { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaochang", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linfeng Song, Yue Zhang, Xiaochang Peng, Zhiguo Wang, and Daniel Gildea. 2016. AMR- to-text generation as a traveling salesman problem. In Proceedings of EMNLP.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "A graph-tosequence model for AMR-to-text generation", "authors": [ { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to- sequence model for AMR-to-text generation. In Proceedings of ACL.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Training very deep networks", "authors": [ { "first": "Klaus", "middle": [], "last": "Rupesh Kumar Srivastava", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Greff", "suffix": "" }, { "first": "", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rupesh Kumar Srivastava, Klaus Greff, and J\u00fcrgen Schmidhuber. 2015. Training very deep networks. In Proceedings of NIPS.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. 
In Proceedings of NIPS.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Modeling coverage for neural machine translation", "authors": [ { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Graph attention networks", "authors": [ { "first": "Petar", "middle": [], "last": "Velickovic", "suffix": "" }, { "first": "Guillem", "middle": [], "last": "Cucurull", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Casanova", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Li\u00f2", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of ICLR.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Representation learning on graphs with jumping knowledge networks", "authors": [ { "first": "Keyulu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chengtao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yonglong", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Tomohiro", "middle": [], "last": "Sonobe", "suffix": "" }, { "first": "Ken", "middle": [], "last": "Ichi Kawarabayashi", "suffix": "" }, { "first": "Stefanie", "middle": [], "last": "Jegelka", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken ichi Kawarabayashi, and Stefanie Jegelka. 2018. Representation learning on graphs with jumping knowledge networks. 
In Proceedings of ICML.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Sentence-state LSTM for text representation", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang, Qi Liu, and Linfeng Song. 2018a. Sentence-state LSTM for text representation. In Proceedings of ACL.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Graph convolution over pruned dependency trees improves relation extraction", "authors": [ { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018b. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of EMNLP.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "An AMR graph (top) and its corresponding extended Levi graph (bottom). The extended Levi graph contains an additional global node and four different type of edges.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "A dependency tree and its extended Levi graph.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "Comparison of DCGCN and GCN over different number of parameters. a-b means the model has a layers (a blocks for DCGCN) and the hidden size is b (e.g., 5-336 means a 5-layer GCN with the hidden size 336).", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "CHRF++ scores with respect to the input graph size for three models.", "type_str": "figure" }, "TABREF1": { "num": null, "html": null, "text": "The number of sentences in four datasets.", "type_str": "table", "content": "" }, "TABREF2": { "num": null, "html": null, "text": "Seq2SeqB(Beck et al., 2018) S 28,4 M 21.7 49.1 GGNN2Seq(Beck et al., 2018) S 28.3M 23.3 50.4 Seq2SeqB(Beck et al., 2018) E 142M 26.6 52.5 GGNN2Seq(Beck et al., 2018) E 141M 27.5 53.5", "type_str": "table", "content": "
Model          T  #P      B     C
DCGCN (ours)   S  18.5M   27.6  57.3
DCGCN (ours)   E  92.5M   30.4  59.6
" }, "TABREF3": { "num": null, "html": null, "text": "Main results on AMR17. #P shows the model size in terms of parameters; ''S'' and ''E'' denote single and ensemble models, respectively.", "type_str": "table", "content": "" }, "TABREF5": { "num": null, "html": null, "text": "", "type_str": "table", "content": "
" }, "TABREF7": { "num": null, "html": null, "text": "Main results on English-German and English-Czech datasets.", "type_str": "table", "content": "
" }, "TABREF9": { "num": null, "html": null, "text": "The effect of the number of layers inside DCGCN sub-blocks on the AMR15 development set.", "type_str": "table", "content": "
" }, "TABREF11": { "num": null, "html": null, "text": "", "type_str": "table", "content": "
Comparisons with baselines. +RC denotes GCNs with residual connections. +RC+LA refers to GCNs with both residual connections and layer aggregations. DCGCNi represents our model with i blocks, containing i \u00d7 (n + m) layers. The number of layers for each model is shown in parentheses.
" }, "TABREF13": { "num": null, "html": null, "text": "Comparisons of different DCGCN models under almost the same parameter budget.", "type_str": "table", "content": "
models have the same number of layers, dense connections allow the model to achieve much better performance. If all the dense connections are removed, the model does not converge at all. These results indicate that dense connections play a significant role in our model.
Ablation Study for Encoder and Decoder. Following Song et al.
" }, "TABREF15": { "num": null, "html": null, "text": "Ablation study for density of connections on the dev set of AMR15. -{i} dense block denotes removing the dense connections in the i-th block.", "type_str": "table", "content": "
Model                               B     C
DCGCN4                              25.5  55.4
Encoder Modules
-Linear Combination                 23.7  53.2
-Global Node                        24.2  54.6
-Direction Aggregation              24.6  54.6
-Graph Attention                    24.9  54.7
-Global Node & Linear Combination   22.9  52.4
Decoder Modules
-Coverage Mechanism                 23.8  53.0
" }, "TABREF16": { "num": null, "html": null, "text": "Example outputs.", "type_str": "table", "content": "" } } } }