{ "paper_id": "Q19-1034", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:09:33.237039Z" }, "title": "Graph Convolutional Network with Sequential Attention for Goal-Oriented Dialogue Systems", "authors": [ { "first": "Suman", "middle": [], "last": "Banerjee", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Madras", "location": { "country": "India" } }, "email": "" }, { "first": "Mitesh", "middle": [ "M" ], "last": "Khapra", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Madras", "location": { "country": "India" } }, "email": "miteshk@cse.iitm.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Domain-specific goal-oriented dialogue systems typically require modeling three types of inputs, namely, (i) the knowledge-base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances, and (iii) the current utterance for which the response needs to be generated. While modeling these inputs, current state-of-the-art models such as Mem2Seq typically ignore the rich structure inherent in the knowledge graph and the sentences in the conversation context. Inspired by the recent success of structure-aware Graph Convolutional Networks (GCNs) for various NLP tasks such as machine translation, semantic role labeling, and document dating, we propose a memoryaugmented GCN for goal-oriented dialogues. Our model exploits (i) the entity relation graph in a knowledge-base and (ii) the dependency graph associated with an utterance to compute richer representations for words and entities. Further, we take cognizance of the fact that in certain situations, such as when the conversation is in a code-mixed language, dependency parsers may not be available. We show that in such situations we could use the global word co-occurrence graph to enrich the representations of utterances. We experiment with four datasets: (i) the modified DSTC2 dataset, (ii) recently released code-mixed versions of DSTC2 dataset in four languages, (iii) Wizard-of-Oz style CAM676 dataset, and (iv) Wizard-of-Oz style MultiWOZ dataset. On all four datasets our method outperforms existing methods, on a wide range of evaluation metrics.", "pdf_parse": { "paper_id": "Q19-1034", "_pdf_hash": "", "abstract": [ { "text": "Domain-specific goal-oriented dialogue systems typically require modeling three types of inputs, namely, (i) the knowledge-base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances, and (iii) the current utterance for which the response needs to be generated. While modeling these inputs, current state-of-the-art models such as Mem2Seq typically ignore the rich structure inherent in the knowledge graph and the sentences in the conversation context. Inspired by the recent success of structure-aware Graph Convolutional Networks (GCNs) for various NLP tasks such as machine translation, semantic role labeling, and document dating, we propose a memoryaugmented GCN for goal-oriented dialogues. Our model exploits (i) the entity relation graph in a knowledge-base and (ii) the dependency graph associated with an utterance to compute richer representations for words and entities. Further, we take cognizance of the fact that in certain situations, such as when the conversation is in a code-mixed language, dependency parsers may not be available. 
We show that in such situations we could use the global word co-occurrence graph to enrich the representations of utterances. We experiment with four datasets: (i) the modified DSTC2 dataset, (ii) recently released code-mixed versions of the DSTC2 dataset in four languages, (iii) the Wizard-of-Oz style Cam676 dataset, and (iv) the Wizard-of-Oz style MultiWOZ dataset. On all four datasets our method outperforms existing methods on a wide range of evaluation metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Goal-oriented dialogue systems that can assist humans in various day-to-day activities have widespread applications in several domains such as e-commerce, entertainment, and healthcare. For example, such systems can help humans in scheduling medical appointments, reserving restaurants, or booking tickets. From a modeling perspective, one clear advantage of dealing with domain-specific goal-oriented dialogues is that the vocabulary is typically limited, the utterances largely follow a fixed set of templates, and there is an associated domain knowledge that can be exploited. More specifically, there is some structure associated with the utterances as well as the knowledge base (KB). More formally, the task here is to generate the next response given (i) the previous utterances in the conversation history, (ii) the current user utterance (known as the query), and (iii) the entities and their relationships in the associated knowledge base. Current state-of-the-art methods (Seo et al., 2017; Madotto et al., 2018) typically use variants of Recurrent Neural Networks (RNNs) (Elman, 1990) to encode the history and current utterance, or an external memory network (Sukhbaatar et al., 2015) to encode them along with the entities in the knowledge base. The encodings of the utterances and memory elements are then suitably combined using an attention network and fed to the decoder to generate the response, one word at a time. However, these methods do not exploit the structure in the knowledge base as defined by entity-entity relations and the structure in the utterances as defined by a dependency parse.
Such structural information can be exploited to improve the performance of the system, as demonstrated by recent works on syntax-aware neural machine translation (Eriguchi et al., 2016; Bastings et al., 2017; Chen et al., 2017), semantic role labeling, and document dating (Vashishth et al., 2018), which use Graph Convolutional Networks (GCNs) (Defferrard et al., 2016; Duvenaud et al., 2015; Kipf and Welling, 2017) to exploit sentence structure.", "cite_spans": [ { "start": 992, "end": 1010, "text": "(Seo et al., 2017;", "ref_id": "BIBREF32" }, { "start": 1011, "end": 1032, "text": "Madotto et al., 2018)", "ref_id": "BIBREF26" }, { "start": 1092, "end": 1105, "text": "(Elman, 1990)", "ref_id": "BIBREF13" }, { "start": 1180, "end": 1205, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF37" }, { "start": 1787, "end": 1810, "text": "(Eriguchi et al., 2016;", "ref_id": "BIBREF16" }, { "start": 1811, "end": 1833, "text": "Bastings et al., 2017;", "ref_id": "BIBREF2" }, { "start": 1834, "end": 1852, "text": "Chen et al., 2017)", "ref_id": "BIBREF5" }, { "start": 1900, "end": 1924, "text": "(Vashishth et al., 2018)", "ref_id": "BIBREF38" }, { "start": 1973, "end": 1998, "text": "(Defferrard et al., 2016;", "ref_id": "BIBREF10" }, { "start": 1999, "end": 2021, "text": "Duvenaud et al., 2015;", "ref_id": "BIBREF11" }, { "start": 2022, "end": 2045, "text": "Kipf and Welling, 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose to use such graph structures for goal-oriented dialogues. In particular, we compute the dependency parse tree for each utterance in the conversation and use a GCN to capture the interactions between words. This allows us to capture interactions between distant words in the sentence as long as they are connected by a dependency relation. We also use GCNs to encode the entities of the KB, where the entities are treated as nodes and their relations as edges of the graph. Once we have a richer structure-aware representation for the utterances and the entities, we use a sequential attention mechanism to compute an aggregated context representation from the GCN node vectors of the query, history, and entities. Further, we note that in certain situations, such as when the conversation is in a code-mixed language or a language for which parsers are not available, it may not be possible to construct a dependency parse for the utterances. To overcome this, we construct a co-occurrence matrix from the entire corpus and use this matrix to impose a graph structure on the utterances. More specifically, we add an edge between two words in a sentence if they co-occur frequently in the corpus. Our experiments suggest that this simple strategy acts as a reasonable substitute for dependency parse trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We perform experiments with the modified DSTC2 (Bordes et al., 2017) dataset, which contains goal-oriented conversations for making restaurant reservations. We also use its recently released code-mixed versions (Banerjee et al., 2018), which contain code-mixed conversations in four different languages: Hindi, Bengali, Gujarati, and Tamil. We compare with recent state-of-the-art methods and show that, on average, the proposed model gives an improvement of 2.8 BLEU points and 2 ROUGE points.
We also perform experiments on two human-human dialogue datasets of different sizes: (i) Cam676 (Wen et al., 2017): a small-scale dataset containing 676 dialogues from the restaurant domain; and (ii) MultiWOZ (Budzianowski et al., 2018): a large-scale dataset containing around 10k dialogues and spanning multiple domains per dialogue. On these two datasets as well, we observe a similar trend, wherein our model outperforms existing methods.", "cite_spans": [ { "start": 47, "end": 68, "text": "(Bordes et al., 2017)", "ref_id": "BIBREF3" }, { "start": 211, "end": 234, "text": "(Banerjee et al., 2018)", "ref_id": "BIBREF1" }, { "start": 703, "end": 730, "text": "(Budzianowski et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions can be summarized as follows: (i) We use GCNs to incorporate structural information for encoding the query, history, and KB entities in goal-oriented dialogues; (ii) We use a sequential attention mechanism to obtain query-aware and history-aware context representations; (iii) We leverage co-occurrence frequencies and PPMI (positive pointwise mutual information) values to construct contextual graphs for code-mixed utterances; and (iv) We show that the proposed model obtains state-of-the-art results on four different datasets spanning five different languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we review previous work on goal-oriented dialogue systems and describe how GCNs have been used in NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Goal-Oriented Dialogue Systems: Initial goal-oriented dialogue systems (Young, 2000; Williams and Young, 2007) were based on dialogue state tracking (Williams et al., 2013; Henderson et al., 2014a,b) and included pipelined modules for natural language understanding, dialogue state tracking, policy management, and natural language generation. Wen et al. (2017) used neural networks for these intermediate modules, but the overall system still lacked end-to-end trainability. Such pipelined modules were restricted by the fixed slot-structure assumptions on the dialogue state and required per-module labeling. To mitigate this problem, Bordes et al. (2017) released a version of the goal-oriented dialogue dataset that focuses on the development of end-to-end neural models. Such models need to reason over the associated KB triples and generate responses directly from the utterances without any additional annotations. For example, Bordes et al. (2017) proposed a Memory Network (Sukhbaatar et al., 2015) based model to match the response candidates with the multi-hop attention-weighted representation of the conversation history and the KB triples in memory. Liu and Perez (2017) further added highway (Srivastava et al., 2015) and residual connections (He et al., 2016) to the memory network in order to regulate the access to the memory blocks. Seo et al. (2017) developed a variant of the RNN cell that computes a refined representation of the query over multiple iterations before querying the memory. However, all these approaches retrieve the response from a set of candidate responses, and such a candidate set is not easy to obtain for any new domain of interest. To account for this, Zhao et al.
(2017) adapted RNN-based encoder-decoder models to generate appropriate responses instead of retrieving them from a candidate set. Subsequent work introduced a key-value memory network based generative model that integrates the underlying KB with RNN-based encode-attend-decode models. Madotto et al. (2018) used memory networks on top of the RNN decoder to tightly integrate KB entities with the decoder in order to generate more informative responses. However, as opposed to our work, all these works ignore the underlying structure of the entity-entity graph of the KB and the syntactic structure of the utterances.", "cite_spans": [ { "start": 70, "end": 83, "text": "(Young, 2000;", "ref_id": "BIBREF42" }, { "start": 84, "end": 109, "text": "Williams and Young, 2007)", "ref_id": "BIBREF41" }, { "start": 148, "end": 171, "text": "(Williams et al., 2013;", "ref_id": "BIBREF40" }, { "start": 172, "end": 198, "text": "Henderson et al., 2014a,b)", "ref_id": null }, { "start": 631, "end": 651, "text": "Bordes et al. (2017)", "ref_id": "BIBREF3" }, { "start": 925, "end": 945, "text": "Bordes et al. (2017)", "ref_id": "BIBREF3" }, { "start": 972, "end": 997, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF37" }, { "start": 1154, "end": 1174, "text": "Liu and Perez (2017)", "ref_id": "BIBREF25" }, { "start": 1197, "end": 1222, "text": "(Srivastava et al., 2015)", "ref_id": "BIBREF36" }, { "start": 1248, "end": 1265, "text": "(He et al., 2016)", "ref_id": "BIBREF17" }, { "start": 1342, "end": 1359, "text": "Seo et al. (2017)", "ref_id": "BIBREF32" }, { "start": 1687, "end": 1705, "text": "Zhao et al. (2017)", "ref_id": "BIBREF43" }, { "start": 1967, "end": 1988, "text": "Madotto et al. (2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "GCNs in NLP: Recently, there has been an active interest in enriching existing encode-attend-decode models (Bahdanau et al., 2015) with structural information for various NLP tasks. Such structure is typically obtained from the constituency and/or dependency parse of sentences. The idea is to treat the output of a parser as a graph and use an appropriate network to capture the interactions between the nodes of this graph. For example, Eriguchi et al. (2016) and Chen et al. (2017) showed that incorporating such syntactic structures via Tree-LSTMs in the encoder can improve the performance of neural machine translation. Peng et al. (2017) use Graph-LSTMs to perform cross-sentence n-ary relation extraction and show that their formulation is applicable to any graph structure and that Tree-LSTMs can be thought of as a special case of it. In parallel, Graph Convolutional Networks (GCNs) (Duvenaud et al., 2015; Defferrard et al., 2016; Kipf and Welling, 2017) and their variants (Li et al., 2016) have emerged as state-of-the-art methods for computing representations of entities in a knowledge graph. They provide a more flexible way of encoding such graph structures by capturing multi-hop relationships between nodes. This has led to their adoption for various NLP tasks such as neural machine translation (Marcheggiani et al., 2018; Bastings et al., 2017), semantic role labeling, document dating (Vashishth et al., 2018), and question answering (Johnson, 2017; De Cao et al., 2019).", "cite_spans": [ { "start": 106, "end": 129, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 438, "end": 460, "text": "Eriguchi et al. (2016)", "ref_id": "BIBREF16" }, { "start": 465, "end": 483, "text": "Chen et al.
(2017)", "ref_id": "BIBREF5" }, { "start": 626, "end": 644, "text": "Peng et al. (2017)", "ref_id": "BIBREF31" }, { "start": 889, "end": 912, "text": "(Duvenaud et al., 2015;", "ref_id": "BIBREF11" }, { "start": 913, "end": 937, "text": "Defferrard et al., 2016;", "ref_id": "BIBREF10" }, { "start": 938, "end": 961, "text": "Kipf and Welling, 2017)", "ref_id": "BIBREF22" }, { "start": 981, "end": 998, "text": "(Li et al., 2016)", "ref_id": "BIBREF23" }, { "start": 1311, "end": 1338, "text": "(Marcheggiani et al., 2018;", "ref_id": "BIBREF27" }, { "start": 1339, "end": 1361, "text": "Bastings et al., 2017)", "ref_id": "BIBREF2" }, { "start": 1405, "end": 1429, "text": "(Vashishth et al., 2018)", "ref_id": "BIBREF38" }, { "start": 1455, "end": 1470, "text": "(Johnson, 2017;", "ref_id": "BIBREF20" }, { "start": 1471, "end": 1491, "text": "De Cao et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To the best of our knowledge, ours is the first work that uses GCNs to incorporate dependency structural information and the entity-entity graph structure in a single end-to-end neural model for goal-oriented dialogues. This is also the first work that incorporates contextual co-occurrence information for code-mixed utterances, for which no dependency structures are available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we describe GCNs (Kipf and Welling, 2017) for undirected graphs and then describe their syntactic versions, which work with directed labeled edges of dependency parse trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "Graph convolutional networks operate on a graph structure and compute representations for the nodes of the graph by looking at the neighborhood of the node. We can stack k layers of GCNs to account for neighbors that are k-hops away from the current node. Formally, let G = (V, E) be an undirected graph, where V is the set of nodes (let |V| = n) and E is the set of edges. Let X \u2208 R n\u00d7m be the input feature matrix with n nodes and each node x u (u \u2208 V) is represented by an m-dimensional feature vector. The output of a 1-layer GCN is the hidden representation matrix H \u2208 R n\u00d7d where each d-dimensional representation of a node captures the interactions with its 1-hop neighbors. Each row of this matrix can be computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCN for Undirected Graphs", "sec_num": "3.1" }, { "text": "h v = ReLU u\u2208N (v) (W x u + b) , \u2200v \u2208 V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCN for Undirected Graphs", "sec_num": "3.1" }, { "text": "(1) Here W \u2208 R d\u00d7m is the model parameter matrix, b \u2208 R d is the bias vector, and ReLU is the rectified linear unit activation function. N (v) is the set of neighbors of node v and is assumed to also include the node v so that the previous representation of the node v is also considered while computing its new hidden representation. To capture interactions with nodes that are multiple hops away, multiple layers of GCNs can be stacked together. 
{ "text": "Specifically, the representation of node v after the k-th GCN layer can be formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCN for Undirected Graphs", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_v^{k+1} = \\mathrm{ReLU}\\Big(\\sum_{u \\in \\mathcal{N}(v)} (W^k h_u^k + b^k)\\Big)", "eq_num": "(2)" } ], "section": "GCN for Undirected Graphs", "sec_num": "3.1" }, { "text": "\u2200v \u2208 V. Here h^k_u is the representation of the u-th node in the (k \u2212 1)-th GCN layer and h^1_u = x_u.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCN for Undirected Graphs", "sec_num": "3.1" }, { "text": "In a directed labeled graph G = (V, E), each edge between nodes u and v is represented by a triple (u, v, L(u, v)), where L(u, v) is the associated edge label. Prior work modified GCNs to operate over directed labeled graphs, such as the dependency parse tree of a sentence. For such a tree, in order to allow information to flow from head to dependents and vice versa, inverse dependency edges such as (v, u, L(u, v)) are added from dependents to heads to E, and the model parameters and biases are made label-specific. In that formulation,", "cite_spans": [ { "start": 99, "end": 114, "text": "(u, v, L(u, v))", "ref_id": null }, { "start": 121, "end": 128, "text": "L(u, v)", "ref_id": null }, { "start": 427, "end": 441, "text": "(v, u, L(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic GCN", "sec_num": "3.2" }, { "text": "h_v^{k+1} = \\mathrm{ReLU}\\Big(\\sum_{u \\in \\mathcal{N}(v)} (W^k_{L(u,v)} h_u^k + b^k_{L(u,v)})\\Big) \\quad (3), \u2200v \u2208 V. Notice that unlike equation 2, equation 3 has parameters W^k_{L(u,v)} and b^k_{L(u,v)}, which are label-specific. Suppose there are L different labels; then this formulation requires L weights and biases per GCN layer, resulting in a large number of parameters. To avoid this, the authors use only three sets of weights and biases per GCN layer (as opposed to L), depending on the direction in which the information flows. More specifically, W^k_{L(u,v)} = W^k_{dir(u,v)}, where dir(u, v) indicates whether information flows from u to v, from v to u, or u = v. In this work, we also make b^k_{L(u,v)} = b^k_{dir(u,v)} instead of having a separate bias per label. The final GCN formulation can thus be described as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic GCN", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_v^{k+1} = \\mathrm{ReLU}\\Big(\\sum_{u \\in \\mathcal{N}(v)} (W^k_{dir(u,v)} h_u^k + b^k_{dir(u,v)})\\Big)", "eq_num": "(4)" } ], "section": "Syntactic GCN", "sec_num": "3.2" },
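A sketch of this direction-specific update (equation 4) is given below. The three-way direction bucketing follows the text above; the container layout (dicts keyed by "out", "in", and "self") is an assumption of this illustration, not the authors' code:

```python
import numpy as np

def syntactic_gcn_layer(H, edges, W, b):
    # H: [n, d_in] node states; edges: list of (u, v) labeled arcs, e.g.
    # dependency arcs from head u to dependent v.
    # W, b: dicts keyed by direction: "out" (u -> v), "in" (added inverse
    # edge v -> u), "self" (u = v); each W[key]: [d_out, d_in], b[key]: [d_out].
    out = H @ W["self"].T + b["self"]          # u = v term for every node
    for u, v in edges:
        out[v] += W["out"] @ H[u] + b["out"]   # information flowing u -> v
        out[u] += W["in"] @ H[v] + b["in"]     # inverse edge: v -> u
    return np.maximum(out, 0.0)                # ReLU
```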
{ "text": "We first formally define the task of end-to-end goal-oriented dialogue generation. Each dialogue of t turns can be viewed as a succession of user utterances (U) and system responses (S), and can be represented as (U_1, S_1, U_2, S_2, \\ldots, U_t, S_t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Along with these utterances, each dialogue is also accompanied by e KB triples that are relevant to that dialogue and can be represented as (k_1, k_2, k_3, \\ldots, k_e). Each triple is of the form (entity_1, relation, entity_2). These triples can be represented in the form of a graph G_k = (V_k, E_k), where V_k is the set of all entities and each edge in E_k is of the form (entity_1, entity_2, relation), where relation signifies the edge label. At any dialogue turn i, given (i) the dialogue history H = (U_1, S_1, U_2, \\ldots, S_{i-1}), (ii) the current user utterance as the query Q = U_i, and (iii) the associated knowledge graph G_k, the task is to generate the current response S_i that leads to a completion of the goal. As mentioned earlier, we exploit the graph structure in the KB and the syntactic structure in the utterances to generate appropriate responses. Toward this end, we propose a model with the following components for encoding these three types of inputs. The code for the model is released publicly. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "The query Q = U_i is the i-th (current) user utterance in the dialogue and contains |Q| tokens. We denote the embedding of the i-th token in the query as q_i. We first compute the contextual representations of these tokens by passing them through a bidirectional RNN:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Encoder", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "b_t = \\mathrm{BiRNN}_Q(b_{t-1}, q_t)", "eq_num": "(5)" } ], "section": "Query Encoder", "sec_num": "4.1" }, { "text": "Now, consider the dependency parse tree of the query sentence, denoted by G_Q = (V_Q, E_Q). We use a query-specific GCN to operate on G_Q, which takes {b_i}_{i=1}^{|Q|} as the input to the first GCN layer. The node representation in the k-th hop of the query-specific GCN is computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Encoder", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_v^{k+1} = \\mathrm{ReLU}\\Big(\\sum_{u \\in \\mathcal{N}(v)} (W^k_{dir(u,v)} c_u^k + g^k_{dir(u,v)})\\Big)", "eq_num": "(6)" } ], "section": "Query Encoder", "sec_num": "4.1" }, { "text": "\u2200v \u2208 V_Q. Here W^k_{dir(u,v)} and g^k_{dir(u,v)} are edge direction-specific query-GCN weights and biases for the k-th hop, and c^1_u = b_u.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Encoder", "sec_num": "4.1" },
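The query encoder therefore composes equations (5) and (6), as sketched below. Here `bi_rnn` is a hypothetical stand-in for the bidirectional RNN of equation (5) (any BiGRU implementation can fill this role), and `syntactic_gcn_layer` is the sketch from Section 3.2; the dialogue history encoder of Section 4.2 follows the same recipe (equations 7-8):

```python
def encode_query(token_embeddings, parse_edges, gcn_params, hops):
    # token_embeddings: [|Q|, emb] matrix of q_1 ... q_|Q|;
    # parse_edges: dependency arcs of the query sentence;
    # gcn_params[k]: (W_k, g_k) direction-keyed dicts for hop k.
    C = bi_rnn(token_embeddings)               # eq. (5): b_1 ... b_|Q|
    for k in range(hops):                      # eq. (6): one hop per layer
        W_k, g_k = gcn_params[k]
        C = syntactic_gcn_layer(C, parse_edges, W_k, g_k)
    return C                                   # final-layer node vectors c^f
```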
{ "text": "The history H of the dialogue contains |H| tokens, and we denote the embedding of the i-th token in the history by p_i. Once again, we first compute the hidden representations of these tokens using a bidirectional RNN:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue History Encoder", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s_t = \\mathrm{BiRNN}_H(s_{t-1}, p_t)", "eq_num": "(7)" } ], "section": "Dialogue History Encoder", "sec_num": "4.2" }, { "text": "[Figure 1: Illustration of the GCN and RNN+GCN modules which are used as encoders in our model. The notations are specific to the dialogue history encoder, but the encoders for the query are similar. We use only the GCN encoder for the KB.]", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 145, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Dialogue History Encoder", "sec_num": "4.2" }, { "text": "We now compute a dependency parse tree for each sentence in the history and collectively represent all the trees as a single graph G_H = (V_H, E_H). Note that this graph will only contain edges between words belonging to the same sentence; there will be no edges between words across sentences. We then use a history-specific GCN to operate on G_H, which takes s_t as the input to the first layer. The node representation in the k-th hop of the history-specific GCN is computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue History Encoder", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a_v^{k+1} = \\mathrm{ReLU}\\Big(\\sum_{u \\in \\mathcal{N}(v)} (V^k_{dir(u,v)} a_u^k + o^k_{dir(u,v)})\\Big)", "eq_num": "(8)" } ], "section": "Dialogue History Encoder", "sec_num": "4.2" }, { "text": "\u2200v \u2208 V_H. Here V^k_{dir(u,v)} and o^k_{dir(u,v)} are edge direction-specific history-GCN weights and biases in the k-th hop, and a^1_u = s_u. Such an encoder with a single hop of GCN is illustrated in Figure 1(b), and the encoder without the BiRNN is depicted in Figure 1(a).", "cite_spans": [], "ref_spans": [ { "start": 530, "end": 538, "text": "Figure 1", "ref_id": null }, { "start": 592, "end": 600, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Dialogue History Encoder", "sec_num": "4.2" }, { "text": "As mentioned earlier, G_K = (V_K, E_K) is the graph capturing the interactions between the entities in the knowledge graph associated with the dialogue. Let there be m such entities, and let the embedding of the node corresponding to the i-th entity be denoted by e_i. We then operate a KB-specific GCN on these entity representations to obtain refined representations that capture the relations between entities. The node representation in the k-th hop of the KB-specific GCN is computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Encoder", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r_v^{k+1} = \\mathrm{ReLU}\\Big(\\sum_{u \\in \\mathcal{N}(v)} (U^k_{dir(u,v)} r_u^k + z^k_{dir(u,v)})\\Big)", "eq_num": "(9)" } ], "section": "KB Encoder", "sec_num": "4.3" }, { "text": "\u2200v \u2208 V_K. Here U^k_{dir(u,v)} and z^k_{dir(u,v)} are edge direction-specific KB-GCN weights and biases in the k-th hop, and r^1_u = e_u. We also add inverse edges to E_K, similar to the case of syntactic GCNs, in order to allow information to flow in both directions for an entity pair in the knowledge graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Encoder", "sec_num": "4.3" },
{ "text": "We use an RNN decoder to generate the tokens of the response, and let the hidden states of the decoder be denoted as {d_i}_{i=1}^{T}, where T is the total number of decoder time steps. In order to obtain a single representation of the node vectors from the final layer (k = f) of the query-GCN, we use an attention mechanism as described below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequential Attention", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mu_{jt} = v_1^T \\tanh(W_1 c_j^f + W_2 d_{t-1}) \\quad (10); \\quad \\alpha_t = \\mathrm{softmax}(\\mu_t) \\quad (11); \\quad h_t^Q = \\sum_{j=1}^{|Q|} \\alpha_{jt} c_j^f", "eq_num": "(12)" } ], "section": "Sequential Attention", "sec_num": "4.4" }, { "text": "Here v_1, W_1, and W_2 are parameters. Further, at each decoder time step, we obtain a query-aware representation from the final layer of the history-GCN by computing an attention score for each node/token in the history based on the query context vector h_t^Q, as shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequential Attention", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\nu_{jt} = v_2^T \\tanh(W_3 a_j^f + W_4 d_{t-1} + W_5 h_t^Q) \\quad (13); \\quad \\beta_t = \\mathrm{softmax}(\\nu_t)", "eq_num": "(14)" } ], "section": "Sequential Attention", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_t^H = \\sum_{j=1}^{|H|} \\beta_{jt} a_j^f", "eq_num": "(15)" } ], "section": "Sequential Attention", "sec_num": "4.4" }, { "text": "Here v_2, W_3, W_4, and W_5 are parameters. Finally, we obtain a query- and history-aware representation of the KB by computing an attention score over all the nodes in the final layer of the KB-GCN using h_t^Q and h_t^H, as shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequential Attention", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\omega_{jt} = v_3^T \\tanh(W_6 r_j^f + W_7 d_{t-1} + W_8 h_t^Q + W_9 h_t^H) \\quad (16); \\quad \\gamma_t = \\mathrm{softmax}(\\omega_t)", "eq_num": "(17)" } ], "section": "Sequential Attention", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_t^K = \\sum_{j=1}^{m} \\gamma_{jt} r_j^f", "eq_num": "(18)" } ], "section": "Sequential Attention", "sec_num": "4.4" }, { "text": "Here v_3, W_6, W_7, W_8, and W_9 are parameters. This sequential attention mechanism is illustrated in Figure 2. For simplicity, we depict the GCN and RNN+GCN encoders as blocks. The internal structure of these blocks is shown in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 113, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 234, "end": 242, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Sequential Attention", "sec_num": "4.4" },
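The three attention stages share one template, sketched below in NumPy; the function and variable names are illustrative, not from the released code. The chaining comments show how each stage conditions on the outputs of the previous ones, and how the aggregator of Section 4.5 consumes the results:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(nodes, W_n, v, conditioners):
    # One attention stage (eqs. 10-18): score every final-layer GCN node
    # against the summed, already-projected conditioning vectors, normalize,
    # and pool. nodes: [n, d]; W_n: [a, d]; v: [a]; conditioners: list of [a].
    cond = sum(conditioners)
    scores = np.tanh(nodes @ W_n.T + cond) @ v   # eqs. (10)/(13)/(16)
    weights = softmax(scores)                    # eqs. (11)/(14)/(17)
    return weights @ nodes                       # eqs. (12)/(15)/(18)

# Chaining at decoder step t, with d_prev = d_{t-1}:
# h_q = attend(query_nodes,   W1, v1, [W2 @ d_prev])
# h_h = attend(history_nodes, W3, v2, [W4 @ d_prev, W5 @ h_q])
# h_k = attend(kb_nodes,      W6, v3, [W7 @ d_prev, W8 @ h_q, W9 @ h_h])
# The aggregator (eqs. 19-20) then forms h_c = th_h * h_h + th_k * h_k and
# feeds h_final = concat(h_c, h_q) to the decoder RNN.
```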
{ "text": "The decoder is conditioned on two components: (i) the context, which contains the history and the KB, and (ii) the query, which is the last/previous utterance in the dialogue. We use an aggregator that learns the overall attention to be given to the history and KB components. These attention scores, \u03b8_t^H and \u03b8_t^K, are dependent on the respective context vectors and the previous decoder state d_{t-1}. The final context vector is obtained as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_t^C = \\theta_t^H h_t^H + \\theta_t^K h_t^K", "eq_num": "(19)" } ], "section": "Decoder", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_t^{final} = [h_t^C ; h_t^Q]", "eq_num": "(20)" } ], "section": "Decoder", "sec_num": "4.5" }, { "text": "where [;] denotes the concatenation operator. At every time step, the decoder then computes a probability distribution over the vocabulary using the following equations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d_t = \\mathrm{RNN}(d_{t-1}, [h_t^{final} ; w_t]) \\quad (21); \\quad P_{vocab} = \\mathrm{softmax}(V d_t + b)", "eq_num": "(22)" } ], "section": "Decoder", "sec_num": "4.5" }, { "text": "where w_t is the decoder input at time step t, and V and b are parameters. P_{vocab} gives us a probability distribution over the entire vocabulary, and the loss for time step t is l_t = \u2212log P_{vocab}(w_t^*), where w_t^* is the t-th word in the ground truth response. The total loss is an average of the per-time-step losses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.5" }, { "text": "For the dialogue history and query encoders, we used the dependency parse tree for capturing structural information in the encodings. However, if the conversations occur in a language for which no dependency parsers exist, for example, code-mixed languages like Hinglish (Hindi-English) (Banerjee et al., 2018), then we need an alternate way of extracting a graph structure from the utterances. One simple solution that has worked well in practice is to create a word co-occurrence matrix from the entire corpus, where the context window is an entire sentence. Once we have such a co-occurrence matrix, for a given sentence we can connect two words with an edge if their co-occurrence frequency is above a threshold value. The co-occurrence matrix can either contain co-occurrence frequency counts or positive pointwise mutual information (PPMI) values (Church and Hanks, 1990; Dagan et al., 1993; Niwa and Nitta, 1994).", "cite_spans": [ { "start": 853, "end": 877, "text": "(Church and Hanks, 1990;", "ref_id": "BIBREF7" }, { "start": 878, "end": 897, "text": "Dagan et al., 1993;", "ref_id": "BIBREF8" }, { "start": 898, "end": 919, "text": "Niwa and Nitta, 1994)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Contextual Graph Creation", "sec_num": "4.6" },
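A minimal sketch of this contextual graph construction is given below. The exact threshold is a tunable choice not fixed by the text, and the sentence-level probability estimates used for PPMI are one reasonable reading of the whole-sentence context window:

```python
from collections import Counter
from itertools import combinations
import math

def cooccurrence_stats(corpus):
    # corpus: list of tokenized sentences; the context window is the sentence.
    pair_counts, word_counts, n_sents = Counter(), Counter(), 0
    for sent in corpus:
        words = sorted(set(sent))          # count each word once per sentence
        word_counts.update(words)
        pair_counts.update(combinations(words, 2))
        n_sents += 1
    return pair_counts, word_counts, n_sents

def ppmi(u, v, pair_counts, word_counts, n_sents):
    u, v = sorted((u, v))
    p_uv = pair_counts[(u, v)] / n_sents
    if p_uv == 0.0:
        return 0.0
    p_u, p_v = word_counts[u] / n_sents, word_counts[v] / n_sents
    return max(0.0, math.log(p_uv / (p_u * p_v)))   # positive PMI

def contextual_edges(sentence, weight, threshold):
    # Connect two words of a sentence if their corpus-level score (raw count
    # or PPMI) crosses the threshold; the threshold value is an assumption.
    return [(u, v) for u, v in combinations(sorted(set(sentence)), 2)
            if weight(u, v) >= threshold]

# Example usage (illustrative threshold):
# pc, wc, n = cooccurrence_stats(corpus)
# edges = contextual_edges(tokens, lambda u, v: ppmi(u, v, pc, wc, n), 1.0)
```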
{ "text": "In this section, we describe the datasets used in our experiments, the various hyperparameters that we considered, and the models that we compared.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "The original DSTC2 dataset (Henderson et al., 2014a) was based on the task of restaurant table reservation and contains transcripts of real conversations between humans and bots. The utterances were labeled with dialogue state annotations such as the semantic intent representation, requested slots, and the constraints on the slot values. We report our results on the modified DSTC2 dataset of Bordes et al. (2017). For our experiments with contextual graphs, we report our results on the code-mixed versions of modified DSTC2, recently released by Banerjee et al. (2018). This dataset was collected by code-mixing the utterances of the English version of modified DSTC2 (En-DSTC2) in four languages: Hindi (Hi-DSTC2), Bengali (Be-DSTC2), Gujarati (Gu-DSTC2), and Tamil (Ta-DSTC2), via crowdsourcing. We also perform experiments on two goal-oriented dialogue datasets that contain human-human conversations collected in a Wizard-of-Oz (WOZ) manner. Specifically, we use the Cam676 dataset (Wen et al., 2017), which contains 676 KB-grounded dialogues from the restaurant domain, and the MultiWOZ (Budzianowski et al., 2018) dataset, which contains 10,438 dialogues.", "cite_spans": [ { "start": 27, "end": 51, "text": "(Henderson et al., 2014a", "ref_id": "BIBREF18" }, { "start": 397, "end": 417, "text": "Bordes et al. (2017)", "ref_id": "BIBREF3" }, { "start": 568, "end": 590, "text": "Banerjee et al. (2018)", "ref_id": "BIBREF1" }, { "start": 376, "end": 403, "text": "(Budzianowski et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" },
(2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "ROUGE Entity F1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "1 2 L Rule-Based (Bordes et al., 2017) 33.3 \u2212 \u2212 \u2212 \u2212 \u2212 MEMNN (Bordes et al., 2017) 41.1 \u2212 \u2212 \u2212 \u2212 \u2212 QRN (Seo et al., 2017) 50.7 \u2212 \u2212 \u2212 \u2212 \u2212 GMEMNN (Liu and Perez, 2017) 48.7 \u2212 \u2212 \u2212 \u2212 \u2212 Seq2Seq-Attn (Bahdanau et al., 2015) 46.0 57.3 67.2 56.0 64.9 67.1 Seq2Seq-Attn+Copy 47.3 55.4 \u2212 \u2212 \u2212 71.6 HRED (Serban et al., 2016) 48.9 58.4 67.9 57.6 65.7 75.6 Mem2Seq (Madotto et al., 2018) 45 (Budzianowski et al., 2018) dataset, which contains 10,438 dialogues.", "cite_spans": [ { "start": 17, "end": 38, "text": "(Bordes et al., 2017)", "ref_id": "BIBREF3" }, { "start": 60, "end": 81, "text": "(Bordes et al., 2017)", "ref_id": "BIBREF3" }, { "start": 101, "end": 119, "text": "(Seo et al., 2017)", "ref_id": "BIBREF32" }, { "start": 142, "end": 163, "text": "(Liu and Perez, 2017)", "ref_id": "BIBREF25" }, { "start": 192, "end": 215, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 290, "end": 311, "text": "(Serban et al., 2016)", "ref_id": "BIBREF34" }, { "start": 350, "end": 372, "text": "(Madotto et al., 2018)", "ref_id": "BIBREF26" }, { "start": 376, "end": 403, "text": "(Budzianowski et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "We used the same train, test, and validation splits as provided in the original versions of the datasets. We minimized the cross entropy loss using the Adam optimizer (Kingma and Ba, 2015) and tuned the initial learning rates in the range of 0.0006 to 0.001. For regularization we used an L2 penalty of 0.001 in addition to a dropout (Srivastava et al., 2014) of 0.1. We used randomly initialized word embeddings of size 300. The RNN and GCN hidden dimensions were also chosen to be 300. We used GRU (Cho et al., 2014) cells for the RNNs. All parameters were initialized from a truncated normal distribution with a standard deviation of 0.1.", "cite_spans": [ { "start": 334, "end": 359, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF35" }, { "start": 500, "end": 518, "text": "(Cho et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Hyperparameters", "sec_num": "5.2" }, { "text": "We compare the performance of the following models. (i) RNN+GCN-SeA vs GCN-SeA: We use RNN+GCN-SeA to refer to the model described in Section 4. Instead of using the hidden representations obtained from the bidirectional RNNs, we also experiment by providing the token embeddings directly to the GCNs-that is, c 1 u = q u in equation 6 and a 1 u = p u in equation 8. We refer to this model as GCN-SeA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Compared", "sec_num": "5.3" }, { "text": "(ii) Cross edges between the GCNs: In addition to the dependency and contextual edges, we add edges between words in the dialogue history/query and KB entities if a history/query word exactly matches the KB entity. Such edges create a single connected graph that is encoded using a single GCN encoder and then separated into different contexts to compute sequential attention. 
{ "text": "(iii) GCN-SeA+Random vs GCN-SeA+Structure: We experiment with a model in which the graph is constructed by randomly connecting pairs of words in a context. We refer to this model as GCN-SeA+Random. We refer to the model that uses either dependency or contextual graphs instead of random graphs as GCN-SeA+Structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Compared", "sec_num": "5.3" }, { "text": "In this section, we discuss the results of our experiments as summarized in Tables 1-5. We use the BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) metrics to evaluate the generation quality of responses. We also report the per-response accuracy, which computes the percentage of responses in which the generated response exactly matches the ground truth response. To evaluate the model's capability of correctly injecting entities into the generated response, we report the entity F1 measure as defined in prior work.", "cite_spans": [ { "start": 100, "end": 123, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF30" }, { "start": 134, "end": 145, "text": "(Lin, 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "Results on En-DSTC2: We compare our model with previous works on the English version of modified DSTC2 in Table 1. The first four models are retrieval-based, i.e., they select the response from a list of candidates as opposed to generating it. Our model outperforms all of the retrieval and generation-based models. We obtain a gain of 0.7 in the per-response accuracy compared with the previous retrieval-based state-of-the-art model of Seo et al. (2017), which is a very strong baseline for our generation-based model. We call this a strong baseline because the candidate selection task of this model is easier than the response generation task of our model. We also obtain a gain of 2.8 BLEU points, 2 ROUGE points, and 2.5 entity F1 points compared with current state-of-the-art generation-based models.", "cite_spans": [ { "start": 359, "end": 376, "text": "Seo et al. (2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "Results on code-mixed datasets and effect of using RNNs: The results of our experiments on the code-mixed datasets are reported in Table 2. Our model outperforms the baseline models on all the code-mixed languages. One common observation from the results over all the languages is that RNN+GCN-SeA performs better than GCN-SeA. Similar observations were made in prior work on semantic role labeling.", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "[Table 5: GCN-SeA with random graphs and dependency/contextual graphs on all DSTC2 datasets.] Results on Cam676 dataset: The results of our experiments on the Cam676 dataset are reported in Table 3. In order to evaluate goal-completeness, we use two additional metrics used in the original paper that introduced this dataset: (i) match rate: the number of times the correct entity was suggested by the model; and (ii) success rate: if the correct entity was suggested and the system provided all the requestable slots, then the dialogue results in a success.
The results suggest that our model's responses are more fluent, as indicated by the BLEU and ROUGE scores. It also produces the correct entities according to the dialogue goals, but fails to provide enough requestable slots. Note that the model described in the original paper (Wen et al., 2017) is not directly comparable to our work, as it uses an explicit belief tracker, which requires extra supervision/annotation about the belief state. However, for the sake of completeness, we would like to mention that their model, using this extra supervision, achieves a BLEU score of 23.69 and a success rate of 83.82%.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 178, "end": 185, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "Results on MultiWOZ dataset: The results of our experiments on two versions of the MultiWOZ dataset are reported in Table 4. The first version (SNG) contains around 3K dialogues in which each dialogue involves only a single domain, and the second version (MUL) contains all 10k dialogues. The baseline models do not use an oracle belief state, as mentioned in Budzianowski et al. (2018), and are therefore comparable to our model. We observed that with a larger GCN hidden dimension (400d in Table 4), our model is able to provide the correct entities and requestable slots in SNG. On the other hand, with a smaller GCN hidden dimension (100d), we are able to generate fluent responses in SNG. On MUL, our model is able to generate fluent responses but struggles to provide the correct entity, mainly due to the increased complexity of multiple domains. However, our model still provides a high number of correct requestable slots, as shown by the success rate. This is because multiple domains (hotel, restaurant, attraction, hospital) have the same requestable slots (address, phone, postcode).", "cite_spans": [ { "start": 359, "end": 385, "text": "Budzianowski et al. (2018)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 116, "end": 123, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "Effect of using hops: As we increased the number of hops of the GCNs (Figure 3), we observed a decrease in performance. One reason for such a drop could be that the average utterance length is very small (7.76 words). Thus, there is not much scope for capturing distant neighborhood information, and more hops can add noisy information. The reduction is more prominent in contextual graphs, in which multi-hop neighbors can turn out to be dissimilar words in different sentences.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 74, "text": "(Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "Effect of using random graphs: GCN-SeA+Random and GCN-SeA+Structure take the token embeddings directly instead of passing them through an RNN. This ensures that the difference in performance of the two models is not influenced by the RNN encodings. The results are shown in Table 5, and we observe a drop in performance for GCN-SeA+Random across all the languages.
This shows that the dependency and contextual structures play an important role and cannot be replaced by random graphs.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "Ablations: We experiment with replacing the sequential attention with Bahdanau attention (Bahdanau et al., 2015). We also experiment with various combinations of RNNs and GCNs as encoders. The results are shown in Table 6. We observed that GCNs do not outperform RNNs on their own. In general, RNN-Bahdanau attention performs better than GCN-Bahdanau attention. The sequential attention mechanism outperforms Bahdanau attention, as observed from the following comparisons: (i) GCN-Bahdanau attention vs GCN-SeA, (ii) RNN-Bahdanau attention vs RNN-SeA (in BLEU and ROUGE), and (iii) RNN+GCN-Bahdanau attention vs RNN+GCN-SeA. Overall, the best results are always obtained by our final model, which combines RNN, GCN, and sequential attention. We also performed ablations by removing specific parts of the encoder. Specifically, we experiment with (i) the query encoder alone, (ii) the query + history encoders, and (iii) the query + KB encoders. The results shown in Table 7 suggest that the query and the KB are not enough to generate fluent responses, and that the previous conversation history is essential.", "cite_spans": [ { "start": 91, "end": 114, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 6", "ref_id": "TABREF7" }, { "start": 956, "end": 963, "text": "Table 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "[Table 9: Qualitative comparison of responses between the baselines and different versions of our model.] Human evaluations: In order to evaluate the appropriateness of our model's responses compared to the baselines, we perform a human evaluation of the generated responses using in-house evaluators. We evaluated randomly chosen responses from 200 dialogues of En-DSTC2 and 100 dialogues of Cam676 using the method of pairwise comparisons introduced in Serban et al. (2017). We chose the best baseline model for each dataset, namely, HRED for En-DSTC2 and Seq2seq+Attn for Cam676. We show each dialogue context to three different evaluators and ask them to select the most appropriate response in that context. The evaluators were given no information about which model generated which response. They were allowed to choose a tie option if they were not able to decide whether one model's response was better than the other's. The results reported in Table 8 suggest that our model's responses are favorable in the noisy contexts of spontaneous conversations, such as those exhibited in the DSTC2 dataset. However, in a WOZ setting for human-human dialogues, where the conversations are less spontaneous and contexts are properly established, both models generate appropriate responses.", "cite_spans": [], "ref_spans": [ { "start": 269, "end": 276, "text": "Table 9", "ref_id": null }, { "start": 959, "end": 966, "text": "Table 8", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "Qualitative analysis: We show the generated responses of the baselines and different versions of our model in Table 9.
We see that the Seq2seq+Attn model is not able to suggest a restaurant with a high rating, whereas HRED gets the restaurant right but suggests an incorrect price range. However, RNN+GCN-SeA suggests the correct restaurant with the preferred attributes. Although GCN-SeA selects the correct restaurant, it does not provide the location in its response.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "6" }, { "text": "We showed that structure-aware representations are useful in goal-oriented dialogue, and our model outperforms existing methods on four dialogue datasets. We used GCNs to infuse the structural information of dependency graphs and contextual graphs to enrich the representations of the dialogue context and KB. We also proposed a sequential attention mechanism for combining the representations of (i) the query (current utterance), (ii) the conversation history, and (iii) the KB. Finally, we empirically showed that when dependency parsers are not available for certain languages, such as code-mixed languages, we can use word co-occurrence frequencies and PPMI values to extract a contextual graph and use such a graph with GCNs for improved performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://github.com/sumanbanerjee1/GCN-SeA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers and the action editor for their insightful comments and suggestions. We would like to thank the Department of Computer Science and Engineering, IIT Madras, and the Robert Bosch Centre for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras, for providing the necessary resources. We would also like to thank Accenture Technology Labs, India, for supporting our work through their generous academic research grant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Context: Speaker 1: [silence] . Speaker 2: hello, welcome to the cambridge restaurant system . you can ask for restaurants by area, price range or food type . how may i help you ? Speaker 1: uh cheap restaurant malaysian food . Speaker 2: i'm sorry but there is no restaurant serving malaysian food . Speaker 1: uh italian food . Speaker 2: could you please repeat that ? Speaker 1: cheap restaurant uh italian food . Speaker 2: there are restaurants serving italian in the cheap price range . what area would you like ?
Speaker 1: i dont care . KB Triples: ask, r cuisine, italian; ask, r location, centre; ask, r phone, ask phone; ask, r address, ask address; ask, r price, cheap; ask, r rating, 0; pizza hut city centre, r cuisine, italian; pizza hut city centre, r location, centre; pizza hut city centre, r phone, pizza hut city centre phone; pizza hut city centre, r address, pizza hut city centre address; pizza hut city centre, r price, cheap; pizza hut city centre, r rating, 9. Seq2seq + Attn: ask is a nice place in the west of town serving tasty italian food | HRED: pizza hut city centre serves italian food in the expensive price range | GCN-SeA: pizza hut city centre serves italian food in the cheap price range | RNN+GCN-SeA: pizza hut city centre is a great restaurant serving cheap italian food in the centre of town | RNN+CROSS-GCN-SeA: pizza hut city centre is a nice place in the centre of town serving tasty italian food", "cite_spans": [ { "start": 20, "end": 29, "text": "[silence]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A dataset for building code-mixed goal oriented conversation systems", "authors": [ { "first": "Suman", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Moghe", "suffix": "" }, { "first": "Siddhartha", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Mitesh", "middle": [ "M" ], "last": "Khapra", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3766--3780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suman Banerjee, Nikita Moghe, Siddhartha Arora, and Mitesh M. Khapra. 2018. A dataset for building code-mixed goal oriented conversation systems.
In Proceedings of the 27th International Conference on Computational Linguistics, pages 3766-3780.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Graph convolutional encoders for syntax-aware neural machine translation", "authors": [ { "first": "Joost", "middle": [], "last": "Bastings", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Khalil", "middle": [], "last": "Simaan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1957--1967", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957-1967.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning end-to-end goal-oriented dialog", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Y.-Lan", "middle": [], "last": "Boureau", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Y.-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "MultiWOZ - A large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling", "authors": [ { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Bo-Hsiang", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Casanueva", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Ultes", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Osman Ramadan", "suffix": "" }, { "first": "", "middle": [], "last": "Gasic", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "5016--5026", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pawe\u0142 Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I\u00f1igo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. MultiWOZ - A large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026, Brussels.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Improved neural machine translation with a syntax-aware encoder and decoder", "authors": [ { "first": "Huadong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1936--1945", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huadong Chen, Shujian Huang, David Chiang, and Jiajun Chen. 2017. Improved neural machine translation with a syntax-aware encoder and decoder. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1936-1945.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724-1734.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Word association norms, mutual information, and lexicography", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "1", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Contextual word similarity and estimation from sparse data", "authors": [ { "first": "Shaul", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "164--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Shaul Marcus, and Shaul Markovitch. 1993. Contextual word similarity and estimation from sparse data.
In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 164-171, Columbus, OH.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Question answering by reasoning across documents with graph convolutional networks", "authors": [ { "first": "Nicola", "middle": [], "last": "De Cao", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2306--2317", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicola De Cao, Wilker Aziz, and Ivan Titov. 2019. Question answering by reasoning across documents with graph convolutional networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2306-2317, Minneapolis, MN.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "authors": [ { "first": "Micha\u00ebl", "middle": [], "last": "Defferrard", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Bresson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Vandergheynst", "suffix": "" }, { "first": ";", "middle": [ "D", "D" ], "last": "Lee", "suffix": "" }, { "first": "M", "middle": [], "last": "Sugiyama", "suffix": "" }, { "first": "U", "middle": [ "V" ], "last": "Luxburg", "suffix": "" }, { "first": "I", "middle": [], "last": "Guyon", "suffix": "" }, { "first": "R", "middle": [], "last": "Garnett", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "3844--3852", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3844-3852. Curran Associates, Inc.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Convolutional networks on graphs for learning molecular fingerprints", "authors": [ { "first": "David", "middle": [ "K" ], "last": "Duvenaud", "suffix": "" }, { "first": "Dougal", "middle": [], "last": "Maclaurin", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "Iparraguirre", "suffix": "" }, { "first": "Rafael", "middle": [], "last": "Bombarell", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Hirzel", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Aspuru-Guzik", "suffix": "" }, { "first": "Ryan", "middle": [ "P" ], "last": "Adams", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P. Adams. 2015. Convolutional networks on graphs for learning molecular fingerprints. In C. Cortes, N. D. Lawrence, D. D. Lee, M.
Sugiyama, and R.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Advances in Neural Information Processing Systems 28", "authors": [ { "first": "", "middle": [], "last": "Garnett", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "2224--2232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2224-2232. Curran Associates, Inc.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Finding structure in time", "authors": [ { "first": "Jeffrey", "middle": [ "L" ], "last": "Elman", "suffix": "" } ], "year": 1990, "venue": "Cognitive Science", "volume": "14", "issue": "2", "pages": "179--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Key-value retrieval networks for task-oriented dialogue", "authors": [ { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Lakshmi", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Francois", "middle": [], "last": "Charette", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "37--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37-49, Saarbr\u00fccken.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue", "authors": [ { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "468--473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihail Eric and Christopher Manning. 2017. A copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 468-473.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Tree-to-sequence attentional neural machine translation", "authors": [ { "first": "Akiko", "middle": [], "last": "Eriguchi", "suffix": "" }, { "first": "Kazuma", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "823--833", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 823-833.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016", "volume": "", "issue": "", "pages": "770--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, pages 770-778, Las Vegas, NV.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The second dialog state tracking challenge", "authors": [ { "first": "Matthew", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Jason", "middle": [ "D" ], "last": "Williams", "suffix": "" } ], "year": 2014, "venue": "The 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "263--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014a. The second dialog state tracking challenge. In Proceedings of the SIGDIAL 2014 Conference, The 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 263-272, Philadelphia, PA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The third dialog state tracking challenge", "authors": [ { "first": "Matthew", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Jason", "middle": [ "D" ], "last": "Williams", "suffix": "" } ], "year": 2014, "venue": "IEEE Spoken Language Technology Workshop", "volume": "", "issue": "", "pages": "324--329", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014b. The third dialog state tracking challenge. In 2014 IEEE Spoken Language Technology Workshop, SLT 2014, pages 324-329, South Lake Tahoe, NV.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning graphical state transitions", "authors": [ { "first": "D", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel D. Johnson. 2017. Learning graphical state transitions. In 5th International Conference on Learning Representations, ICLR 2017, Toulon.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. 
Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Semi-supervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Gated graph sequence neural networks", "authors": [ { "first": "Yujia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Tarlow", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brockschmidt", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" } ], "year": 2016, "venue": "4th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out: Proceedings of the ACL-04 Workshop", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81, Barcelona.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Gated end-to-end memory networks", "authors": [ { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Perez", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Liu and Julien Perez. 2017. Gated end-to-end memory networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 1-10, Valencia.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems", "authors": [ { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1468--1478", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018.
Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468-1478.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Exploiting semantics in neural machine translation with graph convolutional networks", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Joost", "middle": [], "last": "Bastings", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "486--492", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Marcheggiani, Joost Bastings, and Ivan Titov. 2018. Exploiting semantics in neural machine translation with graph convolutional networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 486-492.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Encoding sentences with graph convolutional networks for semantic role labeling", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1506--1515", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506-1515.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Co-occurrence vectors from corpora vs. distance vectors from dictionaries", "authors": [ { "first": "Yoshiki", "middle": [], "last": "Niwa", "suffix": "" }, { "first": "Yoshihiko", "middle": [], "last": "Nitta", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "304--309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshiki Niwa and Yoshihiko Nitta. 1994. Co-occurrence vectors from corpora vs. distance vectors from dictionaries. In Proceedings of the 15th Conference on Computational Linguistics - Volume 1, COLING '94, pages 304-309, Stroudsburg, PA.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "BLEU: A method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation.
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, PA.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Cross-sentence n-ary relation extraction with graph LSTMs", "authors": [ { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "101--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph LSTMs. Transactions of the Association for Computational Linguistics, 5:101-115.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Query-reduction networks for question answering", "authors": [ { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Query-reduction networks for question answering. In 5th International Conference on Learning Representations, ICLR 2017, Toulon.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A hierarchical latent variable encoder-decoder model for generating dialogues", "authors": [ { "first": "V", "middle": [], "last": "Iulian", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Serban", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Charlin", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Courville", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iulian V. Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues.
In Thirty-First AAAI Conference on Artificial Intelligence, page 1583.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", "authors": [ { "first": "Iulian", "middle": [], "last": "Vlad Serban", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [ "C" ], "last": "Courville", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "3776--3784", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 3776-3784, Phoenix, AZ.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Training very deep networks", "authors": [ { "first": "K", "middle": [], "last": "Rupesh", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Greff", "suffix": "" }, { "first": "", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "28", "issue": "", "pages": "2377--2385", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rupesh K. Srivastava, Klaus Greff, and J\u00fcrgen Schmidhuber. 2015. Training very deep networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2377-2385. Curran Associates, Inc.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "End-to-end memory networks", "authors": [ { "first": "Sainbayar", "middle": [], "last": "Sukhbaatar", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2440--2448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015.
End-to-end memory networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, pages 2440-2448, Montreal.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Dating documents using graph convolution networks", "authors": [ { "first": "Shikhar", "middle": [], "last": "Vashishth", "suffix": "" }, { "first": "Swayambhu", "middle": [], "last": "Shib Sankar Dasgupta", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Nath Ray", "suffix": "" }, { "first": "", "middle": [], "last": "Talukdar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1605--1615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shikhar Vashishth, Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. 2018. Dating documents using graph convolution networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1605-1615.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "A network-based end-to-end trainable task-oriented dialogue system", "authors": [ { "first": "David", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Lina", "middle": [ "M" ], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Ultes", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "1", "issue": "", "pages": "438--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, David Vandyke, Nikola Mrk\u0161i\u0107, Milica Ga\u0161i\u0107, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438-449, Valencia.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "The dialog state tracking challenge", "authors": [ { "first": "Jason", "middle": [ "D" ], "last": "Williams", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Raux", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Ramachandran", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2013, "venue": "The 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "404--413", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason D. Williams, Antoine Raux, Deepak Ramachandran, and Alan W. Black. 2013. The dialog state tracking challenge.
In Proceedings of the SIGDIAL 2013 Conference, The 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 404-413, Metz.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Partially observable Markov decision processes for spoken dialog systems", "authors": [ { "first": "Jason", "middle": [ "D" ], "last": "Williams", "suffix": "" }, { "first": "Steve", "middle": [ "J" ], "last": "Young", "suffix": "" } ], "year": 2007, "venue": "Computer Speech & Language", "volume": "21", "issue": "2", "pages": "393--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason D. Williams and Steve J. Young. 2007. Partially observable Markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393-422.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Probabilistic methods in spoken-dialogue systems", "authors": [ { "first": "Steve", "middle": [ "J" ], "last": "Young", "suffix": "" } ], "year": null, "venue": "Philosophical Transactions: Mathematical, Physical and Engineering Sciences", "volume": "358", "issue": "", "pages": "1389--1402", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steve J. Young. 2000. Probabilistic methods in spoken-dialogue systems. Philosophical Transactions: Mathematical, Physical and Engineering Sciences, 358(1769):1389-1402.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability", "authors": [ { "first": "Tiancheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Allen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Kyusong", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Maxine", "middle": [], "last": "Eskenazi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "27--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiancheng Zhao, Allen Lu, Kyusong Lee, and Maxine Eskenazi. 2017. Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 27-36.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Illustration of sequential attention mechanism in RNN+GCN-SeA. such annotations are removed and only the raw utterance-response pairs are present with an associated set of KB triples for each dialogue. It contains around 1,618 training dialogues, 500 validation dialogues, and 1,117 test dialogues.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "GCN-SeA with multiple hops on all DSTC2 datasets.", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "text": "Comparison of RNN+GCN-SeA with other models on all code-mixed datasets.", "type_str": "table", "content": "
Dataset     Model                   per-resp. acc   BLEU   ROUGE-1   ROUGE-2   ROUGE-L   Entity F1
Hi-DSTC2    Seq2Seq-Bahdanau Attn   48.0            55.1   62.9      52.5      61.0      74.3
            HRED                    47.2            55.3   63.4      52.7      61.5      71.3
            Mem2Seq                 43.1            50.2   55.5      48.1      54.0      73.8
            GCN-SeA                 47.0            56.0   65.0      55.3      63.0      72.4
            RNN+CROSS-GCN-SeA       47.2            56.4   64.7      54.9      62.6      73.5
            RNN+GCN-SeA             49.2            57.1   66.4      56.8      64.4      75.9
Be-DSTC2    Seq2Seq-Bahdanau Attn   50.4            55.6   67.4      57.6      65.1      76.2
            HRED                    47.8            55.6   67.2      57.0      64.9      71.5
            Mem2Seq                 41.9            52.1   58.9      50.8      57.0      73.2
            GCN-SeA                 47.1            58.4   67.4      57.3      64.9      69.6
            RNN+CROSS-GCN-SeA       50.4            59.1   68.3      58.9      65.9      74.9
            RNN+GCN-SeA             50.3            59.2   69.0      59.4      66.6      75.1
Gu-DSTC2    Seq2Seq-Bahdanau Attn   47.7            54.5   64.8      54.9      62.6      71.3
            HRED                    48.0            54.7   65.4      55.2      63.3      71.8
            Mem2Seq                 43.1            48.9   55.7      48.6      54.2      75.5
            GCN-SeA                 48.1            55.7   65.5      56.2      63.5      72.2
            RNN+CROSS-GCN-SeA       49.4            56.9   66.4      57.2      64.3      73.4
            RNN+GCN-SeA             48.9            56.7   66.1      56.9      64.1      73.0
Ta-DSTC2    Seq2Seq-Bahdanau Attn   49.3            62.9   67.8      56.3      65.6      77.7
            HRED                    47.8            61.5   66.9      55.2      64.8      74.4
            Mem2Seq                 44.2            58.9   58.6      50.8      57.0      74.9
            GCN-SeA                 46.4            62.8   68.5      57.5      66.1      71.9
            RNN+CROSS-GCN-SeA       50.8            64.5   69.8      59.6      67.5      78.8
            RNN+GCN-SeA             50.7            64.9   70.2      59.9      67.9      77.9
", "num": null, "html": null }, "TABREF2": { "text": "Comparison of RNN+GCN-SeA with other models on all code-mixed datasets.", "type_str": "table", "content": "
Models          Match   Success   BLEU    ROUGE-1   ROUGE-2   ROUGE-L
Seq2seq-Attn    85.29   48.53     18.81   48.11     24.69     40.41
HRED            83.82   44.12     19.38   48.25     24.09     39.93
GCN-SeA         85.29   21.32     18.48   47.69     25.15     40.29
RNN+GCN-SeA     94.12   45.59     21.62   50.49     27.69     42.35
", "num": null, "html": null }, "TABREF3": { "text": "Comparison of our models with the baselines on the Cam676 dataset.", "type_str": "table", "content": "", "num": null, "html": null }, "TABREF5": { "text": "Comparison of our models with the baselines on the MultiWOZ dataset.", "type_str": "table", "content": "
Dataset     Model               per-resp. acc   BLEU   ROUGE-1   ROUGE-2   ROUGE-L   Entity F1
En-DSTC2    GCN-SeA+Random      45.9            57.8   67.1      56.5      64.8      72.2
            GCN-SeA+Structure   47.1            59.0   67.4      57.1      65.0      71.9
Hi-DSTC2    GCN-SeA+Random      44.4            54.9   63.1      52.9      60.9      67.2
            GCN-SeA+Structure   47.0            56.0   65.0      55.3      63.0      72.4
Be-DSTC2    GCN-SeA+Random      44.9            56.5   65.4      54.8      62.7      65.6
            GCN-SeA+Structure   47.1            58.4   67.4      57.3      64.9      69.6
Gu-DSTC2    GCN-SeA+Random      45.0            54.0   64.1      54.0      61.9      69.1
            GCN-SeA+Structure   48.1            55.7   65.5      56.2      63.5      72.2
Ta-DSTC2    GCN-SeA+Random      44.8            61.4   66.9      55.6      64.3      70.5
            GCN-SeA+Structure   46.4            62.8   68.5      57.5      66.1      71.9
", "num": null, "html": null }, "TABREF7": { "text": "Ablation results of various models on all versions of DSTC2.", "type_str": "table", "content": "
Dataset     Model             per-resp. acc   BLEU   ROUGE-1   ROUGE-2   ROUGE-L   Entity F1
En-DSTC2    Query             22.8            38.1   53.5      37.6      50.6      18.4
            Query + History   47.1            60.6   68.8      59.4      66.6      72.8
            Query + KB        41.4            55.8   63.7      52.4      60.9      63.5
Hi-DSTC2    Query             22.5            37.5   50.9      37.8      48.4      11.1
            Query + History   45.5            55.9   65.3      55.7      63.3      69.8
            Query + KB        40.5            52.6   60.8      49.7      58.5      60.2
Be-DSTC2    Query             22.7            37.9   51.9      38.0      49.0      10.6
            Query + History   45.7            57.4   67.1      57.4      64.6      69.9
            Query + KB        41.2            54.6   63.0      52.1      60.3      60.2
Gu-DSTC2    Query             22.4            36.1   50.7      37.2      48.4      10.9
            Query + History   21.1            36.6   48.6      35.1      46.3      07.2
            Query + KB        40.1            50.6   60.9      50.1      58.7      59.5
Ta-DSTC2    Query             22.8            39.3   53.6      39.0      50.6      18.8
            Query + History   45.8            63.1   68.9      58.4      66.5      72.6
            Query + KB        40.9            59.2   64.2      52.3      61.5      64.2
", "num": null, "html": null }, "TABREF8": { "text": "Ablations on different parts of the encoder of RNN+GCN-SeA.", "type_str": "table", "content": "
Dataset     Wins %   Losses %   Ties %
En-DSTC2    42.17    22.83      35.00
Cam676      29.00    27.33      43.66
", "num": null, "html": null }, "TABREF9": { "text": "Human evaluation results showing wins, losses, and ties % on En-DSTC2 and Cam676.", "type_str": "table", "content": "", "num": null, "html": null } } } }