{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:42.471440Z" }, "title": "Energy-based Neural Modelling for Large-Scale Multiple Domain Dialogue State Tracking", "authors": [ { "first": "Anh", "middle": [], "last": "Duong", "suffix": "", "affiliation": { "laboratory": "", "institution": "Communications & Entertainment Institute Technological University Dublin", "location": { "country": "Ireland" } }, "email": "" }, { "first": "Robert", "middle": [ "J" ], "last": "Ross", "suffix": "", "affiliation": { "laboratory": "", "institution": "Communications & Entertainment Institute Technological University Dublin", "location": { "country": "Ireland" } }, "email": "robert.ross@tudublin.ie" }, { "first": "John", "middle": [ "D" ], "last": "Kelleher", "suffix": "", "affiliation": { "laboratory": "", "institution": "Communications & Entertainment Institute Technological University Dublin", "location": { "country": "Ireland" } }, "email": "john.d.kelleher@tudublin.ie" }, { "first": "Adapt", "middle": [], "last": "Centre", "suffix": "", "affiliation": { "laboratory": "", "institution": "Communications & Entertainment Institute Technological University Dublin", "location": { "country": "Ireland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Scaling up dialogue state tracking to multiple domains is challenging due to the growth in the number of variables being tracked. Furthermore, dialog state tracking models do not yet explicitly make use of relationships between dialogue variables, such as slots across domains. We propose using energy-based structure prediction methods for large-scale dialogue state tracking task in two multiple domain dialogue datasets. Our results indicate that: (i) modelling variable dependencies yields better results; and (ii) the structured prediction output aligns with the dialogue slot-value constraint principles. 
This leads to promising directions to improve state-of-the-art models by incorporating variable dependencies into their prediction process.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Scaling up dialogue state tracking to multiple domains is challenging due to the growth in the number of variables being tracked. Furthermore, dialogue state tracking models do not yet explicitly make use of relationships between dialogue variables, such as slots across domains. We propose using energy-based structured prediction methods for the large-scale dialogue state tracking task on two multi-domain dialogue datasets. Our results indicate that: (i) modelling variable dependencies yields better results; and (ii) the structured prediction output aligns with the dialogue slot-value constraint principles. This leads to promising directions to improve state-of-the-art models by incorporating variable dependencies into their prediction process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Task-oriented dialogue systems have been developed to assist users in many fields (Brixey et al., 2017; Zhao et al., 2019) . In recent years there has been a rising trend to scale up task-oriented dialogue systems from single domains to multiple domains to improve the generalisability of models and support the transfer of knowledge across domains. This leads to a new challenge in handling dialogues in the multi-domain context, which in turn increases the workload of the dialogue manager, and in particular of the dialogue state tracking component. On the other hand, a number of works have demonstrated the benefit of processing multiple domains; for example, it has been shown that such models yield better performance across domains than single-domain trackers constructed and trained with the same approach (Mrksic et al., 2015) . 
Dialogue state tracking in task-oriented dialogue systems frequently uses a multi-slot representation for the dialogue state, thus casting the task as a multi-task classification problem. In these scenarios, an increase in the number of domains is equivalent to an increase in the number of slots, which in turn enlarges the models and makes the task more challenging. While traditionally one can develop a number of models to track dialogue states in each domain separately, recent advanced techniques tend to train dialogue state trackers in the multi-domain environment. Such multi-domain trackers produce state-of-the-art results (Kim et al., 2020; Heck et al., 2020) .", "cite_spans": [ { "start": 82, "end": 103, "text": "(Brixey et al., 2017;", "ref_id": "BIBREF1" }, { "start": 104, "end": 122, "text": "Zhao et al., 2019)", "ref_id": "BIBREF27" }, { "start": 813, "end": 834, "text": "(Mrksic et al., 2015)", "ref_id": "BIBREF16" }, { "start": 1471, "end": 1489, "text": "(Kim et al., 2020;", "ref_id": "BIBREF11" }, { "start": 1490, "end": 1508, "text": "Heck et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To date state-of-the-art dialogue state trackers have treated the task as a set of individual domain-dependent classification problems (Heck et al., 2020; Zhou and Small, 2019) . However, we argue that such approaches leave room for improvement, particularly considering the nature of human-machine interactions (Landragin, 2013) . Specifically, we argue that the multi-task classification methodology usually does not take into account the relationships between dialogue slot variables, despite the fact that these factors can play an essential part in dialogue state prediction (Trinh et al., 2019a) . 
Therefore, we propose to explicitly incorporate dialogue variable associations into the prediction process in a multi-domain dialogue environment, thus casting the dialogue state tracking task as a structured prediction problem.", "cite_spans": [ { "start": 134, "end": 153, "text": "(Heck et al., 2020;", "ref_id": "BIBREF8" }, { "start": 154, "end": 175, "text": "Zhou and Small, 2019)", "ref_id": "BIBREF29" }, { "start": 325, "end": 342, "text": "(Landragin, 2013)", "ref_id": "BIBREF14" }, { "start": 594, "end": 615, "text": "(Trinh et al., 2019a)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we demonstrate the manner in which dialogue variable dependencies impact the dialogue state tracking process in a multiple domain context. We choose two newly published multiple domain datasets, MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 (Eric et al., 2019) , to conduct our study. These datasets contain a large number of dialogues across several different domains, making them well suited to our study. Our investigation is detailed in three stages:", "cite_spans": [ { "start": 234, "end": 261, "text": "(Budzianowski et al., 2018)", "ref_id": "BIBREF2" }, { "start": 279, "end": 298, "text": "(Eric et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Data analysis -It is important to clearly determine whether variable dependencies exist in dialogue data, and to what extent they are present in dialogue states. 
These questions can be answered by performing statistical tests on dialogue data (Trinh et al., 2019c ).", "cite_spans": [ { "start": 239, "end": 259, "text": "(Trinh et al., 2019c", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Model development -Since we treat the dialogue state tracking task as a structured prediction problem, we develop an energy-based tracking model for the task, where the energy-based learning methodology has been found effective in handling variable dependencies (Trinh et al., 2019b) .", "cite_spans": [ { "start": 263, "end": 284, "text": "(Trinh et al., 2019b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Evaluation & Analysis -We evaluate the performance of our energy-based model and benchmark it against state-of-the-art trackers. Furthermore, we conduct an analysis of the effect of dialogue variable dependencies on the dialogue state tracking process in comparison with a multi-task deep learning method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, there have been structured prediction models developed for dialogue state tracking in single domains, but no work has been performed for multiple domains. On the other hand, several multi-domain dialogue state trackers study the topic of variable dependencies to some extent, but do not provide a detailed analysis of this phenomenon. 
Therefore, the contributions of our work are two-fold: (i) a large-scale structured prediction model for multi-domain dialogue state tracking; and (ii) a systematic analysis of variable dependencies across dialogue slots and domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The work presented in this paper is an empirical extension of our previous work on capturing variable dependencies in dialogue states within single dialogue domains (Trinh et al., 2019a,b) . We demonstrate that the energy-based method has good generalisability when applied to track dialogue states in multiple domain settings.", "cite_spans": [ { "start": 164, "end": 187, "text": "(Trinh et al., 2019a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are a number of works that have, to some extent, studied the variable associations in dialogue data in both single and multiple domain contexts. Single-domain dialogue variable dependencies were explicitly studied in the work by Trinh et al. (2019c,a) . The associations between slots in single domain dialogue data are demonstrated to be beneficial factors for dialogue state tracking, and structured prediction approaches such as energy-based learning are effective in studying this phenomenon. On the other hand, although there has been no explicit study on variable dependencies in multiple domain dialogue data, we can indirectly infer the benefit of modelling such dependencies. Mrksic et al. (2015) show that shared models across dialogue domains yield better results than their domain-specific counterparts. Similarly, the TRADE model highlighted the correlations between domains by training the base model on all of the domains except one, then fine-tuning on the remaining domain.", "cite_spans": [ { "start": 233, "end": 255, "text": "Trinh et al. 
(2019c,a)", "ref_id": null }, { "start": 689, "end": 709, "text": "Mrksic et al. (2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Variable Associations in Multi-Domain Dialogue", "sec_num": "2" }, { "text": "Since we focus on multiple domain dialogue state tracking, we conduct our study on MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 (Eric et al., 2019) , two novel chat-based multi-domain dialogue datasets. We perform statistical tests on the dialogue data, and present the data analysis results in Figure 1 . The statistical tests are Pearson's chi-squared test, which is useful for detecting pairwise dependencies between variables, and the chi-squared-based Cramer's V measure, which quantifies the strength of a dependency once it is confirmed (Trinh et al., 2019c) . In Figure 1 we present the heatmap of measured Cramer's V between all slot pairs in the MultiWOZ 2.1 dataset, since this dataset contains manually fixed labels based on MultiWOZ 2.0 data.", "cite_spans": [ { "start": 96, "end": 123, "text": "(Budzianowski et al., 2018)", "ref_id": "BIBREF2" }, { "start": 141, "end": 160, "text": "(Eric et al., 2019)", "ref_id": "BIBREF4" }, { "start": 550, "end": 571, "text": "(Trinh et al., 2019c)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 307, "end": 315, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 577, "end": 585, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Variable Associations in Multi-Domain Dialogue", "sec_num": "2" }, { "text": "The analysis explicitly confirms the variable dependencies in the multiple domain dialogue data, with pairwise statistical significance of p < 0.05 for all slot pairs. These dependencies exist at both the slot and domain levels. 
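The pairwise test and strength measure described above can be sketched in code. This is a minimal pure-Python illustration, not the paper's implementation; the toy contingency table of co-occurring slot values is hypothetical:

```python
from math import sqrt

def cramers_v(table):
    # table: contingency counts for the co-occurring values of two slots
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    # Pearson's chi-squared statistic over all cells
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (observed - expected) ** 2 / expected
    # Cramer's V normalises chi2 by n and the smaller table dimension
    k = min(len(table), len(table[0])) - 1
    return sqrt(chi2 / (n * k))

# A perfectly dependent slot pair gives V = 1; an independent pair gives V = 0
print(cramers_v([[10, 0], [0, 10]]))  # 1.0
print(cramers_v([[5, 5], [5, 5]]))    # 0.0
```

In practice the significance of each pair would first be checked with the chi-squared test itself before interpreting V.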
Our analysis results also align to some extent with the cosine similarities of the slot embeddings presented in the TRADE model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable Associations in Multi-Domain Dialogue", "sec_num": "2" }, { "text": "Energy-based learning (LeCun et al., 2006) is an approach to structured prediction that can be used to account for variable dependencies in a supervised learning process. The core concept of the approach is to represent the associations of all variables in the system with a scalar value called energy, and to train an energy function that assigns low energy values to valid combinations of variables. There are two key functions in the energy-based dialogue state tracker that we have developed:", "cite_spans": [ { "start": 22, "end": 42, "text": "(LeCun et al., 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Energy-based Learning Dialogue State Tracking", "sec_num": "3" }, { "text": "\u2022 Feature function F (X) -As a first step, we transform raw data into a distributed representation; this can be done with advanced techniques such as combinations of embedding and recurrent neural networks (Kelleher, 2019) . The feature function can be either pretrained separately as an auxiliary task or jointly trained with the energy function.", "cite_spans": [ { "start": 206, "end": 222, "text": "(Kelleher, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Energy-based Learning Dialogue State Tracking", "sec_num": "3" }, { "text": "\u2022 Energy function E(F (X), Y ) -The energy function is designed to capture variable dependencies and express them via a scalar energy value. 
In our work, we develop the energy function with a deep learning architecture called Structured Prediction Energy Networks (SPEN) (Belanger and McCallum, 2016) to capture the dependencies between input and output variables, as well as among output variables.", "cite_spans": [ { "start": 278, "end": 307, "text": "(Belanger and McCallum, 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Energy-based Learning Dialogue State Tracking", "sec_num": "3" }, { "text": "The working mechanism of an energy-based model is different from a standard feedforward deep learning model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Energy-based Learning Dialogue State Tracking", "sec_num": "3" }, { "text": "\u2022 Learning process -During the learning process the energy function is typically trained to assign lower energy values to correct variable configurations, i.e. the desired output corresponds to the minimal energy value with respect to the input. In our work we adopt a variant of the learning strategy detailed for the Deep Value Networks (DVN) architecture (Gygli et al., 2017) for this task.", "cite_spans": [ { "start": 365, "end": 385, "text": "(Gygli et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Energy-based Learning Dialogue State Tracking", "sec_num": "3" }, { "text": "\u2022 Inference process -Since in the energy-based learning methodology the energy function is trained to be an estimator for the goodness of fit between variables in the system, the output variables cannot be predicted in a straightforward manner. 
Therefore, we perform multiple inference loops guided by the gradient of the energy surface to find a set of labels for a given input using the trained energy function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Energy-based Learning Dialogue State Tracking", "sec_num": "3" }, { "text": "Task-oriented dialogues consist of multiple turns, where each turn contains machine and user actions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Recurrent Neural Feature Network", "sec_num": "3.1" }, { "text": "In the MultiWOZ datasets these actions are presented in sentence format rather than as dialogue act semantic representations. To accommodate the structure of multiple domain dialogue data, we make use of a multi-task LSTM-based dialogue state encoder (Trinh et al., 2018) . In the description below we denote the dialogue input data as X, and the multi-task LSTM network as F (X). The architecture of our feature network is visualised in Figure 2 . Our LSTM-based feature network consists of five layers:", "cite_spans": [ { "start": 249, "end": 269, "text": "(Trinh et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 426, "end": 434, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Hierarchical Recurrent Neural Feature Network", "sec_num": "3.1" }, { "text": "\u2022 Word embedding layer -The word embedding layer is trained from scratch due to the small vocabulary present in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Recurrent Neural Feature Network", "sec_num": "3.1" }, { "text": "\u2022 Sentence-level LSTM layer -To transform each sentence into vector representations, we make use of a bidirectional LSTM structure (Bi-LSTM) (Schuster and Paliwal, 1997) . 
In this layer machine and user transcripts are processed with separate Bi-LSTM cells, and their output vectors are then concatenated before being fed into the next layer.", "cite_spans": [ { "start": 138, "end": 166, "text": "(Schuster and Paliwal, 1997)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Recurrent Neural Feature Network", "sec_num": "3.1" }, { "text": "\u2022 Turn-level LSTM layer -A number of unidirectional LSTM cells are used to roll out the dialogue by turns. As highlighted in an earlier multi-task LSTM-based model (Trinh et al., 2018) , using a number of LSTM cells can extract more useful information. The output of all the LSTM cells is concatenated into joint vectors, and treated as dialogue turn representations.", "cite_spans": [ { "start": 164, "end": 184, "text": "(Trinh et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Recurrent Neural Feature Network", "sec_num": "3.1" }, { "text": "\u2022 Domain-specific LSTM layer -For each domain in the data we assign one LSTM cell to specialise the information downstream from the overall dialogue to the domain level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Recurrent Neural Feature Network", "sec_num": "3.1" }, { "text": "\u2022 Slot-specific classifiers -The output layer consists of a number of slot-specific classifiers. Each classifier produces the prediction for the slot it corresponds to with a softmax activation function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Recurrent Neural Feature Network", "sec_num": "3.1" }, { "text": "We pretrain this feature network F (X) following the method highlighted in a number of works on energy-based learning (Belanger and McCallum, 2016; Trinh et al., 2019b) . It should be noted that the dialogue features can be extracted as the output of either the turn-level layer or the domain-specific layer. 
From our experiments, we have observed that the domain-specific LSTM layer produces more meaningful representations, thus it is more beneficial to pass them on to the energy function.", "cite_spans": [ { "start": 121, "end": 150, "text": "(Belanger and McCallum, 2016;", "ref_id": "BIBREF0" }, { "start": 151, "end": 171, "text": "Trinh et al., 2019b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Recurrent Neural Feature Network", "sec_num": "3.1" }, { "text": "Since we focus on studying the variable dependencies between slots, our energy function must include a term for this phenomenon explicitly. We base the design of our energy network on the concept of Structured Prediction Energy Networks (SPEN) (Belanger and McCallum, 2016) . The SPEN network is developed as a deep learning architecture to define an energy function that includes two individual energy terms, local energy and global energy:", "cite_spans": [ { "start": 246, "end": 275, "text": "(Belanger and McCallum, 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Energy Network", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E(F(X), Y) = E_local(F(X), Y) + E_global(Y)", "eq_num": "(1)" } ], "section": "Deep Learning Energy Network", "sec_num": "3.2" }, { "text": "Local energy is computed between input and output (label) variables, and is intended to capture the agreement between feature representations and labels:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Energy Network", "sec_num": "3.2" }, { "text": "E_local(F(X), Y) = \u2211_{i=1}^{L} y_i W_i F(X) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Energy Network", "sec_num": "3.2" }, { "text": "where W is the set of trainable parameters, Y = {y_i}_{i=1}^{L} is a label vector, and L is the number of label classes. Global energy meanwhile is the energy term that captures the relationships between labels independently of the input features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Energy Network", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E_global(Y) = W_g2 f(W_g1 Y)", "eq_num": "(3)" } ], "section": "Deep Learning Energy Network", "sec_num": "3.2" }, { "text": "where weights W_g1 and W_g2 are trainable parameters, and f (\u2022) is a non-linear function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Energy Network", "sec_num": "3.2" }, { "text": "The purpose of the learning process is to train the energy function to measure the goodness of fit between variables correctly. It is important to design a suitable objective function to ensure that the energy function is well trained (Trinh et al., 2020) . For multi-label classification tasks, the F_1 measurement is a common evaluation metric. In our structured dialogue state tracking task we make use of the F_1 metric for continuous variables, and interpret it as the ground truth energy:", "cite_spans": [ { "start": 235, "end": 255, "text": "(Trinh et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "E*_F1(Y, Y*) = 2 \u2211_i y_i y*_i / (\u2211_i y_i + \u2211_i y*_i) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "where Y is the predicted labels, and Y* is the ground truth labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "Since the ground truth energy is calculated with our F_1 measurement, its value can only fall into the range [0, 1]. 
Therefore, it is appropriate to use a cross-entropy function as the loss function between the predicted and ground truth energies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "L(E, E*_F1) = \u2212E*_F1 log E \u2212 (1 \u2212 E*_F1) log(1 \u2212 E) (5) where E = E(F(X), Y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "is the predicted energy, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "E*_F1 = E*_F1(Y, Y*)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "is the ground truth energy. There exist slot-value constraint rules in the task-oriented dialogue state tracking task such that at any time in the conversation each slot can take at most one value. However, multi-label classification methods do not include a mechanism to constrain the output prediction to follow these rules. Therefore, we introduce a regularisation term to encourage our energy-based tracker to shape the output into the desired format:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "R(Y, Y*) = ((\u2211_i y_i \u2212 \u2211_i y*_i) / \u2211_i y*_i)^2 (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "where Y is the predicted output, and Y* is the ground truth labels. 
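As an illustrative sketch (not the authors' released code), the training signals of Eqs. (4)-(6) can be written for relaxed, binarised label vectors as follows; the variable names and toy vectors are ours:

```python
from math import log

def f1_energy(y, y_star):
    # Ground-truth energy (Eq. 4): soft F1 between predicted and true label vectors
    overlap = sum(p * t for p, t in zip(y, y_star))
    return 2 * overlap / (sum(y) + sum(y_star))

def energy_loss(e, e_star, eps=1e-12):
    # Eq. 5: cross entropy between predicted energy e and ground-truth energy e_star
    return -e_star * log(e + eps) - (1 - e_star) * log(1 - e + eps)

def label_regulariser(y, y_star):
    # Eq. 6: penalises predictions whose total label mass drifts from the truth,
    # encouraging at most one active value per slot as in the ground truth
    return ((sum(y) - sum(y_star)) / sum(y_star)) ** 2

y_star = [1, 0, 0, 1]          # one active value per slot (two slots, two values each)
y_good = [0.9, 0.1, 0.0, 1.0]  # near-correct relaxed prediction
y_bad = [1, 1, 1, 1]           # violates the slot-value constraint

print(f1_energy(y_good, y_star) > f1_energy(y_bad, y_star))  # True
print(label_regulariser(y_bad, y_star))                      # 1.0
```

The full objective then combines `energy_loss` with `label_regulariser` weighted by a coefficient, as formulated below.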
Our final objective function for the learning process of the energy network, including the label regularisation term, is formulated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = L(E, E*_F1) + \u03b1R(Y, Y*)", "eq_num": "(7)" } ], "section": "Learning Process", "sec_num": "3.3" }, { "text": "where \u03b1 is a regularisation coefficient. This learning process is visualised in Fig. 3 . ", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 86, "text": "Fig. 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Learning Process", "sec_num": "3.3" }, { "text": "The energy function, as described above, can be interpreted as an estimator of the goodness of fit of the variables in the system. However, at prediction time we do not have the output variables that are an essential part of the energy formulation. 
Instead, to determine these values we perform a loopy inference process guided by the gradient of the energy surface.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Process", "sec_num": "3.4" }, { "text": "We start with a random hypothesis and use gradient ascent to update the output hypothesis:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Process", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Y^(0) = {random(y_i)}^L, Y^(t+1) = P_Y(Y^(t) + \u03b7\u2207_Y E(F(X), Y^(t)))", "eq_num": "(8)" } ], "section": "Inference Process", "sec_num": "3.4" }, { "text": "where P_Y is the projection operation to shape the predicted output to the output variable space Y = {y_i}^M \u2208 {[0, 1]}^M , and \u03b7 is the learning rate for gradient ascent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Process", "sec_num": "3.4" }, { "text": "Here, it should be noted that the energy function is an estimator for our F_1 measurement of the predicted output; thus, we aim to maximise the F_1 score to achieve the desired prediction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Process", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E(F(X), Y^(t)) \u223c E*_F1(Y^(t), Y*)", "eq_num": "(9)" } ], "section": "Inference Process", "sec_num": "3.4" }, { "text": "4 Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Process", "sec_num": "3.4" }, { "text": "As indicated earlier, we have selected the multiple domain dialogue datasets, MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 (Eric et al., 2019) , to conduct our study of variable dependencies. 
Since the MultiWOZ 2.0 dataset was known to contain many labelling errors, the later version, MultiWOZ 2.1, was manually re-annotated to correct them. Each dataset contains more than 10,000 dialogues across 7 domains, split into train, development and test subsets for training, validation and testing respectively. However, following the common practice of previous works we excluded two domains that rarely appear in the datasets. We followed the data processing and scoring scripts from the TRADE model for our dialogue state tracking task.", "cite_spans": [ { "start": 91, "end": 118, "text": "(Budzianowski et al., 2018)", "ref_id": "BIBREF2" }, { "start": 136, "end": 155, "text": "(Eric et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Inference Process", "sec_num": "3.4" }, { "text": "Our experiments were conducted in two stages: first, we trained a multi-task learning network to extract dialogue features; then, we experimented on the energy-based learning level to explore inter-label dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Process", "sec_num": "3.4" }, { "text": "Our model's hyperparameters are presented in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 52, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Inference Process", "sec_num": "3.4" }, { "text": "We train both the feature network and the energy-based models with the Adam optimiser (Kingma and Ba, 2015) for 300 epochs. To avoid overfitting, we apply early stopping and find that our models converge shortly after 200 epochs. We trained the feature network 3 times for each dataset, and selected the best model to extract features. 
The energy-based network was trained 5 times and the predictions were ensembled into the final dialogue states for evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Process", "sec_num": "3.4" }, { "text": "We evaluate the performance of both our multi-task feature system and the energy-based tracker with an Accuracy metric as is common in dialogue state tracking. The results are reported in Table 2 alongside the results of a number of state-of-the-art systems known to us. Overall, our energy-based dialogue state tracker yields competitive results in comparison to models that account for variable relationships using techniques such as attention mechanisms (Kumar et al., 2020; Zhong et al., 2018) and transfer learning . When accounting for the variable dependencies with the energy-based method, we improve the belief state tracking results by large margins, i.e., 13.9% for MultiWOZ 2.0 and 18.1% for MultiWOZ 2.1. We believe that there are at least two reasons for this large improvement:", "cite_spans": [ { "start": 456, "end": 476, "text": "(Kumar et al., 2020;", "ref_id": null }, { "start": 477, "end": 496, "text": "Zhong et al., 2018)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 187, "end": 194, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results & Discussion", "sec_num": "5" }, { "text": "\u2022 High-quality features are extracted from dialogue data due to the architecture of a hierarchical multi-task LSTM network. 
As we extract input features from the domain-specific LSTM cells, the features contain both dialogue information up to the current turn as well as domain information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "5" }, { "text": "\u2022 The associations between variables, in particular label dependencies, are accounted for explicitly; hence more information is available for the classification of each slot than would be available in a straightforward multi-task classification process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "5" }, { "text": "While the energy-based system does not achieve state-of-the-art performance, it should be noted that state-of-the-art systems currently employ a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "5" }, { "text": "MultiWOZ 2.0 MultiWOZ 2.1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "TripPy (Heck et al., 2020) -0.553 Schema-guided (Chen et al., 2020) 0.512 0.552 DST-Picklist (Zhang et al., 2019) -0.533 SOM-DST (Kim et al., 2020) 0.517 0.530 MA-DST (Kumar et al., 2020) -0.519 DSTQA (Zhou and Small, 2019) 0.514 0.512 COMER (Ren et al., 2019) 0.488 -TRADE 0.486 0.456 HyST 0.442 -Neural reading 0.411 -GCE (Nouri and Hosseini-Asl, 2018) 0.363 -GLAD (Zhong et al., 2018) 0.356 -Our work Energy-based system 0.488 0.547 Multi-task feature system 0.349 0.366 very wide variety of modelling techniques, while the currently presented work focuses on the addition of a mechanism to guide final labelling. For example, TripPy (Heck et al., 2020) , which achieves the highest accuracy on the MultiWOZ 2.1 data, is based on span-prediction and a number of memory mechanisms. Meanwhile, SOM-DST (Kim et al., 2020) improves dialogue state tracking efficiency with a selectively overwriting memory mechanism. 
Both of these, however, do not explicitly exploit variable dependencies as potentially useful factors of dialogue states. The practical use of the energy-based learning method may lie in fine-tuning results to take variable dependencies into account. Because the energy-based model is developed separately from the feature network, we can apply it to state-of-the-art models to investigate the effectiveness of variable dependencies in different situations.", "cite_spans": [ { "start": 7, "end": 26, "text": "(Heck et al., 2020)", "ref_id": "BIBREF8" }, { "start": 48, "end": 67, "text": "(Chen et al., 2020)", "ref_id": "BIBREF3" }, { "start": 93, "end": 113, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF26" }, { "start": 129, "end": 147, "text": "(Kim et al., 2020)", "ref_id": "BIBREF11" }, { "start": 167, "end": 187, "text": "(Kumar et al., 2020)", "ref_id": null }, { "start": 201, "end": 223, "text": "(Zhou and Small, 2019)", "ref_id": "BIBREF29" }, { "start": 242, "end": 260, "text": "(Ren et al., 2019)", "ref_id": "BIBREF18" }, { "start": 367, "end": 387, "text": "(Zhong et al., 2018)", "ref_id": "BIBREF28" }, { "start": 636, "end": 655, "text": "(Heck et al., 2020)", "ref_id": "BIBREF8" }, { "start": 798, "end": 816, "text": "(Kim et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "One final observation with respect to the results is the difference in performance across the MultiWOZ 2.0 and 2.1 datasets. Even though the labels in the MultiWOZ 2.1 dataset were corrected manually, meaning the data is less noisy than the MultiWOZ 2.0 data, not all systems yield better results on MultiWOZ 2.1 than on MultiWOZ 2.0; e.g., models such as TRADE and DSTQA (Zhou and Small, 2019) perform better with the original noisy data.
In contrast, we observe that other state-of-the-art systems, including our energy-based tracker, perform better with the cleaner data (MultiWOZ 2.1); this is of course a common phenomenon in supervised learning.", "cite_spans": [ { "start": 370, "end": 392, "text": "(Zhou and Small, 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "In terms of accuracy, our energy-based tracker outperforms the multi-task feature system by a large margin. However, the accuracy metric does not in itself verify the system's ability to capture variable dependencies. In order to evaluate the effectiveness of the energy-based learning method in capturing variable dependencies, we conduct an analysis of the performance of our trackers on the MultiWOZ 2.1 test set. Specifically, we analyse pairwise variable dependencies with Pearson's chi-squared test and measure their strength with Cramer's V coefficient, as detailed earlier in Section 2. We present the results of the variable association analysis between a number of slots in Table 3 with respect to the test labels, the labels produced by the Energy-based tracker, and the labels produced by our Multi-Task Learning tracker. For space reasons we show the dependencies for only a subset of the slots; our analysis of the remaining slots indicates that they exhibit similar tendencies, and, more importantly, that the data presented here is representative of this more general pattern. The analysis results demonstrate that the energy-based tracker more consistently mirrors the association strengths seen in the test labels than does our baseline Multi-Task Learning approach.
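As an illustrative sketch of the association analysis just described (the slot label sequences and helper function below are hypothetical, not the implementation used in our experiments), the pairwise dependency between two slots can be measured by building a contingency table of co-occurring values, computing Pearson's chi-squared statistic, and normalising it to Cramer's V:

```python
import numpy as np

def cramers_v(labels_a, labels_b):
    """Pearson's chi-squared statistic and Cramer's V for two aligned
    sequences of categorical slot labels (one entry per dialogue turn)."""
    a_vals, a_idx = np.unique(labels_a, return_inverse=True)
    b_vals, b_idx = np.unique(labels_b, return_inverse=True)
    # Contingency table of co-occurring slot values.
    table = np.zeros((len(a_vals), len(b_vals)))
    np.add.at(table, (a_idx, b_idx), 1)
    n = table.sum()
    # Expected counts under independence, then the chi-squared statistic.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    # Cramer's V normalises chi-squared to the [0, 1] range.
    v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
    return chi2, v

# Hypothetical per-turn labels for two slots from different domains.
hotel_area = ["north", "north", "centre", "east", "centre", "north"]
restaurant_area = ["north", "north", "centre", "east", "centre", "centre"]
chi2, v = cramers_v(hotel_area, restaurant_area)
```

A value of V near 1 indicates a strong association between the two slots, and V near 0 indicates independence; comparing the V produced by a tracker's outputs against the V of the gold labels is what the analysis in Table 3 reports.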
This is evidenced by the smaller margins in Cramer's V coefficients between the Energy-based tracker and the Test label results than between the Multi-task system results and the Test label results 1 . There are, however, a few exceptions to this trend, namely the attraction.area - restaurant.price range and attraction.area - train.destination pairs, where the multi-task system produces associations closer to the test label case than does the energy-based model.", "cite_spans": [], "ref_spans": [ { "start": 684, "end": 691, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Variable Dependencies Analysis", "sec_num": "5.1" }, { "text": "Overall, we argue that the ability to capture variable dependencies between slots across dialogue domains explains why the energy-based method outperforms the multi-task learning approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variable Dependencies Analysis", "sec_num": "5.1" }, { "text": "Dialogue states of many task-oriented dialogue systems must satisfy a slot-value constraint principle: each slot must not have more than one value in the belief state of any turn. Specifically, the value of each informable slot is either none, if it is not mentioned by the user, or a specific value, for example Chinese for the slot food in the restaurant domain if that information is provided by the user. While the underlying multi-task feature system follows this rule strictly due to the softmax activation function in the output of the slot-specific classifiers, the energy-based tracking model is not guaranteed to maintain this strict constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot-Value Constraint Analysis", "sec_num": "5.2" }, { "text": "To overcome this challenge, we proposed a label regularisation term (Equation 6) in the objective function detailed in Section 3.3.
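The constraint that this regularisation encourages can be stated operationally: at a given belief score threshold, no slot may have more than one activated value. A minimal sketch of such a check follows (the slot names and belief scores here are hypothetical, chosen only for illustration):

```python
def satisfies_slot_value_constraint(belief_scores, threshold):
    """Return True if, for every slot, at most one value's belief score
    exceeds the activation threshold."""
    for slot, value_scores in belief_scores.items():
        activated = [v for v, s in value_scores.items() if s > threshold]
        if len(activated) > 1:
            return False
    return True

# Hypothetical belief scores for a single dialogue turn.
turn_beliefs = {
    "restaurant.food": {"chinese": 0.91, "italian": 0.07, "none": 0.02},
    "hotel.area": {"north": 0.55, "centre": 0.61, "none": 0.10},
}
satisfies_slot_value_constraint(turn_beliefs, threshold=0.5)  # False: hotel.area activates two values
satisfies_slot_value_constraint(turn_beliefs, threshold=0.7)  # True: at most one value per slot
```

Counting the proportion of turns that pass such a check, at different thresholds, is the basis of the slot-value constraint analysis reported below.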
To evaluate the effectiveness of this mechanism, we conduct an additional analysis to examine the behaviour of our energy-based system under this regularisation. This analysis is conducted in two stages:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot-Value Constraint Analysis", "sec_num": "5.2" }, { "text": "\u2022 First, we train and evaluate our energy-based method on the dialogue data without the label regularisation term. Thus, the loss function (Equation 5) becomes our learning objective in this baseline case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot-Value Constraint Analysis", "sec_num": "5.2" }, { "text": "\u2022 Second, we set different threshold values and, for each threshold, calculate the proportion of correct predictions over the total number of dialogue turns that follow the slot-value constraint rules. A value is considered activated if its predicted belief score exceeds the threshold. This stage is conducted for our energy-based method both with and without the regularisation term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot-Value Constraint Analysis", "sec_num": "5.2" }, { "text": "The slot-value constraint analysis is presented in Table 4 : Analysis of the impact of label regularisation on energy-based dialogue state tracking on the MultiWOZ 2.0 & 2.1 data. The results are reported as the proportion (%) of correct predictions over the total number of dialogue turns that follow the slot-value constraint rules.
+Reg/-Reg denotes the presence/absence of the label regularisation in the learning process.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 58, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Slot-Value Constraint Analysis", "sec_num": "5.2" }, { "text": "The analysis results demonstrate that our energy-based systems with the label regularisation consistently outperform those that do not include this term in the learning process, across different belief score thresholds. Here, the label regularisation helps guide the system's prediction behaviour towards the requirements of task-oriented domains. We can conclude that the impact of label regularisation on dialogue state tracking is systematic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot-Value Constraint Analysis", "sec_num": "5.2" }, { "text": "In this paper we demonstrated the effectiveness of applying the energy-based learning method to a large-scale dialogue state tracking task in multiple domains. We showed that the energy-based method is capable of capturing the dependencies between dialogue variables such as slots across domains, and thus significantly improves performance over a multi-task deep learning system. Our analyses also showed that the structured prediction method can produce dialogue states that follow the dialogue slot-value constraint rules, in contrast with a multi-label classification method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Although the results achieved with the energy-based method are competitive with published dialogue state tracking systems, they are not yet state of the art. There are several directions in which to investigate the further impact of an energy-based methodology on the dialogue state tracking task. One promising direction is the application of our energy-based method on top of existing state-of-the-art systems to further improve their performance.
Another direction is to refine the energy-based structure and investigate various strategies for the learning and inference processes, to improve the ability to integrate the captured dependencies into the structured prediction at a higher level. Furthermore, our long-term goal is to apply the structured learning approach to tracking different aspects of conversation, such as personality and preference, as well as user intents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "It should be noted that stronger associations do not necessarily indicate better tracking performance: our goal is to capture valid associations, not to arbitrarily increase the number of associations seen in label outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Structured Prediction Energy Networks", "authors": [ { "first": "David", "middle": [], "last": "Belanger", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 33rd International Conference on Machine Learning", "volume": "48", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Belanger and Andrew McCallum. 2016. Structured Prediction Energy Networks.
In Proceedings of the 33rd International Conference on Machine Learning, volume 48.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SHIHbot : A Facebook chatbot for Sexual Health Information on HIV / AIDS", "authors": [ { "first": "Jacqueline", "middle": [], "last": "Brixey", "suffix": "" }, { "first": "Rens", "middle": [], "last": "Hoegen", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Rusow", "suffix": "" }, { "first": "Karan", "middle": [], "last": "Singla", "suffix": "" }, { "first": "Xusen", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Leuski", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the SIGDIAL 2017 Conference", "volume": "", "issue": "", "pages": "370--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacqueline Brixey, Rens Hoegen, Wei Lan, Joshua Rusow, Karan Singla, Xusen Yin, Ron Artstein, and Anton Leuski. 2017. SHIHbot: A Facebook chatbot for Sexual Health Information on HIV / AIDS.
In Proceedings of the SIGDIAL 2017 Conference, pages 370-373.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "MultiWOZ -A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling", "authors": [ { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Bo-Hsiang", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Casanueva", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Ultes", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Osman Ramadan", "suffix": "" }, { "first": "", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" } ], "year": 2018, "venue": "Proceedings of 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pawe\u0142 Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I\u00f1igo Casanueva, Stefan Ultes, Osman Ra- madan, and Milica Ga\u0161i\u0107. 2018. MultiWOZ -A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling. 
In Proceed- ings of 2018 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Schema-Guided Multi-Domain Dialogue State Tracking with Graph Attention Neural Networks", "authors": [ { "first": "Lu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Boer", "middle": [], "last": "Lv", "suffix": "" }, { "first": "Chi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Su", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2020, "venue": "Association for the Advancement of Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lu Chen, Boer Lv, Chi Wang, Su Zhu, Bowen Tan, and Kai Yu. 2020. Schema-Guided Multi-Domain Dia- logue State Tracking with Graph Attention Neural Networks. In Association for the Advancement of Artificial Intelligence.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "MultiWOZ 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines", "authors": [ { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Shachi", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Adarsh", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Sethi", "suffix": "" }, { "first": "Anuj", "middle": [ "Kumar" ], "last": "Goyal", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Sanchit", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Shuyang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": 
null, "urls": [], "raw_text": "Mihail Eric, Rahul Goel, Shachi Paul, Adarsh Kumar, Abhishek Sethi, Anuj Kumar Goyal, Peter Ku, Sanchit Agarwal, Shuyang Gao, and Dilek Hakkani-Tur. 2019. MultiWOZ 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Dialog State Tracking: A Neural Reading Comprehension Approach", "authors": [ { "first": "Shuyang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Sethi", "suffix": "" }, { "first": "Sanchit", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Tagyoung", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the SIGDial 2019 Conference", "volume": "", "issue": "", "pages": "264--273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuyang Gao, Abhishek Sethi, Sanchit Agarwal, Tagyoung Chung, and Dilek Hakkani-Tur. 2019. Dialog State Tracking: A Neural Reading Comprehension Approach. In Proceedings of the SIGDial 2019 Conference, pages 264-273.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "HyST: A Hybrid Approach for Flexible and Accurate Dialogue State Tracking", "authors": [ { "first": "Rahul", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Shachi", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the INTERSPEECH 2019 Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rahul Goel, Shachi Paul, and Dilek Hakkani-Tur. 2019. HyST: A Hybrid Approach for Flexible and Accurate Dialogue State Tracking.
In Proceedings of the INTERSPEECH 2019 Conference.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep Value Networks Learn to Evaluate and Iteratively Refine Structured Outputs", "authors": [ { "first": "Michael", "middle": [], "last": "Gygli", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gygli, Mohammad Norouzi, and Anelia An- gelova. 2017. Deep Value Networks Learn to Eval- uate and Iteratively Refine Structured Outputs. In Proceedings of the 34th International Conference on Machine Learning.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State Tracking", "authors": [ { "first": "Michael", "middle": [], "last": "Heck", "suffix": "" }, { "first": "Nurul", "middle": [], "last": "Carel Van Niekerk", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Lubis", "suffix": "" }, { "first": "Hsien-Chin", "middle": [], "last": "Geishauser", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Moresi", "suffix": "" }, { "first": "", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the SIGDial 2020 Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Heck, Carel van Niekerk, Nurul Lubis, Chris- tian Geishauser, Hsien-Chin Lin, Marco Moresi, and Milica Ga\u0161i\u0107. 2020. TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State Tracking. 
In Proceedings of the SIGDial 2020 Conference.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Long Short-Term Memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "Jurgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": { "DOI": [ "10.1162/neco.1997.9.8.1735" ] }, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deep Learning", "authors": [ { "first": "D", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Kelleher", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D Kelleher. 2019. Deep Learning. The MIT Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Efficient Dialogue State Tracking by Selectively Overwriting Memory", "authors": [ { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sohee", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Gyuwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sang-Woo", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th annual meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sungdong Kim, Sohee Yang, Gyuwan Kim, and Sang- Woo Lee. 2020. Efficient Dialogue State Tracking by Selectively Overwriting Memory. 
In Proceed- ings of the 58th annual meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adam: A Method for Stochastic Optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd International Conference for Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceed- ings of the 3rd International Conference for Learn- ing Representations.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "MA-DST: Multi-Attention-Based Scalable Dialog State Tracking", "authors": [ { "first": "Adarsh", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Anuj", "middle": [], "last": "Goyal", "suffix": "" } ], "year": null, "venue": "Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adarsh Kumar, Peter Ku, Anuj Goyal, Angeliki Met- allinou, and Dilek Hakkani-Tur. 2020. MA-DST: Multi-Attention-Based Scalable Dialog State Track- ing. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Man-Machine Dialogue: Design and Challenges", "authors": [ { "first": "", "middle": [], "last": "Fr\u00e9d\u00e9ric Landragin", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1002/9781118578681" ] }, "num": null, "urls": [], "raw_text": "Fr\u00e9d\u00e9ric Landragin. 2013. Man-Machine Dialogue: Design and Challenges. 
ISTE Ltd and John Wiley & Sons, Inc.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multidomain Dialog State Tracking using Recurrent Neural Networks", "authors": [ { "first": "Nikola", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "O'", "middle": [], "last": "Diarmuid", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Seaghdha", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Wen", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "794--799", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikola Mrksic, Diarmuid O'Seaghdha, Blaise Thom- son, Milica Gasic, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multi- domain Dialog State Tracking using Recurrent Neu- ral Networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics, pages 794-799.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Toward Scalable Neural Dialogue State Tracking Model", "authors": [ { "first": "Elnaz", "middle": [], "last": "Nouri", "suffix": "" }, { "first": "Ehsan", "middle": [], "last": "Hosseini-Asl", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Conversational AI workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward Scalable Neural Dialogue State Tracking Model. 
In Proceedings of the 2nd Conversational AI workshop, NeurIPS 2018.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation", "authors": [ { "first": "Liliang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jianmo", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Mcauley", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1876--1885", "other_ids": { "DOI": [ "10.18653/v1/d19-1196" ] }, "num": null, "urls": [], "raw_text": "Liliang Ren, Jianmo Ni, and Julian McAuley. 2019. Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing, pages 1876-1885.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bidirectional recurrent neural networks", "authors": [ { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Kuldip", "middle": [ "K" ], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Signal Processing", "volume": "45", "issue": "11", "pages": "2673--2681", "other_ids": { "DOI": [ "10.1109/78.650093" ] }, "num": null, "urls": [], "raw_text": "Mike Schuster and Kuldip K. Paliwal. 1997. Bidirec- tional recurrent neural networks. 
IEEE Transactions on Signal Processing, 45(11):2673-2681.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A Multi-Task Approach to Incremental Dialogue State Tracking", "authors": [ { "first": "Anh Duong", "middle": [], "last": "Trinh", "suffix": "" }, { "first": "Robert", "middle": [ "J" ], "last": "Ross", "suffix": "" }, { "first": "John", "middle": [ "D" ], "last": "Kelleher", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 22nd workshop on the Semantics and Pragmatics of Dialogue, SEMDIAL", "volume": "", "issue": "", "pages": "132--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anh Duong Trinh, Robert J. Ross, and John D. Kelle- her. 2018. A Multi-Task Approach to Incremental Dialogue State Tracking. In Proceedings of The 22nd workshop on the Semantics and Pragmatics of Dialogue, SEMDIAL, pages 132-145.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Capturing Dialogue State Variable Dependencies with an Energy-based Neural Dialogue State Tracker", "authors": [ { "first": "Anh Duong", "middle": [], "last": "Trinh", "suffix": "" }, { "first": "Robert", "middle": [ "J" ], "last": "Ross", "suffix": "" }, { "first": "John", "middle": [ "D" ], "last": "Kelleher", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the SIGDial 2019 Conference", "volume": "", "issue": "", "pages": "75--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anh Duong Trinh, Robert J. Ross, and John D. Kelle- her. 2019a. Capturing Dialogue State Variable De- pendencies with an Energy-based Neural Dialogue State Tracker. 
In Proceedings of the SIGDial 2019 Conference, pages 75-84.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Energy-Based Modelling for Dialogue State Tracking", "authors": [ { "first": "Anh Duong", "middle": [], "last": "Trinh", "suffix": "" }, { "first": "Robert", "middle": [ "J" ], "last": "Ross", "suffix": "" }, { "first": "John", "middle": [ "D" ], "last": "Kelleher", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 1st Workshop on NLP for Conversational AI", "volume": "", "issue": "", "pages": "77--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anh Duong Trinh, Robert J. Ross, and John D. Kelle- her. 2019b. Energy-Based Modelling for Dialogue State Tracking. In Proceedings of the 1st Workshop on NLP for Conversational AI, pages 77-86.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Investigating Variable Dependencies in Dialogue States", "authors": [ { "first": "Anh Duong", "middle": [], "last": "Trinh", "suffix": "" }, { "first": "Robert", "middle": [ "J" ], "last": "Ross", "suffix": "" }, { "first": "John", "middle": [ "D" ], "last": "Kelleher", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Workshop on the Semantics and Pragmatics of Dialogue", "volume": "", "issue": "", "pages": "195--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anh Duong Trinh, Robert J. Ross, and John D. Kelle- her. 2019c. Investigating Variable Dependencies in Dialogue States. In Proceedings of the 23rd Work- shop on the Semantics and Pragmatics of Dialogue, pages 195-197.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Kelleher. 2020. 
F-Measure Optimisation and Label Regularisation for Energy-Based Neural Dialogue State Tracking Models", "authors": [ { "first": "Anh Duong", "middle": [], "last": "Trinh", "suffix": "" }, { "first": "Robert", "middle": [ "J" ], "last": "Ross", "suffix": "" }, { "first": "John", "middle": [ "D" ], "last": "", "suffix": "" } ], "year": null, "venue": "Artificial Neural Networks and Machine Learning", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anh Duong Trinh, Robert J. Ross, and John D. Kelle- her. 2020. F-Measure Optimisation and Label Reg- ularisation for Energy-Based Neural Dialogue State Tracking Models. In Artificial Neural Networks and Machine Learning ICANN 2020.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems", "authors": [ { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Ehsan", "middle": [], "last": "Hosseini-Asl", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini- Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable Multi-Domain State Gen- erator for Task-Oriented Dialogue Systems. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Find or Classify? 
Dual Strategy for Slot-Value Predictions on Multi-Domain Dialog State Tracking", "authors": [ { "first": "Jian-Guo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kazuma", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Philip", "middle": [ "S" ], "last": "Yu", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wan, Philip S. Yu, Richard Socher, and Caiming Xiong. 2019. Find or Classify? Dual Strat- egy for Slot-Value Predictions on Multi-Domain Di- alog State Tracking.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "MOLI: Smart Conversation Agent for Mobile Customer Service", "authors": [ { "first": "Guoguang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Jianyu", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Alt", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Schwarzenberg", "suffix": "" }, { "first": "Leonhard", "middle": [], "last": "Hennig", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schaffer", "suffix": "" }, { "first": "Sven", "middle": [], "last": "Schmeier", "suffix": "" }, { "first": "Changjian", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Feiyu", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3390/info10020063" ] }, "num": null, "urls": [], "raw_text": "Guoguang Zhao, Jianyu Zhao, Yang Li, Christoph Alt, Robert Schwarzenberg, Leonhard Hennig, Stefan 
Schaffer, Sven Schmeier, Changjian Hu, and Feiyu Xu. 2019. MOLI: Smart Conversation Agent for Mobile Customer Service. Information (Switzerland), 10(2).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Global-Locally Self-Attentive Dialogue State Tracker", "authors": [ { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1458--1467", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-Locally Self-Attentive Dialogue State Tracker. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1458-1467.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering", "authors": [ { "first": "Li", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Small", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Zhou and Kevin Small. 2019. Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Cramer's V assessment of variable dependencies in MultiWOZ 2.1 data", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Multi-task Recurrent Neural Feature Network for MultiWOZ datasets. All recurrent units are LSTM (Hochreiter and Schmidhuber, 1997).", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "The learning process of our energy-based dialogue state tracker. 
The grey area denotes a frozen network where the parameters have been pretrained.", "uris": null, "num": null, "type_str": "figure" }, "TABREF1": { "text": "Performances of state-of-the-art and presented dialogue state tracking systems on MultiWOZ 2.0 & 2.1 data. The results for belief states are reported with the Accuracy metric.", "html": null, "num": null, "content": "", "type_str": "table" }, "TABREF3": { "text": "Data analysis on variable dependencies in the performance of multi-task and energy-based trackers in MultiWOZ 2.1 data. The variable dependencies are reported with Cramer's V coefficient. In the table, the first block is variable dependencies in labels of the test set, while the second block is variable dependencies detected by our energy-based model, and the last block is the performance of the multi-task feature system.", "html": null, "num": null, "content": "
", "type_str": "table" }, "TABREF4": { "text": "", "html": null, "num": null, "content": "
Threshold | MultiWOZ 2.0 +Reg | MultiWOZ 2.0 -Reg | MultiWOZ 2.1 +Reg | MultiWOZ 2.1 -Reg
0.5 | 45.7 | 36.8 | 52.4 | 48.3
0.7 | 29.7 | 26.3 | 39.4 | 35.1
0.9 | 16.8 | 15.5 | 18.3 | 18.1
", "type_str": "table" } } } }