{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:50:41.870249Z"
},
"title": "Quantitative Evaluation of User Simulation Techniques for Spoken Dialogue Systems",
"authors": [
{
"first": "Jost",
"middle": [],
"last": "Schatzmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge Trumpington Street Cambridge",
"location": {
"country": "England"
}
},
"email": ""
},
{
"first": "Kallirroi",
"middle": [],
"last": "Georgila",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "2 Buccleuch Place Edinburgh",
"country": "Scotland"
}
},
"email": "kgeorgil@inf.ed.ac.uk"
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"addrLine": "Trumpington Street",
"settlement": "Cambridge",
"country": "England"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The lack of suitable training and testing data is currently a major roadblock in applying machine-learning techniques to dialogue management. Stochastic modelling of real users has been suggested as a solution to this problem, but to date few of the proposed models have been quantitatively evaluated on real data. Indeed, there are no established criteria for such an evaluation. This paper presents a systematic approach to testing user simulations and assesses the most prominent domain-independent techniques using a large DARPA Communicator corpus of human-computer dialogues. We show that while recent advances have led to significant improvements in simulation quality, simple statistical metrics are still sufficient to discern synthetic from real dialogues.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "The lack of suitable training and testing data is currently a major roadblock in applying machine-learning techniques to dialogue management. Stochastic modelling of real users has been suggested as a solution to this problem, but to date few of the proposed models have been quantitatively evaluated on real data. Indeed, there are no established criteria for such an evaluation. This paper presents a systematic approach to testing user simulations and assesses the most prominent domain-independent techniques using a large DARPA Communicator corpus of human-computer dialogues. We show that while recent advances have led to significant improvements in simulation quality, simple statistical metrics are still sufficient to discern synthetic from real dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Within the broad field of research on spoken dialogue systems (SDS), the application of machine-learning approaches to dialogue management is currently attracting interest (Levin et al., 2000) (Young, 2002) . The major motivation driving research in this area is the hope of learning optimal strategies from data. Yet, it is rarely the case that enough training data is available to sufficiently explore the vast space of possible dialogue states and strategies. Ironically, the best strategy may often not even be present in the given dataset. It may thus be argued that an optimal strategy cannot be learned from a fixed corpus, regardless of the size of the training corpus.",
"cite_spans": [
{
"start": 172,
"end": 192,
"text": "(Levin et al., 2000)",
"ref_id": "BIBREF3"
},
{
"start": 193,
"end": 206,
"text": "(Young, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An interesting approach to solving this problem is to use small corpora to train stochastic models for simulating real user behavior. Once such a model is available, This paper is supported by the EU FP6 TALK Project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "any number of dialogues can be generated through interaction between the simulated user and the dialogue system. The simulated user also enables us to explore dialogue strategies that are not present in the given corpus. This way the learning dialogue manager can deviate from the known strategies and learn new and potentially better ones. Figure 1 illustrates the learning setup. Previous research has demonstrated the success of the learning setup (Levin et al., 2000) , (Scheffler, 2002) and also examined the use of user simulation for system evaluation (Eckert et al., 1997) . The quality of the user model, however, has not been thoroughly investigated. It is indeed unclear, how we can quantitatively evaluate whether the simulated user responses are realistic, generalise well to unseen dialogue situations and resemble the variety of the user population.",
"cite_spans": [
{
"start": 451,
"end": 471,
"text": "(Levin et al., 2000)",
"ref_id": "BIBREF3"
},
{
"start": 474,
"end": 491,
"text": "(Scheffler, 2002)",
"ref_id": "BIBREF8"
},
{
"start": 559,
"end": 580,
"text": "(Eckert et al., 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 341,
"end": 349,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper assesses the most prominent domainindependent simulation techniques using a large DARPA Communicator corpus of human-computer dialogues. We describe what modifications are necessary to train and test the models presented in the literature on real data. We further present a systematic approach to evaluating user simulations. Our analysis shows that none of the currently available techniques can realistically reproduce the variety of human user behaviour and that simple statistical measures are sufficient to distinguish synthetic from real dialogues. We investigate these shortcomings and outline suggestions for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Approaches to user simulation can be classified in a number of ways. Most commonly, one distinguishes systems with regard to the level of abstraction at which they model dialogue. This can be at either the acoustic-, word-, or intention-level. The latter is a particularly useful representation of the interaction, since it avoids the need to reproduce the enormous variety of human language on the level of speech signals or word sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intention-based dialogue",
"sec_num": "2.1"
},
{
"text": "Hence, simulation on the intention level has been most popular in recent years. This approach was first taken by (Eckert et al., 1997) and has been adopted in later work by most other research groups (Levin et al., 2000) , (Scheffler, 2002) , (Pietquin, 2004) , (Georgila et al., 2005a) . Examples of user simulation on the word or acoustic level are rare, but can be found in (Watanabe et al., 1998) and (Lopez-Cozar et al., 2003) . Naturally, their portability and scalability is limited.",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "(Eckert et al., 1997)",
"ref_id": "BIBREF0"
},
{
"start": 200,
"end": 220,
"text": "(Levin et al., 2000)",
"ref_id": "BIBREF3"
},
{
"start": 223,
"end": 240,
"text": "(Scheffler, 2002)",
"ref_id": "BIBREF8"
},
{
"start": 243,
"end": 259,
"text": "(Pietquin, 2004)",
"ref_id": "BIBREF6"
},
{
"start": 262,
"end": 286,
"text": "(Georgila et al., 2005a)",
"ref_id": "BIBREF1"
},
{
"start": 377,
"end": 400,
"text": "(Watanabe et al., 1998)",
"ref_id": "BIBREF9"
},
{
"start": 405,
"end": 431,
"text": "(Lopez-Cozar et al., 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intention-based dialogue",
"sec_num": "2.1"
},
{
"text": "One may further distinguish between probabilistic and deterministic user models. Whereas probabilistic models can be trained on data and allow for some \"lifelike\" randomness in user behaviour, deterministic models are driven by handcrafted rules. For a given dialogue state and system action, a deterministic user model will always produce the same user response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic vs. Deterministic Simulation",
"sec_num": "2.2"
},
{
"text": "Deterministic models have been used to evaluate which dialogue strategies work well for different types of user response pattern (Lin and Lee, 2001) . While they may be suitable for observing general correlations between dialogue strategy, user behaviour and system performance, a probabilistic model is clearly preferable for modelling realistic user behaviour. The following sections will review some of the most prominent work in this area. Very recent work by (Georgila et al., 2005a) is not covered here.",
"cite_spans": [
{
"start": 129,
"end": 148,
"text": "(Lin and Lee, 2001)",
"ref_id": "BIBREF4"
},
{
"start": 464,
"end": 488,
"text": "(Georgila et al., 2005a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic vs. Deterministic Simulation",
"sec_num": "2.2"
},
{
"text": "Stochastic modelling of users on the intention level is first suggested as a means of SDS evaluation by Eckert, Levin and Pieraccini (Eckert et al., 1997) . Their work introduces a Bigram model for predicting the user action a u in response to a given system action a s p = P (a u |a s ).",
"cite_spans": [
{
"start": 104,
"end": 154,
"text": "Eckert, Levin and Pieraccini (Eckert et al., 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Bigram Model",
"sec_num": "2.3"
},
{
"text": "The Bigram model has the advantage of being purely probabilistic and fully domain-independent. Its weakness is that it does not place enough constraints on the user to simulate realistic behaviour. The generated responses may correspond well to the previous system action, but they often do not make sense in the wider context of the dialogue. The authors note that the model can be extended to a general n-gram model but due to data sparsity, it is usually impossible to train n-grams with n > 2. Eckert et al. do not train the Bigram model on real data or evaluate the quality of the simulated output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Bigram Model",
"sec_num": "2.3"
},
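{
"text": "The following minimal Python sketch (an illustration only; the corpus format and the action labels are assumptions, not the authors' implementation) shows how a conditional distribution of the form P(a_u | a_s) could be estimated from annotated dialogue turns and then sampled to generate a user action:\n\nimport random\nfrom collections import defaultdict, Counter\n\ndef train_bigram(pairs):\n    # pairs: iterable of (system_action, user_action) tuples from an annotated corpus\n    counts = defaultdict(Counter)\n    for a_s, a_u in pairs:\n        counts[a_s][a_u] += 1\n    # normalise the counts into conditional distributions P(a_u | a_s)\n    return {a_s: {a_u: n / sum(c.values()) for a_u, n in c.items()} for a_s, c in counts.items()}\n\ndef sample_user_action(model, a_s):\n    actions, probs = zip(*model[a_s].items())\n    return random.choices(actions, weights=probs)[0]\n\nmodel = train_bigram([('request_info dest_city', 'provide_info dest_city'), ('request_info dest_city', 'no_answer')])\nprint(sample_user_action(model, 'request_info dest_city'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Bigram Model",
"sec_num": "2.3"
},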
{
"text": "Levin, Eckert and Pieraccini describe how the pure Bigram model can be modified to limit the number of model parameters and to account for some degree of conventional structure in dialogues (Eckert et al., 1997) , (Levin et al., 2000) . Instead of allowing any user response, only the probabilities for anticipated types of user responses are calculated for each type of system action. A system request for attribute A x , for instance, is parameterised using the probability that the user actually specifies A x and that he specifies n additional attributes",
"cite_spans": [
{
"start": 190,
"end": 211,
"text": "(Eckert et al., 1997)",
"ref_id": "BIBREF0"
},
{
"start": 214,
"end": 234,
"text": "(Levin et al., 2000)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Levin Model",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (provide A x |request A x ) (2) P (n|request A x ).",
"eq_num": "(3)"
}
],
"section": "The Levin Model",
"sec_num": "2.4"
},
{
"text": "This set of probabilities implicitly characterises the level of cooperativeness and the degree of initiative taken by the user model. The Levin model places stronger constraints on the user actions than the pure Bigram model, but it also makes assumptions concerning the format of the dialogue. If the dialogue manager or the anticipated dialogue format changes, a new set of parameters is needed. Like the Bigram model, the Levin model does not ensure consistency between different user actions over the course of a dialogue. The assumption that every user response depends only on the previous system turn is flawed. The user actions can violate logical constraints and the synthetic dialogues often continue for a long time, with the user continuously changing his goal or repeating information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Levin Model",
"sec_num": "2.4"
},
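{
"text": "As a minimal sketch (an illustration only; the goal representation and the hand-set parameter values are assumptions), a user response to a request for attribute A_x could be sampled from probabilities of the form (2) and (3) as follows:\n\nimport random\n\ndef respond_to_request(requested_attr, goal, p_provide, p_extra):\n    # p_provide: P(provide A_x | request A_x), the probability that the user supplies the requested attribute\n    # p_extra:   P(n | request A_x), a distribution over the number n of additional attributes volunteered\n    response = []\n    if random.random() < p_provide:\n        response.append(('provide_info', requested_attr, goal[requested_attr]))\n    n_extra = random.choices(list(p_extra.keys()), weights=list(p_extra.values()))[0]\n    other_attrs = [a for a in goal if a != requested_attr]\n    for attr in random.sample(other_attrs, min(n_extra, len(other_attrs))):\n        response.append(('provide_info', attr, goal[attr]))\n    return response\n\ngoal = {'dest_city': 'Boston', 'depart_date': 'Monday'}\nprint(respond_to_request('dest_city', goal, p_provide=0.9, p_extra={0: 0.7, 1: 0.3}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Levin Model",
"sec_num": "2.4"
},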
{
"text": "Levin et al. use the ATIS corpus to train a small subset of their model parameters, all other probabilities are handcrafted using common sense. The authors also do not evaluate how realistically simulated the responses are. However, the authors demonstrate that the simulated user can be used to reveal errors in the dialogue management strategy (Eckert et al., 1997) and that it can be used for reinforcement-learning of strategies (Levin et al., 2000) .",
"cite_spans": [
{
"start": 346,
"end": 367,
"text": "(Eckert et al., 1997)",
"ref_id": "BIBREF0"
},
{
"start": 433,
"end": 453,
"text": "(Levin et al., 2000)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Levin Model",
"sec_num": "2.4"
},
{
"text": "Scheffler and Young (Scheffler and Young, 2001) (Scheffler, 2002) attempt to overcome the lack of goal consistency that the Levin model suffers from. Their approach uses deterministic rules for goal-dependent actions and probabilistic modelling to cover conversational behaviour.",
"cite_spans": [
{
"start": 20,
"end": 47,
"text": "(Scheffler and Young, 2001)",
"ref_id": "BIBREF7"
},
{
"start": 48,
"end": 65,
"text": "(Scheffler, 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Scheffler Model",
"sec_num": "2.5"
},
{
"text": "To model the user goal, Scheffler and Young introduce fixed goal structures. These consist of attribute-value pairs with associated status variables. All of the possible \"paths\" that a user may take during a dialogue are mapped out in advance in the form of a network. The probability of each route through the network is learned from training data and the explicit representation of the user goal ensures that the simulated user always selects routes in accordance with his goal. Scheffler and Young's approach produces promising results, but it is highly taskdependent and ideally requires an existing prototype system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Scheffler Model",
"sec_num": "2.5"
},
{
"text": "The authors address the problem of evaluating the simulated user by comparing statistical properties of the simulated dialogues with those of the training data dialogues. More precisely, they show that the goal-completion time and goal-achievement rate for different tasks are comparable in the simulated and real dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Scheffler Model",
"sec_num": "2.5"
},
{
"text": "Pietquin (Pietquin, 2004) combines features from Scheffler and Young's work with the Levin model. The core idea is to condition the probabilities used by Levin et al on an explicit representation of the user goal",
"cite_spans": [
{
"start": 9,
"end": 25,
"text": "(Pietquin, 2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Pietquin Model",
"sec_num": "2.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (provide A x |request A x , goal).",
"eq_num": "(4)"
}
],
"section": "The Pietquin Model",
"sec_num": "2.6"
},
{
"text": "This enables Pietquin to explicitly model the dependencies between a user's actions and his goal. Pietquin handselects the probability values so as to ensure that the user acts are in accordance with his goal throughout the dialogue. Like Scheffler and Young, Pietquin represents the user goal using a simple table of attribute-value pairs. The appropriate values are randomly selected from a database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Pietquin Model",
"sec_num": "2.6"
},
{
"text": "Pietquin introduces interesting dependencies between the user's goal and his conversational behaviour. This is done by adding new status variables to each attributevalue pair. The priority variable for instance, governs how likely the user is to drop the corresponding attributevalue pair from his goal. This enables Pietquin to model how likely a user is to relax a certain constraint, such as \"Preferred airline is British Airways\". Pietquin also attaches a simple counter to each attribute-value pair to record how often a piece of information has been transmitted to the system. The likelihood of the user hanging up before completing the task can be modelled as a function of this variable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Pietquin Model",
"sec_num": "2.6"
},
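{
"text": "A small sketch (an illustration only; the class, field and function names are assumptions rather than Pietquin's notation) of a goal representation of this kind, with a priority variable and a transmission counter attached to each attribute-value pair:\n\nimport random\nfrom dataclasses import dataclass\n\n@dataclass\nclass GoalSlot:\n    attribute: str\n    value: str\n    priority: float           # how strongly the user insists on this constraint\n    times_transmitted: int = 0\n\ndef is_relaxed(slot):\n    # low-priority constraints are more likely to be dropped from the goal\n    return random.random() > slot.priority\n\ndef hangup_probability(goal, base=0.05):\n    # grows with the number of times information had to be repeated\n    repeats = sum(max(s.times_transmitted - 1, 0) for s in goal)\n    return min(1.0, base * (1 + repeats))\n\ngoal = [GoalSlot('airline', 'British Airways', priority=0.3), GoalSlot('dest_city', 'Boston', priority=0.95)]\nprint([s.attribute for s in goal if not is_relaxed(s)])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Pietquin Model",
"sec_num": "2.6"
},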
{
"text": "While these models of user goal, memory and satisfaction are rather coarse, they illustrate the various aspects of the user state which influence behaviour. It is also important to note that Pietquin's model is domain-independent -a definite advantage over Scheffler and Young's work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Pietquin Model",
"sec_num": "2.6"
},
{
"text": "A major weakness of Pietquin's work however is that it is not trained or tested on any real dialogue data. All the probabilities in his model are hand-selected using common sense, and no attempt is made to evaluate how real-istic the user simulation is. Pietquin shows that an equivalent representation of his user model can be found in the form of a Bayesian Network, but the parameter values for this network are also copied from the original model rather than learned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Pietquin Model",
"sec_num": "2.6"
},
{
"text": "The previous section has reviewed a number of different user simulation techniques. To date, few of these have been evaluated on real data. In part this is due to the lack of a suitable evaluation methodology. It is indeed not clear what constitutes a \"realistic\" simulation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "In our view, evaluation must cover two aspects. First, we need to assess if the user model can generate humanlike output. Does it produce responses that a real user might have given in the same dialogue context? Secondly, we need to assess if the simulation can reproduce the variety of real user behaviour. This ensures that the model represents the whole user population -not just an average user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "For the first part of the evaluation, the dataset is split into a training and a test set. The dialogues are assumed to be annotated as a sequence of turns t, with each turn consisting of a variable number of actions a, as shown in the sample dialogue in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Comparing Simulated and Real User Responses",
"sec_num": "3.2"
},
{
"text": "Evaluation is done on a turn by turn basis. Each of the system turns in the test set is separately fed into the simulation, together with the corresponding dialogue history and the current user goal. The response turn generated by the simulated user is then compared to the real response given by the user in the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real User Responses",
"sec_num": "3.2"
},
{
"text": "We propose the use of Precision and Recall to quantify how closely the synthetic turn resembles the real user turn. These metrics have not yet been used for user model evaluation in SDS development but they are a common measure of goodness in user modelling outside SDS. (Zukerman and Albrecht, 2001) . Recall (R) measures how many of the actions in the real reponse are predicted correctly. Precision (P ) measures the proportion of correct actions among all the predicted actions. An action is considered correct if it matches at least one of the actions in the real user response.",
"cite_spans": [
{
"start": 271,
"end": 300,
"text": "(Zukerman and Albrecht, 2001)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real User Responses",
"sec_num": "3.2"
},
{
"text": "Correctly predicted actions All actions in simulated response (5) R = 100 * Correctly predicted actions All actions in real response (6) It is of course not possible to specify what levels of Precision and Recall need to be reached in order to claim that a simulated user is realistic. Nevertheless, Precision and Recall offer a reliable method for comparing simulated and real user responses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P = 100 *",
"sec_num": null
},
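{
"text": "A short sketch (an illustration only) of how Equations (5) and (6) can be computed for a single turn; here actions are matched as (speech act, attribute) pairs, which is an assumption about the matching criterion:\n\ndef precision_recall(simulated, real):\n    # simulated, real: lists of (speech_act, attribute) pairs for one user turn\n    correct = sum(1 for a in simulated if a in real)\n    precision = 100.0 * correct / len(simulated) if simulated else 0.0\n    recall = 100.0 * correct / len(real) if real else 0.0\n    return precision, recall\n\nsim = [('provide_info', 'dest_city'), ('provide_info', 'depart_date')]\nref = [('provide_info', 'dest_city')]\nprint(precision_recall(sim, ref))  # (50.0, 100.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real User Responses",
"sec_num": "3.2"
},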
{
"text": "Precision and Recall deliver a rough indication of how realistic the best response is that the simulated user can generate. On its own however, this form of evaluation is not sufficient. Our goal is not to build a simulated user for producing the single most likely response to a given system action. A dialogue strategy must perform well for all kinds of possible user response, not just the one with the highest probability. Hence we need to produce a large number of dialogues with a variety of user behaviour. We then need to assess if the synthetic dataset has the same statistical properties as the training data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real Datasets",
"sec_num": "3.3"
},
{
"text": "The difficult question is: \"What statistical properties are reliable indicators of realistic dialogues?\". In previous research, dialogue length, goal achievement rate and goal completion length have been used (Scheffler and Young, 2001) . These metrics can only be considered rough indicators of how realistic the dialogues are. It would be possible to optimise a user model according to these criteria and still produce non-sense dialogues. For instance, given that the average dialogue length found in the training data was n turns, the simulated user could be forced to hang up after exactly n turns, thus achieving a perfect evaluation score.",
"cite_spans": [
{
"start": 209,
"end": 236,
"text": "(Scheffler and Young, 2001)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real Datasets",
"sec_num": "3.3"
},
{
"text": "We argue that a large set of measures is needed to cover a variety of dialogue properties. For our evaluation, we divide these into three groups:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real Datasets",
"sec_num": "3.3"
},
{
"text": "1. The first group of experiments investigates highlevel features of the dialogue. How long do the dialogues last and how much information is transmitted in individual turns? How active are the dialogue participants?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real Datasets",
"sec_num": "3.3"
},
{
"text": "2. The second group of experiments analyses the style of the dialogue. This aims to produce a more fine grained picture of the system and user behaviour. We investigate the frequency of different speech acts and analyse what proportion of actions is goaldirected, what part is taken up by dialogue formalities etc. We also examine the user's degree of cooperativeness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real Datasets",
"sec_num": "3.3"
},
{
"text": "3. The third and last group of experiments investigates the success rate and efficiency of the dialogues. In particular, we look at goal achievement rates and goal completion times. This helps us to evaluate if misunderstandings are modelled well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real Datasets",
"sec_num": "3.3"
},
{
"text": "In closing this section, it should be remarked that all of the statistical measures suggested here are only indicators of how good a simulation technique is. It is not possible to specify what range of values a synthetic corpus needs to satisfy in order to be sufficiently realistic. Moreover, no guarantee can be given that a simulated dialogue is realistic even if all of its properties are identical to the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real Datasets",
"sec_num": "3.3"
},
{
"text": "Yet, the set of measures forms a helpful toolkit for comparing simulation techniques and identifying possible weaknesses. The tests cover dialogue length, style and efficiency. In addition, the variety of measures is sufficiently large to ensure that a user model cannot be easily trained so as to achieve perfect scores on each of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Simulated and Real Datasets",
"sec_num": "3.3"
},
{
"text": "Data from the DARPA Communicator project is used for all of of the experiments presented in this paper. The full corpus consists of 4 datasets, recorded using systems from ATT, BBN, CMU and SRI. The 4 sets add up to a total of 697 dialogues. Each of the four sets is split into training and testing data, with a ratio of 90:10. Further information regarding the content and annotation of data can be found in (Georgila et al., 2005b) .",
"cite_spans": [
{
"start": 409,
"end": 433,
"text": "(Georgila et al., 2005b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Testing Data",
"sec_num": "4.1"
},
{
"text": "All of the datasets contain slot-filling dialogues from the travel booking domain, covering flight-, hotel-and rental car-reservations. The dialogue systems differ slightly in the wording of their prompts and in their choice of dialogue strategy and the language understanding components are not equally powerful. On the intention level however, the general structure of the dialogues is very similar. The systems cover roughly the same booking details and they are all almost entirely driven by system-initiative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Testing Data",
"sec_num": "4.1"
},
{
"text": "User model training is done on the recognised user output rather than the reference transcriptions. The simulation thus effectively combines the user and the communication channel. No separate error modelling is performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Testing Data",
"sec_num": "4.1"
},
{
"text": "The dialogue data is automatically converted to the following format: Each dialogue is a sequence of alternating user and system turns. Each turn t contains one or more actions. Each action a consists of a speech act (compulsory), an attribute (optional) and a value (optional). A snippet of a sample dialogue is shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 329,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dialogue Annotation",
"sec_num": "4.2"
},
{
"text": "We also add \"hangup\" actions to the end of each dialogue. Considering the act of \"hanging up\" as an action, helps us to train user model parameters concerning the likelihood of a user hanging up in a given dialogue situation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Annotation",
"sec_num": "4.2"
},
{
"text": "For the purpose of our evaluation, we implement and train the Bigram, Levin User a 10 yes answer We found that this approach led to severe data sparsity problems when applied to a real corpus. Datasets such as the Communicator corpus contain a large number of possible values for each attribute. The number of possible combinations of user action, attribute and value prohibits us from reliably estimating a probability for each one.",
"cite_spans": [
{
"start": 70,
"end": 75,
"text": "Levin",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Attribute Values",
"sec_num": "4.3"
},
{
"text": "It is thus not possible to implement the Bigram model and the Levin model in their original form. For this evaluation we choose to adapt both models in the following way: The speech act and attribute are modelled probabilistically, as suggested by the respective authors. The attribute-value is determined by the user goal, as suggested by Scheffler and Young. This ensures that sufficient training data is available to train all model parameters. It further improves the model as it ensures that the same value is provided if the user is asked multiple times for the same attribute. See Figure 2 for an illustration.",
"cite_spans": [],
"ref_spans": [
{
"start": 588,
"end": 596,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Predicting Attribute Values",
"sec_num": "4.3"
},
{
"text": "To allow for \"lifelike\" randomness in the user goal, we use a probabilistic domain model. At the start of each dialogue, a user goal is randomly constructed according to the probability distribution over all the attributes and values found in the training data. We use this domain model for all of our simulated users. Testing is not done on the attribute-value, only on the speech act and the attribute.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Attribute Values",
"sec_num": "4.3"
},
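{
"text": "A minimal sketch (an illustration only; the attribute names and prior values are made up) of how a random user goal could be constructed from the attribute and value distributions found in the training data:\n\nimport random\n\ndef sample_goal(attr_priors, value_priors):\n    # attr_priors:  {attribute: probability that the attribute appears in a user goal}\n    # value_priors: {attribute: {value: probability}}, both estimated from the training data\n    goal = {}\n    for attr, p in attr_priors.items():\n        if random.random() < p:\n            values, probs = zip(*value_priors[attr].items())\n            goal[attr] = random.choices(values, weights=probs)[0]\n    return goal\n\nattr_priors = {'dest_city': 1.0, 'hotel_chain': 0.4}\nvalue_priors = {'dest_city': {'Boston': 0.6, 'Denver': 0.4}, 'hotel_chain': {'Hilton': 0.5, 'Marriott': 0.5}}\nprint(sample_goal(attr_priors, value_priors))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Attribute Values",
"sec_num": "4.3"
},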
{
"text": "The original Bigram model as described by Eckert et al. assumes that dialogue is a sequence of alternating user and system actions. Under this assumption, the next user action is predicted based on the previous system action.",
"cite_spans": [
{
"start": 42,
"end": 55,
"text": "Eckert et al.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Model Implementation",
"sec_num": "4.4"
},
{
"text": "In real dialogues, dialogue turns can include several The assumption that action a i can be predicted from a i\u22121 is hence no longer valid. The sample dialogue in Table 1 illustrates this well: action a 4 triggers action a 6 , which in turn triggers action a 8 . However, it is also not possible to estimate \"turn bigrams\", i.e. estimate P (t i |t i\u22121 ) instead of P (a i |a i\u22121 ). Since the number of actions per turn is variable, the number of possible turn combinations will inevitably cause data sparsity problems.",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 169,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Bigram Model Implementation",
"sec_num": "4.4"
},
{
"text": "For our implementation, we choose the following workaround: Bigrams are still estimated on an \"action\" basis, but the probability P (a u |a s ) is interpreted as the probability that the user response contains a u when the previous system turn contains action a s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Model Implementation",
"sec_num": "4.4"
},
{
"text": "We further implement a simple back-off mechanism to account for system actions that appear in the test data but have not appeared in the training data. For these actions, no bigram is trained during parameter estimation. In these cases, we back off to the unigram probability of each user action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Model Implementation",
"sec_num": "4.4"
},
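{
"text": "A sketch (an illustration only, assuming a corpus stored as (system turn, user turn) pairs) of the interpretation and back-off scheme described above: P(a_u | a_s) is estimated as the fraction of system turns containing a_s whose user response contains a_u, falling back to unigram probabilities for unseen system actions:\n\nfrom collections import Counter, defaultdict\n\ndef train_turn_bigrams(turn_pairs):\n    # turn_pairs: list of (system_turn, user_turn) pairs, each turn being a list of actions\n    bigram, sys_counts, unigram = defaultdict(Counter), Counter(), Counter()\n    for sys_turn, usr_turn in turn_pairs:\n        unigram.update(usr_turn)\n        for a_s in set(sys_turn):\n            sys_counts[a_s] += 1\n            for a_u in set(usr_turn):\n                bigram[a_s][a_u] += 1\n    return bigram, sys_counts, unigram\n\ndef p_user_action(a_u, a_s, bigram, sys_counts, unigram):\n    if a_s in bigram:\n        return bigram[a_s][a_u] / sys_counts[a_s]\n    total = sum(unigram.values())  # back off to the unigram probability of the user action\n    return unigram[a_u] / total if total else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Model Implementation",
"sec_num": "4.4"
},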
{
"text": "The Levin model has to be adapted to the dialogue format present in the Communicator data. Relaxing questions (\"Would you also consider another airline?\"), for example, were anticipated by Levin et al. but do not exist in the Communicator data. Instead the dialogue managers spend a considerable amount of time on grounding (implicitly or explicitly confirming pieces of information).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levin Model Implementation",
"sec_num": "4.5"
},
{
"text": "To account for these differences in the system action set, we parameterise the Levin user model using a slightly modified set of probabilities. A positive response to an explicit confirmation of attribute A x , for instance, is parameterised as P (yes answer|explicit conf irm A x ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levin Model Implementation",
"sec_num": "4.5"
},
{
"text": "Similar modifications are made for the other user and system actions that were not present in the dialogue data available to Levin et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levin Model Implementation",
"sec_num": "4.5"
},
{
"text": "As described in Section 2.6, the Pietquin model is an extension of the Levin model. The core idea of Pietquin's work is to condition the user model parameters on the user goal. In a real dataset, however, it is not possible to estimate a probability for every conceivable configuration of user goal. The number of possible combinations of user actions and user goals is far too large to obtain reliable probability estimates. Our workaround for this problem is to condition the probabilities on selected properties of the user goal, rather than its full state. For instance, we check if an attribute is present in the goal or not, or if it has been provided before or not. This geatly reduces the number of parameters and avoids data sparsity problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pietquin Model Implementation",
"sec_num": "4.6"
},
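{
"text": "A sketch (an illustration only; the particular goal properties chosen here are assumptions) of conditioning the response probabilities on a few selected properties of the user goal rather than on its full state:\n\nfrom collections import Counter, defaultdict\n\ndef goal_features(attr, goal, already_provided):\n    # reduce the full goal state to a few binary properties to avoid data sparsity\n    return (attr in goal, attr in already_provided)\n\ndef estimate_provide_probs(events):\n    # events: (requested_attr, goal_dict, provided_set, user_did_provide) tuples extracted from the corpus\n    counts = defaultdict(Counter)\n    for attr, goal, provided, did_provide in events:\n        counts[goal_features(attr, goal, provided)][did_provide] += 1\n    # estimates of P(provide A_x | request A_x, in_goal, already_provided)\n    return {feats: c[True] / sum(c.values()) for feats, c in counts.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pietquin Model Implementation",
"sec_num": "4.6"
},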
{
"text": "The data available to us did not contain annotations regarding the specifics of the user goals. We were able to automatically infer these by scanning the parsed reference transcriptions of the user utterances. For every provide inf o action, the corresponding attribute and value were added to the user goal. When two actions contradicted each other (i.e. same attribute, but different value) the later one was assumed to overwrite the earlier one. Counts were recorded to track how often each piece of information had been transmitted to the system over the course of the dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Goal Inference",
"sec_num": "4.7"
},
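{
"text": "A sketch (an illustration only, assuming a simple (speech act, attribute, value) annotation of user turns) of the goal inference procedure described above, in which later contradicting values overwrite earlier ones and transmission counts are recorded:\n\nfrom collections import Counter\n\ndef infer_goal(user_turns):\n    # user_turns: list of user turns; each turn is a list of (speech_act, attribute, value) triples\n    goal, counts = {}, Counter()\n    for turn in user_turns:\n        for act, attr, value in turn:\n            if act == 'provide_info' and attr is not None:\n                goal[attr] = value   # a later, contradicting value overwrites the earlier one\n                counts[attr] += 1    # how often this piece of information was transmitted\n    return goal, counts\n\nturns = [[('provide_info', 'dest_city', 'Boston')], [('provide_info', 'dest_city', 'Austin')]]\nprint(infer_goal(turns))  # ({'dest_city': 'Austin'}, Counter({'dest_city': 2}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Goal Inference",
"sec_num": "4.7"
},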
{
"text": "As explained by (Scheffler and Young, 2001) , the automatic inference of user goals from dialogues is not unproblematic. The true user goal can never be known since the achieved goal may not be the one that the user started out with. It is impossible to ascertain which goals are indeed completed correctly and which are flawed by recognition errors. User goals may also change as users become aware of system limitations.",
"cite_spans": [
{
"start": 16,
"end": 43,
"text": "(Scheffler and Young, 2001)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "User Goal Inference",
"sec_num": "4.7"
},
{
"text": "To generate dialogues, the simulated user needs \"a dialogue partner\" to interact with. The straightforward strategy would be to take one of the original dialogue managers from ATT, BBN, CMU or SRI. Since none of these was available to us, the only alternative was to implement a new dialogue manager (DM). To make full use of the 4 training datasets, we chose to build a DM which is \"an average\" of the four original dialogue managers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Manager Implementation",
"sec_num": "4.8"
},
{
"text": "The new DM includes the features which are common to all of the original dialogue managers and it structures the dialogue in a similar way. Like all the original managers, the new DM covers flight bookings (origin and destination city, departing date and time, return flight) and ground arrangements (hotel location, hotel chain, car rental).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Manager Implementation",
"sec_num": "4.8"
},
{
"text": "The new DM can process any of the user speech acts present in the data. This includes yes answer and no answer actions, which need to be correctly resolved according to the dialogue context. The DM can also handle user-initiative, i.e. process multiple pieces of incoming information. To resemble the dialogue managers in the training data, however, it does not encourage user initiative. Each DM turn contains at most one request inf o action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Manager Implementation",
"sec_num": "4.8"
},
{
"text": "For each slot, the DM uses a simple state machine. The state of the slot informs the dialogue manager what action to take next to fill and confirm the slot. As can be seen in Figure 3 , the DM can reject, implicitly confirm or explicitly confirm incoming information, based on the confidence score of the incoming action. Confidence scores for each user action are randomly selected from a flat distribution. The threshold levels for rejection, implicit or explicit confirmation can be set so that their relative proportion resembles that found in the training data. The be- Figure 3 : Dialogue manager agenda haviour of the dialogue manager involves no actual access to flight, hotel or car booking systems. Since all interaction occurs on the intention level, no database retrieval needs to be implemented.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dialogue Manager Implementation",
"sec_num": "4.8"
},
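{
"text": "A sketch (an illustration only; the threshold values are assumptions) of the per-slot grounding decision described above, with confidence scores drawn from a flat distribution:\n\nimport random\n\nREJECT_BELOW, EXPLICIT_BELOW = 0.3, 0.6   # set so that the proportions resemble the training data\n\ndef ground_incoming_info(attr, value):\n    confidence = random.random()          # confidence scores drawn from a flat distribution\n    if confidence < REJECT_BELOW:\n        return ('reject_info', attr, None)\n    if confidence < EXPLICIT_BELOW:\n        return ('explicit_confirm', attr, value)\n    return ('implicit_confirm', attr, value)\n\nprint(ground_incoming_info('dest_city', 'Boston'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Manager Implementation",
"sec_num": "4.8"
},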
{
"text": "With regards to evaluation, it is difficult to quantify the effect of using a new DM on the quality of the simulated dialogues. Quite clearly, if the new DM behaves very differently from the original DM that was used for collecting the training data for the simulated user, then the synthetic data can never match the real data exactly -no matter how good the simulated user is. Since the training data is recorded with many different dialogue managers, it is also questionable if a single DM can generate the same variety of dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Manager Implementation",
"sec_num": "4.8"
},
{
"text": "The fact that the training data is recorded using 4 different DMs is a great advantage for us. It enables us to quantify how much user behaviour can vary due to differences in experimental setup and dialogue strategy. By comparing the four original DMs, we can sketch out a target range for our simulated dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Manager Implementation",
"sec_num": "4.8"
},
{
"text": "As explained in section 3, the evaluation is split into two main parts. The first part compares simulated user responses to real user responses in an unseen test set. This assesses how realistic the best response is that the simulated user can predict. The second part compares corpora of simulated dialogues to real corpora. This evaluates how well the simulation covers the variety of user behaviour in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5"
},
{
"text": "As described in Section 3.2, we use Precision and Recall to measure the similarity between simulated and real user responses. The results (Table 2) show that the scores significantly improve from the Bigram to the Levin model. It is interesting to note that the jump in precision clearly exceeds the jump in recall. This is due to the fact that the Bigram model outputs a much greater number of user actions than the Levin model. We will confirm and discuss this problem in more detail later. The relative ranking of the three models is as expected: As the level of sophistication rises, the performance improves. Also as expected, the training data performance is slightly better than the test data performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 147,
"text": "(Table 2)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation of the Best Response",
"sec_num": "5.1"
},
{
"text": "In the second part of the evaluation, we test how well the user models cover the variety of the user population in the training data. A corpus of 150 dialogues is generated with each of the user models through interaction with the dialogue manager (DM). The statistical distribution of the synthetic corpus is then compared to the training data, as described in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of the Generated Corpus",
"sec_num": "5.2"
},
{
"text": "As described in Section 4.8, our evaluation experiments are run with a DM that is different from the one used to collect the training data for the simulated user. It is interesting to investigate what effect the dialogue manager has on user behaviour. We therefore show individual measurements for each of the four datasets as well as the full training corpus (denoted by \"ALL\"). The range of values spanned by the four DMs is the target range for the simulated dialogues. Variations within this range can be attributed to the dialogue manager and the experimental setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of the Generated Corpus",
"sec_num": "5.2"
},
{
"text": "The first group of experiments covers the following statistical dialogue properties:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High-level Dialogue Features",
"sec_num": "5.2.1"
},
{
"text": "\u2022 Dialogue length, measured in the number of turns per task: mean, variance and shape of distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High-level Dialogue Features",
"sec_num": "5.2.1"
},
{
"text": "\u2022 Turn length, measured in the number of actions per turn: mean, variance and shape of distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High-level Dialogue Features",
"sec_num": "5.2.1"
},
{
"text": "\u2022 Participant activity as a ratio of system and user actions per dialogue Figure 4 shows the mean values for dialogue length (= task length) and turn length. The Pietquin model achieves a very good result for dialogue length, missing the mean length of the training data by less than 2 turns. The Levin model is further away from the training data result, but it is still within the target range. The Bigram model performs very badly -the dialogues finish far too early. Analysis of the simulated dialogues shows that the user is very uncooperative, causing the system to finish the dialogue before completing any booking. This will also be confirmed by very low goal completion rates later in the evaluation. We found that the standard deviation of the task length is too small in all of the simulated datasets. The shape of the distributions (Figure 5 ) confirms this. The curves for the Levin model and the Pietquin model look better than for the Bigram model, but their tails are still too flat.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 844,
"end": 853,
"text": "(Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "High-level Dialogue Features",
"sec_num": "5.2.1"
},
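{
"text": "A sketch (an illustration only, assuming dialogues stored as lists of (speaker, actions) turns) of how the high-level features listed above can be computed from a corpus of annotated dialogues:\n\nfrom statistics import mean, pstdev\n\ndef high_level_stats(dialogues):\n    # dialogues: list of dialogues; each dialogue is a list of (speaker, [actions]) turns\n    dlg_lengths = [len(d) for d in dialogues]\n    turn_lengths = [len(actions) for d in dialogues for _, actions in d]\n    user_acts = sum(len(a) for d in dialogues for spk, a in d if spk == 'user')\n    sys_acts = sum(len(a) for d in dialogues for spk, a in d if spk == 'system')\n    return {'mean_dialogue_length': mean(dlg_lengths), 'std_dialogue_length': pstdev(dlg_lengths), 'mean_turn_length': mean(turn_lengths), 'std_turn_length': pstdev(turn_lengths), 'user_to_system_action_ratio': user_acts / max(sys_acts, 1)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High-level Dialogue Features",
"sec_num": "5.2.1"
},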
{
"text": "Interestingly, the results for turn length look better. As shown in Figure 4 the means of the simulated datasets and the training data are nicely aligned. Only the Bigram model produces far too many actions per turn. The flaw Figure 5 : Task length distribution leading to this problem is the assumption that each system action triggers exactly one user action. In real dialogues the relationship is not necessarily 1 to 1. An open question such as \"How may I help you?\", for instance, can lead the user to respond with several pieces of information. An implicit confirmation or an apology, on the other hand, may trigger no user response at all. The latter case is very common in real dialogues, leading to a lower average number of actions per turn.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 76,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "High-level Dialogue Features",
"sec_num": "5.2.1"
},
{
"text": "The Levin model and the Pietquin model achieve almost perfect results for the standard deviation of the turn length. Looking at the shape of their distributions ( Figure 6 ), we can see that they closely resemble the shape of the training data distribution. The next experiment investigates dialogue participant activity. Figure 7 shows the ratio of user vs. system actions. The lower part of the bar indicates the percentage of user actions while the upper part represents system actions.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 172,
"text": "Figure 6",
"ref_id": "FIGREF3"
},
{
"start": 323,
"end": 331,
"text": "Figure 7",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "High-level Dialogue Features",
"sec_num": "5.2.1"
},
{
"text": "Once again, the Bigram model is far outside the target range. As confirmed by the previous experiment, the user is \"talking too much\". The Levin and the Pietquin model achieve almost identical scores. Both models are inside the target range and not far from the training data result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High-level Dialogue Features",
"sec_num": "5.2.1"
},
{
"text": "The next group of experiments covers the following statistical properties: \u2022 Proportion of goal-directed actions (request and provide information) vs. grounding actions (explicit and implicit confirmations) vs. dialogue formalities (greetings, apologies, instructions) vs. unrecognised actions (unknown).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Style and Cooperativeness",
"sec_num": "5.2.2"
},
{
"text": "\u2022 Number of times a piece of information is requested, provided, re-requested and re-provided in each dialogue",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Style and Cooperativeness",
"sec_num": "5.2.2"
},
{
"text": "The histogram in Figure 8 shows the frequency of the most dominant user and system speech acts. The first three bins cover user actions: \"provide info\", \"yes/no answer\" and \"unknown\". The last two bins are for system actions: \"request info\" and \"explicit/implicit confirm\". Secondly, we group all user and system actions into categories, as shown in Figure 9 . This allows us to investigate what proportion of the dialogue is spent on goaldirected actions, grounding actions and dialogue formalities. Since the number of unrecognised actions is high, a separate category is created for these actions. Hangup actions are not included in this analysis, since every dialogue contains exactly one hang-up action.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 8",
"ref_id": "FIGREF5"
},
{
"start": 350,
"end": 358,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dialogue Style and Cooperativeness",
"sec_num": "5.2.2"
},
{
"text": "Our analysis shows that the relative ordering of actions is fairly similar for the four real systems. In the simulated datasets, the number of \"unknown\" user actions is clearly too low. This indicates that misunderstandings are not simulated well. At the same time, the proportion of goaldirected actions is too low compared to grounding actions and dialogue formalities. To evaluate the cooperativeness of the simulated user, we examine how often attributes are requested, provided, re-requested and re-provided per dialogue ( Figure 10) . The results confirm that the simulated user in the Bigram model is too active: The ratio between provide inf o and request inf o actions is tilted towards the user actions. The Levin and the Pietquin models show a rather large number of provide inf o and request inf o actions. The ratio between system requests and corresponding user responses, however, is very similar to the training data. This shows that the degree of user cooperativeness is modelled fairly well.",
"cite_spans": [],
"ref_spans": [
{
"start": 528,
"end": 538,
"text": "Figure 10)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dialogue Style and Cooperativeness",
"sec_num": "5.2.2"
},
{
"text": "The final group of experiments covers the following statistical properties:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Success Rate and Efficiency",
"sec_num": "5.2.3"
},
{
"text": "\u2022 average goal / subgoal achievement rate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Success Rate and Efficiency",
"sec_num": "5.2.3"
},
{
"text": "\u2022 mean and variance of the goal completion time Figure 11 shows the goal achievement rates and goal completion times for the four real systems and the three simulated systems. We are only showing the results for flight-bookings here, but a similar analysis can be done for hotel-reservations and rental-car bookings. We have assumed that a subgoal is completed when the system acknowledges the corresponding booking. As expected, the Bigram model produces very poor results. The performance of the Levin and the Pietquin model is more interesting. Our analysis shows that the simulated users more frequently achieve their goals, but that the average completion time is longer. A possible explanation for this may be that the user's level of persistence and patience is not modelled well. Real user's seem to be more likely to hangup if the dialogue progress is slow.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 57,
"text": "Figure 11",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dialogue Success Rate and Efficiency",
"sec_num": "5.2.3"
},
{
"text": "Another plausible explanation is that real users can be roughly divided into a large group of novices and a small group of experts. The latter group is aware of system limitations and completes the dialogue successfully and quickly. The novices, on the other hand, tend to engage in long, error-prone dialogues that do not lead to successful completion. This dependency between user expertise and user behaviour is not accounted for in our implementations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Success Rate and Efficiency",
"sec_num": "5.2.3"
},
{
"text": "Analysis of the dialogue transcriptions shows that many real users produce special requests, such as \"a window seat on the plane\" or \"rental car-insurance\". This group of novice users appears to be underrepresented in the simulated datasets. Our experiments confirm this: \"special requests\" are usually parsed as \"unknown\" and this type of action is significantly less frequent in simulated datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Success Rate and Efficiency",
"sec_num": "5.2.3"
},
{
"text": "Interestingly, the Pietquin model performs worse than the Levin model for the goal completion metrics, although it explicitly takes the user goal into consideration. It appears that the model in its current form is too constraining. The assumption that the user goal stays fixed over the course of a dialogue is not correct. Secondly, the Pietquin model encourages the user not to mention attributes which are not part of his goal. While this is conceptually correct, it seems to have negative effects on user behaviour when the goal representation cannot capture the complexity of real user goals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Success Rate and Efficiency",
"sec_num": "5.2.3"
},
{
"text": "Manual analysis of the simulated dialogues also shows that the first phase of the dialogue (greeting, instruction, exchange of flight booking details) is fairly realistic, presumably because it follows dialogue conventions which are modelled well by Levin and Pietquin. The second phase (modification of booking details, re-retrieval of suitable flights, etc.) is less realistic, possibly because it is more strongly driven by the user goal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Success Rate and Efficiency",
"sec_num": "5.2.3"
},
{
"text": "This paper has presented a detailed evaluation of the most prominent domain-independent approaches to stochastic user simulation based on a large corpus of real humancomputer dialogues. The nature of the simulation problem is such that no single measure of goodness exists, but we have demonstrated that a set of metrics can be used to identify the strength and weaknesses of each method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "Our results show that the works of Levin and Pietquin have led to good improvements in user simulation quality. Both approaches clearly outperform the Bigram baseline. However, the results also show that the simulated datasets can still be distinguished from real datasets using simple statistical metrics. Our analysis indicates that it may beneficial to distinguish between different user groups, for instance by training multiple user models with (say) different levels of expertise. Further research is also needed on modelling user goals and on modelling dialogue misunderstandings. We hope to address these problems in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "We believe that it may be particularly beneficial to develop a better representation of the user goal. To cover realistic dialogues, we must acknowledge that user goals can have hierarchical structures -and that these structures can evolve over time. The Hidden Vector State Model (Young, 2002) has recently been introduced as a method for learning hierarchical dependencies and we intend to investigate its use for user modelling.",
"cite_spans": [
{
"start": 281,
"end": 294,
"text": "(Young, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "User modelling for spoken dialogue system evaluation",
"authors": [
{
"first": "W",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Pieraccini",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of ASRU '97",
"volume": "",
"issue": "",
"pages": "80--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Eckert, E. Levin, and R. Pieraccini. 1997. User mod- elling for spoken dialogue system evaluation. In Proc. of ASRU '97, pages 80-87.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning user simulations for information state update dialogue systems",
"authors": [
{
"first": "K",
"middle": [],
"last": "Georgila",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Georgila, J. Henderson, and O. Lemon. 2005a. Learn- ing user simulations for information state update dia- logue systems. Submitted to Eurospeech '05.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic annotation of COMMUNICATOR dialogue data for learning dialogue strategies and user simulations",
"authors": [
{
"first": "K",
"middle": [],
"last": "Georgila",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Lemon",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of DIALOR '05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Georgila, O. Lemon, and J. Henderson. 2005b. Auto- matic annotation of COMMUNICATOR dialogue data for learning dialogue strategies and user simulations. In Proc. of DIALOR '05 (to appear).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A stochastic model of human-machine interaction for learning dialog strategies",
"authors": [
{
"first": "E",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Pieraccini",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Eckert",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Trans. on Speech and Audio Processing",
"volume": "8",
"issue": "1",
"pages": "11--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Levin, R. Pieraccini, and W. Eckert. 2000. A stochas- tic model of human-machine interaction for learning dialog strategies. IEEE Trans. on Speech and Audio Processing, 8(1):11-23.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Computer-aided analysis and design for spoken dialogue systems based on quantitative simulations",
"authors": [
{
"first": "B.-S",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "L.-S",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2001,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "9",
"issue": "5",
"pages": "534--548",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B.-S. Lin and L.-S. Lee. 2001. Computer-aided analysis and design for spoken dialogue systems based on quan- titative simulations. IEEE Transactions on Speech and Audio Processing, 9(5):534-548.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Assessment of dialogue systems by means of a new simulation technique",
"authors": [
{
"first": "R",
"middle": [],
"last": "Lopez-Cozar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "De La Torre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Segura",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rubio",
"suffix": ""
}
],
"year": 2003,
"venue": "Speech Communication",
"volume": "40",
"issue": "",
"pages": "387--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Lopez-Cozar, A. de la Torre, J. Segura, and A. Rubio. 2003. Assessment of dialogue systems by means of a new simulation technique. Speech Communication, 40:387-407.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Framework for Unsupervised Learning of Dialogue Strategies",
"authors": [
{
"first": "O",
"middle": [],
"last": "Pietquin",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Pietquin. 2004. A Framework for Unsupervised Learning of Dialogue Strategies. Ph.D. thesis, Faculte Polytechnique de Mons.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Corpus-based dialogue simulation for automatic strategy learning and evaluation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Scheffler",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. NAACL Workshop on Adaptation in Dialogue Systems",
"volume": "",
"issue": "",
"pages": "64--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Scheffler and S. J. Young. 2001. Corpus-based di- alogue simulation for automatic strategy learning and evaluation. In Proc. NAACL Workshop on Adaptation in Dialogue Systems, pages 64-70.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic design of spoken dialogue systems",
"authors": [
{
"first": "K",
"middle": [],
"last": "Scheffler",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Scheffler. 2002. Automatic design of spoken dialogue systems. Ph.D. thesis, Cambridge University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Evaluating dialogue strategies under communication errors using computer-to-computer simulation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Araki",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Doshita",
"suffix": ""
}
],
"year": 1998,
"venue": "Trans. of IE-ICE",
"volume": "",
"issue": "9",
"pages": "1025--1033",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Watanabe, M. Araki, and S. Doshita. 1998. Evalu- ating dialogue strategies under communication errors using computer-to-computer simulation. Trans. of IE- ICE, Info Syst., E81-D(9):1025-1033.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Talking to machines (statistically speaking)",
"authors": [
{
"first": "S",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ICSLP '02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Young. 2002. Talking to machines (statistically speak- ing). In Proc. of ICSLP '02. Denver, Colorado.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Predictive statistical models for user modeling",
"authors": [
{
"first": "I",
"middle": [],
"last": "Zukerman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Albrecht",
"suffix": ""
}
],
"year": 2001,
"venue": "User Modeling and User-Adapted Interaction",
"volume": "11",
"issue": "",
"pages": "129--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Zukerman and D. Albrecht. 2001. Predictive statistical models for user modeling. User Modeling and User- Adapted Interaction, 11:129-158.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Strategy learning using a simulated user",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Response generation actions.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Mean task and turn length",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Turn length distribution",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Ratio of user vs. system actions \u2022 Frequency of different user and system speech-acts (average number of occurrences per dialogue)",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Histogram of speech acts",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Proportion of dialogue spent on goal-directed actions, grounding actions, dialogue formalities, unrecognised actions. All bars show the percentage of actions in the corresponding class, i.e. the four bars add up to 100% Dialogue efficiency (thin lines show std. deviation)",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"text": "Flight goal completion rates (percentage of dialogues with successfully completed subgoal) and completion times (in dialogue turns, thin lines indicate standard deviation).",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"text": "and Pietquin models. None of these Turn Spkr. Actn. Speech act, Attribute, Value t",
"type_str": "table",
"content": "<table><tr><td>1</td><td>Sys</td><td>a 1 a 2</td><td>greeting request info orig city</td></tr><tr><td>t 2</td><td>User</td><td>a 3</td><td>provide info orig city boston</td></tr><tr><td>t 3</td><td>Sys</td><td>a 4 a 5</td><td>implicit confirm orig city oslo request info dest city</td></tr><tr><td>t 4</td><td>User</td><td>a 6 a 7</td><td>no answer provide info orig city boston</td></tr><tr><td>t 5</td><td>Sys</td><td>a 8 a 9</td><td>apology explicit confirm orig city boston</td></tr><tr><td>t 6</td><td/><td/><td/></tr></table>",
"num": null
},
"TABREF1": {
"html": null,
"text": "Sample dialogue models has been fully applied to real data before and we found that a number of modifications were necessary to be able to actually train the models. The Bigram model and",
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF3": {
"html": null,
"text": "Precision and recall scores outperforms both the Bigram and the Levin model. Its improvement over the Levin model is notable, but not as dramatic as the gap between the Bigram and the Levin model. This is natural, considering that the Pietquin model may be viewed as an extension of the Levin model.",
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}