{ "paper_id": "N07-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:48:08.457245Z" }, "title": "Estimating the Reliability of MDP Policies: A Confidence Interval Approach", "authors": [ { "first": "Joel", "middle": [ "R" ], "last": "Tetreault", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pittsburgh LRDC Pittsburgh PA", "location": { "postCode": "15260", "country": "USA" } }, "email": "tetreaul@pitt.edu" }, { "first": "Dan", "middle": [], "last": "Bohus", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "dbohus@cs.cmu.edu" }, { "first": "Diane", "middle": [ "J" ], "last": "Litman", "suffix": "", "affiliation": { "laboratory": "", "institution": "LRDC Pittsburgh PA", "location": { "postCode": "15260", "country": "USA" } }, "email": "litman@cs.pitt.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Past approaches for using reinforcement learning to derive dialog control policies have assumed that there was enough collected data to derive a reliable policy. In this paper we present a methodology for numerically constructing confidence intervals for the expected cumulative reward for a learned policy. These intervals are used to (1) better assess the reliability of the expected cumulative reward, and (2) perform a refined comparison between policies derived from different Markov Decision Processes (MDP) models. We applied this methodology to a prior experiment where the goal was to select the best features to include in the MDP statespace. Our results show that while some of the policies developed in the prior work exhibited very large confidence intervals, the policy developed from the best feature set had a much smaller confidence interval and thus showed very high reliability.", "pdf_parse": { "paper_id": "N07-1035", "_pdf_hash": "", "abstract": [ { "text": "Past approaches for using reinforcement learning to derive dialog control policies have assumed that there was enough collected data to derive a reliable policy. In this paper we present a methodology for numerically constructing confidence intervals for the expected cumulative reward for a learned policy. These intervals are used to (1) better assess the reliability of the expected cumulative reward, and (2) perform a refined comparison between policies derived from different Markov Decision Processes (MDP) models. We applied this methodology to a prior experiment where the goal was to select the best features to include in the MDP statespace. Our results show that while some of the policies developed in the prior work exhibited very large confidence intervals, the policy developed from the best feature set had a much smaller confidence interval and thus showed very high reliability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "NLP researchers frequently have to deal with issues of data sparsity. 
Whether the task is machine translation or named-entity recognition, the amount of data one has to train or test with can greatly impact the reliability and robustness of one's models, results and conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One research area that is particularly sensitive to the data sparsity issue is machine learning, specifi-cally in using Reinforcement Learning (RL) to learn the optimal action for a dialogue system to make given any user state. Typically this involves learning from previously collected data or interacting in real-time with real users or user simulators. One of the biggest advantages to this machine learning approach is that it can be used to generate optimal policies for every possible state. However, this method requires a thorough exploration of the state-space to make reliable conclusions on what the best actions are. States that are infrequently visited in the training set could be assigned sub-optimal actions, and therefore the resulting dialogue manager may not provide the best interaction for the user.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we present an approach for estimating the reliability of a policy derived from collected training data. The key idea is to take into account the uncertainty in the model parameters (MDP transition probabilities), and use that information to numerically construct a confidence interval for the expected cumulative reward for the learned policy. This confidence interval approach allows us to: (1) better assess the reliability of the expected cumulative reward for a given policy, and (2) perform a refined comparison between policies derived from different MDP models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We apply the proposed approach to our previous work (Tetreault and Litman, 2006) in using RL to improve a spoken dialogue tutoring system. In that work, a dataset of 100 dialogues was used to develop a methodology for selecting which user state features should be included in the MDP state-space. But are 100 dialogues enough to generate reliable policies? In this paper we apply our confidence in-terval approach to the same dataset in an effort to investigate how reliable our previous conclusions are, given the amount of available training data.", "cite_spans": [ { "start": 52, "end": 80, "text": "(Tetreault and Litman, 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the following section, we discuss the prior work and its data sparsity issue. In section 3, we describe in detail our confidence interval methodology. In section 4, we show how this methodology works by applying it to the prior work. 
In sections 5 and 6, we present our conclusions and future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Past research into using RL to improve spoken dialogue systems has commonly used Markov Decision Processes (MDP's) (Sutton and Barto, 1998) to model a dialogue (such as (Levin and Pieraccini, 1997) and (Singh et al., 1999) ", "cite_spans": [ { "start": 115, "end": 139, "text": "(Sutton and Barto, 1998)", "ref_id": "BIBREF9" }, { "start": 169, "end": 197, "text": "(Levin and Pieraccini, 1997)", "ref_id": "BIBREF4" }, { "start": 202, "end": 222, "text": "(Singh et al., 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "A MDP is defined by a set of states {s i } i=1..n , a set of actions {a k } k=1..p , and a set of transition probabilities which reflect the dynamics of the environment {p(s i |s j , a k )} k=1..p i,j=1..n : if the model is at time t in state s j and takes action a k , then it will transition to state s i with probability p(s i |s j , a k ). Additionally, an expected reward r(s i , s j , a k ) is defined for each transition. Once these model parameters are known, a simple dynamic programming approach can be used to learn the optimal control policy \u03c0 * , i.e. the set of actions the model should take at each state, to maximize its expected cumulative reward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "The dialog control problem can be naturally cast in this formalism: the states {s i } i=1..n in the MDP correspond to the dialog states (or an abstraction thereof), the actions {a k } k=1..p correspond to the particular actions the dialog manager might take, and the rewards r(s i , s j , a k ) are defined to reflect a particular dialog performance metric. Once the MDP structure has been defined, the model parameters {p(s i |s j , a k )} k=1..p i,j=1..n are estimated from a corpus of dialogs (either real or simulated), and, based on them, the policy which maximizes the expected cumulative reward is computed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "While most work in this area has focused on developing the best policy (such as (Walker, 2000) , (Henderson et al., 2005) ), there has been relatively little work done with respect to selecting the best features to include in the MDP state-space. For instance, Singh et al. (1999) showed that dialogue length was a useful state feature and Frampton and Lemon (2005) showed that the user's last dialogue act was also useful. In our previous work, we compare the worth of several features. In addition, Paek and Chickering's (2005) work showed how a statespace can be reduced by only selecting features that are relevant to maximizing the reward function.", "cite_spans": [ { "start": 80, "end": 94, "text": "(Walker, 2000)", "ref_id": "BIBREF11" }, { "start": 97, "end": 121, "text": "(Henderson et al., 2005)", "ref_id": "BIBREF2" }, { "start": 261, "end": 280, "text": "Singh et al. 
(1999)", "ref_id": "BIBREF8" }, { "start": 340, "end": 365, "text": "Frampton and Lemon (2005)", "ref_id": "BIBREF1" }, { "start": 501, "end": 529, "text": "Paek and Chickering's (2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "The motivation for this line of research is that if one can properly select the most informative features, one develops better policies, and thus a better dialogue system. In the following sections we summarize our past data, approach, results, and issue with policy reliability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "For this study, we used an annotated corpus of human-computer spoken dialogue tutoring sessions. The fixed-policy corpus contains data collected from 20 students interacting with the system for five problems (for a total of 100 dialogues of roughly 50 turns each). The corpus was annotated with 5 state features (Table 1 ). It should be noted that two of the features, Certainty and Frustration, were manually annotated while the other three were done automatically. All features are binary except for Certainty which has three values. For the action set {a k } k=1..p , we looked at what type of question the system could ask the student given the previous state. There are a total of four possible actions: ask a short answer question (one that requires a simple one word response), a complex answer question (one that requires a longer, deeper response), ask both a simple and complex question in the same turn, or do not ask a question at all (give a hint). The reward function r was the learning gain of each student based on a pair of tests before and after the entire session of 5 dialogues. The 20 students were split into two groups (high and low learners) based on their learning gain, so 10 students and their respective five dialogues were given a positive reward of +100, while the remainder were assigned a negative reward of -100. The rewards were assigned in the final dialogue state, a common approach when applying RL in spoken dialogue systems.", "cite_spans": [], "ref_spans": [ { "start": 312, "end": 320, "text": "(Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "MDP Structure", "sec_num": "2.1" }, { "text": "To investigate the usefulness of different features, we took the following approach. We started with two baseline MDPs. The first model (Baseline 1) used only the Correctness feature in the state-space. The second model (Baseline 2) included both the Correctness and Certainty features. Next we constructed 3 new models by adding each of the remaining three features (Frustration, Percent Correct and Concept Repetition) to the Baseline 2 model. We defined three metrics to compare the policies derived from these MDPs: (1) Diff's: the number of states whose policy differs from the Baseline 2 policy, (2) Percent Policy change (P.C.): the weighted amount of change between the two policies (100% indicates total change), and (3) Expected Cumulative Reward (or ECR) which is the average reward one would expect in that MDP when in the statespace.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach and Results", "sec_num": "2.2" }, { "text": "The intuition is that if a new feature were relevant, the corresponding model would lead to a different policy and a better expected cumulative reward (when compared to the baseline models). 
Conversely, if the features were not useful, one would expect that the new policies would look similar (specifically, the Diff's count and % Policy Change would be low) or produce similar expected cumulative rewards to the original baseline policy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach and Results", "sec_num": "2.2" }, { "text": "The results of this analysis are shown in Table 2 1 The Diff's and Policy Change metrics are undefined for the two baselines since we only use these two metrics to compare the other three features to Base-line 2. All three metrics show that the best feature to add to the Baseline 2 model is Concept Repetition since it results in the most change over the Baseline 2 policy, and also the expected reward is the highest as well. For the remainder of this paper, when we refer to Concept Repetition, Frustration, or Percent Correctness, we are referring to the model that includes that feature as well as the Baseline 2 features Correctness and Certainty. ", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Approach and Results", "sec_num": "2.2" }, { "text": "However, the approach discussed above assumes that given the size of the data set, the ECR and policies are reliable. If the MDP model were very fragile, that is the policy and expected cumulative reward were very sensitive to the quality of the transition probability estimates, then the metrics could reveal quite different rankings. Previously, we used a qualitative approach of tracking how the worth of each state (V-value) changed over time. The V-values indicate how much reward one would expect from starting in that state to get to a final state. We hypothesized that if the V-values stabilized as data increased, then the learned policy would be more reliable. So is this V-value methodology adequate for assessing if there is enough data to determine a stable policy, and also for assessing if one model is better than another? Since our approach for statespace selection is based on comparing a new policy with a baseline policy, having a stable policy is extremely important since instability could lead to different conclusions. For example, in one comparison, a new policy could differ with the baseline in 8 out of 10 states. But if the MDP were unstable, adding just a little more data could result in a difference of only 4 out of 10 states. Is there an approach that can categorize whether given a certain data size, that the expected cumulative reward (and thus the policy) is reliable? In the next section we present a new methodology for numerically constructing confidence intervals for these value function estimates. 
Then, in the following section, we reevaluate our prior work with this methodology and discuss the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem with Reliability", "sec_num": "2.3" }, { "text": "Intervals The starting point for the proposed methodology is the observation that for each state s j and action a k in the MDP, the set of transition probabilities {p(s i |s j , a k )} i=1..n are modeled as multinomial distributions that are estimated from the transition counts in the training data:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(s i |s j , a k ) = c(s i , s j , a k ) n i=1 c(s i , s j , a k )", "eq_num": "(1)" } ], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "where n is the number of states in the model, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "c(s i , s j , a k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "is the number of times the system was in state s j , took action a k , and transitioned to state", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "s i in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "It is important to note that these parameters are just estimates. The reliability of these estimates clearly depends on the amount of training data, more specifically on the transition counts c(s i , s j , a k ). For instance, consider a model with 3 states and 2 actions. Say the model was in state s 1 and took action a 1 ten times. Out of these, three times the model transitioned back to state s 1 , two times it transitioned to state s 2 , and five times to state s 3 . Then we have: While both sets of transition parameters have the same value, the second set of estimates is more reliable. The central idea of the proposed approach is to model this uncertainty in the system parameters, and use it to numerically construct confidence intervals for the value of the optimal policy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "Formally, each set of transition probabilities", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "{p(s i |s j , a k )} i=1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": ".n is modeled as a multinomial distribution, estimated from data 2 . The uncertainty of multinomial estimates are commonly modeled by means of a Dirichlet distribution. The Dirichlet distribution is characterized by a set of parameters \u03b1 1 , \u03b1 2 , ..., \u03b1 n , which in this case correspond to the counts {c(s i , s j , a k )} i=1..n . 
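Concretely, the difference in reliability between the two count vectors in the example above can be made visible by sampling candidate transition rows from the Dirichlet distributions that those counts induce. The following is a minimal illustrative sketch, assuming numpy as the numerical library; the counts are the 3/2/5 and 300/200/500 vectors from the example, and nothing else in the snippet is taken from the paper:

import numpy as np

rng = np.random.default_rng(0)

# Two count vectors with identical maximum likelihood estimates (0.3, 0.2, 0.5),
# one based on 10 observed transitions and one based on 1000.
for counts in (np.array([3, 2, 5]), np.array([300, 200, 500])):
    ml_row = counts / counts.sum()              # frequency estimates, as in eq. (1)
    samples = rng.dirichlet(counts, size=1000)  # plausible transition rows given the counts
    print(ml_row, samples.std(axis=0))          # same point estimate, very different spread

Both iterations print the same maximum likelihood row, but the per-component spread of the sampled rows is roughly an order of magnitude smaller for the 1000-observation counts; the formal statement of this uncertainty model is given next.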
For any given j, the likelihood of the set of multinomial transition parameters {p(s i |s j , a k )} i=1..n is then given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "P ({p(s i |s j , a k )} i=1..n |D) = = 1 Z(D) n i=1 p(s i |s j , a k ) \u03b1 i \u22121 (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "Z(D) = n i=1 \u0393(\u03b1 i ) \u0393( n i=1 \u03b1 i ) and \u03b1 i = c(s i , s j , a k ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "Note that the maximum likelihood estimates for the formula above correspond to the frequency count formula we have already described:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p M L (s i |s j , a k ) = \u03b1 i n i=1 \u03b1 i = c(s i , s j , a k ) n i=1 c(s i , s j , a k )", "eq_num": "(5)" } ], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "To capture the uncertainty in the model parameters, we therefore simply need to store the counts of the observed transitions c(s i , s j , a k ). Based on this model of uncertainty, we can numerically construct a confidence interval for the value of the optimal policy \u03c0 * . Instead of computing the value of the policy based on the maximum likelihood transition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "estimatesT M L = {p M L(s i |s j , a k )} k=1..p i,j=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "..n , we generate a large number of transition matricesT 1 ,T 1 , ...T m by sampling from the Dirichlet distributions corresponding to the counts observed in the training data (in the experiments reported in this paper, we used m = 1000). We then compute the value of the optimal policy \u03c0 * in each of these models", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "{V \u03c0 * (T i )} i=1..m .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "Finally, we numerically construct the 95% confidence interval for the value function based on the resulting value estimates: the bounds for the confidence interval are set at the lowest and highest 2.5 percentile of the resulting distribution of the values for the optimal policy", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "{V \u03c0 * (T i )} i=1..m .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "The algorithm is outlined below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "1. 
compute transition counts from the training set:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C = {c(s i , s j , a k )} k=1..p i,j=1..n", "eq_num": "(6)" } ], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "2. compute maximum likelihood estimates for transition probability matrix:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T M L = {p M L (s i |s j , a k )} k=1..p i,j=1..n", "eq_num": "(7)" } ], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "3. use dynamic programming to compute the optimal policy \u03c0 * for modelT M L 4. sample m transition matrices {T k } k=1..m , using the Dirichlet distribution for each row:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "{p i (s i |s j , a k )} i=1..n = = Dir({c(s i , s j , a k )} i=1..n )", "eq_num": "(8)" } ], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "5. evaluate the optimal policy \u03c0 * in each of these m models, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "obtain V \u03c0 * (T i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "6. numerically build the 95% confidence interval for V \u03c0 * from these estimates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "To summarize, the central idea is to take into account the reliability of the transition probability estimates and construct a confidence interval for the expected cumulative reward for the learned policy. In the standard approach, we would compute an estimate for the expected cumulative reward, by simply using the transition probabilities derived from the training set. Note that these transition probabilities are simply estimates which are more or less accurate, depending on how much data is available. The proposed methodology does not fully trust these estimates, and asks the question: given that the real world (i.e. real transition probabilities) might actually be a bit different than we think it is, how well can we expect the learned policy to perform? Note that the confidence interval we construct, and therefore the conclusions we draw, are with respect to the policy learned from the current estimates, i.e. from the current training set. If more data becomes available, a different optimal policy might emerge, about which we cannot say much.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Evaluation with Confidence", "sec_num": "3.1" }, { "text": "Given the stochastic nature of the models, confidence intervals are often used to estimate the reliability of results in machine learning experiments, e.g. 
(Rivals and Personnaz, 2002) , (Schapire, 2002) and (Dumais et al., 1998) . In this work we use a confidence interval methodology in the context of MDPs. The idea of modeling the uncertainty of the transition probability estimates using Dirichlet models also appears in (Jaulmes et al., 2005) . In that work, the authors used the uncertainty in model parameters to develop active learning strategies for partially observable MDPs, a topic not previously addressed in the literature. In our work we rely on the same model of uncertainty for the transition matrix, but use it to derive confidence intervals for the expected cumulative reward for the learned optimal policy, in an effort to assess the reliability of this policy.", "cite_spans": [ { "start": 156, "end": 184, "text": "(Rivals and Personnaz, 2002)", "ref_id": "BIBREF6" }, { "start": 187, "end": 203, "text": "(Schapire, 2002)", "ref_id": "BIBREF7" }, { "start": 208, "end": 229, "text": "(Dumais et al., 1998)", "ref_id": "BIBREF0" }, { "start": 426, "end": 448, "text": "(Jaulmes et al., 2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3.2" }, { "text": "Our previous results indicated that Concept Repetition was the best feature to add to the Baseline 2 state-space model, but also that Percent Correctness and Frustration (when added to Baseline 2) offered an improvement over the Baseline MDP's. However, these conclusions were based on a very qualitative approach for determining if a policy is reliable or not. In the following subsection, we apply our approach of confidence intervals to empirically determine if given this data set of 100 dialogues, whether the estimates of the ECR are reliable, and whether the original rankings and conclusions hold up under this refined analysis. In subsection 4.2, we provide a methodology for pinpointing when one model is better than another.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "For our first investigation, we look at the confidence intervals of each MDP's ECR over the entire data set of 20 students (later in this section we show plots for the confidence intervals as data increases). Table 3 shows the upper and lower bounds for the ECR originally reported in Table 2 . The first column shows the original, estimated ECR of the MDP and the last column is the width of the bound (the difference between the upper and lower bound).", "cite_spans": [], "ref_spans": [ { "start": 209, "end": 216, "text": "Table 3", "ref_id": null }, { "start": 285, "end": 292, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Quantitative Analysis of ECR Reliability", "sec_num": "4.1" }, { "text": "So what conclusions can we make about the reliability of the ECR, and hence of the learned policies for the different MDP's, given this amount of training data? The confidence interval for the ECR for Table 3 : Confidence Intervals with complete dataset the Baseline 1 model ranges from 0.21 to 23.73. Recall that the final states are capped at +100 and -100, and are thus the maximum and minimum bounds that one can see in this experiment. These bounds tell us that, if we take into account the uncertainty in the model estimates (given the small training set size), with probability 0.95 the actual true ECR for this policy will be greater than 0.21 and smaller than 23.73. 
The width of this confidence interval is 23.52.", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 208, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Quantitative Analysis of ECR Reliability", "sec_num": "4.1" }, { "text": "For the Baseline 2 model, the bounds are much wider: from -5.31 to 60.48, for a total width of 65.79. While the ECR estimate is 31.92 (which is seemingly larger than 6.15 for the Baseline 1 model), the wide confidence interval tells us that this estimate is not very reliable. It is possible that the policy derived from this model with this amount of data could perform poorly, and even get a negative reward. From the dialogue system designer's standpoint, a model like this is best avoided.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative Analysis of ECR Reliability", "sec_num": "4.1" }, { "text": "Of the remaining three models -Concept Repetition, Frustration, and Percent Correctness, the first one exhibits a tighter confidence interval, indicating that the estimated expected cumulative reward (42.56) is fairly reliable: with 95% probability of being between 28.37 and 59.29. The ECR for the other two models (Frustration and Percent Correctness) again shows a wide confidence interval once we take into account the uncertainty in the model parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative Analysis of ECR Reliability", "sec_num": "4.1" }, { "text": "These results shed more light on the shortcomings of the ECR metric used to evaluate the models in prior work. This estimate does not take into account the uncertainty of the model parameters. For example, a model can have an optimal policy with a very high ECR value, but have very wide confidence bounds reaching even into negative rewards. On the other hand, another model can have a relatively lower ECR but if its bounds are tighter (and the lower bound is not negative), one can know that that policy is less affected by poor parameter estimates stemming from data sparsity issues. Using the confidence intervals associated with the ECR gives a much more refined, quantitative estimate of the reliability of the reward, and hence of the policy derived from that data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative Analysis of ECR Reliability", "sec_num": "4.1" }, { "text": "An extension of this result is that confidence intervals can also allow us to make refined judgments about the comparative utility of different features, the original motivation of our prior study. Basically, a model (M1) is better than another (M2) if M1's lower bound is greater than the upper bound of M2. That is, one knows that 95% of the time, the worst case situation of M1 (the lower bound) will always yield a higher reward than the best case of M2. In our data, this happens only once, with Concept Repetition being empirically better than Baseline 1, since the lower bound of Concept Repetition is 28.37 and the upper bound of Baseline 1 is 23.73. Given this situation, Concept Repetition is a useful feature which, when included in the model, leads to a better policy than simply using Correctness. We cannot draw any conclusions about the other features, since their bounds are generally quite wide. Given this amount of training data, we cannot say whether Percent Correctness and Frustration are better features than the Baseline MDP's. 
Although their ECR's are higher, there is too much uncertainty to definitely conclude they are better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative Analysis of ECR Reliability", "sec_num": "4.1" }, { "text": "The previous analysis focused on a quantitative method of (1) determining the reliability of the MDP ECR estimate and policy, as well as (2) assessing whether one model is better than another. In this section, we present an extension to the second contribution by answering the question: given that one model is more reliable than another, is it possible to determine at which point one model's estimates become more reliable than another model's? In our To do this, we investigate how the confidence interval changes as the amount of training data increases instead of looking at the reliability estimate at only one particular data size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pinpointing Model Cross-over", "sec_num": "4.2" }, { "text": "We incrementally increase the amount of training data (adding the data from one new student at a time), and calculate the corresponding optimal policy and confidence interval for the expected cumulative reward for that policy. Figure 1 shows the confidence interval plots as data is added to the MDP for the Baseline 1 and Concept Repetition MDP's. For reference, Baseline 2, Percent Correctness and Frustration plots did not exhibit the same converging behavior as these two, which is not surprising given how wide the final bounds are. For each plot, the bold lines represent the upper and lower bounds, and the dotted line represents the calculated ECR.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 235, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Pinpointing Model Cross-over", "sec_num": "4.2" }, { "text": "Analyzing the two MDP's, we find that the confidence intervals for Baseline 1 and Concept Repetition converge as more data is added, which is an expected trend. One useful result from observing the change in confidence intervals is that one can determine the point in one which one model becomes empirically better than another. Superimposing the upper and lower bounds (Figure 2 ) reveals that after we include the data from the first 13 students, the lower bound of Concept Repetition crosses over the upper bound of Baseline 1.", "cite_spans": [], "ref_spans": [ { "start": 370, "end": 379, "text": "(Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Pinpointing Model Cross-over", "sec_num": "4.2" }, { "text": "Observing this behavior is especially useful for performing model switching. In automatic model switching, a dialogue manager runs in real time and as it collects data, it can switch from using a simple dialogue model to a complex model. Confidence intervals can be used to determine when to switch from one model to the next by checking if a complex model's bounds cross over the bounds of the current model. Basically, the dialogue manager switches when it can be sure that the more complex model's ECR is not only higher, but statistically significantly so. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pinpointing Model Cross-over", "sec_num": "4.2" }, { "text": "Past work in using MDP's to improve spoken dialogue systems have usually glossed over the issue of whether or not there was enough training data to develop reliable policies. 
In this work, we present a numerical method for building confidence intervals for the expected cumulative reward for a learned policy. The proposed approach allows one to (1) better assess the reliability of the expected cumulative reward for a given policy, and (2) perform a refined comparison between policies derived from different MDP models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We applied this methodology to a prior experiment where the objective was to select the best features to include in the MDP state-space. Our results show that policies constructed from the Baseline 1 and Concept Repetition models are more reliable, given the amount of data available for training. The Concept Repetition model (which is composed of the Concept Repetition, Certainty and Correctness features) was especially useful, as it led to a policy that outperformed the Baseline 1 model, even when we take into account the uncertainty in the model estimates caused by data sparsity. In contrast, for the Baseline 2, Percent Correctness, and Frustration models, the estimates for the expected cumulative reward are much less reliable, and no conclusion can be reliably drawn about the usefulness of these features. In addition, we showed that our confidence interval approach has applications in another MDP problem: model switching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "As an extension of this work, we are currently investigating in more detail what makes some MDP's reliable or unreliable for a certain data size (such as the case where Baseline 2 does not converge but a more complicated model does, such as Concept Repetition). Our initial findings indicate that, as more data becomes available the bounds tighten for most parameters in the transition matrix. However, for some of the parameters the bounds can remain wide, and that is enough to keep the confidence interval for the expected cumulative reward from converging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6" }, { "text": "Please note that to due to refinements in code, there is a slight difference between the ECR's reported in this work and the ECR's reported in the previous work, for the three features added to Baseline 2. These changes did not alter the rankings of these models, or the conclusions of the previous work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "By p we will denote the true model parameters; byp we will denote data-driven estimates for these parameters", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Jeff Schneider, Drew Bagnell, Pam Jordan, as well as the ITSPOKE and Pitt NLP groups, and the Dialog on Dialogs group for their help and comments. Finally, we would like to thank the four anonymous reviewers for their comments on the initial version of this paper. 
Support for this research was provided by NSF grants #0325054 and #0328431.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Inductive learning algorithms and representations for text categorization", "authors": [ { "first": "J", "middle": [], "last": "Dumais", "suffix": "" }, { "first": "D", "middle": [], "last": "Platt", "suffix": "" }, { "first": "M", "middle": [], "last": "Heckerman", "suffix": "" }, { "first": "", "middle": [], "last": "Sahami", "suffix": "" } ], "year": 1998, "venue": "Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dumais, J. Platt, D. Heckerman, and M. Sahami. 1998. Inductive learning algorithms and representations for text categorization. In Conference on Information and Knowledge Management.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Reinforcement learning of dialogue strategies using the user's last dialogue act", "authors": [ { "first": "M", "middle": [], "last": "Frampton", "suffix": "" }, { "first": "O", "middle": [], "last": "Lemon", "suffix": "" } ], "year": 2005, "venue": "IJCAI Wkshp. on K&R in Practical Dialogue Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Frampton and O. Lemon. 2005. Reinforcement learn- ing of dialogue strategies using the user's last dialogue act. In IJCAI Wkshp. on K&R in Practical Dialogue Systems.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Hybrid reinforcement/supervised learning for dialogue policies from communicator data", "authors": [ { "first": "J", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "O", "middle": [], "last": "Lemon", "suffix": "" }, { "first": "K", "middle": [], "last": "Georgila", "suffix": "" } ], "year": 2005, "venue": "IJCAI Wkshp. on K&R in Practical Dialogue Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Henderson, O. Lemon, and K. Georgila. 2005. Hybrid reinforcement/supervised learning for dialogue poli- cies from communicator data. In IJCAI Wkshp. on K&R in Practical Dialogue Systems.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Active learning in partially observable markov decision processes", "authors": [ { "first": "R", "middle": [], "last": "Jaulmes", "suffix": "" }, { "first": "J", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "D", "middle": [], "last": "Precup", "suffix": "" } ], "year": 2005, "venue": "European Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Jaulmes, J. Pineau, and D. Precup. 2005. Active learn- ing in partially observable markov decision processes. In European Conference on Machine Learning.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A stochastic model of computer-human interaction for learning dialogues", "authors": [ { "first": "E", "middle": [], "last": "Levin", "suffix": "" }, { "first": "R", "middle": [], "last": "Pieraccini", "suffix": "" } ], "year": 1997, "venue": "Proc. of EUROSPEECH '97", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Levin and R. Pieraccini. 1997. A stochastic model of computer-human interaction for learning dialogues. In Proc. 
of EUROSPEECH '97.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The markov assumption in spoken dialogue management", "authors": [ { "first": "T", "middle": [], "last": "Paek", "suffix": "" }, { "first": "D", "middle": [], "last": "Chickering", "suffix": "" } ], "year": 2005, "venue": "6th SIGDial Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Paek and D. Chickering. 2005. The markov assump- tion in spoken dialogue management. In 6th SIGDial Workshop on Discourse and Dialogue.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Construction of confidence intervals for neural networks based on least squares estimation", "authors": [ { "first": "I", "middle": [], "last": "Rivals", "suffix": "" }, { "first": "L", "middle": [], "last": "Personnaz", "suffix": "" } ], "year": 2002, "venue": "Neural Networks", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Rivals and L. Personnaz. 2002. Construction of con- fidence intervals for neural networks based on least squares estimation. In Neural Networks.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The boosting approach to machine learning: An overview", "authors": [ { "first": "R", "middle": [], "last": "Schapire", "suffix": "" } ], "year": 2002, "venue": "MSRI Workshop on Nonlinear Estimation and Classification", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Schapire. 2002. The boosting approach to machine learning: An overview. In MSRI Workshop on Nonlin- ear Estimation and Classification.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Reinforcement learning for spoken dialogue systems", "authors": [ { "first": "S", "middle": [], "last": "Singh", "suffix": "" }, { "first": "M", "middle": [], "last": "Kearns", "suffix": "" }, { "first": "D", "middle": [], "last": "Litman", "suffix": "" }, { "first": "M", "middle": [], "last": "Walker", "suffix": "" } ], "year": 1999, "venue": "Proc. NIPS '99", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Singh, M. Kearns, D. Litman, and M. Walker. 1999. Reinforcement learning for spoken dialogue systems. In Proc. NIPS '99.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Reinforcement Learning", "authors": [ { "first": "R", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "A", "middle": [], "last": "Barto", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Sutton and A. Barto. 1998. Reinforcement Learning. The MIT Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Comparing the utility of state features in spoken dialogue using reinforcement learning", "authors": [ { "first": "J", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "D", "middle": [], "last": "Litman", "suffix": "" } ], "year": 2006, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Tetreault and D. Litman. 2006. Comparing the utility of state features in spoken dialogue using reinforce- ment learning. 
In NAACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email", "authors": [ { "first": "M", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Walker. 2000. An application of reinforcement learn- ing to dialogue strategy selection in a spoken dialogue system for email. JAIR, 12.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Confidence Interval Plots case, we want to know at what point Concept Repetition becomes more reliable than Baseline 1." }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Baseline 1 and Concept Repetition Bounds" }, "TABREF1": { "html": null, "num": null, "text": "State Features in Tutoring Corpus", "type_str": "table", "content": "" }, "TABREF3": { "html": null, "num": null, "text": "", "type_str": "table", "content": "
" }, "TABREF4": { "html": null, "num": null, "text": "Additionally, let's say the same model was in state s 2 and took action a 2 1000 times. Following that action, it transitioned 300 times to state s 1 , 200 times to state s 2 , and 500 times to state s 3 .", "type_str": "table", "content": "
p(s i |s 1 , a 1 ) = {0.3; 0.2; 0.5} = {3/10; 2/10; 5/10} (2)
p(s i |s 2 , a 2 ) = {0.3; 0.2; 0.5} = {300/1000; 200/1000; 500/1000} (3)
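To make the six-step procedure of Section 3.1 concrete, the sketch below ties it to the two count vectors in equations (2) and (3). It is an illustrative Python sketch assuming numpy: the toy reward table, the discount factor, and the averaging of the value function over states are simplifying assumptions made here for brevity, not details of the tutoring MDP of Section 2.1; only the Dirichlet sampling over observed counts, the evaluation of the fixed optimal policy in each sampled model, and the 2.5/97.5 percentile bounds follow the algorithm in the paper.

import numpy as np

rng = np.random.default_rng(0)

def optimal_policy(T, rewards, gamma=0.95, iters=500):
    # Step 3: dynamic programming (value iteration) on the ML model.
    V = np.zeros(rewards.shape[1])
    for _ in range(iters):
        Q = rewards + gamma * np.einsum('kji,i->kj', T, V)  # T[k, j, i] = p(s_i | s_j, a_k)
        V = Q.max(axis=0)
    return Q.argmax(axis=0)

def policy_value(T, rewards, policy, gamma=0.95, iters=500):
    # Step 5: expected cumulative reward of the fixed policy under transition model T.
    n = rewards.shape[1]
    V = np.zeros(n)
    for _ in range(iters):
        V = np.array([rewards[policy[j], j] + gamma * T[policy[j], j] @ V for j in range(n)])
    return V.mean()  # crude stand-in for the ECR over the state-space

def ecr_confidence_interval(counts, rewards, m=1000):
    # Step 1: counts[k, j, i] = c(s_i, s_j, a_k); all counts here are positive,
    # zero counts would need smoothing before Dirichlet sampling.
    T_ml = counts / counts.sum(axis=2, keepdims=True)         # step 2: ML transition matrix
    pi_star = optimal_policy(T_ml, rewards)                   # step 3
    values = []
    for _ in range(m):                                        # step 4: sample m models
        T = np.array([[rng.dirichlet(row) for row in action] for action in counts])
        values.append(policy_value(T, rewards, pi_star))      # step 5
    lo, hi = np.percentile(values, [2.5, 97.5])               # step 6: 95% interval
    return policy_value(T_ml, rewards, pi_star), lo, hi

# Toy example: 2 actions, 3 states; the first rows reuse the counts from eqs. (2) and (3).
counts = np.array([[[3, 2, 5], [1, 1, 8], [4, 4, 2]],
                   [[300, 200, 500], [2, 6, 2], [1, 1, 1]]])
rewards = np.array([[0.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])
print(ecr_confidence_interval(counts, rewards, m=200))

With sparse counts the interval around the estimated value is wide, and it tightens as the counts grow, which mirrors the behaviour of the different feature sets discussed in Section 4.1.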
" } } } }