{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:27.416828Z"
},
"title": "Turn-Level User Satisfaction Estimation in E-commerce Customer Service",
"authors": [
{
"first": "Runze",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Ryuichi",
"middle": [],
"last": "Takanobu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Fenglin",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Alibaba Group",
"location": {
"settlement": "Hangzhou",
"country": "China"
}
},
"email": "fenglin.lfl@alibaba-inc.com"
},
{
"first": "Ji",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Alibaba Group",
"location": {
"settlement": "Hangzhou",
"country": "China"
}
},
"email": ""
},
{
"first": "Haiqing",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Alibaba Group",
"location": {
"settlement": "Hangzhou",
"country": "China"
}
},
"email": "haiqing.chenhq@alibaba-inc.com"
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "aihuang@tsinghua.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "User satisfaction estimation in dialogue-based customer service is critical not only for helping developers find system defects, but also for making timely human intervention possible for dissatisfied customers. In this paper, we investigate the problem of user satisfaction estimation in E-commerce customer service. In order to apply the estimator to online services for timely human intervention, we need to estimate the satisfaction score at each turn. However, in the actual scenario we can only collect satisfaction labels for whole dialogue sessions via user feedback. To this end, we formalize turn-level satisfaction estimation as a reinforcement learning problem, in which the model can be optimized with only session-level satisfaction labels. We conduct experiments on a dataset collected from a commercial customer service system, and compare our model with supervised learning models. Extensive experiments show that the proposed method outperforms all the baseline models.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "User satisfaction estimation in dialogue-based customer service is critical not only for helping developers find system defects, but also for making timely human intervention possible for dissatisfied customers. In this paper, we investigate the problem of user satisfaction estimation in E-commerce customer service. In order to apply the estimator to online services for timely human intervention, we need to estimate the satisfaction score at each turn. However, in the actual scenario we can only collect satisfaction labels for whole dialogue sessions via user feedback. To this end, we formalize turn-level satisfaction estimation as a reinforcement learning problem, in which the model can be optimized with only session-level satisfaction labels. We conduct experiments on a dataset collected from a commercial customer service system, and compare our model with supervised learning models. Extensive experiments show that the proposed method outperforms all the baseline models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Task-oriented dialogue systems have been widely studied recently (Gao et al., 2019; , and many have been widely deployed to real-world applications, such as intelligent assistants and customer service in industry. However, due to the limitation of model capability, the system may fail to understand the intent of users or complete the task, which makes it common for users to become dissatisfied with the system (Kiseleva et al., 2016b; Lopatovska et al., 2019) .",
"cite_spans": [
{
"start": 65,
"end": 83,
"text": "(Gao et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 412,
"end": 436,
"text": "(Kiseleva et al., 2016b;",
"ref_id": "BIBREF12"
},
{
"start": 437,
"end": 461,
"text": "Lopatovska et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on the problem of user satisfaction estimation (Chowdhury et al., 2016; Kiseleva et al., 2016a) in E-commerce customer service, where users may ask about E-commerce transactions, claim a refund, or make a complaint to the customer service. An actual E-commerce customer service system may serve thousands of users simultaneously, many of whom may feel more or less dissatisfied. It is imperative to offer manual service to those users who are exhibiting signs of dissatisfaction. Nevertheless, manual service resources are usually limited. Therefore, estimating user satisfaction can help us assign manual service priority to users by sorting the ongoing dialogues by their satisfaction scores.",
"cite_spans": [
{
"start": 71,
"end": 95,
"text": "(Chowdhury et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 96,
"end": 119,
"text": "Kiseleva et al., 2016a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ideally, the satisfaction score estimation and sorting process should proceed in a timely, turn-level manner. Take Figure 1 as an example. In the first two turns 1 , the system responses are consistent with the user utterances. Therefore, the satisfaction score until the second turn should be high, and the user should not be allocated human service. But in the third turn, the system seems to ask a weird question instead of responding to the special situation the user encounters. Therefore, the satisfaction score until the third turn should be lower than that until the second turn. And after the fourth turn, the satisfaction score should get even lower since the system still responds improperly. Whether the user will be offered manual service in the third and fourth turns is determined by the rank of the satisfaction score among all the ongoing dialogues.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 121,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, in the actual scenario we can only collect satisfaction labels for whole dialogue sessions through user feedback (Park et al., 2020) , because asking users to provide turn-level feedback would lead to poor user experience. Consequently, most existing works only tackle the session-level satisfaction prediction problem: they can only predict the satisfaction label after the whole session finishes, lacking the ability to adjust the satisfaction score as the dialogue proceeds.",
"cite_spans": [
{
"start": 126,
"end": 145,
"text": "(Park et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address this problem, we formalize turn-level user satisfaction estimation as a reinforcement learning problem. With carefully designed actions and reward function, we can optimize the turn-level satisfaction estimator with only session-level satisfaction labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarize, we utilize reinforcement learning to achieve turn-level satisfaction estimation in E-commerce customer service when only session-level labels are available. Extensive experiments verify the effectiveness of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "User satisfaction estimation for dialogue systems has been an important research topic over the past decades. Most of the existing work focused on session-level user satisfaction estimation (Jiang et al., 2015; Hashemi et al., 2018; Park et al., 2020) . Walker et al. (1997) first proposed the PARADISE framework, which can estimate user satisfaction in spoken dialogue systems through a task success measure and dialogue-based cost measures. Yang et al. (2010) extended the PARADISE framework with an item-based collaborative filtering model. Some works on user satisfaction estimation focused on extracting useful features from user-system interaction (Kiseleva et al., 2016a; Sandbank et al., 2018) . Others modeled a dialogue as a sequence of dialogue actions (Jiang et al., 2015 ) or utterances (Hashemi et al., 2018; Choi et al., 2019) . However, these methods can predict user satisfaction only after the dialogue is completed, and thus cannot be adopted in an E-commerce customer service scenario where timely satisfaction estimation is preferred.",
"cite_spans": [
{
"start": 194,
"end": 214,
"text": "(Jiang et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 215,
"end": 236,
"text": "Hashemi et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 237,
"end": 255,
"text": "Park et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 258,
"end": 278,
"text": "Walker et al. (1997)",
"ref_id": "BIBREF17"
},
{
"start": 448,
"end": 466,
"text": "Yang et al. (2010)",
"ref_id": "BIBREF19"
},
{
"start": 656,
"end": 680,
"text": "(Kiseleva et al., 2016a;",
"ref_id": "BIBREF11"
},
{
"start": 681,
"end": 703,
"text": "Sandbank et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 766,
"end": 785,
"text": "(Jiang et al., 2015",
"ref_id": "BIBREF9"
},
{
"start": 786,
"end": 823,
"text": ") or utterances (Hashemi et al., 2018",
"ref_id": null
},
{
"start": 824,
"end": 842,
"text": "Choi et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While some works also addressed turn-level online satisfaction estimation, they needed turn-level human annotations (Ultes et al., 2017; Bodigutla et al., 2020) . These methods are not scalable in terms of annotation costs due to the large volumes of user data in E-commerce. Choi et al. (2019) used elaborate rules to generate turn-level satisfaction labels and trained the model in a supervised manner, but rules do not generalize well to the rapid growth of new data in a commercial system. Recently, Kachuee et al. (2020) suggested a self-supervised contrastive learning approach that uses unlabeled data and transfers to user satisfaction prediction with labeled data, but the required amount of labeled data is still very large.",
"cite_spans": [
{
"start": 119,
"end": 139,
"text": "(Ultes et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 140,
"end": 163,
"text": "Bodigutla et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 279,
"end": 297,
"text": "Choi et al. (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In our work, we propose to leverage reinforcement learning to achieve turn-level user satisfaction estimation. Only requiring session-level labels, our model is more suitable for industrial E-commerce customer service than existing methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We formally define the task in our work as follows: the tth turn of a dialogue, denoted by T t , consists of user request T u t and system response T s t . Each dialogue d contains a few turns, namely d = (T 1 , T 2 , ..., T T ), and we estimate the satisfaction score sc t of a user at each turn T t (t = 1, 2, ..., T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Satisfaction Estimation",
"sec_num": "3"
},
{
"text": "We now describe the proposed method in detail, which consists of three components: dialogue encoder, satisfaction score estimator, and reinforcement learning module. Figure 2 shows the overview of the proposed method. ",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 174,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "User Satisfaction Estimation",
"sec_num": "3"
},
{
"text": "Following (Choi et al., 2019) , we extract features from each turn and model a dialogue as a sequence of features, such as turn index and input channel 2 . Suppose there are m features and we denote the one-hot vector for the jth feature in turn T t as f j t . Then the feature for the tth turn is",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "(Choi et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Encoder",
"sec_num": "3.1"
},
{
"text": "f t = [f 1 t ; f 2 t ; ...; f m t ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Encoder",
"sec_num": "3.1"
},
{
"text": ". For better understanding of natural languages, we use BERT (Devlin et al., 2019) to encode the pair of user and system utterances at each turn, and apply it as a part of the input features f t .",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Encoder",
"sec_num": "3.1"
},
{
"text": "Then, we use the gated recurrent units (GRU) (Chung et al., 2014) to get the hidden state h t of the dialogue history up to the tth turn:",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Encoder",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = GRU (h t\u22121 , f t )",
"eq_num": "(1)"
}
],
"section": "Dialogue Encoder",
"sec_num": "3.1"
},
{
"text": "For satisfaction score estimation, our insight is that the degree of a user's dissatisfaction will accumulate if he/she encounters successive improper system responses (where the satisfaction score is negative and decreases over time), or can be relieved by a satisfactory reply (where the satisfaction score increases). Therefore, it is natural to predict the increment of the user satisfaction score, not only because it is in line with the intuition that users who experience more dissatisfactory turns are more likely to give up interacting with the system, but also because the predicted increment of the user satisfaction score can be regarded as the action in reinforcement learning (see Section 3.3 for details). Formally, having encoded the dialogue, we first predict the increment of the user satisfaction score \u2206sc t with a multilayer perceptron (MLP):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Satisfaction Score Estimator",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2206sc t = M LP (h t )",
"eq_num": "(2)"
}
],
"section": "Satisfaction Score Estimator",
"sec_num": "3.2"
},
{
"text": "Then, we sum up the increments of user satisfaction score to get the user satisfaction score up to the tth turn:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Satisfaction Score Estimator",
"sec_num": "3.2"
},
{
"text": "sc 1:t = sc 1:t\u22121 + \u2206sc t = \u2211_{\u03c4=1}^{t} \u2206sc \u03c4 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Satisfaction Score Estimator",
"sec_num": "3.2"
},
{
"text": "To optimize the satisfaction score estimator, we sample a pair of a satisfying dialogue (where the user is satisfied with the system at the session level) and a dissatisfying dialogue, and compare the two predicted satisfaction scores. Our key insight is that although it is hard to directly assign each turn an absolute satisfaction value, the predicted satisfaction score of the satisfying dialogue must be higher than that of the dissatisfying dialogue. We model the satisfaction score estimator as an agent assigning an increment of satisfaction score to each turn given the dialogue context, and the aforementioned fact can be utilized to design the reward signal in the reinforcement learning setting. Formally, the training set D is split into satisfying dialogues S D and dissatisfying dialogues S D' . In each episode of reinforcement learning, we randomly sample a satisfying dialogue d \u2208 S D with T turns and a dissatisfying dialogue d' \u2208 S D' with T' turns. Then the satisfaction score estimator is regarded as the agent, and predicts the increment of satisfaction score of each turn for d and d' successively. Thus, the length of an episode is T + T'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Module",
"sec_num": "3.3"
},
{
"text": "For the first turn of the satisfying dialogue (i.e., the 1st time step), the state is initialized with the features of the first turn (of the satisfying dialogue). The remaining states of the satisfying dialogue (i.e., the 2nd \u223c T th time steps) are updated by the features of the current turn and GRU hidden states encoding the features of history turns (of the satisfying dialogue). Similarly, for the first turn of the dissatisfying dialogue (i.e., the (T + 1)th time step), the state is reinitialized with the features of the first turn (of the dissatisfying dialogue). The remaining states of the dissatisfying dialogue (i.e., the (T + 2)th \u223c (T + T')th time steps) are also updated by the features of the current turn and GRU hidden states encoding the features of history turns (of the dissatisfying dialogue). Formally, the state is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Module",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s t = f t (t = 1, T + 1); s t = [h t\u22121 ; f t ] (t \u2260 1, T + 1)",
"eq_num": "(4)"
}
],
"section": "Reinforcement Learning Module",
"sec_num": "3.3"
},
{
"text": "The action a t = \u2206sc t is sampled from the policy \u03c0(a t |s t ) \u223c N (M LP (GRU (s t )), \u03c3 2 ), where \u03c3 is a hyper-parameter. The rewards r t for each time step t are all 0 except at the T th and (T + T')th steps. The rewards for these two steps are 1 if the agent predicts sc 1:T > sc T +1:T +T' , and -1 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Module",
"sec_num": "3.3"
},
{
"text": "Let the expectation of return be J(\u03c0 \u03b8 ) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Module",
"sec_num": "3.3"
},
{
"text": "E\u03c0 \u03b8 [ \u2211 T t=1 \u03b3 t\u22121 r t ] + E\u03c0 \u03b8 [ \u2211 T+T' t=T+1 \u03b3 t\u2212T\u22121 r t ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Module",
"sec_num": "3.3"
},
{
"text": ", where the policy is parameterized by \u03b8, and \u03b3 denotes the discount rate. Following the REIN-FORCE (Williams, 1992) algorithm, the gradient of the expectation of return can be calculated as follows:",
"cite_spans": [
{
"start": 100,
"end": 116,
"text": "(Williams, 1992)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Module",
"sec_num": "3.3"
},
{
"text": "\u2207 \u03b8 J(\u03c0 \u03b8 ) = E \u03c0 \u03b8 [( \u2211 T t=1 \u03b3 t\u22121 r t ) \u2211 T t=1 \u2207 \u03b8 log \u03c0 \u03b8 (a t |s t )] + E \u03c0 \u03b8 [( \u2211 T+T' t=T+1 \u03b3 t\u2212T\u22121 r t ) \u2211 T+T' t=T+1 \u2207 \u03b8 log \u03c0 \u03b8 (a t |s t )] (5) 4 Experimental Setting 4.1 Dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Module",
"sec_num": "3.3"
},
{
"text": "The dataset in this experiment is sampled from a commercial customer service system, where users communicate with the intelligent assistant about the E-commerce transactions, such as claiming a refund and requesting a receipt. The users are allowed to request manual service during the dialogue if they feel dissatisfied with the automatic system. The dataset contains 1294 dialogue sessions in total, 840 and 454 of which are labeled as satisfying and dissatisfying, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We aim at deploying our satisfaction estimator to online services, where thousands of dialogues are handled simultaneously. As the manual service resources are limited, we need to sort the ongoing dialogues by the satisfaction scores estimated by our model, and allocate manual service resource to the least satisfied users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
{
"text": "To evaluate the model in this scenario, we use the Area Under the Receiver Operating Characteristic Curve (AUC) (Fawcett, 2006) as the evaluation metric. In our scenario, AUC equals the probability that the satisfaction score of a randomly sampled satisfying dialogue is higher than the score of a randomly sampled dissatisfying dialogue.",
"cite_spans": [
{
"start": 112,
"end": 127,
"text": "(Fawcett, 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
{
"text": "We compare our model with the following baselines: (1) DeepFM (Guo et al., 2017) which combines the factorization machine and deep neural network. (2) ConvSAT (Choi et al., 2019) which uses bidirectional LSTMs to encode the context history for each turn, and also utilizes the behaviour signals.",
"cite_spans": [
{
"start": 62,
"end": 80,
"text": "(Guo et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 159,
"end": 178,
"text": "(Choi et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.3"
},
{
"text": "We train the baseline models using session-level labels with supervised learning, then treat the sub-dialogue (i.e., the first n turns of dialogue history) as a whole dialogue session to estimate turn-level user satisfaction during evaluation. We also add an augmented variant of supervised learning: we augment the training set with turn-level labels by directly copying the session-level labels as the training signals of the sub-dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.3"
},
{
"text": "To investigate how well the model can estimate user satisfaction in a timely manner, we first compare the AUC of each model with different numbers of remaining turns n, where we predict the satisfaction score n turns before the end of each dialogue (i.e., we predict sc 1:T \u2212n for a dialogue with T turns). In this way, we can test whether our model is capable of estimating the user's satisfaction tendency before a dialogue finishes or fails. Figure 3 shows the AUC of satisfaction estimation with respect to remaining turns. Our proposed method outperforms all other methods for all numbers of remaining turns, and its improvement over the other methods increases as the number of remaining turns grows. The reason is that the distribution of incomplete dialogues differs from that of complete ones. Since the supervised learning models only learn to score complete dialogues during training, they cannot properly score incomplete ones at test time. In contrast, since the reinforcement learning model learns to make turn-level estimation during training, its estimation performance is much better than that of the supervised learning models when the number of remaining turns is large. Augmenting the training data with sub-dialogues benefits the supervised learning process, but the performance is still worse than that of reinforcement learning. To verify the effectiveness of each feature in dialogue encoding, we conduct an ablation study. We remove one feature in each experiment, and the model makes satisfaction estimation with access to the complete dialogues in the test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 444,
"end": 452,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Turn-Level Satisfaction Estimation",
"sec_num": "5.1"
},
{
"text": "The results of the ablation study are shown in Table 1 . The model with all the features has the best performance, indicating that every feature is useful for making satisfaction estimation. Table 1 : AUC of satisfaction score.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 51,
"text": "Table 1",
"ref_id": null
},
{
"start": 189,
"end": 196,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Turn-Level Satisfaction Estimation",
"sec_num": "5.1"
},
{
"text": "To understand the behaviour of our proposed model, we plot the distribution of the satisfaction score predicted by our model up until each specific turn. As shown in Figure 4 , in the first few turns, the absolute value of the satisfaction score is usually small, as users usually express their demands at the beginning with no satisfaction tendency. As the dialogue continues, the dialogues exhibit more clues about satisfaction or dissatisfaction. Therefore, the predicted satisfaction scores go up (or down) in the satisfying (or dissatisfying) dialogues, as depicted by the orange (or blue) figures. This verifies our method's ability to distinguish the dissatisfying dialogues from the satisfying ones.",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Model Behaviour Analysis",
"sec_num": "5.2"
},
{
"text": "We present a reinforcement learning method to estimate turn-level satisfaction scores with only session-level labels. We verify that our model can effectively estimate satisfaction scores of customer service dialogues. In future work, we will explore algorithms for retraining the customer service system with the help of the user satisfaction estimator. Table 3 shows a dialogue case where the user is dissatisfied. At the first turn, the user selects the order. Since it is common for users to select an order in the first turn, the absolute value of the estimated satisfaction increment is small. This suggests that our model finds no clear satisfaction or dissatisfaction tendency of the user. In the second turn, the user raises a question about the quick refund. Since this is a common question and the system responds with relevant knowledge, our model predicts a positive satisfaction increment (i.e., the user is likely to be more satisfied). However, in the third turn, the user asks for manual service, which usually indicates that the user is dissatisfied with the content of the last response. Therefore, our model predicts a negative satisfaction increment with large absolute value, showing that the user might become quite dissatisfied with the automatic system. At the fourth turn, the user continues asking for manual service, and therefore our model continues predicting a negative satisfaction increment with large absolute value. Table 4 illustrates a dialogue case where the user is satisfied. At the first turn, the user also selects the order, and therefore the absolute value of the predicted satisfaction increment is small. In the following turns, the user consecutively clicks the knowledge recommendation links and shortcut buttons in the user interface. This is a good phenomenon because the user can conveniently get the desired information through simple clicks, without the need for typing questions through the keyboard. Hence, our model keeps predicting a positive satisfaction increment, showing the belief that the user is satisfied.",
"cite_spans": [],
"ref_spans": [
{
"start": 355,
"end": 362,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1444,
"end": 1451,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The above cases illustrate that our proposed model can make reasonable turn-level satisfaction estimations in various situations, verifying the effectiveness and interpretability of our reinforcement learning method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In this work, a turn consists of a pair of a user utterance and a system utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Appendix A for details",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096). This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The dataset is split into a training set (70%), validation set (15%) and test set (15%). In all experiments, the dimension of the GRU output vector is 32. Each MLP is a two-layer neural network, whose hidden size is 32 and whose activation function is ReLU. We use Adam as the optimizer and the learning rate is 0.0001. The batch size is 4, and the discount rate for reinforcement learning is 1. The extracted features for each dialogue turn are listed in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 447,
"end": 454,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Implementation details",
"sec_num": null
},
{
"text": "The index of the current turn in a dialogue session. Each turn consists of a pair of user and system utterances. The dimension is 10 (1, 2, ..., 9, \u226510). Frequency How many times the (exactly) same question has been asked by other users in one month on the system. We manually divide the range of frequency into 8 disjoint intervals, and the dimension is therefore 8. Input channel The channel through which users input each turn (e.g., keyboard and shortcut button). The dimension is 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Explanation Turn index",
"sec_num": null
},
{
"text": "The detected user intent for each turn (e.g., making a complaint and claiming a refund). The dimension is 10. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User intent",
"sec_num": null
},
{
"text": "To better understand the turn-level satisfaction estimation behaviour of our model, we conduct a case study. We sample two dialogue cases from the test set and display their contents as well as the satisfaction increment \u2206sc t estimated by our model for each turn. It is worth noting that in this E-commerce customer service, the system might respond in rich text format, including tables, images and links. In such cases, the system response will be represented by the title of the knowledge (e.g., Knowledge: Why I'm not eligible for the quick refund?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Case Study",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Joint turn and dialogue level user satisfaction estimation on multi-domain conversations",
"authors": [
{
"first": "Praveen",
"middle": [],
"last": "Kumar Bodigutla",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Tiwari",
"suffix": ""
},
{
"first": "Spyros",
"middle": [],
"last": "Matsoukas",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "3897--3909",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Praveen Kumar Bodigutla, Aditya Tiwari, Spyros Matsoukas, Josep Valls-Vargas, and Lazaros Poly- menakos. 2020. Joint turn and dialogue level user satisfaction estimation on mulit-domain conversa- tions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 3897-3909.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Offline and online satisfaction prediction in open-domain conversational systems",
"authors": [
{
"first": "Jason",
"middle": [
"Ingyu"
],
"last": "Choi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Ahmadvand",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1281--1290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Ingyu Choi, Ali Ahmadvand, and Eugene Agichtein. 2019. Offline and online satisfaction pre- diction in open-domain conversational systems. In Proceedings of the 28th ACM International Confer- ence on Information and Knowledge Management, pages 1281-1290.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Predicting user satisfaction from turn-taking in spoken conversations",
"authors": [
{
"first": "",
"middle": [],
"last": "Shammur Absar Chowdhury",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Evgeny",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Stepanov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2016,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "2910--2914",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shammur Absar Chowdhury, Evgeny A Stepanov, and Giuseppe Riccardi. 2016. Predicting user satisfac- tion from turn-taking in spoken conversations. In Interspeech, pages 2910-2914.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS 2014 Workshop on Deep Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. In NIPS 2014 Workshop on Deep Learning.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT 2019: Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT 2019: Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171-4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An introduction to roc analysis",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Fawcett",
"suffix": ""
}
],
"year": 2006,
"venue": "Pattern Recognition Letters",
"volume": "27",
"issue": "8",
"pages": "861--874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Fawcett. 2006. An introduction to roc analysis. Pattern Recognition Letters, 27(8):861-874.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural approaches to conversational ai. Foundations and Trends\u00ae in Information Retrieval",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "13",
"issue": "",
"pages": "127--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao, Michel Galley, and Lihong Li. 2019. Neural approaches to conversational ai. Founda- tions and Trends\u00ae in Information Retrieval, 13(2- 3):127-298.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deepfm: a factorizationmachine based neural network for ctr prediction",
"authors": [
{
"first": "Huifeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Ruiming",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yunming",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Zhenguo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiuqiang",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1725--1731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. Deepfm: a factorization- machine based neural network for ctr prediction. In Proceedings of the 26th International Joint Confer- ence on Artificial Intelligence, pages 1725-1731.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Measuring user satisfaction on smart speaker intelligent assistants using intent sensitive query embeddings",
"authors": [
{
"first": "",
"middle": [],
"last": "Seyyed Hadi",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Hashemi",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"El"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Imed",
"middle": [],
"last": "Kholy",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"A"
],
"last": "Zitouni",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Crook",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1183--1192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seyyed Hadi Hashemi, Kyle Williams, Ahmed El Kholy, Imed Zitouni, and Paul A. Crook. 2018. Measuring user satisfaction on smart speaker intel- ligent assistants using intent sensitive query embed- dings. In Proceedings of the 27th ACM Interna- tional Conference on Information and Knowledge Management, pages 1183-1192.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic online evaluation of intelligent assistants",
"authors": [
{
"first": "Jiepu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Hassan Awadallah",
"suffix": ""
},
{
"first": "Rosie",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Umut",
"middle": [],
"last": "Ozertem",
"suffix": ""
},
{
"first": "Imed",
"middle": [],
"last": "Zitouni",
"suffix": ""
},
{
"first": "Ranjitha Gurunath",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Omar",
"middle": [
"Zia"
],
"last": "Khan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "506--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiepu Jiang, Ahmed Hassan Awadallah, Rosie Jones, Umut Ozertem, Imed Zitouni, Ranjitha Gurunath Kulkarni, and Omar Zia Khan. 2015. Automatic on- line evaluation of intelligent assistants. In Proceed- ings of the 24th International Conference on World Wide Web, pages 506-516.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Self-supervised contrastive learning for efficient user satisfaction prediction in conversational agents",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Kachuee",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Young-Bum",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11230"
]
},
"num": null,
"urls": [],
"raw_text": "Mohammad Kachuee, Hao Yuan, Young-Bum Kim, and Sungjin Lee. 2020. Self-supervised con- trastive learning for efficient user satisfaction pre- diction in conversational agents. arXiv preprint arXiv:2010.11230.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Predicting user satisfaction with intelligent assistants",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Kiseleva",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"Hassan"
],
"last": "Awadallah",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"C"
],
"last": "Crook",
"suffix": ""
},
{
"first": "Imed",
"middle": [],
"last": "Zitouni",
"suffix": ""
},
{
"first": "Tasos",
"middle": [],
"last": "Anastasakos",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "45--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Kiseleva, Kyle Williams, Ahmed Hassan Awadal- lah, Aidan C. Crook, Imed Zitouni, and Tasos Anas- tasakos. 2016a. Predicting user satisfaction with in- telligent assistants. In Proceedings of the 39th In- ternational ACM SIGIR conference on Research and Development in Information Retrieval, pages 45-54.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ahmed Hassan Awadallah, Aidan C. Crook, Imed Zitouni, and Tasos Anastasakos",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Kiseleva",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Jiepu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"Hassan"
],
"last": "Awadallah",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"C"
],
"last": "Crook",
"suffix": ""
},
{
"first": "Imed",
"middle": [],
"last": "Zitouni",
"suffix": ""
},
{
"first": "Tasos",
"middle": [],
"last": "Anastasakos",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 ACM on Conference on Human Information Interaction and Retrieval",
"volume": "",
"issue": "",
"pages": "121--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Kiseleva, Kyle Williams, Jiepu Jiang, Ahmed Has- san Awadallah, Aidan C. Crook, Imed Zitouni, and Tasos Anastasakos. 2016b. Understanding user sat- isfaction with intelligent assistants. In Proceedings of the 2016 ACM on Conference on Human Informa- tion Interaction and Retrieval, pages 121-130.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Talk to me: Exploring user interactions with the amazon alexa",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Lopatovska",
"suffix": ""
},
{
"first": "Katrina",
"middle": [],
"last": "Rink",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Kieran",
"middle": [],
"last": "Raines",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Cosenza",
"suffix": ""
},
{
"first": "Harriet",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Perachya",
"middle": [],
"last": "Sorsche",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hirsch",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Adrianna",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Librarianship and Information Science",
"volume": "51",
"issue": "4",
"pages": "984--997",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Lopatovska, Katrina Rink, Ian Knight, Kieran Raines, Kevin Cosenza, Harriet Williams, Perachya Sorsche, David Hirsch, Qi Li, and Adrianna Mar- tinez. 2019. Talk to me: Exploring user interactions with the amazon alexa. Journal of Librarianship and Information Science, 51(4):984-997.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Large-scale hybrid approach for predicting user satisfaction with conversational agents",
"authors": [
{
"first": "Dookun",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Dongmin",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yinglei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Matsoukas",
"middle": [],
"last": "Spyros",
"suffix": ""
},
{
"first": "Young-Bum",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ruhi",
"middle": [],
"last": "Sarikaya",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Quinn",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.07113"
]
},
"num": null,
"urls": [],
"raw_text": "Dookun Park, Hao Yuan, Dongmin Kim, Yinglei Zhang, Matsoukas Spyros, Young-Bum Kim, Ruhi Sarikaya, Edward Guo, Yuan Ling, Kevin Quinn, et al. 2020. Large-scale hybrid approach for pre- dicting user satisfaction with conversational agents. arXiv preprint arXiv:2006.07113.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Detecting egregious conversations between customers and virtual agents",
"authors": [
{
"first": "Tommy",
"middle": [],
"last": "Sandbank",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Shmueli-Scheuer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Konopnicki",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richards",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Piorkowski",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL HLT 2018: 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1802--1811",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommy Sandbank, Michal Shmueli-Scheuer, David Konopnicki, Jonathan Herzig, John Richards, and David Piorkowski. 2018. Detecting egregious con- versations between customers and virtual agents. In NAACL HLT 2018: 16th Annual Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, volume 1, pages 1802-1811.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Domain-independent user satisfaction reward estimation for dialogue policy learning",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "Pawel",
"middle": [],
"last": "Budzianowski",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Casanueva",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrksic",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"Maria"
],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 2017,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "1721--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Ultes, Pawel Budzianowski, Inigo Casanueva, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei- Hao Su, Tsung-Hsien Wen, Milica Gasic, and Steve J Young. 2017. Domain-independent user sat- isfaction reward estimation for dialogue policy learn- ing. In Interspeech, pages 1721-1725.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Paradise: A framework for evaluating spoken dialogue agents",
"authors": [
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "Diane",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "Candace",
"middle": [
"A"
],
"last": "Kamm",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Abella",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "271--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn A. Walker, Diane J. Litman, Candace A. Kamm, and Alicia Abella. 1997. Paradise: A frame- work for evaluating spoken dialogue agents. In Pro- ceedings of the 35th Annual Meeting of the Associa- tion for Computational Linguistics, pages 271-280.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1992,
"venue": "Machine Learning",
"volume": "8",
"issue": "",
"pages": "229--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J. Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Machine Learning, 8(3):229-256.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Collaborative filtering model for user satisfaction prediction in spoken dialog system evaluation",
"authors": [
{
"first": "Zhaojun",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Baichuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Irwin",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Gina",
"middle": [],
"last": "Levow",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Meng",
"suffix": ""
}
],
"year": 2010,
"venue": "2010 IEEE Spoken Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "472--477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaojun Yang, Baichuan Li, Yi Zhu, Irwin King, Gina Levow, and Helen Meng. 2010. Collaborative filter- ing model for user satisfaction prediction in spoken dialog system evaluation. In 2010 IEEE Spoken Lan- guage Technology Workshop, pages 472-477.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recent advances and challenges in task-oriented dialog systems",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ryuichi",
"middle": [],
"last": "Takanobu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2020,
"venue": "Science China Technological Sciences",
"volume": "63",
"issue": "10",
"pages": "2011--2027",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Zhang, Ryuichi Takanobu, Qi Zhu, Minlie Huang, and Xiaoyan Zhu. 2020. Recent advances and challenges in task-oriented dialog systems. Sci- ence China Technological Sciences, 63(10):2011- 2027.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A dialogue example in E-commerce customer service where the system cannot understand the user's intent, thereby making the user dissatisfied.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "The overview of the proposed method.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "AUC of satisfaction estimation with different remaining turns.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"text": "The distribution of satisfaction score estimated by our model up until each specific turn.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF2": {
"text": "A dialogue in which the user is dissatisfied.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>turn</td><td>user utterance</td><td>system response</td><td>user input channel</td><td>\u2206sc t</td></tr><tr><td>1</td><td colspan=\"2\">87654321 (order number) How can I help you with this or-</td><td>order selection</td><td>-0.176</td></tr><tr><td/><td/><td>der?</td><td/><td/></tr><tr><td>2</td><td>What can I do if the seller</td><td>Knowledge: What can I do if the</td><td>knowledge recom-</td><td>0.375</td></tr><tr><td/><td>won't refund me?</td><td>seller won't refund me?</td><td>mendation</td><td/></tr><tr><td>3</td><td>After applying for a re-</td><td>Knowledge: After applying for a</td><td>knowledge recom-</td><td>0.522</td></tr><tr><td/><td>fund, what if the seller</td><td>refund, what if the seller doesn't</td><td>mendation</td><td/></tr><tr><td/><td>doesn't react?</td><td>react?</td><td/><td/></tr><tr><td>4</td><td>The seller declined to re-</td><td>Knowledge: What can I do if the</td><td>shortcut</td><td>0.365</td></tr><tr><td/><td>fund me.</td><td>seller declines to refund me?</td><td/><td/></tr></table>"
},
"TABREF3": {
"text": "A dialogue in which the user is satisfied.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}