{
"paper_id": "N12-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:04:43.159764Z"
},
"title": "Intrinsic and Extrinsic Evaluation of an Automatic User Disengagement Detector for an Uncertainty-Adaptive Spoken Dialogue System",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Forbes-Riley",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": "",
"affiliation": {},
"email": "litman@pitt.edu"
},
{
"first": "Heather",
"middle": [],
"last": "Friedberg",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Joanna",
"middle": [],
"last": "Drummond",
"suffix": "",
"affiliation": {},
"email": "jdrummond@cs.toronto.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a model for detecting user disengagement during spoken dialogue interactions. Intrinsic evaluation of our model (i.e., with respect to a gold standard) yields results on par with prior work. However, since our goal is immediate implementation in a system that already detects and adapts to user uncertainty, we go further than prior work and present an extrinsic evaluation of our model (i.e., with respect to the real-world task). Correlation analyses show crucially that our automatic disengagement labels correlate with system performance in the same way as the gold standard (manual) labels, while regression analyses show that detecting user disengagement adds value over and above detecting only user uncertainty when modeling performance. Our results suggest that automatically detecting and adapting to user disengagement has the potential to significantly improve performance even in the presence of noise, when compared with only adapting to one affective state or ignoring affect entirely.",
"pdf_parse": {
"paper_id": "N12-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a model for detecting user disengagement during spoken dialogue interactions. Intrinsic evaluation of our model (i.e., with respect to a gold standard) yields results on par with prior work. However, since our goal is immediate implementation in a system that already detects and adapts to user uncertainty, we go further than prior work and present an extrinsic evaluation of our model (i.e., with respect to the real-world task). Correlation analyses show crucially that our automatic disengagement labels correlate with system performance in the same way as the gold standard (manual) labels, while regression analyses show that detecting user disengagement adds value over and above detecting only user uncertainty when modeling performance. Our results suggest that automatically detecting and adapting to user disengagement has the potential to significantly improve performance even in the presence of noise, when compared with only adapting to one affective state or ignoring affect entirely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Spoken dialogue systems that can detect and adapt to user affect 1 are fast becoming reality (Schuller et al., 2009b; Batliner et al., 2008; Prendinger and Ishizuka, 2005; Vidrascu and Devillers, 2005; Lee and Narayanan, 2005; Shafran et al., 2003) . The benefits are clear: affect-adaptive systems have been shown to increase task success (Forbes-Riley and Litman, 2011a; D'Mello et al., 2010; Wang et al., 2008) or improve other system performance metrics such as user satisfaction (Liu and Picard, 2005; Klein et al., 2002) . However, to date most affective systems researchers have focused either only on affect detection, or only on detecting and adapting to a single affective state. The next step is thus to develop and evaluate spoken dialogue systems that detect and respond to multiple affective states.",
"cite_spans": [
{
"start": 93,
"end": 117,
"text": "(Schuller et al., 2009b;",
"ref_id": "BIBREF40"
},
{
"start": 118,
"end": 140,
"text": "Batliner et al., 2008;",
"ref_id": "BIBREF4"
},
{
"start": 141,
"end": 171,
"text": "Prendinger and Ishizuka, 2005;",
"ref_id": "BIBREF37"
},
{
"start": 172,
"end": 201,
"text": "Vidrascu and Devillers, 2005;",
"ref_id": "BIBREF47"
},
{
"start": 202,
"end": 226,
"text": "Lee and Narayanan, 2005;",
"ref_id": "BIBREF27"
},
{
"start": 227,
"end": 248,
"text": "Shafran et al., 2003)",
"ref_id": "BIBREF42"
},
{
"start": 358,
"end": 372,
"text": "Litman, 2011a;",
"ref_id": "BIBREF17"
},
{
"start": 373,
"end": 394,
"text": "D'Mello et al., 2010;",
"ref_id": "BIBREF12"
},
{
"start": 395,
"end": 413,
"text": "Wang et al., 2008)",
"ref_id": "BIBREF50"
},
{
"start": 484,
"end": 506,
"text": "(Liu and Picard, 2005;",
"ref_id": "BIBREF30"
},
{
"start": 507,
"end": 526,
"text": "Klein et al., 2002)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We previously showed that detecting and responding to user uncertainty during spoken dialogue computer tutoring significantly improves task success (Forbes-Riley and Litman, 2011a) . We are now taking the next step: incorporating automatic detection and adaptation to user disengagement as well, with the goal of further improving task success. We targeted user uncertainty and disengagement because manual annotation showed them to be the two most common user affective states in our system and both are negatively correlated with task success Forbes-Riley and Litman, 2011b) . Thus, we hypothesize that providing appropriate responses to these states would reduce their frequency, consequently improving task success. Although we address these user states in the tutoring domain, spoken dialogue researchers across domains and applications have investigated the automatic detection of both user uncertainty (e.g. (Drummond and Litman, 2011; Pon-Barry and Shieber, 2011; Paek and Ju, 2008; Alwan et al., 2007) ) and user disengagement (e.g., Wang and Hirschberg, 2011; Schuller et al., 2009a) ), to improve system performance. The detection of user disengagement in particular has received substantial attention in recent years, due to growing awareness of its potential for negatively impacting commercial applications (Wang and Hirschberg, 2011; Schuller et al., 2009a) .",
"cite_spans": [
{
"start": 166,
"end": 180,
"text": "Litman, 2011a)",
"ref_id": "BIBREF17"
},
{
"start": 545,
"end": 576,
"text": "Forbes-Riley and Litman, 2011b)",
"ref_id": "BIBREF18"
},
{
"start": 915,
"end": 942,
"text": "(Drummond and Litman, 2011;",
"ref_id": "BIBREF13"
},
{
"start": 943,
"end": 971,
"text": "Pon-Barry and Shieber, 2011;",
"ref_id": "BIBREF34"
},
{
"start": 972,
"end": 990,
"text": "Paek and Ju, 2008;",
"ref_id": "BIBREF33"
},
{
"start": 991,
"end": 1010,
"text": "Alwan et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 1043,
"end": 1069,
"text": "Wang and Hirschberg, 2011;",
"ref_id": "BIBREF49"
},
{
"start": 1070,
"end": 1093,
"text": "Schuller et al., 2009a)",
"ref_id": "BIBREF39"
},
{
"start": 1321,
"end": 1348,
"text": "(Wang and Hirschberg, 2011;",
"ref_id": "BIBREF49"
},
{
"start": 1349,
"end": 1372,
"text": "Schuller et al., 2009a)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present a model for automatically detecting user disengagement during spoken dialogue interactions. Intrinsic evaluation of our model yields results on par with those of prior work. However, we argue that while intrinsic evaluations are necessary, they aren't sufficient when immediate implementation is the goal, because there is no a priori way to know when the model's performance is acceptable to use in a working system. This problem is particularly relevant to affect detection because it is such a difficult task, where no one achieves nearperfect results. We argue that for such tasks some extrinsic evaluation is also necessary, to show that the automatic labels are useful and/or are a reasonable substitute for a gold standard before undertaking a labor-intensive and time-consuming evaluation with real users. Here we use correlational analyses to show that our automatic disengagement labels are related to system performance in the same way as the gold standard (manual) labels. We further show through regression analyses that detecting user disengagement adds value over and above detecting only user uncertainty when modeling performance. These results provide strong evidence that enhancing a spoken dialogue system to detect and adapt to multiple affective states (specifically, user disengagement and uncertainty) has the potential to significantly improve performance even in the presence of noise due to automatic detection, when compared with only adapting to one affective state or ignoring affect entirely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our focus in this paper is on first using machine learning to develop a detector of user disengagement for spoken dialogue systems, and then evaluating its usefulness as fully as possible prior to its implementation and deployment with real users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Disengaged users are highly undesirable in human-computer interaction because they increase the potential for user dissatisfaction and task failure; thus over the past decade there has already been substantial prior work focused on detecting user disengagement and the closely related states of boredom, motivation and lack of interest (e.g., Wang and Hirschberg, 2011; Jeon et al., 2010; Schuller et al., 2009a; Bohus and Horvitz, 2009; Martalo et al., 2008; Porayska-Pomsta et al., 2008; Kapoor and Picard, 2005; Sidner and Lee, 2003; Forbes-Riley and Litman, 2011b) ).",
"cite_spans": [
{
"start": 343,
"end": 369,
"text": "Wang and Hirschberg, 2011;",
"ref_id": "BIBREF49"
},
{
"start": 370,
"end": 388,
"text": "Jeon et al., 2010;",
"ref_id": "BIBREF23"
},
{
"start": 389,
"end": 412,
"text": "Schuller et al., 2009a;",
"ref_id": "BIBREF39"
},
{
"start": 413,
"end": 437,
"text": "Bohus and Horvitz, 2009;",
"ref_id": "BIBREF7"
},
{
"start": 438,
"end": 459,
"text": "Martalo et al., 2008;",
"ref_id": "BIBREF31"
},
{
"start": 460,
"end": 489,
"text": "Porayska-Pomsta et al., 2008;",
"ref_id": "BIBREF36"
},
{
"start": 490,
"end": 514,
"text": "Kapoor and Picard, 2005;",
"ref_id": "BIBREF25"
},
{
"start": 515,
"end": 536,
"text": "Sidner and Lee, 2003;",
"ref_id": "BIBREF43"
},
{
"start": 537,
"end": 568,
"text": "Forbes-Riley and Litman, 2011b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Within this work, specific affect definitions vary slightly with the intention of being coherent within the application and domain and being relevant to the specific adaptation goal (Martalo et al., 2008) . However, affective systems researchers generally agree that disengaged users show little involvement in the interaction, and often display facial, gestural and linguistic signals such as gaze avoidance, finger tapping, humming, sarcasm, et cetera.",
"cite_spans": [
{
"start": 182,
"end": 204,
"text": "(Martalo et al., 2008)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The features used to detect disengagement also vary depending on system domain and application. For example, Sidner & Lee (2003) are interested in modeling more natural and collaborative human-robot interactions during basic conversations. They define an algorithm for the engagement process that involves appropriate eye gaze and turn-taking. Martalo et al. (2008) study how user engagement influences dialogue patterns during interactions with an embodied agent that gives advice about healthy dieting. They model engagement using manually coded dialogue acts based on the SWBDL-DAMSL scheme (Stolcke et al., 2000) . Bohus and Horvitz (2009) study systems that attract and engage users for dynamic, multi-party dialogues in open-world settings. They model user intentions to engage the system with cues from facial sensors and the dialogue. Within recent spoken dialogue research, acoustic-prosodic, lexical and contextual features have been found to be effective detectors of disengagement Wang and Hirschberg, 2011; Jeon et al., 2010) ; we will briefly compare our own results with these in Section 5.",
"cite_spans": [
{
"start": 109,
"end": 128,
"text": "Sidner & Lee (2003)",
"ref_id": "BIBREF43"
},
{
"start": 344,
"end": 365,
"text": "Martalo et al. (2008)",
"ref_id": "BIBREF31"
},
{
"start": 594,
"end": 616,
"text": "(Stolcke et al., 2000)",
"ref_id": "BIBREF44"
},
{
"start": 619,
"end": 643,
"text": "Bohus and Horvitz (2009)",
"ref_id": "BIBREF7"
},
{
"start": 993,
"end": 1019,
"text": "Wang and Hirschberg, 2011;",
"ref_id": "BIBREF49"
},
{
"start": 1020,
"end": 1038,
"text": "Jeon et al., 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While all of the above-mentioned research has presented intrinsic evaluations of their disengagement modeling efforts that indicate a reasonable degree of accuracy as compared to a gold standard (e.g., manual coding), only a few have yet demonstrated that the model's detected values are useful in practice and/or are a reasonable substitute for the gold standard with respect to some practical objective (e.g., a relationship to performance). In particular, two studies (Bohus and Horvitz, 2009; Schuller et al., 2009a) have gone directly from intrinsic evaluation of (dis)engagement models to performing user studies with the implemented model, thereby bypassing other less expensive and less labor-intensive means of extrinsic evaluation to quantify their model's usefulness-and potentially indicate its need to be further improved-before deployment with real users. Neither study reports statistically significant improvements in system performance as a result of detecting user (dis)engagement.",
"cite_spans": [
{
"start": 471,
"end": 496,
"text": "(Bohus and Horvitz, 2009;",
"ref_id": "BIBREF7"
},
{
"start": 497,
"end": 520,
"text": "Schuller et al., 2009a)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, while substantial spoken dialogue and affective systems research has shown that users display a range of affective states while interacting with a system (e.g. (Schuller et al., 2009b; Conati and Maclaren, 2009; Batliner et al., 2008; Devillers and Vidrascu, 2006; Lee and Narayanan, 2005; Shafran et al., 2003; Ang et al., 2002) ), to date only a few affective systems have been built that detect and adapt to multiple user affective states (e.g., (D'Mello et al., 2010; Aist et al., 2002; Tsukahara and Ward, 2001 )), and most of these have been deployed with crucial natural language processing components \"wizarded\" by a hidden human agent (e.g., who performs speech recognition or affect annotation on the user turns); moreover, none have yet shown significant improvements in system performance as a result of adapting to multiple user affective states.",
"cite_spans": [
{
"start": 169,
"end": 193,
"text": "(Schuller et al., 2009b;",
"ref_id": "BIBREF40"
},
{
"start": 194,
"end": 220,
"text": "Conati and Maclaren, 2009;",
"ref_id": "BIBREF8"
},
{
"start": 221,
"end": 243,
"text": "Batliner et al., 2008;",
"ref_id": "BIBREF4"
},
{
"start": 244,
"end": 273,
"text": "Devillers and Vidrascu, 2006;",
"ref_id": "BIBREF10"
},
{
"start": 274,
"end": 298,
"text": "Lee and Narayanan, 2005;",
"ref_id": "BIBREF27"
},
{
"start": 299,
"end": 320,
"text": "Shafran et al., 2003;",
"ref_id": "BIBREF42"
},
{
"start": 321,
"end": 338,
"text": "Ang et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 458,
"end": 480,
"text": "(D'Mello et al., 2010;",
"ref_id": "BIBREF12"
},
{
"start": 481,
"end": 499,
"text": "Aist et al., 2002;",
"ref_id": "BIBREF1"
},
{
"start": 500,
"end": 524,
"text": "Tsukahara and Ward, 2001",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We develop and evaluate our disengagement detector using a corpus of spoken dialogues from a 2008 controlled experiment evaluating our uncertaintyadaptive spoken dialogue tutoring system, IT-SPOKE (Intelligent Tutoring SPOKEn dialog system) (Forbes-Riley and Litman, 2011a). 2 ITSPOKE tutors 5 Newtonian physics problems (one per dialogue), using a Tutor Question -Student Answer -Tutor Response format. After each tutor question, the student speech is digitized from head-mounted microphone input and sent to the Sphinx2 recognizer, which yields an automatic transcript (Huang et al., 1993) . This answer's (in)correctness is then automatically classified based on this transcript, using the TuTalk semantic analyzer (Jordan et al., 2007) , and the answer's (un)certainty is automatically classified by inputting features of the speech signal, the automatic transcript, and the dialogue context into a logistic regression model. We will discuss these features further in Section 5. All natural language processing components were trained using prior ITSPOKE corpora. The appropriate tutor response is determined based on the answer's automatically labeled (in)correctness and (un)certainty and then sent to the Cepstral text-to-speech system 3 , whose audio output is played through the student headphones and is also displayed on a web-based interface.",
"cite_spans": [
{
"start": 275,
"end": 276,
"text": "2",
"ref_id": null
},
{
"start": 571,
"end": 591,
"text": "(Huang et al., 1993)",
"ref_id": "BIBREF22"
},
{
"start": 718,
"end": 739,
"text": "(Jordan et al., 2007)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ITSPOKE: Spoken Dialogue Tutor",
"sec_num": "3"
},
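The pipeline in the preceding paragraph lends itself to a compact sketch. Below is a minimal, hypothetical rendering of the turn-level decision flow in Python: recognize(), is_correct(), uncertainty_model(), and respond_to_turn() are invented stand-ins for Sphinx2, the TuTalk semantic analyzer, the logistic-regression (un)certainty classifier, and the tutor's response selection; the policy strings are illustrative, not ITSPOKE's actual responses.

```python
def recognize(audio):
    """Stand-in for the Sphinx2 recognizer: treat the 'audio' as its own transcript."""
    return audio

def is_correct(transcript, context):
    """Stand-in for the TuTalk semantic analyzer: exact-match toy check."""
    return transcript.strip().lower().rstrip("?") == context["expected_answer"].lower()

def uncertainty_model(audio, transcript, context):
    """Stand-in for the logistic-regression (un)certainty classifier."""
    return "um" in transcript.lower() or transcript.rstrip().endswith("?")

def respond_to_turn(audio, context):
    """Pick a tutor response from the two automatic labels, as described above."""
    transcript = recognize(audio)
    correct = is_correct(transcript, context)
    uncertain = uncertainty_model(audio, transcript, context)
    # Response choice keyed on the (in)correctness and (un)certainty labels.
    policy = {
        (True, False): "advance to the next question",
        (True, True): "affirm the answer, then address the uncertainty",
        (False, False): "remediate the incorrect answer",
        (False, True): "remediate the answer and address the uncertainty",
    }
    return policy[(correct, uncertain)]

print(respond_to_turn("um 9.8 meters per second squared?",
                      {"expected_answer": "9.8 meters per second squared"}))
```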
{
"text": "The experimental procedure was as follows: college students with no college-level physics (1) read a short physics text, (2) took a pretest, (3) worked 5 \"training\" problems with ITSPOKE, where each user received a varying level of uncertainty adaptation based on condition, (4) took a user satisfaction survey, (5) took a posttest isomorphic to the pretest, and (6) worked a \"test\" problem with ITSPOKE that was isomorphic to the 5th training problem, where no user received any uncertainty adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ITSPOKE: Spoken Dialogue Tutor",
"sec_num": "3"
},
{
"text": "The resulting corpus contains 432 dialogues (6 per student) and 7216 turns from 72 students, 47 female and 25 male. All turns are used in the disengagement detection experiments described next. However, only the training problem dialogues (360, 5 per student, 6044 student turns) are used for the performance analyses in Sections 6-7, because the final test problem was given after the instruments measuring performance (survey and posttest).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ITSPOKE: Spoken Dialogue Tutor",
"sec_num": "3"
},
{
"text": "Our survey and tests are the same as those used in multiple prior ITSPOKE experiments (c.f., (Forbes-Riley and Litman, 2011a)). The pretest and posttest each contain 26 multiple choice questions querying knowledge of the topics covered in the dialogues. Average pretest and posttest scores in the corpus were 51.0% and 73.1% (out of 100%) with standard deviations of 14.5% and 13.8%, respectively. The user satisfaction survey contains 16 statements rated on a 5-point Likert scale. Average total sur-vey score was 60.9 (out of 80), with a standard deviation of 8.5. While the statements themselves are listed elsewhere (Forbes-Riley and , 9 statements concern the tutoring domain (e.g., The tutor was effective/precise/useful), 7 of which were taken from (Baylor et al., 2003) and 2 of which were created for our system. 3 statements concern user uncertainty levels and were created for our system. 4 statements concern the spoken dialogue interaction (e.g., It was easy to understand the tutor's speech) and were taken from (Walker et al., 2002) . Our survey has also been incorporated into other recent work exploring user satisfaction in spoken dialogue computer tutors (Dzikovska et al., 2011) . In Section 6 we discuss how user scores on these instruments are used to measure system performance. See (Forbes-Riley and Litman, 2011a) for further details of ITSPOKE and the 2008 experiment.",
"cite_spans": [
{
"start": 756,
"end": 777,
"text": "(Baylor et al., 2003)",
"ref_id": "BIBREF5"
},
{
"start": 1026,
"end": 1047,
"text": "(Walker et al., 2002)",
"ref_id": "BIBREF48"
},
{
"start": 1174,
"end": 1198,
"text": "(Dzikovska et al., 2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ITSPOKE: Spoken Dialogue Tutor",
"sec_num": "3"
},
{
"text": "Following the experiment, the entire corpus was manually labeled for (in)correctness (correct, incorrect), (un)certainty (CER, UNC) and (dis)engagement (ENG, DISE) by one trained annotator. Table 1 shows the distribution of the labeled turns in the 2008 ITSPOKE corpus. In prior ITSPOKE corpora, our annotator displayed interannotator agreement of 0.85 and 0.62 Kappa on correctness and uncertainty, respectively (Forbes-Riley and Litman, 2011a). For the disengagement label, a reliability analysis was performed over several annotation rounds on subsets of the 2008 ITSPOKE corpus by this and a second trained annotator, yielding 0.55 Kappa (this analysis is described in detail elsewhere (Forbes-Riley et al., 2011)). Our Kappas indicate that user uncertainty and disengagement can both be annotated with moderate reliability in our dataset, on par with prior emotion annotation work (c.f., (Pon-Barry and Shieber, 2011)). Note however that the best way to label users' internal affective state(s) is still an open question. Many system researchers (including ourselves) rely on trained labelers (e.g., (Pon-Barry et al., 2006; Porayska-Pomsta et al., 2008) ) while others use selfreports (e.g., (Conati and Maclaren, 2009; Gratch et al., 2009; McQuiggan et al., 2008) ). Both methods are problematic; for example both can be rendered inaccurate when users mask their true feelings. Two studies that have compared self-reports, peer labelers, trained labelers, and combinations of labelers (Afzal and Robinson, 2011; D'Mello et al., 2008) both illustrate the common finding that human annotators display low to moderate interannotator reliability for affect annotation, and both studies show that trained labelers yield the highest reliability on this task. Despite the lack of high interannotator reliability, responding to affect detected by trained human labels has still been shown to improve system performance (see Section 1). ",
"cite_spans": [
{
"start": 1105,
"end": 1129,
"text": "(Pon-Barry et al., 2006;",
"ref_id": "BIBREF35"
},
{
"start": 1130,
"end": 1159,
"text": "Porayska-Pomsta et al., 2008)",
"ref_id": "BIBREF36"
},
{
"start": 1198,
"end": 1225,
"text": "(Conati and Maclaren, 2009;",
"ref_id": "BIBREF8"
},
{
"start": 1226,
"end": 1246,
"text": "Gratch et al., 2009;",
"ref_id": "BIBREF21"
},
{
"start": 1247,
"end": 1270,
"text": "McQuiggan et al., 2008)",
"ref_id": "BIBREF32"
},
{
"start": 1492,
"end": 1518,
"text": "(Afzal and Robinson, 2011;",
"ref_id": "BIBREF0"
},
{
"start": 1519,
"end": 1540,
"text": "D'Mello et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 190,
"end": 197,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "ITSPOKE: Spoken Dialogue Tutor",
"sec_num": "3"
},
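The Kappa values reported above are chance-corrected agreement statistics; a small sketch with invented annotator labels shows the computation using scikit-learn's cohen_kappa_score (the data below is fabricated for illustration, not drawn from the ITSPOKE corpus).

```python
from sklearn.metrics import cohen_kappa_score

# Toy (dis)engagement labels from two annotators over ten turns.
annotator_1 = ["ENG", "ENG", "DISE", "ENG", "DISE", "ENG", "ENG", "ENG", "DISE", "ENG"]
annotator_2 = ["ENG", "ENG", "DISE", "ENG", "ENG", "ENG", "DISE", "ENG", "DISE", "ENG"]

# Kappa discounts the agreement expected by chance, which matters with
# skewed label distributions like the roughly 16% DISE rate in this corpus:
# raw agreement here is 0.8, but Kappa is only about 0.52.
print(cohen_kappa_score(annotator_1, annotator_2))
```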
{
"text": "As noted in Section 1, we have developed a user disengagement detector to incorporate into our existing uncertainty-adaptive spoken dialogue system. The result will be a state of the art system that adapts to multiple affective states during the dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatically Detecting User Disengagement (DISE) in ITSPOKE",
"sec_num": "4"
},
{
"text": "Our disengagement annotation scheme (Forbes-Riley et al., 2011) was derived from empirical observations in our data but draws on prior work, including work mentioned in Section 2, appraisal theory-based emotion models (e.g., Conati and Maclaren (2009)) 4 , and prior approaches to annotating disengagement or related states in tutoring (Lehman et al., 2008; Porayska-Pomsta et al., 2008) . Briefly, our overall Disengagement label (DISE) is used for turns expressing moderate to strong disengagement towards the interaction, i.e., responses given without much effort or without caring about appropriateness. Responses might also be accompanied by signs of inattention, boredom, or irritation. Clear examples include answers spoken quickly in leaden monotone, with sarcastic or playful tones, or with off-task sounds such as rhythmic tapping or electronics usage. 5 Note that our DISE label is defined independently of the tutoring domain and thus should generalize across spoken dialogue systems. Figure 1 illustrates the DISE, (in)correctness, and (un)certainty labels across 3 tutor/student turn pairs. U 1 is labeled DISE and UNC because the student gave up immediately and with irritation when too much prior knowledge was required. U 2 is labeled DISE and UNC because the student avoided giving a specific numerical value, offering instead a vague (and obviously incorrect) answer. U 3 is labeled DISE and CER because the student sang the correct answer, indicating a lack of interest in the larger purpose of the material being discussed. 6",
"cite_spans": [
{
"start": 336,
"end": 357,
"text": "(Lehman et al., 2008;",
"ref_id": "BIBREF28"
},
{
"start": 358,
"end": 387,
"text": "Porayska-Pomsta et al., 2008)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 997,
"end": 1005,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Binary DISE Label",
"sec_num": "4.1"
},
{
"text": "T 1 : What is the definition of Newton's Second Law? U 1 : I have no idea <sigh>. (DISE, incorrect, UNC) . . . T 2 : What's the numerical value of the man's acceleration? Please specify the units too. ",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 104,
"text": "(DISE, incorrect, UNC)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Binary DISE Label",
"sec_num": "4.1"
},
{
"text": "Machine learning classification was done at the turn level using WEKA software 7 and 10-fold cross validation. A J48 decision tree was chosen because of its easily read output and the fact that previous experiments with our data showed little variance be- 5 Affective systems research has found total disengagement rare in laboratory settings (Lehman et al., 2008; Martalo et al., 2008) . As in that research, we equate the DISE label with no or low engagement. Since total disengagement is common in real-world unobserved human-computer interactions (deleting unsatisfactory software being an extreme example) it remains an open question as to how well laboratory findings generalize.",
"cite_spans": [
{
"start": 256,
"end": 257,
"text": "5",
"ref_id": null
},
{
"start": 343,
"end": 364,
"text": "(Lehman et al., 2008;",
"ref_id": "BIBREF28"
},
{
"start": 365,
"end": 386,
"text": "Martalo et al., 2008)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DISE Detection Method",
"sec_num": "4.2"
},
{
"text": "6 Our original scheme distinguished six DISE subtypes that trained annotators distinguished with a reliability of .43 Kappa (Forbes-Riley et al., 2011) . However, pilot experiments indicated that our models cannot accurately distinguish them, thus our DISE detector focuses on the DISE label.",
"cite_spans": [
{
"start": 124,
"end": 151,
"text": "(Forbes-Riley et al., 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DISE Detection Method",
"sec_num": "4.2"
},
{
"text": "7 http://www.cs.waikato.ac.nz/ml/weka/ tween different machine learning algorithms (Drummond and . We also use a cost matrix, which heavily penalizes classifying a true DISE instance as false, because our class distributions are highly skewed (16.21% DISE turns) and the cost matrix successfully mitigated the skew's effect in our prior work, where the uncertainty distribution is also skewed (20.55% UNC turns) (Drummond and Litman, 2011) .",
"cite_spans": [
{
"start": 412,
"end": 439,
"text": "(Drummond and Litman, 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DISE Detection Method",
"sec_num": "4.2"
},
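The paper's setup uses WEKA's J48 with a cost matrix; a rough scikit-learn analogue is sketched below, with class_weight standing in for the cost matrix's asymmetric penalty on missed DISE turns. The synthetic data, the 5:1 cost ratio, and min_samples_leaf are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
# Synthetic stand-in for turn features, skewed like the corpus (~16% DISE).
X = rng.normal(size=(1000, 8))
y = (rng.random(1000) < 0.16).astype(int)   # 1 = DISE, 0 = ENG
X[y == 1] += 0.8                            # give DISE turns a weak signal

# class_weight plays the role of WEKA's cost matrix: misclassifying a true
# DISE turn costs 5x more than misclassifying an ENG turn (ratio illustrative).
tree = DecisionTreeClassifier(class_weight={0: 1, 1: 5}, min_samples_leaf=20)
pred = cross_val_predict(tree, X, y, cv=10)  # 10-fold cross-validation, as above
print("DISE recall:", recall_score(y, pred))
```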
{
"text": "To train our DISE model, we first extracted the set of speech and dialogue features shown in Figure 2 from the user turns in our corpus. As shown, the acoustic-prosodic features represent duration, pausing, pitch, and energy, and were normalized by the first user turn, as well as totaled and averaged over each dialogue. The lexical and dialogue features consist of the current dialogue name (i.e., one of the six physics problems) and turn number, the current ITSPOKE question's name (e.g.,T 3 in Figure 1 has a unique identifier) and depth in the discourse structure (e.g., an ITSPOKE remediation question after an incorrect user answer would be at one greater depth than the prior question), a word occurrence vector for the automatically recognized text of the user turn, an automatic (in)correctness label, and lastly, the number of user turns since the last correct turn (\"incorrect runs\"). We also included two user-based features, gender and pretest score. Note that although our feature set was drawn primarily from our prior uncertainty detection experiments (Forbes-Riley and Litman, 2011a; Drummond and Litman, 2011), we have also experimented with other features, including state-of-theart acoustic-prosodic features used in the last Interspeech Challenges Schuller et al., 2009b) and made freely available in the openS-MILE Toolkit (Florian et al., 2010) . To date, however, these features have only decreased the crossvalidation performance of our models. 8 While some of our features are tutoring-specific, these have similar counterparts in other applications (i.e., answer (in)correctness corresponds to a more general notion of \"response appropriateness\" in other domains, while pretest score corresponds to the general notion of domain expertise). Moreover, all of our features are fully automatic and available in real-time, so that the model can be directly implemented and deployed. To that end, we now describe the results of our intrinsic and extrinsic evaluations of our DISE model, aimed at determining whether it is ready to be evaluated with real users. Table 2 shows the averaged results of the crossvalidation with the J48 decision tree algorithm. In addition to accuracy, we use Unweighted Average (UA) Precision 9 , Recall, and F-measure because they are the standard measures used to evaluate current affect recognition technology, particularly for unbalanced two-class problems (Schuller et al., 2009b) . In addition, we use the cross correlation (CC) measure and mean linear error (MLE) because these metrics were used in recent work for evaluating disengagement (level of interest) detectors for the Interspeech 2010 challenge Wang and Hirschberg, 2011; Jeon et al., 2010) ). 10 Note however that the Interspeech 2010 task differs from ours not only in the corpus and features, but also in the learning task: they used regression to detect a continuous level of interest ranging from 0 to 1, while we detect a binary class. Thus comparison between our results and those are only suggestive rather than conclusive.",
"cite_spans": [
{
"start": 1271,
"end": 1294,
"text": "Schuller et al., 2009b)",
"ref_id": "BIBREF40"
},
{
"start": 1347,
"end": 1369,
"text": "(Florian et al., 2010)",
"ref_id": "BIBREF15"
},
{
"start": 2414,
"end": 2438,
"text": "(Schuller et al., 2009b)",
"ref_id": "BIBREF40"
},
{
"start": 2665,
"end": 2691,
"text": "Wang and Hirschberg, 2011;",
"ref_id": "BIBREF49"
},
{
"start": 2692,
"end": 2710,
"text": "Jeon et al., 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 93,
"end": 101,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 499,
"end": 507,
"text": "Figure 1",
"ref_id": null
},
{
"start": 2084,
"end": 2091,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "DISE Detection Method",
"sec_num": "4.2"
},
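A sketch of the evaluation measures named above, on invented labels: UA precision, recall, and F-measure as the unweighted (macro) average over the two classes (per footnote 9), CC as Pearson's correlation, and MLE as the mean absolute error (per footnote 10).

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])  # 1 = DISE, 0 = ENG (toy labels)
y_pred = np.array([1, 0, 0, 0, 0, 1, 0, 1, 0, 0])

# Unweighted Average (UA) metrics: average the per-class values so the rare
# DISE class counts as much as the majority ENG class.
print("UA precision:", precision_score(y_true, y_pred, average="macro"))
print("UA recall:   ", recall_score(y_true, y_pred, average="macro"))
print("UA F-measure:", f1_score(y_true, y_pred, average="macro"))

# CC and MLE as used for the Interspeech 2010 level-of-interest task:
# Pearson correlation and mean absolute error between scores.
print("CC: ", pearsonr(y_true, y_pred)[0])
print("MLE:", np.mean(np.abs(y_true - y_pred)))
```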
{
"text": "As shown in Table 2 , we also compare our results with those of majority class (ENG) labeling of the same turns. Since (7216-1170)/7216 user turns in the corpus are engaged (recall Table 1 ), always selecting the majority class (ENG) label for these turns thus yields 83.8% accuracy (with 0% precision and recall for DISE, and 83.8% precision and 100% recall for ENG). While our DISE model does not outperform majority class labeling with respect to accuracy, this is not surprising given the steep skew in class distribution, and our learned model significantly outperforms the baseline with respect to all the other measures (p<.001). 11 Our CC and MLE results are on par with the best results from the state-of-the-art systems competing in the 2010 Interspeech Challenge, where the task was to detect level of interest. In particular, the winner obtained a CC of 0.428 (higher numbers are better) and an MLE of 0.146 (lower numbers are better) (Jeon et al., 2010) , while a subsequent study yielded a CC of 0.480 and an MLE of 0.131 on the same corpus (Wang and Hirschberg, 2011) . Our results are also on par with the best results of the other prior research on detecting disengagement discussed in Section 2 that detects a small number of disengagement classes and reports accuracy and/or recall and precision. For example, (Martalo et al., 2008) report average precision of 75% and recall of 74% (detecting three levels of disengagement), while (Kapoor and Picard, 2005) report an accuracy of 86% for detecting binary (dis)interest. Our final DISE model was produced by running the J48 algorithm over our entire corpus. The resulting decision tree contains 141 nodes and 75 leaves. Inspection of the tree reveals that all of the feature types in Figure 2 (acoustic-prosodic, lexical/dialogue, user identifier) are used as decision nodes in the tree, although not all variations on these types were used. The upper-level nodes of the tree are usually considered to be more informative features as compared to lower-level nodes, since they are queried for more leaves. The upper level of the DISE model consists entirely of temporal, lexical, pitch and energy features as well as question name and depth and incorrect runs, while features such as gender, turn number, and dialogue name appear only near the leaves, and pretest score and turn (in)correctness don't appear at all. The amount of pausing prior to the start of the user turn is the most important feature for determining disengagement, with pauses shorter than a quarter second being labeled DISE, suggesting that fast answers are a strong signal of disengagement in our system. Users who answer quickly may do so without taking the time to think it through; the more engaged user, in contrast, takes more time to prepare an answer.",
"cite_spans": [
{
"start": 637,
"end": 639,
"text": "11",
"ref_id": null
},
{
"start": 947,
"end": 966,
"text": "(Jeon et al., 2010)",
"ref_id": "BIBREF23"
},
{
"start": 1055,
"end": 1082,
"text": "(Wang and Hirschberg, 2011)",
"ref_id": "BIBREF49"
},
{
"start": 1329,
"end": 1351,
"text": "(Martalo et al., 2008)",
"ref_id": "BIBREF31"
},
{
"start": 1451,
"end": 1476,
"text": "(Kapoor and Picard, 2005)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 181,
"end": 188,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1752,
"end": 1760,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Intrinsic Evaluation: Cross-Validation",
"sec_num": "5"
},
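The majority-class arithmetic above, spelled out in a few lines: always predicting ENG over the 7216 turns (1170 of them DISE) reproduces the reported 83.8% accuracy while recovering no DISE turn at all, which is why the skew-insensitive measures matter.

```python
# Majority-class (ENG) baseline on the corpus counts reported above.
total_turns, dise_turns = 7216, 1170

accuracy = (total_turns - dise_turns) / total_turns
print(f"baseline accuracy: {accuracy:.3f}")          # 0.838
print("DISE precision/recall: 0.0 (no DISE turn is ever predicted)")
print(f"ENG precision: {accuracy:.3f}, ENG recall: 1.0")
```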
{
"text": "Three lexical items from the student turns, \"friction\", \"light\", and \"greater\", are the next most important features in the tree, suggesting that particular concepts and question types can be typically associated with user disengagement in a system. For example, open-ended system questions may lead users to disengage due to frustration from not knowing when their answer is complete. One common case in ITSPOKE involves asking users to name all the forces on an object; some users don't know how many to list, so they start listing random forces, such as \"friction.\" On the other hand, multiple choice questions can also lead users to disengage; they begin with a reasonable chance of being correct and thus don't take the time to think through their answer. One common case in ITSPOKE involves asking users to determine which of two objects has the greater or lesser force, acceleration, and velocity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Evaluation: Cross-Validation",
"sec_num": "5"
},
{
"text": "While our feature set is highly generalizable to other domains, it is an empirical question as to whether the feature values we found maximally effective for predicting disengagement also generalize to other domains. Intuition is often unreliable, and it has been widely shown in affect prediction that the answer can depend on domain, dataset, and learning algorithm employed. Moreover, there are many types of spoken dialogue systems with different styles and no single type can represent the entire field. That said, it is also important to note that there are lessons to be learned from the features selected for one particular domain, in terms of the take-home message for other domains. For example, the fact that \"prior pause\" is selected as a strong signal of disengagement in ITSPOKE dialogues may indicate that the feature itself (regardless of its selected value) could be transferred to different domains, alone or in the demonstrated combinations with the other selected features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Evaluation: Cross-Validation",
"sec_num": "5"
},
{
"text": "Next we use extrinsic evaluation to confirm that our final DISE model is both useful and a reasonable substitute for our gold standard manual DISE labels. With respect to showing the utility of detecting DISE, we use a correlational analysis to show that the gold standard (manual) DISE values are significantly predictive of two different measures of system performance. 12 With respect to showing the adequacy of our current level of detection performance for the learned DISE model, we demonstrate that after replacing the manual DISE labels with the automatic DISE labels when running our correlations, the automatic labels are related to performance in the same way as the gold standard labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation: Correlation",
"sec_num": "6"
},
{
"text": "Thus for both our automatically detected DISE labels (auto) and our gold standard DISE labels (manual), we first computed the total number of occurrences for each student, and then computed a bivariate Pearson's correlation between this total and two different metrics of performance: learning gain (LG) and user satisfaction (US). In the tutoring domain, learning is the primary performance metric and as is common in this domain we compute it as normalized learning gain ((posttest score-pretest score)/(1- pretest score)). In spoken dialogue systems, user satisfaction is the primary performance metric and as is common in this domain we compute it by totaling over the user satisfaction survey scores. 13 Table 3 shows first the mean and standard deviation for the DISE label over all students, the Pearson's Correlation coefficient (R) and its significance (p). As shown, both our manual and automatic DISE labels are significantly related to performance, regardless of whether we measure it as user satisfaction or learning gain. 14 Moreover, in both cases the correlations are nearly identical between the manual and automatic labels. These results indicate that the detected DISE values are a useful substitute for the gold standard, and suggest that redesigning IT-SPOKE to recognize and respond to DISE can significantly improve system performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 709,
"end": 716,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Extrinsic Evaluation: Correlation",
"sec_num": "6"
},
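A sketch of the per-student correlation analysis just described, with fabricated values standing in for the 72 students. The normalized learning gain formula is the one defined above; the negative DISE/gain relationship is built into the synthetic data, so the printed correlation only illustrates the computation, not the paper's result.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Hypothetical per-student values standing in for the 72 ITSPOKE students.
dise_total = rng.integers(0, 30, size=72)            # total DISE turns per student
pretest = rng.uniform(0.3, 0.7, size=72)
posttest = np.clip(pretest + rng.uniform(0.0, 0.4, size=72)
                   - 0.003 * dise_total, 0, 1)       # more DISE, less improvement

# Normalized learning gain, as defined in the text.
learning_gain = (posttest - pretest) / (1 - pretest)

r, p = pearsonr(dise_total, learning_gain)
print(f"R = {r:.2f}, p = {p:.3f}")
```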
{
"text": "Because we are adding our disengagement detector to a spoken dialogue system that already detects and adapts to user uncertainty, we argue that it is also necessary to evaluate whether greater performance benefits are likely to be obtained by adapting to a second state. In other words, given how difficult it is to effectively detect and adapt to one user affective state, is performance likely to improve by detecting and adapting to multiple affective states?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation: Affective State Multiple Regression",
"sec_num": "7"
},
{
"text": "To answer this question, we performed a multiple linear regression analysis aimed at quantifying the relative usefulness of the automatically detected 13 Identical results were obtained by using an average instead of a total, and only slightly weaker results were obtained when normalizing the DISE totals as the percentages of total turns.",
"cite_spans": [
{
"start": 151,
"end": 153,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation: Affective State Multiple Regression",
"sec_num": "7"
},
{
"text": "14 We previously found a related correlation between different DISE and learning measures, during the analysis of our DISE annotation scheme (Forbes-Riley and Litman, 2011b). In particular, we showed a significant partial correlation between the percentage of manual DISE labels and posttest controlled for pretest score. disengagement and uncertainty labels when predicting our system performance metrics. We ran four stepwise linear regressions. The first regression predicted learning gain, and gave the model two possible inputs: the total number of automatic DISE labels and UNC labels per user. We then ran the same regression again, this time predicting user satisfaction. For comparison, we ran the same two regressions using the manual DISE and UNC labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation: Affective State Multiple Regression",
"sec_num": "7"
},
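The paper does not spell out its stepwise implementation; the sketch below mimics a standard forward add-if-significant procedure over the two candidate predictors, using statsmodels OLS on fabricated per-student totals.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
# Invented per-student totals for 72 students; both depress learning gain.
dise = rng.integers(0, 30, size=72).astype(float)
unc = rng.integers(0, 40, size=72).astype(float)
gain = 0.6 - 0.010 * dise - 0.006 * unc + rng.normal(0, 0.08, size=72)

def fit(cols):
    """OLS of learning gain on the given predictor columns (plus intercept)."""
    return sm.OLS(gain, sm.add_constant(np.column_stack(cols))).fit()

# Step 1: fit each single-predictor model and keep the stronger one.
m_dise, m_unc = fit([dise]), fit([unc])
best = m_dise if m_dise.rsquared > m_unc.rsquared else m_unc
print("step 1 R^2:", round(best.rsquared, 3))

# Step 2: add the remaining predictor; retain it only if its coefficient
# is significant, mirroring the usual stepwise entry criterion.
m_both = fit([dise, unc])
added_p = m_both.pvalues[2] if best is m_dise else m_both.pvalues[1]
if added_p < 0.05:
    print("step 2 R^2:", round(m_both.rsquared, 3), "(both predictors kept)")
else:
    print("second predictor not significant; keeping the one-predictor model")
```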
{
"text": "As the trained regression models in Figure 3 show, when predicting learning gain, selecting both automatically detected affective state metrics as inputs significantly increases the model's predictive power as compared to only selecting one. 15 The (standardized) feature weights indicate relative predictive power in accounting for the variance in learning gain. As shown, both automatic affect metrics have the same weight in the final model. This result suggests that adapting to our automatically detected disengagement and uncertainty labels can further improve learning over and above adapting to uncertainty alone. Although the final model's predictive power is low (R 2 =0.15), our interest here is only in investigating whether the two affective states are more useful in combination than in isolation for predicting performance. In similar types of stepwise regressions on prior ITSPOKE corpora, we've shown that more complete models of system performance incorporating many predictors of learning (i.e. affective states in conjunction with other dialogue features) can yield R 2 values of over .5 (Forbes-Riley et al., 2008) . 16 Learning Gain = -.31 * Total Automatic DISE (R 2 =.09, p=.009)",
"cite_spans": [
{
"start": 1108,
"end": 1135,
"text": "(Forbes-Riley et al., 2008)",
"ref_id": "BIBREF19"
},
{
"start": 1138,
"end": 1140,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Extrinsic Evaluation: Affective State Multiple Regression",
"sec_num": "7"
},
{
"text": "Learning Gain = -.24 * Total Automatic DISE -.24 * Total Automatic UNC (R 2 =.15, p=.004) Interestingly, for the regression models of learning gain that used manual affect metrics, only the DISE metric was selected as an input. This indicates that the automatic affective state labels are useful in combination for predicting performance in a way that is not reflected in their gold standard counterparts. Detecting multiple affective states might thus be one way to compensate for the noise that is introduced in a fully-automated affective spoken dialogue system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation: Affective State Multiple Regression",
"sec_num": "7"
},
{
"text": "Similarly, only the DISE metric was selected for inclusion in the regression model of user satisfaction, regardless of whether manual or automatic labels were used. A separate correlation analysis showed that user uncertainty is not significantly correlated with user satisfaction in our system, though we previously found that multiple uncertainty-related metrics do significantly correlate with learning .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation: Affective State Multiple Regression",
"sec_num": "7"
},
{
"text": "In this paper we used extrinsic evaluations to provide evidence for the utility of a new system design involving the complex task of user affect detection, prior to undertaking an expensive and timeconsuming evaluation of an affect-adaptive system with real users. In particular, we first presented a novel model for automatically detecting user disengagement in spoken dialogue systems. We showed through intrinsic evaluations (i.e., cross-validation experiments using gold-standard labels) that the model yields results on par with prior work. We then showed crucially through novel extrinsic evaluation that the resulting automatically detected disengagement labels correlate with two primary performance metrics (user satisfaction and learning) in the same way as gold standard (manual) labels. This suggests that adapting to the automatic disengagement labels has the potential to significantly improve performance even in the presence of noise from the automatic labeling. Finally, further extrinsic analyses using multiple regression suggest that adapt-ing to our automatic disengagement labels can improve learning (though not user satisfaction) over and above the improvement achieved by only adapting to automatically detected user uncertainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary and Current Directions",
"sec_num": "8"
},
{
"text": "We have already developed and implemented an adaptation for user disengagement in ITSPOKE. The disengagement adaptation draws on empirical analyses of our data and effective responses to user disengagement presented in prior work (c.f., (Forbes-Riley and Litman, 2011b)), We are currently evaluating our disengagement adaptation in the \"ideal\" environment of a Wizard of Oz experiment, where user disengagement, uncertainty, and correctness are labeled by a hidden human during user interactions with ITSPOKE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary and Current Directions",
"sec_num": "8"
},
{
"text": "Based on the evaluations here, we believe our disengagement model is ready for implementation in ITSPOKE. We will then evaluate the resulting spoken dialogue system for detecting and adapting to multiple affective states in an upcoming controlled experiment with real users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary and Current Directions",
"sec_num": "8"
},
{
"text": "ITSPOKE is a speech-enhanced and otherwise modified version of the Why2-Atlas text-based qualitative physics tutor(VanLehn et al., 2002).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "an outgrowth of Festival(Black and Taylor, 1997).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Appraisal theorists distinguish emotional behaviors from their underlying causes, arguing that emotions result from an evaluation of a context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also tried using our automatic UNC label as a feature in our DISE model, but our results weren't significantly improved. 9 simply ((Precision(DISE) + Precision(ENG))/2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Pearson product-moment correlation coefficient (CC) is a measure of the linear dependence that is widely used in regression settings. MLE is a regression performance measure for the mean absolute error between an estimator and the true value.11 CC is undefined for majority class labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Spoken dialogue research has shown that redesigning a system in light of such correlational analysis can indeed yield performance improvements(Rotaru and Litman, 2009).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Using the stepwise method, Automatic DISE was the first feature selected, and Automatic UNC the second. However, note that a model consisting of only the Automatic UNC metric also yields significantly worse predictive power than selecting both affective state metrics. Further note that almost identical models were produced using percentages rather than totals.16 R 2 is the standard reported metric for linear regressions. However, for consistency withTable 3, note that the two models inFigure 3yield R values of -.31 and -.38, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is funded by NSF award 0914615. We thank Scott Silliman for systems support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Natural affect data: Collection and annotation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Afzal",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Robinson",
"suffix": ""
}
],
"year": 2011,
"venue": "Affect and Learning Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Afzal and P. Robinson. 2011. Natural affect data: Collection and annotation. In Sidney D'Mello and Rafael Calvo, editors, Affect and Learning Technolo- gies. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Experimentally augmenting an intelligent tutoring system with human-supplied capabilities: Adding human-provided emotional scaffolding to an automated reading tutor that listens",
"authors": [
{
"first": "G",
"middle": [],
"last": "Aist",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kort",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reilly",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mostow",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Picard",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. Intelligent Tutoring Systems Conference (ITS) Workshop on Empirical Methods for Tutorial Dialogue Systems",
"volume": "",
"issue": "",
"pages": "16--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Aist, B. Kort, R. Reilly, J. Mostow, and R. Pi- card. 2002. Experimentally augmenting an intelli- gent tutoring system with human-supplied capabili- ties: Adding human-provided emotional scaffolding to an automated reading tutor that listens. In Proc. In- telligent Tutoring Systems Conference (ITS) Workshop on Empirical Methods for Tutorial Dialogue Systems, pages 16-28, San Sebastian, Spain.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A system for technology based assessment of language and literacy in young children: the role of multiple information sources",
"authors": [
{
"first": "A",
"middle": [],
"last": "Alwan",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Caseyz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gerosa",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Heritagez",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iseliy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jonesz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kazemzadeh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pricex",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tepperman",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wangy",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 9th IEEE International Workshop on Multimedia Signal Processing (MMSP)",
"volume": "",
"issue": "",
"pages": "26--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Alwan, Y. Bai, M. Black, L. Caseyz, M. Gerosa, M. Heritagez, M. Iseliy, M. Jonesz, A. Kazemzadeh, S. Lee, S. Narayanan, P. Pricex, J. Tepperman, and S. Wangy. 2007. A system for technology based as- sessment of language and literacy in young children: the role of multiple information sources. In Proceed- ings of the 9th IEEE International Workshop on Multi- media Signal Processing (MMSP), pages 26-30, Cha- nia, Greece, October.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Prosody-based automatic detection of annoyance and frustration in human-computer dialog",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Krupski",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference on Spoken Language Processing (ICSLP)",
"volume": "",
"issue": "",
"pages": "2037--2039",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ang, R. Dhillon, A. Krupski, E.Shriberg, and A. Stol- cke. 2002. Prosody-based automatic detection of an- noyance and frustration in human-computer dialog. In J. H. L. Hansen and B. Pellom, editors, Proceedings of the International Conference on Spoken Language Processing (ICSLP), pages 2037-2039, Denver, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Private emotions vs. social interaction -a data-driven approach towards analysing emotion in speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Batliner",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Steidl",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hacker",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Noth",
"suffix": ""
}
],
"year": 2008,
"venue": "User Modeling and User-Adapted Interaction: The Journal of Personalization Research",
"volume": "18",
"issue": "",
"pages": "175--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Batliner, S. Steidl, C. Hacker, and E. Noth. 2008. Private emotions vs. social interaction -a data-driven approach towards analysing emotion in speech. User Modeling and User-Adapted Interaction: The Journal of Personalization Research, 18:175-206.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The effect of pedagogical agent voice and animation on learning, motivation, and perceived persona",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Baylor",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ryu",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ED-MEDIA Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Baylor, J. Ryu, and E. Shen. 2003. The effect of pedagogical agent voice and animation on learning, motivation, and perceived persona. In Proceedings of the ED-MEDIA Conference, Honolulu, Hawaii, June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Festival speech synthesis system: system documentation (1.1.1). The Centre for Speech Technology Research",
"authors": [
{
"first": "A",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Black and P. Taylor. 1997. Festival speech synthe- sis system: system documentation (1.1.1). The Centre for Speech Technology Research, University of Edin- burgh, http://www.cstr.ed.ac.uk/projects/festival/.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Models for multiparty engagement in open-world dialog",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bohus",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Horvitz",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of SIGdial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bohus and E. Horvitz. 2009. Models for multiparty engagement in open-world dialog. In Proceedings of SIGdial, London, UK.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Empirically building and evaluating a probabilistic model of user affect",
"authors": [
{
"first": "C",
"middle": [],
"last": "Conati",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Maclaren",
"suffix": ""
}
],
"year": 2009,
"venue": "User Modeling and User-Adapted Interaction",
"volume": "19",
"issue": "3",
"pages": "267--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Conati and H. Maclaren. 2009. Empirically build- ing and evaluating a probabilistic model of user af- fect. User Modeling and User-Adapted Interaction, 19(3):267-303.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Describing the emotional states that are expressed in speech",
"authors": [
{
"first": "R",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "R",
"middle": [
"R"
],
"last": "Cornelius",
"suffix": ""
}
],
"year": 2003,
"venue": "Speech Communication",
"volume": "40",
"issue": "1-2",
"pages": "5--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Cowie and R. R. Cornelius. 2003. Describing the emotional states that are expressed in speech. Speech Communication, 40(1-2):5-32.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Real-life emotions detection with lexical and paralinguistic cues on human-human call center dialogs",
"authors": [
{
"first": "L",
"middle": [],
"last": "Devillers",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Vidrascu",
"suffix": ""
}
],
"year": 2006,
"venue": "Ninth International Conference on Spoken Language Processing (ICSLP",
"volume": "",
"issue": "",
"pages": "801--804",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Devillers and L. Vidrascu. 2006. Real-life emo- tions detection with lexical and paralinguistic cues on human-human call center dialogs. In Ninth Inter- national Conference on Spoken Language Processing (ICSLP, pages 801-804, Pittsburgh, PA, September.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic detection of learner's affect from conversational cues",
"authors": [
{
"first": "S",
"middle": [],
"last": "D'mello",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Craig",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Witherspoon",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mcdaniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Graesser",
"suffix": ""
}
],
"year": 2008,
"venue": "User Modeling and User-Adapted Interaction: The Journal of Personalization Research",
"volume": "18",
"issue": "",
"pages": "45--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D'Mello, S. Craig, A. Witherspoon, B. McDaniel, and A. Graesser. 2008. Automatic detection of learner's affect from conversational cues. User Modeling and User-Adapted Interaction: The Journal of Personal- ization Research, 18:45-80.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A time for emoting: When affect-sensitivity is and isn't effective at promoting deep learning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mello",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lehman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sullins",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Daigle",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Combs",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vogt",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Perkins",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Graesser",
"suffix": ""
}
],
"year": 2010,
"venue": "Intelligent Tutoring Systems Conference",
"volume": "",
"issue": "",
"pages": "245--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. D'Mello, B. Lehman, J. Sullins, R. Daigle, R. Combs, K. Vogt, L. Perkins, and A. Graesser. 2010. A time for emoting: When affect-sensitivity is and isn't effec- tive at promoting deep learning. In Intelligent Tutoring Systems Conference, pages 245-254, Pittsburgh, PA, USA, June.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Examining the impacts of dialogue content and system automation on affect models in a spoken tutorial dialogue system",
"authors": [
{
"first": "J",
"middle": [],
"last": "Drummond",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. 12th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)",
"volume": "",
"issue": "",
"pages": "312--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Drummond and D. Litman. 2011. Examining the im- pacts of dialogue content and system automation on affect models in a spoken tutorial dialogue system. In Proc. 12th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 312-318, Portland, Oregon, June.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Exploring user satisfaction in a tutorial dialogue system",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dzikovska",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Steinhauser",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. 12th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)",
"volume": "",
"issue": "",
"pages": "162--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Dzikovska, J. Moore, N. Steinhauser, and G. Camp- bell. 2011. Exploring user satisfaction in a tutorial dialogue system. In Proc. 12th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 162-172, Portland, Oregon, June.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Munich versatile and fast open-source audio feature extractor",
"authors": [
{
"first": "E",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wollmer",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. ACM Multimedia (MM)",
"volume": "",
"issue": "",
"pages": "1459--1462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Florian, M. Wollmer, and B. Schuller. 2010. The Mu- nich versatile and fast open-source audio feature ex- tractor. In Proc. ACM Multimedia (MM), pages 1459- 1462, Florence, Italy.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A user modelingbased performance analysis of a wizarded uncertaintyadaptive dialogue system corpus",
"authors": [
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Forbes-Riley and D. Litman. 2009. A user modeling- based performance analysis of a wizarded uncertainty- adaptive dialogue system corpus. In Proc. Inter- speech, Brighton, UK, September.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Benefits and challenges of real-time uncertainty detection and adaptation in a spoken dialogue computer tutor",
"authors": [
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2011,
"venue": "Speech Communication",
"volume": "53",
"issue": "9",
"pages": "1115--1136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Forbes-Riley and D. Litman. 2011a. Benefits and challenges of real-time uncertainty detection and adap- tation in a spoken dialogue computer tutor. Speech Communication, 53(9-10):1115-1136.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "When does disengagement correlate with learning in spoken dialog computer tutoring?",
"authors": [
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings 15th International Conference on Artificial Intelligence in Education (AIED)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Forbes-Riley and D. Litman. 2011b. When does disengagement correlate with learning in spoken dia- log computer tutoring? In Proceedings 15th Interna- tional Conference on Artificial Intelligence in Educa- tion (AIED), Auckland, NZ, June.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The relative impact of student affect on performance models in a spoken dialogue tutoring system",
"authors": [
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rotaru",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2008,
"venue": "User Modeling and User-Adapted Interaction",
"volume": "18",
"issue": "",
"pages": "11--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Forbes-Riley, M. Rotaru, and D. Litman. 2008. The relative impact of student affect on performance mod- els in a spoken dialogue tutoring system. User Model- ing and User-Adapted Interaction, 18(1-2):11-43.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Annotating disengagement for spoken dialogue computer tutoring",
"authors": [
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Friedberg",
"suffix": ""
}
],
"year": 2011,
"venue": "Affect and Learning Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Forbes-Riley, D. Litman, and H. Friedberg. 2011. An- notating disengagement for spoken dialogue computer tutoring. In Sidney D'Mello and Rafael Calvo, editors, Affect and Learning Technologies. Springer.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Assessing the validity of appraisal-based models of emotion",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Gratch",
"suffix": ""
},
{
"first": "Stacy",
"middle": [],
"last": "Marsella",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Stankovic",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACII",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Gratch, Stacy Marsella, Ning Wang, and Brooke Stankovic. 2009. Assessing the validity of appraisal-based models of emotion. In Proceedings of ACII, Amsterdam, Netherlands.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The SphinxII speech recognition system: An Overview. Computer, Speech and Language",
"authors": [
{
"first": "X",
"middle": [
"D"
],
"last": "Huang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Alleva",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Hon",
"suffix": ""
},
{
"first": "M",
"middle": [
"Y"
],
"last": "Hwang",
"suffix": ""
},
{
"first": "K",
"middle": [
"F"
],
"last": "Lee",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. D. Huang, F. Alleva, H. W. Hon, M. Y. Hwang, K. F. Lee, and R. Rosenfeld. 1993. The SphinxII speech recognition system: An Overview. Computer, Speech and Language.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Level of interest sensing in spoken dialog using multi-level fusion of acoustic and lexical evidence",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Jeon",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "INTERSPEECH'10",
"volume": "",
"issue": "",
"pages": "2802--2805",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. H. Jeon, R. Xia, and Y. Liu. 2010. Level of interest sensing in spoken dialog using multi-level fusion of acoustic and lexical evidence. In INTERSPEECH'10, pages 2802-2805.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Tools for authoring a dialogue agent that participates in learning studies",
"authors": [
{
"first": "P",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ringenberg",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "C",
"middle": [
"P"
],
"last": "Rose",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. Artificial Intelligence in Education (AIED)",
"volume": "",
"issue": "",
"pages": "43--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Jordan, B. Hall, M. Ringenberg, Y. Cui, and C.P. Rose. 2007. Tools for authoring a dialogue agent that par- ticipates in learning studies. In Proc. Artificial Intelli- gence in Education (AIED), pages 43-50.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multimodal affect recognition in learning environments",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kapoor",
"suffix": ""
},
{
"first": "R",
"middle": [
"W"
],
"last": "Picard",
"suffix": ""
}
],
"year": 2005,
"venue": "13th Annual ACM International Conference on Multimedia",
"volume": "",
"issue": "",
"pages": "677--682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Kapoor and R. W. Picard. 2005. Multimodal affect recognition in learning environments. In 13th Annual ACM International Conference on Multimedia, pages 677-682, Singapore.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "This computer responds to user frustration: Theory, design, and results",
"authors": [
{
"first": "J",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Picard",
"suffix": ""
}
],
"year": 2002,
"venue": "Interacting with Computers",
"volume": "14",
"issue": "",
"pages": "119--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Klein, Y. Moon, and R. Picard. 2002. This computer responds to user frustration: Theory, design, and re- sults. Interacting with Computers, 14:119-140.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Towards detecting emotions in spoken dialogs",
"authors": [
{
"first": "C",
"middle": [
"M"
],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "13",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. M. Lee and S. Narayanan. 2005. Towards detect- ing emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing, 13(2), March.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "What are you feeling? Investigating student affective states during expert human tutoring sessions",
"authors": [
{
"first": "B",
"middle": [],
"last": "Lehman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "D'mello",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2008,
"venue": "Intelligent Tutoring Systems Conference (ITS)",
"volume": "",
"issue": "",
"pages": "50--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Lehman, M. Matthews, S. D'Mello, and N. Per- son. 2008. What are you feeling? Investigating student affective states during expert human tutoring sessions. In Intelligent Tutoring Systems Conference (ITS), pages 50-59, Montreal, Canada, June.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Spoken tutorial dialogue and the feeling of another's knowing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Litman and K. Forbes-Riley. 2009. Spoken tutorial dialogue and the feeling of another's knowing. In Pro- ceedings 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Lon- don, UK, September.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Embedded empathy in continuous, interactive health assessment",
"authors": [
{
"first": "K",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "R",
"middle": [
"W"
],
"last": "Picard",
"suffix": ""
}
],
"year": 2005,
"venue": "CHI Workshop on HCI Challenges in Health Assessment",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Liu and R. W. Picard. 2005. Embedded empathy in continuous, interactive health assessment. In CHI Workshop on HCI Challenges in Health Assessment.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Attitude display in dialogue patterns",
"authors": [
{
"first": "A",
"middle": [],
"last": "Martalo",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Novielli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "De Rosis",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. AISB 2008 Symposium on Affective Language in Human and Machine",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Martalo, N. Novielli, and F. de Rosis. 2008. Attitude display in dialogue patterns. In Proc. AISB 2008 Sym- posium on Affective Language in Human and Machine, pages 1-8, Aberdeen, Scotland, April.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Modeling self-efficacy in intelligent tutoring systems: An inductive approach",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mcquiggan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Mott",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lester",
"suffix": ""
}
],
"year": 2008,
"venue": "User Modeling and User-Adapted Interaction (UMUAI)",
"volume": "18",
"issue": "1-2",
"pages": "81--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. McQuiggan, B. Mott, and J. Lester. 2008. Model- ing self-efficacy in intelligent tutoring systems: An in- ductive approach. User Modeling and User-Adapted Interaction (UMUAI), 18(1-2):81-123, February.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Accommodating explicit user expressions of uncertainty in voice search or something like that",
"authors": [
{
"first": "T",
"middle": [],
"last": "Paek",
"suffix": ""
},
{
"first": "Y.-C",
"middle": [],
"last": "Ju",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 9th Annual Conference of the International Speech Communication Association (INTERSPEECH 08)",
"volume": "",
"issue": "",
"pages": "1165--1168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Paek and Y.-C. Ju. 2008. Accommodating explicit user expressions of uncertainty in voice search or something like that. In Proceedings of the 9th Annual Conference of the International Speech Communica- tion Association (INTERSPEECH 08), pages 1165- 1168, Brisbane, Australia, September.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Recognizing uncertainty in speech",
"authors": [
{
"first": "H",
"middle": [],
"last": "Pon-Barry",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shieber",
"suffix": ""
}
],
"year": 2011,
"venue": "EURASIP Journal on Advances in Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Pon-Barry and S. Shieber. 2011. Recognizing uncer- tainty in speech. EURASIP Journal on Advances in Signal Processing.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Responding to student uncertainty in spoken tutorial dialogue systems",
"authors": [
{
"first": "H",
"middle": [],
"last": "Pon-Barry",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Schultz",
"suffix": ""
},
{
"first": "E",
"middle": [
"Owen"
],
"last": "Bratt",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Peters",
"suffix": ""
}
],
"year": 2006,
"venue": "International Journal of Artificial Intelligence in Education",
"volume": "16",
"issue": "",
"pages": "171--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Pon-Barry, K. Schultz, E. Owen Bratt, B. Clark, and S. Peters. 2006. Responding to student uncertainty in spoken tutorial dialogue systems. International Jour- nal of Artificial Intelligence in Education, 16:171-194.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Diagnosing and acting on student affect: the tutor's perspective. User Modeling and User-Adapted Interaction",
"authors": [
{
"first": "K",
"middle": [],
"last": "Porayska-Pomsta",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mavrikis",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pain",
"suffix": ""
}
],
"year": 2008,
"venue": "The Journal of Personalization Research",
"volume": "18",
"issue": "",
"pages": "125--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Porayska-Pomsta, M. Mavrikis, and H. Pain. 2008. Diagnosing and acting on student affect: the tutor's perspective. User Modeling and User-Adapted In- teraction: The Journal of Personalization Research, 18:125-173.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "The Empathetic Companion: A character-based interface that addresses users' affective states",
"authors": [
{
"first": "H",
"middle": [],
"last": "Prendinger",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2005,
"venue": "International Journal of Applied Artificial Intelligence",
"volume": "19",
"issue": "3",
"pages": "267--285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Prendinger and M. Ishizuka. 2005. The Empa- thetic Companion: A character-based interface that ad- dresses users' affective states. International Journal of Applied Artificial Intelligence, 19(3):267-285.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Discourse structure and performance analysis: Beyond the correlation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rotaru",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Rotaru and D. Litman. 2009. Discourse structure and performance analysis: Beyond the correlation. In Pro- ceedings 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Lon- don, UK.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Being bored? recognising natural interest by extensive audiovisual integration for real-life application",
"authors": [
{
"first": "B",
"middle": [],
"last": "Schuller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Eyben",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gast",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Hrnler",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wollmer",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rigoll",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hthker",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Konosu",
"suffix": ""
}
],
"year": 2009,
"venue": "Special Issue on Visual and Multimodal Analysis of Human Spontaneous Behavior",
"volume": "27",
"issue": "",
"pages": "1760--1774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Schuller, R. Muller, F. Eyben, J. Gast, B. Hrnler, M. Wollmer, G. Rigoll, A. Hthker, and H. Konosu. 2009a. Being bored? recognising natural interest by extensive audiovisual integration for real-life applica- tion. Image and Vision Computing Journal, Special Issue on Visual and Multimodal Analysis of Human Spontaneous Behavior, 27:1760-1774.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "The Interspeech 2009 Emotion Challenge",
"authors": [
{
"first": "B",
"middle": [],
"last": "Schuller",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Steidl",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Batliner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Schuller, S. Steidl, and A. Batliner. 2009b. The Interspeech 2009 Emotion Challenge. In Proceed- ings of the 10th Annual Conference of the Inter- national Speech Communication Association (Inter- speech), ISCA, Brighton, UK, September.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The Interspeech 2010 Paralinguistic Challenge",
"authors": [
{
"first": "B",
"middle": [],
"last": "Schuller",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Steidl",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Batliner",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Burkhardt",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Devillers",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 11th Annual Conference of the International Speech Communication Assocation (Interspeech)",
"volume": "",
"issue": "",
"pages": "2794--2797",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Schuller, S. Steidl, A. Batliner, F. Burkhardt, L. Dev- illers, C. Muller, and S. Narayanan. 2010. The Interspeech 2010 Paralinguistic Challenge. In Pro- ceedings of the 11th Annual Conference of the In- ternational Speech Communication Assocation (Inter- speech), pages 2794-2797, Chiba, Japan, September.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Voice signatures",
"authors": [
{
"first": "I",
"middle": [],
"last": "Shafran",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)",
"volume": "",
"issue": "",
"pages": "31--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Shafran, M. Riley, and M. Mohri. 2003. Voice signa- tures. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 31-36, St. Thomas, US Virgin Islands.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "An architecture for engagement in collaborative conversations between a robot and a human",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sidner",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Sidner and C. Lee. 2003. An architecture for engage- ment in collaborative conversations between a robot and a human. Technical Report TR2003-12, MERL.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke, N. Coccaro, R. Bates, P. Taylor, C. Van Ess- Dykema, K. Ries, E. Shriberg, D. Jurafsky, R. Mar- tin, and M. Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3).",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Responding to subtle, fleeting changes in the user's internal state",
"authors": [
{
"first": "W",
"middle": [],
"last": "Tsukahara",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ward",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the SIG-CHI on Human factors in computing systems",
"volume": "",
"issue": "",
"pages": "77--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Tsukahara and N. Ward. 2001. Responding to subtle, fleeting changes in the user's internal state. In Pro- ceedings of the SIG-CHI on Human factors in comput- ing systems, pages 77-84, Seattle, WA. ACM.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "The architecture of Why2-Atlas: A coach for qualitative physics essay writing",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vanlehn",
"suffix": ""
},
{
"first": "P",
"middle": [
"W"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Ros\u00e9",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bhembe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "B\u00f6ttner",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gaydos",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Makatchev",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Pappuswamy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ringenberg",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Roque",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Siler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. Intl. Conf. on Intelligent Tutoring Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. VanLehn, P. W. Jordan, C. Ros\u00e9, D. Bhembe, M. B\u00f6ttner, A. Gaydos, M. Makatchev, U. Pap- puswamy, M. Ringenberg, A. Roque, S. Siler, R. Sri- vastava, and R. Wilson. 2002. The architecture of Why2-Atlas: A coach for qualitative physics essay writing. In Proc. Intl. Conf. on Intelligent Tutoring Systems.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Detection of reallife emotions in dialogs recorded in a call center",
"authors": [
{
"first": "L",
"middle": [],
"last": "Vidrascu",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Devillers",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of INTERSPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Vidrascu and L. Devillers. 2005. Detection of real- life emotions in dialogs recorded in a call center. In Proceedings of INTERSPEECH, Lisbon, Portugal.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "DARPA communicator: Cross-system results for the 2001 evaluation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rudnicky",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bratt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Garofolo",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Pellom",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Potamianos",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Passonneau",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sanders",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Stallard",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Walker, A. Rudnicky, R. Prasad, J. Aberdeen, E. Bratt, J. Garofolo, H. Hastie, A. Le, B. Pellom, A. Potami- anos, R. Passonneau, S. Roukos, G. Sanders, S. Sen- eff, and D. Stallard. 2002. DARPA communicator: Cross-system results for the 2001 evaluation. In Proc. ICSLP.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Detecting levels of interest from spoken dialog with multistream prediction feedback and similarity based hierarchical fusion learning",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. 12th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIG-DIAL)",
"volume": "",
"issue": "",
"pages": "152--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Wang and J. Hirschberg. 2011. Detecting levels of interest from spoken dialog with multistream predic- tion feedback and similarity based hierarchical fusion learning. In Proc. 12th Annual Meeting of the Spe- cial Interest Group on Discourse and Dialogue (SIG- DIAL), pages 152-161, Portland, Oregon, June.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "The politeness effect: Pedagogical agents and learning outcomes",
"authors": [
{
"first": "W",
"middle": [
"L"
],
"last": "Wang",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mayer",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Rizzo",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2008,
"venue": "International Journal of Human-Computer Studies",
"volume": "66",
"issue": "2",
"pages": "98--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, W.L. Johnson, R. E. Mayer, P. Rizzo, E. Shaw, and H. Collins. 2008. The politeness effect: Peda- gogical agents and learning outcomes. International Journal of Human-Computer Studies, 66(2):98-112.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The speed of the elevator. Meters per second. (DISE, incorrect, UNC) . . . T 3 : What are the forces acting on the keys after the man releases them? U 3 : graaa-vi-tyyyyy <sings the answer> (DISE, correct, CER) Corpus Example Illustrating the User Turn Labels ((Dis)Engagement, (In)Correctness, (Un)Certainty)"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Acoustic-Prosodic Features temporal features: turn duration, prior pause duration, turn-internal silence fundamental frequency (f0) and energy (RMS) features: maximum, minimum, mean, std. deviation running totals and averages for all features \u2022 Lexical and Dialogue Features dialogue name and turn number question name and question depth ITSPOKE-recognized lexical items in turn ITSPOKE-labeled turn (in)correctness incorrect runs \u2022 User Identifier Features: gender and pretest score"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Features Used to Detect Disengagement (DISE) for each User Turn"
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Performance Model's Predictive Power Increases Significantly with Multiple Affective Features"
},
"TABREF0": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"3\">: 2008 ITSPOKE Corpus Description (N=7216)</td></tr><tr><td>Turn Label</td><td colspan=\"2\">Total Percent</td></tr><tr><td>Disengaged</td><td colspan=\"2\">1170 16.21%</td></tr><tr><td>Correct</td><td colspan=\"2\">5330 73.86%</td></tr><tr><td>Uncertain</td><td colspan=\"2\">1483 20.55%</td></tr><tr><td colspan=\"2\">Uncertain+Disengaged 373</td><td>5.17%</td></tr></table>",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "Results of 10-fold Cross-Validation Experiment with J48 Decision Tree Algorithm Detecting the Binary DISE Label in the 2008 ITSPOKE Corpus (N=7216 user turns)",
"content": "<table><tr><td>Algorithm</td><td colspan=\"5\">Accuracy UA Precision UA Recall UA Fmeasure CC</td><td>MLE</td></tr><tr><td>Decision Tree</td><td>83.1%</td><td>68.9%</td><td>68.7%</td><td>68.8%</td><td colspan=\"2\">0.52 0.25</td></tr><tr><td colspan=\"2\">Majority Label 83.8%</td><td>41.9%</td><td>50.0%</td><td>45.6%</td><td>-</td><td>0.27</td></tr></table>",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"text": "Correlations between Disengagement and both Satisfaction and Learning in ITSPOKE Corpus(N=72 users)",
"content": "<table><tr><td>Measure</td><td colspan=\"4\">Mean (SD) User Satisfaction Learning Gain</td></tr><tr><td/><td/><td>R</td><td>p</td><td>R</td><td>p</td></tr><tr><td>Total Manual DISE</td><td>12.3 (7.3)</td><td colspan=\"2\">-0.25 0.031</td><td>-0.35 0.002</td></tr><tr><td colspan=\"2\">Total Automatic DISE 12.6 (7.4)</td><td colspan=\"2\">-0.26 0.029</td><td>-0.31 0.009</td></tr></table>",
"num": null
}
}
}
}