{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:48:55.426655Z"
},
"title": "Using Bigrams to Identify Relationships Between Student Certainness States and Tutor Responses in a Spoken Dialogue Corpus",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Forbes-Riley",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh Learning Research and Development Center Pittsburgh PA",
"location": {
"postCode": "15260",
"country": "USA"
}
},
"email": ""
},
{
"first": "Diane",
"middle": [
"J"
],
"last": "Litman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh",
"location": {
"postCode": "15260",
"country": "USA"
}
},
"email": "litman@cs.pitt.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We use n-gram techniques to identify dependencies between student affective states of certainty and subsequent tutor dialogue acts, in an annotated corpus of human-human spoken tutoring dialogues. We first represent our dialogues as bigrams of annotated student and tutor turns. We next use \u03c7 2 analysis to identify dependent bigrams. Our results show dependencies between many student states and subsequent tutor dialogue acts. We then analyze the dependent bigrams and suggest ways that our current computer tutor can be enhanced to adapt its dialogue act generation based on these dependencies.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "We use n-gram techniques to identify dependencies between student affective states of certainty and subsequent tutor dialogue acts, in an annotated corpus of human-human spoken tutoring dialogues. We first represent our dialogues as bigrams of annotated student and tutor turns. We next use \u03c7 2 analysis to identify dependent bigrams. Our results show dependencies between many student states and subsequent tutor dialogue acts. We then analyze the dependent bigrams and suggest ways that our current computer tutor can be enhanced to adapt its dialogue act generation based on these dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There has been increasing interest in affective dialogue systems , motivated by the belief that in human-human dialogues, conversational participants seem to be (at least to some degree) detecting and responding to the emotional states of other participants. Affective dialogue research is being pursued in many application areas, including intelligent tutoring systems (Aist et al., 2002; Craig and Graesser, 2003; Bhatt et al., 2004; Johnson et al., 2004; Moore et al., 2004) . However, while it seems intuitively plausible that human tutors do in fact vary their responses based on the detection of student affect 1 , to date this belief has largely been theoretically rather than empirically motivated. We propose using bigram-based techniques as a datadriven method for identifying relationships between student affect and tutor responses in a corpus of human-human spoken tutoring dialogues.",
"cite_spans": [
{
"start": 370,
"end": 389,
"text": "(Aist et al., 2002;",
"ref_id": "BIBREF0"
},
{
"start": 390,
"end": 415,
"text": "Craig and Graesser, 2003;",
"ref_id": "BIBREF7"
},
{
"start": 416,
"end": 435,
"text": "Bhatt et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 436,
"end": 457,
"text": "Johnson et al., 2004;",
"ref_id": "BIBREF13"
},
{
"start": 458,
"end": 477,
"text": "Moore et al., 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To investigate affect and tutorial dialogue systems, we have built ITSPOKE (Intelligent Tutoring SPOKEn dialogue system) , which is speech-enabled version of the textbased Why2-Atlas conceptual physics tutoring system (VanLehn et al., 2002 ). 2 Our long term goal is to have this system detect and adapt to student affect, and to investigate whether such an affective version of our system improves learning and other measures of performance. To date we have collected corpora of both human and computer tutoring dialogues, and have demonstrated the feasibility of annotating and recognizing student emotions from lexical, acoustic-prosodic, and dialogue features automatically extractable from these corpora (Litman and Forbes-Riley, 2004a; Litman and Forbes-Riley, 2004b ; Forbes-Riley and .",
"cite_spans": [
{
"start": 218,
"end": 239,
"text": "(VanLehn et al., 2002",
"ref_id": "BIBREF26"
},
{
"start": 709,
"end": 741,
"text": "(Litman and Forbes-Riley, 2004a;",
"ref_id": "BIBREF15"
},
{
"start": 742,
"end": 772,
"text": "Litman and Forbes-Riley, 2004b",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, we assume viable emotion recognition and move on to the next step: providing an empirical basis for enhancing our computer tutor to adaptively respond to student affect. We first show how to apply n-gram techniques used in other areas of computational linguistics to mine human-human dialogue corpora for dependent bigrams of student states and tutor responses. We then use our bigram analysis to show: 1) statistically-significant dependencies exist between students' emotional states and our hu-man tutor's dialogue act responses, 2) the dependent bigrams suggest empirically-motivated adaptive strategies for implementation in our computer tutor. This method should generalize to any domain with dialogue corpora labeled for user state and system response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our data consists of 128 transcribed spoken dialogue tutoring sessions, between 14 different university students and one human tutor; each student participated in up to 10 sessions. The corpus was collected as part of an evaluation comparing typed and spoken human and computer dialogue tutoring (where the human tutor performed the same task as ITSPOKE) . The tutor and student spoke through head-mounted microphones, and were in the same room but separated by a partition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Spoken Tutoring Dialogue Corpus",
"sec_num": "2.1"
},
{
"text": "Each session begins after a student types an essay answering a qualitative physics problem. The tutor analyzes the essay, then engages the student in dialogue to correct misconceptions and elicit more complete explanations. The student then revises the essay, thereby ending the session or causing another round of dialogue/essay revision. On average, these sessions last 18.1 minutes and contain 46.5 student and 43.0 tutor turns. Annotated (see Sections 2.2 -2.3) excerpts 3 from our corpus are shown in Figures 1-6 (punctuation added for clarity).",
"cite_spans": [],
"ref_spans": [
{
"start": 506,
"end": 513,
"text": "Figures",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Spoken Tutoring Dialogue Corpus",
"sec_num": "2.1"
},
{
"text": "Prior to the present study, each student turn in our corpus had been manually annotated for \"certainness\" (Liscombe et al., 2005) 4 , as part of a larger 3 All annotations were performed from both audio and transcription within a speech processing tool.",
"cite_spans": [
{
"start": 106,
"end": 129,
"text": "(Liscombe et al., 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Student Certainty",
"sec_num": "2.2"
},
{
"text": "4 To date, only one annotator has labeled \"certainness\". However, 10 dialogues were previously annotated by two other labelers using a more labor-intensive scheme, tagging certainness as well as confusion, boredom, frustration, etc. (Litman and Forbes-Riley, 2004a) . Agreement across all three annotators for (un-)certain turns has an average 0.5 Kappa. We view this as a lower bound since the annotation tasks being compared are non-identical. This agreement is however similar to emotion annotation in other domains, e.g. 0.47 Kappa in (Ang et al., 2002) , 0.45 and 0.48 Kappas in (Narayanan, 2002) , and Kappas ranging between 0.32 and 0.42 in (Shafran et al., 2003) .",
"cite_spans": [
{
"start": 233,
"end": 265,
"text": "(Litman and Forbes-Riley, 2004a)",
"ref_id": "BIBREF15"
},
{
"start": 539,
"end": 557,
"text": "(Ang et al., 2002)",
"ref_id": "BIBREF2"
},
{
"start": 584,
"end": 601,
"text": "(Narayanan, 2002)",
"ref_id": "BIBREF20"
},
{
"start": 648,
"end": 670,
"text": "(Shafran et al., 2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Student Certainty",
"sec_num": "2.2"
},
{
"text": "PHYSICS PROBLEM: Suppose a man is running in a straight line at constant speed. He throws a pumpkin straight up. Where will it land? Explain. . . . dialogue excerpt at 16.6 minutes into session. . . STUDENT68: So the-when you throw it up the acceleration will stay the same? (UNCERTAIN) TUTOR69: Acceleration uh will always be the same because there is-that is being caused by force of gravity which is not changing. [RST, EXP] STUDENT70: mm-k (NEUTRAL) TUTOR71: Acceleration is-it is in-what is the direction uh of this acceleration-acceleration due to gravity? [SAQ] STUDENT72: It's-the direction-it's downward. (CERTAIN) TUTOR73: Yes, it's vertically down. [POS, RST] STUDENT74: mm-k. (NEUTRAL) . . . 53 seconds pass while student types. . . STUDENT75: So in relation to the velocity, when it's being thrown up the velocity increases-well no, it decreases and when it goes down it increases. (MIXED) Figure 1 : Annotated Corpus Excerpt #1 project 5 annotating student states that are of interest in recent tutorial dialogue research (Bhatt et al., 2004; Moore et al., 2004) . Our \"Certainness\" annotation uses one of four labels, defined in the manual as follows:",
"cite_spans": [
{
"start": 417,
"end": 427,
"text": "[RST, EXP]",
"ref_id": null
},
{
"start": 660,
"end": 670,
"text": "[POS, RST]",
"ref_id": null
},
{
"start": 1036,
"end": 1056,
"text": "(Bhatt et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 1057,
"end": 1076,
"text": "Moore et al., 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 903,
"end": 911,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotating Student Certainty",
"sec_num": "2.2"
},
{
"text": "\u2022 uncertain: Use this label only when you feel the student is clearly uncertain about what they are saying. See Figures 1 (STUDENT 68 ) and 2 (STUDENT 17 , STUDENT 19 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Student Certainty",
"sec_num": "2.2"
},
{
"text": "\u2022 certain: Use this label only when you feel the student is clearly certain about what they are saying. See Figures 1 (STUDENT 72 ) and 6 (STUDENT 99 , STUDENT 101 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Student Certainty",
"sec_num": "2.2"
},
{
"text": "\u2022 mixed: Use this label if you feel that the speaker conveyed some mixture of uncertain and certain utterances within the same turn. See Figure 1 (STUDENT 75 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 145,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotating Student Certainty",
"sec_num": "2.2"
},
{
"text": "\u2022 neutral: Use this label when you feel the speaker conveyed no sense of certainness. In other words, the speaker seemed neither clearly uncertain nor clearly certain (nor clearly mixed). This is the default case. See Figure 1 (STUDENT 70 , STUDENT 74 ). ",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 226,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotating Student Certainty",
"sec_num": "2.2"
},
{
"text": "Also prior to the present study, each tutor turn in our corpus had been manually annotated for tutoringspecific dialogue acts 6 as part of a project comparing dialogue behavior in human versus computer tutoring (Forbes-Riley et al., 2005) . Our tagset of \"Tutor Dialogue Acts\", shown in Figures 3 -5 below, was developed based on pilot studies using similar tagsets applied in other tutorial dialogue projects 7 (Graesser and Person, 1994; Graesser et al., 1995; Johnson et al., 2004) . As shown in Figures 3 -5, we distinguish three main types of Tutor Acts. The \"Tutor Feedback Acts\" in Figure 3 indicate the \"correctness\" of the student's prior turn.",
"cite_spans": [
{
"start": 211,
"end": 238,
"text": "(Forbes-Riley et al., 2005)",
"ref_id": "BIBREF9"
},
{
"start": 412,
"end": 439,
"text": "(Graesser and Person, 1994;",
"ref_id": "BIBREF10"
},
{
"start": 440,
"end": 462,
"text": "Graesser et al., 1995;",
"ref_id": "BIBREF11"
},
{
"start": 463,
"end": 484,
"text": "Johnson et al., 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 589,
"end": 597,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Annotating Tutor Dialogue Acts",
"sec_num": "2.3"
},
{
"text": "The \"Tutor Question Acts\" in Figure 4 label the type of question that the tutor asks, in terms of their content and the expectation that the content presupposes with respect to the type of student answer required.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Annotating Tutor Dialogue Acts",
"sec_num": "2.3"
},
{
"text": "The \"Tutor State Acts\" in Figure 5 summarize or clarify the current state of the student's argument, 6 While one annotator labeled the entire corpus, a second annotator labeled 776 of these turns, yielding a 0.67 Kappa.",
"cite_spans": [
{
"start": 101,
"end": 102,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Annotating Tutor Dialogue Acts",
"sec_num": "2.3"
},
{
"text": "7 Tutoring dialogues have a number of tutoring-specific dialogue acts (e.g., hinting). Most researchers have thus used tutoring-specific rather than more domain-independent schemes such as DAMSL (Core and Allen, 1997) , although (Rickel et al., 2001 ) present a first step towards integrating tutoring-specific acts into a more general collaborative discourse framework. Our Feedback and Question Acts have primarily backward-and forward-looking functions respectively, in DAMSL.",
"cite_spans": [
{
"start": 195,
"end": 217,
"text": "(Core and Allen, 1997)",
"ref_id": "BIBREF6"
},
{
"start": 229,
"end": 249,
"text": "(Rickel et al., 2001",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Tutor Dialogue Acts",
"sec_num": "2.3"
},
{
"text": "\u2022 Positive Feedback (POS): overt positive response to prior student turn. See Figures 1 (TUTOR 73 ), 2 (TUTOR 18 ) and 6 (TUTOR 98 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Tutor Dialogue Acts",
"sec_num": "2.3"
},
{
"text": "\u2022 Negative Feedback (NEG): overt negative response to prior student turn. See Figure 6 (TUTOR 100 ). ",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Annotating Tutor Dialogue Acts",
"sec_num": "2.3"
},
{
"text": "We hypothesize that there are dependencies between student emotional states (as represented by the \"Certainness\" labels) and subsequent tutor responses (as represented by \"Tutor Dialogue Act\" labels), and that analyzing these dependencies can suggest ways of incorporating techniques for adapting to student emotions into our computer tutor. We test these hypotheses by extracting a bigram representation of student and tutor turns from our annotated dialogues, computing the dependencies of the bigram permutations using Chi Square analyses, and drawing conclusions from the significant results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "3"
},
{
"text": "We view the sequence: \"Student Turn, Tutor Turn\" as our bigram unit, whose individual elements con-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
{
"text": "\u2022 Restatement (RST): repetitions and rewordings of prior student statement. See Figures 1 (TUTOR 69 , TUTOR 73 ) and 6 (TUTOR 100 , TUTOR 102 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
{
"text": "\u2022 Recap (RCP): restating student's overall argument or earlier-established points. See Figure 6 (TUTOR 98 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
{
"text": "\u2022 Request/Directive (RD): directions summarizing expectations about student's overall argument. See Figure 2 (TUTOR 16 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
{
"text": "\u2022 Bottom Out (BOT): complete answer supplied after student answer is incorrect, incomplete or unclear. See Figure 2 (TUTOR 20 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
{
"text": "\u2022 Hint (HINT): partial answer supplied after student answer is incorrect, incomplete or unclear. See Figure 6 (TUTOR 100 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 109,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
{
"text": "\u2022 Expansion (EXP): novel details about student answer supplied without first being queried to student. See Figure 1 (TUTOR 69 ). Figure 6 there are two such units: STUDENT 99 -TUTOR 100 and STUDENT 101 -TUTOR 102 . Because our goal in this paper is to analyze tutor responses, we extract all and only these units from our dialogues for analysis. In other words, we do not extract bigrams of the form: \"Tutor Turn, Student Turn\", although we will do so in a separate future study when we analyze student responses to tutor actions. This decision is akin to disregarding word-level bigrams that cross sentence boundaries. Here, the sequence: \"Student Turn, Tutor Turn\" is our \"dialogue sentence\", and we are interested in all possible permutations of our student and tutor turn annotations in our data that combine to produce these dialogue sentences. After extracting the annotated \"Student Turn, Tutor Turn\" bigrams, we sought to investigate the dependency between student emotional states and tutor responses. Although each of our student turns was labeled with a single \"Certainty\" tag, frequently our tutor turns were labeled with multiple \"Tutor Act\" PHYSICS PROBLEM: Two closed containers look the same, but one is packed with lead and the other with a few feathers. How could you determine which had more mass if you and the containers were floating in a weightless condition in outer space? Explain. . . . dialogue excerpt at 16.5 minutes into session. . . TUTOR98: Yes, we are all learning. Ok, so uh now uh you apply the same push for the same amount of time for on both the containers. Then what would you compare to distinguish between them? [POS, RCP, SAQ] STUDENT99: I would be comparing their rate of velocity. (CERTAIN) TUTOR100: Not rate. You will be comparing their velocity, you see, rate will imply that something is changing which there is no change, velocity is constant. So you will surely compare their velocities-which one will be faster? [NEG, HINT, Figures 1-6 . Because there are 11 \"Tutor Act\" tags, and no limits on tag combinations per turn, it is not surprising that in our 4921 extracted bigrams, we found 478 unique tag combinations in the tutor turns, 294 of which occurred only once. Treating each tagged tutor turn as a unique \"word\" would thus yield a data sparsity problem for our analysis of bigram dependencies. Due to this data sparsity problem, a question we can ask instead, is: is the tutor's inclusion of a particular Tutor Act in a tutor turn dependent on the student's certainness in the prior turn?",
"cite_spans": [
{
"start": 1963,
"end": 1974,
"text": "[NEG, HINT,",
"ref_id": null
}
],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 1",
"ref_id": null
},
{
"start": 129,
"end": 137,
"text": "Figure 6",
"ref_id": "FIGREF5"
},
{
"start": 1975,
"end": 1986,
"text": "Figures 1-6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
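The turn-level bigram extraction described above amounts to a single pass over each dialogue that keeps only Student-to-Tutor adjacencies. The sketch below is illustrative only: the turn encoding, function name, and data layout are assumptions rather than the authors' implementation, and it is exercised on the two units visible in the Figure 6 excerpt.

```python
# Minimal sketch of "Student Turn, Tutor Turn" bigram extraction.
# A dialogue is assumed to be a list of (speaker, label) turns, where a student
# turn carries its Certainness tag and a tutor turn its set of Tutor Act tags.

def extract_bigrams(dialogue):
    """Return (student_certainness, tutor_act_tags) pairs for adjacent turns."""
    bigrams = []
    for (spk1, lab1), (spk2, lab2) in zip(dialogue, dialogue[1:]):
        # Keep only Student -> Tutor adjacencies; Tutor -> Student pairs are skipped.
        if spk1 == "student" and spk2 == "tutor":
            bigrams.append((lab1, lab2))
    return bigrams

# The Figure 6 excerpt, re-encoded by hand:
figure6 = [
    ("tutor",   {"POS", "RCP", "SAQ"}),          # TUTOR98
    ("student", "certain"),                      # STUDENT99
    ("tutor",   {"NEG", "HINT", "RST", "SAQ"}),  # TUTOR100
    ("student", "certain"),                      # STUDENT101
    ("tutor",   {"RST", "DAQ"}),                 # TUTOR102
]
print(extract_bigrams(figure6))
# Two units, as in the text: STUDENT 99 - TUTOR 100 and STUDENT 101 - TUTOR 102.
```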
{
"text": "That is, we decided to approach the dependency analysis by considering the presence or absence of each Tutor Act tag separately. In other words, we performed 11 different analyses, one for each Tutor Act tag T, each time asking the question: is there a dependency between student emotional state and a tutor response containing T? More formally, for each analysis, we took our set of \"Student Turn, Tutor Turn\" bigrams, and replaced all annotated tutor turns containing T with only T, and all not containing T with not T. The result was 11 different sets of 4921 \"Student Turn, Tutor Turn\" bigrams. As an example, we show below how the tutor turns in Figure 6 are converted within the \"POS\" analysis:",
"cite_spans": [],
"ref_spans": [
{
"start": 651,
"end": 660,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
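The T/notT conversion just described is a one-line mapping over tutor tag sets. A minimal sketch, applied to the Figure 6 tutor turns (function and variable names are illustrative, not the authors' code):

```python
def binarize(tutor_tags, act):
    """Collapse a tutor turn's tag set to act vs. 'not' + act."""
    return act if act in tutor_tags else "not" + act

figure6_tutor_turns = {
    "TUTOR98":  {"POS", "RCP", "SAQ"},
    "TUTOR100": {"NEG", "HINT", "RST", "SAQ"},
    "TUTOR102": {"RST", "DAQ"},
}
for turn, tags in figure6_tutor_turns.items():
    print(turn, binarize(tags, "POS"))
# TUTOR98 POS / TUTOR100 notPOS / TUTOR102 notPOS, matching the conversion above.
# Repeating this for each of the 11 Tutor Act tags yields the 11 bigram sets.
```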
{
"text": "TUTOR 98 : [POS, RCP, SAQ] \u2212\u2192 [POS] TUTOR 100 : [NEG, HINT, RST, SAQ] \u2212\u2192 [not- POS] TUTOR 102 : [RST, DAQ] \u2212\u2192 [notPOS]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
{
"text": "The benefit of these multiple analyses is that we can ask specific questions directly motivated by what our computer tutor can do. For example, in the POS analysis, we ask: should student emotional state impact whether the computer tutor generates positive feedback? Currently, there is no emotion adaptation by our computer tutor -it generates positive feedback independently of student emotional state, and independently of any other Tutor Acts that it generates. The same is true for each of the Tutor Acts generated by our computer tutor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Bigrams",
"sec_num": "3.1"
},
{
"text": "We analyzed bigram dependency using the Chi Square (\u03c7 2 ) test. 8 In this section we illustrate our analysis method, using the set of \"Certainness\" -\"POS/notPOS\" bigrams. In Section 3.3 we discuss the results of performing this same analysis on all 11 sets of \"Student Certainness -Tutor Act\" bigrams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chi Square (\u03c7 2 ) Analyses",
"sec_num": "3.2"
},
{
"text": "\u03c7 2 tests the statistical significance of the relationship between two variables in a dataset. Our observed \"Certainness\" -\"POS\" bigram permutations are reported as a bivariate table in Table 1 . For example, we observed 252 neutral -POS bigrams, and 2517 neutral -notPOS bigrams. Row totals show the number of bigrams containing the first bigram \"word\" (e.g., 2769 bigrams contained \"neutral\" followed by \"POS\" or \"notPOS\"). Column totals show the number of bigrams containing the second bigram \"word\" (e.g., 781 bigrams containing \"POS\" as the second token). Table 1 : Observed Student \"Certainness\" -Tutor \"Positive Feedback\" Bigrams \u03c7 2 compares these observed counts with the counts that would be expected if there were no relationship at all between the two variables in a larger 8 A good tutorial for using the \u03c7 2 test is found here: www.georgetown.edu/facultyballc/webtools/web chi tut.html population (the null hypothesis). For each cell c in Table 1 , the expected count is computed as: (c's row total * c's column total)/(total bigrams). Expected counts for Table 2 : Expected Student \"Certainness\" -Tutor \"Positive Feedback\" Bigrams A \u03c7 2 value assesses whether the differences between observed and expected counts are large enough to conclude that a statistically significant relationship exists between the two variables. The \u03c7 2 value for the table is computed by summing the \u03c7 2 value for each cell, which is computed as follows: (observed value -expected value) 2 /expected value. The total \u03c7 2 value for Table 1 is 225.92. \u03c7 2 would be 0 if observed and expected counts were equal. However some variation is required (the \"critical \u03c7 2 value\"), to account for a given table's degree of freedom and one's chosen probability of exceeding any sampling error. For Table 1 , which has 3 degrees of freedom, the critical \u03c7 2 value at a 0.001 probability of error is 16.27. 9 Our \u03c7 2 value of 225.92 greatly exceeds this critical value. We thus conclude that there is a statistically significant dependency between Certainness and Positive Feedback.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 193,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 561,
"end": 568,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 953,
"end": 960,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1070,
"end": 1077,
"text": "Table 2",
"ref_id": null
},
{
"start": 1523,
"end": 1530,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1779,
"end": 1786,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Chi Square (\u03c7 2 ) Analyses",
"sec_num": "3.2"
},
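The expected-count and per-cell χ2 formulas above translate directly into a few lines of code. The sketch below is self-contained and illustrative (it is not the authors' implementation); since the full Table 1 counts are not reproduced in the text, it is demonstrated only on the neutral-POS expected count, which follows from the reported totals (2769 neutral bigrams, 781 POS bigrams, 4921 bigrams overall).

```python
def chi_square(observed):
    """Chi-square statistic for a table of observed counts (list of rows).

    Expected cell count = row total * column total / grand total;
    each cell contributes (observed - expected) ** 2 / expected.
    """
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    return sum((obs - row_totals[i] * col_totals[j] / grand) ** 2
               / (row_totals[i] * col_totals[j] / grand)
               for i, row in enumerate(observed)
               for j, obs in enumerate(row))

# Expected count for the neutral-POS cell of Table 1, from the reported totals:
print(2769 * 781 / 4921)   # ~439.5 expected, versus 252 observed
```

For reference, scipy.stats.chi2_contingency computes the same statistic; for the 2 x 2 tables discussed next, its default Yates correction has to be disabled (correction=False) to match the values reported here.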
{
"text": "We can look more deeply into this overall dependency by calculating the statistical significance of the dependencies between each specific \"Certainness\" tag and the Positive Feedback tag. The freely available Ngram Statistics Package (NSP) (Banerjee and Pedersen, 2003) computes these \u03c7 2 values automatically when we input each set of our \"Student Certainness -Tutor Act\" bigrams. Figure 7 shows the resulting NSP output for the POS/notPOS analysis. Each row shows: 1) the bigram, 2) its rank (according to highest \u03c7 2 value), 3) its \u03c7 2 value, 4) the number of occurrences of this bigram, 5) the number of times the first token in this bigram occurs first in any bigram, 6) the number of times the second token in this bigram occurs last in any bigram. Figure 7 can alternatively be viewed as a 2 X 2 table of observed counts. For example, the table for the neutral -POS bigram has a \"neutral\" row (identical to that in Table 1 ) and a \"non-neutral\" row (computed by summing all the non-neutral rows in Table 1 ). This table has 1 degree of freedom; the critical \u03c7 2 value at p < 0.001 is 10.83. As shown, all of the bigrams in Figure 7 have \u03c7 2 values exceeding this critical value. We thus conclude that there are statistically significant dependencies between each of the Certainness tags and Positive Feedback. 10 In Section 3.3 we will see cases where there is an overall significant dependency, but significant dependencies only for a subset of the four Certainness tags.",
"cite_spans": [
{
"start": 240,
"end": 269,
"text": "(Banerjee and Pedersen, 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 382,
"end": 390,
"text": "Figure 7",
"ref_id": null
},
{
"start": 755,
"end": 763,
"text": "Figure 7",
"ref_id": null
},
{
"start": 922,
"end": 929,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1005,
"end": 1012,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1130,
"end": 1138,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS notPOS",
"sec_num": null
},
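As a concrete check of the 2 x 2 view just described, the neutral row of Table 1 and a collapsed non-neutral row can be derived from the counts reported in Section 3.2 (252 and 2517 neutral cells; 781 POS and 4921 total bigrams), with the non-neutral cells obtained by subtraction. This is an illustrative recomputation, not the NSP tool itself:

```python
# 2 x 2 observed table for the neutral - POS bigram (POS column first).
observed = [
    [252, 2517],                        # neutral row, as reported
    [781 - 252, (4921 - 781) - 2517],   # non-neutral row by subtraction: [529, 1623]
]

rows = [sum(r) for r in observed]
cols = [sum(c) for c in zip(*observed)]
grand = sum(rows)
chi2 = sum((observed[i][j] - rows[i] * cols[j] / grand) ** 2
           / (rows[i] * cols[j] / grand)
           for i in range(2) for j in range(2))
print(round(chi2, 2))
# ~217.35: the neutral - POS value reported in Section 3.3, well above the
# critical value of 10.83 for 1 degree of freedom at p < 0.001.
```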
{
"text": "Finally, we can compare the difference between observed and expected values for the statistically significant dependent bigrams identified using NSP. For example, by comparing Tables 1 and 2, we see that the human tutor responds with positive feedback more than expected after emotional turns, and less than expected after neutral turns. This suggests that our computer tutoring system could adapt to nonneutral emotional states by generating more positive feedback (independently of whether the Certainness value is certain, uncertain, or mixed).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS notPOS",
"sec_num": null
},
{
"text": "In essence, for each of the 11 Tutor Acts described in Section 2.3, the first part of our \u03c7 2 analysis determines whether or not there is an overall dependency between Student Certainness and that specific Tutor Act. The second part then determines how this dependency is distributed across individual Student Certainness states. In this section, we present and discuss our results of the \u03c7 2 analysis across all 11 sets of our \"Certainness -Tutor Act\" bigrams. Note that the tables present only our best results, where the \u03c7 2 value exceeded the critical value at p < 0.001 (16.27 and 10.83 for 3 and 1 degrees of freedom, respectively). If a bigram's \u03c7 2 value did not exceed this critical value, it is not shown. Table 3 presents our best results across our 2 sets of \"Certainness -Feedback Act\" bigrams. Each set's results are separated by a double line. The last column shows the \u03c7 2 value for each bigram. The first row for each set shows the \u03c7 2 value for the overall dependency between Certainness and Feedback (e.g. 225.92 for CERT -POS). The remaining rows per set are ranked according to the \u03c7 2 values for the specific dependencies between each \"Certainness\" tag and the \"Feedback\" tag (e.g. 217.35 for neutral -POS). 11 Note that, while all bigrams shown are statistically significant at p < 0.001, as the \u03c7 2 values increase above the critical value, the results become more significant. Each row also shows the observed (Obs) and expected (Exp) counts of each bigram. Table 3 : Observed, Expected, and \u03c7 2 for Dependent \"Certainness\" -\"Feedback\" Bigrams (p<.001)",
"cite_spans": [],
"ref_spans": [
{
"start": 716,
"end": 723,
"text": "Table 3",
"ref_id": null
},
{
"start": 1483,
"end": 1490,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.3"
},
{
"text": "As shown, there are overall dependencies between Student Certainness and both Positive and Negative Tutor Feedback. There are also dependencies between every specific Certainness tag and both Positive and Negative tutor Feedback. Moreover, in both cases we see that the tutor responds with more feedback than expected after all emotional student turns (non-neutral), and with less feedback than expected after neutral student turns. This suggests that an increased use of feedback is a viable adaptation to non-neutral emotional states. Of course, the type of feedback adaptation (POS or NEG) must also depend on whether the student answer is correct, as will be discussed further in Section 5. Table 4 presents our best results across our 3 sets of \"Certainness -Question Act\" bigrams, using the same format as Table 3 . As shown, there is an overall dependency only between Student Certainness and Tutor Short Answer Questions that is wholly explained by the dependency of the neutral -SAQ bigram, where the tutor responds to student neutral turns with slightly fewer Short Answer Questions than expected. Both of these \u03c7 2 values barely exceed the critical value however, and they are much smaller than the \u03c7 2 values in Table 3 . Moreover, there are no dependencies at all between Student Certainness and Tutor Long or Deep Answer Questions (LAQ/DAQ). 12 These results suggest that \"Question Acts\" aren't highly relevant for adaptation to Certainness; we hypothesis that they will play a more significant role when we analyze student emotional responses to tutor actions. Table 4 : Observed, Expected, and \u03c7 2 for Dependent \"Certainness\" -\"Question Act\" Bigrams (p<.001) Table 5 presents our best results across our 6 sets of \"Certainness -State Act\" bigrams. There is an overall dependency between Student Certainness and Tutor Restatements, explained by the dependencies of the certain -RST and neutral -RST bigrams. There is also an overall dependency between Student Certainness and Tutor Recaps, explained by the dependent neutral -RCP bigram. However, the \u03c7 2 values for the dependent RST bigrams are much larger than those for the dependent RCP bigrams. 13 Moreover, there are no dependencies (even at p<.05) between Student Certainness and Tutor Request Directives (RD). Although these three Tutor State Acts all serve a summary purpose with respect to the student's argument, RCP and RD are defined as more general acts whose use is based on the overall discussion so far. Only RST addresses the immediately prior student turn; thus it's not surprising that its use shows a stronger dependency on the prior student certainness. The tutor's increased use of RST after certain turns suggests a possible adaptation strategy of increasing or maintaining student certainty by repeating information that the student has already shown certainty about.",
"cite_spans": [],
"ref_spans": [
{
"start": 695,
"end": 702,
"text": "Table 4",
"ref_id": null
},
{
"start": 812,
"end": 819,
"text": "Table 3",
"ref_id": null
},
{
"start": 1224,
"end": 1231,
"text": "Table 3",
"ref_id": null
},
{
"start": 1576,
"end": 1583,
"text": "Table 4",
"ref_id": null
},
{
"start": 1675,
"end": 1682,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bigram",
"sec_num": null
},
{
"text": "The remaining 3 bigram sets contain Tutor Acts that clarify the prior student answer. First, there is an overall dependency between Student Certainness and Tutor Bottom Outs, which is explained by the specific dependencies of the neutral -BOT and uncertain -BOT bigrams. After uncertain turns, the tutor \"Bottoms Out\" (supplies the complete answer) more than expected, and after neutral turns, less than expected. This suggests a straightforward adaptive technique for student uncertainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram",
"sec_num": null
},
{
"text": "There is also an overall dependency between Student Certainness and Tutor Hints, which is explained by the dependencies of the mixed -HINT and neutral -HINT bigrams. After mixed turns, the tutor \"Hints\" (supplies a partial answer) more than expected, and after neutral turns, less than expected. This suggests an adaptive technique similar to the BOT case, except the tutor gives less of the answer because there is less uncertainty (i.e. there is more certainty because the student turn is mixed).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram",
"sec_num": null
},
{
"text": "Finally, there is an overall dependency between Student Certainness and Tutor Expansions, which is explained by the dependencies of the neutral -EXP and uncertain -EXP bigrams. In this case, however, the tutor responds with an \"Expansion\" (supplying novel details) more than expected after neutral turns, and less than expected after uncertain turns. This suggests another adaptive technique to uncertainty, whereby the tutor avoids overwhelming the uncertain student with unexpected details. 14 RCP is significant at a lower critical value (p<.01).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram",
"sec_num": null
},
{
"text": "14 Of the BOT, HINT, EXP bigrams not shown, only the \"certain\" bigrams are significant at a lower critical value (p<.05). Table 5 : Observed, Expected, and \u03c7 2 for Dependent \"Certainness\" -\"State Act\" Bigrams (p<.001)",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bigram",
"sec_num": null
},
{
"text": "While there have been other approaches to using dialogue n-grams (e.g. (Stolcke et al., 2000; Reithinger et al., 1996) ), such n-grams have typically consisted of only dialogue acts, although (Higashinaka et al., 2003) propose computing bigrams of dialogue state and following dialogue act. Moreover, these methods have been used to compute n-gram probabilities for implementing statistical components. We propose a new use of these methods: to mine corpora for only the significant n-grams, for use in designing strategies for adapting to student affect in a computational system. Previous Ngram Statistics Package (NSP) applications have focused on extracting significant word n-grams (Banerjee and Pedersen, 2003) , while our \"dialogue\" bigrams are constructed from multiple turn-level annotations of student certainness and tutor dialogue acts. Although (Shah et al., 2002) have mined a human tutoring corpus for significant \"dialogue\" bigrams to aid in the design of adaptive dialogue strategies, their goal is to generate appropriate tutor responses to student initiative. Their bigrams consist of manually labeled student initiative and tutor response in terms of mutually exclusive categories of communicative goals. In the area of affective tutorial dialogue, (Bhatt et al., 2004) have coded (typed) tutoring dialogues for student hedging and affect. Their focus, however, has been on identifying differences in human versus computer tutoring, while our focus has been on analyzing relationships between student states and tutor responses. Conversely, (Johnson et al., 2004) have coded their tutoring dialogue corpora with tutoring-specific dialogue acts, but have not annotated student affect, and to date have performed only qualitative analyses. Finally, while our research focuses on dialogue acts, others are studying affect and different linguistic phenomena such as lexical choice (Moore et al., 2004) .",
"cite_spans": [
{
"start": 71,
"end": 93,
"text": "(Stolcke et al., 2000;",
"ref_id": "BIBREF25"
},
{
"start": 94,
"end": 118,
"text": "Reithinger et al., 1996)",
"ref_id": "BIBREF21"
},
{
"start": 192,
"end": 218,
"text": "(Higashinaka et al., 2003)",
"ref_id": "BIBREF12"
},
{
"start": 687,
"end": 716,
"text": "(Banerjee and Pedersen, 2003)",
"ref_id": "BIBREF3"
},
{
"start": 858,
"end": 877,
"text": "(Shah et al., 2002)",
"ref_id": "BIBREF24"
},
{
"start": 1269,
"end": 1289,
"text": "(Bhatt et al., 2004)",
"ref_id": "BIBREF4"
},
{
"start": 1561,
"end": 1583,
"text": "(Johnson et al., 2004)",
"ref_id": "BIBREF13"
},
{
"start": 1897,
"end": 1917,
"text": "(Moore et al., 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "This paper proposes an empirically-motivated approach to developing techniques for adapting to student affect in our dialogue tutorial system. Furthermore, our method of extracting and analyzing dialogue bigrams to develop adaptation techniques generalizes to other domains that seek to use user affective states to trigger system adaptation. We first extract \"dialogue bigrams\" from a corpus of humanhuman spoken tutoring dialogues annotated for student Certainness and tutor Dialogue Acts. We then use \u03c7 2 analysis to determine which bigrams are dependent, such that there is a relationship between the use of a Tutor Act and prior Student Certainness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Current Directions",
"sec_num": "5"
},
{
"text": "Our results indicate specific human tutor emotionadaption methods that we can implement in our computer system. Specifically, we find that there are many dependencies between student states of certainty and subsequent tutor dialogue acts, which suggest ways that our computer tutor can be enhanced to adapt dialogue act generation to student affective states. In particular, our results suggest that \"Bottoming Out\" and avoiding \"Expansions\" are viable adaptations to student uncertainty, whereas \"Hinting\" is a viable adaptation to a mixed student state, and adapting by \"Restatements\" may help maintain a state of student certainty. Positive and Negative Feedback occur significantly more than expected after all the non-neutral student states, and thus seem to be a generally \"human\" way of responding to student emotions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Current Directions",
"sec_num": "5"
},
{
"text": "This approach for developing adaptive strategies is currently based on one human tutor's responses across dialogues with multiple students. Clearly, different tutors have different teaching styles; moreover, it is an open question in the tutoring community as to whether, and why, one tutor is better than any other with respect to increasing student learning. Analyzing a different tutor's responses may yield different dependencies between student emotions and tutor responses. Analyzing the responses of multiple tutors would yield a broader range of responses from which common responses could be extracted and analyzed. However, the common adaptations of multiple tutors are not necessarily better for improving student learning than the responses of a human tutor who responds differently. Moreover, such a \"mix and match\" approach would not necessarily yield a consistent generalization about adaptive strategies for student emotion. We have already demonstrated that students learned a significant amount with our human tutor 15 . Thus, although it is an open question as to why these students learn, analyzing our tutor's responses across multiple students enables a consistent generalization about one successful tutor's adaptive strategies for student emotion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Current Directions",
"sec_num": "5"
},
{
"text": "However, it is important to note that we do not know yet if these adaptive techniques will be \"effective\", i.e. that they will improve student learning or improve other performance measures such as student persistence (Aist et al., 2002) when implemented in our computer tutor. Our next step will thus be to use these adaptive techniques as a guideline for implementing adaptive techniques in ITSPOKE. We can then compare the performance of the adaptive system with its non-adaptive counterpart, to see whether or not student performance is improved. Currently ITSPOKE adaptation is based only on the correctness of student turns.",
"cite_spans": [
{
"start": 218,
"end": 237,
"text": "(Aist et al., 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Current Directions",
"sec_num": "5"
},
{
"text": "We will also investigate how other factors interact with student emotional states to determine subsequent Tutor Acts. For although our results demonstrate significant dependencies between emotion and our human tutor responses, only a small amount of variance is accounted for in our results, indicating that other factors play a role in determining tutor responses. One such factor is student \"correctness\", which is not identical to student \"certainness\" (as measured by \"hedging\" (Bhatt et al., 2004) ); for example, a student may be \"certain\" but \"incorrect\". Other factors include the dialogue act that the student is performing. We have recently completed the annotation of student turn \"correctness\", and we have already annotated \"Student Acts\" in tandem with Tutor Acts. Annotation of student \"Frustration\" and \"Anger\" categories has also recently been completed. We plan to extend the n-gram analysis by looking at other n-grams combining these new annotations of student turns with tutor responses.",
"cite_spans": [
{
"start": 482,
"end": 502,
"text": "(Bhatt et al., 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Current Directions",
"sec_num": "5"
},
{
"text": "In addition to using dependent bigrams to develop adaptive dialogue techniques, these results also provide features for other algorithms. We plan to use the dependent bigrams as new features for investigating learning correlations (i.e., Do students whose dialogues display more certain -POS bigrams learn more?), furthering our previous work in this area (Forbes-Riley et al., 2005; .",
"cite_spans": [
{
"start": 356,
"end": 383,
"text": "(Forbes-Riley et al., 2005;",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Current Directions",
"sec_num": "5"
},
{
"text": "We use the terms \"affect\" and \"emotion\" loosely to cover emotions and attitudes believed to be relevant for tutoring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also use ITSPOKE to examine the utility of building spoken dialogue tutors (e.g.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(Liscombe et al., 2005) show that using only acousticprosodic features as predictors, these student certainness annotations can be predicted with 76.42% accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Degrees of freedom is computed as (#rows -1) * (#columns",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the \u03c7 2 value for each of the bigrams inFigure 7is identical to its \"Certainness -notPOS\" counterpart. This can be understood by observing that the 2 X 2 observed (and expected) table for each \"Certainness -POS\" bigram is identical to its \"notPOS\" counterpart, except that the columns are flipped. That is, \"not notPOS\" is equivalent to \"POS\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These POS results are discussed in Section 3.2; in this section we summarize the results for all 11 bigram sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All the LAQ bigrams except certain -LAQ are barely significant at p< .05. Of the DAQ bigrams, only CERT -DAQ and uncertain -DAQ barely exceed the critical value at p<.05.13 Of the RCP and RST bigrams not shown, only certain -",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The student means for the (multiple-choice) pre-and posttests were 0.42 and 0.72, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Julia Hirschberg, Jennifer Venditti, Jackson Liscombe, and Jeansun Lee at Columbia University for certainness annotation and discussion. We thank Pam Jordan, Amruta Purandare, Ted Pederson, and Mihai Rotaru for their helpful comments. This research is supported by ONR (N00014-04-1-0108), and NSF (0325054, 0328431).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Experimentally augmenting an intelligent tutoring system with human-supplied capabilities: Adding Human-Provided Emotional Scaffolding to an Automated Reading Tutor that Listens",
"authors": [
{
"first": "G",
"middle": [],
"last": "Aist",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kort",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reilly",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mostow",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Picard",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. Intelligent Tutoring Systems Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Aist, B. Kort, R. Reilly, J. Mostow, and R. Picard. 2002. Experimentally augmenting an intelligent tutor- ing system with human-supplied capabilities: Adding Human-Provided Emotional Scaffolding to an Auto- mated Reading Tutor that Listens. In Proc. Intelligent Tutoring Systems Conference.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Affective Dialogue Systems",
"authors": [
{
"first": "E",
"middle": [],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Dybkjaer",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Minker",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Heisterkamp",
"suffix": ""
}
],
"year": 2004,
"venue": "Lecture Notes in Computer Science",
"volume": "3068",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Andr\u00e9, L. Dybkjaer, W. Minker, and P. Heisterkamp, editors. 2004. Affective Dialogue Systems, Tutorial and Research Workshop, ADS 2004, Kloster Irsee, Germany, June 14-16, 2004, Proceedings, volume 3068 of Lecture Notes in Computer Science. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Prosody-based automatic detection of annoyance and frustration in human-computer dialog",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Krupski",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. International Conf. on Spoken Language Processing (ICSLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Ang, R. Dhillon, A. Krupski, E.Shriberg, and A. Stol- cke. 2002. Prosody-based automatic detection of an- noyance and frustration in human-computer dialog. In Proc. International Conf. on Spoken Language Pro- cessing (ICSLP).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The design, implementation, and use of the Ngram Statistic Package",
"authors": [
{
"first": "S",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. 4th International Conference. on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Banerjee and T. Pedersen. 2003. The design, imple- mentation, and use of the Ngram Statistic Package. In Proc. 4th International Conference. on Intelligent Text Processing and Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hedged responses and expressions of affect in human/human and human/computer tutorial interactions",
"authors": [
{
"first": "K",
"middle": [],
"last": "Bhatt",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Evens",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Argamon",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. 26th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Bhatt, M. Evens, and S. Argamon. 2004. Hedged re- sponses and expressions of affect in human/human and human/computer tutorial interactions. In Proc. 26th",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Annual Meeting of the Cognitive Science Society",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Cognitive Science Society.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Coding dialogues with the DAMSL annotation scheme",
"authors": [
{
"first": "M",
"middle": [
"G"
],
"last": "Core",
"suffix": ""
},
{
"first": "J",
"middle": [
"F"
],
"last": "Allen",
"suffix": ""
}
],
"year": 1997,
"venue": "D. Traum, editor, Working Notes: AAAI Fall Symposium on Communicative Action in Humans and Machines",
"volume": "",
"issue": "",
"pages": "28--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. G. Core and J. F. Allen. 1997. Coding dialogues with the DAMSL annotation scheme. In D. Traum, edi- tor, Working Notes: AAAI Fall Symposium on Commu- nicative Action in Humans and Machines, pages 28- 35, Menlo Park, California.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Why am I confused: An exploratory look into the role of affect in learning",
"authors": [
{
"first": "S",
"middle": [
"D"
],
"last": "Craig",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Graesser",
"suffix": ""
}
],
"year": 2003,
"venue": "Advances in Technology-based Education: Towards a Knowledge-based Society",
"volume": "3",
"issue": "",
"pages": "1903--1906",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. D. Craig and A. Graesser. 2003. Why am I confused: An exploratory look into the role of affect in learning. In A. Mendez-Vilas and J.A.Mesa Gonzalez, editors, Advances in Technology-based Education: Towards a Knowledge-based Society Vol 3, pages 1903-1906.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Predicting emotion in spoken dialogue from multiple knowledge sources",
"authors": [
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. Human Language Technology Conf. of the North American Chap. of the Assoc. for Computational Linguistics (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Forbes-Riley and D. Litman. 2004. Predicting emotion in spoken dialogue from multiple knowledge sources. In Proc. Human Language Technology Conf. of the North American Chap. of the Assoc. for Compu- tational Linguistics (HLT/NAACL).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dialogue-learning correlations in spoken dialogue tutoring",
"authors": [
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Huettner",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ward",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International Conference on Artificial Intelligence in Education",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Forbes-Riley, D. Litman, A. Huettner, and A. Ward. 2005. Dialogue-learning correlations in spoken dia- logue tutoring. In Proceedings of the International Conference on Artificial Intelligence in Education.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Question asking during tutoring",
"authors": [
{
"first": "A",
"middle": [],
"last": "Graesser",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Person",
"suffix": ""
}
],
"year": 1994,
"venue": "American Educational Research Journal",
"volume": "31",
"issue": "1",
"pages": "104--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Graesser and N. Person. 1994. Question asking dur- ing tutoring. American Educational Research Journal, 31(1):104-137.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Collaborative dialog patterns in naturalistic one-on-one tutoring",
"authors": [
{
"first": "A",
"middle": [],
"last": "Graesser",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Person",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Magliano",
"suffix": ""
}
],
"year": 1995,
"venue": "Applied Cognitive Psychology",
"volume": "9",
"issue": "",
"pages": "495--522",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Graesser, N. Person, and J. Magliano. 1995. Collabo- rative dialog patterns in naturalistic one-on-one tutor- ing. Applied Cognitive Psychology, 9:495-522.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Corpus-based discourse understanding in spoken dialogue systems",
"authors": [
{
"first": "R",
"middle": [],
"last": "Higashinaka",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nakano",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Aikawa",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. Assoc. for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Higashinaka, M. Nakano, and K. Aikawa. 2003. Corpus-based discourse understanding in spoken dia- logue systems. In Proc. Assoc. for Computational Lin- guistics (ACL).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Generating socially appropriate tutorial dialog",
"authors": [
{
"first": "W",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Lewis",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Rizzo",
"suffix": ""
},
{
"first": "Wauter",
"middle": [],
"last": "Bosma",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Kole",
"suffix": ""
},
{
"first": "Mattijs",
"middle": [],
"last": "Ghijsen",
"suffix": ""
},
{
"first": "Herwin",
"middle": [],
"last": "Van Welbergen ; Andr\u00e9",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "254--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Lewis Johnson, Paola Rizzo, Wauter Bosma, Sander Kole, Mattijs Ghijsen, and Herwin van Welbergen. 2004. Generating socially appropriate tutorial dialog. In Andr\u00e9 et al. (Andr\u00e9 et al., 2004), pages 254-264.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Detecting certainness in spoken tutorial dialogues",
"authors": [
{
"first": "J",
"middle": [],
"last": "Liscombe",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Venditti",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. InterSpeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Liscombe, J. Venditti, and J.Hirschberg. 2005. Detect- ing certainness in spoken tutorial dialogues. In Proc. InterSpeech.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Annotating student emotional states in spoken tutoring dialogues",
"authors": [
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. 5th SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Litman and K. Forbes-Riley. 2004a. Annotating stu- dent emotional states in spoken tutoring dialogues. In Proc. 5th SIGdial Workshop on Discourse and Dia- logue.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Predicting student emotions in computer-human tutoring dialogues",
"authors": [
{
"first": "D",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. Assoc. Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. J. Litman and K. Forbes-Riley. 2004b. Predicting stu- dent emotions in computer-human tutoring dialogues. In Proc. Assoc. Computational Linguistics (ACL).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "ITSPOKE: An intelligent tutoring spoken dialogue system",
"authors": [
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Silliman",
"suffix": ""
}
],
"year": 2004,
"venue": "Companion Proc. of the Human Language Technology Conf. of the North American Chap. of the Assoc. for Computational Linguistics (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Litman and S. Silliman. 2004. ITSPOKE: An intel- ligent tutoring spoken dialogue system. In Compan- ion Proc. of the Human Language Technology Conf. of the North American Chap. of the Assoc. for Computa- tional Linguistics (HLT/NAACL).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Spoken versus typed human and computer dialogue tutoring",
"authors": [
{
"first": "D",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "C",
"middle": [
"P"
],
"last": "Rose",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Forbes-Riley",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vanlehn",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bhembe",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Silliman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. Intelligent Tutoring Systems Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. J. Litman, C. P. Rose, K. Forbes-Riley, K. VanLehn, D. Bhembe, and S. Silliman. 2004. Spoken versus typed human and computer dialogue tutoring. In Proc. Intelligent Tutoring Systems Conference.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Generating tutorial feedback with affect",
"authors": [
{
"first": "J",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Porayska-Pomsta",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Varges",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zinn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the 17th International Florida Artificial Intelligence Research Sociey Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. D. Moore, K. Porayska-Pomsta, S. Varges, and C. Zinn. 2004. Generating tutorial feedback with affect. In Proc. of the 17th International Florida Artificial In- telligence Research Sociey Conference.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards modeling user behavior in human-machine interaction: Effect of errors and emotions",
"authors": [
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ISLE Workshop on Dialogue Tagging for Multi-modal Human Computer Interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Narayanan. 2002. Towards modeling user behavior in human-machine interaction: Effect of errors and emo- tions. In Proc. ISLE Workshop on Dialogue Tagging for Multi-modal Human Computer Interaction.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Predicting dialogue acts for a speech-to-speech translation system",
"authors": [
{
"first": "N",
"middle": [],
"last": "Reithinger",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Engel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kipp",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Klesen",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. International Conf. on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Reithinger, R. Engel, M. Kipp, and M. Klesen. 1996. Predicting dialogue acts for a speech-to-speech trans- lation system. In Proc. International Conf. on Spoken Language Processing (ICSLP).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Building a bridge between intelligent tutoring and collaborative dialogue systems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rickel",
"suffix": ""
},
{
"first": "N",
"middle": [
"B"
],
"last": "Lesh",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Rich",
"suffix": ""
},
{
"first": "C",
"middle": [
"L"
],
"last": "Sidner",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gertner",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the International Conference on Artificial Intelligence in Education (AI-ED)",
"volume": "",
"issue": "",
"pages": "592--594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Rickel, N. B. Lesh, C. Rich, C. L. Sidner, and A. Gert- ner. 2001. Building a bridge between intelligent tu- toring and collaborative dialogue systems. In Proc. of the International Conference on Artificial Intelligence in Education (AI-ED), pages 592-594.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Voice signatures",
"authors": [
{
"first": "I",
"middle": [],
"last": "Shafran",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. IEEE Automatic Speech Recognition and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Shafran, M. Riley, and M. Mohri. 2003. Voice sig- natures. In Proc. IEEE Automatic Speech Recognition and Understanding Workshop.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Classifying student initiatives and tutor responses in human-human keyboard-to-keyboard tutoring sessions",
"authors": [
{
"first": "F",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Evens",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rovick",
"suffix": ""
}
],
"year": 2002,
"venue": "Discourse Processes",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Shah, M. Evens, J. Michael, and A. Rovick. 2002. Classifying student initiatives and tutor responses in human-human keyboard-to-keyboard tutoring ses- sions. Discourse Processes, 33(1).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meteer",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, M. Meteer, and C. Van Ess-Dykema. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics 26:3.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The architecture of Why2-Atlas: A coach for qualitative physics essay writing",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vanlehn",
"suffix": ""
},
{
"first": "P",
"middle": [
"W"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "C",
"middle": [
"P"
],
"last": "Ros\u00e9",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bhembe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "B\u00f6ttner",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gaydos",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Makatchev",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Pappuswamy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ringenberg",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Roque",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Siler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. VanLehn, P. W. Jordan, C. P. Ros\u00e9, D. Bhembe, M. B\u00f6ttner, A. Gaydos, M. Makatchev, U. Pap- puswamy, M. Ringenberg, A. Roque, S. Siler, R. Sri- vastava, and R. Wilson. 2002. The architecture of Why2-Atlas: A coach for qualitative physics essay writing. In Proc. Intelligent Tutoring Systems Confer- ence.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Annotated Corpus Excerpt #2"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Tutor Feedback Acts \u2022 Short Answer Question (SAQ): concerns basic quantitative relationships. See Figures 1 (TUTOR 71 ) and 6 (TUTOR 98 , TUTOR 100 ). \u2022 Long Answer Question (LAQ): requires definition/interpretation of concepts. See Figure 2 (TUTOR 16 , TUTOR 18 ).\u2022 Deep Answer Question (DAQ): requires reasoning about causes and/or effects. SeeFigure 6(TUTOR 102 )."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Tutor Question Acts based on the prior student turn(s).Our corpus dialogue excerpts inFigure 1,Figure 2, andFigure 6illustrate that most tutor turns are labeled with multiple Tutor Acts."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Tutor State Acts stitute \"words\" in the bigram. In"
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "RST, SAQ] STUDENT101: The feathers. (CERTAIN) TUTOR102: The feathers-why?[RST, DAQ]"
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Annotated Corpus Excerpt #3 tags, as shown in"
},
"TABREF0": {
"content": "<table><tr><td>PHYSICS PROBLEM: Suppose a man is in a free-falling el-evator and is holding his keys motionless right in front of his face. He then lets go. What will be the position of the keys relative to the man's face as time passes? Explain.</td></tr><tr><td>. . . dialogue excerpt at 4.0 minutes into session. . .</td></tr><tr><td>TUTOR16: STUDENT17: Um, that they're only under one force and that's gravity. (UNCERTAIN)</td></tr><tr><td>TUTOR18:Ok, that is the force. But what does it tell you-that's right and that is about the force, but what does that tell you about their motion? [POS, LAQ]</td></tr><tr><td>STUDENT19: Ok, uh, the motion is-oh, is they're moving in the same direction con-they're constant. (UNCERTAIN)</td></tr><tr><td>TUTOR20: Uh, ok. They are also in freefall. [BOT]</td></tr></table>",
"text": "Um, ok, so now we are thinking in terms of the factors that govern this displacement. Um, now they are-the elevator is in freefall. So does what does that tell you about the motion of the keys and the person?[RD, LAQ]",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td colspan=\"4\">POS notPOS Total 439.46 2329.54 2769 175.21 928.79 1104 816 uncertain 129.51 686.49 neutral certain mixed 36.82 195.18 232</td></tr><tr><td>Total</td><td>781</td><td>4140</td><td>4921</td></tr></table>",
"text": "are shown inTable 2.",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}