{ "paper_id": "2007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:49:33.358997Z" }, "title": "Analysis of User Reactions to Turn-Taking Failures in Spoken Dialogue Systems", "authors": [ { "first": "Mikio", "middle": [], "last": "Nakano", "suffix": "", "affiliation": { "laboratory": "", "institution": "Honda Research Institute Japan Co. Ltd", "location": { "addrLine": "8-1 Honcho", "postCode": "359-0188", "settlement": "Wako", "region": "Saitama", "country": "Japan" } }, "email": "nakano@jp.honda-ri.com" }, { "first": "Yuka", "middle": [], "last": "Nagano", "suffix": "", "affiliation": { "laboratory": "", "institution": "Honda Research Institute Japan Co. Ltd", "location": { "addrLine": "8-1 Honcho", "postCode": "359-0188", "settlement": "Wako", "region": "Saitama", "country": "Japan" } }, "email": "" }, { "first": "Kotaro", "middle": [], "last": "Funakoshi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Honda Research Institute Japan Co. Ltd", "location": { "addrLine": "8-1 Honcho", "postCode": "359-0188", "settlement": "Wako", "region": "Saitama", "country": "Japan" } }, "email": "funakoshi@jp.honda-ri.com" }, { "first": "Toshihiko", "middle": [], "last": "Ito", "suffix": "", "affiliation": { "laboratory": "", "institution": "Honda Research Institute Japan Co. Ltd", "location": { "addrLine": "8-1 Honcho", "postCode": "359-0188", "settlement": "Wako", "region": "Saitama", "country": "Japan" } }, "email": "t-itoh@media.eng.hokudai.ac.jp" }, { "first": "Kenji", "middle": [], "last": "Araki", "suffix": "", "affiliation": { "laboratory": "", "institution": "Honda Research Institute Japan Co. 
Ltd", "location": { "addrLine": "8-1 Honcho", "postCode": "359-0188", "settlement": "Wako", "region": "Saitama", "country": "Japan" } }, "email": "araki@media.eng.hokudai.ac.jp" }, { "first": "Yuji", "middle": [], "last": "Hasegawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Honda Research Institute Japan Co. Ltd", "location": { "addrLine": "8-1 Honcho", "postCode": "359-0188", "settlement": "Wako", "region": "Saitama", "country": "Japan" } }, "email": "yuji.hasegawa@jp.honda-ri.com" }, { "first": "Hiroshi", "middle": [], "last": "Tsujino", "suffix": "", "affiliation": { "laboratory": "", "institution": "Honda Research Institute Japan Co. Ltd", "location": { "addrLine": "8-1 Honcho", "postCode": "359-0188", "settlement": "Wako", "region": "Saitama", "country": "Japan" } }, "email": "tsujino@jp.honda-ri.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents the results of an analysis of user reactions towards system failures in turn-taking in human-computer dialogues. When a system utterance and a user utterance start with a small time difference, the user may stop his/her utterance. In addition, when the user utterance ends soon after the overlap starts, the possibility of the utterance being discontinued is high. Based on this analysis, it is suggested that the degradation in speech recognition performance can be predicted using utterance overlapping information.", "pdf_parse": { "paper_id": "2007", "_pdf_hash": "", "abstract": [ { "text": "This paper presents the results of an analysis of user reactions towards system failures in turn-taking in human-computer dialogues. When a system utterance and a user utterance start with a small time difference, the user may stop his/her utterance. In addition, when the user utterance ends soon after the overlap starts, the possibility of the utterance being discontinued is high. 
Based on this analysis, it is suggested that the degradation in speech recognition performance can be predicted using utterance overlapping information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many kinds of spoken dialogue systems have been developed in the last two decades. Most previous systems employed a fixed turn-taking strategy; that is, they take a turn when the user leaves a pause of a certain length after an utterance, and they release the turn immediately when the user barges in on a system utterance. In order to improve the usability of spoken dialogue systems, the turn-taking strategy needs to be more flexible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus far, there have been several approaches to this problem. Some methods try to decide when to take a turn based not only on the length of a pause but also on the content and prosody of the user utterance [e.g., (Sato et al., 2002; Ferrer et al., 2003; Schlangen, 2006) ]. Other methods try to decide how to react appropriately to user barge-in utterances, rather than simply stopping whenever a barge-in utterance is detected [e.g., (Str\u00f6m and Seneff, 2000; Rose and Kim, 2003) ].", "cite_spans": [ { "start": 209, "end": 228, "text": "(Sato et al., 2002;", "ref_id": "BIBREF5" }, { "start": 229, "end": 249, "text": "Ferrer et al., 2003;", "ref_id": "BIBREF0" }, { "start": 250, "end": 266, "text": "Schlangen, 2006)", "ref_id": "BIBREF6" }, { "start": 433, "end": 457, "text": "(Str\u00f6m and Seneff, 2000;", "ref_id": "BIBREF7" }, { "start": 458, "end": 477, "text": "Rose and Kim, 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite these efforts, achieving appropriate turn-taking is still difficult. The features these methods rely on cannot always be obtained reliably. 
In addition, even humans sometimes cannot decide whether the system should take a turn or not (Sato et al., 2002) .", "cite_spans": [ { "start": 240, "end": 259, "text": "(Sato et al., 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Consequently, in addition to efforts towards improving turn-taking, we need to find a way to make the system cope with turn-taking errors. As a first step, we investigated how users behave when the system makes mistakes in turn-taking. We found that users tend to stop their utterances in certain situations. We expect this to be useful in avoiding misunderstandings caused by speech recognition errors on such discontinued utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Turn-Taking Failures", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of User Reactions to", "sec_num": "2" }, { "text": "We analyzed two sets of human-system dialogue data collected using two different dialogue systems in Japanese. One was a car-rental reservation dialogue system in which the user could make a reservation for renting a car by specifying the date, hour, and locations for rental and return, along with the car type. The other was a video recording system in which the user could set the date, time, channel, and recording mode (long play or short play) for recording a TV program. Both systems performed frame-based dialogue management. They employed the Julian speech recognizer directed by network grammars (Kawahara et al., 2004) with its attached acoustic models. The vocabulary size for speech recognition was 225 words for the car-rental reservation system and 198 words for the video recording system. These systems also employed NTT-IT Corporation's FineVoice speech synthesizer. When collecting the data, a microphone and headphones were used. 
For each dialogue, the microphone input and the system output were recorded in a stereo file.", "cite_spans": [ { "start": 611, "end": 634, "text": "(Kawahara et al., 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Dialogue Data", "sec_num": "2.1" }, { "text": "The contents of the data sets are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue Data", "sec_num": "2.1" }, { "text": "\u2022 Set C: (Car-rental reservation)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue Data", "sec_num": "2.1" }, { "text": "Each of the 23 subjects (12 males and 11 females) engaged in 8 dialogues (total 184 dialogues). In each dialogue, users tried to make one reservation. Of these, 134 dialogues were successfully finished within 3.5 minutes, 38 failed, and 12 were aborted because of system trouble.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue Data", "sec_num": "2.1" }, { "text": "\u2022 Set V: (Video recording reservation)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue Data", "sec_num": "2.1" }, { "text": "This consists of 117 dialogues (9 dialogues by each of the 13 subjects (9 males and 4 females)). These subjects are different from the subjects for Set C. In each dialogue, the user tried to set the timer to record two programs. In 41 dialogues, the user successfully set up the recordings for two programs within 3 minutes. In 36 dialogues, the user set up only one of the programs. In 34 dialogues, the user could not set up the recordings, and 6 were aborted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue Data", "sec_num": "2.1" }, { "text": "Both systems had variations in dialogue and turn-taking strategies so that a variety of dialogues were recorded. 
Thresholds on confidence scores for generating confirmation requests were changed, parameters for speech interval detection were changed, and whether the system stopped its utterances when the user barged in was also varied. For each subject, different strategies were used in different dialogues. We do not describe these variations in detail since, as explained later, we focus on the phenomena of turn-taking failures rather than their causes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue Data", "sec_num": "2.1" }, { "text": "After collecting the data, both user and system utterances were transcribed as pronounced. Utterance segmentation was done manually with an annotation tool, based on pauses longer than 300 ms. Overlaps between user and system utterances fall into three cases: (o1) the start time of the user utterance is between the start and end times of a system utterance; (o2) the start times of one or more system utterances are between the start and end times of the user utterance; (o3) both (o1) and (o2) occur. The timestamps of each speech segment indicate points in time from the start of the stereo file. Below, we simply call these speech segments utterances. The total numbers of user and system utterances are 3,364 and 5,157 in Set C, and 2,521 and 4,522 in Set V.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue Data", "sec_num": "2.1" }, { "text": "As Raux et al. (2006) reported, there are several kinds of system turn-taking failures. The system sometimes barges in on a user utterance, and sometimes fails to take a turn. These failures have several causes, such as errors in speech interval detection and misrecognition of the user's intention to release the turn. In this paper, we focus only on failures that result in overlaps between user and system utterances. 
We did not investigate the causes of these failures; instead, we analyzed the overlap phenomena that often occur when the system makes turn-taking mistakes, because the goal of this analysis is not to improve turn-taking but to find a way to recover from turn-taking failures. Table 1 shows the frequencies of user utterances overlapping system utterances.", "cite_spans": [ { "start": 3, "end": 21, "text": "Raux et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 729, "end": 736, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Utterance Overlaps", "sec_num": "2.2" }, { "text": "In this paper, we call utterances that are stopped in the middle for any reason discontinuations. We found that user utterances overlapping with system utterances are more likely to be discontinuations. Discontinuations are expected to be difficult for speech recognition mainly because they are ungrammatical and include word fragments, so detecting and ignoring them would improve speech understanding. We therefore focus on analyzing discontinuations. Figure 1 shows an example of a discontinuation in a car-rental reservation dialogue. We annotated discontinuations by listening only to the user-speech channel of the stereo files. In Set C, 87 utterances are discontinuations, and in Set V, 48 are. Of these, 61 and 38, respectively, overlap with system utterances.", "cite_spans": [], "ref_spans": [ { "start": 448, "end": 456, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Discontinuations", "sec_num": "2.3" }, { "text": "To investigate speech recognition performance on the discontinuations, we used the same network grammar as the dialogue systems used during data collection. Note that, since the user speech segments are derived from the timestamps in the transcriptions, they differ from those recognized at the time of data collection. 
As shown in Table 2 , discontinuations include out-of-grammar utterances, so the word error rates are very high. 1", "cite_spans": [], "ref_spans": [ { "start": 342, "end": 349, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Discontinuations", "sec_num": "2.3" }, { "text": "One potentially effective way to detect discontinuations is to use prosodic information (Liu et al., 2003) . Since prosody recognition is not yet perfect, however, it is worth exploring other methods.", "cite_spans": [ { "start": 90, "end": 108, "text": "(Liu et al., 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Relationship between Discontinuations and Turn-Taking", "sec_num": "2.4" }, { "text": "1 The word error rates for the out-of-grammar utterances are very high for the following reason. We transcribed the user utterances without word boundaries because it is not easy to consistently determine word boundaries for Japanese. We used a morphological analyzer to split these transcriptions into words to obtain references for computing speech recognition accuracy. This process tended to produce one-syllable out-of-vocabulary words. Therefore the references include a greater number of out-of-vocabulary words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationship between Discontinuations and Turn-Taking", "sec_num": "2.4" }, { "text": "d (s): \u2212\u221e to \u22120.4, \u22120.4 to \u22120.2, \u22120.2 to 0.0, 0.0 to 0.2, 0.2 to 0.4, 0.4 to 0.6, 0.6 to 1.0, 1.0 to \u221e; C: 2/45, 0/7, 4/22, 15/43, 11/56, 3/29, 4/34, 22/284; V: 0/17, 0/9, 10/21, 16/57, 6/48, 3/27, 1/12, 2/58 ((# of discontinuations)/(# of overlapped user utterances)) We therefore investigated in which turn-taking situations discontinuations are likely to occur. Discontinuations are likely when the start times of the user and system utterances are close. Table 3 shows the frequencies of discontinuations among the overlapping user utterances depending on the start time difference d. 
Here, the start time difference d is defined as follows:", "cite_spans": [], "ref_spans": [ { "start": 423, "end": 430, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Relationship between Discontinuations and Turn-Taking", "sec_num": "2.4" }, { "text": "d = st(u) \u2212 st(s),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationship between Discontinuations and Turn-Taking", "sec_num": "2.4" }, { "text": "where st(i) means the start time of utterance i, u is the user utterance, and s is the first of the system utterances overlapping u. We found that people tend to stop their own utterances when d is between \u22120.2s and 0.4s. When d is larger than 0.4s, the user has already spoken for a while, so he/she might try to finish the utterance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationship between Discontinuations and Turn-Taking", "sec_num": "2.4" }, { "text": "Next, we investigated the end times of the overlapped user utterances, because discontinuations can be expected to occur soon after the overlap starts. Table 4 shows the frequencies of discontinuations depending on the length of the user utterance after the overlapping starts. 
This is defined as c in the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationship between Discontinuations and Turn-Taking", "sec_num": "2.4" }, { "text": "c = et(u) \u2212 st(u) (cases (o1) and (o3) in Table 1); c = et(u) \u2212 st(s) (case (o2) in Table 1),", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 45, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 76, "end": 84, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Relationship between Discontinuations and Turn-Taking", "sec_num": "2.4" }, { "text": "where et(i) means the end time of utterance i. As we expected, when c is between 0.1s and 0.6s, the user utterances are more likely to be discontinuations than in other cases. From the above analysis, the possibility that a discontinuation occurs is high when d is between \u22120.2s and 0.4s and c is between 0.1s and 0.6s. We call this situation Situation S. Table 5 shows the frequencies of discontinuations depending on the combinations of d and c.", "cite_spans": [], "ref_spans": [ { "start": 354, "end": 361, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Relationship between Discontinuations and Turn-Taking", "sec_num": "2.4" }, { "text": "Since discontinuations occur more frequently in Situation S than in other cases, speech recognition performance is expected to be degraded in Situation S. Table 6 shows these results. 
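The analysis above amounts to a simple decision rule over utterance start and end times. The following sketch (in Python) is our own illustration, not part of the systems described; the function name, the tuple-based utterance representation, and the inclusive boundary handling are assumptions:

```python
# Sketch of the "Situation S" test described above. Times are in seconds
# from the start of the recorded stereo file; each utterance is a
# (start_time, end_time) pair. Treating the range boundaries as inclusive
# is an assumption, not something specified by the analysis.

def is_situation_s(user, system, d_range=(-0.2, 0.4), c_range=(0.1, 0.6)):
    """Return True if an overlapped user utterance falls in Situation S.

    user   -- (st(u), et(u)) for the user utterance
    system -- (st(s), et(s)) for the first system utterance overlapping it
    """
    st_u, et_u = user
    st_s, et_s = system
    # The rule only applies when the two utterances actually overlap.
    if et_u <= st_s or et_s <= st_u:
        return False
    # d: start time difference (positive when the user starts later).
    d = st_u - st_s
    # c: length of the user utterance after the overlap starts.
    # Cases (o1)/(o3): the user starts during the system utterance, so the
    # overlap starts at st(u); case (o2): the system starts during the user
    # utterance, so the overlap starts at st(s).
    c = et_u - st_u if st_u >= st_s else et_u - st_s
    return d_range[0] <= d <= d_range[1] and c_range[0] <= c <= c_range[1]

# Example: the user starts 0.2 s after the system and stops 0.4 s later.
print(is_situation_s(user=(84.5, 84.9), system=(84.3, 85.9)))  # True
```

In a running system, an utterance flagged this way could have its recognition result discounted or ignored, since Situation S predicts degraded recognition performance.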
This suggests that overlap information can be used to predict degradation in speech recognition performance.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Predicting Speech Recognition Performance Degradation", "sec_num": "2.5" }, { "text": "This paper presented our preliminary analysis of user reactions to system turn-taking failures in human-computer dialogues. We found that discontinuations occur more frequently in user utterances that overlap system utterances because of turn-taking failures. We identified situations in which user discontinuations frequently occur, and the results suggest that the degradation in speech recognition performance can be predicted using utterance overlap information. This is expected to be useful for avoiding misunderstandings. We plan to conduct more detailed analyses of discontinuations, such as their relationship with the subjects and with the dialogue and turn-taking strategies of the systems. We also plan to investigate changes in speech recognition performance when statistical language models are employed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding Remarks", "sec_num": "3" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A prosody-based approach to end-of-utterance detection that does not require speech recognition", "authors": [ { "first": "Luciana", "middle": [], "last": "Ferrer", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2003, "venue": "Proc. ICASSP-2003", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luciana Ferrer, Elizabeth Shriberg, and Andreas Stolcke. 2003. A prosody-based approach to end-of-utterance detection that does not require speech recognition. In Proc. 
ICASSP-2003.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Recent progress of open-source LVCSR engine Julius and Japanese model repository", "authors": [ { "first": "Tatsuya", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Akinobu", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Takeda", "suffix": "" }, { "first": "Katsunobu", "middle": [], "last": "Itou", "suffix": "" }, { "first": "Kiyohiro", "middle": [], "last": "Shikano", "suffix": "" } ], "year": 2004, "venue": "Proc. Interspeech-2004 (ICSLP)", "volume": "", "issue": "", "pages": "3069--3072", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tatsuya Kawahara, Akinobu Lee, Kazuya Takeda, Katsunobu Itou, and Kiyohiro Shikano. 2004. Recent progress of open-source LVCSR engine Julius and Japanese model repository. In Proc. Interspeech-2004 (ICSLP), pages 3069-3072.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic disfluency identification in conversational speech using multiple knowledge sources", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2003, "venue": "Proc. Eurospeech-2003", "volume": "", "issue": "", "pages": "957--960", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu, Elizabeth Shriberg, and Andreas Stolcke. 2003. Automatic disfluency identification in conversational speech using multiple knowledge sources. In Proc. Eurospeech-2003, pages 957-960.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Doing research in a deployed spoken dialog system: One year of let's go! 
public experience", "authors": [ { "first": "Antoine", "middle": [], "last": "Raux", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Langner", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Bohus", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Maxine", "middle": [], "last": "Eskenazi", "suffix": "" } ], "year": 2006, "venue": "Proc. Interspeech-2006 (ICSLP)", "volume": "", "issue": "", "pages": "65--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Raux, Brian Langner, Dan Bohus, Alan W. Black, and Maxine Eskenazi. 2006. Doing research in a deployed spoken dialog system: One year of let's go! public experience. In Proc. Interspeech-2006 (ICSLP), pages 65-68.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A hybrid barge-in procedure for more reliable turn-taking in human-machine dialog systems", "authors": [ { "first": "R", "middle": [ "C" ], "last": "Rose", "suffix": "" }, { "first": "Hong Kook", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2003, "venue": "Proc. ASRU-03", "volume": "", "issue": "", "pages": "198--203", "other_ids": {}, "num": null, "urls": [], "raw_text": "R.C. Rose and Hong Kook Kim. 2003. A hybrid barge-in procedure for more reliable turn-taking in human-machine dialog systems. In Proc. ASRU-03, pages 198-203.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning decision trees to determine turn-taking by spoken dialogue systems", "authors": [ { "first": "Ryo", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Ryuichiro", "middle": [], "last": "Higashinaka", "suffix": "" }, { "first": "Masafumi", "middle": [], "last": "Tamoto", "suffix": "" }, { "first": "Mikio", "middle": [], "last": "Nakano", "suffix": "" }, { "first": "Kiyoaki", "middle": [], "last": "Aikawa", "suffix": "" } ], "year": 2002, "venue": "Proc. 
7th ICSLP", "volume": "", "issue": "", "pages": "861--864", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryo Sato, Ryuichiro Higashinaka, Masafumi Tamoto, Mikio Nakano, and Kiyoaki Aikawa. 2002. Learning decision trees to determine turn-taking by spoken dialogue systems. In Proc. 7th ICSLP, pages 861-864.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "From reaction to prediction: Experiments with computational models of turn-taking", "authors": [ { "first": "David", "middle": [], "last": "Schlangen", "suffix": "" } ], "year": 2006, "venue": "Proc. Interspeech-2006 (ICSLP)", "volume": "", "issue": "", "pages": "2010--2013", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Schlangen. 2006. From reaction to prediction: Experiments with computational models of turn-taking. In Proc. Interspeech-2006 (ICSLP), pages 2010-2013.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Intelligent barge-in in conversational systems", "authors": [ { "first": "Nikko", "middle": [], "last": "Str\u00f6m", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Seneff", "suffix": "" } ], "year": 2000, "venue": "Proc. 6th ICSLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikko Str\u00f6m and Stephanie Seneff. 2000. Intelligent barge-in in conversational systems. In Proc. 6th ICSLP.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Example discontinuation with overlap." }, "TABREF1": { "type_str": "table", "html": null, "num": null, "text": "Frequencies of user utterances overlapping with system utterances.", "content": "
yokka no (on 4th) <discontinued>
user: 84.532 - 85.336
shigatsu mikka no (on April 3rd)
system: 84.848 - 85.936
" }, "TABREF3": { "type_str": "table", "html": null, "num": null, "text": "Speech recognition results for all utterances and discontinuations.", "content": "" }, "TABREF4": { "type_str": "table", "html": null, "num": null, "text": "Frequency of discontinuations depending on the start time difference d.", "content": "
c (s)  0.0-0.1  0.1-0.2  0.2-0.3  0.3-0.4  0.4-0.5  0.5-0.6  0.6-0.8  0.8-1.0  1.0-\u221e
C1/50 7/44 10/67 12/66 15/52 4/36 4/75 4/45 4/85
V1/19 4/19 9/30 13/28 2/17 3/16 2/22 0/17 4/81
(# of discontinuations)/(# of overlapped user utterances)
" }, "TABREF5": { "type_str": "table", "html": null, "num": null, "text": "", "content": "
Table 4: Frequency of discontinuations depending on c (the length of user utterance after the overlapping starts)
" }, "TABREF6": { "type_str": "table", "html": null, "num": null, "text": "Frequency of discontinuations depending on c and d.", "content": "
          Situation S            Other overlapping utterances
set    IG     OOG     ALL     IG     OOG     ALL
C      20     42      62      285    173     458
       16.67  107.89  78.57   12.72  66.31   35.36
V      13     39      52      97     100     197
       9.52   122.73  86.15   8.44   75.06   43.14
(upper: # of utterances. lower: word error rate (%).)
" }, "TABREF7": { "type_str": "table", "html": null, "num": null, "text": "Speech recognition performance for utterances in Situation S and other cases.", "content": "" } } } }