{ "paper_id": "Y10-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:41:01.759331Z" }, "title": "Combination of 3 Types of Speech Recognizers for Anaphora Resolution", "authors": [ { "first": "Kazutaka", "middle": [], "last": "Shimada", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kyushu Institute of Technology", "location": { "addrLine": "680-4 Iizuka Fukuoka", "postCode": "820-8502", "settlement": "shimada, n", "country": "Japan" } }, "email": "" }, { "first": "Noriko", "middle": [], "last": "Tanamachi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kyushu Institute of Technology", "location": { "addrLine": "680-4 Iizuka Fukuoka", "postCode": "820-8502", "settlement": "shimada, n", "country": "Japan" } }, "email": "tanamachi@pluto.ai.kyutech.ac.jp" }, { "first": "Tsutomu", "middle": [], "last": "Endo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kyushu Institute of Technology", "location": { "addrLine": "680-4 Iizuka Fukuoka", "postCode": "820-8502", "settlement": "shimada, n", "country": "Japan" } }, "email": "endo@pluto.ai.kyutech.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we propose a method for anaphora resolution in speech understanding for a livelihood support robot. For robust speech recognition, we combine two types of speech recognizers; a large vocabulary continuous speech recognizer (LVCSR) and domain-specific speech recognizers (DSSR). One problem in the anaphora resolution is lack of the antecedent in the outputs. To solve the problem, we introduce 2 types of DSSRs; one medium-scale DSSR and several small DSSRs. In this paper, we describe the basic idea of our multiple speech recognizer first. The selection process in the recognizer is based on the similarity between the LVCSR and each DSSR. Then, by using the outputs from the LVCSR and the medium-scale DSSR, we resolve anaphoric expressions in the current output from a small-scale DSSR. The experimental result shows the effectiveness of our method.", "pdf_parse": { "paper_id": "Y10-1032", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we propose a method for anaphora resolution in speech understanding for a livelihood support robot. For robust speech recognition, we combine two types of speech recognizers; a large vocabulary continuous speech recognizer (LVCSR) and domain-specific speech recognizers (DSSR). One problem in the anaphora resolution is lack of the antecedent in the outputs. To solve the problem, we introduce 2 types of DSSRs; one medium-scale DSSR and several small DSSRs. In this paper, we describe the basic idea of our multiple speech recognizer first. The selection process in the recognizer is based on the similarity between the LVCSR and each DSSR. Then, by using the outputs from the LVCSR and the medium-scale DSSR, we resolve anaphoric expressions in the current output from a small-scale DSSR. The experimental result shows the effectiveness of our method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Speech understanding and dialogue systems have been developed for practical use recently. These systems often recognize user utterances incorrectly. It is important to deal with speech recognition errors for speech understanding systems. Extracting keywords and understanding an utterance using them reduce speech recognition errors (Bouwman et al., 1999; Komatani and Kawahara, 2000) . 
Combining several recognizers is one of the most effective approaches to improving the accuracy of speech understanding systems (Isobe et al., 2007; Utsuro et al., 2004) . Utsuro et al. (2004) obtained high accuracy by using the outputs of several speech recognizers; however, they dealt only with word error reduction. Although Isobe et al. (2007) proposed a multi-domain speech recognition system based on several domain-specific recognizers, their system cannot treat out-of-domain utterances such as a chat between users. However, chat utterances often carry significant information as the context of the dialogue.", "cite_spans": [ { "start": 333, "end": 355, "text": "(Bouwman et al., 1999;", "ref_id": "BIBREF0" }, { "start": 356, "end": 384, "text": "Komatani and Kawahara, 2000)", "ref_id": "BIBREF7" }, { "start": 500, "end": 520, "text": "(Isobe et al., 2007;", "ref_id": "BIBREF3" }, { "start": 521, "end": 541, "text": "Utsuro et al., 2004)", "ref_id": "BIBREF16" }, { "start": 544, "end": 565, "text": "Utsuro et al. (2004)", "ref_id": "BIBREF16" }, { "start": 697, "end": 717, "text": "Isobe et al. (2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a simple and effective speech understanding method based on a large vocabulary continuous speech recognizer (LVCSR) and several domain-specific speech recognizers (DSSRs). We call it the \"One Generalist and Some Specialists (OGSS) model\". Figure 1 (a) shows the outline of the model. In our system, the LVCSR is the generalist, namely domain-independent, and the DSSRs are specialists, namely domain-dependent. We focus on the difference between the outputs generated by the generalist and the specialists. By using this method, we can recognize domain-dependent speech inputs with high accuracy and also capture context information in domain-independent speech inputs.", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 268, "text": "Figure 1 (a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task of this system is speech understanding for a livelihood support robot. The DSSRs recognize particular utterances about orders; e.g., order utterances from elders who need care and order utterances from nurses. We construct grammar-based DSSRs for order utterances in the OGSS model, each with a small vocabulary and high accuracy for its order type. We use the LVCSR for recognition of utterances that the DSSRs cannot recognize, such as a chat between users. The information recognized by the LVCSR helps to construct the context of a dialogue. By handling these different speech recognizers selectively and integratively, we realize a flexible and robust speech understanding method. Figure 1 (b) shows the effectiveness of the proposed multiple recognizer. The DSSR achieves order recognition with high accuracy, and the LVCSR supplies the information that is lacking in the order utterances.", "cite_spans": [], "ref_spans": [ { "start": 702, "end": 710, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In general, there are many anaphoric expressions in a dialogue, and anaphora resolution is one of the most important tasks for understanding it. In this paper, we also propose an anaphora resolution method for the multiple recognizer. 
By using previous outputs from the LVCSR and the DSSRs, we resolve anaphoric expressions in the current output. For example, for the utterance \"Please pick it up\" in Figure 1 (b), the system identifies that the word \"it\" refers to the phrase \"remote controller\", which was recognized by the LVCSR in the previous utterance. The antecedent often appears in non-order utterances, that is, outside the coverage of the DSSRs. Therefore, the target word is usually recognized by the LVCSR. However, the accuracy of the LVCSR is generally insufficient. If the antecedent is misrecognized, it does not exist in the output of the speech recognizer at all, and the accuracy of the anaphora resolution process decreases accordingly. Here we apply a medium-scale DSSR to the multiple recognizer. Its vocabulary covers the words of the target situation; in other words, it consists of the union of the vocabularies of the small-scale DSSRs, such as a nurse's order DSSR and a patient's order DSSR. By using the medium-scale DSSR, the accuracy on non-order utterances often improves. This, in turn, improves the accuracy of the anaphora resolution method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Section 2, we explain the basic idea of the multiple speech recognizer, namely how to select one output from the recognizers. In Section 3, we describe an anaphora resolution method based on the combination of 3 types of speech recognizers. Then, we evaluate the method in terms of the output selection and the anaphora resolution in Section 4. Finally, we conclude this paper in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we explain the process of output selection in the OGSS model. In this process, we focus on the difference between the outputs generated by each recognizer. Even human beings tend to misunderstand words that have similar pronunciations (Komatani et al., 2005). In our method, we focus on the output of the LVCSR. If an input is an order utterance, a DSSR and the LVCSR generate similar outputs on the phoneme level because the LVCSR is domain-independent. On the other hand, if the input is not an order utterance, they often generate different outputs even on the phoneme level because the DSSR never generates the correct result for non-order utterances.", "cite_spans": [ { "start": 250, "end": 273, "text": "(Komatani et al., 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Basic idea", "sec_num": "2.1" }, { "text": "In this paper, we apply an unsupervised approach to the output selection. We use the edit distance as the similarity measure. Correspondence measures such as the edit distance are among the most effective ways to identify high-confidence words in outputs (Utsuro et al., 2004) and to extract similar word pairs (Komatani et al., 2005) . In our method, if an input is an order utterance, the edit distance between the outputs of a DSSR and the LVCSR becomes small. However, if the input is not an order utterance, the distance between the outputs of each DSSR and the LVCSR becomes large. We compute edit distances at the utterance level and at the word level by using a DP matching algorithm. 
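To make the distance computation concrete, the following is a minimal sketch of the normalized, phoneme-level edit distance (our illustration, not the authors' code; the function names, the normalization by the longer sequence, and the handling of unmatched words are our assumptions based on the description here and on Figure 2):

```python
# Minimal sketch (ours) of the normalized phoneme-level edit distance.
# A recognizer output is assumed to be a list of words, each given as a
# string of phoneme symbols.

def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences via DP matching."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1]

def ed_utter(lvcsr_words, dssr_words):
    """Utterance-level distance, normalized by the number of phonemes."""
    a, b = "".join(lvcsr_words), "".join(dssr_words)
    return edit_distance(a, b) / max(len(a), len(b), 1)

def ed_word(lvcsr_words, dssr_words):
    """Word-level distance: eliminate completely matched words first, then
    average the cheapest normalized distances of the remaining words
    (overlapping pairs are allowed, as in Figure 2)."""
    rest_a = [w for w in lvcsr_words if w not in dssr_words]
    rest_b = [w for w in dssr_words if w not in lvcsr_words]
    if not rest_a or not rest_b:
        return 0.0
    dists = [min(edit_distance(wa, wb) / max(len(wa), len(wb), 1)
                 for wb in rest_b) for wa in rest_a]
    return sum(dists) / len(dists)
```
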
In the process, we compute the edit distance between the phoneme sequences of words at both levels.", "cite_spans": [ { "start": 261, "end": 282, "text": "(Utsuro et al., 2004)", "ref_id": "BIBREF16" }, { "start": 317, "end": 340, "text": "(Komatani et al., 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Basic idea", "sec_num": "2.1" }, { "text": "The rules to judge an utterance are applied in the following order:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic idea", "sec_num": "2.1" }, { "text": "1. Compute the edit distance at the utterance level (ED_utter) between the LVCSR and each DSSR. Among the outputs whose ED_utter is less than thresh_utter, we select the output of the DSSR with the minimum ED_utter as the final output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic idea", "sec_num": "2.1" }, { "text": "2. Compute the edit distance at the word level (ED_word) between the LVCSR and each DSSR. Among the outputs whose ED_word is less than thresh_word, we select the output of the DSSR with the minimum ED_word as the final output. Otherwise, we select the output of the LVCSR as the final output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic idea", "sec_num": "2.1" }, { "text": "ED_utter is the edit distance value at the utterance level, and ED_word is the average of the edit distance values computed at the word level. Both values are normalized by the number of phonemes in the outputs. thresh_utter and thresh_word are threshold values for the judgment; they are decided experimentally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic idea", "sec_num": "2.1" }, { "text": "In the word-level computation, we first eliminate the word pairs that match completely. Next, we compute the edit distance for all combinations of the remaining words. Finally, we employ the minimum-distance combinations as the word-level edit distance. Figure 2 shows an example of the calculation of ED_utter and ED_word. In the figure, the dotted lines denote completely matched words, and the numerals with arrows denote the original edit distance of each word pair. In the alignment process, we select the pairs with the minimum edit distance; in other words, we admit overlapping word pairs, e.g., noue vs. no and no vs. no in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 234, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 639, "end": 647, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Basic idea", "sec_num": "2.1" }, { "text": "The OGSS model consists of an LVCSR and some DSSRs. The LVCSR is used for utterance verification, namely the output selection, and for capturing the context information in a dialogue. However, the accuracy of the LVCSR is generally insufficient, and this accuracy matters for the anaphora resolution process: if the antecedent does not exist in the output of the speech recognizer, the accuracy of the anaphora resolution decreases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recognizers", "sec_num": "2.2" }, { "text": "In this paper, we use two types of DSSRs: some small-scale DSSRs and a medium-scale DSSR. 
The small-scale DSSRs are used for particular domains or tasks; e.g., order utterances from elders who need care and order utterances from nurses. On the other hand, the medium-scale DSSR is used for capturing the context of the target situation (a livelihood support robot in this paper). In other words, it is an integrated DSSR that merges the small-scale DSSRs: its vocabulary is the union of the vocabularies of the small-scale DSSRs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recognizers", "sec_num": "2.2" }, { "text": "As a result, the multiple speech recognizer consists of one LVCSR, one medium-scale DSSR and some small-scale DSSRs. In the utterance verification process, our method compares the LVCSR with the small-scale DSSRs for the output selection. It also compares the LVCSR with the medium-scale DSSR in order to generate a context word list with high accuracy from the medium-scale DSSR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recognizers", "sec_num": "2.2" }, { "text": "In this section, we explain the anaphora resolution process in the OGSS model. Figure 3 shows its outline. In the figure, L1 is a content word list detected from the output of the medium-scale DSSR: in the utterance verification process, if the output from the medium-scale DSSR is similar to that from the LVCSR, the content words in the DSSR's output are stored in L1. In the same way, if an input is out-of-vocabulary for all DSSRs, that is, if the edit distance between the LVCSR and every small-scale DSSR is large, the output of the LVCSR is stored in L2. Otherwise, the output from a small-scale DSSR is stored in L2.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Understanding and Anaphora Resolution", "sec_num": "3" }, { "text": "The output of the selection process in the previous section is a raw output of a speech recognizer; for the anaphora resolution process, we need to analyze it. We convert outputs from the small-scale DSSRs into semantic frames, utilizing the grammar information of the DSSRs. Each DSSR consists of 100-200 words and approximately 100 grammar patterns including approximately 50 categories. Figure 4 (a) shows an example of the grammar patterns and categories. The categories often encode semantic constraints such as \"Drink_N\" and \"Location\". In this process, we also use a dictionary that describes the required slots of each verb; with this dictionary, we detect zero pronouns in utterances recognized by the small-scale DSSRs.", "cite_spans": [], "ref_spans": [ { "start": 424, "end": 436, "text": "Figure 4 (a)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Understanding of Outputs from OGSS model", "sec_num": "3.1" }, { "text": "For outputs from the LVCSR, we extract keywords by using rules based on surface expressions. For outputs of the medium-scale DSSR, we extract keywords by using the categories in the vocabulary. Figure 4 (b) shows examples of the process. In the figure, \"obj\", \"loc\" and \"agt\" denote case markers.", "cite_spans": [], "ref_spans": [ { "start": 199, "end": 211, "text": "Figure 4 (b)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Understanding of Outputs from OGSS model", "sec_num": "3.1" }, { "text": "If an utterance contains an anaphoric expression, our system detects the antecedent from the previous utterances. 
The anaphora resolution process is based on a scoring method for the words in L1 and L2. In the scoring method, we also focus on (1) the distance from the current utterance and (2) changes of the situation. First, we explain the first step of the scoring method, the weighting of each word. For a word w in L1 and L2, we set the weights in the following manner; each stored word keeps its location, i.e., the utterance in which it appeared, in L1 and L2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "For the dialogue logs from the medium-scale DSSR (L1), conf_1(w) = CM(w), w ∈ L1 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "where CM denotes the confidence measure computed by the LVCSR or the medium-scale DSSR for each word; its range is [0, 1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "L2 contains 3 types of outputs: outputs from the LVCSR, original outputs from the small-scale DSSRs, and outputs from the anaphora resolution. For the LVCSR and original DSSR outputs, we use the CM as the weight. conf_2(w) = CM(w), w ∈ L2 (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "If w is an output of the anaphora resolution process, we set a constant value instead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "conf_2(w) = c", "eq_num": "(3)" } ], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "The value is constant, and small compared with that of the original outputs, because the accuracy of the anaphora resolution process is not always high; its confidence is insufficient. Next, we compute a score for each w. Here we apply a decay factor d, based on the distance and the situation, to the scoring process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "d = 1 / dist^2 × s^n (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "where dist is the distance between the current utterance, which contains the anaphoric expression, and the previous utterance, which contains the antecedent. s is a parameter for the change of situation: we define a \"change of the speaker\" and a \"change of the location of the robot\" in a dialogue as changes of situation, which approximate changes of the topic in the conversation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "n is the number of such changes; if there is no change of situation for a target word w, n is 0. In this paper, we set s = 0.1. 
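A small worked sketch (ours; the distances and change counts are hypothetical) of the decay factor d of Eq. (4), assuming s = 0.1 as in the paper; the weighted combination of the two conf values follows in Eqs. (5)-(7):

```python
# Sketch (ours) of the decay factor of Eq. (4): d = 1/dist^2 * s^n,
# with dist >= 1 the utterance distance and n the number of situation
# changes (speaker change or robot-location change) in between.
def decay(dist, n_changes, s=0.1):
    return (1.0 / dist ** 2) * (s ** n_changes)

# A candidate word two utterances back, with one speaker change in
# between, is damped strongly:
assert decay(1, 0) == 1.0
assert abs(decay(2, 1) - 0.025) < 1e-12
```
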
We multiply conf_1(w) and conf_2(w) by the decay factor d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "conf_1(w) = conf_1(w) × d (5) conf_2(w) = conf_2(w) × d", "eq_num": "(6)" } ], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "The final score of w is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Score(w) = α × conf_1(w) + β × conf_2(w)", "eq_num": "(7)" } ], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "where α and β are weight parameters for the two conf values. We compute the scores of all candidates that appear in the previous N utterances, and select the word with the maximum score. In this paper, we set N = 10, with β = 1 and α set to a smaller value. Here α is the weight for the outputs from the medium-scale DSSR, and β is the weight for the outputs from the LVCSR and the small-scale DSSRs. We set α smaller than β because the outputs from the medium-scale DSSR often contain insertion errors; there are many out-of-vocabulary words in a chat.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anaphora Resolution", "sec_num": "3.2" }, { "text": "We used Julius as the LVCSR and Julian as the DSSRs (Lee et al., 2001) . Julius is a well-known large vocabulary continuous speech recognition decoder based on word N-grams and context-dependent HMMs; in this experiment, we used the original acoustic and language models. Julian consists of a vocabulary file and a grammar file: in the grammar file we describe sentence structures in a BNF style, using word category names as terminal symbols, and the vocabulary file defines the words of each category with their pronunciations (i.e., phoneme sequences). We designed grammar and vocabulary files for Julian that accept only specific utterances from users. In this experiment, we used 4 small-scale DSSRs that we constructed by hand. The DSSRs are as follows:", "cite_spans": [ { "start": 51, "end": 69, "text": "(Lee et al., 2001)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Speech recognizer in the experiment", "sec_num": "4.1" }, { "text": "Order utterances from patients: e.g., \"Please bring the remote controller on the table\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech recognizer in the experiment", "sec_num": "4.1" }, { "text": "Order utterances from nurses: e.g., \"Carry these meals to the patients' rooms\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech recognizer in the experiment", "sec_num": "4.1" }, { "text": "System commands: e.g., \"Move to the right by 50cm\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech recognizer in the experiment", "sec_num": "4.1" }, { "text": "Question utterances: e.g., \"Where is my cellphone?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech recognizer in the experiment", "sec_num": "4.1" }, { "text": "Each DSSR consists of approximately 200 words and 100 grammar patterns. 
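To give the flavor of these hand-built recognizers, here is a schematic fragment of a grammar and a vocabulary in the style of the Julius/Julian grammar kit (a simplified sketch with hypothetical category names and words, not the files used in the paper):

```
# grammar file (BNF-style; category names are the terminal symbols)
S       : NS_B ORDER NS_E
ORDER   : OBJECT PARTICLE BRING_V

# vocabulary file (each category lists words with phoneme sequences)
% NS_B
<s>        silB
% NS_E
</s>       silE
% OBJECT
rimokon    r i m o k o N
% PARTICLE
wo         o
% BRING_V
mottekite  m o q t e k i t e
```
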
For the medium-scale DSSR for the anaphora resolution, we also used Julian. Its vocabulary file contained the words of all small-scale DSSRs. Since the purpose of the medium-scale DSSR is to capture words in non-order utterances, sentence-level accuracy is not always important; however, it needs to handle spontaneous speech. Therefore, its grammar file consisted of word combinations of a fixed length; e.g., Noun-PP-Noun-PP-Verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech recognizer in the experiment", "sec_num": "4.1" }, { "text": "First, we evaluated the output selection with a dataset consisting of 20 utterances for each DSSR and 20 out-of-domain utterances such as greetings. The number of test subjects was 10. In other words, we evaluated our method with 1000 utterances: 5 categories (4 DSSRs 1 and the LVCSR) × 20 utterances × 10 test subjects. The thresh_utter and thresh_word were 0.26 and 0.08, respectively. These thresholds were determined in a preliminary experiment with another dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "The F-value of the output selection was 0.916 on average. In addition, the word recognition accuracy of each DSSR was 0.940 on average. We also verified that the F-value changed little when we varied the thresholds within the range of 0.20-0.26 2 . Therefore, our edit-distance-based output selection for a multiple recognizer is simple and robust.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Next, we evaluated the anaphora resolution process of our method combining 3 types of speech recognizers. The dataset of this experiment consisted of 206 utterances that included 53 anaphoric expressions. Figure 5 shows an example of a dialogue in this experiment; in the figure, \"###\" denotes a \"change of the speaker\" or a \"change of the location of the robot\". The number of test subjects was 2. Table 1 shows the experimental result. The baseline in the table denotes our method without the medium-scale DSSR; in other words, it did not handle the dialogue log L1. \"Related work\" is a scoring-based anaphora resolution method proposed by Shimada et al. (2009) . It accumulates the scores of each candidate over the previous N utterances (Σ Score) 3 .", "cite_spans": [ { "start": 667, "end": 689, "text": "Shimada et al. (2009)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 205, "end": 213, "text": "Figure 5", "ref_id": null }, { "start": 399, "end": 406, "text": "Table 1", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "To compare the related work with our method fairly, we also applied the medium-scale DSSR to it. The proposed method with the medium-scale DSSR outperformed both the baseline (the method without the medium-scale DSSR) and the related work (another method with the medium-scale DSSR). By using the medium-scale DSSR, the recognition accuracy of words in non-order utterances increased, which led to an improvement of the anaphora resolution accuracy (64.2% versus 71.7%). This result shows the effectiveness of incorporating the medium-scale DSSR for the anaphora resolution. The related work was based on the summation of the scores of all candidates in the log. 
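The contrast between the two aggregation strategies can be sketched as follows (our illustration; the function and the values are hypothetical):

```python
# Sketch (ours): the related work sums a candidate's scores over the log,
# while the proposed method takes the maximum single score.
def aggregate(scores, method):
    return sum(scores) if method == "sum" else max(scores)

# A noise word inserted repeatedly with small scores can outrank a correct
# antecedent under summation, but not under the maximum:
noise, antecedent = [0.2, 0.2, 0.2, 0.2], [0.6]
assert aggregate(noise, "sum") > aggregate(antecedent, "sum")  # 0.8 > 0.6
assert aggregate(noise, "max") < aggregate(antecedent, "max")  # 0.2 < 0.6
```
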
In such a method, the existence of noise words, that is, insertion errors from the speech recognizer, leads to a lower accuracy of the anaphora resolution (68.9% versus 71.7%). Although the accuracy of the anaphora resolution increased by using the medium-scale DSSR, the word accuracy for antecedents was still insufficient. Misrecognized words, especially deletion errors of the speech recognizer, decreased the accuracy of the anaphora resolution process. If the outputs of the speech recognizers and the resolved anaphoric expressions in the previous utterances were completely correct, that is, on the oracle data, the accuracy of the anaphora resolution became more than 95%. This result shows the significance of the accuracy of the speech recognizers that capture the words in non-order utterances. On the other hand, the grammars of our medium-scale DSSR were not designed carefully: they are a simple combination of words without a statistical model such as word n-grams. We need to reconsider the vocabulary and grammar files, or the language model, of the medium-scale DSSR to improve the accuracy. In addition, our method handles the change of the situation in a dialogue only through changes of the speaker and of the location. To improve the accuracy of the anaphora resolution, we need to incorporate a more detailed situation-change model, such as one based on the topics in the dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Figure 5: An example of a dialogue for the anaphora resolution. (The figure also marks zero pronouns in Japanese.)", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "For the utterance verification task, many approaches have been proposed. Sako et al. (2006) reported a method to discriminate a request to a system from a chat using AdaBoost. Machine learning techniques generally need a large amount of training data to generate a classifier with high accuracy, and constructing training data by hand is costly. Isobe et al. (2007) proposed a multi-domain speech recognition system based on the model likelihoods of different domain-specific language models. That method needs to recalculate a model to select an output, whereas our method only changes two thresholds. Komatani et al. (2007) reported an utterance verification method based on the difference of the acoustic likelihood values computed by two recognizers. Kumar et al. (2005) utilized the Bhattacharyya distance to measure the acoustic similarity of different languages for multilingual speech recognition. Using the difference of acoustic likelihoods is adequate for the verification task; combining such a method with ours is one direction for future work. For the anaphora resolution task, our method is based on a scoring process using the confidence measure, the distance and the situation changes. In studies of anaphora resolution on text, machine learning-based methods have been used (Iida et al., 2005; Ng and Cardie, 2002) , but they also need a large amount of training data. The most famous approach for zero pronouns is the centering theory (Kameyama, 1986) . Nariyama (2002) proposed a method that expands the centering approach. Minewaki et al. (2005) reported an utterance interpretation method based on the relevance theory. 
Incorporating such linguistic knowledge into our method is one of the most promising directions.", "cite_spans": [ { "start": 73, "end": 91, "text": "Sako et al. (2006)", "ref_id": "BIBREF14" }, { "start": 355, "end": 374, "text": "Isobe et al. (2007)", "ref_id": "BIBREF3" }, { "start": 631, "end": 654, "text": "Komatani et al. (2007)", "ref_id": "BIBREF5" }, { "start": 785, "end": 805, "text": "Kumar et al. (2005)", "ref_id": "BIBREF8" }, { "start": 1332, "end": 1351, "text": "(Iida et al., 2005;", "ref_id": "BIBREF2" }, { "start": 1352, "end": 1372, "text": "Ng and Cardie, 2002)", "ref_id": "BIBREF12" }, { "start": 1523, "end": 1539, "text": "(Kameyama, 1986)", "ref_id": "BIBREF4" }, { "start": 1542, "end": 1557, "text": "Nariyama (2002)", "ref_id": "BIBREF11" }, { "start": 1629, "end": 1651, "text": "Minewaki et al. (2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "4.3" }, { "text": "The most critical problem for anaphora resolution in speech understanding is insertion and deletion errors in the dialogue logs, namely the existence of noise words and the lack of the antecedent. Therefore, systems need to improve the word recognition accuracy for the anaphora resolution. As a solution to this problem, we applied a medium-scale speech recognizer to our method; this falls into the same category as the ROVER method (Fiscus, 1997) . Applying other types of speech recognizers to our method is one direction for future work.", "cite_spans": [ { "start": 421, "end": 435, "text": "(Fiscus, 1997)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "4.3" }, { "text": "Another approach to the improvement is letting users repair recognition errors. Since our task is an interaction with a robot, repairing errors within the conversation is an effective approach. Ogata and Goto (2005) proposed a speech input interface with a speech-repair function. Dialogue processing with visualization of the outputs and utterance generation from the robot is an interesting approach for our task.", "cite_spans": [ { "start": 196, "end": 217, "text": "Ogata and Goto (2005)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "4.3" }, { "text": "In this paper, we described a speech understanding method based on a multiple speech recognizer, which we call the \"OGSS model\". The method is a combination of one LVCSR and several DSSRs, and it realizes flexible and robust speech understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We evaluated two processes of the method: (1) output selection and (2) anaphora resolution. The output selection is based on the edit distance between the outputs; in the experiment, we obtained a high F-value (more than 0.9), which shows that the method is simple and robust. The anaphora resolution is based on a scoring process for each word with a confidence value in the dialogue logs; we also used the distance between the anaphoric expression and the antecedent, and changes of situation such as a change of the speaker, in the scoring process. Although the proposed method was effective compared with the baseline, the accuracy was not high (71.7%). The accuracy of the anaphora resolution was low because the accuracies of the LVCSR and the medium-scale DSSR were insufficient. 
To improve the accuracy of the anaphora resolution, we need speech recognizers with higher accuracy for capturing the content words in non-order utterances. One approach to this problem is to apply a statistical model to the medium-scale DSSR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Our future work includes (1) a large-scale experiment, especially for the anaphora resolution, (2) evaluation of the proposed method in other domains and (3) improvement of the accuracy of the medium-scale DSSR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "In this evaluation, we did not treat the medium-scale DSSR because it is a recognizer for the anaphora resolution. 2 The best F-value in this experiment was 0.924, in the case that thresh_utter = 0.20. 3 In contrast, the proposed method used \"max Score\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Incorporating confidence measures in the Dutch train timetable information system developed in the ARICE project", "authors": [ { "first": "C", "middle": [], "last": "Bouwman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sturm", "suffix": "" }, { "first": "L", "middle": [], "last": "Boves", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bouwman, C., J. Sturm, and L. Boves. 1999. Incorporating confidence measures in the Dutch train timetable information system developed in the ARICE project. In Proceedings of ICASSP.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER)", "authors": [ { "first": "J", "middle": [ "G" ], "last": "Fiscus", "suffix": "" } ], "year": 1997, "venue": "Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)", "volume": "", "issue": "", "pages": "347--352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fiscus, J. G. 1997. A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER). In Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 347-352.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The issue of combining anaphoricity determination and antecedent identification in anaphora resolution", "authors": [ { "first": "R", "middle": [], "last": "Iida", "suffix": "" }, { "first": "K", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2005, "venue": "International Conference on Natural Language Processing and Knowledge Engineering", "volume": "", "issue": "", "pages": "244--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iida, R., K. Inui, and Y. Matsumoto. 2005. The issue of combining anaphoricity determination and antecedent identification in anaphora resolution. In International Conference on Natural Language Processing and Knowledge Engineering (IEEE NLP-KE), pp. 
244-249.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A likelihood normalization method for the domain selection in the multi-decoder speech recognition system", "authors": [ { "first": "T", "middle": [], "last": "Isobe", "suffix": "" }, { "first": "K", "middle": [], "last": "Itou", "suffix": "" }, { "first": "K", "middle": [], "last": "Takeda", "suffix": "" } ], "year": 2007, "venue": "IEICE TRANSACTIONS on Information and Systems", "volume": "", "issue": "7", "pages": "1773--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isobe, T., K. Itou, and K. Takeda. 2007. A likelihood normalization method for the domain selec- tion in the multi-decoder speech recognition system. IEICE TRANSACTIONS on Information and Systems (Japanese Edition), 90(7), 1773-1780.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A property-sharing constraint in centering", "authors": [ { "first": "M", "middle": [], "last": "Kameyama", "suffix": "" } ], "year": 1986, "venue": "Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "200--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kameyama, M. 1986. A property-sharing constraint in centering. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, pp. 200-206.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Introducing utterance verification in spoken dialogue system to improve dynamic help generation for novice users", "authors": [ { "first": "K", "middle": [], "last": "Komatani", "suffix": "" }, { "first": "Y", "middle": [], "last": "Fukubayashi", "suffix": "" }, { "first": "T", "middle": [], "last": "Ogata", "suffix": "" }, { "first": "H", "middle": [ "G" ], "last": "Okuno", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "202--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Komatani, K., Y. Fukubayashi, T. Ogata, and H. G. Okuno. 2007. Introducing utterance veri- fication in spoken dialogue system to improve dynamic help generation for novice users. In Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue, pp. 202-205.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generating confirmation to distinguish phonologically confusing word pairs in spoken dialogue systems", "authors": [ { "first": "K", "middle": [], "last": "Komatani", "suffix": "" }, { "first": "R", "middle": [], "last": "Hamabe", "suffix": "" }, { "first": "T", "middle": [], "last": "Ogata", "suffix": "" }, { "first": "H", "middle": [ "G" ], "last": "Okuno", "suffix": "" } ], "year": 2005, "venue": "Proceedings of 4th IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems", "volume": "", "issue": "", "pages": "40--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Komatani, K., R. Hamabe, T. Ogata, and H. G. Okuno. 2005. Generating confirmation to distin- guish phonologically confusing word pairs in spoken dialogue systems. In Proceedings of 4th IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems, pp. 
40-45.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Flexible mixed-initiative dialogue management using concept-level confidence measures of speech recognizer output", "authors": [ { "first": "K", "middle": [], "last": "Komatani", "suffix": "" }, { "first": "T", "middle": [], "last": "Kawahara", "suffix": "" } ], "year": 2000, "venue": "Proceedings of International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "467--473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Komatani, K. and T. Kawahara. 2000. Flexible mixed-initiative dialogue management using concept-level confidence measures of speech recognizer output. In Proceedings of Interna- tional Conference on Computational Linguistics (COLING 2000), volume 1, pp. 467-473.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Multilingual speech recognition: A unified approach", "authors": [ { "first": "S", "middle": [ "C" ], "last": "Kumar", "suffix": "" }, { "first": "V", "middle": [ "P" ], "last": "Mohandas", "suffix": "" }, { "first": "H", "middle": [], "last": "Li", "suffix": "" } ], "year": 2005, "venue": "Proceedings of InterSpeech 2005", "volume": "", "issue": "", "pages": "3357--3360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kumar, S. C., V. P. Mohandas, and H. Li. 2005. Multilingual speech recognition: A unified approach. In Proceedings of InterSpeech 2005, pp. 3357-3360.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Julius -an open source real-time large vocabulary recognition engine", "authors": [ { "first": "A", "middle": [], "last": "Lee", "suffix": "" }, { "first": "T", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "K", "middle": [], "last": "Shikano", "suffix": "" } ], "year": 2001, "venue": "Proceedings of Eurospeech", "volume": "", "issue": "", "pages": "1691--1694", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, A., T. Kawahara, and K. Shikano. 2001. Julius -an open source real-time large vocabulary recognition engine. In Proceedings of Eurospeech, pp. 1691-1694.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Interpretation of utterances based on relevance theory: Toward the formalization of implicature with the maximum relevance", "authors": [ { "first": "S", "middle": [], "last": "Minewaki", "suffix": "" }, { "first": "K", "middle": [], "last": "Shimada", "suffix": "" }, { "first": "T", "middle": [], "last": "Endo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 9th Conference of the Pacific Association for Computational Linguistics (PACLING2005)", "volume": "", "issue": "", "pages": "211--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minewaki, S., K. Shimada, and T. Endo. 2005. Interpretation of utterances based on relevance the- ory: Toward the formalization of implicature with the maximum relevance. In Proceedings of the 9th Conference of the Pacific Association for Computational Linguistics (PACLING2005), pp. 211-216.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Grammar for ellipsis resolution in japanese", "authors": [ { "first": "S", "middle": [], "last": "Nariyama", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 9th International conference on Theoretical and Methodological Issues in Machine Translation", "volume": "", "issue": "", "pages": "135--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nariyama, S. 2002. Grammar for ellipsis resolution in japanese. 
In In Proceedings of the 9th International conference on Theoretical and Methodological Issues in Machine Translation, pp. 135-145.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improving machine learning approaches to coreference resolution", "authors": [ { "first": "V", "middle": [], "last": "Ng", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, V. and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 104-111.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Speech repair: Quick error correction just by using selection operation for speech input interfaces", "authors": [ { "first": "J", "middle": [], "last": "Ogata", "suffix": "" }, { "first": "M", "middle": [], "last": "Goto", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Interspeech 2005", "volume": "", "issue": "", "pages": "133--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ogata, J. and M. Goto. 2005. Speech repair: Quick error correction just by using selection operation for speech input interfaces. In Proceedings of Interspeech 2005, pp. 133-136.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "System request discrimination based on AdaBoost", "authors": [ { "first": "A", "middle": [], "last": "Sako", "suffix": "" }, { "first": "T", "middle": [], "last": "Takiguchi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Ariki", "suffix": "" } ], "year": 2006, "venue": "IPSJ technical report. SIG-SLP64", "volume": "", "issue": "", "pages": "19--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sako, A., T. Takiguchi, and Y. Ariki. 2006. System request discrimination based on AdaBoost. In IPSJ technical report. SIG-SLP64, pp. 19-24.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Speech understanding in a multiple recognizer with an anaphora resolution process", "authors": [ { "first": "K", "middle": [], "last": "Shimada", "suffix": "" }, { "first": "A", "middle": [], "last": "Uzumaki", "suffix": "" }, { "first": "M", "middle": [], "last": "Kitajima", "suffix": "" }, { "first": "T", "middle": [], "last": "Endo", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 11th Conference of the Pacific Association for Computational Linguistics (PACLING2009)", "volume": "", "issue": "", "pages": "262--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shimada, K., A. Uzumaki, M. Kitajima, and T. Endo. 2009. Speech understanding in a multiple recognizer with an anaphora resolution process. In Proceedings of the 11th Conference of the Pacific Association for Computational Linguistics (PACLING2009), pp. 
262-267.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Estimating highly confident portions based on agreement among outputs of multiple LVCSR models", "authors": [ { "first": "T", "middle": [], "last": "Utsuro", "suffix": "" }, { "first": "H", "middle": [], "last": "Nishizaki", "suffix": "" }, { "first": "Y", "middle": [], "last": "Kodama", "suffix": "" }, { "first": "S", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2004, "venue": "Systems and Computers in Japan", "volume": "35", "issue": "7", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Utsuro, T., H. Nishizaki, Y. Kodama, and S. Nakagawa. 2004. Estimating highly confident por- tions based on agreement among outputs of multiple LVCSR models. Systems and Computers in Japan, 35(7), 33-40.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The OGSS model and the effectiveness", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "The edit distance calculation", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "The anaphora resolution with the multiple recognizer.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Examples of the output analysis", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "text": "One Generalist and Some Specialists)", "content": "
Figure 1 (a): the OGSS model. Every recognizer listens to the input: the generalist (the LVCSR; no limitation on what it can recognize, but low accuracy) and the specialists (small-vocabulary DSSRs for orders, questions, explanations, etc.; they fail on out-of-domain utterances, but have high accuracy). The DSSR output that is most similar to the LVCSR output is selected as the best output.
Figure 1 (b): an example of the effectiveness. For the non-order utterance \"I dropped a remote controller under this bed.\", the (possibly misrecognized) LVCSR output supplies the context word \"remote controller\"; for the following order \"Please pick it up.\", the DSSR output \"pick it up\" is selected over the LVCSR's \"big it up\", and integration with the context information yields results with high accuracy.
", "type_str": "table", "num": null }, "TABREF3": { "html": null, "text": "I want to drink the canned drink on the table. Verb to Drink_V Drink_N (in/on/...) Location Mr. Tanaka is in the consulting room.", "content": "
For small-scale DSSR outputs
Bring (it)*
A Grammar in DSSRs
[Drink(Want), [[`canned drink' , obj], `on the table' , loc]] For LVCSR and medium-scale DSSR outputs Mr. *** -> agt Rules (a) consulting room -> loc Dictionary S->Sub [Tanaka, agt] [consulting room, loc] [Bring(Want) [zero pronoun, obj]]
(b)
", "type_str": "table", "num": null }, "TABREF4": { "html": null, "text": "### at nurse station .... There is a snack on the table. (Tsukue no ue ni okashi ga aruyone.) I heard that Mr. Kimura said ``I'm getting hungry.'' (Kimura-san ga onaka ga suita to itteita mitai.)", "content": "
Please carry it to Kimura's room.
(Sore wo Kimura-san ni motte itte.)
### at Kimura's room
Thank you.
(Arigato.)
I have a favor to ask.
(Onegai ga arunodakedo.)
I think that there is a canned drink in the refrigerator.
(Tasika, reizouko ni zyu-su ga atta hazu.)
I want to drink it.
(Sore wo nomitai no dakedo.)
Please bring (it) to me.
(Totte kureru?)
....
", "type_str": "table", "num": null }, "TABREF5": { "html": null, "text": "The accuracy of anaphora resolution.", "content": "
Method | Baseline | Related work | Proposed
Accuracy | 64.2% | 68.9% | 71.7%
", "type_str": "table", "num": null } } } }