|
{ |
|
"paper_id": "2007", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:50:46.690263Z" |
|
}, |
|
"title": "Dynamic n-best Selection and Its Application in Dialog Act Detection", |
|
"authors": [ |
|
{ |
|
"first": "Junling", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Bosch Research and Technology center", |
|
"location": { |
|
"addrLine": "4009 Miranda Ave. Palo Alto", |
|
"postCode": "94304", |
|
"region": "CA" |
|
} |
|
}, |
|
"email": "junling.hu@us.bosch.com" |
|
}, |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Morbini", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Bosch Research and Technology center", |
|
"location": { |
|
"addrLine": "4009 Miranda Ave. Palo Alto", |
|
"postCode": "94304", |
|
"region": "CA" |
|
} |
|
}, |
|
"email": "fabrizio.morbini@us.bosch.com" |
|
}, |
|
{ |
|
"first": "Fuliang", |
|
"middle": [], |
|
"last": "Weng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Bosch Research and Technology center", |
|
"location": { |
|
"addrLine": "4009 Miranda Ave. Palo Alto", |
|
"postCode": "94304", |
|
"region": "CA" |
|
} |
|
}, |
|
"email": "fu-liang.weng@us.bosch.com" |
|
}, |
|
{ |
|
"first": "Xue", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "McGill University Montreal", |
|
"location": { |
|
"postCode": "H3A 2A7", |
|
"region": "QC", |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "xueliu@cs.mcgill.ca" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose dynamically selecting n for nbest outputs returned from a dialog system module. We define a selection criterion based on maximum drop among probabilities, and demonstrate its theoretical properties. Applying this method to a dialog-act detection module, we show consistent higher performance of this method relative to all other n-best methods with fixed n. The performance metric we use is based on ROC area.", |
|
"pdf_parse": { |
|
"paper_id": "2007", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose dynamically selecting n for nbest outputs returned from a dialog system module. We define a selection criterion based on maximum drop among probabilities, and demonstrate its theoretical properties. Applying this method to a dialog-act detection module, we show consistent higher performance of this method relative to all other n-best methods with fixed n. The performance metric we use is based on ROC area.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Recent years have seen increasing application of machine learning in dialog systems. From speech recognizer, to natural language understanding and dialog manager, statistical classifiers are applied based on more data available from users. Typically, the results from each of these modules were sent to the next module as n-best list, where n is a fixed number.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we investigate how we can dynamically select the number n for n-best outputs returned from a classifier. We proposed a selection method based on the maximum drop between two adjacent probabilities of the outputs, where all probabilities are sorted from the highest to lowest. We call this method n*-best selection, where n* refers to a variable n.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We investigated the theoretical property of n*-best, particularly its optimality relative to the fixed nbest where n is any fixed number. The optimality metric we use is ROC (Receiver Operating Charac-teristic) area, which measures the tradeoff of false positive and false negative in a selection criterion. We test the empirical performance of n*-best vs. nbest of fixed n for the task of identifying the confidence of dialog act classification. In two very different datasets we use, we found consistent higher performance of n*-best than n-best for any fixed n.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper is the first attempt in providing theoretical foundation for dynamically selecting n-best outputs from statistical classifiers. The ROC area measure has recently been adopted by machine learning community, and starts to see its adoption by researchers on dialog systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Even though n*-best method is demonstrated here only for dialog act detection domain, it can be potentially applied to speech recognition, POS (partof-speech) tagging, statistical parser and any other modules that return n-best results in a dialog system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The n-best method has been used extensively in speech recognition and NLU. It is also widely used in machine translation (Toutanova and Suzuki, 2007) . Given that the system has little information on what is a good translation, all potential candidates are sent to a later stage, where a ranker makes a decision on the candidates. In most of these applications, the number of candidates n is a fixed number. The n-best method works well when the system uses multi-pass strategy to defer decision to later stage.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 149, |
|
"text": "(Toutanova and Suzuki, 2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamically selecting n for n-best outputs", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We call n*-best a variant of n-best where n is a variable, specifically the n*-best method selects the number of classes returned from a model, such that the number n* satisfies the following property:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "n*-best Selection", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": ") ( max arg * 1 + \u2212 = n n n p p n (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "n*-best Selection", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where and are the probabilities of class n and class n+1 respectively. In other words, n* is the cut-off point that maximizes the drop", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "n*-best Selection", |
|
"sec_num": "2.1" |
|
}, |
|
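{

"text": "As a concrete illustration (our addition, not part of the original paper), the selection rule in Equation (1) can be sketched in Python; the function name and the input format are assumptions:\n\ndef n_star(probs):\n    \"\"\"Return n* for class probabilities sorted from highest to lowest.\n\n    n* is the 1-based cut-off that maximizes the drop p_n - p_{n+1}.\n    \"\"\"\n    drops = [probs[i] - probs[i + 1] for i in range(len(probs) - 1)]\n    return max(range(len(drops)), key=lambda i: drops[i]) + 1\n\n# Example: n_star([0.4, 0.35, 0.1, 0.1, 0.05]) == 2, since the largest drop\n# (0.35 - 0.1) occurs between the second and third candidates.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "n*-best Selection",

"sec_num": "2.1"

},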
|
{ |
|
"text": "We have the following observation: When the output probabilities are ranked from the highest to the lowest, the accumulated probability distribution curve is a concave function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Theoretical Property of n*-best", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We further show that our derivation of n* is equivalent to maximizing the second derivative of the accumulative probability curve, when the number of classes approaches infinity. In other words,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Theoretical Property of n*-best", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": ")) 1 ( ' ' ( max arg * + \u2212 = n P n n ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Theoretical Property of n*-best", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Due to the page limit, we omit the proof here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Theoretical Property of n*-best", |
|
"sec_num": "2.2" |
|
}, |
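{

"text": "A one-step sketch of the omitted argument (our addition, assuming the backward second difference as the discrete analogue of P''): with the cumulative curve $P(n) = \\sum_{i \\le n} p_i$, we have $\\Delta^2 P(n+1) = P(n+1) - 2P(n) + P(n-1) = (P(n+1) - P(n)) - (P(n) - P(n-1)) = p_{n+1} - p_n$. Maximizing the drop $p_n - p_{n+1}$ is therefore exactly maximizing $-\\Delta^2 P(n+1)$, which approaches $-P''(n+1)$ as the number of classes grows.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Theoretical Property of n*-best",

"sec_num": "2.2"

},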
|
{ |
|
"text": "To compare the performance of the n*-best method to n-best selection of fixed n, we need to define an evaluation metric. The evaluation is based on how the n-best results are used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metric", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The task we study here is described in Figure 1 . The dialog-act classifier uses features computed from the parse tree of the user utterance to make predictions on the user's dialog acts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 47, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Task: Dialog Act Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The n-best results from the dialog-act classifier are sent to the decision component that determines whether the system is confident about the result of the classifier. If it is confident, it will pass the result to later stages of the dialog system. If it is not confident, the system will respond \"I don't understand\" and save the utterance for later training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Task: Dialog Act Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The decision on how confident we are about inter preting a sentence translates into a decision on whether to select that sentence for re-training. In this sense, this decision problem is the same as active leaning. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Task: Dialog Act Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Let S be the collection of data points that are marked as low confidence and will be labeled by a human. Let N 2 be the set of all new data. Let h be the confidence threshold and n the number we return from n-best results. We can see that (Figure 2 ) S is a function of both n and h. For a fixed h, the larger n is, the smaller S will be. Our goal is to choose the selection criterion that produces a good S. The optimal S is one that is small and contains only true negative instances.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 248, |
|
"text": "(Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Detection as Active Learning", |
|
"sec_num": "3.2" |
|
}, |
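{

"text": "To make the roles of n and h concrete, here is a hedged Python sketch (our addition; the rule that an utterance is low-confidence when its n-best probability mass falls below h is an assumption, not necessarily the paper's exact decision component):\n\ndef low_confidence(probs, n, h):\n    \"\"\"Assumed rule: low confidence when the n-best probability mass is below h.\"\"\"\n    return sum(sorted(probs, reverse=True)[:n]) < h\n\ndef select_S(dataset, n, h):\n    \"\"\"S = items sent for human labeling; for a fixed h, S shrinks as n grows.\"\"\"\n    return [item for item in dataset if low_confidence(item[\"probs\"], n, h)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Error Detection as Active Learning",

"sec_num": "3.2"

},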
|
{ |
|
"text": "In active learning research, the most commonly used evaluation metric is the error rate (Tur et al, 2005; Osugi et al, 2005) . The error rate can also be user utterances (false negatives). We find a better measure that is based on ROC curve. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 105, |
|
"text": "(Tur et al, 2005;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 124, |
|
"text": "Osugi et al, 2005)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Detection as Active Learning", |
|
"sec_num": "3.2" |
|
}, |
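{

"text": "For reference (our addition), the ROC quantities used in the experiments below can be computed with this minimal Python sketch:\n\ndef roc_point(tp, fp, tn, fn):\n    \"\"\"Return (FPR, TPR), the x and y coordinates of one ROC point.\"\"\"\n    return fp / (fp + tn), tp / (tp + fn)\n\ndef roc_area(points):\n    \"\"\"Trapezoidal area under an ROC curve given (FPR, TPR) points.\"\"\"\n    pts = sorted(points)\n    return sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Error Detection as Active Learning",

"sec_num": "3.2"

},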
|
{ |
|
"text": "We tested the performance of our n*-best method on two datasets. The first dataset contains 1178 user utterances and the second one contains 471 utterances. We use these two sets to simulate two situations: Case 1, a large training data and a small testing set; Case 2, a small training data and a large testing set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "All utterances in both datasets were hand labeled with dialog acts. There can be more than one dia-log act associated with each utterance. An example of training instance is: \"(a cheap restaurant), (Query:restaurant, Answer, Revision) \" the first part is the user utterance, the second part (referred as ) is the set of human-labeled dialog acts. In total, in the domain used for these tests, there are 30 possible user dialog acts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 234, |
|
"text": "(Query:restaurant, Answer, Revision)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We compared n*-best with fixed n-best methods with n from 1 to 6. For each of these methods, we calculate TP, FP, TN and FN for values of the threshold h ranging from 0.1 to 1 in steps of 0.05. Then we derived TPR and FPR and plotted the ROC curve. Figure 4 shows the ROC curves obtained by the different methods in Case 1. We can see that the ROC curve for n*-best method is better in most cases than the other methods with fixed n. Figure 5 shows the ROC curves in Case 2, where the model is trained on a small dataset and tested on a large dataset. We can see that the ROC curves for all methods are nearer to the nondiscrimination line than in the previous case. This suggests that the classifier has a lower discrimina tion quality given the small set used for training. However, the n*-best method still out-performs the other n-best methods in the majority of scenarios. To get a summary statistics, we calculated the size of the ROC area. Figures 6 and 7 plot the size of the ROC area of the various methods in the two test cases. We can see that n*-best out-performs all other n-best methods. Figure 7 . ROC Area for n*-best and other n-best methods (n* is represented as n=0)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 257, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 442, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF7" |
|
}, |
|
{ |
|
"start": 947, |
|
"end": 962, |
|
"text": "Figures 6 and 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1102, |
|
"end": 1110, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "d L", |
|
"sec_num": null |
|
}, |
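{

"text": "The evaluation loop described above can be reconstructed as follows (our sketch; counts_at is a hypothetical helper returning TP, FP, TN and FN for a given method, dataset, and threshold):\n\ndef roc_curve_for(method, data, counts_at):\n    \"\"\"Sweep h from 0.1 to 1.0 in steps of 0.05, as in the experiments,\n    and collect one (FPR, TPR) point per threshold.\"\"\"\n    points = []\n    for step in range(19):  # h = 0.10, 0.15, ..., 1.00\n        h = 0.1 + 0.05 * step\n        tp, fp, tn, fn = counts_at(method, data, h)\n        points.append((fp / (fp + tn), tp / (tp + fn)))\n    return sorted(points)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "4.2"

},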
|
{ |
|
"text": "We propose dynamic selecting n for n-best outputs returned from a classifier. We define a selection criterion based on maximum drop among probabilities, and call this method n*-best selection. We demonstrate its theoretical properties in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We measured the performance of our n*-best method using the ROC area that has been designed to provide a more complete performance measure for classification models. We showed that our n*best achieved better ROC curves in most cases. It also achieves better ROC area than all other n-best methods in two experiments (with opposite properties).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our method is not limited to detection of dialog acts but can be used also in other components of dialog systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "AUC optimization vs. error rate minimization", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cortes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mohri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Sebastian Thrun, Lawrence Saul, and Bernhard Sch olkopf", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Cortes, M. Mohri. 2004. AUC optimization vs. error rate minimization. Advances in Neural Information Processing Systems 16, eds., Sebastian Thrun, Law- rence Saul, and Bernhard Sch olkopf, MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Active Learning to Maximize Area Under the ROC Curve", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Culver", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deng", |
|
"middle": [], |
|
"last": "Kun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Scott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Sixth International Conference on Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "149--158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Culver, Deng Kun, and Stephen Scott. 2006. Ac- tive Learning to Maximize Area Under the ROC Curve. Proceedings of the Sixth International Con- ference on Data Mining, IEEE Computer Society. 149-158.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Proceedings of the Dialogs on dialog: Multidisciplinary Evaluation of Advanced Speech-based Interactive Systems. Interspeech2006-ICSLP satellite workshop", |
|
"authors": [ |
|
{ |
|
"first": "Sangkeun", |
|
"middle": [], |
|
"last": "Jung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cheongjae", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gary", |
|
"middle": [ |
|
"Geunbae" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sangkeun Jung, Cheongjae Lee, Gary Geunbae Lee. 2006. Dialog Studio: An Example Based Spoken Dialog System Development Workbench. 2006. Pro- ceedings of the Dialogs on dialog: Multidisciplinary Evaluation of Advanced Speech-based Interactive Systems. Interspeech2006-ICSLP satellite workshop, Pittsburgh.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Balancing Exploration and Exploitation: A New Algorithm for Active Machine Learning boundaries", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Osugi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deng", |
|
"middle": [], |
|
"last": "Kun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Scott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Fifth IEEE International Conference on Data Mining (ICDM'05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "330--337", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Osugi, Deng Kun, and Stephen Scott. 2005. Balancing Exploration and Exploitation: A New Al- gorithm for Active Machine Learning boundaries. Proceedings of the Fifth IEEE International Confer- ence on Data Mining (ICDM'05). 330-337.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Active Learning to Maximize Area Under the ROC Curve", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hisami", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Sixth IEEE International Conference on Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "149--158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristina Toutanova and Hisami Suzuki. 2007. Generat- ing Case Markers in Machine Translation. Proceed- ings of NAACL-HLT 2007, Rochester, New York. 49- 56.Matt Culver, Deng Kun, and Stephen Scott. 2006. Active Learning to Maximize Area Under the ROC Curve. Proceedings of the Sixth IEEE International Conference on Data Mining. 149-158.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Combining active and semi-supervised learning for spoken language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Gokhan", |
|
"middle": [], |
|
"last": "Tur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dilek", |
|
"middle": [], |
|
"last": "Hakkani-T\u00fcr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Schapire", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Speech Communication", |
|
"volume": "45", |
|
"issue": "2", |
|
"pages": "171--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gokhan Tur, Dilek Hakkani-T\u00fcr and Robert E.Schapire. 2005. Combining active and semi-supervised learn- ing for spoken language understanding. Speech Communication, 45(2):171-186.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Detection Dialog Act with Confidence", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "The Decreasing set of S as n increases", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "is the number of true positives and FP is the number of false positives. This measure does not capture the trade off between giving the user wrong answers (false positive) and rejecting too many properly classified", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "Characteristic) curve is a graphical plot of the fraction of true positives vs. the fraction of false positive. ROC curve is an alternative to classical machine learning metrics such as misclassification rate. An ROC space is defined by FPR (False Positive Rate) and TPR (True Positive Rate) as x and y axes respectivelyThe best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC space, representing the case in which all only true positives are returned by a particular model. The 45 degree diagonal line is called the no-discrimination line and represents the classifier that returns the same percentage of true positive and false positive.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"text": "ROC curve and ROC area", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"text": "ROC curves from n*-best and n-best", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF7": { |
|
"text": "ROC curves obtained by n* and n-best .", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
} |
|
} |
|
} |
|
} |