{
"paper_id": "Y08-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:37:50.685910Z"
},
"title": "NIST 2007 Language Recognition Evaluation: From the Perspective of IIR *",
"authors": [
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Bin",
"middle": [],
"last": "Ma",
"suffix": "",
"affiliation": {},
"email": "mabin@i2r.a-star.edu.sg"
},
{
"first": "Kong-Aik",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": "kalee@i2r.a-star.edu.sg"
},
{
"first": "Khe-Chai",
"middle": [],
"last": "Sim",
"suffix": "",
"affiliation": {},
"email": "kcsim@i2r.a-star.edu.sg"
},
{
"first": "Hanwu",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {},
"email": "hwsun@i2r.a-star.edu.sg"
},
{
"first": "Rong",
"middle": [],
"last": "Tong",
"suffix": "",
"affiliation": {},
"email": "tongrong@i2r.a-star.edu.sg"
},
{
"first": "Donglai",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {},
"email": "dzhu@i2r.a-star.edu.sg"
},
{
"first": "Changhuai",
"middle": [],
"last": "You",
"suffix": "",
"affiliation": {},
"email": "echyou@i2r.a-star.edu.sg"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the Institute for Infocomm Research (IIR) system for the 2007 Language Recognition Evaluation (LRE) conducted by the National Institute of Standards and Technology (NIST). The submitted system is a fusion of multiple state-ofthe-art language classifiers using diversified discriminative language cues. We implemented several state-of-the-art algorithms using both phonotactic and acoustic features. We also investigated the system fusion and score calibration strategy to improve the performance of language recognition, and worked out a pseudo-key analysis approach to cross-validate the performance of the individual classifiers on the evaluation data. We achieve an equal-error-rate (EER) of 1.67 % on the close-set general language recognition test.",
"pdf_parse": {
"paper_id": "Y08-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the Institute for Infocomm Research (IIR) system for the 2007 Language Recognition Evaluation (LRE) conducted by the National Institute of Standards and Technology (NIST). The submitted system is a fusion of multiple state-ofthe-art language classifiers using diversified discriminative language cues. We implemented several state-of-the-art algorithms using both phonotactic and acoustic features. We also investigated the system fusion and score calibration strategy to improve the performance of language recognition, and worked out a pseudo-key analysis approach to cross-validate the performance of the individual classifiers on the evaluation data. We achieve an equal-error-rate (EER) of 1.67 % on the close-set general language recognition test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic spoken language recognition (SLR) is a process of determining the identity of the language in a spoken document. As multilingual applications are demanded by the emerging need for globalization and the growing international business interflow, SLR has become an enabling technology in many applications such as multilingual conversational systems (Zue and Glass, 2000) , multilingual speech recognition and translation (Waibel et al., 2000) , and spoken document retrieval (Dai et al. 2003) . It is also a topic of great importance in the areas of intelligence and security, where the language identities of recorded messages and archived materials need to be established before any information can be extracted. SLR technology also facilitates massive on-line language routing for voice surveillance over telephone network.",
"cite_spans": [
{
"start": 357,
"end": 378,
"text": "(Zue and Glass, 2000)",
"ref_id": "BIBREF0"
},
{
"start": 429,
"end": 450,
"text": "(Waibel et al., 2000)",
"ref_id": "BIBREF1"
},
{
"start": 483,
"end": 500,
"text": "(Dai et al. 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The National Institute of Standards and Technology (NIST) has conducted a series of evaluations of SLR technology in 1996 , 2003 , 2005 (NIST, 2007 . The language recognition evaluations (LREs) focus on language and dialect detection in the context of conversational telephony speech. They are conducted to foster research progress, with the goals of exploring promising new ideas in language recognition, developing advanced technology incorporating these ideas, and measuring the performance of this technology. The Institute for Infocomm Research (IIR) team has participated in the 2005 and 2007 NIST LREs and demonstrated the state-of-the-art technologies.",
"cite_spans": [
{
"start": 99,
"end": 121,
"text": "SLR technology in 1996",
"ref_id": null
},
{
"start": 122,
"end": 128,
"text": ", 2003",
"ref_id": "BIBREF15"
},
{
"start": 129,
"end": 135,
"text": ", 2005",
"ref_id": "BIBREF10"
},
{
"start": 136,
"end": 147,
"text": "(NIST, 2007",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "One of the fundamental issues in SLR is to explore the discriminative cues for spoken languages. In the state-of-the-art language recognition systems, these cues mainly come from the acoustic features (Sugiyama, 1991; Torres-Carassquilo et al., 2002; Burget et al., 2006; Campbell et al., 2006) and phonotactic representations (Hazen and Zue, 1994; Zissman, 1996; Berkling and Barnard, 1994; Corredor-Ardoy et al., 1997; Li and Ma, 2005; Ma, Li, and Tong, 2007) , which reflect different aspects of spoken language characteristics. Another issue is how to effectively organize and exploit these language cues obtained from multiple sources in the recognition system design for the best performance.",
"cite_spans": [
{
"start": 201,
"end": 217,
"text": "(Sugiyama, 1991;",
"ref_id": "BIBREF3"
},
{
"start": 218,
"end": 250,
"text": "Torres-Carassquilo et al., 2002;",
"ref_id": "BIBREF4"
},
{
"start": 251,
"end": 271,
"text": "Burget et al., 2006;",
"ref_id": "BIBREF5"
},
{
"start": 272,
"end": 294,
"text": "Campbell et al., 2006)",
"ref_id": "BIBREF6"
},
{
"start": 327,
"end": 348,
"text": "(Hazen and Zue, 1994;",
"ref_id": "BIBREF7"
},
{
"start": 349,
"end": 363,
"text": "Zissman, 1996;",
"ref_id": "BIBREF8"
},
{
"start": 364,
"end": 391,
"text": "Berkling and Barnard, 1994;",
"ref_id": "BIBREF8"
},
{
"start": 392,
"end": 420,
"text": "Corredor-Ardoy et al., 1997;",
"ref_id": "BIBREF9"
},
{
"start": 421,
"end": 437,
"text": "Li and Ma, 2005;",
"ref_id": "BIBREF10"
},
{
"start": 438,
"end": 461,
"text": "Ma, Li, and Tong, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Significant improvements in automatic speech recognition (ASR) have been achieved through exploiting the acoustic features representing the temporal properties of speech spectrum. These acoustic features, such as Mel-frequency Cepstral Coefficients (MFCCs), are also good choices to be the front-ends in language recognition systems. Gaussian mixture model (GMM), which can be seen as a one-state hidden Markov model (HMM) (Rabiner, 1989) , is a simple modeling method to provide a multimodal density and is reasonably accurate when speech data are generated from a set of Gaussian distributions. It has demonstrated a great success in textindependent speaker recognition (Reynolds, Quatieri, and Dunn, 2000) . In language recognition, GMM is also an effective method to model the unique characteristics among languages (Torres-Carassquilo et al., 2002) . The support vector machine (SVM) has proven to be a powerful classifier in many pattern classification tasks. It is a discriminative classifier to separate two classes with a hyperplane in a high-dimensional space. The generalized linear discriminant sequence kernel (GLDS) has been proposed to apply SVM for speaker and language recognition (Campbell et al., 2006) . The cepstral feature vectors extracted from an utterance are expanded to a high-dimensional space by calculating all the monomials.",
"cite_spans": [
{
"start": 423,
"end": 438,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF12"
},
{
"start": 672,
"end": 708,
"text": "(Reynolds, Quatieri, and Dunn, 2000)",
"ref_id": "BIBREF13"
},
{
"start": 820,
"end": 853,
"text": "(Torres-Carassquilo et al., 2002)",
"ref_id": "BIBREF4"
},
{
"start": 1198,
"end": 1221,
"text": "(Campbell et al., 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In recent years, phonotactic features have been shown to provide effective cues for language recognition. The phonotactic features are extracted from an utterance to represent phonetic constraints in a language. Although common sounds are shared considerably across spoken languages, the statistics of these sounds, such as phone n-gram, can differ considerably from one language to another. Parallel Phone Recognizers followed by Language Models (PPR-LM) (Zissman, 1996) uses multiple parallel phone recognizers to convert the input utterance into a phone token sequence. It is followed by a set of n-gram phone language models that imposes constraints on phone decoding and provides language scores. Instead of n-gram phone language models, vector space modeling (VSM) was proposed as the classifier (Li, Ma, and Lee, 2007) , called PPR-VSM. For each phone sequence generated from the multiple phone recognizers, the occurrences of phone n-grams are counted. A phone sequence is then represented as a highdimensional vector of n-gram occurrence. SVM is used as the classifier on the concatenated ngram occurrence vectors.",
"cite_spans": [
{
"start": 456,
"end": 471,
"text": "(Zissman, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 802,
"end": 825,
"text": "(Li, Ma, and Lee, 2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "It is generally agreed upon that the integration with different cues of discriminative information can improve the performance of language recognition (Adda-Decker et al., 2003) . The information extraction and organization of multiple sources has been critical to a successful language recognition system (Singer et al., 2003; Tong et al., 2006) . In this paper, we will report our language recognition system submitted to the 2007 NIST LRE. The system is based on the fusion of multiple classifiers, each providing unique discriminative cue for language classification. In order to avoid a spoiled classifier in the submitted fusion system, we have designed a pseudo key analysis approach to check the integrity of each individual classifier before the system fusion.",
"cite_spans": [
{
"start": 151,
"end": 177,
"text": "(Adda-Decker et al., 2003)",
"ref_id": "BIBREF15"
},
{
"start": 306,
"end": 327,
"text": "(Singer et al., 2003;",
"ref_id": "BIBREF16"
},
{
"start": 328,
"end": 346,
"text": "Tong et al., 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The remainder of this paper is organized as follows. The evaluation data and evaluation metric of the 2007 NIST LRE will be introduced in Section 2. The system structure together with the phonotactic and acoustic language classifiers will be presented in Section 3. The fusion of multiple language classifiers and language recognition results on the 2007 NIST LRE evaluation data will be described in Section 4. The pseudo key analysis will be shown in Section 5. Finally in Section 6, we summarize our findings in language recognition. Both closed-set and open-set tests in the six categories were conducted. For the closed-set tests, the non-target languages will be limited to those languages and dialects known to the system. For the open-set test the non-target languages will also include all other unknown languages such as Italian, Punjabi, Tagalog, Indonesian, and French. These unknown languages were not disclosed to participants, and the training data for these languages were not made available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "There are three test conditions to evaluate the system performance under different test segment durations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "2.1."
},
{
"text": "\u2022 3 seconds of speech (2-4 seconds actual)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "2.1."
},
{
"text": "\u2022 10 seconds of speech (7-13 seconds actual)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "2.1."
},
{
"text": "\u2022 30 seconds of speech (25-35 seconds actual)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "2.1."
},
{
"text": "The silence was not removed from speech so a segment could be much longer. There are 2510 segments for each of the three durations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "2.1."
},
{
"text": "All the phonotactic and acoustic classifiers were trained with the LDC CallFriend corpus 1 and the LRE 2007 development databases released by NIST to all the participants. The phone recognizers used for phonotactic features were trained with OGI Multilingual database (Muthusamy, Cole, and Oshika, 1992) and IIR-LID database (Tong et al., 2006) . The weights of fusion system were tuned on the LRE 1996 LRE , 2003 LRE , 2005 databases as well as the LRE 2007 development database.",
"cite_spans": [
{
"start": 268,
"end": 303,
"text": "(Muthusamy, Cole, and Oshika, 1992)",
"ref_id": null
},
{
"start": 325,
"end": 344,
"text": "(Tong et al., 2006)",
"ref_id": "BIBREF17"
},
{
"start": 394,
"end": 402,
"text": "LRE 1996",
"ref_id": null
},
{
"start": 403,
"end": 413,
"text": "LRE , 2003",
"ref_id": null
},
{
"start": 414,
"end": 424,
"text": "LRE , 2005",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Development Data",
"sec_num": "2.2."
},
{
"text": "The primary evaluation metric is taken as the average cost performance avg C (NIST LRE, 2007) , which indicates the pair-wise language recognition performance, represented in terms of detection miss and false alarm probabilities, for all target/non-target language pairs. For the case of closed-set test condition, the avg C is given by",
"cite_spans": [
{
"start": 77,
"end": 93,
"text": "(NIST LRE, 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "2.3."
},
{
"text": "( ) ( ) ( ) tar non avg miss FA tar tar 1 1 0.5 0.5 , 1 l L l L C P l P ll N N \u2032 \u2208 \u2208 \u23a7 \u23ab \u2032 = + \u00d7 \u23a8 \u23ac \u2212 \u23a9 \u23ad \u2211 \u2211 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "2.3."
},
{
"text": "where tar L is the set of tar N target languages (e.g., tar 14 N = for general LR). Notice that the miss probability P miss is computed separately for each target language. All other languages are treated as non-target languages to compute the false alarm probabilities P FA for each target/non-target language pairs. A complete definition of C avg can be found in (NIST LRE, 200) . In addition to the C avg , we also report the results in terms of the average equal-error-rate (EER). That is, we compute the EER for each of the target language and take their average as the performance measure.",
"cite_spans": [
{
"start": 365,
"end": 375,
"text": "(NIST LRE,",
"ref_id": null
},
{
"start": 376,
"end": 380,
"text": "200)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "2.3."
},
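To make Eq. (1) concrete, here is a minimal sketch (not the official NIST scoring tool) that computes C_avg for the closed-set condition from per-language miss and pair-wise false-alarm probabilities; the language names and figures below are made up for illustration.

```python
import numpy as np

def c_avg(p_miss, p_fa):
    """Average cost of Eq. (1), closed-set condition.

    p_miss: dict mapping each target language l to P_miss(l).
    p_fa:   dict mapping each ordered pair (l, l') to P_FA(l, l'), l != l'.
    """
    langs = sorted(p_miss)
    n_tar = len(langs)
    total = 0.0
    for l in langs:
        # average false-alarm probability over the N_tar - 1 non-target languages
        fa = np.mean([p_fa[(l, lp)] for lp in langs if lp != l])
        total += 0.5 * p_miss[l] + 0.5 * fa
    return total / n_tar

# toy example with three target languages
p_miss = {"ara": 0.03, "cmn": 0.04, "eng": 0.02}
p_fa = {(l, lp): 0.01 for l in p_miss for lp in p_miss if l != lp}
print(c_avg(p_miss, p_fa))  # ~0.02
```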
{
"text": "The IIR system submitted to the 2007 NIST LRE is a fusion of multiple language classifiers. Figure 1 shows the overall framework.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 100,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Description",
"sec_num": "3."
},
{
"text": "The first stage of the feature extraction process is the Voice Activity Detection (VAD). Two types of VAD were used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.1."
},
{
"text": "\u2022 Frame-based VAD For the acoustic classifiers, an energy based voice activity detector (VAD) is applied to remove silence frames and to retain only the high quality speech frames for language recognition. The frames whose energy level is more than 30dB below the maximum energy of the entire utterance are considered silence and therefore removed. Furthermore, if there are more than 40% of the frames are retained, only the top 40% of the frames with higher SNR are retained. The rest of the frames are discarded. There are approximately 30% of the frames which are actually selected for further processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.1."
},
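A minimal sketch of the frame-based VAD described above, assuming frames have already been extracted; the 30dB floor and the 40% cap follow the text, while using frame energy as a stand-in for SNR when ranking frames is a simplification for illustration.

```python
import numpy as np

def frame_vad(frames, keep_fraction=0.4, floor_db=30.0):
    """Energy-based frame selection (sketch).

    frames: (num_frames, frame_len) array of speech samples.
    Returns the sorted indices of the retained frames.
    """
    energy_db = 10.0 * np.log10(np.sum(frames ** 2, axis=1) + 1e-10)
    # drop frames more than floor_db below the utterance maximum
    speech = np.where(energy_db > energy_db.max() - floor_db)[0]
    # if too many frames survive, keep only the top fraction (energy as SNR proxy)
    max_keep = int(keep_fraction * len(frames))
    if len(speech) > max_keep:
        speech = speech[np.argsort(energy_db[speech])[::-1][:max_keep]]
    return np.sort(speech)
```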
{
"text": "\u2022 Segment-based VAD For phonotactic classifiers, segment-based VAD is used. Based on the VAD speech frame index obtained in the above, we first join continuous speech frames to form the speech segments. If the resulting segment is longer than 8 seconds, the segment is further split at the frame in that segment with the lowest energy. This is repeated until the resulting segment is less than 8 seconds in long. The final segments are padded with 200ms silence at both ends. After VAD, two types of short time cepstral features, Mel Frequency Cepstral Coefficients (MFCCs) and Linear Prediction Cepstral Coefficients (LPCCs), are adopted as the basic features for acoustic classifiers. To capture temporal information across multiple frames, Shifted Delta Cepstral (SDC) coefficients (Torres-Carassquilo et al., 2002) are further applied to the framebased MFCCs and LPCCs.",
"cite_spans": [
{
"start": 785,
"end": 818,
"text": "(Torres-Carassquilo et al., 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.1."
},
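For illustration, a sketch of the SDC computation applied to the frame-based cepstra; the paper does not give its SDC configuration, so the d, p, k values below (a common 7-1-3-7 style setup) are assumptions.

```python
import numpy as np

def sdc(cepstra, d=1, p=3, k=7):
    """Shifted Delta Cepstral features (sketch).

    cepstra: (num_frames, num_ceps) array of MFCC or LPCC vectors.
    Each output row stacks k delta blocks taken p frames apart.
    """
    t_max, n = cepstra.shape
    out = np.zeros((t_max, n * k))
    for t in range(t_max):
        blocks = []
        for i in range(k):
            hi = min(t + i * p + d, t_max - 1)          # clamp at the utterance edges
            lo = min(max(t + i * p - d, 0), t_max - 1)
            blocks.append(cepstra[hi] - cepstra[lo])
        out[t] = np.concatenate(blocks)
    return out
```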
{
"text": "The phonotactic classifiers use multiple phone recognizers as the front-end to derive phonotactic statistics of a language. Since the individual phone recognizers are trained on different languages, they capture different acoustic characteristics from the speech data. Therefore, combining these recognizers together improves the overall language recognition performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactic Classifiers",
"sec_num": "3.2."
},
{
"text": "The PPR front-end can be followed by both the phone n-gram language models (LM) (Zissman, 1996) and the vector space modeling (VSM) backend (Li, Ma, and Lee, 2007) . The LM backend evaluates each token sequence using multiple language models, each of which describes a token sequence from the perspective of a target language. With VSM backend, the n-gram statistics from each token sequence form a high-dimensional feature vector, also known as a bag-of-sounds (BOS) vector (Li and Ma, 2005) . A composite vector is constructed by stacking multiple bag-of-sounds vectors derived from multiple token sequences.",
"cite_spans": [
{
"start": 80,
"end": 95,
"text": "(Zissman, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 140,
"end": 163,
"text": "(Li, Ma, and Lee, 2007)",
"ref_id": "BIBREF14"
},
{
"start": 475,
"end": 492,
"text": "(Li and Ma, 2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactic Classifiers",
"sec_num": "3.2."
},
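A sketch of the bag-of-sounds representation: count unigrams and bigrams in one decoded phone token sequence, lay them out in a fixed order, and (not shown) concatenate the vectors of all parallel recognizers into the composite vector; function names and the restriction to n <= 2 are illustrative.

```python
from collections import Counter
from itertools import product

def bag_of_sounds(tokens, inventory):
    """Unigram + bigram count vector for one phone token sequence (sketch).

    tokens:    list of phone labels produced by one phone recognizer.
    inventory: list of all phones of that recognizer.
    """
    counts = Counter(tuple(tokens[i:i + k])
                     for k in (1, 2)
                     for i in range(len(tokens) - k + 1))
    # fixed ordering: all unigrams, then all bigrams (n + n^2 entries)
    patterns = [(p,) for p in inventory] + list(product(inventory, repeat=2))
    return [counts[pat] for pat in patterns]

# example: a short decoded sequence over a 3-phone inventory
print(bag_of_sounds(["a", "b", "a", "c"], ["a", "b", "c"]))
```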
{
"text": "With the PPR front-end, the backend of the language classifier can be language models for capturing the phonotactic constraints for each target language. PPR-LM approach (Zissman, 1996) uses the PPR front-end to convert a spoken utterance into multiple sequences of phones. Then a set of L n-gram phone language models estimates the likelihood phonotactic scores for the spoken documents in order to produce classification decisions.",
"cite_spans": [
{
"start": 170,
"end": 185,
"text": "(Zissman, 1996)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PPR-LM Classifier",
"sec_num": "3.2.1."
},
{
"text": "Suppose that we have F phone recognizers with a phone inventory of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PPR-VSM Classifier",
"sec_num": "3.2.2."
},
{
"text": "{ 1 , , , v v v \u03c4 = K } , F v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PPR-VSM Classifier",
"sec_num": "3.2.2."
},
{
"text": "K and the number of phones in v \u03c4 is n \u03c4 . An utterance is decoded by these phone recognizers into F independent sequences of phone tokens. Each of these token sequences can be expressed by a high dimensional phonotactic feature vector with the n-gram counts. The dimension of the feature vector is equal to the total number of n-gram patterns needed to highlight the overall behavior of the utterance. If unigram and bigram are the only concerns, we will have a vector of For each target language, an SVM is trained by using the composite feature vectors in the target language as the positive set and the composite feature vectors in all other languages as the negative set. With L target languages, we project the high dimensional composite feature vectors into a discriminative feature vector with a much lower dimension (Ma, Li, and Tong, 2007) . We formulate the language recognition as a hypothesis test. For each target language, we build a language detector which consists of two GMMs { , } \u03bb \u03bb + \u2212 . The GMM trained on the discriminative vectors of the target language is called the positive model \u03bb + , while the GMM trained on those of its competing languages is called the negative model \u03bb \u2212 . We define the confidence of a test sample O belonging to a target language as the posterior odds in a hypothesis test under the Bayesian interpretation. We have 0 H , which hypothesizes that O is language \u03bb + , and 1 H , which hypothesizes otherwise. The posterior odd is approximated by the",
"cite_spans": [
{
"start": 825,
"end": 849,
"text": "(Ma, Li, and Tong, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PPR-VSM Classifier",
"sec_num": "3.2.2."
},
{
"text": "likelihood ratio ( ) O \u039b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PPR-VSM Classifier",
"sec_num": "3.2.2."
},
{
"text": "that is used for the final language recognition decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PPR-VSM Classifier",
"sec_num": "3.2.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( | ) log ( | ) O p O m p O m + \u2212 \u239b \u239e \u239c \u239f \u239d \u23a0 \u039b =",
"eq_num": "(2)"
}
],
"section": "PPR-VSM Classifier",
"sec_num": "3.2.2."
},
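A rough sketch of the detector behind Eq. (2): a positive and a negative GMM are trained on the low-dimensional discriminative vectors, and the log-likelihood ratio serves as the detection score. The scikit-learn GMMs and the component count are stand-ins, not the paper's actual implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_detector(pos_vectors, neg_vectors, n_components=4, seed=0):
    """Train the positive/negative GMM pair of one language detector (sketch)."""
    pos = GaussianMixture(n_components, covariance_type="diag",
                          random_state=seed).fit(pos_vectors)
    neg = GaussianMixture(n_components, covariance_type="diag",
                          random_state=seed).fit(neg_vectors)
    return pos, neg

def llr(pos, neg, o):
    """Eq. (2): log-likelihood ratio of a test vector o under the two GMMs."""
    o = np.atleast_2d(o)
    return float(pos.score(o) - neg.score(o))
```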
{
"text": "In the PPR framework, the languages of parallel phone recognizers, also known as phone tokenizers, and target languages may not have to be the same languages. For example, an English phone recognizer functions as a human listener of English background, trying to extract the discriminative information from the spoken utterances of each target language from its perspective. The discriminative information is expressed in an English phone sequence. In general, the performance gain increases with a greater number of parallel recognizers. We proposed to design the target-oriented phone tokenizers (TOPTs) (Tong et al., 2008) rather to use the same phone recognizer for all the target languages in the PPR practice. For example, Arabic-oriented English phone tokenizer, Mandarin-oriented English phone tokenizer, as Arabic and Mandarin each is believed to have its unique phonotactic features to an English listener. Note that not all the phones and their phonotactics in the target language may not provide equally discriminative information to the listener, it is desirable that the phones in each of the TOPTs can be those extracted from the full phone set of a phone recognizer, and having highest discriminative ability in distinguishing the target language from other languages.",
"cite_spans": [
{
"start": 606,
"end": 625,
"text": "(Tong et al., 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Target-Oriented Phone Tokenizer (TOPT)",
"sec_num": "3.2.3."
},
{
"text": "The target-oriented phone selection strategy is illustrated in Figure 2 . Assuming we have a language recognition task of L target languages, given a phone recognizer with phone inventory ",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 71,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Target-Oriented Phone Tokenizer (TOPT)",
"sec_num": "3.2.3."
},
{
"text": "1 2 { , , , , } i n v v v v v = L L which",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target-Oriented Phone Tokenizer (TOPT)",
"sec_num": "3.2.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k vk v k v k W w w w = L .",
"eq_num": "{ } m"
}
],
"section": "Target-Oriented Phone Tokenizer (TOPT)",
"sec_num": "3.2.3."
},
{
"text": "We select a subset of phones that have highest discriminative power to construct a new target-oriented phone tokenizer, k TOPT . In this way, we can construct L new target-oriented phone tokenizers, one for each target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target-Oriented Phone Tokenizer (TOPT)",
"sec_num": "3.2.3."
},
{
"text": "Phonetic and acoustic diversifications may be applied to both PPR-LM and PPR-VSM systems. The conventional approach adopts phonetic diversification, where the parallel phone recognizers are trained on speech data from different languages with different phone sets. On the other hand, we proposed an alternative methodology where phone recognizers using different acoustic models trained on the same speech data with the same phone set Li, 2007, 2008) are used to achieve acoustic diversification. Analogous to system combination for speech recognition in which merging outputs from multiple systems with different error patterns helps to improve the final performance, using multiple acoustic models aims to form the contractive parallel phone recognition systems using different modeling techniques and training paradigms, without requiring additional phonetically transcribed speech data.",
"cite_spans": [
{
"start": 435,
"end": 450,
"text": "Li, 2007, 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonetic and Acoustic Diversifications (PAD)",
"sec_num": "3.2.4."
},
{
"text": "Acoustic classifiers exploit acoustic features directly. There are two main approaches, Gaussian mixture modeling (GMM) on short-time cepstral features, such as MFCCs, LPCCs, and the Shifted Delta Cepstral (SDC) coefficients, and support vector machine (SVM) modeling on high dimension acoustic features, such as the polynomial expansion of short-time cepstral features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic Classifiers",
"sec_num": "3.3."
},
{
"text": "In the standard Maximum Likelihood (ML) training framework for GMM, the objective function is to maximize the total log likelihood of training data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MMI-GMM",
"sec_num": "3.3.1."
},
{
"text": "( ) ( ) ML 1 log | R r r r p O s \u03b8 = = \u2211 F (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MMI-GMM",
"sec_num": "3.3.1."
},
{
"text": "where \u03b8 is the model parameter set and r O is the rth observation sequence, R denotes the total number of training utterances, and r s is the correct language identity of the rth utterance. The ML estimation maximizes the likelihood of each model generating the training data independently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MMI-GMM",
"sec_num": "3.3.1."
},
{
"text": "The discriminative training techniques have been successfully applied in large vocabulary continuous speech recognition (LVCSR) systems. One of the most popular discriminative training approaches, maximum mutual information (MMI) training, has been proved to efficient in the Gaussian mixture modeling for language recognition (Bueget, Matejka, and Cernocky, 2006) . The objective function of MMI is posterior probability of correctly recognizing all training utterances. It estimates the GMM parameters in a discriminative manner by maximizing the following objective function:",
"cite_spans": [
{
"start": 327,
"end": 364,
"text": "(Bueget, Matejka, and Cernocky, 2006)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MMI-GMM",
"sec_num": "3.3.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) MMI 1 | ( ) log ( | ) ( ) R r r r r r s p O s P s p O s P s \u03b8 \u03b8 \u03b8 = \u2200 = \u239b \u239e \u239c \u239f \u239c \u239f \u239d \u23a0 \u2211 \u2211 F",
"eq_num": "(4)"
}
],
"section": "MMI-GMM",
"sec_num": "3.3.1."
},
{
"text": "where ( ) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MMI-GMM",
"sec_num": "3.3.1."
},
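The two training criteria can be contrasted numerically as in the sketch below, assuming a matrix of per-utterance, per-language log-likelihoods; this only evaluates the objectives of Eqs. (3) and (4) and does not perform the parameter updates used in actual MMI training.

```python
import numpy as np

def f_ml(loglik, labels):
    """Eq. (3): sum of log-likelihoods of each utterance under its own language model.

    loglik: (R, S) array of log p(O_r | s); labels: length-R array of correct indices s_r.
    """
    return loglik[np.arange(len(labels)), labels].sum()

def f_mmi(loglik, labels, log_prior=None):
    """Eq. (4): sum of log posterior probabilities of the correct language."""
    if log_prior is None:
        log_prior = np.zeros(loglik.shape[1])  # uniform prior P(s) assumed
    joint = loglik + log_prior
    log_norm = np.logaddexp.reduce(joint, axis=1)
    return (joint[np.arange(len(labels)), labels] - log_norm).sum()
```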
{
"text": "SVM has been proven to be an effective two-class classifier for pattern classification problems. To adopt SVM for classification of speech utterances is not straightforward since speech utterances are often parameterized as variable-length sequences of cepstral feature vectors. A kernel function that can measure the similarity between two sequences of speech feature vectors has to be constructed. The generalized linear discriminant sequence (GLDS) kernel has been proposed for speaker and language recognition (Campbell et al., 2006) on acoustic feature vectors. Given two sequences,",
"cite_spans": [
{
"start": 514,
"end": 537,
"text": "(Campbell et al., 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "{ } 1 2 , , , m X = x x x K and { } 1 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": ", , , n Y = y y y K , of feature vectors, the GLDS kernel is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) 1 GLDS , T x y K X Y \u2212 =b R b",
"eq_num": "(5)"
}
],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "where m and n denote the number of feature vectors in the sequences X and Y, respectively. In (5), the two sequences become comparable by mapping them to a high-dimensional vector space via",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "( ) 1 x X m \u2208 = \u2211 x b b x % and ( ) 1 x Y n \u2208 = \u2211 y b b y % (6) where ( ) \u22c5 b % denotes the polynomial expansion function. For [ ] 1 2 , T x x = x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "and considering all monomials up to the second order, the expansion function is given by ( ) [",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "2 1 2 1 1, , , , x x x = b x % 2 1 2 2 , T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "x x x \u23a4 \u23a6 . In our final implementation, we used all monomials up to the third order. In 5,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "( ) T U N = R U U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "is a correlation matrix calculated from a data matrix U that consists of the expansions of the entire set of U N training feature vectors. For computational simplicity, it is customary to assume that the matrix R is diagonal. An SVM is then constructed as the sum of kernel functions in the following form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) GLDS , l l l f X K X X \u03b1 \u03b2 = + \u2211",
"eq_num": "(7)"
}
],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "Here, { l X } denotes the support vectors, \u03b2 is the bias, and the term l \u03b1 , for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "0 l l \u03b1 = \u2211 , 0 l \u03b1 > ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
{
"text": "indicates the weight of the lth support vector in the expanded feature space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GLDS Kernel",
"sec_num": "3.3.2."
},
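A compact sketch of the GLDS computation of Eqs. (5)-(7): monomial expansion of each cepstral frame, averaging over the utterance, and a kernel with the diagonal-R simplification; the function names and the diagonal storage are illustrative choices, not the paper's code.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_expand(x, order=3):
    """All monomials of the vector x up to the given order, constant term included."""
    feats = [1.0]
    for k in range(1, order + 1):
        for idx in combinations_with_replacement(range(len(x)), k):
            feats.append(np.prod(x[list(idx)]))
    return np.array(feats)

def glds_average(frames, order=3):
    """Average expansion over a sequence of cepstral frames, Eq. (6)."""
    return np.mean([poly_expand(f, order) for f in frames], axis=0)

def glds_kernel(bx, by, r_diag):
    """Eq. (5) with a diagonal correlation matrix R, stored as a vector."""
    return float(bx @ (by / r_diag))
```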
{
"text": "The PPR (see Section 3.2) serves as a front-end decoder that extracts phonotactic information (i.e., phone sequences) from which the speech utterance can be characterized in terms of the occurrence and co-occurrence statistics of various phones. In (Lee, You, and Li, 2008) , we explored the use of acoustically-defined units, instead of the linguistically-defined phones, in characterizing speech utterances and spoken languages. In particular, we train an ensemble of acoustic sound classes in a self-organized manner, each modeled with a Gaussian distribution, to form a speech sound inventory analogous to the phone inventory. We interpret the acoustic sound classes to represent some general vocal tract configurations in producing various speech sounds. The self-organized nature of these acoustic sound classes circumvents the need of laborious phonetic transcription. Furthermore, the structural simplicity of the Gaussian distributions allows us to train sufficient number of acoustic units to transcribe the sound of spoken languages in an effective manner. We formulate the acoustic sound inventory in a form of sequence kernel, referred to as the probabilistic sequence kernel (PSK), for SVM. Similar to that of the GLDS kernel mentioned earlier, the PSK maps variable-length utterances into fixed-and high-dimensional vectors in order to transform a complex classification task into a linearly separable one in a higherdimensional vector space. Let ( )",
"cite_spans": [
{
"start": 249,
"end": 273,
"text": "(Lee, You, and Li, 2008)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "| p j x~( ) ; , j j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "x \u03bc \u03a3 N , for 1,2, , j L = K , denote the inventory of acoustic sound classes. Using these sound classes as bases, the feature expansion is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) ( ) ( ) [ ] 1| , 2| , , | T p j p j p j L = = = = p x x x x % K",
"eq_num": "(8)"
}
],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "where ( ) | p j x denotes the posterior probability of the jth acoustic class (the prior probability of each acoustic class is determined during the training stage as noted below). Each element of the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "expansion ( ) p x %",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "gives the probability of occurrence of the jth acoustic class evaluated for a given feature vector x . The average probabilistic count across the entire sequence X is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) 1 x X m \u2208 = \u2211 x p p x % .",
"eq_num": "(9)"
}
],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "The vector x p can be interpreted as an M-bin histogram indicating the probabilities of occurrence of various acoustic sound classes observed in the given speech utterance X. Given two sequences, the PSK measures their similarity as the inner product between their expanded vectors, l p and x p , as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) 1 PSK , T x y K X Y \u2212 =p R p .",
"eq_num": "(10)"
}
],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
{
"text": "Compared to the GLDS kernel (5), the PSK hinges on the prior knowledge that the frequency of occurrence of speech sounds differs from one language to another in establishing the bases. This prior knowledge is not exploited in the GLDS kernel, leading to some performance deficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Sequence Kernel (PSK)",
"sec_num": "3.3.3."
},
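A sketch of the PSK expansion of Eqs. (8)-(10), with the self-organized sound classes represented as weighted Gaussians; the SciPy-based implementation and the parameter names are illustrative only.

```python
import numpy as np
from scipy.stats import multivariate_normal

def psk_expand(frames, means, covs, weights):
    """Eqs. (8)-(9): average class-posterior vector over an utterance (sketch).

    frames: (num_frames, dim) array; means, covs, weights describe the Gaussian classes.
    """
    lik = np.stack([w * multivariate_normal.pdf(frames, m, c)
                    for w, m, c in zip(weights, means, covs)], axis=1)
    post = lik / lik.sum(axis=1, keepdims=True)   # Eq. (8), per frame
    return post.mean(axis=0)                      # Eq. (9), utterance average

def psk_kernel(px, py, r_diag):
    """Eq. (10) with a diagonal R, as for the GLDS kernel."""
    return float(px @ (py / r_diag))
```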
{
"text": "This section describes the fusion strategy for the IIR submission to the NIST 2007 Language Recognition Evaluation (LRE07). The final submitted system is a linear fusion of the scores contributed by ten individual classifiers. These classifiers are summarized in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 270,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fusion of Classifiers",
"sec_num": "4."
},
{
"text": "Half of the classifiers are phonotactic classifiers while the remaining halves are acoustic classifiers. Two novel PPR-VSM classifiers were introduced to the LRE07 submission, namely the TOPT and PAD classifiers (see Sections 3.2.3 and 3.2.4 respectively). In addition, our system also made use of the HMM/NN hybrid phone recognizers provided by the Brno University of Technology (BUT) 2 . On the other hand, PSK, a novel acoustic classifier with generative front-end was also used (see Section 3.3.3). Furthermore, two GLDS acoustic classifiers were built using the MFCC and LPCC features. Two GMM classifiers were also trained using the ML and MMI criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion of Classifiers",
"sec_num": "4."
},
{
"text": "The final system was obtained by means of linear fusion of the scores from the ten individual classifiers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion of Classifiers",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) 1 , C i c c s w s c i b = = + \u2211",
"eq_num": "(11)"
}
],
"section": "Fusion of Classifiers",
"sec_num": "4."
},
{
"text": "where C is the total number of classifiers and ( ) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion of Classifiers",
"sec_num": "4."
},
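Eq. (11) amounts to a weighted sum of the per-classifier scores plus a bias, as in this toy sketch; the weights and scores here are made up for illustration.

```python
import numpy as np

def fuse(scores, weights, bias):
    """Eq. (11): linear fusion of classifier scores.

    scores : (C, M) array, score of each of C classifiers for M trials.
    weights: length-C fusion weights; bias: scalar offset.
    """
    return weights @ scores + bias

scores = np.array([[1.2, -0.3, 0.8, -1.0],
                   [0.9, -0.1, 0.4, -0.7],
                   [1.5, -0.6, 0.2, -1.2]])
print(fuse(scores, np.array([0.4, 0.3, 0.3]), bias=-0.05))
```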
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i i s b P i i i i s b P i i \u2208 < = \u2208 \u2208 \u2265 = \u2208",
"eq_num": "(13)"
}
],
"section": "Fusion of Classifiers",
"sec_num": "4."
},
{
"text": "and { } K denotes the cardinality of the set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion of Classifiers",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) LLR , 1 , max 1 exp c i i c i w b w b y s \u2200 \u239b \u239e = \u239c \u239f + \u2212 \u239d \u23a0 \u2211",
"eq_num": "(14)"
}
],
"section": "b. Logistic Linear Regression (LLR):",
"sec_num": null
},
{
"text": "where 1, True 0, False",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b. Logistic Linear Regression (LLR):",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i i y i \u2208 \u23a7 = \u23a8 \u2208 \u23a9",
"eq_num": "(15)"
}
],
"section": "b. Logistic Linear Regression (LLR):",
"sec_num": null
},
{
"text": "LLR attempts to transform the scores from multiple classifiers to the log likelihood ratios. The LLR is performed using the FoCal toolkit 3 . The final fusion parameters were obtained as the average of the parameters estimated using the above objectives, i.e., ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b. Logistic Linear Regression (LLR):",
"sec_num": null
},
{
"text": "The fusion parameters were calibrated on the development data comprising the NIST 1996 NIST , 2003 NIST and 2005 evaluation sets as well as the 2005 OHSU development data. Figure 3 shows the Detection Error Trade-off (DET) curves for the 10 individual classifiers as well as the final fusion system for the 30s General LR closed-test task. The top 3 performing classifiers include the BUT-PPR-VSM, TOPT-PPR-VSM and PAD-PPR-VSM classifiers. The C avg performance of the best and worst individual classifiers as well as the fusion system for the 30s, 10s and 3s General LR closed-test tasks is summarized in Table 2 . The relative improvements obtained from fusion over the best individual classifier were 22.3%, 33.3% and 20.3% for the 30s, 10s and 3s tasks respectively. Table 3 shows the comparison of the EER (%) and C avg (%) performance for the open-test and closed-test conditions on various tasks. In general, it was found that the General LR tasks are relatively easier compared to the Chinese LR and the other dialect recognition (DR) tasks. In particular, the Hindustani DR and Spanish DR tasks were the hardest, with C avg performance greater than 30%. As expected, the performance of the closed-test tasks is generally better than that of the open-test tasks due to the presence of the out-of-set languages in the open-test 3 http://www.dsp.sun.ac.za/~nbrummer/focal/index.htm condition. Note that the C avg performance depends on the decision threshold which may not coincide with the EER operating point. There are several cases (e.g. Hindustani DR and Spanish DR) where the C avg performance for the open-test condition outperformed the closed-test condition due to the poor decision threshold in the later condition. The decision thresholds for the open-test conditions were estimated using development data that contains some out-oflanguage (OOL) languages to learn the appropriate trade-off between false acceptance (false alarm) and false rejection (miss). This has been found to yield improved performance compared to using data without OOL languages. For example, the Cavg performance for the 30s General LR open-test task would have been 5.71% instead of 4.28% if the decision threshold was tuned using development data without OOL languages.",
"cite_spans": [
{
"start": 77,
"end": 86,
"text": "NIST 1996",
"ref_id": null
},
{
"start": 87,
"end": 98,
"text": "NIST , 2003",
"ref_id": null
},
{
"start": 99,
"end": 112,
"text": "NIST and 2005",
"ref_id": null
}
],
"ref_spans": [
{
"start": 172,
"end": 180,
"text": "Figure 3",
"ref_id": "FIGREF6"
},
{
"start": 606,
"end": 613,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 771,
"end": 778,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "b. Logistic Linear Regression (LLR):",
"sec_num": null
},
{
"text": "We apply a pseudo-key analysis scheme to cross validate the performances of individual classifiers. It is to find out the abnormal classifier and prevent the error in the final fusion system without knowing the true keys of evaluation data. Suppose that the ratio of genuine/imposter test trials is around 1:(L-1), where L is the number of the target languages. From the pool of scores of M trials from each classifier c, we choose M/L trials with the highest scores as genuine trials and the remaining trials as impostor trials, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo Key Analysis",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "True, if , , False, if , c c s c i T k c i s c i T \u2265 \u23a7 = \u23a8 < \u23a9 % % %",
"eq_num": "(17)"
}
],
"section": "( ) ( ) ( )",
"sec_num": null
},
{
"text": "where ( ) , k c i % denotes the pseudo key for the ith trial of the cth classifier and the threshold c T % is set such that there are M/L trials whose scores are above it. In the above equation, ( ) , s c i represents the score of the ith trial from the cth classifiers. Using the pseudo keys from all classifiers, we compute the pseudo EER for the cth classifier as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "( ) ( ) ( )",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) Pseudo 1, 1 | 1 N g g c EER c EER c g N = \u2260 = \u2212 \u2211 (18) where ( ) ( ) ( ) | , | , , 1 , ) EER c g s c i k g i i M \u03b1 = ( = %",
"eq_num": "(19)"
}
],
"section": "( ) ( ) ( )",
"sec_num": null
},
{
"text": "is the operator computing the EER of cth classifier using the psuedo keys obtained from the gth classifier, and N is the total number of classifiers. We found that the genuine and imposter scores can be roughly expressed as two Gaussian distributions. The probability of error with the pseudo keys obtained from the cth classifier is given by , and c T % is the threshold defined in (17). Obviously, the error probability is the overlapped sections of the two distributions as indicated in (20). The performance of each classifier depends on the area of this overlapped section. The smaller the overlapped section, the better the classifier is. When this overlapped section is minimized, the classifier achieves desired performance. An outlier classifier will give a large overlap between the genuine and imposter distributions, resulting in high error rate with respect to pseudo keys. We used the pseudo-key approach to analyze the performance of individual classifierson the LRE07 development and evaluation data sets. The pseudo EERs were computed using (17) and (18). Figure 4 compares the pseudo and actual EERs for all the classifiers. It is shoen that there exists a consistency between the pseudo and actual EERs on both the development and evaluation sets. The pseudo EERs can therefore provide a rough performance indication of the classifiers.",
"cite_spans": [],
"ref_spans": [
{
"start": 1073,
"end": 1081,
"text": "Figure 4",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "( ) ( ) ( )",
"sec_num": null
},
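A sketch of the pseudo-key analysis of Eqs. (17)-(19): the top M/L scores of each classifier define its pseudo keys, and every classifier is then scored against the pseudo keys of the others; the brute-force EER sweep is a simplification for illustration.

```python
import numpy as np

def eer(scores, keys):
    """Equal error rate of scores against Boolean keys (simple threshold sweep)."""
    scores, keys = np.asarray(scores, float), np.asarray(keys, bool)
    best, point = np.inf, 0.5
    for t in np.sort(scores):
        p_miss = np.mean(scores[keys] < t)
        p_fa = np.mean(scores[~keys] >= t)
        if abs(p_miss - p_fa) < best:
            best, point = abs(p_miss - p_fa), 0.5 * (p_miss + p_fa)
    return point

def pseudo_keys(scores, n_lang):
    """Eq. (17): mark the M/L highest-scoring trials of one classifier as genuine."""
    keys = np.zeros(len(scores), bool)
    keys[np.argsort(scores)[::-1][: len(scores) // n_lang]] = True
    return keys

def pseudo_eer(all_scores, c, n_lang):
    """Eqs. (18)-(19): average EER of classifier c against the others' pseudo keys."""
    others = [g for g in range(len(all_scores)) if g != c]
    return np.mean([eer(all_scores[c], pseudo_keys(all_scores[g], n_lang))
                    for g in others])
```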
{
"text": "A description of a language recognition system has been presented as it was developed for the 2007 NIST LRE. The submission was built upon multiple classifiers using generative and discriminative classification techniques, and was purposely designed to exploit the benefits of both phonotactic and acoustic features. Notably, we introduced three novel language classifiers, two phonotactic and one acoustic, in our LRE07 submission. The TOPT and PAD classifiers were shown to be successful refinements to the conventional phonotactic approach. On the other hand, the PSK bridges the gap between acoustic and token-based techniques. All the classifiers were combined at the score level with a simple linear fusion giving an EER of 1.67 % and a C avg of 2.75 % under the general LR core test condition. The LRE results represent the state-of-theart performance with an effective design and implementation. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "H http://www.ldc.upenn.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.fit.vutbr.cz/research/groups/speech/index_e.php?id=phnrec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Conversational interfaces: advances and challenges",
"authors": [
{
"first": "V",
"middle": [
"W"
],
"last": "Zue",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Glass",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. IEEE",
"volume": "88",
"issue": "",
"pages": "1166--1180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zue, V. W. and J. R. Glass, \"Conversational interfaces: advances and challenges,\" Proc. IEEE, vol. 88, no. 8, pp. 1166-1180, 2000.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multilinguality in speech and spoken language systems",
"authors": [
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Geutner",
"suffix": ""
},
{
"first": "L",
"middle": [
"M"
],
"last": "Tomokiyo",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Woszczyna",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. IEEE",
"volume": "88",
"issue": "",
"pages": "1181--1190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waibel, A., P. Geutner, L. M. Tomokiyo, T. Schultz, and M. Woszczyna, \"Multilinguality in speech and spoken language systems,\" Proc. IEEE, vol. 88, no. 8, pp. 1181-1190, 2000.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A novel feature combination approach for spoken document classification with support vector machines",
"authors": [
{
"first": "P",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Iurgel",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rigoll",
"suffix": ""
}
],
"year": null,
"venue": "Proc. Multimedia Information Retrieval Workshop, 2003. National Institute of Standards and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dai P., U. Iurgel, and G. Rigoll, \"A novel feature combination approach for spoken document classification with support vector machines,\" in Proc. Multimedia Information Retrieval Workshop, 2003. National Institute of Standards and Technology. http://www.nist.gov/speech/tests/lang/2007/.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic language recognition using acoustic features",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sugiyama",
"suffix": ""
}
],
"year": 1991,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sugiyama, M., \"Automatic language recognition using acoustic features,\" in Proc. ICASSP, 1991.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Approaches to language identification using Gaussian mixture models and shifted delta cepstral features",
"authors": [
{
"first": "P",
"middle": [
"A"
],
"last": "Torres-Carassquilo",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Singer",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Kohler",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Greene",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Deller",
"suffix": ""
},
{
"first": "Jr",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Torres-Carassquilo, P. A., E. Singer, M. A. Kohler, R. J. Greene, D. A. Reynolds, and J. R. Deller, Jr., \"Approaches to language identification using Gaussian mixture models and shifted delta cepstral features,\" in Proc. ICSLP, 2002.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discriminative training techniques for acoustic language identification",
"authors": [
{
"first": "L",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Matejka",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cernocky",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "209--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burget, L., P. Matejka, and J. Cernocky, \"Discriminative training techniques for acoustic language identification,\" in Proc. ICASSP, 2006, pp. I-209-212",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Support vector machines for speaker and language recognition",
"authors": [
{
"first": "W",
"middle": [
"M"
],
"last": "Campbell",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Campbell",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Singer",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Torres-Carrasquillo",
"suffix": ""
}
],
"year": 2006,
"venue": "Computer Speech and Language",
"volume": "20",
"issue": "",
"pages": "210--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Campbell, W. M., J. P. Campbell, D. A. Reynolds, E. Singer and P. A. Torres-Carrasquillo \"Support vector machines for speaker and language recognition,\" Computer Speech and Language, vol. 20, pp. 210-229, 2006.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recent improvements in an approach to segment-based automatic language identification",
"authors": [
{
"first": "T",
"middle": [
"J"
],
"last": "Hazen",
"suffix": ""
},
{
"first": "V",
"middle": [
"W"
],
"last": "Zue",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hazen, T. J. and V. W. Zue, \"Recent improvements in an approach to segment-based automatic language identification,\" in Proc. ICASSP, 1994.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Comparison of four approaches to automatic language identification of telephone speech",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Zissman",
"suffix": ""
},
{
"first": "K",
"middle": [
"M"
],
"last": "Berkling",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Barnard",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. ICASSP",
"volume": "4",
"issue": "",
"pages": "289--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zissman, M. A., \"Comparison of four approaches to automatic language identification of telephone speech,\" IEEE Trans. Speech and Audio Processing, vol. 4, no. 1, pp. 31-44, 1996. Berkling, K. M. and E. Barnard, \"Analysis of phoneme-based features for language identification,\" in Proc. ICASSP, pp. 289-292, 1994.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Language identification with language-independent acoustic models",
"authors": [
{
"first": "C",
"middle": [],
"last": "Corredor-Ardoy",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Gauvain",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Adda-Decker",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lamel",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corredor-Ardoy, C., J. L. Gauvain, M. Adda-Decker, and L. Lamel, \"Language identification with language-independent acoustic models,\" in Proc. Eurospeech, 1997.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A phonotactic language model for spoken language identification",
"authors": [
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, H. and B. Ma, \"A phonotactic language model for spoken language identification,\" in Proc. ACL, 2005.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Spoken Language Recognition Using Ensemble Classifiers",
"authors": [
{
"first": "B",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tong",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Transactions on Audio, Speech and Language Processing",
"volume": "15",
"issue": "7",
"pages": "2053--2062",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ma, B., H. Li, and R. Tong, \"Spoken Language Recognition Using Ensemble Classifiers\", IEEE Transactions on Audio, Speech and Language Processing, vol. 15, no. 7, pp. 2053-2062, Sep. 2007.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A tutorial on hidden Markov models and selected applications in speech recognition",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proc. IEEE",
"volume": "77",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner, L. R., \"A tutorial on hidden Markov models and selected applications in speech recognition,\" Proc. IEEE, vol.77, no.2, pp. 257-286, 1989.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Speaker Verification Using Adapted Gaussian Mixture Modeling",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
},
{
"first": "T",
"middle": [
"F"
],
"last": "Quatieri",
"suffix": ""
},
{
"first": "R",
"middle": [
"B"
],
"last": "Dunn",
"suffix": ""
}
],
"year": 2000,
"venue": "Digital Signal Processing",
"volume": "10",
"issue": "1-3",
"pages": "19--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D. A., T. F. Quatieri, and R. B. Dunn, \"Speaker Verification Using Adapted Gaussian Mixture Modeling,\" Digital Signal Processing, vol. 10, no. 1-3, pp. 19-41, 2000.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A vector space modeling approach to spoken language identification",
"authors": [
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "C.-H",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Trans. Audio, Speech and Language Processing",
"volume": "15",
"issue": "1",
"pages": "271--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Li, B. Ma, and C.-H. Lee, \"A vector space modeling approach to spoken language identification,\" IEEE Trans. Audio, Speech and Language Processing, vol. 15, no. 1, pp. 271- 284, 2007.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Phonetic knowledge, phonotactics and perceptual validation for automatic language identification",
"authors": [
{
"first": "M",
"middle": [],
"last": "Adda-Decker",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. ICPhS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adda-Decker, M., et al., \"Phonetic knowledge, phonotactics and perceptual validation for automatic language identification,\" in Proc. ICPhS, 2003.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Acoustic, phonetic and discriminative approaches to automatic language recognition",
"authors": [
{
"first": "E",
"middle": [],
"last": "Singer",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Torres-Carrasquillo",
"suffix": ""
},
{
"first": "T",
"middle": [
"P"
],
"last": "Gleason",
"suffix": ""
},
{
"first": "W",
"middle": [
"M"
],
"last": "Campbell",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Singer, E., P. A. Torres-Carrasquillo, T. P. Gleason, W. M. Campbell, and D. A. Reynolds, \"Acoustic, phonetic and discriminative approaches to automatic language recognition,\" in Proc. Eurospeech, 2003.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Integrating acoustic, prosodic and phonotactic features for spoken language identification",
"authors": [
{
"first": "R",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "E",
"middle": [
"S"
],
"last": "Chng",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong, R., B. Ma, D. Zhu, H. Li, and E. S. Chng, \"Integrating acoustic, prosodic and phonotactic features for spoken language identification,\" in Proc. ICASSP, 2006.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The OGI multi-language telephone speech corpus",
"authors": [
{
"first": "Y",
"middle": [
"K"
],
"last": "Muthusamy",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Cole",
"suffix": ""
},
{
"first": "B",
"middle": [
"T"
],
"last": "Oshika",
"suffix": ""
}
],
"year": null,
"venue": "Proc. ICSLP, 1992. The 2007 NIST Language Recognition Evaluation plan",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muthusamy, Y. K., R. A. Cole, and B. T. Oshika, \"The OGI multi-language telephone speech corpus,\" in Proc. ICSLP, 1992. The 2007 NIST Language Recognition Evaluation plan, http://www.nist.gov/speech/ tests/lang/2007/LRE07EvalPlan-v8b.pdf.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Target-oriented phone tokenizers for spoken language recognition",
"authors": [
{
"first": "R",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "E",
"middle": [
"S"
],
"last": "Chng",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "4221--4224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong, R., B. Ma, H. Li, and E. S. Chng, \"Target-oriented phone tokenizers for spoken language recognition,\" in Proc. ICASSP, 2008, pp. 4221-4224.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Fusion of contrastive acoustic models for parallel phonotactic spoken language identification",
"authors": [
{
"first": "K",
"middle": [
"C"
],
"last": "Sim",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "170--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sim, K. C. and H. Li, \"Fusion of contrastive acoustic models for parallel phonotactic spoken language identification\", in Proc. Interspeech, 2007, pp. 170-173.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On acoustic diversification front-end for spoken language recognition",
"authors": [
{
"first": "K",
"middle": [
"C"
],
"last": "Sim",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": null,
"venue": "Speech and Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sim, K. C. and H. Li, \"On acoustic diversification front-end for spoken language recognition\", to appear in IEEE Trans. Audio, Speech and Language Processing.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Discriminative training techniques for acoustic language identification",
"authors": [
{
"first": "L",
"middle": [],
"last": "Bueget",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Matejka",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cernocky",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "209--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bueget, L., P. Matejka, and J. Cernocky, \"Discriminative training techniques for acoustic language identification\", in Proc. ICASSP, 2006, pp. 209-212.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Spoken language recognition using support vector machines with generative from-end",
"authors": [
{
"first": "K",
"middle": [
"A"
],
"last": "Lee",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "4153--4156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, K. A., C. You, and H. Li, \"Spoken language recognition using support vector machines with generative from-end,\" in Proc. ICASSP, 2008, pp. 4153-4156.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Fusion of multiple language classifiers.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "phonotactic features, to represent the utterance by the th \u03c4 phone recognizer.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "r P s and ( ) P s are the prior terms and we consider the prior probabilities of all languages equal. The denominator ( | ) ( ) r s p O s P s \u03b8 \u2200 \u2211 is the likelihood of utterance r O given the competing language models.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "s c i is the score of the ith trial from the cth classifier. The fusion parameters consist of the classifier specific weights c w and the global bias b. Two objectives were used to tune the fusion parameters:",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "List of 10 individual classifiers used in the IIR NIST 2007 Language Recognition Evaluation submission",
"num": null
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"text": "DET curves of individual classifiers and the final fusion system for the 30s General LR closedtest task.",
"num": null
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"text": "Pseudo and actual EERs evaluated on the development and evaluation sets of theNIST LRE 2007 (30s General LR close-test condition).",
"num": null
},
"TABREF0": {
"content": "<table><tr><td>\u2022 General Language Recognition (LR) including 14 languages, Arabic, Bengali, Chinese,</td></tr><tr><td>English, Hindustani, Spanish, Farsi, German, Japanese, Korean, Russian, Tamil, Thai and</td></tr><tr><td>Vietnamese.</td></tr><tr><td>\u2022 Chinese LR including four Chinese dialects, Cantonese, Mandarin, Min and Wu.</td></tr><tr><td>\u2022 Mandarin Dialect Recognition (DR) including Mainland Mandarin and Taiwan Mandarin.</td></tr><tr><td>\u2022 English DR including American English and India English.</td></tr><tr><td>\u2022 Hindustani DR including Hindi and Urdu.</td></tr><tr><td>\u2022 Spanish DR including Caribbean Spanish and non-Caribbean Spanish.</td></tr></table>",
"text": "There are six test categories in the 2007 NIST LRE involving 26 target languages and dialects:",
"type_str": "table",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table><tr><td/><td>Estimate</td><td>Phone</td><td>Target-oriented</td></tr><tr><td/><td>Discriminative</td><td>Selection</td><td>Phone Tokenizers</td></tr><tr><td/><td>Power</td><td/></tr><tr><td/><td>W 1</td><td/><td>TOPT 1</td></tr><tr><td>Phone</td><td>W 2</td><td/><td>TOPT 2</td></tr><tr><td>inventory</td><td/><td/></tr><tr><td/><td>W L</td><td/><td>TOPT L</td></tr><tr><td>Figure 2:</td><td/><td/></tr></table>",
"text": "Construction of target oriented phone tokenizers.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table/>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table><tr><td>Systems</td><td>30s</td><td>C avg (%) 10s</td><td>3s</td></tr><tr><td>Worst individual</td><td>10.23</td><td>18.16</td><td>33.05</td></tr><tr><td>Best individual</td><td>3.54</td><td>9.22</td><td>20.59</td></tr><tr><td>Fusion</td><td>2.75</td><td>6.15</td><td>16.40</td></tr></table>",
"text": "C avg performance using the minEER+LLR fusion method for the General LR closed-test tasks.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table><tr><td>Systems</td><td>Test Conditions</td><td>EER</td><td>30s</td><td>C avg</td><td>EER</td><td>10s</td><td>C avg</td><td>EER</td><td>3s</td><td>C avg</td></tr><tr><td/><td>Closed-test</td><td>1.67</td><td/><td>2.75</td><td>5.87</td><td/><td>6.15</td><td/><td/><td/></tr></table>",
"text": "Comparison of EER and C avg performance for the open-test and closed-test conditions on various tasks",
"type_str": "table",
"html": null,
"num": null
}
}
}
}