{ "paper_id": "P95-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:33:54.051595Z" }, "title": "Learning Phonological Rule Probabilities from Speech Corpora with Exploratory Computational Phonology", "authors": [ { "first": "Gary", "middle": [], "last": "Tajchman", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California at Berkeley", "location": {} }, "email": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California at Berkeley", "location": {} }, "email": "" }, { "first": "Eric", "middle": [], "last": "Fosler", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California at Berkeley", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents an algorithm for learning the probabilities of optional phonological rules from corpora. The algorithm is based on using a speech recognition system to discover the surface pronunciations of words in speech corpora; using an automatic system obviates expensive phonetic labeling by hand. We describe the details of our algorithm and show the probabilities the system has learned for ten common phonological rules which model reductions and coarticulation effects. These probabilities were derived from a corpus of 7203 sentences of read speech from the Wall Street Journal, and are shown to be a reasonably close match to probabilities from phonetically hand-transcribed data (TIMIT). Finally, we analyze the probability differences between rule use in male versus female speech, and suggest that the differences are caused by differing average rates of speech.", "pdf_parse": { "paper_id": "P95-1001", "_pdf_hash": "", "abstract": [ { "text": "This paper presents an algorithm for learning the probabilities of optional phonological rules from corpora.
The algorithm is based on using a speech recognition system to discover the surface pronunciations of words in speech corpora; using an automatic system obviates expensive phonetic labeling by hand. We describe the details of our algorithm and show the probabilities the system has learned for ten common phonological rules which model reductions and coarticulation effects. These probabilities were derived from a corpus of 7203 sentences of read speech from the Wall Street Journal, and are shown to be a reasonably close match to probabilities from phonetically hand-transcribed data (TIMIT). Finally, we analyze the probability differences between rule use in male versus female speech, and suggest that the differences are caused by differing average rates of speech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Phonological rules have formed the basis of phonological theory for decades, although their form and their coverage of the data have changed over the years. Until recently, however, it was difficult to determine the relationship between hand-written phonological rules and actual speech data. The current availability of large speech corpora and pronunciation dictionaries has allowed us to connect rules and speech in much tighter ways. For example, a number of algorithms have recently been proposed which automatically induce phonological rules from dictionaries or corpora (Gasser 1993; Ellison 1992; Daelemans et al. 1994) .", "cite_spans": [ { "start": 577, "end": 590, "text": "(Gasser 1993;", "ref_id": "BIBREF8" }, { "start": 591, "end": 604, "text": "Ellison 1992;", "ref_id": "BIBREF7" }, { "start": 605, "end": 627, "text": "Daelemans et al.
1994)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While such algorithms have successfully induced syllabicity or harmony constraints, or simple obligatory phonological rules, there has been much less work on non-obligatory (optional) rules. In part this is because optional rules like flapping, vowel reduction, and various coarticulation effects are postlexical and often products of fast speech, and hence have been considered less central to phonological theory. In part, however, this is because optional rules are inherently probabilistic. Where obligatory rules apply to every underlying form which meets the environmental conditions, producing a single surface form, optional rules may not apply, and hence the underlying form may appear as the surface form, unmodified by the rule. This makes the induction problem non-deterministic, and not solvable by the above algorithms. 1 (*Currently at Voice Processing Corp, 1 Main St, Cambridge, MA 02142: tajchman@vpro.com)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While optional rules have received less attention in linguistics because of their probabilistic nature, in speech recognition, by contrast, optional rules are commonly used to model pronunciation variation. In this paper, we employ techniques from speech recognition research to address the problem of assigning probabilities to these optional phonological rules. We introduce a completely automatic algorithm that explores the coverage of a set of phonological rules on a corpus of lexically transcribed speech using the computational resources of a speech recognition system.
This algorithm belongs to the class of techniques we call Exploratory Computational Phonology, which use statistical pattern recognition tools to explore phonological spaces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We describe the details of our probability estimation algorithm and also present the probabilities the system has learned for ten common phonological rules which model reductions and coarticulation effects. Our probabilities are derived from a corpus of 7203 sentences of read speech from the Wall Street Journal (NIST 1993). We also benchmark the probabilities generated by our system against probabilities from phonetically hand-transcribed data, and show a relatively good fit. Finally, we analyze the probability differences between rule use in male versus female speech, and suggest that the differences are caused by differing average rates of speech. 1Note that this is true whether phonological theory considers these true phonological rules or rather rules of \"phonetic interpretation\".", "cite_spans": [ { "start": 313, "end": 324, "text": "(NIST 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Algorithm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "In this section we describe our algorithm which assigns probabilities to hand-written, optional phonological rules like flapping. The algorithm takes a lexicon of underlying forms and applies phonological rules to produce a new lexicon of surface forms. Then we use a speech recognition system on a large corpus of recorded speech to check how many times each of these surface forms occurred in the corpus. Finally, by knowing which rules were used to generate each surface form, we can compute a count for each rule.
By combining this with a count of the times a rule did not apply, the algorithm can compute a probability for each rule. The rest of this section will discuss each of the aspects of the algorithm in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "The Base Lexicon", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "Our base lexicon is quite large; it is used to generate the lexicons for all of our speech recognition work at ICSI. It contains 160,000 entries (words) with 300,000 pronunciations. The lexicon contains underlying forms which are very shallow; thus they are post-lexical in the sense that there is no represented relationship between e.g. 'critic' and 'criticism' (where critic is pronounced kritik and criticism kritisizm). However, the entries do not represent flaps, vowel reductions, and other coarticulatory effects. In order to collect our 300,000 pronunciations, we combined seven different on-line pronunciation dictionaries, including the five shown in Table 1. For further information about these sources please refer to CMU (CMU 1993), LIMSI (Lamel 1993), PRONLEX (COMLEX 1994), and BRITPRON (Robinson 1994). A text-to-speech system was used to generate pronunciations for the remaining words.2 (2Although it was not relevant to the experiments described here, our lexicon also included two sources which directly supply surface forms. These were 13,362 hand-transcribed pronunciations of 5871 words from TIMIT (TIMIT 1990), and 230 pronunciations of 36 words derived in-house from the OGI Numbers database (Cole et al. 1994).) We represent pronunciations with the set of 54 ARPAbet-like phones detailed in Table 2. All the lexicon sources except LIMSI use ARPABET-like phone sets.3 CMU, BRITPRON, and PRONLEX phone sets include three levels of vowel stress.
The pronunciations from all these sources were mapped into our phone set using a set of obligatory rules for stop closures [bcl, dcl, gcl, pcl, tcl, kcl]", "cite_spans": [ { "start": 745, "end": 757, "text": "(Lamel 1993)", "ref_id": "BIBREF10" }, { "start": 768, "end": 781, "text": "(COMLEX 1994)", "ref_id": null }, { "start": 784, "end": 808, "text": "BRITPRON (Robinson 1994)", "ref_id": null }, { "start": 1065, "end": 1077, "text": "(TIMIT 1990)", "ref_id": null }, { "start": 1162, "end": 1179, "text": "(Cole et al. 1994", "ref_id": "BIBREF4" }, { "start": 1536, "end": 1566, "text": "[bcl, dcl, gcl, pcl, tcl, kcl]", "ref_id": null } ], "ref_spans": [ { "start": 1259, "end": 1266, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "We next apply phonological rules to our base lexicon to produce the surface lexicon. (3The LIMSI pronunciations already included the syllabic consonants and reduced vowels. For this reason, the words found only in the LIMSI source lexicon did not participate in the probability estimates for the syllabic and reduced-vowel rules.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying Phonological Rules to Build a Surface Lexicon", "sec_num": "2.2" }, { "text": "Table 3: Phonological Rules (excerpt): [tcl dcl] [t d] -> dx / V __ [ax ix axr]; [tcl dcl] [t d] -> dx / V r __ [ax ix axr]; hh -> hv / [+voice] __ [+voice]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying Phonological Rules to Build a Surface Lexicon", "sec_num": "2.2" }, { "text": "Since the rules are optional, the surface lexicon must contain each underlying pronunciation unmodified, as well as the pronunciation resulting from the application of each relevant phonological rule. Table 3 gives the 10 phonological rules used in these experiments.
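Rules of this shape can be applied mechanically to phone strings. Below is a minimal sketch (ours, not the paper's implementation) of the first flapping rule from Table 3, written as a regular expression over space-separated phone strings in the ARPAbet-like phone set above; the vowel class and rule context are simplified for illustration.

```python
import re

# Simplified vowel class for the left context of the flapping rule;
# a real implementation would enumerate the full phone set.
VOWELS = r"(?:aa|ae|ah|ao|eh|ih|iy|uw|ax|ix|axr|er)"

# Context: V [tcl t | dcl d] [ax ix axr]  ->  V dx [ax ix axr]
FLAP_CONTEXT = re.compile(r"\b(" + VOWELS + r") (?:tcl t|dcl d) (ax|ix|axr)\b")

def apply_flapping(pron):
    """Rewrite 'V [tcl t|dcl d] [ax|ix|axr]' as 'V dx [ax|ix|axr]'."""
    return FLAP_CONTEXT.sub(r"\1 dx \2", pron)

print(apply_flapping("bcl b ah tcl t axr"))  # butter: "bcl b ah dx axr"
```

Because the rule is optional, a surface lexicon keeps both the input and the output form, which is what the tagging scheme below records.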
One goal of our rule-application procedure was to build a tagged lexicon to avoid having to implement a phonological-rule parser to parse the surface pronunciations. In a tagged lexicon, each surface pronunciation is annotated with the names of the phonological rules that applied to produce it. Thus when the speech recognizer finds a particular pronunciation in the speech input, the list of rules which applied to produce it can simply be looked up in the tagged lexicon.", "cite_spans": [], "ref_spans": [ { "start": 207, "end": 214, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Applying Phonological Rules to Build a Surface Lexicon", "sec_num": "2.2" }, { "text": "The algorithm applies rules to pronunciations recursively; when a context matches the left hand side of a phonological rule \"RULE,\" two pronunciations are produced: one unchanged by the rule (marked -RULE), and one with the rule applied (marked +RULE). The procedure places the +RULE pronunciation on the queue for later recursive rule application, and continues trying to apply phonological rules to the -RULE pronunciation. See Figure 1 for details of the algorithm. While our procedure is not guaranteed to terminate, in practice the phonological rules we apply have a finite recursive depth.", "cite_spans": [], "ref_spans": [ { "start": 430, "end": 438, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Applying Phonological Rules to Build a Surface Lexicon", "sec_num": "2.2" }, { "text": "The nondeterministic mapping produces a tagged equiprobable multiple pronunciation lexicon of 510,000 pronunciations for 160,000 words.
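The queue-based recursion just described can be sketched as follows (a toy re-implementation under our own simplifying assumptions, not the paper's code; `expand` and the single-rule example are hypothetical, with each rule returning the changed pronunciation or None when its context does not match):

```python
from collections import deque

def expand(base_pron, rules):
    """Apply optional rules recursively, tagging each surface form with
    +RULE / -RULE for every rule that could have applied to it."""
    surface = []
    queue = deque([(base_pron, [])])
    while queue:
        pron, tags = queue.popleft()
        for name, rule in rules:
            changed = rule(pron)
            if changed is not None:
                # The +RULE form goes back on the queue for further
                # recursive rule application; we keep trying the
                # remaining rules on the -RULE form.
                queue.append((changed, tags + ["+" + name]))
                tags = tags + ["-" + name]
        surface.append((pron, tags))
    return surface

# One toy rule: flap 'tcl t' before 'axr' (context check kept trivial).
flap = lambda p: p.replace("tcl t axr", "dx axr") if "tcl t axr" in p else None
for pron, tags in expand("bcl b ah tcl t axr", [("FL1", flap)]):
    print(pron, tags)
```

As in the paper's procedure, termination is not guaranteed in general; it holds here because each rule eventually stops matching its own output.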
For example, Table 4 gives our base forms for the word \"butter\"; the resulting tagged surface lexicon would have the entries in Table 5.", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 265, "end": 272, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Applying Phonological Rules to Build a Surface Lexicon", "sec_num": "2.2" }, { "text": "Given a lexicon with tagged surface pronunciations, the next required step is to count how many times each of these pronunciations occurs in a speech corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering with forced-Viterbi", "sec_num": "2.3" }, { "text": "The algorithm we use has two steps: PHONETIC LIKELIHOOD ESTIMATION and FORCED-VITERBI ALIGNMENT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering with forced-Viterbi", "sec_num": "2.3" }, { "text": "In the first step, PHONETIC LIKELIHOOD ESTIMATION, we examine each 20 ms frame of speech data, and probabilistically label each frame with the phones that were likely to produce the data. That is, for each of the 54 phones in our phone-set, we compute the probability that the slice of acoustic data was produced by that phone. The result of this labeling is a vector of phone-likelihoods for each acoustic frame.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering with forced-Viterbi", "sec_num": "2.3" }, { "text": "Our algorithm is based on a multi-layer perceptron (MLP) which is trained to compute the conditional probability of a phone given an acoustic feature vector for one frame, together with 80 ms of surrounding context.
(Table 5: tagged surface lexicon entries for 'butter': bcl b ah dx ax: +BPU +FL1; +CMU +FL1 +RV1; +PLX +FL1 +RV1 | bcl b ah dx axr: +TTS +FL1; +BPU +FL1; +CMU +FL1 -RV1 +RV3; +LIM +FL1; +PLX +FL1 -RV1 +RV3 | bcl b ah tcl t ax: +BPU -FL1; +CMU -FL1 +RV1; +PLX -FL1 +RV1 | bcl b ah tcl t axr: +TTS -FL1; +BPU -FL1; +CMU -FL1 -RV1 +RV3; +LIM -FL1; +PLX -FL1 -RV1 +RV3 | bcl b ah tcl t er: +CMU -RV1 -RV3; +PLX -RV1 -RV3.) Bourlard & Morgan (1991) show that with a few assumptions, an MLP may be viewed as estimating the probability P(q|x) where q is a phone and x is the input acoustic speech data. The estimator consists of a simple three-layer feed-forward MLP trained with the back-propagation algorithm (see Figure 2). The input layer consists of 9 frames of input speech data. Each frame, representing 10 msec of speech, is typically encoded by 9 PLP (Hermansky 1990) coefficients, 9 delta-PLP coefficients, 9 delta-delta PLP coefficients, delta-energy and delta-delta-energy terms. Typically, we use 500-4000 hidden units. The output layer has one unit for each phone. The MLP is trained on phonetically hand-labeled speech (TIMIT), and then further trained by an iterative Viterbi procedure (forced-Viterbi providing the labels) with Wall Street Journal corpora. The probability P(q|x) produced by the MLP for each frame is first converted to the likelihood P(x|q) by dividing by the prior P(q), according to Bayes' rule; we ignore P(x) since it is constant here: P(x|q) = P(q|x)P(x) / P(q).", "cite_spans": [ { "start": 569, "end": 575, "text": "Bourlard & Morgan (1991)", "ref_id": "BIBREF0" }, { "start": 987, "end": 1002, "text": "(Hermansky 1990", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 842, "end": 850, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Filtering with forced-Viterbi", "sec_num": "2.3" }, { "text": "The second step of the algorithm, FORCED-VITERBI ALIGNMENT, takes this vector of likelihoods for each frame and produces the most likely phonetic string for the sentence.
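The posterior-to-likelihood conversion just described can be illustrated with a toy example (the phones and probability values here are invented, not taken from the paper):

```python
# The MLP posterior P(q|x) is divided by the phone prior P(q), giving
# the likelihood P(x|q) scaled by the constant P(x), which the Viterbi
# search can safely ignore.
posteriors = {"ah": 0.6, "ax": 0.3, "dx": 0.1}  # P(q|x) for one frame
priors = {"ah": 0.05, "ax": 0.10, "dx": 0.02}   # P(q) from training data

scaled_likelihoods = {q: posteriors[q] / priors[q] for q in posteriors}
print(scaled_likelihoods)
```

Note that a phone with a lower posterior (dx) can end up with a higher scaled likelihood than one with a higher posterior (ax) if its prior is small enough; this is exactly why the division matters before alignment.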
If each word had only a single pronunciation and if each phone had some fixed duration, the phonetic string would be completely determined by the word string. However, phones vary in length as a function of idiolect and rate of speech, and of course the very fact of optional phonological rules implies multiple possible pronunciations for each word. These pronunciations are encoded in a hidden Markov model (HMM) for each word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P(x|q) = P(q|x)P(x) / P(q)", "sec_num": null }, { "text": "The Viterbi algorithm is a dynamic programming search, which works by computing for each phone at each frame the most likely string of phones ending in that phone. Consider a sentence whose first two words are \"of the\", and assume the simplified lexicon in Figure 3. Each pronunciation of the words 'of' and 'the' is represented by a path through the probabilistic automaton for the word. For expository simplicity, we have made the (incorrect) assumption that consonants have a duration of 1 frame, and vowels a duration of 2 or 3 frames. The algorithm analyzes the input frame by frame, keeping track of the best path of phones. Each path is ranked by its probability, which is computed by multiplying each of the transition probabilities and the phone probabilities for each frame. Figure 4 shows a schematic of the path computation. The size of each dot indicates the magnitude of the local phone likelihood. The maximum path at each point is extended; non-maximal paths are pruned.", "cite_spans": [], "ref_spans": [ { "start": 774, "end": 782, "text": "Figure 4", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "P(x|q) = P(q|x)P(x) / P(q)", "sec_num": null }, { "text": "The result of the forced-Viterbi alignment on a single sentence is a phonetic labeling for the sentence (see Figure 5 for an example), from which we can produce a phonetic pronunciation for each word.
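The path computation described above can be sketched as a small Viterbi search in Python (a toy model, not the recognizer used in the paper; the states, frame likelihoods, and transition probabilities are invented):

```python
import math

def viterbi(frames, states, trans, start):
    """frames: list of {phone: likelihood} per frame;
    trans: {(prev, next): probability}; start: initial distribution.
    Returns the most likely phone string, working in log space."""
    FLOOR = 1e-12  # stand-in for disallowed transitions
    best = {s: (math.log(start.get(s, FLOOR)) + math.log(frames[0][s]), [s])
            for s in states}
    for obs in frames[1:]:
        new = {}
        for s in states:
            # Extend the maximal path into s; non-maximal paths are pruned.
            score, path = max(
                (best[p][0] + math.log(trans.get((p, s), FLOOR)), best[p][1])
                for p in states)
            new[s] = (score + math.log(obs[s]), path + [s])
        best = new
    return max(best.values(), key=lambda v: v[0])[1]

frames = [{"ah": 0.7, "v": 0.3}, {"ah": 0.6, "v": 0.4}, {"ah": 0.2, "v": 0.8}]
trans = {("ah", "ah"): 0.5, ("ah", "v"): 0.5, ("v", "v"): 0.6}
print(viterbi(frames, ["ah", "v"], trans, {"ah": 1.0}))  # ['ah', 'ah', 'v']
```

A forced alignment differs from free decoding only in that the transition structure is restricted to the pronunciation paths of the known word string.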
By running this algorithm on a large corpus of sentences, we produce a list of \"bottom-up\" pronunciations for each word in the corpus.", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 117, "text": "Figure 5", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "P(x|q) = P(q|x)P(x) / P(q)", "sec_num": null }, { "text": "The rule-tagged surface lexicon described in \u00a72.2 and the counts derived from the forced-Viterbi described in \u00a72.3 can be combined to form a tagged lexicon that also has counts for each pronunciation of each word. Following is a sample entry from this lexicon for the word Adams which shows the five derivations for its single pronunciation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule probability estimation", "sec_num": null }, { "text": "Adams: ae dx ax m z: count=2 derivation 1: +TTS +FL1 -SL2 derivation 2: +BPU +FL1 -SL2 derivation 3: +CMU +FL1 +RV1 -SL2 derivation 4: +LIM +FL1 -SL2 derivation 5: +PLX +FL1 -SL2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule probability estimation", "sec_num": null }, { "text": "Each pronunciation of each word in this lexicon is annotated with rule tags. Since each pronunciation may be derived from different source dictionaries or via different rules, each pronunciation of a word may contain multiple derivations, each consisting of the list of rules which applied to give the pronunciation from the base form. These tags are either positive, indicating that a rule applied, or negative, indicating that it did not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule probability estimation", "sec_num": null }, { "text": "To produce the initial rule probabilities, we need to count the number of times each rule applies, out of the number of times it had the potential to apply.
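This counting can be sketched as follows (our illustration, not the paper's code; it makes the simplifying assumption that a pronunciation's derivations are weighted uniformly, whereas the paper re-estimates derivation probabilities):

```python
from collections import defaultdict

def rule_probabilities(lexicon_counts):
    """lexicon_counts: list of (count, derivations), where each
    derivation is a list of tags like '+FL1' or '-RV1'.
    Returns P(rule) = weighted applications / weighted opportunities."""
    applied = defaultdict(float)
    possible = defaultdict(float)
    for count, derivations in lexicon_counts:
        weight = count / len(derivations)  # uniform derivation weights
        for deriv in derivations:
            for tag in deriv:
                rule = tag[1:]
                possible[rule] += weight
                if tag.startswith("+"):
                    applied[rule] += weight
    return {r: applied[r] / possible[r] for r in possible}

counts = [
    (2, [["+FL1", "-SL2"], ["+FL1", "+RV1", "-SL2"]]),  # hypothetical entry
    (1, [["-FL1"]]),
]
print(rule_probabilities(counts))
```

Replacing the uniform weights with derivation probabilities, and iterating, gives the estimation-maximization scheme described below.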
If each pronunciation only had a single derivation, this would be computed simply as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule probability estimation", "sec_num": null }, { "text": "P(R) = Sum_{p in PRON} Ct(Rule R applied in p) / Sum_{p in PRON} Ct(Rule R could have applied in p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule probability estimation", "sec_num": null }, { "text": "This could be computed from the tags as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule probability estimation", "sec_num": null }, { "text": "P(R) = Sum_{p in PRON} Ct(+R tags in p) / Sum_{p in PRON} [Ct(+R tags in p) + Ct(-R tags in p)] However, since each pronunciation can have multiple derivations, the counts for each rule from each derivation need to be weighted by the probability of the derivation. The derivation probability is computed simply by multiplying together the probability of each of the applications or non-applications of the rule. We use successive estimation-maximization to provide successive approximations to P(d|p). For efficiency reasons, we actually compute the probabilities of all rules in parallel, as shown in Figure 6.", "cite_spans": [], "ref_spans": [ { "start": 580, "end": 588, "text": "Figure 6", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Rule probability estimation", "sec_num": null }, { "text": "We ran the estimation algorithm on 7203 sentences (129,864 words) read from the Wall Street Journal. The corpus (1993 WSJ Hub 2 (WSJ 0) training data) consisted of 12 hours of speech, and had 8916 unique words. Table 6 shows the probabilities for the ten phonological rules described in \u00a72.2. Note that all of the rules are indeed quite optional; even the most commonly-employed rules, like flapping and h-voicing, only apply on average about 90% of the time.
Many of the other rules, such as the reduced-vowel or reduced-liquid rules, only apply about 50% of the time.", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 6", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "We next attempted to judge the reliability of our automatic rule-probability estimation algorithm by comparing it with hand-transcribed pronunciations. We took the hand-transcribed pronunciations of each word in TIMIT, and computed rule probabilities by the same rule-tag counting procedure used for our forced-Viterbi output. Figure 7 shows the fit between the automatic and hand-transcribed probabilities. Since the TIMIT pronunciations were from a completely different data collection effort with a very different corpus and speakers, the closeness of the probabilities is quite encouraging. We also broke the rule probabilities down into male and female speakers. Notice that many of the rules seem to be employed more often by men than by women. For example, men are about 5% more likely to flap, more likely to reduce the vowels ih and er, and slightly more likely to reduce liquids and nasals.", "cite_spans": [], "ref_spans": [ { "start": 327, "end": 335, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "Since these are coarticulation or fast-speech effects, our initial hypothesis was that the difference between male and female speakers was due to a faster speech-rate by males.
By computing the weighted average seconds per phone for male and female speakers, we found that females had an average of 71 ms/phone, while males had an average of 68 ms/phone, a difference of about 4%, quite correlated with the similar differences in reduction and flapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "Our algorithm for phonological rule probability estimation synthesizes and extends earlier work by (Cohen 1989) and (Wooters 1993). The idea of using optional phonological rules to construct a speech-recognition lexicon derives from Cohen (1989), who applied optional phonological rules to a baseform dictionary to produce a surface lexicon and then used TIMIT to assign probabilities for each pronunciation. The use of a forced-Viterbi speech decoder to discover pronunciations from a corpus was proposed by Wooters (1993). Wesenick & Schiel (1994) independently propose a very similar forced-Viterbi-decoder-based technique which they use for measuring the accuracy of hand-written phonology. Chen (1990) and Riley (1991) model the relationship between phonemes and their allophonic realizations by training decision trees on TIMIT data. A decision tree is learned for each underlying phoneme specifying its surface realization in different contexts. These completely automatic techniques, requiring no hand-written rules, can allow a more fine-grained analysis than our rule-based algorithm. However, as a consequence, it is more difficult to extract generalizations across classes of phonemes to which rules can apply.
We think that a hybrid between a rule-based and a decision-tree approach could prove quite powerful.", "cite_spans": [ { "start": 99, "end": 111, "text": "(Cohen 1989)", "ref_id": "BIBREF3" }, { "start": 116, "end": 130, "text": "(Wooters 1993)", "ref_id": "BIBREF18" }, { "start": 233, "end": 245, "text": "Cohen (1989)", "ref_id": "BIBREF3" }, { "start": 510, "end": 524, "text": "Wooters (1993)", "ref_id": "BIBREF18" }, { "start": 527, "end": 551, "text": "Wesenick & Schiel (1994)", "ref_id": null }, { "start": 696, "end": 707, "text": "Chen (1990)", "ref_id": "BIBREF1" }, { "start": 712, "end": 724, "text": "Riley (1991)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Although the paradigm of exploratory computational phonology is only in its infancy, we believe our rule-probability estimation algorithm to be a new and useful instance of the use of probabilistic techniques and spoken-language corpora in computational linguistics. In Tajchman et al. (1995) we report on the results of our algorithm on speech recognition performance. We plan in future work to address a number of shortcomings of these experiments, for example including some spontaneous speech corpora, and looking at a wider variety of rules.", "cite_spans": [ { "start": 270, "end": 292, "text": "Tajchman et al. (1995)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In addition, we have extended our algorithm to induce new pronunciations which generalize over pronunciations seen in the corpus (Wooters & Stolcke 1994). We now plan to augment our probability estimation to use the pronunciations from this new HMM-induction-based generalization step.
This will require extending our tag-based probability estimation step to parse the phone strings from the forced-Viterbi.", "cite_spans": [ { "start": 129, "end": 153, "text": "(Wooters & Stolcke 1994)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In other current work we have also been using this algorithm to model the phonological component of the accent of non-native speakers. Finally, we hope in future work to be able to combine our rule-based approach with more bottom-up methods like the decision-tree or phonological parsing algorithms to induce rules as well as merely training their probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" } ], "back_matter": [ { "text": "Thanks to Mike Hochberg, Nelson Morgan, Steve Renals, Tony Robinson, Florian Schiel, Andreas Stolcke, and Chuck Wooters. This work was partially funded by ICSI and an SRI subcontract from ARPA contract MDA904-90-C-5253. Partial funding also came from ESPRIT project 6487 (The Wernicke project).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Merging multilayer perceptrons & Hidden Markov Models: Some experiments in continuous speech recognition", "authors": [ { "first": "H", "middle": [], "last": "Bourlard", "suffix": "" }, { "first": "N", "middle": [], "last": "Morgan", "suffix": "" } ], "year": 1991, "venue": "Artificial Neural Networks: Advances and Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "BOURLARD, H., & N. MORGAN. 1991. Merging multilayer perceptrons & Hidden Markov Models: Some experiments in continuous speech recognition. In Artificial Neural Networks: Advances and Applications, ed. by E. Gelenbe.
North Holland Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Identification of contextual factors for pronunciation networks", "authors": [ { "first": "F", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1990, "venue": "IEEE ICASSP-90", "volume": "", "issue": "", "pages": "753--756", "other_ids": {}, "num": null, "urls": [], "raw_text": "CHEN, F. 1990. Identification of contextual factors for pronunciation networks. In IEEE ICASSP-90, 753-756.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Carnegie Mellon Pronouncing Dictionary v0", "authors": [], "year": 1993, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CMU, 1993. The Carnegie Mellon Pronouncing Dictionary v0.1. Carnegie Mellon University.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Phonological Structures for Speech Recognition", "authors": [ { "first": "M", "middle": [ "H" ], "last": "Cohen", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "COHEN, M. H., 1989. Phonological Structures for Speech Recognition. University of California, Berkeley dissertation.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The OGI Numbers Database", "authors": [ { "first": "R", "middle": [ "A" ], "last": "Cole", "suffix": "" }, { "first": "K", "middle": [], "last": "Roginski", "suffix": "" }, { "first": "M", "middle": [], "last": "Fanty", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "COLE, R. A., K. ROGINSKI, & M. FANTY, 1994. The OGI Numbers Database. Oregon Graduate Institute.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The COMLEX English Pronouncing Dictionary.
copyright Trustees of the University of Pennsylvania", "authors": [], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "COMLEX, 1994. The COMLEX English Pronouncing Dictionary. Copyright Trustees of the University of Pennsylvania.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The acquisition of stress: A data-oriented approach", "authors": [ { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Gillis", "suffix": "" }, { "first": "Gert", "middle": [], "last": "Durieux", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "3", "pages": "421--451", "other_ids": {}, "num": null, "urls": [], "raw_text": "DAELEMANS, WALTER, STEVEN GILLIS, & GERT DURIEUX. 1994. The acquisition of stress: A data-oriented approach. Computational Linguistics 20(3).421-451.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Machine Learning of Phonological Structure", "authors": [ { "first": "T", "middle": [], "last": "Ellison", "suffix": "" }, { "first": "", "middle": [], "last": "Mark", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ELLISON, T. MARK, 1992. The Machine Learning of Phonological Structure. University of Western Australia dissertation.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning words in time: Towards a modular connectionist account of the acquisition of receptive morphology", "authors": [ { "first": "Michael", "middle": [], "last": "Gasser", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "GASSER, MICHAEL, 1993. Learning words in time: Towards a modular connectionist account of the acquisition of receptive morphology.
Draft.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Perceptual linear predictive (pip) analysis of speech", "authors": [ { "first": "H", "middle": [], "last": "Hermansky", "suffix": "" } ], "year": 1990, "venue": "J. Acoustical Society of America", "volume": "87", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "HERMANSKY, H. 1990. Perceptual linear predictive (pip) analysis of speech. J. Acoustical Society of America 87.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Limsi Dictionary", "authors": [ { "first": "Lori", "middle": [], "last": "Lamel", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "LAMEL, LORI, 1993. The Limsi Dictionary.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Continuous Speech Recognition Corpus (WSJ 0). National Institute of Standards and Technology", "authors": [], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "NIST, 1993. Continuous Speech Recognition Corpus (WSJ 0). National Institute of Standards and Technology Speech Disc 11-1.1 to 11-3.1.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Connectionist speech recognition: Status and prospects", "authors": [ { "first": "S", "middle": [], "last": "Renals", "suffix": "" }, { "first": "N", "middle": [], "last": "Morgan", "suffix": "" }, { "first": "H", "middle": [], "last": "Bourlard", "suffix": "" }, { "first": "M", "middle": [], "last": "Co-Hen", "suffix": "" }, { "first": "H", "middle": [], "last": "Franco", "suffix": "" }, { "first": "C", "middle": [], "last": "Wooters", "suffix": "" }, { "first": "~", "middle": [ "P" ], "last": "Kohn", "suffix": "" } ], "year": 1991, "venue": "ICSI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "RENALS, S., N. MORGAN, H. BOURLARD, M. CO- HEN, H. FRANCO, C. 
WOOTERS, ~ P. KOHN. 1991. Connectionist speech recognition: Sta- tus and prospects. Technical Report TR-91-070, ICSI, Berkeley, CA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A statistical model for generating pronunciation networks", "authors": [ { "first": "Michael", "middle": [ "D" ], "last": "Riley", "suffix": "" } ], "year": 1991, "venue": "IEEE ICASSP-91", "volume": "", "issue": "", "pages": "737--740", "other_ids": {}, "num": null, "urls": [], "raw_text": "RILEY, MICHAEL D. 1991. A statistical model for generating pronunciation networks. In IEEE ICASSP-91, 737-740.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The British English Example Pronunciation Dictionary, v0.1. Cambridge University", "authors": [ { "first": "Anthony", "middle": [], "last": "Robinson", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ROBINSON, ANTHONY, 1994. The British English Example Pronunciation Dictionary, v0.1. Cam- bridge University.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Building multiple pronunciation models for novel words using exploratory computational phonology", "authors": [ { "first": "Gary", "middle": [], "last": "Tajchman", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Fosler", "suffix": "" }, { "first": "~", "middle": [], "last": "Daniel Ju-Rafsky", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "TAJCHMAN, GARY, ERIC FOSLER, ~ DANIEL JU- RAFSKY. 1995. Building multiple pronunciation models for novel words using exploratory com- putational phonology. To appear in Eurospeech- 95.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "TIMIT Acoustic-Phonetic Continuous Speech Corpus. National Institute of Standards and Technology Speech Disc 1-1.1. 
NTIS Order No", "authors": [], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "91--505065", "other_ids": {}, "num": null, "urls": [], "raw_text": "TIMIT, 1990. TIMIT Acoustic-Phonetic Continuous Speech Corpus. National Institute of Standards and Technology Speech Disc 1-1.1. NTIS Order No. PB91-505065.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Applying speech verification to a large data base of German to obtain a statistical survey about rules of pronunciation", "authors": [ { "first": "Maria-Barbara", "middle": [], "last": "Wesenick", "suffix": "" }, { "first": "~", "middle": [], "last": "Florian Schiel", "suffix": "" } ], "year": 1994, "venue": "ICSLP-9~", "volume": "", "issue": "", "pages": "279--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "WESENICK, MARIA-BARBARA, ~ FLORIAN SCHIEL. 1994. Applying speech verification to a large data base of German to obtain a statistical sur- vey about rules of pronunciation. In ICSLP-9~, 279-282.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Lexical Modeling in a Speaker Independent Speech Understanding System. Berkeley: University of California dissertation", "authors": [ { "first": "Charles", "middle": [ "C" ], "last": "Wooters", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "WOOTERS, CHARLES C., 1993. Lexical Modeling in a Speaker Independent Speech Understand- ing System. Berkeley: University of California dissertation. 
Available as ICSI TR-92-062.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multiple-pronunciation lexical modeling in a speaker-independent speech understanding system", "authors": [ { "first": "Chuck", "middle": [], "last": "Wooters", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "WOOTERS, CHUCK, & ANDREAS STOLCKE. 1994. Multiple-pronunciation lexical modeling in a speaker-independent speech understanding system. In ICSLP-94.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Applying Rules to the Base Lexicon", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Phonetic Likelihood Estimator", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "Pronunciation models for \"of\" and \"the\"", "num": null, "uris": null, "type_str": "figure" }, "FIGREF5": { "text": "oh)= .4 START P(ah | START)= .5", "num": null, "uris": null, "type_str": "figure" }, "FIGREF6": { "text": "Computing most-likely phone paths in a Forced-Viterbi alignment of 'of the'", "num": null, "uris": null, "type_str": "figure" }, "FIGREF7": { "text": "A forced-Viterbi phonetic labelling for a Wall Street Journal sentence", "num": null, "uris": null, "type_str": "figure" }, "FIGREF8": { "text": "DERIVS(p) be the set of all derivations of a pronunciation p, \u2022 POSRULES(p, r, d) be 1.0 if derivation d of pronunciation p uses rule r, else 0. \u2022 ALLRULES(p,r) be the count of all derivations of p in which rule r could have applied (i.e. in which d has either a +R or -R tag). \u2022 P(d|p) be the probability of the derivation d of pronunciation p. \u2022 PRON be the set of pronunciations derived from the forced-Viterbi output. 
Now a single iteration of the rule-probability algorithm must perform the following computation: P(r) = ( sum_{p in PRON} sum_{d in DERIVS(p)} P(d|p) * POSRULES(p,r,d) ) / ( sum_{p in PRON} ALLRULES(p,r) ). Since we begin with no prior knowledge, we make the zero-knowledge initial assumption that P(d|p) = 1/|DERIVS(p)|. The algorithm can then be run iteratively.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF9": { "text": "Parallel computation of rule probabilities", "num": null, "uris": null, "type_str": "figure" }, "FIGREF10": { "text": "breaks down our automatically generated rule probabilities for the Wall Street Journal corpus. Percent of Phonological Rule Use, WSJ0 vs. Automatic vs. Hand-transcribed Probabilities for Phonological Rules", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "text": "", "content": "
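One iteration of the rule-probability estimation described above — the P(d|p)-weighted count of derivations tagged +R, divided by the count of all derivations in which R could have applied, initialized with the zero-knowledge assumption P(d|p) = 1/|DERIVS(p)| — might be sketched as follows. The data layout (a dict mapping each pronunciation to its list of derivations, each derivation a set of (sign, rule) tags) is a hypothetical assumption, not the paper's implementation.

```python
def estimate_rule_probs(pron_derivs):
    """pron_derivs: {pron: [derivation, ...]}, where a derivation is a set
    of (sign, rule) tags: '+' = rule applied, '-' = rule could have applied
    but did not. Returns {rule: P(rule)} for one iteration."""
    pos = {}    # sum over p, d of P(d|p) * POSRULES(p, r, d)
    total = {}  # sum over p of ALLRULES(p, r): derivations where r could apply
    for derivs in pron_derivs.values():
        p_d = 1.0 / len(derivs)  # zero-knowledge: P(d|p) = 1/|DERIVS(p)|
        for d in derivs:
            for sign, rule in d:
                total[rule] = total.get(rule, 0) + 1
                if sign == "+":
                    pos[rule] = pos.get(rule, 0.0) + p_d
    return {r: pos.get(r, 0.0) / total[r] for r in total}

# Toy example: one pronunciation with two derivations, one of which
# applies a (hypothetical) "flapping" rule.
probs = estimate_rule_probs({"bahdxaxr": [{("+", "flapping")},
                                          {("-", "flapping")}]})
```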
", "type_str": "table", "num": null }, "TABREF1": { "html": null, "text": "Pronunciation sources used to build fully expanded lexicon.", "content": "", "type_str": "table", "num": null }, "TABREF3": { "html": null, "text": "Baseform phone set used was the ARPA-BET. This was expanded to include syllabics, stop closures, and reduced vowels, alveolar flap, and voiced h.", "content": "
", "type_str": "table", "num": null }, "TABREF4": { "html": null, "text": ", and optional rules to introduce the syllabic consonants [el, em, en], reduced vowels [ax, ix, axr], voiced h [hv], and alveolar flap [dx].", "content": "
", "type_str": "table", "num": null }, "TABREF6": { "html": null, "text": "Base forms for \"butter\"", "content": "
For each lexical item L, do:
  Place all base pronunciations of L onto queue Q
  While Q is not empty, do:
    Dequeue pronunciation P from Q
    For each phonological rule R, do:
      If the context of R could apply to P:
        Apply R to P, giving P'
        Tag P' with +R and put P' on Q
        Tag P with -R
    Output P with its tags
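The tagged-expansion loop above might be sketched in Python as follows. The rule representation (a name plus `applies`/`apply_rule` callables) and the intervocalic-flapping example for "butter" are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the queue-based lexicon expansion: every optional rule
# whose context matches is tried, and each pronunciation is tagged +R
# (rule applied) or -R (rule could have applied but did not).
from collections import deque

def expand(base_prons, rules):
    """base_prons: list of phone tuples; rules: list of
    (name, applies, apply_rule) where applies(pron) tests the rule's
    context and apply_rule(pron) returns the changed pronunciation.
    Yields (pronunciation, tags) pairs."""
    queue = deque((p, ()) for p in base_prons)
    seen = set()
    while queue:
        pron, tags = queue.popleft()
        if (pron, tags) in seen:  # guard against revisiting (sketch only;
            continue              # assumes rules do not loop forever)
        seen.add((pron, tags))
        for name, applies, apply_rule in rules:
            if applies(pron):
                # P' gets the tags accumulated so far plus +R
                queue.append((apply_rule(pron), tags + (("+", name),)))
                # P itself is tagged -R for this rule
                tags = tags + (("-", name),)
        yield pron, tags

# Toy rule: flap /t/ before syllabic [axr], as in "butter".
def is_flappable(pron):
    return ("t", "axr") in zip(pron, pron[1:])

def flap(pron):
    out = list(pron)
    out[out.index("t")] = "dx"
    return tuple(out)

rules = [("flapping", is_flappable, flap)]
result = dict(expand([("b", "ah", "t", "axr")], rules))
```

With this toy rule the base form yields two tagged pronunciations: the unflapped [b ah t axr] tagged -flapping, and the flapped [b ah dx axr] tagged +flapping, matching the +R/-R bookkeeping the probability-estimation step relies on.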
", "type_str": "table", "num": null }, "TABREF7": { "html": null, "text": "Resulting tagged entries and Renals et al.", "content": "", "type_str": "table", "num": null }, "TABREF10": { "html": null, "text": "", "content": "
Pr: .60, .57, .74, .35, .35, .72, .77, .87, .92, .92
Table: Results of the Rule-Probability-Estimation Algorithm
Figure 8: Male vs. Female Probabilities for Phonological Rules. [Bar chart: "Percent of Phonological Rule Use" per rule, male vs. female; y-axis Percent, 0.00-90.00; x-axis Rule.]
", "type_str": "table", "num": null } } } }