{ "paper_id": "P02-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:30:30.090832Z" }, "title": "Parameter Estimation for Probabilistic Finite-State Transducers *", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University Baltimore", "location": { "postCode": "21218-2691", "region": "MD", "country": "USA" } }, "email": "jason@cs.jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Weighted finite-state transducers suffer from the lack of a training algorithm. Training is even harder for transducers that have been assembled via finite-state operations such as composition, minimization, union, concatenation, and closure, as this yields tricky parameter tying. We formulate a \"parameterized FST\" paradigm and give training algorithms for it, including a general bookkeeping trick (\"expectation semirings\") that cleanly and efficiently computes expectations and gradients.", "pdf_parse": { "paper_id": "P02-1001", "_pdf_hash": "", "abstract": [ { "text": "Weighted finite-state transducers suffer from the lack of a training algorithm. Training is even harder for transducers that have been assembled via finite-state operations such as composition, minimization, union, concatenation, and closure, as this yields tricky parameter tying. We formulate a \"parameterized FST\" paradigm and give training algorithms for it, including a general bookkeeping trick (\"expectation semirings\") that cleanly and efficiently computes expectations and gradients.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Rational relations on strings have become widespread in language and speech engineering (Roche and Schabes, 1997) . Despite bounded memory they are well-suited to describe many linguistic and textual processes, either exactly or approximately.", "cite_spans": [ { "start": 88, "end": 113, "text": "(Roche and Schabes, 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background and Motivation", "sec_num": "1" }, { "text": "A relation is a set of (input, output) pairs. Relations are more general than functions because they may pair a given input string with more or fewer than one output string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Motivation", "sec_num": "1" }, { "text": "The class of so-called rational relations admits a nice declarative programming paradigm. Source code describing the relation (a regular expression) is compiled into efficient object code (in the form of a 2-tape automaton called a finite-state transducer). The object code can even be optimized for runtime and code size (via algorithms such as determinization and minimization of transducers).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Motivation", "sec_num": "1" }, { "text": "This programming paradigm supports efficient nondeterminism, including parallel processing over infinite sets of input strings, and even allows \"reverse\" computation from output to input. Its unusual flexibility for the practiced programmer stems from the many operations under which rational relations are closed. 
It is common to define further useful operations (as macros), which modify existing relations not by editing their source code but simply by operating on them \"from outside.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Motivation", "sec_num": "1" }, { "text": "The entire paradigm has been generalized to weighted relations, which assign a weight to each (input, output) pair rather than simply including or excluding it. If these weights represent probabilities P (input, output) or P (output | input), the weighted relation is called a joint or conditional (probabilistic) relation and constitutes a statistical model. Such models can be efficiently restricted, manipulated or combined using rational operations as before. An artificial example will appear in \u00a72.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Motivation", "sec_num": "1" }, { "text": "The availability of toolkits for this weighted case (Mohri et al., 1998; van Noord and Gerdemann, 2001) promises to unify much of statistical NLP. Such tools make it easy to run most current approaches to statistical markup, chunking, normalization, segmentation, alignment, and noisy-channel decoding, 1 including classic models for speech recognition (Pereira and Riley, 1997) and machine translation (Knight and Al-Onaizan, 1998) . Moreover, once the models are expressed in the finitestate framework, it is easy to use operators to tweak them, to apply them to speech lattices or other sets, and to combine them with linguistic resources.", "cite_spans": [ { "start": 52, "end": 72, "text": "(Mohri et al., 1998;", "ref_id": "BIBREF16" }, { "start": 73, "end": 103, "text": "van Noord and Gerdemann, 2001)", "ref_id": "BIBREF29" }, { "start": 353, "end": 378, "text": "(Pereira and Riley, 1997)", "ref_id": "BIBREF18" }, { "start": 403, "end": 432, "text": "(Knight and Al-Onaizan, 1998)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Motivation", "sec_num": "1" }, { "text": "Unfortunately, there is a stumbling block: Where do the weights come from? After all, statistical models require supervised or unsupervised training. Currently, finite-state practitioners derive weights using exogenous training methods, then patch them onto transducer arcs. Not only do these methods require additional programming outside the toolkit, but they are limited to particular kinds of models and training regimens. For example, the forward-backward algorithm (Baum, 1972) trains only Hidden Markov Models, while (Ristad and Yianilos, 1996) trains only stochastic edit distance.", "cite_spans": [ { "start": 471, "end": 483, "text": "(Baum, 1972)", "ref_id": "BIBREF0" }, { "start": 524, "end": 551, "text": "(Ristad and Yianilos, 1996)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Motivation", "sec_num": "1" }, { "text": "In short, current finite-state toolkits include no training algorithms, because none exist for the large space of statistical models that the toolkits can in principle describe and run.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Motivation", "sec_num": "1" }, { "text": "1 Given output, find input to maximize P (input, output). This paper aims to provide a remedy through a new paradigm, which we call parameterized finitestate machines. It lays out a fully general approach for training the weights of weighted rational relations. 
First \u00a72 considers how to parameterize such models, so that weights are defined in terms of underlying parameters to be learned. \u00a73 asks what it means to learn these parameters from training data (what is to be optimized?), and notes the apparently formidable bookkeeping involved. \u00a74 cuts through the difficulty with a surprisingly simple trick. Finally, \u00a75 removes inefficiencies from the basic algorithm, making it suitable for inclusion in an actual toolkit. Such a toolkit could greatly shorten the development cycle in natural language engineering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Motivation", "sec_num": "1" }, { "text": "Finite-state machines, including finite-state automata (FSAs) and transducers (FSTs), are a kind of labeled directed multigraph. For ease and brevity, we explain them by example. Fig. 1a shows a probabilistic FST with input alphabet \u03a3 = {a, b}, output alphabet \u2206 = {x, z}, and all states final. It may be regarded as a device for generating a string pair in \u03a3 * \u00d7 \u2206 * by a random walk from 0 . Two paths exist that generate both input aabb and output xz:", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 186, "text": "Fig. 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "0 a:x/.63 \u2212\u2192 0 a: /.07 \u2212\u2192 1 b: /.03 \u2212\u2192 2 b:z/.4 \u2212\u2192 2/.5 0 a:x/.63 \u2212\u2192 0 a: /.07 \u2212\u2192 1 b:z/.12 \u2212\u2192 2 b: /.1 \u2212\u2192 2/.5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "Each of the paths has probability .0002646, so the probability of somehow generating the pair (aabb, xz) is .0002646 + .0002646 = .0005292.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "Abstracting away from the idea of random walks, arc weights need not be probabilities. Still, define a path's weight as the product of its arc weights and the stopping weight of its final state. Thus Fig. 1a defines a weighted relation f where f (aabb, xz) = .0005292. This particular relation does happen to be probabilistic (see \u00a71). It represents a joint distribution (since x,y f (x, y) = 1). Meanwhile, Fig. 1c defines a conditional one (\u2200x y f (x, y) = 1).", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 207, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 408, "end": 415, "text": "Fig. 1c", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "This paper explains how to adjust probability distributions like that of Fig. 1a so as to model training data better. The algorithm improves an FST's numeric weights while leaving its topology fixed. How many parameters are there to adjust in Fig. 1a ? That is up to the user who built it! An FST model with few parameters is more constrained, making optimization easier. Some possibilities:", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 80, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 243, "end": 250, "text": "Fig. 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "\u2022 Most simply, the algorithm can be asked to tune the 17 numbers in Fig. 1a separately, subject to the constraint that the paths retain total probability 1. 
A more specific version of the constraint requires the FST to remain Markovian: each of the 4 states must present options with total probability 1 (at state 1 , .15+.7+.03+.12=1). This preserves the random-walk interpretation and (we will show) entails no loss of generality. The 4 restrictions leave 13 free params.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 75, "text": "Fig. 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "\u2022 But perhaps Fig. 1a was actually obtained as the composition of Fig. 1b -c, effectively defining P (input, output) = \u03a3 mid P (input, mid) \u2022 P (output | mid). If Fig. 1b -c are required to remain Markovian, they have 5 and 1 degrees of freedom respectively, so now Fig. 1a has only 6 parameters total. 2 In general, composing machines multiplies their arc counts but only adds their parameter counts. We wish to optimize just the few underlying parameters, not independently optimize the many arc weights of the composed machine.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 21, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 66, "end": 73, "text": "Fig. 1b", "ref_id": "FIGREF0" }, { "start": 161, "end": 168, "text": "Fig. 1b", "ref_id": "FIGREF0" }, { "start": 264, "end": 271, "text": "Fig. 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "\u2022 Perhaps Fig. 1b was itself obtained by the probabilistic regular expression (a : p) * \u03bb (b : (p + \u00b5 q)) * \u03bd with the 3 parameters (\u03bb, \u00b5, \u03bd) = (.7, .2, .5). With \u03c1 = .1 from footnote 2, the composed machine 2 Why does Fig. 1c have only 1 degree of freedom? The Markovian requirement means something different in Fig. 1c , which defines a conditional relation P (output | mid) rather than a joint one. A random walk on Fig. 1c chooses among arcs with a given input label. So the arcs from state 6 with input p must have total probability 1 (currently .9+.1). All other arc choices are forced by the input label and so have probability 1. The only tunable value is .1 (denote it by \u03c1), with .9 = 1 \u2212 \u03c1.", "cite_spans": [], "ref_spans": [ { "start": 10, "end": 17, "text": "Fig. 1b", "ref_id": "FIGREF0" }, { "start": 219, "end": 226, "text": "Fig. 1c", "ref_id": "FIGREF0" }, { "start": 313, "end": 320, "text": "Fig. 1c", "ref_id": "FIGREF0" }, { "start": 419, "end": 426, "text": "Fig. 1c", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "( Fig. 1a ) has now been described with a total of just 4 parameters! 3 Here, probabilistic union E", "cite_spans": [], "ref_spans": [ { "start": 2, "end": 9, "text": "Fig. 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "+ \u00b5 F def = \u00b5E + (1 \u2212 \u00b5)F means \"flip a \u00b5-weighted coin and generate E if heads, F if tails.\" E * \u03bb def = (\u03bbE) * (1\u2212\u03bb)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "means \"repeatedly flip an \u03bb-weighted coin and keep repeating E as long as it comes up heads.\" These 4 parameters have global effects on Fig. 1a , thanks to complex parameter tying: arcs 4 b: Fig. 1b get respective probabilities (1 \u2212 \u03bb)\u00b5\u03bd and (1 \u2212 \u00b5)\u03bd, which covary with \u03bd and vary oppositely with \u00b5. 
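As a concrete illustration of this tying (our own sketch; the arc names follow the figure as quoted above, and the helper function is hypothetical), the two arc probabilities can be derived directly from the coin parameters, so that re-tuning a single coin moves every arc weight that mentions it:

```python
# Illustrative sketch only: the two tied arc probabilities of Fig. 1b, derived
# from the coin parameters of the regexp (a:p)*_lambda (b:(p +_mu q))*_nu.
def fig1b_tied_arc_probs(lam, mu, nu):
    return {
        "4 -b:p-> 5": (1 - lam) * mu * nu,  # rises with mu, covaries with nu
        "5 -b:q-> 5": (1 - mu) * nu,        # falls with mu, covaries with nu
    }

print(fig1b_tied_arc_probs(lam=.7, mu=.2, nu=.5))  # approx {0.03, 0.4}
print(fig1b_tied_arc_probs(lam=.7, mu=.4, nu=.5))  # one coin change moves both arcs
```
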
Each of these probabilities in turn affects multiple arcs in the composed FST of Fig. 1a .", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 191, "end": 198, "text": "Fig. 1b", "ref_id": "FIGREF0" }, { "start": 381, "end": 388, "text": "Fig. 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "p \u2212\u2192 5 , 5 b:q \u2212\u2192 5 in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "We offer a theorem that highlights the broad applicability of these modeling techniques. 4 If f (input, output) is a weighted regular relation, then the following statements are equivalent: (1) f is a joint probabilistic relation;", "cite_spans": [ { "start": 89, "end": 90, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "(2) f can be computed by a Markovian FST that halts with probability 1;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "(3) f can be expressed as a probabilistic regexp, i.e., a regexp built up from atomic expressions a : b (for a \u2208 \u03a3 \u222a { }, b \u2208 \u2206 \u222a { }) using concatenation, probabilistic union + p , and probabilistic closure * p .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "For defining conditional relations, a good regexp language is unknown to us, but they can be defined in several other ways: (1) via FSTs as in Fig. 1c, (2) by compilation of weighted rewrite rules (Mohri and Sproat, 1996) , (3) by compilation of decision trees (Sproat and Riley, 1996) , (4) as a relation that performs contextual left-to-right replacement of input substrings by a smaller conditional relation (Gerdemann and van Noord, 1999), 5 (5) by conditionalization of a joint relation as discussed below.", "cite_spans": [ { "start": 197, "end": 221, "text": "(Mohri and Sproat, 1996)", "ref_id": "BIBREF15" }, { "start": 261, "end": 285, "text": "(Sproat and Riley, 1996)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 143, "end": 151, "text": "Fig. 1c,", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "A central technique is to define a joint relation as a noisy-channel model, by composing a joint relation with a cascade of one or more conditional relations as in Fig. 1 (Pereira and Riley, 1997; Knight and Graehl, 1998) . The general form is illustrated by 3 Conceptually, the parameters represent the probabilities of reading another a (\u03bb); reading another b (\u03bd); transducing b to p rather than q (\u00b5); starting to transduce p to rather than x (\u03c1). 4 To prove (1)\u21d2(3), express f as an FST and apply the well-known Kleene-Sch\u00fctzenberger construction (Berstel and Reutenauer, 1988) , taking care to write each regexp in the construction as a constant times a probabilistic regexp. 
A full proof is straightforward, as are proofs of (3)\u21d2(2), (2)\u21d2(1).", "cite_spans": [ { "start": 197, "end": 221, "text": "Knight and Graehl, 1998)", "ref_id": "BIBREF10" }, { "start": 259, "end": 260, "text": "3", "ref_id": null }, { "start": 451, "end": 452, "text": "4", "ref_id": null }, { "start": 551, "end": 581, "text": "(Berstel and Reutenauer, 1988)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 164, "end": 170, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "5 In (4), the randomness is in the smaller relation's choice of how to replace a match. One can also get randomness through the choice of matches, ignoring match possibilities by randomly deleting markers in Gerdemann and van Noord's construction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "P (v, z) def =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "w,x,y P (v|w)P (w, x)P (y|x)P (z|y), implemented by composing 4 machines. 6, 7 There are also procedures for defining weighted FSTs that are not probabilistic (Berstel and Reutenauer, 1988) . Arbitrary weights such as 2.7 may be assigned to arcs or sprinkled through a regexp (to be compiled into : /2.7 \u2212\u2192 arcs). A more subtle example is weighted FSAs that approximate PCFGs (Nederhof, 2000; Mohri and Nederhof, 2001 ), or to extend the idea, weighted FSTs that approximate joint or conditional synchronous PCFGs built for translation. These are parameterized by the PCFG's parameters, but add or remove strings of the PCFG to leave an improper probability distribution.", "cite_spans": [ { "start": 74, "end": 76, "text": "6,", "ref_id": null }, { "start": 77, "end": 78, "text": "7", "ref_id": null }, { "start": 159, "end": 189, "text": "(Berstel and Reutenauer, 1988)", "ref_id": "BIBREF1" }, { "start": 376, "end": 392, "text": "(Nederhof, 2000;", "ref_id": "BIBREF17" }, { "start": 393, "end": 417, "text": "Mohri and Nederhof, 2001", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "Fortunately for those techniques, an FST with positive arc weights can be normalized to make it jointly or conditionally probabilistic:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "\u2022 An easy approach is to normalize the options at each state to make the FST Markovian. Unfortunately, the result may differ for equivalent FSTs that express the same weighted relation. Undesirable consequences of this fact have been termed \"label bias\" (Lafferty et al., 2001) . Also, in the conditional case such per-state normalization is only correct if all states accept all input suffixes (since \"dead ends\" leak probability mass). 8 \u2022 A better-founded approach is global normalization, which simply divides each f (x, y) by", "cite_spans": [ { "start": 254, "end": 277, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF11" }, { "start": 438, "end": 439, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "x ,y f (x , y ) (joint case) or by y f (x, y ) (conditional case). To implement the joint case, just divide stopping weights by the total weight of all paths (which \u00a74 shows how to find), provided this is finite. 
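As a toy illustration of the two normalizations (a sketch of ours over an explicitly enumerated relation rather than an FST; the weights below are invented), the joint case divides every weight by the total mass, while the conditional case divides f(x, y) by the mass of its input x:

```python
# Toy sketch of global normalization on an explicitly enumerated
# weighted relation f(x, y); the particular weights are invented.
f = {("aa", "x"): 2.0, ("aa", "xz"): 1.0, ("b", "z"): 3.0}

# Joint case: divide every weight by the total weight of all pairs.
Z = sum(f.values())
f_joint = {xy: w / Z for xy, w in f.items()}

# Conditional case: divide f(x, y) by the total weight of pairs sharing x.
Zx = {}
for (x, y), w in f.items():
    Zx[x] = Zx.get(x, 0.0) + w
f_cond = {(x, y): w / Zx[x] for (x, y), w in f.items()}

assert abs(sum(f_joint.values()) - 1.0) < 1e-12
assert abs(sum(p for (x, _), p in f_cond.items() if x == "aa") - 1.0) < 1e-12
```
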
In the conditional case, let g be a copy of f with the output labels removed, so that g(x) finds the desired divisor; determinize g if possible (but this fails for some weighted FSAs), replace all weights with their reciprocals, and compose the result with f . 9 6 P (w, x) defines the source model, and is often an \"identity FST\" that requires w = x, really just an FSA. 7 We propose also using n-tape automata to generalize to \"branching noisy channels\" (a case of dendroid distributions). In w,x P (v|w)P (v |w)P (w, x)P (y|x), the true transcription w can be triply constrained by observing speech y and two errorful transcriptions v, v , which independently depend on w.", "cite_spans": [ { "start": 585, "end": 586, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "8 A corresponding problem exists in the joint case, but may be easily avoided there by first pruning non-coaccessible states.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "9 It suffices to make g unambiguous (one accepting path per string), a weaker condition than determinism. When this is not possible (as in the inverse of Fig. 1b , whose conditionaliza-Normalization is particularly important because it enables the use of log-linear (maximum-entropy) parameterizations.", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 161, "text": "Fig. 1b", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "Here one defines each arc weight, coin weight, or regexp weight in terms of meaningful features associated by hand with that arc, coin, etc. Each feature has a strength \u2208 R >0 , and a weight is computed as the product of the strengths of its features. 10 It is now the strengths that are the learnable parameters. This allows meaningful parameter tying: if certain arcs such as u:i \u2212\u2192,", "cite_spans": [ { "start": 252, "end": 254, "text": "10", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "o:e \u2212\u2192, and a:ae \u2212\u2192 share a contextual \"vowel-fronting\" feature, then their weights rise and fall together with the strength of that feature. The resulting machine must be normalized, either per-state or globally, to obtain a joint or a conditional distribution as desired. Such approaches have been tried recently in restricted cases (McCallum et al., 2000; Eisner, 2001b; Lafferty et al., 2001) .", "cite_spans": [ { "start": 335, "end": 358, "text": "(McCallum et al., 2000;", "ref_id": "BIBREF13" }, { "start": 359, "end": 373, "text": "Eisner, 2001b;", "ref_id": "BIBREF6" }, { "start": 374, "end": 396, "text": "Lafferty et al., 2001)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "Normalization may be postponed and applied instead to the result of combining the FST with other FSTs by composition, union, concatenation, etc. A simple example is a probabilistic FSA defined by normalizing the intersection of other probabilistic FSAs f 1 , f 2 , . . .. 
(This is in fact a log-linear model in which the component FSAs define the features: string x has log f i (x) occurrences of feature i.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "In short, weighted finite-state operators provide a language for specifying a wide variety of parameterized statistical models. Let us turn to their training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transducers and Parameters", "sec_num": "2" }, { "text": "We are primarily concerned with the following training paradigm, novel in its generality. Let f \u03b8 : \u03a3 * \u00d7\u2206 * \u2192 R \u22650 be a joint probabilistic relation that is computed by a weighted FST. The FST was built by some recipe that used the parameter vector \u03b8. Changing \u03b8 may require us to rebuild the FST to get updated weights; this can involve composition, regexp compilation, multiplication of feature strengths, etc. (Lazy algorithms that compute arcs and states of tion cannot be realized by any weighted FST), one can sometimes succeed by first intersecting g with a smaller regular set in which the input being considered is known to fall. In the extreme, if each input string is fully observed (not the case if the input is bound by composition to the output of a one-to-many FST), one can succeed by restricting g to each input string in turn; this amounts to manually dividing f (x, y) by g(x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "10 Traditionally log(strength) values are called weights, but this paper uses \"weight\" to mean something else. f \u03b8 on demand (Mohri et al., 1998) can pay off here, since only part of f \u03b8 may be needed subsequently.) As training data we are given a set of observed (input, output) pairs, (x i , y i ). These are assumed to be independent random samples from a joint distribution of the form f\u03b8(x, y); the goal is to recover the true\u03b8. Samples need not be fully observed (partly supervised training): thus x i \u2286 \u03a3 * , y i \u2286 \u2206 * may be given as regular sets in which input and output were observed to fall. For example, in ordinary HMM training, x i = \u03a3 * and represents a completely hidden state sequence (cf. Ristad (1998) , who allows any regular set), while y i is a single string representing a completely observed emission sequence. 11 What to optimize? Maximum-likelihood estimation guesses\u03b8 to be the \u03b8 maximizing", "cite_spans": [ { "start": 125, "end": 145, "text": "(Mohri et al., 1998)", "ref_id": "BIBREF16" }, { "start": 708, "end": 721, "text": "Ristad (1998)", "ref_id": "BIBREF22" }, { "start": 836, "end": 838, "text": "11", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "i f \u03b8 (x i , y i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "Maximum-posterior estimation tries to maximize P (\u03b8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "\u2022 i f \u03b8 (x i , y i ) where P (\u03b8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "is a prior probability. 
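For concreteness, a minimal sketch of these two objectives in log space is given below; the toy model f, the tiny training set, and the Gaussian penalty on log feature strengths are our illustrative assumptions rather than constructs of the paper.

```python
import math

# Sketch of the two objectives in log space; f_theta and the prior are
# illustrative assumptions, not constructs from the paper.
def log_likelihood(f_theta, data):
    return sum(math.log(f_theta(x, y)) for (x, y) in data)

def log_posterior(f_theta, strengths, data, sigma2=1.0):
    # assumed prior: Gaussian on log feature strengths, peaked at strength 1
    log_prior = -sum(math.log(s) ** 2 for s in strengths) / (2 * sigma2)
    return log_prior + log_likelihood(f_theta, data)

# Toy one-parameter joint model and a tiny training set (both invented).
theta = 0.6
f = lambda x, y: theta if (x, y) == ("a", "x") else (1 - theta)
data = [("a", "x"), ("a", "x"), ("b", "z")]
print(log_likelihood(f, data), log_posterior(f, [2.0, 0.5], data))
```
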
In a log-linear parameterization, for example, a prior that penalizes feature strengths far from 1 can be used to do feature selection and avoid overfitting (Chen and Rosenfeld, 1999) .", "cite_spans": [ { "start": 181, "end": 207, "text": "(Chen and Rosenfeld, 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "The EM algorithm (Dempster et al., 1977) can maximize these functions. Roughly, the E step guesses hidden information: if (x i , y i ) was generated from the current f \u03b8 , which FST paths stand a chance of having been the path used? (Guessing the path also guesses the exact input and output.) The M step updates \u03b8 to make those paths more likely. EM alternates these steps and converges to a local optimum. The M step's form depends on the parameterization and the E step serves the M step's needs.", "cite_spans": [ { "start": 17, "end": 40, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "Let f \u03b8 be Fig. 1a and suppose (x i , y i ) = (a(a + b) * , xxz). During the E step, we restrict to paths compatible with this observation by computing Fig. 2 . To find each path's posterior probability given the observation (x i , y i ), just conditionalize: divide its raw probability by the total probability (\u2248 0.1003) of all paths in Fig. 2 .", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 18, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 152, "end": 158, "text": "Fig. 2", "ref_id": "FIGREF1" }, { "start": 339, "end": 345, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "x i \u2022 f \u03b8 \u2022 y i , shown in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "But that is not the full E step. The M step uses not individual path probabilities (Fig. 2 has infinitely many) but expected counts derived from the paths. Crucially, \u00a74 will show how the E step can accumulate these counts effortlessly. We first explain their use by the M step, repeating the presentation of \u00a72:", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 90, "text": "(Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "\u2022 If the parameters are the 17 weights in Fig. 1a , the M step reestimates the probabilities of the arcs from each state to be proportional to the expected number of traversals of each arc (normalizing at each state to make the FST Markovian). So the E step must count traversals. This requires mapping Fig. 2 back onto Fig. 1a : to traverse either 8 a:x \u2212\u2192 9 or 9 a:x \u2212\u2192 10 in Fig. 2 is \"really\" to traverse 0 a:x \u2212\u2192 0 in Fig. 1a . \u2022 If Fig. 1a was built by composition, the M step is similar but needs the expected traversals of the arcs in Fig. 1b-c . This requires further unwinding of Fig. 1a's 0 a: x \u2212\u2192 0 : to traverse that arc is \"really\" to traverse Fig. 1b's 4 a:p \u2212\u2192 4 and Fig. 1c 's 6 p:x \u2212\u2192 6 . \u2022 If Fig. 1b was defined by the regexp given earlier, traversing 4 a:p \u2212\u2192 4 is in turn \"really\" just evidence that the \u03bb-coin came up heads. 
To learn the weights \u03bb, \u03bd, \u00b5, \u03c1, count expected heads/tails for each coin.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 49, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 303, "end": 309, "text": "Fig. 2", "ref_id": "FIGREF1" }, { "start": 320, "end": 327, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 378, "end": 384, "text": "Fig. 2", "ref_id": "FIGREF1" }, { "start": 423, "end": 430, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 438, "end": 445, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 543, "end": 552, "text": "Fig. 1b-c", "ref_id": "FIGREF0" }, { "start": 590, "end": 604, "text": "Fig. 1a's 0 a:", "ref_id": "FIGREF0" }, { "start": 659, "end": 670, "text": "Fig. 1b's 4", "ref_id": "FIGREF0" }, { "start": 684, "end": 691, "text": "Fig. 1c", "ref_id": "FIGREF0" }, { "start": 713, "end": 720, "text": "Fig. 1b", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "\u2022 If arc probabilities (or even \u03bb, \u03bd, \u00b5, \u03c1) have loglinear parameterization, then the E step must compute c = i ec f (x i , y i ), where ec(x, y) denotes the expected vector of total feature counts along a random path in f \u03b8 whose (input, output) matches (x, y). The M step then treats c as fixed, observed data and adjusts \u03b8 until the predicted vector of total feature counts equals c, using Improved Iterative Scaling (Della Pietra et al., 1997; Chen and Rosenfeld, 1999) . 12 For globally normalized, joint models, the predicted vector is ec f (\u03a3 * , \u2206 * ). If the log-linear probabilities are conditioned on the state and/or the input, the predicted vector is harder to describe (though usually much easier to compute). 13", "cite_spans": [ { "start": 420, "end": 447, "text": "(Della Pietra et al., 1997;", "ref_id": "BIBREF3" }, { "start": 448, "end": 473, "text": "Chen and Rosenfeld, 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "12 IIS is itself iterative; to avoid nested loops, run only one iteration at each M step, giving a GEM algorithm (Riezler, 1999) . Alternatively, discard EM and use gradient-based optimization.", "cite_spans": [ { "start": 113, "end": 128, "text": "(Riezler, 1999)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "13 For per-state conditional normalization, let Dj,a be the set of arcs from state j with input symbol a \u2208 \u03a3; their weights are normalized to sum to 1. Besides computing c, the E step must count the expected number dj,a of traversals of arcs in each Dj,a. Then the predicted vector given \u03b8 is j,a dj,a \u2022 (expected feature counts on a randomly chosen arc in Dj,a). Per-state joint normalization (Eisner, 2001b, \u00a78. 2) is similar but drops the dependence on a. The difficult case is global conditional normalization. 
It arises, for example, when training a joint model of the form", "cite_spans": [ { "start": 394, "end": 413, "text": "(Eisner, 2001b, \u00a78.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "f \u03b8 = \u2022 \u2022 \u2022 (g \u03b8 \u2022 h \u03b8 ) \u2022 \u2022 \u2022, where h \u03b8 is a conditional", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "It is also possible to use this EM approach for discriminative training, where we wish to maximize i P (y i | x i ) and f \u03b8 (x, y) is a conditional FST that defines P (y | x). The trick is to instead train a joint model g", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "\u2022 f \u03b8 , where g(x i ) defines P (x i ), thereby maximizing i P (x i ) \u2022 P (y i | x i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": ". (Of course, the method of this paper can train such compositions.) If x 1 , . . . x n are fully observed, just define each g(x i ) = 1/n. But by choosing a more general model of g, we can also handle incompletely observed x i : training g \u2022 f \u03b8 then forces g and f \u03b8 to cooperatively reconstruct a distribution over the possible inputs and do discriminative training of f \u03b8 given those inputs. (Any parameters of g may be either frozen before training or optimized along with the parameters of f \u03b8 .) A final possibility is that each x i is defined by a probabilistic FSA that already supplies a distribution over the inputs; then we consider x i \u2022 f \u03b8 \u2022 y i directly, just as in the joint model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "Finally, note that EM is not all-purpose. It only maximizes probabilistic objective functions, and even there it is not necessarily as fast as (say) conjugate gradient. For this reason, we will also show below how to compute the gradient of f \u03b8 (x i , y i ) with respect to \u03b8, for an arbitrary parameterized FST f \u03b8 . We remark without elaboration that this can help optimize task-related objective functions, such as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "i y (P (x i , y) \u03b1 / y P (x i , y ) \u03b1 ) \u2022 error(y, y i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation in Parameterized FSTs", "sec_num": "3" }, { "text": "It remains to devise appropriate E steps, which looks rather daunting. Each path in Fig. 2 weaves together parameters from other machines, which we must untangle and tally. In the 4-coin parameterization, path 8 a:x \u2212\u2192 9 a:x \u2212\u2192 10 a: \u2212\u2192 10 a: \u2212\u2192 10 b:z \u2212\u2192 12 must yield up a vector H \u03bb , T \u03bb , H \u00b5 , T \u00b5 , H \u03bd , T \u03bd , H \u03c1 , T \u03c1 that counts observed heads and tails of the 4 coins. This nontrivially works out to 4, 1, 0, 1, 1, 1, 1, 2 . For other parameterizations, the path must instead yield a vector of arc traversal counts or feature counts.", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 90, "text": "Fig. 
2", "ref_id": "FIGREF1" }, { "start": 409, "end": 434, "text": "to 4, 1, 0, 1, 1, 1, 1, 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The E Step: Expectation Semirings", "sec_num": "4" }, { "text": "Computing a count vector for one path is hard enough, but it is the E step's job to find the expected value of this vector-an average over the infinitely log-linear model of P (v | u) for u \u2208 \u03a3 * , v \u2208 \u2206 * . Then the predicted count vector contributed by h is i u\u2208\u03a3 * P (u | xi, yi) \u2022 ec h (u, \u2206 * ). The term i P (u | xi, yi) computes the expected count of each u \u2208 \u03a3 * . It may be found by a variant of \u00a74 in which path values are regular expressions over \u03a3 * . many paths \u03c0 through Fig. 2 in proportion to their posterior probabilities P (\u03c0 | x i , y i ). The results for all (x i , y i ) are summed and passed to the M step.", "cite_spans": [], "ref_spans": [ { "start": 485, "end": 491, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The E Step: Expectation Semirings", "sec_num": "4" }, { "text": "Abstractly, let us say that each path \u03c0 has not only a probability P (\u03c0) \u2208 [0, 1] but also a value val(\u03c0) in a vector space V , which counts the arcs, features, or coin flips encountered along path \u03c0. The value of a path is the sum of the values assigned to its arcs. The E step must return the expected value of the unknown path that generated (x i , y i ). For example, if every arc had value 1, then expected value would be expected path length. Letting \u03a0 denote the set of paths in (Fig. 2) , the expected value is 14", "cite_spans": [], "ref_spans": [ { "start": 486, "end": 494, "text": "(Fig. 2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The E Step: Expectation Semirings", "sec_num": "4" }, { "text": "x i \u2022 f \u03b8 \u2022 y i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The E Step: Expectation Semirings", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E[val(\u03c0) | x i , y i ] = \u03c0\u2208\u03a0 P (\u03c0) val(\u03c0) \u03c0\u2208\u03a0 P (\u03c0)", "eq_num": "(1)" } ], "section": "The E Step: Expectation Semirings", "sec_num": "4" }, { "text": "The denominator of equation 1is the total probability of all accepting paths in x i \u2022 f \u2022 y i . But while computing this, we will also compute the numerator. The idea is to augment the weight data structure with expectation information, so each weight records a probability and a vector counting the parameters that contributed to that probability. We will enforce an invariant: the weight of any pathset \u03a0 must be ( \u03c0\u2208\u03a0 P (\u03c0), \u03c0\u2208\u03a0 P (\u03c0) val(\u03c0)) \u2208 R \u22650 \u00d7 V , from which (1) is trivial to compute. Berstel and Reutenauer (1988) give a sufficiently general finite-state framework to allow this: weights may fall in any set K (instead of R). Multiplication and addition are replaced by binary operations \u2297 and \u2295 on K. Thus \u2297 is used to combine arc weights into a path weight and \u2295 is used to combine the weights of alternative paths. To sum over infinite sets of cyclic paths we also need a closure operation * , interpreted as k * = \u221e i=0 k i . The usual finite-state algorithms work if (K, \u2295, \u2297, * ) has the structure of a closed semiring. 
15 Ordinary probabilities fall in the semiring (R \u22650 , +, \u00d7, * ). 16 Our novel weights fall in a novel 14 Formal derivation of (1):", "cite_spans": [ { "start": 497, "end": 526, "text": "Berstel and Reutenauer (1988)", "ref_id": "BIBREF1" }, { "start": 1039, "end": 1041, "text": "15", "ref_id": null }, { "start": 1105, "end": 1107, "text": "16", "ref_id": null }, { "start": 1142, "end": 1144, "text": "14", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The E Step: Expectation Semirings", "sec_num": "4" }, { "text": "\u03c0 P (\u03c0 | xi, yi) val(\u03c0) = ( \u03c0 P (\u03c0, xi, yi) val(\u03c0))/P (xi, yi) = ( \u03c0 P (xi, yi | \u03c0)P (\u03c0) val(\u03c0))/ \u03c0 P (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The E Step: Expectation Semirings", "sec_num": "4" }, { "text": "xi, yi | \u03c0)P (\u03c0); now observe that P (xi, yi | \u03c0) = 1 or 0 according to whether \u03c0 \u2208 \u03a0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The E Step: Expectation Semirings", "sec_num": "4" }, { "text": "15 That is: (K, \u2297) is a monoid (i.e., \u2297 : K \u00d7 K \u2192 K is associative) with identity 1. (K, \u2295) is a commutative monoid with identity 0. \u2297 distributes over \u2295 from both sides, 0 \u2297 k = k \u2297 0 = 0, and k * = 1 \u2295 k \u2297 k * = 1 \u2295 k * \u2297 k. For finite-state composition, commutativity of \u2297 is needed as well. 16 The closure operation is defined for p \u2208 [0, 1) as p * = 1/(1 \u2212 p), so cycles with weights in [0, 1) are allowed.", "cite_spans": [ { "start": 295, "end": 297, "text": "16", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The E Step: Expectation Semirings", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u22650 \u00d7 V, \u2295, \u2297, * ): (p 1 , v 1 ) \u2297 (p 2 , v 2 ) def = (p 1 p 2 , p 1 v 2 + v 1 p 2 ) (2) (p 1 , v 1 ) \u2295 (p 2 , v 2 ) def = (p 1 + p 2 , v 1 + v 2 ) (3) if p * defined, (p, v) * def = (p * , p * vp * )", "eq_num": "(4)" } ], "section": "V -expectation semiring, (R", "sec_num": null }, { "text": "If an arc has probability p and value v, we give it the weight (p, pv), so that our invariant (see above) holds if \u03a0 consists of a single length-0 or length-1 path. The above definitions are designed to preserve our invariant as we build up larger paths and pathsets. \u2297 lets us concatenate (e.g.) simple paths \u03c0 1 , \u03c0 2 to get a longer path \u03c0 with P (\u03c0) = P (\u03c0 1 )P (\u03c0 2 ) and val(\u03c0) = val(\u03c0 1 ) + val(\u03c0 2 ). The definition of \u2297 guarantees that path \u03c0's weight will be (P (\u03c0), P (\u03c0) \u2022 val(\u03c0)). \u2295 lets us take the union of two disjoint pathsets, and * computes infinite unions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "V -expectation semiring, (R", "sec_num": null }, { "text": "To compute (1) now, we only need the total weight t i of accepting paths in (Fig. 2) . This can be computed with finite-state methods: the machine", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 84, "text": "(Fig. 
2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "V -expectation semiring, (R", "sec_num": null }, { "text": "x i \u2022 f \u2022 y i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "V -expectation semiring, (R", "sec_num": null }, { "text": "( \u00d7x i )\u2022f \u2022(y i \u00d7 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "V -expectation semiring, (R", "sec_num": null }, { "text": "is a version that replaces all input:output labels with : , so it maps ( , ) to the same total weight t i . Minimizing it yields a onestate FST from which t i can be read directly! The other \"magical\" property of the expectation semiring is that it automatically keeps track of the tangled parameter counts. For instance, recall that traversing 0 a:x \u2212\u2192 0 should have the same effect as traversing both the underlying arcs 4 a:p \u2212\u2192 4 and 6 p:x \u2212\u2192 6 . And indeed, if the underlying arcs have values v 1 and v 2 , then the composed arc", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "V -expectation semiring, (R", "sec_num": null }, { "text": "0 a:x \u2212\u2192 0 gets weight (p 1 , p 1 v 1 ) \u2297 (p 2 , p 2 v 2 ) = (p 1 p 2 , p 1 p 2 (v 1 + v 2 )), just as if it had value v 1 + v 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "V -expectation semiring, (R", "sec_num": null }, { "text": "\u2022 To count traversals of the arcs of Figs. 1b-c, number these arcs and let arc have value e , the th basis vector. Then the th element of val(\u03c0) counts the appearances of arc in path \u03c0, or underlying path \u03c0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some concrete examples of values may be useful:", "sec_num": null }, { "text": "\u2022 A regexp of form E+ \u00b5 F = \u00b5E+(1\u2212\u00b5)F should be weighted as (\u00b5, \u00b5e k )E + (1 \u2212 \u00b5, (1 \u2212 \u00b5)e k+1 )F in the new semiring. Then elements k and k + 1 of val(\u03c0) count the heads and tails of the \u00b5-coin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some concrete examples of values may be useful:", "sec_num": null }, { "text": "\u2022 For a global log-linear parameterization, an arc's value is a vector specifying the arc's features. Then val(\u03c0) counts all the features encountered along \u03c0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some concrete examples of values may be useful:", "sec_num": null }, { "text": "Really we are manipulating weighted relations, not FSTs. We may combine FSTs, or determinize or minimize them, with any variant of the semiringweighted algorithms. 17 As long as the resulting FST computes the right weighted relation, the arrangement of its states, arcs, and labels is unimportant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some concrete examples of values may be useful:", "sec_num": null }, { "text": "The same semiring may be used to compute gradients. We would like to find f \u03b8 (x i , y i ) and its gradient with respect to \u03b8, where f \u03b8 is real-valued but need not be probabilistic. Whatever procedures are used to evaluate f \u03b8 (x i , y i ) exactly or approximately-for example, FST operations to compile f \u03b8 followed by minimization of ( \u00d7 x i ) \u2022 f \u03b8 \u2022 (y i \u00d7 )-can simply be applied over the expectation semiring, replacing each weight p by (p, \u2207p) and replacing the usual arithmetic operations with \u2295, \u2297, etc. 
18 (2)-(4) preserve the gradient ((2) is the derivative product rule), so this computation yields (f \u03b8 (x i , y i ), \u2207f \u03b8 (x i , y i )).", "cite_spans": [ { "start": 514, "end": 516, "text": "18", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Some concrete examples of values may be useful:", "sec_num": null }, { "text": "Now for some important remarks on efficiency:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Removing Inefficiencies", "sec_num": "5" }, { "text": "\u2022 Computing t i is an instance of the well-known algebraic path problem (Lehmann, 1977; Tarjan, 1981a) . Let T i = x i \u2022f \u2022y i . Then t i is the total semiring weight w 0n of paths in T i from initial state 0 to final state n (assumed WLOG to be unique and unweighted). It is wasteful to compute t i as suggested earlier, by minimizing ( \u00d7x i )\u2022f \u2022(y i \u00d7 ), since then the real work is done by an -closure step (Mohri, 2002) that implements the all-pairs version of algebraic path, whereas all we need is the single-source version. If n and m are the number of states and edges, 19 then both problems are O(n 3 ) in the worst case, but the single-source version can be solved in essentially O(m) time for acyclic graphs and other reducible flow graphs (Tarjan, 1981b) . For a general graph T i , Tarjan (1981b) shows how to partition into \"hard\" subgraphs that localize the cyclicity or irreducibility, then run the O(n 3 ) algorithm on each subgraph (thereby reducing n to as little as 1), and recombine the results. The overhead of partitioning and recombining is essentially only O(m).", "cite_spans": [ { "start": 72, "end": 87, "text": "(Lehmann, 1977;", "ref_id": "BIBREF12" }, { "start": 88, "end": 102, "text": "Tarjan, 1981a)", "ref_id": "BIBREF27" }, { "start": 411, "end": 424, "text": "(Mohri, 2002)", "ref_id": "BIBREF17" }, { "start": 752, "end": 767, "text": "(Tarjan, 1981b)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Removing Inefficiencies", "sec_num": "5" }, { "text": "\u2022 For speeding up the O(n 3 ) problem on subgraphs, one can use an approximate relaxation technique 1998), although such data could also be used; (4) training of branching noisy channels (footnote 7); (5) discriminative training with incomplete data; (6) training of conditional MEMMs (McCallum et al., 2000) and conditional random fields (Lafferty et al., 2001 ) on unbounded sequences.", "cite_spans": [ { "start": 279, "end": 308, "text": "MEMMs (McCallum et al., 2000)", "ref_id": null }, { "start": 339, "end": 361, "text": "(Lafferty et al., 2001", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Removing Inefficiencies", "sec_num": "5" }, { "text": "We are particularly interested in the potential for quickly building statistical models that incorporate linguistic and engineering insights. Many models of interest can be constructed in our paradigm, without having to write new code. Bringing diverse models into the same declarative framework also allows one to apply new optimization methods, objective functions, and finite-state algorithms to all of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Removing Inefficiencies", "sec_num": "5" }, { "text": "To avoid local maxima, one might try deterministic annealing (Rao and Rose, 2001) , or randomized methods, or place a prior on \u03b8. Another extension is to adjust the machine topology, say by model merging (Stolcke and Omohundro, 1994) . 
Such techniques build on our parameter estimation method.", "cite_spans": [ { "start": 61, "end": 81, "text": "(Rao and Rose, 2001)", "ref_id": "BIBREF19" }, { "start": 204, "end": 233, "text": "(Stolcke and Omohundro, 1994)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Removing Inefficiencies", "sec_num": "5" }, { "text": "The key algorithmic ideas of this paper extend from forward-backward-style to inside-outside-style methods. For example, it should be possible to do end-to-end training of a weighted relation defined by an interestingly parameterized synchronous CFG composed with tree transducers and then FSTs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Removing Inefficiencies", "sec_num": "5" }, { "text": "To implement an HMM by an FST, compose a probabilistic FSA that generates a state sequence of the HMM with a conditional FST that transduces HMM states to emitted symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Eisner (submitted) develops fast minimization algorithms that work for the real and V -expectation semirings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If xi and yi are acyclic (e.g., fully observed strings), and f (or rather its FST) has no : cycles, then composition will \"unroll\" f into an acyclic machine. If only xi is acyclic, then the composition is still acyclic if domain(f ) has no cycles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "18 Division and subtraction are also possible: \u2212(p, v) = (\u2212p, \u2212v) and (p, v) \u22121 = (p \u22121 , \u2212p \u22121 vp \u22121 ). Division is commonly used in defining f \u03b8 (for normalization). 19 Multiple edges from j to k are summed into a single edge. (Mohri, 2002) . Efficient hardware implementation is also possible via chip-level parallelism (Rote, 1985) .\u2022 In many cases of interest, T i is an acyclic graph. 20 Then Tarjan's method computes w 0j for each j in topologically sorted order, thereby finding t i in a linear number of \u2295 and \u2297 operations. For HMMs (footnote 11), T i is the familiar trellis, and we would like this computation of t i to reduce to the forwardbackward algorithm (Baum, 1972) . But notice that it has no backward pass. In place of pushing cumulative probabilities backward to the arcs, it pushes cumulative arcs (more generally, values in V ) forward to the probabilities. This is slower because our \u2295 and \u2297 are vector operations, and the vectors rapidly lose sparsity as they are added together. We therefore reintroduce a backward pass that lets us avoid \u2295 and \u2297 when computing t i (so they are needed only to construct T i ). This speedup also works for cyclic graphs and for any V . Write w jk as (p jk , v jk ), and let w 1 jk = (p 1 jk , v 1 jk ) denote the weight of the edge from j to k. 19 Then it can be shown thatThe forward and backward probabilities, p 0j and p kn , can be computed using single-source algebraic path for the simpler semiring (R, +, \u00d7, * )-or equivalently, by solving a sparse linear system of equations over R, a much-studied problem at O(n) space, O(nm) time, and faster approximations (Greenbaum, 1997) .\u2022 A Viterbi variant of the expectation semiring exists:Here, the forward and backward probabilities can be computed in time only O(m + n log n) (Fredman and Tarjan, 1987) . 
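For the acyclic case discussed above, a minimal sketch of the single-source computation (the function name and graph encoding are our assumptions) visits states in topological order and applies the semiring operations once per edge; it is shown with ordinary probabilities, and substituting the expectation-semiring operations sketched earlier yields the numerator and denominator of (1) in the same sweep.

```python
# Sketch (function name and graph encoding are ours): total weight t_i of an
# acyclic machine whose states are numbered in topological order, with state 0
# initial and state n the single unweighted final state.
def total_weight(n, edges, oplus, otimes, one, zero):
    w = [zero] * (n + 1)              # w[j] = total weight of paths from 0 to j
    w[0] = one
    for j in range(n + 1):            # topological order: w[j] is final here
        for k, wjk in edges.get(j, []):
            w[k] = oplus(w[k], otimes(w[j], wjk))
    return w[n]

# Demo with ordinary probabilities; plugging in the expectation-semiring
# operations gives the E step's expected counts in the same pass.
edges = {0: [(1, 0.63)], 1: [(2, 0.07), (2, 0.03)]}
print(total_weight(2, edges, oplus=lambda a, b: a + b,
                   otimes=lambda a, b: a * b, one=1.0, zero=0.0))  # -> 0.063
```
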
k-best variants are also possible.", "cite_spans": [ { "start": 168, "end": 170, "text": "19", "ref_id": null }, { "start": 229, "end": 242, "text": "(Mohri, 2002)", "ref_id": "BIBREF17" }, { "start": 323, "end": 335, "text": "(Rote, 1985)", "ref_id": "BIBREF24" }, { "start": 391, "end": 393, "text": "20", "ref_id": null }, { "start": 671, "end": 683, "text": "(Baum, 1972)", "ref_id": "BIBREF0" }, { "start": 1626, "end": 1643, "text": "(Greenbaum, 1997)", "ref_id": "BIBREF8" }, { "start": 1789, "end": 1815, "text": "(Fredman and Tarjan, 1987)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "We have exhibited a training algorithm for parameterized finite-state machines. Some specific consequences that we believe to be novel are (1) an EM algorithm for FSTs with cycles and epsilons; (2) training algorithms for HMMs and weighted contextual edit distance that work on incomplete data; (3) endto-end training of noisy channel cascades, so that it is not necessary to have separate training data for each machine in the cascade (cf. Knight and Graehl, ", "cite_spans": [ { "start": 441, "end": 459, "text": "Knight and Graehl,", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An inequality and associated maximization technique in statistical estimation of probabilistic functions of a Markov process", "authors": [ { "first": "L", "middle": [ "E" ], "last": "Baum", "suffix": "" } ], "year": 1972, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. E. Baum. 1972. An inequality and associated max- imization technique in statistical estimation of proba- bilistic functions of a Markov process. Inequalities, 3.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Rational Series and their Languages", "authors": [ { "first": "Jean", "middle": [], "last": "Berstel", "suffix": "" }, { "first": "Christophe", "middle": [], "last": "Reutenauer", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Berstel and Christophe Reutenauer. 1988. Rational Series and their Languages. Springer-Verlag.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Gaussian prior for smoothing maximum entropy models", "authors": [ { "first": "F", "middle": [], "last": "Stanley", "suffix": "" }, { "first": "Ronald", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley F. Chen and Ronald Rosenfeld. 1999. A Gaus- sian prior for smoothing maximum entropy models. Technical Report CMU-CS-99-108, Carnegie Mellon.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Inducing features of random fields", "authors": [ { "first": "V", "middle": [ "Della" ], "last": "Della Pietra", "suffix": "" }, { "first": "J", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "19", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Della Pietra, V. Della Pietra, and J. Lafferty. 1997. Inducing features of random fields. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "J. Royal Statist. Soc. Ser. B", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statist. Soc. Ser. B, 39(1):1-38.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Expectation semirings: Flexible EM for finite-state transducers", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2001, "venue": "Proc. of the ESSLLI Workshop on Finite-State Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 2001a. Expectation semirings: Flexible EM for finite-state transducers. In G. van Noord, ed., Proc. of the ESSLLI Workshop on Finite-State Methods in Natural Language Processing. Extended abstract.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Smoothing a Probabilistic Lexicon via Syntactic Transformations", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 2001b. Smoothing a Probabilistic Lexicon via Syntactic Transformations. Ph.D. thesis, Univer- sity of Pennsylvania.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Transducers from rewrite rules with backreferences. Proc. of EACL", "authors": [ { "first": "D", "middle": [], "last": "Gerdemann", "suffix": "" }, { "first": "G", "middle": [], "last": "Van Noord", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Gerdemann and G. van Noord. 1999. Transducers from rewrite rules with backreferences. Proc. of EACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Iterative Methods for Solving Linear Systems", "authors": [ { "first": "Anne", "middle": [], "last": "Greenbaum", "suffix": "" } ], "year": 1997, "venue": "Soc. for Industrial and Applied Math", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne Greenbaum. 1997. Iterative Methods for Solving Linear Systems. Soc. for Industrial and Applied Math.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Translation with finite-state devices", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Yaser", "middle": [], "last": "Al-Onaizan", "suffix": "" } ], "year": 1998, "venue": "Proc. of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Yaser Al-Onaizan. 1998. Translation with finite-state devices. In Proc. 
of AMTA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Machine transliteration", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proc. of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Con- ditional random fields: Probabilistic models for seg- menting and labeling sequence data. Proc. of ICML.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Algebraic structures for transitive closure", "authors": [ { "first": "D", "middle": [ "J" ], "last": "Lehmann", "suffix": "" } ], "year": 1977, "venue": "Theoretical Computer Science", "volume": "4", "issue": "1", "pages": "59--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. J. Lehmann. 1977. Algebraic structures for transitive closure. Theoretical Computer Science, 4(1):59-76.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Maximum entropy Markov models for information extraction and segmentation", "authors": [ { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "D", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2000, "venue": "Proc. of ICML", "volume": "", "issue": "", "pages": "591--598", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. McCallum, D. Freitag, and F. Pereira. 2000. Maxi- mum entropy Markov models for information extrac- tion and segmentation. Proc. of ICML, 591-598.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Regular approximation of context-free grammars through transformation", "authors": [ { "first": "M", "middle": [], "last": "Mohri", "suffix": "" }, { "first": "M.-J", "middle": [], "last": "Nederhof", "suffix": "" } ], "year": 2001, "venue": "Robustness in Language and Speech Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Mohri and M.-J. Nederhof. 2001. Regular approxi- mation of context-free grammars through transforma- tion. In J.-C. Junqua and G. van Noord, eds., Robust- ness in Language and Speech Technology. Kluwer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An efficient compiler for weighted rewrite rules", "authors": [ { "first": "Mehryar", "middle": [], "last": "Mohri", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 1996, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mehryar Mohri and Richard Sproat. 1996. An efficient compiler for weighted rewrite rules. In Proc. of ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A rational design for a weighted finite-state transducer library. 
Lecture Notes in Computer Science", "authors": [ { "first": "M", "middle": [], "last": "Mohri", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "M", "middle": [], "last": "Riley", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Mohri, F. Pereira, and M. Riley. 1998. A rational de- sign for a weighted finite-state transducer library. Lec- ture Notes in Computer Science, 1436.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Generic epsilon-removal and input epsilon-normalization algorithms for weighted transducers", "authors": [ { "first": "M", "middle": [], "last": "Mohri", "suffix": "" } ], "year": 2002, "venue": "Int. J. of Foundations of Comp. Sci", "volume": "", "issue": "13", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Mohri. 2002. Generic epsilon-removal and input epsilon-normalization algorithms for weighted trans- ducers. Int. J. of Foundations of Comp. Sci., 1(13). Mark-Jan Nederhof. 2000. Practical experiments with regular approximation of context-free languages. Computational Linguistics, 26(1).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Speech recognition by composition of weighted finite automata", "authors": [ { "first": "Fernando", "middle": [ "C", "N" ], "last": "Pereira", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Riley", "suffix": "" } ], "year": 1997, "venue": "Finite-State Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando C. N. Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite au- tomata. In E. Roche and Y. Schabes, eds., Finite-State Language Processing. MIT Press, Cambridge, MA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deterministically annealed design of hidden Markov model speech recognizers", "authors": [ { "first": "A", "middle": [], "last": "Rao", "suffix": "" }, { "first": "K", "middle": [], "last": "Rose", "suffix": "" } ], "year": 2001, "venue": "In IEEE Trans. on Speech and Audio Processing", "volume": "9", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Rao and K. Rose. 2001. Deterministically annealed design of hidden Markov model speech recognizers. In IEEE Trans. on Speech and Audio Processing, 9(2).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Probabilistic Constraint Logic Programming", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Riezler. 1999. Probabilistic Constraint Logic Programming. Ph.D. thesis, Universit\u00e4t T\u00fcbingen.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning string edit distance", "authors": [ { "first": "E", "middle": [], "last": "Ristad", "suffix": "" }, { "first": "P", "middle": [], "last": "Yianilos", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Ristad and P. Yianilos. 1996. Learning string edit distance. Tech.
Report CS-TR-532-96, Princeton.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Hidden Markov models with finite state supervision", "authors": [ { "first": "E", "middle": [], "last": "Ristad", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Ristad. 1998. Hidden Markov models with finite state supervision. In A. Kornai, ed., Extended Finite State Models of Language. Cambridge University Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Finite-State Language Processing", "authors": [], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanuel Roche and Yves Schabes, editors. 1997. Finite-State Language Processing. MIT Press.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A systolic array algorithm for the algebraic path problem (shortest paths; matrix inversion)", "authors": [ { "first": "", "middle": [], "last": "G\u00fcnter Rote", "suffix": "" } ], "year": 1985, "venue": "Computing", "volume": "34", "issue": "3", "pages": "191--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcnter Rote. 1985. A systolic array algorithm for the algebraic path problem (shortest paths; matrix inver- sion). Computing, 34(3):191-219.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Compilation of weighted finite-state transducers from decision trees", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Riley", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Sproat and Michael Riley. 1996. Compilation of weighted finite-state transducers from decision trees. In Proceedings of the 34th Annual Meeting of the ACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Best-first model merging for hidden Markov model induction", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Stephen", "middle": [ "M" ], "last": "Omohundro", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke and Stephen M. Omohundro. 1994. Best-first model merging for hidden Markov model in- duction. Tech. Report ICSI TR-94-003, Berkeley, CA.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A unified approach to path problems", "authors": [ { "first": "", "middle": [], "last": "Robert Endre Tarjan", "suffix": "" } ], "year": 1981, "venue": "Journal of the ACM", "volume": "28", "issue": "3", "pages": "577--593", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Endre Tarjan. 1981a. A unified approach to path problems. Journal of the ACM, 28(3):577-593, July.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Fast algorithms for solving path problems", "authors": [ { "first": "", "middle": [], "last": "Robert Endre Tarjan", "suffix": "" } ], "year": 1981, "venue": "J. of the ACM", "volume": "28", "issue": "3", "pages": "594--614", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Endre Tarjan. 1981b. Fast algorithms for solving path problems. J. 
of the ACM, 28(3):594-614, July.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "An extendible regular expression compiler for finite-state approaches in natural language processing", "authors": [ { "first": "G", "middle": [], "last": "Van Noord", "suffix": "" }, { "first": "D", "middle": [], "last": "Gerdemann", "suffix": "" } ], "year": 2001, "venue": "Automata Implementation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. van Noord and D. Gerdemann. 2001. An extendible regular expression compiler for finite-state approaches in natural language processing. In Automata Imple- mentation, no. 22 in Springer Lecture Notes in CS.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "(a) A probabilistic FST defining a joint probability distribution. (b) A smaller joint distribution. (c) A conditional distribution. Defining (a)=(b)\u2022(c) means that the weights in (a) can be altered by adjusting the fewer weights in (b) and (c)." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "The joint model ofFig. 1aconstrained to generate only input \u2208 a(a + b) * and output = xxz." } } } }