{ "paper_id": "P02-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:30:53.561504Z" }, "title": "Comprehension and Compilation in Optimality Theory *", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University Baltimore", "location": { "postCode": "21218-2691", "region": "MD", "country": "USA" } }, "email": "jason@cs.jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper ties up some loose ends in finite-state Optimality Theory. First, it discusses how to perform comprehension under Optimality Theory grammars consisting of finite-state constraints. Comprehension has not been much studied in OT; we show that unlike production, it does not always yield a regular set, making finite-state methods inapplicable. However, after giving a suitably flexible presentation of OT, we show carefully how to treat comprehension under recent variants of OT in which grammars can be compiled into finite-state transducers. We then unify these variants, showing that compilation is possible if all components of the grammar are regular relations, including the harmony ordering on scored candidates. A side benefit of our construction is a far simpler implementation of directional OT (Eisner, 2000).", "pdf_parse": { "paper_id": "P02-1008", "_pdf_hash": "", "abstract": [ { "text": "This paper ties up some loose ends in finite-state Optimality Theory. First, it discusses how to perform comprehension under Optimality Theory grammars consisting of finite-state constraints. Comprehension has not been much studied in OT; we show that unlike production, it does not always yield a regular set, making finite-state methods inapplicable. However, after giving a suitably flexible presentation of OT, we show carefully how to treat comprehension under recent variants of OT in which grammars can be compiled into finite-state transducers. We then unify these variants, showing that compilation is possible if all components of the grammar are regular relations, including the harmony ordering on scored candidates. A side benefit of our construction is a far simpler implementation of directional OT (Eisner, 2000).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "To produce language is to convert utterances from their underlying (\"deep\") form to a surface form. Optimality Theory or OT (Prince and Smolensky, 1993) proposes to describe phonological production as an optimization process. For an underlying x, a speaker purportedly chooses the surface form z so as to maximize the harmony of the pair (x, z). Broadly speaking, (x, z) is harmonic if z is \"easy\" to pronounce and \"similar\" to x. But the precise harmony measure depends on the language; according to OT, it can be specified by a grammar of ranked desiderata known as constraints.", "cite_spans": [ { "start": 124, "end": 152, "text": "(Prince and Smolensky, 1993)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "According to OT, then, production maps each underlying form to its best possible surface pronunciation. It is akin to the function that maps each child x to his or her most flattering outfit z. 
Different children look best in different clothes, and for an oddly shaped child x, even the best conceivable outfit z may be an awkward compromise between style and fit-that is, between ease of pronunciation and similarity to x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Language comprehension is production in reverse. In OT, it maps each outfit z to the set of chil- * Thanks to Kie Zuraw for asking about comprehension; to Ron Kaplan for demanding an algebraic construction before he believed directional OT was finite-state; and to others whose questions convinced me that this paper deserved to be written. dren x for whom that outfit is optimal, i.e., is at least as flattering as any other outfit z :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "PRODUCE(x) = {z : ( z ) (x, z ) > (x, z)} COMPREHEND(z) = {x : z \u2208 PRODUCE(x)} = {x : ( z ) (x, z ) > (x, z)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In general z and z may range over infinitely many possible pronunciations. While the formulas above are almost identical, comprehension is in a sense more complex because it varies both the underlying and surface forms. While PRODUCE(x) considers all pairs (x, z ), COMPREHEND(z) must for each x consider all pairs (x, z ). Of course, this nested definition does not preclude computational shortcuts. This paper has three modest goals:", "cite_spans": [ { "start": 226, "end": 236, "text": "PRODUCE(x)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. To show that OT comprehension does in fact present a computational problem that production does not. Even when the OT grammar is required to be finite-state, so that production can be performed with finite-state techniques, comprehension cannot in general be performed with finite-state techniques. 2. To consider recent constructions that cut through this problem (Frank and Satta, 1998; Karttunen, 1998; Eisner, 2000; Gerdemann and van Noord, 2000) . By altering or approximating the OT formalism-that is, by hook or by crook-these constructions manage to compile OT grammars into finite-state transducers. Transducers may readily be inverted to do comprehension as easily as production. We carefully lay out how to use them for comprehension in realistic circumstances (in the presence of correspondence theory, lexical constraints, hearer uncertainty, and phonetic postprocessing). 3. To give a unified treatment in the extended finitestate calculus of the constructions referenced above. This clarifies their meaning and makes them easy to implement. 
For example, we obtain a transparent algebraic version of Eisner's (2000) unbearably technical automaton construction for his proposed formalism of \"directional OT.\"", "cite_spans": [ { "start": 368, "end": 391, "text": "(Frank and Satta, 1998;", "ref_id": "BIBREF6" }, { "start": 392, "end": 408, "text": "Karttunen, 1998;", "ref_id": "BIBREF11" }, { "start": 409, "end": 422, "text": "Eisner, 2000;", "ref_id": "BIBREF4" }, { "start": 423, "end": 453, "text": "Gerdemann and van Noord, 2000)", "ref_id": "BIBREF7" }, { "start": 1117, "end": 1132, "text": "Eisner's (2000)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The treatment shows that all the constructions emerge directly from a generalized presentation of OT, in which the crucial fact is that the harmony ordering on scored candidates is a regular relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Work focusing on OT comprehension-or even mentioning it-has been surprisingly sparse. While the recent constructions mentioned in \u00a71 can easily be applied to the comprehension problem, as we will explain, they were motivated primarily by a desire to pare back OT's generative power to that of previous rewrite-rule formalisms (Johnson, 1972) . Fosler (1996) noted the existence of the OT comprehension task and speculated that it might succumb to heuristic search. Smolensky (1996) proposed to solve it by optimizing the underlying form,", "cite_spans": [ { "start": 326, "end": 341, "text": "(Johnson, 1972)", "ref_id": "BIBREF9" }, { "start": 344, "end": 357, "text": "Fosler (1996)", "ref_id": null }, { "start": 465, "end": 481, "text": "Smolensky (1996)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Comprehension", "sec_num": "2" }, { "text": "COMPREHEND(z) ? = {x : ( x ) (x , z) > (x, z)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Comprehension", "sec_num": "2" }, { "text": "Hale and Reiss (1998) pointed out in response that any comprehension-by-optimization strategy would have to arrange for multiple optima: after all, phonological comprehension is a one-to-many mapping (since phonological production is many-to-one). 1 The correctness of Smolensky's proposal (i.e., whether it really computes COMPREHEND) depends on the particular harmony measure. It can be made to work, multiple optima and all, if the harmony measure is constructed with both production and comprehension in mind. Indeed, for any phonology, it is trivial to design a harmony measure that both production and comprehension optimize. (Just define the harmony of (x, z) to be 1 or 0 according to whether the mapping x \u2192 z is in the language!) But we are really only interested in harmony measures that are defined by OT-style grammars (rankings of \"simple\" constraints). In this case Smolensky's proposal can be unworkable. In particular, \u00a74 will show that a finite-state production grammar in classical OT need not be invertible by any finite-state comprehension grammar. 1 Hale & Reiss's criticism may be specific to phonology and syntax. For some phenomena in semantics, pragmatics, and even morphology, Blutner (1999) argues for a one-to-one form-meaning mapping in which marked forms express marked meanings. 
He deliberately uses bidirectional optimization to rule out many-to-one cases: roughly speaking, an (x, z) pair is grammatical for him only if z is optimal given x and vice-versa.", "cite_spans": [ { "start": 248, "end": 249, "text": "1", "ref_id": null }, { "start": 1070, "end": 1071, "text": "1", "ref_id": null }, { "start": 1204, "end": 1218, "text": "Blutner (1999)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work on Comprehension", "sec_num": "2" }, { "text": "This section (graphically summarized in Fig. 1 ) lays out a generalized version of OT's theory of production, introducing some notational and representational conventions that may be useful to others and will be important below. In particular, all objects are represented as strings, or as functions that map strings to strings. This will enable us to use finitestate techniques later.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 46, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "The underlying form x and surface form z are represented as strings. We often refer to these strings as input and output. Following Eisner (1997) , each candidate (x, z) is also represented as a string y.", "cite_spans": [ { "start": 132, "end": 145, "text": "Eisner (1997)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "The notation (x, z) that we have been using so far for candidates is actually misleading, since in fact the candidates y that are compared encode more than just x and z. They also encode a particular alignment or correspondence between x and z. For example, if x = abdip and z = a[di] [bu] , then a typical candidate would be encoded", "cite_spans": [ { "start": 285, "end": 289, "text": "[bu]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "y = aab0[ddii][pb0u]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "which specifies that a corresponds to a, b was deleted (has no surface correspondent), voiceless p surfaces as voiced b, etc. The harmony of y might depend on this alignment as well as on x and z (just as an outfit might fit worse when worn backwards). Because we are distinguishing underlying and surface material by using disjoint alphabets \u03a3 = {a, b, . . .} and \u2206 = {[, ], a, b, . . .}, 2 it is easy to extract the underlying and surface forms (x and z) from y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "Although the above example assumes that x and z are simple strings of phonemes and brackets, nothing herein depends on that assumption. Autosegmental representations too can be encoded as strings (Eisner, 1997) .", "cite_spans": [ { "start": 196, "end": 210, "text": "(Eisner, 1997)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "In general, an OT grammar consists of 4 components: a constraint ranking, a harmony ordering, and generating and pronouncing functions. 
The constraint ranking is the language-specific part of the grammar; the other components are often supposed to be universal across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "The generating function GEN maps any x \u2208 \u03a3 * to the (nonempty) set of candidates y whose underlying form is x. In other words, GEN just inserts Figure 1 : This paper's view of OT production. In the second line, Ci inserts 's into candidates; then the candidates with suboptimal starrings are pruned away, and finally the 's are removed from the survivors. arbitrary substrings from \u2206 * amongst the characters of x, subject to any restrictions on what constitutes a legitimate candidate y. 3 (Legitimacy might for instance demand that y's surface material z have matched, non-nested left and right brackets, or even that z be similar to x in terms of edit distance.)", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 152, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "x underlying form x\u2208\u03a3 * GEN \u2212\u2192 Y 0 (x) C 1 \u2212\u2192 Y 1 (x) C 2 \u2212\u2192 Y 2 (x) \u2022 \u2022 \u2022 Cn \u2212\u2192 Y n (x) sets of candidates y\u2208(\u03a3\u222a\u2206) * PRON \u2212\u2192 Z(x) set of surface forms z\u2208\u2206 * where Y i\u22121 (x) C i \u2212\u2192 Y i (x) really means Y i\u22121 (x) y\u2208(\u03a3\u222a\u2206) * C i \u2212\u2192\u0232 i (x) prune \u2212\u2192 optimal subset of\u0232 i (x) \u0233\u2208(\u03a3\u222a\u2206\u222a{ }) * delete \u2212\u2192 Y i (x) y\u2208(\u03a3\u222a\u2206) *", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "A constraint ranking is simply a sequence C 1 , C 2 , . . . C n of constraints. Let us take each C i to be a function that scores candidates y by annotating them with violation marks . For example, a NODELETE constraint would map y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "= aab0c0[ddii][pb0u] to\u0233 =NODELETE(y) = aab 0c 0[ddii][pb0u]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": ", inserting a after each underlying phoneme that does not correspond to any surface phoneme. This unconventional formulation is needed for new approaches that care about the exact location of the 's. In traditional OT only the number of 's is important, although the locations are sometimes shown for readability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "Finally, OT requires a harmony ordering on scored candidates\u0233 \u2208 (\u03a3 \u222a \u2206 \u222a { }) * . In traditional OT,\u0233 is most harmonic when it contains the fewest 's. For example, among candidates scored by NODELETE, the most harmonic ones are the ones with the fewest deletions; many candidates may tie for this honor. \u00a76 considers other harmony orderings, a possibility recognized by Prince and Smolensky (1993) ( corresponds to their H-EVAL). 
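To make the scoring step concrete, here is a small Python sketch. It is only an illustration under our own simplifying assumptions (candidates represented as lists of (underlying, surface) pairs with None for "no correspondent", surface brackets omitted, invented function names), not the paper's string encoding of candidates.

STAR = "*"

def nodelete(candidate):
    """Score a candidate: add a violation mark after every underlying
    segment that lacks a surface correspondent (cf. NODELETE above)."""
    scored = []
    for under, surf in candidate:
        scored.append((under, surf))
        if under is not None and surf is None:
            scored.append(STAR)
    return scored

def stars(scored):
    return sum(1 for item in scored if item == STAR)

def more_harmonic(y1, y2):
    """Traditional (counting) harmony ordering: fewer marks is better."""
    return stars(y1) < stars(y2)

# Underlying abdip surfacing as adibu (the b is deleted: one violation),
# versus a competitor that keeps the b (no NODELETE violations):
y_deleting = [("a","a"), ("b",None), ("d","d"), ("i","i"), ("p","b"), (None,"u")]
y_faithful = [("a","a"), ("b","b"), ("d","d"), ("i","i"), ("p","b"), (None,"u")]
print(stars(nodelete(y_deleting)), stars(nodelete(y_faithful)))   # 1 0
print(more_harmonic(nodelete(y_faithful), nodelete(y_deleting)))  # True

Under this counting ordering the competitor that keeps the b beats the deleting candidate, exactly as in the NODELETE example above.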
In general may be a partial order: two competing candidates may be equally harmonic or incomparable (in which case both can survive), and candidates with different underlying forms never compete at all.", "cite_spans": [ { "start": 370, "end": 397, "text": "Prince and Smolensky (1993)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "Production under such a grammar is a matter of successive filtering by the constraints C 1 , . . . C n . Given an underlying form x, let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "Y 0 (x) = GEN(x) (1) Y i (x) = {y \u2208 Y i\u22121 (x) : (2) ( y \u2208 Y i\u22121 (x)) C i (y ) C i (y)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "The set of optimal candidates is now Y n (x). Extracting z from each y \u2208 Y n (x) gives the set Z(x) or PRODUCE(x) of acceptable surface forms:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "Z(x) = {PRON(y) : y \u2208 Y n (x)} \u2286 \u2206 * (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "PRON denotes the simple pronunciation function that extracts z from y. It is the counterpart to GEN: just as GEN fleshes out x \u2208 \u03a3 * into y by inserting symbols of \u2206, PRON slims y down to z \u2208 \u2206 * by removing symbols of \u03a3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "Notice that Y n \u2286 Y n\u22121 \u2286 . . . \u2286 Y 0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "The only candidates y \u2208 Y i\u22121 that survive filtering by C i are the ones that C i considers most harmonic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "The above notation is general enough to handle some of the important variations of OT, such as Paradigm Uniformity and Sympathy Theory. In particular, one can define GEN so that each candidate y encodes not just an alignment between x and z, but an alignment among x, z, and some other strings that are neither underlying nor surface. These other strings may represent the surface forms for other members of the same morphological paradigm, or intermediate throwaway candidates to which z is sympathetic. Production still optimizes y, which means that it simultaneously optimizes z and the other strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A General Presentation of OT", "sec_num": "3" }, { "text": "This section assumes OT's traditional harmony ordering, in which the candidates that survive filtering by C i are the ones into which C i inserts fewest 's.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "Much computational work on OT has been conducted within a finite-state framework (Ellison, 1994) , in keeping with a tradition of finite-state phonology (Johnson, 1972; Kaplan and Kay, 1994) . 4 Finite-state OT is a restriction of the formalism discussed above. It specifically assumes that GEN, C 1 , . . . 
C n , and PRON are all regular relations, meaning that they can be described by finite-state transducers. GEN is a nondeterministic transducer that maps each x to multiple candidates y. The other transducers map each y to a single\u0233 or z.", "cite_spans": [ { "start": 81, "end": 96, "text": "(Ellison, 1994)", "ref_id": "BIBREF5" }, { "start": 153, "end": 168, "text": "(Johnson, 1972;", "ref_id": "BIBREF9" }, { "start": 169, "end": 190, "text": "Kaplan and Kay, 1994)", "ref_id": "BIBREF10" }, { "start": 193, "end": 194, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "These finite-state assumptions were proposed (in a different and slightly weaker form) by Ellison (1994) . Their empirical adequacy has been defended by Eisner (1997) .", "cite_spans": [ { "start": 90, "end": 104, "text": "Ellison (1994)", "ref_id": "BIBREF5" }, { "start": 153, "end": 166, "text": "Eisner (1997)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "In addition to having the right kind of power linguistically, regular relations are closed under various relevant operations and allow (efficient) parallel processing of regular sets of strings. Ellison (1994) exploited such properties to give a production algorithm for finite-state OT. Given x and a finite-state OT grammar, he used finite-state operations to construct the set Y n (x) of optimal candidates, represented as a finite-state automaton.", "cite_spans": [ { "start": 195, "end": 209, "text": "Ellison (1994)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "Ellison's construction demonstrates that Y n is always a regular set. Since PRON is regular, it follows that PRODUCE(x) = Z(x) is also a regular set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "We now show that COMPREHEND(z), in constrast, need not be a regular set. Let \u03a3 = {a, b}, \u2206 = {[, ], a, b, . . .} and suppose that GEN allows candidates like the ones in \u00a73, in which parts of the string may be bracketed between [ and ] . The crucial grammar consists of two finite-state constraints. C 2 penalizes a's that fall between brackets (by inserting next to each one) and also penalizes b's that fall outside of brackets. It is dominated by C 1 , which penalizes brackets that do not fall at either edge of the string. Note that this grammar is completely permissive as to the number and location of surface characters other than brackets.", "cite_spans": [ { "start": 227, "end": 234, "text": "[ and ]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "If x contains more a's than b's, then PRODUCE(x) is the set\u2206 * of all unbracketed surface forms, wher\u00ea \u2206 is \u2206 minus the bracket symbols. 
If x contains fewer a's than b's, then PRODUCE", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "(x) = [\u2206 * ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "And if a's and b's appear equally often in x, then PRODUCE(x) is the union of the two sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "Thus, while the x-to-z mapping is not a regular relation under this grammar, at least PRODUCE(x) is a regular set for each x-just as finite-state OT constraints, notably Koskenniemi's (1983) two-level model, which like OT used finite-state constraints on candidates y that encoded an alignment between underlying x and surface z.", "cite_spans": [ { "start": 170, "end": 190, "text": "Koskenniemi's (1983)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "guarantees. But for any unbracketed z \u2208\u2206 * , such as z = abc, COMPREHEND(z) is not regular: it is the set of underlying strings with # of a's \u2265 # of b's.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "This result seems to eliminate any hope of handling OT comprehension in a finite-state framework. It is interesting to note that both OT and current speech recognition systems construct finitestate models of production and define comprehension as the inverse of production. Speech recognizers do correctly implement comprehension via finite-state optimization (Pereira and Riley, 1997) . But this is impossible in OT because OT has a more complicated production model. (In speech recognizers, the most probable phonetic or phonological surface form is not presumed to have suppressed its competitors.)", "cite_spans": [ { "start": 360, "end": 385, "text": "(Pereira and Riley, 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "One might try to salvage the situation by barring constraints like C 1 or C 2 from the theory as linguistically implausible. Unfortunately this is unlikely to succeed. Primitive OT (Eisner, 1997) already restricts OT to something like a bare minimum of constraints, allowing just two simple constraint families that are widely used by practitioners of OT. Yet even these primitive constraints retain enough power to simulate any finite-state constraint. In any case, C 1 and C 2 themselves are fairly similar to \"domain\" constraints used to describe tone systems (Cole and Kisseberth, 1994) . While C 2 is somewhat odd in that it penalizes two distinct configurations at once, one would obtain the same effect by combining three separately plausible constraints: C 2 requires a's between brackets (i.e., in a tone domain) to receive surface high tones, C 3 requires b's outside brackets to receive surface high tones, and C 4 penalizes all surface high tones. 5 Another obvious if unsatisfying hack would impose heuristic limits on the length of x, for example by allowing the comprehension system to return the approximation COMPREHEND(z) \u2229 {x : |x| \u2264 2 \u2022 |z|}. 
This set is finite and hence regular, so per-haps it can be produced by some finite-state method, although the automaton to describe the set might be large in some cases.", "cite_spans": [ { "start": 181, "end": 195, "text": "(Eisner, 1997)", "ref_id": "BIBREF3" }, { "start": 563, "end": 590, "text": "(Cole and Kisseberth, 1994)", "ref_id": "BIBREF2" }, { "start": 960, "end": 961, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "Recent efforts to force OT into a fully finite-state mold are more promising. As we will see, they identify the problem as the harmony ordering , rather than the space of constraints or the potential infinitude of the answer set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comprehension in Finite-State OT", "sec_num": "4" }, { "text": "Since COMPREHEND(z) need not be a regular set in traditional OT, a corollary is that COMPREHEND and its inverse PRODUCE are not regular relations. That much was previously shown by Markus Hiller and Paul Smolensky (Frank and Satta, 1998) , using similar examples.", "cite_spans": [ { "start": 214, "end": 237, "text": "(Frank and Satta, 1998)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Regular-Relation Comprehension", "sec_num": "5" }, { "text": "However, at least some OT grammars ought to describe regular relations. It has long been hypothesized that all human phonologies are regular relations, at least if one omits reduplication, and this is necessarily true of phonologies that were successfully described with pre-OT formalisms (Johnson, 1972; Koskenniemi, 1983) .", "cite_spans": [ { "start": 289, "end": 304, "text": "(Johnson, 1972;", "ref_id": "BIBREF9" }, { "start": 305, "end": 323, "text": "Koskenniemi, 1983)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Regular-Relation Comprehension", "sec_num": "5" }, { "text": "Regular relations are important for us because they are computationally tractable. Any regular relation can be implemented as a finite-state transducer T , which can be inverted and used for comprehension as well as production. PRODUCE(x) = T (x) = range(x \u2022 T ), and COMPREHEND(z) = T \u22121 (z) = domain(T \u2022 z).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regular-Relation Comprehension", "sec_num": "5" }, { "text": "We are therefore interested in compiling OT grammars into finite-state transducers-by hook or by crook. \u00a76 discusses how; but first let us see how such compilation is useful in realistic situations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regular-Relation Comprehension", "sec_num": "5" }, { "text": "Any practical comprehension strategy must recognize that the hearer does not really perceive the entire surface form. After all, the surface form contains phonetically invisible material (e.g., syllable and foot boundaries) and makes phonetically imperceptible distinctions (e.g., two copies of a tone versus one doubly linked copy). How to comprehend in this case?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regular-Relation Comprehension", "sec_num": "5" }, { "text": "The solution is to modify PRON to \"go all the way\"-to delete not only underlying material but also phonetically invisible material. Indeed, PRON can also be made to perform any purely phonetic processing. 
Each output z of PRODUCE is now not a phonological surface form but a string of phonemes or spectrogram segments. So long as PRON is a regular relation (perhaps a nondeterministic or probabilistic one that takes phonetic variation into account), we will still be able to construct T and use it for production and comprehension as above. 6 How about the lexicon? When the phonology can be represented as a transducer, COMPREHEND(z) is a regular set. It contains all inputs x that could have produced output z. In practice, many of these inputs are not in the lexicon, nor are they possible novel words. One should restrict to inputs that appear in the lexicon (also a regular set) by intersecting COMPREHEND(z) with the lexicon. For novel words this intersection will be empty; but one can find the possible underlying forms of the novel word, for learning's sake, by intersecting COMPREHEND(z) with a larger (infinite) regular set representing all forms satisfying the language's lexical constraints.", "cite_spans": [ { "start": 542, "end": 543, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Regular-Relation Comprehension", "sec_num": "5" }, { "text": "There is an alternative treatment of the lexicon. GEN can be extended \"backwards\" to incorporate morphology just as PRON was extended \"forwards\" to incorporate phonetics. On this view, the input x is a sequence of abstract morphemes, and GEN performs morphological preprocessing to turn x into possible candidates y. GEN looks up each abstract morpheme's phonological string \u2208 \u03a3 * from the lexicon, 7 then combines these phonological strings by concatenation or template merger, then nondeterministically inserts surface material from \u2206 * . Such a GEN can plausibly be built up (by composition) as a regular relation from abstract morpheme sequences to phonological candidates. This regularity, as for PRON, is all that is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regular-Relation Comprehension", "sec_num": "5" }, { "text": "Representing a phonology as a transducer T has additional virtues. T can be applied efficiently to any input string x, whereas Ellison (1994) or Eisner (1997) requires a fresh automaton construction for each x. A nice trick is to build T without PRON and apply it to all conceivable x's in parallel, yielding the complete set of all optimal candidates Y n (\u03a3 * ) = x\u2208\u03a3 * Y n (x). If Y and Y denote the sets of optimal candidates under two grammars, then (Y \u2229 \u00acY ) \u222a (Y \u2229 \u00acY ) yields the candidates that are optimal under only one grammar. Applying GEN \u22121 or PRON to this set finds the regular set of underlying or surface forms that the two grammars would treat differently; one can then look for empirical cases in this set, in order to distinguish between the two grammars.", "cite_spans": [ { "start": 127, "end": 141, "text": "Ellison (1994)", "ref_id": "BIBREF5" }, { "start": 145, "end": 158, "text": "Eisner (1997)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Regular-Relation Comprehension", "sec_num": "5" }, { "text": "Why are OT phonologies not always regular relations? The trouble is that inputs may be arbitrarily long, and so may accrue arbitrarily large numbers of violations. Traditional OT ( \u00a74) is supposed to distinguish all such numbers. Consider syllabification in English, which prefers to syllabify the long input bi bambam . . . bam (with k + 1 codas). 
NOCODA must therefore distinguish annotated candidates\u0233 with k 's (which are optimal) from those with k + 1 's (which are not). It requires a (\u2265 k + 2)-state automaton to make this distinction by looking only at the 's in\u0233. And if k can be arbitrarily large, then no finite-state automaton will handle all cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "Thus, constraints like NOCODA do not allow an upper bound on k for all x \u2208 \u03a3 * . Of course, the minimal number of violations k of a constraint is fixed given the underlying form x, which is useful in production. 8 But comprehension is less fortunate: we cannot bound k given only the surface form z. In the grammar of \u00a74, COMPREHEND(abc) included underlying forms whose optimal candidates had arbitrarily large numbers of violations k. Now, in most cases, the effect of an OT grammar can be achieved without actually counting anything. (This is to be expected since rewrite-rule 8 Ellison (1994) was able to construct PRODUCE(x) from x. One can even build a transducer for PRODUCE that is correct on all inputs that can achieve \u2264 K violations and returns \u2205 on other inputs (signalling that the transducer needs to be recompiled with increased K). Simply use the construction of (Frank and Satta, 1998; Karttunen, 1998) , composed with a hard constraint that the answer must have \u2264 K violations. grammars were previously written for the same phonologies, and they did not use counting!) This is possible despite the above arguments because for some grammars, the distinction between optimal and suboptimal\u0233 can be made by looking at the non-symbols in\u0233 rather than trying to count the 's. In our NOCODA example, a surface substring such as . . . ib ][a. . . might signal that\u0233 is suboptimal because it contains an \"unnecessary\" coda. Of course, the validity of this conclusion depends on the grammar and specifically the constraints C 1 , . . . C i\u22121 ranked above NOCODA, since whether that coda is really unnecessary depends on whether\u0232 i\u22121 also contains the competing candidate . . . i][ba . . . with fewer codas.", "cite_spans": [ { "start": 579, "end": 595, "text": "8 Ellison (1994)", "ref_id": null }, { "start": 878, "end": 901, "text": "(Frank and Satta, 1998;", "ref_id": "BIBREF6" }, { "start": 902, "end": 918, "text": "Karttunen, 1998)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "But as we have seen, some OT grammars do have effects that overstep the finite-state boundary ( \u00a74). Recent efforts to treat OT with transducers have therefore tried to remove counting from the formalism. We now unify such efforts by showing that they all modify the harmony ordering .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "\u00a74 described finite-state OT grammars as ones where GEN, PRON, and the constraints are regular relations. 
We claim that if the harmony ordering is also a regular relation on strings of (\u03a3\u222a\u2206\u222a{ }) * , then the entire grammar (PRODUCE) is also regular.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "We require harmony orderings to be compatible with GEN: an ordering must treat\u0233 ,\u0233 as incomparable (neither is the other) if they were produced from different underlying forms. 9 To make the notation readable let us denote the relation by the letter H. Thus, a transducer for H accepts the pair (\u0233 ,\u0233) if\u0233 \u0233. The construction is inductive. Y 0 = GEN is regular by assumption. If Y i\u22121 is regular, then so is Y i since (as we will show)", "cite_spans": [ { "start": 177, "end": 178, "text": "9", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "Y i = (\u0232 i \u2022 \u00acrange(\u0232 i \u2022 H)) \u2022 D (4) where\u0232 i def = Y i\u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "\u2022 C i and maps x to the set of starred candidates that C i will prune; \u00ac denotes the complement of a regular language; and D is a transducer that removes all 's. Therefore PRODUCE = Y n \u2022 PRON is regular as claimed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "It remains to derive (4). Equation 2implies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "C i (Y i (x)) = {\u0233 \u2208\u0232 i (x) : ( \u0233 \u2208\u0232 i (x))\u0233 \u0233} (5) =\u0232 i (x) \u2212 {\u0233 : (\u2203\u0233 \u2208\u0232 i (x))\u0233 \u0233} (6) =\u0232 i (x) \u2212 H(\u0232 i (x)) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "One can read H(\u0232 i (x)) as \"starred candidates that are worse than other starred candidates,\" i.e., suboptimal. The set difference (7) leaves only the optimal candidates. 
We now see", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(x,\u0233) \u2208 Y i \u2022 C i \u21d4\u0233 \u2208 C i (Y i (x))", "eq_num": "(8)" } ], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u21d4\u0233 \u2208\u0232 i (x),\u0233 \u2208 H(\u0232 i (x)) [by (7)]", "eq_num": "(9)" } ], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u21d4\u0233 \u2208\u0232 i (x), ( z)\u0233 \u2208 H(\u0232 i (z)) [see below](10) \u21d4 (x,\u0233) \u2208\u0232 i ,\u0233 \u2208 range(\u0232 i \u2022 H)", "eq_num": "(11)" } ], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u21d4 (x,\u0233) \u2208\u0232 i \u2022 \u00acrange(\u0232 i \u2022 H)", "eq_num": "(12)" } ], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "therefore Y i \u2022 C i =\u0232 i \u2022 \u00acrange(\u0232 i \u2022 H)", "eq_num": "(13)" } ], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "and composing both sides with D yields (4). To justify (9) \u21d4 (10) we must show when\u0233 \u2208\u0232 i (x) that y \u2208 H(\u0232 i (x)) \u21d4 (\u2203z)\u0233 \u2208 H(\u0232 i (z)). For the \u21d2 direction, just take z = x. For \u21d0,\u0233 \u2208 H(\u0232 i (z)) means that (\u2203\u0233 \u2208\u0232 i (z))\u0233 \u0233; but then x = z (giving\u0233 \u2208 H(\u0232 i (x))), since if not, our compatibility requirement on H would have made\u0233 \u2208\u0232 i (z) incomparable with\u0233 \u2208\u0232 i (x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "Extending the pretty notation of (Karttunen, 1998) , we may use (4) to define a left-associative generalized optimality operator oo H :", "cite_spans": [ { "start": 33, "end": 50, "text": "(Karttunen, 1998)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "Y oo H C def = (Y \u2022C \u2022\u00acrange(Y \u2022C \u2022H))\u2022D (14)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "Then for any regular OT grammar, PRODUCE =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "GEN oo H C 1 oo H C 2 \u2022 \u2022 \u2022 oo H C n \u2022 PRON", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "and can be inverted to get COMPREHEND. 
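To see this pipeline in miniature, the following Python sketch is a brute-force illustration over finite candidate sets (ours, not the finite-state construction of eq. (4)): it applies the pruning of equations (5)-(7) directly, takes the harmony relation H as an ordinary predicate, and inverts the resulting finite map to comprehend. The toy GEN, the single constraint, and the lowercase/uppercase stand-in for the disjoint underlying and surface alphabets are illustrative assumptions.

from itertools import product

def gen(x):
    """Toy GEN: each underlying letter either surfaces faithfully (the letter
    followed by its uppercase copy) or is deleted (the letter alone)."""
    options = [(c + c.upper(), c) for c in x]
    return {"".join(choice) for choice in product(*options)}

def no_delete(y):
    """Toy constraint: insert '*' after each underlying letter whose
    surface correspondent is missing."""
    out = []
    for i, ch in enumerate(y):
        out.append(ch)
        nxt = y[i + 1] if i + 1 < len(y) else ""
        if ch.islower() and nxt != ch.upper():
            out.append("*")
    return "".join(out)

def counting_harmony(better, worse):
    """Traditional H: strictly fewer violation marks is more harmonic."""
    return better.count("*") < worse.count("*")

def pron(y):
    """PRON: keep only the surface (uppercase) material."""
    return "".join(ch for ch in y if ch.isupper())

def produce(x, constraints, harmony):
    candidates = gen(x)                                      # Y_0(x)
    for c in constraints:
        scored = {c(y) for y in candidates}                  # Ybar_i(x)
        best = {y for y in scored                            # eq. (7): discard any
                if not any(harmony(y2, y) for y2 in scored)} # y beaten by some y2
        candidates = {y.replace("*", "") for y in best}      # the D step
    return {pron(y) for y in candidates}                     # Z(x)

def comprehend(z, possible_inputs, constraints, harmony):
    """Invert production over a finite set of possible underlying forms."""
    return {x for x in possible_inputs
            if z in produce(x, constraints, harmony)}

print(produce("ab", [no_delete], counting_harmony))            # {'AB'}
print(comprehend("AB", {"a", "b", "ab", "ba", "aab"},
                 [no_delete], counting_harmony))               # {'ab'}

Because produce takes the harmony predicate as a parameter, substituting a different H requires no other changes.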
More generally, different constraints can usefully be applied with different H's (Eisner, 2000) .", "cite_spans": [ { "start": 120, "end": 134, "text": "(Eisner, 2000)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "The algebraic construction above is inspired by a version that Gerdemann and van Noord (2000) give for a particular variant of OT. Their regular expressions can be used to implement it, simply replacing their add_violation by our H.", "cite_spans": [ { "start": 63, "end": 93, "text": "Gerdemann and van Noord (2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "Typically, H ignores surface characters when comparing starred candidates. So H can be written as elim(\u2206) \u2022 G \u2022 elim(\u2206) \u22121 where elim(\u2206) is a transducer that removes all characters of \u2206. To satisfy the compatibility requirement on H, G should be a subset of the relation (\u03a3| |( : )|( : )) * . 10 We now summarize the main proposals from the literature (see \u00a71), propose operator names, and cast them in the general framework.", "cite_spans": [ { "start": 293, "end": 295, "text": "10", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "\u2022 Y o C: Inviolable constraint (Koskenniemi, 1983; Bird, 1995) , implemented by composition.", "cite_spans": [ { "start": 31, "end": 50, "text": "(Koskenniemi, 1983;", "ref_id": "BIBREF12" }, { "start": 51, "end": 62, "text": "Bird, 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "\u2022 Y o+ C: Counting constraint (Prince and Smolensky, 1993) : more violations is more disharmonic. No finite-state implementation possible.", "cite_spans": [ { "start": 30, "end": 58, "text": "(Prince and Smolensky, 1993)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "\u2022 Y oo C: Binary approximation (Karttunen, 1998; Frank and Satta, 1998) . All candidates with any violations are equally disharmonic. Implemented by G = (\u03a3 * ( : )\u03a3 * ) + , which relates underlying forms without violations to the same forms with violations.", "cite_spans": [ { "start": 31, "end": 48, "text": "(Karttunen, 1998;", "ref_id": "BIBREF11" }, { "start": 49, "end": 71, "text": "Frank and Satta, 1998)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "\u2022 Y oo 3 C: 3-bounded approximation (Karttunen, 1998; Frank and Satta, 1998) . Like o+ , but all candidates with \u2265 3 violations are equally disharmonic. G is most easily described with a transducer that keeps count of the input and output 's so far, on a scale of 0, 1, 2, \u2265 3. Final states are those whose output count exceeds their input count on this scale.", "cite_spans": [ { "start": 36, "end": 53, "text": "(Karttunen, 1998;", "ref_id": "BIBREF11" }, { "start": 54, "end": 76, "text": "Frank and Satta, 1998)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "\u2022 Y o\u2282 C: Matching or subset approximation (Gerdemann and van Noord, 2000) . A candidate is more disharmonic than another if it has stars in all the same locations and some more besides. 
11 Here G = ((\u03a3| ) * ( : )(\u03a3| ) * ) + .", "cite_spans": [ { "start": 43, "end": 74, "text": "(Gerdemann and van Noord, 2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "\u2022 Y o> C: Left-to-right directional evaluation (Eisner, 2000) . A candidate is more disharmonic than another if in the leftmost position where they differ (ignoring surface characters), it has a . This revises OT's \"do only when necessary\" mantra to \"do only when necessary and then as late as possible\" (even if delaying 's means suffering more of them later). Here G = (\u03a3| ) * (( : )|((\u03a3 : )(\u03a3| ) * )). Unlike the other proposals, here two forms can both be optimal only if they have exactly the same pattern of violations with respect to their underlying material.", "cite_spans": [ { "start": 47, "end": 61, "text": "(Eisner, 2000)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "\u2022 Y .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "The novelty of the matching and directional proposals is their attention to where the violations fall. Eisner's directional proposal (o>, gets a different result, as if NOCODA were split into 4 constraints evaluating the syllables separately. More accurately, it is as if NOCODA were split into one constraint per underlying letter, counting the number of 's right after that letter.", "cite_spans": [ { "start": 154, "end": 168, "text": "(Eisner, 2000)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "one defended on linguistic as well as computational grounds. He argues that violation counting (o+) is a bug in OT rather than a feature worth approximating, since it predicts unattested phenomena such as \"majority assimilation\" (Bakovi\u0107, 1999; Lombardi, 1999) . Conversely, he argues that comparing violations directionally is not a hack but a desirable feature, since it naturally predicts \"iterative phenomena\" whose description in traditional OT (via Generalized Alignment) is awkward from both a linguistic and a computational point of view. Fig. 2 contrasts the traditional and directional harmony orderings. Eisner (2000) proved that o> was a regular operator for directional H, by making use of a rather different insight, but that machine-level construction was highly technical. The new algebraic construction is simple and can be implemented with a few regular expressions, as for any other H.", "cite_spans": [ { "start": 229, "end": 244, "text": "(Bakovi\u0107, 1999;", "ref_id": "BIBREF0" }, { "start": 245, "end": 260, "text": "Lombardi, 1999)", "ref_id": "BIBREF13" }, { "start": 615, "end": 628, "text": "Eisner (2000)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 547, "end": 553, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Theorem on Compiling OT", "sec_num": "6" }, { "text": "See the itemized points in \u00a71 for a detailed summary. In general, this paper has laid out a clear, general framework for finite-state OT systems, and used it to obtain positive and negative results about the understudied problem of comprehension. 
Perhaps these results will have some bearing on the development of realistic learning algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The paper has also established sufficient conditions for a finite-state OT grammar to compile into a finite-state transducer. It should be easy to imagine new variants of OT that meet these conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "An alternative would be to distinguish them by odd and even positions in the string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It is never really necessary for GEN to enforce such restrictions, since they can equally well be enforced by the top-ranked constraint C1 (see below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The tradition already included (inviolable) phonological", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Since the surface tones indicate the total number of a's and b's in the underlying form, COMPREHEND(z) is actually a finite set in this version, hence regular. But the non-regularity argument does go through if the tonal information in z is not available to the comprehension system (as when reading text without diacritics); we cover this case in \u00a75. (One can assume that some lower-ranked constraints require a special suffix before ], so that the bracket information need not be directly available to the comprehension system either.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Pereira and Riley (1997) build a speech recognizer by composing a probabilistic finite-state language model, a finite-state pronouncing dictionary, and a probabilistic finite-state acoustic model. These three components correspond precisely to the input to GEN, the traditional OT grammar, and PRON, so we are simply suggesting the same thing in different terminology.7 Nondeterministically in the case of phonologically conditioned allomorphs: INDEFINITE APPLE \u2192 {\u039baepl, aenaepl} \u2286 \u03a3 * . This yields competing candidates that differ even in their underlying phonological material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For example, the harmony ordering of traditional OT is {(\u0233 ,\u0233) :\u0233 has the same underlying form as, but contains fewer 's than,\u0233}. If we were allowed to drop the sameunderlying-form condition then the ordering would become regular, and then our claim would falsely imply that all traditional finite-state OT grammars were regular relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This transducer regexp says to map any symbol in \u03a3 \u222a { } to itself, or insert or delete -and then repeat.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Many candidates are incomparable under this ordering, so Gerdemann and van Noord also showed how to weaken the notation of \"same location\" in order to approximate o+ better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Assimilation to the unmarked. 
Rutgers Optimality Archive ROA-340", "authors": [ { "first": "Eric", "middle": [], "last": "Bakovi\u0107", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Bakovi\u0107. 1999. Assimilation to the unmarked. Rut- gers Optimality Archive ROA-340., August. Steven Bird. 1995. Computational Phonology: A Constraint-Based Approach. Cambridge.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Some aspects of optimality in natural language interpretation", "authors": [ { "first": "Reinhard", "middle": [], "last": "Blutner", "suffix": "" } ], "year": 1999, "venue": "Papers on Optimality Theoretic Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Blutner. 1999. Some aspects of optimality in natural language interpretation. In Papers on Optimal- ity Theoretic Semantics. Utrecht.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An optimal domains theory of harmony", "authors": [ { "first": "J", "middle": [], "last": "Cole", "suffix": "" }, { "first": "C", "middle": [], "last": "Kisseberth", "suffix": "" } ], "year": 1994, "venue": "Studies in the Linguistic Sciences", "volume": "24", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Cole and C. Kisseberth. 1994. An optimal domains theory of harmony. Studies in the Linguistic Sciences, 24(2).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Efficient generation in primitive Optimality Theory", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 1997, "venue": "Proc. of ACL/EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 1997. Efficient generation in primitive Op- timality Theory. In Proc. of ACL/EACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Directional constraint evaluation in Optimality Theory", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2000, "venue": "Proc. of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 2000. Directional constraint evaluation in Optimality Theory. In Proc. of COLING.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On reversing the generation process in Optimality Theory", "authors": [ { "first": "T", "middle": [ "Mark" ], "last": "Ellison", "suffix": "" } ], "year": 1994, "venue": "Proc. of COLING J. Eric Fosler", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Mark Ellison. 1994. Phonological derivation in Opti- mality Theory. In Proc. of COLING J. Eric Fosler. 1996. On reversing the generation process in Optimality Theory. Proc. of ACL Student Session.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Optimality Theory and the generative complexity of constraint violability", "authors": [ { "first": "R", "middle": [], "last": "Frank", "suffix": "" }, { "first": "G", "middle": [], "last": "Satta", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "2", "pages": "307--315", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Frank and G. Satta. 1998. Optimality Theory and the generative complexity of constraint violability. 
Com- putational Linguistics, 24(2):307-315.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Approximation and exactness in finite-state Optimality Theory", "authors": [ { "first": "D", "middle": [], "last": "Gerdemann", "suffix": "" }, { "first": "G", "middle": [], "last": "Van Noord", "suffix": "" } ], "year": 2000, "venue": "Proc. of ACL SIGPHON Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Gerdemann and G. van Noord. 2000. Approxima- tion and exactness in finite-state Optimality Theory. In Proc. of ACL SIGPHON Workshop.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Formal and empirical arguments concerning phonological acquisition", "authors": [ { "first": "Mark", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Reiss", "suffix": "" } ], "year": 1998, "venue": "Linguistic Inquiry", "volume": "29", "issue": "", "pages": "656--683", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Hale and Charles Reiss. 1998. Formal and empir- ical arguments concerning phonological acquisition. Linguistic Inquiry, 29:656-683.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Formal Aspects of Phonological Description", "authors": [ { "first": "C", "middle": [], "last": "", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1972, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Douglas Johnson. 1972. Formal Aspects of Phonolog- ical Description. Mouton.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Regular models of phonological rule systems", "authors": [ { "first": "R", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "M", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1994, "venue": "Comp. Ling", "volume": "20", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Kaplan and M. Kay. 1994. Regular models of phono- logical rule systems. Comp. Ling., 20(3).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The proper treatment of optimality in computational phonology", "authors": [ { "first": "L", "middle": [], "last": "Karttunen", "suffix": "" } ], "year": 1998, "venue": "Proc. of FSMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Karttunen. 1998. The proper treatment of optimality in computational phonology. In Proc. of FSMNLP.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Two-level morphology: A general computational model for word-form recognition and production", "authors": [ { "first": "Kimmo", "middle": [], "last": "Koskenniemi", "suffix": "" } ], "year": 1983, "venue": "", "volume": "11", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimmo Koskenniemi. 1983. Two-level morphology: A general computational model for word-form recogni- tion and production. Publication 11, Dept. of General Linguistics, University of Helsinki.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Positional faithfulness and voicing assimilation in Optimality Theory. Natural Language and Linguistic Theory", "authors": [ { "first": "Linda", "middle": [], "last": "Lombardi", "suffix": "" } ], "year": 1999, "venue": "", "volume": "17", "issue": "", "pages": "267--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda Lombardi. 1999. 
Positional faithfulness and voic- ing assimilation in Optimality Theory. Natural Lan- guage and Linguistic Theory, 17:267-302.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Speech recognition by composition of weighted finite automata", "authors": [ { "first": "C", "middle": [ "N" ], "last": "Fernando", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "", "middle": [], "last": "Riley", "suffix": "" } ], "year": 1997, "venue": "Finite-State Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando C. N. Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite au- tomata. In E. Roche and Y. Schabes, eds., Finite-State Language Processing. MIT Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Optimality Theory: Constraint interaction in generative grammar. Ms., Rutgers and U", "authors": [ { "first": "A", "middle": [], "last": "Prince", "suffix": "" }, { "first": "P", "middle": [], "last": "Smolensky", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Prince and P. Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Ms., Rutgers and U. of Colorado (Boulder).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "On the comprehension/production dilemma in child language", "authors": [ { "first": "Paul", "middle": [], "last": "Smolensky", "suffix": "" } ], "year": 1996, "venue": "Linguistic Inquiry", "volume": "27", "issue": "", "pages": "720--731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Smolensky. 1996. On the comprehen- sion/production dilemma in child language. Linguistic Inquiry, 27:720-731.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "k copies as [bi][bam][bam] . . . [bam] (with k codas) rather than [bib][am][bam] . . . [bam]" }, "TABREF0": { "type_str": "table", "content": "
(a tableau comparing counting and directional evaluation for x = bantodibo. Column (a) lists the candidates [ban][to][di][bo], [ban][ton][di][bo], [ban][to][dim][bon], and [ban][ton][dim][bon]; column (b) marks where each candidate violates NOCODA; column (c) evaluates the candidates under C1 and NOCODA by traditional counting; column (d) evaluates them directionally, tallying NOCODA's violations syllable by syllable.)
Figure 2: Counting vs. directionality. [Adapted from
", "text": "C 1 \u03c3 1 \u03c3 2 \u03c3 3 \u03c3 4", "num": null, "html": null } } } }