{ "paper_id": "P95-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:33:53.368976Z" }, "title": "An Efficient Generation Algorithm for Lexicalist MT", "authors": [ { "first": "Victor", "middle": [], "last": "Poznafiski", "suffix": "", "affiliation": { "laboratory": "", "institution": "SHARP Laboratories of Europe Ltd. Oxford Science Park", "location": { "postCode": "OX4 4GA", "settlement": "Oxford", "country": "United Kingdom" } }, "email": "" }, { "first": "John", "middle": [ "L" ], "last": "Beaven", "suffix": "", "affiliation": { "laboratory": "", "institution": "SHARP Laboratories of Europe Ltd. Oxford Science Park", "location": { "postCode": "OX4 4GA", "settlement": "Oxford", "country": "United Kingdom" } }, "email": "" }, { "first": "Pete", "middle": [], "last": "Whitelock", "suffix": "", "affiliation": { "laboratory": "", "institution": "SHARP Laboratories of Europe Ltd. Oxford Science Park", "location": { "postCode": "OX4 4GA", "settlement": "Oxford", "country": "United Kingdom" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The lexicalist approach to Machine Translation offers significant advantages in the development of linguistic descriptions. However, the Shake-and-Bake generation algorithm of (Whitelock, 1992) is NPcomplete. We present a polynomial time algorithm for lexicalist MT generation provided that sufficient information can be transferred to ensure more determinism.", "pdf_parse": { "paper_id": "P95-1035", "_pdf_hash": "", "abstract": [ { "text": "The lexicalist approach to Machine Translation offers significant advantages in the development of linguistic descriptions. However, the Shake-and-Bake generation algorithm of (Whitelock, 1992) is NPcomplete. We present a polynomial time algorithm for lexicalist MT generation provided that sufficient information can be transferred to ensure more determinism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Lexicalist approaches to MT, particularly those incorporating the technique of Shake-and-Bake generation (Beaven, 1992a; Beaven, 1992b; Whitelock, 1994) , combine the linguistic advantages of transfer (Arnold et al., 1988; Allegranza et al., 1991) and interlingual (Nirenburg et al., 1992; Dorr, 1993) approaches. Unfortunately, the generation algorithms described to date have been intractable. 
In this paper, we describe an alternative generation component which has polynomial time complexity.", "cite_spans": [ { "start": 105, "end": 120, "text": "(Beaven, 1992a;", "ref_id": null }, { "start": 121, "end": 135, "text": "Beaven, 1992b;", "ref_id": null }, { "start": 136, "end": 152, "text": "Whitelock, 1994)", "ref_id": null }, { "start": 201, "end": 222, "text": "(Arnold et al., 1988;", "ref_id": null }, { "start": 223, "end": 247, "text": "Allegranza et al., 1991)", "ref_id": "BIBREF0" }, { "start": 265, "end": 289, "text": "(Nirenburg et al., 1992;", "ref_id": null }, { "start": 290, "end": 301, "text": "Dorr, 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Shake-and-Bake translation assumes a source grammar, a target grammar and a bilingual dictionary which relates translationally equivalent sets of lexical signs, carrying across the semantic dependencies established by the source language analysis stage into the target language generation stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The translation process consists of three phases: 1. A parsing phase, which outputs a multiset, or bag, of source language signs instantiated with sufficiently rich linguistic information established by the parse to ensure adequate translations. 2. A lexical-semantic transfer phase which employs the bilingual dictionary to map the bag of instantiated source signs onto a bag of target language signs. *We wish to thank our colleagues Kerima Benkerimi, David Elworthy, Peter Gibbins, Ian Johnson, Andrew Kay and Antonio Sanfilippo at SLE, and our anonymous reviewers for useful feedback and discussions on the research reported here and on earlier drafts of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. A generation phase which imposes an order on the bag of target signs which is guaranteed grammatical according to the monolingual target grammar. This ordering must respect the linguistic constraints which have been transferred into the target signs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A generation phase which imposes an order on", "sec_num": "3." }, { "text": "The Shake-and-Bake generation algorithm of (Whitelock, 1992) combines target language signs using the technique known as generate-and-test. In effect, an arbitrary permutation of signs is input to a shift-reduce parser which tests them for grammatical well-formedness. If they are well-formed, the system halts indicating success. If not, another permutation is tried and the process repeated. The complexity of this algorithm is O(n!) because all permutations (n! for an input of size n) may have to be explored to find the correct answer, and indeed must be explored in order to verify that there is no answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A generation phase which imposes an order on", "sec_num": "3." },
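{ "text": "To make the combinatorics concrete, the generate-and-test scheme can be sketched in a few lines of Python. This is our illustration, not the original implementation: reduce_all, standing for a shift-reduce parser that returns a single sign when the sequence reduces to one constituent and None otherwise, is an assumed helper.

from itertools import permutations

def naive_shake_and_bake(bag, reduce_all):
    # Try every permutation of the target-sign bag (n! of them) and
    # shift-reduce each one, succeeding on the first grammatical ordering.
    for ordering in permutations(bag):
        result = reduce_all(list(ordering))
        if result is not None:
            return result   # a grammatical ordering was found
    return None             # all n! permutations fail: provably no answer
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A generation phase which imposes an order on", "sec_num": "3." },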
{ "text": "Proponents of the Shake-and-Bake approach have employed various techniques to improve generation efficiency. For example, (Beaven, 1992a) employs a chart to avoid recalculating the same combinations of signs more than once during testing, and (Popowich, 1994) proposes a more general technique for storing which rule applications have been attempted; (Brew, 1992) avoids certain pathological cases by employing global constraints on the solution space; researchers such as (Brown et al., 1990) and (Chen and Lee, 1994) provide systems for bag generation that are heuristically guided by probabilities. However, none of these approaches is guaranteed to avoid protracted search times if an exact answer is required, because bag generation is NP-complete (Brew, 1992).", "cite_spans": [ { "start": 243, "end": 259, "text": "(Popowich, 1994)", "ref_id": null }, { "start": 473, "end": 493, "text": "(Brown et al., 1990)", "ref_id": null }, { "start": 752, "end": 764, "text": "(Brew, 1992)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A generation phase which imposes an order on", "sec_num": "3." }, { "text": "Our novel generation algorithm has polynomial complexity (O(n^4)). The reduction in theoretical complexity is achieved by placing constraints on the power of the target grammar when operating on instantiated signs, and by using a more restrictive data structure than a bag, which we call a target language normalised commutative bracketing (TNCB). A TNCB records dominance information from derivations and is amenable to incremental updates. This allows us to employ a greedy algorithm to refine the structure progressively until either a target constituent is found and generation has succeeded, or no more changes can be made and generation has failed.", "cite_spans": [ { "start": 339, "end": 345, "text": "(TNCB)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A generation phase which imposes an order on", "sec_num": "3." }, { "text": "In the following sections, we will sketch the basic algorithm, consider how to provide it with an initial guess, and provide an informal proof of its efficiency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A generation phase which imposes an order on", "sec_num": "3." }, { "text": "We begin by describing the fundamentals of a greedy incremental generation algorithm. The crucial data structure that it employs is the TNCB. We give some definitions, state some key assumptions about suitable TNCBs for generation, and then describe the algorithm itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Greedy Incremental Generation Algorithm", "sec_num": "2" }, { "text": "We assume a sign-based grammar with binary rules, each of which may be used to combine two signs by unifying them with the daughter categories and returning the mother. Combination is the commutative equivalent of rule application; the linear ordering of the daughters that leads to successful rule application determines the orthography of the mother. Whitelock's Shake-and-Bake generation algorithm attempts to arrange the bag of target signs until a grammatical ordering (an ordering which allows all of the signs to combine to yield a single sign) is found. However, the target derivation information itself is not used to assist the algorithm. Even in (Beaven, 1992a), the derivation information is used simply to cache previous results to avoid exact recomputation at a later stage, not to improve on previous guesses. 
The reason why we believe such improvement is possible is that, given adequate information from the previous stages, two target signs cannot combine by accident; they must do so because the underlying semantics within the signs licenses it.", "cite_spans": [ { "start": 657, "end": 672, "text": "(Beaven, 1992a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "If the linguistic data that two signs contain allows them to combine, it is because they are providing a semantics which might later become more specified. For example, consider the bag of signs that have been derived through the Shake-and-Bake process and which represent the phrase:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "(1) The big brown dog Now, since the determiner and adjectives all modify the same noun, most grammars will allow us to construct the phrases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "(2) The dog (3) The big dog (4) The brown dog as well as the 'correct' one. Generation will fail if all signs in the bag are not eventually incorporated in the final result, but in the naive algorithm, the intervening computation may be intractable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "In the algorithm presented here, we start from the observation that the phrases (2) to (4) are not incorrect semantically; they are simply under-specifications of (1). We take advantage of this by recording the constituents that have combined within the TNCB, which is designed to allow further constituents to be incorporated with minimal recomputation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "A TNCB is composed of a sign, and a history of how it was derived from its children. The structure is essentially a binary derivation tree whose children are unordered. Concretely, it is either NIL, or a triple:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "TNCB = NIL | Value \u00d7 TNCB \u00d7 TNCB; Value = Sign | INCONSISTENT | UNDETERMINED", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "The second and third items of the TNCB triple are the child TNCBs. The value of a TNCB is the sign that is formed from the combination of its children, or INCONSISTENT, representing the fact that they cannot grammatically combine, or UNDETERMINED, i.e. it has not yet been established whether the signs combine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "Undetermined TNCBs are commutative, e.g. they do not distinguish between the structures shown in figure 1. In section 3 we will see that this property is important when starting up the generation process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" },
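{ "text": "The definition above maps directly onto a small datatype. The following Python sketch is illustrative rather than the authors' implementation; a Sign is treated as an opaque object supplied by the target grammar.

from dataclasses import dataclass
from typing import Optional

INCONSISTENT = 'INCONSISTENT'   # the children cannot combine grammatically
UNDETERMINED = 'UNDETERMINED'   # the combination has not yet been attempted

@dataclass
class TNCB:
    value: object                    # a Sign, INCONSISTENT, or UNDETERMINED
    left: Optional['TNCB'] = None    # unordered child TNCBs;
    right: Optional['TNCB'] = None   # None plays the role of NIL

def leaf(sign):
    # A lexical sign is a TNCB with no children.
    return TNCB(sign)

def node(left, right):
    # An internal node starts out undetermined; evaluation fills it in.
    return TNCB(UNDETERMINED, left, right)
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" },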
{ "text": "Let us introduce some terminology. A TNCB is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "\u2022 well-formed iff its value is a sign,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "\u2022 ill-formed iff its value is INCONSISTENT, \u2022 undetermined (and its value is UNDETERMINED) iff it has not been demonstrated whether it is well-formed or ill-formed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "\u2022 maximal iff it is well-formed and its parent (if it has one) is ill-formed. In other words, a maximal TNCB is a largest well-formed component of a TNCB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "Since TNCBs are tree-like structures, if a TNCB is undetermined or ill-formed then so are all of its ancestors (the TNCBs that contain it). We define five operations on a TNCB. The first three are used to define the fourth transformation (move), which improves ill-formed TNCBs. The fifth is used to establish the well-formedness of undetermined nodes. In the diagrams, we use a cross to represent ill-formed nodes and a black circle to represent undetermined ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "Deletion: A maximal TNCB can be deleted from its current position. The structure above it must be adjusted in order to maintain binary branching. In figure 2, we see that when node 4 is deleted, so is its parent node 3. The new node 6, representing the combination of 2 and 5, is marked undetermined. Conjunction: A maximal TNCB can be conjoined with another maximal TNCB if they may be combined by rule. In figure 3, it can be seen how the maximal TNCB composed of nodes 1, 2 and 3 is conjoined with the maximal TNCB composed of nodes 4, 5 and 6, giving the TNCB made up of nodes 1 to 7. The new node, 7, is well-formed. (Figure 3: 1 is conjoined with 4, giving 7.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "Adjunction: A maximal TNCB can be inserted inside a maximal TNCB, i.e. conjoined with a non-maximal TNCB, where the combination is licensed by rule. In figure 4, the TNCB composed of nodes 1, 2 and 3 is inserted inside the TNCB composed of nodes 4, 5 and 6. All nodes (only 8 in figure 4) which dominate the node corresponding to the new combination (node 7) must be marked undetermined; such nodes are said to be disrupted. Movement: This is a combination of a deletion with a subsequent conjunction or adjunction. In figure 5, we illustrate a move via conjunction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "In the left-hand figure, we assume we wish to move the maximal TNCB 4 next to the maximal TNCB 7. This first involves deleting TNCB 4 (noting it), and raising node 3 to replace node 2. We then introduce node 8 above node 7, and make both nodes 7 and 4 its children. 
Note that during deletion, we remove a surplus node (node 2 in this case), and during conjunction or adjunction we introduce a new one (node 8 in this case), thus maintaining the same number of nodes in the tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "Figure 5: A conjoining move from 4 to 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "Evaluation: After a movement, the TNCB is undetermined, as demonstrated in figure 5. The signs of the affected parts must be recalculated by combining the recursively evaluated child TNCBs.", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 82, "text": "figure 5", "ref_id": null } ], "eq_spans": [], "section": "TNCBs", "sec_num": "2.1" }, { "text": "The Shake-and-Bake system of (Whitelock, 1992) employs a bag generation algorithm because it is assumed that the input to the generator is no more than a collection of instantiated signs. Full-scale bag generation is not necessary because sufficient information can be transferred from the source language to severely constrain the subsequent search during generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitable Grammars", "sec_num": "2.2" }, { "text": "The two properties required of TNCBs (and hence of the target grammars with instantiated lexical signs) are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitable Grammars", "sec_num": "2.2" }, { "text": "1. Precedence Monotonicity. The order of the orthographies of two combining signs in the orthography of the result must be determinate: it must not depend on any subsequent combination that the result may undergo. This constraint says that if one constituent fails to combine with another, no permutation of the elements making up either would render the combination possible. This allows bottom-up evaluation to occur in linear time. In practice, this restriction requires that sufficiently rich information be transferred from the previous translation stages to ensure that sign combination is deterministic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitable Grammars", "sec_num": "2.2" }, { "text": "2. Dominance Monotonicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitable Grammars", "sec_num": "2.2" }, { "text": "If a maximal TNCB is adjoined at the highest possible place inside another TNCB, the result will be well-formed after it is re-evaluated. Adjunction is only attempted if conjunction fails (in fact conjunction is merely a special case of adjunction in which no nodes are disrupted); an adjunction which disrupts i nodes is attempted before one which disrupts i + 1 nodes. Dominance monotonicity merely requires all nodes that are disrupted under this top-down control regime to be well-formed when re-evaluated. We will see that this ensures the termination of the generation algorithm within n-1 steps, where n is the number of lexical signs input to the process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitable Grammars", "sec_num": "2.2" },
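{ "text": "Precedence monotonicity is what licenses a single bottom-up test pass. A minimal sketch of the evaluation operation follows, in the same illustrative Python as above; combine(a, b) is an assumed helper performing deterministic rule application and returning a mother sign or None.

def evaluate(t):
    # (Re-)evaluate a TNCB bottom-up, reusing any cached values: only
    # nodes marked UNDETERMINED (e.g. those disrupted by a move) are
    # recomputed, so a full pass needs at most 2(n-1) combinations.
    if t.left is None:               # a leaf: the value is a lexical sign
        return t.value
    if t.value is UNDETERMINED:
        l, r = evaluate(t.left), evaluate(t.right)
        if l is INCONSISTENT or r is INCONSISTENT:
            t.value = INCONSISTENT
        else:
            # children are unordered, so try rule application both ways;
            # by precedence monotonicity the outcome is deterministic
            t.value = combine(l, r) or combine(r, l) or INCONSISTENT
    return t.value
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitable Grammars", "sec_num": "2.2" },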
{ "text": "We are currently investigating the mathematical characterisation of grammars and instantiated signs that obey these constraints. So far, we have not found these restrictions particularly problematic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitable Grammars", "sec_num": "2.2" }, { "text": "The generator cycles through two phases: a test phase and a rewrite phase. Imagine that a bag of signs corresponding to \"the big brown dog barked\" has been passed to the generation phase. The first step in the generation process is to convert it into some arbitrary TNCB structure, say the one in figure 6. In order to verify whether this structure is valid, we evaluate the TNCB. This is the test phase. If the TNCB evaluates successfully, the orthography of its value is the desired result. If not, we enter the rewrite phase.", "cite_spans": [], "ref_spans": [ { "start": 294, "end": 302, "text": "figure 6", "ref_id": null } ], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "If we were continuing in the spirit of the original Shake-and-Bake generation process, we would now form some arbitrary mutation of the TNCB and retest, repeating this test-rewrite cycle until we either found a well-formed TNCB or failed. However, this would also be intractable, due to the undirectedness of the search through the vast number of possibilities. Given the added derivation information contained within TNCBs and the properties mentioned above, we can direct this search by incrementally improving on previously evaluated results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "We enter the rewrite phase, then, with an ill-formed TNCB. Each move operation must improve it (figure 6 shows an arbitrary right-branching TNCB structure). Let us see why this is so.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 104, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "The move operation maintains the same number of nodes in the tree. The deletion of a maximal TNCB removes two ill-formed nodes (figure 2). At the deletion site, a new undetermined node is created, which may or may not be ill-formed. At the destination site of the movement (whether conjunction or adjunction), a new well-formed node is created.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "The ancestors of the new well-formed node will be at least as well-formed as they were prior to the movement. We can verify this case by case:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "1. When two maximal TNCBs are conjoined, nodes dominating the new node, which were previously ill-formed, become undetermined. When re-evaluated, they may remain ill-formed or some may now become well-formed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "2. When we adjoin a maximal TNCB within another TNCB, nodes dominating the new well-formed node are disrupted. By dominance monotonicity, all nodes which were disrupted by the adjunction must become well-formed after re-evaluation. 
Nodes dominating the maximal disrupted node, which were previously ill-formed, may also become well-formed after re-evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "We thus see that rewriting and re-evaluating must improve the TNCB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "Let us further consider the contrived worst-case starting point provided in figure 6. After the test phase, we discover that every single interior node is ill-formed. We then scan the TNCB, say top-down from left to right, looking for a maximal TNCB to move. In this case, the first move will be \"PAST\" to \"bark\", by conjunction (figure 7).", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 84, "text": "figure 6", "ref_id": null }, { "start": 328, "end": 337, "text": "figure 7)", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "Once again, the test phase fails to provide a well-formed TNCB, so we repeat the rewrite phase, this time finding \"dog\" to conjoin with \"the\" (figure 8 shows the state just after the second pass through the test phase).", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 147, "text": "figure 8", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "After further testing, we again re-enter the rewrite phase and this time note that \"brown\" can be inserted in the maximal TNCB \"the dog barked\", adjoined with \"dog\" (figure 9). Note how, after combining \"dog\" and \"the\", the parent sign reflects the correct orthography even though they did not have the correct linear precedence (Figure 9: The TNCB after \"dog\" is moved to \"the\"). After finding that \"big\" may not be conjoined with \"the brown dog\", we try to adjoin it within the latter. Since it will combine with \"brown dog\", no adjunction to a lower TNCB is attempted.", "cite_spans": [], "ref_spans": [ { "start": 258, "end": 266, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "The final result is the TNCB in figure 11, whose orthography is \"the big brown dog barked\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "We thus see that during generation, we formed a basic constituent, \"the dog\", and incrementally refined it by adjoining the modifiers in place. At the heart of this approach is that, once well-formed, constituents can only grow; they can never be dismantled. Even if generation ultimately fails, maximal well-formed fragments will have been built; these may be presented to the user, allowing graceful degradation of output quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" },
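{ "text": "The complete test-rewrite cycle can now be stated compactly. The Python below is again only our sketch: maximal_tncbs, try_conjoin, try_adjoin and orthography are assumed helpers implementing the scan, the move operations of section 2.1 (conjunction attempted before adjunction, and adjunction as high as possible) and the extraction of an orthography from a sign.

def generate(tncb):
    # Greedy test-rewrite cycle over a TNCB built from n lexical signs.
    # Each successful move strictly improves the tree, and at most n-1
    # moves are ever needed, so the loop terminates.
    while evaluate(tncb) is INCONSISTENT:
        for frag in maximal_tncbs(tncb):   # scan top-down, left to right
            if try_conjoin(tncb, frag) or try_adjoin(tncb, frag):
                break                      # one move made; re-test
        else:
            return None                    # no move possible: generation fails
    # orthography of the result sign, e.g. 'the big brown dog barked'
    return orthography(evaluate(tncb))
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" },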
{ "text": "Figure 10: The TNCB after \"brown\" is moved to \"dog\". Figure 11: The final TNCB after \"big\" is moved to \"brown dog\".", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 9, "text": "Figure 10", "ref_id": null }, { "start": 54, "end": 63, "text": "Figure 11", "ref_id": null } ], "eq_spans": [], "section": "The Generation Algorithm", "sec_num": "2.3" }, { "text": "3 Initialising the Generator", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": "Considering the algorithm described above, we note that the number of rewrites necessary to repair the initial guess is no more than the number of ill-formed TNCBs. This can never exceed the number of interior nodes of the TNCB formed from n lexical signs (i.e. n-1). Consequently, the better formed the initial TNCB used by the generator, the fewer the number of rewrites required to complete generation. In the last section, we deliberately illustrated an initial guess which was as bad as possible. In this section, we consider a heuristic for producing a motivated guess for the initial TNCB. Consider the TNCBs in figure 1. If we interpret S, O and V as Subject, Object and Verb, we can observe an equivalence between the structures with the bracketings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": "(S (V O)), (S (O V)), ((V O) S)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": ", and ((O V) S). The implication of this equivalence is that if, say, we are translating into an (S (V O)) language from a head-final language and have isomorphic dominance structures between the source and target parses, then simply mirroring the source parse structure in the initial target TNCB will provide a correct initial guess. For example, the English sentence (5):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": "(5) the book is red has a corresponding Japanese equivalent (6):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": "(6) ((hon wa) (akai desu)) ((book TOP) (red is))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": "If we mirror the Japanese bracketing structure in English to form the initial TNCB, we obtain: ((book the) (red is)). This will produce the correct answer in the test phase of generation without the need to rewrite at all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": "Even if there is not an exact isomorphism between the source and target commutative bracketings, the first guess is still reasonable as long as the majority of child commutative bracketings in the target language are isomorphic with their equivalents in the source language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" },
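{ "text": "Over the TNCB datatype sketched earlier, the mirroring heuristic is a short recursion. As before, this is our illustration; src is the source derivation tree (assumed binary) and translate is an assumed mapping from source leaf signs to their target equivalents.

def initial_guess(src, translate):
    # Mirror the source bracketing: identical dominance structure,
    # with each leaf replaced by its target-language sign.  Linear
    # order does not matter, since undetermined TNCBs are commutative.
    if src.left is None:                       # a leaf
        return leaf(translate(src.value))
    return node(initial_guess(src.left, translate),
                initial_guess(src.right, translate))
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" },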
{ "text": "Consider the French sentence (7) and its mirrored English bracketing (8):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": "(7) ((le ((grand chien) brun)) aboya) (8) ((the ((big dog) brown)) barked)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": "The TNCB implied by the bracketing in (8) is equivalent to that in figure 10 and requires just one rewrite in order to make it well-formed. We thus see how TNCBs can mirror the dominance information in the source language parse in order to furnish the generator with a good initial guess. On the other hand, no matter how the SL and TL structures differ, the algorithm will still operate correctly with polynomial complexity. Structural transfer can be incorporated to improve the efficiency of generation, but it is never necessary for correctness or even tractability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialising the Generator", "sec_num": "3" }, { "text": "The Complexity of the Generator", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Complexity of the Generator", "sec_num": "4" }, { "text": "The theoretical complexity of the generator is O(n^4), where n is the size of the input. We give an informal argument for this. The complexity of the test phase is the number of evaluations that have to be made. Each node must be tested no more than twice in the worst case (due to precedence monotonicity), as one might have to try to combine its children in either direction according to the grammar rules. There are always exactly n-1 non-leaf nodes, so the complexity of the test phase is O(n). The complexity of the rewrite phase is that of locating the two TNCBs to be combined. In the worst case, we can imagine picking an arbitrary child TNCB (O(n)) and then trying to find another one with which it combines (O(n)). The complexity of this phase is therefore the product of the picking and combining complexities, i.e. O(n^2). The combined complexity of the test-rewrite cycle is thus O(n^3). Now, in section 3, we argued that no more than n-1 rewrites would ever be necessary, so the overall complexity of generation (even when no solution is found) is O(n^4). Average-case complexity depends on the quality of the first guess, on how rapidly the TNCB structure is actually improved, and on the extent to which the TNCB must be re-evaluated after rewriting. In the SLEMaT system (Poznański et al., 1993), we have tried to form a good initial guess by mirroring the source structure in the target TNCB, and by allowing some local structural modifications in the bilingual equivalences.", "cite_spans": [ { "start": 1283, "end": 1308, "text": "(Poznański et al., 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Complexity of the Generator", "sec_num": "4" }, { "text": "Structural transfer operations only affect the efficiency, and not the functionality, of generation. Transfer specifications may be incrementally refined and empirically tested for efficiency. Since complete specification of transfer operations is not required for correct generation of grammatical target text, the version of Shake-and-Bake translation presented here maintains its advantage over traditional transfer models in this respect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Complexity of the Generator", "sec_num": "4" }, { "text": "The monotonicity constraints, on the other hand, might constitute a dilution of the Shake-and-Bake ideal of independent grammars. 
For instance, precedence monotonicity requires that the status of a clause (strictly, of its lexical head) as main or subordinate be transferred into German. It is not that the transfer of information per se compromises the ideal; such information must often appear in transfer entries anyway, to avoid grammatical but incorrect translation (e.g. a great man translated as un homme grand). The problem is justifying the main/subordinate distinction in every language that we might wish to translate into German. This distinction can be justified monolingually for the other languages that we treat (English, French, and Japanese). Whether the constraints will ultimately require monolingual grammars to be enriched with entirely unmotivated features will only become clear as translation coverage is extended and new language pairs are added.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Complexity of the Generator", "sec_num": "4" }, { "text": "We have presented a polynomial-complexity generation algorithm which can form part of any Shake-and-Bake-style MT system with suitable grammars and information transfer. The transfer module is free to attempt structural transfer in order to produce the best possible first guess. We tested a TNCB-based generator in the SLEMaT MT system with the pathological cases described in (Brew, 1992) against Whitelock's original generation algorithm, and have obtained speed improvements of several orders of magnitude. Somewhat more surprisingly, even for short sentences which were not problematic for Whitelock's system, the generation component has performed consistently better.", "cite_spans": [ { "start": 377, "end": 389, "text": "(Brew, 1992)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Linguistics for Machine Translation: The Eurotra Linguistic Specifications", "authors": [ { "first": "V", "middle": [], "last": "Allegranza", "suffix": "" }, { "first": "P", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "J", "middle": [], "last": "Durand", "suffix": "" }, { "first": "F", "middle": [], "last": "Van Eynde", "suffix": "" }, { "first": "L", "middle": [], "last": "Humphreys", "suffix": "" }, { "first": "P", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "E", "middle": [], "last": "Steiner", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Allegranza, P. Bennett, J. Durand, F. van Eynde, L. Humphreys, P. Schmidt, and E. Steiner. 1991. Linguistics for Machine Translation: The Eurotra Linguistic Specifications. In C. Copeland, J. Durand, S. Krauwer, and B. Maegaard, editors, The Eurotra Formal Specifications. Studies in Machine", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Figure 1: Equivalent TNCBs" }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "4 is deleted, raising 5" }, "FIGREF4": { "num": null, "type_str": "figure", "uris": null, "text": "1 is adjoined next to 6 inside 4" }, "FIGREF5": { "num": null, "type_str": "figure", "uris": null, "text": "The initial guess" }, "FIGREF6": { "num": null, "type_str": "figure", "uris": null, "text": "The TNCB after \"PAST\" is moved to \"bark\"" } } } }