{ "paper_id": "E91-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:38:04.112641Z" }, "title": "Non-deterministic Recursive Ascent Parsing", "authors": [ { "first": "Ren~", "middle": [], "last": "Leermakers", "suffix": "", "affiliation": { "laboratory": "", "institution": "Philips Research Laboratories", "location": { "postBox": "P.O. Box 80.000", "postCode": "5600 JA", "settlement": "Eindhoven", "country": "The Netherlands" } }, "email": "leermake@rosetta.prl.philips.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A purely functional implementation of LR-parsers is given, together with a simple correctness proof. It is presented as a generalization of the recursive descent parser. For non-LR grammars the time-complexity of our parser is cubic if the functions that constitute the parser are implemented as memo-functions, i.e. functions that memorize the results of previous invocations. Memo-functions also facilitate a simple way to construct a very compact representation of the parse forest. For LR(0) grammars, our algorithm is closely related to the recursive ascent parsers recently discovered by Kruseman Aretz [1] and Roberts [2]. Extended CF grammars (grammars with regular expressions at the right hand side) can be parsed with a simple modification of the LR-parser for normal CF grammars.", "pdf_parse": { "paper_id": "E91-1012", "_pdf_hash": "", "abstract": [ { "text": "A purely functional implementation of LR-parsers is given, together with a simple correctness proof. It is presented as a generalization of the recursive descent parser. For non-LR grammars the time-complexity of our parser is cubic if the functions that constitute the parser are implemented as memo-functions, i.e. functions that memorize the results of previous invocations. Memo-functions also facilitate a simple way to construct a very compact representation of the parse forest. For LR(0) grammars, our algorithm is closely related to the recursive ascent parsers recently discovered by Kruseman Aretz [1] and Roberts [2]. Extended CF grammars (grammars with regular expressions at the right hand side) can be parsed with a simple modification of the LR-parser for normal CF grammars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper we give a purely functional implementation of LR-parsers, applicable to general CF grammars. It will be obtained as a generalization of the well-known recursive descent parsing technique. For LR(0) grammars, our result implies a deterministic parser that is closely related to the recursive ascent parsers discovered by Kruseman Aretz [1] and Roberts [2] . In the general non-deterministic case, the parser has cubic time complexity if the parse functions are implemented as memo-functions [3] , which are functions that memorize and re-use the results of previous invocations. Memofunctions are easily implemented in most programming languages. The notion of memo-functions is also used to define an algorithm that constructs a cubic representation for the parse forest, i.e. the collection of parse trees. It has been claimed by Tomita that non-deterministic LR-parsers are useful for natural language processing. In [4] he presented a discussion about how to do nondeterministic LR-parsing, with a device called a graphstructured stack. 
With our parser we show that no explicit stack manipulations are needed; they can be expressed implicitly with the use of appropriate programming language concepts.", "cite_spans": [ { "start": 349, "end": 352, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 365, "end": 368, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 504, "end": 507, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 933, "end": 936, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most textbooks on parsing do not include proper correctness proofs for LR-parsers, mainly because such proofs tend to be rather involved. The theory of LRparsing should still be considered underdeveloped, for this reason. Our presentation, however, contains a surprisingly simple correctness proof. In fact, this proof is this paper's major contribution to parsing theory. One of its lessons is that the CF grammar class is often the natural one to proof parsers for, even if these parsers are devoted to some special class of grammars. If the grammarlis restricted in some way, a parser for general CF grammars may have properties that enable smart implementation tricks to enhance efficiency. As we show below, the relation between LR-parsers and LR-grammars is of this kind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Especially in natural language processing, standard CF grammars are often too limited in their strong generative power. The extended CF grammar formalism, allowing rules to have regular expressions at the right hand side, is a useful extension, for that reason. It is not difficult to generalize our parser to cope with extended grammars, although the application of LR-parsing to extended CF grammars is well-known to be problematic [5] .", "cite_spans": [ { "start": 434, "end": 437, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first present the recursive descent recognizer in a way that allows the desired generalization. Then we obtain the recursive ascent recognizer and its proof. If the grammar is LR(0) a few implementation tricks lead to the recursive ascent recognizer of ref. [1] . Subsequently, the time and space complexities of the recognizer are analysed, and the algorithm for constructing a cubic representation for parse forests is given. The paper ends with a discussion of extended CF grammars.", "cite_spans": [ { "start": 261, "end": 264, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Consider CF grammar G, with terminals VT and nonterminals V/v. Let V = VN U VT. A well-known topdown parsing technique is the recursive descent parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "Recursive descent parsers consist of a number of procedures, usually one for each non-terminal. Here we present a variant that consists of functions, one for each item (dotted rule). We use the unorthodox embracing operator [.] to map each item to its function: (we use greek letters for arbitrary elements of V*) with B E V, and 1/31 the number of symbols in 3 (with H = 0). A recursive ascent recognizer may be obtained by relating to each state q not only the above [q] , but also a function__ that we take to be the result of applying operator [.] 
to the state: The set $1 may be rewritten using the specification of [q](C, k):", "cite_spans": [ { "start": 469, "end": 472, "text": "[q]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "ini(q) = { B --* .AIB -. A ^ A --* a.3 \u2022 q A 3 =\u00a2 B-r}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "[q] : V x N --* 2 I\u00b0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "A (l, j) E-~(B, i)}U {(l,i)lI \u2022 q ^ final(l)} [q](B, i) = { (pop(l), J)l (1,j) \u2022 [ooto(q, B)](i)^ pop(l) \u2022 q}U {(I,4)1(J, k) \u2022 [goto(q, B)I~^ pop(J) \u2022 ini(q) ^ (1, j) \u2022 [q](lhs(S), k)} Proof: First we notice that /8 \"** xi+l..-xj 3~(3 ~* zi+l't ^ 7 ~\" z,+2...zj)v 3B~(3 ~\" B-r ^ B ~ c ^ -y --.\" z,+~...zj)v (~=~^i=j) Hence [q](i) = {(A --* a.3,J)l(A --* a.3, j) \u2022 r~(z,+~, i+ 1)}u {(A --, ,~.3, J)l B -.-. eA(A --, a.3,j) \u2022 [q](B,i)}u {(A --~ a.,i)la --* a. \u2022 q}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "S1 : {(A -'~ a.~,j)l(A -~ a.~,j) E [q](C,k)A C --* B6 A 6 --,\" xi+,...xk}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "Also, as before, ~ =~* C'r implies that all items C ~ .g are in ini(q), and the existence of C -* .B~ in ini(q) implies C ~ B.~ E goto(q, B):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "Sx = {(A ~ a.~,j)l(A ~ ~.B,j) E [q](C, k)A C --~ .B~ E ini(q)A (C --. B.6, k) ~ [goto(q, B)](i)}. n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "In the computation of [q0](0), functions are needed only for states in the canonical collection of LR(0) states [6] for G, i.e. for every state that can be reached from the initial state by repeated application of the goto function.", "cite_spans": [ { "start": 112, "end": 115, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "Note that in general the state \u00a2 will be among these, and that both [\u00a2](i) and [g](B, i) are empty sets for all i _> 0 and B E V.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recursive descent", "sec_num": null }, { "text": "One can prove that, if the grammar is LR(0), each recognizer function for a canonical LR(0) state results in a set with at most one element. The functions for nonempty q may in this case be rephrased as Therefore, R can be replaced by two variables X E V and an integer I, making the following substitutions in the previous procedures:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deterministic variants", "sec_num": "4" }, { "text": "R:=A--*a. =~ X:=A;I:=Icrl R:=pop(R) =~ l := l-1 pop(R) E q =~ l # l v X = S' lhs( R) =~ X", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deterministic variants", "sec_num": "4" }, { "text": "After these substitutions, one gets close to the recursive ascent recognizer as it was presented in [1] . A recognizer that is virtually the same as in [l~s obtained by replacing the tail-recursive procedure [q] by an iterative loop. Then one is left with one procedure for each state. 
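To make the functions [q](i) and [q̄](B, i) concrete, the sketch below transcribes them almost literally into Python, with dictionaries playing the role of the memo-functions discussed in section 5. It is an illustration only: the toy grammar, the encoding of items as (lhs, rhs, dot) triples and all identifiers are our own choices, and the example grammar is neither cyclic nor affected by the ε-transition problem, so plain memoization suffices. For an LR(0) grammar every set computed below would contain at most one pair, which is exactly what the deterministic rephrasing above exploits.

```python
# Sketch (illustrative, not the formal development) of the recursive ascent
# recognizer [q](i) and [q-bar](B, i), with dictionaries as memo tables.

RULES = [                       # toy ambiguous grammar, augmented with S' -> E
    ("S'", ("E",)),
    ("E", ("E", "+", "E")),
    ("E", ("a",)),
]
NONTERMINALS = {lhs for lhs, _ in RULES}
START_ITEM = ("S'", ("E",), 0)  # an item A -> alpha.beta is coded as (A, rhs, dot)


def ini(q):
    """Predicted items B -> .delta with beta =>* B gamma for some A -> alpha.beta in q."""
    reach, todo = set(), [rhs[d] for (_, rhs, d) in q if d < len(rhs)]
    while todo:
        x = todo.pop()
        if x in reach or x not in NONTERMINALS:
            continue
        reach.add(x)
        todo.extend(rhs[0] for lhs, rhs in RULES if lhs == x and rhs)   # non-epsilon rules only
    return frozenset((lhs, rhs, 0) for lhs, rhs in RULES if lhs in reach)


def goto(q, symbol):
    """Shift the dot over `symbol` in q and its predicted items ini(q)."""
    return frozenset((lhs, rhs, d + 1) for (lhs, rhs, d) in q | ini(q)
                     if d < len(rhs) and rhs[d] == symbol)


def recognize(words):
    n, memo, memo_bar = len(words), {}, {}

    def state(q, i):            # [q](i)
        if (q, i) not in memo:
            result = {(it, i) for it in q if it[2] == len(it[1])}       # final items
            if i < n:
                result |= state_bar(q, words[i], i + 1)                 # shift x_{i+1}
            for (lhs, rhs, _) in ini(q):
                if rhs == ():                                           # B -> .epsilon in ini(q)
                    result |= state_bar(q, lhs, i)
            memo[(q, i)] = frozenset(result)
        return memo[(q, i)]

    def state_bar(q, B, i):     # [q-bar](B, i)
        if (q, B, i) not in memo_bar:
            result = set()
            for ((lhs, rhs, d), j) in state(goto(q, B), i):
                popped = (lhs, rhs, d - 1)
                if popped in q:                                         # pop(I) in q
                    result.add((popped, j))
                if popped in ini(q):                                    # pop(I) in ini(q)
                    result |= state_bar(q, lhs, j)
            memo_bar[(q, B, i)] = frozenset(result)
        return memo_bar[(q, B, i)]

    return (START_ITEM, n) in state(frozenset({START_ITEM}), 0)


assert recognize(list("a+a+a"))     # ambiguous, but recognized
assert not recognize(list("a+"))
```

Here a sentence x1...xn is accepted iff the pair (S' → .E, n) turns up in [q0](0), in line with the specification of [q].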
While parsing there is, at each instance, a stack of activated procedures that corresponds to the stacks that are explicitly maintained in conventional implementations of deterministic LR-parsers.", "cite_spans": [ { "start": 100, "end": 103, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Deterministic variants", "sec_num": "4" }, { "text": "For LL(0) grammars the recursive descent recognizer is deterministic and works in linear time. The same is true of the ascent recognizer for LR(0) grammars. In the general, non-deterministic, case the recursive descent and ascent recognizers need exponential time unless the functions are implemented as memo-functions [3] . Memo-functions memorize for which arguments they have been called. If a function is called with the same arguments as before, the function returns the previous result without recomputing it. In conventional programming languages memo-functions are not available, but they can easily be implemented. Devices like graphstructured stacks [4] , parse matrices [7] , or welbformed substring tables [8] , are in fact low-level realizations of the abstract notion of memo-functions. The complexity analysis of the recognizers is quite simple. for the whole recognizer.", "cite_spans": [ { "start": 319, "end": 322, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 660, "end": 663, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 681, "end": 684, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 718, "end": 721, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Complexity", "sec_num": "5" }, { "text": "The above considerations only hold if the parser terminates. The recursive descent parser terminates for all grammars that are not left-recursive. For the recursive ascent parser, the situation is more complicated. If the gra_m.mmar has a cyclic derivation B -** B, the execution of The space required for a parser that also calculates a parse forest, is dominated by this forest. We show in the next section that it may be compressed into a cubic amount of space. In the complexity domain our ascent parser beats its rival, Tomita's parsing method [4] , which is non-polynomial: for each integer k there exists a grammar such that the complexity of the Tomita parser is worse than n k.", "cite_spans": [ { "start": 549, "end": 552, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Complexity", "sec_num": "5" }, { "text": "In addition to the complexity as a function of sentence length, one may also consider the complexity as a function of grammar size. It is clear that both time and space complexity are proportional to the number of parsing procedures. The number of procedures of the recursive descent parser is proportional to the number of items, and hence a linear function of the grammar size. The recursive ascent parser, however, contains two functions for each LR-state and is hence proportional to the size of the canonical collection of LR(0) states. In the worst case, this size is an exponential function of grammar size, but in the average natural language case there seems to be a linear, or even sublinear, dependence ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity", "sec_num": "5" }, { "text": "Usually, the recognition process is followed by the construction of parse trees. For ambiguous grammars, it becomes an issue how to represent the set of parse trees as compactly as possible. 
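Before turning to that representation, we note in passing that the memo-functions assumed in the complexity argument above are easy to realize. The wrapper below is a minimal sketch (the name memo_function and the use of Python dictionaries are ours); it also builds in the property, used in the termination discussion, that a call which is still under execution yields the empty set.

```python
# Minimal memo-function wrapper (illustrative): results of previous invocations
# are stored and reused; a re-entrant call that is still being computed yields
# the empty set, which also guarantees termination.
def memo_function(f, while_running=frozenset()):
    table, running = {}, set()

    def wrapped(*args):
        if args in table:           # result of a previous invocation
            return table[args]
        if args in running:         # call under execution: yield the empty set
            return while_running
        running.add(args)
        try:
            table[args] = f(*args)
        finally:
            running.discard(args)
        return table[args]

    return wrapped
```

Any recognizer function with hashable arguments can be wrapped in this way, e.g. state = memo_function(state) in the earlier sketch.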
Below, we describe how to obtain a cubic representation in cubic time. We do so in three steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forest", "sec_num": "6" }, { "text": "In the first step, we observe that ambiguity often arises locally: given a certain context C Of course, this idea should be applied recursively. Technically, this leads to a kind of tree-llke structure in which each child is a set of substructures rather than a single one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forest", "sec_num": "6" }, { "text": "The sharing of context can be carried one step further. If we have, in one and the same context, a number of applied occurrences of a production rule A ---, a/~ which share also the same parse forest for a, we can represent the context of A ---* a~ itself and the common parse forest for a only once and fit the set of parse forests for fl into that. Again this idea has to be applied recursively. Technically, this leads to a binary representation of parse trees, with each node having at most two sons, and to the application of the context sharing technique to this binary representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forest", "sec_num": "6" }, { "text": "These The representation for the set of parse trees is then just f(S, 0, n).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forest", "sec_num": "6" }, { "text": "We now come to our third step. Suppose, for the mo- ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forest", "sec_num": "6" }, { "text": "An extended CF grammar consists of grammar rules with regular expressions at the right hand side. Every extended CF grammar can be translated into a normal CF grammar by replacing each right hand side by a regular (sub)grammar. The strong generative power is different from CF grammars, however, as the degree of the nodes in a derivation tree is unbounded. To apply our recognizer directly to extended grammars, a few of the foregoing definitiovs have to be revised. As before, a grammar rule is written A --, a, but with a now a regular expression with Na symbols (elements of V). Defining T + = 1...N,, and Ta = 0...Na, regular expression tr can be characterized by goto(q, B) = {(a ---, a, k) i.e. every state has at most one final item, and in case it has a final item it has no items (A --, ~,j)", "cite_spans": [], "ref_spans": [ { "start": 669, "end": 696, "text": "goto(q, B) = {(a ---, a, k)", "ref_id": null } ], "eq_spans": [], "section": "Extended CF grammars", "sec_num": "7" }, { "text": "with k e succ,~(j) A ~b,~(k) \u2022 VT. 2. for all reachable states q, q N ini(q) = ~, and for all I there is at most one J \u2022 ~ such that J E pop(I).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extended CF grammars", "sec_num": "7" }, { "text": "In the deterministic case, the analysis of section 4 can be repeated with one exception: extended grammar items can not be represented by a non-terminal and an integer that equals the number of symbols before thc dot, as this notion is irrelevant in the case of regular expressions. 
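As an illustration of such a characterization, consider a hypothetical extended rule A → a(b|c)*d. Numbering its symbol occurrences 1..4, the mapping from positions to symbols, the successor sets and the set of possible end positions can be written down directly, and the set-valued pop follows; the concrete dictionaries and names below are ours, chosen only to make the bookkeeping explicit.

```python
# Illustrative encoding of the extended rule A -> a (b | c)* d:
# phi maps positions 1..N to grammar symbols, succ gives the possible next
# positions, and `ends` lists the positions the expression may stop at
# (0 in `ends` would mean the expression admits the empty string).
phi  = {1: "a", 2: "b", 3: "c", 4: "d"}
succ = {0: {1}, 1: {2, 3, 4}, 2: {2, 3, 4}, 3: {2, 3, 4}, 4: set()}
ends = {4}

# Items are pairs (rule, k) with position k 'just before the dot'; pop becomes
# set-valued: pop(l) yields every k from which the dot can have moved to l.
def pop(l):
    return {k for k, nxt in succ.items() if l in nxt}

def final(k):
    return k in ends

assert pop(4) == {1, 2, 3}          # 'd' may follow 'a', 'b' or 'c'
assert final(4) and not final(2)
```

Since the functional recognizer simply passes such items around as values, nothing beyond this bookkeeping is required; the difficulty arises only when an item must be summarized by a single integer, as in section 4.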
In standard presentations of deterministic LR-parsing this leads to almost unsurmountable problems [5] .", "cite_spans": [ { "start": 382, "end": 385, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Extended CF grammars", "sec_num": "7" }, { "text": "We established a very simple and elegant implementation of LR(0) parsing. It is easily extended to LALR(k) parsing by letting the functions [q] produce pairs with final items only after inspection of the next k input symbols. The functional LR-parser provides a high-level view of LR-parsing, compared to conventional implementations. A case in point is the ubiquitous stack, that simply corresponds to the procedure stack in the functional case. As the proof of a functional LR-parser is not hindered by unnecessary implementation details, it can be very compact. Nevertheless, the functional implementation is as efficient as conventional ones. Also, the notion of memo-functions is an important primitive for presenting algorithms at a level of abstraction that can not be achieved without them, as is exemplified by this paper's presentation of both the recognizers and the parse forests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "For non-LR grammars, there is no reason to use the complicated Tomita algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "If indeed nondeterministic LR-parsers beat the Earley algorithm for some natural language grammars, as claimed in [4] , this is because the number of LR(0) states may be smaller than the size of IG for such grammars. Evidently, for the grammars examined in [4] this advantage compensates the loss of efficiency caused by the non-polynomiality of Tomita's algorithm. The present algorithm seems to have the possible advantage of Tomita's parser, while being polynomial.", "cite_spans": [ { "start": 114, "end": 117, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 257, "end": 260, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" } ], "back_matter": [ { "text": "A considerable part of this research was done in collaboration with Lex Augusteyn and Frans Kruseman Aretz. Both are colleagues at Philips Research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On a recursive ascent parser", "authors": [ { "first": "F", "middle": [ "E J" ], "last": "Kruseman Aretz", "suffix": "" } ], "year": 1988, "venue": "In]ormation Processing Letters", "volume": "29", "issue": "", "pages": "201--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "F.E.J. Kruseman Aretz, On a recursive ascent parser, In]ormation Processing Letters (1988) 29:201-206.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Recursive Ascent: An LR Analog to Recursive Descent, SIGPLAN Notices", "authors": [ { "first": "G", "middle": [ "H" ], "last": "Roberts", "suffix": "" } ], "year": 1988, "venue": "", "volume": "23", "issue": "", "pages": "23--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "G.H. 
Roberts, Recursive Ascent: An LR Analog to Recursive Descent, SIGPLAN Notices (1988) 23(8):23-29.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Lazy Memo-Functions in Functional Programming Languages and Computer Architecture edited by", "authors": [ { "first": "J", "middle": [], "last": "Hughes", "suffix": "" } ], "year": 1985, "venue": "Springer Lecture Notes in Computer Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Hughes, Lazy Memo-Functions in Functional Pro- gramming Languages and Computer Architecture edited by J.-P. Jouannaud, Springer Lecture Notes in Computer Science (1985) 201.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Efficient Parsing ]or Natural Language", "authors": [ { "first": "M", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Tomita, Efficient Parsing ]or Natural Language (Kluwer Academic Publishers, 1986).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Parsing extended LR(k) grammars, Acta lnformatica", "authors": [ { "first": "P", "middle": [ "W" ], "last": "Purdorn", "suffix": "" }, { "first": "C", "middle": [ "A" ], "last": "Brown", "suffix": "" } ], "year": 1981, "venue": "", "volume": "15", "issue": "", "pages": "115--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "P.W. Purdorn and C.A. Brown, Parsing extended LR(k) grammars, Acta lnformatica (1981) 15:115- 127.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Principles of Compiler Design", "authors": [ { "first": "A", "middle": [ "V" ], "last": "Aho", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1977, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.V. Aho and J D. Ullman, Principles of Compiler Design (Addison-Wesley publishing company,1977)", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The theory o] parsing, translation, and compiling", "authors": [ { "first": "A", "middle": [], "last": "", "suffix": "" }, { "first": "V", "middle": [], "last": "Aho", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Ulhnan", "suffix": "" } ], "year": 1972, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A,V. Aho and J.D. Ulhnan, The theory o] parsing, translation, and compiling (Prentice Hall Inc. En- glewood Cliffs N.J.,1972).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Observations on Context Free Parsing in Statistical Methods in Linguistics (Stockhohn (Sweden)", "authors": [ { "first": "B", "middle": [ "A" ], "last": "Shell", "suffix": "" } ], "year": 1976, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B.A. Shell. Observations on Context Free Parsing in Statistical Methods in Linguistics (Stockhohn (Swe- den) 1976).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An Efficient Context-Free Parsing Algorithm", "authors": [ { "first": "J", "middle": [], "last": "Earley", "suffix": "" } ], "year": 1970, "venue": "Communications ACM", "volume": "13", "issue": "2", "pages": "94--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Earley, 1970. 
An Efficient Context-Free Parsing Algorithm, Communications ACM 13(2):94-102.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "the set of integers, or a subset (0...nm~x), with nma= the maximum seutence length. The functions are to meet the following specification: [A --, a. l(0 = {Jl -*\" with x~...xn the sentence to be parsed. A recursive implementation for these functions is given by (b \u2022 VT, B \u2022 v,,) [A --* a.](i) = {i} [a --* a.b-r](i) = {jib = zi+, ^ j E [A ~ ab.-r](i + 1)} [A ---, a.B-r](i) = {Jl~ \u2022 [B ~ .~](i)^j \u2022 [A -; ~a.-r](~)} We keep to the custom of omitting existential quantification (here for k,/f) in definitions of this kind. The proof is elementary and ba#ed on 3~(3 = x~+a-r A -r ~* zi+~...~:s)V 3B-~$k(3 = B-r ^ B --~ 8 A 8 --;* zi+a...x~^ -r --~* 2;k+l...2~j) If we add a grammar rule S' --* S to G, with S' ([ V then S --** x~...xn is equivalent to n \u2022 [S' --* .S](0). The recursive descent recognizer works for any CF grammar except for grammars for which ~A~(A ---* aAcr --** A3). For such left-recursive grammars the recognizer does not terminate, as execution of [A --* .a](i) will lead to a call of itself. The recognition is not a linear process in general: the function calls [A ---a.B3\"](i) lead to calls [B --* ./i](i) for all values of ~ such that B ---, is a grammar rule.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "The double arrow =\u00a2, denotes a left-most-symbol rewriting Ba =e~ Cfla, using a non-e rule B ---, Cfl. The transition function goto is defined by (B \u2022 V) goto(q, B) = {A -* aB.3]A --* a.B3 \u2022 (q U ini(q))} Also define pop(A ---, aB.3) = A --', a.B3 lhs(A --* a.fl) = A final(A --. a.3) = (131 = 0)", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "This is equivalent to the earlier version because we may replace the clause B ~ e by B ---, .e \u2022 ini(q). Indeed, if state q has item A --* a.fl and if there is a left-mostsymbol derivation/3 =~* B-r then all items B --* .A are included in ini(q).For establishing the correctness of [q--] notice that 3 ~* B3\" either contains zero steps, in which case 3 = B'r, or it contains at least one step:3.y(3 =~* B3\" A 3' --*\" xi+a ...zs) = 3~(3 = B-r ^ -r --\" xi+l...zj)V 3ce.~k(~8 :=~* C-rAG --* B~S A~5 -.*\" xi+ l...x~,A -r -'** xk+l ...x j) Hence [q](B, i) may be written as the union of two sets, [q](B, i) = So USa: So = {(A --~ a.B3\",j)] A ---. ct.B3\" \u2022 qA-r ---** xs+l...xj} S~ = {(a --. a.3,j)lA --* a.3 \u2022 q ^ 3 =~\" C-r^ C ---* B~ ^ $ --** zi+l...xk ^ 3' --*\" zk+l...zi}. By the definition of goto, if A ---, a.B-r \u2022 q then A --, aB.-r \u2022 goto(q, B). tlence, with the specification of [q], So may be rewritten as So = {(A --. a.B-r,j)IA --. a.B-r \u2022 q^ (A ---* aB.3\",j) \u2022 [goto(q, B)](i)}", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "[q](i): if, for some I, I E q A final(l) t__hen return {(I, i)} else if B --..e E ini(q) then ret__.urn [q](B, i) else if i < n then return [q](xi+~, i + 1) else return fi [q](B,i): if [9oto(q, B)](i) = \u00a2 then return ~ else let (I, j) be the unique element of [goto(q, B)](i). 
Then: if pop(I) E q then return {(pop(l), j)} else return [q](Ihs(l), j) fl fi Reversely, the implementations of [q](i) and [q](B,i) of the previous section can be seen as non-deterministic versions of the present formulation, which therefore provides an intuitive picture that may be helpful to understand the non-deterministic parsing process in an operational way. Each function can be replaced by a procedure that, instead of returning a function result, assigns the result to a global (set) variable. As this set variable may contain at most one element, it can be represented by three variables, a boolean b, an item R and an integer i. If a function would have resulted in the set {(I,j)}, the global variables are set to b = TRUE, R = I and i = j. A function value ~ is represented by b = FALSE. Also the arguments of the functions are superfluous now. The rble of argument i can be played by the global variable with the same na__.rne, and lhs(R)can be used instead of argument B of [q]. Consequently, procedure [\u00a2] becomes a statement b := FALSE, whereas for non-emp.~, q one gets the procedures (keeping the names [q] and [q], trusting no confusion will arise): [q] : if, for some I, I E q A final(l) then R := I else if B --..\u00a2 E ini(q) then R := B --e.; [q] __ else if i < n then R := xi+a --xi+l.; i := i + 1; [q] else b := FALSE fi N M: [goto(q, Ihs(R))l; if b. then if pop(R) E q then R := pop(R) .else [q] fi fi Note that these procedures do not depend on the details of the right hand side of R. Only the number of symbols before the dot is relevant for the test \"pop(R) E q\".", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "[q](B, i) leads to a call of itself. Also, there may be a cycle of transitions labeled by non-terminals that derive e, e.g. if goto(q, B) = q A B ---, e, so that the execution of [q](i) leads to a call of itself. There are non-cyclic grammars that suffer from such a cycle (e.g. S --* SSb, S --* e). Hence, the ascent parser does not terminate if the grammar is cyclic or if it leads to a cycle of transitions labeled b_.~ non-terminals that derive e. Otherwise, execution of [q](B, i) can only lead to calls of [p](i) with p ~ q and to calls of [q](C,k), such that either k > i or C--** BAC ~ B. As there are only finitely many such p, C, the parser terminates. Note that both the recursive descent and ascent recognizer terminate for any grammar, if the recognizer functions are implemented as memo-functions with the property that a call of a function with some arguments yields $ while it is under execution. For instance, if execution of [q](i) leads to a call of itself, the second call is to yield ~. A remark of this kind, for the recursive descent parser, was first made in ref. [8]. The recursive descent parser then becomes virtually equivalent to a version of the standard Earley algorithm [9] that stores items A ---* a./~ in parse matrix entry Ti i if/~ ---,* xi+l...xi, instead of storing it if a --*\u00b0 x~+l...xj.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF6": { "text": "[-], there might be several parse subtrees tl...tk (all deriving the same substring xi+l...xj from the same symbol A) that fit in that same context, leading to the parse trees C[tl], eft2] ..... c[th] for the given string zl...zn. 
Instead of representing these parse trees separately, repeating each time the context C, we can represent them collectively as C[{~1, ..., tk}].", "num": null, "uris": null, "type_str": "figure" }, "FIGREF7": { "text": "two ideas are captured by introducing a function f with the interpretation that f(f3, i,j) represents the parse forest of all derivations from /~ E V* to zi+~...x~, for all i,j such that 0 < i < j < n. The following recursive definitions fix the parse forest representation formally: f(~, i,j) ={[l[i = J}, f(a, i, j) = {alj = i + 1 ^ x,+l = a}, for all a e liT, f(A,i,j) = {(A,f(ot, i,j))lA ~ aA a .---*\" xi+l...x~}, for all A E VN, f(AB/3, i, j) = {(f(A, i, k), f(B#, k, J))l i < k < jAA ---,\" xi+l...Xk ^ B/~ --~\" xk+l...xj}, for all A, B E V.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF8": { "text": "ment, that the guards a ---,* xi+l...xj and the like, occurring above, can be evaluated in some way or another. Then we can use function f to compute the representation of the set of parse trees for sentence xl...xn. If we make use of memo-functions to avoid repeated computation of a function applied to the same arguments, we see that there are at most O(n 2) function evaluations.If we represent function values by re]erences to the set representations rather than by the sets themselves, the most complicated function evaluation consumes an additional amount of storage that is O(n): for j -i + 1 values of k we have to perform the construction of a pair of (copies of) two references, costing a unit amount of storage each. Therefore, the total amount of space needed for the representation of all parse trees is O(n3).The evaluation of the guards ct ---.\" xi+l...xj etc.amounts exactly to solving a collection of recognition problems. Note that a top-down parser is possible that merges the recognition and tree-building phases, by writingf(A,i,j) = {(A,f(ot, i,j))lA -., a A f(a,i,j) # ~}, for all A E VN, I(AB/i, i, j) = {(f(A, i, k),/(B/i, k, J))l i < k < j A f(A,i,k) # \u00a2 A f(B/i,k,j) # ~}, for all A, B E V, the other cases for f being left unchanged. Note the similarity between the recognizing part of this algorithm and the descent recognizer of section 2. Again, this parser is a cubic algorithm if we use memo-functions. Another approach is to apply a bottom-up recognizer first and derive from it a set P containing triples (/i, i,j) only if/3 ---'\" xi+l...xj, and at least those triples (/i, i,j) for which the guards/3 ---** xi+a ...xj are evaluated during the computation of f(S, O, n) (i.e., for each derivation S ---.\" xl...xkAxj+l...Zn \"-* Xl...XkOl/iXj+l...Xn \"-'** zl...xiflzj+l...xn \"~\" xl...xn, the triples (/i,i,j) and (A,k,j) should be in P). The simplest way to obtain such P from our recognizer is to assume an implementation of memo-functions that enables access to the memoized function results, after executing [q0](O). Then one has the disposal of the set {(/i, i,j)l[q](i ) was invocated and (A --* a./i, j) e [q](i)} Clearly, (/i,i,j) is only in this set if /i --+\" xi+l...x i. Note, however, that no pairs (A --~ ./i,j) are included in [q](i) (except if A = S'). 
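For concreteness, f can be written as a memo-function on triples (β, i, j), which is what yields the sharing: every triple is represented only once and merely referenced by its parents. In the sketch below the toy grammar, the predicate derives (a naive stand-in for membership of the set P discussed here) and all other identifiers are ours.

```python
# Sketch of the shared-forest function f, memoized on (beta, i, j).
from functools import lru_cache

GRAMMAR = {"S": [("i", "S"), ("i", "S", "e", "S"), ("x",)]}   # toy ambiguous grammar
TERMINALS = {"i", "e", "x"}
SENTENCE = ("i", "i", "x", "e", "x")                          # two parses (dangling else)

@lru_cache(maxsize=None)
def derives(beta, i, j):
    """beta ->* x_{i+1}...x_j?  Naive stand-in for a lookup in the set P."""
    if beta == ():
        return i == j
    if len(beta) > j - i:           # the toy grammar is epsilon-free
        return False
    head, rest = beta[0], beta[1:]
    if head in TERMINALS:
        return SENTENCE[i] == head and derives(rest, i + 1, j)
    return any(derives(alpha + rest, i, j) for alpha in GRAMMAR[head])

@lru_cache(maxsize=None)            # memoization gives the sharing of sub-forests
def f(beta, i, j):
    if beta == ():
        return frozenset({()}) if i == j else frozenset()
    if len(beta) == 1:
        (sym,) = beta
        if sym in TERMINALS:
            return frozenset({sym}) if j == i + 1 and SENTENCE[i] == sym else frozenset()
        return frozenset((sym, f(alpha, i, j))                # packed alternatives
                         for alpha in GRAMMAR[sym] if derives(alpha, i, j))
    head, rest = beta[:1], beta[1:]
    return frozenset((f(head, i, k), f(rest, k, j))           # binary split
                     for k in range(i, j + 1)
                     if derives(head, i, k) and derives(rest, k, j))

forest = f(("S",), 0, len(SENTENCE))
assert len(forest) == 2             # the two attachments of the 'else' part
```

In the scheme described here, derives would instead be a lookup in P as obtained from the memoized recognizer; and, as just noted, the recognizer as specified earlier records no pairs (A → .β, j) in [q](i) except when A = S'.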
We remedy th__is with a slight change of the specifications of [q] and [q], defining ~ q U ini(q): [q](i) = {(A --.* a.3, j)lA --~ c~./~ E ~A/i ---** xi+l...xj} [q](B,i) = {(a ---* a./i,j)lA ---* a./i E \"~A t3 ~* BT A 7 \"\"* Xi+l\"'Xj} A recursive implementation of the recognition functions now is [q](i) = {(I,Y)l(I,j) e [q](~+~, i + l[}.p {(l,j)l B --.., e ini(q) A (I,j) E [q](B,i)}U {(I, i)lI E ~ A final(l)} [q](B, i) = {(pop(I), J)l(l, J) E [goto(q, B)](i)lu {(I, j)l(J, k} e [goto(q, B)I~}A pop(J) E ini(q) A (I,j) e [q](lhs(J),k)} If we define, for this revised recognizer, P = {(3, i, j)l[q](i) was invocated and (A -...~, j) e [q](i)}u {(A, i, j)l[q](i) was invocated and (a --, .~,j) e [q](i)}u {(x~+~,i,i+ DI0 < i < n}, it contains all triples that are needed in f(S, O, n), and we may write the forest constructing function as f(A,i,j) = {(a,f(a,i,j))lA --, a^ (a,i,j) E P}, for all A E V~, f(AB/i, i, j) ----{(I(A, i, k), f(B/3, k, J))l (A, i, k) e P A (Bit, k, j) e P}, for all A, B e V, the other cases for f being left unchanged again. There exists a representation of P in quadratic space such that the presence or absence of an arbitrary triple can be decided upon in unit time. As a result, the time complexity of f(S, O, n) is cubic.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF9": { "text": "1. a mapping \u00a2~ : T~ + ~ V associating a grammar symbol to each number. 2.. a function succo : To --* 2 T+ mapping each number to its set of successors. The regular expression can start with tile symbols corresponding to the numbers in succo(O).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF10": { "text": "a set a,~ E 2 7`0 of numbers of symbols the regular expression can end with.Note that 0 is not associated to a symbol in V and is not a possible element of succ,,(k). It can be element of a,~ though, in which case there is an empty path through the regular expression.We define an item as a pair (A --, a,k), with the interpretation that number k is 'just before the dot'.The correspondence with dotted rules is the following. Let a = B1...Bt, then a is a simple regular expression characterized by ~ba(k) = Bk, succa(k) = {k + 1} if 0 < k < l, succo(l) = {~, and a,, = {I}. Item (A ---. a,0) corresponds to the initial item A ---* .a and (A ---* a, k) to the dotted-rule item with the dot just after Bk. The predicate final for the new kind of items is defined by final((A ---* a, k)) = (k E an) Given a set q of items, we define ini(q) = {(A --a,0)l(B ---* fl, l) \u2022 qA k \u2022 s.cc,(0 ^ \u00a2a(k) ~\" A~} The function pop becomes set-valued and the transition function can be defined in terms of it (remember: ~ = q U ini(q)): pop((A ~ a, l)) = {(a --. a, k)ll \u2022 succ.(k)}", "num": null, "uris": null, "type_str": "figure" }, "FIGREF11": { "text": "l*.(k) = B a I \u2022 ~A I \u2022 pop((a --* a, k))} A recursive ascent recognizer is now implemented by [q](i) = [q](~ci+l, i + 1)U {(I, j)lJe ini(q) ^ final(J)A (I,j) \u2022 [q](lhs(J), i)}U {(I, i)ll \u2022 q ^ final([)) [q](B,i) = {J,j)lJ \u2022 q ^ J \u2022 pop(I)^ (1, j) \u2022 [goto(q, B)](i)}U t(I,j)l(J, k) \u2022 [goto(q,B)](i) A K \u2022 ini(q)^ K \u2022 pop(J)^ (l,j) \u2022 [q](lhs(J),k)} The initial state q0 is {(S' ---* S, 0)}, and a sentence xl...x, is grammatical if ((S' --* S, 0), n) \u2022 [qo](O). The recognizer is deterministic if 1. 
there is no shift-reduce or reduce-reduce conflict,", "num": null, "uris": null, "type_str": "figure" }, "TABREF2": { "num": null, "html": null, "text": "There are O(n) different invocations of parser functions. The functions call at most O(n) other functions, which all result in a set with O(n) elements (note that there exist only O(n) pairs (I, j) with I ∈ IG and i ≤ j ≤ n). Merging these sets into one set with no duplicates can be accomplished in O(n^2) time on a random access machine. Hence, the total time-complexity is O(n^3). The space needed for storing function results is O(n) per invocation, i.e. O(n^2)", "type_str": "table", "content": "" } } } }