{ "paper_id": "P79-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:11:42.367777Z" }, "title": "GENF~ALIZED AUGMENTED TRANSITION NETWORK GRAMMARS FOR GENERATION FROM SD\u00a3%NTIC NETWORKS", "authors": [ { "first": "Stuart", "middle": [ "C" ], "last": "Shapiro", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "P79-1007", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Augmented transition network (ATN) grammars have, since their development by Woods [ 7; ~, become the most used method of describing grammars for natural language understanding end question answering systems. The advantages of the ATN notation have been su,naarized as \"I) perspicuity, 2) generative power, 3) efficiency of representation, 4) the ability to capture linguistic regularities and generalities, and 5) efficiency of operation., [ I ,p.191 ] . The usual method of utilizing an ATN grammar in a natural language system is to provide an interpreter which can take any ATH graam~ar, a lexicon, and a sentence as data and produce either a parse of a sentence or a message that the sentence does not conform to the granunar. A compiler has been written [2;3 ] which takes an ATH grammar as input and produces a specialized parser for that grammar, but in this paper we will presume that an Interpreter is being used.", "cite_spans": [ { "start": 441, "end": 453, "text": "[ I ,p.191 ]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "I. YNTRODUCTYON", "sec_num": null }, { "text": "A particular ATN grammar may be viewed as a program written in the ATH language. The program takes a sentence, a linear sequence of symbols, as input, and produces as output a parse which is usually a parse tree (often represented by a LISP S-expression) or some \"k~ewledge reprssentatioc\" such as a semantic network. The operation of the program depends on the interpreter being used and the particular program (grannar), as well as on the input (sentence) being processed. Several methods have been described for using ATN grammars for sentence generation. One method [1,p.235] is to replace the usual interpreter by a generation interpreter which con take an ATN grammar written for parsing and use it to produce random sentences conforming to the grammar. This is useful for testing and debugging the granmmLr. Another method [5 ] uses a modified interpreter to generate sentences from a semantic network. In this method, an ATN register is initialized to hold a node of the semantic network and the input to the grammar is a linear string of symbols providing a pattern of the sentence to be generated. Another method [4 ] also generates sentences from a semantic network.", "cite_spans": [ { "start": 570, "end": 579, "text": "[1,p.235]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "I. YNTRODUCTYON", "sec_num": null }, { "text": "In this method, input to the granmmr is the semantic network itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I. YNTRODUCTYON", "sec_num": null }, { "text": "That is, instead of successive words of a surface sentence or successive symbols of a linear sentence pattern being scanned as the ATM grammar is traversed by the interpreter, different nodes of the ssmantic network are scanned. 
The grammar controls the syntax of the generated sentence based on the structural properties of the semantic network and the information contained therein.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1. INTRODUCTION", "sec_num": null }, { "text": "It was intended that a single ATN interpreter could be used both for standard ATN parsing and for generation based on this last method. However, a special interpreter was written for generation grammars of the type described in [4], and, indeed, the definition of the ATN formalism given in that paper, though based on the standard ATN formalism, was inconsistent enough with the standard notation that a single interpreter could not be used. This paper reports the results of work carried out to remove those inconsistencies.", "cite_spans": [ { "start": 228, "end": 232, "text": "[4]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "1. INTRODUCTION", "sec_num": null }, { "text": "A generalization of the ATN formalism has been derived which allows a single interpreter to be used for both parsing and generating grammars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1. INTRODUCTION", "sec_num": null }, { "text": "In fact, parsing and generating grammars can be sub-networks of each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1. INTRODUCTION", "sec_num": null }, { "text": "For example, an ATN grammar can be constructed so that the \"parse\" of a natural language question is the natural language statement which answers it, interaction with representation and inference routines being done on arcs along the way. The new formalism is a strict generalization in the sense that it interprets all old ATN grammars as having the same semantics (carrying out the same actions and producing the same parses) as before. (This material is based on work supported in part by the National Science Foundation under Grant #MCS78-02274.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1. INTRODUCTION", "sec_num": null }, { "text": "In our view, each node of a semantic network represents a concept. The goal of the generator is, given a node, to express the concept represented by that node in a natural language surface string. The syntactic category of the surface string is determined by the grammar, which can include tests of the structure of the semantic network connected to the node. In order to express the concept, it is often necessary to include in the string substrings which express the concepts represented by adjacent nodes. For example, if a node represents a fact to be expressed as a statement, part of the statement may be a noun phrase expressing the concept represented by the node connected to the original node by an AGENT case arc. This can be done by a recursive call to a section of the grammar in charge of building noun phrases. This section will be passed the adjacent node. When it finishes, the original statement section of the grammar will continue adding additional substrings to the growing statement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GENERATION FROM A SEMANTIC NETWORK--BRIEF OVERVIEW", "sec_num": "2." },
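{ "text": "As a schematic illustration (ours, not drawn from the paper's figures, with hypothetical node names): suppose a node M1 has an AGENT arc to a node for JOHN, a VERB arc to a node for the sememe underlying \"see\", and an OBJECT arc to a node for MARY. To express M1 as a statement, the statement section of the grammar would recursively call the noun phrase section on the AGENT node (yielding \"John\"), add the appropriately inflected verb (\"saw\"), and recursively call the noun phrase section on the OBJECT node (yielding \"Mary\"), producing the surface string \"John saw Mary\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GENERATION FROM A SEMANTIC NETWORK--BRIEF OVERVIEW", "sec_num": "2." },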
In ATN grammars written for parsing, a recursive push does not change the input symbol being examined, but when the original level continues, parsing continues at a different symbol. In the generation approach we use, a recursive push often involves a change in the semantic node being examined, and the original level continues with the original node. This difference is a major motivation of some of the generalizations to the ATN formalism discussed below. The other major motivation is that, in parsing a string of symbols, the \"next\" symbol is well defined, but in \"parsing\" a network, \"next\" must be explicitly specified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GENERATION FROM A SEMANTIC NETWORK--BRIEF OVERVIEW", "sec_num": "2." }, { "text": "The following sub-sections show the generalized syntax of the ATN formalism, and assume a knowledge of the standard formalism ([1] is an excellent introduction). Syntactic structures already familiar to ATN users, but not discussed here, remain unchanged. Parentheses and terms in upper case letters are terminal symbols. Lower case terms in angle brackets are non-terminals. Terms enclosed in square brackets are optional. Terms followed by \"*\" may occur zero or more times in succession. To avoid confusion, in the remainder of this section we will underline the name of the * register.", "cite_spans": [ { "start": 126, "end": 131, "text": "([1]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "THE GENERALIZATION", "sec_num": "3." }, { "text": "Successful traversal of an ATN arc might or might not consume an input symbol. When parsing, such consumption normally occurs; when generating it normally does not, but if it does, the next symbol (semantic node) must be specified. To allow for these choices, we have returned to the technique of [6] of having two terminal actions, TO and JUMP, and have added an optional second argument to TO. The syntax is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TERMINAL ACTIONS", "sec_num": "3.1" }, { "text": "(TO <state> [<form>]) (JUMP <state>)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TERMINAL ACTIONS", "sec_num": "3.1" }, { "text": "Both cause the parser to enter the given state <state>. JUMP never consumes the input symbol; TO always does. If the <test> of the arc is successful, the <act>s are performed and transfer is made to <state>. The input symbol is consumed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARCS", "sec_num": "3.2" }, { "text": "The next symbol to be scanned is the value of <form> if it is present or the next symbol in the input buffer if <form> is missing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARCS", "sec_num": "3.2" },
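{ "text": "For instance (a schematic of ours, not taken from the grammar of Figure 4, with hypothetical state and register names): an arc ending in the terminal action (JUMP NEXTSTATE) leaves the current semantic node on top of the input buffer, to be examined again by the arcs at NEXTSTATE, while one ending in (TO NEXTSTATE (GETR NEXTNODE)) consumes the current node and makes the value of the register NEXTNODE the next symbol to be scanned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARCS", "sec_num": "3.2" },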
{ "text": "The PUSH arc makes two assumptions: 1) the first symbol to be scanned in the subnetwork is the current contents of the * register; 2) the current input symbol will be consumed by the subnetwork, so the contents of * can be replaced by the value returned by the subnetwork. We need an arc that causes a recursive call to a subnetwork, but makes neither of these two assumptions, so we introduce the CALL arc:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARCS", "sec_num": "3.2" }, { "text": "(CALL <state> <form> <test> <preact>* <register> <act>*), where <form> is the first symbol to be scanned in the subnetwork. If the <test> is successful, all the <preact>s are performed and a recursive call is made to the state <state>, where the next symbol to be scanned is the value of <form>. If the subnetwork succeeds, its value is placed into <register> and the <act>s are performed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARCS", "sec_num": "3.2" }, { "text": "Just as the normal TO terminal action is the generalized TO terminal action with a default form, the PUSH arc (which we retain) is the CALL arc with the following defaults: <form> is *; <register> is *.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARCS", "sec_num": "3.2" }, { "text": "The only form which must be added is (GETA <arc> [<node>]) = the node at the end of an <arc> arc from the specified node, or a list of such nodes if there are more than one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARCS", "sec_num": "3.2" },
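{ "text": "A schematic CALL arc of this kind (our own illustration in the spirit of the grammar of Figure 4, not copied from it) is: (CALL NP (GETA AGENT) (GETA AGENT) (SENDR DONE) SUBJ (ADDR STRING SUBJ) (TO SVB)). If the current node has an AGENT arc, the NP subnetwork is entered with the AGENT node as the first symbol to be scanned, the DONE register is sent down, the noun phrase returned by the subnetwork is placed into the SUBJ register and appended to STRING, and the terminal action TO then consumes the current node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARCS", "sec_num": "3.2" },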
M INPUT Bb~ee~", "sec_num": null }, { "text": "Y~ the value is a list of olemnto) we push each elmwnt individual~ onto the input buffer. ~ makes it particularly easy to loop thz~ a set of nodes, each of which uili contribute the sane syntactic tom to the growing santenee (nob as a st~A~g o\u00a3 adJectlves).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "h. M INPUT Bb~ee~", "sec_num": null }, { "text": "on an arc (except for POP), i.e. during evaluation OF the test and the acts, the onntents OF ~ and the top elanent of the input buffer are the same. This requires spaeial pz~eessing for V~R, P~H, and CALL ares. Atter setting % a VIR are pushes the contents of ~ onto tbe input buffer. When a PUSH are resuaes, and the lower level has sueceestu~ returned a value, the value is placed into * and also pushed onto the input buffer. ~an a CALL resumes, and the Immr level has 8uceassfUlly returned a value, the value is placed into the spueified register, and the centers of ~ is pushed onto the input butter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "h. M INPUT Bb~ee~", "sec_num": null }, { "text": "The s1~eitied register might or might not be e. In either case the contents of. e and the top OF the input buffer a~ the sane.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "h. M INPUT Bb~ee~", "sec_num": null }, { "text": "There are two possible terminal acts, JUMP and TO. JUMP does not affect the input buffer, so the contents OF e will be same on the successor ares (except for POP and VIR) as at the end OF the curreut arc. TO pops the input buffer, but if provided with an optional tom, also pushes the value of ~Jmt form on~o the input butler.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "h. M INPUT Bb~ee~", "sec_num": null }, { "text": "POPping from ~e top level is one7 legal if the input buffer is empty. POPPint fz~m any level should that a constituent has been accounted for. Accounting for a constituent should en~l removing it from the in1~t buffer. From this we conclude that ever~ path within a level fm an initial state to a POP ere oon1'~Lin at least one TO transfer, and in most cases, it is proper to trausfer TO ra~her than to JUMP to a state that hss a POP are emanat~ from it. TO will be terulnal ast for most V~R and PUSH a~s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "h. M INPUT Bb~ee~", "sec_num": null }, { "text": "In an~ ATN interpreter which abides by this discussion, advancement of the input is a function of the terminal action alone in the sense that at any state JUMPed to, the top of the input buffer will be the last value of *, and at any state Jumped TO it will not be.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "h. M INPUT Bb~ee~", "sec_num": null }, { "text": "Parsing and generating require a lexicon --a file of words giving syntactic categories, features and inflectional forms ~or irregularly inflected words. Parsing and generating require different information, yet we wish to avoid duplication as much as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "h. M INPUT Bb~ee~", "sec_num": null }, { "text": "During parsing, morphological analysis is performed. The analyzer is given an inflected form, must segment it, find the stem in the lexicon and modify the lexical entry of the stem according to its analysis of the original form. Irregularly inflected forms must have their own entries in the lexicon. 
{ "text": "Parsing and generating require a lexicon -- a file of words giving syntactic categories, features, and inflectional forms for irregularly inflected words. Parsing and generating require different information, yet we wish to avoid duplication as much as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE LEXICON", "sec_num": "5." }, { "text": "During parsing, morphological analysis is performed. The analyzer is given an inflected form, must segment it, find the stem in the lexicon and modify the lexical entry of the stem according to its analysis of the original form. Irregularly inflected forms must have their own entries in the lexicon. An entry in the lexicon may be lexically ambiguous, so each entry must be associated with a list of one or more lexical feature lists. Each such list, whether stored in the lexicon or constructed by the morphological analyzer, must include a syntactic category and a stem, which serves as a link to the semantic network, as well as other features such as transitivity for a verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE LEXICON", "sec_num": "5." }, { "text": "In the semantic network, some nodes are associated with lexical entries. During generation, these entries, along with other information from the semantic network, are used by a morphological synthesizer to construct an inflected word. We assume that all such entries are unambiguous stems, and so contain only a single lexical feature list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE LEXICON", "sec_num": "5." }, { "text": "This feature list must contain any irregularly inflected forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE LEXICON", "sec_num": "5." }, { "text": "In summary, a single lexicon may be used for both parsing and generating under the following conditions. An unambiguous stem can be used for both parsing and generating if its one lexical feature list contains features required for both operations. An ambiguous lexical entry will only be used during parsing. Each of its lexical feature lists must contain a unique but arbitrary \"stem\" for connection to the semantic network and for holding the lexical information required for generation. Every lexical feature list used for generating must contain the proper natural language spelling of its stem as well as any irregularly inflected forms. Lexical entries for irregularly inflected forms will only be used during parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE LEXICON", "sec_num": "5." }, { "text": "For the purposes of this paper, it should be irrelevant whether the \"stems\" connected to the semantic network are actual surface words like \"give\", deeper sememes such as that underlying both \"give\" and \"take\", or primitives such as \"ATRANS\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE LEXICON", "sec_num": "5." },
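{ "text": "A hypothetical illustration (ours; the concrete feature-list format is implementation dependent and not taken from the paper): the ambiguous entry \"bank\" might have two lexical feature lists, both of syntactic category N, one with the unique but arbitrary stem BANK1 for the river-edge sense and one with the stem BANK2 for the financial-institution sense; the unambiguous entries BANK1 and BANK2 would each hold the surface spelling \"bank\" and any other features needed for generation, so the same lexicon serves both the parser and the generator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE LEXICON", "sec_num": "5." },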
{ "text": "Figure 1 shows an example interaction using the SNePS Semantic Network Processing System [5] in which I/O is controlled by a parsing-generating ATN grammar. Lines begun by \"**\" are user's input, which are all calls to the function named \":\". This function passes its argument list as the input buffer for a parse to begin in state S. The form popped by the top level ATN network is then printed, followed by the CPU time in milliseconds. (The system is partly compiled, partly interpreted LISP on a CYBER 173. The ATN grammar is interpreted.) Figure 2 shows the grammar in abbreviated graphical form, and Figure 4 gives the details of each arc. The parsing network, beginning at state SP, is included for completeness, but the reader unfamiliar with SNePSUL, the SNePS User Language [5], is not expected to understand its details.", "cite_spans": [ { "start": 336, "end": 339, "text": "[5]", "ref_id": null } ], "ref_spans": [ { "start": 247, "end": 255, "text": "Figure 1", "ref_id": null }, { "start": 791, "end": 799, "text": "Figure 2", "ref_id": null }, { "start": 854, "end": 862, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "EXAMPLE", "sec_num": "6." }, { "text": "The first arc in the network is a PUSH to the parsing network. This network determines whether the input is a statement (type D) or a question (type Q). If a statement, the network builds a SNePS network representing the information contained in the sentence and pops a semantic node representing the fact contained in the main clause. If the input is a question, the parsing network calls the SNePS deduction routines (DEDUCE) to find the answer, and pops the semantic node representing that (no actual deduction is required in this example). Figure 3 shows the complete SNePS network built during this example. Nodes M74-M85 were built by the first statement, nodes M89 and M90 by the second.", "cite_spans": [], "ref_spans": [ { "start": 544, "end": 552, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "EXAMPLE", "sec_num": "6." }, { "text": "When the state RESPOND is reached, the input buffer contains the SNePS node popped by the parsing network. The generating network then builds a sentence. The first two sentences were generated from node M85 before M89 and M90 were built.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXAMPLE", "sec_num": "6." }, { "text": "The third sentence was generated from M90, and the fourth from M85 again. Since the voice (VC) register is LIFTRed from the parsing network, the generated sentence has the same voice as the input sentence (see Figure 1).", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 218, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "EXAMPLE", "sec_num": "6." }, { "text": "Of particular note is the sub-network at state PRED which analyzes the proper tense for the generated sentence. For brevity, only simple tenses are included here, but the more complicated tenses presented in [4] can be handled in a similar manner. Also of interest is the subnetwork at state ADJS which generates a string of adjectives which are not already scheduled to be in the sentence. (Compare the third and fourth generated sentences of Figure 1.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXAMPLE", "sec_num": "6." }, { "text": "A generalization of the ATN formalism has been presented which allows grammars to be written for generating surface sentences from semantic networks. The generalization has involved: adding an optional argument to the TO terminal act; reintroducing the JUMP terminal act; introducing a TO arc similar to the JUMP arc; introducing a CALL arc which is a generalization of the PUSH arc; introducing a GETA form; clarifying the management of the input buffer. The benefits of these few changes are that parsing and generating grammars may be written in the same familiar notation, may be interpreted (or compiled) by a single program, and may use each other in the same parser-generator network grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSIONS", "sec_num": "7." }, { "text": "[1] Bates, Madeleine. The theory and practice of augmented transition network grammars. In L. Bolc, ed., Natural Language Communication with Computers, Springer-Verlag, Berlin, 1978. [2] Burton, R.R. Semantic grammar: an engineering technique for constructing natural language understanding systems. BBN Report No. 3453, Bolt Beranek and Newman, Inc., Cambridge, MA, December 1976. [3] Burton, Richard R. and Woods, W. A. A compiling system for augmented transition networks.
Preprints of COLING 76: The International Conference on Computational Linguistics, Ottawa, June 1976.", "cite_spans": [ { "start": 169, "end": 176, "text": "Berlin,", "ref_id": null }, { "start": 177, "end": 182, "text": "1978.", "ref_id": null }, { "start": 300, "end": 382, "text": "BBN Report No. 3453, Bolt Beranek and Newman, Inc., Cambridge, MA, December 1976.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "REFERENCES", "sec_num": null }, { "text": "[4] Shapiro, Stuart C. Generation as parsing from a network into a linear string. AJCL Microfiche 33 (1975), 45-62.", "cite_spans": [ { "start": 82, "end": 107, "text": "AJCL Microfiche 33 (1975)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "REFERENCES", "sec_num": null }, { "text": "[5] Shapiro, Stuart C. The SNePS semantic network processing system. In N.V. Findler, ed., Associative Networks: Representation and Use of Knowledge by Computers, Academic Press, New York, 1979, 179-203.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REFERENCES", "sec_num": null }, { "text": "[6] Simmons, R. and Slocum, J. Generating English discourse from semantic networks. CACM 15, 10 (October 1972), 891-905. [7] Woods, W.A. Transition network grammars for natural language analysis. CACM 13, 10 (October 1970), 591-606. [8] Woods, W.A. An experimental parsing system for transition network grammars. In R. Rustin, ed., Natural Language Processing, Algorithmics Press, New York, 1973.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REFERENCES", "sec_num": null } ], "back_matter": [ { "text": "(JUMP G (EQ (GETR TYPE) 'Q))) (G (JUMP GS (AND (GETA OBJECT) (OVERLAP (GETR VC) 'PASS)) (SETR SUBJ (GETA OBJECT))) (JUMP GS (AND (GETA AGENT) (DISJOINT (GETR VC) 'PASS)) (SETR SUBJ (GETA AGENT)) (SETR VC 'ACT)) (JUMP GS (GETA WHICH) (SETR SUBJ (GETA WHICH)) (SETR VC 'ACT))) (GS (CALL NUMBR SUBJ T NUMBR (szm m~z .) (JUMP GS1))) (GS1 (CALL NP SUBJ T (SENDR DONE) (SENDR NUMBR) REG (ADDR STRING REG) (JUMP SVB))) (SVB (CALL PRED * T (SENDR NUMBR) (SENDR VC) (SENDR VB (OR (GETA LEX (GETA VERB)) 'BE)) REG (ADDR STRING REG) (JUMP SUROBJ))) (SUROBJ (CALL NP (GETA AGENT) (AND (GETA AGENT) (OVERLAP VC 'PASS)) (SENDR DONE) * (ADDR STRING 'BY *) (TO END)) (CALL NP (GETA OBJECT) (AND (GETA OBJECT) (OVERLAP VC 'ACT)) (SENDR DONE) * (ADDR STRING *) (TO END)) (CALL NP (GETA ADJ) (GETA ADJ) * (ADDR STRING *) (TO END))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(S (PUSH SP T (JUMP RESPOND))) (RESPOND (JUMP G (EQ (GETR TYPE) 'D) (SETR STRING '(I UNDERSTAND THAT)))", "sec_num": null } ], "bib_entries": {}, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Figure 1 (sample dialogue, partially recovered): (A DOG KISSED YOUNG LUCY) (I UNDERSTAND THAT A DOG KISSED YOUNG LUCY) (WHO KISSED LUCY) (A DOG KISSED YOUNG LUCY). Figure 2. A Parsing-Generating Grammar; terminal acts are indicated by \"J\" or \"TO\". Figure 3. Semantic Network Built by Sentences of Figure 1.", "uris": null, "num": null } } } }