{ "paper_id": "T75-2001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:43:02.961811Z" }, "title": "AUGMENTED PHRASE STRUCTURE GRAMMARS", "authors": [ { "first": "George", "middle": [ "E" ], "last": "Heidorn", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Thomas J. Watson Research Center Yorktown Heights", "location": { "region": "NY" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Augmented phrase structure grammars consist of phrase structure rules ~with embedded conditions and structure-building actions written in a specially developed language. An attribute-value, record-oriented information structure is an integral part of the theory. I.", "pdf_parse": { "paper_id": "T75-2001", "_pdf_hash": "", "abstract": [ { "text": "Augmented phrase structure grammars consist of phrase structure rules ~with embedded conditions and structure-building actions written in a specially developed language. An attribute-value, record-oriented information structure is an integral part of the theory. I.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The values of the SUPerset and PS (part-of-speech) attributes are really pointers to the records \"ACTIVITY\" and \"VERB\" and could be drawn as directed lines to those other records if they were included in the diagram.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\"SERVIC\" given here could be considered to be a dictionary entry stating that the VERB stem SERVIC can take endings E, ES, ING and ED, the VERB SERVIC is TRANSitive, and the concept SERVIC is an ACTIVITY.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The named record", "sec_num": null }, { "text": "(When a named record name appears without the explicit mention of an attribute name, the SUPerset attribute is assumed.) The XYZ attribute was included just to illustrate a numerically-valued property. Of course, the true meaning of any of this information depends completely upon the way it is used by the APSG rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The named record", "sec_num": null }, { "text": "During decoding and encoding, records called \"segment records\" are employed to hold information about segments of text. For example, the segment \"are servicing\" could be described by the record:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The named record", "sec_num": null }, { "text": "I SUP \"SERVIC\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The named record", "sec_num": null }, { "text": "PRES,P3,PLUR,PROG which could be interpreted as saying that \"are servicing\" is the present, third person, plural, progressive form of \"service\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The named record", "sec_num": null }, { "text": "Similarly, the sentence \"The big men are servicing a truck.\" could be described by: where the indicators DEF and INDEF mean definite and indefinite, respectively. The sentence \"A truck is being serviced by the big men.\" could be described by exactly the same record structure but with the addition of a PASSIVE indicator in the record on the left. During a dialogue some records that begin as segment records may be kept to become part of longer term memory to represent the entities (in the broadest sense of the term) that are being discussed. 
Segment records then might have pointers into this longer term memory to show referents. So, for example, the sentence \"They are servicing a truck.\" might be described by the same record structure shown above if the referent of \"they\" was known to be a certain group of men who are big.", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 138, "text": "where the indicators DEF and INDEF mean definite", "ref_id": null }, { "start": 354, "end": 676, "text": "During a dialogue some records that begin as segment records may be kept to become part of longer term memory to represent the entities (in the broadest sense of the term) that are being discussed. Segment records then might have pointers into this longer term memory to show referents.", "ref_id": null } ], "eq_spans": [], "section": "The named record", "sec_num": null }, { "text": "[SUP \"SERVIC\"; AGENT -> (SUP \"MAN\"; SIZE \"BIG\"); GOAL -> (SUP \"TRUCK\"; INDEF,SING); PRES,PROG]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The named record", "sec_num": null }, { "text": "Decoding is the process by which record structures of the sort just shown are constructed from strings of text. The manner in which these records are to be built is specified by APSG decoding rules. A decoding rule consists of a list of one or more \"segment types\" (meta-symbols) on the left of an arrow to indicate which types of contiguous segments must be present in order for a segment of the type on the right of the arrow to be formed. (as the named record for \"SERVIC\" defined in the previous section would), and this segment is followed immediately by the characters \"i\", \"n\" and \"g\", then create a VERB segment record with the same SUP as the VERBSTEM and with a PRESPART indicator, to describe the entire segment (\"servicing\" in this case). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "PERson indicators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "Considering the subject to be part of the verb phrase in this manner can simplify the handling of some constructions involving inverted word order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "If the string being decoded were \"the big men are servicing a truck.\", a rule similar to the last one shown above could be used to pick up the direct object.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "Then the rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "./VERBPH(SUBJECT,OBJECT|-TRANS*|PASSIVE). --> SENT(~VERBPH)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "could be applied, which says if a VERBPH extending between two periods has a SUBJECT attribute and also either has an OBJECT attribute or does not need one because there is no TRANSitive indicator in the named record pointed to by the SUP (i.e. the verb is intransitive) or because there is a PASSIVE indicator, then call it a SENTence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. 
ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "To get the record structure describing this string into the form shown near the end of the previous section, one more rule would be needed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "SENT($'ACTION',~PASSIVE,SUBJECT) --> SENT(AGENT=SUBJECT,GOAL=OBJECT, -SUBJECT,-OBJECT)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "This says that for a non-PASSIVE ACTION SENTence that still has a SUBJECT attribute, set the AGENT and GOAL attributes to the values of the SUBJECT and OBJECT attributes, respectively, and then delete the SUBJECT and OBJECT attributes from the record. By deterministically I mean that once grammatical structure is built, it cannot be discarded in the normal course of the parsing process, i.e. that no \"backtracking\" can take place unless the sentence is consciously perceived as being a \"garden path\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "This notion of grammar puts knowledge about controlling the parsing process on an equal footing with knowledge about its possible outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "To test this theory of grammar, a parser has been implemented that provides a language for writing grammars of this sort, and a grammar for English is currently being written that attempts to capture the wait-and-see diagnostics needed to parse English within the constraints of the theory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "The control structure of the parser strongly reflects the assumptions the theory makes about the structure of language, and the discussion below will use the structure of the parser as an example of the implications of this theory for the parsing process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null }, { "text": "These packets often reflect rather large scale grammatical expectations; for example, the following are some packets", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. ANALYSIS OF TEXT (DECODING)", "sec_num": null } ], "back_matter": [], "bib_entries": {}, "ref_entries": { "TABREF2": { "num": null, "type_str": "table", "content": "
Then the rule

VERB --> VERBPH(~VERB)

would create a VERBPHrase segment record which is a copy (4) of the VERB segment record just shown.

If the string \"are\" appearing in the input were described by the VERB segment record

   SUP \"BE\"
   PRES,P3,PLUR

then the rule

VERB('BE') VERBPH(PRESPART) --> VERBPH(PROG,FORM=FORM(VERB))

would produce the new VERBPH segment record

   SUP \"SERVIC\"
   PRES,P3,PLUR,PROG

from the two just shown, to describe the string \"are servicing\". This rule says that if a segment of the string being decoded is described as a VERB with a SUP of \"BE\", and it is followed by a segment described as a VERBPH with a PRESPART indicator, then create a new VERBPH segment record which is a copy (automatically, because the segment type is the same) of the VERBPH segment record referred to on the left of the rule, but which has a PROGressive indicator and the FORM information from the VERB. FORM would have previously been defined as the name of a group of indicators (i.e. those having to do with tense, person and number). Similar rules can be used to recognize passives, perfects and modal constructions.
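
To make the rule mechanics concrete, the following is a minimal sketch in modern Python (not part of the original 1975 NLP program) of how a rule such as VERB('BE') VERBPH(PRESPART) --> VERBPH(PROG,FORM=FORM(VERB)) might be applied to segment records held as attribute-value dictionaries. The dictionary layout, the FORM tuple and the function name apply_be_prespart_rule are illustrative assumptions, not APSG notation:

FORM = ('PRES', 'PAST', 'P1', 'P2', 'P3', 'SING', 'PLUR')   # tense/person/number indicator group

def apply_be_prespart_rule(verb, verbph):
    # Condition: a VERB whose SUP is 'BE', followed by a VERBPH with a PRESPART indicator.
    if verb.get('SUP') != 'BE' or not verbph.get('PRESPART'):
        return None
    new = dict(verbph)            # copy of the old VERBPH (same segment type)
    new.pop('PRESPART', None)     # the resulting record shown above has only PRES,P3,PLUR,PROG
    new['PROG'] = 1               # add the PROGressive indicator
    for ind in FORM:              # copy the FORM indicators over from the VERB
        if verb.get(ind):
            new[ind] = 1
    return new

verb = {'SUP': 'BE', 'PRES': 1, 'P3': 1, 'PLUR': 1}      # describes 'are'
verbph = {'SUP': 'SERVIC', 'PRESPART': 1}                # describes 'servicing'
print(apply_be_prespart_rule(verb, verbph))
# {'SUP': 'SERVIC', 'PROG': 1, 'PRES': 1, 'P3': 1, 'PLUR': 1} -- i.e. 'are servicing'

In the actual rule language the copying, the condition tests and the attribute settings are all implied by the notation rather than spelled out as above.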
Continuing with the example, if the string \"the big men\" were decoded to the NOUNPH segment record

   SUP \"MAN\"
   SIZE \"BIG\"
   DEF,PLUR

the rule would produce the new VERBPH segment record (the one on the left in this diagram)

   SUP \"SERVIC\"                  SUP \"MAN\"
   SUBJECT -------------------->  SIZE \"BIG\"
   PRES,PROG                      DEF,PLUR

from the previous VERBPH record, to describe the string \"the big men are servicing\". It is important to realize that the record on the left in the above diagram is a segment record that \"covers\" the entire string and that the record shown on the right (which is the same one from the previous diagram) just serves as the value of its SUBJECT attribute. The rule above says that if a NOUNPH is followed by a VERBPH, and the NUMBer indicators of the VERBPH are the same as the NUMBer indicators of the NOUNPH, and the VERBPH does not already have a SUBJECT attribute, then create a new VERBPH segment record which is a copy of the old one, give it a SUBJECT attribute pointing to the NOUNPH record, and delete the NUMBer and
", "text": "", "html": null }, "TABREF3": { "num": null, "type_str": "table", "content": "
The notation $'ACTION' is read \"in the set 'ACTION'\" and means that the named record \"ACTION\" must appear somewhere in the SUPerset chain of the current record. In the previous section the named record \"SERVIC\" was defined to have a SUP of \"ACTIVITY\". If the named record \"ACTIVITY\" were similarly defined to have a SUP of \"ACTION\", the segment record under discussion here would satisfy the condition $'ACTION'.

From the above examples it can be seen that the condition specifications take the form of logical expressions involving the values of attributes. Each element in a condition specification is basically of the form value.relation.value, but this is not obvious because there are several notational shortcuts available in the rule language. For example, 'BE' is short for SUP.EQ.'BE', PRESPART is short for PRESPART.NE.0, and ~SUBJECT is short for SUBJECT.EQ.0. The elements are combined by and's (commas) and or's (vertical bars). In most cases the attribute whose value is being tested is to be found in the segment record associated with the constituent, but that is not always the case. For example, ING* tests the value of the ING indicator in the named record pointed to by the SUP of the segment record, and could be written ING(SUP) or ING(SUP).NE.0. Another example is NUMB(NOUNPH), which was used to refer to the value of the NUMB indicators in the NOUNPH segment in one of the rules above.

From the examples it can also be seen that the creation specifications take the form of short procedures consisting of statements for setting the values of attributes. Each element in a creation specification is basically of the form attribute=value (where \"=\" means replacement), but again this is not obvious because of the notational shortcuts used. For example, SUP(VERBSTEM) is short for SUP=SUP(VERBSTEM), PRESPART is short for PRESPART=1 (note that this form has a different meaning when it is used in a condition specification), and -SUBJECT is short for SUBJECT=0. In all of the examples here, the attribute whose value is set would be in the segment record being built, but that need not always be the case. If, for example, there were some reason to want to give the AGENT record of an action an ABC attribute equal to one more than the XYZ attribute of the concept record associated with that action (i.e. the named record pointed to by its SUP), the following could be included in the last rule shown:

ABC(AGENT)=XYZ(SUP)+1

which can be read as \"set the ABC attribute of the AGENT of this record to the value of the XYZ attribute of the SUP of this record plus 1.\" Although in the example rules given here the conditions are primarily syntactic, semantic constraints can be stated in exactly the same manner. Much of the record building shown here can be considered semantic (and somewhat case oriented). The important point, however, is that the kind of condition testing and structure building done is at the discretion of the person who writes the rules. Complete specifications for the APSG rule language are given in Reference 3.
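
The shortcuts just described can be made concrete with a small illustrative sketch in modern Python (the dictionaries and the helper name in_set are assumptions for illustration, not the APSG rule language itself); it spells out the $'ACTION' SUPerset-chain test and a few of the condition and creation shortcuts as explicit operations on attribute-value records:

NAMED = {'SERVIC': {'SUP': 'ACTIVITY', 'ING': 1, 'TRANS': 1},
         'ACTIVITY': {'SUP': 'ACTION'},
         'ACTION': {}}

def in_set(record, name):
    # $'ACTION': the named record ACTION must appear somewhere in the
    # SUPerset chain of the current record.
    sup = record.get('SUP')
    while sup is not None:
        if sup == name:
            return True
        sup = NAMED.get(sup, {}).get('SUP')
    return False

segment = {'SUP': 'SERVIC', 'PRES': 1, 'PROG': 1}

# condition shortcuts, written out in full
print(in_set(segment, 'ACTION'))                  # $'ACTION' -- True: SERVIC -> ACTIVITY -> ACTION
print(segment.get('SUP') == 'BE')                 # 'BE'      is short for SUP.EQ.'BE'
print(segment.get('PRESPART', 0) != 0)            # PRESPART  is short for PRESPART.NE.0
print(segment.get('SUBJECT', 0) == 0)             # ~SUBJECT  is short for SUBJECT.EQ.0
print(NAMED[segment['SUP']].get('ING', 0) != 0)   # ING*      tests ING in the named record reached via SUP

# creation shortcuts build or modify a record
new = dict(segment)
new['PRESPART'] = 1        # PRESPART  is short for PRESPART=1
new['SUBJECT'] = 0         # -SUBJECT  is short for SUBJECT=0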
The decoding algorithm used with APSG is basically that of a bottom-up, left-to-right, parallel-processing, syntax-directed compiler. An important and novel feature of this algorithm is something called a \"rule instance record\", which primarily maintains information about the potential applicability of a rule. A rule instance record is initially created for a rule whenever a segment which can be the first constituent of that rule becomes available. (A terminal segment becomes available by being obtained from the input stream, and a non-terminal segment becomes available whenever a rule is applied.) Then the rule instance record \"waits\" for a segment which can be the next constituent of the associated rule to become available. When such a segment becomes available, the rule instance record is \"extended\". When a rule instance record becomes complete (i.e. all of its constituents are available), the associated rule is applied (i.e. the segment record specified on the right is built and made available). There may be many rule instance records in existence for a particular rule at any point in time.

Because of the parallel processing nature of the decoding algorithm, when a segment record is created to describe a portion of the input text it does not result in the destruction of other records describing the same portion or parts of it. Local ambiguities caused by multiple word senses, idioms and the like may result in more than one segment record being created to describe a particular portion of the text, but usually only one of them is able to combine with its neighbors to become part of the analysis for an entire sentence.
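
A rough sketch of this bookkeeping in modern Python may help; it tracks only segment types and text spans, ignores all conditions and record building, and the rule set, the data layout and the function name decode are illustrative assumptions rather than the algorithm of the actual NLP program:

RULES = [('VERBPH', ['VERB']),               # VERB --> VERBPH
         ('VERBPH', ['VERB', 'VERBPH']),     # e.g. 'are' + 'servicing'
         ('VERBPH', ['NOUNPH', 'VERBPH'])]   # picks up the subject

def decode(terminals):
    # terminals: (type, start, end) segments obtained from the input stream
    agenda = list(terminals)
    seen = set(terminals)
    instances = []                           # active rule instance records
    while agenda:
        typ, start, end = agenda.pop(0)      # this segment becomes available
        # a new rule instance record for every rule this segment could begin
        new_instances = [{'lhs': lhs, 'rhs': rhs, 'matched': 1, 'start': start, 'end': end}
                         for lhs, rhs in RULES if rhs[0] == typ]
        # extend waiting instance records whose next constituent is this segment
        for inst in instances:
            if (inst['matched'] < len(inst['rhs'])
                    and inst['rhs'][inst['matched']] == typ
                    and inst['end'] == start):
                new_instances.append(dict(inst, matched=inst['matched'] + 1, end=end))
        # completed instance records are applied; the built segment becomes available
        for inst in new_instances:
            if inst['matched'] == len(inst['rhs']):
                built = (inst['lhs'], inst['start'], inst['end'])
                if built not in seen:
                    seen.add(built)
                    agenda.append(built)
            else:
                instances.append(inst)
    return seen

segments = [('NOUNPH', 0, 3), ('VERB', 3, 4), ('VERB', 4, 5)]   # 'the big men' 'are' 'servicing'
print(('VERBPH', 0, 5) in decode(segments))                     # True: one record covers the whole string

Because completed instance records add new segments to the agenda without discarding the old ones, several records describing the same span can coexist, as described above.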
IV. SYNTHESIS OF TEXT (ENCODING)

Encoding is the process by which strings of text are produced from record structures of the sort already shown. The manner in which this processing is to be done is specified by APSG encoding rules. The right side of an encoding rule specifies what segments a segment of the type on the left side is to be expanded into. Conditions and structure-building actions are included in exactly the same manner as in decoding rules.

The encoding algorithm begins with a single segment record and its associated type side-by-side on a stack. At each cycle through the algorithm, the top pair is removed from the stack and examined. If there is a rule that can be applied, it results in new pairs being put on the top of the stack, according to its right hand side. Otherwise, either the character string value of the NAME attribute of the SUP of the segment record (e.g. \"servic\") is put out, or the name of the segment type itself (e.g. \"i\" for a pair whose type is \"I\") is put out. Eventually the stack becomes empty and the algorithm terminates, having produced the desired output string.

For example, if at some point the following pair were to come off the top of the stack:

   VERBPH   SUP \"SERVIC\"
            PRES,P3,PLUR,PROG

the following encoding rule could be applied:

VERBPH(PROG) --> VERB('BE',FORM=FORM(VERBPH)) VERBPH(-PROG,-FORM,PRESPART)

resulting in the following two pairs being put on the top of the stack:

   VERB     SUP \"BE\"
            PRES,P3,PLUR

   VERBPH   SUP \"SERVIC\"
            PRESPART

The above rule says that a VERBPH segment with a PROGressive indicator should be expanded into a VERB segment with a SUP of \"BE\" and the same FORM indicators as the VERBPH, followed by a new VERBPH which begins as a copy (automatically) of the old one and then is modified by deleting the PROG and FORM indicators and setting the PRESPART indicator. When the VERB segment shown above comes off the stack, a rule would be applied to put the string \"are\" into the output. Then, after application of a couple more rules, the top of the stack would have the four pairs

   VERBSTEM   SUP \"SERVIC\"
   I          null
   N          null
   G          null

which would result in the string \"servicing\" being produced after four cycles of the algorithm. Complete encoding examples may be found in Reference 3.
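
A compact sketch of this stack-driven loop in modern Python (again an illustration under assumptions, not the original implementation): the rules are modeled as functions that either return new type/record pairs to push or nothing, and the spelling rule for 'are' and the PRESPART expansion are simplified stand-ins for the corresponding APSG encoding rules:

FORM = ('PRES', 'PAST', 'P1', 'P2', 'P3', 'SING', 'PLUR')

def expand_prog(typ, rec):
    # VERBPH(PROG) --> VERB('BE',FORM=FORM(VERBPH)) VERBPH(-PROG,-FORM,PRESPART)
    if typ != 'VERBPH' or not rec.get('PROG'):
        return None
    verb = {'SUP': 'BE'}
    verb.update({i: 1 for i in FORM if rec.get(i)})
    rest = {k: v for k, v in rec.items() if k not in FORM and k != 'PROG'}
    rest['PRESPART'] = 1
    return [('VERB', verb), ('VERBPH', rest)]

def expand_prespart(typ, rec):
    # VERBPH(PRESPART) --> VERBSTEM 'I' 'N' 'G'   (simplified)
    if typ != 'VERBPH' or not rec.get('PRESPART'):
        return None
    return [('VERBSTEM', {'SUP': rec['SUP']}), ('I', {}), ('N', {}), ('G', {})]

def spell_be(typ, rec):
    # stand-in for the rule that puts out 'are' for present plural 'be'
    if typ == 'VERB' and rec.get('SUP') == 'BE' and rec.get('PRES') and rec.get('PLUR'):
        return [('A', {}), ('R', {}), ('E', {}), (' ', {})]
    return None

RULES = [expand_prog, expand_prespart, spell_be]

def encode(typ, rec):
    stack, out = [(typ, rec)], []
    while stack:
        typ, rec = stack.pop(0)            # the top pair comes off the stack
        for rule in RULES:
            pairs = rule(typ, rec)
            if pairs is not None:
                stack = pairs + stack      # the right-hand side goes back on top
                break
        else:
            # no rule applies: put out the spelling of the SUP, or the type itself
            out.append(rec.get('SUP', typ).lower() if rec else typ.lower())
    return ''.join(out)

print(encode('VERBPH', {'SUP': 'SERVIC', 'PRES': 1, 'P3': 1, 'PLUR': 1, 'PROG': 1}))
# -> 'are servicing'

Popping pairs from the front and pushing a rule's right-hand side back onto the front reproduces the left-to-right expansion described above.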
V. IMPLEMENTATIONS AND APPLICATIONS

As part of the original work on APSG a computer system called NLP (Natural Language Processor) was developed in 1968. This is a FORTRAN program for the IBM 360/370 computers which will accept as input named record definitions and decoding and encoding rules in exactly the form shown in this paper and then perform decoding and encoding of text [3]. A set of about 300 named record definitions and 800 rules was written for NLP to implement a specific system (called NLPQ) which is capable of carrying on a dialogue in English about a simple queuing problem and then producing a program in the GPSS simulation language to solve the problem [3,4].

More recently a LISP implementation of NLP has been done, which accepts exactly the same input and does the same processing as the FORTRAN version. An interesting feature of this new version is that the compiler part, whose primary task is to translate condition and creation specifications (i.e. the information in parentheses) into lambda expressions, is itself written as a set of APSG rules. This work is part of a project at IBM Research to develop a system which will produce appropriate accounting application programs after carrying on a natural language dialogue with a businessman about his requirements. APSG is also used in the development of a natural language query system for relational data bases and is being considered for use in other projects at IBM. None of this recent work has been documented yet.

VI. CONCLUDING REMARKS

Context-free phrase structure grammars have been known to be inadequate for describing natural languages for many years, and context-sensitive phrase structure grammars have not been found to be very useful, either. Augmented phrase structure grammars, however, appear to be able to express the facts of a natural language in a very concise and convenient manner; they have the power of computer programs, while maintaining the appearance of grammars.

APSG clearly has much in common with other current computational linguistic theories, with the ideas of procedural specification and arbitrary conditions and structure-building actions being popular at this time. It would seem to be most similar to Woods' augmented transition networks (ATN) [5], especially as used by Simmons [6]. Registers in the ATN model correspond closely to attributes of segment records in APSG, and the semantic network structures of Simmons are very close to the record structures of APSG.

Although APSG was used successfully to implement one fairly large system (NLPQ), it is too early to do a thorough appraisal of its capabilities. Through the extensive use anticipated in the next year, however, its strengths and weaknesses should become more apparent.

ACKNOWLEDGEMENTS

I am indebted to my former students at the Naval Postgraduate School for their efforts on the original implementation and application, my colleagues at IBM Research -- Martin Mikelsons, Peter Sheridan, Irving Wladawsky and Ted Codd -- for their interest, ideas and work on the current implementations and applications, and my wife, Beryl, for her typing assistance and general helpfulness.

REFERENCES

1. Balzer, R.M., and Farber, D.J., \"APAREL - a parse-request language,\" COMM. ACM 12, 11 (Nov. 1969), 624-631.
2. Thompson, F.B., Lockemann, P.C., Dostert, B., and Deverill, R.S., \"REL: a rapidly extensible language system,\" in PROC. 24th NAT'L CONF., ACM, NY, 1969, 399-417.
3. Heidorn, G.E., \"Natural language inputs to a simulation programming system,\" Technical Report NPS-55HD72101A, Naval Postgraduate School, Monterey, California, Oct. 1972.
4. Heidorn, G.E., \"English as a very high level language for simulation programming,\" Proc. Symp. on Very High Level Languages, SIGPLAN NOTICES 9, 4 (April 1974), 91-100.
5. Woods, W.A., \"Transition network grammars for natural language analysis,\" COMM. ACM 13, 10 (Oct. 1970), 591-606.
6. Simmons, R.F., \"Semantic networks: their computation and use for understanding English sentences,\" in COMPUTER MODELS OF THOUGHT AND LANGUAGE, R.C. Schank and K.M. Colby (Eds.), W.H. Freeman and Co., San Francisco, Calif., 1973, 63-113.

DIAGNOSIS AS A NOTION OF GRAMMAR

Mitchell Marcus
Artificial Intelligence Laboratory, M.I.T.

This paper will sketch an approach to natural language parsing based on a new conception of what makes up a recognition grammar for syntactic analysis and how such a grammar should be structured. This theory of syntactic analysis formalizes a notion very much like the psychologist's notion of \"perceptual strategies\" [Bever '70] and makes this formalized notion - which will be called the notion of wait-and-see diagnostics - a central and integral part of a theory of what one knows about the structure of language. By recognition grammar, we mean here what a speaker of a language knows about that language that allows him to assign grammatical structure to the word strings that make up utterances in that language.

This theory of grammar is based on the hypothesis that every language user knows as part of his recognition grammar a set of highly specific diagnostics that he uses to decide deterministically what structure to build next at each point in the process of parsing an utterance.
", "text": "There is no limit to the nesting of attribute names used in this manner.", "html": null } } } }