{
"paper_id": "1993",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:36:45.662746Z"
},
"title": "Compiling Typed Attribute-Value Logic Grammars",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Carpenter",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213",
"settlement": "Pittsburgh",
"region": "PA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The unification-based approach to processing attribute-value logic grammars, similar to Prolog interpretation, has become the standard. We propose an alternative, embodied in the Attribute-Logic Engine (ALE) (Carpenter 1993), based on the Warren Abstract Machine (WAM) approach to compiling Prolog (A\u00eft-Kaci 1991). Phrase structure grammars with procedural attachments, similar to Definite Clause Grammars (DCGs) (Pereira-Warren 1980), are specified using a typed version of Rounds-Kasper logic (Carpenter 1992). We argue for the benefits of a strong and total version of typing in terms of both clarity and efficiency. Finally, we discuss the compilation of grammars into a few efficient low-level instructions for the basic feature structure operations.",
"pdf_parse": {
"paper_id": "1993",
"_pdf_hash": "",
"abstract": [
{
"text": "The unification-based approach to processing attribute-value logic grammars, similar to Prolog interpretation, has become the standard. We propose an alternative, embodied in the Attribute-Logic Engine (ALE) (Carpenter 1993), based on the Warren Abstract Machine (WAM) approach to compiling Prolog (A\u00eft-Kaci 1991). Phrase structure grammars with procedural attachments, similar to Definite Clause Grammars (DCGs) (Pereira-Warren 1980), are specified using a typed version of Rounds-Kasper logic (Carpenter 1992). We argue for the benefits of a strong and total version of typing in terms of both clarity and efficiency. Finally, we discuss the compilation of grammars into a few efficient low-level instructions for the basic feature structure operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The first component of an ALE grammar is a type specification, which lays out the basic types of feature structures that will be employed in a grammar, along with the inheritance relations between these types and declarations of appropriate features and constraints on their values. Such a specification includes declarations such as the following for lists of atoms: The idea here is that bot is the most general type, with two subtypes atom and list. The type atom has two subtypes, a and b, which are maximally specific types. The list type also has two subtypes, ne_list and e_list for non-empty and empty lists, respectively. Note that the ne_list type introduces two features, hd and tl, whose values are required to be atoms and lists. The idea here is that the only type which has any appropriate features is the ne_list type, and it is appropriate for exactly two features, hd and tl. Inheritance of appropriateness specifications is performed on the basis of the type hierarchy. For instance, consider the following declaration from HPSG: sign sub [word , phrase] intro [phon :phon_list , synsem : synsem_obj , qstore : quant_list] . word sub [] intro [phon :singleton_phon_list] . phrase sub [] intro [dtrs : dtr_struct] .",
"cite_spans": [
{
"start": 1064,
"end": 1079,
"text": "[word , phrase]",
"ref_id": null
},
{
"start": 1159,
"end": 1161,
"text": "[]",
"ref_id": null
},
{
"start": 1168,
"end": 1195,
"text": "[phon :singleton_phon_list]",
"ref_id": null
},
{
"start": 1209,
"end": 1211,
"text": "[]",
"ref_id": null
},
{
"start": 1218,
"end": 1237,
"text": "[dtrs : dtr_struct]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Type Definitions",
"sec_num": "1"
},
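{
"text": "[Editor's reconstruction, not in the extracted text: the elided declaration for lists of atoms, inferred from the surrounding prose and written in the sub/intro notation of the HPSG example below.] bot sub [atom , list] . atom sub [a , b] . list sub [ne_list , e_list] . ne_list sub [] intro [hd : atom , tl : list] . e_list sub [] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Type Definitions",
"sec_num": "1"
},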
{
"text": "Here the type sign introduces three features and provides value restrictions. The subtype for words inherits these features and the associated value restrictions, imposing the additional condition that the phonology value be a singleton list. In addition, the subtype for phrases introduces an additional feature for daughters, which is only appropriate for phrases. Thus, unlike the case for order-sorted terms (see, for instance, Meseguer et al. (1987)), not every subtype of a type need have the same slots for values. This is significant in terms of implementations, as memory cells are only allocated on a structure for appropriate features.",
"cite_spans": [
{
"start": 435,
"end": 459,
"text": "\u2022 Meseguer et al. (1987)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Type Definitions",
"sec_num": "1"
},
{
"text": "As with other grammar formalisms based on attribute-value logics, the primary data structure used in ALE is the feature structure. The structures used in ALE are similar to those in other systems, with the primary difference being that they are required to be totally well-typed (see Carpenter (1992)). In other words, every feature structure must be assigned a type and every feature appropriate for that type must appear with an appropriate value. This can be contrasted with sorted, but untyped systems, which allow sorts to label feature structures and participate in unification, but don't enforce any typing conditions. It can also be contrasted with systems which only perform type inference on values, but do not require every appropriate feature to be present. There are a number of benefits to typing a programming language. Not the least of these benefits is the ability to detect errors at compile-time. For instance, rules which cannot be satisfied and lexical entries which are not well typed are flagged as such. Practice has shown that this cuts down on grammar development time significantly, because one of the most prevalent grammar-writing errors is being inconsistent about which features appear at which level in a structure and how they are bundled together, especially when grammar formalisms approach the 200-node lexical entry level as found in significant fragments of HPSG (see Penn (1993b)).",
"cite_spans": [
{
"start": 1412,
"end": 1434,
"text": "HPSG (see Penn (1993b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Basic Operations",
"sec_num": "2"
},
{
"text": "Another significant benefit of employing typed structures is that the features appropriate for a type can be determined at compile time. This has two advantages. First, it allows memory allocation and deallocation to be handled efficiently, as the type of each structure is known. Second, it allows unification to be greatly sped up, as there is no need to merge features represented as lists; the positions of relevant features are known at compile time. We consider these two benefits in turn. ALE is currently implemented in Prolog, though plans are underway to implement it in C, using WAM techniques directly. As things stand, the WAM implementation of Prolog is exploited heavily to develop WAM-like behavior for ALE. Using Prolog for feature structure unification systems has its advantages and drawbacks. The drawback is that there are no pointers in Prolog, and thus path compression during dereferencing cannot be carried out efficiently (though it is carried out on inactive edges during parsing). The advantage is that Prolog is very good at structure copying, last call optimization, incremental clause evaluation and search. We will consider all of these topics. But first, we note that the data structure used for feature structures in ALE is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Basic Operations",
"sec_num": "2"
},
{
"text": "Tag-foo(V1, ... ,Vn)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Basic Operations",
"sec_num": "2"
},
{
"text": "where Tag is a reference pointer, signalling the intensional identity of the structure, much as a position in memory in an imperative language would do, and where foo is the name of the type of the structure, which must be a Prolog atom, of course, and where V1 through Vn are the values for the features F1 through Fn that are appropriate for type foo. Given Prolog's compilation to the WAM, this amounts to having the following kind of record structure for feature structures: where the features are coded explicitly in terms of a list. Here the structure required is as follows (ignoring the tag):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Basic Operations",
"sec_num": "2"
},
{
"text": "[Figure: heap layout of the record representation, with Tag and foo cells followed by pointers to the values V1 through Vn]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Basic Operations",
"sec_num": "2"
},
{
"text": "[Figure: heap layout of the list-based representation, with foo pointing into a cons chain of feature-value pairs such as F1 and V1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Basic Operations",
"sec_num": "2"
},
{
"text": "In general, our representation requires 4 + n cells for a structure with n features, while the usual one requires 4 + 6n cells for the same structure. This constitutes a huge discrepancy when we consider the amount of overhead this induces throughout the grammar in areas such as lexical retrieval and copying edges into the chart. Note that this difference between using record-like structures as opposed to lists of feature-value pairs is Prolog-independent. We have not said much about the tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Basic Operations",
"sec_num": "2"
},
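{
"text": "[Editor's sketch, not from the paper; predicate names are illustrative.] The cell-count contrast can be seen directly in Prolog: in the record encoding a feature's value sits at a fixed argument position, so access is a single head unification, whereas the list encoding must search feature-value pairs at run time: hd_record(_Tag-ne_list(Hd,_Tl), Hd). hd_list(_Tag-ne_list(FVs), Hd) :- member(hd:Hd, FVs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Basic Operations",
"sec_num": "2"
},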
{
"text": "The second benefit we mentioned for typing structures is that we are able to carry out unification without merging feature-value lists. The standard method in unifying feature structures is to take two lists of features, find the common elements, unify them, and take both symmetric differences and copy the results of this into the final result. Such tasks are extremely costly, especially as the number of features grows. Instead, our compiler will produce the following kind of code to perform unification (which has been simplified here, but will be expanded upon later): _-ne_list (_-atom,_-list) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 580,
"end": 605,
"text": "_-ne_list (_-atom,_-list)",
"ref_id": null
}
],
"eq_spans": [],
"section": "It is based on the same principle as O'Keefe's method of encoding arrays in Prolog using variables, which provides constant time access and update (see the Quintus library). The basic idea is that each slot in an array is associated with a value and a pointer, which is either a variable or a structure consisting of another value and a pointer. Updates are performed by instantiating the variable to a new pair consisting of a variable and value. Thus values are found by tracking the pointer until it's a variable. To maintain constant time, the entire array must be regularly updated. In our case, the tag plays the role of the pointer, and dereferencing is performed by following the tag value until it is a variable. The number of dereferencing steps needed at any stage is bounded by the depth of the inheritance hierarchy (of course, Prolog does its own internal dereferencing, so we cannot statically bound the total number of dereferencing steps needed during unification). Path compression is then the equivalent of O'Keefe's array updating, and is performed when a completed edge is found during parsing. We will see examples of the use of tags shortly.",
"sec_num": null
},
{
"text": "unify",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "It is based on the same principle as O'Keefe's method of encoding arrays in Prolog using variables, which provides constant time access and update (see the Quintus library). The basic idea is that each slot in an array is associated with a value and a pointer, which is either a variable or a structure consisting of another value and a pointer. Updates are performed by instantiating the variable to a new pair consisting of a variable and value. Thus values are found by tracking the pointer until it's a variable. To maintain constant time, the entire array must be regularly updated. In our case, the tag plays the role of the pointer, and dereferencing is performed by following the tag value until it is a variable. The number of dereferencing steps needed at any stage is bounded by the depth of the inheritance hierarchy (of course, Prolog does its own internal dereferencing, so we cannot statically bound the total number of dereferencing steps needed during unification). Path compression is then the equivalent of O'Keefe's array updating, and is performed when a completed edge is found during parsing. We will see examples of the use of tags shortly.",
"sec_num": null
},
{
"text": "add_to_word(sign(T1-Phon,SynSem,QSt), _-word(T-Phon,SynSem,QSt)) :- add_to_singleton_phon_list(Phon,T).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For completely fresh features such as the head and tail above, there is really no reason to create a structure Tag-bot and then immediately add a type to it. The next release of ALE (Carpenter and Penn forthcoming) will have such an optimization, as it is statically computable. On the other hand, consider the effect of adding the type word to the type sign given above:",
"sec_num": null
},
{
"text": "Here we see that (pointers to) the feature values for sign are copied over into the new word struc ture created and the additional constraint that the Phon value be a singleton list must also be resolved. Note that this extra bit of (pointer) copying is something that is usually also done in encodings using feature-value pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "43",
"sec_num": null
},
{
"text": "As we hinted at above, the procedure to perform unification on two structures is also compiled before run-time. In particular, consider the code to unify two ne_lists, in its full form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "43",
"sec_num": null
},
{
"text": "unify_deref(FS1,FS2) :- deref(FS1,Tag1,TVs1), deref(FS2,Tag2,TVs2), ( Tag1 == Tag2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "43",
"sec_num": null
},
{
"text": "the compiler determine that no additional type inference will be required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "When unifying structures of type b and d, we must instantiate both of their reference pointers to a new structure of type c, with a new feature j, and in addition, perform the extra type inference on the value of f. It is worth noting that all and only the necessary type inference is determined at compile-time. For instance, the fact that the h value of c is required to satisfy the unification of the constraints on h in b and d is enough to let",
"sec_num": null
},
{
"text": "In this section, we consider compiling descriptions taken from ALE's attribute-value logic: <desc> ::= <type> | <var> | <feat>:<desc> | <desc> and <desc> | <desc> or <desc>. As was shown by Smolka (1988), the lack of variables can lead to a quadratic increase in the size of descriptions using only path equations; with variables, path equations are no longer necessary. A complete proof theory with respect to both an algebraic semantics and a feature-structure based interpretation can be found in (Carpenter 1992). Descriptions are compiled into the operations of add_to_sort, unify, deref, and a combination of conjunction and disjunction in Prolog. In addition, to handle constraints of the form <feat>:<desc>, which tell us to add the description to the value of the feature, we need a procedure for extracting a feature's value from a structure. This is done with clauses such as:",
"cite_spans": [
{
"start": 179,
"end": 192,
"text": "Smolka (1988)",
"ref_id": null
},
{
"start": 491,
"end": 507,
"text": "(Carpenter 1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Descriptions",
"sec_num": "3"
},
{
"text": "Again, we present the first clause for convenience; only the second is used at run-time, combined with the necessary dereferencing. Note that if we look for the hd value of a structure of type list, we coerce list to ne_list: _-ne_list (T-atom, _-list) , T-atom).",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 252,
"text": "_-ne_list (T-atom, _-list)",
"ref_id": null
}
],
"eq_spans": [],
"section": "featval_hd(ne_list(H,_),Tag,H).",
"sec_num": null
},
{
"text": "featval_hd(list,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "featval_hd(ne_list(H,_),Tag,H).",
"sec_num": null
},
{
"text": "Here we create a new structure of type ne_list, with a fresh head and tail, and return the fresh head as the result. In general, this might require additional type inference, as could be seen by considering what would happen if we took the value of the feature j in an object of type d in the above type system. In this case, the type d object would be coerced to one of type c, which in turn requires boosting the type of its h value and adding new f and j values:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "featval_hd(ne_list(H,_),Tag,H).",
"sec_num": null
},
{
"text": "Again notice that the compiler determines exactly which type inferences to perform as part of finding a feature's value. Again, in the next release of ALE, the add_to_sort(bot,T) goals will be replaced with instantiated feature structures of type sort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "featval_j(d(V1,T2-TVs2), _-c(T3-bot,V1,T2-TVs2,T-bot), T-bot) :- add_to_u(TVs2,T2), add_to_x2(bot,T3), add_to_z(bot,T).",
"sec_num": null
},
{
"text": "We are now in a position to see how descriptions get compiled into Prolog clauses. To add a description of the sort found on the left to a dereferenced structure Tag-TVs, the Prolog code on the right is generated: Sorts are straightforward, and simply invoke the appropriate add_to goal. Variables are such that they get instantiated to the feature structures which they describe. Thus adding a variable to a structure involves dereferencing the variable, which is instantiated to the current value it has, and unifying it with the structure to which it is being added. All variables are initialized to Tag-bot at compile-time for compatibility with the basic operations over feature structures. The last three cases are recursive. Adding a description to a feature's value requires finding the feature's value, dereferencing it, and adding the embedded description. Conjunction and disjunction in descriptions are translated into the corresponding Prolog control structures. In particular, this means that we treat disjunction in descriptions as introducing non-determinism in adding a description. In this way, Prolog backtracking, and its attendant efficient implementation of search and variables, will take care of the disjunction without any need for explicit copying in the program. Of course, it's still there -it's just that Prolog's doing it. In a non-Prolog implementation of this method, a programmer would have to be very clever to implement this kind of control structure, using some kind of lazy copying along the lines of Tomabechi (1992) or along the lines of the WAM itself. Conjunction, on the other hand, is treated as goal sequencing in Prolog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "featval_j(d(V1,T2-TVs2), _-c(T3-bot,V1,T2-TVs2,T-bot), T-bot) :- add_to_u(TVs2,T2), add_to_x2(bot,T3), add_to_z(bot,T).",
"sec_num": null
},
{
"text": "sort V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "featval_j(d(V1,T2-TVs2), _-c(T3-bot,V1,T2-TVs2,T-bot), T-bot) :- add_to_u(TVs2,T2), add_to_x2(bot,T3), add_to_z(bot,T).",
"sec_num": null
},
{
"text": "This compilation of descriptions into Prolog code rather than into feature structures is where ALE departs most radically from other attribute-value based parsers with which we are familiar. The traditional method, say for chart parsing, involves taking an inactive edge which has just been created and trying to unify it with the feature structures corresponding to the heads of rules in the grammar. Instead, our system will execute the Prolog code compiled from the description of the head of a grammar rule. There are two principal benefits to our approach. These stem from the fact that we reduce the copying and search methods to those of the WAM itself by compiling the Prolog clauses generated. The first benefit is that early failures in matching a description to a goal do not result in any overcopying -in fact there is really no copying done at all -it's all handled in the heap mechanism of the WAM. The second benefit is that if we have deeply embedded disjunctions in our descriptions, we do not need to expand to a disjunctive normal form or invoke one of the many approaches to disjunctive unification. In particular, if we have a description with an embedded disjunction and the first disjunct fails, then we only backtrack to the second disjunct, not all the way back to the beginning of the structure. Again, this operation is very efficient in the WAM. It should be noted that nothing here depends on using a chart parser as the control strategy -similar benefits would accrue to any other parsing strategy. In fact, the same benefits could also be gained by using this kind of strategy in generation, say along the lines of van Noord et al. (1992).",
"cite_spans": [
{
"start": 1708,
"end": 1727,
"text": "Noord et al. (1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Grammars and Programs",
"sec_num": "4"
},
{
"text": "The chart parsing strategy used in ALE is not particularly significant qua parser, as it was primarily motivated by Prolog considerations. What is significant is the way in which descriptions are compiled and made available to the parser, a strategy which can be maintained using many different parsers. For instance, we are also working on a left-corner parser which will not require any copying or manipulation of the database. The most significant thing to note about ALE's parser is that it employs a dynamic chart, where inactive edges are asserted into the database. 1 Grammar rules, where DO is the description of the mother category and the Di are descriptions of the daughter categories, are compiled into clauses for rule/3. When rule(C1,Left,Mid1) is called, the first thing that happens is the description D1 being added to the feature structure C1. Assuming this fails, no other work is done, and no copying is performed. Instead, the code generated by the description D1 is simply executed, and failure causes Prolog backtracking either to earlier disjunctions in the description D1, or to other clauses for rule/3 generated by other rules. Assuming D1 is successfully added to C1, rule/3 looks for an inactive edge directly to the right of C1 in the chart. The fact that parsing is done right to left ensures that the chart has been completed to the right of any inactive edge which is being considered. If an inactive edge of category C2 to the right is found, rule/3 attempts to add the description D2 to a copy of C2. The current bottleneck in this process is the inordinate amount of copying required, especially when many empty categories are present. A better solution would be to add the descriptions in a more lazy fashion without eagerly copying the whole structure, but Prolog does not provide that kind of fine control of its database. This process continues until the right hand side of the rule is completely matched. At this point, the mother category is constructed by adding the compiled description DO to a fresh category, fully dereferencing (path compressing), asserting it into the database of inactive edges, and recursively calling rule/3. As there are no base cases to rule, it will eventually fail and backtrack through all of the disjunctive choice points and alternative rules.",
"cite_spans": [
{
"start": 575,
"end": 576,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Grammars and Programs",
"sec_num": "4"
},
{
"text": "The input string is consumed from right to left, at each step adding inactive edges until no more edges can be added. This gives the parser as a whole a mix of breadth-first and depth-first search, to best exploit the inherent behavior of the WAM. The top level control strategy is quite straightforward:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Grammars and Programs",
"sec_num": "4"
},
{
"text": "parse",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Grammars and Programs",
"sec_num": "4"
},
{
"text": "The words are reversed and counted, and the chart is built from right to left, taking lexical entries for each word and firing rule/3. Before considering lexical entries, empty categories are asserted into the chart and processed using rule/3. All lexical and empty category alternatives will be considered during backtracking before proceeding leftward to the next word. We should also mention that lexical entries and empty categories are fully expanded as path-compressed feature structures at compile time. In addition to allowing categories in a rule, ALE also allows definite clause goals to be invoked, in a way similar to DCG rules such as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Grammars and Programs",
"sec_num": "4"
},
{
"text": "f(Z) ---> h(Y), g(X), {foo(X,Y,Z)}, j(Z).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Grammars and Programs",
"sec_num": "4"
},
{
"text": "In this rule, as soon as the h(Y) and g(X) categories are found, the goal foo(X,Y,Z) is invoked and solved before going on to consider j(Z). The change to rule/3 is minimal; the code for solving foo(X,Y,Z) is simply inserted in between the code generated by the categories g(X) and j(Z).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling Grammars and Programs",
"sec_num": "4"
},
{
"text": "Definite clause programs can be defined in ALE, where instead of Prolog terms, feature structure descriptions are used. For instance, we can define standard predicates such as: append(e_list,X,X) if true.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CARPENTER",
"sec_num": null
},
{
"text": "append(hd:X and tl:Xs, Ys, hd:X and tl:Zs) if append(Xs,Ys,Zs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CARPENTER",
"sec_num": null
},
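{
"text": "[Editor's comparison, not from the paper.] The corresponding untyped Prolog predicate over ordinary lists is: append([], Xs, Xs). append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs). In the ALE version, e_list plays the role of [] and hd:X and tl:Xs plays the role of [X|Xs], with unification over totally well-typed feature structures in place of first-order term unification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CARPENTER",
"sec_num": null
},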
{
"text": "The logical variables are used as in Prolog, with the result being an instance of constraint logic programming over the typed feature structure logic. This bears a close similarity to the LOGIN language of A\u00eft-Kaci and Nasr (1986), who point out a number of benefits of using an order-sorted notion of feature structure for logic programming. A general CLP scheme suiting this application was defined by H\u00f6hfeld and Smolka (1988), and this particular application is detailed in (Carpenter 1992). The previous two clauses will translate into the following pieces of code, following O'Keefe's (1990) FS1T,FS2,FS3T),Goals).",
"cite_spans": [
{
"start": 212,
"end": 229,
"text": "Kaci -Nasr (1986)",
"ref_id": null
},
{
"start": 405,
"end": 427,
"text": "Hohfeld -Smolka (1988)",
"ref_id": null
},
{
"start": 583,
"end": 597,
"text": "Keefe's (1990)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 598,
"end": 615,
"text": "FS1T ,FS2 , FS3T)",
"ref_id": null
}
],
"eq_spans": [],
"section": "CARPENTER",
"sec_num": null
},
{
"text": "The coding used, with goals being threaded, is to ensure that last call optimization takes place. While ALE does not perform indexing, it does support full cuts, disjunctions, negation by failure and last call optimization. Such procedural attachments can be interspersed into rules just as in DCGs. This mechanism has been used in ALE grammars for purposes such as quantifier scoping using Cooper",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CARPENTER",
"sec_num": null
},
{
"text": "Before concluding, we should also point out that ALE has a number of other useful features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Storage, for treating the maximal onset principle in syllabification in attribute-value phonology (Mastroianni 1993), and for implementing principles such as the non-local feature principle (for slashes) and the binding theory of HPSG (Penn 1993b). Procedures can even be used to postpone some of the unifications in a rule until after all of the categories have been found, thus encoding a form of restriction similar to that used by Shieber (1985). Such procedures will allow general hooks to Prolog in the next release of ALE, and as the definite clause component of a grammar can be arbitrary, can also be used for interleaving on-line semantic processing with syntactic processing as in Pereira and Pollack (1990).",
"sec_num": null
},
{
"text": "The next release of ALE, scheduled for Summer 1993, will also include more general constraints on types, following A\u00eft-Kaci (1986) (see also Carpenter (1992)), inequations and extensionality (see Carpenter (1992) for theoretical details, and Penn and Carpenter (forthcoming) and Penn (1993a) for implementation details and motivation).",
"cite_spans": [
{
"start": 116,
"end": 131,
"text": "AYt-Kaci (1986)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "One of the most interesting of these is the use of lexical rules, which are loosely based on those of PATR-II, in that they map one lexical entry to another at compile-time. In ALE, such rules may involve procedural attachments just as other rules, and contain a rudimentary morphological component based on string unification. ALE also fully supports parametric macros which are compiled out statically into the descriptions they abbreviate.",
"sec_num": null
},
{
"text": "We have shown how grammars based on attribute-value logic descriptions can be efficiently compiled into low-level Prolog instructions which exploit the inherent efficiency of the WAM. Unfortunately, there are a few inefficiencies stemming from this encoding due to Prolog's logical variables and its lack of control over copying structures from the database. The ideal solution will be to build a WAM-like abstract machine language directly for typed feature structures and their associated descriptions. The WAM has proved to be the most efficient architecture yet developed for implementing \"unification-based\" programs, even though, as we saw, it often relies on structure copying and creation rather than unification (the only cases of unification in ALE arise from shared variables in a structure -everything else is structure copying).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Current versions of the WAM in SICStus and Quintus index asserted clauses, allowing the edges beginning at a particular position to be easily retrieved by hashing. A method with explicit copying would most likely be faster than the one with assert, and we plan to explore this possibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "this with a representation such as: Tag-foo([F1:V1, ... ,Fn:Vn])"
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "(T-ne_list(H,R), T-ne_list(H2,R2)) :- unify(H,H2), unify(R,R2)."
},
"FIGREF8": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Pollard and Sag (in press), process 15-word sentences, creating 40-50 inactive edges, at times under 2 seconds. ALE Version \u03b2, as described in this paper, is available from the author without charge for research purposes. It runs under SICStus and Quintus Prologs. It is distributed with roughly 100 pages of documentation and sample grammars. Version 1.0 is scheduled for release in August 1993."
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>) :- add_to_ne_list(TVs,Tag).</td></tr><tr><td>add_to_ne_list(list, _-ne_list(T1-bot,T2-bot)) :- add_to_atom(bot,T1),</td></tr></table>",
"num": null
}
}
}
}