{ "paper_id": "C92-1039", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:34:07.403031Z" }, "title": "SYNTACTIC PREFERENCES FOR ROBUST PARSING WITH SEMANTIC PREFERENCES", "authors": [ { "first": "Jin", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "Computing Research Laboratory", "institution": "New Mexico State University Las Cruces", "location": { "postCode": "88003", "region": "NM" } }, "email": "wjin@nmsu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Using constraints in robust parsing seems to lead to what we call the \"robust parsing paradox\". Preference Semantics and Connectionism both offer a promising approach to this problem. However, Preference Semantics has not addressed the problem of how to make full use of syntactic constraints, and Connectionism has some inherent difficulties of its own which prevent it from producing a practical system. In this paper we propose a method to add syntactic preferences to the Preference Semantics paradigm while maintaining its fundamental philosophy. It will be shown that syntactic preferences can be coded as a set of weights associated with the set of symbolically manipulatable rules of a new grammar formalism. The syntactic preferences so coded can easily be used to compute with semantic preferences. With the help of some techniques borrowed from Connectionism, these weights can be adjusted through training.", "pdf_parse": { "paper_id": "C92-1039", "_pdf_hash": "", "abstract": [ { "text": "Using constraints in robust parsing seems to lead to what we call the \"robust parsing paradox\". Preference Semantics and Connectionism both offer a promising approach to this problem. However, Preference Semantics has not addressed the problem of how to make full use of syntactic constraints, and Connectionism has some inherent difficulties of its own which prevent it from producing a practical system. In this paper we propose a method to add syntactic preferences to the Preference Semantics paradigm while maintaining its fundamental philosophy. It will be shown that syntactic preferences can be coded as a set of weights associated with the set of symbolically manipulatable rules of a new grammar formalism. The syntactic preferences so coded can easily be used to compute with semantic preferences. With the help of some techniques borrowed from Connectionism, these weights can be adjusted through training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Robust parsing faces what seems to be a \"paradox\". On one hand, it needs to tolerate input sentences which more or less break syntactic and semantic constraints; that is, given an ill-formed sentence, the parser should attempt to interpret it somehow rather than simply reject it because it violates some constraints. In order to do this it allows syntactic and semantic constraints to be relaxed. On the other hand, robust parsing still needs to be able to disambiguate. For the sake of disambiguation, stronger syntactic and semantic constraints are required, since the stronger these constraints are, the greater the number of unlikely readings that can be rejected, and the more likely it is that the correct reading is selected. A lot of work related to this problem has been done. The most commonly used strategy is to use the strongest constraints first to parse the input. 
If the parser fails, the violated constraints are relaxed in order to recover the failed process and arrive at a successful parse (Carbonell & Hayes, 1983) (Huang, 1988) (Kwasny & Sondheimer, 1981) (Weischedel & Sondheimer, 1983). The major problem with this approach is that it is difficult to tell which constraint is actually being violated and, therefore, needs to be relaxed when the parser fails. This problem is more serious with parsers that use backtracking.", "cite_spans": [ { "start": 999, "end": 1023, "text": "(Carbonell & Hayes, 1983)", "ref_id": null }, { "start": 1024, "end": 1037, "text": "(Huang, 1988)", "ref_id": "BIBREF3" }, { "start": 1038, "end": 1065, "text": "(Kwasny & Sondheimer, 1981)", "ref_id": "BIBREF5" }, { "start": 1066, "end": 1097, "text": "(Weischedel & Sondheimer, 1983)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Another strategy is based on Preference Semantics (Wilks, 1975, 1978) (Fass & Wilks, 1983). The idea is that all the possible sentence readings can (though need not) be produced. All the \"readings are scored according to how many preference satisfactions they contain\", and \"the best reading (that is, the one with the most preference satisfactions) is taken, even if it contains preference violations.\" Selfridge (1986) and Slator (1988) also use this strategy. One important advantage of this approach is that, within a uniform mechanism, the semantic constraints can both be used maximally for the sake of disambiguation and be gracefully relaxed when necessary for the sake of robustness. However, how to extend the preference philosophy to syntactic constraints has not been addressed. There are two frequently used approaches to incorporating syntactic constraints in systems using semantic preferences. The first, as in (Slator, 1988), is to use a weak version of a rather typical syntactic module. The problem with this approach is that the syntactic constraints still suffer from the \"robust parsing paradox\". Another problem is that it shifts more of the burden of disambiguation to semantic preferences, because the syntactic constraints need to be weak enough not to reject too many noisy inputs. The second approach, as in (Selfridge, 1986), is to try all the possible combinations without any structural preference. The problem here is computational complexity, especially with long sentences which have complex or recursive structures in them.", "cite_spans": [ { "start": 50, "end": 62, "text": "(Wilks, 1975", "ref_id": "BIBREF17" }, { "start": 63, "end": 77, "text": "(Wilks, 1978)", "ref_id": null }, { "start": 78, "end": 98, "text": "(Fass & Wilks, 1983)", "ref_id": "BIBREF0" }, { "start": 419, "end": 435, "text": "Selfridge (1986)", "ref_id": "BIBREF12" }, { "start": 440, "end": 453, "text": "Slator (1988)", "ref_id": "BIBREF14" }, { "start": 1390, "end": 1407, "text": "(Selfridge, 1986)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Connectionism has also shown some appealing results (Dyer, 1991) (Waltz & Pollack, 1985) (Lehnert, 1991) on robustness. However, there are some very difficult problems which these models pose, such as the difficulty with complex structures (especially recursive structures), the difficulty with variables, variable bindings and unification, and the difficulty with schemas and their instantiations. 
These problems need to be solved before such an approach can be used in practical natural language processing systems, especially those which involve syntactic processing (Dyer, 1991).", "cite_spans": [ { "start": 52, "end": 64, "text": "(Dyer, 1991)", "ref_id": "BIBREF1" }, { "start": 65, "end": 87, "text": "(Waltz & Pollack, 1985)", "ref_id": "BIBREF15" }, { "start": 565, "end": 577, "text": "(Dyer, 1991)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper we propose a framework for representing syntactic preferences1 that keeps all the virtues of robustness that Preference Semantics suggests. Furthermore, such preferences can be learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "As we have seen, the idea of Preference Semantics about robustness is to enable the system to 1) generate (potentially) all the possible readings for all the possible inputs, and 2) find out which among all the possible readings is the most appropriate one and generate it first. To meet this goal for syntactic constraints we want to accomplish the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sketch of Our Approach", "sec_num": "2." }, { "text": "1. A formalism with a practically feasible number of symbolically manipulatable rules which has enough generative power to accept any input and produce all its possible structures (section 3.1). 2. A scheme to associate weights with each of these rules such that the weights of all the rules that are used to produce a structure reflect how preferable (syntactically speaking) the whole structure is (section 2.1). 3. An algorithm that incorporates these rules and weights with semantic preferences so that the overall best reading is generated first (section 2.2). 4. A method to train the weights (sections 2.1 and 3.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sketch of Our Approach", "sec_num": "2." }, { "text": "The most popular method of encoding syntactic constraints is probably phrase structure grammar. One way to make it robust is to assign a cost to every grammar rule, so that a rule designed for well-formed inputs has a lower cost than the rules designed for less well-formed inputs; that is to say, a higher-cost grammar rule is less preferred than a lower-cost grammar rule. However, there are two serious problems with this naive method. First, if every kind of ill-formedness is to have a set of special grammar rules designed for it, then to capture a reasonable variety of ill-formedness we need an unreasonably large number of grammar rules. 1 We use syntactic preferences in a broader sense which not only includes syntactic feature agreements but also the order of the constituents in the sentence and the way in which different constituents are combined. On the other hand, as you will see, the absence of nonterminals and syntactic derivation trees will also distance us from a strong syntax theory. Second, it is not clear how we can find the costs for all these rules. Next we will show how we solve these two problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "To solve the first problem, our approach abandons the use of phrase structure grammar; instead a new grammar formalism is used. 
The idea of this new formalism is to reduce syntactic constructions to a relatively small set of more primitive rules (P-rules), so that each syntactic construction can be produced by the application of a particular sequence of these P-rules. Section 3.1 will give more details about this formalism; here we will just list some of its relevant characteristics: 1. There are no nonterminals in this formalism. 2. Each P-rule takes only three parameters to specify a pattern on which its action will be fired. Each of these three parameters will be one of the parts of speech (or word types), such as noun, verb, adverb, etc. For example: (noun, verb, noun) => action3 is a P-rule. 3. Each rule has one action. There are only three possible actions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "Since the number of parts of speech is generally limited2, the total rule space (the number of all the possible P-rules) is rather small3. So, it is possible for us to keep all of the possible P-rules in the grammar rule base of the parser. The output of the parser is a parsing tree in which each node corresponds to a word in the input sentence and each edge represents the dependency or attachment relation between the words in the sentence. One important property of this formalism is that, given any input sentence, for each of the normal parsing trees that can be built on it (the definition of a normal parsing tree is given in the next section), there exists a sequence of P-rules such that the tree will be derived by applying this rule sequence. This means that we can now use this small P-rule set to guide the parsing4 without pre-excluding any kind of ill-formed input for which a normal parsing tree does exist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "To solve the second problem, we will use some techniques borrowed from the connectionist camp. As we have mentioned before, each rule takes three parameters to specify a pattern. Each of these parameters is associated with a vector of syntactic feature5 roles. So, every P-rule will have a feature role vector:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "(F11, ..., F1n, F21, ..., F2n, F31, ..., F3n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "Each feature role is in turn associated with a weight. So each P-rule also has a vector of weights:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "(W11, ..., W1n, W21, ..., W2n, W31, ..., W3n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "If a rule is applicable in a certain situation, the feature roles will be filled by the corresponding feature values given by the language objects matched by the pattern of the rule. The value of each feature can be either 1 or 0. Let F'ij denote the value filling role Fij in one particular application. The cost of applying that rule in that situation will be:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "-Σij Wij · F'ij (*)
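To make formula (*) concrete, here is a minimal Python sketch (ours, not the paper's implementation; the rule pattern, the feature roles and all numeric values are invented for illustration):

# Formula (*): the cost of applying a P-rule is the negated dot product of
# its weight vector with the 0/1 feature values that fill its three
# parameter slots.
def rule_application_cost(weights, feature_values):
    # weights: flat vector (W11..W1n, W21..W2n, W31..W3n)
    # feature_values: flat 0/1 vector (F'11..F'3n) supplied by the three
    # language objects matched by the rule's pattern
    assert len(weights) == len(feature_values)
    return -sum(w * f for w, f in zip(weights, feature_values))

# A hypothetical P-rule (noun, verb, noun) => action3 with two feature
# roles per parameter:
weights = [0.8, 0.1, 0.5, 0.9, 0.7, 0.2]
features = [1, 0, 1, 1, 0, 0]
print(rule_application_cost(weights, features))  # prints -2.2 (up to float rounding)

A lower (more negative) cost thus corresponds to a rule whose highly weighted feature roles are actually filled, which is what makes its application more competitive in the search described below.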
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "The weights can be trained by a training algorithm similar to that used in the single layer perceptron network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "To summarize, the syntactic constraints are encoded partly as P-rules and partly as the weights associated with each P-rule. The P-rules are used to guide the parsing and to tell which constituent should be combined with which constituent. There are two types of syntactic preferences encoded in the weights: the preferences of the language for different P-rules and the preferences of each P-rule for different syntactic features. The P-rules are still symbolic and work on a stack, so the ability to recurse is kept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coding Syntactic Constraints", "sec_num": "2.1." }, { "text": "At the topmost level, the parser is a standard searching algorithm. Due to its efficiency, the A* algorithm (Nilsson, 1980) is used; however, it is worth noting that any other standard searching algorithm could also be used. For any searching algorithm, the following things need to be specified: 1. what the states in the searching space are; 2. what the initial state is; 3. what the action rules are and, taking one state, how they create new states; 4. what the cost of creating a new node is (note that the cost is actually charged to the edge of the searching graph); 5. what the final states are.", "cite_spans": [ { "start": 104, "end": 119, "text": "(Nilsson, 1980)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Organization of the Parser", "sec_num": "2.2." }, { "text": "5 The use of the term \"syntactic feature\" is only to make it distinct from semantic preference and semantic type. We by no means try to exclude the use of those features which are generally considered semantic features but which help to make the right syntactic choice. The partial result is represented as one or more parsing trees which are kept on a working stack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organization of the Parser", "sec_num": "2.2." }, { "text": "Details of this representation are given in the next section. The initial state is a pair of an empty stack and a sentence to be parsed. The action of the searching process is to look at the next read-in word and the roots of the top two trees on the working stack, which represent the two most recently found active constituents. The search will then look for all the applicable P-rules based on what it sees. All the P-rules it has found will then be fired. The action part of these P-rules will decide how to manipulate the trees on the stack and the current read-in. It will also decide whether the next word needs to be read in. Therefore, for each of these P-rules being fired, a new state is created. The cost of creating this new state is the following: the cost of applying the P-rule in its father state; the degree of violation of any new local preference just found in the newly built trees; and the cost of the contextual preference6 associated with the newly consumed read-in sense. All these costs are normalized and added to yield the total cost of creating this new state. The reason for normalization is that, for example, the cost of applying a P-rule (see section 2.1) needs to be normalized to be positive, as required by the A* algorithm. The relative magnitudes of different types of costs also need to be balanced, so that one kind of cost will not overwhelm the others. The final states of the search are the states where the unparsed sentence is nil and the working stack has only one complete parsing tree on it. 
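To make this organization concrete, the following Python sketch (ours, not the actual implementation) replays the top-level loop; applicable_rules, apply_rule, step_cost and heuristic are hypothetical stand-ins for the machinery just described:

import heapq, itertools

# A*-style search over (stack, unparsed) states. step_cost is assumed to
# return the normalized, non-negative combination of P-rule cost and
# preference-violation costs described above.
def parse(sentence, applicable_rules, apply_rule, step_cost, heuristic):
    counter = itertools.count()          # tie-breaker for the heap
    start = ((), tuple(sentence))        # empty stack, whole input unparsed
    frontier = [(heuristic(start), 0.0, next(counter), start, [])]
    while frontier:
        f, g, _, state, rules_used = heapq.heappop(frontier)
        stack, unparsed = state
        if not unparsed and len(stack) == 1:   # final state: one complete tree
            return stack[0], rules_used
        for rule in applicable_rules(state):
            new_state = apply_rule(rule, state)
            g2 = g + step_cost(rule, state)    # cost charged to this edge
            heapq.heappush(frontier, (g2 + heuristic(new_state), g2,
                                      next(counter), new_state,
                                      rules_used + [rule]))
    return None, []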
Obviously, the output of this searching algorithm is the reading (represented as the tree) of the input sentence which violates the fewest syntactic and semantic preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organization of the Parser", "sec_num": "2.2." }, { "text": "So far the heuristic used in the A* search is simply taken to be:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organization of the Parser", "sec_num": "2.2." }, { "text": "α · l · c, where α is a constant, l is the length of the unparsed sentence, and c is the average cost of parsing each word. 6 For the idea of contextual preferences see (Slator, 1988). It needs to be mentioned that the input sentence discussed above is actually a list of lexical frames. Each lexical frame contains the following information about the word it stands for: the part of speech, syntactic features, local preferences, contextual preferences, etc. Since words can have more than one sense and one lexical frame represents only one sense, for each read-in word the parser has to work on each of its senses and produce all of their children. 
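As an illustration, a lexical frame might be represented as follows (a sketch of ours; the field names follow the description above, and all concrete values are invented):

# One lexical frame per word sense; the 0/1 feature vector has the fixed
# length assumed by the P-rule weight vectors of section 2.1.
frame_dog_1 = {
    'word': 'dog',
    'sense': 'dog_1',                    # one frame represents one sense
    'part_of_speech': 'noun',
    'syntactic_features': (1, 0, 1, 0),  # fixed-length 0/1 vector
    'local_preferences': [],             # local semantic preferences
    'contextual_preferences': [],        # see (Slator, 1988)
}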
", "cite_spans": [ { "start": 46, "end": 59, "text": "(Slator, 1988)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Organization of the Parser", "sec_num": "2.2." }, { "text": "3.1. P-rules, Parsing Trees, and P-parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Details", "sec_num": "3." }, { "text": "Given an input string α on a vocabulary Σ7, a parsing tree of the input is a tree with all its nodes corresponding to one of the words in the input string. For example, one of the parsing trees for input abcde is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Details", "sec_num": "3." }, { "text": "(e (a) (c (b) (d)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Details", "sec_num": "3." }, { "text": "Here the head of each list is the root of the tree or the subtree, and the tail of the list contains its children. Intuitively, the parenthood relation reflects the attachment or dependency relation in the input expression. For example, given an input expression a small dog, dog will be the father of both a and small. The parsing tree is more formally defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Details", "sec_num": "3." }, { "text": "Definition (Parsing Tree): Given an input string α, a parsing tree on α is recursively defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Details", "sec_num": "3." }, { "text": "1. (a) is a parsing tree on α, if a is in α. 2. (a T1 T2 ... Ti) is a parsing tree on α, if a is in α and each Tk (1 ≤ k ≤ i) is a parsing tree on α. The P-rule is used to specify how the P-parser works. If the input is on a vocabulary Σ, the P-rules on it can be defined as follows (see the P-rule and action definitions in the figure). The initial configuration for the P-parser is [nil, (list (car α)), (cdr α)], where α is the input string to be parsed. There is a set of P-rules in each P-parser to specify its behavior. The P-parser will work non-deterministically. A P-rule can be fired iff its three parameters match the roots of the top two parsing trees on the stack and the root of the tree in the working register respectively. Note that the P-rule does not care about the unparsed input part of the configuration triple. A configuration is a final configuration iff the unparsed input string and the working register are both empty and the stack has only one parsing tree on it. An input is grammatical if and only if the P-parser can go from the initial configuration to a final configuration by applying a sequence of P-rules taken from its grammar set. If there is more than one possible final state for a given input string and the parsing trees produced at these states are different, then we say that the input string is ambiguous. Here is a simple example to show how the P-parser works. You may need a pen and a piece of paper to work it out. Given input abcde (a good friend of mine, for example), the parsing tree (c (a) (b) (d (e))) can be constructed by applying the following sequence of rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Details", "sec_num": "3." }, { "text": "(nil,nil,a)=>1 (a,nil,b)=>1 (b,a,c)=>2 (a,nil,c)=>2 (nil,nil,c)=>1 (c,nil,d)=>1 (d,c,e)=>1 (e,d,nil)=>3 (d,c,nil)=>3
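The following Python sketch (ours, not code from the paper) replays this sequence mechanically and reproduces the tree; trees are lists whose head is the root word, and since the rule patterns hold by construction, only the action numbers are dispatched:

# Configurations are [stack, register, rest]; the three actions follow the
# definitions in the figure (action 1 pushes the register tree and reads the
# next word, action 2 attaches the stack top as first child of the register
# tree, action 3 attaches the stack top as last child of the second top).
def action1(s, c, r):
    return [c] + s, [r[0]] if r else None, r[1:]

def action2(s, c, r):
    return s[1:], [c[0], s[0]] + c[1:], r

def action3(s, c, r):
    return [s[1] + [s[0]]] + s[2:], c, r

actions = {1: action1, 2: action2, 3: action3}
stack, reg, rest = [], ['a'], list('bcde')
for act in [1, 1, 2, 2, 1, 1, 1, 3, 3]:   # the rule sequence above
    stack, reg, rest = actions[act](stack, reg, rest)
print(stack[0])   # ['c', ['a'], ['b'], ['d', ['e']]], i.e. (c (a) (b) (d (e)))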
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Details", "sec_num": "3." }, { "text": "Most properties of this formalism are not of interest for this paper, except this theorem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Details", "sec_num": "3." }, { "text": "Theorem: A P-parser with all the P-rules on Σ as its grammar set can produce all the complete normal parsing trees for any input string on Σ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some Details", "sec_num": "3." }, { "text": "It is easy to prove this by induction on the length of the input string. □", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PROOF:", "sec_num": null }, { "text": "The theorem says that a P-parser with all the possible P-rules as its rule set can produce, for any input string, all of its possible parsing trees which have no crossing dependencies. This alone may not be very useful, since this P-parser will also accept all the strings over Σ. However, as we have shown in the last section, with a proper weighting scheme this P-parser offers a suitable framework for coding syntactic preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PROOF:", "sec_num": null }, { "text": "Each lexical item will have a vector (of a fixed length) containing syntactic features. The value of each feature is either 1 or 0. Each of the three parameters in the P-rule is associated with a vector (of the same length) of syntactic feature roles. Each of these syntactic feature roles is associated with a weight. Each weight thus encodes a preference of the P-rule toward the particular syntactic feature it corresponds to. A higher weight means the P-rule will be more sensitive to the appearance of this syntactic feature, and the result of applying this rule will therefore be more competitive when the feature does appear. The preferences of the language for different P-rules are also reflected in the weights. Instead of being reflected in each individual weight, they are reflected in the distribution of weights across P-rules. A P-rule with a higher average weight is generally more favored than a P-rule with a lower average weight, since the higher the average weight is, the lower the cost of applying the P-rule tends to be. It is also necessary to emphasize that these two types of preferences are closely integrated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Preferences in the Weights of P-rules", "sec_num": "3.2." }, { "text": "The weights of all the P-rules will be trained. The training is supervised. There are two different methods to train the weights. The first method takes an input string and a correct parsing tree for that string. We use an algorithm to compute a sequence of P-rules that will produce the given parsing tree from the given input string. This algorithm is not difficult to design, and due to space limits we will not present it here. After the P-rule sequence is produced, each P-rule in the sequence gets a bonus δ. The bonus is further distributed to the weights of the P-rule in the same fashion as in the single layer perceptron network, that is: ΔWij = η δ F'ij, where η is the learning factor. 
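In code, the update amounts to the following sketch (ours; the in-place weight list, the learning factor and the numbers are assumed for illustration):

# Perceptron-style update: the bonus delta for a P-rule is spread over the
# weights of exactly those feature roles that were active (F'ij = 1) in
# the application; a negative delta acts as a punishment.
def update_rule_weights(weights, active_features, delta, eta=0.1):
    for i, f in enumerate(active_features):
        weights[i] += eta * delta * f
    return weights

w = [0.5, 0.5, 0.5]
update_rule_weights(w, [1, 0, 1], delta=1.0)   # w becomes [0.6, 0.5, 0.6]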
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weight Learning", "sec_num": "3.3." }, { "text": "The second method requires less intervention. The parser is left to work on its own. The user may need to tell the parser whether the output it gives is correct. If the result is correct, all the P-rules used to construct this output get a bonus and all the rules used during the parsing which did not contribute to the output get a punishment. If the answer is wrong, all the P-rules used to construct this answer get a punishment, and the parser will continue to search for a second best guess. The bonus and punishment are distributed to the weights of the P-rule in the same manner as in the first method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weight Learning", "sec_num": "3.3." }, { "text": "The first method is more appropriate at the early stage of the training, when most of the rules have about the same weights; at that stage it would take much longer for the parser to find the correct result on its own. On the other hand, the second method needs less intervention, so it is more appropriate whenever it can work reasonably well. It is well known that the single layer perceptron net cannot be trained to deal with exclusive-or. The same situation will also happen here. Since exclusive-or relations do exist between syntactic features, we need to solve this problem. The simplest solution is to make multiple copies of the P-rule, and, hopefully, each copy will converge to one side of the exclusive-or relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weight Learning", "sec_num": "3.3." }, { "text": "The syntactic preference violation is measured by formula (*) in section 2. Both action 2 and action 3 of the P-rule actions make an attachment, and some semantic preferences may be either violated or satisfied by the attachment. So, after each application of such P-rule actions, the parser needs to check whether some new preferences have been violated or satisfied. If there are any, it computes the cost of these violations and reports it to the searching process. Similarly, each action 1 will cause a new contextual preference to be reported. The measurement of the violation degree of both local preferences and contextual preferences is basically taken from PREMO (Slator, 1988), which we will not repeat here.", "cite_spans": [ { "start": 676, "end": 690, "text": "(Slator, 1988)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Measuring Preference Violations", "sec_num": "3.4." }, { "text": "Preference Semantics offers an excellent framework for robust parsing. However, how to make full use of syntactic constraints has not been addressed. Using weights to code syntactic constraints on a relatively small set of P-rules (from which all the possible syntactic structures can be derived) enables us to expand the philosophy of Preference Semantics from semantic constraints to syntactic constraints. The weighting system not only reflects the preferences of the language, say English, for different P-rules but also reflects the preferences of each P-rule for each syntactic feature. Besides this, it also offers a nice interface, so that we can integrate the application of these syntactic preferences nicely with the application of both local semantic preferences and contextual preferences by using a highly efficient searching algorithm. This project has also shown that some of the techniques commonly associated with connectionism, such as coding information as weights, training the weights, and so on, can also be used to benefit symbolic computing. The result is gaining robustness and adaptability while not losing the advantages of symbolic computing such as recursion, variable binding, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Comparisons", "sec_num": "4." }, { "text": "The notion 'syntactic preference' has been used in (Pereira, 1985) (Frazier & Fodor, 1978) (Kimball, 1973) to describe the preference between Right Association and Minimal Attachment. Our approach shares some similarities with (Pereira, 1985), in that MA and RA simply \"correspond to two precise rules on how to choose between alternative parsing actions\" at certain parsing configurations. However, he did not offer a framework for how one of them will be preferred. According to our model the preference between them will be based on the weights associated with the two rules, the syntactic features of the words involved and the semantic preferences found between these words. Besides, the idea of syntactic preferences in this paper is more general than the one used in their work, since it includes not only the preference between MA and RA but other kinds of syntactic preferences as well. Wilks, Huang and Fass (1985) showed that prepositional phrase attachments are possible with only semantic information. In their approach syntactic preferences are limited to the order of matchings and the default attachment. Their attaching algorithm can be seen as a special case of the model we proposed here, in that, if they are correct, the preferences between the sequences of rules used for RA and MA would turn out to be very similar, so that the semantic preferences involved would generally overshadow their effects. There are some significant differences between our approach and some hybrid systems (Kwasny & Faisal, 1989) (Simmons & Yu, 1990). First, our approach is not a hybrid approach. Everything in our approach is still symbolic computing. Second, in our approach the costs of the applications of syntactic constraints are passed to a global search process. 
The search process will consider these costs along with the costs given by other types of constraints and make a decision globally. In a hybrid system, the syntactic decision is made by the network locally, and there is no intervention from the semantic processing. Third, our parser is nondeterministic, while the parsers in hybrid systems are deterministic, since there is no easy way to do backtracking. It is also easy to see that, without the intervention of semantic processing, lacking the ability to backtrack is hardly an acceptable strategy for a practical natural language parser. Finally, in our approach each P-rule has its own set of weights, while in the hybrid systems all the grammar rules share a common network, and it is quite likely that this net will be overloaded with information when a reasonably large grammar is used.", "cite_spans": [ { "start": 51, "end": 65, "text": "(Pereira, 1985", "ref_id": "BIBREF11" }, { "start": 66, "end": 90, "text": ") (Frazier & Fodor, 1978", "ref_id": null }, { "start": 93, "end": 108, "text": "(Kimball, 1973)", "ref_id": "BIBREF4" }, { "start": 229, "end": 244, "text": "(Pereira, 1985)", "ref_id": "BIBREF11" }, { "start": 899, "end": 927, "text": "Wilks, Huang and Fass (1985)", "ref_id": "BIBREF19" }, { "start": 1510, "end": 1532, "text": "(Kwasny & Faisal, 1989", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Comparisons", "sec_num": "4." }, { "text": "2 For example, in Longman Dictionary of Contemporary English (1978) there are only 14 different parts of speech. Given that interjections can be taken care of by a preprocessor and that the complex parts like v adv, v adv prep, v adv;prep, v prep can be represented as v plus some syntactic features, this number can be further reduced to 9. 3 If there are 10 different parts of speech, the total P-rule space is smaller than 10x10x10x3 = 3000. 4 Because of the searching algorithm we use, a lot of search paths sanctioned by P-rules will be pruned by semantic preferences. In this sense the parsing is actually guided by both syntactic preferences and semantic preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "All the experiments for this project were carried out on the platform given by PREMO, which was designed and implemented by Brian Slator when he was here in CRL. The author also wants to thank Dr Yorick Wilks, Dr David Farwell, Dr John Barnden and one referee for their comments. 
Of course, the author is solely responsible for all the mistakes in the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Recovery Strategies for Parsing Extragrammatical Language", "authors": [ { "first": "J", "middle": [ "G" ], "last": "Carbonell", "suffix": "" }, { "first": "P", "middle": [ "J" ], "last": "Hayes ; Fass", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1983, "venue": "American Journal of Computational Linguistics", "volume": "9", "issue": "", "pages": "178--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. G. Carbonell and P. J. Hayes, \"Recovery Strategies for Parsing Extragrammatical Language,\" American Journal of Computational Linguistics, vol. 9(3-4), pp. 123-146, 1983. 2. D. Fass and Y. Wilks, \"Preference Semantics, Ill-Formedness, and Metaphor,\" American Journal of Computational Linguistics, vol. 9(3-4), pp. 178-187, 1983.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Symbolic NeuroEngineering and natural language processing: a multilevel research approach", "authors": [ { "first": "M", "middle": [ "G" ], "last": "Dyer", "suffix": "" } ], "year": 1991, "venue": "Advances in Connectionist and Neural Computation Theory", "volume": "", "issue": "", "pages": "32--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. G. Dyer, \"Symbolic NeuroEngineering and natural language processing: a multilevel research approach,\" in Advances in Connectionist and Neural Computation Theory, Vol. 1, ed. J. A. Barnden and J. B. Pollack, pp. 32-86, Ablex Publishing Corp., Norwood, N.J., 1991.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Sausage Machine: A New Two-Stage Parsing Model", "authors": [ { "first": "L", "middle": [], "last": "Frazier", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Fodor", "suffix": "" } ], "year": 1978, "venue": "", "volume": "6", "issue": "", "pages": "291--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Frazier and J. D. Fodor, \"The Sausage Machine: A New Two-Stage Parsing Model,\" Cognition, vol. 6, pp. 291-325, 1978.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "XTRA: The design and Implementation of A Fully Automatic Machine Translation System (Doctoral Dissertation)", "authors": [ { "first": "X", "middle": [], "last": "Huang", "suffix": "" } ], "year": 1988, "venue": "Memoranda in Computer and Cognitive Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Huang, \"XTRA: The design and Implementation of A Fully Automatic Machine Translation System (Doctoral Dissertation),\" Memoranda in Computer and Cognitive Science, vol. MCCS-88-121, Computing Research Laboratory, New Mexico State University, Las Cruces, NM 88003, USA, 1988.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Seven Principles of Surface Structure Parsing in Natural Language", "authors": [ { "first": "J", "middle": [], "last": "Kimball", "suffix": "" } ], "year": 1973, "venue": "Cognition", "volume": "2", "issue": "", "pages": "15--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Kimball, \"Seven Principles of Surface Structure Parsing in Natural Language,\" Cognition, vol. 2, pp. 
15-47, 1973.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Relaxation theories for parsing ill-formed input", "authors": [ { "first": "C", "middle": [], "last": "Kwasny", "suffix": "" }, { "first": "N", "middle": [ "K" ], "last": "Sondheimer", "suffix": "" } ], "year": 1981, "venue": "American Journal of Computational Linguistics", "volume": "7", "issue": "2", "pages": "99--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Kwasny and N. K. Sondheimer, \"Relaxation theories for parsing ill-formed input,\" American Journal of Computational Linguistics, vol. 7, no. 2, pp. 99-108, 1981.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Competition and Learning in a Connectionist Deterministic Parser", "authors": [ { "first": "C", "middle": [], "last": "Kwasny", "suffix": "" }, { "first": "K", "middle": [ "A" ], "last": "Faisal", "suffix": "" } ], "year": 1989, "venue": "Procs. 11th Annual Conf. of the Cognitive Science Society", "volume": "", "issue": "", "pages": "635--642", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Kwasny and K. A. Faisal, \"Competition and Learning in a Connectionist Deterministic Parser,\" Procs. 11th Annual Conf. of the Cognitive Science Society, pp. 635-642, Lawrence Erlbaum, Hillsdale, N.J., 1989.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Symbolic/Subsymbolic Sentence Analysis: Exploiting the best of two worlds", "authors": [ { "first": "G", "middle": [], "last": "Lehnert", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "32--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Lehnert, \"Symbolic/Subsymbolic Sentence Analysis: Exploiting the best of two worlds,\" in Advances in Connectionist and Neural Computation Theory, Vol. 1, ed. J. A. Barnden and J. B. Pollack, pp. 32-86, Ablex Publishing Corp., Norwood, N.J., 1991.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Phrase Structure Account of Scandinavian Extraction Phenomena", "authors": [ { "first": "J", "middle": [], "last": "Maling", "suffix": "" }, { "first": "A", "middle": [], "last": "Zaenen", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Maling and A. Zaenen, \"A Phrase Structure Account of Scandinavian Extraction", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Nature of Syntactic Representation", "authors": [ { "first": "", "middle": [], "last": "Phenomena", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "229--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phenomena,\" in The Nature of Syntactic Representation, ed. P. Jacobson and G. K. Pullum, pp. 229-282, Reidel Publishing Company, Holland, 1982.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Principles of Artificial Intelligence", "authors": [ { "first": "N", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. 
Nilsson, Principles of Artificial Intelligence, Tioga Publishing Co., Menlo Park, 1980.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A New Characterization of Attachment Preference", "authors": [ { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 1985, "venue": "Natural Language Parsing", "volume": "", "issue": "", "pages": "307--319", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. C. N. Pereira, \"A New Characterization of Attachment Preference,\" in Natural Language Parsing, ed. D. R. Dowty, L. Karttunen & A. M. Zwicky, pp. 307-319, Cambridge University Press, Cambridge, 1985.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Integrated Processing Produces Robust Understanding", "authors": [ { "first": "M", "middle": [], "last": "Selfridge", "suffix": "" } ], "year": 1986, "venue": "Computational Linguistics", "volume": "12", "issue": "2", "pages": "89--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Selfridge, \"Integrated Processing Produces Robust Understanding,\" Computational Linguistics, vol. 12(2), pp. 89-106, 1986.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Training a Neural Network to be a Context Sensitive Grammar", "authors": [ { "first": "R", "middle": [ "F" ], "last": "Simmons", "suffix": "" }, { "first": "Y", "middle": [ "H" ], "last": "Yu", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 5th Rocky Mountain Conference on AI", "volume": "", "issue": "", "pages": "251--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. F. Simmons and Y. H. Yu, \"Training a Neural Network to be a Context Sensitive Grammar,\" Proceedings of the 5th Rocky Mountain Conference on AI, pp. 251-256, Las Cruces, NM, 1990.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Lexical Semantics and Preference Semantics Analysis (Doctoral Dissertation)", "authors": [ { "first": "B", "middle": [ "M" ], "last": "Slator", "suffix": "" } ], "year": 1988, "venue": "Memoranda in Computer and Cognitive Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. M. Slator, \"Lexical Semantics and Preference Semantics Analysis (Doctoral Dissertation),\" Memoranda in Computer and Cognitive Science, vol. MCCS-88-1, Computing Research Laboratory, New Mexico State University, Las Cruces, NM 88003, USA, 1988.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation", "authors": [ { "first": "D", "middle": [ "L" ], "last": "Waltz", "suffix": "" }, { "first": "J", "middle": [ "B" ], "last": "Pollack", "suffix": "" } ], "year": 1985, "venue": "Cognitive Science", "volume": "9", "issue": "", "pages": "51--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. L. Waltz and J. B. Pollack, \"Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation,\" Cognitive Science, vol. 9, pp. 51-74, 1985.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Meta-rules as a basis for processing ill-formed output", "authors": [ { "first": "R", "middle": [ "M" ], "last": "Weischedel", "suffix": "" }, { "first": "N", "middle": [ "K" ], "last": "Sondheimer", "suffix": "" } ], "year": 1983, "venue": "American Journal of Computational Linguistics", "volume": "9", "issue": "3-4", "pages": "161--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. M. Weischedel and N. K. 
Sondheimer, \"Meta-rules as a basis for processing ill-formed output,\" American Journal of Computational Linguistics, vol. 9, no. 3-4, pp. 161-177, 1983.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Preferential Pattern-Seeking Semantics for Natural Language Inference", "authors": [ { "first": "Y", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1975, "venue": "Artificial Intelligence", "volume": "6", "issue": "", "pages": "53--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Wilks, \"A Preferential Pattern-Seeking Semantics for Natural Language Inference,\" Artificial Intelligence, vol. 6, pp. 53-74, 1975.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Making Preferences More Active", "authors": [ { "first": "Y", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1978, "venue": "Artificial Intelligence", "volume": "11", "issue": "", "pages": "197--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Wilks, \"Making Preferences More Active,\" Artificial Intelligence, vol. 11, pp. 197-223, 1978.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Syntax, preference and right attachment", "authors": [ { "first": "Y", "middle": [ "A" ], "last": "Wilks", "suffix": "" }, { "first": "X", "middle": [], "last": "Huang", "suffix": "" }, { "first": "D", "middle": [], "last": "Fass", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "635--642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. A. Wilks, X. Huang and D. Fass, \"Syntax, preference and right attachment,\" IJCAI-85, pp. 635-642, 1985.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Definition (P-rule): A P-rule on Σ is a member of the set Σ' × Σ' × Σ' × A, where Σ' is Σ ∪ {nil} and A is the set of actions defined below. Definition (Action): Actions are defined as functions of type E → E, where E is the set of configurations. There are in total three different actions used in P-rules, and they are listed below: δ1[S,C,R] = [(cons C S), (list (car R)), (cdr R)] δ2[S,C,R] = [(cdr S), (cons (car C) (cons (car S) (cdr C))), R] δ3[S,C,R] = [(cons (append (cadr S) (list (car S))) (cddr S)), C, R] Action 1 simply pushes the current read-in on the stack and then reads the next word from the input. Action 2 attaches the top of the stack as the first child of the current read-in stored in the working register. Action 3 pops the top of the stack and then attaches it to the second top of the stack as its last child. The initial configuration for the P-parser is [nil, (list (car α)), (cdr α)]" }, "TABREF0": { "num": null, "content": "
In the parser, the searching states are pairs like:
(<partial result>, <unparsed sentence>)
", "type_str": "table", "text": "algorithm, the following things need to be specified: 1. what the states in the searching space are; 2. what the initial state is; 3. what the action rules are and, taking one state, how they create new states; 4. what the cost of creating a new node is (please note that the cost is actually charged to the edge of the searching graph); 5. what the final states are.", "html": null } } } }