{ "paper_id": "J89-1002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:02:27.390296Z" }, "title": "SYNTACTIC GRAPHS: A REPRESENTATION FOR THE UNION OF ALL AMBIGUOUS PARSE TREES", "authors": [ { "first": "Jungyun", "middle": [], "last": "Seo", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory", "institution": "University of Texas at Austin", "location": { "settlement": "Austin", "postCode": "78712-1188", "region": "TX" } }, "email": "" }, { "first": "Robert", "middle": [ "F" ], "last": "Simmons", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory", "institution": "University of Texas at Austin", "location": { "settlement": "Austin", "postCode": "78712-1188", "region": "TX" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present a new method of representing the surface syntactic structure of a sentence. Trees have usually been used in linguistics and natural language processing to represent syntactic structures of a sentence. A tree structure shows only one possible syntactic parse of a sentence, but in order to choose a correct parse, we need to examine all possible tree structures one by one. Syntactic graph representation makes it possible to represent all possible surface syntactic relations in one directed graph (DG). Since a syntactic graph is expressed in terms of a set of triples, higher level semantic processes can access any part of the graph directly without navigating the whole structure. Furthermore, since a syntactic graph represents the union of all possible syntactic readings of a sentence, it is fairly easy to focus on the syntactically ambiguous points. In this paper, we introduce the basic idea of syntactic graph representation and discuss its various properties. We claim that a syntactic graph carries complete syntactic information provided by a parse forest---the set of all possible parse trees. 
Triples, each of which consists of two nodes and an arc name, are generated while parsing a sentence. The parser collects all correct triples and constructs an exclusion matrix, which shows co-occurrence constraints among arcs, by navigating all possible parse trees in a shared, packed-parse forest.", "pdf_parse": { "paper_id": "J89-1002", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present a new method of representing the surface syntactic structure of a sentence. Trees have usually been used in linguistics and natural language processing to represent syntactic structures of a sentence. A tree structure shows only one possible syntactic parse of a sentence, but in order to choose a correct parse, we need to examine all possible tree structures one by one. Syntactic graph representation makes it possible to represent all possible surface syntactic relations in one directed graph (DG). Since a syntactic graph is expressed in terms of a set of triples, higher level semantic processes can access any part of the graph directly without navigating the whole structure. Furthermore, since a syntactic graph represents the union of all possible syntactic readings of a sentence, it is fairly easy to focus on the syntactically ambiguous points. In this paper, we introduce the basic idea of syntactic graph representation and discuss its various properties. We claim that a syntactic graph carries complete syntactic information provided by a parse forest---the set of all possible parse trees. Triples, each of which consists of two nodes and an arc name, are generated while parsing a sentence. The parser collects all correct triples and constructs an exclusion matrix, which shows co-occurrence constraints among arcs, by navigating all possible parse trees in a shared, packed-parse forest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In natural language processing, we use several rules and various items of knowledge to understand a sentence. 
Syntactic processing, which analyzes the syntactic relations among constituents, is widely used to determine the surface structure of a sentence, because it effectively shows the functional relations between constituents and is grounded in well-developed linguistic theory. Tree structures, called parse trees, represent syntactic structures of sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "In a natural language understanding system in which syntactic and semantic processes are separated, the semantic processor usually takes the surface syntactic structure of a sentence from the syntactic analyzer as input and processes it for further understanding. 1 Since there are many ambiguities in natural language parsing, syntactic processing usually generates more than one parse tree. Therefore, the higher level semantic processor should examine the parse trees one by one to choose a correct one. 2 Since possible parse trees of sentences in ordinary expository text often number in the hundreds, it is impractical to check parse trees one by one without knowing where the ambiguous points are. We have tried to reduce this problem by introducing a new structure, the syntactic graph, that can represent all possible parse trees effectively in a compact form for further processing. As we will show in the rest of this paper, since all syntactically ambiguous points are kept in a syntactic graph, we can easily focus on those points for further disambiguation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "Furthermore, syntactic graph representation can be naturally implemented in efficient, parallel, all-path parsers. One-path parsing algorithms, like the DCG (Pereira and Warren 1980), which enumerate all possible parse trees one by one with backtracking, usually have exponential complexity. 
All-path parsing algorithms explore all possible paths in parallel without backtracking (Earley 1970; Kay 1980; Chester 1980; Tomita 1985). These algorithms generate all possible parse trees efficiently; this kind of algorithm has complexity O(N^3) (Aho and Ullman 1972; Tomita 1985).", "cite_spans": [ { "start": 157, "end": 182, "text": "(Pereira and Warren 1980)", "ref_id": "BIBREF14" }, { "start": 382, "end": 394, "text": "(Earley 1970;", "ref_id": "BIBREF6" }, { "start": 395, "end": 404, "text": "Kay 1980;", "ref_id": "BIBREF9" }, { "start": 405, "end": 418, "text": "Chester 1980;", "ref_id": "BIBREF3" }, { "start": 419, "end": 431, "text": "Tomita 1985)", "ref_id": "BIBREF18" }, { "start": 554, "end": 575, "text": "(Aho and Ullman 1972;", "ref_id": "BIBREF0" }, { "start": 576, "end": 588, "text": "Tomita 1985)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "We use an all-path parsing algorithm to parse a sentence. Triples, each of which consists of two nodes and an arc name, are generated while parsing a sentence. The parser collects all correct triples and constructs an exclusion matrix, which shows co-occurrence constraints among arcs, by navigating all possible parse trees in a shared, packed-parse forest. We claim that a syntactic graph represented by the triples and an exclusion matrix contains all important syntactic information in the parse forest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "In the next section, we motivate this work with an example. Then we briefly introduce X-bar theory with head projection, which provides the basis of the graph representation, and the notation of graph representation in Section 3. The properties of a syntactic graph are detailed in Section 4. In Section 5, we introduce the idea of an exclusion matrix to limit possible tree interpretations of a graph representation. 
In Section 6, we define the completeness and soundness of the syntactic graph representation relative to parse trees, by presenting an algorithm that enumerates all syntactic readings from a syntactic graph using the exclusion matrix. We claim that those readings include all the possible syntactic readings of the corresponding parse forest. Finally, after discussing related work, we will suggest future research and draw some conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "We are currently investigating a model of natural language text understanding in which syntactic and semantic processors are separated. 4 Ordinarily, in this model, a syntactic processor constructs a surface syntactic structure of an input sentence, and then a higher level semantic processor processes it to understand the sentence---i.e., syntactic and semantic processors are pipelined. If the semantic processor fails to understand the sentence with a given parse tree, it should ask the syntactic processor for another possible parse tree. This cycle of processing will continue until the semantic processor finds the correct parse tree with which it succeeds in understanding the sentence.", "cite_spans": [ { "start": 136, "end": 137, "text": "4", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "Let us consider the following sentences, from Waltz (1982):", "cite_spans": [ { "start": 46, "end": 58, "text": "Waltz (1982)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "I saw a man on the hill with a telescope. I cleaned the lens to get a better view. When we read the first sentence, we cannot determine whether the man has a telescope or the telescope is used to see the man. 
This is known as the PP-attachment problem, and many researchers have proposed various ways to solve it (Frazier and Fodor 1979; Shubert 1984, 1986; Wilks et al. 1985). In this sentence, however, it is impossible to choose a correct syntactic reading in syntactic processing---even with commonsense knowledge. The ambiguities must remain until the system extracts more contextual knowledge from other input sentences.", "cite_spans": [ { "start": 313, "end": 337, "text": "(Frazier and Fodor 1979;", "ref_id": "BIBREF7" }, { "start": 338, "end": 350, "text": "Shubert 1984,", "ref_id": "BIBREF16" }, { "start": 351, "end": 365, "text": "1986;", "ref_id": "BIBREF17" }, { "start": 366, "end": 384, "text": "Wilks et al. 1985)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "The problems of tree structure representation in the pipelined, natural language processing model are the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "First, since the number of parse trees of a typical sentence in real text easily grows to several hundred, and it is impossible for the syntactic processor to resolve syntactic ambiguities by itself, a semantic processor must check all possible parse trees one by one until it is satisfied by some parse tree. 5 Second, since there is no information about where the ambiguous points are in a parse tree, the semantic processor should check all possibilities before accepting the parse tree. Third, although the semantic processor might be satisfied with a parse tree, the system should keep the status of the syntactic processor for a while, because there is a fair chance that the parse tree may become unsatisfactory after the system processes several more sentences. 
For example, attaching the prepositional phrase (PP) \"with a telescope\" to \"hill\" or \"man\" would be fine for the semantic processor, since there is nothing semantically wrong with these attachments. However, these attachments become unsatisfactory after the system understands the next sentence. Then, the semantic processor would have to backtrack and request from the syntactic processor another possible parse tree for the earlier sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "We propose the syntactic graph as the output structure of a syntactic processor. The syntactic graph of the first sentence in the previous example is shown in Figure 1. In this graph, nodes consist of the positions, the root forms, and the categories of words in the sentence. Each node represents a constituent whose head word is the word in the node. Each arc shows a dominator-modifier relationship between two nodes. The name of each arc is uniquely determined according to the grammar rule used to generate the arc. For example, the snp arc is generated from the grammar rule SNT → NP VP, vpp is from the rule VP → VP PP, and ppn from the rule PP → Prep NP, etc.", "cite_spans": [], "ref_spans": [ { "start": 159, "end": 167, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "As we can see in Figure 1, all syntactic readings are represented in a directed graph in which every ambiguity--lexical ambiguities from words with multiple syntactic categories and structural ambiguities from the ambiguous grammar--is kept. The nodes which are pointed to by more than one arc show the ambiguous points in the sentence, so the semantic processor can focus on those points to resolve the ambiguities. 
Furthermore, since a syntactic graph is represented by a set of triples, a semantic processor can directly access any part of a graph without traversing the whole. Finally, syntactic graph representation is compact enough to be kept in memory for a while. 6 3 X-BAR THEORY AND SYNTACTIC GRAPHS X-bar theory was proposed by Chomsky (1970) to explain various linguistic structural properties, and has been widely accepted in linguistic theories. In this notation, the head of any phrase is termed X, the phrasal category containing X is termed X-bar, and the phrasal category containing X-bar is termed X-double-bar. For example, the head of a noun phrase is N (noun), N-bar is an intermediate category, and N-double-bar corresponds to the noun phrase (NP). The general form of the phrase structure rules for X-bar theory is roughly as follows:", "cite_spans": [ { "start": 735, "end": 749, "text": "Chomsky (1970)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 17, "end": 25, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "X-double-bar → Y-bar* X-bar, X-bar → X Z-bar*,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "where * is a Kleene star.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "Y-bar is the phrase that specifies X, and Z-bar is the phrase that modifies X. 7 The properties of the head word of a phrase are projected onto the properties of the phrase. 
We can express a grammar with X-bar conventions to cover a wide range of English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "Since, in X-bar theory, a syntactic phrase consists of the head of the phrase and the specifiers and modifiers of the head, if there are two or more constituents in the right-hand side of a grammar rule, then there are dominator-modifier (DM) relationships between the head word and the specifier or modifier words in the phrase. Tsukada (1987) discovered that the DM relationship is effective for keeping all the syntactic ambiguities in a compact and handy structure without enumerating all possible syntactic parse trees. His representation, however, is too simple to maintain some important information about syntactic structure that will be discussed in detail in this paper, and hence fails to take full advantage of the DM-relationship representation.", "cite_spans": [ { "start": 328, "end": 342, "text": "Tsukada (1987)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "We use a slightly different representation to maintain more information in head-modifier relations. Each head-modifier relation is kept in a triple that is equivalent to an arc between two nodes (i.e., words) in a syntactic graph. The first element of a triple is the arc name, which represents the relation between the head and modifier nodes. The second element is the lexical information of the head node, and the third element is that of the modifier node. The direction of an arc is always from a head to a modifier node. 
For example, the triple [snp, [1,see,v] Figure 1 .", "cite_spans": [ { "start": 551, "end": 566, "text": "[snp, [1,see,v]", "ref_id": null } ], "ref_spans": [ { "start": 567, "end": 575, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "Since many words have more than one lexical entry, we have to keep the lexical information of each word in a triple so that we can distinguish different usages of a word in higher level processing. The triples corresponding to some common grammar rules are as follows: [[nl,Rl] [mod,[[n3,R3] [[n5,R5] lLs], [[n6,R61L6] ] Each ni represents the position, each Ri represents the root form, and each Li represents a list of the lexical information including the syntactic category of each word in a sentence. Parentheses signify optionality and the asterisk (*) allows repetition. Figure 2 shows the set of triples representing the syntactic graph in Figure I and the grammar rules used to parse the sentence. The sentence in Figure 2 has five possible parse trees in accordance with the grammar rules. All of the dependency information in those five parses is represented in the 12 triples. Those 12 triples represent all possible syntactic readings of the sentence with the grammar rules. Not all triples can co-occur in one syntactic reading in the case of an ambiguous sentence.", "cite_spans": [ { "start": 269, "end": 277, "text": "[[nl,Rl]", "ref_id": null }, { "start": 278, "end": 291, "text": "[mod,[[n3,R3]", "ref_id": null }, { "start": 292, "end": 300, "text": "[[n5,R5]", "ref_id": null }, { "start": 307, "end": 320, "text": "[[n6,R61L6] ]", "ref_id": null } ], "ref_spans": [ { "start": 578, "end": 586, "text": "Figure 2", "ref_id": null }, { "start": 648, "end": 656, "text": "Figure I", "ref_id": null }, { "start": 723, "end": 731, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "1. 
N--* Det N \u00a2=~ [det,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "lLl],[[n2,R2]lL2] ] 2. N-* Adj N \u00a2=~", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "lL3],[[n4,R4]ll4] ] 3. N --~ N Prep \u00a2=> [npp,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "The pointers of each triple are the list of the indices that are used as the pointers pointing to that triple. For example, Triple 2 in Figure 2 has a list of three indices as the pointers. Each of those indices can be used as a pointer to access the triple. These indices are actually used as the names of the triple. One triple may have more than one index. The issues of why and how to produce indices of triples will be discussed later in this section.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 144, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "Triple 3 in Figure 2 represents the vnp arc in Figure I between ", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 2", "ref_id": null }, { "start": 47, "end": 64, "text": "Figure I between", "ref_id": null } ], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "V' 8. V'-*V' NP vnp headef V' head of NP 9. V'--*V' PP vpp head of V' head of PP I0. 
V'--~verb verb SENTENCE:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MOTIVATIONAL EXAMPLE", "sec_num": "2" }, { "text": "3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "7.", "sec_num": null }, { "text": "10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "9.", "sec_num": null }, { "text": "12. [[1,see],categ,verb,tns,past] [ [3,man] ,categ,noun,nbr,sing] [vpp. [[1,see],categ,verb,tns.past] [ [4,on] [det, [[6,hill],categ,noun,nbr,sing] , [[5,the] ,categ, art,ty,def] ]", "cite_spans": [ { "start": 4, "end": 33, "text": "[[1,see],categ,verb,tns,past]", "ref_id": null }, { "start": 36, "end": 43, "text": "[3,man]", "ref_id": null }, { "start": 72, "end": 101, "text": "[[1,see],categ,verb,tns.past]", "ref_id": null }, { "start": 104, "end": 110, "text": "[4,on]", "ref_id": null }, { "start": 117, "end": 147, "text": "[[6,hill],categ,noun,nbr,sing]", "ref_id": null }, { "start": 150, "end": 158, "text": "[[5,the]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "11.", "sec_num": null }, { "text": "[ppn, [[4,on] ,categ,prep], [ [6,hill],categ,noun,nbr,sing] ] [vpp, [[1,see],categ, verb,tns,past] , [[7,with] ", "cite_spans": [ { "start": 6, "end": 13, "text": "[[4,on]", "ref_id": null }, { "start": 30, "end": 59, "text": "[6,hill],categ,noun,nbr,sing]", "ref_id": null }, { "start": 62, "end": 98, "text": "[vpp, [[1,see],categ, verb,tns,past]", "ref_id": null }, { "start": 101, "end": 110, "text": "[[7,with]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "I SAW AMANON THE HILL", "sec_num": null }, { "text": ",categ,prep] ]", "cite_spans": [], "ref_spans": [], 
"eq_spans": [], "section": "I SAW AMANON THE HILL", "sec_num": null }, { "text": "[npp, [[6,hill],categ,noun,nbr,sing], [[7,with] [[3,man],categ,noun,nbr,sing], [[7,with] An informal description of the algorithm for generating triples of a syntactic graph using the grammar rules in Figure 3 is the following: The basic algorithm of the parser is an all-path, bottom-up, chart parser that constructs a shared, packed-parse forest. Unlike an ordinary chart parser, the parser uses two charts, one for", "cite_spans": [ { "start": 6, "end": 36, "text": "[[6,hill],categ,noun,nbr,sing]", "ref_id": null }, { "start": 41, "end": 49, "text": "[7,with]", "ref_id": null }, { "start": 50, "end": 79, "text": "[[3,man],categ,noun,nbr,sing]", "ref_id": null }, { "start": 82, "end": 91, "text": "[[7,with]", "ref_id": null } ], "ref_spans": [ { "start": 204, "end": 212, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "I SAW AMANON THE HILL", "sec_num": null }, { "text": "constituents and the other for triples. Whenever the parser builds a constituent and its triple, the parser generates an index for the triple, 9 and records the triple on the chart of triples using the index. Then it records the constituent with the index of the triple on the chart of constituents. We use Rule 4 in Figure 3 to illustrate the parser. Rule 4 states that if there are two adjacent constituents, a be-aux followed by a vp, execute the procedure in the third argument position of the rule. The procedure contains the constraints that must be satisfied for the rule to fire. If the procedure succeeds, the parser records a new constituent [vp,Vhd]---the first argument of the rule---on the chart. 
Before the parser records the constituent, it must check the triples for the constituent. The procedure in the third argument position also contains the processes to produce the triples for the constituent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I SAW AMANON THE HILL", "sec_num": null }, { "text": "The fourth argument of a grammar rule is a list of triples produced by executing the augmenting procedure at the third argument position of the rule. If the constraints in the procedure are satisfied, the triples are also produced. The parser generates a unique index for each triple, records the triples on the chart of triples, and adds the indices of the new triples to the new constituent. Then, the new constituent is recorded on the chart of constituents. In this example, the head of the new constituent is the same as that of be-aux; i.e., the be-aux dominates the vp.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I SAW AMANON THE HILL", "sec_num": null }, { "text": "After finishing the construction of the shared, packed-parse forest of an input sentence, the parser navigates the parse forest to collect the triples. A packed node contains several nodes, each of which contains the category of the node, its head word, and the list of the pointers to its child nodes and the indices of the triples of the node. Node 1045 in Figure 4 is such a packed node. It is important to notice that the shared, packed-parse forest generated in this parser is different from that of other parsers. In the shared, packed-parse forest defined by Tomita (1985), any constituents that have the same category and span the same terminal nodes are regarded as the same constituent and packed into one node. 
In the parser for syntactic graphs, the packing condition is slightly different in that each constituent is identified by the head word of the constituent as well as the category and the terminals it spans. Therefore, although two nodes might have the same category and span the same terminals, if the nodes have different head words, then they cannot be packed together.", "cite_spans": [ { "start": 544, "end": 557, "text": "Tomita (1985)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 363, "end": 371, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "I SAW AMANON THE HILL", "sec_num": null }, { "text": "We first define several terms used frequently in the rest of the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I SAW AMANON THE HILL", "sec_num": null }, { "text": "Definition 1: An in-arc of a node in a syntactic graph is an arc which points to the node, and an out-arc of a node points away from the node. Since, in the syntactic graph representation, arcs point from dominator to modifier nodes, a node with an in-arc is the modifier node of the arc, and a node with an out-arc is the dominator node of the arc. In an ambiguous sentence, different syntactic analyses of the sentence may have different head verbs; thus there may be more than one root node in a syntactic graph. For example, in the syntactic graph of one famous and highly ambiguous sentence--\"Time flies like an arrow\"--shown in Figure 6, there are three different root nodes. 
These roots are [0,time,v], [1,fly,v], and [2,like,v]. 11", "cite_spans": [ { "start": 686, "end": 696, "text": "[0,time,v]", "ref_id": null }, { "start": 699, "end": 709, "text": "[1,fly,v]", "ref_id": null }, { "start": 716, "end": 727, "text": "[2,like,v]", "ref_id": null } ], "ref_spans": [ { "start": 621, "end": 629, "text": "Figure 6", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "I SAW AMANON THE HILL", "sec_num": null }, { "text": "The position of a node is the position of the word which is represented by the node, in a sentence. Since a word may have several syntactic categories, there may be more than one node with the same position in a syntactic graph. For example, since the word \"time\" in Figure 6, which appeared as the first word in the sentence, has two syntactic categories, noun and verb, there are two nodes, [0,time,n] and [0,time,v], in the syntactic graph, and the position of the two nodes is 0.", "cite_spans": [ { "start": 394, "end": 404, "text": "[0,time,n]", "ref_id": null }, { "start": 409, "end": 418, "text": "[0,time,v]", "ref_id": null } ], "ref_spans": [ { "start": 267, "end": 275, "text": "Figure 6", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Definition 4:", "sec_num": null }, { "text": "One of the most noticeable features of a syntactic graph is that ambiguities are explicit, and can be easily detected by semantic routines that may use further knowledge to resolve them. The following property explains how syntactically ambiguous points can be easily determined in a syntactic graph. Property 1: No two arcs pointing to the same node can co-occur in one reading, because each node can be a modifier node only once in one reading. Therefore, we can focus on the arcs pointing to the same node as ambiguous points. In terms of triples, any two triples with identical modifier terms reveal a point of ambiguity, where a modifier term is dominated by more than one node. 
In the example in Figure 1, the syntactic ambiguities are found in two arcs pointing to [4,on,p] and in three arcs pointing to [7,with,p]. The PP with head [4,on] modifies the VP whose head is [1,see], and it also modifies the NP with head [3,man]. Similarly, three different in-arcs to the node [7,with] show that there are three possible choices to which Node 7 can be attached. The semantic processor can focus on these three possibilities (or on the earlier set of two possibilities), using semantic information, to choose one dominator. Lacking semantic information, the ambiguities will remain in the graph until they can be resolved by additional knowledge from the context. Property 2: Since all words in a sentence must be used in every syntactic interpretation of the sentence and no word can have multiple categories in one interpretation, one and only one node from each position must participate in every reading of a syntactic graph. In other words, each syntactic reading derived from a syntactic graph must contain one and only one node from every position. Since every node, except the root node, must be attached to another node as a modifier node, we can conclude the following property from properties 1 and 2.", "cite_spans": [ { "start": 702, "end": 710, "text": "[4,on,p]", "ref_id": null }, { "start": 741, "end": 755, "text": "[7,with,p]", "ref_id": null }, { "start": 775, "end": 781, "text": "[4,on]", "ref_id": null }, { "start": 858, "end": 865, "text": "[3,man]", "ref_id": null }, { "start": 914, "end": 922, "text": "[7,with]", "ref_id": null } ], "ref_spans": [ { "start": 631, "end": 639, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Definition 4:", "sec_num": null }, { "text": "Property 3: In any one reading of a syntactic graph, the following facts must hold: 1. No two triples with the same modifier node can co-occur. 2. 
One and only one node from each position, except the root node of the reading, must appear as a modifier node. Another advantage of the syntactic graph representation is that we can easily extract the intersection of all possible syntactic readings from it. Since one node from each position must participate in every syntactic reading of a syntactic graph, every node which is not a root node and has only one in-arc must always be included in every syntactic reading. Such unambiguous nodes are common to the intersection of all possible readings. When we know the exact locations of several pieces in a jigsaw puzzle, it is much easier to place the other pieces. Similarly, if a semantic processor knows which arcs must hold in every reading, it can use these arcs to constrain inferences to understand and disambiguate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 4:", "sec_num": null }, { "text": "Property 4: There is no information in a syntactic graph about the range of terminals spanned by each triple, so one triple may represent several constituents which have the same head and modifying terms, with the same relation name, but which span different ranges of terminals. The compactness and handiness of a graph representation is based on this property. One arc between two nodes in a syntactic graph can replace several complicated structures in the tree representation, and multiple dominating arcs can replace a parse forest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 4:", "sec_num": null }, { "text": "For example, the arc vnp from [1,see,v] to [3,man,n] in Figure 1 represents three different constituents. Those constituents have the same category, vp1, the same head, [1,see,v], and the same modifier, [3,man,n], but have different inside structures of the modifying constituent, np, whose head is [3,man,n]. 
The modifying constituent, np, may span from [2,a] to [3,man] , from [2,a] to [6,hill] , or from [2,a] to [9,telescope] . Actually, in the exclusion matrix described below, each triple with differing constituent structure is represented by multiple subscripts to avoid the generation of trees that did not occur in the parse forest.", "cite_spans": [ { "start": 43, "end": 52, "text": "[3,man,n]", "ref_id": null }, { "start": 169, "end": 178, "text": "[1,see,v]", "ref_id": null }, { "start": 300, "end": 309, "text": "[3,man,n]", "ref_id": null }, { "start": 357, "end": 362, "text": "[2,a]", "ref_id": null }, { "start": 366, "end": 373, "text": "[3,man]", "ref_id": null }, { "start": 381, "end": 386, "text": "[2,a]", "ref_id": null }, { "start": 390, "end": 398, "text": "[6,hill]", "ref_id": null }, { "start": 409, "end": 414, "text": "[2,a]", "ref_id": null }, { "start": 418, "end": 431, "text": "[9,telescope]", "ref_id": null } ], "ref_spans": [ { "start": 56, "end": 64, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Definition 4:", "sec_num": null }, { "text": "Another characteristic of a syntactic graph is that the number of nodes in a graph is not always the same as the number of words in a sentence. Since some words may have several syntactic categories, and each category may lead to a syntactically correct parse, one word may require several nodes. For example, there are eight nodes in the syntactic graph in Figure 6 , while there are only five words in the sentence.", "cite_spans": [], "ref_spans": [ { "start": 356, "end": 364, "text": "Figure 6", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Definition 4:", "sec_num": null }, { "text": "A syntactic graph is clearly more compact than a parse forest and provides a good way of representing all possible syntactic readings with an efficient focusing mechanism for ambiguous points.
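As a concrete illustration (not the paper's own code), the triple set and the focusing mechanism just described can be sketched in Python for the running example "I see a man on the hill with a telescope"; the arc names (snp, vnp, vpp, npp, pnp) are assumptions for the sketch:

```python
# Hypothetical sketch of a syntactic graph as a set of triples.
# A node is (position, word, category); a triple is (head, modifier, arc).
I, SEE, MAN = ("0", "i", "n"), ("1", "see", "v"), ("3", "man", "n")
ON, HILL, WITH = ("4", "on", "p"), ("6", "hill", "n"), ("7", "with", "p")

TRIPLES = [
    (SEE, I, "snp"),     # subject
    (SEE, MAN, "vnp"),   # object
    (SEE, ON, "vpp"),    # "on the hill" attached to the VP ...
    (MAN, ON, "npp"),    # ... or to the NP: an ambiguity
    (ON, HILL, "pnp"),
    (SEE, WITH, "vpp"),  # three candidate dominators for "with"
    (MAN, WITH, "npp"),
    (HILL, WITH, "npp"),
]

def ambiguous_points(triples):
    """Group in-arcs by modifier node; nodes with several in-arcs are
    exactly the points a semantic processor must disambiguate."""
    in_arcs = {}
    for head, mod, arc in triples:
        in_arcs.setdefault(mod, []).append((head, arc))
    return {m: hs for m, hs in in_arcs.items() if len(hs) > 1}
```

Here ambiguous_points(TRIPLES) reports two in-arcs for [4,on,p] and three for [7,with,p]; every other arc has a single dominator and so belongs to the intersection of all readings.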
However, since one triple may represent several constituents, and there is no information about the relationships between triples, it is possible to lose some important syntactic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXCLUSION MATRIX", "sec_num": "5" }, { "text": "This section consists of two parts. In Section 5.1, we investigate a co-occurrence problem of arcs in a syntactic graph and suggest the exclusion matrix to avoid the problem. The algorithms to collect triples of a syntactic graph and to construct an exclusion matrix are presented in Section 5.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXCLUSION MATRIX", "sec_num": "5" }, { "text": "One of the most important syntactic relationships displayed in a tree-structured parse, but not in a syntactic graph, is the co-occurrence relationship between constituents. Since one parse tree represents one possible syntactic reading of a sentence, we can see whether any two constituents can co-occur in some reading by checking all parse trees one by one. However, since the syntactic graph keeps all possible constituents as a set of triples, it is sometimes difficult to determine whether two triples can co-occur.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CO-OCCURRENCE PROBLEM BETWEEN ARCS", "sec_num": "5.1" }, { "text": "If a syntactic graph does not carry the information about exclusive arcs, its representation of all possible syntactic structures may include interpretations not allowed by the grammar and cause extra overhead. For example, after a syntactic processor generates the triples, a semantic processor will focus on the ambiguous points such as triples 4 and 5, and triples 8, 9, and 10 in Figure 2 to resolve the ambiguities.
In this case, if the semantic processor has a strong clue to choose Triple 4 over Triple 5, it should not consider Triple 10 as a competing triple with triples 8 and 9, since 10 is exclusive with 4.", "cite_spans": [], "ref_spans": [ { "start": 384, "end": 392, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "CO-OCCURRENCE PROBLEM BETWEEN ARCS", "sec_num": "5.1" }, { "text": "Some of the co-occurrence problems can be detected easily. For example, due to Property 1, since there can be only one in-arc to any node in any one reading of a syntactic graph, the arcs that point to the same node cannot co-occur in any reading. Triples including these arcs are called exclusive triples. The following properties of the syntactic graph representation show several cases when arcs cannot co-occur. These cases, however, are not exhaustive. In the syntactic graph in Figure 1 , the arcs vpp from [1,see,v] to [4,on,p] and npp from [3,man,n] to [7,with,p] cannot co-occur in any legal parse tree because they violate the rule that branches in a parse tree cannot cross each other.", "cite_spans": [ { "start": 513, "end": 522, "text": "[1,see,v]", "ref_id": null }, { "start": 526, "end": 534, "text": "[4,on,p]", "ref_id": null }, { "start": 548, "end": 558, "text": "[3,man,n]", "ref_id": null }, { "start": 562, "end": 574, "text": "[7,with,p]", "ref_id": null } ], "ref_spans": [ { "start": 484, "end": 492, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "CO-OCCURRENCE PROBLEM BETWEEN ARCS", "sec_num": "5.1" }, { "text": "The following property shows another case of exclusive arcs which cross each other. Assume that there are two arcs---one is [npp, [5,W1,noun], [9,W2,conj]], and the other is [conjpp, [9,W2,conj], [3,W3,prep]]. The first arc says that the phrase with head word W2 is attached to the noun in position 5. The other says that the phrase with head word W3 is attached to the conjunction.
This attachment causes crossing branches. The corresponding parse tree for these two triples is in Figure 7 . As we can see, since there is a crossing branch, these two arcs cannot co-occur in any parse tree. The following property shows the symmetric case of Property 6.", "cite_spans": [], "ref_spans": [ { "start": 496, "end": 504, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "CO-OCCURRENCE PROBLEM BETWEEN ARCS", "sec_num": "5.1" }, { "text": "Property 7: In a syntactic graph, any modifier word which is on the left side of its head word cannot be modified by any word which is on the right side of the head word in a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CO-OCCURRENCE PROBLEM BETWEEN ARCS", "sec_num": "5.1" }, { "text": "Other exclusive arcs are due to lexical ambiguity, as in Figure 6 . Since the two arcs have the same word with the same position, but with different categories, they cannot co-occur in any syntactic reading. By examination of Figure 6 , we can determine that there are 25 pairwise combinations of exclusive arcs in the syntactic graph of that five-word sentence.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 59, "text": "Figure 6", "ref_id": "FIGREF8" }, { "start": 220, "end": 228, "text": "Figure 6", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "CO-OCCURRENCE PROBLEM BETWEEN ARCS", "sec_num": "5.1" }, { "text": "The above properties show cases of exclusive arcs but are not exhaustive.
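The strict crossing-branch case of Property 5 amounts to a projectivity test on arc spans. A hedged Python sketch (positions as integers; the shared-endpoint cases of Properties 6 and 7 would need a separate check, and this is not the paper's own code):

```python
def crosses(arc1, arc2):
    """True if two head->modifier arcs would cross each other, making
    the corresponding triples exclusive.  Each arc is a pair of word
    positions (head_pos, modifier_pos)."""
    lo1, hi1 = sorted(arc1)
    lo2, hi2 = sorted(arc2)
    # Arcs cross when their spans partially overlap: each span contains
    # exactly one endpoint of the other.
    return lo1 < lo2 < hi1 < hi2 or lo2 < lo1 < hi2 < hi1

# Property 5's example: vpp [1,see]->[4,on] crosses npp [3,man]->[7,with],
# while npp [6,hill]->[7,with] is compatible with vpp [1,see]->[4,on].
```

Nested spans (one arc entirely inside the other) correctly come out as non-crossing, which is why only partial overlap is tested.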
Since the number of pairs of exclusive arcs is often very large in real text (syntactically ambiguous sentences), if we ignore the co-occurrence information among triples, the overhead cost to the semantic processor may outweigh the advantage gained from syntactic graph representation. Therefore we have to constrain the syntactic graph representation to include co-occurrence information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CO-OCCURRENCE PROBLEM BETWEEN ARCS", "sec_num": "5.1" }, { "text": "We introduce the exclusion matrix for triples (arcs) to record constraints so that any two triples which cannot co-occur in any syntactic tree cannot co-occur in any reading of a syntactic graph. The exclusion matrix provides an efficient tool to decide which triples should be discarded when higher level processes choose one among ambiguous triples. For an exclusion matrix (Ematrix), we make an N x N matrix, where N is the number of indices of triples. If Ematrix(i,j) = 1, then the triples with the indices i and j cannot co-occur in any syntactic reading. If Ematrix(i,j) = 0, then the triples with the indices i and j can co-occur in some syntactic reading.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CO-OCCURRENCE PROBLEM BETWEEN ARCS", "sec_num": "5.1" }, { "text": "Since the several cases of exclusive arcs shown in the previous section are not exhaustive, they are not sufficient to construct a complete exclusion matrix from a syntactic graph. A complete exclusion matrix can be guaranteed by navigating the parse forest when the syntactic processor collects the triples in the forest to construct a syntactic graph. As we have briefly described in Section 3, when the parser constructs a shared, packed forest, triples are also produced, and their indices are kept in the corresponding nonterminal nodes in the forest.
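The exclusion-matrix bookkeeping just defined is small enough to sketch directly (an illustrative Python rendering, with the matrix kept symmetric and initialized to all-exclusive):

```python
class Ematrix:
    """N x N exclusion matrix over triple indices: 1 means the two
    triples can never co-occur; 0 means they co-occur in some reading."""

    def __init__(self, n):
        self.m = [[1] * n for _ in range(n)]
        for i in range(n):
            self.m[i][i] = 0  # a triple trivially co-occurs with itself

    def mark_cooccur(self, i, j):
        # Called whenever two triples are seen together in some parse tree.
        self.m[i][j] = self.m[j][i] = 0

    def exclusive(self, i, j):
        return self.m[i][j] == 1
```

Everything left at 1 after the forest has been navigated is a genuinely exclusive pair; a semantic processor that discards one triple can then drop every triple still marked exclusive with the survivors.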
12 The parser navigates the parse forest to collect the triples--in fact, pointers pointing to the triples--and to build an exclusion matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AN ALGORITHM TO CONSTRUCT THE EXCLUSION MATRIX", "sec_num": "5.2" }, { "text": "As we can see in the parse forest in Figure 5 , there may be several nonterminal nodes in one packed node. For each packed node, the parser collects all indices of triples in the subforests whose root nodes are the nonterminal nodes in the packed node, and then records those indices to the packed node. After the parser finishes collecting the indices of the triples in the parse forest, each packed node in the forest has a pointer to the list of collected indices from its subforest. Therefore, the root node of a parse forest has a pointer to the list of all indices of all possible triples in the whole forest, and those triples represent the syntactic graph of the forest. Figure 8 shows the upper part of the parse forest in Figure 5 after collecting triples. A hooked arrow of each nonterminal node points to the list of the indices of the triples that were added to the node in parsing. For example, Pointer 2 contains the indices of the triples added to the node snt by the grammar rule: snt → np + vp. A simple arrow for each packed node points to the list of all indices of the triples in the forest of which it is the root. This list is generated and recorded after the processor collects all indices of triples in its subnodes.
Therefore the arrow of the root node of the whole forest, Pointer 1, contains the list of all indices of the triples in the whole forest.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 45, "text": "Figure 5", "ref_id": null }, { "start": 679, "end": 687, "text": "Figure 8", "ref_id": "FIGREF9" }, { "start": 732, "end": 740, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "AN ALGORITHM TO CONSTRUCT THE EXCLUSION MATRIX", "sec_num": "5.2" }, { "text": "Since several indices may represent the same triple, after collecting all the indices of the triples in the parse forest, the parser removes duplicate triples in the final representation of the syntactic graph of a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AN ALGORITHM TO CONSTRUCT THE EXCLUSION MATRIX", "sec_num": "5.2" }, { "text": "Collecting pointers to triples in the subforest of a packed node and constructing the Ematrix is done recursively as follows: First, Ematrix(i,j) is initialized to 1, which means all arcs are marked exclusive of each other. Later, if any two arcs indexed with i and j co-occur in some parse tree, then the value Ematrix(i,j) is set to 0. For each nonterminal node in a packed node, the parser collects every index appearing below the nonterminal node--i.e., the indices of the triples of its subnodes. If a subnode of the nonterminal node was previously visited, and its indices were already collected, then the subnode already has the pointer to the list of collected indices. Therefore the parser does not need to navigate the same subforests again, but takes the indices using the pointer.
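A rough Python rendering of this memoized, recursive collection follows. The forest encoding (dicts with "alts", "triples", and "children" keys, and co-occurrence recorded as a set of index pairs, i.e., the 0-entries of the Ematrix) is an assumption for illustration, not the paper's data structure:

```python
def collect(packed, cooccur, memo):
    """Collect all triple indices below a packed node, memoizing each
    packed node, and record co-occurring pairs in `cooccur`."""
    if id(packed) in memo:                 # subforest already visited:
        return memo[id(packed)]            # reuse its pointer
    collected = set()
    for alt in packed["alts"]:             # each nonterminal alternative
        own = list(alt["triples"])
        kids = [collect(c, cooccur, memo) for c in alt["children"]]
        # Own triples co-occur with each other and with everything
        # collected below this alternative ...
        below = set(own).union(*kids) if kids else set(own)
        for i in own:
            for j in below:
                cooccur.add(frozenset((i, j)))
        # ... and left-child triples co-occur with right-child triples.
        if len(kids) == 2:
            for i in kids[0]:
                for j in kids[1]:
                    cooccur.add(frozenset((i, j)))
        collected |= below
    memo[id(packed)] = collected
    return collected

# Tiny shared forest: two alternatives at the root, one shared subforest.
P_NP = {"alts": [{"triples": [3], "children": []}]}
P_ROOT = {"alts": [
    {"triples": [0], "children": [P_NP]},     # parse A uses triples {0, 3}
    {"triples": [1, 2], "children": [P_NP]},  # parse B uses {1, 2, 3}
]}
```

Note that pairs inside a single child's collected set are deliberately not marked here; those are handled at the deeper recursion level, so triples from rival alternatives stay exclusive.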
The algorithm in pseudo-PASCAL code is in Figure 9 .", "cite_spans": [], "ref_spans": [ { "start": 842, "end": 850, "text": "Figure 9", "ref_id": "FIGREF11" } ], "eq_spans": [], "section": "AN ALGORITHM TO CONSTRUCT THE EXCLUSION MATRIX", "sec_num": "5.2" }, { "text": "After the parser collects the indices of the triples from the subnodes of the nonterminal node, it adjusts the values in the exclusion matrix according to the following cases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AN ALGORITHM TO CONSTRUCT THE EXCLUSION MATRIX", "sec_num": "5.2" }, { "text": "1. If the nonterminal node has one child node, its own triples can co-occur with each other, and with every collected triple from its subforest. 2. If the nonterminal node has two child nodes, its own triples can co-occur with each other and with the triples collected from both left and right child nodes, and the triples from the left child node can co-occur with the triples from the right one. This algorithm is described in Figure 10 .", "cite_spans": [], "ref_spans": [ { "start": 429, "end": 438, "text": "Figure 10", "ref_id": null } ], "eq_spans": [], "section": "AN ALGORITHM TO CONSTRUCT THE EXCLUSION MATRIX", "sec_num": "5.2" }, { "text": "For example, the process starts to collect the indices of the triples from the snt node in Figure 8 . Then, it collects the indices in the left subforest whose root is np. After all indices of triples in the subforest of np are collected, those indices and the indices of the triples of the node in 6 are recorded in 5. Similarly, all indices in 7 and 4 are recorded in 3 as the indices of the triples in the right subforest of the snt node. The indices in 5 and 3 and the indices in 2 are recorded in 1 as the indices of the triples of the whole parse forest.
In packed nodes with more than one nonterminal node, like vp1, all indices of the triples in the three subforests of vp1 and the indices in 8, 9, and 10 are collected and recorded in 7.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 8", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "AN ALGORITHM TO CONSTRUCT THE EXCLUSION MATRIX", "sec_num": "5.2" }, { "text": "By the first case in the above rule, every triple represented by the indices in 4 can co-occur with each other, and every triple represented by the indices in 4 can co-occur with every triple represented by the indices in 7. One example of the second case is that every triple represented by the indices in 2 can co-occur with each other, and every triple represented by the indices in 2 can co-occur with every triple represented by the indices in 5 and 3. Every triple represented by the indices in 5 can co-occur with the triples represented by the indices in 3. Whenever the process finds a pair of co-occurring triples, it adjusts the value of Ematrix appropriately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AN ALGORITHM TO CONSTRUCT THE EXCLUSION MATRIX", "sec_num": "5.2" }, { "text": "In this section, we will discuss the completeness and soundness of a syntactic graph with an exclusion matrix as an alternative to the tree representation of the syntactic information of a sentence. 1. (completeness) For every parse tree in the parse forest, there is a syntactic reading from the syntactic graph that is structurally equivalent to that parse tree. 2. (soundness) For every syntactic reading from the syntactic graph, there is a parse tree in the forest that is structurally equivalent to that syntactic reading.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "COMPLETENESS AND SOUNDNESS OF THE SYNTACTIC GRAPH", "sec_num": "6" }, { "text": "To show the completeness and soundness of the syntactic graph representation, we present the algorithm that enumerates all possible syntactic readings from a syntactic graph using an exclusion matrix. This algorithm constructs subgraphs of the syntactic graph, one at a time.
Each of these subgraphs is equivalent to one reading of the syntactic graph. Since no node can modify itself, each of these subgraphs is a directed acyclic graph (DAG). Furthermore, since every node in each of these subgraphs can have no more than one in-arc, the DAG subgraph is actually a tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(soundness)", "sec_num": null }, { "text": "Before going into detail, we give an intuitive description of the algorithm. The algorithm has two lists of triples as input: a list of triples of a syntactic graph and a list of root triples. A root triple is a triple that represents the highest-level constituent in a parse--i.e., snt (sentence) in the grammar in Figure 2 . The head node of a root triple is usually the head verb of a sentence reading.", "cite_spans": [], "ref_spans": [ { "start": 316, "end": 324, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "(soundness)", "sec_num": null }, { "text": "According to Property 3 in Section 4, one reading of a syntactic graph must include one and only one node from every position, except the position of the root node, as a modifier node. This is a necessary requirement for any subgraph of a syntactic graph to be one reading of the graph. One of the simplest ways to make a subgraph of a syntactic graph that satisfies this requirement is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(soundness)", "sec_num": null }, { "text": "Make partitions among triples according to the position of the modifier node of the triples, e.g., triples in Partition 0 have the first word in a sentence as their modifier node. Then take one triple from each partition. Here, the algorithm must know the position of the root node so that it can exclude the partition in which triples have the root node as a modifier.
When", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(soundness)", "sec_num": null }, { "text": "it chooses a triple, it also must check the exclusion matrix. If a triple from a partition is exclusive with any of the triples already chosen, the triple cannot be included in that reading. The algorithm must try another triple in that partition. Since the exclusion matrix is based on the indices of the triples, when it chooses a triple, it actually chooses an index in the list of indices of the triple.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics, Volume 15, Number 1, March 1989", "sec_num": null }, { "text": "Note that any subgraphs produced in this way satisfy Property 3, and all triples in each subgraph are inclusive with each other according to the exclusion matrix. The top level procedures of the algorithm in Prolog are shown in Figure 11 . 13 We do not have a rigorous proof of the correctness of the algorithm, but we present an informal discussion about how this algorithm can generate all and only the correct syntactic readings from a syntactic graph.", "cite_spans": [ { "start": 240, "end": 242, "text": "13", "ref_id": null } ], "ref_spans": [ { "start": 228, "end": 237, "text": "Figure 11", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Computational Linguistics, Volume 15, Number 1, March 1989", "sec_num": null }, { "text": "Since the syntactic graph of a sentence is explicitly constructed as a union of all parse trees in the parse forest of the sentence, the triples of the syntactic graph imply all the parse trees. This fact is due to the algorithm that constructs a syntactic graph from a parse forest. Therefore, if we can extract all possible syntactic readings from the graph, these readings will include all possible (and more) parse trees in the forest. 
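The partition-and-check enumeration described above can be sketched in Python (the Ematrix lookup is abstracted to a caller-supplied `exclusive` predicate; this is an illustration, not the paper's Prolog of Figure 11):

```python
def readings(triples, exclusive, root_pos):
    """Enumerate candidate readings: partition triples by the position
    of their modifier node, drop the root's partition, then choose one
    triple per partition, rejecting any choice that is exclusive with
    a triple already chosen."""
    parts = {}
    for head, mod, arc in triples:
        if mod[0] != root_pos:             # skip the root's own position
            parts.setdefault(mod[0], []).append((head, mod, arc))
    order = sorted(parts)
    out = []

    def extend(k, chosen):
        if k == len(order):
            out.append(list(chosen))       # one triple per partition
            return
        for t in parts[order[k]]:
            if all(not exclusive(t, c) for c in chosen):
                chosen.append(t)
                extend(k + 1, chosen)
                chosen.pop()               # backtrack

    extend(0, [])
    return out

# PP-attachment fragment: "on" can hang from the verb or from "man".
SEE, MAN, ON = ("1", "see", "v"), ("3", "man", "n"), ("4", "on", "p")
TRIPLES = [
    (SEE, ("0", "i", "n"), "snp"),
    (SEE, MAN, "vnp"),
    (SEE, ON, "vpp"),
    (MAN, ON, "npp"),
]
```

With no exclusions beyond the one-triple-per-partition rule, the two in-arcs to "on" yield exactly two readings, mirroring the observation that each subgraph built this way satisfies Property 3.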
Intuitively, the set of all subgraphs of a syntactic graph includes all syntactic readings of a syntactic graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics, Volume 15, Number 1, March 1989", "sec_num": null }, { "text": "In fact, this algorithm generates all possible subgraphs of a syntactic graph that meet the necessary conditions imposed by Property 3. The predicate gen_sub1 generates one reading with a given root triple. All readings with a root triple are exhaustively collected by the predicate gen_subgraph1 using the setof predicate--a meta predicate in Prolog. All readings of a syntactic graph are produced by the predicate gen_subgraph, which calls the predicate gen_subgraph1 for each root triple in RootList. Therefore, this algorithm generates all subgraphs of a syntactic graph that satisfy Property 3 and that are consistent with the exclusion matrix. Hence, the set of subgraphs generated by the algorithm includes all parse trees in the forest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics, Volume 15, Number 1, March 1989", "sec_num": null }, { "text": "The above algorithm checks the exclusion matrix when it generates subgraphs from the syntactic graph, so all triples in each subgraph generated by the algorithm are guaranteed to co-occur with each other in the exclusion matrix. Unfortunately, it does not appear possible to prove that if triples, say T1 and T2, T2 and T3, and T1 and T3, all co-occur in pairs, they must all three co-occur in the same tree!
So, although empirically all of our experiments have generated only trees from the forest, the exclusion matrix does not provide mathematical assurance of soundness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics, Volume 15, Number 1, March 1989", "sec_num": null }, { "text": "If subsequent experience with our present statistically satisfactory, but unsound exclusion matrix requires it, we can produce, instead, an inclusion matrix that guarantees soundness. The columns of this matrix are I.D. numbers for each parse tree; the rows are triples. The following procedure constructs the matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics, Volume 15, Number 1, March 1989", "sec_num": null }, { "text": "1. Navigate the parse forest to extract a parse tree, I, and collect triples appearing in that parse tree. 2. Mark matrix(Tindex, I) = 1, for each triple with the index Tindex appearing in the I-th parse tree. Backtrack to step 1 to extract another possible parse tree until all parse trees are exhausted. Then, given a column number i, all triples marked in that column co-occur in the i-th parse tree. Since this algorithm must navigate all possible parse trees one by one, it is less efficient than the algorithm for constructing the exclusion matrix. 
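The inclusion-matrix construction just outlined can be sketched as follows (parse trees abstracted to sets of triple indices, an assumption for illustration):

```python
def build_inclusion(parse_trees, n_triples):
    """One row per triple index, one column per parse tree:
    matrix[t][i] = 1 iff triple t appears in the i-th tree."""
    m = [[0] * len(parse_trees) for _ in range(n_triples)]
    for i, tree in enumerate(parse_trees):
        for t in tree:
            m[t][i] = 1
    return m

def in_single_tree(m, triples):
    """True iff all the given triples are marked in one column, i.e.
    they co-occur in a single parse tree--the unequivocal guarantee
    the pairwise exclusion matrix cannot give."""
    return any(all(m[t][i] for t in triples) for i in range(len(m[0])))

# Two parses over five triples: triples 1 and 3 are each compatible
# with triple 0 pairwise, yet never share a tree.
TREES = [{0, 1, 2}, {0, 3, 4}]
M = build_inclusion(TREES, 5)
```

The price of this guarantee is visible in the construction: the forest must be unpacked into individual trees, column by column, rather than navigated once.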
But if our present system eventually proves unsound, this inclusion matrix guarantees that we can test any set of constituents to determine unequivocally if they occur in a single parse tree from the forest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics, Volume 15, Number 1, March 1989", "sec_num": null }, { "text": "Therefore, we claim that syntactic graphs enable us to enumerate all and only the syntactic readings given in a parse forest, and that syntactic graph representation is complete and sound compared to tree representations of the syntactic structure of a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics, Volume 15, Number 1, March 1989", "sec_num": null }, { "text": "Several researchers have proposed variant representations for syntactic structure. Most of them, however, concentrated on how to use the new structure in the parsing process. Syntactic graph representation in this work does not affect any parsing strategy, but is constructed after the syntactic processor finishes generating a parse forest using any all-path parser. Marcus et al. (1983) propose a parsing representation that is also different from tree representation. They use the new representation for a syntactic structure of a sentence to preserve information, while modifying the structure during parsing, so that they can solve the problems of a deterministic parser (Marcus 1980 )--i.e., parsing garden path sentences. Marcus's representation consists of dominator-modifier relationships between two nodes. It is, however, doubtful that a correct parse tree can be derived from the final structure, which consists of only domination relationships. They do not represent all possible syntactic readings in one structure.
Barton and Berwick (1985) also discuss the possibility of a different representation, an \"assertion set\", as an alternative for trees, and show various advantages expected from the new structure. As in Marcus's work, they use the assertion set to preserve information as parsing progresses, so that they can make a deterministic parser partially noncommittal when the parser handles ambiguous phrases. Their representation consists of sets of assertions. Each assertion that represents a constituent is a triple that has the category name and the range of terminals that the constituent spans. It is unclear how to represent dominance relationships between constituents with assertion sets, and whether the final structure represents all possible parses or parts of the parses. Rich et al. (1987) also propose a syntactic representation in which all syntactic ambiguities are kept. In this work, the ambiguous points are represented as one modifier with many possible dominators. Since, however, this work also does not consider possible problems of exclusive attachments, their representation loses some information present in a parse forest. Tomita (1985) also suggests a disambiguation process, using his shared, packed-parse forest, in which all possible syntactic ambiguities are stored. The disambiguation process navigates a parse forest, and asks a user whenever it meets an ambiguous packed node. It does a \"shaving-a-forest\" operation, which traverses the parse forest to delete ambiguous branches. Deleting one arc accomplishes the \"shave\" in the syntactic graph representation. Furthermore, in a parse forest, the ambiguous points can be checked only by navigating the forest and are not explicit.", "cite_spans": [ { "start": 369, "end": 390, "text": "Marcus et al. 
(1983)", "ref_id": "BIBREF12" }, { "start": 678, "end": 690, "text": "(Marcus 1980", "ref_id": "BIBREF11" }, { "start": 1032, "end": 1057, "text": "Barton and Berwick (1985)", "ref_id": "BIBREF1" }, { "start": 1817, "end": 1836, "text": "Rich et. al. (1987)", "ref_id": "BIBREF15" }, { "start": 2184, "end": 2197, "text": "Tomita (1985)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "RELATED WORKS", "sec_num": "7" }, { "text": "Since a parse forest does not allow direct access to its internal structure, a semantic processor would have to traverse the forest whenever it needed to check internal relations to generate case relations and disambiguate without a user's guidance. Syntactic graph representation provides a more concise and efficient structure for higher level processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RELATED WORKS", "sec_num": "7" }, { "text": "In this paper, we propose the syntactic graph with an exclusion matrix as a new representation of the surface syntactic structure of a sentence. Several properties of syntactic graphs are examined. An algorithm that enumerates all and only the correct syntactic readings from syntactic graph is also presented. Therefore, we claim that syntactic graph representation provides a concise way to represent all possible syntactic readings in one structure without losing any useful information contained in the tree structured representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RELATED WORKS", "sec_num": "7" }, { "text": "To further justify that syntactic graph representation is a suitable formalism for an output format of syntactic processes, we need to investigate methods for using syntactic graphs to make correct decisions in higher level processes. 
The exclusion matrix is an efficient tool to help semantic processes make correct choices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RELATED WORKS", "sec_num": "7" }, { "text": "Because of its conciseness, the syntactic graph makes it possible to store temporarily the syntactic structure of sentences that have already been processed. A text understanding process is very likely to find contradicting evidence between a current sentence and the context of the previous sentences. If we did not keep alternative analyses of previous sentences, the only thing we could do would be to backtrack, which is computationally too expensive. Furthermore, since the search space of the syntactic processor is different from that of the semantic processor, it is very important for the syntactic process to commit to a final result. We are currently investigating how to use syntactic graphs of previous sentences to maintain a continuous context whose ambiguity is successively reduced by additional incoming sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RELATED WORKS", "sec_num": "7" }, { "text": "Computational Linguistics, Volume 15, Number 1, March 1989", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Jungyun Seo and Robert F. Simmons Syntactic Graphs: A Representation for the Union of All Ambiguous Parse Trees", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is sponsored by the Army Research Office under contract DAAG29-84-K-0060.
The authors are grateful to Olivier Winghart for his critical review of an earlier draft of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACKNOWLEDGMENTS", "sec_num": null }, { "text": "Due to the complexity of the diagram, some of the details are omitted. Not all different readings of a syntactic graph have different root nodes. In this example, [0,time,v] is the root node of two different readings of the graph with simple grammar rules. The equivalent parse trees of the two readings are: [snt,[vp,[verb,[time]],[np,[np,[noun,[flies]]],[pp,[prep,[like]],[np,[det,[an]],[noun,[arrow]]]]]]] and [snt,[vp,[vp,[verb,[time]],[np,[noun,[flies]]]],[pp,[prep,[like]],[np,[det,[an]],[noun,[arrow]]]]]]. 12. The node in a forest is different from the node in a syntactic graph. A non-terminal node in a forest with two children nodes has one head-modifier relation, and hence the non-terminal with two children in a forest represents one arc in a syntactic graph. 13. We use the syntax of Quintus-Prolog version 2 on SUN systems. The special predicate, ( Cond -> Then ; Else ), in the algorithm can be interpreted as: if Cond is true, then call Then.
Otherwise, call Else.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11.", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Theory of Parsing, Translation and Compiling 1", "authors": [ { "first": "A", "middle": [ "V" ], "last": "Aho", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1972, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aho, A. V. and Ullman, J. D. 1972 The Theory of Parsing, Translation and Compiling 1. Prentice-Hall, Englewood Cliffs, NJ.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Parsing with Assertion Sets and Information Monotonicity", "authors": [ { "first": "G", "middle": [ "E" ], "last": "Barton", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Berwick", "suffix": "" } ], "year": 1985, "venue": "Proceedings of International Joint Conference on Artificial Intelligence", "volume": "85", "issue": "", "pages": "769--771", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barton, G. E. and Berwick, R. C. 
1985 \"Parsing with Assertion Sets and Information Monotonicity.\" In Proceedings of International Joint Conference on Artificial Intelligence-85 (IJCAI-85): 769-771.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Conceptual analysis of natural language", "authors": [ { "first": "L", "middle": [], "last": "Birnbaum", "suffix": "" }, { "first": "M", "middle": [], "last": "Selfridge", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Birnbaum, L. and Selfridge, M. 1981 \"Conceptual analysis of natural language.\" In R. Schank and C. Riesbeck, eds., Inside Computer Understanding. Lawrence Erlbaum, Hillsdale, NJ.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Parsing Algorithm that extends Phrases", "authors": [ { "first": "D", "middle": [], "last": "Chester", "suffix": "" } ], "year": 1980, "venue": "American Journal of Computational Linguistics", "volume": "6", "issue": "2", "pages": "87--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chester, D. 1980 \"A Parsing Algorithm that extends Phrases.\" American Journal of Computational Linguistics 6 (2): 87-96.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Remarks on nominalization", "authors": [ { "first": "N", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1970, "venue": "Readings in English Transformational Grammar", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chomsky, N. 1970 \"Remarks on nominalization.\" In R. Jacobs and P. S. Rosenbaum, Eds., Readings in English Transformational Grammar.
Waltham, MA: Ginn & Co.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An Efficient Context-free Parsing algorithm", "authors": [ { "first": "J", "middle": [], "last": "Earley", "suffix": "" } ], "year": 1970, "venue": "Comm ACM", "volume": "13", "issue": "2", "pages": "94--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Earley, J. 1970 \"An Efficient Context-free Parsing Algorithm.\" Comm. ACM 13 (2): 94-102.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Sausage Machine: A New Two-Stage Parsing Model", "authors": [ { "first": "L", "middle": [], "last": "Frazier", "suffix": "" }, { "first": "J", "middle": [], "last": "Fodor", "suffix": "" } ], "year": 1979, "venue": "Cognition", "volume": "6", "issue": "", "pages": "41--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frazier, L. and Fodor, J. 1979 \"The Sausage Machine: A New Two-Stage Parsing Model.\" Cognition 6: 41-58.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Algorithm Schemata and Data Structures in Syntactic Processing", "authors": [ { "first": "M", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kay, M. 1980 \"Algorithm Schemata and Data Structures in Syntactic Processing.\" Xerox Corporation, Technical Report Number CSL-80-12, Palo Alto, CA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Dynamically Combining syntax and semantics in natural language processing", "authors": [ { "first": "S", "middle": [ "L" ], "last": "Lytinen", "suffix": "" } ], "year": 1986, "venue": "Proceedings of The American Association for Artificial Intelligence-86 (AAAI-86", "volume": "", "issue": "", "pages": "574--578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lytinen, S. L.
1986 \"Dynamically Combining syntax and semantics in natural language processing.\" In Proceedings of The American Association for Artificial Intelligence-86 (AAAI-86): 574-578.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Theory of Syntactic Recognition for Natural Language", "authors": [ { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, M. P. 1980 A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "D-Theory: Talking about Talking about Trees", "authors": [ { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "D", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "M", "middle": [ "M" ], "last": "Fleck", "suffix": "" } ], "year": 1983, "venue": "Proceedings of 21st", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, M. P.; Hindle, D.; and Fleck, M. M.
1983 \"D-Theory: Talking about Talking about Trees.\" In Proceedings of 21st", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "129--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics: 129--136.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Definite Clause Grammars --A survey of the formalism and a Comparison with Augmented Transition Network", "authors": [ { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" }, { "first": "D", "middle": [ "H" ], "last": "Warren", "suffix": "" } ], "year": 1980, "venue": "Artificial Intelligence", "volume": "13", "issue": "", "pages": "231--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, F. C. N. and Warren, D. H. 1980 \"Definite Clause Grammars --A survey of the formalism and a Comparison with Augmented Transition Network.\" Artificial Intelligence, 13:231-278.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Ambiguity Procrastination", "authors": [ { "first": "A", "middle": [], "last": "Rich", "suffix": "" }, { "first": "J", "middle": [], "last": "Barnett", "suffix": "" }, { "first": "K", "middle": [], "last": "Wittenburg", "suffix": "" }, { "first": "D", "middle": [], "last": "Wroblewski", "suffix": "" } ], "year": 1987, "venue": "Proceedings of AAAI", "volume": "87", "issue": "", "pages": "571--576", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich, A.; Barnett, J.; Wittenburg, K.; and Wroblewski, D. 
1987 \"Ambiguity Procrastination.\" In Proceedings of AAAI-87: 571-576.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "On Parsing Preferences", "authors": [ { "first": "L", "middle": [ "K" ], "last": "Schubert", "suffix": "" } ], "year": 1984, "venue": "Proceedings of the Conference on Computational Linguistics", "volume": "84", "issue": "", "pages": "247--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schubert, L. K. 1984 \"On Parsing Preferences.\" In Proceedings of the Conference on Computational Linguistics-84, Stanford, CA: 247-250.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Are There Preference Trade-Offs in Attachment Decision?", "authors": [ { "first": "L", "middle": [ "K" ], "last": "Schubert", "suffix": "" } ], "year": 1986, "venue": "In Proceedings of AAAI", "volume": "86", "issue": "", "pages": "601--605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schubert, L. K. 1986 \"Are There Preference Trade-Offs in Attachment Decision?\" In Proceedings of AAAI-86: 601-605.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Efficient Parsing for Natural Language", "authors": [ { "first": "M", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomita, M. 1985 Efficient Parsing for Natural Language. Kluwer Academic Publishers, Boston, MA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Using Dominator-Modifier Relations to Disambiguate a Sentence", "authors": [ { "first": "D", "middle": [], "last": "Tsukada", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsukada, D.
1987 \"Using Dominator-Modifier Relations to Disambiguate a Sentence\" (master's thesis), Department of Computer Sciences, University of Texas at Austin.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The State of the Art in Natural Language Understanding", "authors": [ { "first": "D", "middle": [ "L" ], "last": "Waltz", "suffix": "" } ], "year": 1982, "venue": "Strategies for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Waltz, D. L. 1982 \"The State of the Art in Natural Language Understanding.\" In W. Lehnert and M. Ringle (eds.), Strategies for Natural Language Processing, Lawrence Erlbaum Associates, Inc., Hillsdale, NJ.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Syntax, Preference and Right Attachment", "authors": [ { "first": "Y", "middle": [], "last": "Wilks", "suffix": "" }, { "first": "X", "middle": [], "last": "Huang", "suffix": "" }, { "first": "D", "middle": [], "last": "Fass", "suffix": "" } ], "year": 1985, "venue": "In Proceedings of International Joint Conference on Artificial Intelligence", "volume": "85", "issue": "", "pages": "779--784", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilks, Y.; Huang, X.; and Fass, D. 1985 \"Syntax, Preference and Right Attachment.\" In Proceedings of International Joint Conference on Artificial Intelligence-85 (IJCAI-85): 779-784.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Processing Model for Recognition of Discourse Coherence Relations", "authors": [ { "first": "O", "middle": [ "J" ], "last": "Winghart", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Winghart, O. J. 1986 \"A Processing Model for Recognition of Discourse Coherence Relations\" (unpublished Ph.D proposal), Department of Computer Sciences, University of Texas at Austin.
NOTES", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Since we are discussing the syntactic representation, we use the term \"semantic processor\" for all higher level processors including the semantic, coherence, and discourse processors", "authors": [ { "first": "!", "middle": [], "last": "", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "1. Since we are discussing the syntactic representation, we use the term \"semantic processor\" for all higher level processors including the semantic, coherence, and discourse processors.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "By \"correct\" is meant semantically correct. Here, semantics has a broad meaning including pragmatics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "By \"correct\" is meant semantically correct. Here, semantics has a broad meaning including pragmatics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Borrowing a term from Tomita's (1985) system. Although we present an example of a shared, packed-parse forest in Section 3, we refer readers to (Tomita 1985) for more detailed discussion and examples", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Borrowing a term from Tomita's (1985) system.
Although we present an example of a shared, packed-parse forest in Section 3, we refer readers to (Tomita 1985) for more detailed discussion and examples.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "There are different views of text processing in which syntactic and semantic processors are integrated (Birnbaum and Selfridge", "authors": [], "year": 1981, "venue": "Lytinen", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "There are different views of text processing in which syntactic and semantic processors are integrated (Birnbaum and Selfridge 1981; Lytinen 1986; Winghart 1986). However, detailed discussion of other control flows is beyond the scope of this work.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "It is transmitted by eating shellfish such as oysters living in infected waters, or by drinking infected water, or by dirt from soiled fingers", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "For the sentence, \"It is transmitted by eating shellfish such as oysters living in infected waters, or by drinking infected water, or by dirt from soiled fingers\", there are 1433 parses from our context-free grammar.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "In our experience, a graph representing several hundred parse trees may take less than three times the number of triples as one representing a single interpretation graph for the sentence", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "In our experience, a graph representing several hundred parse trees may take less than three times the number of triples as one representing a single interpretation graph for the sentence.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "In a syntactic graph, however, we call both modifier and
specifier nodes modifier nodes.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "From now on, we will use the terms node and word, as well as arc and triple", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "From now on, we will use the terms node and word, as well as arc and triple, interchangeably.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A unique index can be generated using the special function gensym which returns a unique symbol whenever it is called", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A unique index can be generated using the special function gensym which returns a unique symbol whenever it is called.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Syntactic Graph of the Example Sentence.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "see,v], and the node [3,man,n] represents an NP with head word [3,man,n]. [1,see,v] becomes the head word, and [3,man,n] becomes the modifier word, of this triple. The number 1 in [1,see,v] is the position of the word \"see\" in the sentence, and v (verb) is the syntactic category of the word. Since a word may appear in several positions in a sentence, and one word may have multiple categories, the position and the category of a word must be recorded to distinguish the same word in different positions or with different categories. A meaningful relation name is assigned to each pair of head and modifier constituents in a grammar rule. Some of these are shown at the top of Figure 2.
Rules for generating triples augment each corresponding grammar rule. Some grammar rules in Prolog syntax used to build syntactic graphs are shown in Figure 3.", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "Augmented grammar rules for triple generation.", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": "is a packed node in which three different constituents are packed. Those three constituents have the same category, vpl, span the same terminals (from [1,see,v] to [9,telescope,n]), with the same head word ([1,see,v]), but with different internal substructures. Note that several constituents may have different indices that point to the same triple. For example, in Figure 4, the first vpl in the packed node 1045 has the index 21, the first vpl in the packed node 1027 has the index 10, and the vpl node in the packed node 1013 has the index 13 as the indices of their triples. Actually, these three indices represent the same triple, Triple 3 in Figure 2. Those three constituents have the same category, vpl, the same head, [1,see,v], and the same modifier, [3,man,n], but have different inside structures of the modifying constituent, np, whose head is [3,man,n]. The modifying constituent, np, may span from [2,a] to [3,man], from [2,a] to [6,hill], or from [2,a] to [9,telescope]. There are different types of triples that do not have head-modifier relations. These types of triples are for syntactic characteristics of a sentence such as mood and voice of verbs. For example, grammar rule 4 in Figure 3 generates not only triples of head-modifier relations, but also triples that have the information about the voice or progressiveness of the head word of the VP, depending on the type of inflection of the word.
This kind of information can be determined in syntactic processing and is used effectively in higher level semantic processing.", "type_str": "figure" }, "FIGREF5": { "num": null, "uris": null, "text": "Shared, Packed-Parse Forest. 4 PROPERTIES OF SYNTACTIC GRAPHS participate in each correct syntactic analysis of the sentence. The collecting algorithm is explained in Section 5.2 in detail. The representation of the shared, packed-parse forest for the example in Figure 2 is in figures 4 and 5.10 It", "type_str": "figure" }, "FIGREF6": { "num": null, "uris": null, "text": "A reading of the syntactic graph of a sentence is one syntactic interpretation of the sentence in the syntactic graph. Since a syntactic graph is a union of syntactic analyses of a sentence, one reading of a syntactic graph is analogous to one parse tree of a parse forest. Definition 3: A root node of one reading of a syntactic graph is a node which has no in-arc in the reading. In most cases, the root node of a reading of the syntactic graph of a sentence is the head verb of the sentence in that syntactic interpretation. In a syntactically ambigu- Shared, Packed-Parse Forest: A Diagram.", "type_str": "figure" }, "FIGREF8": { "num": null, "uris": null, "text": "Representation and Parse Trees of a Highly Ambiguous Sentence.", "type_str": "figure" }, "FIGREF9": { "num": null, "uris": null, "text": "Parse Forest Augmented with Triples.", "type_str": "figure" }, "FIGREF10": { "num": null, "uris": null, "text": "An Algorithm to Construct the Exclusion Matrix.", "type_str": "figure" }, "FIGREF11": { "num": null, "uris": null, "text": "An Algorithm to Collect Triples.", "type_str": "figure" }, "FIGREF12": { "num": null, "uris": null, "text": "The following data are initial input. Partition I = a list of triples which have the I-th word as a modifier. Sen_length = the position of the last word in a sentence.
RootList = a list of root triples.

gen_subgraph(RootList, Sen_length, Graphs, All_readings) :-
    ( RootList = []              % if all root triples in RootList have been tried,
    -> All_readings = Graphs     % then return Graphs as all readings;
    ;  % otherwise, find all readings with a RootTriple.
       RootList = [RootTriple|RootList1],
       gen_subgraph1(RootTriple, Sen_length, Sub_graphs),
       append(Graphs, Sub_graphs, Graphs1),
       gen_subgraph(RootList1, Sen_length, Graphs1, All_readings) ).

gen_subgraph1(RootTriple, Sen_length, Sub_graphs) :-
    Rh = Position of the head node in RootTriple,      % i.e., position of the root node
    Rm = Position of the modifier node in RootTriple,
    Wlist = [RootTriple],
    setof(Graph, gen_sub1(Rh, Rm, Sen_length, Wlist, Graph, 0), Sub_graphs).

gen_sub1(Rh, Rm, Sen_length, Wlist, Graph, N) :-
    ( N > Sen_length     % if a triple has been taken from all partitions,
    -> Graph = Wlist     % then return Wlist as one reading of a syntactic graph;
    ;  % otherwise, pick one triple from partition N.
       ( ( Rh = N        % don't pick up any triple from the root node position;
         ; Rm = N )      % a triple from partition Rm is already picked in Wlist.
       -> N1 is N + 1,
          gen_sub1(Rh, Rm, Sen_length, Wlist, Graph, N1)
       ;  get_triple(N, Triple),         % take a triple (in fact, an index of the triple) from partition N.
          not_exclusive(Wlist, Triple),  % check exclusiveness of Triple with other triples in Wlist.
          N1 is N + 1,                   % go to the next partition.
          gen_sub1(Rh, Rm, Sen_length, [Triple|Wlist], Graph, N1) ) ).", "type_str": "figure" }, "FIGREF13": { "num": null, "uris": null, "text": "Algorithm that Generates All and Only Readings from an SG.", "type_str": "figure" }, "TABREF1": { "html": null, "content": "
two nodes, [1,see,v] and [3,man,n]. The node [1,see,v] represents a VP with GRAMMAR RULES AND CORRESPONDING TRIPLES:
Grammar rules        arc-name   head          modifier
1. SNT --> NP VP     snp        head of VP    head of NP
2. NP --> art NP     det        head of NP    art
3. NP --> N'                    head of N'
4. N' --> N' PP      npp        head of N'    head of PP
5. N' --> noun                  noun
6. PP --> prep NP    ppn        prep          head of NP
7. VP --> V'                    head of
", "text": "", "num": null, "type_str": "table" }, "TABREF4": { "html": null, "content": "
WITH A TELESCOPE
Pointers
[22]
[02, 09, 20]
[03, 10, 21]
[13, 24]
[08, 19]
[06, 17]
[07, 18]
[28]
[18]
,categ,prep]][25]
[det, [[9,telescope],categ,noun,nbr,sing],
[[8,a],categ,art,ty,indef] ][14]
[ppn, [[7,with],categ,prep],
[[9,telescope],categ,noun,nbr,sing] ][15]
", "text": "", "num": null, "type_str": "table" }, "TABREF5": { "html": null, "content": "
( true ),
[[det, Nhd, Det]]).

% 3. np --> npl
gr([np, Nhd],
   [[npl, Nhd]],   % since there is only one constituent
   ( true ),       % in here,
   [ ]).           % no triple will be generated in this rule

% 4. vp --> be_aux + vp   (be + vp), either passive or progressive
gr([vp, Aux],
   [[be_aux, Aux], [vp, Vhd]],
   ( mempr([inflection, INFL], Vhd),
     ( INFL = paprt    % if inflection of vp is passive
     ->                % participle, then
       Triples = [[be_aux, Aux, Vhd], [voice, Vhd, passive]]
     ;                 % otherwise,
       ( INFL = prprt  % if inflection is present participle
       ->              % then,
", "text": "% 1. snt --> np + vp
gr([snt, Vhd],              % rule
   [[np, Nhd], [vp, Vhd]],  % categories and heads of RHS
   ( true ),                % constraints, in this case, none
   [[snp, Vhd, Nhd]]).      % list of triples generated; Vhd is head word, Nhd is modifier
% 2. np --> article + npl
gr([np, Nhd],
   [[art, Det], [npl, Nhd]],", "num": null, "type_str": "table" }, "TABREF7": { "html": null, "content": "
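The gr/4 rules above can be approximated in Python. This is a sketch only: the rule names, the (category, head) pair encoding, and the word tuples are invented for illustration and are not the paper's actual representation.

```python
# Sketch of applying an augmented grammar rule: each rule takes the
# (category, head_word) pairs of its RHS constituents and returns the
# LHS constituent plus the triples it generates, in the spirit of the
# gr/4 clauses in Figure 3.

def apply_rule(rule, constituents):
    heads = dict(constituents)          # category -> head word
    if rule == "snt":                   # 1. snt --> np + vp : emit an snp triple
        vhd, nhd = heads["vp"], heads["np"]
        return ("snt", vhd), [("snp", vhd, nhd)]
    if rule == "np_det":                # 2. np --> article + npl : emit a det triple
        nhd, det = heads["npl"], heads["art"]
        return ("np", nhd), [("det", nhd, det)]
    if rule == "np_npl":                # 3. np --> npl : no triple generated
        return ("np", heads["npl"]), []
    raise ValueError(f"unknown rule: {rule}")
```

For the sentence head pair of "I see a man ...", applying the snt rule to an NP headed by [3,man,n] and a VP headed by [1,see,v] yields the snp triple with [1,see,v] as head and [3,man,n] as modifier, matching Triple 1 of Figure 2.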
Property 1: In a syntactic tree, each constituent except the root must by definition be dominated by a single constituent. Since a syntactic graph is the union of all syntactic trees that the grammar derives from a sentence, some graph nodes may be dominated by more than one node; such nodes with multiple dominators have multiple in-arcs in the syntactic graph and show points at which the node
", "text": "", "num": null, "type_str": "table" }, "TABREF8": { "html": null, "content": "
Figure 7 Illegal Parse Tree from Exclusive Arcs.
Nodes: [3,W3,prep], [5,W1,n], [9,W2,conj].
as a head and a modifier, and another arc has the m1-th and m2-th words as a head and a modifier node, then, if n1 < m1 < n2 < m2 or m1 < n1 < m2 < n2, the two arcs cannot co-occur.
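The crossing condition just stated translates directly into code; a minimal sketch (arc endpoints are word positions, the function name is ours):

```python
def arcs_cross(arc_a, arc_b):
    """True when the two arcs' endpoint positions interleave, i.e.
    n1 < m1 < n2 < m2 or m1 < n1 < m2 < n2, so the arcs are exclusive
    and cannot co-occur in one reading."""
    n1, n2 = sorted(arc_a)   # endpoints of the first arc
    m1, m2 = sorted(arc_b)   # endpoints of the second arc
    return n1 < m1 < n2 < m2 or m1 < n1 < m2 < n2
```

Nested arcs (one span inside the other) and disjoint arcs both fail the test, as the property requires; only genuinely interleaved spans are marked exclusive.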
", "text": "Conj", "num": null, "type_str": "table" }, "TABREF10": { "html": null, "content": "
Definition 5: If two nodes, Wi and Wj, in a syntactic graph have the same word and the same position but with different categories, Wi is in conflict with Wj, and we say the two nodes are conflicting nodes.
Property 8: Since words cannot have more than one syntactic category in one reading, any two arcs which have conflicting nodes as either a head or a modifier cannot co-occur.
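Definition 5 and Property 8 can be sketched as follows; the (position, word, category) tuple encoding of a node is illustrative, not the paper's representation:

```python
# A node is (position, word, category). Two nodes conflict when they share
# position and word but differ in category; per Property 8, arcs touching
# conflicting nodes cannot co-occur in one reading.

def nodes_conflict(a, b):
    return a[0] == b[0] and a[1] == b[1] and a[2] != b[2]

def arcs_compatible(arc_a, arc_b):
    """Each arc is (head_node, modifier_node, arc_name)."""
    return not any(nodes_conflict(x, y)
                   for x in arc_a[:2] for y in arc_b[:2])
```

This reproduces the exclusion noted in the table text below: the vnp arc ending at [1,fly,n] and the vpp arc starting at [1,fly,v] touch conflicting nodes and so are exclusive.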
", "text": "The example of exclusive arcs involves the vpp arc from [1,fly,v] to [2,like,p] and the vnp arc from [0,time,v] to [1,fly,n] in the graph in", "num": null, "type_str": "table" } } } }