{ "paper_id": "C04-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:19:49.329799Z" }, "title": "A Deterministic Word Dependency Analyzer Enhanced With Preference Learning", "authors": [ { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Communication Science Laboratories NTT Corporation", "location": { "addrLine": "2-4 Hikaridai", "postCode": "619-0237", "settlement": "Seikacho, Kyoto", "region": "Sourakugun", "country": "Japan" } }, "email": "isozaki@cslab.kecl.ntt.co.jp" }, { "first": "Hideto", "middle": [], "last": "Kazawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Communication Science Laboratories NTT Corporation", "location": { "addrLine": "2-4 Hikaridai", "postCode": "619-0237", "settlement": "Seikacho, Kyoto", "region": "Sourakugun", "country": "Japan" } }, "email": "kazawa@cslab.kecl.ntt.co.jp" }, { "first": "Tsutomu", "middle": [], "last": "Hirao", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Communication Science Laboratories NTT Corporation", "location": { "addrLine": "2-4 Hikaridai", "postCode": "619-0237", "settlement": "Seikacho, Kyoto", "region": "Sourakugun", "country": "Japan" } }, "email": "hirao@cslab.kecl.ntt.co.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word dependency is important in parsing technology. Some applications such as Information Extraction from biological documents benefit from word dependency analysis even without phrase labels. Therefore, we expect an accurate dependency analyzer trainable without using phrase labels is useful. Although such an English word dependency analyzer was proposed by Yamada and Matsumoto, its accuracy is lower than state-of-the-art phrase structure parsers because of the lack of top-down information given by phrase labels. This paper shows that the dependency analyzer can be improved by introducing a Root-Node Finder and a Prepositional-Phrase Attachment Resolver. Experimental results show that these modules based on Preference Learning give better scores than Collins' Model 3 parser for these subproblems. We expect this method is also applicable to phrase structure parsers.", "pdf_parse": { "paper_id": "C04-1040", "_pdf_hash": "", "abstract": [ { "text": "Word dependency is important in parsing technology. Some applications such as Information Extraction from biological documents benefit from word dependency analysis even without phrase labels. Therefore, we expect an accurate dependency analyzer trainable without using phrase labels is useful. Although such an English word dependency analyzer was proposed by Yamada and Matsumoto, its accuracy is lower than state-of-the-art phrase structure parsers because of the lack of top-down information given by phrase labels. This paper shows that the dependency analyzer can be improved by introducing a Root-Node Finder and a Prepositional-Phrase Attachment Resolver. Experimental results show that these modules based on Preference Learning give better scores than Collins' Model 3 parser for these subproblems. We expect this method is also applicable to phrase structure parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word dependency is important in parsing technology. Figure 1 shows a word dependency tree. Eisner (1996) proposed probabilistic models of dependency parsing. 
Collins (1999) used dependency analysis for phrase structure parsing. It is also studied by other researchers (Sleator and Temperley, 1991; Hockenmaier and Steedman, 2002) . However, statistical dependency analysis of English sentences without phrase labels is not studied very much while phrase structure parsing is intensively studied. Recent studies show that Information Extraction (IE) and Question Answering (QA) benefit from word dependency analysis without phrase labels. (Suzuki et al., 2003; Sudo et al., 2003) Recently, Yamada and Matsumoto (2003) proposed a trainable English word dependency analyzer based on Support Vector Machines (SVM). They did not use phrase labels by considering annotation of documents in expert domains. SVM (Vapnik, 1995) has shown good performance in dif- (Kudo and Matsumoto, 2001; Isozaki and Kazawa, 2002) . Most machine learning methods do not work well when the number of given features (dimensionality) is large, but SVM is relatively robust. In Natural Language Processing, we use tens of thousands of words as features. Therefore, SVM often gives good performance.", "cite_spans": [ { "start": 91, "end": 104, "text": "Eisner (1996)", "ref_id": "BIBREF5" }, { "start": 158, "end": 172, "text": "Collins (1999)", "ref_id": "BIBREF3" }, { "start": 268, "end": 297, "text": "(Sleator and Temperley, 1991;", "ref_id": "BIBREF17" }, { "start": 298, "end": 329, "text": "Hockenmaier and Steedman, 2002)", "ref_id": "BIBREF8" }, { "start": 638, "end": 659, "text": "(Suzuki et al., 2003;", "ref_id": "BIBREF19" }, { "start": 660, "end": 678, "text": "Sudo et al., 2003)", "ref_id": "BIBREF18" }, { "start": 689, "end": 716, "text": "Yamada and Matsumoto (2003)", "ref_id": "BIBREF21" }, { "start": 904, "end": 918, "text": "(Vapnik, 1995)", "ref_id": "BIBREF20" }, { "start": 954, "end": 980, "text": "(Kudo and Matsumoto, 2001;", "ref_id": "BIBREF12" }, { "start": 981, "end": 1006, "text": "Isozaki and Kazawa, 2002)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 52, "end": 60, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "However, the accuracy of Yamada's analyzer is lower than state-of-the-art phrase structure parsers such as Charniak's Maximum-Entropy-Inspired Parser (MEIP) (Charniak, 2000) and Collins' Model 3 parser. One reason is the lack of top-down information that is available in phrase structure parsers.", "cite_spans": [ { "start": 157, "end": 173, "text": "(Charniak, 2000)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "In this paper, we show that the accuracy of the word dependency parser can be improved by adding a base-NP chunker, a Root-Node Finder, and a Prepositional Phrase (PP) Attachment Resolver. We introduce the base-NP chunker because base NPs are important components of a sentence and can be easily annotated. Since most words are contained in a base NP or are adjacent to a base NP, we expect that the introduction of base NPs will improve accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "We introduce the Root-Node Finder because Yamada's root accuracy is not very good. Each sentence has a root node (word) that does not modify any other words and is modified by all other words directly or indirectly. 
Here, the root accuracy is defined as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "Root Accuracy (RA) = #correct root nodes / #sentences (= 2,416) We think that the root node is also useful for dependency analysis because it gives global information to each word in the sentence. Root node finding can be solved by various machine learning methods. If we use classifiers, however, two or more words in a sentence can be classified as root nodes, and sometimes none of the words in a sentence is classified as a root node. Practically, this problem is solved by getting a kind of confidence measure from the classifier. As for SVM, f (x) defined below is used as a confidence measure. However, f (x) is not necessarily a good confidence measure.", "cite_spans": [ { "start": 54, "end": 63, "text": "(= 2,416)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "Therefore, we use Preference Learning proposed by Herbrich et al. (1998) and extended by Joachims (2002) . In this framework, a learning system is trained with samples such as \"A is preferable to B\" and \"C is preferable to D.\" Then, the system generalizes the preference relation, and determines whether \"X is preferable to Y\" for unseen X and Y. This framework seems better than SVM to select best things.", "cite_spans": [ { "start": 50, "end": 72, "text": "Herbrich et al. (1998)", "ref_id": "BIBREF6" }, { "start": 89, "end": 104, "text": "Joachims (2002)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "On the other hand, it is well known that attachment ambiguity of PP is a major problem in parsing. Therefore, we introduce a PP-Attachment Resolver. The next sentence has two interpretations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "He saw a girl with a telescope.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "1) The preposition 'with' modifies 'saw.' That is, he has the telescope. 2) 'With' modifies 'girl.' That is, she has the telescope.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "Suppose 1) is the correct interpretation. Then, \"with modifies saw\" is preferred to \"with modifies girl.\" Therefore, we can use Preference Learning again.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "Theoretically, it is possible to build a new Dependency Analyzer by fully exploiting Preference Learning, but we do not because its training takes too long.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Analysis", "sec_num": "1.1" }, { "text": "Preference Learning is a simple modification of SVM. Each training example for SVM is a pair (y i , x i ), where x i is a vector, y i = +1 means that x i is a positive example, and y i = \u22121 means that x i is a negative example. 
SVM classifies a given test vector x by using a decision function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "f (x) = w f \u2022 \u03c6(x) + b = i y i \u03b1 i K(x, x i ) + b,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "where {\u03b1 i } and b are constants and is the number of training examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "K(x i , x j ) = \u03c6(x i ) \u2022 \u03c6(x j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "is a predefined kernel function. \u03c6(x) is a function that maps a vector x into a higher dimensional space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "Training of SVM corresponds to the following quadratic maximization (Cristianini and Shawe-Taylor, 2000) ", "cite_spans": [ { "start": 68, "end": 104, "text": "(Cristianini and Shawe-Taylor, 2000)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "W (\u03b1) = i=1 \u03b1 i \u2212 1 2 i,j=1 \u03b1 i \u03b1 j y i y j K(x i , x j ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "where 0 \u2264 \u03b1 i \u2264 C and i=1 \u03b1 i y i = 0. C is a soft margin parameter that penalizes misclassification. On the other hand, each training example for Preference Learning is given by a triplet", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "(y i , x i.1 , x i.2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": ", where x i.1 and x i.2 are vectors. We use x i. * to represent the pair (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "x i.1 , x i.2 ). y i = +1 means that x i.1 is preferable to x i.2 . We can regard their difference \u03c6(x i.1 ) \u2212 \u03c6(x i.2 ) as a positive ex- ample and \u03c6(x i.2 ) \u2212 \u03c6(x i.1 ) as a negative example. Symmetrically, y i = \u22121 means that x i.2 is prefer- able to x i.1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "Preference of a vector x is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "g(x) = w g \u2022\u03c6(x) = i y i \u03b1 i (K(x i.1 , x)\u2212K(x i.2 , x)). If g(x) > g(x ) holds, x is preferable to x . Since Preference Learning uses the difference \u03c6(x i.1 ) \u2212 \u03c6(x i.2 ) instead of SVM's \u03c6(x i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": ", it corresponds to the following maximization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "W (\u03b1) = i=1 \u03b1 i \u2212 1 2 i,j=1 \u03b1 i \u03b1 j y i y j K(x i. * , x j. 
* )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "0 \u2264 \u03b1 i \u2264 C and K(x i. * , x j. * ) = K(x i.1 , x j.1 ) \u2212 K(x i.1 , x j.2 ) \u2212 K(x i.2 , x j.1 ) + K(x i.2 , x j.2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "The above linear constraint i=1 \u03b1 i y i = 0 for SVM is not applied to Preference Learning because SVM requires this constraint for the optimal b, but there is no b in g(x). Although SVM light (Joachims, 1999) provides an implementation of Preference Learning, we use our own implementation because the current SVM light implementation does not support non-linear kernels and our implementation is more efficient.", "cite_spans": [ { "start": 192, "end": 208, "text": "(Joachims, 1999)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "Herbrich's Support Vector Ordinal Regression (Herbrich et al., 2000) is based on Preference Learning, but it solves an ordered multiclass problem. Preference Learning does not assume any classes.", "cite_spans": [ { "start": 45, "end": 68, "text": "(Herbrich et al., 2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "SVM and Preference Learning", "sec_num": "1.2" }, { "text": "Instead of building a word dependency corpus from scratch, we use the standard data set for comparison. That is, we use Penn Treebank's Wall Street Journal data (Marcus et al., 1993) . Sections 02 through 21 are used as training data (about 40,000 sentences) and section 23 is used as test data (2,416 sentences). We converted them to word dependency data by using Collins' head rules (Collins, 1999) .", "cite_spans": [ { "start": 161, "end": 182, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF14" }, { "start": 385, "end": 400, "text": "(Collins, 1999)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "The proposed method uses the following procedures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "\u2022 A base NP chunker: We implemented an SVM-based base NP chunker, which is a simplified version of Kudo's method (Kudo and Matsumoto, 2001) . We use the 'one vs. all others' backward parsing method based on an 'IOB2' chunking scheme. 
By the chunking, each word is tagged as -B: Beginning of a base NP, -I: Other elements of a base NP.", "cite_spans": [ { "start": 113, "end": 139, "text": "(Kudo and Matsumoto, 2001)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "-O: Otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "Please see Kudo's paper for more details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "\u2022 A Root-Node Finder (RNF): We will describe this later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "\u2022 A Dependency Analyzer: It works just like Yamada's Dependency Analyzer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "\u2022 A PP-Attatchment Resolver (PPAR): This resolver improves the dependency accuracy of prepositions whose part-of-speech tags are IN or TO.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "The above procedures require a part-of-speech tagger. Here, we extract part-of-speech tags from the Collins parser's output (Collins, 1997) for section 23 instead of reinventing a tagger. According to the document, it is the output of Ratnaparkhi's tagger (Ratnaparkhi, 1996) . Figure 2 shows the architecture of the system. PPAR's output is used to rewrite the output of the Dependency Analyzer.", "cite_spans": [ { "start": 124, "end": 139, "text": "(Collins, 1997)", "ref_id": "BIBREF2" }, { "start": 256, "end": 275, "text": "(Ratnaparkhi, 1996)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 278, "end": 286, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "When we use SVM, we regard root-node finding as a classification task: Root nodes are positive examples and other words are negative examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "For this classification, each word w i in a tagged sentence T = (w 1 /p 1 , . . . , w i /p i , . . . , w N /p N ) is characterized by a set of features. 
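As a concrete illustration of this classification/scoring view, the following minimal sketch (ours, not the authors' code) selects the root of a tagged sentence as the word whose feature vector receives the highest confidence; extract_features and score are assumed helpers, where score may be SVM's f(x) or the preference function g(x).

```python
def find_root(tagged_sentence, extract_features, score):
    # tagged_sentence: list of (word, POS) pairs for one sentence
    # extract_features: assumed helper mapping (sentence, position) -> feature vector
    # score: trained confidence function, e.g. SVM's f(x) or the preference score g(x)
    best_i, best_s = 0, float('-inf')
    for i in range(len(tagged_sentence)):
        s = score(extract_features(tagged_sentence, i))
        if s > best_s:
            best_i, best_s = i, s
    return best_i  # index of the predicted root word
```

With Preference Learning, only the relative order of the scores within one sentence matters, which is exactly what the training triplets described below encode.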
Since the given POS tags are sometimes too specific, we introduce a rough part-of-speech q i defined as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 q = N if p = NN, NNP, NNS, NNPS, PRP, PRP$, POS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 q = V if p = VBD, VB, VBZ, VBP, VBN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 q = J if p = JJ, JJR, JJS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "Then, each word is characterized by the following features, and is encoded by a set of boolean variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 The word itself w i , its POS tags p i and q i , and its base NP tag", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "b i = B, I, O.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "We introduce boolean variables such as current word is John and current rough POS is J for each of these features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 Previous word w i\u22121 and its tags,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "p i\u22121 , q i\u22121 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "and b i\u22121 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 Next word w i+1 and its tags, p i+1 , q i+1 , and b i+1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 The set of left words {w 0 , . . . , w i\u22121 }, and their tags,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "{p 0 , . . . , p i\u22121 }, {q 0 , . . . , q i\u22121 }, and {b 0 , . . . , b i\u22121 }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "We use boolean variables such as one of the left words is Mary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 The set of right words {w i+1 , . . . , w N }, and their POS tags, {p i+1 , . . . , p N } and {q i+1 , . . . , q N }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 Whether the word is the first word or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "We also add the following boolean features to get more contextual information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 Existence of verbs or auxiliary verbs (MD) in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 The number of words between w i and the nearest left comma. 
We use boolean variables such as nearest left comma is two words away.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "\u2022 The number of words between w i and the nearest right comma.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "Now, we can encode training data by using these boolean features. Each sentence is converted to the set of pairs {(y i , x i )} where y i is +1 when x i corresponds to the root node and y i is \u22121 otherwise. For Preference Learning, we make the set of triplets {(y i , x i.1 , x i.2 )}, where y i is always +1, x i.1 corresponds to the root node, and x i.2 corresponds to a non-root word in the same sentence. Such a triplet means that x i.1 is preferable to x i.2 as a root node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding root nodes", "sec_num": "2.1" }, { "text": "Our Dependency Analyzer is similar to Yamada's analyzer (Yamada and Matsumoto, 2003) .", "cite_spans": [ { "start": 56, "end": 84, "text": "(Yamada and Matsumoto, 2003)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "While scanning a tagged sentence T = (w 1 /p 1 , . . . , w n /p n ) backward from the end of the sentence, each word w i is classified into three categories: Left, Right, and Shift. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 Right: Right means that w i directly modifies the right word w i+1 and that no word in T modifies w i . If w i is classified as Right, the analyzer removes w i from T and w i is registered as a left child of w i+1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 Left: Left means that w i directly modifies the left word w i\u22121 and that no word in T modifies w i . If w i is classified as Left, the analyzer removes w i from T and w i is registered as a right child of w i\u22121 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 Shift: Shift means that w i is not next to its modificand or is modified by another word in T . If w i is classified as Shift, the analyzer does nothing for w i and moves to the left word", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "w i\u22121 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "This process is repeated until T is reduced to a single word (= root node). Since this is a three-class problem, we use 'one vs. rest' method. First, we train an SVM classifier for each class. Then, for each word in T , we compare their values: f Left (x), f Right (x), and f Shift (x). If f Left (x) is the largest, the word is classified as Left.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "However, Yamada's algorithm stops when all words in T are classified as Shift, even when T has two or more words. In such cases, the analyzer cannot generate complete dependency trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "Here, we resolve this problem by reclassifying a word in T as Left or Right. 
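Before turning to how that word is selected, the sketch below summarizes the backward Left/Right/Shift scan described above. It is our simplified illustration, not the authors' implementation: extract_features and the three one-vs-rest scorers f_left, f_right, and f_shift are assumed helpers, and the criterion for picking the word to reclassify after an all-Shift pass is explained immediately after the sketch.

```python
def analyze_dependencies(words, extract_features, f_left, f_right, f_shift):
    # words: tagged sentence; returns head[i] = index of the word that w_i modifies
    alive = list(range(len(words)))       # indices of words still in T
    head = [None] * len(words)            # the root keeps head = None
    while len(alive) > 1:
        reduced = False
        i = len(alive) - 1
        while i >= 0:                     # scan T backward from the end
            x = extract_features(words, alive, i, head)
            scores = {'Left': f_left(x), 'Right': f_right(x), 'Shift': f_shift(x)}
            action = max(scores, key=scores.get)
            if action == 'Right' and i + 1 < len(alive):
                head[alive[i]] = alive[i + 1]   # w_i becomes a left child of the right word
                del alive[i]
                reduced = True
            elif action == 'Left' and i > 0:
                head[alive[i]] = alive[i - 1]   # w_i becomes a right child of the left word
                del alive[i]
                reduced = True
            # 'Shift' (or an inapplicable Left/Right at a boundary): leave w_i in T
            i -= 1
        if not reduced:
            break  # every remaining word was classified as Shift: reclassify one word (see below)
    return head
```

Repeated passes reduce T until only the root remains, whose head stays undefined (None), matching the definition of a root node.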
This word is selected in terms of the differences between SVM outputs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 \u2206 Left (x) = f Shift (x) \u2212 f Left (x), \u2022 \u2206 Right (x) = f Shift (x) \u2212 f Right (x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "These values are non-negative because f Shift (x) was selected. For instance,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2206 Left (x) 0 means that f Left (x) is almost equal to f Shift (x). If \u2206 Left (x k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "gives the smallest value of these differences, the word corresponding to x k is reclassified as Left. If \u2206 Right (x k ) gives the smallest value, the word corresponding to x k is reclassified as Right. Then, we can resume the analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "We use the following basic features for each word in a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 The word itself w i and its tags p i , q i , and b i ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 Whether w i is on the left of the root node or on the right (or at the root node). The root node is determined by the Root-Node Finder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 Whether w i is inside a quotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 Whether w i is inside a pair of parentheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 w i 's left children {w i1 , . . . , w ik }, which were removed by the Dependency Analyzer beforehand because they were classified as 'Right.' We use boolean variables such as one of the left child is Mary. Symmetrically, w i 's right children {w i1 , . . . , w ik } are also used. However, the above features cover only nearsighted information. If w i is next to a very long base NP or a sequence of base NPs, w i cannot get information beyond the NPs. Therefore, we add the following features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 L i , R i : L i is available when w i immediately", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "follows a base NP sequence. L i is the word before the sequence. That is, the sentence looks like:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": ". . . L i a base NP w i . . . R i is defined symmetrically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "The following features of neigbors are also used as w i 's features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 Left words w i\u22123 , . . 
. , w i\u22121 and their basic features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 Right words w i+1 , . . . , w i+3 and their basic features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "\u2022 The analyzer's outputs (Left/Right/Shift) for w i+1 , . . . , w i+3 . (This analyzer runs backward from the end of T .)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "If we train SVM by using the whole data at once, training will take too long. Therefore, we split the data into six groups: nouns, verbs, adjectives, prepositions, punctuations, and others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency analysis", "sec_num": "2.2" }, { "text": "Since we do not have phrase labels, we use all prepositions (except root nodes) as training data. We use the following features for resolving PP attachment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 The preposition itself: w i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Candidate modificand w j and its POS tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Left words (w i\u22122 , w i\u22121 ) and their POS tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Right words (w i+1 , w i+2 ) and their POS tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Previous preposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Ending word of the following base NP and its POS tag (if any).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 i \u2212 j, i.e., Number of the words between w i and w j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Number of commas between w i and w j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Number of verbs between w i and w j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Number of prepositions between w i and w j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Number of base NPs between w i and w j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Number of conjunctions (CCs) between w i and w j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Difference of quotation depths between w i and w j . If w i is not inside of a quotation, its quotation depth is zero. If w j is in a quotation, its quotation depth is one. 
Hence, their difference is one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "\u2022 Difference of parenthesis depths between w i and w j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "For each preposition, we make the set of triplets {(y i , x i,1 , x i,2 )}, where y i is always +1, x i,1 corresponds to the correct word that is modified by the preposition, and x i,2 corresponds to other words in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP attachment", "sec_num": "2.3" }, { "text": "For the Root-Node Finder, we used a quadratic kernel K(x i , x j ) = (x i \u2022 x j + 1) 2 because it was better than the linear kernel in preliminary experiments. When we used the 'correct' POS tags given in the Penn Treebank, and the 'correct' base NP tags given by a tool provided by CoNLL 2000 shared task 2 , RNF's accuracy was 96.5% for section 23. When we used Collins' POS tags and base NP tags based on the POS tags, the accuracy slightly degraded to 95.7%. According to Yamada's paper (Yamada and Matsumoto, 2003) , this root accuracy is better than Charniak's MEIP and Collins' Model 3 parser. We also conducted an experiment to judge the effectiveness of the base NP chunker. Here, we used only the first 10,000 sentences (about 1/4) of the training data. When we used all features described above and the POS tags given in Penn Treebank, the root accuracy was 95.4%. When we removed the base NP information (b i , L i , R i ), it dropped to 94.9%. Therefore, the base NP information improves RNF's performance. Figure 3 compares SVM and Preference Learning in terms of the root accuracy. We used the first 10,000 sentences for training again. According to this graph, Preference Learning is better than SVM, but the difference is small. (They are better than Maximum Entropy Modeling 3 that yielded RA=91.5% for the same data.) C does not affect the scores very much unless C is too small. In this experiment, we used Penn's 'correct' POS tags. When we used Collins' POS tags, the scores dropped by about one point.", "cite_spans": [ { "start": 491, "end": 519, "text": "(Yamada and Matsumoto, 2003)", "ref_id": "BIBREF21" }, { "start": 556, "end": 600, "text": "Charniak's MEIP and Collins' Model 3 parser.", "ref_id": null } ], "ref_spans": [ { "start": 1020, "end": 1028, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Root-Node Finder", "sec_num": "3.1" }, { "text": "As for the dependency learning, we used the same quadratic kernel again because the quadratic kernel gives the best results according to Yamada's experiments. The soft margin parameter C is 1 following Yamada's experiment. We conducted an experiment to judge the effectiveness of the Root-Node Finder. We follow Yamada's definition of accuracy that excludes punctuation marks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Analyzer and PPAR", "sec_num": "3.2" }, { "text": "Dependency Accuracy (DA) = #correct parents / #words (= 49,892) Complete Rate (CR) = #completely parsed sentences / #sentences According to Table 1 , DA is only slightly improved, but CR is more improved. Table 2 shows the improvement given by PPAR. Since training of PPAR takes a very long time, we used only the first 35,000 sentences of the training data. We also calculated the Dependency Accuracy of Collins' Model 3 parser's output for section 23. 
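For reference, the sketch below shows how these measures (and the root accuracy defined earlier) can be computed from gold and predicted head indices. It is our illustration, not the paper's evaluation script; is_punct is an assumed helper implementing Yamada's convention of excluding punctuation marks.

```python
def evaluate(gold_heads, pred_heads, is_punct):
    # gold_heads, pred_heads: one head array per sentence; head[i] is the
    # parent index of word i, and the root word has head None
    words = correct = sents = complete = roots = 0
    for s, (gold, pred) in enumerate(zip(gold_heads, pred_heads)):
        sents += 1
        scored = [i for i in range(len(gold)) if not is_punct(s, i)]
        hits = sum(1 for i in scored if pred[i] == gold[i])
        words += len(scored)
        correct += hits
        complete += int(hits == len(scored))
        roots += int(pred[gold.index(None)] is None)  # predicted root = gold root
    return {'DA': correct / words,     # Dependency Accuracy
            'RA': roots / sents,       # Root Accuracy
            'CR': complete / sents}    # Complete Rate
```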
According to this table, PPAR is better than the Model 3 parser. Now, we use PPAR's output for each preposition instead of the dependency parser's output unless the modification makes the dependency tree into a nontree graph. ", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 147, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 205, "end": 212, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Dependency Analyzer and PPAR", "sec_num": "3.2" }, { "text": "We used Preference Learning to improve the SVMbased Dependency Analyzer for root-node finding and PP-attachment resolution. Preference Learning gave better scores than Collins' Model 3 parser for these subproblems. Therefore, we expect that our method is also applicable to phrase structure parsers. It seems that root-node finding is relatively easy and SVM worked well. However, PP attachment is more difficult and SVM's behavior was unstable whereas Preference Learning was more robust. We want to fully exploit Preference Learning for dependency analysis and parsing, but training takes too long. (Empirically, it takes O( 2 ) or more.) Further study is needed to reduce the computational complexity. (Since we used Isozaki's methods (Isozaki and Kazawa, 2002) , the run-time complexity is not a problem.) Kudo and Matsumoto (2002) proposed an SVMbased Dependency Analyzer for Japanese sentences. Japanese word dependency is simpler because no word modifies a left word. Collins and Duffy (2002) improved Collins' Model 2 parser by reranking possible parse trees. Shen and Joshi (2003) also used the preference kernel K(x i. * , x j. * ) for reranking. They compare parse trees, but our system compares words.", "cite_spans": [ { "start": 738, "end": 764, "text": "(Isozaki and Kazawa, 2002)", "ref_id": "BIBREF9" }, { "start": 810, "end": 835, "text": "Kudo and Matsumoto (2002)", "ref_id": "BIBREF13" }, { "start": 975, "end": 999, "text": "Collins and Duffy (2002)", "ref_id": "BIBREF1" }, { "start": 1068, "end": 1089, "text": "Shen and Joshi (2003)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Dependency analysis is useful and annotation of word dependency seems easier than annotation of phrase labels. However, lack of phrase labels makes dependency analysis more difficult than phrase structure parsing. In this paper, we improved a deterministic dependency analyzer by adding a Root-Node Finder and a PP-Attachment Resolver. Preference Learning gave better scores than Collins' Model 3 parser for these subproblems, and the performance of the improved system is close to stateof-the-art phrase structure parsers. It turned out that SVM was unstable for PP attachment resolution whereas Preference Learning was not. 
We expect this method is also applicable to phrase structure parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Yamada used a two-word window, but we use a one-word window for simplicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://cnts.uia.ac.be/conll200/chunking/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www2.crl.go.jp/jt/a132/members/mutiyama/ software.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A maximum-entropyinspired parser", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "132--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak. 2000. A maximum-entropy- inspired parser. In Proceedings of the North American Chapter of the Association for Compu- tational Linguistics, pages 132-139.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Duffy", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Nigel Duffy. 2002. New rank- ing algorithms for parsing and tagging: Kernels over discrete structures, and the voted percep- tron. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 263-270.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Three generative, lexicalised models for statistical parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 1997. Three generative, lexi- calised models for statistical parsing. In Proceed- ings of the Annual Meeting of the Association for Computational Linguistics, pages 16-23.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Univ. of Pennsylvania.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An Introduction to Support Vector Machines", "authors": [ { "first": "Nello", "middle": [], "last": "Cristianini", "suffix": "" }, { "first": "John", "middle": [], "last": "Shawe-Taylor", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nello Cristianini and John Shawe-Taylor. 2000. 
An Introduction to Support Vector Machines. Cam- bridge University Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Three new probabilistic models for dependency parsing: An exploration", "authors": [ { "first": "Jason", "middle": [ "M" ], "last": "Eisner", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "340--345", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the International Conference on Computational Linguistics, pages 340-345.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning preference relations for information retrieval", "authors": [ { "first": "Ralf", "middle": [], "last": "Herbrich", "suffix": "" }, { "first": "Thore", "middle": [], "last": "Graepel", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Bollmann-Sdorra", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Obermayer", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ICML-98 Workshop on Text Categorization and Machine Learning", "volume": "", "issue": "", "pages": "80--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralf Herbrich, Thore Graepel, Peter Bollmann- Sdorra, and Klaus Obermayer. 1998. Learning preference relations for information retrieval. In Proceedings of ICML-98 Workshop on Text Cate- gorization and Machine Learning, pages 80-84.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Large Margin Rank Boundaries for Ordinal Regression", "authors": [ { "first": "Ralf", "middle": [], "last": "Herbrich", "suffix": "" }, { "first": "Thore", "middle": [], "last": "Graepel", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Obermayer", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "115--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralf Herbrich, Thore Graepel, and Klaus Ober- mayer, 2000. Large Margin Rank Boundaries for Ordinal Regression, chapter 7, pages 115-132. MIT Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Generative models for statistical parsing with combinatory categorial grammar", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "335--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hockenmaier and Mark Steedman. 2002. Generative models for statistical parsing with combinatory categorial grammar. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 335-342.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Efficient support vector classifiers for named entity recognition", "authors": [ { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "" }, { "first": "Hideto", "middle": [], "last": "Kazawa", "suffix": "" } ], "year": 2002, "venue": "Proceedings of COLING-2002", "volume": "", "issue": "", "pages": "390--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hideki Isozaki and Hideto Kazawa. 2002. Efficient support vector classifiers for named entity recog- nition. 
In Proceedings of COLING-2002, pages 390-396.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Making large-scale support vector machine learning practical", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Advances in Kernel Methods, chapter 16", "volume": "", "issue": "", "pages": "170--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 1999. Making large-scale support vector machine learning practical. In B. Sch\u00f6lkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods, chapter 16, pages 170-184. MIT Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Optimizing search engines using clickthrough data", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACM Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 2002. Optimizing search en- gines using clickthrough data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Chunking with support vector machines", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2001, "venue": "Proceedings of NAACL-2001", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of NAACL-2001, pages 192-199.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Japanese dependency analysis using cascaded chunking", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2002, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "63--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In Proceedings of CoNLL, pages 63-69.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Building a large annotated corpus of english: the penn treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "A" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary A. Marcinkiewicz. 1993. Building a large annotated corpus of english: the penn treebank. Computa- tional Linguistics, 19(2):313-330.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A maximum entropy part-of-speech tagger", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy part-of-speech tagger. 
In Proceedings of the Con- ference on Empirical Methods in Natural Lan- guage Processing.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An SVM based voting algorithm with application to parse reranking", "authors": [ { "first": "Libin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Seventh Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Libin Shen and Aravind K. Joshi. 2003. An SVM based voting algorithm with application to parse reranking. In Proceedings of the Seventh Confer- ence on Natural Language Learning, pages 9-16.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Parsing English with a Link grammar", "authors": [ { "first": "Daniel", "middle": [], "last": "Sleator", "suffix": "" }, { "first": "Davy", "middle": [], "last": "Temperley", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Sleator and Davy Temperley. 1991. Parsing English with a Link grammar. Technical Report CMU-CS-91-196, Carnegie Mellon University.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An improved extraction pattern representation model for automatic IE pattern acquisition", "authors": [ { "first": "Kiyoshi", "middle": [], "last": "Sudo", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Annual Meeting of the Association for Cimputational Linguistics", "volume": "", "issue": "", "pages": "224--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2003. An improved extraction pattern represen- tation model for automatic IE pattern acquisi- tion. In Proceedings of the Annual Meeting of the Association for Cimputational Linguistics, pages 224-231.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Hierarchical direct acyclic graph kernel: Methods for structured natural language data", "authors": [ { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Tsutomu", "middle": [], "last": "Hirao", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Sasaki", "suffix": "" }, { "first": "Eisaku", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL-2003", "volume": "", "issue": "", "pages": "32--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Suzuki, Tsutomu Hirao, Yutaka Sasaki, and Eisaku Maeda. 2003. Hierarchical direct acyclic graph kernel: Methods for structured natural lan- guage data. In Proceedings of ACL-2003, pages 32-39.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The Nature of Statistical Learning Theory", "authors": [ { "first": "Vladimir", "middle": [ "N" ], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir N. Vapnik. 1995. The Nature of Statisti- cal Learning Theory. 
Springer.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Statistical dependency analysis", "authors": [ { "first": "Hiroyasu", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the International Workshop on Parsing Technologies", "volume": "", "issue": "", "pages": "195--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Sta- tistical dependency analysis. In Proceedings of the International Workshop on Parsing Technolo- gies, pages 195-206.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "A word dependency tree ferent tasks of Natural Language Processing" }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "Dependency Analyzer \u2022 PP-Attachment Resolver \u2022 Root-Node Finder \u2022 Base NP Chunker \u2022 (POS Tagger) \u2022 = SVM, \u2022 = Preference Learning Figure 2: Module layers in the system" }, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "Comparison of SVM and PreferenceLearning in terms of Root Accuracy (Trained with 10,000 sentences)" }, "FIGREF4": { "num": null, "type_str": "figure", "uris": null, "text": "Comparison of SVM and Preference Learning in terms of Dependency Accuracy of prepositions (Trained with 5,000 sentences) Figure 4 compares SVM and Preference Learning in terms of the Dependency Accuracy of prepositions. SVM's performance is unstable for this task, and Preference Learning outperforms SVM. (We could not get scores of Maximum Entropy Modeling because of memory shortage.)" }, "TABREF0": { "html": null, "type_str": "table", "content": "
             DA     RA     CR
without RNF  89.4%  91.9%  34.7%
with RNF     89.6%  95.7%  35.7%
The Dependency Analyzer was trained with 10,000 sentences.
", "text": "RNF was trained with all of the training data. DA: Dependency Accuracy, RA: Root Acc., CR: Complete Rate", "num": null }, "TABREF1": { "html": null, "type_str": "table", "content": "
[Plot residue from Figure 3: Accuracy (%), 70-82, vs. soft margin parameter C, 0.0001-0.1; curves for Preference Learning and SVM]
", "text": "Effectiveness of the Root-Node Finder", "num": null }, "TABREF2": { "html": null, "type_str": "table", "content": "
                     IN     TO     average
Collins Model 3      84.6%  87.3%  85.1%
Dependency Analyzer  83.4%  86.1%  83.8%
PPAR                 85.3%  87.7%  85.7%
PPAR was trained with 35,000 sentences. The number of IN words is 5,950 and that of TO is 1,240.
", "text": "compares the proposed method with other methods in terms of accuracy. This data except 'Proposed' was cited from Yamada's paper.", "num": null }, "TABREF3": { "html": null, "type_str": "table", "content": "
PP-Attachment Resolver
", "text": "", "num": null }, "TABREF4": { "html": null, "type_str": "table", "content": "", "text": "Comparison with related workAccording to this table, the proposed method is close to the phrase structure parsers except Complete Rate. Without PPAR, DA dropped to 90.9% and CR dropped to 39.7%.", "num": null } } } }