{ "paper_id": "P03-1004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:13:52.558037Z" }, "title": "Fast Methods for Kernel-based Text Analysis", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nara Institute of Science and Technology", "location": {} }, "email": "taku-ku@is.aist-nara.ac.jp" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nara Institute of Science and Technology", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Kernel-based learning (e.g., Support Vector Machines) has been successfully applied to many hard problems in Natural Language Processing (NLP). In NLP, although feature combinations are crucial to improving performance, they are heuristically selected. Kernel methods change this situation. The merit of the kernel methods is that effective feature combination is implicitly expanded without loss of generality and increasing the computational costs. Kernel-based text analysis shows an excellent performance in terms in accuracy; however, these methods are usually too slow to apply to large-scale text analysis. In this paper, we extend a Basket Mining algorithm to convert a kernel-based classifier into a simple and fast linear classifier. Experimental results on English BaseNP Chunking, Japanese Word Segmentation and Japanese Dependency Parsing show that our new classifiers are about 30 to 300 times faster than the standard kernel-based classifiers.", "pdf_parse": { "paper_id": "P03-1004", "_pdf_hash": "", "abstract": [ { "text": "Kernel-based learning (e.g., Support Vector Machines) has been successfully applied to many hard problems in Natural Language Processing (NLP). In NLP, although feature combinations are crucial to improving performance, they are heuristically selected. Kernel methods change this situation. The merit of the kernel methods is that effective feature combination is implicitly expanded without loss of generality and increasing the computational costs. Kernel-based text analysis shows an excellent performance in terms in accuracy; however, these methods are usually too slow to apply to large-scale text analysis. In this paper, we extend a Basket Mining algorithm to convert a kernel-based classifier into a simple and fast linear classifier. Experimental results on English BaseNP Chunking, Japanese Word Segmentation and Japanese Dependency Parsing show that our new classifiers are about 30 to 300 times faster than the standard kernel-based classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Kernel methods (e.g., Support Vector Machines (Vapnik, 1995) ) attract a great deal of attention recently. In the field of Natural Language Processing, many successes have been reported. 
Examples include Part-of-Speech tagging (Nakagawa et al., 2002) Text Chunking (Kudo and Matsumoto, 2001) , Named Entity Recognition (Isozaki and Kazawa, 2002) , and Japanese Dependency Parsing (Kudo and Matsumoto, 2000; .", "cite_spans": [ { "start": 46, "end": 60, "text": "(Vapnik, 1995)", "ref_id": "BIBREF12" }, { "start": 227, "end": 250, "text": "(Nakagawa et al., 2002)", "ref_id": "BIBREF9" }, { "start": 265, "end": 291, "text": "(Kudo and Matsumoto, 2001)", "ref_id": "BIBREF5" }, { "start": 319, "end": 345, "text": "(Isozaki and Kazawa, 2002)", "ref_id": "BIBREF2" }, { "start": 380, "end": 406, "text": "(Kudo and Matsumoto, 2000;", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is known in NLP that combination of features contributes to a significant improvement in accuracy. For instance, in the task of dependency parsing, it would be hard to confirm a correct dependency relation with only a single set of features from either a head or its modifier. Rather, dependency relations should be determined by at least information from both of two phrases. In previous research, feature combination has been selected manually, and the performance significantly depended on these selections. This is not the case with kernel-based methodology. For instance, if we use a polynomial kernel, all feature combinations are implicitly expanded without loss of generality and increasing the computational costs. Although the mapped feature space is quite large, the maximal margin strategy (Vapnik, 1995) of SVMs gives us a good generalization performance compared to the previous manual feature selection. This is the main reason why kernel-based learning has delivered great results to the field of NLP.", "cite_spans": [ { "start": 805, "end": 819, "text": "(Vapnik, 1995)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Kernel-based text analysis shows an excellent performance in terms in accuracy; however, its inefficiency in actual analysis limits practical application. For example, an SVM-based NE-chunker runs at a rate of only 85 byte/sec, while previous rulebased system can process several kilobytes per second (Isozaki and Kazawa, 2002) . Such slow execution time is inadequate for Information Retrieval, Question Answering, or Text Mining, where fast analysis of large quantities of text is indispensable.", "cite_spans": [ { "start": 301, "end": 327, "text": "(Isozaki and Kazawa, 2002)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper presents two novel methods that make the kernel-based text analyzers substantially faster. These methods are applicable not only to the NLP tasks but also to general machine learning tasks where training and test examples are represented in a binary vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More specifically, we focus on a Polynomial Kernel of degree d, which can attain feature combinations that are crucial to improving the performance of tasks in NLP. Second, we introduce two fast classification algorithms for this kernel. One is PKI (Polynomial Kernel Inverted), which is an extension of Inverted Index in Information Retrieval. The other is PKE (Polynomial Kernel Expanded), where all feature combinations are explicitly expanded. 
By applying PKE, we can convert a kernel-based classifier into a simple and fast liner classifier. In order to build PKE, we extend the PrefixSpan (Pei et al., 2001) , an efficient Basket Mining algorithm, to enumerate effective feature combinations from a set of support examples.", "cite_spans": [ { "start": 595, "end": 613, "text": "(Pei et al., 2001)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experiments on English BaseNP Chunking, Japanese Word Segmentation and Japanese Dependency Parsing show that PKI and PKE perform respectively 2 to 13 times and 30 to 300 times faster than standard kernel-based systems, without a discernible change in accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Suppose we have a set of training data for a binary classification problem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "(x 1 , y 1 ), . . . , (x L , y L ) x j \u2208 N , y j \u2208 {+1, \u22121},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "where x j is a feature vector of the j-th training sample, and y j is the class label associated with this training sample. The decision function of SVMs is defined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y(x) = sgn j\u2208SV y j \u03b1 j \u03c6(x j ) \u2022 \u03c6(x) + b ,", "eq_num": "(1)" } ], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "where: (A) \u03c6 is a non-liner mapping function from", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "N to H (N H). (B) \u03b1 j , b \u2208 , \u03b1 j \u2265 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "The mapping function \u03c6 should be designed such that all training examples are linearly separable in H space. Since H is much larger than N , it requires heavy computation to evaluate the dot products \u03c6(x i ) \u2022 \u03c6(x) in an explicit form. This problem can be overcome by noticing that both construction of optimal parameter \u03b1 i (we will omit the details of this construction here) and the calculation of the decision function only require the evaluation of dot products \u03c6(x i ) \u2022 \u03c6(x). This is critical, since, in some cases, the dot products can be evaluated by a simple Kernel Function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "K(x 1 , x 2 ) = \u03c6(x 1 ) \u2022 \u03c6(x 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": ". 
Substituting kernel function into (1), we have the following decision function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y(x) = sgn j\u2208SV y j \u03b1 j K(x j , x) + b", "eq_num": "(2)" } ], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "One of the advantages of kernels is that they are not limited to vectorial object x, but that they are applicable to any kind of object representation, just given the dot products.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Method and Support Vector Machines", "sec_num": "2" }, { "text": "For many tasks in NLP, the training and test examples are represented in binary vectors; or sets, since examples in NLP are usually represented in socalled Feature Structures. Here, we focus on such cases 1 . Suppose a feature set F = {1, 2, . . . , N } and training examples X j (j = 1, 2, . . . , L), all of which are subsets of F (i.e., X j \u2286 F ). In this case, X j can be regarded as a binary vector", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polynomial Kernel of degree d", "sec_num": "3" }, { "text": "x j = (x j1 , x j2 , . . . , x jN ) where x ji = 1 if i \u2208 X j , x ji = 0 otherwise. The dot product of x 1 and x 2 is given by x 1 \u2022 x 2 = |X 1 \u2229 X 2 |.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polynomial Kernel of degree d", "sec_num": "3" }, { "text": "Given sets X and Y , corresponding to binary feature vectors x and y, Polynomial Kernel of degree", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 Polynomial Kernel of degree d", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d K d (X, Y ) is given by K d (x, y) = K d (X, Y ) = (1 + |X \u2229 Y |) d ,", "eq_num": "(3)" } ], "section": "Definition 1 Polynomial Kernel of degree d", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 Polynomial Kernel of degree d", "sec_num": null }, { "text": "d = 1, 2, 3, . . ..", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 Polynomial Kernel of degree d", "sec_num": null }, { "text": "In this paper, (3) will be referred to as an implicit form of the Polynomial Kernel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 Polynomial Kernel of degree d", "sec_num": null }, { "text": "It is known in NLP that a combination of features, a subset of feature set F in general, contributes to overall accuracy. In previous research, feature combination has been selected manually. The use of a polynomial kernel allows such feature expansion without loss of generality or an increase in computational costs, since the Polynomial Kernel of degree d implicitly maps the original feature space F into F d space. (i.e., \u03c6 : F \u2192 F d ). 
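As a concrete illustration, the implicit form (3) over set-valued (binary) examples can be computed as in the following minimal C++ sketch; this is illustrative code only, not the implementation used in the reported experiments, and the function name is arbitrary.

```cpp
#include <algorithm>
#include <cmath>
#include <iterator>
#include <set>

// Implicit polynomial kernel of degree d on set-valued examples:
// K_d(X, Y) = (1 + |X intersect Y|)^d, where |X intersect Y| equals
// the dot product of the corresponding binary feature vectors.
double polynomial_kernel(const std::set<int>& X, const std::set<int>& Y, int d) {
    std::set<int> common;
    std::set_intersection(X.begin(), X.end(), Y.begin(), Y.end(),
                          std::inserter(common, common.begin()));
    return std::pow(1.0 + static_cast<double>(common.size()), d);
}
```

For instance, X = {1, 2, 3, 4} and Y = {1, 2, 4, 5} (Example 1 below, with a, ..., e mapped to 1, ..., 5) give K_2(X, Y) = 16 and K_3(X, Y) = 64.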
This property is critical and some reports say that, in NLP, the polynomial kernel outperforms the simple linear kernel (Kudo and Matsumoto, 2000; Isozaki and Kazawa, 2002) .", "cite_spans": [ { "start": 562, "end": 588, "text": "(Kudo and Matsumoto, 2000;", "ref_id": "BIBREF4" }, { "start": 589, "end": 614, "text": "Isozaki and Kazawa, 2002)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Definition 1 Polynomial Kernel of degree d", "sec_num": null }, { "text": "Here, we will give an explicit form of the Polynomial Kernel to show the mapping function \u03c6(\u2022).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 Polynomial Kernel of degree d", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K d (X, Y ) = d r=0 c d (r) \u2022 |P r (X \u2229 Y )|,", "eq_num": "(4)" } ], "section": "Lemma 1 Explicit form of Polynomial Kernel. The Polynomial Kernel of degree d can be rewritten as", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1 Explicit form of Polynomial Kernel. The Polynomial Kernel of degree d can be rewritten as", "sec_num": null }, { "text": "\u2022 P r (X) is a set of all subsets of X with exactly r elements in it, \u2022 c d (r) = d l=r d l r m=0 (\u22121) r\u2212m \u2022 m l r m .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1 Explicit form of Polynomial Kernel. The Polynomial Kernel of degree d can be rewritten as", "sec_num": null }, { "text": "Proof See Appendix A. c d (r) will be referred as a subset weight of the Polynomial Kernel of degree d. This function gives a prior weight to the subset s, where |s| = r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1 Explicit form of Polynomial Kernel. The Polynomial Kernel of degree d can be rewritten as", "sec_num": null }, { "text": "Given sets X = {a, b, c, d} and Y = {a, b, d, e}, the Quadratic Kernel K 2 (X, Y ) and the Cubic Kernel K 3 (X, Y ) can be calculated in an implicit form as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1 Quadratic and Cubic Kernel", "sec_num": null }, { "text": "K 2 (X, Y ) = (1 + |X \u2229 Y |) 2 = (1 + 3) 2 = 16, K 3 (X, Y ) = (1 + |X \u2229 Y |) 3 = (1 + 3) 3 = 64.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1 Quadratic and Cubic Kernel", "sec_num": null }, { "text": "Using Lemma 1, the subset weights of the Quadratic Kernel and the Cubic Kernel can be calculated as c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1 Quadratic and Cubic Kernel", "sec_num": null }, { "text": "2 (0) = 1, c 2 (1) = 3, c 2 (2) = 2 and c 3 (0) = 1, c 3 (1) = 7, c 3 (2) = 12, c 3 (3) = 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1 Quadratic and Cubic Kernel", "sec_num": null }, { "text": "In addition, subsets P r (X \u2229 Y ) (r = 0, 1, 2, 3) are given as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1 Quadratic and Cubic Kernel", "sec_num": null }, { "text": "P 0 (X \u2229 Y ) = {\u03c6}, P 1 (X \u2229Y ) = {{a}, {b}, {d}}, P 2 (X \u2229Y ) = {{a, b}, {a, d}, {b, d}}, P 3 (X \u2229 Y ) = {{a, b, d}}. 
K 2 (X, Y ) and K 3 (X, Y ) can similarly be calcu- lated in an explicit form as: function PKI classify (X) r = 0 # an array, initialized as 0 foreach i \u2208 X foreach j \u2208 h(i) r j = r j + 1 end end result = 0 foreach j \u2208 SV result = result + y j \u03b1 j \u2022 (1 + r j ) d end return sgn(result + b) end Figure 1: Pseudo code for PKI K 2 (X, Y ) = 1 \u2022 1 + 3 \u2022 3 + 2 \u2022 3 = 16, K 3 (X, Y ) = 1 \u2022 1 + 7 \u2022 3 + 12 \u2022 3 + 6 \u2022 1 = 64.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1 Quadratic and Cubic Kernel", "sec_num": null }, { "text": "In this section, we introduce two fast classification algorithms for the Polynomial Kernel of degree d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fast Classifiers for Polynomial Kernel", "sec_num": "4" }, { "text": "Before describing them, we give the baseline classifier (PKB): ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fast Classifiers for Polynomial Kernel", "sec_num": "4" }, { "text": "y(X) = sgn j\u2208SV y j \u03b1 j \u2022 (1 + |X j \u2229 X|) d + b . (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fast Classifiers for Polynomial Kernel", "sec_num": "4" }, { "text": "Given an item i \u2208 F , if we know in advance the set of support examples which contain item i \u2208 F , we do not need to calculate |X j \u2229 X| for all support examples. This is a naive extension of Inverted Indexing in Information Retrieval. Figure 1 shows the pseudo code of the algorithm PKI. The function h(i) is a pre-compiled table and returns a set of support examples which contain item i.", "cite_spans": [], "ref_spans": [ { "start": 236, "end": 244, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "PKI (Inverted Representation)", "sec_num": "4.1" }, { "text": "The complexity of the PKI is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PKI (Inverted Representation)", "sec_num": "4.1" }, { "text": "O(|X| \u2022 B + |SV |), where B is an average of |h(i)| over all item i \u2208 F .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PKI (Inverted Representation)", "sec_num": "4.1" }, { "text": "The PKI can make the classification speed drastically faster when B is small, in other words, when feature space is relatively sparse (i.e., B |SV |). The feature space is often sparse in many tasks in NLP, since lexical entries are used as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PKI (Inverted Representation)", "sec_num": "4.1" }, { "text": "The algorithm PKI does not change the final accuracy of the classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PKI (Inverted Representation)", "sec_num": "4.1" }, { "text": "Using Lemma 1, we can represent the decision function (5) in an explicit form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea of PKE", "sec_num": "4.2.1" }, { "text": "y(X) = sgn j\u2208SV y j \u03b1 j d r=0 c d (r) \u2022 |P r (X j \u2229 X)| + b . 
(6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea of PKE", "sec_num": "4.2.1" }, { "text": "If we, in advance, calculate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea of PKE", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w(s) = j\u2208SV y j \u03b1 j c d (|s|)I(s \u2208 P |s| (X j )) (where I(t) is an indicator function 2 ) for all subsets s \u2208 d r=0 P r (F ),", "eq_num": "(6)" } ], "section": "Basic Idea of PKE", "sec_num": "4.2.1" }, { "text": "can be written as the following simple linear form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea of PKE", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y(X) = sgn s\u2208\u0393 d (X) w(s) + b .", "eq_num": "(7)" } ], "section": "Basic Idea of PKE", "sec_num": "4.2.1" }, { "text": "where ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea of PKE", "sec_num": "4.2.1" }, { "text": "\u0393 d (X) = d r=0 P r (X).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Idea of PKE", "sec_num": "4.2.1" }, { "text": "To apply the PKE, we first calculate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "|\u0393 d (F )| de- gree of vectors w = (w(s 1 ), w(s 2 ), . . . , w(s |\u0393 d (F )| )).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "This calculation is trivial only when we use a Quadratic Kernel, since we just project the original feature space F into F \u00d7 F space, which is small enough to be calculated by a naive exhaustive method. However, if we, for instance, use a polynomial kernel of degree 3 or higher, this calculation becomes not trivial, since the size of feature space exponentially increases. Here we take the following strategy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "1. Instead of using the original vector w, we use w , an approximation of w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "2. We apply the Subset Mining algorithm to calculate w efficiently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "2 I(t) returns 1 if t is true,returns 0 otherwise. Definition 2 w : An approximation of w An approximation of w is given by w =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "(w (s 1 ), w (s 2 ), . . . , w (s |\u0393 d (F )| ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": ", where w (s) is set to 0 if w(s) is trivially close to 0. (i.e., \u03c3 neg < w(s) < \u03c3 pos (\u03c3 neg < 0, \u03c3 pos > 0), where \u03c3 pos and \u03c3 neg are predefined thresholds).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "The algorithm PKE is an approximation of the PKB, and changes the final accuracy according to the selection of thresholds \u03c3 pos and \u03c3 neg . 
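To make the linear form (7) concrete, the following minimal C++ sketch classifies an example given a precomputed table of approximated subset weights w'(s); it is illustrative code only (an ordinary std::map stands in here; the paper stores these weights in a Double-Array TRIE, described below), and all identifiers are arbitrary.

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <vector>

// PKE-style classification (Eq. 7): y(X) = sgn( sum_{s in Gamma_d(X)} w'(s) + b ).
// `weights` maps each sorted feature subset s with |s| <= d to its weight w'(s);
// subsets absent from the table contribute 0 under the approximation w'.
int pke_classify(const std::vector<int>& X,   // sorted feature ids of the example
                 const std::map<std::vector<int>, double>& weights,
                 int d, double b) {
    double score = b;
    std::vector<int> subset;
    // Depth-first enumeration of all subsets of X of size <= d, i.e. Gamma_d(X);
    // the initial call with the empty subset covers the r = 0 term.
    std::function<void(std::size_t)> expand = [&](std::size_t start) {
        auto it = weights.find(subset);
        if (it != weights.end()) score += it->second;
        if (subset.size() == static_cast<std::size_t>(d)) return;
        for (std::size_t i = start; i < X.size(); ++i) {
            subset.push_back(X[i]);
            expand(i + 1);
            subset.pop_back();
        }
    };
    expand(0);
    return score >= 0 ? +1 : -1;
}
```

Enumerating Gamma_d(X) this way touches O(|X|^d) subsets, independent of the number of support examples, which is the source of the speed-up over the kernel-based form (5).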
The calculation of w is formulated as the following mining problem: In this paper, we apply a Sub-Structure Mining algorithm to the feature combination mining problem. Generally speaking, sub-structures mining algorithms efficiently extract frequent sub-structures (e.g., subsets, sub-sequences, sub-trees, or subgraphs) from a large database (set of transactions). In this context, frequent means that there are no less than \u03be transactions which contain a sub-structure. The parameter \u03be is usually referred to as the Minimum Support. Since we must enumerate all subsets of F , we can apply subset mining algorithm, in some times called as Basket Mining algorithm, to our task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "There are many subset mining algorithms proposed, however, we will focus on the PrefixSpan algorithm, which is an efficient algorithm for sequential pattern mining, originally proposed by (Pei et al., 2001 ). The PrefixSpan was originally designed to extract frequent sub-sequence (not subset) patterns, however, it is a trivial difference since a set can be seen as a special case of sequences (i.e., by sorting items in a set by lexicographic order, the set becomes a sequence). The basic idea of the PrefixSpan is to divide the database by frequent sub-patterns (prefix) and to grow the prefix-spanning pattern in a depth-first search fashion.", "cite_spans": [ { "start": 188, "end": 205, "text": "(Pei et al., 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "We now modify the PrefixSpan to suit to our feature combination mining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining Approach to PKE", "sec_num": "4.2.2" }, { "text": "We only enumerate up to subsets of size d. when we plan to apply the Polynomial Kernel of degree d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 size constraint", "sec_num": null }, { "text": "In the original PrefixSpan, the frequency of each subset does not change by its size. However, in our mining task, it changes (i.e., the frequency of subset s is weighted by c d (|s|)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Subset weight c d (r)", "sec_num": null }, { "text": "Here, we process the mining algorithm by assuming that each transaction (support example X j ) has its frequency C d y j \u03b1 j , where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Subset weight c d (r)", "sec_num": null }, { "text": "C d = max(c d (1), c d (2), . . . , c d (d)). The weight w(s) is calculated by w(s) = \u03c9(s) \u00d7 c d (|s|)/C d , where \u03c9(s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Subset weight c d (r)", "sec_num": null }, { "text": "is a frequency of s, given by the original PrefixSpan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Subset weight c d (r)", "sec_num": null }, { "text": "\u2022 Positive/Negative support examples We first divide the support examples into positive (y i > 0) and negative (y i < 0) examples, and process mining independently. The result can be obtained by merging these two results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Subset weight c d (r)", "sec_num": null }, { "text": "\u2022 Minimum Supports \u03c3 pos , \u03c3 neg In the original PrefixSpan, minimum support is an integer. 
In our mining task, we can give a real number to minimum support, since each transaction (support example X j ) has possibly non-integer frequency C d y j \u03b1 j . Minimum supports \u03c3 pos and \u03c3 neg control the rate of approximation. For the sake of convenience, we just give one parameter \u03c3, and calculate \u03c3 pos and \u03c3 neg as follows", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Subset weight c d (r)", "sec_num": null }, { "text": "\u03c3 pos = \u03c3 \u2022 #of positive examples #of support examples , \u03c3 neg = \u2212\u03c3 \u2022 #of negative examples #of support examples .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Subset weight c d (r)", "sec_num": null }, { "text": "After the process of mining, a set of tuples \u2126 = { s, w(s) } is obtained, where s is a frequent subset and w(s) is its weight. We use a TRIE to efficiently store the set \u2126. The example of such TRIE compression is shown in Figure 2 . Although there are many implementations for TRIE, we use a Double-Array (Aoe, 1989) in our task. The actual classification of PKE can be examined by traversing the TRIE for all subsets s \u2208 \u0393 d (X).", "cite_spans": [ { "start": 305, "end": 316, "text": "(Aoe, 1989)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 222, "end": 230, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "\u2022 Subset weight c d (r)", "sec_num": null }, { "text": "To demonstrate performances of PKI and PKE, we examined three NLP tasks: English BaseNP Chunking (EBC), Japanese Word Segmentation (JWS) and Figure 2 : \u2126 in TRIE representation Japanese Dependency Parsing (JDP). A more detailed description of each task, training and test data, the system parameters, and feature sets are presented in the following subsections. Table 1 summarizes the detail information of support examples (e.g., size of SVs, size of feature set etc.).", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 149, "text": "Figure 2", "ref_id": null }, { "start": 362, "end": 369, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a2 \u00a5 \u00a6 \u00a3 \u00a2 \u00a1 \u00a7 \u00a9 \u00a4 \u00a3 \u00a2 \u00a1 \u00a7 \u00a3 \u00a4 \u00a9 \u00a7 \u00a3 \u00a4 \u00a9 \u00a7 \u00a5 \u00a6 \u00a3 \u00a2 \u00a7\u00a5 \u00a6 \u00a3 \u00a4 \u00a9 \u00a7 \u00a7\u00a5 \u00a6 \u00a3 \u00a2 ! \" $ # & % \u00a4 ' ( ) 0 0 1 2 1 ) 1 ) ) 3 5 4 6 4 8 7 9 & @ 5 AB 9 & C D 9 E C F 9 E @ 5 AB F C E G F 9 & H F 9 E C F 9 E C s I", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Our preliminary experiments show that a Quadratic Kernel performs the best in EBC, and a Cubic Kernel performs the best in JWS and JDP. The experiments using a Cubic Kernel are suitable to evaluate the effectiveness of the basket mining approach applied in the PKE, since a Cubic Kernel projects the original feature space F into F 3 space, which is too large to be handled only using a naive exhaustive method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "All experiments were conducted under Linux using XEON 2.4 Ghz dual processors and 3.5 Gbyte of main memory. 
All systems are implemented in C++.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Text Chunking is a fundamental task in NLP -dividing sentences into non-overlapping phrases. BaseNP chunking deals with a part of this task and recognizes the chunks that form noun phrases. Here is an example sentence: A BaseNP chunk is represented as sequence of words between square brackets. BaseNP chunking task is usually formulated as a simple tagging task, where we represent chunks with three types of tags: B: beginning of a chunk. I: non-initial word. O: outside of the chunk. In our experiments, we used the same settings as . We use a standard data set (Ramshaw and Marcus, 1995) consisting of sections 15-19 of the WSJ corpus as training and section 20 as testing. Tables 2, 3 and 4 show the execution time, accuracy 4 , and |\u2126| (size of extracted subsets), by changing \u03c3 from 0.01 to 0.0005.", "cite_spans": [ { "start": 565, "end": 591, "text": "(Ramshaw and Marcus, 1995)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 678, "end": 695, "text": "Tables 2, 3 and 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "English BaseNP Chunking (EBC)", "sec_num": "5.1" }, { "text": "The PKI leads to about 2 to 12 times improvements over the PKB. In JDP, the improvement is significant. This is because B, the average of h(i) over all items i \u2208 F , is relatively small in JDP. The improvement significantly depends on the sparsity of the given support examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "The improvements of the PKE are more significant than the PKI. The running time of the PKE is 30 to 300 times faster than the PKB, when we set an appropriate \u03c3, (e.g., \u03c3 = 0.005 for EBC and JWS, \u03c3 = 0.0005 for JDP). In these settings, we could preserve the final accuracies for test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "The PKE with a Cubic Kernel tends to make \u2126 large (e.g., |\u2126| = 2.32 million for JWS, |\u2126| = 8.26 million for JDP).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frequency-based Pruning", "sec_num": "5.5" }, { "text": "To reduce the size of \u2126, we examined simple frequency-based pruning experiments. Our extension is to simply give a prior threshold \u03be (= 1, 2, 3, 4 . . .) , and erase all subsets which occur in less than \u03be support examples. The calculation of frequency can be similarly conducted by the PrefixSpan algorithm. Tables 5 and 6 show the results of frequency-based pruning, when we fix \u03c3=0.005 for JWS, and \u03c3=0.0005 for JDP.", "cite_spans": [ { "start": 133, "end": 153, "text": "(= 1, 2, 3, 4 . . .)", "ref_id": null } ], "ref_spans": [ { "start": 308, "end": 322, "text": "Tables 5 and 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Frequency-based Pruning", "sec_num": "5.5" }, { "text": "In JDP, we can make the size of set \u2126 about one third of the original size. This reduction gives us not only a slight speed increase but an improvement of accuracy (89.29%\u219289.34%). Frequency-based pruning allows us to remove subsets that have large weight and small frequency. 
Such subsets may be generated from errors or special outliers in the training examples, which sometimes cause an overfitting in training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frequency-based Pruning", "sec_num": "5.5" }, { "text": "In JWS, the frequency-based pruning does not work well. Although we can reduce the size of \u2126 by half, the accuracy is also reduced (97.94%\u219297.83%). It implies that, in JWS, features even with frequency of one contribute to the final decision hyperplane. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frequency-based Pruning", "sec_num": "5.5" }, { "text": "There have been several studies for efficient classification of SVMs. Isozaki et al. propose an XQK (eXpand the Quadratic Kernel) which can make their Named-Entity recognizer drastically fast (Isozaki and Kazawa, 2002) . XQK can be subsumed into PKE. Both XQK and PKE share the basic idea; all feature combinations are explicitly expanded and we convert the kernel-based classifier into a simple linear classifier. The explicit difference between XQK and PKE is that XQK is designed only for Quadratic Kernel. It implies that XQK can only deal with feature combination of size up to two. On the other hand, PKE is more general and can also be applied not only to the Quadratic Kernel but also to the general-style of polynomial kernels (1 + |X \u2229 Y |) d . In PKE, there are no theoretical constrains to limit the size of combinations.", "cite_spans": [ { "start": 192, "end": 218, "text": "(Isozaki and Kazawa, 2002)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In addition, Isozaki et al. did not mention how to expand the feature combinations. They seem to use a naive exhaustive method to expand them, which is not always scalable and efficient for extracting three or more feature combinations. PKE takes a basket mining approach to enumerating effective feature combinations more efficiently than their exhaustive method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We focused on a Polynomial Kernel of degree d, which has been widely applied in many tasks in NLP and can attain feature combination that is crucial to improving the performance of tasks in NLP. Then, we introduced two fast classification algorithms for this kernel. One is PKI (Polynomial Kernel Inverted) , which is an extension of Inverted Index. The other is PKE (Polynomial Kernel Expanded), where all feature combinations are explicitly expanded. The concept in PKE can also be applicable to kernels for discrete data structures, such as String Kernel (Lodhi et al., 2002) and Tree Kernel (Kashima and Koyanagi, 2002; Collins and Duffy, 2001 ). For instance, Tree Kernel gives a dot product of an ordered-tree, and maps the original ordered-tree onto its all sub-tree space. To apply the PKE, we must efficiently enumerate the effective sub-trees from a set of support examples. 
We can similarly apply a sub-tree mining algorithm (Zaki, 2002) to this problem.", "cite_spans": [ { "start": 278, "end": 306, "text": "(Polynomial Kernel Inverted)", "ref_id": null }, { "start": 558, "end": 578, "text": "(Lodhi et al., 2002)", "ref_id": "BIBREF8" }, { "start": 595, "end": 623, "text": "(Kashima and Koyanagi, 2002;", "ref_id": "BIBREF3" }, { "start": 624, "end": 647, "text": "Collins and Duffy, 2001", "ref_id": "BIBREF1" }, { "start": 936, "end": 948, "text": "(Zaki, 2002)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "7" }, { "text": "In the Maximum Entropy model widely applied in NLP, we usually suppose binary feature functions f i (X j ) \u2208 {0, 1}. This formalization is exactly same as representing an example X j in a set {k|f k (X j ) = 1}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Usually, in Japanese, word boundaries are highly constrained by character types, such as hiragana and katakana (both are phonetic characters in Japanese), Chinese characters, English alphabets and numbers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In EBC, accuracy is evaluated using F measure, harmonic mean between precision and recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Since there are no explicit spaces between words in Japanese sentences, we must first identify the word boundaries before analyzing deep structure of a sentence. Japanese word segmentation is formalized as a simple classification task.be a sequence of Japanese character types 3 associated with each character, and y i \u2208 {+1, \u22121}, (i = (1, 2, . . . , m \u2212 1)) be a boundary marker. If there is a boundary between c i and c i+1 , y i = 1, otherwise y i = \u22121. The feature set of example x i is given by all characters as well as character types in some constant window (e.g., 5):Note that we distinguish the relative position of each character and character type. We use the Kyoto University Corpus (Kurohashi and Nagao, 1997) , 7,958 sentences in the articles on January 1st to January 7th are used as training data, and 1,246 sentences in the articles on January 9th are used as the test data.", "cite_spans": [ { "start": 696, "end": 723, "text": "(Kurohashi and Nagao, 1997)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Japanese Word Segmentation (JWS)", "sec_num": "5.2" }, { "text": "The task of Japanese dependency parsing is to identify a correct dependency of each Bunsetsu (base phrase in Japanese). In previous research, we presented a state-of-the-art SVMs-based Japanese dependency parser . We combined SVMs into an efficient parsing algorithm, Cascaded Chunking Model, which parses a sentence deterministically only by deciding whether the current chunk modifies the chunk on its immediate right hand side. The input for this algorithm consists of a set of the linguistic features related to the head and modifier (e.g., word, part-of-speech, and inflections), and the output from the algorithm is either of the value +1 (dependent) or -1 (independent). We use a standard data set, which is the same corpus described in the Japanese Word Segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese Dependency Parsing (JDP)", "sec_num": "5.3" }, { "text": "Let X, Y be subsets of F = {1, 2, . . . , N }. 
In this case, |X \u2229 Y | is same as the dot product of vector x, y, wherex j = 1 if j \u2208 X, x j = 0 otherwise.whereis binary (i.e., x k j j \u2208 {0, 1}), the number of r-size subsets can be given by a coefficient of (x 1 y 1 x 2 y 2 . . . x r y r ). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof.", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An efficient digital search algorithm by using a double-array structure", "authors": [ { "first": "Junichi", "middle": [], "last": "Aoe", "suffix": "" } ], "year": 1989, "venue": "IEEE Transactions on Software Engineering", "volume": "", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junichi Aoe. 1989. An efficient digital search algorithm by us- ing a double-array structure. IEEE Transactions on Software Engineering, 15(9).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Convolution kernels for natural language", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Duffy", "suffix": "" } ], "year": 2001, "venue": "Advances in Neural Information Processing Systems 14", "volume": "1", "issue": "", "pages": "625--632", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Nigel Duffy. 2001. Convolution kernels for natural language. In Advances in Neural Information Processing Systems 14, Vol.1 (NIPS 2001), pages 625-632.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Efficient support vector classifiers for named entity recognition", "authors": [ { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "" }, { "first": "Hideto", "middle": [], "last": "Kazawa", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the COLING-2002", "volume": "", "issue": "", "pages": "390--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hideki Isozaki and Hideto Kazawa. 2002. Efficient support vector classifiers for named entity recognition. In Proceed- ings of the COLING-2002, pages 390-396.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Svm kernels for semi-structured data", "authors": [ { "first": "Hisashi", "middle": [], "last": "Kashima", "suffix": "" }, { "first": "Teruo", "middle": [], "last": "Koyanagi", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ICML-2002", "volume": "", "issue": "", "pages": "291--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hisashi Kashima and Teruo Koyanagi. 2002. Svm kernels for semi-structured data. In Proceedings of the ICML-2002, pages 291-298.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Japanese Dependency Structure Analysis based on Support Vector Machines", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the EMNLP/VLC-2000", "volume": "", "issue": "", "pages": "18--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and Yuji Matsumoto. 2000. Japanese Dependency Structure Analysis based on Support Vector Machines. 
In Proceedings of the EMNLP/VLC-2000, pages 18-25.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Chunking with support vector machines", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the the NAACL", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of the the NAACL, pages 192-199.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Japanese dependency analyisis using cascaded chunking", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the CoNLL-2002", "volume": "", "issue": "", "pages": "63--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analyisis using cascaded chunking. In Proceedings of the CoNLL-2002, pages 63-69.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Kyoto University text corpus project", "authors": [ { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ANLP-1997", "volume": "", "issue": "", "pages": "115--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sadao Kurohashi and Makoto Nagao. 1997. Kyoto University text corpus project. In Proceedings of the ANLP-1997, pages 115-118.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Text classification using string kernels", "authors": [ { "first": "Huma", "middle": [], "last": "Lodhi", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Saunders", "suffix": "" }, { "first": "John", "middle": [], "last": "Shawe-Taylor", "suffix": "" }, { "first": "Nello", "middle": [], "last": "Cristianini", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Watkins", "suffix": "" } ], "year": 2002, "venue": "Journal of Machine Learning Research", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cris- tianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, 2.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Revision learning and its application to part-of-speech tagging", "authors": [ { "first": "Tetsuji", "middle": [], "last": "Nakagawa", "suffix": "" }, { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL 2002", "volume": "", "issue": "", "pages": "497--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tetsuji Nakagawa, Taku Kudo, and Yuji Matsumoto. 2002. Re- vision learning and its application to part-of-speech tagging. In Proceedings of the ACL 2002, pages 497-504.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Prefixspan: Mining sequential patterns by prefix-projected growth", "authors": [ { "first": "Jian", "middle": [], "last": "Pei", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2001, "venue": "Proc. 
of International Conference of Data Engineering", "volume": "", "issue": "", "pages": "215--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Pei, Jiawei Han, and et al. 2001. Prefixspan: Mining sequential patterns by prefix-projected growth. In Proc. of International Conference of Data Engineering, pages 215- 224.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Text chunking using transformation-based learning", "authors": [ { "first": "A", "middle": [], "last": "Lance", "suffix": "" }, { "first": "Mitchell", "middle": [ "P" ], "last": "Ramshaw", "suffix": "" }, { "first": "", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the VLC", "volume": "", "issue": "", "pages": "88--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lance A. Ramshaw and Mitchell P. Marcus. 1995. Text chunk- ing using transformation-based learning. In Proceedings of the VLC, pages 88-94.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Nature of Statistical Learning Theory", "authors": [ { "first": "Vladimir", "middle": [ "N" ], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Efficiently mining frequent trees in a forest", "authors": [ { "first": "Mohammed", "middle": [], "last": "Zaki", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 8th International Conference on Knowledge Discovery and Data Mining KDD", "volume": "", "issue": "", "pages": "71--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammed Zaki. 2002. Efficiently mining frequent trees in a forest. In Proceedings of the 8th International Conference on Knowledge Discovery and Data Mining KDD, pages 71- 80.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "The classification algorithm given by (7) will be referred to as PKE. The complexity of PKE is O(|\u0393 d (X)|) = O(|X| d ), independent on the number of support examples |SV |.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Feature Combination Mining Given a set of support examples and subset weight c d (r), extract all subsets s and their weights w(s) if w(s) holds w(s) \u2265 \u03c3 pos or w(s) \u2264 \u03c3 neg .", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "He] reckons [the current account deficit] will narrow to [only $ 1.8 billion] .", "num": null, "type_str": "figure" }, "TABREF0": { "content": "", "html": null, "text": "The complexity of PKB is O(|X| \u2022 |SV |), since it takes O(|X|) to calculate (1 + |X j \u2229 X|) d and there are a total of |SV | support examples.", "num": null, "type_str": "table" }, "TABREF1": { "content": "
Data Set | EBC | JWS | JDP
# of examples | 135,692 | 265,413 | 110,355
|SV| # of SVs | 11,690 | 57,672 | 34,996
# of positive SVs | 5,637 | 28,440 | 17,528
# of negative SVs | 6,053 | 29,232 | 17,468
|F | (size of feature) | 17,470 | 11,643 | 28,157
Avg. of |X j | | 11.90 | 11.73 | 17.63
B (Avg. of |h(i)|) | 7.74 | 58.13 | 21.92
(Note: In EBC, to handle K-class problems, we use pairwise
classification, building K\u00d7(K\u22121)/2 classifiers, one for each
pair of classes; the final class decision is given by majority
voting. The values in the EBC column are averages over all pairwise
classifiers.)
", "html": null, "text": "Details of Data Set", "num": null, "type_str": "table" }, "TABREF2": { "content": "
\u03c3 | Time (sec./sent.) | Speedup Ratio | F1 | |\u2126| (\u00d7 1000)
0.01 | 0.0016 | 105.2 | 93.79 | 518
0.005 | 0.0016 | 101.3 | 93.85 | 668
0.001 | 0.0017 | 97.7 | 93.84 | 858
0.0005 | 0.0017 | 96.8 | 93.84 | 889
PKI | 0.020 | 8.3 | 93.84
PKB | 0.164 | 1.0 | 93.84
", "html": null, "text": "Results of EBC PKE", "num": null, "type_str": "table" }, "TABREF3": { "content": "
\u03c3 | Time (sec./sent.) | Speedup Ratio | Acc. (%) | |\u2126| (\u00d7 1000)
0.01 | 0.0024 | 358.2 | 97.93 | 1,228
0.005 | 0.0028 | 300.1 | 97.95 | 2,327
0.001 | 0.0034 | 242.6 | 97.94 | 4,392
0.0005 | 0.0035 | 238.8 | 97.94 | 4,820
PKI | 0.4989 | 1.7 | 97.94
PKB | 0.8535 | 1.0 | 97.94
", "html": null, "text": "Results of JWS PKE", "num": null, "type_str": "table" }, "TABREF4": { "content": "
\u03c3 | Time (sec./sent.) | Speedup Ratio | Acc. (%) | |\u2126| (\u00d7 1000)
0.01 | 0.0042 | 66.8 | 88.91 | 73
0.005 | 0.0060 | 47.8 | 89.05 | 1,924
0.001 | 0.0086 | 33.3 | 89.26 | 6,686
0.0005 | 0.0090 | 31.8 | 89.29 | 8,262
PKI | 0.0226 | 12.6 | 89.29
PKB | 0.2848 | 1.0 | 89.29
", "html": null, "text": "Results of JDP PKE", "num": null, "type_str": "table" }, "TABREF5": { "content": "
Table 5: Frequency-based pruning (JWS)
PKE \u03be | Time (sec./sent.) | Speedup Ratio | Acc. (%) | |\u2126| (\u00d7 1000)
1 | 0.0028 | 300.1 | 97.95 | 2,327
2 | 0.0025 | 337.3 | 97.83 | 954
3 | 0.0023 | 367.0 | 97.83 | 591
PKB | 0.8535 | 1.0 | 97.94
", "html": null, "text": "", "num": null, "type_str": "table" }, "TABREF6": { "content": "
Table 6: Frequency-based pruning (JDP)
PKE \u03be | Time (sec./sent.) | Speedup Ratio | Acc. (%) | |\u2126| (\u00d7 1000)
1 | 0.0090 | 31.8 | 89.29 | 8,262
2 | 0.0072 | 39.3 | 89.34 | 2,450
3 | 0.0068 | 41.8 | 89.31 | 1,360
PKB | 0.2848 | 1.0 | 89.29
", "html": null, "text": "", "num": null, "type_str": "table" } } } }