{ "paper_id": "Y98-1022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:37:31.160912Z" }, "title": "Automatic Bunsetsu Segmentation of Japanese Sentences Using a Classification Tree", "authors": [ { "first": "Yujie", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Electro-Communications", "location": {} }, "email": "" }, { "first": "Kazuhiko", "middle": [], "last": "Ozekik", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Electro-Communications", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Bunsetsu, which is comprised of a content word followed by, possibly 0, function words, is a convenient unit for dependency structure analysis of Japanese. There are, however, no spaces indicating bunsetsu boundaries in the orthographic writing of Japanese. Thus a sentence must be segmented into bunsetsu's by some means prior to dependency structure analysis. Conventionally, such segmentation has been performed by using some kind of hand-crafted rules. This paper describes a novel segmentation method using a classification tree, by which knowledge about bunsetsu boundaries is automatically acquired from a labeled corpus. The method enables quick and easy adaptation to a new task domain, and also to a new system of morpheme categorization without the need of changing the algorithm. Effectiveness of this method is shown through experiments on an ATR corpus and an EDR corpus.", "pdf_parse": { "paper_id": "Y98-1022", "_pdf_hash": "", "abstract": [ { "text": "Bunsetsu, which is comprised of a content word followed by, possibly 0, function words, is a convenient unit for dependency structure analysis of Japanese. There are, however, no spaces indicating bunsetsu boundaries in the orthographic writing of Japanese. Thus a sentence must be segmented into bunsetsu's by some means prior to dependency structure analysis. Conventionally, such segmentation has been performed by using some kind of hand-crafted rules. This paper describes a novel segmentation method using a classification tree, by which knowledge about bunsetsu boundaries is automatically acquired from a labeled corpus. The method enables quick and easy adaptation to a new task domain, and also to a new system of morpheme categorization without the need of changing the algorithm. Effectiveness of this method is shown through experiments on an ATR corpus and an EDR corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Bunsetsu, which is comprised of a content word with or without being followed by a string of function words, is a convenient unit for dependency structure analysis of Japanese. There are, however, no spaces indicating bunsetsu boundaries in the orthographic writing of Japanese. Thus a sentence must be segmented into bunsetsu's prior to dependency structure analysis. According to the elementary definition of bunsetsu [Nagao, ed. (1984) ], such segmentation might look simple. There are, in reality, many factors that complicate the problem. For example, a prefix and/or a suffix can be attached to a content word, and Chinese characters can be concatenated to form a compound word. Some nouns and verbs have functions different from their original ones. Also, there are many idiomatic usages of morpheme concatenations. All these matters cause difficulties in detecting bunsetsu boundaries. 
Moreover, there is no system of morpheme categorization in Japanese that has received a general consensus. This situation gives rise to another obstacle to establishing a standard method of bunsetsu segmentation.", "cite_spans": [ { "start": 420, "end": 438, "text": "[Nagao, ed. (1984)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "There have been two major approaches to the bunsetsu segmentation problem: one based on an automaton [Fujio et al. (1997) ] or on bunsetsu patterns [Kurohashi (1997) ] representing a definition of bunsetsu, and the other based on a set of hand-crafted rules [Suzuki (1996) ]. In the former approach, one has to give a definition of bunsetsu manually. The latter involves manual work in acquiring knowledge about bunsetsu boundaries. Thus both approaches have problems in maintaining consistency, coverage, and optimality of manually obtained knowledge. When the task domain and/or the system of morpheme categorization is changed, one has to repeat the whole manual process to get new knowledge, which is rather laborious.", "cite_spans": [ { "start": 101, "end": 121, "text": "[Fujio et al. (1997)", "ref_id": "BIBREF3" }, { "start": 146, "end": 163, "text": "[Kurohashi (1997)", "ref_id": "BIBREF6" }, { "start": 256, "end": 270, "text": "[Suzuki (1996)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "This paper proposes a method of bunsetsu segmentation using a classification tree [Breiman et al. (1984) ], [Quinlan (1993) ], by which knowledge about bunsetsu boundaries is automatically acquired from a corpus. It enables quick adaptation to a new task domain, and also to a new system of morpheme categorization without the need of changing the algorithm. The effectiveness of the method is shown through experiments on an ATR corpus and an EDR corpus.", "cite_spans": [ { "start": 82, "end": 104, "text": "[Breiman et al. (1984)", "ref_id": "BIBREF1" }, { "start": 108, "end": 123, "text": "[Quinlan (1993)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "A classification tree is a binary tree that classifies objects into classes [Breiman et al. (1984) ], [Quinlan (1993) ]. Through an automatic generation of a classification tree, one can rapidly acquire underlying regularities in a large amount of data, which are difficult or even impossible for a human to capture by intuition. This technique has been well studied in such fields as pattern recognition and machine learning [Safavian et al. (1991) ]. A number of applications to natural language processing and speech processing have also been reported [Kuhn et al. (1995) ], [Wang et al. (1992) ], [Ostendorf et al. (1993) ]. (*Department of Computer Science and Information Mathematics, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan. Email: zhang@achilleus.cs.uec.ac.jp, ozeki@cs.uec.ac.jp.)", "cite_spans": [ { "start": 76, "end": 98, "text": "[Breiman et al. (1984)", "ref_id": "BIBREF1" }, { "start": 102, "end": 117, "text": "[Quinlan (1993)", "ref_id": "BIBREF9" }, { "start": 634, "end": 657, "text": "[Safavian et al. (1991)", "ref_id": "BIBREF10" }, { "start": 763, "end": 782, "text": "[Kuhn et al. (1995)", "ref_id": "BIBREF5" }, { "start": 786, "end": 805, "text": "[Wang et al. (1992)", "ref_id": "BIBREF12" }, { "start": 809, "end": 833, "text": "[Ostendorf et al. 
(1993)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "CLASSIFICATION TREE", "sec_num": "2." }, { "text": "The process of classification by a tree is shown in Fig.l . Associated with each non-terminal node of a classification tree is a test. If an object passes a test, it reaches \"yes\" child-node; otherwise \"no\" child-node. The process is repeated until the object reaches some terminal node or leaf, which has been assigned to a class label. Only two classes, c1 and c2 , will be considered here. In order to grow a classification tree from the root, objects with class labels, or training objects, are necessary. Also a finite set of tests must be prepared. Suppose that a tree has been grown to some size. It consists of three kinds of nodes: non-terminal nodes, leaves, and active nodes. An active node is a tentative terminal node that will be turned into a non-terminal node or a leaf afterward. Let t be an active node, and L(t) the number of training objects that reach t, among which the number of objects belonging to ci is denoted as Li (t) (i = 1,2). Then the impurity of t, Gini index [Breiman et al. (1984) ], is defined as Li(t) L2(t)", "cite_spans": [ { "start": 993, "end": 1015, "text": "[Breiman et al. (1984)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 52, "end": 57, "text": "Fig.l", "ref_id": null } ], "eq_spans": [], "section": "CLASSIFICATION TREE", "sec_num": "2." }, { "text": "root I (t) : L i (t) , L z (t) yes no \\ /Ono) / (tyes) / Cl C2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLASSIFICATION TREE", "sec_num": "2." }, { "text": "I(t) = L(t) L(t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLASSIFICATION TREE", "sec_num": "2." }, { "text": "If the impurity is lower than a prescribed threshold, then t is decided to be a leaf. Otherwise all the tests are tried exhaustively. By means of a test, the training objects that reach t are divided into those that reach the \"yes\" child-node ty \" and those that reach the \"no\" child-node trio . Let L(tx ) denote the number of training objects that reach tx , where x equals \"yes\" or \"no\". Then the test that maximizes the reduction of impurity", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLASSIFICATION TREE", "sec_num": "2." }, { "text": "L(t \") L(tno) I(tno) DI t= I(t) Lt I(tyes) Lt)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLASSIFICATION TREE", "sec_num": "2." }, { "text": "is selected as the test associated with t, and new active nodes, t y \" and trio , are appended under t. If there is no test that reduces the impurity of t, then t is decided to be a leaf. With the root as the initial active node, the above procedure is iterated until all the active nodes are turned into non-terminal nodes or leaves. A leaf t is assigned to class label ci if the majority of training objects that reach t belong to class ci (i = 1, 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLASSIFICATION TREE", "sec_num": "2." }, { "text": "By morphological analysis, a sentence is segmented into morphemes. The attribute values of each morpheme such as the part of speech and the orthographic expression are also obtained. An object to be classified here is a pair of a morpheme sequence derived from a sentence and a boundary between morphemes (a boundary in focus, henceforth):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPLICATION OF CLASSIFICATION TREES TO BUNSETSU SEGMENTATION", "sec_num": "3." 
}, { "text": "(rn i m2 \u2022 \u2022 \u2022 mn , bi )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPLICATION OF CLASSIFICATION TREES TO BUNSETSU SEGMENTATION", "sec_num": "3." }, { "text": ", where mk (1 < k < n) is a morpheme labeled with its attribute values, and bi the boundary between mi and mi+ i . Therefore a sentence consisting of N morphemes yields N -1 objects. The purpose of classification is to decide whether the boundary in focus is a bunsetsu boundary in the morpheme sequence; this is a classification problem for two classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPLICATION OF CLASSIFICATION TREES TO BUNSETSU SEGMENTATION", "sec_num": "3." }, { "text": "Since it is expected that morphemes near the boundary in focus are important, only two morphemes adjacent to the boundary are tested: one immediately on the left (left morpheme) and the other immediately on the right (right morpheme). Among the attributes of a morpheme, the part of speech is considered to be most important. In some cases, however, the part of speech alone does not provide enough information for bunsetsu boundary detection. Therefore the orthographic expression is employed as another test attribute for some range of morphemes, which are selected by a preliminary experiment. Also the wild card \"*\" is introduced as a symbol to match any attribute value. Let pi be a part of speech, and W(pi ) the set of orthographic expressions, including \"*\", of morphemes that belong to pi and are selected by the preliminary experiment. Then the set of the pairs of p i and its orthographic expressions is denoted as {pi } x W(pi ). Let S be the set of all such pairs plus (*, *):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPLICATION OF CLASSIFICATION TREES TO BUNSETSU SEGMENTATION", "sec_num": "3." }, { "text": "S = E({pi} x W(pi )) U {(*, *)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPLICATION OF CLASSIFICATION TREES TO BUNSETSU SEGMENTATION", "sec_num": "3." }, { "text": "Then the set of all the tests is given by S x S in the present work. A. test takes the form < (pi , e l)(P2, *) >, for example. An object will pass this test if the part of speech of the left morpheme equals /31 , its orthographic expression equals e l , and the part of speech of the right morpheme equals /32 . A similar technique has been applied to classification of intonational phrase boundaries [Wang et al. (1992) ], though the purpose and the attributes used in their work are quite different from ours.", "cite_spans": [ { "start": 402, "end": 421, "text": "[Wang et al. (1992)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "APPLICATION OF CLASSIFICATION TREES TO BUNSETSU SEGMENTATION", "sec_num": "3." }, { "text": "In the . process of growing a classification tree, an active node t is decided to be a leaf if the condition I(t) < T is satisfied for a prescribed threshold T, or if there is no test that reduces the impurity of t. In this work the value of T was set at 0.1. This condition implies that more than 90% of the training objects that reach t belong to the same class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPLICATION OF CLASSIFICATION TREES TO BUNSETSU SEGMENTATION", "sec_num": "3." }, { "text": "By the procedure described above, classification trees for the bunsetsu segmentation task were generated by using the training objects, and then the results were evaluated. 
In order to see the influence of different morpheme categorization systems and different sentence materials on the results, two corpora, an ATR corpus and an EDR corpus, were used in the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXPERIMENTAL RESULTS", "sec_num": "4." }, { "text": "The ATR corpus contains 503 sentences taken from newspapers, magazines, etc. The sentences are segmented into morphemes, and labeled with the part of speech and the bunsetsu boundary [Abe et al. (1990) ]. In this experiment 6994 objects were extracted from the corpus, of which 2000 were used for training, and the remaining 4994 for evaluation. The attribute used for the tests here was the part of speech only; the orthographic expression was not used. There are 25 parts of speech in the ATR corpus. Fig. 2 illustrates a part of the generated tree near the root. The total number of nodes was 69. There was a tendency for the tests related to morphemes with higher frequencies to appear in the nodes closer to the root. Thus it can be said that an efficient order of tests was realized automatically. Table 1 shows the result of evaluation. The symbol Y signifies that the boundary in focus is a bunsetsu boundary, and N signifies that it is not. The arrow denotes the classification operation by the tree. So, Y → Y means that an object labeled as a bunsetsu boundary in the corpus is classified as a bunsetsu boundary by the tree. The expression |Y → Y| denotes the number of such cases.", "cite_spans": [ { "start": 187, "end": 205, "text": "[Abe et al. (1990)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 502, "end": 507, "text": "Fig. 2", "ref_id": null }, { "start": 802, "end": 809, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment on ATR Corpus", "sec_num": "4.1" }, { "text": "About 76% of the errors were of the type Y → N. It was found that most of the errors of this type came from concatenations of two common nouns. The generated tree decided a morpheme boundary between common nouns not to be a bunsetsu boundary. However, many of the first common nouns in such concatenations were in fact ones that functioned as adverbs, and such objects were labeled as bunsetsu boundaries in the corpus. Thus, the coarseness of the sub-categorization of noun was one of the causes of the errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on ATR Corpus", "sec_num": "4.1" }, { "text": "In the EDR corpus [EDR (1996) ], there are no labels indicating bunsetsu boundaries. Instead, it has detailed information about the syntactic structure of sentences. By utilizing this information and the definition of bunsetsu [Nagao, ed. (1984) ], 400 sentences were labeled with the bunsetsu boundary. Then 6984 objects for training were extracted from 200 randomly selected sentences, and 7110 objects for evaluation from the rest. The sub-categorization of noun in the EDR corpus seemed too coarse for the present purpose. Therefore it was augmented by using the semantic identifier, which was common to the corpus and the dictionary. The resulting number of parts of speech was 19. In the EDR corpus, the sub-categorization of particle is coarser than that in the ATR corpus. Moreover, there is no such category as formal verb, which is employed as a category in the ATR corpus. Therefore, the test of the form <(particle, *), (verb, *)> has little power to distinguish between a bunsetsu boundary and a non bunsetsu boundary; the part of speech information alone, especially for particles and verbs, is not enough for bunsetsu segmentation.
For that reason, the orthographic expression was also used as an attribute for the tests, together with the part of speech.", "cite_spans": [ { "start": 518, "end": 524, "text": "(1996)", "ref_id": null }, { "start": 722, "end": 740, "text": "[Nagao, ed. (1984)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "A classification tree with 175 nodes was generated in this case. It was observed that the tree generated on the EDR corpus (EDR tree, henceforth) acquired new segmentation rules that were not acquired by the tree generated on the ATR corpus (ATR tree, henceforth):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "1. The boundary between a temporal noun and a common noun was decided to be a bunsetsu boundary, while it was not by the ATR tree. (Because the ATR corpus has no such category as temporal noun, it makes no distinction between a temporal noun and a common noun.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "2. The boundary between an auxiliary verb and a common noun was decided to be a bunsetsu boundary, and the boundary between an auxiliary verb and a formal noun was not, while neither was decided to be a bunsetsu boundary by the ATR tree. (Because the ATR corpus has no such category as formal noun, it makes no distinction between a common noun and a formal noun.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "3. The boundary between a particle and a common noun was decided to be a bunsetsu boundary, and the boundary between a particle and a formal noun was not, while both were decided to be bunsetsu boundaries by the ATR tree. (The cause is the same as above.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "4. The boundary between two common nouns was decided to be a bunsetsu boundary, and the boundary between two proper nouns was not, while neither was decided to be a bunsetsu boundary by the ATR tree. (The ATR corpus does not contain a sufficient amount of data for extracting such a segmentation rule.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "5. The EDR tree acquired 12 rules related to symbols, while the ATR tree acquired no such rules. (The ATR corpus has no occurrence of symbols.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "Thus, the classification tree extracted the new segmentation rules by exploiting the sub-categorization of noun in the EDR corpus. It is observed that in this way a classification tree can adapt to a new system of morpheme categorization. Table 2 shows the result of evaluation. The causes of the errors have been analyzed as follows.", "cite_spans": [], "ref_spans": [ { "start": 238, "end": 245, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "\u2022 In this experiment, only two morphemes, one on the left and the other on the right of the boundary in focus, were tested.
There were, however, some cases where testing more than two morphemes would improve the result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "\u2022 The set of orthographic expressions used as attribute values in the tests was incomplete. There were cases where adding some other orthographic expressions would yield better results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "\u2022 The training objects did not cover all the linguistic phenomena concerning bunsetsu boundaries. Thus sparseness of the training objects was a problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "\u2022 Some errors obviously resulted from mislabeling in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on EDR Corpus", "sec_num": "4.2" }, { "text": "The results of this work are summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5." }, { "text": "\u2022 A classification tree can acquire linguistic knowledge about bunsetsu boundaries automatically, given an appropriately labeled corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5." }, { "text": "\u2022 Using the criterion of maximum impurity reduction, it generates efficient rules that capture statistical and logical regularities concerning bunsetsu boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5." }, { "text": "\u2022 It enables quick adaptation to a new task domain, and also to a new system of morpheme categorization, without the need of changing the algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5." }, { "text": "Our future work includes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5." }, { "text": "\u2022 Improvement of the control method for growing a classification tree by adjusting the threshold value T, or by pruning [Breiman et al. (1984) ], [Gelfand et al. (1991) ], [Quinlan (1993) ], so that better generalization can be attained.", "cite_spans": [ { "start": 120, "end": 142, "text": "[Breiman et al. (1984)", "ref_id": "BIBREF1" }, { "start": 146, "end": 168, "text": "[Gelfand et al. (1991)", "ref_id": "BIBREF4" }, { "start": 172, "end": 187, "text": "[Quinlan (1993)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5." }, { "text": "\u2022 Pursuit of a better method for assigning class labels to leaves, to enhance the reliability of the decisions at leaves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5." }, { "text": "\u2022 Automatic acquisition of morphemes whose orthographic expressions are effective in bunsetsu boundary detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5."
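}, { "text": "As a reading aid (a consistency check added to this description, not a statement from the original paper): the accuracy figures in Tables 1 and 2 follow from the confusion counts as Accuracy = (|Y → Y| + |N → N|) / (number of objects), i.e. (2015 + 2893) / 4994 = 98.3% for the ATR corpus and (2502 + 4341) / 7110 = 96.2% for the EDR corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": null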
} ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Speech Database User's Manual", "authors": [ { "first": "Masanobu", "middle": [ ";" ], "last": "Abe", "suffix": "" }, { "first": "", "middle": [], "last": "Sagisaka", "suffix": "" }, { "first": ";", "middle": [], "last": "Yoshinori", "suffix": "" }, { "first": "Tetsuo", "middle": [ ";" ], "last": "Umeda", "suffix": "" }, { "first": "Hisao", "middle": [], "last": "Kuwabara", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abe, Masanobu; Sagisaka, Yoshinori; Umeda, Tetsuo; and Kuwabara, Hisao (1990). Speech Database User's Manual. ATR Interpreting Telephony Research Laboratories (in Japanese).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Classification and Regression Trees", "authors": [ { "first": "Leo", "middle": [ ";" ], "last": "Breiman", "suffix": "" }, { "first": "Jerome", "middle": [ "H" ], "last": "Friedman", "suffix": "" }, { "first": "Richard", "middle": [ "A" ], "last": "Olshen", "suffix": "" }, { "first": "Charles", "middle": [ "A" ], "last": "Stone", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Breiman, Leo; Friedman, Jerome H.; Olshen, Richard A.; and Stone, Charles A. (1984). Classifi- cation and Regression Trees. Chapman and Hall.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Specifications of EDR Electronic Dictionary Ver", "authors": [], "year": 1996, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "EDR (Japan Electronic Dictionary Research Institute) (1996). Specifications of EDR Electronic Dictionary Ver.1.5 (in Japanese).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Statistical Japanese Dependency Structure Analysis Using an EDR Bracketed Corpus", "authors": [ { "first": "Masakazu", "middle": [ ";" ], "last": "Fujio", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 1997, "venue": "Proc. of Symposium on Applications of EDR Electronic Dictionary", "volume": "", "issue": "", "pages": "49--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fujio, Masakazu; and Matsumoto, Yuji (1997). Statistical Japanese Dependency Structure Anal- ysis Using an EDR Bracketed Corpus. Proc. of Symposium on Applications of EDR Electronic Dictionary:49-55 (in Japanese).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An Iterative Growing and Pruning Algorithm for Classification Tree Design", "authors": [ { "first": "Saul", "middle": [ "B" ], "last": "Gelfand", "suffix": "" }, { "first": "C", "middle": [ "S" ], "last": "Ravishankar", "suffix": "" }, { "first": "Edward", "middle": [ "J" ], "last": "Delp", "suffix": "" } ], "year": 1991, "venue": "IEEE Trans. PAMI", "volume": "13", "issue": "2", "pages": "163--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gelfand, Saul B.; Ravishankar, C. S.; and Delp, Edward J. (1991). An Iterative Growing and Pruning Algorithm for Classification Tree Design. IEEE Trans. 
PAMI 13(2):163-174.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Application of Semantic Classification Trees to Natural Language Understanding", "authors": [ { "first": "Roland", "middle": [ ";" ], "last": "Kuhn", "suffix": "" }, { "first": "De", "middle": [], "last": "Mori", "suffix": "" }, { "first": "Renato", "middle": [], "last": "", "suffix": "" } ], "year": 1995, "venue": "IEEE Trans. PAMI", "volume": "17", "issue": "5", "pages": "449--460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuhn, Roland; and De Mori, Renato (1995). The Application of Semantic Classification Trees to Natural Language Understanding. IEEE Trans. PAMI 17(5): 449-460.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Japanese Parsing System, KNP version 2.0 b3, User's Manual", "authors": [ { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurohashi, Sadao (1997). Japanese Parsing System, KNP version 2.0 b3, User's Manual. Faculty of Engineering, Kyoto University (in Japanese).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Japanese Language Information Processing. The Institute of Electronics, Information and Communication Engineers", "authors": [ { "first": "Makoto", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nagao, Makoto, ed. (1984). Japanese Language Information Processing. The Institute of Elec- tronics, Information and Communication Engineers (in Japanese).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Parse Scoring with Prosodic Information: an Analysis/Synthesis Approach", "authors": [ { "first": "M", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "C", "middle": [ "W" ], "last": "Wightman", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Veilleux", "suffix": "" } ], "year": 1993, "venue": "Computer Speech and Language", "volume": "7", "issue": "", "pages": "193--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ostendorf, M.; Wightman, C. W.; and Veilleux, N. M. (1993). Parse Scoring with Prosodic Infor- mation: an Analysis/Synthesis Approach. Computer Speech and Language 7: 193-210.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "C4.5: Programs for Machine Learning", "authors": [ { "first": "J", "middle": [], "last": "Quinlan", "suffix": "" }, { "first": "", "middle": [], "last": "Ross", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinlan, J. Ross (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A Survey of Decision Tree Classifier Methodology", "authors": [ { "first": "S", "middle": [], "last": "Safavian", "suffix": "" }, { "first": "David", "middle": [], "last": "Landgrebe", "suffix": "" } ], "year": 1991, "venue": "IEEE Trans. SMC", "volume": "21", "issue": "3", "pages": "660--674", "other_ids": {}, "num": null, "urls": [], "raw_text": "Safavian, S. Rasoul; and Landgrebe, David (1991). A Survey of Decision Tree Classifier Methodol- ogy. IEEE Trans. SMC 21(3): 660-674.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Japanese Sentence Segmentation Algorithm Using Character Patterns Based on the Statistical Investigation. IEICE Trans. 
on Information and Systems J79-D-II", "authors": [ { "first": "Emiko", "middle": [], "last": "Suzuki", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "1236--1243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suzuki, Emiko (1996). Japanese Sentence Segmentation Algorithm Using Character Patterns Based on the Statistical Investigation. IEICE Trans. on Information and Systems J79-D-II(7):1236-1243 (in Japanese).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automatic Classification of Intonational Phrase Boundaries", "authors": [ { "first": "Michelle", "middle": [ "Q" ], "last": "Wang", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirshberg", "suffix": "" } ], "year": 1992, "venue": "Computer Speech and Language", "volume": "6", "issue": "", "pages": "175--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, Michelle Q.; and Hirshberg, Julia (1992). Automatic Classification of Intonational Phrase Boundaries. Computer Speech and Language 6: 175-196.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Classification tree for two classes c1 and c2.", "num": null }, "TABREF0": { "html": null, "type_str": "table", "text": "Part of the tree generated on the ATR corpus. Evaluation result for the ATR corpus.", "num": null, "content": "
[Fig. 2 (recoverable node labels only): part of the classification tree generated on the ATR corpus, near the root. The recoverable node tests are <Case_Particle ...>, <Case_Particle Dependent_Particle>, <* Common_Noun>, <Case_Particle Case_Particle>, <Common_Noun Common_Noun>, <Case_Particle Auxiliary_Verb> and <Case_Particle Auxiliary_Particle>, each branching yes/no to YES/NO leaves or to a subtree.]
Table 1. Evaluation result for the ATR corpus.
No. of objects: 4994, |Y → Y|: 2015, |N → N|: 2893, |Y → N|: 65, |N → Y|: 21, Accuracy: 98.3%
" }, "TABREF1": { "html": null, "type_str": "table", "text": "Evaluation result for the EDR corpus.", "num": null, "content": "
No. of objects: 7110, |Y → Y|: 2502, |N → N|: 4341, |Y → N|: 62, |N → Y|: 205, Accuracy: 96.2%
" } } } }