{ "paper_id": "Y03-1026", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:34:44.104111Z" }, "title": "A Vector-Based Algorithm for Chinese Text ClassificationEll", "authors": [ { "first": "Luo", "middle": [], "last": "Chang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "He", "middle": [], "last": "Ting", "suffix": "", "affiliation": {}, "email": "hett@l63.net" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, vector-distance-weighted algorithm and representative-vector-distance algorithm are described and used to implement the process of automatic text classification. Two experiments have been done by means of the algorithms (experiment) is based on vector-distance-weighted algorithm and experiment2 is based on representative-vector-distance algorithm). Characters are selected as features. The average precision of experiment) and experiment2 is 80.36% and 69.27%, respectively. Comparing the two experiments, it can be concluded that the efficiency of text classification can be improved by means of vector-distance-weighted algorithm.", "pdf_parse": { "paper_id": "Y03-1026", "_pdf_hash": "", "abstract": [ { "text": "In this paper, vector-distance-weighted algorithm and representative-vector-distance algorithm are described and used to implement the process of automatic text classification. Two experiments have been done by means of the algorithms (experiment) is based on vector-distance-weighted algorithm and experiment2 is based on representative-vector-distance algorithm). Characters are selected as features. The average precision of experiment) and experiment2 is 80.36% and 69.27%, respectively. Comparing the two experiments, it can be concluded that the efficiency of text classification can be improved by means of vector-distance-weighted algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "on its content. In western countries, the study about text classification and its relative fields begins as early as 60 or 70s last century, when Salton put forward the VSM (Vector Space Model) theory and the VSM is used successfully in application. The study on Chinese text classification by means of computer starts in the 90s. The research has achieved a lot, for example, FuDan University and Institute of Computing Technology of Chinese Academy of Sciences have tracked and studied the TREC test. Early on PeKing University and TingHua University have made a study of the technology of web classification on their search engine \"Network Sky\" and \"The guide of Network\" respectively [LIU Bin, HUANG Tie Jun, CHENG Jun, GAO Wen 2002] . The results of these fields are not as satisfactory as those in English research because of the uniqueness of Chinese language.", "cite_spans": [ { "start": 724, "end": 737, "text": "GAO Wen 2002]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "There are two methods for text classification. One is the rule method. This method is applied earlier. For example, the Construe [Church , K.W.Lisa.F.Rau 1995] is developed based on it. The other method is based on statistics, such as Bayes, VSM, KNN, SVM and so on. In the following section, the authors will discuss the expression of texts in VSM, computation of the feature weight, the similarity between text and classes, and the vector-distance algorithm. 
At the same time, the results of the experiments are reported and analyzed.", "cite_spans": [ { "start": 129, "end": 159, "text": "[Church , K.W.Lisa.F.Rau 1995]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Vector Space Model (VSM) is widely applied in IR systems for its simple conception and its assumption that closeness in the vector space corresponds to closeness in meaning. The classification method used for texts here is introduced from IR systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector Space Model", "sec_num": "2." }, { "text": "A text is composed of characters, words and phrases, which are termed the features of the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector expression of training texts", "sec_num": "2.1." }, { "text": "According to the \"Bayes hypothesis\", presuming that the effects of the features on the class assignment are independent, the text can be expressed as a vector over the feature collection. After the training set is processed, the \"Term-Document\" matrix space A (of size t x d) can be obtained, whose rows and columns correspond to the features and the texts respectively. So we get the training set vector space shown in Figure 1. (This paper is supported by the Natural Science Foundation of Hubei, China (ID: 2001ABB012).)", "cite_spans": [ { "start": 156, "end": 184, "text": "Hubei, China (ID:2001ABBol2)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Vector expression of training texts", "sec_num": "2.1." }, { "text": "Figure 1. The term-document matrix of the training set: row i corresponds to feature t_i, column j to document doc_j, and the entry in row i and column j is the weight a_i,j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector expression of training texts", "sec_num": "2.1." }, { "text": "In general, A is a sparse matrix; it can be compressed by using the method described in [Zhan Xuegang, Lin Hongfei, Yao Tianshun 1999] .", "cite_spans": [ { "start": 88, "end": 135, "text": "[Than xue gang, Lin Hongfei, Yao Tianshun 1999]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Vector expression of training texts", "sec_num": "2.1." }, { "text": "From section 2.1, the matrix A of size t x d can be obtained. If a_i,j is used to denote the non-zero elements, then A = [a_i,j] (t x d). In order to show the importance of the features in the texts, a_i,j", "cite_spans": [ { "start": 116, "end": 123, "text": "[ai,i ]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Feature weighting", "sec_num": "2.2." }, { "text": "cannot simply be the raw frequency of the feature in the text. Generally, the feature weight is computed and normalized. The TF-IDF model is used to compute the term weight as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature weighting", "sec_num": "2.2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a_i,j = ( Local(i,j) * Global(i) ) / sqrt( sum_{i=1..t} ( Local(i,j) * Global(i) )^2 )", "eq_num": "(1)" } ], "section": "Feature weighting", "sec_num": "2.2." }, { "text": "Where a_i,j is the term weight; Local(i, j) = log2(1 + tf_i,j), and tf_i,j is the frequency of term i in document j; Global(i) = log2(n / df_i) + 1, n is the number of documents in the training set, and df_i is the number of documents containing term i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature weighting", "sec_num": "2.2." }, { "text": "For a new document, formula (1) could also be used to calculate the term weight, but because n = 1 and df_i = 1, Global(i) = 1 for every term. 
In the experiments, formula (2) is used to compute the term weight and formula (3) is used to normalize it:", "cite_spans": [ { "start": 91, "end": 94, "text": "=1,", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Feature weighting", "sec_num": "2.2." }, { "text": "s_i,j = ( 1 + log(tf_i,j) ) / ( 1 + log(l_j) )   (2)      s'_i,j = s_i,j / sqrt( sum_i (s_i,j)^2 )   (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature weighting", "sec_num": "2.2." }, { "text": "Where s_i,j is the weight of term i in document j, l_j is the length of document j, and the frequencies of all terms in the document summed up are taken as the document length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature weighting", "sec_num": "2.2." },
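{ "text": "To make the weighting scheme concrete, the following Python sketch builds the training-set matrix with formula (1) and weights a new document with formulas (2) and (3), using the characters of the training texts as features (see section 2.3). It is an illustrative fragment rather than the authors' original implementation; the function names and the use of plain lists are our own assumptions.

import math
from collections import Counter

def train_matrix(train_docs):
    # Characters appearing in the training set are the features.
    features = sorted({ch for doc in train_docs for ch in doc})
    n = len(train_docs)
    # df_i: number of training documents containing feature i.
    df = {f: sum(1 for doc in train_docs if f in doc) for f in features}
    columns = []
    for doc in train_docs:
        tf = Counter(doc)
        # Formula (1): Local(i,j) = log2(1 + tf_i,j), Global(i) = log2(n / df_i) + 1,
        # followed by length normalization of the column.
        w = [math.log2(1 + tf[f]) * (math.log2(n / df[f]) + 1) for f in features]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        columns.append([x / norm for x in w])
    return features, columns

def new_doc_vector(doc, features):
    tf = Counter(doc)
    length = sum(tf.values()) or 1  # document length l_j
    # Formula (2): s_i,j = (1 + log(tf_i,j)) / (1 + log(l_j)); formula (3) normalizes it.
    s = [(1 + math.log(tf[f])) / (1 + math.log(length)) if tf[f] else 0.0 for f in features]
    norm = math.sqrt(sum(x * x for x in s)) or 1.0
    return [x / norm for x in s]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature weighting", "sec_num": "2.2." },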
{ "text": "There are multiple choices for features, such as characters, words, phrases or their compounds. It is widely accepted that using words as features is superior to the other choices. Words, however, are not directly available in Chinese texts; because automatic word extraction is not satisfactory, segmenting the training set is then the first choice. After segmentation, the mutual information method can be used to filter the features. In the literature [LIU Bin, HUANG Tie Jun, CHENG Jun, GAO Wen 2002] , the authors employ characters and characters plus words as the features respectively, and compare the results. As shown there, the results improved little after adding more than 200 000 words as features. So, employing characters as the features for the study of text classification is meaningful.", "cite_spans": [ { "start": 429, "end": 478, "text": "[LIU Bin, HUANG Tie Jun, CHENG Jun, GAO Wen 2002]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Feature selecting", "sec_num": "2.3." }, { "text": "In the experiments, characters are selected as the features, but not all of the 6763 Chinese characters defined in GB2312-80: only the characters that occur in the training set are selected as features, and their number is smaller than that of GB2312-80. There are 5468 such characters in the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selecting", "sec_num": "2.3." }, { "text": "There are many algorithms based on the vector space, such as Support Vector Machine, Neural Network, KNN, Bayes, vector-distance and so on. In this paper, the vector-distance algorithm is employed. The simple vector-distance algorithm is used in the vector space model. Simply speaking, this algorithm computes the vector distance between the document to be classified and the classes of the training set. Two methods are used in the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Method I: vector-distance-weighted algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "This is an algorithm to compute a weighted similarity, that is, handling every text of the training set to get the training set matrix, handling the text to be classified to get its vector, computing the similarity between each text in the training set and the text to be classified, and then summing the similarities of the training texts that have the same class. The main steps are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Step 1: Formula (1) is used to weight the training set text vectors to get the training set matrix space;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Step 2: Formula (2) is used to weight the new document vector, while formula (3) is used to normalize it;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Step 3: Formula (4) is used to compute the similarities;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Cos(d_i, d_j) = ( sum_{k=1..n} a_i,k * a_j,k ) / ( sqrt( sum_{k=1..n} (a_i,k)^2 ) * sqrt( sum_{k=1..n} (a_j,k)^2 ) )", "eq_num": "(4)" } ], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Where d_i is the feature vector of the new text, d_j is the vector of the jth document in the training set, and n is the number of features;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Step 4: The training set texts of the same class are grouped in order to calculate the weighted similarity of each class, using formula (5):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "SumSim(d_i, C_j) = sum_{t=1..n} Cos(d_i, d_t) * T(d_t, C_j)   (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "where d_i is the new document, Cos(d_i, d_t) is the same as in formula (4), n is the number of documents in the collection, and T(d_t, C_j) is the class indicator function: if d_t belongs to class C_j, then T(d_t, C_j) = 1; otherwise T(d_t, C_j) = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Step 5: The classes are sorted in descending order according to the similarities computed in Step 4 and output; the first one is the class to which the new document is assigned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." },
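{ "text": "The vector-distance-weighted classification in Steps 1-5 can be sketched as follows in Python. It is an illustrative fragment under our own naming, not the authors' code, and it assumes the weighted vectors produced by the previous sketch.

import math

def cosine(u, v):
    # Formula (4): cosine similarity between two weighted vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def classify_weighted(new_vec, train_vecs, train_labels):
    # Formula (5): SumSim(d_i, C_j) sums Cos(d_i, d_t) over the training texts d_t
    # whose class is C_j (the indicator T(d_t, C_j) selects them).
    sums = {}
    for vec, label in zip(train_vecs, train_labels):
        sums[label] = sums.get(label, 0.0) + cosine(new_vec, vec)
    # Step 5: sort the classes by weighted similarity in descending order.
    ranking = sorted(sums.items(), key=lambda kv: kv[1], reverse=True)
    return ranking  # ranking[0][0] is the predicted class", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." },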
}, { "text": "Furthermore, get the representative vector matrix of the whole collection, to get the normalization matrix, using formula (1);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Step2: The vector of new document is gotten by using formula (2) and 3;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Step3: The similarities are calculated by using formula (4);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "Step4: At last, get the class that the new document belongs to, according to the size of the similarities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-distance algorithm", "sec_num": "3." }, { "text": "The analyzing the corpus and the outcome of the experiment on these classes, it is found that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "0 The boundaries of classes are seriously overlapped. The discrimination becomes lower, when using the characters as the features. For above problems, the effectiveness of case Q can be improved by means of changing the feature selection, such as selecting word as the feature. As for the case CD, it is difficult to improve the effectiveness by statistical method only. Semantic comprehension should be added to help us improve the classification effectiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "4. The authors select characters as features and don't filter the feature set, this probably leads to noise [Than xue gang, Lin Hongfei, Yao Tianshun 1999 , Schutze H, Hull D, Pedersen J. 1996 , ZHOU Shui-Geng, GUAN Ji-Hong, HU Yun-Fa, ZHOU Ao-Ying 2001 , for example, the influence from number, and make the classification effectiveness decline. In the experiments, after the handling of the texts to Unicode, then every number code becomes a feature, so the function of number is larger than before.", "cite_spans": [ { "start": 108, "end": 154, "text": "[Than xue gang, Lin Hongfei, Yao Tianshun 1999", "ref_id": null }, { "start": 155, "end": 192, "text": ", Schutze H, Hull D, Pedersen J. 1996", "ref_id": null }, { "start": 193, "end": 253, "text": ", ZHOU Shui-Geng, GUAN Ji-Hong, HU Yun-Fa, ZHOU Ao-Ying 2001", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "5. The data of table 1, 2, 3 are the numbers of successfully classified documents, and each of them has the largest similarity with training set class, and into which they are classified. In the analysis of the corpus, in each class it is found that the categories of some texts judged by machine are different from their original categories which are classified manually. The results judged by machine are regarded as correct, by means of artificial discrimination. Therefore, the results of classification are correct if they are of this case. According to this principle, the number behind \"+\" indicates the increasing correct result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "Table4.The adjusted data of experiment 1 Tables. 
{ "text": "After analyzing the corpus and the outcome of the experiments on these classes, it is found that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "(1) The boundaries of the classes seriously overlap. (2) The discrimination becomes lower when characters are used as the features. For the above problems, the effectiveness in case (2) can be improved by changing the feature selection, such as selecting words as the features. As for case (1), it is difficult to improve the effectiveness by statistical methods only; semantic comprehension should be added to help improve the classification effectiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "4. The authors select characters as features and do not filter the feature set; this probably introduces noise [Zhan Xuegang, Lin Hongfei, Yao Tianshun 1999 , Schutze H, Hull D, Pedersen J. 1996 , ZHOU Shui-Geng, GUAN Ji-Hong, HU Yun-Fa, ZHOU Ao-Ying 2001] , for example the influence of numbers, and makes the classification effectiveness decline. In the experiments, after the texts are converted to Unicode, every numeric character code becomes a feature, so the influence of numbers is larger than before.", "cite_spans": [ { "start": 108, "end": 154, "text": "[Than xue gang, Lin Hongfei, Yao Tianshun 1999", "ref_id": null }, { "start": 155, "end": 192, "text": ", Schutze H, Hull D, Pedersen J. 1996", "ref_id": null }, { "start": 193, "end": 253, "text": ", ZHOU Shui-Geng, GUAN Ji-Hong, HU Yun-Fa, ZHOU Ao-Ying 2001", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "5. The data in Tables 1, 2 and 3 are the numbers of successfully classified documents, each of which has the largest similarity with its assigned training-set class. In the analysis of the corpus, it is found in each class that the categories of some texts judged by the machine differ from their original, manually assigned categories. By manual inspection, some of these machine judgements are in fact correct, and in such cases the classification results are regarded as correct. According to this principle, the number behind \"+\" indicates the additional correct results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "Table 4. The adjusted data of experiment 1; Table 5. The adjusted data of experiment 2. Second, the cosine distance discrepancy among the first three classes is very small compared with the others; third, using this method, setting a threshold can be avoided. Tables 7 and 8 show the related data.", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 61, "text": "Tables. The adjusted", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "Table 7. The data of experiment 1 before and after the second adjustment (based on Table 1). Comparing the two experiments, it can be concluded that the efficiency of text classification can be improved by means of the vector-distance-weighted algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." },
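{ "text": "The adjusted evaluation described above can be sketched as follows in Python (our own illustrative helpers, not the authors' code): a document counts as correct under the second adjustment if its original class appears among the first three classes of the similarity ranking, and Precision, Recall and the F1 value are computed from the resulting counts.

def correct_after_adjustment(ranking, original_class, top_k=3):
    # 'ranking' is the descending (class, similarity) list from classification;
    # the adjustment accepts the document if the original class is in the top three.
    return original_class in [label for label, _ in ranking[:top_k]]

def precision_recall_f1(num_correct, num_assigned, num_in_class):
    precision = num_correct / num_assigned
    recall = num_correct / num_in_class
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }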
], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A New Statistical-based Method in Automatic Text Classification", "authors": [ { "first": "", "middle": [], "last": "Liu Bin", "suffix": "" }, { "first": "", "middle": [], "last": "Huang Tie Jun", "suffix": "" }, { "first": "", "middle": [], "last": "Cheng Jun", "suffix": "" }, { "first": "", "middle": [], "last": "Gao Wen", "suffix": "" } ], "year": 2002, "venue": "Journal of Chinese Information Processing", "volume": "16", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[LIU Bin, HUANG Tie Jun, CHENG Jun, GAO Wen 2002] A New Statistical-based Method in Automatic Text Classification. Journal of Chinese Information Processing, Vol.16 No.6, 2002. [Church, K.W., Rau, L.F. 1995] Commercial Applications of Natural Language Processing. Communications of the ACM, Vol.38 No.11, 1995. [Zhan Xuegang, Lin Hongfei, Yao Tianshun 1999] Hierarchical Method for Chinese Document Classification. Journal of Chinese Information Processing, Vol.13 No.6, 1999.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Term Weighting and Classification Algorithms", "authors": [ { "first": "", "middle": [], "last": "Diao Qian", "suffix": "" }, { "first": "", "middle": [], "last": "Wang Yongcheng", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang Huihui", "suffix": "" }, { "first": "", "middle": [], "last": "He Ji", "suffix": "" } ], "year": 2000, "venue": "Journal of Chinese Information Processing", "volume": "14", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Diao Qian, Wang Yongcheng, Zhang Huihui, He Ji 2000] Term Weighting and Classification Algorithms. Journal of Chinese Information Processing, Vol.14 No.3, 2000.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Schutze", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Christopher D. Manning, Hinrich Schutze] Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, Massachusetts; London, England.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Chinese Document Categorization System without Dictionary Support and Segmentation Processing", "authors": [ { "first": "H", "middle": [], "last": "Schutze", "suffix": "" }, { "first": "D", "middle": [], "last": "Hull", "suffix": "" }, { "first": "J", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "", "middle": [], "last": "Zhou Shui-Geng", "suffix": "" }, { "first": "", "middle": [], "last": "Guan Ji-Hong", "suffix": "" }, { "first": "", "middle": [], "last": "Hu Yun-Fa", "suffix": "" }, { "first": "", "middle": [], "last": "Zhou Ao-Ying", "suffix": "" } ], "year": 1996, "venue": "", "volume": "38", "issue": "7", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Schutze H, Hull D, Pedersen J. 1996] A Comparison of Selective Bayesian Network Classifiers. ICML-96, 1996. [ZHOU Shui-Geng, GUAN Ji-Hong, HU Yun-Fa, ZHOU Ao-Ying 2001] A Chinese Document Categorization System without Dictionary Support and Segmentation Processing. Journal of Computer Research & Development, Vol.38 No.7, 2001.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "(1) Some samples in the corpus are excerpts of one or several passages. It is likely, for example, that the title of a sample is about medicine and the sample is manually classified into medicine & sanitation, while the content of the text is about the chemical components and chemical reactions of the medicine, so the text is judged as biology & chemistry by the machine. (2) The corpus of military affairs & gym contains both military affairs and gym. After the corpus is analyzed, it is found that the majority of the military documents contain historical and military facts, and they overlap with politics-law. The samples of gym overlap with those of medicine & sanitation and education in content. Samples which overlap with other classes' samples are labelled correctly in the corpus, but they are classified wrongly by the machine. The same reason is found in the other classes whose Recall is low." }, "TABREF1": { "type_str": "table", "html": null, "text": "The effectiveness of text classification is measured by Recall and Precision, calculated by the following equations. In the experiments, the authors employ the Modern Chinese Corpus of the State Language Commission as the training set and use the vector-distance algorithm to implement automatic text classification. The corpus is a balanced corpus and has been manually classified. It includes three parts, namely Humanities & Social Science, Natural Science and Integration. Politics, history, society, economy, arts, literature, military affairs & gym, and life are the 8 classes included in the first part; mathematics & physics, biology & chemistry, astronomy & geography, medicine & sanitation, agriculture & forests, and ocean & weather are included in Natural Science; and the Integration part contains two classes, application documents and others. The size of every text is about 3-4 KB, and the content of the texts is selected from newspapers, books and general magazines.", "num": null, "content": "
Precision = the number of documents that are correctly assigned to a class / the number of documents assigned to that class
Recall = the number of documents that are correctly assigned to a class / the number of documents of that class in the whole test set
In order to synthetically consider the effectiveness of text classification, the F1 test value is used as follows:
F1 test value = (Precision x Recall x 2) / (Precision + Recall)

In the experiments, 11 classes are randomly selected, including literature-colloquialism, politics-law, society-education, mathematics & physics, biology & chemistry, military affairs & gym, astronomy & geography, medicine & sanitation, arts, agriculture & forests, and ocean & weather; from each of them 100 passages are selected. The total amount is 1100. Half of them are used as the training set.

Table 1. The data of experiment 1
Class        Right  Recall%  Precision%
Literature   39     78       98
Politics     49     98       87.46
Arts         16     32       93.82
Medicine     18     36       94
Astronomy    38     76       96.55
Mathematics  40     80       97.82
Biology      23     46       94.91
Society      44     88       86.18
Agriculture  20     40       94.55
Military     4      8        91.64
Ocean        50     100      89.09

Table 2. The data of experiment 2
Class        Right  Recall%  Precision%
Literature   25     50       95.46
Politics     50     100      83.09
Arts         11     22       92.09
Medicine     12     24       93.09
Astronomy    33     66       95.46
Mathematics  50     100      78.18
Biology      5      10       91.82
Society      32     64       89.64
Agriculture  14     28       93.46
Military     7      14       92.18
Ocean        34     68       95.27

Table 3. The comparison of experiment 1 & experiment 2
              Correct pages  Precision  Recall   F1 value
Experiment 1  341            62%        62%      62%
Experiment 2  273            49.64%     49.64%   49.64%

It is obviously shown in the above figures that:
1. The effectiveness of experiment 1 is better than that of experiment 2. That is to say, merging the training texts of the same class into a representative vector does not work well for classification.
2. In experiment 1, the classification results for literature-colloquialism, politics-law, society-education, mathematics & physics, astronomy & geography and ocean & weather are better and more stable than the others, especially politics-law, society-education, mathematics & physics and ocean & weather, whose Recall reaches 80% or even more. The main reason is that the features of these classes are distinct, so there is little influence from other overlapping classes.
3. The Recall of biology & chemistry, military affairs & gym, medicine & sanitation, arts and agriculture & forests is low, especially that of military affairs & gym, which is only about 10%.

5. Results and Discussions
" }, "TABREF2": { "type_str": "table", "html": null, "text": "According to the analysis of the experiments, the result of classification in military affairs & gym, medicine & sanitation, arts, biology& chemistry, agriculture& forests is unsatisfactory. It is partly due to the overlapping of the feature set or class boundary. According to the results of experiments and analysis of the corpus, the authors admit a document can be classified into more than one class. So, in class similarities sort list, the first-three classes of the new document are observed. If any one of them among the three is the same as the original class, it means that the classification of the new document is successful. (This adjustment is only for military affairs & gym, medicine & sanitation, arts, biology& chemistry, and agriculture& forests.) The reasons to do so are: first, the documents in corpus may have several classes;", "num": null, "content": "
Table 4. The adjusted data of experiment 1 (based on Table 1)
Class        Correct  Wrong
Literature   39+1     10
Politics     49+1     0
Arts         16+11    23
Medicine     18+3     29
Astronomy    38+1     11
Mathematics  40+1     9
Biology      23+2     25
Society      44+4     2
Agriculture  20+1     29
Military     4+23     23
Ocean        50       0
Total        341+48   161

Table 5. The adjusted data of experiment 2 (based on Table 2)
Class        Correct  Wrong
Literature   25+2     23
Politics     50       0
Arts         11+12    27
Medicine     12+5     33
Astronomy    33+7     10
Mathematics  50       0
Biology      5+16     29
Society      32+9     9
Agriculture  14+7     29
Military     7+19     24
Ocean        34+7     9
Total        273+84   193

Table 6. The comparison of experiment 1 & experiment 2 after the first adjustment
              Correct pages  Precision  Recall   F1 value
Experiment 1  389            70.73%     70.73%   70.73%
Experiment 2  357            64.91%     64.91%   64.91%
" }, "TABREF3": { "type_str": "table", "html": null, "text": "The results in tablel 8 is better, including that of literature-colloquialism, politics-law, society-education, mathematics & physics, astronomy & geography and ocean & weather, of which the data are not adjusted yet. The average precision of experiment) and experiment2 is 80.36% and 69.27%, respectively.", "num": null, "content": "
Table 7. The data of experiment 1 before and after the second adjustment (based on Table 1)
                Arts  Biology  Military  Medicine  Agriculture  Correct  Precision  Recall
Before adjust   16    23       4         18        20           81       62%        62%
After adjust    48    37       33        30        34           182      80.36%     80.36%

Table 8. The data of experiment 2 before and after the second adjustment (based on Table 2)
                Arts  Biology  Military  Medicine  Agriculture  Correct  Precision  Recall
Before adjust   11    5        7         12        14           49       49.64%     49.64%
After adjust    38    34       32        24        30           158      69.27%     69.27%
" } } } }