{ "paper_id": "Y03-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:34:26.730453Z" }, "title": "Extracting Chinese Multi-Word Units from Large-Scale Balanced Corpus", "authors": [ { "first": "Jianzhou", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "China Normal University", "location": { "postCode": "430079", "settlement": "Wuhan", "country": "China" } }, "email": "" }, { "first": "H", "middle": [ "E" ], "last": "Tingting", "suffix": "", "affiliation": { "laboratory": "", "institution": "China Normal University", "location": { "postCode": "430079", "settlement": "Wuhan", "country": "China" } }, "email": "" }, { "first": "Liu", "middle": [], "last": "Xiaohua", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automatic Multi-word Units Extraction is an important issue in Natural Language Processing. This paper has proposed a new statistical method based on a large-scale balanced corpus to extract multi-word units. We have used two improved traditional parameters: mutual information and log-likelihood ratio, and have increased the precision for the top 10,000 words extracted through the method to 80.13%. The results of the research indicate that this method is more efficient and robust than previous multi-word units extraction methods.", "pdf_parse": { "paper_id": "Y03-1032", "_pdf_hash": "", "abstract": [ { "text": "Automatic Multi-word Units Extraction is an important issue in Natural Language Processing. This paper has proposed a new statistical method based on a large-scale balanced corpus to extract multi-word units. We have used two improved traditional parameters: mutual information and log-likelihood ratio, and have increased the precision for the top 10,000 words extracted through the method to 80.13%. The results of the research indicate that this method is more efficient and robust than previous multi-word units extraction methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural language processing is a project based on knowledge, thus human's linguistic knowledge must be stored in the computer and the process of human's comprehending and producing languages be formalized before the computer commands human's linguistic potency. Since multi-word units (words formed with at least two characters) are the primary embodiment of semantics (Pinchuck 1977; Sager 1990 ), research on these words is the starting point for different natural language processing applications. Automatic multi-word units (MWUs) extraction has great theoretical and practical significance to such language information processing research as information indexing, machine translation, voice recognition, document classification as well as thesaurus compiling. Presently, the rapid developments in different professional fields (e.g. computer science, medicine) mean continuous creation of new MWUs, and it is impossible to list them exhaustively in a lexicon. Therefore, automatic extraction of MWUs is a very important issue. 
Compared with Western languages, Chinese text has no spaces between characters and word boundaries are hard to define, so automatic Chinese MWU extraction faces even greater difficulties.", "cite_spans": [ { "start": 369, "end": 384, "text": "(Pinchuck 1977;", "ref_id": null }, { "start": 385, "end": 395, "text": "Sager 1990", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "In this paper, we propose a statistical method based on a large-scale balanced corpus to extract MWUs automatically. The goal is to extract word sequences with precise meanings from the corpus. Our method consists of three phases (the first two phases include three steps each and the third phase includes four steps). First, we select \"seeds\" (two-character words) ready for extension; then we extend these seeds at the front or back by K characters; finally, by comparing the statistical parameters, we determine which extensions are MWUs. We have assessed the experimental data by measuring precision, and the results indicate that our method is more efficient and robust than other approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "The rest of this paper is structured as follows. In section 2, we describe the method and all statistical parameters used in detail. In section 3, we give a comprehensive analysis and evaluation of the experimental data. In section 4, we outline related work and its results. Finally, in section 5, we draw conclusions and introduce part of our future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Two improved parameters are applied to measure the association ratio of adjacent characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "In this method, we use the parameters mi_f and logL_f to measure the association ratio between characters. These two parameters are improved versions of mutual information and the log-likelihood ratio, following Silva & Lopes (for a detailed illustration please see Silva & Lopes (1999) ). At present, the most common formula for calculating mutual information is:", "cite_spans": [ { "start": 243, "end": 263, "text": "Silva & Lopes (1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "mi(x, y) = log( p(x, y) / (p(x) p(y)) ) (3.1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "We think this formula only fits bigrams (two-character words). It is hard to use it for n-grams (n > 2), because dividing an n-gram effectively into two parts x and y is a knotty problem. Therefore, we use the parameter mi_f improved by Silva & Lopes, whose calculation formula is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "mi_f(w1...wn) = log( p(w1...wn) / Avp ) (3.2), where Avp = (1/(n-1)) \u2022 \u03a3_{i=1..n-1} p(w1...wi) p(w(i+1)...wn) (3.3), w1...wn is an n-gram, and p(w1...wn)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "is the probability that w1...wn occurs in the given corpus. Since we cannot directly calculate the probability p(w1...wn), we estimate it with the MLE (Maximum Likelihood Estimation) method. The estimation formula is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "p(w1...wn) = f(w1...wn) / N (3.4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "f(w1...wn) stands for the occurrence frequency of w1...wn in the corpus, and N stands for the number of words in the corpus. 
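As an illustration only (not the authors' own implementation), the following Python sketch computes mi_f for an n-gram from raw corpus counts using the MLE estimate of formula (3.4); the dictionary freq and the function name mi_f are names we introduce here for clarity:

import math

def mi_f(ngram, freq, N):
    # ngram: a tuple of characters (w1, ..., wn); freq: a dict mapping
    # character tuples to their corpus counts; N: number of words in the corpus.
    n = len(ngram)
    p = lambda g: freq.get(g, 0) / N      # MLE estimate, formula (3.4)
    # Avp averages the probability products over all split points, formula (3.3)
    avp = sum(p(ngram[:i]) * p(ngram[i:]) for i in range(1, n)) / (n - 1)
    if avp == 0 or p(ngram) == 0:
        return float('-inf')              # undefined when parts are unseen
    return math.log(p(ngram) / avp)       # formula (3.2)

For a bigram (n = 2) this reduces to the classical mutual information of formula (3.1).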
Ted Dunning originally proposed the log-likelihood ratio parameter, and the formula is defined as follows (for a detailed illustration please see Dunning (1993) ):", "cite_spans": [ { "start": 261, "end": 275, "text": "Dunning (1993)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "-2 log \u03bb = 2 [ log L(p1, k1, n1) + log L(p2, k2, n2) - log L(p, k1, n1) - log L(p, k2, n2) ], where p1 = k1/n1, p2 = k2/n2, p = (k1 + k2)/(n1 + n2), k1 = f(x, y), k2 = f(\u00acx, y), n1 = f(x, *), n2 = f(\u00acx, *), and log L(p, k, n) = k log p + (n - k) log(1 - p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "In this method, we obtain the parameter logL_f following Silva & Lopes' processing method, and the calculation formula is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "logL_f(w1...wn) = 2 [ log L(kf1/nf1, kf1, nf1) + log L(kf2/nf2, kf2, nf2) - log L((kf1 + kf2)/(nf1 + nf2), kf1, nf1) - log L((kf1 + kf2)/(nf1 + nf2), kf2, nf2) ] (3.5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "where kf1 = f(w1...wn), kf2 = Avy - kf1, nf1 = Avx, nf2 = N - nf1. Avx and Avy are respectively defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "Avx = (1/(n-1)) \u2022 \u03a3_{i=1..n-1} f(w1...wi), Avy = (1/(n-1)) \u2022 \u03a3_{i=2..n} f(wi...wn) (3.6). 
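Under the same assumptions as the sketch above (a dict of corpus counts freq and a corpus size N; the helper names are ours, not the authors'), logL_f could be computed roughly as follows:

import math

def log_L(p, k, n):
    # log L(p, k, n) = k log p + (n - k) log(1 - p), with the boundaries guarded
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def logL_f(ngram, freq, N):
    n = len(ngram)
    # Avx and Avy average the prefix and suffix frequencies, formula (3.6)
    avx = sum(freq.get(ngram[:i], 0) for i in range(1, n)) / (n - 1)
    avy = sum(freq.get(ngram[i:], 0) for i in range(1, n)) / (n - 1)
    kf1 = freq.get(ngram, 0)
    kf2 = avy - kf1
    nf1 = avx
    nf2 = N - avx
    if nf1 <= 0 or nf2 <= 0:
        return float('-inf')              # no frequency evidence for the parts
    p = (kf1 + kf2) / (nf1 + nf2)
    return 2 * (log_L(kf1 / nf1, kf1, nf1) + log_L(kf2 / nf2, kf2, nf2)
                - log_L(p, kf1, nf1) - log_L(p, kf2, nf2))    # formula (3.5)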
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction to mi_f and logL_f parameters", "sec_num": "2.1.1" },
{ "text": "Our algorithm works in three phases. First, select the seeds for extension; then extend the seeds; finally, determine which extensions are MWUs. Here we introduce these phases in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Illustrations of MWUs Extraction Algorithm", "sec_num": "2.2" },
{ "text": "The algorithm for selecting seeds is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seeds ready for extension", "sec_num": "2.2.1" },
{ "text": "Input: a corpus L. Output: seeds list db_two", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seeds ready for extension", "sec_num": "2.2.1" },
{ "text": "Step 1: Collect all unigram frequencies and all possible bigram frequencies from L into DB", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seeds ready for extension", "sec_num": "2.2.1" },
{ "text": "Step 2: For all 4-grams w x y z in L, remove one count for x y in DB if -mi_f(x, y) MO kf-Vh, MTTS,140X1-00,501,... then all possible extensions are: (04, V), *), (PP, kIVF), (AkIVFA), (AV, *), (0, Agl*), (04=1, $') and (a AO.VM, Ji).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extension of Seeds", "sec_num": "2.2.2" },
{ "text": "When we extend the seed \"0\" into 4-grams, ik) are the possible extensions, so we collect the frequencies of the four characters, calculate their mi_f and logL_f values, and store them with the id of the seed (\"glilf\") together with the value of sc, where sc = logL_f(extension) - logL_f(seed). 
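As a rough illustration of this extension step (our own sketch, not the authors' code; score stands for any association measure over character tuples, for example the logL_f sketch given earlier), the candidate extensions of a seed occurrence could be enumerated and scored like this:

def extend_seed(sentence, start, seed_len, max_k, score):
    # sentence: a sequence of characters; start: index of a seed occurrence;
    # max_k: maximum number of characters added around the seed;
    # score: an association function over character tuples, e.g. logL_f above.
    seed = tuple(sentence[start:start + seed_len])
    base = score(seed)
    candidates = []
    for left in range(0, max_k + 1):
        for right in range(0, max_k + 1 - left):
            if left == 0 and right == 0:
                continue                      # skip the bare seed itself
            lo, hi = start - left, start + seed_len + right
            if lo < 0 or hi > len(sentence):
                continue
            ext = tuple(sentence[lo:hi])
            sc = score(ext) - base            # sc = logL_f(extension) - logL_f(seed)
            candidates.append((ext, sc))
    return candidates

For a two-character seed extended into 4-grams at most, max_k would be 2.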
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extension of Seeds", "sec_num": "2.2.2" },
{ "text": "The algorithm to determine MWUs is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of MWUs", "sec_num": "2.2.3" },
{ "text": "Input: n-gram lists db_n and the seeds list db_two. Output: MWUs list M", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of MWUs", "sec_num": "2.2.3" },
{ "text": "Step 1: Merge all lists db_n into a single list M", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of MWUs", "sec_num": "2.2.3" },
{ "text": "Step 2: Order list M by id ascending and logL_f descending", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of MWUs", "sec_num": "2.2.3" },
{ "text": "Step 3: For each n-gram in M:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of MWUs", "sec_num": "2.2.3" },
{ "text": "Determine whether it is a MWU; if it is, set isMWUs = 1, else isMWUs = 0. Step 4: Filter the list by the field \"isMWUs\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of MWUs", "sec_num": "2.2.3" },
{ "text": "This phase selects and outputs the MWUs found by the method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of MWUs", "sec_num": "2.2.3" },
{ "text": "Step 3 deals with nested words. If an n-gram's logL_f value is higher than those of the other n-grams and it is not contained by others, we consider this n-gram a MWU. For example, there are some records in our experimental data as follows: (Table 1) According to this judgment method, we consider \"IVPM-\" a multi-word unit, while \"Viti,\" and \"VA\" are not. 
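A small Python sketch of one way to realize this nested-word rule (our reading of Step 3, not the authors' code; candidates is a hypothetical list of pairs of an n-gram and its logL_f value produced by the extension phase):

def contains(a, b):
    # True if the shorter character tuple b occurs inside a.
    return len(b) < len(a) and any(
        a[i:i + len(b)] == b for i in range(len(a) - len(b) + 1))

def filter_nested(candidates):
    mwus = []
    for gram, score in candidates:
        rivals = [s for g, s in candidates
                  if g != gram and (contains(g, gram) or contains(gram, g))]
        if all(score > s for s in rivals):    # keep only the best n-gram in a nest
            mwus.append(gram)
    return mwus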
", "cite_spans": [], "ref_spans": [ { "start": 231, "end": 240, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Definition of MWUs", "sec_num": "2.2.3" },
{ "text": "Results, Evaluation and Discussion", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3", "sec_num": null },
{ "text": "At present, we mainly test our method on closed data. The corpus we used is the large-scale balanced corpus of the Chinese Language Committee. The test data is the core part of the corpus, containing about 20 million Chinese words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The corpus", "sec_num": "3.1" },
{ "text": "We have tested our method on the above corpus. For the top 10,000 extracted MWUs, we achieved 80.13% precision, an improvement of nearly 6 percentage points over the 74.4% precision reported by Patrick & Dekang (2001) . For the top 1,000 words extracted by our method, we achieved 97.60% precision. The detailed results are as follows:", "cite_spans": [ { "start": 178, "end": 201, "text": "Patrick & Dekang (2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The results", "sec_num": "3.2" },
{ "text": "The precision for the top K x 1000 extracted MWUs (K = 1, ..., 10) is plotted in (Fig. 1) . At the same time, we studied seed extension and n-gram precision. We selected 2,000 seeds from the results and collected the MWUs extended from these seeds, obtaining 3,180 MWUs. The n-gram precision is as follows: (Table 2) From Table 2, we can see that the average precision rate is much higher than Fung's average n-gram precision rate of 54.09% (Fung 1998) ", "cite_spans": [ { "start": 443, "end": 454, "text": "(Fung 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 63, "end": 71, "text": "(Fig. 1)", "ref_id": null }, { "start": 312, "end": 321, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "The results", "sec_num": "3.2" },
{ "text": "From the results in 3.2, it is not hard to see that 2-gram extraction gives the best result, while 5-gram extraction gives the worst. The precision trend is shown in Fig. 2: the precision rate generally descends as n grows, and the drop for odd n-grams is even sharper, forming two valleys in the curve. This indicates, on the one hand, that Chinese MWUs are primarily m-grams with m an even number, and on the other hand it reveals that certain parameters of this method do not discriminate odd n-grams ideally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "3.3" },
{ "text": "During extraction, we discovered that many extracted strings begin with \"n\" (of), \"fir\" (and), \"A\" (is) and \"T\" (int.), especially among the 5-grams. Customarily, these strings are not MWUs, so we can further filter them by means of lexical knowledge in order to raise the precision rate. From the above experimental data, we can see that the proposed extraction method is quite successful. This is mainly because we have combined the advantages of the two parameters, mutual information and the log-likelihood ratio, while avoiding their disadvantages. Generally, mutual information reflects the association degree of characters well, but its shortcoming is that it overestimates low-frequency words. The log-likelihood ratio is an efficient parameter for solving that problem. The disadvantage of the log-likelihood ratio is that its value turns out to be quite high for high-frequency words that are rarely adjacent; this problem is in turn alleviated by mutual information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "3.3" },
{ "text": "Also, the choice of the two improved parameters is another main factor contributing to the improvement in performance shown in 3.1 and 3.2. We apply mi_f and logL_f because we found certain shortcomings when extracting with the two traditional parameters, mutual information and the log-likelihood ratio. The problem is that these two traditional parameters are hard to apply fairly during seed extension. For example, when we calculate the mutual information of the 4-gram w1 w2 w3 w4 using formula 3.1, it is a big problem how to divide w1 w2 w3 w4 into two parts x and y. Theoretically, x and y should each be words, but how to define x and y as two words is itself a problem to be solved. In Patrick & Dekang's term extraction algorithm, w1 w2 w3 w4 is generally supposed to be divided into two words or terms (Patrick & Dekang 2001 ), yet this is very hard to realize, and as n-grams grow longer, the number of possible ways to split them into x and y also grows. So we think this is not the best solution. The improved parameters in our method avoid this problem, and the final results show that the method performs better than the alternatives.", "cite_spans": [ { "start": 883, "end": 905, "text": "(Patrick & Dekang 2001", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "3.3" },
{ "text": "Traditional approaches to MWU extraction mainly used rules. However, not all MWUs can be generated by rules; there are many that rules do not cover (SUN Honglin 1998). Our method is mainly based on statistics. Several methods have been proposed for extracting MWUs from corpora with statistical approaches. In this section, we briefly describe some of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "4" },
{ "text": "Patrick & Dekang (2001) proposed a statistical method for automatically extracting domain-specific terms from a segmented Chinese corpus of about 10 MB of Chinese news text. They extracted 10,268 terms from that corpus with a precision of 74.4%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "4" },
{ "text": "Ming-Wen Wu and Keh-Yih Su (1993) presented a method using mutual information and relative frequency. They extracted 9,124 multi-word units from a corpus of 74,404 words with a precision of 47.43%. In their method, MWU extraction is formulated as a classification problem, which requires a training corpus to estimate the parameters of the classification model. Our method does not use any training corpus. Another difference is that they applied their method to English MWU extraction, while we extract Chinese MWUs in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "4" },
{ "text": "Fung (1998) presented a simple system for Chinese MWU extraction, CXtract. CXtract uses predominantly statistical lexical information to find term boundaries in large texts. Evaluation on a corpus of 2 million characters shows an average precision of 54.09%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "4" },
{ "text": "In our automatic MWU extraction method, we use two parameters, mi_f and logL_f, derived from mutual information and the log-likelihood ratio. The results of our experiments show that the extraction method is successful. The precision of the top 10,000 MWUs extracted by our method reaches 80.13%, and the precision of the top 1,000 extracted MWUs reaches 97.06%. Since it is impossible to calculate the overall recall, we only report precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future works", "sec_num": "5" },
{ "text": "In future work, we will prepare a test on open data. Furthermore, we will do research on automatic term extraction. Considering the characteristics of terms, we may extract MWUs from corpora of professional fields and compare them with the MWUs from this balanced corpus, so as to ensure that the extracted MWUs are terms instead of common words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future works", "sec_num": "5" },
{ "text": "Multi-word Units Extraction Algorithm. Our automatic MWUs extraction algorithm takes three phases. 
Two improved parameters are applied", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Accurate Methods for the Statistics of Surprise and Coincidence", "authors": [ { "first": "", "middle": [ "T" ], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "", "volume": "19", "issue": "", "pages": "61--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dunning. T. 1993. Accurate Methods for the Statistics of Surprise and Coincidence. Association for Computational Linguistics, 19(1)61-76 1993.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Word Association Norms, Mutual Information and Lexicography. Computational Linguistics", "authors": [ { "first": "K", "middle": [], "last": "Church", "suffix": "" }, { "first": "& K", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "", "volume": "16", "issue": "", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Church & K. Hanks. 1990. in Word Association Norms, Mutual Information and Lexicography. Computational Linguistics, 16(1):22-29.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A local Maxima Method and a Fair Dispersion Normalization for Extracting Multiword Units", "authors": [ { "first": "", "middle": [ "J" ], "last": "Silva", "suffix": "" }, { "first": "", "middle": [], "last": "Lopes", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 6th Meeting on the Mathematics of Language", "volume": "", "issue": "", "pages": "369--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silva. J. & Lopes. G 1999. A local Maxima Method and a Fair Dispersion Normalization for Extracting Multiword Units. In Proceedings of the 6th Meeting on the Mathematics of Language, p.369-381.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Statistical Corpus-Based Term Extractor. Canadian Conference on AI", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "& Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "36--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Pantel & Dekang Lin. 2001. A Statistical Corpus-Based Term Extractor. Canadian Conference on AI 2001. p.36-46", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Extracting key term from Chinese and Japanese texts", "authors": [ { "first": "", "middle": [ "P" ], "last": "Fung", "suffix": "" } ], "year": 1998, "venue": "The International Journal on Computer Processing of Oriental Language. Special Issue on Information Retrieval on Oriental Language", "volume": "", "issue": "", "pages": "99--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fung. P. 1998. Extracting key term from Chinese and Japanese texts. The International Journal on Computer Processing of Oriental Language. Special Issue on Information Retrieval on Oriental Language. p.99-121.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Identifying Contextual Information for Multi-Word Term Extraction", "authors": [ { "first": "Diana", "middle": [], "last": "Maynard", "suffix": "" }, { "first": "& Sophia", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana Maynard & Sophia Ananiadou. 1999. 
Identifying Contextual Information for Multi-Word Term Extraction.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The C-Value/NC-Value domain independent method for multi-word term extraction", "authors": [ { "first": "K", "middle": [ "T" ], "last": "Frantzi", "suffix": "" }, { "first": "S", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 1999, "venue": "Journal of Natural Language Processing", "volume": "6", "issue": "3", "pages": "145--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. T. Frantzi and S. Ananiadou. 1999. The C-Value/NC-Value domain independent method for multi-word term extraction. Journal of Natural Language Processing, 6(3):145-179.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Using Morphological, Syntactical and Statistical Information for Automatic Term Acquisition", "authors": [ { "first": "Joana", "middle": [], "last": "Paulo", "suffix": "" }, { "first": "Margarita", "middle": [], "last": "Correia", "suffix": "" }, { "first": "J", "middle": [], "last": "Nuno", "suffix": "" }, { "first": "", "middle": [], "last": "Mamede", "suffix": "" }, { "first": "", "middle": [], "last": "Caroline", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joana Paulo, Margarita Correia, Nuno J. Mamede & Caroline. 2002. Using Morphological, Syntactical and Statistical Information for Automatic Term Acquisition.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Corpus-based Terminology Extraction applied to Information Access", "authors": [ { "first": "Kyo", "middle": [], "last": "Kageura", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Umino", "suffix": "" }, { "first": "; _ A", "middle": [], "last": "Verdejo", "suffix": "" }, { "first": "Gonzalo", "middle": [ "J" ], "last": "", "suffix": "" } ], "year": 1996, "venue": "Corpus Linguistics", "volume": "3", "issue": "2", "pages": "259--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyo Kageura, Bin Umino. 1996. Methods of Automatic Term Recognition. Terminology, 3(2):259-289 1996. _ A. Verdejo and Gonzalo J. 2001. Corpus-based Terminology Extraction applied to Information Access. Corpus Linguistics 2001; Lancaster, UK.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Retrieving collocations from text: Xtract", "authors": [ { "first": "", "middle": [], "last": "Smadja", "suffix": "" }, { "first": "", "middle": [], "last": "Frank", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "143--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smadja. Frank. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1): 143-177.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Evaluating Domain-Oriented Multi-Word Terms from Texts", "authors": [ { "first": "", "middle": [ "F J" ], "last": "Damerau", "suffix": "" } ], "year": 1990, "venue": "Information Processing and Management", "volume": "29", "issue": "4", "pages": "433--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damerau. F. J. 1990. Evaluating Domain-Oriented Multi-Word Terms from Texts. 
Information Processing and Management 29(4), 433-447.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Statistical Approach to Mechanized Encoding and Searching of Literary Information", "authors": [ { "first": "H", "middle": [ "P" ], "last": "Luhn", "suffix": "" } ], "year": 1957, "venue": "IBM Journal of Research and Development", "volume": "2", "issue": "2", "pages": "159--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luhn, H. P. 1957. A Statistical Approach to Mechanized Encoding and Searching of Literary Information. IBM Journal of Research and Development 2(2), 159-165.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Extracting collocations from text corpora", "authors": [ { "first": "Lin", "middle": [ "D" ], "last": "", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLING/ACL-98 Workshop on Computational Terminology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin. D. 1998. Extracting collocations from text corpora. In Proceedings of COLING/ACL-98 Workshop on Computational Terminology. Montreal, Canada.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Theory of Tema Importance in Automatic Text Analysis", "authors": [ { "first": "", "middle": [], "last": "Salton", "suffix": "" }, { "first": "Yu", "middle": [ "C C T" ], "last": "Yang", "suffix": "" } ], "year": 1975, "venue": "Journal of the American Society for Information Science", "volume": "26", "issue": "1", "pages": "33--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salton. G Yang. C. S and Yu. C. T. 1975. A Theory of Tema Importance in Automatic Text Analysis. Journal of the American Society for Information Science 26(1), 33-44.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Chinese Word Segmentation without Using Lexicon and Handcrafted Training Data", "authors": [ { "first": "", "middle": [ "M" ], "last": "Sun", "suffix": "" }, { "first": "", "middle": [ "D" ], "last": "Shen", "suffix": "" }, { "first": "B", "middle": [ "K" ], "last": "Tsou", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLING-ACL/98", "volume": "", "issue": "", "pages": "1265--1271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun. M., Shen. D. and Tsou B. K. 1998. Chinese Word Segmentation without Using Lexicon and Handcrafted Training Data. In Proceedings of COLING-ACL/98, P.1265-1271.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Highlights: Language-and Domain-Independent Automatic Indexing Terms for Abstracting", "authors": [ { "first": "", "middle": [ "J D" ], "last": "Cohen", "suffix": "" } ], "year": 1995, "venue": "Journal of the American Society for Information Science", "volume": "46", "issue": "3", "pages": "162--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen. J. D. 1995. Highlights: Language-and Domain-Independent Automatic Indexing Terms for Abstracting. Journal of the American Society for Information Science 46(3), 162-174.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "PAT-tree-based Keyword Extraction for Chinese Information retrieval", "authors": [ { "first": "Lee-Feng", "middle": [], "last": "Chien", "suffix": "" } ], "year": 1997, "venue": "ACMSIGIR'97", "volume": "", "issue": "", "pages": "50--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee-Feng Chien. 1997. PAT-tree-based Keyword Extraction for Chinese Information retrieval. 
ACMSIGIR'97, Philadelphia, USA, p.50-58.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Large-scale Automatic Extraction of an English-Chinese Lexicon", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Xuanyin", "suffix": "" } ], "year": 1995, "venue": "Machine Translation", "volume": "9", "issue": "", "pages": "285--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "WU, Dekai and Xuanyin XIA. 1995. Large-scale Automatic Extraction of an English-Chinese Lexicon. Machine Translation 9(3-4), p.285-313.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Evaluating Domain-Oriented Multi-Word Terms from Texts", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Damerau", "suffix": "" } ], "year": 1993, "venue": "Information Processing and Management", "volume": "29", "issue": "4", "pages": "433--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damerau, F. J. 1993. Evaluating Domain-Oriented Multi-Word Terms from Texts. Information Processing and Management 29(4). p.433-447.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Corpus-based Automatic Compound Extraction with Mutual Information and Relative Frequency Count", "authors": [ { "first": "Ming-Wen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 1993, "venue": "Proceedings of R. 0. C. Computational Linguistics Conference VI", "volume": "", "issue": "", "pages": "207--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Wen Wu and Keh-Yih Su. 1993. Corpus-based Automatic Compound Extraction with Mutual Information and Relative Frequency Count. Proceedings of R. 0. C. Computational Linguistics Conference VI. Nantou, Taiwan. p.207-216.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Scientific and Technical Translation", "authors": [ { "first": "Isadore", "middle": [], "last": "Pinchuck", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pinchuck, Isadore. 1997. Scientific and Technical Translation. Andre Deutsch.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A Practical Course in Terminology Processing", "authors": [ { "first": "Juan C ; John", "middle": [], "last": "Sager", "suffix": "" }, { "first": "B", "middle": [ "V" ], "last": "Benjamins", "suffix": "" }, { "first": "Duan", "middle": [], "last": "Sun Honglin", "suffix": "" }, { "first": "", "middle": [], "last": "Huiming", "suffix": "" } ], "year": 1990, "venue": "Chinese Phrase Information Database about Natural Language Processing. Term Standardization and Information Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sager, Juan C. 1990. A Practical Course in Terminology Processing. John Benjamins B.V. SUN Honglin and DUAN Huiming. 1998. Chinese Phrase Information Database about Natural Language Processing. Term Standardization and Information Technology (2).", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "It is not difficult to see from theFig.2that the general tendency of the precision rate is", "uris": null, "num": null } } } }