{
"paper_id": "Y03-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:34:40.596217Z"
},
"title": "Using Mutual Information to Identify New Features for Text documents of Various Domains",
"authors": [
{
"first": "Zhili",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {},
"email": "guozhili@cn.ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The task of identifying proper names, unknown words and new terms, is an important step in text processing systems. This paper describes a method of using mutual information to collect possible segments as candidates of these three feature types in a document scope. Then the construction and context of each possible feature is examined to determine its type, canonical form and meaning. Adding very little domain-specific knowledge, this method adapts to various domains easily.",
"pdf_parse": {
"paper_id": "Y03-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "The task of identifying proper names, unknown words and new terms, is an important step in text processing systems. This paper describes a method of using mutual information to collect possible segments as candidates of these three feature types in a document scope. Then the construction and context of each possible feature is examined to determine its type, canonical form and meaning. Adding very little domain-specific knowledge, this method adapts to various domains easily.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most text processing systems, e.g., information extraction, text categorization, clustering, and machine translation, use words (rather than characters) as basic units to build their algorithms. Thus morphological process like segmentation and feature identification becomes an important step, extremely for languages like Chinese, which lacks morphological marks like space separator between words or capital letters at proper names. The quality of segmentation and feature identification greatly influences the performance of the overall text processing systems. Unknown words, i.e., words used in a document but not collected in a segmentation dictionary, and unknown proper names (persons, locations, organizations and their abbreviations) often reduce the precision and quality of a Chinese segmentation algorithm. Although there are ways to collect a balanced word list as the basis of a dictionary, a document always has new words and new names. Hence there must be an efficient way to identify these features. If they are not correctly identified, the real single-character words and the lone characters that actually combine into a new word, the common words and proper names will all be mixed together, which impediments followed processing steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a method of using document-scope mutual information to identify three types of features in a unified algorithm. These three types of features are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 proper names, including sub-categories of person names, place names, organization names, and their abbreviation forms; \u2022 unknown words, including common words used in a document but not collected in segmentation dictionary, product and brand names; \u2022 document terms, usually some phrases formed by multiple words. These are often key concepts of a document. Mutual information is used to evaluate the coherence level of two consecutive characters and words in a document, and those bigrams of higher mutual information are assumed as \"seed\" of possible features. Then these seeds are extended to both their right and left sides, still using mutual information as a criterion to determine how long to be extended, to form a list of candidate features. The sifted patterns are assigned a category type and a confidence value, according to their internal constructions, their contexts, and their distribution in the document scope.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the method uses document-scope statistics as a major criterion to identify proper names, unknown words, and document terms, it's relatively easy to adjust to various domains, by adding relatively little domain-specific knowledge represented via rules. The document statistics results can also be, a helpful resource to assist generating these rules. The method proposed in this paper doesn't require a large manually-tagged corpus, but it could cooperate with such a corpus, in a way of learning useful domain-knowledge from corpus and then applying it to this method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The next section introduces some previous works on feature identification, focusing on that for Chinese language. Section 3 introduces the method to calculate entropy and mutual information values for arbitrary length of patterns. Section 4 introduces the overall steps of feature identification. Section 5 provides the results of some preliminary experiments, and the last section gives some thoughts about future works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most previous efforts of feature identification fall into two categories. The first category, which has been studied for a long time, is mainly knowledge based, i.e., they utilize various language knowledge (often represented in expert-collected rules) to identify a single type of feature. The second category is based on some kind of machine learning or corpus statistics method, aiming at identifying all manually-tagged feature types in the corpus. Sun (1995) describes a method to identify Chinese person names based on several character lists, e.g., using a list of surnames, two lists of commonly used given-name characters, and a list of commonly used given-name words. Ji 2001combined a probalistic model called inverse name frequency model and language rules to identify Chinese person names. Tan(1999) proposed a knowledge-based Chinese place name identification method. Zhang(1997) described a rule-based method to identify Chinese university and college names, using a list of suffix words and some pattern rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Previous Work",
"sec_num": "2"
},
{
"text": "The method of using machine learning to identify language features is relatively newer. Baluja(1999) describes a method using 29 language cues to identify English proper names. The cues include word marks, dictionary lookup result, put-of-speech, and a word's adjacency to punctuation marks; but they are apparently chosen for English language, such as whether a word is upper-cased and whether a token is found in the dictionary. This kind of method requires much less language expertise, but it needs a sufficiently large mature corpus. Also the training result is usually hard for human beings to understand, thus it's relatively hard to integrate expert knowledge with the machine learning method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Previous Work",
"sec_num": "2"
},
{
"text": "In contrast to the above two categories of feature identification method, this paper uses mutual information to calculate the coherence level between consecutive characters and words, thus collect all possible candidate segments. This stage purely depends on the probalistic distribution of each pattern and its context, and requires no linguistic knowledge. Later I apply linguistic resources like character lists, Chinese classifier word list, and organization suffixes to assign the most probable category of each guessed segment; some simple syntactic knowledge like coordination and prepositional phrase-structure is also applied at this stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Previous Work",
"sec_num": "2"
},
{
"text": "This section introduces the calculation the entropy of an arbitrary-length text segment and the mutual information between two patterns, which is the basis for next-step feature identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3"
},
{
"text": "An input document is firstly segmented against a general-domain dictionary, whose words are already verified in a large corpus to be common-sense words. High-accuracy pattern-matching features like numbers, dates, URL and e-mail addresses are also identified to reduce overall token numbers and thus accelerate the next-step sorting procedure. But less-accurate features like some Chinese phrases \"-X -Y\" (where X and Y are single Chinese characters of a same part-of-speech. Such phrase pattern is designed to tackle phrases like \"--4-*\" and \"--t33 ='\") are disabled, because recognizing such phrase patterns sometimes causes segmentation errors in its nearby context, and thus causes more side effects; rather these patterns are recognized after the identification of proper names, unknown words, and document terms. The sorting procedure uses the same algorithm as in Guo (1996) . The sorting algorithm is quite suitable for sorting large quantity of patterns. For n text patterns, this algorithm finishes in 0(n *log(n)) time while costing 0(n) memory.",
"cite_spans": [
{
"start": 871,
"end": 881,
"text": "Guo (1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the frequencies and contexts for any-length text segments",
"sec_num": "3.1"
},
{
"text": "Sorting results look like this: In the sorting result sequences, each pattern is associated with the number of identical tokens to its immediate succeeding pattern. For example, in Table 1 , the associated number would be 2, 2, 2, and 0. With these associated numbers, it's very easy to calculate the occurrence of any pattern, ranging from a single token to any-length of tokens. To calculate the occurrence frequency of the single token \"s\" in table 1, just locate its first appearance in the sorted sequence, then count all succeeding patterns with an associated number of no less than its length 1, which is the number of tokens it has, we'll know its occurrence frequency is 3. For a 2-token pattern \"VA\", locate its first appearance and count all succeeding patterns with an associated number of no less than its length 2, we'll know its occurrence frequency is also 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Statistics of the frequencies and contexts for any-length text segments",
"sec_num": "3.1"
},
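{
"text": "As an illustration of the sorted-pattern statistics described above, the following Python sketch (not the paper's implementation, which follows Guo (1996)) builds the sorted suffix list with the associated numbers and counts a pattern's occurrence frequency from them. The naive sort and comparisons are for clarity only and do not reproduce the O(n log(n)) time / O(n) memory bounds of the original algorithm.

# Illustrative sketch, not the paper's code: sorted token suffixes with
# associated numbers (tokens shared with the next suffix in sorted order),
# and pattern-frequency counting as described in Section 3.1.
from bisect import bisect_left

def build_sorted_suffixes(tokens):
    # Sort the starting positions of all token suffixes lexicographically.
    order = sorted(range(len(tokens)), key=lambda i: tokens[i:])
    shared = []
    for a, b in zip(order, order[1:]):
        n = 0
        while a + n < len(tokens) and b + n < len(tokens) and tokens[a + n] == tokens[b + n]:
            n += 1
        shared.append(n)      # tokens shared with the next suffix in sorted order
    shared.append(0)          # the last suffix has no successor
    return order, shared

def pattern_frequency(tokens, order, shared, pattern):
    # Locate the first suffix starting with `pattern`, then count the run of
    # succeeding suffixes whose associated number is at least len(pattern).
    m = len(pattern)
    keys = [tuple(tokens[i:i + m]) for i in order]
    start = bisect_left(keys, tuple(pattern))
    if start >= len(keys) or keys[start] != tuple(pattern):
        return 0
    freq, i = 1, start
    while i < len(order) - 1 and shared[i] >= m:
        freq += 1
        i += 1
    return freq

toks = 'a b a b c a b'.split()
order, shared = build_sorted_suffixes(toks)
print(pattern_frequency(toks, order, shared, ['a', 'b']))   # -> 3
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the frequencies and contexts for any-length text segments",
"sec_num": "3.1"
},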
{
"text": "Left context pattern SPArg\u2022VWE4:1 / grAV-A-/R/fw",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the frequencies and contexts for any-length text segments",
"sec_num": "3.1"
},
{
"text": "A pattern's right context, in sorted order, can also be directly obtained from the sorted sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the frequencies and contexts for any-length text segments",
"sec_num": "3.1"
},
{
"text": "For example, the immediate right context of \"S/A\" are \"e\"X\", \"-i'V/-\u00b1-\", and \"A/f9\", all with an occurrence frequency of 1. I augmented the sorting algorithm so that each pattern's left contexts can also be quickly obtained in sorted order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the frequencies and contexts for any-length text segments",
"sec_num": "3.1"
},
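{
"text": "A minimal sketch (not the paper's code) of gathering a pattern's immediate right and left contexts with their frequencies; the augmented sorting algorithm above yields these directly in sorted order, while a simple linear scan is used here for clarity.

# Illustrative sketch: immediate right and left contexts of a token pattern
# and their occurrence frequencies, computed by a simple scan for clarity.
from collections import Counter

def right_contexts(tokens, pattern):
    m = len(pattern)
    ctx = Counter()
    for i in range(len(tokens) - m):
        if tokens[i:i + m] == list(pattern):
            ctx[tokens[i + m]] += 1
    return ctx

def left_contexts(tokens, pattern):
    m = len(pattern)
    ctx = Counter()
    for i in range(1, len(tokens) - m + 1):
        if tokens[i:i + m] == list(pattern):
            ctx[tokens[i - 1]] += 1
    return ctx
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the frequencies and contexts for any-length text segments",
"sec_num": "3.1"
},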
{
"text": "Following the method of using mutual information to construct a relevance network in Butte (2000) , this paper defines the mutual information (MI) between two adjacent text segment s and t as",
"cite_spans": [
{
"start": 85,
"end": 97,
"text": "Butte (2000)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "MI(s,t)=H(s)+H(t)-H(st)",
"eq_num": "(1)"
}
],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "where H(s) and H(t) are the entropy of s and t in the whole document; H(st) is the entropy of the text segment concatenated by s and t, which indicates how often s and t appear adjacently in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "The entropy H(x) of a text segment x is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "I I (X) = -Ip(xi )log2 (p(;))",
"eq_num": "(2)"
}
],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "Since the occurrence frequencies of characters and words in a document are disperse values, I use the frequencies that a pattern appears in the paragraphs of a document. In other words, a document is firstly segmented into paragraphs, and then a pattern's frequencies in each paragraph are counted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "For example, in a document about a new medicine \"aft*\" for curing diabetes, there are altogether 8 paragraphs. Among these paragraphs, \"a\" -as a single segmentation unit --appears totally 7 times in 5 paragraphs, namely 1, 1, 2, 2, 1; thus its probabilities are 0.14, 0.14, 0.28, 0.28 and 0.14. By formula (2), its entropy is 2.24.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
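{
"text": "A small Python sketch of formulas (1) and (2) using paragraph-level counts; the count vector below reuses the worked example above (7 occurrences spread over 5 of the 8 paragraphs as 1, 1, 2, 2, 1).

# Illustrative sketch of formulas (1) and (2): paragraph-level entropy and
# mutual information. Counts are the number of occurrences in each paragraph.
import math

def entropy(paragraph_counts):
    total = sum(paragraph_counts)
    return -sum((c / total) * math.log2(c / total)
                for c in paragraph_counts if c > 0)

def mutual_information(counts_s, counts_t, counts_st):
    # MI(s, t) = H(s) + H(t) - H(st), where counts_st are the paragraph counts
    # of the concatenated segment st.
    return entropy(counts_s) + entropy(counts_t) - entropy(counts_st)

print(round(entropy([1, 1, 2, 2, 1]), 2))   # 2.24, matching the example above
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},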
{
"text": "The entropy numbers calculated for each segmentation unit (either a word or a lone character) via this method implies each segmentation unit's ability of forming a longer feature. A larger entropy number means that the segmentation unit is used in all paragraphs of the document more equally, and thus having a larger possibility of forming new features such as document terms, unknown words, or proper names. Another alternative to calculate each segmentation unit's entropy might be based on their occurrence frequencies in each sentence, but since a document contains much more sentences, the occurrence frequencies are usually very small, leading to all probabilities are almost the same; the result of the overall method is worse than using occurrence frequencies in paragraphs. Table 2 shows the entropy numbers of patterns \"fr\" -patterns starting with \"a\" --in the above-mentioned sam p le documents:",
"cite_spans": [],
"ref_spans": [
{
"start": 784,
"end": 791,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "Table 2 (paragraph id, left context, focused pattern, right context): for each paragraph of the sample document, the table lists the left and right contexts of the focused seed pattern; the original Chinese cell contents were not recoverable from the extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "ag3UffiNt owN-ofmnirr-zaig Table 2 : patterns formed by li r\" and its contexts Based on the occurrence frequencies in Table 2 , the entropy numbers of patterns like \"a*\" are calculated. Table 3 lists the calculation result:",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 34,
"text": "Table 2",
"ref_id": null
},
{
"start": 118,
"end": 125,
"text": "Table 2",
"ref_id": null
},
{
"start": 186,
"end": 193,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "Pattern and its entropy value remarks 02.30 \"Ogic*\"7-1A7 ' ' 1,4 firt \u00a7\"2 ' T-01\u00b1\"WW/A\" cri 114314-4Z U2.24 lariffl3T,Uf\" \"ag*\" 1 1:1 ' 77A. 7 2.2 'ili-a3SijA\"PkftW\" 1 1:1 ' 7 R,W2.24 7 -gra3311.AHINIMON\"11:3 ' , f*,ft/W2.24 77 Table 3 : entropy numbers of patterns \"VP\" The mutual information between two text patterns s and t indicates how tightly they are used as a whole unit, which is taken as a criterion of the semantics coherence level of these two patterns. In this way, mutual information can be used as a metric between two patterns related to their degree of independence. It's hypothesized that the higher mutual information between two patterns, the more likely they form a coherent unit.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "A mutual information at zero means that the two patterns s , t and their combination occurs only once in the document, thus gives no meaning in statistics, and any possible features formed by s and t can not be recognized by this paper's document-statistical method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "A higher mutual information means that one pattern is non-randomly associated with the other; in semantics, this means they form a new concept or phrase. In the sample document, since the two words \"J171(\" and \"s\" are always used together, and leads to a very high mutual information value of 2.24, only slightly less than the mutual information value 2.30 of \"fig\" and \"fic\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "Since the calculation of mutual information has already taken into account the occurrence frequencies and distributions of the segmentation units, it's more suitable to act as the criterion of detecting words coherence level. From the sorting result, I used to select those text segments with two or more occurrences as candidates of new features. But the occurrence number doesn't tell the coherence level of a segment's constituents, and neither is it a good metric to decide how long a text segment extends. Some of my experiments prove that using mutual information is better than using conditional probability or using frequency or distribution alone. Also due to the fact that mutual information values are compared within a document, there seems no need to normalize the values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy and Mutual Information",
"sec_num": "3.2"
},
{
"text": "After calculating the mutual information values of all consecutive segmentation-unit pairs, I use the following three steps to identify new features in the document:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm of Feature Identification",
"sec_num": "4"
},
{
"text": "(1) select those coherent token pairs t i and ti+1 whose mutual information values are above a threshold; the selected pairs are used as \"seed\" for extension in the followed step; (2) scan those tokens following the pair [ ti and ti+i ] to determine its right boundary of the new feature based on mutual information values of a segment and its extensions. Similarly scan the pair's leftward context to determine its left boundary; (3) assign a category (proper name, unknown word, or document term) for each of the sifted candidate patterns, and also compute a confidence value of the pattern as the assigned category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm of Feature Identification",
"sec_num": "4"
},
{
"text": "To determine what adjacent segmentation-unit pairs can be part of a new feature, this paper uses a relative number of all mutual information values as the threshold. Using a larger threshold will reduce the set of the candidates, leading to a relatively high accuracy rate but a low recall rate; while using a smaller threshold leads to low accuracy rate and high recall rate. As a trade off, I assign the threshold as a relative percentage of the maximum of all mutual information values, i.e. Threshold = k * MAX (all mutual information values) In experiment, 0.6 is used as the coefficient k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of candidate feature pairs",
"sec_num": "4.1"
},
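{
"text": "A minimal sketch of this seed-pair selection (step (1) above); pair_mi is assumed to map each adjacent segmentation-unit pair to its mutual information value.

# Illustrative sketch of step (1): select seed pairs whose mutual information
# is at least k times the maximum MI value in the document (k = 0.6 here,
# following the value used in the paper's experiments).
def select_seed_pairs(pair_mi, k=0.6):
    if not pair_mi:
        return []
    threshold = k * max(pair_mi.values())
    return [pair for pair, value in pair_mi.items() if value >= threshold]
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of candidate feature pairs",
"sec_num": "4.1"
},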
{
"text": "While selecting possible candidate features from bigram pairs, some morphological cues can be also used. For example, in the above-mentioned sample document, \"VP WA\" has a high mutual information value, due to the fact that \"Z\" always precedes \"WO\" in this document. From a common dictionary and a simple part-of-speech tagging engine, it's easy to know that \"V\" is almost definitely a preposition when used before a noun and forms a prepositional phrase, and thus such patterns could be eliminated to reduce noises for later steps and improve accuracy rate. Patterns with punctuation marks are also eliminated from the candidate set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of candidate feature pairs",
"sec_num": "4.1"
},
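{
"text": "A minimal sketch of this filtering step; the part-of-speech lookup is a hypothetical stand-in for the paper's dictionary and tagging engine, not part of the original system.

# Illustrative sketch: discard candidate pairs that contain punctuation or that
# start with a preposition, as described above. `pos_of` is a hypothetical
# part-of-speech lookup (dictionary plus simple tagger).
PUNCTUATION = set('\uff0c\u3002\u3001\u201c\u201d\uff1a\uff1b\uff01\uff1f\uff08\uff09,.;:!?()')

def keep_pair(left_unit, right_unit, pos_of):
    if left_unit in PUNCTUATION or right_unit in PUNCTUATION:
        return False
    if pos_of(left_unit) == 'preposition':   # e.g. a preposition before a noun
        return False
    return True
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of candidate feature pairs",
"sec_num": "4.1"
},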
{
"text": "To identify those possible new features consisting more than two segmentation units, I use the candidate pairs obtained from previous step as \"seeds\", and then extend the seed to both right and left. By comparing the mutual information values before and after an extension, the algorithm determines one of these three actions: (1) to stop extension, i.e., only adopt the original short pattern as a candidate feature; (2) to adopt both patterns as candidate features; or (3) to adopt the extended pattern and eliminate the original shorter pattern.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determination of the boundaries of a candidate feature",
"sec_num": "4.2"
},
{
"text": "Extensions are tried in both directions to determine both the right and the left boundaries of a possible candidate new feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determination of the boundaries of a candidate feature",
"sec_num": "4.2"
},
{
"text": "For the selected pair pattern \" ti ti+1 \", following methods are applied to determine its right boundary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determination of the boundaries of a candidate feature",
"sec_num": "4.2"
},
{
"text": "\u2022 If all followed tokens has only one occurrence, the pattern will not be extended, and keep itself as a candidate of new feature. In the example \"'WA\" in Table 1 , there are three possible extended patterns, namely \"SPA/-47-X/\", \"X/A/ifg-/\" and \"M/A/A/\"; all of them appears only one time in the document, therefore only accept \"SPA\" as a candidate feature, and neglect any of the extended three patterns. The rightward extension of this pattern stops as a result;",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Determination of the boundaries of a candidate feature",
"sec_num": "4.2"
},
{
"text": "\u2022 If a pattern has only one followed tokens, then eliminate the original shorter pattern and extend it to the extended longer pattern. In the example \"WM/ I I /\", all the followed tokens are \"M\", therefore take \"OWN/ I I PM\" as a candidate feature and at the same time eliminate \"MI/ I I \"; \u2022 If some of the extended patterns occur more than once, then compute mutual information values for all of them. Then compare the original mutual information MAt i , ti,4 ) with that of the extended pattern Mi(titi+i ,ti+2 ) ; divide the comparison result into three types: a) For cases of MAtiti+i , ti+2 < AIM/F(4 ,ti+i ) , Ai 0.4 in actual experiment, which means all the extended patterns have a lower coherence level, then stop rightward extension for \"ti 4+1 \", and keep it self as a candidate new feature; b) For cases of MAtiti+1 ,4,2 > X2MAti , ti+i , k2 = 0.9 in actual experiment, which means the extended patterns have a strong coherence level, then eliminate the original pattern \"ti I ti+1 \", adopt the new longer pattern \"ti I ti+i I ti+2\" into candidate set, and continue to check its rightward patterns for possible extension; c) For cases of a intermediate mutual information of the extended pattern, i.e., the value of MAtiti+i , 4+2 ) falls between the range of X 1 MAti , t and A2 MI(ti ,ti+1 ) , then keep the original pattern \"ti / ti+i \" in the candidate set, adopt the new longer pattern \"t i / ti+1 / ti+2 \", but stop checking any possible rightward extension for \"ti / ti+1 I ti+2\". Use the similar way to find the pattern's left boundary. As an additional syntactic rule, punctuation marks and some kinds of empty words like particles and prepositionals also stop the extension. For example, in the above example in Table 2 , \"0/a/0/\" is followed by a quotation mark, thus \"*/)171/01\" is taken as a new feature; Similarly, \"ow* I I I \u00a7.\" is followed by a full stop punctuation mark in the 7th paragraph, which gives a strong clue to stop extension for its rightward boundary, although in other sentences it's following by various kinds of words.",
"cite_spans": [],
"ref_spans": [
{
"start": 1735,
"end": 1742,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Determination of the boundaries of a candidate feature",
"sec_num": "4.2"
},
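{
"text": "The boundary-determination rules above can be sketched as the following recursive procedure (illustrative only, not the paper's code); followers, freq, and mi are assumed callables over the document statistics, and leftward extension would mirror this.

# Illustrative sketch of the rightward extension rules in Section 4.2.
# followers(p) -> list of units observed immediately after pattern p;
# freq(p)      -> occurrence frequency of pattern p in the document;
# mi(p, u)     -> mutual information between pattern p and the single unit u.
def extend_right(pattern, base_mi, followers, freq, mi,
                 lambda1=0.4, lambda2=0.9):
    nexts = followers(pattern)
    # Every extended pattern occurs only once: keep the pattern, stop extending.
    if all(freq(pattern + (u,)) == 1 for u in nexts):
        return {pattern}
    # Only one possible continuation: drop the shorter pattern, keep extending.
    if len(set(nexts)) == 1:
        u = nexts[0]
        return extend_right(pattern + (u,), mi(pattern, u),
                            followers, freq, mi, lambda1, lambda2)
    candidates = {pattern}
    for u in set(nexts):
        longer = pattern + (u,)
        if freq(longer) < 2:
            continue
        ext_mi = mi(pattern, u)
        if ext_mi > lambda2 * base_mi:        # case b): strong coherence
            candidates.discard(pattern)
            candidates |= extend_right(longer, ext_mi,
                                       followers, freq, mi, lambda1, lambda2)
        elif ext_mi >= lambda1 * base_mi:     # case c): intermediate, keep both
            candidates.add(longer)
        # case a): weak coherence, this extension is dropped
    return candidates
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determination of the boundaries of a candidate feature",
"sec_num": "4.2"
},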
{
"text": "All candidate features identified from the previous steps are divided into three categories: proper names (person, place, organization, abbreviations, trade marks, etc.), unknown words, and document terms. The following rule-based heuristic clues are used to determine the categories that a feature belongs to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
{
"text": "\u2022 Whether the feature is made up of lone characters or it is made up of words: A feature that's made up of lone characters are often person name, place name, abbreviation, or an unknown word; and a feature of words are often organization names or document terms; \u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
{
"text": "Whether the feature has a proper name suffix: Proper name suffixes, such as place name suffix, organization suffix, and personal titles, are already collected in our linguistic resources. These suffixes can help to determine the category of a new feature. In formal texts, most proper names occur together with such a strong clue for at least one time in the whole document: a person name often occurs with his/her title like \"President XYZ\" and \"Professor XYZ\", an organization's full name is used for the first time before using its partial names like \"EquEmi llEjjai5-micw .L;r76\" (Si-Chuan Fu-Yi Electrics Company) and \"Nj I W4\" (Si-Chuan Fu-Yi);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
{
"text": "\u2022 Commonly used character tables of proper names:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
{
"text": "We've collected commonly used characters for Chinese person names, Chinese place names, and transliterated foreign person and place names. These tables are used to check features of lone characters. For example, if the first character in a three-character feature is a Chinese family name and the other two characters are found in Chinese person name character table, then this is taken as a strong evidence to indicate that the feature is a Chinese person name;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
{
"text": "(HP-OBJ (HP-APP (CP (WHHP-1 (-NONE-*OP*)) (CP (IP (HP-SBJ (-HOME-*T*-1)) (UP (PP-DIR (P J) (HP (NH wm))) (UP (UE 40 OP-OBJ (ADJP (JJ)) r (HP (NH fr ))))) (DEC fr) ) ) (ADJP (JJ j)) (HP (NH ) ) (NP-PH (PU ) (NH Sig ) (PU ) ) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
{
"text": "\u2022 Clue words around the feature: Often there're some clue words to help to determine a feature's category. For person names, clue words can be human occupation names like \"-I2,\" (reporter); in other words, if such a clue word is found before or after a feature, this feature will be have a higher confidence of being a person name. Chinese organization names are often preceded with a place name that they're owned like in the previous example \"lEifif hr ialtjjjae4Eglal q\", thus place names can act as clue words for organization features. For the feature of abbreviations, I now only focus on those of organization names. In Chinese, most of the shortened forms are either a part of a full name, like J\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
{
"text": "(Si-Chuan Fu-Yi) and its full form \"n) f irititjjnkbyEggui\" (Si-Chuan Fu-Yi Electrics Company) , or an acronym made up of several characters selected from its composite words, like \"J f ivg\" (Chuan Fu Dian) for the same full name. The first kind of shortened form is somewhat like the procedure of a rightward pattern extension. For the second kind of shortened form, it's easy to retrieve the words that each character stands for, from the statistics obtained at the very initial step. Both these two kinds of shortened forms reference to their full form, and are stored as a canonical-variants group.",
"cite_spans": [
{
"start": 60,
"end": 94,
"text": "(Si-Chuan Fu-Yi Electrics Company)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
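{
"text": "For the second kind of shortened form, a minimal sketch of linking an acronym to its full form (illustrative only; it merely checks that each abbreviation character can be found, in order, in successive words of the full form).

# Illustrative sketch: check whether `abbrev` (a string of characters) can be an
# acronym of `full_form_words` (the word list of the full name), by matching
# each character to a distinct word while preserving word order.
def is_acronym_of(abbrev, full_form_words):
    i = 0
    for ch in abbrev:
        while i < len(full_form_words) and ch not in full_form_words[i]:
            i += 1
        if i == len(full_form_words):
            return False
        i += 1
    return True
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},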
{
"text": "Even after applying such clues, if there're still some features that are not determined with any categories, those made up of lone characters are assigned as unknown words, and those made up of words are assigned as document terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
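{
"text": "A condensed sketch of the rule-based category assignment in this subsection; the small resource sets and the confidence numbers below are illustrative placeholders, not the paper's actual tables or values.

# Illustrative sketch of Section 4.3: assign a category and a rough confidence
# to a candidate feature. The tiny sets stand in for the collected linguistic
# resources; the confidence numbers are placeholders.
SURNAMES = {'\u738b', '\u674e', '\u5f20', '\u5218', '\u9648'}
PERSON_NAME_CHARS = {'\u4f1f', '\u82b3', '\u5a1c', '\u5f3a', '\u654f'}
ORG_SUFFIXES = {'\u516c\u53f8', '\u5927\u5b66', '\u94f6\u884c', '\u7814\u7a76\u6240'}
PLACE_SUFFIXES = {'\u7701', '\u5e02', '\u53bf', '\u533a'}

def assign_category(units, made_of_lone_characters):
    last = units[-1]
    if last in ORG_SUFFIXES:
        return 'organization', 0.9
    if last in PLACE_SUFFIXES:
        return 'place', 0.8
    if (made_of_lone_characters and len(units) in (2, 3)
            and units[0] in SURNAMES
            and all(u in PERSON_NAME_CHARS for u in units[1:])):
        return 'person', 0.9
    # Default assignment described above: lone characters -> unknown word,
    # multi-word features -> document term.
    if made_of_lone_characters:
        return 'unknown_word', 0.5
    return 'document_term', 0.5
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},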
{
"text": "If there're too many unknown categories, the document might be of another domain. The candidate features are clustered for manual review; from the review result, domain-specific rules summed up to act as domain-specific knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
{
"text": "The steps of determining a feature's category also helps to determine its confidence. Strong clues that determine a feature's category are often used as strong confidence of the feature itself. Together with factors of the occurrence frequency and the trail of a feature's mutual information extension, each feature is assigned a confidence value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assignment of a category and confidence to the candidate features",
"sec_num": "4.3"
},
{
"text": "The algorithm can be evaluated in two aspects: its ability to collect candidate features, and the quality of categorizing these new features. To improve the accuracy and recall rate of collecting candidate feature is the main aim of introducing mutual information into the whole algorithm. I use UPenn Chinese treebank as a test corpus. The corpus contains about 180 thousands Chinese characters, or 99.7 thousands words. Among all words there are 9700 proper names, mainly names of persons, places and organizations. Following is a sample of UPenn treebank markup:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "Of all the features targeted by this paper, four types -namely, person names, place names, abbreviations and unknown words --roughly map to the \"words\" in UPenn treebank, while the other two types --document terms and organization names --roughly map to the \"phrases\" in UPenn treebank. For example, \"tiga/*/\" is an unknown word, \"tjggi),par is a document term in this algorithm's result; both of them match ideally with the word \"Ifiggm\" and the noun phrase \"tfg.rm 4ATI,, in the treebank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "While measuring the algorithm's precision rate, an identified feature is considered correct if it matches UPenn treebank's any bracket level. In this way the precision rate is 87%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
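{
"text": "A minimal sketch of this matching criterion (illustrative only); identified features and treebank brackets are assumed to be given as (start, end) token spans, with the bracket set containing all levels of the parse.

# Illustrative sketch: an identified feature counts as correct if its token span
# matches any bracket level of the treebank parse.
def precision(identified_spans, treebank_brackets):
    brackets = set(treebank_brackets)
    if not identified_spans:
        return 0.0
    correct = sum(1 for span in identified_spans if span in brackets)
    return correct / len(identified_spans)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},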
{
"text": "To measure the algorithm's recall rate, we must first decide what are \"correct features\", whereas the bracket level in UPenn treebank varies quite a lot it their length, and doesn't quite fit. If we limit the set of \"correct feature\" to be the treebank's innermost bracket level, the recall rate is above 90%. But such measure will only cover a small portion of the identified document terms; for example, \"V n\" is identified as a document term, but UPenn treebank tags \"A-vnft-vrimn\" as a whole phrase, instead of desired level Ifin\". To expand the \"correct features\" to noun-phrase level of UPenn treebank will reduce the algorithm's recall rate to only 42%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "Among the precisely identified features, the overall category tagging precision is above 60%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "To test the algorithm's adaptability to other domains, I selected several English documents. The recall of organization names is 86%, indicating that the algorithm's ability to collect candidate features keeps the same level as in the UPenn Chinese treebank experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "This paper uses entropy and mutual information to collect candidate features, and then combines with existing linguistic resources such as proper name suffixes and character tables. It proves to be more robust and efficient than the original methods of sentence-based feature identification. Also, because it works on the whole document, a clue at only one place will also take effects to other places; this is especially important in identifying abbreviations and coreferences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Since this method tries to use document statistics to find candidate features, it's not applicable for those features that only appear one time in the document, for example, the person name \"/3Z[j/d/\" in Table 1 . To produce an overall segmentation result for the whole document, conventional methods like joining lone characters based on person name character table must also be applied. Simple syntactic rules can also be applied, e.g., in a simple coordination structure like \"A B fE1C\", if one piece B is already confidently identified as a person name, then other pieces A and C are most possibly person names. To find a good strategy to make different engine-modules interoperate well is also a challenge.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Applying machine learning for high performance named-entity extraction",
"authors": [
{
"first": "S",
"middle": [],
"last": "Baluja",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sukthankar",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. Pacific Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baluja S, Mittal V, Sukthankar R. 1999. Applying machine learning for high performance named-entity extraction. Proc. Pacific Association for Computational Linguistics. 1999.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Mutual Information Relevance Networks: Functional Genomic Clustering Using Pairewise Entropy Measurements",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Butte",
"suffix": ""
},
{
"first": "I",
"middle": [
"S"
],
"last": "Kohane",
"suffix": ""
}
],
"year": 2000,
"venue": "Pacific Symposium on Biocomputing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Butte AJ, Kohane IS. 2000. Mutual Information Relevance Networks: Functional Genomic Clustering Using Pairewise Entropy Measurements. Pacific Symposium on Biocomputing (PSB 2000).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Total Solution for Unknown Words Identification in Automatic Segmentation",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Xiaohe",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Language Applications",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Xiaohe. 1999. A Total Solution for Unknown Words Identification in Automatic Segmentation. Journal of Language Applications, 1999(3).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An Approach to Determine the Boundary of Chinese de-phrase",
"authors": [
{
"first": "Guo",
"middle": [],
"last": "Zhili",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Chunfa",
"suffix": ""
},
{
"first": "Huang",
"middle": [],
"last": "Changning",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the International Conference on Chinese Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guo Zhili, Yuan Chunfa, Huang Changning. 1996. An Approach to Determine the Boundary of Chinese de-phrase. Proceedings of the International Conference on Chinese Computing (ICCC-1996).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Inverse Name Frequency Model and Rules Based Chinese Name Identifying",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Luo",
"middle": [],
"last": "Zhensheng",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the sixth Joint Symposium on Computational Linguistics (JSCL-2001)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji Heng, Luo Zhensheng. 2001. Inverse Name Frequency Model and Rules Based Chinese Name Identifying. Proceedings of the sixth Joint Symposium on Computational Linguistics (JSCL-2001).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic Identification of Chinese Person Names",
"authors": [
{
"first": "Sun",
"middle": [],
"last": "Maosong",
"suffix": ""
},
{
"first": "Shen",
"middle": [],
"last": "Dayang",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of Chinese Information Processing Society",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sun Maosong, Shen Dayang. 1995. Automatic Identification of Chinese Person Names. Journal of Chinese Information Processing Society, 1995(2).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Study on the Automatic Identification of Chinese Place Names",
"authors": [
{
"first": "Tan",
"middle": [],
"last": "Hongye",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 5d Joint Symposium on Computational Linguistics (JSCL-1999)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tan Hongye. 1999. Study on the Automatic Identification of Chinese Place Names. Proceedings of the 5d Joint Symposium on Computational Linguistics (JSCL-1999).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Identification of Chinese Organization names",
"authors": [
{
"first": "Zhang",
"middle": [],
"last": "Xiaoheng",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of Chinese Information Processing Society",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang Xiaoheng. 1997. Identification of Chinese Organization names. Journal of Chinese Information Processing Society, 1997(4).",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": ""
}
}
}
}