|
{ |
|
"paper_id": "Y06-1008", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:34:00.724765Z" |
|
}, |
|
"title": "An Information Retrieval Model Based On Word Concept", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "wuchen@mail.ioa.ac.cn" |
|
}, |
|
{ |
|
"first": "Quan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chinese Academy of Sciences", |
|
"location": { |
|
"postCode": "100080", |
|
"settlement": "Beijing", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xiangfeng", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chinese Academy of Sciences", |
|
"location": { |
|
"postCode": "100080", |
|
"settlement": "Beijing", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Traditional approaches for information retrieval from texts depend on the term frequency. A shortcoming of these schemes, which consider only occurrences of the terms in a document, is that they have some limitations on extracting semantically exact indexes that represent the semantic content of a document. However, one word can always represent more than one meaning. The word sense ambiguities will also affect the system behavior. To address this issue, we proposed a brand new strategy-a concept extracting strategy to extract the concept of the word and to determine the semantic importance of the concepts in the sentences via analyzing the conceptual structures of the sentences. In this approach, a conceptual vector space model using auto-threshold detection is proposed to process the concepts, and a cluster searching model is also designed. This autothreshold detection method can help the model to obtain the optimal settings of retrieval parameters automatically. An experiment on the TREC6 collection shows that the proposed method outperforms the other two information retrieval (IR) methods based on term frequency (TF), especially for the lower-ranked documents", |
|
"pdf_parse": { |
|
"paper_id": "Y06-1008", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Traditional approaches for information retrieval from texts depend on the term frequency. A shortcoming of these schemes, which consider only occurrences of the terms in a document, is that they have some limitations on extracting semantically exact indexes that represent the semantic content of a document. However, one word can always represent more than one meaning. The word sense ambiguities will also affect the system behavior. To address this issue, we proposed a brand new strategy-a concept extracting strategy to extract the concept of the word and to determine the semantic importance of the concepts in the sentences via analyzing the conceptual structures of the sentences. In this approach, a conceptual vector space model using auto-threshold detection is proposed to process the concepts, and a cluster searching model is also designed. This autothreshold detection method can help the model to obtain the optimal settings of retrieval parameters automatically. An experiment on the TREC6 collection shows that the proposed method outperforms the other two information retrieval (IR) methods based on term frequency (TF), especially for the lower-ranked documents", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Searching in archives of documents becomes increasingly frequent for most of people. So, how to provide useful and efficient IR (Information Retrieval) tools becomes more and more important. Since 1968, when the first formal model for IR [1] comes into being, a number of IR models have been developed, such as vector space, probabilistic, fuzzy, logical, inference, language and so on [2] [3] [4] [5] [6] [7] [8] [9] [10] .While considering all the former methods, we can find most of the methods are based on occurrences of terms in a document (or TF: Term Frequency) and seldom on the content of the document. These algorithms analyze only term occurrences and do not attempt to resolve the meaning of the terms. As we all know, the meaning of words is very helpful to IR. If we disregard the context and dispose the words separately, the performance of IR system will be lowered for the ubiquitous existence of word sense ambiguity. A word may have a lot of meanings. A particular meaning can also be represented by more than one word. Therefore whether we can determine the meaning of the word in a document will affect the accuracy of an IR system. Meanwhile, many studies [11] [12] [13] [14] [15] have shown that people understand things by comprehending the concepts represented by the things. The language works in the same way [16] . Consequently, some research teams began to investigate the language conceptual space and the expression of the space using symbolic system. The main part of the IR method we present is to discuss how to draw the conceptual expressions of a word and a sentence based on a symbolic system [16] [17] [18] [19] . We consider the advantages of case grammar [20] , transformational generative grammar [21] [22] and Wordnet [23] fully, and try to form a strategy to extract the conceptual expressions, and then, process the information in a semantic way based on the conceptual expression. The key rudder of Section 2 is to discuss how to use the concept symbols and the sentence category expressions to represent the words and the sentences via some analyzing strategies and knowledge bases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 241, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 389, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 393, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 397, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 401, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 405, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 409, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 413, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 417, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 422, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1179, |
|
"end": 1183, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1184, |
|
"end": 1188, |
|
"text": "[12]", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1189, |
|
"end": 1193, |
|
"text": "[13]", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1194, |
|
"end": 1198, |
|
"text": "[14]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1199, |
|
"end": 1203, |
|
"text": "[15]", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1337, |
|
"end": 1341, |
|
"text": "[16]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1631, |
|
"end": 1635, |
|
"text": "[16]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1636, |
|
"end": 1640, |
|
"text": "[17]", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1641, |
|
"end": 1645, |
|
"text": "[18]", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1646, |
|
"end": 1650, |
|
"text": "[19]", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1696, |
|
"end": 1700, |
|
"text": "[20]", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1739, |
|
"end": 1743, |
|
"text": "[21]", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1744, |
|
"end": 1748, |
|
"text": "[22]", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1761, |
|
"end": 1765, |
|
"text": "[23]", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Owing to the fact that a simple search usually retrieves a large collection of documents, clustering becomes an efficient tool. In fact, clustering methods [24] [25] have been intensively studied in information retrieval for textual documents since 1990s. All these clustering methods need users to evaluate, more or less, the thresholds (e.g.: the value k in k-means). The values of thresholds will affect the results of clustering to some extend. Furthermore, the document vectors may always be of large dimension and sparse. So a fixed-threshold is certain to be inappropriate in some situations. Consequently, an autothreshold detection clustering method is proposed in the paper. This clustering method uses the simulated curve of the document distances to find the thresholds. The processing object of this clustering method is the concept instead of the word.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 160, |
|
"text": "[24]", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 165, |
|
"text": "[25]", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to do cluster searching, a searching method is proposed. The method here uses the dispersion among individual maximum likelihood, partial maximum likelihood and global maximum likelihood to measure the appearance probability of queries, then to fulfill the task. This method will be more accurate. Section 3 will discuss the clustering method which is used to generate clustering of the documents, and the searching method which is used to measure the document query similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Section 4 compares the experiments using the proposed methodology and the traditional indexing schemes. The conclusions are given in Section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this phase, we transfer the words and sentences to their conceptual forms, through three knowledge bases which we have constructed. We also proposed a good processing strategy which extracts the word concept via the knowledge bases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WORD CONCEPT EXTRACTING STRATEGY", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the studies of using the conceptual expressions to express the words and the sentences, a conceptual language model has been introduced, which involves the new concepts presented below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Theoretical Foundations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "One important concept is semantic chunk [2] which is a semantic unit between word and sentence. But it is different from phrase or other traditional chunks. It is a semantic unit expressing a comprehensive concept. We classify the semantic chuck into two types: the main chunk and the supplementary chunk. The main chunk contains the necessary parts of a sentence from the view of conceptual association. It always describes the object and its actions. The supplementary chunk provides the background knowledge of a sentence, such as time, place etc. Semantic chunk will be meaningful when it interacts with the sentence category [16] . Sentence category is another concept. It is composed of a set of symbolic expressions which stand for the categories of the sentences from the view of the semantics. The expression is named sentence category expression. Each expression contains the expressions of chunks and some conjunctive symbols which are used to describe the relationship among chunks. These expressions are designed in advance and able to describe not only the meaning but also the structure of the sentence. Huang concluded 57 types of primitive sentence category expressions and 57*56 compound ones [16] [18] . As a result, not only Chinese sentences but also English sentences can be expressed by one of the 57 primitive forms or their compounds.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 43, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 634, |
|
"text": "[16]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1211, |
|
"end": 1215, |
|
"text": "[16]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1216, |
|
"end": 1220, |
|
"text": "[18]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Theoretical Foundations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The processing involves three knowledge bases: One is called WCK (Word Concept Knowledge base), the second is called SCK (Sentence Category Knowledge base), and the last is called SRK (Scheduler Rule Knowledge base). WCK stores the relationship between the word and the concept as well as the important features of the sentence if the word in the sentence can be interpreted by the concept it records. The main structure of WCK is shown in Table 1 . Record the word itself, such as \"prosecute\", \"her\", \"drink\" etc.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 440, |
|
"end": 447, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Knowledge Bases", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The number of concepts which the word can be interpreted, such as 1/2, 2/2\u2026", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relative Concept Number (RCN)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One concept expression of the word, such as \"va5a\" (corresponding to \"prosecute\"), \"p4003/jx621\" (\"her\"), \"v62221a\" (\"drink\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concept Expression (CE)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Sentence category expression which is determined by the content of CE, such as \"T3R011*322\", one of the 57*56 types of forms in the SCK.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Category Expression (SCE)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Describe the characters of chunks decided by the SCE value, such as \"@S: TB: pea56\" which interprets the characters of the main chunk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characters of Chunks (CC)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "WCK is the key knowledgebase. It provides the information about which sentence category expressions should be hypothesized and which characters can be used to validate the hypothesis according to the words the computer reads.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characters of Chunks (CC)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A set of rules have been carried out to tell the researcher how to construct the WCK using concept symbol system. A Chinese WCK containing 50,000 vocabularies have been made by us since 2000.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characters of Chunks (CC)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "SCK is a standard sentence category database. It contains 57 sentence category expressions which we have introduced before. Each sentence in the nature language can find its counterpart in the database.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characters of Chunks (CC)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "SRK tells the computer which sentence category expression should be hypothesized first, which one second, and so on, according to the words in the sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characters of Chunks (CC)", |
|
"sec_num": null |
|
}, |
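{

"text": "As a rough illustration, a WCK record from Table 1 can be pictured as a small structure in code. The following Python sketch is only illustrative: the field values for \"prosecute\" are assumptions echoing the examples above, not actual entries of the knowledge base.\n\n# Hypothetical in-memory form of one WCK record (fields follow Table 1).\n# The concrete values are illustrative assumptions, not real WCK entries.\nwck_record = {\n    'Word': 'prosecute',   # the word itself\n    'RCN': '1/2',          # relative concept number: 1st of 2 candidate concepts\n    'CE': 'va5a',          # one concept expression of the word\n    'SCE': 'T3R011*322',   # sentence category expression implied by this CE\n    'CC': '@S: TB: pea56', # characters of the chunks under this SCE\n}\n\ndef lookup_concepts(wck, word):\n    # Every candidate record for a word; several hits signal ambiguity.\n    return [r for r in wck if r['Word'] == word]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Characters of Chunks (CC)",

"sec_num": null

},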
|
{ |
|
"text": "The processing strategy is based on a conceptual language model, which can be concluded as the following expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Strategy", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": ") ,..., , ( ) ,..., , ( 2 1 2 1 n n WC WC WC g SCh SCh SCh SCh f SCE = =", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Strategy", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where, SCE is the sentence category and SCh is the semantic chunk. WC refers to the word concept. Word concept is the conceptual expression of a word sense. It is expressed by a meaningful character string.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Strategy", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "() f and () g are two functions which formalize the semantic relationship between the variables. This model considers the conceptual consistency of the sentence category with the semantic chunks and the word concepts in the sentence. The sentence category is determined by the semantic chunks in the sentence. The semantic chunk is determined by the word concepts in and around the semantic chunk. Meanwhile, the sentence category restricts the word concepts in the sentence in return.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Strategy", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Processing strategy focuses mainly on the processing logic which tells computer how to get the sentence category expression and word concepts through some specific procedures according to the conceptual language model. The diagram of processing strategy is shown as Fig.1 . The processing strategy can be divided into four phases. The first phase is called pretreatment. The actual assignments in this phase depend on the language it processes. If English is processed, we will do", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 271, |
|
"text": "Fig.1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Processing Strategy", |
|
"sec_num": "2.3" |
|
}, |
|
|
{ |
|
"text": "Each sentence in a document stemming, phrase extracting etc. If Chinese is processed, word segmentation will be performed. In contrast to the traditional word segmentation, every possible segmentation instance will be considered in our processing strategy, which selects the correct segmentation using hypothesis and decision of sentence category. Perception of chunks and hypothesis of sentence category expression is the second phase. In this phase, the chunks in the sentence and their corresponding possible sentence category expression will be hypothesized according to the words in the sentence and their concepts recorded in the \"CE\" of the WCK. The possible sentence category expression is stored in \"SEC\" of the WCK.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence category expression", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Decision of sentence category is the third phase. In this phase, the hypothesis made in the second phase will be checked according to the \"CC\" in the WCK. The right sentence category expression will be proved and obtained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence category expression", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Composition analysis of chunks is the last phase. Through this phase, we can get all components of the chunks. Components are expressed by the exact word concepts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence category expression", |
|
"sec_num": null |
|
}, |
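{

"text": "To make the control flow of the four phases concrete, the following Python sketch strings them together. All helpers here (pretreat, the candidates and order methods, matches_chunk_characters, resolve_word_concepts) are hypothetical names standing in for the real analyzers over the WCK, SCK and SRK; the sketch only shows the hypothesis-then-decision loop, not the actual implementation.\n\ndef extract_concepts(sentence, wck, sck, srk):\n    # Phase 1: pretreatment (stemming/phrase extraction for English,\n    # word segmentation for Chinese) -- assumed done by pretreat().\n    words = pretreat(sentence)\n    # Phase 2: hypothesize chunks and sentence category expressions from\n    # the CE/SCE columns of the WCK, ordered by the SRK scheduling rules.\n    hypotheses = srk.order(sck.candidates(words, wck))\n    # Phase 3: decide the sentence category by checking each hypothesis\n    # against the chunk characters (CC) recorded in the WCK.\n    sce = next(h for h in hypotheses if h.matches_chunk_characters(wck))\n    # Phase 4: composition analysis -- express every chunk component by\n    # its exact word concept under the decided sentence category.\n    return sce, sce.resolve_word_concepts(words, wck)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Sentence category expression",

"sec_num": null

},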
|
{ |
|
"text": "After processing, the sentence category expression and the concept expression of each word in the sentence will be obtained. We use the sentence category expressions to analyze which concept is of high semantic importance and should server the clustering process and which one should be ignored. We take advantage of the concepts only in the clustering process and will disregard the sentence category expressions in the process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence category expression", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The following example explains the strategy and produces the results directly. Example: For six months~|| I || lived ||~without a job ~|| in New York. The sentence category expression of this sentence is: Cn1Cn2S03J, S03 means this sentence is a transposition state sentence due to \"live\" is some kind of state. The main trucks of this sentence are \"I\", \"lived in\" and \"New York\". \"lived in\" is the kernel of the main chunk, \"I\" and \"New York\" are two general main chunks. The supplementary chunks are \"For six months\" and \"without a job\", which act as conditions (CN). Hence we add signs \"CN1\" and \"CN2\" in front of the sentence category expression. The concept of \"I\" in this sentence is \"p4001\" which means a special personal pronoun. The concept of \"live\" is \"v65500214\" which means to reside in a place, while \"New York\" is \"fpj2*304/fpwj2*m1\" which means a city in US, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence category expression", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "HNCIR uses clustering methods to complete the subsequent process, with auto-threshold detecting. Apparently, HNCIR takes the word concept as the processing object instead of the words themselves, which differs from the traditional clustering-based IR System.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HNCIR SYS. BASED ON WORD CONCEPT", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In cluster generation, determination of the number of clusters is baffling, especially using concept. There are no experiential parameters available in the literature. Consequently, how to generate the clusters symmetrically becomes most important. In this paper, we propose a new approach to detect the clusters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "HNCIR uses Kullback-Liebler distance algorithm to measure the correlation between documents, or between a document and a cluster. The objective function follows: value indicates that the document (cluster) is highly correlated to another document (cluster). Naturally, these documents are to be clustered into the same cluster, which leads to 0 ) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "| | ) , ( | | ) , ( log | | ) , ( ) , ( c c w n d d w n d d w n c d KL i i d w i i \u2211 \u2208 = (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "$n(w_i, x)$ is the document-concept weight, which is a measure of the number of occurrences of concept $w_i$ in the document (cluster) $x$, and $|x|$ is the number of concepts in the document (cluster) $x$.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cluster Generation Methods",

"sec_num": "3.1"

},
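{

"text": "A minimal Python sketch of Eq. (1), assuming a document or cluster is represented as a dictionary mapping each concept to its occurrence count; skipping concepts unseen in c is a simplification of this sketch, not the paper's treatment of unseen terms.\n\nimport math\n\ndef kl_distance(d, c):\n    # Eq. (1): KL(d, c) over the concepts w_i occurring in document d.\n    # d and c map concept -> n(w_i, x); sum(x.values()) plays |x|.\n    size_d, size_c = sum(d.values()), sum(c.values())\n    total = 0.0\n    for w, n_wd in d.items():\n        p_d = n_wd / size_d\n        p_c = c.get(w, 0) / size_c\n        if p_c > 0:  # simplification: skip concepts unseen in c\n            total += p_d * math.log(p_d / p_c)\n    return total     # KL(d, d) == 0, as required above",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cluster Generation Methods",

"sec_num": "3.1"

},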
|
{ |
|
"text": "x is the number of concepts in the document (cluster) x. In order to generate the clusters automatically, we need to find a set of thresholds to differentiate the correlated documents from others and form a cluster. The main tasks in finding the thresholds include: select a document, calculate the KL distance between this document and each document in the collection, arrange the documents with increasing KL. Finally, uses a function, The generation process can be concluded to the following steps:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 1: selecting a document d from collection randomly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step Step 5: removing the clustered documents from collection S, namely S = S-P.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 6: checking whether the collection S is null, if it is null, go to Step8. Step 8: checking whether the number of clusters is within the range of an appropriate value settled in advance, if it is, go to Step17.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 9: selecting a cluster P from cluster collection SC.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "Steps 10-12: calculating the KL distance between the cluster P and each remaining cluster in SC based on Eq. (1), detecting a threshold from the curve of these distances, and merging into P the clusters whose distances are within the threshold (the cluster-level analogue of Steps 2-4).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cluster Generation Methods",

"sec_num": "3.1"

},
|
{ |
|
"text": "Step 13: removing the cluster P from cluster collection SC.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 14: checking whether there are no unprocessed clusters in SC, if there are, go to Step 16.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 15: selecting another cluster P which KL distance equal to Step 18: the process is completed. The whole process can be divided into 3 stages. The first stage is Steps 1-7. It initializes the cluster. The second stage is Steps 8 -16, which merges and rebuilds the clusters via iterated processing. The last stage is Step 18, which adjusts the results obtained from the second stage. The first stage is similar to the second stage. These two stages can be combined when coded into a program.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Generation Methods", |
|
"sec_num": "3.1" |
|
}, |
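{

"text": "A condensed Python sketch of the document-level stage (Steps 1-7) and of the threshold detection it relies on, reusing kl_distance from the sketch above. The paper fits a curve to the sorted distances, so cutting at the largest jump of the sorted KL curve is an assumed simplification of that fit, as is capping the threshold with MaxKL (Section 3.2) to tolerate isolated points; the cluster-level stage (Steps 8-16) repeats the same loop over clusters.\n\nimport random\n\ndef detect_threshold(distances):\n    # Sort the KL distances in increasing order and cut at the largest\n    # jump of the curve (a stand-in for the fitted-curve detection).\n    ds = sorted(distances)\n    if len(ds) < 2:\n        return ds[0] if ds else 0.0\n    gap, cut = max((ds[i + 1] - ds[i], i) for i in range(len(ds) - 1))\n    return ds[cut]\n\ndef generate_clusters(docs, max_kl=0.85):\n    # Steps 1-7: repeatedly seed a cluster with a random document and\n    # absorb every document within the auto-detected KL threshold.\n    remaining, clusters = list(docs), []\n    while remaining:                                        # Step 6\n        seed = random.choice(remaining)                     # Step 1\n        dists = [kl_distance(seed, d) for d in remaining]   # Step 2\n        t = min(max(detect_threshold(dists), 0.0), max_kl)  # Steps 3-4\n        clusters.append([d for d, kl in zip(remaining, dists) if kl <= t])\n        remaining = [d for d, kl in zip(remaining, dists) if kl > t]  # Step 5\n    return clusters  # handed to the cluster-level stage (Steps 8-16)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cluster Generation Methods",

"sec_num": "3.1"

},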
|
{ |
|
"text": "A searching method is proposed to complete the cluster searching based on Dirichlet smoothing method. Our method uses the linear interpolation technique processing individual maximum likelihood, partial maximum likelihood and global maximum likelihood to rank the documents with respect to the given query. Due to this, a new language model is introduced. Different from the traditional collection model, this model treats the unseen words in an indirect and clustering-based way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Searching", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In the same way, HNCIR uses word concept as the processing object. However, it is hard to confirm the concept of each query exactly for separate queries always lack strong phraseological relationship between each other. Therefore, we use the proportion of each concept candidate of a word to weighting the probabilities caused by each candidate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Searching", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The target function of HNCIR searching method follows: There are many parameters in HNCIR that needs to be set. We set these parameters according to the performance of HNCIR in our experiments. We selected the values which lead to the best performance of system as their optimum values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Searching", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u220f\u2211 \u2211 \u2208 + + \u00d7 = Q q n i P s s jn i jn jn j i Dir j d C P P P w P d w n w q P Q d P \u00b5 \u00b5 | | ) | ( ) | ( ) , ( ) , ( ) | ( where ) , ( i jn d w n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Searching", |
|
"sec_num": "3.2" |
|
}, |
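{

"text": "Under the reconstructed target function above, the scoring can be sketched in Python as follows. The per-candidate weights P(q, w) and the cluster-based collection probabilities (the sum over clusters of P(w|P_s)P(P_s|C)) are assumed to be precomputed and passed in; this is a sketch of the smoothing arithmetic, not the full HNCIR implementation.\n\nimport math\n\ndef score(query_concepts, doc, doc_len, coll_model, mu=200.0):\n    # log P_Dir(d | Q): for each query term, mix the likelihoods of all\n    # its concept candidates, Dirichlet-smoothing the document counts\n    # with the cluster-based collection model (mu = 200 as in Sec. 3.2).\n    log_p = 0.0\n    for candidates in query_concepts:      # one candidate list per term\n        term_p = 0.0\n        for w, p_qw in candidates:         # (concept, P(q, w)) pairs\n            smoothed = (doc.get(w, 0) + mu * coll_model.get(w, 0.0)) / (doc_len + mu)\n            term_p += p_qw * smoothed\n        log_p += math.log(term_p) if term_p > 0 else float('-inf')\n    return log_p",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cluster Searching",

"sec_num": "3.2"

},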
|
{ |
|
"text": "Firstly, the range of the value which is used to compare with the number of clusters in Step 8 of section 3.13 needs to be evaluated. The range of value is an order of magnitude and determined by the number of documents in the document collection. The algorithm is shown as follows. Secondly, the condition which is used to check whether the number of clusters is obviously decreased should be evaluated. The condition is whether the highest digit of a number varies. If it is, it is reasonable to believe that the number of clusters is obviously decreased.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Searching", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Thirdly, the value of threshold MaxKL used to detect the isolated point should be fixed. The actual test results indicated that the algorithm will have a good tolerance of isolated points if the value of MaxKL is within the rage form 0.8 to 0.9, and the change of the value in this range will affect marginally the final results. In our experimental system, we chose the value as 0.85.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Searching", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Lastly, the value of Dirichlet parameter \u00b5 should be evaluated. The results obtained by different queries are differently affected by the parameter \u00b5 . We tested a set of values of \u00b5 , and found that the highest performance of the system occurred when \u00b5 equal to 200, so we chose 200 as the standard value of \u00b5 in our experimental system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Searching", |
|
"sec_num": "3.2" |
|
}, |
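{

"text": "The first two checks above (the cluster-count range and the obvious-decrease test) can be sketched as follows. How the target cluster count is derived from the collection size is not fully specified above, so anchoring the range at the order of magnitude of the square root of the collection size is an assumption of this sketch, as is reading \"the highest digit varies\" as a drop in order of magnitude.\n\nimport math\n\ndef cluster_count_range(n_docs):\n    # Acceptable cluster counts span one order of magnitude, derived\n    # from the collection size (the sqrt anchor is an assumption).\n    k = max(int(math.sqrt(n_docs)), 1)\n    lo = 10 ** (len(str(k)) - 1)\n    return lo, lo * 10  # e.g. 164,811 docs -> k = 405 -> (100, 1000)\n\ndef obviously_decreased(before, after):\n    # Assumed reading: the cluster count fell to a lower order of magnitude.\n    return len(str(after)) < len(str(before))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cluster Searching",

"sec_num": "3.2"

},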
|
{ |
|
"text": "In order to make comparison, two other IR models are also tested in our experiment. One is Jelinek-Mercer Model based on Term Frequency, the other is Bayesian Model based on Term Frequency. The test results of these 3 systems are given in Table 3 . 34.93% The experiment shows the precision of HNCIR increases more evidently than the other two systems when the recall increases. The reason of getting the good results of HNCIR is the word sense ambiguities are well solved by translating the words into their concept forms via the strategy of word concept extracting. On the other side, the cluster generation method can divide the clusters equably and can well serve our cluster searching method. Otherwise, we also found that the query results of HNCIR involved some inappropriate documents that consist of a lot of complex sentences which can not be analyzed accurately. It is an accessional factor that depresses the system behavior. Another factor that will affect the results is how to tackle the new words in the documents and query, this problem have been solved by finding the words via concept extracting strategy and processing them as an integrated concept.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 246, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cluster Searching", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "One of the main difficulties in information retrieval is how to resolve the ambiguities of the meanings which the word represents. These ambiguities become more serious in the translation of the query terms in IR. In this paper, we use word extracting strategy to solve the word sense ambiguities and get the exact concept of the word. We also designed an auto threshold-detecting based clustering method to process the concepts. The reason of using the clustering method is that we are not able to extract the document conceptual structures well at present. Our team will focus on this issue in the following years. The test results show that the information retrieval based on concept can efficiently improve the IR precision in Chinese.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSIONS", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Automatic information organization and retrieval", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Salton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Salton, G.. Automatic information organization and retrieval. New York: McGraw-Hill. 1968.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Van Rijsbergen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "London: Butterworths", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Van Rijsbergen, C. J. Information retrieval (2nd ed.). London: Butterworths. 1979.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Introduction to modern information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Salton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mcgill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Salton, G., & McGill, M. J. Introduction to modern information retrieval. New York: McGraw-Hill. 1983", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Modern information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Baeza-Yates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Ribeiro-Neto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baeza-Yates, R., & Ribeiro-Neto, B. Modern information retrieval. Addison-Wesley. 1999", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Engineering a multi-purpose test collection for Web retrieval experiments. Information Processing and Management", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Bailey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Craswell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Hawking", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "39", |
|
"issue": "", |
|
"pages": "853--871", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bailey, P., Craswell, N., & Hawking, D. Engineering a multi-purpose test collection for Web retrieval experiments. Information Processing and Management, 39, 853-871. 2003.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Fuzzy sets in information retrieval and clustering analysis", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Miyamoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miyamoto, S. Fuzzy sets in information retrieval and clustering analysis. Kluwer Academic Press. 1990.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Soft computing in information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Crestani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "102--121", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Crestani, F., Pasi, G. (Eds.). Soft computing in information retrieval. Germany: Physica Verlag and Co, ISBN:3790812994, 102-121. 2000", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Finding out about", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Belew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Belew, R. Finding out about. Cambridge University Press. 2000", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Logical models in information retrieval: Introduction and overview. Information Processing and Management", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lalmas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "19--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lalmas, M. Logical models in information retrieval: Introduction and overview. Information Processing and Management, 34(1), 19-33. 1998", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Text information retrieval systems", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Meadow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Boyce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Kraft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meadow, C. T., Boyce, B. R., & Kraft, D. H. Text information retrieval systems (2nd ed.). San Diego, CA: Academic Press. 1999", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Identification of conceptualizations underlying nature lauguage", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schank R. Identification of conceptualizations underlying nature lauguage. In: Schank R, Colby K Eds. Computern Models of Thought and Language. San Francisco, CA: W H Freeman and Company. 1973", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Conceptual Information Processing", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schank R. Conceptual Information Processing. Amsterdam: North Holland. 1975", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The structure of episodes in money", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schank R. The structure of episodes in money. In: Bobrow D, Collins A eds. Representation and Underastanding. New York: Academic Press", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Scripts, Plans, Goals and Understanding", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Abelson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schank R, Abelson R. 1977. Scripts, Plans, Goals and Understanding. Hillsdale, NJ: Erlbaum. 1977", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "HNC (Hierarchical Network Concept) Theory", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Huang Zengyang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "HUANG Zengyang. HNC (Hierarchical Network Concept) Theory. Beijing: Tsinghua University Press. 1998", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Mathematics and physics symbol system of language in language concept space", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Huang Zengyang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "HUANG Zengyang. Mathematics and physics symbol system of language in language concept space. Beijing: Ocean Press. 2004", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Guide of HNC (Hierarchical Network Concept) Theory", |
|
"authors": [ |
|
{ |
|
"first": "Miao", |
|
"middle": [], |
|
"last": "Chuanjiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miao Chuanjiang. Guide of HNC (Hierarchical Network Concept) Theory. Beijing: Tsinghua University Press. 2005", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "HNC and NLP (Nature Language Processing). Wuhan: Wuhan Institute of Technology", |
|
"authors": [ |
|
{ |
|
"first": "Zhang", |
|
"middle": [], |
|
"last": "Quan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Guozheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang Quan, Xiao Guozheng. HNC and NLP (Nature Language Processing). Wuhan: Wuhan Institute of Technology. 2001", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The case for case", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fillmore C J. The case for case. In: Bach E, Harms R eds. Universals in Linguistic Theory. New York: Holt, Rinehart and Winston. 1968", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Syntactic Structures", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Chomsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1957, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chomsky N. 1957. Syntactic Structures. Hague: Mouton. 1957", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Aspects of the Theory of Syntax", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Chomsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1965, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chomsky N. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press 1965", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "WordNet: an electronic lexical database", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fellbaum, C. WordNet: an electronic lexical database. The MIT Press. 1998", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Lecture notes in computer science 1980", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Agosti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Crestani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Pasi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Alltheweb", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agosti, M., Crestani, F., & Pasi, G. Lectures on information retrieval. Lecture notes in computer science 1980. Alltheweb. (2004). Available: http://www.alltheweb.com. 2001", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Self-organizing maps", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kohonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kohonen, T. Self-organizing maps (2nd ed.). Germany: Springer-Verlag. 1997", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "The diagram of processing strategy", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "of correlation between a document (or a collection of documents: cluster) d and a document (cluster) c.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "the partial derivatives of equation(2)with respect to j c to 0, we will obtain m+1 equations. Solutions to these equations solve the function ) The basic idea of Cluster generation is as follows: (a) Selecting a document from a collection randomly, (b) measuring the distance between this document and other documents in the collection based on Eq. (1), (c) calculating the distance threshold, (d) marking off a cluster form the collection according to the calculated threshold, (e) removing the clustered documents from the collection, (f) repeating above steps till all the documents are processed or a stop condition is reached.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"text": "calculating the distance cj kl between document d and other documents j c in the collection S based on (1). A set of distance can be obtained: 1.2, then setting a document d-centered cluster P,", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"text": "selecting another document d which KL distance equal to repeat Steps 2-7.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"text": "between document i d and cluster k P . Each distance has been calculated in Step 17 of section 3Our experiments are made in a Chinese language circumstance. The test collections are chosen from TREC6. A 164,811 document collection including documents from both the People's Daily and the Xinhua News Agency was used.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF6": { |
|
"text": "of the documents in the document collection; range V is the value of range.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>column</td><td>comment</td></tr><tr><td>Word</td><td/></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "The Main Structure Of WCK", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>cluster collection SC, repeat Steps 10 -15. Step 16: } , , , , min{ 3 2 1 pk p p p pi kl kl kl kl kl \u2026 =</td><td>max{ kl</td><td>1 p</td><td>,</td><td>kl</td><td>p</td><td>2</td><td>,</td><td>kl</td><td>p</td><td>3</td><td>,</td><td>\u2026</td><td>,</td><td>kl</td><td>pm</td><td>}</td><td>in</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "checking whether the number of clusters is obviously decreased, if it is, repeat steps 8-16.Step 17: calculating every distance between each document d in the collection and the cluster Pj, let , if document d does not belong to Pi, remove document d to cluster Pi", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>Precision</td><td>Jelinek-Mercer</td><td>Bayesian</td><td>HNCIR</td></tr><tr><td>Recall</td><td/><td/><td/></tr><tr><td>0</td><td>0.6000</td><td>0.6000</td><td>0.6100</td></tr><tr><td>0.1</td><td>0.5416</td><td>0.5521</td><td>0.5760</td></tr><tr><td>0.2</td><td>0.4812</td><td>0.4256</td><td>0.5164</td></tr><tr><td>0.3</td><td>0.4028</td><td>0.4229</td><td>0.4821</td></tr><tr><td>0.4</td><td>0.3837</td><td>0.4042</td><td>0.4478</td></tr><tr><td>0.5</td><td>0.2586</td><td>0.2776</td><td>0.4010</td></tr><tr><td>0.6</td><td>0.2103</td><td>0.2521</td><td>0.3130</td></tr><tr><td>0.7</td><td>0.1227</td><td>0.1731</td><td>0.2245</td></tr><tr><td>0.8</td><td>0.0834</td><td>0.1237</td><td>0.1804</td></tr><tr><td>0.9</td><td>0.0491</td><td>0.0687</td><td>0.0610</td></tr><tr><td>1</td><td>0.0312</td><td>0.0310</td><td>0.0306</td></tr><tr><td>AvgPre</td><td>24.95%</td><td>26.61%</td><td/></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "The Test Results", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |