{
"paper_id": "Y07-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:46:44.942003Z"
},
"title": "Refinement of Document Clustering by Using NMF *",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Shinnou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ibaraki University",
"location": {
"addrLine": "4-12-1 Nakanarusawa",
"postCode": "316-8511",
"settlement": "Hitachi",
"region": "Ibaraki JAPAN"
}
},
"email": "shinnou@mx.ibaraki.ac.jp"
},
{
"first": "Minoru",
"middle": [],
"last": "Sasaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ibaraki University",
"location": {
"addrLine": "4-12-1 Nakanarusawa",
"postCode": "316-8511",
"settlement": "Hitachi",
"region": "Ibaraki JAPAN"
}
},
"email": "msasaki@mx.ibaraki.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we use non-negative matrix factorization (NMF) to refine document clustering results. NMF is a dimensionality reduction method that is effective for document clustering, because a term-document matrix is high-dimensional and sparse. The initial matrix of the NMF algorithm can be regarded as a clustering result; therefore, we can use NMF as a refinement method. First we perform min-max cut (Mcut), which is a powerful spectral clustering method, and then refine the result via NMF. Finally, we should obtain an accurate clustering result. However, NMF often fails to improve the given clustering result. To overcome this problem, we use the Mcut objective function to stop the iteration of NMF.",
"pdf_parse": {
"paper_id": "Y07-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we use non-negative matrix factorization (NMF) to refine document clustering results. NMF is a dimensionality reduction method that is effective for document clustering, because a term-document matrix is high-dimensional and sparse. The initial matrix of the NMF algorithm can be regarded as a clustering result; therefore, we can use NMF as a refinement method. First we perform min-max cut (Mcut), which is a powerful spectral clustering method, and then refine the result via NMF. Finally, we should obtain an accurate clustering result. However, NMF often fails to improve the given clustering result. To overcome this problem, we use the Mcut objective function to stop the iteration of NMF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we use non-negative matrix factorization (NMF) to improve the document clustering result generated by a powerful document clustering method. Using this strategy, we can obtain an accurate document clustering result. Document clustering is a task that divides a given document data set into a number of groups according to document similarity. It is a basic intelligent procedure and an important component of text-mining systems (Berry (2003)). Relevance feedback in information retrieval (IR), where retrieved documents are clustered, is a specific application that has been actively researched by Hearst et al. (1996), Leuski (2001), Zeng et al. (2001) and Kummamuru (2004). NMF is a dimensionality reduction method and an effective document clustering method, because a term-document matrix is high-dimensional and sparse (Xu et al. (2003)).",
"cite_spans": [
{
"start": 452,
"end": 464,
"text": "Berry (2003)",
"ref_id": "BIBREF1"
},
{
"start": 614,
"end": 634,
"text": "Hearst et al. (1996)",
"ref_id": "BIBREF7"
},
{
"start": 637,
"end": 650,
"text": "Leuski (2001)",
"ref_id": "BIBREF10"
},
{
"start": 653,
"end": 671,
"text": "Zeng et al. (2001)",
"ref_id": "BIBREF14"
},
{
"start": 676,
"end": 692,
"text": "Kummamuru (2004)",
"ref_id": "BIBREF8"
},
{
"start": 846,
"end": 862,
"text": "Xu et al. (2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Let X be an m × n term-document matrix, consisting of m rows (terms) and n columns (documents). If the number of clusters is k, NMF decomposes X into the matrices U and V^t as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "X = UV^t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "where U is m × k, V is n × k, and V^t is the transpose of V. The matrices U and V are non-negative. In NMF, each k-dimensional row vector of V corresponds to a document. An actual clustering procedure is usually performed using these reduced vectors. However, NMF does not need such a clustering procedure: the reduced vector expresses its cluster by itself, because each column axis of V represents the topic of a cluster. Furthermore, the matrices V and U are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "obtained by a simple iteration, from Lee (2000), where the initial matrices U_0 and V_0 are updated. Therefore, we can regard NMF as a refinement method for a given clustering result, because the matrix V represents a clustering result.",
"cite_spans": [
{
"start": 37,
"end": 47,
"text": "Lee (2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper, we use NMF to improve clustering results. Providing NMF with an accurate document clustering result, we can obtain an even more accurate result, because NMF is effective for document clustering. However, NMF often fails to improve the initial clustering result. The main reason for this is that the objective function of NMF does not properly represent the goodness of clustering. To overcome this problem, we use another objective function. After each iteration of NMF, the current clustering result is evaluated by that objective function. We first need the initial clustering result. To obtain this, we perform min-max cut (Mcut) proposed by Ding et al. (2001), which is a spectral clustering method. Mcut is a very powerful clustering method, and we can obtain an accurate clustering result by improving the clustering result generated through Mcut.",
"cite_spans": [
{
"start": 626,
"end": 632,
"text": "(Mcut)",
"ref_id": null
},
{
"start": 645,
"end": 663,
"text": "Ding et al. (2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In the experiment, we used 19 data sets provided via the CLUTO website. Our method improved the clustering results generated by Mcut. In addition, the accuracy of the obtained clustering result was higher than those of NMF, CLUTO and Mcut.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "NMF decomposes the m × n term-document matrix X into the m × k matrix U and the transpose of the n × k matrix V, from Xu et al. (2003), where k is the number of clusters:",
"cite_spans": [
{
"start": 119,
"end": 135,
"text": "Xu et al. (2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features of NMF",
"sec_num": "2.1."
},
{
"text": "X = UV^t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of NMF",
"sec_num": "2.1."
},
{
"text": "NMF attempts to find axes corresponding to the topics of the clusters, and represents the document vectors and term vectors as linear combinations of the found axes. NMF has the following three features: i. V and U are non-negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of NMF",
"sec_num": "2.1."
},
{
"text": "Each element of V and U represents the degree of relevance to the topic corresponding to that element's axis. It is therefore natural to assign a non-negative value to each element. SVD can also reduce dimensions, but, unlike with NMF, negative values appear. ii. The matrix V represents the clustering result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of NMF",
"sec_num": "2.1."
},
{
"text": "Dimensionality reduction translates high-dimensional data to lower-dimensional data. Therefore, we usually must perform actual clustering on the reduced data. However, NMF does not require this, because the matrix V represents the clustering result. The i-th document corresponds to the i-th row vector d_i of V, that is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of NMF",
"sec_num": "2.1."
},
{
"text": "d_i = (v_i1, v_i2, ..., v_ik). The cluster number is obtained from argmax_j v_ij.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of NMF",
"sec_num": "2.1."
},
{
"text": "iii. V and U do not need to be orthogonal matrices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of NMF",
"sec_num": "2.1."
},
{
"text": "LSI constructs an orthogonal space from the document space. In NMF, on the other hand, each axis in the reduced space corresponds to a topic, so these axes do not need to be orthogonal. As a result, NMF attempts to find the axis corresponding to the cluster of documents containing the same words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of NMF",
"sec_num": "2.1."
},
{
"text": "For the given term-document matrix X, we can obtain U and V by the following iteration, shown by Lee (2000) . ",
"cite_spans": [
{
"start": 97,
"end": 107,
"text": "Lee (2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMF algorithm",
"sec_num": "2.2."
},
{
"text": "u_ij ← u_ij (XV)_ij / (UV^t V)_ij (Eq.1), v_ij ← v_ij (X^t U)_ij / (VU^t U)_ij (Eq.2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMF algorithm",
"sec_num": "2.2."
},
{
"text": "Here, u_ij, v_ij and (X)_ij denote the (i, j)-th elements of the matrices U, V and X, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMF algorithm",
"sec_num": "2.2."
},
{
"text": "After each iteration, U must be normalized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMF algorithm",
"sec_num": "2.2."
},
{
"text": "u_ij ← u_ij / √(Σ_i u_ij^2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMF algorithm",
"sec_num": "2.2."
},
{
"text": "The iteration stops either at a fixed maximum number of iterations or according to the distance J between X and UV^t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMF algorithm",
"sec_num": "2.2."
},
{
"text": "J = ||X - UV^t|| (Eq.3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMF algorithm",
"sec_num": "2.2."
},
{
"text": "Here, J is the decomposition error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMF algorithm",
"sec_num": "2.2."
},
{
"text": "In general, the initial matrices U_0 and V_0 are constructed using random values. In this paper, we construct U_0 and V_0 from a clustering result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering result and initial matrices",
"sec_num": "2.3."
},
{
"text": "Specifically, if the i-th data element is clustered into the c-th cluster, the i-th row vector of V_0 is constructed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering result and initial matrices",
"sec_num": "2.3."
},
{
"text": "v_ij = 1.0 (j = c), v_ij = 0.1 (j ≠ c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering result and initial matrices",
"sec_num": "2.3."
},
{
"text": "Here, U_0 is constructed via XV_0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering result and initial matrices",
"sec_num": "2.3."
},
{
"text": "We can use NMF as a refinement method for a clustering result, because the initial matrix of NMF corresponds to a clustering result. However, NMF often fails to improve the given clustering result. This is because the objective function of NMF, that is, Eq.3, does not properly represent the goodness of clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem of the objective function of NMF",
"sec_num": "2.4"
},
{
"text": "To confirm this problem, we performed NMF using the document data set ``tr45'', which is part of the data set used in Section 5. The initial matrix was constructed using the clustering result obtained by Mcut. Figure 1 shows the results of this experiment. LINE-1 and LINE-2 in Figure 1 show the change in J at each iteration and the change in the clustering accuracy, respectively. From Figure 1 , we can confirm that a smaller J does not always mean more accurate clustering.",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 219,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 279,
"end": 287,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 389,
"end": 397,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Problem of the objective function of NMF",
"sec_num": "2.4"
},
{
"text": "To overcome this problem, we evaluated the current clustering result using another objective function after each iteration of NMF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem of the objective function of NMF",
"sec_num": "2.4"
},
{
"text": "Specifically, we used the objective function of Mcut. We calculated its value after each iteration of NMF. If the best value was not improved for three consecutive iterations, we stopped NMF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem of the objective function of NMF",
"sec_num": "2.4"
},
{
"text": "Next, we needed the initial clustering result. To obtain this, we used Mcut proposed by Ding et al. (2001), which is a type of spectral clustering. In spectral clustering, the data set is represented as a graph: each data point is a vertex, and if the similarity between data points A and B is non-zero, an edge between A and B is drawn with the similarity as its weight. From this graph, clustering corresponds to segmenting the graph into a number of subgraphs by cutting edges. A preferable cut is one in which the sum of the weights of the edges within each subgraph is large and the sum of the weights of the cut edges is small. To find the ideal cut, an objective function is used. The spectral clustering method finds the desired cut by using the fact that an optimum solution of the objective function corresponds to the solution of an eigenvalue problem. Various objective functions have been proposed; in this paper, we use the objective function of Mcut.",
"cite_spans": [
{
"start": 88,
"end": 106,
"text": "Ding et al. (2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "First, we define the similarity cut(A,B) between the subgraphs A and B as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "cut(A,B) = W(A,B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "The function W (A,B) is the sum of the weights of the edges between A and B. We define W(A) as W (A,A) . The objective function of Mcut is the following:",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 20,
"text": "(A,B)",
"ref_id": null
},
{
"start": 97,
"end": 102,
"text": "(A,A)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "Mcut = cut(A,B)/W(A) + cut(A,B)/W(B) (Eq.4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "The clustering task is to find A and B to minimize the above equation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "Note that the spectral clustering method divides the data set into two groups. If the number of clusters is larger than two, the above procedure is iterated recursively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "The minimization problem of Eq.4 is equivalent to the problem of finding the n-dimensional discrete vector y that minimizes the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "J_m = y^t (D - W) y / (y^t W y) (Eq.5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "where W is the similarity matrix of data, D = diag(We) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "e = (1, 1, ..., 1)^t. Each element in the vector y is a or -b, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "a = √(d_B / (d d_A)), b = √(d_A / (d d_B)), d_X = Σ_{i∈X} (D)_ii,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "and d = d_A + d_B. If the i-th element of the vector y is a (or -b), the i-th data element belongs to cluster A (or B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "We can solve Eq.5 by converting the discrete vector y to the continuous vector y. Finally, we can obtain an approximate solution to Eq.5 by solving the following eigenvalue problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "(I - D^{-1/2} W D^{-1/2}) z = λz (Eq.6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "We obtain the eigenvector z, that is, the Fiedler vector, corresponding to the second smallest eigenvalue by solving the eigenvalue problem represented by Eq.6. We can then obtain the solution y to Eq.5 from z = D^{1/2} y. By the sign of the i-th value of y, we can judge whether the i-th data element belongs to cluster A or B. Note that Eq.4 is the objective function when the number of clusters is two. The objective function used in NMF is the following general objective function for k clusters {G_i} (i = 1, ..., k):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "Mcut_K = cut(G_1, Ḡ_1)/W(G_1) + cut(G_2, Ḡ_2)/W(G_2) + ... + cut(G_k, Ḡ_k)/W(G_k) (Eq.7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "where Ḡ_i is the complement of G_i. The smaller Mcut_K is, the better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mcut",
"sec_num": "3."
},
{
"text": "In the experiment, we used the data sets provided via the CLUTO website http://glaros.dtc.umn.edu/gkhome/cluto/cluto/download. In total, 24 data sets are available. We used the data sets that had fewer than 5,000 data elements, which left the 19 data sets shown in Table 1 . In each data set, the document vectors are not normalized; we normalize them by TF-IDF. Table 2 shows the result. NMF-rfn in the table refers to our method: we obtained the initial clustering result by Mcut and then improved it by performing NMF. The NMF-rfn column in Table 2 shows the ratio of the value of Eq.7 obtained using our method to that obtained using Mcut. As shown in Table 2 , the value of Eq.7 for our method is always less than (or equal to) that of Mcut. This means that our method always improves the clustering result with respect to Eq.7. Next, we checked the accuracy of our method. Table 3 and Figure 2 show the results. The NMF, CLUTO 1 and Mcut columns in Table 3 show the accuracy of NMF, CLUTO and Mcut respectively, and the NMF-rfn column shows the accuracy of our method. Clustering accuracy is the most rigorous evaluation of a clustering result. However, accuracy is difficult to measure. First, all data must be labeled; fortunately, the data sets used satisfy this condition. Next, we must map each obtained cluster to a cluster label. This mapping is usually difficult. In this paper, we assigned labels to clusters so as to maximize the accuracy, using dynamic programming.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 362,
"end": 369,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 553,
"end": 560,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 663,
"end": 670,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 882,
"end": 889,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 894,
"end": 902,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 960,
"end": 967,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4."
},
{
"text": "The measure of similarity and the clustering method of CLUTO must also be examined. We can select these via the optional parameters of CLUTO. In our experiments, we ran CLUTO without any optional parameters, that is, with the default settings. In this case, CLUTO uses the cosine similarity measure and the k-way clustering method, which takes a top-down approach, dividing the data into two partitions and iterating this division until k partitions are obtained. In general, the k-way clustering method is more powerful than k-means for document clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4."
},
{
"text": "There were six data sets for which the accuracy was degraded by performing NMF after Mcut. However, for seven data sets the accuracy was improved by NMF, and for the remaining six data sets the accuracy was unchanged. Figure 2 shows that the average accuracies of CLUTO, Mcut and NMF-rfn were 58.21%, 61.82% and 63.22% respectively. That is, our method showed the best performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 219,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4."
},
{
"text": "The objective function value of the final clustering result is never degraded from the value obtained by Mcut. However, as shown in Table 2 , there are some data sets for which the clustering accuracy of NMF-rfn is worse than that of Mcut. This is because the objective function used does not represent the goodness of clustering in a precise sense. All objective functions suffer from the same problem. In particular, the objective function J in Eq.3 is not very good; in fact, another experiment confirmed that the Mcut objective function is better than J in Eq.3 for NMF.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Search for the optimum solution",
"sec_num": "5.1."
},
{
"text": "The clustering task has two parts: one is the objective function, and the other is the search method for the optimum solution of the objective function. Mcut-rfn uses Eq.7 as the objective function and combines the search methods of Mcut and NMF as its search method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search for the optimum solution",
"sec_num": "5.1."
},
{
"text": "Recent theoretical analysis shows the equivalence between spectral clustering and other clustering methods. For example, Dhillon et al. (2005) show that a search for an optimum solution via spectral clustering can be performed using the weighted kernel k-means. Additionally, Ding et al. (2005) show the equivalence between spectral clustering and NMF. By using these techniques, a search for an optimum solution may be constructed in a consistent manner, unlike with Mcut-rfn.",
"cite_spans": [
{
"start": 121,
"end": 142,
"text": "Dhillon et al. (2005)",
"ref_id": "BIBREF4"
},
{
"start": 276,
"end": 294,
"text": "Ding et al. (2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Search for the optimum solution",
"sec_num": "5.1."
},
{
"text": "However, even such a consistent method cannot avoid falling into a local optimum. It is therefore helpful to add a mechanism for jumping out of a local optimum. Our hybrid approach is an example of such a method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search for the optimum solution",
"sec_num": "5.1."
},
{
"text": "The ``local search'' proposed by Dhillon et al. (2002) is relevant to our approach. This method first obtains a solution by k-means, then improves it by the ``first variation'', and iterates these two steps alternately. Mcut-rfn first obtains a solution by Mcut and then improves it by NMF, but it does not iterate them, because the input of Mcut need not be a clustering solution. Using the weighted kernel k-means, we could take a ``ping-pong'' strategy like the local search.",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "Dhillon et al. (2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Search for the optimum solution",
"sec_num": "5.1."
},
{
"text": "In NMF, clustering accuracy depends on the initial matrices. This is because the local optimum solution obtained by NMF varies according to the initial value. Therefore, deciding what initial matrices should be used is a difficult problem, from Wild et al. (2004) .",
"cite_spans": [
{
"start": 245,
"end": 263,
"text": "Wild et al. (2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initial matrices and accuracy of NMF",
"sec_num": "5.2."
},
{
"text": "With respect to the objective function, the initial accuracy should be as high as possible. Thus, we took the approach of setting a high-accuracy result as the initial value. However, even if NMF starts from initial values with low accuracy, it can still reach a highly accurate result. For example, for the data sets ``k1a'' and ``tr11'' in our experiments, CLUTO was better than Mcut. Using the result of CLUTO as the initial value, the accuracy was not improved by NMF. On the other hand, starting from Mcut, the accuracy was improved by NMF, and the final accuracy was better than that of CLUTO. Finally, clustering is an NP-hard combinatorial optimization problem once the objective function is fixed, so it is impossible to find the optimal initial value. Thus, a clustering algorithm must take an approach that improves the solution gradually. In such a situation, our approach of setting a feasible solution as the initial value is practical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial matrices and accuracy of NMF",
"sec_num": "5.2."
},
{
"text": "The clustering task is a purely engineering problem once the data is translated into vectors. To obtain more accurate clustering, we should actively use knowledge about the data at the pre-translation stage. In the case of document clustering, we should remember that the data are documents. It may be important to ensure that meta-information, such as the place of publication, the author, and the aim of clustering, is incorporated into the clustering process or the vector-translation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future works for document clustering",
"sec_num": "5.3."
},
{
"text": "Clustering is unsupervised learning, so an effective way to raise accuracy is to assign supervised labels to data. Recently, semi-supervised clustering using user interaction has been actively researched by Basu et al. (2002) , Bilenko et al. (2004) and Xing et al. (2003) . Such semi-supervised clustering using meta-information shows promise.",
"cite_spans": [
{
"start": 215,
"end": 233,
"text": "Basu et al. (2002)",
"ref_id": "BIBREF0"
},
{
"start": 236,
"end": 257,
"text": "Bilenko et al. (2004)",
"ref_id": "BIBREF2"
},
{
"start": 262,
"end": 280,
"text": "Xing et al. (2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future works for document clustering",
"sec_num": "5.3."
},
{
"text": "In this paper, we have shown that NMF can be used to improve clustering results. To make this practical, we used another objective function and evaluated the current clustering result with it after each iteration of NMF. By performing Mcut to obtain the initial clustering result, we can obtain an accurate final clustering result. In the experiment, we used 19 data sets provided via the CLUTO website. Our method improved the clustering results obtained by Mcut. In addition, the accuracy of the obtained clustering result was higher than those of NMF, CLUTO and Mcut. In future work, we will research semi-supervised clustering using meta-information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "CLUTO is a very powerful clustering tool. It can be obtained from the following website: http://glaros.dtc.umn.edu/gkhome/views/cluto (version 2.1.2a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semi-supervised Clustering by Seeding",
"authors": [
{
"first": "S",
"middle": [],
"last": "Basu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICML-2002",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basu, S., A. Banerjee, and R. J. Mooney. 2002. Semi-supervised Clustering by Seeding. Proceedings of ICML-2002, pp.19-26.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Survey of Text Mining: Clustering, Classification, and Retrieval",
"authors": [
{
"first": "M",
"middle": [
"W"
],
"last": "Berry",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berry, M. W. 2003. Survey of Text Mining: Clustering, Classification, and Retrieval. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Integrating Constraints and Metric Learning in Semi-Supervised Clustering",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bilenko",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Basu",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ICML-2004",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bilenko, M., S. Basu and R. J. Mooney. 2004. Integrating Constraints and Metric Learning in Semi-Supervised Clustering. Proceedings of ICML-2004, pp.81-88.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Iterative Clustering of High Dimensional Text Data Augmented by Local Search",
"authors": [
{
"first": "I",
"middle": [
"S"
],
"last": "Dhillon",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kogan",
"suffix": ""
}
],
"year": 2002,
"venue": "The 2002 IEEE International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "131--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhillon, I. S., Y. Guan and J. Kogan. 2002. Iterative Clustering of High Dimensional Text Data Augmented by Local Search. The 2002 IEEE International Conference on Data Mining, 131-138.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Unified View of Kernel k-means, Spectral Clustering and Graph Cuts. The University of Texas at Austin, Department of Computer Sciences",
"authors": [
{
"first": "I",
"middle": [
"S"
],
"last": "Dhillon",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kulis",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhillon, I. S., Y. Guan and B. Kulis. 2005. A Unified View of Kernel k-means, Spectral Clustering and Graph Cuts. The University of Texas at Austin, Department of Computer Sciences. Technical Report TR-04-25.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering",
"authors": [
{
"first": "C",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "H",
"middle": [
"D"
],
"last": "Simon",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of SDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ding, C., X. He and H. D. Simon. 2005. On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering. Proceedings of SDM 2005.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Spectral Min-max Cut for Graph Partitioning and Data Clustering",
"authors": [
{
"first": "C",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zha",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Simon",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ding, C., X. He, H. Zha, M. Gu and H. Simon. 2001. Spectral Min-max Cut for Graph Partitioning and Data Clustering. Lawrence Berkeley National Lab. Tech. report 47848.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Reexamining the Cluster Hypothesis: Scatter/gather on Retrieval Results",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
},
{
"first": "J",
"middle": [
"O"
],
"last": "Pedersen",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of SIGIR-96",
"volume": "",
"issue": "",
"pages": "76--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hearst, M. A. and J. O. Pedersen. 1996. Reexamining the Cluster Hypothesis: Scatter/gather on Retrieval Results. Proceedings of SIGIR-96, pp.76-84.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Hierarchical Monothetic Document Clustering Algorithm for Summarization and Browsing Search Results",
"authors": [
{
"first": "K",
"middle": [],
"last": "Kummamuru",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lotlikar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Singal",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Krishnapuram",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of WWW-04",
"volume": "",
"issue": "",
"pages": "658--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kummamuru, K., R. Lotlikar, S. Roy, K. Singal and R. Krishnapuram. 2004. A Hierarchical Monothetic Document Clustering Algorithm for Summarization and Browsing Search Results. Proceedings of WWW-04, pp.658-665.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Algorithms for Non-negative Matrix Factorization",
"authors": [
{
"first": "D",
"middle": [
"D"
],
"last": "Lee",
"suffix": ""
},
{
"first": "H",
"middle": [
"S"
],
"last": "Seung",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of NIPS-2000",
"volume": "",
"issue": "",
"pages": "556--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, D. D. and H. S. Seung. 2000. Algorithms for Non-negative Matrix Factorization. Proceedings of NIPS-2000, pp.556-562.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Evaluating Document Clustering for Interactive Information Retrieval",
"authors": [
{
"first": "A",
"middle": [],
"last": "Leuski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of CIKM-01",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leuski, A. 2001. Evaluating Document Clustering for Interactive Information Retrieval. Proceedings of CIKM-01, pp.33-40.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving Non-negative Matrix Factorizations through Structured Initialization",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wild",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Curry",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Dougherty",
"suffix": ""
}
],
"year": 2004,
"venue": "Pattern Recognition",
"volume": "37",
"issue": "11",
"pages": "2217--2232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wild, S., J. Curry and A. Dougherty. 2004. Improving Non-negative Matrix Factorizations through Structured Initialization. Pattern Recognition, Vol.37, No.11, 2217-2232.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distance Metric Learning, with Application to Clustering with Side-information",
"authors": [
{
"first": "E",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Russell",
"suffix": ""
}
],
"year": 2003,
"venue": "Advances in Neural Information Processing Systems",
"volume": "15",
"issue": "",
"pages": "505--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing, E. P., A. Y. Ng, M. I. Jordan and S. Russell. 2003. Distance Metric Learning, with Application to Clustering with Side-information. Advances in Neural Information Processing Systems 15, 505-512.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Document Clustering Based on Non-negative Matrix Factorization",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Gong",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of SIGIR-03",
"volume": "",
"issue": "",
"pages": "267--273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, Wei., X. Liu and Y. Gong. 2003. Document Clustering Based on Non-negative Matrix Factorization. Proceedings of SIGIR-03, pp.267-273.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning to Cluster Web Search Results",
"authors": [
{
"first": "H.-J",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Q.-C",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "W.-Y",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR-04",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeng, H.-J., Q.-C. He, Z. Chen, W.-Y. Ma and J. Ma. 2001. Learning to Cluster Web Search Results. Proceedings of SIGIR-04, pp.33-40",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Decomposition error and clustering accuracy",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Average accuracy of each method",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"text": "",
"num": null,
"content": "<table><tr><td colspan=\"2\">Document data sets</td><td/><td/><td/></tr><tr><td>Data</td><td colspan=\"4\"># of documents # of terms # of non-zero elements # of classes</td></tr><tr><td>cacmcisi</td><td>4,663</td><td>41,681</td><td>83,181</td><td>2</td></tr><tr><td>cranmed</td><td>2,431</td><td>41,681</td><td>140,658</td><td>2</td></tr><tr><td>fbis</td><td>2,463</td><td>2,000</td><td>393,386</td><td>17</td></tr><tr><td>hitech</td><td>2,301</td><td>126,373</td><td>346,881</td><td>6</td></tr><tr><td>k1a</td><td>2,340</td><td>21,839</td><td>349,792</td><td>20</td></tr><tr><td>k1b</td><td>2,340</td><td>21,839</td><td>349,792</td><td>6</td></tr><tr><td>la1</td><td>3,204</td><td>31,472</td><td>484,024</td><td>6</td></tr><tr><td>la2</td><td>3,075</td><td>31,472</td><td>455,383</td><td>6</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "Comparison of the object function value",
"num": null,
"content": "<table><tr><td>Data</td><td>NMF-rfn</td></tr><tr><td colspan=\"2\">cacmcisi 1.0000</td></tr><tr><td colspan=\"2\">cranmed 1.0000</td></tr><tr><td>Fbis</td><td>0.9350</td></tr><tr><td>Hitech</td><td>0.9345</td></tr><tr><td>k1a</td><td>0.6340</td></tr><tr><td>k1b</td><td>0.9630</td></tr><tr><td>la1</td><td>1.0000</td></tr><tr><td>la2</td><td>0.9862</td></tr><tr><td>Mm</td><td>0.9979</td></tr><tr><td>re0</td><td>1.0000</td></tr><tr><td>re1</td><td>0.9974</td></tr><tr><td colspan=\"2\">reviews 0.6503</td></tr><tr><td>tr11</td><td>0.8971</td></tr><tr><td>tr12</td><td>1.0000</td></tr><tr><td>tr23</td><td>0.9806</td></tr><tr><td>tr31</td><td>0.9728</td></tr><tr><td>tr41</td><td>0.9409</td></tr><tr><td>tr45</td><td>0.8242</td></tr><tr><td>Wap</td><td>0.7679</td></tr><tr><td colspan=\"2\">Average 0.9201</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "Accuracy of each method",
"num": null,
"content": "<table><tr><td/><td>Data</td><td colspan=\"3\">NMF CLUTO Mcut NMF-rfn</td></tr><tr><td/><td colspan=\"3\">cacmcisi 0.5788 0.6054 0.6858</td><td>0.6858</td></tr><tr><td/><td colspan=\"3\">cranmed 0.5825 0.9975 0.9930</td><td>0.9930</td></tr><tr><td/><td>fbis</td><td colspan=\"2\">0.4125 0.4921 0.5278</td><td>0.4941</td></tr><tr><td/><td colspan=\"3\">hitech 0.4633 0.5228 0.3859</td><td>0.5059</td></tr><tr><td/><td>k1a</td><td colspan=\"2\">0.4107 0.4799 0.4658</td><td>0.5684</td></tr><tr><td/><td>k1b</td><td colspan=\"2\">0.6389 0.6081 0.5205</td><td>0.5342</td></tr><tr><td/><td>la1</td><td colspan=\"2\">0.6798 0.7147 0.6879</td><td>0.6879</td></tr><tr><td/><td>la2</td><td colspan=\"2\">0.5873 0.6582 0.7028</td><td>0.6924</td></tr><tr><td/><td>mm</td><td colspan=\"2\">0.5470 0.5331 0.9583</td><td>0.9556</td></tr><tr><td/><td>re0</td><td colspan=\"2\">0.3710 0.3198 0.3670</td><td>0.3670</td></tr><tr><td/><td>re1</td><td colspan=\"2\">0.3826 0.4146 0.4490</td><td>0.4599</td></tr><tr><td/><td colspan=\"3\">reviews 0.7196 0.6316 0.6776</td><td>0.6424</td></tr><tr><td/><td>tr11</td><td colspan=\"2\">0.5556 0.6812 0.6546</td><td>0.7295</td></tr><tr><td/><td>tr12</td><td colspan=\"2\">0.6422 0.6869 0.7764</td><td>0.7764</td></tr><tr><td/><td>tr23</td><td colspan=\"2\">0.3971 0.4559 0.4363</td><td>0.4363</td></tr><tr><td/><td>tr31</td><td colspan=\"2\">0.5696 0.5674 0.7228</td><td>0.6624</td></tr><tr><td/><td>tr41</td><td colspan=\"2\">0.5239 0.6412 0.5661</td><td>0.6014</td></tr><tr><td/><td>tr45</td><td colspan=\"2\">0.6347 0.5986 0.7580</td><td>0.7101</td></tr><tr><td/><td>wap</td><td colspan=\"2\">0.4686 0.4487 0.4109</td><td>0.5096</td></tr><tr><td/><td colspan=\"3\">Average 0.5350 0.5821 
0.6182</td><td>0.6322</td></tr><tr><td>0.65</td><td/><td/><td>0.618</td><td>0.632</td></tr><tr><td>0.6</td><td/><td>0.582</td><td/></tr><tr><td>0.55</td><td>0.535</td><td/><td/></tr><tr><td>0.5</td><td/><td/><td/></tr><tr><td/><td>NMF</td><td>CLUTO</td><td>Mcut</td><td>NMF-rfn</td></tr></table>",
"type_str": "table",
"html": null
}
}
}
}