{ "paper_id": "Y16-3003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:47:02.014113Z" }, "title": "Sentence Clustering using PageRank Topic Model", "authors": [ { "first": "Kenshin", "middle": [], "last": "Ikegami", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Tokyo Tokyo", "location": { "country": "Japan" } }, "email": "" }, { "first": "Yukio", "middle": [], "last": "Ohsawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Tokyo Tokyo", "location": { "country": "Japan" } }, "email": "ohsawa@sys.t.u-tokyo.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The clusters of review sentences on the viewpoints from the products' evaluation can be applied to various use. The topic models, for example Unigram Mixture (UM), can be used for this task. However, there are two problems. One problem is that topic models depend on the randomly-initialized parameters and computation results are not consistent. The other is that the number of topics has to be set as a preset parameter. To solve these problems, we introduce PageRank Topic Model (PRTM), that approximately estimates multinomial distributions over topics and words in a vocabulary using network structure analysis methods to Word Co-occurrence Graphs. In PRTM, an appropriate number of topics is estimated using the Newman method from a Word Co-occurrence Graph. Also, PRTM achieves consistent results because multinomial distributions over words in a vocabulary are estimated using PageRank and a multinomial distribution over topics is estimated as a convex quadratic programming problem. Using two review datasets about hotels and cars, we show that PRTM achieves consistent results in sentence clustering and an appropriate estimation of the number of topics for extracting the viewpoints from the products' evaluation.", "pdf_parse": { "paper_id": "Y16-3003", "_pdf_hash": "", "abstract": [ { "text": "The clusters of review sentences on the viewpoints from the products' evaluation can be applied to various use. The topic models, for example Unigram Mixture (UM), can be used for this task. However, there are two problems. One problem is that topic models depend on the randomly-initialized parameters and computation results are not consistent. The other is that the number of topics has to be set as a preset parameter. To solve these problems, we introduce PageRank Topic Model (PRTM), that approximately estimates multinomial distributions over topics and words in a vocabulary using network structure analysis methods to Word Co-occurrence Graphs. In PRTM, an appropriate number of topics is estimated using the Newman method from a Word Co-occurrence Graph. Also, PRTM achieves consistent results because multinomial distributions over words in a vocabulary are estimated using PageRank and a multinomial distribution over topics is estimated as a convex quadratic programming problem. Using two review datasets about hotels and cars, we show that PRTM achieves consistent results in sentence clustering and an appropriate estimation of the number of topics for extracting the viewpoints from the products' evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many people buy products through electronic commerce and Internet auction site. 
Consumers have to use products' detailed information for decision making in purchasing because they cannot see the real products. In particular, reviews from other consumers give them useful information because reviews contain consumers' experience in practical use. Also, reviews are useful for providers of products or services to measure the consumers' satisfaction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our research, we focus on generating clusters of review sentences on the viewpoints from the products' evaluation. For example, reviews of home electric appliance are usually written based on the following the viewpoints: performance, design, price, etc. If we generate clusters of the review sentences on these viewpoints, the clusters can be applied to various uses. For example, if we extract representative expressions from clusters of sentences, we can summarize reviews briefly. This is useful because some products have thousands of reviews and hard to be read and understood.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are various methods to generate clusters of sentences. Among several methods, we adopt probabilistic generative models for sentence clustering because the summarizations of clusters can be represented as word distributions. Probabilistic generative models are the methods that assume underlying probabilistic distributions generating observed data, and that estimate the probabilistic distributions from the observed data. In language modeling, these are called topic models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Latent Dirichlet Allocation (LDA) (Blei et al., 2003) is a well-known topic model used in document clustering. LDA represents each document as a mixture of topics. A topic means a multinomial distribution over words in a vocabulary.", "cite_spans": [ { "start": 34, "end": 53, "text": "(Blei et al., 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unigram Mixture (UM) (Nigam et al., 2000) as-sumes that each document is generated by a multinomial distribution over words in a vocabulary, \u03c6 k = (\u03c6 k1 , \u2022 \u2022 \u2022 , \u03c6 kV ), where V denotes the size of vocabulary and \u03c6 kv denotes the appearance probability of v-th term in the k-th topic. UM estimates a multinomial distribution over topics,", "cite_spans": [ { "start": 21, "end": 41, "text": "(Nigam et al., 2000)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u03b8 = (\u03b8 1 , \u2022 \u2022 \u2022 , \u03b8 K ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where \u03b8 k denotes the appearance probability of kth topic. After all, K+1 multinomial distributions, \u03b8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "and \u03c6 = (\u03c6 1 , \u2022 \u2022 \u2022 , \u03c6 K )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "are estimated from the observed data, where K denotes the number of topics. Using estimated \u03b8 and \u03c6, the probability that a document is generated from \u03c6 k is calculated. 
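As a concrete illustration, the following minimal sketch (our own, not the authors' code; the function and variable names are hypothetical) computes this probability for one document from an already estimated theta and phi, working in log space to avoid underflow on longer documents:

```python
import numpy as np

def topic_posterior(word_counts, theta, phi, eps=1e-12):
    """Posterior probability that a document was generated by each topic.

    theta: (K,) topic probabilities; phi: (K, V) word distributions;
    word_counts: (V,) term counts N_dv of one document (sentence)."""
    # log p(k) + sum_v N_dv * log phi_kv, computed in log space for stability
    log_scores = np.log(theta + eps) + np.log(phi + eps) @ word_counts
    log_scores -= log_scores.max()          # shift before exponentiating
    scores = np.exp(log_scores)
    return scores / scores.sum()

# the document (sentence) is assigned to the most probable topic:
# cluster = int(np.argmax(topic_posterior(counts, theta, phi)))
```

The same assignment rule reappears as Equation (8) in subsection 2.3.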
This probability determines the clusters of the sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In UM, \u03b8 and \u03c6 can be estimated by iterative computation. However, since \u03b8 and \u03c6 are initialized randomly, computation results are not consistent. In addition to this, the number of topics K has to be set as a preset parameter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To estimate the appropriate number of topics, the average cosine distance (AveDis) of each pair of topics can be used (Cao et al., 2009) . This measure is based on the assumption that better topic distributions have fewer overlapping words. However, to estimate the appropriate number of topics based on this measure, we need to set several numbers of topics and it takes much time to calculate.", "cite_spans": [ { "start": 118, "end": 136, "text": "(Cao et al., 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce PageRank Topic Model (PRTM) to consistently estimate \u03c6 and \u03b8 using Word Co-occurrence Graphs. PRTM consists of 4 steps as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Convert corpus W into a Word Co-occurrence Graph G w .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Divide graph G w into several communities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Measure PageRank in each community and estimate multinomial distributions over words in a vocabulary \u03c6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "4. Estimate a multinomial distribution over topics \u03b8 as a convex quadratic programming problem assuming the linearity of \u03c6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Network structures have been applied to several Natural Language Processing tasks (Ohsawa et al., 1998) (Bollegala et al., 2008) . For example, synonyms can be identified using network community detection method, e.g. the Newman method (Clauset et al., 2004) (Sakaki et al., 2007) . In this research, we also apply the Newman method to detect communities of co-occurrence words in step 2. In step 3, we calculate the appearance probability of nodes using PageRank (Brin and Page, 1998) . PageRank is the appearance probability of nodes in a network. In Word Co-occurrence Graph G w , each node represents a word. Therefore, we regard a set of PageRank of nodes as \u03c6. After that, \u03b8 is estimated using a convex quadratic programming problem based on the assumption of the linearity of \u03c6 in step 4. 
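To make step 4 concrete, the following is a minimal sketch (ours, not the authors' implementation) of that constrained least-squares problem, solved here with SciPy's SLSQP routine; phi is the K x V matrix of per-topic word distributions from step 3 and phi_all is the word distribution of the whole graph, both assumed to be already available:

```python
import numpy as np
from scipy.optimize import minimize

def estimate_theta(phi, phi_all):
    """Step 4 of PRTM: minimize ||phi_all - phi^T theta||^2
    subject to theta >= 0 and sum(theta) = 1."""
    K = phi.shape[0]
    objective = lambda theta: np.sum((phi_all - phi.T @ theta) ** 2)
    result = minimize(
        objective,
        x0=np.full(K, 1.0 / K),                        # uniform starting point
        method="SLSQP",
        bounds=[(0.0, 1.0)] * K,                       # theta_k >= 0
        constraints=[{"type": "eq", "fun": lambda t: t.sum() - 1.0}],
    )
    return result.x
```

Because the objective is convex, any feasible starting point leads to the same global optimum.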
From these steps, reproducible \u03c6, \u03b8 and clustering results can be obtained because the Newman method, PageRank and the convex quadratic programming problem are not depending on random initialization of parameters.", "cite_spans": [ { "start": 82, "end": 103, "text": "(Ohsawa et al., 1998)", "ref_id": "BIBREF15" }, { "start": 104, "end": 128, "text": "(Bollegala et al., 2008)", "ref_id": "BIBREF2" }, { "start": 236, "end": 258, "text": "(Clauset et al., 2004)", "ref_id": "BIBREF0" }, { "start": 259, "end": 280, "text": "(Sakaki et al., 2007)", "ref_id": "BIBREF11" }, { "start": 464, "end": 485, "text": "(Brin and Page, 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is another advantage to identify communities of co-occurrence words using the Newman method. The Newman method yields an optimized number of communities K in the sense it extracts communities to maximize Modularity Q. Modularity Q is one measure of the strength of division of a network structure into several communities. When modularity Q is maximized, the graph is expected to be divided into an appropriate number of communities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main contributions are summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Using PRTM, we estimate consistent multinomial distributions over topics and words. It enables us to get consistent computation results of sentence clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 PRTM yields an appropriate number of topics, K, as well as the other parameters. It is more suitable to estimate the number of viewpoints from the products' evaluation than the average cosine distance measurement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we first explain our proposed method, PRTM, in section 2. We show the experimental results in section 3 and compare with related works in section 4. At last, we discuss our conclusions in section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we explain the Newman method and PageRank in subsection 2.1, 2.2. After that, we show our proposed method, PageRank Topic Model, in subsection 2.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Method", "sec_num": "2" }, { "text": "The Newman method is a method to detect several communities from a network structure (Clauset et al., 2004) . The method puts together nodes to maximize Modularity Q. Modularity Q is defined as follows:", "cite_spans": [ { "start": 85, "end": 107, "text": "(Clauset et al., 2004)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Newman method", "sec_num": "2.1" }, { "text": "Q = K i=1 (e ii \u2212 a 2 i ) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Newman method", "sec_num": "2.1" }, { "text": "where K is the number of communities, e ii is the ratio of the number of edges in the i-th community to the total number of edges in the network, a i is the ratio of the number of edges the i-th community from the other communities to the total number of edges in the network. 
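The following sketch (our own illustration, with networkx assumed as a dependency) computes Q for a given partition using the equivalent degree-based form of e_ii and a_i:

```python
import networkx as nx

def modularity(G, communities):
    """Modularity Q = sum_k (e_kk - a_k^2) of an undirected graph G for a
    partition given as an iterable of node sets; e_kk is the fraction of
    edges inside community k, a_k the fraction of edge ends attached to it."""
    m = G.number_of_edges()
    q = 0.0
    for com in communities:
        com = set(com)
        e_kk = G.subgraph(com).number_of_edges() / m          # intra-community edges
        a_k = sum(d for _, d in G.degree(com)) / (2.0 * m)    # edge ends in the community
        q += e_kk - a_k ** 2
    return q

# networkx ships the same greedy agglomerative scheme (Clauset-Newman-Moore) as
# networkx.algorithms.community.greedy_modularity_communities(G)
```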
Modularity Q represents the density of connections between the nodes within communities. Therefore, the higher the Modularity Q is, the more accurately the network is divided into communities. In the Newman method, communities are extracted by the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Newman method", "sec_num": "2.1" }, { "text": "1. Assign each node to a community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Newman method", "sec_num": "2.1" }, { "text": "when any two communities are merged into one community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculate the increment in Modularity \u0394Q", "sec_num": "2." }, { "text": "3. Merge the two communities, that score the highest \u0394Q in the previous process, into one community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculate the increment in Modularity \u0394Q", "sec_num": "2." }, { "text": "4. Repeat step 2 and step 3 as long as Q increases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculate the increment in Modularity \u0394Q", "sec_num": "2." }, { "text": "PageRank (Brin and Page, 1998) is the algorithm to measure the importance of each node in a network structure. It has been applied to evaluating the importance of websites in the World Wide Web. In PageRank, the transition probability matrix H \u2208", "cite_spans": [ { "start": 9, "end": 30, "text": "(Brin and Page, 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "PageRank", "sec_num": "2.2" }, { "text": "R V \u00d7V +", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank", "sec_num": "2.2" }, { "text": "is generated from network structure, where V denotes the number of nodes. H ij represents the transition probability from node n i to node n j , a ratio of the number of edges from node n i to node n j to the total number of edges from node n i . However, if node n i does not have outgoing edges (dangling node), node n i does not have transition to any other nodes. To solve this problem, matrix H is extended to matrix G \u2208 R V \u00d7V + as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank", "sec_num": "2.2" }, { "text": "G = dH + (1 \u2212 d) 1 V 1 T 1 (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank", "sec_num": "2.2" }, { "text": "where d is a real number within [0, 1] and 1 \u2208 {1} V . PageRank of node n i , i.e. P R(n i ), is calculated using matrix G as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R T = R T G", "eq_num": "(3)" } ], "section": "PageRank", "sec_num": "2.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank", "sec_num": "2.2" }, { "text": "R = (P R(n i ), \u2022 \u2022 \u2022 , P R(n V )) T . 
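A minimal power-iteration sketch of Equations (2) and (3) (our own; it assumes the rows of H corresponding to dangling nodes have already been replaced by 1/V) looks as follows:

```python
import numpy as np

def pagerank(H, d=0.85, tol=1e-10, max_iter=1000):
    """Iterate R^T <- R^T G with G = d*H + (1 - d) * (1/V) * 1 1^T.

    H: (V, V) row-stochastic transition matrix of the graph."""
    V = H.shape[0]
    G = d * H + (1.0 - d) / V      # Equation (2): teleportation added to every entry
    R = np.full(V, 1.0 / V)        # start from the uniform distribution
    for _ in range(max_iter):
        R_next = R @ G             # one power-method step of Equation (3)
        if np.abs(R_next - R).sum() < tol:
            return R_next / R_next.sum()
        R = R_next
    return R / R.sum()
```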
Equa- tion", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank", "sec_num": "2.2" }, { "text": "(3) can be solved with the simultaneous linear equations or the power method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank", "sec_num": "2.2" }, { "text": "In this subsection, we explain our proposed method, PageRank Topic Model (PRTM), to estimate a multinomial distribution over topics \u03b8 and words in a vocabulary \u03c6 using a Word Co-occurrence Graph. PRTM consists of 4 steps as shown in section 1. We explain them by following these steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "Step 1: First, we convert a dataset into a bag of words. Each bag represents a sentence in the dataset. We define Word Co-occurrence Graph G w (V, E) as an undirected weighted graph where each vocabulary v i is represented by a node n i \u2208 V . An edge e ij \u2208 E is created between node n i and node n j if v i and v j co-occur in the bag of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "Step 2: We apply the Newman method to graph G w to extract communities Com (k) , where k = 1, \u2022 \u2022 \u2022 , K and K denotes the number of communities. Com (k) is a set of nodes in G w . From this results, we generate Word Co-occurrence SubGraph G", "cite_spans": [ { "start": 75, "end": 78, "text": "(k)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "(k) w (V (k) , E (k) ). Although V (k) is the same as V of G w , an edge e (k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "ij \u2208 E (k) is created if node n i or n j exists in Com (k) . Figure 1 shows the relationship between Com (k) and G (k) w .", "cite_spans": [ { "start": 55, "end": 58, "text": "(k)", "ref_id": null }, { "start": 105, "end": 108, "text": "(k)", "ref_id": null } ], "ref_spans": [ { "start": 61, "end": 69, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "Step 3: We measure the importance of each node in G (k) w with PageRank. Page et al. (1999) explained PageRank by the random surfer model. A random surfer is a person who opens a browser to any page and starts following hyperlinks. PageRank can be interpreted as the probability of a random surfer existence in nodes. In this case, a node n ", "cite_spans": [ { "start": 73, "end": 91, "text": "Page et al. (1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "(k) i represents vocabulary v i . Therefore P R(n (k) i ) represents the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "v i in G (k) w . 
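Steps 1 to 3 can be pieced together roughly as follows; this is a sketch under our own assumptions (sentences already tokenized into bags of kept vocabulary words, unweighted edges although the paper defines G_w as weighted, networkx used for both community detection and PageRank), not the authors' implementation:

```python
import itertools
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

def estimate_phi(sentences, vocab):
    """Steps 1-3 of PRTM: co-occurrence graph, Newman communities, and
    per-community PageRank vectors stacked into phi with shape (K, V)."""
    index = {w: i for i, w in enumerate(vocab)}

    # Step 1: undirected graph with an edge for every pair of co-occurring words
    G = nx.Graph()
    G.add_nodes_from(vocab)
    for bag in sentences:
        G.add_edges_from(itertools.combinations(set(bag), 2))

    # Step 2: communities Com^(k) extracted by maximizing Modularity Q
    communities = greedy_modularity_communities(G)

    # Step 3: G^(k)_w keeps all nodes but only edges touching Com^(k);
    # its PageRank vector is used as the k-th word distribution
    phi = np.zeros((len(communities), len(vocab)))
    for k, com in enumerate(communities):
        G_k = nx.Graph()
        G_k.add_nodes_from(vocab)
        G_k.add_edges_from(e for e in G.edges() if e[0] in com or e[1] in com)
        for w, score in nx.pagerank(G_k).items():
            phi[k, index[w]] = score
    return phi, communities
```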
We re- gard G (k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "w as k-th topic and define multinomial distributions over words in a vocabulary \u03c6 k as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "\u03c6 k = (\u03c6 k1 , \u2022 \u2022 \u2022 , \u03c6 kV ) = (P R(n (k) 1 ), \u2022 \u2022 \u2022 , P R(n (k) V )) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "Step 4: We estimate a multinomial distribution over topics \u03b8 using \u03c6, that is estimated in Step 3. To estimate \u03b8, we assume the linearity of \u03c6 as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "\u03c6 \u2022v = K k=1 \u03c6 kv \u03b8 k (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "where \u03c6 \u2022v denotes the appearance probability of v-th term in graph G w . However, it is impossible to estimate a \u03b8 k that satisfies Equation (5) in all of words in a vocabulary because each \u03c6 k is independently estimated using PageRank. Therefore, we estimate \u03b8 k minimizing the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "arg min \u03b8 L = arg min \u03b8 V v (\u03c6 \u2022v \u2212 K k=1 \u03c6 kv \u03b8 k ) 2 s.t. \u03b8 = 1, \u03b8 \u2265 0 (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "By reformulating Equation (6), the following equation can be obtained:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "arg min \u03b8 L = arg min \u03b8 1 2 \u03b8 T Q\u03b8 + c T \u03b8 s.t. \u03b8 = 1, \u03b8 \u2265 0", "eq_num": "(7)" } ], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "where the (i, j)-th element of matrix Q \u2208 R K\u00d7K denotes 2\u03c6 i T \u03c6 j and the i-th element of vector c de-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "notes \u22122\u03c6 \u2022 T \u03c6 i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "Equation 7is formulated as a convex quadratic programming problem, of which a global optimum solution should be obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "The probability that document d is generated from k-th topic, i.e. 
p(z d = k|w d ), is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "p(z d = k|w d ) = p(w d |k)p(k) K k =1 p(w d |k )p(k ) = \u03b8 k V v=1 \u03c6 N dv kv K k =1 \u03b8 k V v=1 \u03c6 N dv k v (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "where N dv denotes the number of v-th term in document d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank Topic Model", "sec_num": "2.3" }, { "text": "In this section, we show the evaluation results of PRTM using real-world text data in comparison with UM and LDA. In subsection 3.1, we explain our test datasets and the measure used to evaluate sentence clustering accuracy. Furthermore, we present the conditions of UM and LDA in the same subsection. We show topic examples estimated by PRTM, UM, and LDA in subsection 3.2. In subsection 3.3, we compare the sentence clustering accuracy of PRTM with that of UM and LDA. In addition, we compare the estimated number of topics of PRTM with that of the average cosine distance measurement in subsection 3.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "In the experiments, we used the following two datasets: Hotel Reviews: This is Rakuten Travel 1 Japanese review dataset and has been published by Rakuten, Inc. In this dataset, there are 4309 sentences of 1000 reviews. We tokenized them using Japanese morphological analyzer, mecab 2 , and selected nouns and adjectives. It contains a vocabulary of 3780 words and 19401 word tokens. During preprocessing, we removed high-frequency words appearing more than 300 times and low frequency words appearing less than two times. The sentences of this dataset were classified by two annotators. The annotators (humans) were asked to classify each sentence into six categories; \"Service\", \"Room\", \"Location\", \"Facility and Amenity\", \"Bathroom\", and \"Food\". We adopted these six categories because Rakuten Travel website scores hotels by these six evaluation viewpoints. In evaluation of sentence clustering accuracy, we used 2000 sentences from the total sentences which both the annotators classified into the same category. Car Reviews: This is Edmunds 3 Car English review dataset and has been published by the Opinion Based Entity Ranking project (Ganesan and Zhai, 2011) . In this dataset, there are 7947 reviews in 2009, out of which we randomly selected 600 reviews consisting of 3933 sentences. We tokenized them using English morphological analyzer, Stanford CoreNLP 4 , and selected nouns, adjectives and verbs. It contains a vocabulary of 3975 words and 27385 word tokens. During preprocessing, we removed high-frequency words appearing more than 300 times and low frequency words appearing less than two times. All of the 3922 sentences were classified into eight categories by two annotators; \"Fuel\", \"Interior\", \"Exterior\", \"Build\", \"Performance\", \"Comfort\", \"Reliability\" and \"Fun\". We adopted these eight categories for the same reason as Hotel Review. There are 1148 sentences which both annotators classified into the same category and we used them in the evaluation of sentence clustering accuracy. Evaluation: We measured Purity, Inverse Purity and their F 1 score for sentence clustering evaluation (Zhao and Karypis, 2001) . Purity focuses on the frequency of the most common category into each cluster. 
Purity is calculated as follows:", "cite_spans": [ { "start": 1142, "end": 1166, "text": "(Ganesan and Zhai, 2011)", "ref_id": "BIBREF7" }, { "start": 2111, "end": 2135, "text": "(Zhao and Karypis, 2001)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "P urity = i |C i | n max j P recision(C i , L j ) (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "where C i is the set of i-th cluster, L j is the set of jth given category and n denotes the number of samples. P recision(C i , L j ) is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P recision(C i , L j ) = |C i \u2229 L j | |C i |", "eq_num": "(10)" } ], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "However, if we make one cluster per sample, we reach a maximum purity value. Therefore we also measured Inverse Purity. Inverse Purity focuses on the cluster with maximum recall for each category and is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "InverseP urity = j |L j | n max i P recision(L j , C i )", "eq_num": "(11)" } ], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "In this experiment, we used the harmonic mean of Purity and Inverse Purity, F 1 score, as clustering accuracy. F 1 score is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "F 1 = 2 \u00d7 P urity \u00d7 InverseP urity P urity + InverseP urity (12)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "Estimation of number of topics: To estimate the appropriate number of topics, we used the average cosine distance measurement (AveDis) (Cao et al., 2009) . AveDis is calculated using the multinomial distributions \u03c6 as follows:", "cite_spans": [ { "start": 135, "end": 153, "text": "(Cao et al., 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "corre(\u03c6 i , \u03c6 j ) = V v=0 \u03c6 iv \u03c6 jv V v=0 (\u03c6 iv ) 2 V v=0 (\u03c6 jv ) 2 AveDis = K i=0 K j=i+1 corre(\u03c6 i , \u03c6 j ) K \u00d7 (K \u2212 1)/2 (13)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "where V denote the number of words in a vocabulary and K denotes the number of topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "If topic i and j are not similar, corre(\u03c6 i , \u03c6 j ) becomes smaller. Therefore, when the appropriate number of topics K is preset, that is all the topics have different word distributions, AveDis becomes smaller. Comparative Methods and Settings: We compared PRTM with UM and LDA in the experiments. 
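For reference, Equations (9) to (13) amount to a few lines of code; the sketch below (our own helpers with hypothetical names, not part of the original evaluation scripts) takes per-sentence cluster ids, gold category labels, and the matrix of topic word distributions:

```python
import numpy as np
from collections import Counter

def purity_f1(clusters, labels):
    """Purity, Inverse Purity, and their harmonic mean F1 (Equations (9)-(12))."""
    n = len(labels)
    by_cluster = {}                      # by_cluster[c][l] = |C_c intersect L_l|
    for c, l in zip(clusters, labels):
        by_cluster.setdefault(c, Counter())[l] += 1
    purity = sum(counts.most_common(1)[0][1] for counts in by_cluster.values()) / n

    by_label = {}                        # counts of each category across clusters
    for counts in by_cluster.values():
        for l, v in counts.items():
            by_label.setdefault(l, []).append(v)
    inverse_purity = sum(max(v) for v in by_label.values()) / n

    return purity, inverse_purity, 2 * purity * inverse_purity / (purity + inverse_purity)

def ave_dis(phi):
    """Average pairwise cosine similarity of topic word distributions (Equation (13))."""
    K = phi.shape[0]
    unit = phi / np.linalg.norm(phi, axis=1, keepdims=True)
    sims = unit @ unit.T
    return (sims.sum() - np.trace(sims)) / (K * (K - 1))
```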
UM can be calculated using several methods: EM algorithm (Dempster et al., 1977) , Collapsed Gibbs sampling (Liu, 1994) (Yamamoto and Sadamitsu, 2005) , or Collapsed Variational Baysian (Teh et al., 2006) . In our experiments, topic and word distributions \u03b8, \u03c6 were estimated using Collapsed Gibbs sampling for both the UM and LDA models. The hyper-parameter for all the Dirichlet distributions were set at 0.01 and were updated at every iteration. We stopped iterative computations when the difference of likelihood between steps got lower than 0.01. ", "cite_spans": [ { "start": 357, "end": 380, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF1" }, { "start": 408, "end": 419, "text": "(Liu, 1994)", "ref_id": "BIBREF5" }, { "start": 420, "end": 450, "text": "(Yamamoto and Sadamitsu, 2005)", "ref_id": "BIBREF9" }, { "start": 486, "end": 504, "text": "(Teh et al., 2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Preparation for Experiment", "sec_num": "3.1" }, { "text": "We used Hotel Reviews dataset and estimated words distributions \u03c6 by PRTM, UM, and LDA. All of the PRTM, UM, and LDA were given the number of topics K = 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Examples", "sec_num": "3.2" }, { "text": "In Table 1 , we show the terms of top fifth appearance probabilities in each topic estimated. As we can see, PRTM and UM contain similar terms in cluster 1, 2, 3, and 4. For example, in cluster 1, both of PRTM and UM have terms, \"breakfast\" and \"meal\". Therefore its topic seems to be \"Food.\" On the other hand, there are the same terms, \"support\" and \"reception\", in cluster 4. This topic seems to represent \"Service.\" However, in LDA, the estimation seems to fail because all of the topics have similar words (e.g. the word \"breakfast\" exists in all the topics.) For these reasons, it is more suitable to assume that each sentence has one topic than to assume that it has multiple topics.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Topic Examples", "sec_num": "3.2" }, { "text": "We evaluated sentence clustering accuracy comparing PRTM with UM and LDA on Hotel Review and Car Review datasets. By changing the number of topics K from 3 to 20, we trained topics and word distributions \u03b8, \u03c6 with PRTM, UM, and LDA. We generated clusters of sentences by Equation (8) in PRTM and UM. In LDA, we decided the cluster of sentence using topic distributions of each sentence. The sentence clustering accuracy was evaluated by F 1 score on Purity and Inverse Purity. F 1 scores of UM and LDA were the mean values of the tests running ten times, because the computation results vary depending on randomly initialized \u03b8 and \u03c6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Clustering Accuracy", "sec_num": "3.3" }, { "text": "We present sentence clustering accuracy for all the PRTM, UM, and LDA in Figure 2 . As shown in Figure 2 , PRTM outperformed UM when the number of topics is more than six in both the Hotel and Car Review datasets. For UM, F 1 score became highest when K was small and gradually decreased when K became larger. On the other hand, with PRTM, F 1 score did not decrease if K became larger. The F 1 scores of LDA were lower than PRTM and UM because it is not suitable for review sentence clustering as mentioned in subsection 3.2. 
Table 2 shows the comparison of the appearance probabilities \u03b8 k with the number of topics K = 6 and K = 12. Similar \u03b8 k was estimated by PRTM and UM with K = 6. However, with K = 12, PRTM had the larger deviation of the \u03b8 k from 2.93\u00d7 10 \u22126 to 2.52 \u00d7 10 \u22121 . On the other hand, UM with K = 12 had the more uniform \u03b8 k than PRTM. This large deviation of \u03b8 of PRTM prevents sentences in the same category from being divided into several clusters. This is the reason why the F 1 score of UM gradually decreased and PRTM achieved invariant sentence clustering accuracy. \u03b8 10 1.58 \u00d7 10 \u22125 4.02 \u00d7 10 \u22122 \u03b8 11 1.28 \u00d7 10 \u22125 3.90 \u00d7 10 \u22122 \u03b8 12 2.93 \u00d7 10 \u22126 1.83 \u00d7 10 \u22122 can be estimated using the average cosine distance (AveDis) measurement. Therefore, we compared Modularity of PRTM with AveDis of UM and LDA with different numbers of topics. We trained topic and word distributions \u03b8, \u03c6, and estimated the optimal number of topics K with both of Hotel Reviews and Car Reviews. The AveDis scores of UM and LDA were the mean values of the tests running three times for the same reason as subsection 3.3. Figure 3 shows the experimental results. The AveDis of UM got the smallest scores in K = 47 with Hotel Reviews and in K = 47 in Car Reviews. Furthermore, AveDis of LDA decreased monotonically in the range of K = 3 to K = 60. On the other hand, the Modularity of PRTM got largest in K = 7 with Hotel Reviews and in K = 6 with Car Reviews. When we consider that Rakuten Travel website scores hotels by six viewpoints and that Edmunds website scores cars by eight viewpoints, the Modularity of PRTM estimates more appropriate number of topics than AveDis of UM in review datasets.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 81, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 96, "end": 104, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 527, "end": 534, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1622, "end": 1630, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Sentence Clustering Accuracy", "sec_num": "3.3" }, { "text": "There are several previous works of probabilistic generative models. Latent Dirichlet Allocation (LDA) (Blei et al., 2003) estimates topic distributions for each document and word distributions for each topic. On the other hand, Unigram Mixtures (UM) (Nigam et al., 2000) estimates a topic distribution for all the documents and word distributions for each topic. In both papers, their models are tested at document classification task using WebKB datasets which contain 4199 web sites and 23830 words in a vocabulary. Twitter-LDA (Zhao et al., 2011) has been presented to estimate more coherent topic from tweets which consist of less than 140 letters. In Twitter-LDA model, it is hypothesized that one tweet is regarded to be generated from one topic such as UM. Twitter-LDA is tested using over 1 million tweets which have over 20000 words in a vocabulary.", "cite_spans": [ { "start": 103, "end": 122, "text": "(Blei et al., 2003)", "ref_id": null }, { "start": 251, "end": 271, "text": "(Nigam et al., 2000)", "ref_id": "BIBREF6" }, { "start": 531, "end": 550, "text": "(Zhao et al., 2011)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "There are several benefits of using probabilistic generative models for sentence clustering as described in section 1. 
However, these probabilistic generative models need much amount of datasets to get consistent computation results. In our experiments, we used about 4000 sentences of reviews which are the same number of documents as in WebKB datasets. However, there are few words in a vocabulary since a sentence of reviews has fewer words than a website. Therefore, in UM and LDA, the computation results seriously depended on randomly-initialized parameters, and lower clustering accuracy was obtained than PRTM in our experiment. To get consistent computation results from short sentence corpus with probabilistic generative models, over 1 million sentences are needed for like the experiment in Twitter-LDA. However, our proposed method, PageRank Topic Model (PRTM), can get consistent multinomial distributions over topics and words with few datasets because the network structure analysis methods are not dependent on randomly-initialized parameters. Therefore, PRTM achieved higher sentence clustering accuracy than UM and LDA with few review datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In this paper, we have presented PageRank Topic Model (PRTM) to estimate a multinomial distribution over topics \u03b8 and words \u03c6 applying the network structure analysis methods and the convex quadratic programming problem to Word -Cooccurrence Graphs. With PRTM, the consistent computation results can be obtained because PRTM is not denpendent on randomly-initialized \u03b8 and \u03c6. Furthermore, compared to other approaches at the task of estimations of the appropriate number of topics, PRTM estimated more appropriate number of topics for extracting the viewpoints from reviews datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "30th Pacific Asia Conference on Language, Information and Computation (PACLIC 30)Seoul, Republic of Korea, October 28-30, 2016", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://travel.rakuten.co.jp/ 2 http://taku910.github.io/mecab/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.edmunds.com/ 4 http://stanfordnlp.github.io/CoreNLP/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was partially supported by Core Research for Evolutionary Science and Technology (CREST) of Japan Science and Technology Agency (JST).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Finding community structure in very large networks", "authors": [ { "first": "Aaron", "middle": [], "last": "Clauset", "suffix": "" }, { "first": "E", "middle": [ "J" ], "last": "Mark", "suffix": "" }, { "first": "Cristopher", "middle": [], "last": "Newman", "suffix": "" }, { "first": "", "middle": [], "last": "Moore", "suffix": "" } ], "year": 2004, "venue": "Physical review E", "volume": "70", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaron Clauset, Mark EJ Newman and Cristopher Moore. 2004. Finding community structure in very large net- works. 
Physical review E, 70(6):066111.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "Arthur", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "Nan", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "Donald", "middle": [ "B" ], "last": "", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society, Series B (methodological)", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur P. Dempster, Nan M. Laird, and Donald B. Ru- bin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (methodological), 39(1): 1-38.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Co-occurrence Graph-based Approach for Personal Name Alias Extraction from Anchor Texts", "authors": [ { "first": "Danushka", "middle": [], "last": "Bollegala", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Matsuo", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2008, "venue": "Proceedings of International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "865--870", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danushka Bollegala, Yutaka Matsuo, and Mitsuru Ishizuka. 2008. A Co-occurrence Graph-based Ap- proach for Personal Name Alias Extraction from An- chor Texts. In Proceedings of International Joint Con- ference on Natural Language Processing: 865-870.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A density-based method for adaptive LDA model selection", "authors": [ { "first": "Juan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Tian", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Jintao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yongdong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2009, "venue": "Neurocomputing", "volume": "72", "issue": "", "pages": "1775--1781", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juan Cao, Tian Xia, Jintao Li, Yongdong Zhang, and Sheng Tang. 2009. A density-based method for adaptive LDA model selection. Neurocomputing, 72: 1775-1781.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The collapsed Gibbs sampler in Bayesian computations with applications to a gene regulation problem", "authors": [ { "first": "S", "middle": [], "last": "Jun", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1994, "venue": "Journal of the American Statistical Association", "volume": "89", "issue": "427", "pages": "958--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun S. Liu. 1994. The collapsed Gibbs sampler in Bayesian computations with applications to a gene regulation problem. 
Journal of the American Statis- tical Association, 89(427): 958-966.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Text Classification from Labeled and Unlabeled Documents using EM", "authors": [ { "first": "Kamal", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "Andrew", "middle": [ "K" ], "last": "Mccallum", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Thrun", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2000, "venue": "Machine Learning", "volume": "39", "issue": "", "pages": "61--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kamal Nigam, Andrew K. McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text Classification from La- beled and Unlabeled Documents using EM. Machine Learning, 39(2/3): 61-67.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Opinion-Based Entity Ranking", "authors": [ { "first": "Kavita", "middle": [], "last": "Ganesan", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2011, "venue": "Information Retrieval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kavita Ganesan and ChengXiang Zhai. 2011. Opinion- Based Entity Ranking. Information Retrieval.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The pagerank citation ranking: Bringing order to the web", "authors": [ { "first": "Larry", "middle": [], "last": "Page", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Larry Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical Report 1999-66, Stanford InfoLab. Previous number = SIDL-WP-1999- 0120.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Dirichlet Mixtures in Text Modeling", "authors": [ { "first": "Mikio", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "Kugatsu", "middle": [], "last": "Sadamitsu", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikio Yamamoto and Kugatsu Sadamitsu. 2005. Dirich- let Mixtures in Text Modeling. CS Technical report CS-TR-05-1, University of Tsukuba, Japan.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems", "authors": [ { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Page", "suffix": "" } ], "year": 1998, "venue": "", "volume": "30", "issue": "", "pages": "107--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Brin and Larry Page. 1998. The anatomy of a large-scale hypertextual Web search engine. 
Com- puter Networks and ISDN Systems, 30(17):107-117.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Construction of Related Terms Thesauri from the Web", "authors": [ { "first": "Takeshi", "middle": [], "last": "Sakaki", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Matsuo", "suffix": "" } ], "year": 2007, "venue": "Journal of Natural Language Processing", "volume": "14", "issue": "2", "pages": "3--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takeshi Sakaki, Yutaka Matsuo, Koki Uchiyama and Mit- suru Ishizuka 2007. Construction of Related Terms Thesauri from the Web. Journal of Natural Language Processing, 14(2):3-31.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Comparing twitter and traditional media using topic models", "authors": [ { "first": "Jing", "middle": [], "last": "Wayne Xin Zhao", "suffix": "" }, { "first": "Jianshu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Weng", "suffix": "" }, { "first": "Ee-Peng", "middle": [], "last": "He", "suffix": "" }, { "first": "Hongfei", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Xiaoming", "middle": [], "last": "Yan", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2011, "venue": "The annual European Conference on Information Retrieval", "volume": "", "issue": "", "pages": "338--349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wayne Xin Zhao, Jing Jiang, Jianshu Weng, Jing He, Ee-Peng Lim, Hongfei Yan, and Xiaoming Li. 2011. Comparing twitter and traditional media using topic models. The annual European Conference on Infor- mation Retrieval:338-349.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation", "authors": [ { "first": "Yee", "middle": [ "W" ], "last": "Teh", "suffix": "" }, { "first": "David", "middle": [], "last": "Newman", "suffix": "" }, { "first": "Max", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2006, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1353--1360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee W. Teh, David Newman, and Max Welling. 2006. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems: 1353-1360.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Criterion functions for document clustering: Experiments and analysis", "authors": [ { "first": "Ying", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "George", "middle": [], "last": "Karypis", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Zhao and George Karypis. 2001. Criterion func- tions for document clustering: Experiments and analy- sis. 
Technical Report TR 01-40, Department of Com- puter Science, University of Minnesota, Minneapolis, MN.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "KeyGraph: Automatic Indexing by Cooccurrence Graph based on Building Construction Metaphor", "authors": [ { "first": "Yukio", "middle": [], "last": "Ohsawa", "suffix": "" }, { "first": "Nels", "middle": [ "E" ], "last": "Benson", "suffix": "" }, { "first": "Masahiko", "middle": [], "last": "Yachida", "suffix": "" } ], "year": 1998, "venue": "Proceedings of Advanced Digital Library Conference", "volume": "", "issue": "", "pages": "12--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yukio Ohsawa, Nels E. Benson, and Masahiko Yachida. 1998. KeyGraph: Automatic Indexing by Co- occurrence Graph based on Building Construction Metaphor. In Proceedings of Advanced Digital Li- brary Conference: 12-18.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "The relationship between Com(k) and G (k) w appearance probability of word" }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "F 1 score comparison with different numbers of topics. (a) Hotel Reviews. (b) Car Reviews." }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "Modularity and Ave-Dis comparison with different numbers of topics. (a) Hotel Reviews. (b) Car Reviews." }, "TABREF1": { "html": null, "num": null, "text": "Top 5th terms in each topic by PRTM, UM, and LDA. Each term has been translated from Japanese to English using Google translation.", "content": "", "type_str": "table" }, "TABREF2": { "html": null, "num": null, "text": "The appearance probabilities \u03b8 k comparison with K = 6 and K = 12. Sorted in descending order.", "content": "
", "type_str": "table" } } } }