|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:33:27.912103Z" |
|
}, |
|
"title": "Semi-supervised Category-specific Review Tagging on Indonesian E-Commerce Product Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Meng", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [ |
|
"Stephen" |
|
], |
|
"last": "Leo", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "marie.leo@tokopedia.com" |
|
}, |
|
{ |
|
"first": "Eram", |
|
"middle": [], |
|
"last": "Munawwar", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "eram.munawwar@tokopedia.com" |
|
}, |
|
{ |
|
"first": "Albert", |
|
"middle": [], |
|
"last": "Hidayat", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "albert.hidayat@tokopedia.com" |
|
}, |
|
{ |
|
"first": "Danang", |
|
"middle": [], |
|
"last": "Muhamad", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kerianto", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Condylis", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "paul.condylis@tokopedia.com" |
|
}, |
|
{ |
|
"first": "Sheng-Yi", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Product reviews are a huge source of natural language data in e-commerce applications. Several million customers write reviews on a variety of topics. We categorize these topics into two groups: \"category-specific\" topics and \"generic\" topics that span multiple product categories. While we can use a supervised learning approach to tag review text for generic topics, it is impossible to use supervised approaches to tag category-specific topics due to the sheer number of possible topics in each category. In this paper, we present a semi-supervised approach that tags each review with several product category-specific tags on Indonesian-language product reviews. We show that our proposed method works at scale on real product reviews at Tokopedia 1 , a major e-commerce platform in Indonesia. Manual evaluation shows that the proposed method can efficiently generate category-specific product tags.",
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Product reviews are a huge source of natural language data in e-commerce applications. Several million customers write reviews on a variety of topics. We categorize these topics into two groups: \"category-specific\" topics and \"generic\" topics that span multiple product categories. While we can use a supervised learning approach to tag review text for generic topics, it is impossible to use supervised approaches to tag category-specific topics due to the sheer number of possible topics in each category. In this paper, we present a semi-supervised approach that tags each review with several product category-specific tags on Indonesian-language product reviews. We show that our proposed method works at scale on real product reviews at Tokopedia 1 , a major e-commerce platform in Indonesia. Manual evaluation shows that the proposed method can efficiently generate category-specific product tags.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "1 Introduction E-commerce product reviews are a rich source of direct feedback from the customers. Written in free text natural language, product reviews contain a significant amount of information regarding a variety of topics that are important to prospective buyers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Tokopedia (www.tokopedia.com) conducted customer survey research to understand the sources of information that potential buyers assess while making a purchase decision. This internal research shows that around 15% of customers consider product reviews the most important source of information, the third highest among all 20 possible information sources. Internal analysis of the \"click rate\" of various components on the platform's product listing page also shows that components related to product reviews have the second highest click rate, which further emphasises the importance of product reviews for prospective buyers.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Although reviews are an important source of information, manually filtering relevant information is a cumbersome process for a buyer making a purchase decision. Tokopedia has several hundred million customer reviews, generated by millions of users over the years. Therefore, extracting relevant tags for each product, so that prospective buyers can quickly filter the most relevant reviews by topic of interest, is important for enabling quick purchase decisions and improving buyer engagement on the platform.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We categorize topics in reviews into two types. The first type is the generic topics, which exist in reviews of products from any category and concern the generic information that customers care about. On an e-commerce platform, for example, generic topics include \"customer service\", \"delivery\", \"packaging quality\", \"price\", and so on. The second type is the category-specific topics. These topics are detailed descriptions of product-specific attributes. Since different products have different attributes, category-specific topics differ widely across categories. For example, for products in the Phone Case category, a category-specific topic could be \"cable hole\", while for products in the Herbal Medicine category, a category-specific topic could be \"ingredients\". The focus of this paper is to generate tags of category-specific topics for products across different categories.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are several challenges in this work. Firstly, the category-specific topics differ widely among products of different categories. Therefore, it is impossible to obtain labeled data for the supervised methods that are normally used to generate tags. Secondly, we work on informal Indonesian. Although Indonesian shares the same alphabet as English, it differs from English in significant ways, such as sentence structure, prefix and suffix modifiers, and slang spellings. Moreover, since we work on reviews, the text is informal and contains a mixture of Indonesian, English, abbreviations, and slang, which further increases the difficulty.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The focus of this work is to address the above-mentioned challenges. We propose a semi-supervised method and successfully apply it to product reviews from different categories on the e-commerce platform. We also evaluate our results against manually labeled data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The rest of this paper is organized as follows. We describe related work in Section 2. We then describe our approach to extracting category-specific tags from Indonesian-language review text in Section 3. Experiments and results are discussed in Section 4.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While we can use a supervised learning approach to extract generic topics from product reviews, it is impossible to use supervised approaches for \"category-specific\" topics due to the sheer number of possible topics in each product category. Therefore, in this paper we use an unsupervised method to extract topics from product reviews.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "One of the earliest unsupervised methods to extract keywords from text is the statistics-based method: a frequency or Term Frequency-Inverse Document Frequency (TF-IDF) score is calculated on the n-grams of all the reviews, and the n-grams with higher scores are extracted as tags. Graph-based methods (Mihalcea and Tarau, 2004; Altuncu et al., 2019) can also be used to extract keywords, where each token is a vertex and an edge is defined when two tokens appear in the same context window. Both methods, however, fail to group n-grams of similar meaning together.",
|
"cite_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 325, |
|
"text": "(Mihalcea and Tarau, 2004;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 347, |
|
"text": "Altuncu et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
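As a concrete illustration of the statistics-based method above, the following sketch scores review bigrams by TF-IDF and keeps the top-scoring ones as tags. The toy corpus, whitespace tokenizer, and top-k cutoff are illustrative assumptions, not the paper's setup.

```python
import math
from collections import Counter

def bigrams(text):
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + 2]) for i in range(len(tokens) - 1)]

def tfidf_tags(reviews, top_k=3):
    # Term frequency per review, and document frequency across reviews.
    docs = [Counter(bigrams(r)) for r in reviews]
    df = Counter()
    for d in docs:
        df.update(d.keys())
    n = len(reviews)
    # Sum each bigram's tf * idf over the corpus; high-scoring
    # bigrams become candidate tags.
    scores = Counter()
    for d in docs:
        for gram, tf in d.items():
            scores[gram] += tf * math.log(n / df[gram])
    return [g for g, _ in scores.most_common(top_k)]

reviews = [
    "battery life is great",
    "battery life could be better",
    "screen quality is great",
]
print(tfidf_tags(reviews))
```

Note that TF-IDF treats each surface form separately: near-synonymous bigrams get independent scores, which is exactly the failure to group similar n-grams that the paragraph above points out.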
|
{ |
|
"text": "Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and its variants (Yan et al., 2013; Xiong and Guo, 2019) are popular methods for grouping words into topics. However, LDA processes a document as a bag of words, assuming each word is independent of the others, so it loses valuable co-occurrence information. Clustering methods such as k-means and DBSCAN can group similar words based on word embeddings. However, word embeddings are high-dimensional, and clustering fails to work well on them due to the curse of dimensionality.",
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 53, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 72, |
|
"end": 90, |
|
"text": "(Yan et al., 2013;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 91, |
|
"end": 111, |
|
"text": "Xiong and Guo, 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A neural network model was proposed by He et al. (2017) to group phrases into topics. It overcomes the drawbacks of LDA and clustering methods by combining the embedding information with an attention mechanism that attends to the important tokens in a sentence. We use this model in this paper.",
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 55, |
|
"text": "He et al. (2017)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section we describe how the category-specific topics and the product tags are generated. The pipeline is shown in Figure 1.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 127, |
|
"text": "Figure.", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Category-specific Tag Generation Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We extract phrases from each text review using Stanford NLP's dependency parser (Manning et al., 2014) . Among all the extracted dependencies (Nivre et al., 2016) , we choose the three kinds shown in Table 1 . These dependencies involve nouns, as the phrases they extract are more likely to be about the products; dependencies involving verbs, adverbs, and so on are not selected. We further drop phrases containing stop words, using a list derived from the NLTK Indonesian stop word list (https://www.nltk.org/) together with a list manually labeled by an internal product team. We only remove stop words after phrase extraction, since the parser needs the complete sentence as input to extract phrases accurately. ",
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 102, |
|
"text": "(Manning et al., 2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 162, |
|
"text": "(Nivre et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 206, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Phrase Extraction", |
|
"sec_num": "3.1" |
|
}, |
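The selection and stop-word steps above can be sketched as a filter over parsed dependency triples. The relation labels, word order, and the tiny stop-word list here are illustrative assumptions; they are not the paper's exact configuration or Table 1.

```python
# Hypothetical noun-related relation labels to keep (illustrative only).
KEPT_RELATIONS = {"amod", "nmod", "compound"}
# Tiny sample of Indonesian stop words (illustrative only).
STOP_WORDS = {"yang", "ini", "itu"}

def extract_phrases(dependencies):
    """dependencies: iterable of (relation, head_token, dependent_token)
    triples produced by a dependency parser on the full sentence."""
    phrases = []
    for rel, head, dep in dependencies:
        if rel not in KEPT_RELATIONS:
            continue  # skip verb/adverb and other non-noun relations
        phrase = (head, dep)
        # Stop words are removed only after extraction, since parsing
        # needs the complete sentence for accuracy.
        if any(tok in STOP_WORDS for tok in phrase):
            continue
        phrases.append(" ".join(phrase))
    return phrases

deps = [
    ("amod", "produk", "bagus"),   # "good product" -> kept
    ("nsubj", "bagus", "produk"),  # verb-side relation -> skipped
    ("amod", "barang", "yang"),    # contains a stop word -> dropped
]
print(extract_phrases(deps))  # -> ['produk bagus']
```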
|
{ |
|
"text": "A topic is a group of phrases sharing a similar concept; different topics are separate groups of phrases about different concepts. On the phrases from each product category, we apply the Unsupervised Aspect Extraction (UAE) model (He et al., 2017) to extract topics. The UAE model generates topics by first learning K topic embeddings, where the number of topics K is predefined. Phrases within a product category are then grouped to the topic whose embedding is closest. As shown in Figure 2 , the model has three layers: the embedding layer, the attention layer, and the auto-encoder layer. We concatenate the review phrases from one product as the input to the embedding layer. The embedding layer is initialized with word2vec embeddings of dimension d, trained on all the reviews of the category. Since the Stanford NLP dependency parser generates phrases of two tokens, concatenating the embeddings of the two tokens in a phrase gives us a phrase embedding of dimension 2d.",
|
"cite_spans": [ |
|
{ |
|
"start": 247, |
|
"end": 264, |
|
"text": "(He et al., 2017)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 497, |
|
"end": 505, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Topic Generation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The attention layer takes these phrase embeddings and calculates a weighted sum of the phrases, z_s = Σ_{i=1}^{n} a_i e_{w_i}, where e_{w_i} ∈ R^{1×2d} is the embedding of the i-th input phrase, and a_i is the weight computed by the attention layer based on both the relevance of the phrase to the K aspects and its relevance to the whole sentence. The weights are trained with the following formulas.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Generation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "a_i = exp(d_i) / Σ_{j=1}^{n} exp(d_j),   d_i = e_{w_i}^T · M · y_s,   y_s = (1/n) Σ_{i=1}^{n} e_{w_i}",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Generation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In the auto-encoder layer, the encoder compresses z_s to a vector of probabilities p_t with p_t = softmax(W · z_s + b), and the decoder reconstructs a sentence embedding with r_s = T^T · p_t. Here T ∈ R^{K×2d} is the learned aspect embedding matrix, which lies in the same embedding space as the phrase embeddings.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Generation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The loss function of the model is defined as L(θ) = J(θ) + λU(θ), where θ represents the model parameters, J(θ) is proportional to the hinge loss between r_s and z_s, and U(θ) is a regularization term that encourages orthogonality among the rows of the aspect embedding matrix T.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Generation", |
|
"sec_num": "3.2" |
|
}, |
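One forward pass through the three layers just described (phrase embeddings, attention-weighted sum z_s, topic probabilities p_t, reconstruction r_s) can be sketched in NumPy. The dimensions and random matrices below are toy stand-ins for trained parameters, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 5, 8, 3                 # phrases, word dim, topics (toy sizes)
E = rng.normal(size=(n, 2 * d))   # phrase embeddings (two tokens concatenated)
M = rng.normal(size=(2 * d, 2 * d))   # attention bilinear matrix
W = rng.normal(size=(K, 2 * d))       # encoder weights
b = np.zeros(K)                       # encoder bias
T = rng.normal(size=(K, 2 * d))       # aspect (topic) embedding matrix

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention: score each phrase against the sentence average y_s,
# then form the weighted sum z_s of phrase embeddings.
y_s = E.mean(axis=0)
a = softmax(E @ M @ y_s)
z_s = a @ E

# Auto-encoder: compress to topic probabilities, then reconstruct
# a sentence embedding from the aspect matrix.
p_t = softmax(W @ z_s + b)
r_s = T.T @ p_t

assert np.isclose(a.sum(), 1.0) and np.isclose(p_t.sum(), 1.0)
assert r_s.shape == z_s.shape == (2 * d,)
```

Training would then push r_s toward z_s under the hinge loss with the orthogonality regularizer described above; this sketch only shows the inference-time data flow.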
|
{ |
|
"text": "Category-specific topics are unique to each product category and not generic. To sift out the general topics from all the generated topics, we use a supervised method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category-specific Topic Filtering", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As the generic topics are similar across all product categories, we built a general word list containing the frequent words found in general phrases. Examples from the general word list are berfungsi, semoga, bonus, sis, kwalitas, oke, super, boss (in English: function, hopefully, bonus, sis, quality, okay, super, boss).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category-specific Topic Filtering", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "A phrase is considered a general phrase if both words in the phrase are in the general word list. If more than a certain fraction η of all the phrases in one topic are general phrases, the topic is considered a general topic; otherwise it is a generated category-specific topic, which is used in the next step.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category-specific Topic Filtering", |
|
"sec_num": "3.3" |
|
}, |
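The filtering rule above can be sketched directly; the sample word list, phrases, and default η below are illustrative, not the production configuration.

```python
# Sample entries from a general word list (illustrative only).
GENERAL_WORDS = {"bagus", "oke", "super", "kwalitas"}

def is_general_phrase(phrase):
    """A phrase is general iff BOTH of its words are on the list."""
    w1, w2 = phrase.split()
    return w1 in GENERAL_WORDS and w2 in GENERAL_WORDS

def is_general_topic(phrases, eta=0.4):
    """A topic is general iff more than a fraction eta of its
    phrases are general phrases."""
    general = sum(is_general_phrase(p) for p in phrases)
    return general / len(phrases) > eta

topic = ["bagus oke", "kwalitas super", "lubang kabel", "lubang charger"]
print(is_general_topic(topic))  # 2/4 = 0.5 > 0.4 -> True
```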
|
{ |
|
"text": "After supervised filtering, manual labeling is applied to each phrase in the generated category-specific topics. Since we have already applied topic extraction and supervised filtering, the number of phrases to be manually labeled is reduced dramatically. We label each phrase as generic, incoherent, or category-specific. Generic phrases concern general aspects, including delivery, fit to description, packaging quality, customer service, and price. General descriptions of product quality are also generic phrases, since they can describe products from most other categories as well, such as produk bagus (good product). Incoherent phrases are those that are not about the same concept as the majority of the other phrases in the same topic. Category-specific phrases are about category-specific aspects of the category and are coherent with the majority of phrases in the same topic.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category-specific Topic Filtering", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The category-specific phrases in each topic are used for tag generation, as described in Section 3.4, and the frequent words in the generic phrases are added to the general word list for use in the supervised filtering of future topics.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category-specific Topic Filtering", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "With the filtered category-specific topics, we generate the category-specific tags.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category-specific Tag Generation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "For each product, we group the review phrases into the corresponding topics as discussed in Sections 3.1 and 3.2, and use the supervised method of Section 3.3 to filter category-specific topics from all the generated topics. Then, we rank the phrases in each topic by their frequency in the reviews of the product and choose the highest-ranked phrase as the tag of that topic for the product. The results are uploaded to a data warehouse.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category-specific Tag Generation", |
|
"sec_num": "3.4" |
|
}, |
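The frequency-ranking step for one product can be sketched as follows; the phrase-to-topic mapping and example phrases are made up for illustration.

```python
from collections import Counter

def generate_tags(review_phrases, topic_of):
    """review_phrases: phrases extracted from one product's reviews;
    topic_of: maps phrase -> id of a kept category-specific topic."""
    counts = Counter(review_phrases)
    best = {}
    for phrase, freq in counts.items():
        topic = topic_of.get(phrase)
        if topic is None:
            continue  # phrase belongs to no kept topic
        # Keep the most frequent phrase per topic as that topic's tag.
        if topic not in best or freq > counts[best[topic]]:
            best[topic] = phrase
    return best

phrases = ["lubang kabel", "lubang kabel", "lubang charger", "bahan tebal"]
topic_of = {"lubang kabel": 0, "lubang charger": 0, "bahan tebal": 1}
print(generate_tags(phrases, topic_of))
# -> {0: 'lubang kabel', 1: 'bahan tebal'}
```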
|
{ |
|
"text": "In this section, we apply our proposed method to product reviews from Tokopedia. We present the experimental results and the evaluation of the generated category-specific topics.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use reviews from 89.5 million products across 18 product categories as the dataset. The average number of reviews in each category and the average string length of the reviews are shown in Table 3 (columns \"#reviews\" and \"average length\"). ",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 195, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "After phrase extraction, we applied the UAE model for topic extraction. We performed the same preprocessing as He et al. (2017) and used word2vec to train the word embeddings with dimension d = 200. We modified the model structure to accept phrase input as described in Section 3.2, and used the same parameter settings as He et al. (2017) . We apply our method to each category separately, setting the number of topics to K = 14 for topic generation. Then, we apply the category-specific filter on the extracted topics for all categories with η = 40%. The general word list we used contains 127 words. The average time to obtain generated category-specific topics from extracted phrases is 2 hours per category with around 0.5M reviews. On average, we generate 5 category-specific topics per category. We show the number of generated category-specific topics for each category in Table 3 (column: \"#topics\"), and some of these generated category-specific topics in Table 4 .",
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 129, |
|
"text": "He et al. (2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 346, |
|
"text": "He et al. (2017)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 885, |
|
"end": 892, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 974, |
|
"end": 981, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Result", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The most essential part of this work is the automatic generation of category-specific topics. In this section, we show the evaluation results for the quality of the category-specific topic generation. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "An internal product team labeled the results from supervised filtering, labeling each phrase as category-specific, general, or incoherent as described in Section 3.3. On average, it took one person 3 minutes to label all the phrases of one topic. We apply the evaluation metrics used in He et al. (2017) and Chen et al. (2014) . Following their setting, we compute precision@n (p@n) for each generated category-specific topic as the number of category-specific phrases among the top n phrases. We show the average p@100 for sample categories in Table 3 (column: \"average p@100\"). From the results, we can see that the majority of the phrases in the generated topics are category-specific in meaning. We define any topic with p@n > 60 as a category-specific topic, and we define the topic rate as topic rate = #category-specific topics / #generated category-specific topics",
|
"cite_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 304, |
|
"text": "He et al. (2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 327, |
|
"text": "Chen et al. (2014)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 553, |
|
"end": 560, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metric", |
|
"sec_num": "4.3.1" |
|
}, |
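The two metrics can be sketched as follows; the per-phrase labels below are fabricated for illustration and do not come from the paper's evaluation data.

```python
def precision_at_n(labels, n):
    """labels: per-phrase labels for one topic, ordered by rank.
    p@n counts the category-specific phrases among the top n."""
    return sum(label == "category-specific" for label in labels[:n])

def topic_rate(topics_labels, n=100, threshold=60):
    """Share of generated topics whose p@n exceeds the threshold."""
    kept = sum(precision_at_n(labels, n) > threshold
               for labels in topics_labels)
    return kept / len(topics_labels)

# Two fabricated topics of 100 ranked phrases each.
topic_a = ["category-specific"] * 80 + ["general"] * 20
topic_b = ["category-specific"] * 40 + ["incoherent"] * 60
print(precision_at_n(topic_a, 100))        # -> 80
print(topic_rate([topic_a, topic_b]))      # only topic_a passes -> 0.5
```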
|
{ |
|
"text": "We show the topic rate for selected categories in Table 3 (column: \"topic rate\"). More than half of the generated category-specific topics are selected after manual filtering; thus, human labeling is very efficient on the automatically generated category-specific topics.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 55, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metric", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "In this paper, we described a pipeline for category-specific review tagging using phrase extraction, topic generation, category-specific topic filtering, and tag generation. Given the product reviews, the pipeline generates the category-specific tags for each product, and customers can filter product reviews with these tags. The pipeline has been implemented on product reviews at Tokopedia and proved successful when scaled to a large number of reviews. We also evaluated the quality of the generated category-specific topics with manual labeling, and the results show that the pipeline generates coherent category-specific topics.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Extracting information from free text through unsupervised graph-based clustering: an application to patient incident records", |
|
"authors": [ |
|
{ |
|
"first": "Eloise", |
|
"middle": [], |
|
"last": "M Tarik Altuncu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Sorin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Symons", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mayer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Sophia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francesca", |
|
"middle": [], |
|
"last": "Yaliraki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mauricio", |
|
"middle": [], |
|
"last": "Toni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Barahona", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.00183" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M Tarik Altuncu, Eloise Sorin, Joshua D Symons, Erik Mayer, Sophia N Yaliraki, Francesca Toni, and Mauricio Barahona. 2019. Extracting information from free text through unsupervised graph-based clustering: an application to patient incident records. arXiv preprint arXiv:1909.00183.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Ma- chine Learning Research, 3:993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Aspect extraction with automated prior knowledge learning", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arjun", |
|
"middle": [], |
|
"last": "Mukherjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "347--358", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyuan Chen, Arjun Mukherjee, and Bing Liu. 2014. Aspect extraction with automated prior knowledge learning. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 347-358, Balti- more, Maryland.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An unsupervised neural attention model for aspect extraction", |
|
"authors": [ |
|
{ |
|
"first": "Ruidan", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Wee Sun Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dahlmeier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "388--397", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics, pages 388-397, Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mc-Closky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Association for Computational Linguistics (ACL) System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Textrank: Bringing order into text", |
|
"authors": [ |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Tarau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "404--411", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In Proceedings of the 2004 Con- ference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Universal dependencies v1: A multilingual treebank collection", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Hajic", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Christopher", |

"middle": [ |

"D" |

], |

"last": "Manning", |

"suffix": "" |

}, |

{ |

"first": "Ryan", |

"middle": [], |

"last": "McDonald", |

"suffix": "" |

}, |

{ |

"first": "Slav", |

"middle": [], |

"last": "Petrov", |

"suffix": "" |

}, |

{ |

"first": "Sampo", |

"middle": [], |

"last": "Pyysalo", |

"suffix": "" |

}, |

{ |

"first": "Natalia", |

"middle": [], |

"last": "Silveira", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC'16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1659--1666", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Chinese news keyword extraction algorithm based on textrank and topic model", |
|
"authors": [ |
|
{ |
|
"first": "Ao", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qing", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Artificial Intelligence for Communications and Networks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "334--341", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ao Xiong and Qing Guo. 2019. Chinese news keyword extraction algorithm based on textrank and topic model. In International Conference on Artificial Intelligence for Communications and Networks, pages 334-341. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A biterm topic model for short texts", |
|
"authors": [ |
|
{ |
|
"first": "Xiaohui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiafeng", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanyan", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xueqi", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd International Conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1445--1456", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. A biterm topic model for short texts. In Proceedings of the 22nd International Conference on World Wide Web, pages 1445-1456, Rio de Janeiro, Brazil.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Category-specific topic extraction and product tagging pipelines", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "UAE Model Structure", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>UDP</td><td>meaning</td><td>example</td></tr><tr><td>amod</td><td>adjectival modifier</td><td/></tr><tr><td>nsubj</td><td>nominal subject</td><td/></tr><tr><td>compound</td><td>compound</td><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |

"type_str": "table", |

"text": "Universal Dependency Relations (UDP) chosen to extract phrases from product reviews. (https://universaldependencies.org/u/dep/) (We show examples in English.)", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Product review statistics and evaluation results for 4 sample categories.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>Category</td><td></td><td>Topic Example</td></tr><tr><td>Handphone Charger</td><td>Id</td><td>android hp, sony hp, lenovo hp, mini ipad</td></tr><tr><td></td><td>En</td><td>android hp, sony hp, lenovo hp, mini ipad</td></tr><tr><td>Men Sneakers</td><td>Id</td><td>sesuai model, sesuai size, sesuai bentuk</td></tr><tr><td></td><td>En</td><td>fit models, fit sizes, fit shapes</td></tr><tr><td>Men Analogue Clock</td><td>Id</td><td>automatic jam, pria jam, jutaan jam</td></tr><tr><td></td><td>En</td><td>automatic clocks, men clocks, millions of hours</td></tr><tr><td>Plant Seeds</td><td>Id</td><td>semi tumbuh, bismillah tumbuh, daya tumbuh</td></tr><tr><td></td><td>En</td><td>spring grows, bismillah grows, power grows</td></tr></table>", |

"type_str": "table", |

"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Example of generated category-specific topics in Indonesian language (Id) for four selected categories and their English (En) translations.", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |