{ "paper_id": "P14-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:07:34.353216Z" }, "title": "Product Feature Mining: Semantic Clues versus Syntactic Constituents", "authors": [ { "first": "Liheng", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "lhxu@nlpr.ia.ac.cn" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "kliu@nlpr.ia.ac.cn" }, { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "swlai@nlpr.ia.ac.cn" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "jzhao@nlpr.ia.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Product feature mining is a key subtask in fine-grained opinion mining. Previous works often use syntax constituents in this task. However, syntax-based methods can only use discrete contextual information, which may suffer from data sparsity. This paper proposes a novel product feature mining method which leverages lexical and contextual semantic clues. Lexical semantic clue verifies whether a candidate term is related to the target product, and contextual semantic clue serves as a soft pattern miner to find candidates, which exploits semantics of each word in context so as to alleviate the data sparsity problem. We build a semantic similarity graph to encode lexical semantic clue, and employ a convolutional neural model to capture contextual semantic clue. Then Label Propagation is applied to combine both semantic clues. Experimental results show that our semantics-based method significantly outperforms conventional syntaxbased approaches, which not only mines product features more accurately, but also extracts more infrequent product features.", "pdf_parse": { "paper_id": "P14-1032", "_pdf_hash": "", "abstract": [ { "text": "Product feature mining is a key subtask in fine-grained opinion mining. Previous works often use syntax constituents in this task. However, syntax-based methods can only use discrete contextual information, which may suffer from data sparsity. This paper proposes a novel product feature mining method which leverages lexical and contextual semantic clues. Lexical semantic clue verifies whether a candidate term is related to the target product, and contextual semantic clue serves as a soft pattern miner to find candidates, which exploits semantics of each word in context so as to alleviate the data sparsity problem. We build a semantic similarity graph to encode lexical semantic clue, and employ a convolutional neural model to capture contextual semantic clue. Then Label Propagation is applied to combine both semantic clues. 
Experimental results show that our semantics-based method significantly outperforms conventional syntaxbased approaches, which not only mines product features more accurately, but also extracts more infrequent product features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, opinion mining has helped customers a lot to make informed purchase decisions. However, with the rapid growth of e-commerce, customers are no longer satisfied with the overall opinion ratings provided by traditional sentiment analysis systems. The detailed functions or attributes of products, which are called product features, receive more attention. Nevertheless, a product may have thousands of features, which makes it impractical for a customer to investigate them all. Therefore, mining product features automatically from online reviews is shown to be a key step for opinion summarization (Hu and Liu, 2004; Qiu et al., 2009) and fine-grained sentiment analysis (Jiang et al., 2011; Li et al., 2012) .", "cite_spans": [ { "start": 614, "end": 632, "text": "(Hu and Liu, 2004;", "ref_id": "BIBREF7" }, { "start": 633, "end": 650, "text": "Qiu et al., 2009)", "ref_id": "BIBREF18" }, { "start": 687, "end": 707, "text": "(Jiang et al., 2011;", "ref_id": "BIBREF9" }, { "start": 708, "end": 724, "text": "Li et al., 2012)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous works often mine product features via syntactic constituent matching (Popescu and Etzioni, 2005; Qiu et al., 2009; Zhang et al., 2010) . The basic idea is that reviewers tend to comment on product features in similar syntactic structures. Therefore, it is natural to mine product features by using syntactic patterns. For example, in Figure 1 , the upper box shows a dependency tree produced by Stanford Parser (de Marneffe et al., 2006) , and the lower box shows a common syntactic pattern from (Zhang et al., 2010) , where is a wildcard to be fit in reviews and NN denotes the required POS tag of the wildcard. Usually, the product name mp3 is specified, and when screen matches the wildcard, it is likely to be a product feature of mp3. Figure 1 : An example of syntax-based product feature mining procedure. The word screen matches the wildcard . Therefore, screen is likely to be a product feature of mp3.", "cite_spans": [ { "start": 78, "end": 105, "text": "(Popescu and Etzioni, 2005;", "ref_id": "BIBREF17" }, { "start": 106, "end": 123, "text": "Qiu et al., 2009;", "ref_id": "BIBREF18" }, { "start": 124, "end": 143, "text": "Zhang et al., 2010)", "ref_id": "BIBREF26" }, { "start": 420, "end": 446, "text": "(de Marneffe et al., 2006)", "ref_id": "BIBREF3" }, { "start": 505, "end": 525, "text": "(Zhang et al., 2010)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 343, "end": 351, "text": "Figure 1", "ref_id": null }, { "start": 762, "end": 770, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Generally, such syntactic patterns extract product features well but they still have some limitations. For example, the product-have-feature pattern may fail to find the fm tuner in a very similar case in Example 1(a), where the product is mentioned by using player instead of mp3. Similarly, it may also fail on Example 1(b), just with have replaced by support. 
In essence, syntactic pattern is a kind of one-hot representation for encoding the contexts, which can only use partial and discrete features, such as some key words (e.g., have) or shallow information (e.g., POS tags). Therefore, such a representation often suffers from the data sparsity problem (Turian et al., 2010) .", "cite_spans": [ { "start": 661, "end": 682, "text": "(Turian et al., 2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One possible solution for this problem is using a more general pattern such as NP-VB-feature, where NP represents a noun or noun phrase and VB stands for any verb. However, this pattern becomes too general that it may find many irrelevant cases such as the one in Example 1(c), which is not talking about the product. Consequently, it is very difficult for a pattern designer to balance between precision and generalization. To solve the problems stated above, it is argued that deeper semantics of contexts shall be exploited. For example, we can try to automatically discover that the verb have indicates a part-whole relation (Zhang et al., 2010) and support indicates a product-function relation, so that both sth. have and sth. support suggest that terms following them are product features, where sth. can be replaced by any terms that refer to the target product (e.g., mp3, player, etc.). This is called contextual semantic clue. Nevertheless, only using contexts is not sufficient enough. As in Example 1(d), we can see that the word flaws follows mp3 have, but it is not a product feature. Thus, a noise term may be extracted even with high contextual support. Therefore, we shall also verify whether a candidate is really related to the target product. We call it lexical semantic clue.", "cite_spans": [ { "start": 629, "end": 649, "text": "(Zhang et al., 2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper proposes a novel bootstrapping approach for product feature mining, which leverages both semantic clues discussed above. Firstly, some reliable product feature seeds are automatically extracted. Then, based on the assumption that terms that are more semantically similar to the seeds are more likely to be product features, a graph which measures semantic similarities between terms is built to capture lexical semantic clue. At the same time, a semi-supervised convolutional neural model (Collobert et al., 2011) is employed to encode contextual semantic clue. Finally, the two kinds of semantic clues are com-bined by a Label Propagation algorithm.", "cite_spans": [ { "start": 500, "end": 524, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the proposed method, words are represented by continuous vectors, which capture latent semantic factors of the words (Turian et al., 2010) . The vectors can be unsupervisedly trained on large scale corpora, and words with similar semantics will have similar vectors. This enables our method to be less sensitive to lexicon change, so that the data sparsity problem can be alleviated . 
The contributions of this paper include:", "cite_spans": [ { "start": 120, "end": 141, "text": "(Turian et al., 2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 It uses semantics of words to encode contextual clues, which exploits deeper level information than syntactic constituents. As a result, it mines product features more accurately than syntaxbased methods. \u2022 It exploits semantic similarity between words to capture lexical clues, which is shown to be more effective than co-occurrence relation between words and syntactic patterns. In addition, experiments show that the semantic similarity has the advantage of mining infrequent product features, which is crucial for this task. For example, one may say \"This hotel has low water pressure\", where low water pressure is seldom mentioned, but fatal to someone's taste. \u2022 We compare the proposed semantics-based approach with three state-of-the-art syntax-based methods. Experiments show that our method achieves significantly better results. The rest of this paper is organized as follows. Section 2 introduces related work. Section 3 describes the proposed method in details. Section 4 gives the experimental results. Lastly, we conclude this paper in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In product feature mining task, Hu and Liu (2004) proposed a pioneer research. However, the association rules they used may potentially introduce many noise terms. Based on the observation that product features are often commented on by similar syntactic structures, it is natural to use patterns to capture common syntactic constituents around product features. Popescu and Etzioni (2005) designed some syntactic patterns to search for product feature candidates and then used Pointwise Mutual Information (PMI) to remove noise terms. Qiu et al. (2009) proposed eight heuristic syntactic rules to jointly extract product features and sentiment lexicons, where a bootstrapping algorithm named Double Propagation was applied to expand a given seed set. Zhang et al. (2010) improved Qiu's work by adding more feasible syntactic patterns, and the HITS algorithm (Kleinberg, 1999) was employed to rank candidates. Moghaddam and Ester (2010) extracted product features by automatical opinion pattern mining. Zhuang et al. (2006) used various syntactic templates from an annotated movie corpus and applied them to supervised movie feature extraction. Wu et al. (2009) proposed a phrase level dependency parsing for mining aspects and features of products.", "cite_spans": [ { "start": 32, "end": 49, "text": "Hu and Liu (2004)", "ref_id": "BIBREF7" }, { "start": 363, "end": 389, "text": "Popescu and Etzioni (2005)", "ref_id": "BIBREF17" }, { "start": 536, "end": 553, "text": "Qiu et al. (2009)", "ref_id": "BIBREF18" }, { "start": 752, "end": 771, "text": "Zhang et al. (2010)", "ref_id": "BIBREF26" }, { "start": 859, "end": 876, "text": "(Kleinberg, 1999)", "ref_id": "BIBREF11" }, { "start": 910, "end": 936, "text": "Moghaddam and Ester (2010)", "ref_id": "BIBREF15" }, { "start": 1003, "end": 1023, "text": "Zhuang et al. (2006)", "ref_id": "BIBREF28" }, { "start": 1145, "end": 1161, "text": "Wu et al. (2009)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As discussed in the first section, syntactic patterns often suffer from data sparsity. 
Furthermore, most pattern-based methods rely on term frequency, which have the limitation of finding infrequent but important product features. A recent research (Xu et al., 2013) extracted infrequent product features by a semi-supervised classifier, which used word-syntactic pattern co-occurrence statistics as features for the classifier. However, this kind of feature is still sparse for infrequent candidates. Our method adopts a semantic word representation model, which can train dense features unsupervisedly on a very large corpus. Thus, the data sparsity problem can be alleviated.", "cite_spans": [ { "start": 249, "end": 266, "text": "(Xu et al., 2013)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We propose a semantics-based bootstrapping method for product feature mining. Firstly, some product feature seeds are automatically extracted. Then, a semantic similarity graph is created to capture lexical semantic clue, and a Convolutional Neural Network (CNN) (Collobert et al., 2011) is trained in each bootstrapping iteration to encode contextual semantic clue. Finally we use Label Propagation to find some reliable new seeds for the training of the next bootstrapping iteration.", "cite_spans": [ { "start": 263, "end": 287, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "The Proposed Method", "sec_num": "3" }, { "text": "The seed set consists of positive labeled examples (i.e. product features) and negative labeled examples (i.e. noise terms). Intuitively, popular product features are frequently mentioned in reviews, so they can be extracted by simply mining frequently occurring nouns (Hu and Liu, 2004) . However, this strategy will also find many noise terms (e.g., commonly used nouns like thing, one, etc.). To produce high quality seeds, we employ a Domain Relevance Measure (DRM) (Jiang and Tan, 2010) , which combines term frequency with a domain-specific measuring metric called Likelihood Ratio Test (LRT) (Dunning, 1993) . Let \u03bb(t) denotes the LRT score of a product feature candidate t,", "cite_spans": [ { "start": 269, "end": 287, "text": "(Hu and Liu, 2004)", "ref_id": "BIBREF7" }, { "start": 470, "end": 491, "text": "(Jiang and Tan, 2010)", "ref_id": "BIBREF8" }, { "start": 599, "end": 614, "text": "(Dunning, 1993)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Seed Generation", "sec_num": "3.1" }, { "text": "\u03bb(t) = p k 1 (1 \u2212 p) n 1 \u2212k 1 p k 2 (1 \u2212 p) n 2 \u2212k 2 p k 1 1 (1 \u2212 p 1 ) n 1 \u2212k 1 p k 2 2 (1 \u2212 p 2 ) n 2 \u2212k 2 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Seed Generation", "sec_num": "3.1" }, { "text": "where k 1 and k 2 are the frequencies of t in the review corpus R and a background corpus 1 B, n 1 and n 2 are the total number of terms in R and B, p = (k 1 + k 2 )/(n 1 + n 2 ), p 1 = k 1 /n 1 and p 2 = k 2 /n 2 . 
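As a purely illustrative sketch (not the authors' code), log λ(t) can be computed directly from these four counts; the counts below are invented toy values, and the magnitude |log λ(t)| is what the domain relevance measure introduced next builds on.

```python
import math

def log_lrt(k1, n1, k2, n2):
    """Log of the likelihood ratio lambda(t) in Equation 1."""
    p = (k1 + k2) / (n1 + n2)
    p1, p2 = k1 / n1, k2 / n2

    def log_binom(k, n, q):
        # log[q^k * (1 - q)^(n - k)]; assumes 0 < q < 1, which holds for the toy counts below
        return k * math.log(q) + (n - k) * math.log(1.0 - q)

    # numerator of Eq. 1 uses the pooled p, denominator the corpus-specific p1 and p2
    return (log_binom(k1, n1, p) + log_binom(k2, n2, p)
            - log_binom(k1, n1, p1) - log_binom(k2, n2, p2))

# toy counts: a term seen 120 times in a 50k-token review corpus R,
# and 300 times in a 10M-token background corpus B
print(abs(log_lrt(120, 50_000, 300, 10_000_000)))  # |log lambda(t)|: large => domain-specific
```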
Then a modified DRM 2 is proposed,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Seed Generation", "sec_num": "3.1" }, { "text": "DRM (t) = tf (t) max[tf (\u2022)] \u00d7 1 log df (t) \u00d7 | log \u03bb(t)| \u2212 min| log \u03bb(\u2022)| max| log \u03bb(\u2022)| \u2212 min| log \u03bb(\u2022)| (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Seed Generation", "sec_num": "3.1" }, { "text": "where tf (t) is the frequency of t in R and df (t) is the frequency of t in B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Seed Generation", "sec_num": "3.1" }, { "text": "All nouns in R are ranked by DRM (t) in descent order, where top N nouns are taken as the positive example set V + s . On the other hand, Xu et al. 2013show that a set of general nouns seldom appear to be product features. Therefore, we employ their General Noun Corpus to create the negative example set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Seed Generation", "sec_num": "3.1" }, { "text": "V \u2212 s , where N most frequent terms are selected. Besides, it is guaranteed that V + s \u2229 V \u2212 s = \u2205, i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Seed Generation", "sec_num": "3.1" }, { "text": "e., conflicting terms are taken as negative examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Seed Generation", "sec_num": "3.1" }, { "text": "To capture lexical semantic clue, each word is first converted into word embedding, which is a continuous vector with each dimension's value corresponds to a semantic or grammatical interpretation (Turian et al., 2010) . Learning large-scale word embeddings is very time-consuming (Collobert et al., 2011) , we thus employ a faster method named Skip-gram model (Mikolov et al., 2013) .", "cite_spans": [ { "start": 197, "end": 218, "text": "(Turian et al., 2010)", "ref_id": "BIBREF21" }, { "start": 281, "end": 305, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF2" }, { "start": 361, "end": 383, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Capturing Lexical Semantic Clue in a Semantic Similarity Graph", "sec_num": "3.2" }, { "text": "Semantic Representation Given a sequence of training words W = {w 1 , w 2 , ..., w m }, the goal of the Skip-gram model is to learn a continuous vector space EB = {e 1 , e 2 , ..., e m }, where e i is the word embedding of w i . The training objective is to maximize the average log probability of using word w t to predict a surrounding word w t+j ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Word Embedding for", "sec_num": "3.2.1" }, { "text": "EB = argmax et\u2208EB 1 m m t=1 \u2212c\u2264j\u2264c,j =0 log p(w t+j |w t ; e t ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Word Embedding for", "sec_num": "3.2.1" }, { "text": "where c is the size of the training window. Basically, p(w t+j |w t ; e t ) is defined as, p(w t+j |w t ; e t ) =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Word Embedding for", "sec_num": "3.2.1" }, { "text": "exp(e T t+j e t ) m w=1 exp(e T w e t )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Word Embedding for", "sec_num": "3.2.1" }, { "text": "where e i is an additional training vector associated with e i . 
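Purely to make the notation concrete, the toy NumPy sketch below (random, untrained vectors) evaluates the full-softmax probability of Equation 4; note that every call sums over the whole vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, dim = 1000, 100                  # toy vocabulary size and embedding dimension
EB = rng.normal(size=(m, dim))      # word embeddings e_i
EB_out = rng.normal(size=(m, dim))  # the additional "output" vectors e'_i

def skipgram_prob(context_id, center_id):
    """p(w_{t+j} | w_t) under the full softmax of Equation 4."""
    scores = EB_out @ EB[center_id]   # e'_w . e_t for every word w in the vocabulary
    scores -= scores.max()            # shift for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[context_id]

print(skipgram_prob(context_id=42, center_id=7))
```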
This basic formulation is impractical because it is proportional to m. A hierarchical softmax approximation can be applied to reduce the computational cost to log 2 (m), see (Morin and Bengio, 2005) for details.", "cite_spans": [ { "start": 239, "end": 263, "text": "(Morin and Bengio, 2005)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Word Embedding for", "sec_num": "3.2.1" }, { "text": "To alleviate the data sparsity problem, EB is first trained on a very large corpus 3 (denoted by C), and then fine-tuned on the target review corpus R. Particularly, for phrasal product features, a statistic-based method in (Zhu et al., 2009 ) is used to detect noun phrases in R. Then, an Unfolding Recursive Autoencoder (Socher et al., 2011) is trained on C to obtain embedding vectors for noun phrases. In this way, semantics of infrequent terms in R can be well captured. Finally, the phrasebased Skip-gram model in (Mikolov et al., 2013) is applied on R.", "cite_spans": [ { "start": 224, "end": 241, "text": "(Zhu et al., 2009", "ref_id": "BIBREF27" }, { "start": 322, "end": 343, "text": "(Socher et al., 2011)", "ref_id": "BIBREF19" }, { "start": 520, "end": 542, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Word Embedding for", "sec_num": "3.2.1" }, { "text": "Lexical semantic clue is captured by measuring semantic similarity between terms. The underlying motivation is that if we have known some product feature seeds, then terms that are more semantically similar to these seeds are more likely to be product features. For example, if screen is known to be a product feature of mp3, and lcd is of high semantic similarity with screen, we can infer that lcd is also a product feature. Analogously, terms that are semantically similar to negative labeled seeds are not product features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Semantic Similarity Graph", "sec_num": "3.2.2" }, { "text": "Word embedding naturally meets the demand above: words that are more semantically similar to each other are located closer in the embedding space (Collobert et al., 2011) . Therefore, we can use cosine distance between two embedding vectors as the semantic distance measuring metric. Thus, our method does not rely on term frequency 3 Wikipedia(http://www.wikipedia.org) is used in practice.", "cite_spans": [ { "start": 146, "end": 170, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Building the Semantic Similarity Graph", "sec_num": "3.2.2" }, { "text": "to rank candidates. 
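As a small illustration (toy vectors, hypothetical terms), the weight placed between two candidates is simply the cosine of their embedding vectors, with no counting over the review corpus involved:

```python
import numpy as np

def edge_weight(u, v):
    """Cosine similarity cos(EB_u, EB_v) between two term embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy stand-ins for the learned vectors of the terms "screen" and "lcd"
screen = np.array([0.8, 0.1, 0.3])
lcd = np.array([0.7, 0.2, 0.2])
print(edge_weight(screen, lcd))  # close to 1, so "lcd" inherits evidence from the seed "screen"
```

A term that appears only once in R is scored in exactly the same way as a frequent one.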
This could potentially improve the ability of mining infrequent product features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Semantic Similarity Graph", "sec_num": "3.2.2" }, { "text": "Formally, we create a semantic similarity graph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Semantic Similarity Graph", "sec_num": "3.2.2" }, { "text": "G = (V, E, W ), where V = {V s \u222a V c }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Semantic Similarity Graph", "sec_num": "3.2.2" }, { "text": "is the vertex set, which contains the labeled seed set V s and the unlabeled candidate set V c ; E is the edge set which connects every vertex pair (u, v) , where u, v \u2208 V ; W = {w uv : cos(EB u , EB v )} is a function which associates a weight to each edge.", "cite_spans": [ { "start": 148, "end": 154, "text": "(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Building the Semantic Similarity Graph", "sec_num": "3.2.2" }, { "text": "The CNN is trained on each occurrence of seeds that is found in review texts. Then for a candidate term t, the CNN classifies all of its occurrences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding Contextual Semantic Clue Using Convolutional Neural Network", "sec_num": "3.3" }, { "text": "Since seed terms tend to have high frequency in review texts, only a few seeds will be enough to provide plenty of occurrences for the training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding Contextual Semantic Clue Using Convolutional Neural Network", "sec_num": "3.3" }, { "text": "The architecture of the Convolutional Neural Network is shown in Figure 2 . For a product feature candidate t in sentence s, every consecutive subsequence q i of s that containing t with a window of length l is fed to the CNN. For example, as in Figure 2 , if t = {screen}, and l = 3, there are three inputs:", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 73, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 246, "end": 254, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The architecture of the Convolutional Neural Network", "sec_num": "3.3.1" }, { "text": "q 1 = [the, ipod, screen], q 2 = [ipod, screen, is], q 3 = [screen, is, impressive].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The architecture of the Convolutional Neural Network", "sec_num": "3.3.1" }, { "text": "Partially, t is replaced by a token \"*PF*\" to remove its lexicon influence 4 . To get the output score, q i is first converted into a concatenated vector x i = [e 1 ; e 2 ; ...; e l ], where e j is the word embedding of the j-th word. In this way, the CNN serves as a soft pattern miner: since words that have similar semantics have similar low-dimension embedding vectors, the CNN is less sensitive to lexicon change. 
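The sketch below illustrates this input construction; the helper name and toy vectors are ours and only approximate the preprocessing described above:

```python
import numpy as np

def build_windows(tokens, cand_idx, l, emb):
    """All l-token windows of a sentence that contain the candidate term.

    The candidate is replaced by the token *PF* to remove its lexical influence;
    `emb` maps each token to its embedding vector. Assumes len(tokens) >= l.
    """
    tokens = tokens[:cand_idx] + ["*PF*"] + tokens[cand_idx + 1:]
    xs = []
    for start in range(max(0, cand_idx - l + 1), min(cand_idx, len(tokens) - l) + 1):
        window = tokens[start:start + l]
        xs.append(np.concatenate([emb[w] for w in window]))  # x_i = [e_1; e_2; ...; e_l]
    return xs

# toy usage: t = "screen" in "the ipod screen is impressive", l = 3
emb = {w: np.random.rand(4) for w in ["the", "ipod", "*PF*", "is", "impressive"]}
xs = build_windows(["the", "ipod", "screen", "is", "impressive"], cand_idx=2, l=3, emb=emb)
print(len(xs))  # 3 windows, matching q_1, q_2, q_3 in the running example
```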
The network is computed by,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The architecture of the Convolutional Neural Network", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y (1) i = tanh(W (1) x i + b (1) ) (5) y (2) = max(y (1) i )", "eq_num": "(6)" } ], "section": "The architecture of the Convolutional Neural Network", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y (3) = W (3) y (2) + b (3)", "eq_num": "(7)" } ], "section": "The architecture of the Convolutional Neural Network", "sec_num": "3.3.1" }, { "text": "where y (i) is the output score of the i-th layer, and b (i) is the bias of the i-th layer; W (1) \u2208 R h\u00d7 (nl) and W (3) \u2208 R 2\u00d7h are parameter matrixes, where n is the dimension of word embedding, and h is the size of nodes in the hidden layer.", "cite_spans": [ { "start": 105, "end": 109, "text": "(nl)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The architecture of the Convolutional Neural Network", "sec_num": "3.3.1" }, { "text": "In conventional neural models, the candidate term t is placed in the center of the window. However, from Example 2, when l = 5, we can see that the best windows should be the bracketed texts (Because, intuitively, the windows should contain mp3, which is a strong evidence for finding the product feature), where t = {screen} is at the boundary. Therefore, we use Equ. 6 to formulate a max-convolutional layer, which is aimed to enable the CNN to find more evidences in contexts than conventional neural models. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The architecture of the Convolutional Neural Network", "sec_num": "3.3.1" }, { "text": "Let \u03b8 = {EB, W (\u2022) , b (\u2022) } denotes all the trainable parameters. The softmax function is used to convert the output score of the CNN to a probability,", "cite_spans": [ { "start": 15, "end": 18, "text": "(\u2022)", "ref_id": null }, { "start": 23, "end": 26, "text": "(\u2022)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(t|X; \u03b8) = exp(y (3) ) |C| j=1 exp(y (3) j )", "eq_num": "(8)" } ], "section": "Training", "sec_num": "3.3.2" }, { "text": "where X is the input set for term t, and C = {0, 1} is the label set representing product feature and non-product feature, respectively. To train the CNN, we first use V s to collect each occurrence of the seeds in R to form a training set T s . Then, the training criterion is to minimize cross-entropy over T s ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 = argmin \u03b8 |Ts| i=1 \u2212 log \u03b4 i p(t i |X i ; \u03b8)", "eq_num": "(9)" } ], "section": "Training", "sec_num": "3.3.2" }, { "text": "where \u03b4 i is the binomial target label distribution for one entry. Backpropagation algorithm with mini-batch stochastic gradient descent is used to solve this optimization problem. 
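To make the forward computation concrete, the following sketch runs Equations 5-8 on randomly initialized parameters with the dimensions later used in the experiments (n = 100, l = 5, h = 250); it is an illustration, not the trained model:

```python
import numpy as np

n, l, h = 100, 5, 250           # embedding size, window length, hidden units (cf. Section 4.2)
rng = np.random.default_rng(1)
W1, b1 = rng.normal(scale=0.1, size=(h, n * l)), np.zeros(h)
W3, b3 = rng.normal(scale=0.1, size=(2, h)), np.zeros(2)

def forward(windows):
    """Equations 5-8 for one candidate occurrence; `windows` holds the vectors x_i."""
    y1 = np.tanh(np.stack([W1 @ x + b1 for x in windows]))  # Eq. 5: one hidden row per window
    y2 = y1.max(axis=0)                                     # Eq. 6: max-convolutional layer
    y3 = W3 @ y2 + b3                                       # Eq. 7: two class scores
    y3 = y3 - y3.max()                                      # stabilize the softmax
    return np.exp(y3) / np.exp(y3).sum()                    # Eq. 8: probabilities over C = {0, 1}

windows = [rng.normal(size=n * l) for _ in range(3)]        # e.g. the three windows built above
print(forward(windows))
```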
In addition, some useful tricks can be applied during the training. The weight matrixes W (\u2022) are initialized by normalized initialization (Glorot and Bengio, 2010) . W (1) is pre-trained by an autoencoder (Hinton, 1989) to capture semantic compositionality. To speed up the learning, a momentum method is applied .", "cite_spans": [ { "start": 271, "end": 274, "text": "(\u2022)", "ref_id": null }, { "start": 320, "end": 345, "text": "(Glorot and Bengio, 2010)", "ref_id": "BIBREF5" }, { "start": 387, "end": 401, "text": "(Hinton, 1989)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3.2" }, { "text": "We propose a Label Propagation algorithm to combine both semantic clues in a unified process. Each term t \u2208 V is assumed to have a label distribution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Lexical and Contextual Semantic Clues by Label Propagation", "sec_num": "3.4" }, { "text": "L t = (p + t , p \u2212 t )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Lexical and Contextual Semantic Clues by Label Propagation", "sec_num": "3.4" }, { "text": ", where p + t denotes the probability of the candidate being a product feature, and on the contrary, p \u2212 t = 1 \u2212 p + t . The classified results of the CNN which encode contextual semantic clue serve as the prior knowledge,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Lexical and Contextual Semantic Clues by Label Propagation", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I t = \uf8f1 \uf8f2 \uf8f3 (1, 0), if t \u2208 V + s (0, 1), if t \u2208 V \u2212 s (r + t , r \u2212 t ), if t \u2208 V c", "eq_num": "(10)" } ], "section": "Combining Lexical and Contextual Semantic Clues by Label Propagation", "sec_num": "3.4" }, { "text": "where (r + t , r \u2212 t ) is estimated by,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Lexical and Contextual Semantic Clues by Label Propagation", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r + t = count + (t) count + (t) + count \u2212 (t)", "eq_num": "(11)" } ], "section": "Combining Lexical and Contextual Semantic Clues by Label Propagation", "sec_num": "3.4" }, { "text": "where count + (t) is the number of occurrences of term t that are classified as positive by the CNN, and count \u2212 (t) represents the negative count. Label Propagation is applied to propagate the prior knowledge distribution I to the product feature distribution L via semantic similarity graph G, so that a product feature candidate is determined by exploring its semantic relations to all of the seeds and other candidates globally. 
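As a toy illustration of Equations 10 and 11, the prior for a single term could be assembled as follows (the seed sets and occurrence counts are invented):

```python
def prior_distribution(term, V_pos, V_neg, pos_count, neg_count):
    """Prior label distribution I_t = (r+, r-) for one term (Equations 10-11).

    V_pos / V_neg are the positive / negative seed sets; pos_count / neg_count
    record how many occurrences of a candidate the CNN classified as product
    feature (count+) versus noise (count-).
    """
    if term in V_pos:
        return (1.0, 0.0)
    if term in V_neg:
        return (0.0, 1.0)
    r_pos = pos_count[term] / (pos_count[term] + neg_count[term])
    return (r_pos, 1.0 - r_pos)

# toy example: 14 of 20 occurrences of "fm tuner" were classified as positive
print(prior_distribution("fm tuner", {"screen"}, {"thing"},
                         {"fm tuner": 14}, {"fm tuner": 6}))  # (0.7, 0.3)
```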
We propose an adapted version on the random walking view of the Adsorption algorithm (Baluja et al., 2008) by updating the following formula until L converges,", "cite_spans": [ { "start": 518, "end": 539, "text": "(Baluja et al., 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Combining Lexical and Contextual Semantic Clues by Label Propagation", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L i+1 = (1 \u2212 \u03b1)M T L i + \u03b1DI", "eq_num": "(12)" } ], "section": "Combining Lexical and Contextual Semantic Clues by Label Propagation", "sec_num": "3.4" }, { "text": "where M is the semantic transition matrix built from G; D = Diag[log tf (t)] is a diagonal matrix of log frequencies, which is designed to assign higher \"confidence\" scores to more frequent seeds; and \u03b1 is a balancing parameter. Particularly, when \u03b1 = 0, we can set the prior knowledge I without V c to L 0 so that only lexical semantic clue is used; otherwise if \u03b1 = 1, only contextual semantic clue is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Lexical and Contextual Semantic Clues by Label Propagation", "sec_num": "3.4" }, { "text": "We summarize the bootstrapping framework of the proposed method in Algorithm 1. During bootstrapping, the CNN is enhanced by Label Propagation which finds more labeled examples for training, and then the performance of Label Propagation is also improved because the CNN outputs a more accurate prior distribution. After running for several iterations, the algorithm gets enough seeds, and a final Label Propagation is conducted to produce the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "Algorithm 1: Bootstrapping using semantic clues Input: The review corpus R, a large corpus C Output: The mined product feature list P Initialization: Train word embedding set EB first on C, and then on R", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "Step 1: Generate product feature seeds Vs (Section 3.1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "Step 2: Build semantic similarity graph G (Section 3.2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "while iter < MAX ITER do", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "Step 3: Use Vs to collect occurrence set Ts from R for training", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "Step 4: Train a CNN N on Ts (Section 3.3) Apply mini-batch SGD on Equ. 
9;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "Step 5: Run Label Propagation (Section 3.4) Classify candidates using N to setup I;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "L 0 \u2190 I; repeat L i+1 \u2190 (1 \u2212 \u03b1)M T L i + \u03b1DI; until ||L i+1 \u2212 L i || 2 < \u03b5;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "Step 6: Expand product feature seeds Move top T terms from Vc to Vs;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "iter++ end", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "Step 7: Run Label Propagation for a final result L f Rank terms by L", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "+ f to get P , where L + f > L \u2212 f ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "4 Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bootstrapping Framework", "sec_num": "3.5" }, { "text": "Datasets: We select two real world datasets to evaluate the proposed method. The first one is a benchmark dataset in Wang et al. (2011) , which contains English review sets on two domains (MP3 and Hotel) 5 . The second dataset is proposed by Chinese Opinion Analysis Evaluation 2008 (COAE 2008 , where two review sets (Camera and Car) are selected. Xu et al. (2013) had manually annotated product features on these four domains, so we directly employ their annotation as the gold standard. The detailed information can be found in their original paper.", "cite_spans": [ { "start": 117, "end": 135, "text": "Wang et al. (2011)", "ref_id": "BIBREF22" }, { "start": 267, "end": 282, "text": "Evaluation 2008", "ref_id": null }, { "start": 283, "end": 293, "text": "(COAE 2008", "ref_id": null }, { "start": 349, "end": 365, "text": "Xu et al. (2013)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluation Metrics", "sec_num": "4.1" }, { "text": "Evaluation Metrics: We evaluate the proposed method in terms of precision(P), recall(R) and Fmeasure(F). The English results are evaluated by exact string match. And for Chinese results, we use an overlap matching metric, because determining the exact boundaries is hard even for human (Wiebe et al., 2005) .", "cite_spans": [ { "start": 286, "end": 306, "text": "(Wiebe et al., 2005)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluation Metrics", "sec_num": "4.1" }, { "text": "For English corpora, the pre-processing are the same as that in (Qiu et al., 2009) , and for Chinese corpora, the Stanford Word Segmenter (Chang et al., 2008) is used to perform word segmentation. 
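For reference, a minimal sketch of the exact-match evaluation used for the English corpora is given below (toy term lists; the overlap metric for Chinese additionally credits non-exact boundary matches):

```python
def prf(extracted, gold):
    """Precision, recall and F-measure under exact string match."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)
    p = tp / len(extracted) if extracted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

print(prf(["screen", "battery life", "thing"], ["screen", "battery life", "fm tuner"]))
```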
We select three state-of-the-art syntax-based methods to be compared with our method:", "cite_spans": [ { "start": 64, "end": 82, "text": "(Qiu et al., 2009)", "ref_id": "BIBREF18" }, { "start": 138, "end": 158, "text": "(Chang et al., 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.2" }, { "text": "DP uses a bootstrapping algorithm named as Double Propagation (Qiu et al., 2009) , which is a conventional syntax-based method.", "cite_spans": [ { "start": 62, "end": 80, "text": "(Qiu et al., 2009)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.2" }, { "text": "DP-HITS is an enhanced version of DP proposed by Zhang et al. (2010) , which ranks product feature candidates by", "cite_spans": [ { "start": 49, "end": 68, "text": "Zhang et al. (2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s(t) = log tf (t) * importance(t)", "eq_num": "(13)" } ], "section": "Experimental Settings", "sec_num": "4.2" }, { "text": "where importance(t) is estimated by the HITS algorithm (Kleinberg, 1999) . SGW is the Sentiment Graph Walking algorithm proposed in (Xu et al., 2013) , which first extracts syntactic patterns and then uses random walking to rank candidates. Afterwards, wordsyntactic pattern co-occurrence statistic is used as feature for a semi-supervised classifier TSVM (Joachims, 1999) to further refine the results. This two-stage method is denoted as SGW-TSVM.", "cite_spans": [ { "start": 55, "end": 72, "text": "(Kleinberg, 1999)", "ref_id": "BIBREF11" }, { "start": 132, "end": 149, "text": "(Xu et al., 2013)", "ref_id": "BIBREF25" }, { "start": 356, "end": 372, "text": "(Joachims, 1999)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.2" }, { "text": "LEX only uses lexical semantic clue. Label Propagation is applied alone in a self-training manner. The dimension of word embedding n = 100, the convergence threshold \u03b5 = 10 \u22127 , and the number of expanded seeds T = 40. The size of the seed set N is 40. To output product features, it ranks candidates in descent order by using the positive score L + f (t). CONT only uses contextual semantic clue, which only contains the CNN. The window size l is 5. The CNN is trained with a mini-batch size of 50. The hidden layer size h = 250. Finally, importance(t) in Equ. 13 is replaced with r + t in Equ. 11 to rank candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.2" }, { "text": "LEX&CONT leverages both semantic clues. Table 1 : Experimental results of product feature mining. The precision or recall of CONT is the average performance over five runs with different random initialization of parameters of the CNN. Avg. stands for the average score.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 47, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.2" }, { "text": "The experimental results are shown in Table 1 , from which we have the following observations:", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 45, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "The Semantics-based Methods vs. 
State-of-the-art Syntax-based Methods", "sec_num": "4.3" }, { "text": "(i) Our method achieves the best performance among all of the compared methods. We also equally split the dataset into five subsets, and perform one-tailed t-test (p \u2264 0.05), which shows that the proposed semanticsbased method (LEX&CONT) significantly outperforms the three syntax-based strong competitors (DP, DP-HITS and SGW-TSVM).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantics-based Methods vs. State-of-the-art Syntax-based Methods", "sec_num": "4.3" }, { "text": "(ii) LEX&CONT which leverages both lexical and contextual semantic clues outperforms approaches that only use one kind of semantic clue (LEX and CONT), showing that the combination of the semantic clues is helpful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantics-based Methods vs. State-of-the-art Syntax-based Methods", "sec_num": "4.3" }, { "text": "(iii) Our methods which use only one kind of semantic clue (LEX and CONT) outperform syntax-based methods (DP, DP-HITS and SGW). Comparing DP-HITS with LEX and CONT, the difference between them is that DP-HITS uses a syntax-pattern-based algorithm to estimate importance(t) in Equ. 13, while our methods use lexical or contextual semantic clue instead. We believe the reason that LEX or CONT is better is that syntactic patterns only use discrete and local information. In contrast, CONT exploits latent semantics of each word in context, and LEX takes advantage of word embedding, which is induced from global word co-occurrence statistic. Furthermore, comparing SGW and LEX, both methods are base on random surfer model, but LEX gets better results than SGW. Therefore, the wordword semantic similarity relation used in LEX is more reliable than the word-syntactic pattern relation used in SGW.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantics-based Methods vs. State-of-the-art Syntax-based Methods", "sec_num": "4.3" }, { "text": "(iv) LEX&CONT achieves the highest recall among all of the evaluated methods. Since DP and DP-HITS rely on frequency for ranking product features, infrequent candidates are ranked low in their extracted list. As for SGW-TSVM, the features they used for the TSVM suffer from the data sparsity problem for infrequent terms. In contrast, LEX&CONT is frequency-independent to the review corpus. Further discussions on this observation are given in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantics-based Methods vs. State-of-the-art Syntax-based Methods", "sec_num": "4.3" }, { "text": "We conservatively regard 30% product features with the highest frequencies in R as frequent features, so the remaining terms in the gold standard are infrequent features. In product feature mining task, frequent features are relatively easy to find. Table 2 shows the recall of all the four approaches for mining frequent product features. We can see that the performance are very close among different methods. Therefore, the recall mainly depends on mining the infrequent features. Table 2 : The recall of frequent product features. Figure 3 gives the recall of infrequent product features, where LEX&CONT achieves the best performance. So our method is less influenced by term frequency. Furthermore, LEX gets better recall than CONT and all syntax-based methods, which indicates that lexical semantic clue does aid to mine more infrequent features as expected. 
.6", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 257, "text": "Table 2", "ref_id": null }, { "start": 484, "end": 491, "text": "Table 2", "ref_id": null }, { "start": 535, "end": 543, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "The Results on Extracting Infrequent Product Features", "sec_num": "4.4" }, { "text": ".7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Results on Extracting Infrequent Product Features", "sec_num": "4.4" }, { "text": ".8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Results on Extracting Infrequent Product Features", "sec_num": "4.4" }, { "text": ".9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Results on Extracting Infrequent Product Features", "sec_num": "4.4" }, { "text": "1.0 Figure 4 : Accuracy (y-axis) of product feature seed expansion at each bootstrapping iteration (x-axis). The error bar shows the standard deviation over five runs. ", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 12, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "The Results on Extracting Infrequent Product Features", "sec_num": "4.4" }, { "text": "This section studies the effects of lexical semantic clue and contextual semantic clue during seed expansion (Step 6 in Algorithm 1), which is controlled by \u03b1. When \u03b1 = 1, we get the CONT; and if \u03b1 is set 0, we get the LEX. To take into account the correctly expanded terms for both positive and negative seeds, we use Accuracy as the evaluation metric,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Semantic Clue vs. Contextual Semantic Clue", "sec_num": "4.5" }, { "text": "Accuracy = #T P + #T N # Extracted Seeds", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Semantic Clue vs. Contextual Semantic Clue", "sec_num": "4.5" }, { "text": "where T P denotes the true positive seeds, and T N denotes the true negative seeds. Figure 4 shows the performance of seed expansion during bootstrapping, in which the accuracy is computed on 40 seeds (20 being positive and 20 being negative) expanded in each iteration. We can see that the accuracies of CONT and LEX&CONT retain at a high level, which shows that they can find reliable new product feature seeds. However, the performance of LEX oscillates sharply and it is very low for some points, which indicates that using lexical semantic clue alone is infeasible. On another hand, comparing CONT with LEX in Table 1 , we can see that LEX performs generally better than CONT. Although LEX is not so accurate as CONT during seed expansion, its final performance surpasses CONT. Consequently, we can draw conclusion that CONT is more suitable for the seed expansion, and LEX is more robust for the final result production.", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 92, "text": "Figure 4", "ref_id": null }, { "start": 615, "end": 622, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Lexical Semantic Clue vs. Contextual Semantic Clue", "sec_num": "4.5" }, { "text": "To combine advantages of the two kinds of semantic clues, we set \u03b1 = 0.7 in Step 5 of Algorithm 1, so that contextual semantic clue plays a key role to find new seeds accurately. For Step 7, we set \u03b1 = 0.3. Thus, lexical semantic clue is emphasized for producing the final results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Semantic Clue vs. 
Contextual Semantic Clue", "sec_num": "4.5" }, { "text": "Two non-convolutional variations of the proposed method are used to be compared with the convolutional method in CONT. FW-5 uses a traditional neural network with a fixed window size of 5 to replace the CNN in CONT, and the candidate term to be classified is placed in the center of the window. Similarly, FW-9 uses a fixed window size of 9. Note that CONT uses a 5-term dynamic window containing the candidate term, so the exploited number of words in the context is equivalent to FW-9. Table 3 shows the experimental results. We can see that the performance of FW-5 is much worse than CONT. The reason is that FW-5 only exploits half of the context as that of CONT, which is not sufficient enough. Meanwhile, although FW-9 exploits equivalent range of context as that of CONT, it gets lower precisions. It is because FW-9 has approximately two times parameters in the parameter matrix W (1) than that in Equ. 5 of CONT, which makes it more difficult to be trained with the same amount of data. Also, lengths of many sentences in the review corpora are shorter than 9. Therefore, the convolutional approach in CONT is the most effective way among these settings.", "cite_spans": [], "ref_spans": [ { "start": 488, "end": 495, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "The Effect of Convolutional Layer", "sec_num": "4.6" }, { "text": "We investigate two key parameters of the proposed method: the initial number of seeds N , and the size of the window l used by the CNN. Figure 5 shows the performance under different N , where the F-Measure saturates when N equates to 40 and beyond. Hence, very few seeds are needed for starting our algorithm. Figure 6 shows F-Measure under different window size l. We can see that the performance is improved little when l is larger than 5. Therefore, l = 5 is a proper window size for these datasets. .6", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 144, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 311, "end": 319, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Parameter Study", "sec_num": "4.7" }, { "text": ".7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Study", "sec_num": "4.7" }, { "text": ".8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Study", "sec_num": "4.7" }, { "text": ".9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Study", "sec_num": "4.7" }, { "text": "Hotel Camera Car Figure 6 : F-Measure vs. l for the final results.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 25, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "MP3", "sec_num": null }, { "text": "This paper proposes a product feature mining method by leveraging contextual and lexical semantic clues. A semantic similarity graph is built to capture lexical semantic clue, and a convolutional neural network is used to encode contextual semantic clue. Then, a Label Propagation algorithm is applied to combine both semantic clues. 
Experimental results prove the effectiveness of the proposed method, which not only mines product features more accurately than conventional syntax-based method, but also extracts more infrequent product features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In future work, we plan to extend the proposed method to jointly mine product features along with customers' opinions on them. The learnt semantic representations of words may also be utilized to predict fine-grained sentiment distributions over product features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "Google-n-Gram (http://books.google.com/ngrams) is used as the background corpus.2 The df (t) part of the original DRM is slightly modified because we want a tf \u00d7 idf -like scheme(Liu et al., 2012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Otherwise, the CNN will quickly get overfitting on t, because very few seed lexicons are used for the training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://timan.cs.uiuc.edu/downloads.html 6 http://ir-china.org.cn/coae2008.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Video suggestion and discovery for youtube: Taking random walks through the view graph", "authors": [ { "first": "Shumeet", "middle": [], "last": "Baluja", "suffix": "" }, { "first": "Rohan", "middle": [], "last": "Seth", "suffix": "" }, { "first": "D", "middle": [], "last": "Sivakumar", "suffix": "" }, { "first": "Yushi", "middle": [], "last": "Jing", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Yagnik", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 17th International Conference on World Wide Web, WWW '08", "volume": "", "issue": "", "pages": "895--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shumeet Baluja, Rohan Seth, D. Sivakumar, Yushi Jing, Jay Yagnik, Shankar Kumar, Deepak Ravichandran, and Mohamed Aly. 2008. Video suggestion and discovery for youtube: Taking ran- dom walks through the view graph. In Proceedings of the 17th International Conference on World Wide Web, WWW '08, pages 895-904, New York, NY, USA. ACM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Optimizing chinese word segmentation for machine translation performance", "authors": [ { "first": "Pi-Chuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation, StatMT '08", "volume": "", "issue": "", "pages": "224--232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing chinese word segmen- tation for machine translation performance. 
In Pro- ceedings of the Third Workshop on Statistical Ma- chine Translation, StatMT '08, pages 224-232.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "J. Mach. Learn. Res", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493-2537, November.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generating typed dependency parses from phrase structure parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the IEEE / ACL'06 Workshop on Spoken Language Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the IEEE / ACL'06 Workshop on Spoken Language Technology.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "Ted", "middle": [], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Comput. Linguist", "volume": "19", "issue": "1", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Dunning. 1993. Accurate methods for the statis- tics of surprise and coincidence. Comput. Linguist., 19(1):61-74, March.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Understanding the difficulty of training deep feedforward neural networks", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neural networks. In Proceedings of the International Con- ference on Artificial Intelligence and Statistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Connectionist learning procedures", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 1989, "venue": "Artificial Intelligence", "volume": "40", "issue": "1C3", "pages": "185--234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey E. Hinton. 1989. Connectionist learning pro- cedures. 
Artificial Intelligence, 40(1C3):185 -234.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Mining and summarizing customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04", "volume": "", "issue": "", "pages": "168--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, KDD '04, pages 168-177, New York, NY, USA. ACM.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Crctol: A semantic-based domain ontology learning system", "authors": [ { "first": "Xing", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ah-Hwee", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2010, "venue": "Journal of the American Society for Information Science and Technology", "volume": "61", "issue": "1", "pages": "150--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xing Jiang and Ah-Hwee Tan. 2010. Crctol: A semantic-based domain ontology learning system. Journal of the American Society for Information Sci- ence and Technology, 61(1):150-168.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Target-dependent twitter sentiment classification", "authors": [ { "first": "Long", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "151--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sen- timent classification. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies -Vol- ume 1, HLT '11, pages 151-160, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Transductive inference for text classification using support vector machines", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 16th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "200--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 1999. Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning, pages 200-209.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Authoritative sources in a hyperlinked environment", "authors": [ { "first": "Jon", "middle": [ "M" ], "last": "Kleinberg", "suffix": "" } ], "year": 1999, "venue": "J. ACM", "volume": "46", "issue": "5", "pages": "604--632", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. J. 
ACM, 46(5):604-632, September.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Cross-domain co-extraction of sentiment and topic lexicons", "authors": [ { "first": "Fangtao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ou", "middle": [], "last": "Sinno Jialin Pan", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", "volume": "1", "issue": "", "pages": "410--419", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fangtao Li, Sinno Jialin Pan, Ou Jin, Qiang Yang, and Xiaoyan Zhu. 2012. Cross-domain co-extraction of sentiment and topic lexicons. In Proceedings of the 50th Annual Meeting of the Association for Compu- tational Linguistics: Long Papers -Volume 1, ACL '12, pages 410-419, Stroudsburg, PA, USA. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Opinion target extraction using word-based translation model", "authors": [ { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Liheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1346--1356", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opin- ion target extraction using word-based translation model. In Proceedings of the 2012 Joint Confer- ence on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1346-1356, Jeju Island, Korea, July. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Opinion digger: An unsupervised opinion miner from unstructured product reviews", "authors": [ { "first": "Samaneh", "middle": [], "last": "Moghaddam", "suffix": "" }, { "first": "Martin", "middle": [ "Ester" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 19th ACM International Conference on Information and Knowledge Management, CIKM '10", "volume": "", "issue": "", "pages": "1825--1828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samaneh Moghaddam and Martin Ester. 2010. 
Opin- ion digger: An unsupervised opinion miner from unstructured product reviews. In Proceedings of the 19th ACM International Conference on Informa- tion and Knowledge Management, CIKM '10, pages 1825-1828, New York, NY, USA. ACM.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Hierarchical probabilistic neural network language model", "authors": [ { "first": "Frederic", "middle": [], "last": "Morin", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the international workshop on artificial intelligence and statistics, AISTATS05", "volume": "", "issue": "", "pages": "246--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frederic Morin and Yoshua Bengio. 2005. Hierarchi- cal probabilistic neural network language model. In Proceedings of the international workshop on arti- ficial intelligence and statistics, AISTATS05, pages 246-252.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Extracting product features and opinions from reviews", "authors": [ { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05", "volume": "", "issue": "", "pages": "339--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extract- ing product features and opinions from reviews. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Lan- guage Processing, HLT '05, pages 339-346.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Expanding domain sentiment lexicon through double propagation", "authors": [ { "first": "Guang", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Bu", "suffix": "" }, { "first": "Chun", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 21st international jont conference on Artifical intelligence, IJCAI'09", "volume": "", "issue": "", "pages": "1199--1204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. In Proceedings of the 21st in- ternational jont conference on Artifical intelligence, IJCAI'09, pages 1199-1204.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "H", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "NIPS'2011", "volume": "24", "issue": "", "pages": "801--809", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Eric H Huang, Jeffrey Pennington, Andrew Y Ng, and Christopher D Manning. 2011. Dynamic pooling and unfolding recursive autoen- coders for paraphrase detection. 
In NIPS'2011, vol- ume 24, pages 801-809.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "James", "middle": [], "last": "Martens", "suffix": "" }, { "first": "George", "middle": [], "last": "Dahl", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 30 th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, James Martens, George Dahl, and Ge- offrey Hinton. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 30 th International Conference on Machine Learning.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Word representations: A simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Com- putational Linguistics, ACL '10, pages 384-394, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Latent aspect rating analysis without aspect keyword supervision", "authors": [ { "first": "Hongning", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '11", "volume": "", "issue": "", "pages": "618--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect key- word supervision. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '11, pages 618- 626, New York, NY, USA. ACM.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Annotating expressions of opinions and emotions in language. Language Resources and Evaluation", "authors": [ { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2005, "venue": "", "volume": "39", "issue": "", "pages": "165--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language. 
Language Resources and Evalu- ation, 39(2-3):165-210.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Phrase dependency parsing for opinion mining", "authors": [ { "first": "Yuanbin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lide", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "3", "issue": "", "pages": "1533--1541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion min- ing. In Proceedings of the 2009 Conference on Em- pirical Methods in Natural Language Processing: Volume 3 -Volume 3, EMNLP '09, pages 1533- 1541, Stroudsburg, PA, USA. Association for Com- putational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Mining opinion words and opinion targets in a two-stage framework", "authors": [ { "first": "Liheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1764--1773", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liheng Xu, Kang Liu, Siwei Lai, Yubo Chen, and Jun Zhao. 2013. Mining opinion words and opinion tar- gets in a two-stage framework. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1764-1773, Sofia, Bulgaria, August. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Extracting and ranking product features in opinion documents", "authors": [ { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Suk", "middle": [ "Hwan" ], "last": "Lim", "suffix": "" }, { "first": "Eamonn O'", "middle": [], "last": "Brien-Strain", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10", "volume": "", "issue": "", "pages": "1462--1470", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O'Brien-Strain. 2010. Extracting and ranking prod- uct features in opinion documents. In Proceedings of the 23rd International Conference on Compu- tational Linguistics: Posters, COLING '10, pages 1462-1470, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Multi-aspect opinion polling from textual reviews", "authors": [ { "first": "Jingbo", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Huizhen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Benjamin", "middle": [ "K" ], "last": "Tsou", "suffix": "" }, { "first": "Muhua", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM '09", "volume": "", "issue": "", "pages": "1799--1802", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingbo Zhu, Huizhen Wang, Benjamin K. Tsou, and Muhua Zhu. 2009. Multi-aspect opinion polling from textual reviews. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM '09, pages 1799-1802, New York, NY, USA. ACM.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Movie review mining and summarization", "authors": [ { "first": "Li", "middle": [], "last": "Zhuang", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Jing", "suffix": "" }, { "first": "Xiao-Yan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 15th ACM International Conference on Information and Knowledge Management, CIKM '06", "volume": "", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Pro- ceedings of the 15th ACM International Conference on Information and Knowledge Management, CIKM '06, pages 43-50, New York, NY, USA. ACM.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "(a) This player has an fm tuner. (b) This mp3 supports wma file. (c) This review has helped people a lot. (d) This mp3 has some flaws." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "The architecture of the Convolutional Neural Network." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "(a) The [screen of this mp3 is] great. (b) This [mp3 has a great screen]." }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "The recall of infrequent features. The error bar shows the standard deviation over five different runs." }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "F-Measure vs. N for the final results." }, "TABREF4": { "html": null, "type_str": "table", "content": "
Chart residue removed; recoverable information: Recall (y-axis, 0.4-0.9) of DP, DP-HITS, SGW-TSVM, CONT, LEX, and LEX&CONT on the MP3, Hotel, Camera, and Car domains.
", "text": "The results of the convolutional method vs. the results of non-convolutional methods.", "num": null } } } }