{ "paper_id": "D12-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:23:48.680126Z" }, "title": "A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts", "authors": [ { "first": "Lizhen", "middle": [], "last": "Qu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Rainer", "middle": [], "last": "Gemulla", "suffix": "", "affiliation": {}, "email": "rgemulla@mpi-inf.mpg.de" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "", "affiliation": {}, "email": "weikum@mpi-inf.mpg.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose the weakly supervised Multi-Experts Model (MEM) for analyzing the semantic orientation of opinions expressed in natural language reviews. In contrast to most prior work, MEM predicts both opinion polarity and opinion strength at the level of individual sentences; such fine-grained analysis helps to understand better why users like or dislike the entity under review. A key challenge in this setting is that it is hard to obtain sentence-level training data for both polarity and strength. For this reason, MEM is weakly supervised: It starts with potentially noisy indicators obtained from coarse-grained training data (i.e., document-level ratings), a small set of diverse base predictors, and, if available, small amounts of fine-grained training data. We integrate these noisy indicators into a unified probabilistic framework using ideas from ensemble learning and graph-based semi-supervised learning. Our experiments indicate that MEM outperforms state-of-the-art methods by a significant margin.", "pdf_parse": { "paper_id": "D12-1014", "_pdf_hash": "", "abstract": [ { "text": "We propose the weakly supervised Multi-Experts Model (MEM) for analyzing the semantic orientation of opinions expressed in natural language reviews. In contrast to most prior work, MEM predicts both opinion polarity and opinion strength at the level of individual sentences; such fine-grained analysis helps to understand better why users like or dislike the entity under review. A key challenge in this setting is that it is hard to obtain sentence-level training data for both polarity and strength. For this reason, MEM is weakly supervised: It starts with potentially noisy indicators obtained from coarse-grained training data (i.e., document-level ratings), a small set of diverse base predictors, and, if available, small amounts of fine-grained training data. We integrate these noisy indicators into a unified probabilistic framework using ideas from ensemble learning and graph-based semi-supervised learning. Our experiments indicate that MEM outperforms state-of-the-art methods by a significant margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Opinion mining is concerned with analyzing opinions expressed in natural language text. For example, many internet websites allow their users to provide both natural language reviews and numerical ratings to items of interest (such as products or movies). In this context, opinion mining aims to uncover the relationship between users and (features of) items. Preferences of users to items can be well understood by coarse-grained methods of opinion mining, which focus on analyzing the semantic orientation of documents as a whole. 
To understand why users like or dislike certain items, however, we need to perform more fine-grained analysis of the review text itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on sentence-level analysis of semantic orientation (SO) in online reviews. The SO consists of polarity (positive, negative, or other 1 ) and strength (degree to which a sentence is positive or negative). Both quantities can be analyzed jointly by mapping them to numerical ratings: Large negative/positive ratings indicate a strong negative/positive orientation. A key challenge in fine-grained rating prediction is that fine-grained training data for both polarity and strength is hard to obtain. We thus focus on a weakly supervised setting in which only coarse-level training data (such as document ratings and subjectivity lexicons) and, optionally, a small amount of fine-grained training data (such as sentence polarities) is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A number of lexicon-based approaches for phrase-level rating prediction have been proposed in the literature (Taboada et al., 2011; Qu et al., 2010) . These methods utilize a subjectivity lexicon of words along with information about their semantic orientation; they focus on phrases that contain words from the lexicon. A key advantage of sentence-level methods is that they are able to cover all sentences in a review and that phrase identification is avoided. To the best of our knowledge, the problem of rating prediction at the sentence level has not been addressed in the literature. A naive approach would be to simply average phrase-level ratings. Such an approach performs poorly, however, since (1) phrases are analyzed out of context (e.g., modal verbs or conditional clauses), (2) domain-dependent information about semantic orientation is not captured in the lexicons, and (3) only phrases that contain lexicon words are covered. Here, (1) and (2) lead to low precision, and (3) to low recall.", "cite_spans": [ { "start": 107, "end": 129, "text": "(Taboada et al., 2011;", "ref_id": "BIBREF23" }, { "start": 130, "end": 146, "text": "Qu et al., 2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address the challenges outlined above, we propose the weakly supervised Multi-Experts Model (MEM) for sentence-level rating prediction. MEM starts with a set of potentially noisy indicators of SO including phrase-level predictions, language heuristics, and co-occurrence counts. We refer to these indicators as base predictors; they constitute the set of experts used in our model. MEM is designed such that new base predictors can be easily integrated. Since the information provided by the base predictors can be contradictory, we use ideas from ensemble learning (Dietterichl, 2002) to learn the most confident indicators and to exploit domain-dependent information revealed by document ratings. 
Thus, instead of averaging base predictors, MEM integrates their features along with the available coarse-grained training data into a unified probabilistic model.", "cite_spans": [ { "start": 569, "end": 588, "text": "(Dietterichl, 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The integrated model can be regarded as a Gaussian process (GP) model (Rasmussen, 2004) with a novel multi-expert prior. The multi-expert prior decomposes into two component distributions. The first component distribution integrates sentence-local information obtained from the base predictors. It forms a special realization of stacking (Dzeroski and Zenko, 2004) but uses the features from the base predictors instead of the actual predictions. The second component distribution propagates SO information across similar sentences using techniques from graphbased semi-supervised learning (GSSL) (Zhu et al., 2003; Belkin et al., 2006) . It aims to improve the predictions on sentences that are not covered well enough by our base predictors. Traditional GSSL algorithms support either discrete labels (classification) or numerical labels (regression); we extend these techniques to support both types of labels simultaneously. We use a novel variant of word sequence kernels (Cancedda et al., 2003) to measure sentence similarity. Our kernel takes the relative positions of words but also their SO and synonymity into account.", "cite_spans": [ { "start": 70, "end": 87, "text": "(Rasmussen, 2004)", "ref_id": "BIBREF22" }, { "start": 338, "end": 364, "text": "(Dzeroski and Zenko, 2004)", "ref_id": "BIBREF8" }, { "start": 597, "end": 615, "text": "(Zhu et al., 2003;", "ref_id": "BIBREF28" }, { "start": 616, "end": 636, "text": "Belkin et al., 2006)", "ref_id": "BIBREF0" }, { "start": 977, "end": 1000, "text": "(Cancedda et al., 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our experiments indicate that MEM significantly outperforms prior work in both sentence-level rating prediction and sentence-level polarity classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There exists a large body of work on analyzing the semantic orientation of natural language text. Our approach is unique in that it is weakly supervised, predicts both polarity and strength, and operates on the sentence level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Supervised approaches for sentiment analysis focus mainly on opinion mining at the document level (Pang and Lee, 2004; Pang et al., 2002; Pang and Lee, 2005; Goldberg and Zhu, 2006) , but have also been applied to sentence-level polarity classification in specific domains (Mao and Lebanon, 2006; Pang and Lee, 2004; McDonald et al., 2007) . In these settings, a sufficient amount of training data is available. 
In contrast, we focus on opinion mining tasks with little or no fine-grained training data.", "cite_spans": [ { "start": 98, "end": 118, "text": "(Pang and Lee, 2004;", "ref_id": "BIBREF17" }, { "start": 119, "end": 137, "text": "Pang et al., 2002;", "ref_id": "BIBREF19" }, { "start": 138, "end": 157, "text": "Pang and Lee, 2005;", "ref_id": "BIBREF18" }, { "start": 158, "end": 181, "text": "Goldberg and Zhu, 2006)", "ref_id": "BIBREF11" }, { "start": 273, "end": 296, "text": "(Mao and Lebanon, 2006;", "ref_id": "BIBREF14" }, { "start": 297, "end": 316, "text": "Pang and Lee, 2004;", "ref_id": "BIBREF17" }, { "start": 317, "end": 339, "text": "McDonald et al., 2007)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The weakly supervised HCRF model (T\u00e4ckstr\u00f6m and McDonald, 2011b; T\u00e4ckstr\u00f6m and McDonald, 2011a) for sentence-level polarity classification is perhaps closest to our work in spirit. Similar to MEM, HCRF uses coarse-grained training data and, when available, a small amount of fine-grained sentence polarities. In contrast to MEM, HCRF does not predict the strength of semantic orientation and ignores the order of words within sentences.", "cite_spans": [ { "start": 33, "end": 64, "text": "(T\u00e4ckstr\u00f6m and McDonald, 2011b;", "ref_id": "BIBREF25" }, { "start": 65, "end": 95, "text": "T\u00e4ckstr\u00f6m and McDonald, 2011a)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There exists a large number of lexicon-based methods for polarity classification (Ding et al., 2008; Choi and Cardie, 2009; Hu and Liu, 2004; Zhuang et al., 2006; Fu and Wang, 2010; Ku et al., 2008) . The lexicon-based methods of (Taboada et al., 2011; Qu et al., 2010) also predict ratings at the phrase level; these methods are used as experts in our model. MEM leverages ideas from ensemble learning (Dietterichl, 2002; Bishop, 2006) and GSSL methods (Zhu et al., 2003; Zhu and Ghahramani, 2002; Chapelle et al., 2006; Belkin et al., 2006) . We extend GSSL with support for multiple, heterogenous labels. 
This allows us to integrate our base predictors as well as the available training data into a unified model that exploits that strengths of algorithms from both families.", "cite_spans": [ { "start": 81, "end": 100, "text": "(Ding et al., 2008;", "ref_id": "BIBREF7" }, { "start": 101, "end": 123, "text": "Choi and Cardie, 2009;", "ref_id": "BIBREF4" }, { "start": 124, "end": 141, "text": "Hu and Liu, 2004;", "ref_id": "BIBREF12" }, { "start": 142, "end": 162, "text": "Zhuang et al., 2006;", "ref_id": "BIBREF29" }, { "start": 163, "end": 181, "text": "Fu and Wang, 2010;", "ref_id": "BIBREF10" }, { "start": 182, "end": 198, "text": "Ku et al., 2008)", "ref_id": "BIBREF13" }, { "start": 230, "end": 252, "text": "(Taboada et al., 2011;", "ref_id": "BIBREF23" }, { "start": 253, "end": 269, "text": "Qu et al., 2010)", "ref_id": "BIBREF21" }, { "start": 403, "end": 422, "text": "(Dietterichl, 2002;", "ref_id": "BIBREF6" }, { "start": 423, "end": 436, "text": "Bishop, 2006)", "ref_id": "BIBREF1" }, { "start": 454, "end": 472, "text": "(Zhu et al., 2003;", "ref_id": "BIBREF28" }, { "start": 473, "end": 498, "text": "Zhu and Ghahramani, 2002;", "ref_id": "BIBREF27" }, { "start": 499, "end": 521, "text": "Chapelle et al., 2006;", "ref_id": "BIBREF3" }, { "start": 522, "end": 542, "text": "Belkin et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Each of our base predictors predicts the polarity or the rating of a single phrase. As indicated above, we do not use these predictions directly in MEM but instead integrate the features of the base predictors (see Sec. 4.4). MEM is designed such that new base predictors can be integrated easily.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Predictors", "sec_num": "3" }, { "text": "Our base predictors use a diverse set of available web and linguistic resources. The hope is that this diversity increases overall prediction performance (Dietterichl, 2002) : The statistical polarity predictor focuses on local syntactic patterns; it is based on corpus statistics for SO-carrying words and opinion topic words. The heuristic polarity predictor uses manually constructed rules to achieve high precision but low recall. Both the bag-of-opinions rating predictor and the SO-CAL rating predictor are based on lexicons. The BoO predictor uses a lexicon trained from a large generic-domain corpus and is recall-oriented; the SO-CAL predictor uses a different lexicon with manually assigned weights and is precision-oriented.", "cite_spans": [ { "start": 154, "end": 173, "text": "(Dietterichl, 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Base Predictors", "sec_num": "3" }, { "text": "The polarity of an SO-carrying word strongly depends on its target word. For example, consider the phrase \"I began this novel with the greatest of hopes [...]\". Here, \"greatest\" has a positive semantic orientation in all subjectivity lexicons, but the combination \"greatest of hopes\" often indicates a negative sentiment. We refer to a pair of SO-carrying word (\"greatest\") and a target word (\"hopes\") as an opinion-target pair. 
Our statistical polarity predictor learns the polarity of opinions and targets jointly, which increases the robustness of its predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Polarity Predictor", "sec_num": "3.1" }, { "text": "Syntactic dependency relations of the form A \u2212R\u2192 B are a strong indicator for opinion-target pairs (Qiu et al., 2009; Zhuang et al., 2006) ; e.g., \"great\" \u2212nmod\u2192 \"product\". To achieve high precision, we only consider pairs connected by the following predefined set of shortest dependency paths:", "cite_spans": [ { "start": 101, "end": 119, "text": "(Qiu et al., 2009;", "ref_id": "BIBREF20" }, { "start": 120, "end": 140, "text": "Zhuang et al., 2006)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Polarity Predictor", "sec_num": "3.1" }, { "text": "verb \u2190subj\u2212 noun, verb \u2190obj\u2212 noun, adj \u2212nmod\u2192 noun, adj \u2212prd\u2192 verb \u2190subj\u2212 noun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Polarity Predictor", "sec_num": "3.1" }, { "text": "We only retain opinion-target pairs that are sufficiently frequent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Polarity Predictor", "sec_num": "3.1" }, { "text": "For each extracted pair z, we count how often it co-occurs with each document polarity y \u2208 Y, where Y = {positive, negative, other} denotes the set of polarities. If z occurs in a document but is preceded by a negator, we treat it as a co-occurrence of the opposite document polarity. If z occurs in a document with polarity other, we count the occurrence with only half weight, i.e., we increase both #z and #(other, z) by 0.5. These documents are typically a mixture of positive and negative opinions so that we want to reduce their impact. The marginal distribution of polarity label y given that z occurs in a sentence is estimated as P (y | z) = #(y, z)/#z. The predictor is trained using the text and ratings of the reviews in the training data, i.e., without relying on fine-grained annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Polarity Predictor", "sec_num": "3.1" }, { "text": "The statistical polarity predictor can be used to predict sentence-level polarities by averaging the phrase-level predictions. As discussed previously, such an approach is problematic; we use it as a baseline approach in our experimental study. We also employ phrase-level averaging to estimate the variance of base predictors; see Sec. 4.3. Denote by Z(x) the set of opinion-target pairs in sentence x. To predict the sentence polarity y \u2208 Y, we take the Bayesian average of the phrase-level predictors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Polarity Predictor", "sec_num": "3.1" }, { "text": "P (y | Z(x)) = \u2211 z\u2208Z(x) P (y | z)P (z) = \u2211 z\u2208Z(x) P (y, z).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Polarity Predictor", "sec_num": "3.1" }, { "text": "Thus the most likely polarity is the one with the highest co-occurrence count.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Polarity Predictor", "sec_num": "3.1" }, { "text": "Heuristic patterns can also serve as base predictors. 
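As a concrete illustration of the statistical polarity predictor of Sec. 3.1 above, the counting scheme and the sentence-level aggregation can be sketched as follows; all names are ours, the extraction of opinion-target pairs is assumed to be given, and the treatment of negation in documents of polarity other is our simplification.

```python
# Sketch of the co-occurrence statistics of Sec. 3.1 (simplified; names are ours).
from collections import Counter, defaultdict

POLARITIES = ("positive", "negative", "other")
FLIP = {"positive": "negative", "negative": "positive", "other": "other"}

pair_total = Counter()             # #z
pair_label = defaultdict(Counter)  # #(y, z)

def observe(pair, doc_polarity, negated=False):
    """Count one occurrence of an opinion-target pair z in a document."""
    y = FLIP[doc_polarity] if negated else doc_polarity
    w = 0.5 if doc_polarity == "other" else 1.0   # half weight for 'other' documents
    pair_total[pair] += w
    pair_label[pair][y] += w

def conditional(pair):
    """P(y | z) = #(y, z) / #z."""
    return {y: pair_label[pair][y] / pair_total[pair] for y in POLARITIES}

def sentence_polarity(pairs):
    """Pick the polarity with the highest summed co-occurrence count over Z(x)."""
    score = {y: sum(pair_label[z][y] for z in pairs) for y in POLARITIES}
    return max(score, key=score.get)

observe(("greatest", "hopes"), "negative")
observe(("great", "product"), "positive")
print(sentence_polarity([("great", "product")]))   # -> positive
```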
In particular, we found that some authors list positive and negative aspects separately after keywords such as \"pros\" and \"cons\". A heuristic that exploits such patterns achieved a high precision (> 90%) but low recall (< 5%) in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Polarity Predictor", "sec_num": "3.2" }, { "text": "We leverage the bag-of-opinion (BoO) model of Qu et al. (2010) as a base predictor for phrase-level ratings. The BoO model was trained from a large generic corpus without fine-grained annotations.", "cite_spans": [ { "start": 46, "end": 62, "text": "Qu et al. (2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Bag-of-Opinions Rating Predictor", "sec_num": "3.3" }, { "text": "In BoO, an opinion consists of three components: an SO-carrying word (e.g., \"good\"), a set of intensifiers (e.g., \"very\") and a set of negators (e.g., \"not\"). Each opinion is scored based on these words (represented as a boolean vector b) and the polarity of the SO-carrying word (represented as sgn(r) \u2208 {\u22121, 1}) as indicated by the MPQA lexicon of Wilson et al. (2005) . In particular, the score is computed as sgn(r)\u03c9 T b, where \u03c9 is the learned weight vector. The sign function sgn(r) ensures consistent weight assignment for intensifiers and negators. For example, an intensifier like \"very\" can obtain a large positive or a large negative weight depending on whether it is used with a positive or negative SO-carrying word, respectively.", "cite_spans": [ { "start": 350, "end": 370, "text": "Wilson et al. (2005)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Bag-of-Opinions Rating Predictor", "sec_num": "3.3" }, { "text": "The Semantic Orientation Calculator (SO-CAL) of Taboada et al. (2011) also predicts phrase-level ratings via a scoring function similar to the one of BoO. The SO-CAL predictor uses a manually created lexicon, in which each word is classified as either an SOcarrying word (associated with a numerical score), an intensifier (associated with a modifier on the numerical score), or a negator. SO-CAL employs various heuristics to detect irrealis and to correct for the positive bias inherent in most lexicon-based classifiers. Compared to BoO, SO-CAL has lower recall but higher precision.", "cite_spans": [ { "start": 48, "end": 69, "text": "Taboada et al. (2011)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "SO-CAL Rating Predictor", "sec_num": "3.4" }, { "text": "Our multi-experts model incorporates features from the individual base predictors, coarse-grained labels (i.e., document ratings or polarities), similarities between sentences, and optionally a small amount of sentence polarity labels into an unified probabilistic model. We first give an overview of MEM, and then describe its components in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Experts Model", "sec_num": "4" }, { "text": "Denote by X = {x 1 , . . . , x N } a set of sentences. We associate each sentence x i with a set of initial labels\u0177 i , which are strong indicators of semantic orientation: the coarse-grained rating of the corresponding document, the polarity label of our heuristic polarity predictor, the phrase-level ratings from the SO-CAL predictor, and optionally a manual polarity label. 
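For concreteness, the initial label set of a single sentence could be held in a small record like the following; this is only a sketch, and the field names are ours rather than the paper's.

```python
# Sketch of the heterogeneous initial labels attached to one sentence (Sec. 4.1).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InitialLabels:
    doc_rating: float                         # coarse-grained document rating
    heuristic_polarity: Optional[str] = None  # from the pros/cons heuristic, if it fired
    socal_ratings: List[float] = field(default_factory=list)  # SO-CAL phrase-level ratings
    manual_polarity: Optional[str] = None     # optional fine-grained annotation

# A sentence from a 4-star review with two SO-CAL phrases and no manual label:
y_hat_i = InitialLabels(doc_rating=4.0, socal_ratings=[0.8, -0.2])
```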
Note that the number of initial labels may vary from sentence to sentence and that initial labels are heterogeneous in that they refer to either polarities or ratings. Let\u0176 = {\u0177 1 , . . . ,\u0177 N }. Our goal is to predict the unobserved ratings r = {r 1 , . . . , r N } of each sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Overview", "sec_num": "4.1" }, { "text": "Our multi-expert model is a probabilistic model for X,\u0176, and r. In particular, we model the rating vector r via a multi-expert prior P E (r | X, \u03b2) with parameter \u03b2 (Sec. 4.2). P E integrates both features from the base predictors and sentence similarities. We correlate ratings to initial labels via a set of conditional distributions P b (\u0177 b | r), where b denotes the type of initial label (Sec. 4.3). The posterior of r is then given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Overview", "sec_num": "4.1" }, { "text": "P (r | X,\u0176, \u03b2) \u221d b P b (\u0177 b | r)P E (r | X, \u03b2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Overview", "sec_num": "4.1" }, { "text": "Note that the posterior is influenced by both the multiexpert prior and the set of initial labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Overview", "sec_num": "4.1" }, { "text": "We use MAP inference to obtain the most likely rating of each sentence, i.e., we solve", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Overview", "sec_num": "4.1" }, { "text": "argmin r,\u03b2 \u2212 b log(P b (\u0177 b | r)) \u2212 log(P E (r | X, \u03b2)),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Overview", "sec_num": "4.1" }, { "text": "where as before \u03b2 denotes the model parameters. We solve the above optimization problem using cyclic coordinate descent (Friedman et al., 2008) .", "cite_spans": [ { "start": 120, "end": 143, "text": "(Friedman et al., 2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Model Overview", "sec_num": "4.1" }, { "text": "The multi-expert prior P E (r | X, \u03b2) consists of two component distributions N 1 and N 2 . Distribution N 1 integrates features from the base predictors, N 2 incorporates sentence similarities to propagate information across sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Expert Prior", "sec_num": "4.2" }, { "text": "In a slight abuse of notation, denote by x i the set of features for the i-th sentence. Vector x i contains the features of all the base predictors but also includes bigram features for increased coverage of syntactic patterns; see Sec. 4.4 for details about the feature design. Let m(x i ) = \u03b2 T x i be a linear predictor for r i , where \u03b2 is a real weight vector. Assuming Gaussian noise,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Expert Prior", "sec_num": "4.2" }, { "text": "r i follows a Gaussian distribution N 1 (r i | m i , \u03c3 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Expert Prior", "sec_num": "4.2" }, { "text": "with mean m i = m(x i ) and variance \u03c3 2 . Note that predictor m can be regarded as a linear combination of base predictors because both m and each of the base predictors are linear functions. 
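A minimal numeric sketch of this first component follows, with each row of X holding the stacked base-predictor features of one sentence; the array shapes, names, and values are illustrative only.

```python
# Sketch of the N1 component of the multi-expert prior (Sec. 4.2): a linear
# predictor m(x_i) = beta^T x_i over stacked features, with Gaussian noise.
import numpy as np

def n1_neg_log(r, X, beta, sigma2):
    """-log N1(r | X beta, sigma^2 I), up to an additive constant."""
    m = X @ beta                    # one prediction m_i per sentence
    return 0.5 * np.sum((r - m) ** 2) / sigma2

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 12))        # 5 sentences, 12 stacked base-predictor features
beta = np.zeros(12)                 # weight vector learned jointly with r
r = rng.normal(size=5)              # candidate sentence ratings
print(n1_neg_log(r, X, beta, sigma2=1.0))
```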
By integrating all features into a single function, the base predictors are trained jointly so that weight vector \u03b2 automatically adapts to domain-dependent properties of the data. This integrated approach significantly outperformed the alternative approach of using a weighted vote of the individual predictions made by the base predictors. We regularize the weight vector \u03b2 using a Laplace prior P (\u03b2 | \u03b1) with parameter \u03b1 to encourage sparsity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Expert Prior", "sec_num": "4.2" }, { "text": "Note that the bigram features in x i partially capture sentence similarity. However, such features cannot be extended to longer subsequences such as trigrams due to data sparsity: useful features become as infrequent as noisy terms. Moreover, we would like to capture sentence similarity using gapped (i.e., non-consecutive) subsequences. For example, the sentences \"The book is an easy read.\" and \"It is easy to read.\" are similar but do not share any consecutive bigrams. They do share the subsequence \"easy read\", however. To capture this similarity, we make use of a novel sentiment-augmented variant of word sequence kernels (Cancedda et al., 2003) . Our kernel is used to construct a similarity matrix W among sentences and the corresponding regularized Laplacian L. To capture the intuition that similar sentences should have similar ratings, we introduce a Gaussian prior N 2 (r | 0, L \u22121 ) as a component into our multi-expert prior; see Sec. 4.5 for details and a discussion of why this prior encourages similar ratings for similar sentences.", "cite_spans": [ { "start": 630, "end": 653, "text": "(Cancedda et al., 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-Expert Prior", "sec_num": "4.2" }, { "text": "Since the two component distributions feature different expertise, we take their product and obtain the multi-expert prior", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Expert Prior", "sec_num": "4.2" }, { "text": "P E (r | X, \u03b2) \u221d N 1 (r | m, I\u03c3 2 )N 2 (r | 0, L \u22121 )P (\u03b2 | \u03b1),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Expert Prior", "sec_num": "4.2" }, { "text": "where m = (m 1 , . . . , m N ). Note that the normalizing constant of P E can be ignored during MAP inference since it does not depend on \u03b2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Expert Prior", "sec_num": "4.2" }, { "text": "Recall that the initial labels\u0176 are strong indicators of semantic orientation associated with each sentence; they correspond to either discrete polarity labels or to continuous rating labels. This heterogeneity constitutes the main difficulty for incorporating the initial labels via the conditional distributions P b (\u0177 b | r). 
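Before we describe these conditional distributions, the following sketch shows how the pieces of Sec. 4.1 and 4.2 assemble into the objective minimized by MAP inference; label_terms stands in for the summed negative log-likelihoods of the initial labels defined below, L_reg denotes the regularized Laplacian of Sec. 4.5, and all names are ours.

```python
# Sketch of the MAP objective (Sec. 4.1-4.2): negative log posterior combining
# the multi-expert prior (N1, N2, Laplace prior on beta) with the initial-label
# likelihoods. The paper minimizes it over r and beta by cyclic coordinate descent.
import numpy as np

def map_objective(r, beta, X, L_reg, sigma2, alpha, label_terms):
    m = X @ beta
    n1 = 0.5 * np.sum((r - m) ** 2) / sigma2   # -log N1(r | m, sigma^2 I)
    n2 = 0.5 * r @ (L_reg @ r)                 # -log N2(r | 0, L_reg^{-1})
    lap = alpha * np.sum(np.abs(beta))         # -log P(beta | alpha), Laplace prior
    return n1 + n2 + lap + label_terms(r)      # + sum over b of -log P_b(y_hat_b | r)
```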
We assume independence throughout, so that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Initial Labels", "sec_num": "4.3" }, { "text": "P b (\u0177 b | r) = \u220f i P b (\u0177 b i | r i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Initial Labels", "sec_num": "4.3" }, { "text": "Rating Labels For continuous labels, we assume Gaussian noise and set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Initial Labels", "sec_num": "4.3" }, { "text": "P b (\u0177 b i | r i ) = N (\u0177 b i | r i , \u03b7 b i ), where the variance \u03b7 b i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Initial Labels", "sec_num": "4.3" }, { "text": "is type- and sentence-dependent. For SO-CAL labels, we simply set \u03b7 SO-CAL i = \u03b7 SO-CAL , where \u03b7 SO-CAL is a hyperparameter. The SO-CAL scores have limited influence in our overall model; we found that more complex designs lead to little improvement. We proceed differently for document ratings. Our experiments suggest that document ratings constitute the most important indicator of the SO of a sentence. Thus sentence ratings should be close to document ratings unless strong evidence to the contrary exists. In other words, we want variance \u03b7 Doc i to be small. When no manually created sentence-level polarity labels are available, we set the value of \u03b7 Doc i depending on the polarity class. In particular, we set \u03b7 Doc i = 1 for both positive and negative documents, and \u03b7 Doc i = 2 for neutral documents. The reasoning behind this choice is that sentence ratings in neutral documents express higher variance because these documents often contain a mixture of positive and negative sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Initial Labels", "sec_num": "4.3" }, { "text": "When a small set of manually created sentence polarity labels is available, we train a classifier that predicts whether the sentence polarity coincides with the document polarity. If so, we set the corresponding variance \u03b7 Doc i to a small value; otherwise, we choose a larger value. In particular, we train a logistic regression classifier (Bishop, 2006) using the following binary features: (1) an indicator variable for each document polarity, and (2) an indicator variable for each triple of base predictor, predicted polarity, and document polarity (set to 1 if the polarities match). We then set \u03b7 Doc i = (\u03c4 p i ) \u22121 , where p i is the probability of matching polarities obtained from the classifier and \u03c4 is a hyperparameter that ensures correct scaling.", "cite_spans": [ { "start": 341, "end": 355, "text": "(Bishop, 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Initial Labels", "sec_num": "4.3" }, { "text": "We now describe how to model the correlation between the polarity of a sentence and its rating. A simple and effective approach is to partition the range of ratings into three consecutive intervals, one for each polarity class. We thus consider the polarity classes {positive, other, negative} as ordered and formulate polarity classification as an ordinal regression problem (Chu and Ghahramani, 2006) . 
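The following is a small numeric sketch of this ordinal construction; the exact distribution is stated next, the boundaries b+ = 0.3 and b- = -0.3 follow footnote 2, and the value used for eta_b is an illustrative placeholder rather than a value from the paper.

```python
# Sketch of the ordinal-regression likelihood for polarity labels (Sec. 4.3).
# Boundaries follow footnote 2; eta_b is a placeholder value, not from the paper.
from math import erf, sqrt

def Phi(x):                       # standard Gaussian cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def polarity_probs(r_i, b_pos=0.3, b_neg=-0.3, eta_b=0.5):
    p_pos = Phi((r_i - b_pos) / eta_b)
    p_neg = Phi((b_neg - r_i) / eta_b)
    p_oth = Phi((b_pos - r_i) / eta_b) - Phi((b_neg - r_i) / eta_b)
    return {"positive": p_pos, "other": p_oth, "negative": p_neg}

probs = polarity_probs(0.8)                       # a clearly positive rating
assert abs(sum(probs.values()) - 1.0) < 1e-9      # the three probabilities sum to one
print(max(probs, key=probs.get))                  # -> positive
```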
We immediately obtain the distribution", "cite_spans": [ { "start": 381, "end": 407, "text": "(Chu and Ghahramani, 2006)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Polarity Labels", "sec_num": null }, { "text": "P b (\u0177 b i = pos | r i ) = \u03a6((r i \u2212 b + )/\u03b7 b ), P b (\u0177 b i = oth | r i ) = \u03a6((b + \u2212 r i )/\u03b7 b ) \u2212 \u03a6((b \u2212 \u2212 r i )/\u03b7 b ), P b (\u0177 b i = neg | r i ) = \u03a6((b \u2212 \u2212 r i )/\u03b7 b ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polarity Labels", "sec_num": null }, { "text": "where b + and b \u2212 are the partition boundaries between positive/other and other/negative, respectively, 2 \u03a6(x) denotes the cumulative distribution function of the Gaussian distribution, and variance \u03b7 b is a hyperparameter. It is easy to verify that \u2211 \u0177 b i \u2208Y P (\u0177 b i | r i ) = 1. The resulting distribution is shown in Fig. 1 . We can use the same distribution to apply MEM to sentence-level polarity classification; in this case, we pick the polarity with the highest probability.", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 325, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Polarity Labels", "sec_num": null }, { "text": "Base predictors are integrated into MEM via component N 1 (r i | m i , \u03c3 2 ) of the multi-expert prior (see Sec. 4.2). Recall that m i is a linear function of the features x i of each sentence. In this section, we discuss how x i is constructed from the features of the base predictors. New base predictors can be integrated easily by exposing their features to MEM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Base Predictors", "sec_num": "4.4" }, { "text": "Most base predictors operate on the phrase level; our goal is to construct features for the entire sentence. Denote by o b ij the feature vector that base predictor b associates with the j-th phrase of sentence i, and by n b i the number of such phrases; a straightforward approach would be to average these phrase-level features, i.e., to set x b i = (n b i ) \u22121 \u2211 j o b ij . We proceed slightly differently and average the features associated with phrases of positive prior polarity separately from those of phrases with negative prior polarity (Taboada et al., 2011) . We then concatenate the averaged feature vectors, i.e., we set x b i to the concatenation of the averages of the o b ij associated with phrases of prior polarity p, for p \u2208 {positive, negative}. This procedure allows us to learn a different weight for each feature depending on its context (e.g., the weight of intensifier \"very\" may differ for positive and negative phrases). We construct x i by concatenating the sentence-level features x b i of each base predictor and a feature vector of bigrams.", "cite_spans": [ { "start": 175, "end": 197, "text": "(Taboada et al., 2011)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Base Predictors", "sec_num": "4.4" }, { "text": "To integrate a base predictor, we only need to specify the relevant features and, if applicable, prior phrase polarities. For our choice of base predictors, we use the following features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Base Predictors", "sec_num": "4.4" }, { "text": "SO-CAL predictor. The prior polarity of a SO-CAL phrase is given by the polarity of its SO-carrying word in the SO-CAL lexicon. 
The feature vector o SO-CAL ij consists of the weight of the SO-carrying word from the lexicon as well as the set of negator words, irrealis marker words, and intensifier words in the phrase. Moreover, we add the first two words preceding the SO-carrying word as context features (skipping nouns, negators, irrealis markers, and intensifiers, and stopping at clause boundaries). All words are encoded as binary indicator features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Base Predictors", "sec_num": "4.4" }, { "text": "BoO predictor. Similar to SO-CAL, we determine the prior polarity of a phrase based on the BoO dictionary. In contrast to SO-CAL, we directly use the BoO score as a feature because the BoO predictor weights have been trained on a very large corpus and are thus reliable. We also add irrealis marker words in the form of indicator features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Base Predictors", "sec_num": "4.4" }, { "text": "Statistical polarity predictor. Recall that the statistical polarity predictor is based on co-occurrence counts of opinion-target pairs and document polarities. We treat each opinion-target pair as a phrase and use the most frequently co-occurring polarity as the phrase's prior polarity. We use the logarithm of the co-occurrence counts with positive, negative, and other polarity as features; this set of features performed better than using the co-occurrence counts or estimated class probabilities directly. We also add the same type of context features as for SO-CAL, but rescale each binary feature by the logarithm of the occurrence count #z of the opinion-target pair (i.e., the features take values in {0, log #z}).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Base Predictors", "sec_num": "4.4" }, { "text": "The component distribution N 2 (r | 0, L \u22121 ) in the multi-expert prior encourages similar sentences to have similar ratings. The main purpose of N 2 is to propagate information from sentences on which the base predictors perform well to sentences for which base prediction is unreliable or unavailable (e.g., because they do not contain SO-carrying words). To obtain this distribution, we first construct an N \u00d7 N sentence similarity matrix W using a sentiment-augmented word sequence kernel (see below). We then compute the regularized graph Laplacian L by adding I/\u03bb 2 to the unnormalized graph Laplacian D \u2212 W (Chapelle et al., 2006) , where D is a diagonal matrix with d ii = \u2211 j w ij and hyperparameter \u03bb 2 controls the scale of sentence ratings.", "cite_spans": [ { "start": 618, "end": 641, "text": "(Chapelle et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "To gain insight into distribution N 2 , observe that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "N 2 (r | 0, L \u22121 ) \u221d exp ( \u2212 1 2 \u2211 i,j w ij (r i \u2212 r j ) 2 \u2212 \u2016r\u2016 2 2 /\u03bb 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "The left term in the exponent forces the ratings of similar sentences to be similar: the larger the sentence similarity w ij , the more penalty is paid for dissimilar ratings. For this reason, N 2 has a smoothing effect. 
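To make the smoothing behaviour of N 2 tangible, here is a small numeric sketch of the regularized Laplacian and the resulting energy; it covers both terms of the exponent via the regularized Laplacian, and the similarity values and \u03bb 2 are illustrative only.

```python
# Sketch of the N2 component (Sec. 4.5): a toy similarity matrix W, the
# regularized graph Laplacian L = (D - W) + I / lambda^2, and the prior energy.
import numpy as np

W = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.1],
              [0.0, 0.1, 0.0]])          # kernel similarities w_ij (sentences 1 and 2 are close)
lam2 = 10.0
D = np.diag(W.sum(axis=1))
L_reg = (D - W) + np.eye(3) / lam2       # regularized graph Laplacian

def n2_energy(r):
    """0.5 * r^T L_reg r, i.e. -log N2(r | 0, L_reg^{-1}) up to a constant."""
    return 0.5 * r @ (L_reg @ r)

print(n2_energy(np.array([1.0, 1.1, -2.0])))   # similar sentences get similar ratings: low energy
print(n2_energy(np.array([1.0, -1.0, -2.0])))  # dissimilar ratings for similar sentences: higher energy
```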
The right term is an L2 regularizer and encourages small ratings; it is controlled by hyperparameter \u03bb 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "The entries w ij in the sentence similarity matrix determine the degree of smoothing for each pair of sentence ratings. We compute these values by a novel sentiment-augmented word sequence kernel, which extends the well-known word sequence kernel of Cancedda et al. (2003) by (1) BoO weights to strengthen the correlation of sentence similarity and rating similarity and (2) synonym resolution based on Word-Net (Miller, 1995) .", "cite_spans": [ { "start": 250, "end": 272, "text": "Cancedda et al. (2003)", "ref_id": "BIBREF2" }, { "start": 403, "end": 426, "text": "Word-Net (Miller, 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "In general, a word sequence kernel computes a similarity score of two sequences based on their shared subsequences. In more detail, we first define a score function for a pair of shared subsequences, and then sum up these scores to obtain the overall similarity score. Consider for example the two sentences \"The book is an easy read.\" (s 1 ) and \"It is easy to read.\" (s 2 ) along with the shared subsequence \"is easy read\" (u). Observe that the words \"an\" and \"to\" serve as gaps as they are not part of the subsequence. We represent subsequence u in sentence s via a real-valued projection function \u03c6 u (s). In our example, \u03c6 u (s 1 ) = \u03c5 is \u03c5 g an \u03c5 easy \u03c5 read and \u03c6 u (s 2 ) = \u03c5 is \u03c5 easy \u03c5 g to \u03c5 read . The decay factors \u03c5 w \u2208 (0, 1] for matching words characterize the importance of a word (large values for significant words). On the contrary, decay factors \u03c5 g w \u2208 (0, 1] for gap words are penalty terms for mismatches (small values for significant words). The score of subsequence u is defined as \u03c6 u (s 1 )\u03c6 u (s 2 ). Thus two shared subsequences have high similarity if they share significant words and few gaps. Following Cancedda et al. (2003) , we define the similarity between two sequences as", "cite_spans": [ { "start": 1136, "end": 1158, "text": "Cancedda et al. (2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "k n (s i , s j ) = u\u2208\u2126 n \u03c6 u (s i )\u03c6 u (s j ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "where \u2126 is a finite set of words and n denotes the length of the considered subsequences. This similarity function can be computed efficiently using dynamic programming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "To apply the word sequence kernel, we need to specify the decay factors. A traditional choice is \u03c5 w = log( N Nw )/ log(N ), where N w is the document frequency of the word w and N is the total number of documents. This IDF decay factor is not wellsuited to our setting: Important opinion words such as \"great\" have a low IDF value due to their high document frequency. To overcome this problem, we incorporate additional weights for SO-carrying words using the BoO lexicon. 
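Before giving our choice of decay factors, a small numeric sketch of the subsequence scoring described above may help; the decay values below are placeholders rather than the ones used in the paper.

```python
# Sketch of scoring the shared subsequence "is easy read" (Sec. 4.5) for the two
# example sentences. Matching words contribute v_w, gap words contribute v^g_w.
decay = {"is": 0.3, "easy": 0.8, "read": 0.8}   # decay factors for matching words
gap   = {"an": 0.9, "to": 0.9}                  # penalty factors for gap words

phi_s1 = decay["is"] * gap["an"] * decay["easy"] * decay["read"]  # "The book is an easy read."
phi_s2 = decay["is"] * decay["easy"] * gap["to"] * decay["read"]  # "It is easy to read."

score = phi_s1 * phi_s2     # this subsequence's contribution to k_3(s1, s2)
print(round(score, 4))
```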
To incorporate these weights, we first rescale the BoO weights into [0, 1] using the sigmoid g(w) = (1 + exp(\u2212a\u03c9 w + b)) \u22121 , where \u03c9 w denotes the BoO weight of word w. 3 We then set \u03c5 w = min(log(N/N w )/ log(N ) + g(w), 0.9). The decay factor for gaps is given by \u03c5 g w = 1 \u2212 \u03c5 w . Thus we strongly penalize gaps that consist of infrequent words or opinion words.", "cite_spans": [ { "start": 625, "end": 626, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "To address data sparsity, we incorporate synonyms and hypernyms from WordNet into our kernel. In particular, we represent words found in WordNet by their first two synset names (for verbs, adjectives, nouns) and their direct hypernym (nouns only). Two words are considered the same when their synsets overlap. Thus, for example, \"writer\" has the same representation as \"author\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "To build the similarity matrix W, we construct a k-nearest-neighbor graph for all sentences. 4 We consider subsequences consisting of three words (i.e., w ij = k 3 (s i , s j )); longer subsequences are overly sparse, shorter subsequences are covered by the bigram features in N 1 .", "cite_spans": [ { "start": 93, "end": 94, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Sentence Similarities", "sec_num": "4.5" }, { "text": "We evaluated both MEM and a number of alternative approaches for both sentence-level polarity classification and sentence-level strength prediction across a number of domains. We found that MEM outperforms state-of-the-art approaches by a significant margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We implemented MEM as well as the HCRF classifier of (T\u00e4ckstr\u00f6m and McDonald, 2011a; T\u00e4ckstr\u00f6m and McDonald, 2011b) , which is the best-performing estimator of sentence-level polarity in the weakly supervised setting reported in the literature. We train both methods using (1) only coarse labels (MEM-Coarse, HCRF-Coarse) and (2) additionally a small number of sentence polarities (MEM-Fine, HCRF-Fine 5 ). We also implemented a number of baselines for both polarity classification and strength prediction: a document oracle (DocOracle) that simply uses the document label for each sentence, the BoO rating predictor (Base BoO ), and the SO-CAL rating predictor (Base SO-CAL ). For polarity classification, we also compare our methods to the statistical polarity predictor (Base polarity ). To judge the effectiveness of our multi-expert prior for combining base predictors, we take the majority vote of all base predictors and document polarity as an additional baseline (Majority-Vote). Similarly, for strength prediction, we take the arithmetic mean of the document rating and the phrase-level predictions of Base BoO and Base SO-CAL as a baseline (Mean-Rating). 
We use the same hyperparameter setting for MEM across all our experiments.", "cite_spans": [ { "start": 53, "end": 84, "text": "(T\u00e4ckstr\u00f6m and McDonald, 2011a;", "ref_id": "BIBREF24" }, { "start": 85, "end": 115, "text": "T\u00e4ckstr\u00f6m and McDonald, 2011b)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We evaluated all methods on Amazon reviews from different domains using the corpus of Ding et al. (2008) and the test set of T\u00e4ckstr\u00f6m and McDonald (2011a) . For each domain, we constructed a large balanced dataset by randomly sampling 33,000 reviews from the corpus of Ding et al. (2008) . We chose the books, electronics, and music domains for our experiments; the dvd domain was used for development. For sentence polarity classification, we use the test set of T\u00e4ckstr\u00f6m and McDonald (2011a) , which contains roughly 60 reviews per domain (20 for each polarity). For strength evaluation, we created a test set of 300 pairs of sentences per domain from the polarity test set. Each pair consisted of two sentences of the same polarity; we manually determined which of the sentences is more positive. We chose this pairwise approach because (1) we wanted the evaluation to be invariant to the scale of the predicted ratings, and (2) it much easier for human annotators to rank a pair of sentences than to rank a large collection of sentences.", "cite_spans": [ { "start": 86, "end": 104, "text": "Ding et al. (2008)", "ref_id": "BIBREF7" }, { "start": 125, "end": 155, "text": "T\u00e4ckstr\u00f6m and McDonald (2011a)", "ref_id": "BIBREF24" }, { "start": 270, "end": 288, "text": "Ding et al. (2008)", "ref_id": "BIBREF7" }, { "start": 465, "end": 495, "text": "T\u00e4ckstr\u00f6m and McDonald (2011a)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We followed T\u00e4ckstr\u00f6m and McDonald (2011b) and used 3-fold cross-validation, where each fold consisted of a set of roughly 20 documents from the test set. In each fold, we merged the test set with the reviews from the corresponding domain. For MEM-Fine and HCRF-Fine, we use the data from the other two folds as fine-grained polarity annotations. For our experiments on polarity classification, we converted the predicted ratings of MEM, Base BoO , and Base SO-CAL into polarities by the method described in Sec. 4.3. We compare the performance of each method in terms of accuracy, which is defined as the fraction of correct predictions on the test set (correct label for polarity / correct ranking for strength). All reported numbers are averages over the three folds. In our tables, boldface numbers are statistically significant against all other methods (t-test, p-value 0.05). Table 1 summarizes the results of our experiments for sentence polarity classification. The base predictors perform poorly across all domains, mainly due to the aforementioned problems associated with averaging phrase-level predictions. In fact, DocOracle performs almost always better than any of the base predictors. 
However, accuracy increases when we combine base predictors and DocOracle using majority voting, which indicates that ensemble methods work well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "When no fine-grained annotations are available (HCRF-Coarse, MEM-Coarse), both MEM-Coarse and Majority-Vote outperformed HCRF-Coarse, which in turn has been shown to outperform a number of lexicon-based methods as well as classifiers trained on document labels (T\u00e4ckstr\u00f6m and McDonald, 2011a) . MEM-Coarse also performs better than Majority-Vote. This is because MEM propagates evidence across similar sentences, which is especially useful when no explicit SO-carrying words exist. Also, MEM learns weights of features of base predictors, which leads to a more adaptive integration, and our ordinal regression formulation for polarity prediction allows direct competition between positive and negative evidence for improved accuracy. When we incorporate a small amount of sentence polarity labels (HCRF-Fine, MEM-Fine), the accuracy of all models greatly improves. HCRF-Fine has been shown to outperform the strongest supervised method on the same dataset (McDonald et al., 2007; T\u00e4ckstr\u00f6m and McDonald, 2011b) . MEM-Fine falls short of HCRF-Fine only in the electronics domain but performs better on all other domains. In the book and music domains, where MEM-Fine is particularly effective, many sentences feature complex syntactic structure and SO-carrying words are often used without reference to the quality of the product (but to describe contents, e.g., \"a love story\" or \"a horrible accident\").", "cite_spans": [ { "start": 261, "end": 292, "text": "(T\u00e4ckstr\u00f6m and McDonald, 2011a)", "ref_id": "BIBREF24" }, { "start": 954, "end": 977, "text": "(McDonald et al., 2007;", "ref_id": "BIBREF15" }, { "start": 978, "end": 1008, "text": "T\u00e4ckstr\u00f6m and McDonald, 2011b)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Results for Polarity Classification", "sec_num": "5.2" }, { "text": "Our models perform especially well when they are applied to sentences containing no or few opinion words from lexicons. Table 2 reports the evaluation results for both sentences containing SO-carrying words from either MPQA or SO-CAL lexicons and for sentences containing no such words. The results explain why our model falls short of HCRF-Fine in the electronics domain: reviews of electronic products contain many SO-carrying words, which almost always express opinions. 
Nevertheless, MEM-Fine handles sentences without explicit SO-carrying words well across all domains; here the propagation of information across sentences helps to learn the SO of facts (such as \"short battery life\").", "cite_spans": [], "ref_spans": [ { "start": 120, "end": 127, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results for Polarity Classification", "sec_num": "5.2" }, { "text": "Table 2 : Accuracy of polarity classification for sentences with opinion words (op) and without opinion words (fact). Book: HCRF-Fine 55.7 (op) / 55.9 (fact), MEM-Fine 58.9 (op) / 62.4 (fact). Electronics: HCRF-Fine 63.3 (op) / 54.6 (fact), MEM-Fine 60.7 (op) / 56.7 (fact). Music: HCRF-Fine 59.0 (op) / 57.4 (fact), MEM-Fine 64.5 (op) / 60.8 (fact).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for Polarity Classification", "sec_num": "5.2" }, { "text": "We found that for all methods, most of the errors are caused by misclassifying positive/negative sentences as other and vice versa. Moreover, sentences with polarity opposite to the document polarity are hard cases if they do not feature frequent strong patterns. Another difficulty lies in off-topic sentences, which may contain explicit SO-carrying words but are not related to the item under review. This is one of the main reasons for the poor performance of the lexicon-based methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for Polarity Classification", "sec_num": "5.2" }, { "text": "Overall, we found that MEM-Fine is the method of choice. Thus our multi-expert model can indeed balance the strengths of the individual experts to obtain better estimation accuracy. Table 3 shows the accuracy results for strength prediction. Here our models outperformed all baselines by a large margin. Although document ratings are strong indicators in the polarity classification task, they lead to worse performance than lexicon-based methods. The main reason for this drop in accuracy is that the document oracle assigns the same rating to all sentences within a review. Thus DocOracle cannot rank sentences from the same review, which is a severe limitation. This shortcoming can be partly compensated for by averaging the base predictions and document rating (Mean-Rating). Note that it is nontrivial to apply existing ensemble methods to learn weights for the individual base predictors because sentence ratings are not available as training labels. In contrast, our MEM models use indirect supervision to adaptively assign weights to the features from base predictors. Similar to polarity classification, a small amount of sentence polarity labels often improved the performance of MEM. ", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 188, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results for Polarity Classification", "sec_num": "5.2" }, { "text": "We proposed the Multi-Experts Model for analyzing both opinion polarity and opinion strength at the sentence level. MEM is weakly supervised; it can run without any fine-grained annotations but is also able to leverage such annotations when available. MEM is driven by a novel multi-expert prior, which integrates a number of diverse base predictors and propagates information across sentences using a sentiment-augmented word sequence kernel. 
Our experiments indicate that MEM achieves better overall accuracy than alternative methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We assign polarity other to text fragments that are off-topic or not directly related to the entity under review.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We set b + = 0.3 and b \u2212 = \u22120.3 to calibrate to SO-CAL, which treats ratings in [\u22120.3, 0, 3] as polarity other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We set a = 2 and b = 1 in our experiments.4 We use k = 15 and only consider neighbors with a similarity above 0.001.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the best-performing model that fuses HCRF-Coarse and the supervised model(McDonald et al., 2007) by interpolation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples", "authors": [ { "first": "Mikhail", "middle": [], "last": "Belkin", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Niyogi", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Sindhwani", "suffix": "" } ], "year": 2006, "venue": "The Journal of Machine Learning Research", "volume": "7", "issue": "", "pages": "2399--2434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. 2006. Manifold regularization: A geometric frame- work for learning from labeled and unlabeled examples. The Journal of Machine Learning Research, 7:2399- 2434.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Pattern recognition and machine learning", "authors": [ { "first": "Christopher", "middle": [ "M" ], "last": "Bishop", "suffix": "" } ], "year": 2006, "venue": "", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher M. Bishop. 2006. Pattern recognition and machine learning, volume 4. Springer New York.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Word-sequence kernels", "authors": [ { "first": "Nicola", "middle": [], "last": "Cancedda", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Goutte", "suffix": "" }, { "first": "Jean-Michel", "middle": [], "last": "Renders", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1059--1082", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicola Cancedda,\u00c9ric Gaussier, Cyril Goutte, and Jean- Michel Renders. 2003. Word-sequence kernels. Jour- nal of Machine Learning Research, 3:1059-1082.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Semi-Supervised Learning", "authors": [ { "first": "Oliver", "middle": [], "last": "Chapelle", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Zien", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver Chapelle, Bernhard Sch\u00f6lkopf, and Alexander Zien. 2006. Semi-Supervised Learning. 
MIT Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Adapting a polarity lexicon using integer linear programming for domainspecific sentiment classification", "authors": [ { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "2", "issue": "", "pages": "590--598", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yejin Choi and Claire Cardie. 2009. Adapting a polarity lexicon using integer linear programming for domain- specific sentiment classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, volume 2, pages 590-598.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Gaussian processes for ordinal regression", "authors": [ { "first": "Wei", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2006, "venue": "Journal of Machine Learning Research", "volume": "6", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Chu and Zoubin Ghahramani. 2006. Gaussian pro- cesses for ordinal regression. Journal of Machine Learning Research, 6(1):1019.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Ensemble learning. The Handbook of Brain Theory and Neural Networks", "authors": [ { "first": "Thomas", "middle": [ "G" ], "last": "Dietterichl", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "405--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas G. Dietterichl. 2002. Ensemble learning. The Handbook of Brain Theory and Neural Networks, pages 405-408.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A holistic lexicon-based approach to opinion mining", "authors": [ { "first": "Xiaowen", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Philip", "middle": [ "S" ], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "231--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the International Conference on Web Search and Data Mining, pages 231-240.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Is combining classifiers with stacking better than selecting the best one?", "authors": [ { "first": "Saso", "middle": [], "last": "Dzeroski", "suffix": "" }, { "first": "Bernard", "middle": [], "last": "Zenko", "suffix": "" } ], "year": 2004, "venue": "Machine Learning", "volume": "54", "issue": "", "pages": "255--273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saso Dzeroski and Bernard Zenko. 2004. Is combining classifiers with stacking better than selecting the best one? 
Machine Learning, 54(3):255-273.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Regularization paths for generalized linear models via coordinate descent", "authors": [ { "first": "Jerome", "middle": [ "H" ], "last": "Friedman", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Hastie", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Tibshirani", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerome H. Friedman, Trevor Hastie, and Rob Tibshirani. 2008. Regularization paths for generalized linear mod- els via coordinate descent. Technical report.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Chinese sentencelevel sentiment classification based on fuzzy sets", "authors": [ { "first": "Guohong", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "312--319", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guohong Fu and Xin Wang. 2010. Chinese sentence- level sentiment classification based on fuzzy sets. In Proceedings of the International Conference on Com- putational Linguistics, pages 312-319. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Seeing stars when there aren't many stars: Graph-based semisupervised learning for sentiment categorization", "authors": [ { "first": "B", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2006, "venue": "HLT-NAACL 2006 Workshop on Textgraphs: Graphbased Algorithms for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew B. Goldberg and Xiaojun Zhu. 2006. Seeing stars when there aren't many stars: Graph-based semi- supervised learning for sentiment categorization. In HLT-NAACL 2006 Workshop on Textgraphs: Graph- based Algorithms for Natural Language Processing.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Mining and summarizing customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "168--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summariz- ing customer reviews. 
In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 168-177.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Sentence-level opinion analysis by copeopi in ntcir-7", "authors": [ { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "" }, { "first": "I-Chien", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chia-Ying", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Hsin-Hsi", "middle": [], "last": "Kuan Hua Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2008, "venue": "Proceedings of NTCIR-7 Workshop Meeting", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lun-Wei Ku, I-Chien Liu, Chia-Ying Lee, Kuan hua Chen, and Hsin-Hsi Chen. 2008. Sentence-level opinion anal- ysis by copeopi in ntcir-7. In Proceedings of NTCIR-7 Workshop Meeting.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Isotonic Conditional Random Fields and Local Sentiment Flow", "authors": [ { "first": "Yi", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Lebanon", "suffix": "" } ], "year": 2006, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "961--968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Mao and Guy Lebanon. 2006. Isotonic Conditional Random Fields and Local Sentiment Flow. Advances in Neural Information Processing Systems, pages 961- 968.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Structured models for fine-to-coarse sentiment analysis", "authors": [ { "first": "Ryan", "middle": [ "T" ], "last": "Mcdonald", "suffix": "" }, { "first": "Kerry", "middle": [], "last": "Hannan", "suffix": "" }, { "first": "Tyler", "middle": [], "last": "Neylon", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Wells", "suffix": "" }, { "first": "Jeffrey", "middle": [ "C" ], "last": "Reynar", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Annual Meeting on Association for Computational Linguistics", "volume": "45", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan T. McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeffrey C. Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In Proceedings of the Annual Meeting on Association for Computational Linguistics, volume 45, page 432.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "WordNet: a lexical database for English", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2004. 
A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the Annual Meeting on Association for Computational Linguistics, pages 271-278.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "124--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the Annual Meeting of the Association for Computational Linguis- tics, pages 124-131.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Thumbs up?: sentiment classification using machine learning techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using ma- chine learning techniques. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 79-86.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Expanding Domain Sentiment Lexicon through Double Propagation", "authors": [ { "first": "Guang", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Bu", "suffix": "" }, { "first": "Chun", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2009, "venue": "International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1199--1204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding Domain Sentiment Lexicon through Dou- ble Propagation. In International Joint Conference on Artificial Intelligence, pages 1199-1204.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The bag-of-opinions method for review rating prediction from sparse text patterns", "authors": [ { "first": "Lizhen", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Georgiana", "middle": [], "last": "Ifrim", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "913--921", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lizhen Qu, Georgiana Ifrim, and Gerhard Weikum. 2010. The bag-of-opinions method for review rating predic- tion from sparse text patterns. 
In Proceedings of the International Conference on Computational Linguis- tics, pages 913-921.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Gaussian processes in machine learning", "authors": [ { "first": "Carl", "middle": [ "Edward" ], "last": "Rasmussen", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl Edward Rasmussen. 2004. Gaussian processes in machine learning. Springer.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Lexiconbased methods for sentiment analysis", "authors": [ { "first": "Maite", "middle": [], "last": "Taboada", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Brooke", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Tofiloski", "suffix": "" }, { "first": "Kimberly", "middle": [ "D" ], "last": "Voll", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics", "volume": "37", "issue": "2", "pages": "267--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kim- berly D. Voll, and Manfred Stede. 2011. Lexicon- based methods for sentiment analysis. Computational Linguistics, 37(2):267-307.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Discovering Fine-Grained Sentiment with Latent Variable Structured Prediction Models", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [ "T" ], "last": "Mcdonald", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the European Conference on Information Retrieval", "volume": "", "issue": "", "pages": "368--374", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m and Ryan T. McDonald. 2011a. Dis- covering Fine-Grained Sentiment with Latent Variable Structured Prediction Models. In Proceedings of the European Conference on Information Retrieval, pages 368-374.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Semisupervised latent variable models for sentence-level sentiment analysis", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [ "T" ], "last": "Mcdonald", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "569--574", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m and Ryan T. McDonald. 2011b. Semi- supervised latent variable models for sentence-level sentiment analysis. In Proceedings of the Annual Meet- ing of the Association for Computational Linguistics, pages 569-574.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Recognizing contextual polarity in phrase-level sentiment analysis", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Human Language Technology Conference and the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "347--354", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level senti- ment analysis. 
In Proceedings of the Human Language Technology Conference and the Conference on Empir- ical Methods in Natural Language Processing, pages 347-354.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning from labeled and unlabeled data with label propagation", "authors": [ { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojin Zhu and Zoubin Ghahramani. 2002. Learning from labeled and unlabeled data with label propagation. Technical report.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Semi-supervised learning using Gaussian fields and harmonic functions", "authors": [ { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "912--919", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. 2003. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the Inter- national Conference on Machine Learning, pages 912- 919.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Movie review mining and summarization", "authors": [ { "first": "Li", "middle": [], "last": "Zhuang", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Jing", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Zhuang, Feng Jing, and Xiaoyan Zhu. 2006. Movie review mining and summarization. In Proceedings of the ACM international conference on Information and knowledge management, pages 43-50.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Distribution of polarity given rating.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "tence. Denote by $n^b_i$ the number of phrases in the $i$-th sentence covered by base predictor $b$, and let $o^b_{ij}$ denote a set of associated features. Features $o^b_{ij}$ may or may not correspond directly to the features of base predictor $b$; see the discussion below. A straightforward strategy is to set $x^b$", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "where $\u014d^{b,p}_{ij}$ denotes the average of the feature vectors $o^b$", "uris": null, "num": null, "type_str": "figure" }, "TABREF1": { "content": "