{ "paper_id": "P14-1031", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:07:59.848784Z" }, "title": "Context-aware Learning for Sentence-level Sentiment Analysis with Posterior Regularization", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cornell University", "location": {} }, "email": "bishan@cs.cornell.edu" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cornell University", "location": {} }, "email": "cardie@cs.cornell.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes a novel context-aware method for analyzing sentiment at the level of individual sentences. Most existing machine learning approaches suffer from limitations in the modeling of complex linguistic structures across sentences and often fail to capture nonlocal contextual cues that are important for sentiment interpretation. In contrast, our approach allows structured modeling of sentiment while taking into account both local and global contextual information. Specifically, we encode intuitive lexical and discourse knowledge as expressive constraints and integrate them into the learning of conditional random field models via posterior regularization. The context-aware constraints provide additional power to the CRF model and can guide semi-supervised learning when labeled data is limited. Experiments on standard product review datasets show that our method outperforms the state-of-theart methods in both the supervised and semi-supervised settings.", "pdf_parse": { "paper_id": "P14-1031", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes a novel context-aware method for analyzing sentiment at the level of individual sentences. Most existing machine learning approaches suffer from limitations in the modeling of complex linguistic structures across sentences and often fail to capture nonlocal contextual cues that are important for sentiment interpretation. In contrast, our approach allows structured modeling of sentiment while taking into account both local and global contextual information. Specifically, we encode intuitive lexical and discourse knowledge as expressive constraints and integrate them into the learning of conditional random field models via posterior regularization. The context-aware constraints provide additional power to the CRF model and can guide semi-supervised learning when labeled data is limited. Experiments on standard product review datasets show that our method outperforms the state-of-theart methods in both the supervised and semi-supervised settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The ability to extract sentiment from text is crucial for many opinion-mining applications such as opinion summarization, opinion question answering and opinion retrieval. Accordingly, extracting sentiment at the fine-grained level (e.g. 
at the sentence- or phrase-level) has received increasing attention recently due to its challenging nature and its importance in supporting these opinion analysis tasks (Pang and Lee, 2008).", "cite_spans": [ { "start": 406, "end": 426, "text": "(Pang and Lee, 2008)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on the task of sentence-level sentiment classification in online reviews. Typical approaches to the task employ supervised machine learning algorithms with rich features and take into account the interactions between words to handle compositional effects such as polarity reversal (e.g. Nakagawa et al., 2010; Socher et al., 2013) . Still, such methods can encounter difficulty when the sentence on its own does not contain strong enough sentiment signals (due to the lack of statistical evidence or the requirement for background knowledge). Consider the following review for example: [example review omitted]. Existing feature-based classifiers may be effective in identifying the positive sentiment of the first sentence due to the use of the word revelation, but they could be less effective in the last two sentences due to the lack of explicit sentiment signals. However, if we examine these sentences within the discourse context, we can see that the second sentence expresses sentiment towards the same aspect -the music -as the first sentence, and the third sentence expands the second sentence with the discourse connective In fact. These discourse-level relations help indicate that sentences 2 and 3 are likely to have positive sentiment as well.", "cite_spans": [ { "start": 310, "end": 333, "text": "(Nakagawa et al., 2010;", "ref_id": "BIBREF15" }, { "start": 334, "end": 354, "text": "Socher et al., 2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The importance of discourse for sentiment analysis has become increasingly recognized. Most existing work considers discourse relations between adjacent sentences or clauses and incorporates them as constraints (Kanayama and Nasukawa, 2006; Zhou et al., 2011) or as features in classifiers (Trivedi and Eisenstein, 2013; Lazaridou et al., 2013) . Very little work has explored long-distance discourse relations for sentiment analysis. Somasundaran et al. (2008) defines coreference relations on opinion targets and applies them to constrain the polarity of sentences.", "cite_spans": [ { "start": 211, "end": 240, "text": "(Kanayama and Nasukawa, 2006;", "ref_id": "BIBREF10" }, { "start": 241, "end": 259, "text": "Zhou et al., 2011)", "ref_id": "BIBREF30" }, { "start": 287, "end": 316, "text": "(Trivedi and Eisenstein, 2013;", "ref_id": "BIBREF27" }, { "start": 317, "end": 340, "text": "Lazaridou et al., 2013)", "ref_id": "BIBREF11" }, { "start": 431, "end": 457, "text": "Somasundaran et al. (2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, the discourse relations were obtained from fine-grained annotations and implemented as hard constraints on polarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Obtaining sentiment labels at the fine-grained level is costly. Semi-supervised techniques have been proposed for sentence-level sentiment classification (T\u00e4ckstr\u00f6m and McDonald, 2011a; Qu et al., 2012). 
However, they rely on a large number of document-level sentiment labels that may not be naturally available in many domains.", "cite_spans": [ { "start": 154, "end": 185, "text": "(T\u00e4ckstr\u00f6m and McDonald, 2011a;", "ref_id": "BIBREF25" }, { "start": 186, "end": 202, "text": "Qu et al., 2012)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a sentence-level sentiment classification method that can (1) incorporate rich discourse information at both local and global levels;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) encode discourse knowledge as soft constraints during learning; and (3) make use of unlabeled data to enhance learning. Specifically, we use the Conditional Random Field (CRF) model as the learner for sentence-level sentiment classification, and incorporate rich discourse and lexical knowledge as soft constraints into the learning of CRF parameters via Posterior Regularization (PR) (Ganchev et al., 2010) . As a framework for structured learning with constraints, PR has been successfully applied to many structured NLP tasks (Ganchev et al., 2009; Ganchev et al., 2010; Ganchev and Das, 2013) . Our work is the first to explore PR for sentiment analysis. Unlike most previous work, we explore a rich set of structural constraints that cannot be naturally encoded in the feature-label form, and show that such constraints can improve the performance of the CRF model. We evaluate our approach on the sentence-level sentiment classification task using two standard product review datasets. Experimental results show that our model outperforms state-of-the-art methods in both the supervised and semi-supervised settings. We also show that discourse knowledge is highly useful for improving sentence-level sentiment classification.", "cite_spans": [ { "start": 385, "end": 407, "text": "(Ganchev et al., 2010)", "ref_id": "BIBREF6" }, { "start": 529, "end": 551, "text": "(Ganchev et al., 2009;", "ref_id": "BIBREF5" }, { "start": 552, "end": 573, "text": "Ganchev et al., 2010;", "ref_id": "BIBREF6" }, { "start": 574, "end": 596, "text": "Ganchev and Das, 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There has been a large amount of work on sentiment analysis at various levels of granularity (Pang and Lee, 2008) . In this paper, we focus on the study of sentence-level sentiment classification. Existing machine learning approaches for the task can be classified based on the use of two ideas. The first idea is to exploit sentiment signals at the sentence level by learning the relevance of sentiment and words while taking into account the context in which they occur: Nakagawa et al. (2010) uses a tree-CRF to model word interactions based on dependency tree structures; Choi and Cardie (2008) applies compositional inference rules to handle polarity reversal; Socher et al. (2011) and Socher et al. (2013) compute compositional vector representations for words and phrases and use them as features in a classifier.", "cite_spans": [ { "start": 93, "end": 113, "text": "(Pang and Lee, 2008)", "ref_id": "BIBREF17" }, { "start": 473, "end": 495, "text": "Nakagawa et al. (2010)", "ref_id": "BIBREF15" }, { "start": 574, "end": 596, "text": "Choi and Cardie (2008)", "ref_id": "BIBREF1" }, { "start": 664, "end": 684, "text": "Socher et al. 
(2011)", "ref_id": "BIBREF21" }, { "start": 689, "end": 709, "text": "Socher et al. (2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The second idea is to exploit sentiment signals at the inter-sentential level. Polanyi and Zaenen (2006) argue that discourse structure is important in polarity classification. Various attempts have been made to incorporate discourse relations into sentiment analysis: Pang and Lee (2004) explored the consistency of subjectivity between neighboring sentences; Mao and Lebanon (2007) , McDonald et al. (2007) , and T\u00e4ckstr\u00f6m and McDonald (2011a) developed structured learning models to capture sentiment dependencies between adjacent sentences; Kanayama and Nasukawa (2006) and Zhou et al. (2011) use discourse relations to constrain two text segments to have either the same polarity or opposite polarities; Trivedi and Eisenstein (2013) and Lazaridou et al. (2013) encode the discourse connectors as model features in supervised classifiers. Very little work has explored long-distance discourse relations. Somasundaran et al. (2008) define opinion target relations and apply them to constrain the polarity of text segments annotated with target relations. Recently, Zhang et al. (2013) explored the use of explanatory discourse relations as soft constraints in a Markov Logic Network framework for extracting subjective text segments.", "cite_spans": [ { "start": 79, "end": 104, "text": "Polanyi and Zaenen (2006)", "ref_id": "BIBREF18" }, { "start": 269, "end": 288, "text": "Pang and Lee (2004)", "ref_id": "BIBREF16" }, { "start": 361, "end": 383, "text": "Mao and Lebanon (2007)", "ref_id": "BIBREF13" }, { "start": 386, "end": 408, "text": "McDonald et al. (2007)", "ref_id": "BIBREF14" }, { "start": 415, "end": 445, "text": "T\u00e4ckstr\u00f6m and McDonald (2011a)", "ref_id": "BIBREF25" }, { "start": 545, "end": 573, "text": "Kanayama and Nasukawa (2006)", "ref_id": "BIBREF10" }, { "start": 578, "end": 596, "text": "Zhou et al. (2011)", "ref_id": "BIBREF30" }, { "start": 709, "end": 738, "text": "Trivedi and Eisenstein (2013)", "ref_id": "BIBREF27" }, { "start": 743, "end": 766, "text": "Lazaridou et al. (2013)", "ref_id": "BIBREF11" }, { "start": 909, "end": 935, "text": "Somasundaran et al. (2008)", "ref_id": "BIBREF23" }, { "start": 1069, "end": 1088, "text": "Zhang et al. (2013)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Leveraging both ideas, our approach exploits sentiment signals from both intra-sentential and inter-sentential context. It has the advantages of utilizing rich discourse knowledge at different levels of context and encoding it as soft constraints during learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our approach is also semi-supervised. 
Compared to the existing work on semi-supervised learning for sentence-level sentiment classification (T\u00e4ckstr\u00f6m and McDonald, 2011a; T\u00e4ckstr\u00f6m and McDonald, 2011b; Qu et al., 2012) , our work does not rely on a large amount of coarse-grained (document-level) labeled data; instead, distant supervision mainly comes from linguistically-motivated constraints.", "cite_spans": [ { "start": 140, "end": 171, "text": "(T\u00e4ckstr\u00f6m and McDonald, 2011a;", "ref_id": "BIBREF25" }, { "start": 172, "end": 202, "text": "T\u00e4ckstr\u00f6m and McDonald, 2011b;", "ref_id": "BIBREF26" }, { "start": 203, "end": 219, "text": "Qu et al., 2012)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our work also relates to the study of posterior regularization (PR) (Ganchev et al., 2010) . PR has been successfully applied to many structured NLP tasks such as dependency parsing, information extraction and cross-lingual learning tasks (Ganchev et al., 2009; Bellare et al., 2009; Ganchev et al., 2010; Ganchev and Das, 2013) . Most previous work using PR mainly experiments with feature-label constraints. In contrast, we explore a rich set of linguistically-motivated constraints which cannot be naturally formulated in the feature-label form. We also show that constraints derived from the discourse context can be highly useful for disambiguating sentence-level sentiment.", "cite_spans": [ { "start": 68, "end": 90, "text": "(Ganchev et al., 2010)", "ref_id": "BIBREF6" }, { "start": 239, "end": 261, "text": "(Ganchev et al., 2009;", "ref_id": "BIBREF5" }, { "start": 262, "end": 283, "text": "Bellare et al., 2009;", "ref_id": "BIBREF0" }, { "start": 284, "end": 305, "text": "Ganchev et al., 2010;", "ref_id": "BIBREF6" }, { "start": 306, "end": 328, "text": "Ganchev and Das, 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we present the details of our proposed approach. We formulate the sentence-level sentiment classification task as a sequence labeling problem. The inputs to the model are sentence-segmented documents annotated with sentence-level sentiment labels (positive, negative or neutral) along with a set of unlabeled documents. During prediction, the model outputs sentiment labels for a sequence of sentences in the test document. We utilize conditional random fields and use Posterior Regularization (PR) to learn their parameters with a rich set of context-aware constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "In what follows, we first briefly describe the framework of Posterior Regularization. Then we introduce the context-aware constraints derived from intuitive discourse and lexical knowledge. Finally, we describe how to perform learning and inference with these constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "PR is a framework for structured learning with constraints (Ganchev et al., 2010) . In this work, we apply PR in the context of CRFs for sentence-level sentiment classification.", "cite_spans": [ { "start": 59, "end": 81, "text": "(Ganchev et al., 2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "Denote x as a sequence of sentences within a document and y as a vector of sentiment labels associated with x. 
The CRF model defines the following conditional probability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "p_\u03b8(y|x) = exp(\u03b8 \u2022 f(x, y)) / Z_\u03b8(x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "where f (x, y) are the model features, \u03b8 are the model parameters, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "Z_\u03b8(x) = \u2211_y exp(\u03b8 \u2022 f(x, y))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "is a normalization constant. The objective function for a standard CRF is to maximize the log-likelihood over a collection of labeled documents plus a regularization term:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "max_\u03b8 L(\u03b8) = max_\u03b8 \u2211_{(x,y)} log p_\u03b8(y|x) \u2212 ||\u03b8||_2^2 / (2\u03b4^2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "PR makes the assumption that the labeled data we have is not enough for learning good model parameters, but we have a set of constraints on the posterior distribution of the labels. We can define the set of desirable posterior distributions as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "Q = {q(Y) : E_q[\u03c6(X, Y)] = b} (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "where \u03c6 is a constraint function, b is a vector of desired values of the expectations of the constraint functions under the distribution q 1 . Note that the distribution q is defined over a collection of unlabeled documents where the constraint functions apply, and we assume independence between documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "The PR objective can be written as the original model objective penalized with a regularization term, which minimizes the KL-divergence between the desired model posteriors and the learned model posteriors with an L2 penalty 2 for the constraint violations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "max_\u03b8 L(\u03b8) \u2212 min_{q \u2208 Q} {KL(q(Y) || p_\u03b8(Y|X)) + \u03b2 ||E_q[\u03c6(X, Y)] \u2212 b||_2^2}", "eq_num": "(2)" } ], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "The objective can be optimized by an EM-like scheme that iteratively solves the minimization problem and the maximization problem. Solving the minimization problem is equivalent to solving its dual since the objective is convex. 
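To make the penalty term in objective (2) concrete before turning to its dual, the following illustrative-only sketch computes KL(q||p) plus the squared constraint gap for a toy discrete distribution. The function name and the explicit vector representation are our own assumptions; the actual model works with structured distributions over whole label sequences.

```python
import numpy as np

def pr_penalty(q, p, phi, b, beta):
    """KL(q || p) + beta * ||E_q[phi] - b||_2^2 for explicit discrete q, p.

    q, p: probability vectors over outcomes; phi: one constraint-feature
    row per outcome; b: desired expectations; beta: penalty weight.
    """
    q, p = np.asarray(q, float), np.asarray(p, float)
    kl = float(np.sum(q * np.log(q / p)))                     # KL-divergence term
    gap = q @ np.asarray(phi, float) - np.asarray(b, float)   # E_q[phi] - b
    return kl + beta * float(gap @ gap)                       # L2 penalty on violations

# Toy check: three outcomes, one constraint whose desired expectation is 0.9.
print(pr_penalty(q=[0.7, 0.2, 0.1], p=[0.5, 0.3, 0.2],
                 phi=[[1.0], [0.0], [0.0]], b=[0.9], beta=0.5))
```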
The dual problem is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "arg max_\u03bb \u03bb \u2022 b \u2212 log Z_\u03bb(X) \u2212 (1/(4\u03b2)) ||\u03bb||_2^2", "eq_num": "(3)" } ], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "We optimize the objective function (2) using a stochastic projected gradient, and compute the learning rate using AdaGrad (Duchi et al., 2010) .", "cite_spans": [ { "start": 118, "end": 138, "text": "(Duchi et al., 2010)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization", "sec_num": "3.1" }, { "text": "We develop a rich set of context-aware posterior constraints for sentence-level sentiment analysis by exploiting lexical and discourse knowledge. Specifically, we construct the lexical constraints by extracting sentiment-bearing patterns within sentences and construct the discourse-level constraints by extracting discourse relations that indicate sentiment coherence or sentiment changes both within and across sentences. Each constraint can be formulated as an equality between the expectation of a constraint function value and a desired value set by prior knowledge. The equality is not strictly enforced (due to the regularization in the PR objective (2)). Therefore all the constraints are applied as soft constraints. Table 1 provides intuitive descriptions and examples for all the constraints used in our model.", "cite_spans": [], "ref_spans": [ { "start": 721, "end": 728, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Context-aware Posterior Constraints", "sec_num": "3.2" }, { "text": "Lexical Patterns The existence of a polarity-carrying word alone may not correctly indicate the polarity of the sentence, as the polarity can be reversed by other polarity-reversing words. We extract lexical patterns that consist of polar words and negators 3 , and apply the heuristics based on compositional semantics (Choi and Cardie, 2008) to assign a sentiment value to each pattern.", "cite_spans": [ { "start": 319, "end": 342, "text": "(Choi and Cardie, 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Context-aware Posterior Constraints", "sec_num": "3.2" }, { "text": "We encode the extracted lexical patterns along with their sentiment values as feature-label constraints. The constraint function can be written as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Posterior Constraints", "sec_num": "3.2" }, { "text": "\u03c6_w(x, y) = \u2211_i f_w(x_i, y_i), where f_w(x_i, y_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Posterior Constraints", "sec_num": "3.2" }, { "text": "is a feature function which has value 1 when sentence x_i contains the lexical pattern w and its sentiment label y_i equals the expected sentiment value, and has value 0 otherwise. The constraint expectation value is set to be the prior probability of associating w with its sentiment value. Note that sentences with neutral sentiment can also contain such lexical patterns. Therefore we allow the lexical patterns to be assigned a neutral sentiment with a prior probability r_0 (we compute this value as the empirical probability of neutral sentiment in the training documents). 
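As a rough illustration of these feature-label constraints, here is a minimal, self-contained sketch; the pattern list, the substring matching, and the helper names are toy stand-ins for the MPQA-derived patterns and priors described in the text.

```python
POS, NEG, NEU = "positive", "negative", "neutral"

# Toy stand-ins for the extracted lexical patterns and their sentiment values.
LEXICAL_PATTERNS = {"amazing": POS, "not disappointed": POS,
                    "annoying": NEG, "hate": NEG}

def phi_w(pattern, sentences, labels):
    """phi_w(x, y) = sum_i f_w(x_i, y_i): number of sentences that contain
    the pattern AND whose label matches the pattern's sentiment value."""
    expected = LEXICAL_PATTERNS[pattern]
    return sum(1 for s, y in zip(sentences, labels)
               if pattern in s.lower() and y == expected)

def desired_b(pattern, sentences, prior):
    """Desired expectation: the prior probability of the pattern's
    polarity, applied to each sentence the constraint touches."""
    return prior * sum(1 for s in sentences if pattern in s.lower())

sents = ["The music is amazing.", "I hate the battery.", "It plays songs."]
print(phi_w("amazing", sents, [POS, NEG, NEU]))  # -> 1
print(desired_b("amazing", sents, prior=0.9))    # -> 0.9
```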
Using the polarity indicated by lexical patterns to constrain the sentiment of sentences is quite aggressive. Therefore we only consider lexical patterns that are strongly discriminative (many opinion words in the lexicon only indicate sentiment with weak strength). The selected lexical patterns include a handful of seed patterns (such as \"pros\" and \"cons\") and the lexical patterns that have high precision (larger than 0.9) in predicting sentiment in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Posterior Constraints", "sec_num": "3.2" }, { "text": "Discourse Connectives. Lexical patterns can be limited in capturing contextual information since they only look at interactions between words within an expression. To capture context at the clause or sentence level, we consider discourse connectives, which are cue phrases or words that indicate discourse relations between adjacent sentences or clauses. To identify discourse connectives, we apply a discourse tagger trained on the Penn Discourse Treebank (Prasad et al., 2008) 4 to our data. Discourse connectives are tagged with four senses: Expansion, Contingency, Comparison, Temporal.", "cite_spans": [ { "start": 457, "end": 480, "text": "(Prasad et al., 2008) 4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Context-aware Posterior Constraints", "sec_num": "3.2" }, { "text": "Discourse connectives can operate at both the intra-sentential and inter-sentential levels. For example, the word \"although\" is often used to connect two polar clauses within a sentence, while the word \"however\" is often used at the beginning of a sentence to connect two polar sentences. It is important to distinguish these two types of discourse connectives. We consider a discourse connective to be intra-sentential if it has the Comparison sense and connects two polar clauses with opposite polarities (determined by the lexical patterns). We construct a feature-label constraint for each intra-sentential discourse connective and set its expected sentiment value to be neutral.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Posterior Constraints", "sec_num": "3.2" }, { "text": "Unlike the intra-sentential discourse connectives, the inter-sentential discourse connectives can indicate sentiment transitions between sentences. Intuitively, discourse connectives with the senses of Expansion (e.g. also, for example, furthermore) and Contingency (e.g. as a result, hence, because) are likely to indicate sentiment coherence; discourse connectives with the sense of Comparison (e.g. but, however, nevertheless) are likely to indicate sentiment changes. This intuition is reasonable but it assumes the two sentences connected by the discourse connective are both polar sentences. In general, discourse connectives can also be used to connect non-polar (neutral) sentences. Thus it is hard to directly constrain the posterior expectation for each type of sentiment transition using inter-sentential discourse connectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Posterior Constraints", "sec_num": "3.2" }, { "text": "Instead, we impose constraints on the model posteriors by reducing constraint violations. 
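Before the constraint function is defined formally below, here is a hypothetical sketch of how the relevant connective positions might be collected; the sense lexicon and the `sense_of` helper are assumed stand-ins for the PDTB-trained tagger, not its real interface.

```python
COHERENCE = {"Expansion", "Contingency"}   # same polarity expected
CONTRAST = {"Comparison"}                  # opposite polarity expected

# Toy sense lexicon; in the paper the senses come from the discourse tagger.
CONNECTIVE_SENSES = {"however": "Comparison", "nevertheless": "Comparison",
                     "also": "Expansion", "in fact": "Expansion",
                     "because": "Contingency", "as a result": "Contingency"}

def sense_of(sentence):
    """(connective, sense) for a sentence-initial connective, else None."""
    lowered = sentence.lower()
    for connective, sense in CONNECTIVE_SENSES.items():
        if lowered.startswith(connective):
            return connective, sense
    return None

def transition_sites(sentences):
    """Positions i > 0 where the transition constraint defined next
    applies to the adjacent label pair (y_{i-1}, y_i)."""
    sites = []
    for i in range(1, len(sentences)):
        hit = sense_of(sentences[i])
        if hit is not None and hit[1] in COHERENCE | CONTRAST:
            sites.append((i,) + hit)
    return sites

doc = ["The music is a revelation.", "In fact, I keep replaying it."]
print(transition_sites(doc))  # -> [(1, 'in fact', 'Expansion')]
```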
We", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Posterior Constraints", "sec_num": "3.2" }, { "text": "Inter-sentential", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description and Examples", "sec_num": null }, { "text": "The sentence containing a polar lexical pattern w tends to have the polarity indicated by w. Example lexical patterns are annoying, hate, amazing, not disappointed, no concerns, favorite, recommend.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical patterns", "sec_num": null }, { "text": "The sentence containing a discourse connective c which connects its two clauses that have opposite polarities indicated by the lexical patterns tends to have neutral sentiment. Example connectives are while, although, though, but.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Connectives (clause)", "sec_num": null }, { "text": "Two adjacent sentences which are connected by a discourse connective c tends to have the same polarity if c indicates a Expansion or Contingency relation, e.g. also, for example, in fact, because ; opposite polarities if c indicates a Comparison relation, e.g. otherwise, nevertheless, however.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Connectives (sentence)", "sec_num": null }, { "text": "The sentences which contain coreferential entities appeared as targets of opinion expressions tend to have the same polarity. Listing patterns A series of sentences connected via a listing tend to have the same polarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference", "sec_num": null }, { "text": "The sentence-level polarity tends to be consistent with the document-level polarity. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "\u03c6 c,s (x, y) = i f c,s (x i , y i , y i\u22121 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "where c denotes a discourse connective, s indicates its sense, and f c,s is a penalty function that takes value 1.0 when y i and y i\u22121 form a contradictory sentiment transition, that is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "y i = polar y i\u22121 if s \u2208 {Expansion, Contingency}, or y i = polar y i\u22121 if s = Comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "The desired value for the constraint expectation is set to 0 so that the model is encouraged to have less constraint violations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "Opinion Coreference Sentences in a discourse can be linked by many types of coherence relations (Jurafsky et al., 2000) . Coreference is one of the commonly used relations in written text. In this work, we explore coreference in the context of sentence-level sentiment analysis. We consider a set of polar sentences to be linked by the opinion coreference relation if they contain coreferring opinion-related entities. For example, the following sentences express opinions towards \"the speaker phone\", \"The speaker phone\" and \"it\" respectively. 
As these opinion targets are coreferential (referring to the same entity \"the speaker phone\"), they are linked by the opinion coreference relation 5 .", "cite_spans": [ { "start": 96, "end": 119, "text": "(Jurafsky et al., 2000)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "My favorite features are the speaker phone and the radio. The speaker phone is very functional. I use it in the car, very audible even with freeway noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "Our coreference relations indicated by opinion targets overlap with the same target relation introduced in (Somasundaran et al., 2009) . The differences are: (1) we encode the coreference relations as soft constraints during learning instead of applying them as hard constraints at inference time; (2) our constraints can apply to both polar and non-polar sentences; (3) our identification of coreference relations is automatic, without any fine-grained annotations for opinion targets.", "cite_spans": [ { "start": 107, "end": 134, "text": "(Somasundaran et al., 2009)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "To extract coreferential opinion targets, we apply Stanford's coreference system (Lee et al., 2013) to extract coreferential mentions in the document, and then apply a set of syntactic rules to identify opinion targets from the extracted mentions. The syntactic rules correspond to the shortest dependency paths between an opinion word and an extracted mention. We consider the 10 most frequent dependency paths in the training data. Example dependency paths include nsubj(opinion, mention), dobj(opinion, mention), and amod(mention, opinion).", "cite_spans": [ { "start": 81, "end": 99, "text": "(Lee et al., 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "For sentences connected by the opinion coreference relation, we expect their sentiment to be consistent. To encode this intuition, we define the following constraint function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "\u03c6_coref(x, y) = \u2211_{i, ant(i)=j, j\u22650} f_coref(x_i, x_j, y_i, y_j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "where ant(i) denotes the index of the sentence which contains an antecedent target of the target mentioned in sentence i (the antecedent relations over pairs of opinion targets can be constructed using the coreference resolver), and f_coref is a penalty function which takes value 1.0 when the expected sentiment coherency is violated, that is, y_i \u2260_polar y_j . Similar to the inter-sentential discourse connectives, modeling opinion coreference via constraint violations allows the model to handle neutral sentiment. The expected value of the constraint functions is set to 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "Listing Patterns Another type of coherence relation we observe in online reviews is listing, where a reviewer expresses his/her opinions by listing a series of numbered statements. For example, \"1. It's smaller than the ipod mini .... 2. It has a removable battery ....\". 
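The coreference constraint above and the listing constraint discussed next share the same penalty form; a minimal sketch, assuming antecedent indices (ant[i] = j, or -1 when absent) have already been extracted from the coreference resolver or the list numberings (all names hypothetical):

```python
POS, NEG, NEU = "positive", "negative", "neutral"

def conflicting(a, b):
    """True only when both labels are polar and their polarities differ,
    so neutral sentences never trigger the penalty."""
    return {a, b} == {POS, NEG}

def phi_link_violations(labels, ant):
    """phi(x, y): number of linked pairs (i, ant(i)) whose polar labels
    conflict; the desired expectation of this count is 0."""
    return sum(1 for i, j in enumerate(ant)
               if j >= 0 and conflicting(labels[i], labels[j]))

# Sentence 2 corefers with sentence 0 (ant[2] = 0); their labels conflict.
print(phi_link_violations([POS, NEU, NEG], [-1, -1, 0]))  # -> 1
```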
We expect sentences connected by a listing to have consistent sentiment. We implement this constraint in the same form as the coreference constraint (the antecedent assignments are constructed from the numberings).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "Global Sentiment Previous studies have demonstrated the value of document-level sentiment in guiding the semi-supervised learning of sentence-level sentiment (T\u00e4ckstr\u00f6m and McDonald, 2011b; Qu et al., 2012) . In this work, we also take into account this information and encode it as posterior constraints. Note that these constraints are not necessary for our model and can be applied when the document-level sentiment labels are naturally available.", "cite_spans": [ { "start": 158, "end": 189, "text": "(T\u00e4ckstr\u00f6m and McDonald, 2011b;", "ref_id": "BIBREF26" }, { "start": 190, "end": 206, "text": "Qu et al., 2012)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "Based on an analysis of the Amazon review data, we observe that sentence-level sentiment usually doesn't conflict with the document-level sentiment in terms of polarity. For example, the proportion of negative sentences in the positive documents is very small compared to the proportion of positive sentences. To encode this intuition, we define the following constraint function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "\u03c6_g(x, y) = \u2211_{i=1}^{n} \u03b4(y_i =_polar \u1e21)/n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "where g \u2208 {positive, negative} denotes the sentiment value of a polar document, \u1e21 denotes the polarity opposite to g, n is the total number of sentences in x, and \u03b4 is an indicator function. We hope the expectation of the constraint function takes a small value. In our experiments, we set the expected value to be the empirical estimate of the probability of \"conflicting\" sentiment in polar documents using the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global labels", "sec_num": null }, { "text": "During training, we need to compute the constraint expectations and the feature expectations under the auxiliary distribution q at each gradient step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "3.3" }, { "text": "We can derive q by solving the dual problem in (3):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "3.3" }, { "text": "q(y|x) = exp(\u03b8 \u2022 f(x, y) + \u03bb \u2022 \u03c6(x, y)) / Z_{\u03bb,\u03b8}(X) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "3.3" }, { "text": "where Z_{\u03bb,\u03b8}(X) is a normalization constant. Most of our constraints can be factorized in the same way as the model features in the first-order CRF model, and we can compute the expectations under q very efficiently using the forward-backward algorithm. However, some of our discourse constraints (opinion coreference and listing) can break the tractable structure of the model. For constraints with higher-order structures, we use Gibbs Sampling (Geman and Geman, 1984) to approximate the expectations. 
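As a rough, self-contained illustration of this approximation (the sampling step is described formally just below), with a toy scoring function standing in for \u03b8 \u2022 f(x, y) + \u03bb \u2022 \u03c6(x, y):

```python
import math
import random

LABELS = ("positive", "negative", "neutral")

def gibbs_sweep(y, score, rng=random.Random(0)):
    """One sweep: resample each y_i from p(y_i = l | y_-i), proportional
    to exp(score(y with y_i = l)), renormalizing over the label set."""
    for i in range(len(y)):
        weights = []
        for label in LABELS:
            y[i] = label
            weights.append(math.exp(score(y)))
        r, acc = rng.random() * sum(weights), 0.0
        for label, w in zip(LABELS, weights):
            acc += w
            if r <= acc:
                y[i] = label
                break
        else:                     # guard against floating-point round-off
            y[i] = LABELS[-1]
    return y

# Toy scorer rewarding agreement between adjacent labels, mimicking a
# coherence-style constraint; it is an assumption for illustration only.
agree = lambda y: float(sum(a == b for a, b in zip(y, y[1:])))
print(gibbs_sweep(["positive", "neutral", "negative"], agree))
```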
Given a sequence", "cite_spans": [ { "start": 457, "end": 480, "text": "(Geman and Geman, 1984)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "3.3" }, { "text": "x, we sample a label y_i at each position i by computing the unnormalized conditional probabilities", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "3.3" }, { "text": "p(y_i = l | y_{\u2212i}) \u221d exp(\u03b8 \u2022 f(x, y_i = l, y_{\u2212i}) + \u03bb \u2022 \u03c6(x, y_i = l, y_{\u2212i}))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "3.3" }, { "text": "and renormalizing them. Since the possible label assignments only differ at position i, we can make the computation efficient by maintaining the structure of the coreference clusters and precomputing the constraint function for different types of violations. During inference, we find the best label assignment by computing arg max_y q(y|x). For documents where the higher-order constraints apply, we use the same Gibbs sampler as described above to infer the most likely label assignment; otherwise, we use the Viterbi algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "3.3" }, { "text": "We experimented with two product review datasets for sentence-level sentiment classification: the Customer Review (CR) data (Hu and Liu, 2004) 6 which contains 638 reviews of 14 products such as cameras and cell phones, and the Multi-domain Amazon (MD) data from the test set of T\u00e4ckstr\u00f6m and McDonald (2011a) which contains 294 reviews from 5 different domains. As in Qu et al. (2012) , we chose the books, electronics and music domains for evaluation. Each domain also comes with 33,000 extra reviews with only document-level sentiment labels.", "cite_spans": [ { "start": 124, "end": 142, "text": "(Hu and Liu, 2004)", "ref_id": "BIBREF8" }, { "start": 279, "end": 309, "text": "T\u00e4ckstr\u00f6m and McDonald (2011a)", "ref_id": "BIBREF25" }, { "start": 369, "end": 385, "text": "Qu et al. (2012)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We evaluated our method in two settings: supervised and semi-supervised. In the supervised setting, we treated the test data as unlabeled data and performed transductive learning. In the semi-supervised setting, our unlabeled data consists of both the available unlabeled data and the test data. For each domain in the MD dataset, we made use of no more than 100 unlabeled documents in which our posterior constraints apply. We adopted the evaluation schemes used in previous work: 10-fold cross-validation for the CR dataset and 3-fold cross-validation for the MD dataset. We also report both two-way (positive vs. negative) and three-way (positive, negative or neutral) classification results. We use accuracy as the performance measure. In our tables, boldface numbers are statistically significant by a paired t-test at p < 0.05 against the best baseline developed in this paper 7 . We trained our model using a CRF incorporating the proposed posterior constraints. For the CRF features, we include the tokens, the part-of-speech tags, the prior polarities of lexical patterns indicated by the opinion lexicon and the negator lexicon, the number of positive and negative tokens, and the output of the vote-flip algorithm (Choi and Cardie, 2009). 
In addition, we include the discourse connectives as local or transition features and the document-level sentiment labels as features (only available in the MD dataset).", "cite_spans": [ { "start": 895, "end": 896, "text": "7", "ref_id": null }, { "start": 1239, "end": 1262, "text": "(Choi and Cardie, 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We set the CRF regularization parameter \u03c3 = 1 and set the posterior regularization parameters \u03b2 and \u03b3 (a trade-off parameter we introduce to balance the supervised objective and the posterior regularizer in (2)) by using grid search 8 . For approximate inference with higher-order constraints, we perform 2000 Gibbs sampling iterations, where the first 1000 iterations are burn-in iterations. To make the results more stable, we construct three Markov chains that run in parallel, and select the sample with the largest objective value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "All posterior constraints were developed using the training data on each training fold. For the MD dataset, we also used the dvd domain as additional labeled data for developing the constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Baselines. We compared our method to a number of baselines: (1) CRF: CRF with the same set of model features as in our method. (2) CRF-INF: CRF augmented with inference constraints. We can incorporate the proposed constraints (constraints derived from lexical patterns and discourse connectives) as hard constraints into CRF during inference by manually setting \u03bb in equation (4) to a large value 9 . When \u03bb is large enough, it is equivalent to adding hard constraints to the Viterbi inference. To better understand the different effects of lexical and discourse constraints, we report results for applying only the lexical constraints (CRF-INF lex ) as well as results for applying only the discourse constraints (CRF-INF disc ). (3) PR lex : a variant of our PR model which only applies the lexical constraints. For the three-way classification task on the MD dataset, we also implemented the following baselines: (4) VOTEFLIP: a rule-based algorithm that leverages the positive, negative and neutral cues along with the effect of negation to determine the sentence sentiment (Choi and Cardie, 2009) . (5) DOCORACLE: assigns each sentence the label of its corresponding document.", "cite_spans": [ { "start": 1255, "end": 1278, "text": "(Choi and Cardie, 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "[Table 2 residue (two-way results on CR): tree-CRF (Nakagawa et al., 2010) 81.4; Dropout LR (Wang and Manning, 2013) 82.1.] Table 3 : Accuracy results (%) for semi-supervised sentiment classification (three-way) on the MD dataset.", "cite_spans": [ { "start": 332, "end": 355, "text": "(Nakagawa et al., 2010)", "ref_id": "BIBREF15" }, { "start": 373, "end": 397, "text": "(Wang and Manning, 2013)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 405, "end": 412, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We first report results on a binary (positive or negative) sentence-level sentiment classification task. For this task, we used the supervised setting and performed transductive learning for our model. 
The results show that PR significantly outperforms all other baselines on both the CR dataset and the MD dataset (average accuracy across domains is reported).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "The poor performance of CRF-INF lex indicates that directly applying lexical constraints as hard constraints during inference could only hurt the performance. CRF-INF disc slightly outperforms CRF but the improvement is not significant. In contrast, both PR lex and PR significantly outperform CRF, which implies that incorporating lexical and discourse constraints as posterior constraints is much more effective. The superior performance of PR over PR lex further suggests that the proper use of discourse information can significantly improve accuracy for sentence-level sentiment classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "We also analyzed the model's performance on a three-way sentiment classification task. By introducing the \"neutral\" category, the sentiment classification problem becomes harder. Table 4 shows the results in terms of accuracy for each domain in the MD dataset. We can see that both PR and PR lex significantly outperform all other baselines in all domains. The rule-based baseline VOTEFLIP gave the weakest performance because it has no prediction power on sentences with no opinion words. DOCORACLE performs much better than VOTEFLIP and performs especially well on the Music domain. This indicates that the document-level sentiment is a very strong indicator of the sentence-level sentiment label. For the CRF baseline and its variants, we observe a similar performance trend as in the two-way classification task: there is nearly no performance improvement from applying the lexical and discourse-connective-based constraints during CRF inference. In contrast, both PR lex and PR provide substantial improvements over CRF. This confirms that encoding lexical and discourse knowledge as posterior constraints allows the feature-based model to gain additional learning power for sentence-level sentiment prediction. In particular, incorporating discourse constraints leads to consistent improvements to our model. This demonstrates that our modeling of discourse information is effective and that taking into account the discourse context is important for improving sentence-level sentiment analysis. We also compare our results to the previously published results on the same dataset. HCRF (T\u00e4ckstr\u00f6m and McDonald, 2011a) and MEM (Qu et al., 2012) are two state-of-the-art semi-supervised methods for sentence-level sentiment classification. We can see that our best model PR gives the best results in most categories. Table 4 shows the results in terms of F1 scores for each sentiment category (positive, negative and neutral). We can see that the PR models are able to provide improvements over all the sentiment categories compared to all the baselines in general. We observe that the DOCORACLE baseline provides very strong F1 scores on the positive and negative categories, especially in the Books and Music domains, but very poor F1 on the neutral category. This is because it over-predicts the polar sentences in the polar documents, and predicts no polar sentences in the neutral documents. In contrast, our PR models provide more balanced F1 scores among all the sentiment categories. 
Compared to the CRF baseline and its variants, we found that the PR models can greatly improve the precision of predicting positive and negative sentences, resulting in a significant improvement on the positive/negative F1 scores. However, the improvement on the neutral category is modest. A plausible explanation is that most of our constraints focus on discriminating polar sentences. They can help reduce the errors of misclassifying polar sentences, but the model needs more constraints in order to distinguish neutral sentences from polar sentences. We plan to address this issue in future work.", "cite_spans": [ { "start": 1593, "end": 1624, "text": "(T\u00e4ckstr\u00f6m and Mc-Donald, 2011a", "ref_id": null }, { "start": 1635, "end": 1652, "text": "(Qu et al., 2012)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 179, "end": 186, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 1824, "end": 1831, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "We analyze the errors to better understand the merits and limitations of the PR model. We found that the PR model is able to correct many CRF errors caused by the lack of labeled data. The first row in Table 5 shows an example of such errors. The lexical features return and exchange may be good indicators of negative sentiment for the sentence. However, with limited labeled data, the CRF learner can only associate very weak sentiment signals to these features. In contrast, the PR model is able to associate stronger sentiment signals to these features by leveraging unlabeled data for indirect supervision. A simple lexicon-based constraint during inference time may also correct this case. However, hard-constraint baselines can hardly improve the performance in general because the contributions of different constraints are not learned and their combination may not lead to better predictions. This is also demonstrated by the limited performance of CRF-INF in our experiments. We also found that the discourse constraints play an important role in improving the sentiment prediction. The lexical constraints alone are often not sufficient since their coverage is limited by the sentiment lexicon and they can only constrain sentiment locally. On the contrary, discourse constraints are not dependent on sentiment lexicons, and more importantly, they can provide sentiment preferences on multiple sentences at the same time. When combining discourse constraints with features from different sentences, the PR model becomes more powerful in disambiguating sentiment. The second example in Table 5 shows that the PR model learned with discourse constraints correctly predicts the sentiment of two sentences where no lexical constraints apply.", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 1596, "end": 1603, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.2" }, { "text": "However, discourse constraints are not always helpful. One reason is that they do not constrain the neutral sentiment. As a result they could not help disambiguate neutral sentiment from polar sentiment, such as the third example in Table 5 . This is also a problem for most of our lexical constraints. In general, it is hard to learn reliable indicators for the neutral sentiment. In the MD dataset, a neutral label may be given because the sentence contains mixed sentiment or no sentiment or it is off-topic. 
We plan to explore more refined constraints that can deal with the neutral sentiment in future work. Another limitation of the discourse constraints is that they could be affected by the errors of the discourse parser and the coreference resolver. A potential way to address this issue is to learn discourse constraints jointly with sentiment. We plan to study this in future research.", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 240, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.2" }, { "text": "In this paper, we propose a context-aware approach for learning sentence-level sentiment. Our approach incorporates intuitive lexical and discourse knowledge as expressive constraints while training a conditional random field model via posterior regularization. We explore a rich set of context-aware constraints at both intra-and intersentential levels, and demonstrate their effectiveness in the analysis of sentence-level sentiment. While we focus on the sentence-level task, our approach can be easily extended to handle sentiment analysis at finer levels of granularity. Our experiments show that our model achieves better accuracy than existing supervised and semi-supervised models for the sentence-level sentiment classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In general, inequality constraints can also be used. We focus on the equality constraints since we found them to express the sentiment-relevant constraints well.2 Other convex functions can be used for the penalty. We use L2 norm because it works well in practice. \u03b2 is a regularization constant", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The polar words are identified using the MPQA lexicon and the negators are identified using a handful of seed words extended by the General Inquirer dictionary and WordNet as described in(Choi and Cardie, 2008).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.cis.upenn.edu/\u02dcepitler/ discourse.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In general, the opinion-related entities include both the opinion targets and the opinion holders. In this work, we only consider the targets since we experiment with singleauthor product reviews. The opinion holders can be included in a similar way as the opinion targets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at http://www.cs.uic.edu/\u02dcliub/ FBS/sentiment-analysis.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Significance test was not conducted over the previous methods as we do not have their results for each fold.8 We conducted 10-fold cross-validation on each training fold with the parameter space: \u03b2 : [0.01, 0.05, 0.1, 0.5, 1.0] and \u03b3 : [0.1, 0.5, 1.0, 5.0, 10.0].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We set \u03bb to 1000 for the lexical constraints and -1000 to the discourse connective constraints in the experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by DARPA-BAA-12-47 DEFT grant #12475008 and NSF grant BCS-0904822. 
We thank Igor Labutov for helpful discussion and suggestions; Oscar T\u00e4ckstr\u00f6m and Lizhen Qu for providing their Amazon review datasets; and the anonymous reviewers for helpful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Alternating projections for learning with expectation constraints", "authors": [ { "first": "Kedar", "middle": [], "last": "Bellare", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Druck", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kedar Bellare, Gregory Druck, and Andrew McCal- lum. 2009. Alternating projections for learning with expectation constraints. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 43-50. AUAI Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning with compositional semantics as structural inference for subsentential sentiment analysis", "authors": [ { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "793--801", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yejin Choi and Claire Cardie. 2008. Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 793-801. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification", "authors": [ { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "2", "issue": "", "pages": "590--598", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yejin Choi and Claire Cardie. 2009. Adapting a po- larity lexicon using integer linear programming for domain-specific sentiment classification. In Pro- ceedings of the 2009 Conference on Empirical Meth- ods in Natural Language Processing: Volume 2- Volume 2, pages 590-598. Association for Compu- tational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2010, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. 
Journal of Machine Learning Research, 12:2121-2159.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Crosslingual discriminative learning of sequence models with posterior regularization", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev and Dipanjan Das. 2013. Cross-lingual discriminative learning of sequence models with posterior regularization.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Dependency grammar induction via bitext projection constraints", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the ACL-IJCNLP", "volume": "", "issue": "", "pages": "369--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of the ACL-IJCNLP, pages 369-377.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Posterior regularization for structured latent variable models", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Joao", "middle": [], "last": "Gra\u00e7a", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2010, "venue": "The Journal of Machine Learning Research", "volume": "99", "issue": "", "pages": "2001--2049", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Joao Gra\u00e7a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. The Journal of Machine Learning Research, 99:2001-2049.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. Pattern Analysis and Machine Intelligence", "authors": [ { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Geman", "suffix": "" } ], "year": 1984, "venue": "IEEE Transactions on", "volume": "", "issue": "6", "pages": "721--741", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart Geman and Donald Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):721-741.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Mining and summarizing customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "168--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168-177. 
ACM.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition", "authors": [ { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "H", "middle": [], "last": "James", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Kehler", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Vander Linden", "suffix": "" }, { "first": "", "middle": [], "last": "Ward", "suffix": "" } ], "year": 2000, "venue": "", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Jurafsky, James H Martin, Andrew Kehler, Keith Vander Linden, and Nigel Ward. 2000. Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition, volume 2. MIT Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Fully automatic lexicon expansion for domainoriented sentiment analysis", "authors": [ { "first": "Hiroshi", "middle": [], "last": "Kanayama", "suffix": "" }, { "first": "Tetsuya", "middle": [], "last": "Nasukawa", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "355--363", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroshi Kanayama and Tetsuya Nasukawa. 2006. Fully automatic lexicon expansion for domain-oriented sentiment analysis. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 355-363. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A bayesian model for joint unsupervised induction of sentiment, aspect and discourse representations", "authors": [ { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "" } ], "year": 2013, "venue": "To Appear in Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angeliki Lazaridou, Ivan Titov, and Caroline Sporleder. 2013. A Bayesian model for joint unsupervised induction of sentiment, aspect and discourse representations. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, August. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Deterministic coreference resolution based on entity-centric", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. 
Deterministic coreference resolution based on entity-centric, precision-ranked rules.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Isotonic conditional random fields and local sentiment flow", "authors": [ { "first": "Yi", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Lebanon", "suffix": "" } ], "year": 2007, "venue": "Advances in neural information processing systems", "volume": "19", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Mao and Guy Lebanon. 2007. Isotonic conditional random fields and local sentiment flow. Advances in neural information processing systems, 19:961.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Structured models for fine-to-coarse sentiment analysis", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Kerry", "middle": [], "last": "Hannan", "suffix": "" }, { "first": "Tyler", "middle": [], "last": "Neylon", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Wells", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Reynar", "suffix": "" } ], "year": 2007, "venue": "Annual Meeting-Association For Computational Linguistics", "volume": "45", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In Annual Meeting-Association For Computational Linguistics, volume 45, page 432.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Dependency tree-based sentiment classification using crfs with hidden variables", "authors": [ { "first": "Tetsuji", "middle": [], "last": "Nakagawa", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "786--794", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using CRFs with hidden variables. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 786-794. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd annual meeting on Association for Computational Linguistics, page 271. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd annual meeting on Association for Computational Linguistics, page 271. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Opinion mining and sentiment analysis", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Now Pub.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Contextual valence shifters", "authors": [ { "first": "Livia", "middle": [], "last": "Polanyi", "suffix": "" }, { "first": "Annie", "middle": [], "last": "Zaenen", "suffix": "" } ], "year": 2006, "venue": "Computing attitude and affect in text: Theory and applications", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Livia Polanyi and Annie Zaenen. 2006. Contextual valence shifters. In Computing attitude and affect in text: Theory and applications, pages 1-10. Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The penn discourse treebank 2.0", "authors": [ { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Dinesh", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Eleni", "middle": [], "last": "Miltsakaki", "suffix": "" }, { "first": "Livio", "middle": [], "last": "Robaldo", "suffix": "" }, { "first": "K", "middle": [], "last": "Aravind", "suffix": "" }, { "first": "Bonnie", "middle": [ "L" ], "last": "Joshi", "suffix": "" }, { "first": "", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The Penn Discourse Treebank 2.0. In LREC. Citeseer.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A weakly supervised model for sentence-level semantic orientation analysis with multiple experts", "authors": [ { "first": "Lizhen", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Rainer", "middle": [], "last": "Gemulla", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "149--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lizhen Qu, Rainer Gemulla, and Gerhard Weikum. 2012. A weakly supervised model for sentence-level semantic orientation analysis with multiple experts. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 149-159. 
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "H", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Huang", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "151--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151-161. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jean", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Discourse level opinion interpretation", "authors": [ { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "801--808", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swapna Somasundaran, Janyce Wiebe, and Josef Ruppenhofer. 2008. Discourse level opinion interpretation. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 801-808. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification", "authors": [ { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "Galileo", "middle": [], "last": "Namata", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "1", "issue": "", "pages": "170--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 170-179. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Discovering fine-grained sentiment with latent variable structured prediction models", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2011, "venue": "Advances in Information Retrieval", "volume": "", "issue": "", "pages": "368--374", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m and Ryan McDonald. 2011a. Discovering fine-grained sentiment with latent variable structured prediction models. In Advances in Information Retrieval, pages 368-374. Springer.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Semisupervised latent variable models for sentence-level sentiment analysis", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m and Ryan McDonald. 2011b. Semi-supervised latent variable models for sentence-level sentiment analysis.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Discourse connectors for latent subjectivity in sentiment analysis", "authors": [ { "first": "Rakshit", "middle": [], "last": "Trivedi", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "808--813", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rakshit Trivedi and Jacob Eisenstein. 2013. Discourse connectors for latent subjectivity in sentiment analysis. In Proceedings of NAACL-HLT, pages 808-813.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Fast dropout training", "authors": [ { "first": "Sida", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 30th International Conference on Machine Learning (ICML-13)", "volume": "", "issue": "", "pages": "118--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sida Wang and Christopher Manning. 2013. Fast dropout training. 
In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 118-126.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Discourse level explanatory relation extraction from product reviews using firstorder logic", "authors": [ { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jihua", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Zhang, Jin Qian, Huan Chen, Jihua Kang, and Xuanjing Huang. 2013. Discourse level explanatory relation extraction from product reviews using first-order logic.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Unsupervised discovery of discourse relations for eliminating intra-sentence polarity ambiguities", "authors": [ { "first": "Lanjun", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Binyang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Zhongyu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "162--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lanjun Zhou, Binyang Li, Wei Gao, Zhongyu Wei, and Kam-Fai Wong. 2011. Unsupervised discovery of discourse relations for eliminating intra-sentence polarity ambiguities. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 162-171. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "1. Hearing the music in real stereo is a true revelation. 2. You can feel that the music is no longer constrained by the mono recording. 3. In fact, it is more like the players are performing on a stage in front of you ...", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "text": "Summarization of Posterior Constraints for Sentence-level Sentiment Classification", "content": "", "num": null, "html": null, "type_str": "table" }, "TABREF2": { "text": "", "content": "
: Accuracy results (%) for supervised sentiment classification (two-way)
              Books  Electronics  Music  Avg
VoteFlip       44.6         45.0   47.8  45.8
DocOracle      53.6         50.5   63.0  55.7
CRF            57.4         57.5   61.8  58.9
CRF-inf lex    56.7         56.4   60.4  57.8
CRF-inf disc   57.2         57.6   62.1  59.0
PR lex         60.3         59.9   63.2  61.1
PR             61.6         61.0   64.4  62.3
Previous work
HCRF           55.9         61.0   58.7  58.5
MEM            59.7         59.6   63.8  61.0
", "num": null, "html": null, "type_str": "table" }, "TABREF3": { "text": "shows the accuracy results. We can see", "content": "
              Books        Electronics  Music
              pos/neg/neu  pos/neg/neu  pos/neg/neu
VoteFlip      43/42/47     45/46/44     50/46/46
DocOracle     54/60/49     57/54/42     72/65/52
CRF           47/51/64     60/61/52     67/60/58
CRF-inf lex   46/52/63     59/61/50     65/59/57
CRF-inf disc  47/51/64     60/61/52     67/61/59
PR lex        50/56/66     64/63/53     67/64/59
PR            52/56/68     64/66/53     69/65/60
", "num": null, "html": null, "type_str": "table" }, "TABREF4": { "text": "", "content": "
: F1 scores for each sentiment category (positive, negative and neutral) for semi-supervised sentiment classification on the MD dataset
", "num": null, "html": null, "type_str": "table" }, "TABREF5": { "text": "neg If I could, I would like to return it or exchange for something better. /neg neu \u00d7 Example 2: neg Things I wasn't a fan of -the ending was to cutesy for my taste. /neg neg Also, all of the side characters (particularly the mom, vee, and the teacher) were incredibly flat and stereotypical to me. /neg neu pos \u00d7 Example 3: neg I also have excessive noise when I talk and have phone in my pocket while walking. /neg neu But other models are no better. /neu", "content": "
Example Sentences                              CRF          PR
Example 1:                                     neg pos \u00d7  neg pos \u00d7
", "num": null, "html": null, "type_str": "table" }, "TABREF6": { "text": "Example sentences where PR succeeds and fails to correct the mistakes of CRF", "content": "", "num": null, "html": null, "type_str": "table" } } } }