{ "paper_id": "D09-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:39:28.791268Z" }, "title": "Supervised and Unsupervised Methods in Employing Discourse Relations for Improving Opinion Polarity Classification", "authors": [ { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "", "affiliation": { "laboratory": "", "institution": "Univ. of Pittsburgh Pittsburgh", "location": { "postCode": "15260", "region": "PA" } }, "email": "swapna@cs.pitt.edu" }, { "first": "Galileo", "middle": [], "last": "Namata", "suffix": "", "affiliation": {}, "email": "namatag@cs.umd.edu" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "", "affiliation": { "laboratory": "", "institution": "Univ. of Pittsburgh Pittsburgh", "location": { "postCode": "15260", "region": "PA" } }, "email": "wiebe@cs.pitt.edu" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "", "affiliation": {}, "email": "getoor@cs.umd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This work investigates design choices in modeling a discourse scheme for improving opinion polarity classification. For this, two diverse global inference paradigms are used: a supervised collective classification framework and an unsupervised optimization framework. Both approaches perform substantially better than baseline approaches, establishing the efficacy of the methods and the underlying discourse scheme. We also present quantitative and qualitative analyses showing how the improvements are achieved.", "pdf_parse": { "paper_id": "D09-1018", "_pdf_hash": "", "abstract": [ { "text": "This work investigates design choices in modeling a discourse scheme for improving opinion polarity classification. For this, two diverse global inference paradigms are used: a supervised collective classification framework and an unsupervised optimization framework. Both approaches perform substantially better than baseline approaches, establishing the efficacy of the methods and the underlying discourse scheme. We also present quantitative and qualitative analyses showing how the improvements are achieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The importance of discourse in opinion analysis is being increasingly recognized (Polanyi and Zaenen, 2006) . Motivated by the need to enable discourse-based opinion analysis, previous research (Asher et al., 2008; Somasundaran et al., 2008) developed discourse schemes and created manually annotated corpora. However, it was not known whether and how well these linguistic ideas and schemes can be translated into effective computational implementations.", "cite_spans": [ { "start": 81, "end": 107, "text": "(Polanyi and Zaenen, 2006)", "ref_id": "BIBREF17" }, { "start": 194, "end": 214, "text": "(Asher et al., 2008;", "ref_id": "BIBREF0" }, { "start": 215, "end": 241, "text": "Somasundaran et al., 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we first investigate ways in which an opinion discourse scheme can be computationally modeled, and then how it can be utilized to improve polarity classification. Specifically, the discourse scheme we use is from Somasundaran et al. (2008) , which was developed to support a global, interdependent polarity interpretation. To achieve discourse-based global inference, we explore two different frameworks. 
The first is a supervised framework that learns interdependent opinion interpretations from training data. The second is an unsupervised optimization framework which uses constraints to express the ideas of coherent opinion interpretation embodied in the scheme. For the supervised framework, we use Iterative Collective Classification (ICA), which facilitates machine learning using relational information. The unsupervised optimization is implemented as an Integer Linear Programming (ILP) problem. Via our implementations, we aim to empirically test whether discourse-based approaches to opinion analysis are useful.", "cite_spans": [ { "start": 228, "end": 254, "text": "Somasundaran et al. (2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our results show that both of our implementations achieve significantly better accuracies in polarity classification than classifiers using local information alone. This confirms the hypothesis that the discourse-based scheme is useful, and also shows that both of our design choices are effective. We also find that there is a difference in the way ICA and ILP achieve improvements, and a simple hybrid approach, which incorporates the strengths of both, is able to achieve significant overall improvements over both individual methods. Our analyses show that even when our discourse-based methods bootstrap from noisy classifications, they can achieve good improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows: we discuss related work in Section 2 and the discourse scheme in Section 3. We present our discourse-based implementations in Section 4, experiments in Section 5, discussions in Section 6 and conclusions in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work on polarity disambiguation has used contextual clues and reversal words (Wilson et al., 2005; Kennedy and Inkpen, 2006; Kanayama and Nasukawa, 2006; Devitt and Ahmad, 2007; Sadamitsu et al., 2008). However, these do not capture discourse-level relations.", "cite_spans": [ { "start": 86, "end": 107, "text": "(Wilson et al., 2005;", "ref_id": "BIBREF28" }, { "start": 108, "end": 133, "text": "Kennedy and Inkpen, 2006;", "ref_id": "BIBREF10" }, { "start": 134, "end": 162, "text": "Kanayama and Nasukawa, 2006;", "ref_id": "BIBREF9" }, { "start": 163, "end": 186, "text": "Devitt and Ahmad, 2007;", "ref_id": "BIBREF6" }, { "start": 187, "end": 210, "text": "Sadamitsu et al., 2008)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Researchers, such as (Polanyi and Zaenen, 2006), have discussed how the discourse structure can influence opinion interpretation; and previous work, such as (Asher et al., 2008; Somasundaran et al., 2008), has developed annotation schemes for interpreting opinions with discourse relations. However, they do not empirically demonstrate how automatic methods can use their ideas to improve polarity classification.
In this work, we demonstrate concrete ways in which a discourse-based scheme can be modeled using global inference paradigms.", "cite_spans": [ { "start": 21, "end": 47, "text": "(Polanyi and Zaenen, 2006)", "ref_id": "BIBREF17" }, { "start": 158, "end": 178, "text": "(Asher et al., 2008;", "ref_id": "BIBREF0" }, { "start": 179, "end": 205, "text": "Somasundaran et al., 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Joint models have been previously explored for other NLP problems (Haghighi et al., 2005; Moschitti et al., 2006; Moschitti, 2009). Our global inference model focuses on the opinion polarity recognition task.", "cite_spans": [ { "start": 66, "end": 89, "text": "(Haghighi et al., 2005;", "ref_id": "BIBREF8" }, { "start": 90, "end": 113, "text": "Moschitti et al., 2006;", "ref_id": "BIBREF12" }, { "start": 114, "end": 130, "text": "Moschitti, 2009)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The biggest difference between this work and previous work in opinion analysis that uses global inference methods is in the type of linguistic relations used to achieve the global inference. Some of that work is not related to discourse at all (e.g., lexical similarities (Takamura et al., 2007), morphosyntactic similarities (Popescu and Etzioni, 2005) and word-based measures like TF-IDF (Goldberg and Zhu, 2006)). Others use sentence cohesion (Pang and Lee, 2004), agreement/disagreement between speakers (Thomas et al., 2006; Bansal et al., 2008), or structural adjacency. In contrast, our work focuses on discourse-based relations for global inference. Another difference from the above work is that our work is over multi-party conversations.", "cite_spans": [ { "start": 270, "end": 293, "text": "(Takamura et al., 2007)", "ref_id": "BIBREF25" }, { "start": 325, "end": 352, "text": "(Popescu and Etzioni, 2005)", "ref_id": "BIBREF18" }, { "start": 389, "end": 413, "text": "(Goldberg and Zhu, 2006)", "ref_id": "BIBREF7" }, { "start": 446, "end": 466, "text": "(Pang and Lee, 2004)", "ref_id": "BIBREF16" }, { "start": 509, "end": 530, "text": "(Thomas et al., 2006;", "ref_id": "BIBREF27" }, { "start": 531, "end": 551, "text": "Bansal et al., 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Previous work on emotion and subjectivity detection in multi-party conversations has explored using prosodic information (Neiberg et al., 2006), combining linguistic and acoustic information (Raaijmakers et al., 2008) and combining lexical and dialog information (Somasundaran et al., 2007). Our work is focused on harnessing discourse-based knowledge and on interdependent inference.", "cite_spans": [ { "start": 121, "end": 143, "text": "(Neiberg et al., 2006)", "ref_id": "BIBREF14" }, { "start": 192, "end": 218, "text": "(Raaijmakers et al., 2008)", "ref_id": "BIBREF19" }, { "start": 264, "end": 291, "text": "(Somasundaran et al., 2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There are several collective classification frameworks, including (Neville and Jensen, 2000; Lu and Getoor, 2003; Taskar et al., 2004; Richardson and Domingos, 2006; Bilgic et al., 2007). In this paper, we use an approach by (Lu and Getoor, 2003), which iteratively predicts class values using local and relational features.
ILP has been used for other NLP tasks, e.g., (Denis and Baldridge, 2007; Choi et al., 2006; Roth and Yih, 2004). In this work, we employ ILP for modeling discourse constraints for polarity classification.", "cite_spans": [ { "start": 66, "end": 92, "text": "(Neville and Jensen, 2000;", "ref_id": "BIBREF15" }, { "start": 93, "end": 113, "text": "Lu and Getoor, 2003;", "ref_id": "BIBREF11" }, { "start": 114, "end": 134, "text": "Taskar et al., 2004;", "ref_id": "BIBREF26" }, { "start": 135, "end": 165, "text": "Richardson and Domingos, 2006;", "ref_id": "BIBREF20" }, { "start": 166, "end": 186, "text": "Bilgic et al., 2007)", "ref_id": "BIBREF2" }, { "start": 226, "end": 247, "text": "(Lu and Getoor, 2003)", "ref_id": "BIBREF11" }, { "start": 369, "end": 396, "text": "(Denis and Baldridge, 2007;", "ref_id": "BIBREF5" }, { "start": 397, "end": 415, "text": "Choi et al., 2006;", "ref_id": "BIBREF4" }, { "start": 416, "end": 435, "text": "Roth and Yih, 2004)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The scheme in Somasundaran et al. (2008) was developed and annotated on the AMI meeting corpus (Carletta et al., 2005). 1 This scheme annotates opinions, their polarities (positive, negative, neutral) and their targets (a target is what the opinion is about). The targets of opinions are related via two types of relations: the same relation, which relates targets referring to the same entity or proposition, and the alternative relation, which relates targets referring to mutually exclusive options in the context of the discourse. Additionally, the scheme relates opinions via two types of frame relations: the reinforcing and non-reinforcing relations. The frame relations represent discourse scenarios: reinforcing relations exist between opinions when they contribute to the same overall stance, while non-reinforcing relations exist between opinions that show ambivalence.", "cite_spans": [ { "start": 14, "end": 40, "text": "Somasundaran et al. (2008)", "ref_id": "BIBREF24" }, { "start": 102, "end": 125, "text": "(Carletta et al., 2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Scheme and Data", "sec_num": "3" }, { "text": "The opinion annotations are text-span based, while in this work, we use Dialog Act (DA) based segmentation of meetings. 2 As the DAs are our units of classification, we map opinion annotations to the DA units as follows. If a DA unit contains an opinion annotation, the label is transferred upwards to the containing DA. When a DA contains multiple opinion annotations, each with a different polarity, one of them is randomly chosen as the label for the DA. The discourse relations existing between opinions are also transferred upwards, between the DAs containing each of these annotations. We recreate an example from Somasundaran et al. (2008) using DA segmentation in Example 1. Here, the speaker has a positive opinion towards the rubbery material for the TV remote.", "cite_spans": [ { "start": 620, "end": 646, "text": "Somasundaran et al. (2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Scheme and Data", "sec_num": "3" }, { "text": "(1) DA-1: ... this kind of rubbery material, DA-2: it's a bit more bouncy, DA-3: like you said they get chucked around a lot.
DA-4: A bit more durable and that can also be ergonomic and DA-5: it kind of feels a bit different from all the other remote controls.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Scheme and Data", "sec_num": "3" }, { "text": "In the example, the individual opinion expressions (shown in bold) are essentially regarding the same thing - the rubbery material. Thus, the explicit targets (shown in italics), it's, that, and it, and the implicit target of a bit more durable are all linked with same target relations. Also, notice that the opinions reinforce a particular stance, i.e., a pro-rubbery-material stance. Thus, the scheme links the opinions via reinforcing relations. Figure 1 illustrates the corresponding discourse relations between the containing DA units.", "cite_spans": [], "ref_spans": [ { "start": 448, "end": 456, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Discourse Scheme and Data", "sec_num": "3" }, { "text": "The hypothesis in using discourse information for polarity classification is that the global discourse view will improve upon a classification with only a local view. Thus, we implement a local classifier to bootstrap the classification process, and then implement classifiers that use discourse information from the scheme annotations over it. We explore two approaches for implementing our discourse-based classifier. The first is ICA, where discourse relations and the neighborhood information brought in by these relations are incorporated as features into the learner. The second approach is ILP optimization, which tries to maximize the class distributions predicted by the local classifier, subject to constraints imposed by discourse relations. Both classifiers thus balance the preferences of the local classifier against coherence with discourse neighbors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementing the Discourse Model", "sec_num": "4" }, { "text": "A supervised local classifier, Local, is used to provide the classifications to bootstrap the discourse-based classifiers. 3 It is important to make Local as reliable as possible; otherwise, the discourse relations will propagate misclassifications. Thus, we build Local using a variety of knowledge sources that have been shown to be useful for opinion analysis in previous work. Specifically, we construct features using polarity lexicons (used by (Wilson et al., 2005)), DA tags (used by (Somasundaran et al., 2007)) and unigrams (used by many researchers, e.g., (Pang and Lee, 2004)). Note that, as our discourse-based classifiers attempt to improve upon the local classifications, Local is also a baseline for our experiments.", "cite_spans": [ { "start": 122, "end": 123, "text": "3", "ref_id": null }, { "start": 449, "end": 470, "text": "(Wilson et al., 2005)", "ref_id": "BIBREF28" }, { "start": 491, "end": 518, "text": "(Somasundaran et al., 2007)", "ref_id": "BIBREF23" }, { "start": 567, "end": 587, "text": "(Pang and Lee, 2004)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Local Classifier", "sec_num": "4.1" },
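As an illustration of how these three knowledge sources can be combined, the sketch below assembles a feature dictionary for a single DA unit. It is a minimal sketch under stated assumptions, not the authors' implementation (they train Weka SVMs, Section 5.1): the lexicon format, the DA tag value, and all function names here are hypothetical.

```python
# Illustrative sketch of Local's feature families (unigrams, DA tag, polarity
# lexicon hits); the helper and lexicon are hypothetical, not the paper's code.
from collections import Counter

def local_features(tokens, da_tag, lexicon):
    feats = Counter(f"unigram={t.lower()}" for t in tokens)  # unigram counts
    feats[f"da_tag={da_tag}"] = 1                            # dialog-act tag
    for t in tokens:                                         # lexicon features
        polarity = lexicon.get(t.lower())
        if polarity is not None:
            feats[f"lex_{polarity}"] += 1
    return feats

# Toy usage on a DA unit from Example 1.
lexicon = {"durable": "positive", "ergonomic": "positive"}
print(local_features("A bit more durable and ergonomic".split(), "inform", lexicon))
```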
{ "text": "We use a variant of ICA (Lu and Getoor, 2003; Neville and Jensen, 2000), which is a collective classification algorithm shown to perform consistently well over a wide variety of relational data.", "cite_spans": [ { "start": 24, "end": 45, "text": "(Lu and Getoor, 2003;", "ref_id": "BIBREF11" }, { "start": 46, "end": 71, "text": "Neville and Jensen, 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Iterative Collective Classification", "sec_num": "4.2" }, { "text": "for each instance i do {bootstrapping}\n  Compute polarity for i using local attributes\nend for\nrepeat {iterative}\n  Generate ordering I over all instances\n  for each i in I do\n    Compute polarity for i using local and relational attributes\n  end for\nuntil Stopping criterion is met", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 ICA Algorithm", "sec_num": null }, { "text": "ICA uses two classifiers: a local classifier and a relational classifier. The local classifier is trained to predict the DA labels using only the local features. We use Local, described in Section 4.1, for this purpose. The relational classifier is trained using the local features, and an additional set of features commonly referred to as relational features. The value of a relational feature, for a given DA, depends on the polarity of the discourse neighbors of that DA. Thus, the relational features incorporate discourse and neighbor information; that is, they incorporate the information about the frame and target relations in conjunction with the polarity of the discourse neighbors. Intuitively, our motivation for this approach can be explained using Example 1. Here, in interpreting the ambiguous opinion a bit different as being positive, we use the knowledge that it participates in a reinforcing discourse, and that all its neighbors (e.g., ergonomic, durable) are positive opinions regarding the same thing. On the other hand, if it had been a non-reinforcing discourse, then the polarity of a bit different, when viewed with respect to the other opinions, could have been interpreted as negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 ICA Algorithm", "sec_num": null },
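As a concrete rendering of this neighborhood information, the sketch below computes one family of relational features of the kind enumerated in Table 1 below. The (polarity, frame relation, target relation) triple representation of neighbors and the normalization over all neighbors are assumptions of this illustration; the paper does not publish its feature-extraction code.

```python
# One relational-feature family: the fraction of a DA's discourse neighbors
# that carry polarity a and are linked via frame relation f. The neighbor
# representation is an assumption of this sketch.
def pct_neighbors(neighbors, a, f):
    """neighbors: list of (polarity, frame_rel, target_rel) triples."""
    if not neighbors:
        return 0.0
    hits = sum(1 for pol, frame, _ in neighbors if pol == a and frame == f)
    return hits / len(neighbors)

neighbors = [("positive", "reinforcing", "same"),
             ("positive", "reinforcing", "same"),
             ("negative", "non-reinforcing", "alt")]
print(pct_neighbors(neighbors, "positive", "reinforcing"))  # 0.666...
```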
{ "text": "Table 1 lists the relational features we defined for our experiments, where each row represents a set of features:\n- Percent of neighbors with polarity type a related via frame relation f\n- Percent of neighbors with polarity type a related via target relation t\n- Percent of neighbors with polarity type a related via frame relation f and target relation t\n- Percent of neighbors with polarity type a and same speaker related via frame relation f\n- Percent of neighbors with polarity type a and same speaker related via target relation t\n- Percent of neighbors with polarity type a related via a frame relation or target relation\n- Percent of neighbors with polarity type a related via a reinforcing frame relation or same target relation\n- Percent of neighbors with polarity type a related via a non-reinforcing frame relation or alt target relation\n- Most common polarity type of neighbors related via a same target relation\n- Most common polarity type of neighbors related via a reinforcing frame relation and same target relation\n(Table 1: Relational features, where a ∈ {non-neutral (i.e., positive or negative), positive, negative}, t ∈ {same, alt}, f ∈ {reinforcing, non-reinforcing}, t′ ∈ {same or alt, same, alt}, f′ ∈ {reinforcing or non-reinforcing, reinforcing, non-reinforcing}.)\nFeatures are generated for all combinations of a, t, t′, f and f′ for each row. For example, one of the features in the first row is Percent of neighbors with polarity type positive that are related via a reinforcing frame relation. Thus, each feature for the relational classifier identifies neighbors for a given instance via a specific relation (f, t, f′ or t′, obtained from the scheme annotations) and factors in their polarity values (a, obtained from the classifier predictions from the previous round). This adds a total of 59 relational features to the already existing local features.", "cite_spans": [], "ref_spans": [ { "start": 1374, "end": 1381, "text": "Table 1", "ref_id": null }, { "start": 2370, "end": 2377, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Algorithm 1 ICA Algorithm", "sec_num": null }, { "text": "ICA has two main phases: the bootstrapping and iterative phases. In the bootstrapping phase, the polarity of each instance is initialized to the most likely value given only the local classifier and its features. In the iterative phase, we create a random ordering of all the instances and, in turn, apply the relational classifier to each instance where the relational features, for a given instance, are computed using the most recent polarity assignments of its neighbors. We repeat this until some stopping criterion is met. For our experiments, we use a fixed number of 30 iterations, which has been found to be sufficient in most data sets for ICA to converge to a solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 ICA Algorithm", "sec_num": null }, { "text": "The pseudocode for the algorithm is shown in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 ICA Algorithm", "sec_num": null },
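Rendered as code, Algorithm 1 is a short loop. The sketch below follows the control flow only; the classifier objects and the rel_feats helper (computing the Table 1 features from the neighbors' most recent labels) are assumed interfaces, not the Weka-based implementation the paper actually uses.

```python
# Sketch of the ICA control flow of Algorithm 1. local_clf/rel_clf are assumed
# to expose predict() over feature dicts; rel_feats is a hypothetical helper
# that builds the Table 1 features from the current neighbor labels.
import random

def ica(instances, neighbors, local_clf, rel_clf, local_feats, rel_feats,
        iterations=30):  # fixed 30 iterations, as in the paper
    # Bootstrapping phase: label every DA from local attributes alone.
    labels = {i: local_clf.predict(local_feats(i)) for i in instances}
    # Iterative phase: reclassify in a random order using local features plus
    # relational features computed from the neighbors' most recent labels.
    for _ in range(iterations):
        order = list(instances)
        random.shuffle(order)
        for i in order:
            feats = {**local_feats(i), **rel_feats(i, neighbors[i], labels)}
            labels[i] = rel_clf.predict(feats)
    return labels
```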
{ "text": "First, we explain the intuition behind viewing discourse relations as enforcing constraints on polarity interpretation. Then, we explain how the constraints are encoded in the optimization problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integer Linear Programming", "sec_num": "4.3" }, { "text": "The discourse relations between opinions can provide coherence constraints on the way their polarity is interpreted. Consider a discourse scenario in which a speaker expresses multiple opinions regarding the same thing, and is reinforcing his stance in the process (as in Example 1). The set of individual polarity assignments that is most coherent with this global scenario is the one where all the opinions have the same (equal) polarity. On the other hand, a pair of individual polarity assignments most consistent with a discourse scenario where a speaker reinforces his stance via opinions towards alternative options, is one with opinions having mutually opposite polarity. For instance, in the utterance \"Shapes should be curved, nothing square-like\", the speaker reinforces his pro-curved stance via his opinions about the alternative shapes: curved and square-like. And, we see that the first opinion is positive and the second is negative. Table 2 lists the discourse relations (target and frame relation combinations) found in the corpus, and the likely polarity interpretation for the related instances. (Table 2: Discourse relations and their polarity constraints on the related instances.)", "cite_spans": [], "ref_spans": [ { "start": 949, "end": 956, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Discourse Constraints on Polarity", "sec_num": "4.3.1" },
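Read operationally, Table 2 is a lookup from a pair's (target relation, frame relation) to the constants e_ij and o_ij defined in Section 4.3.2 below; a minimal sketch (the four rows are the content of Table 2, the dictionary encoding is illustrative):

```python
# Table 2 as a lookup: (target relation, frame relation) -> induced constraint.
# "equal" sets e_ij = 1 (rows 1 and 4); "opposite" sets o_ij = 1 (rows 2 and 3).
CONSTRAINT = {
    ("same", "reinforcing"): "equal",
    ("same", "non-reinforcing"): "opposite",
    ("alternative", "reinforcing"): "opposite",
    ("alternative", "non-reinforcing"): "equal",
}
# Unrelated pairs are simply absent: e_ij = o_ij = 0.
```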
{ "text": "For each DA instance i in a dataset, the local classifier provides a class distribution [p_i, q_i, r_i], where p_i, q_i and r_i correspond to the probabilities that i belongs to the positive, negative and neutral categories, respectively. The optimization problem is formulated as an ILP minimization of the objective function in Equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "-\\sum_i (p_i x_i + q_i y_i + r_i z_i) + \\sum_{i,j} \\epsilon_{ij} + \\sum_{i,j} \\delta_{ij}", "eq_num": "(1)" } ], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "where the x_i, y_i and z_i are binary class variables corresponding to the positive, negative and neutral classes, respectively. When a class variable is 1, the corresponding class is chosen. Variables ε_ij and δ_ij are binary slack variables that correspond to the discourse constraints between two distinct DA instances i and j. When a given slack variable is 1, the corresponding discourse constraint is violated. Note that the objective function tries to achieve two goals. The first part, Σ_i (p_i x_i + q_i y_i + r_i z_i), is a maximization that tries to choose a classification for the instances that maximizes the probabilities provided by the local classifier. The second part, Σ_{i,j} ε_ij + Σ_{i,j} δ_ij, is a minimization that tries to minimize the number of slack variables used, that is, minimize the number of discourse constraints violated. Constraints in Equations 2 and 3, listed below, impose binary constraints on the variables. The constraint in Equation 4 ensures that, for each instance i, only one class variable is set to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x_i \\in \\{0, 1\\}, y_i \\in \\{0, 1\\}, z_i \\in \\{0, 1\\}, \\forall i \\quad (2) \\qquad \\epsilon_{ij} \\in \\{0, 1\\}, \\delta_{ij} \\in \\{0, 1\\}, \\forall i \\neq j \\quad (3) \\qquad x_i + y_i + z_i = 1, \\forall i", "eq_num": "(4)" } ], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "We pair distinct DA instances i and j as ij, and if there exists a discourse relation between them, they can be subject to the corresponding polarity constraints listed in Table 2. For this, we define two binary discourse-constraint constants: the equal-polarity constant, e_ij, and the opposite-polarity constant, o_ij. If a given DA pair ij is related by either a same+reinforcing relation or an alternative+non-reinforcing relation (rows 1, 4 of Table 2), then e_ij = 1; otherwise it is zero. Similarly, if it is related by either a same+non-reinforcing relation or an alternative+reinforcing relation (rows 2, 3 of Table 2), then o_ij = 1. Both e_ij and o_ij are zero if the instance pair is unrelated in the discourse.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 179, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 449, "end": 457, "text": "Table 2)", "ref_id": "TABREF1" }, { "start": 619, "end": 626, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "For each DA instance pair ij, equal-polarity constraints are applied to the polarity variables of i (x_i, y_i) and j (x_j, y_j) via the following equations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "|x_i - x_j| \\leq 1 - e_{ij} + \\epsilon_{ij}, \\forall i \\neq j \\quad (5) \\qquad |y_i - y_j| \\leq 1 - e_{ij} + \\epsilon_{ij}, \\forall i \\neq j \\quad (6) \\qquad -(x_i + y_i) \\leq -l_i, \\forall i", "eq_num": "(7)" } ], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "When e_ij = 1, Equation 5 constrains x_i and x_j to be of the same value (both zero or both one). Similarly, Equation 6 constrains y_i and y_j to be of the same value. Via these equations, we ensure that the instances i and j do not have opposite polarity when e_ij = 1. However, notice that, if we use just Equations 5 and 6, the optimization can converge to the same, non-polar (neutral) category. To guide the convergence to the same polar (positive or negative) category, we use Equation 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "Here l_i = 1 if the instance i participates in one or more discourse relations. When e_ij = 0, x_i and x_j (and y_i and y_j) can take on assignments independently of one another. Notice that both constraints 5 and 6 are relaxed when ε_ij = 1; thus, x_i and x_j (or y_i and y_j) can take on values independently of one another, even if e_ij = 1. Next, the opposite-polarity constraints are applied via the following equations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "|x_i + x_j - 1| \\leq 1 - o_{ij} + \\delta_{ij}, \\forall i \\neq j \\quad (8) \\qquad |y_i + y_j - 1| \\leq 1 - o_{ij} + \\delta_{ij}, \\forall i \\neq j", "eq_num": "(9)" } ], "section": "Optimization Problem", "sec_num": "4.3.2" }, { "text": "In the above equations, when o_ij = 1, x_i and x_j (and y_i and y_j) take on opposite values; for example, if x_i = 1 then x_j = 0 and vice versa. When o_ij = 0, the variable assignments are independent of one another. This set of constraints is relaxed when δ_ij = 1. In general, in our ILP formulation, notice that if an instance does not have a discourse relation to any other instance in the data, its classification is unaffected by the optimization. Also, as the underlying discourse scheme poses constraints only on the interpretation of the polarity of the related instances, discourse constraints are applied only to the polarity variables x and y, and not to the neutral class variable, z. Finally, even though slack variables are used, we discourage the ILP system from indiscriminately setting the slack variables to 1 by making them a part of the objective function that is minimized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Problem", "sec_num": "4.3.2" },
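The optimization problem of Equations 1-9 can be written down almost verbatim with an off-the-shelf ILP modeler. The sketch below uses the PuLP toolkit as a stand-in for the MathWorks/GLPK setup described later in Section 5.1; the absolute values in Equations 5, 6, 8 and 9 are linearized into pairs of inequalities, and because constraints are only generated for pairs where e_ij = 1 or o_ij = 1, the (1 - e_ij) and (1 - o_ij) terms drop out. All names and the input format are assumptions of this illustration.

```python
# Sketch of Equations 1-9 with the PuLP modeler (a stand-in for the paper's
# MathWorks/GLPK setup). probs[i] = (p_i, q_i, r_i) from Local;
# pairs[(i, j)] = "equal" or "opposite", from the Table 2 lookup.
import pulp

def solve_polarity_ilp(probs, pairs):
    m = pulp.LpProblem("polarity", pulp.LpMinimize)
    x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in probs}  # positive
    y = {i: pulp.LpVariable(f"y_{i}", cat="Binary") for i in probs}  # negative
    z = {i: pulp.LpVariable(f"z_{i}", cat="Binary") for i in probs}  # neutral
    # One slack per related pair; each pair carries a single constraint type,
    # so one variable plays the role of epsilon_ij or delta_ij (Eq. 3).
    s = {ij: pulp.LpVariable(f"s_{ij[0]}_{ij[1]}", cat="Binary") for ij in pairs}
    # Eq. 1: favor Local's probabilities, penalize violated constraints.
    m += (-pulp.lpSum(p * x[i] + q * y[i] + r * z[i]
                      for i, (p, q, r) in probs.items())
          + pulp.lpSum(s.values()))
    for i in probs:
        m += x[i] + y[i] + z[i] == 1              # Eq. 4: exactly one class
    for i in {k for ij in pairs for k in ij}:
        m += x[i] + y[i] >= 1                     # Eq. 7: connected DAs polar
    for (i, j), kind in pairs.items():
        if kind == "equal":                       # Eqs. 5-6, |.| linearized
            m += x[i] - x[j] <= s[(i, j)]
            m += x[j] - x[i] <= s[(i, j)]
            m += y[i] - y[j] <= s[(i, j)]
            m += y[j] - y[i] <= s[(i, j)]
        else:                                     # Eqs. 8-9, |.| linearized
            m += x[i] + x[j] - 1 <= s[(i, j)]
            m += 1 - x[i] - x[j] <= s[(i, j)]
            m += y[i] + y[j] - 1 <= s[(i, j)]
            m += 1 - y[i] - y[j] <= s[(i, j)]
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    return {i: "positive" if x[i].value() else "negative" if y[i].value()
            else "neutral" for i in probs}
```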
{ "text": "In this work, we are particularly interested in improvements due to discourse-based methods. Thus, we report performance under three conditions: over only those instances that are related via discourse relations (Connected), over instances not related via discourse relations (Singletons), and over all instances (All).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The annotated data consists of 7 scenario-based, multi-party meetings from the AMI meeting corpus. We filter out very small DAs (DAs with fewer than 3 tokens, punctuation included). This gives us a total of 4606 DA instances, of which 1935 (42%) have opinion annotations. For our experiments, the DAs with no opinion annotations as well as those with neutral opinions are considered as neutral. Table 3 shows the class distributions in the data for the three conditions.\n\nTable 3: Class distribution over connected, singleton and all instances.\nCondition | Positive | Negative | Neutral | Total\nConnected | 643 | 343 | 81 | 1067\nSingleton | 553 | 233 | 2753 | 3539\nAll | 1196 | 576 | 2834 | 4606", "cite_spans": [], "ref_spans": [ { "start": 395, "end": 402, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Our first baseline, Base, is a simple distribution-based classifier that classifies the test data based on the overall distribution of the classes in the training data. However, in Table 3, the class distribution is different for the Connected and Singleton conditions. We incorporate this in a smarter baseline, Base-2, which constructs separate distributions for connected instances and singletons.
Thus, given a test instance, depending on whether it is connected, Base-2 uses the corresponding distribution to make its prediction. The third baseline is the supervised classifier, Local, described in Section 4.1. It is implemented using the SVM classifiers from the Weka toolkit (Witten and Frank, 2002). 4 Our supervised discourse-based classifier, ICA from Section 4.2, also uses a similar SVM implementation for its relational classifier. We implement our ILP approach from Section 4.3 using the optimization toolbox from Mathworks (http://www.mathworks.com) and the GNU Linear Programming Kit.", "cite_spans": [ { "start": 683, "end": 707, "text": "(Witten and Frank, 2002)", "ref_id": "BIBREF29" }, { "start": 710, "end": 711, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "5.1" }, { "text": "We observed that the ILP system performs better than the ICA system on instances that are connected, while ICA performs better on singletons. Thus, we also implemented a simple hybrid classifier (HYB), which selects the ICA prediction for the classification of singletons and the ILP prediction for the classification of connected instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "5.1" },
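The hybrid combination itself is a one-line selection rule; as a sketch (names are illustrative, not the authors' code):

```python
# HYB: take ILP's label for discourse-connected DAs, ICA's for singletons.
def hybrid(ica_labels, ilp_labels, connected):
    return {i: ilp_labels[i] if i in connected else ica_labels[i]
            for i in ica_labels}
```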
{ "text": "We performed 7-fold cross-validation experiments, where six meetings are used for training and the seventh is used for testing the supervised classifiers (Base, Base-2, Local and ICA). In the case of ILP, the optimization is applied to the output of Local for each test fold. Table 4 reports the accuracies of the classifiers, averaged over 7 folds.", "cite_spans": [], "ref_spans": [ { "start": 276, "end": 283, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "First, we observe that Base performs poorly over connected instances, but performs considerably better over singletons. This is expected as the overall majority class is neutral and the singletons are more likely to be neutral. Base-2, which incorporates the differentiated distributions, performs substantially better than Base. Local achieves an overall performance improvement over Base and Base-2 by 23 percentage points and 9 percentage points, respectively. In general, Local outperforms Base for all three conditions (p < 0.001), and Base-2 for the Singleton and All conditions (p < 0.001). This overall improvement in Local's accuracy corroborates the utility of the lexical, unigram and DA based features for polarity detection in this corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "Turning to the discourse-based classifiers, ICA, ILP and HYB, all of these perform better than Base and Base-2 for all conditions. ICA improves over Local by 9 percentage points for Connected, 3 points for Singleton and 4 points for All. ILP's improvement over Local for Connected and All is even more substantial: 28 percentage points and 6 points, respectively. Notice that ILP has the same performance as Local for Singletons, as the discourse constraints are not applied over unconnected instances. Finally, HYB significantly outperforms Local under all conditions. The significance levels of the improvements over Local are highlighted in Table 4. These improvements also signify that the underlying discourse scheme is effective, and adaptable to different implementations.", "cite_spans": [], "ref_spans": [ { "start": 644, "end": 651, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "Interestingly, ICA and ILP improve over Local in different ways. While ILP sharply improves the performance over the connected instances, ICA shows relatively modest improvements over both connected instances and singletons. ICA's improvement over singletons is interesting because it indicates that, even though the features in Table 1 are focused on discourse relations, ICA utilizes them to learn the classification of singletons too.", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 326, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "Comparing our discourse-based approaches, ILP does significantly better than ICA over connected instances (p < 0.001), while ICA does significantly better than ILP over singletons (p < 0.01). (Table 4: Accuracies of the classifiers measured over Connected, Singleton and All instances. Performance significantly better than Local is indicated in bold for p < 0.001 and underlined for p < 0.01.) However, there is no significant difference between ICA and ILP for the All condition. The HYB classifier outperforms ILP for the Singleton condition (p < 0.01) and ICA for the Connected condition (p < 0.001). Interestingly, over all instances (the All condition), HYB also performs significantly better than both ICA (p < 0.001) and ILP (p < 0.01).", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "Amongst our two approaches, ILP performs better, and hence we further analyze its behavior to understand how the improvements are achieved. Table 5 reports the performance of ILP and Local for the precision, recall and f-measure metrics (averaged over 7 test folds), measured separately for each of the opinion categories. The most prominent improvement by ILP is observed for the recall of the polar categories under the Connected condition: 40 percentage points for the positive class, and 29 percentage points for the negative class. The gain in recall is not accompanied by a significant loss in precision. This results in an improvement in f-measure for the polar categories (24 points for positive and 16 points for negative). Also note that, by virtue of the constraint in Equation 7, ILP does not classify any connected instance as neutral; thus the precision is NaN, recall is 0 and the f-measure is NaN. This is indicated as * in the Table. The improvement of ILP for the All condition, for the polar classes, follows a similar trend for recall (18 to 21 point improvement) and f-measure (9 to 13 point improvement). In addition to this, ILP has an overall improvement in precision over Local. This may seem counterintuitive, as in Table 5, ILP's precision for connected nodes is similar to, or lower than, that of Local. This is explained by the fact that, while going from the connected to the overall condition, Local's polar predictions increase threefold (565 to 1482), but its correct polar predictions increase by only twofold (430 to 801). Thus, the ratio of change in the total polar predictions to the correct polar predictions is 3:2.
On the other hand, while polar predictions by ILP increase by only twofold (1067 to 1984), its correct polar predictions increase by 1.5 times (804 to 1175). Here, the ratio of change in the total polar predictions to the correct polar predictions is 4:3, a smaller ratio.", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 147, "text": "Table 5", "ref_id": null }, { "start": 943, "end": 949, "text": "Table.", "ref_id": null }, { "start": 1245, "end": 1252, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5.3" }, { "text": "The contingency table (Table 6) shows how Local and ILP compare against the gold standard annotations. Notice here that, even though ILP makes more polar guesses as compared to Local, a greater proportion of the ILP guesses are correct. The number of non-diagonal elements is much smaller for ILP, resulting in the accuracy improvements seen in Table 4.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 31, "text": "(Table 6)", "ref_id": "TABREF4" }, { "start": 346, "end": 353, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5.3" }, { "text": "The results in Table 4 show that Local, which provides the classifications for bootstrapping ICA and ILP, predicts an incorrect class for more than 50% of the connected instances. Methods that bootstrap from such noisy starting points are in danger of propagating the errors and hence worsening the performance. Interestingly, in spite of starting with so many bad classifications, ILP is able to achieve a large performance improvement. We discovered that, given a set of connected instances, even when Local has only one correct guess, ILP is able to use this to rectify the related instances. We illustrate this situation in Figure 2, which shows the classification of each DA by the gold standard (G), the Local classifier (L) and the ILP classifier (ILP). Observe that Local predicts the correct positive class (+) only for DA-4 (the DA containing bit more durable and ergonomic). Notice that these are clear cases of positive evaluation. It incorrectly predicts the polarity of DA-2 (containing bit more bouncy) as neutral (*), and DA-5 (containing a bit different from all the other remote controls) as negative (-). DA-2 and DA-5 exemplify the fact that polarity classification is a complex and difficult problem: being bouncy is a positive evaluation in this particular discourse context, and may not be so elsewhere. Thus, naturally, lexicons and unigram-based learning would fail to capture this positive evaluation. Similarly, \"being different\" could be deemed negative in other discourse contexts. However, ILP is able to arrive at the correct predictions for all the instances. As DA-4 is connected to both DA-2 and DA-5 via a discourse relation that enforces an equal-polarity constraint (the same+reinforcing relation of row 1, Table 2), both of the misclassifications are rectified.
Presumably, the incorrect predictions made by Local are low-confidence estimates, while the correct predictions have high confidence, which makes it possible for ILP to make the corrections.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 4", "ref_id": null }, { "start": 617, "end": 626, "text": "Figure 2,", "ref_id": "FIGREF1" }, { "start": 1705, "end": 1712, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Examples and Discussion", "sec_num": "6" }, { "text": "We also observed the propagation of the correct classification for other types of discourse relations, for more complex types of connectivity, and also for conditions where an instance is not directly connected to the correctly predicted instance. The meeting snippet below (Example 2) and its corresponding DA relations (Figure 3) illustrate this. This example is a reinforcing discourse where the speaker is arguing for the number keypad, which is an alternative to the scrolling option. Thus, he argues against the scrolling, and argues for entering the number (which is a capability of the number keypad).", "cite_spans": [], "ref_spans": [ { "start": 321, "end": 330, "text": "(Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Examples and Discussion", "sec_num": "6" }, { "text": "(2) DA-1: I reckon you're gonna have to have a number keypad anyway for the amount of channels these days, DA-2: You wouldn't want to just have to scroll through all the channels to get to the one you want DA-3: You wanna enter just the number of it, if you know it DA-4: I reckon we're gonna have to have a number keypad anyway", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples and Discussion", "sec_num": "6" }, { "text": "In Figure 3, we see that DA-2 is connected via an alternative+reinforcing discourse relation to each of its neighbors DA-1 and DA-3, which encourages the optimization to choose a class for it that is opposite to DA-1 and DA-3. Notice also that, even though Local predicts only DA-4 correctly, this correct classification finally influences the correct choice for all the instances, including the remotely connected DA-2.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Examples and Discussion", "sec_num": "6" }, { "text": "This work focuses on the first step in ascertaining whether discourse relations are useful for improving opinion polarity classification, whether they can be modeled, and what modeling choices can be used. To this end, we explored two distinct paradigms: the supervised ICA and the unsupervised ILP. We showed that both of our approaches are effective in exploiting discourse relations to significantly improve polarity classification. We found that there is a difference in how ICA and ILP achieve improvements, and that combining the two in a hybrid approach can lead to further overall improvement. Quantitatively, we showed that our approach is able to achieve a large increase in recall of the polar categories without harming the precision, which results in the performance improvements. Qualitatively, we illustrated how, even if the bootstrapping process is noisy, the optimization and discourse constraints effectively rectify the misclassifications. The improvements of our diverse global inference approaches indicate that discourse information can be adapted in different ways to augment and improve existing opinion analysis techniques.
The automation of discourse-relation recognition is the next step in this research; the behavior of ICA and ILP can change depending on how the discourse-level recognition is automated. The implementation and comparison of the two methods under full automation is the focus of our future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "7" }, { "text": "The AMI corpus contains a set of scenario-based meetings where participants have to design a new TV remote prototype.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "DA segmentation is provided with the AMI corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Local is supervised, as previous work has shown that supervised methods are effective in opinion analysis. Even though this makes the final end-to-end system with the ILP implementation semi-supervised, note that the discourse-based ILP part is itself unsupervised.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the SMO implementation, which, when used with logistic regression, has an output that can be viewed as a posterior probability distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the Department of Homeland Security under grant N000140710152 and NSF Grant No. 0746930. We would also like to thank the anonymous reviewers for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Distilling opinion in discourse: A preliminary study", "authors": [ { "first": "N", "middle": [], "last": "Asher", "suffix": "" }, { "first": "F", "middle": [], "last": "Benamara", "suffix": "" }, { "first": "Y", "middle": [], "last": "Mathieu", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Asher, F. Benamara, and Y. Mathieu. 2008. Distilling opinion in discourse: A preliminary study. In COLING-2008.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The power of negative thinking: Exploiting label disagreement in the min-cut classification framework", "authors": [ { "first": "M", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Bansal, C. Cardie, and L. Lee. 2008. The power of negative thinking: Exploiting label disagreement in the min-cut classification framework. In COLING-2008.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Combining collective classification and link prediction", "authors": [ { "first": "M", "middle": [], "last": "Bilgic", "suffix": "" }, { "first": "G", "middle": [ "M" ], "last": "Namata", "suffix": "" }, { "first": "L", "middle": [], "last": "Getoor", "suffix": "" } ], "year": 2007, "venue": "Workshop on Mining Graphs and Complex Structures at the IEEE International Conference on Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Bilgic, G. M. Namata, and L. Getoor. 2007. Combining collective classification and link prediction.
In Workshop on Mining Graphs and Complex Structures at the IEEE International Conference on Data Mining.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The AMI meetings corpus", "authors": [ { "first": "J", "middle": [], "last": "Carletta", "suffix": "" }, { "first": "S", "middle": [], "last": "Ashby", "suffix": "" }, { "first": "S", "middle": [], "last": "Bourban", "suffix": "" }, { "first": "M", "middle": [], "last": "Flynn", "suffix": "" }, { "first": "M", "middle": [], "last": "Guillemot", "suffix": "" }, { "first": "T", "middle": [], "last": "Hain", "suffix": "" }, { "first": "J", "middle": [], "last": "Kadlec", "suffix": "" }, { "first": "V", "middle": [], "last": "Karaiskos", "suffix": "" }, { "first": "W", "middle": [], "last": "Kraaij", "suffix": "" }, { "first": "M", "middle": [], "last": "Kronenthal", "suffix": "" }, { "first": "G", "middle": [], "last": "Lathoud", "suffix": "" }, { "first": "M", "middle": [], "last": "Lincoln", "suffix": "" }, { "first": "A", "middle": [], "last": "Lisowska", "suffix": "" }, { "first": "I", "middle": [], "last": "Mccowan", "suffix": "" }, { "first": "W", "middle": [], "last": "Post", "suffix": "" }, { "first": "D", "middle": [], "last": "Reidsma", "suffix": "" }, { "first": "P", "middle": [], "last": "Wellner", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Measuring Behavior Symposium on \"Annotating and measuring Meeting Behavior", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Carletta, S. Ashby, S. Bourban, M. Flynn, M. Guillemot, T. Hain, J. Kadlec, V. Karaiskos, W. Kraaij, M. Kronenthal, G. Lathoud, M. Lincoln, A. Lisowska, I. McCowan, W. Post, D. Reidsma, and P. Wellner. 2005. The AMI meetings corpus. In Proceedings of the Measuring Behavior Symposium on \"Annotating and measuring Meeting Behavior\".", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Joint extraction of entities and relations for opinion recognition", "authors": [ { "first": "Y", "middle": [], "last": "Choi", "suffix": "" }, { "first": "E", "middle": [], "last": "Breck", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Choi, E. Breck, and C. Cardie. 2006. Joint extraction of entities and relations for opinion recognition. In EMNLP 2006.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Joint determination of anaphoricity and coreference resolution using integer programming", "authors": [ { "first": "P", "middle": [], "last": "Denis", "suffix": "" }, { "first": "J", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2007, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Denis and J. Baldridge. 2007. Joint determination of anaphoricity and coreference resolution using integer programming. In HLT-NAACL 2007.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Sentiment polarity identification in financial news: A cohesion-based approach", "authors": [ { "first": "A", "middle": [], "last": "Devitt", "suffix": "" }, { "first": "K", "middle": [], "last": "Ahmad", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Devitt and K. Ahmad. 2007. Sentiment polarity identification in financial news: A cohesion-based approach.
In ACL 2007.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Seeing stars when there aren't many stars: Graph-based semi-supervised learning for sentiment categorization", "authors": [ { "first": "A", "middle": [ "B" ], "last": "Goldberg", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2006, "venue": "HLT-NAACL 2006 Workshop on Textgraphs: Graph-based Algorithms for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. B. Goldberg and X. Zhu. 2006. Seeing stars when there aren't many stars: Graph-based semi-supervised learning for sentiment categorization. In HLT-NAACL 2006 Workshop on Textgraphs: Graph-based Algorithms for Natural Language Processing.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A joint model for semantic role labeling", "authors": [ { "first": "A", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Haghighi, K. Toutanova, and C. Manning. 2005. A joint model for semantic role labeling. In CoNLL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Fully automatic lexicon expansion for domain-oriented sentiment analysis", "authors": [ { "first": "H", "middle": [], "last": "Kanayama", "suffix": "" }, { "first": "T", "middle": [], "last": "Nasukawa", "suffix": "" } ], "year": 2006, "venue": "EMNLP-2006", "volume": "", "issue": "", "pages": "355--363", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Kanayama and T. Nasukawa. 2006. Fully automatic lexicon expansion for domain-oriented sentiment analysis. In EMNLP-2006, pages 355-363, Sydney, Australia.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sentiment classification of movie reviews using contextual valence shifters", "authors": [ { "first": "A", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "D", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2006, "venue": "Computational Intelligence", "volume": "22", "issue": "2", "pages": "110--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Kennedy and D. Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence, 22(2):110-125.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Link-based classification", "authors": [ { "first": "Q", "middle": [], "last": "Lu", "suffix": "" }, { "first": "L", "middle": [], "last": "Getoor", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Q. Lu and L. Getoor. 2003. Link-based classification. In Proceedings of the International Conference on Machine Learning (ICML).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semantic role labeling via tree kernel joint inference", "authors": [ { "first": "A", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "D", "middle": [], "last": "Pighin", "suffix": "" }, { "first": "R", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Moschitti, D. Pighin, and R. Basili. 2006.
Semantic role labeling via tree kernel joint inference. In CoNLL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Syntactic and semantic kernels for short text pair categorization", "authors": [ { "first": "A", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2009, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Moschitti. 2009. Syntactic and semantic kernels for short text pair categorization. In EACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Emotion recognition in spontaneous speech using GMMs", "authors": [ { "first": "D", "middle": [], "last": "Neiberg", "suffix": "" }, { "first": "K", "middle": [], "last": "Elenius", "suffix": "" }, { "first": "K", "middle": [], "last": "Laskowski", "suffix": "" } ], "year": 2006, "venue": "INTERSPEECH 2006 ICSLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Neiberg, K. Elenius, and K. Laskowski. 2006. Emotion recognition in spontaneous speech using GMMs. In INTERSPEECH 2006 ICSLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Iterative classification in relational data", "authors": [ { "first": "J", "middle": [], "last": "Neville", "suffix": "" }, { "first": "D", "middle": [], "last": "Jensen", "suffix": "" } ], "year": 2000, "venue": "Proc. AAAI-2000 Workshop on Learning Statistical Models from Relational Data", "volume": "", "issue": "", "pages": "13--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Neville and D. Jensen. 2000. Iterative classification in relational data. In Proc. AAAI-2000 Workshop on Learning Statistical Models from Relational Data, pages 13-20. AAAI Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In ACL 2004.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Contextual Valence Shifters. Computing Attitude and Affect in Text: Theory and Applications", "authors": [ { "first": "L", "middle": [], "last": "Polanyi", "suffix": "" }, { "first": "A", "middle": [], "last": "Zaenen", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Polanyi and A. Zaenen. 2006. Contextual Valence Shifters. Computing Attitude and Affect in Text: Theory and Applications.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Extracting product features and opinions from reviews", "authors": [ { "first": "A.-M", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "O", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2005, "venue": "HLT-EMNLP 2005", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.-M. Popescu and O. Etzioni. 2005. Extracting product features and opinions from reviews.
In HLT-EMNLP 2005.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multimodal subjectivity analysis of multiparty conversation", "authors": [ { "first": "S", "middle": [], "last": "Raaijmakers", "suffix": "" }, { "first": "K", "middle": [], "last": "Truong", "suffix": "" }, { "first": "T", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Raaijmakers, K. Truong, and T. Wilson. 2008. Multimodal subjectivity analysis of multiparty conversation. In EMNLP.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Markov logic networks", "authors": [ { "first": "M", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "P", "middle": [], "last": "Domingos", "suffix": "" } ], "year": 2006, "venue": "Mach. Learn", "volume": "62", "issue": "1-2", "pages": "107--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Richardson and P. Domingos. 2006. Markov logic networks. Mach. Learn., 62(1-2):107-136.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A linear programming formulation for global inference in natural language tasks", "authors": [ { "first": "D", "middle": [], "last": "Roth", "suffix": "" }, { "first": "W", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2004, "venue": "Proceedings of CoNLL-2004", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of CoNLL-2004, pages 1-8. Boston, MA, USA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Sentiment analysis based on probabilistic models using inter-sentence information", "authors": [ { "first": "K", "middle": [], "last": "Sadamitsu", "suffix": "" }, { "first": "S", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "M", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2008, "venue": "LREC'08", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Sadamitsu, S. Sekine, and M. Yamamoto. 2008. Sentiment analysis based on probabilistic models using inter-sentence information. In LREC'08.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Detecting arguing and sentiment in meetings", "authors": [ { "first": "S", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "J", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2007, "venue": "SIGdial Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Somasundaran, J. Ruppenhofer, and J. Wiebe. 2007. Detecting arguing and sentiment in meetings. In SIGdial Workshop on Discourse and Dialogue 2007.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Discourse level opinion interpretation", "authors": [ { "first": "S", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "J", "middle": [], "last": "Ruppenhofer", "suffix": "" } ], "year": 2008, "venue": "Coling", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Somasundaran, J. Wiebe, and J. Ruppenhofer. 2008. Discourse level opinion interpretation. 
In Coling 2008.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Extracting semantic orientations of phrases from dictionary", "authors": [ { "first": "H", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "T", "middle": [], "last": "Inui", "suffix": "" }, { "first": "M", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2007, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Takamura, T. Inui, and M. Okumura. 2007. Extracting semantic orientations of phrases from dictionary. In HLT-NAACL 2007.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Link prediction in relational data", "authors": [ { "first": "B", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "M", "middle": [], "last": "Wong", "suffix": "" }, { "first": "P", "middle": [], "last": "Abbeel", "suffix": "" }, { "first": "D", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2004, "venue": "Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Taskar, M. Wong, P. Abbeel, and D. Koller. 2004. Link prediction in relational data. In Neural Information Processing Systems.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Get out the vote: Determining support or opposition from congressional floor-debate transcripts", "authors": [ { "first": "M", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Thomas, B. Pang, and L. Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In EMNLP 2006.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Recognizing contextual polarity in phrase-level sentiment analysis", "authors": [ { "first": "T", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "P", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2005, "venue": "HLT-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Wilson, J. Wiebe, and P. Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In HLT-EMNLP 2005.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Data mining: practical machine learning tools and techniques with Java implementations", "authors": [ { "first": "I", "middle": [ "H" ], "last": "Witten", "suffix": "" }, { "first": "E", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2002, "venue": "SIGMOD Rec", "volume": "31", "issue": "1", "pages": "76--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. H. Witten and E. Frank. 2002. Data mining: practical machine learning tools and techniques with Java implementations. 
SIGMOD Rec., 31(1):76-77.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "Discourse Relations between DA segments for Example 1.", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "Discourse Relations and Classifications for Example 1.", "uris": null }, "FIGREF2": { "type_str": "figure", "num": null, "text": "Discourse Relations and Classifications for Example 2.", "uris": null }, "TABREF1": { "type_str": "table", "html": null, "content": "", "num": null, "text": "" }, "TABREF2": { "type_str": "table", "html": null, "content": "
Base Base-2 Local ICA ILP HYB
Connected 24.4 47.56 46.66 55.64 75.07 75.07
Singleton 51.72 63.23 75.73 78.72 75.73 78.72
All 45.34 59.46 68.72 73.31 75.35 77.72
", "num": null, "text": "Accuracy of Base, Base-2, Local, ICA, ILP, and HYB over Connected, Singleton, and All instances." }, "TABREF4": { "type_str": "table", "html": null, "content": "
", "num": null, "text": "Contingency table over all instances." }, "TABREF5": { "type_str": "table", "html": null, "content": "
Positive Negative Neutral
Local ILP Local ILP Local ILP
Connected-Prec 78.1 78.2 71.9 69.8 12.1 -
Connected-Recall 45.3 86.3 44.1 73.4 62.8 *
Connected-F1 56.8 81.5 54.0 70.7 18.5 -
All-Prec 56.2 61.3 52.3 54.6 76.3 88.3
All-Recall 46.6 67.7 44.3 62.5 83.9 81.5
All-F1 50.4 64.0 46.0 57.1 79.6 84.6
", "num": null, "text": "Precision, Recall, and F-measure for each polarity category. Performance significantly better than Local is indicated in bold (p < 0.001), underline (p < 0.01), and italics (p < 0.05). The * denotes that ILP does not retrieve any connected node as neutral; its Neutral cells for the Connected rows are therefore empty (-)." } } } }