{ "paper_id": "P15-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:08:24.874522Z" }, "title": "Aligning Opinions: Cross-Lingual Opinion Mining with Dependencies", "authors": [ { "first": "Mariana", "middle": [ "S C" ], "last": "Almeida", "suffix": "", "affiliation": { "laboratory": "", "institution": "Instituto Superior T\u00e9cnico", "location": { "postCode": "1049-001", "settlement": "Lisboa", "country": "Portugal" } }, "email": "" }, { "first": "Cl\u00e1udia", "middle": [], "last": "Pinto", "suffix": "", "affiliation": { "laboratory": "", "institution": "Instituto Superior T\u00e9cnico", "location": { "postCode": "1049-001", "settlement": "Lisboa", "country": "Portugal" } }, "email": "" }, { "first": "Helena", "middle": [], "last": "Figueira", "suffix": "", "affiliation": { "laboratory": "", "institution": "Instituto Superior T\u00e9cnico", "location": { "postCode": "1049-001", "settlement": "Lisboa", "country": "Portugal" } }, "email": "" }, { "first": "Pedro", "middle": [], "last": "Mendes", "suffix": "", "affiliation": { "laboratory": "", "institution": "Instituto Superior T\u00e9cnico", "location": { "postCode": "1049-001", "settlement": "Lisboa", "country": "Portugal" } }, "email": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "", "affiliation": { "laboratory": "", "institution": "Instituto Superior T\u00e9cnico", "location": { "postCode": "1049-001", "settlement": "Lisboa", "country": "Portugal" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a cross-lingual framework for fine-grained opinion mining using bitext projection. The only requirements are a running system in a source language and word-aligned parallel data. Our method projects opinion frames from the source to the target language, and then trains a system on the target language using the automatic annotations. Key to our approach is a novel dependency-based model for opinion mining, which we show, as a byproduct, to be on par with the current state of the art for English, while avoiding the need for integer programming or reranking. In cross-lingual mode (English to Portuguese), our approach compares favorably to a supervised system (with scarce labeled data), and to a delexicalized model trained using universal tags and bilingual word embeddings.", "pdf_parse": { "paper_id": "P15-1040", "_pdf_hash": "", "abstract": [ { "text": "We propose a cross-lingual framework for fine-grained opinion mining using bitext projection. The only requirements are a running system in a source language and word-aligned parallel data. Our method projects opinion frames from the source to the target language, and then trains a system on the target language using the automatic annotations. Key to our approach is a novel dependency-based model for opinion mining, which we show, as a byproduct, to be on par with the current state of the art for English, while avoiding the need for integer programming or reranking. 
In cross-lingual mode (English to Portuguese), our approach compares favorably to a supervised system (with scarce labeled data), and to a delexicalized model trained using universal tags and bilingual word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The goal of opinion mining is to extract opinions and sentiments from text (Pang and Lee, 2008; Wilson, 2008; Liu, 2012) . With the advent of social media and the increasing amount of data available on the Web, this has become a very active area of research, with applications in summarization of customer reviews (Hu and Liu, 2004; Wu et al., 2011) , tracking of newswire and blogs (Ku et al., 2006) , question answering (Yu and Hatzivassiloglou, 2003) , and text-to-speech synthesis (Alm et al., 2005) .", "cite_spans": [ { "start": 75, "end": 95, "text": "(Pang and Lee, 2008;", "ref_id": "BIBREF35" }, { "start": 96, "end": 109, "text": "Wilson, 2008;", "ref_id": "BIBREF55" }, { "start": 110, "end": 120, "text": "Liu, 2012)", "ref_id": "BIBREF27" }, { "start": 314, "end": 332, "text": "(Hu and Liu, 2004;", "ref_id": "BIBREF17" }, { "start": 333, "end": 349, "text": "Wu et al., 2011)", "ref_id": "BIBREF56" }, { "start": 383, "end": 400, "text": "(Ku et al., 2006)", "ref_id": "BIBREF25" }, { "start": 422, "end": 453, "text": "(Yu and Hatzivassiloglou, 2003)", "ref_id": "BIBREF61" }, { "start": 485, "end": 503, "text": "(Alm et al., 2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While early work has focused on determining sentiment at document and sentence level (Pang et al., 2002; Turney, 2002; Balog et al., 2006) , research has gradually progressed towards finegrained opinion mining, where rather than determining global sentiment, the goal is to parse text into opinion frames, identifying opinion expressions, agents, targets, and polarities (Ding et al., 2008) 
, or addressing compositionality (Socher et al., 2013b) . Since the release of the MPQA corpus 1 (Wilson, 2008) , a standard corpus for fine-grained opinion mining of news documents, a long line of work has been produced (reviewed in \u00a72). Despite the large volume of prior work, opinion mining has by and large been limited to monolingual approaches in English. 2 This is explained by the heavy annotation effort necessary for current learning-based approaches to succeed, which delays the deployment of opinion miners for new languages.", "cite_spans": [ { "start": 85, "end": 104, "text": "(Pang et al., 2002;", "ref_id": "BIBREF36" }, { "start": 105, "end": 118, "text": "Turney, 2002;", "ref_id": "BIBREF49" }, { "start": 119, "end": 138, "text": "Balog et al., 2006)", "ref_id": "BIBREF3" }, { "start": 371, "end": 390, "text": "(Ding et al., 2008)", "ref_id": "BIBREF13" }, { "start": 424, "end": 446, "text": "(Socher et al., 2013b)", "ref_id": "BIBREF43" }, { "start": 488, "end": 501, "text": "Wilson, 2008)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We bridge the existing gap by proposing a cross-lingual approach to fine-grained opinion mining via bitext projection. This technique has been quite effective in several NLP tasks, such as part-of-speech (POS) tagging (T\u00e4ckstr\u00f6m et al., 2013) , named entity recognition (Wang and Manning, 2014) , syntactic parsing (Yarowsky and Ngai, 2001; Hwa et al., 2005) , semantic role labeling (Pad\u00f3 and Lapata, 2009) , and coreference resolution (Martins, 2015) . 
Given a corpus of parallel sentences (bitext), the idea is to run a pre-trained system on the source side and then to use word alignments to transfer the produced annotations to the target side, creating an automatic training corpus for the impoverished language.", "cite_spans": [ { "start": 218, "end": 242, "text": "(T\u00e4ckstr\u00f6m et al., 2013)", "ref_id": "BIBREF47" }, { "start": 270, "end": 294, "text": "(Wang and Manning, 2014)", "ref_id": "BIBREF51" }, { "start": 315, "end": 340, "text": "(Yarowsky and Ngai, 2001;", "ref_id": "BIBREF60" }, { "start": 341, "end": 358, "text": "Hwa et al., 2005)", "ref_id": "BIBREF18" }, { "start": 384, "end": 407, "text": "(Pad\u00f3 and Lapata, 2009)", "ref_id": "BIBREF34" }, { "start": 437, "end": 452, "text": "(Martins, 2015)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To alleviate the complexity of the task, we start by introducing a lightweight representation, called dependency-based opinion mining, and convert the MPQA corpus to this formalism ( \u00a73). We propose a simple arc-factored model that permits easy decoding ( \u00a74) and we show that, despite its simplicity, this model is on par with state-of-the-art opinion mining systems for English ( \u00a75). Then, through bitext projection, we transfer these dependency-based opinion frames to Portuguese (our target language), and train a system on the resulting corpus ( \u00a76).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As part of this work, a validation corpus in Portuguese with subjectivity annotations was created, along with a translation of the MPQA Subjectivity lexicon of Wilson et al. (2005). 
3 Experimental evaluation ( \u00a77) shows that our cross-lingual approach surpasses a supervised system trained on a small corpus in the target language, as well as a delexicalized baseline trained using universal POS tags, bilingual word embeddings and a projected lexicon.", "cite_spans": [ { "start": 162, "end": 163, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A considerable amount of work on fine-grained opinion mining is based on the MPQA corpus. Kim and Hovy (2006) proposed a method for finding opinion holders and topics, with the aid of a semantic role labeler. Choi et al. (2005) and Breck et al. (2007) used CRFs for finding opinion holders and recognizing opinion expressions, respectively. The two things are predicted jointly by Choi et al. (2006) , with integer programming, and Johansson and Moschitti (2010), via reranking. The same method was applied later for joint prediction of opinion expressions and their polarities (Johansson and Moschitti, 2011) . The advantage of a joint model was also shown by Choi and Cardie (2010) and Yang and Cardie (2014) . Yang and Cardie (2012) classified expressions with a semi-Markov decoder, outperforming a B-I-O tagger; in later work, the same authors proposed an ILP decoder to jointly retrieve opinion expressions, holders, and targets (Yang and Cardie, 2013) . A more recent work (\u0130rsoy and Cardie, 2014) proposes a recurrent neural network to identify opinion spans.", "cite_spans": [ { "start": 90, "end": 109, "text": "Kim and Hovy (2006)", "ref_id": "BIBREF23" }, { "start": 209, "end": 227, "text": "Choi et al. (2005)", "ref_id": "BIBREF8" }, { "start": 232, "end": 251, "text": "Breck et al. (2007)", "ref_id": "BIBREF5" }, { "start": 381, "end": 399, "text": "Choi et al. 
(2006)", "ref_id": "BIBREF9" }, { "start": 578, "end": 609, "text": "(Johansson and Moschitti, 2011)", "ref_id": "BIBREF21" }, { "start": 661, "end": 683, "text": "Choi and Cardie (2010)", "ref_id": "BIBREF7" }, { "start": 688, "end": 710, "text": "Yang and Cardie (2014)", "ref_id": "BIBREF59" }, { "start": 713, "end": 735, "text": "Yang and Cardie (2012)", "ref_id": "BIBREF57" }, { "start": 935, "end": 958, "text": "(Yang and Cardie, 2013)", "ref_id": "BIBREF58" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "All the approaches above rely on a span-based representation of the opinion elements. This makes joint decoding procedures more complicated, since they must forbid overlap of opinion elements or add further constraints, leading to integer programming or reranking strategies. Besides, there is little consensus about what the correct span boundaries should be, the inter-annotator agreement being quite low . In contrast, we use dependencies to model opinion elements and relations, leading to a compact representation that does not depend on spans and which is tractable to decode. A dependency scheme was also used by Wu et al. (2011) for fine-grained opinion mining. Our work differs in that we mine opinions in news articles instead of product reviews, a considerably different task. In addition, the approach of Wu et al. (2011) relies on \"span nodes\" (instead of head words), requiring the solution of an ILP followed by an approximate heuristic.", "cite_spans": [ { "start": 621, "end": 637, "text": "Wu et al. (2011)", "ref_id": "BIBREF56" }, { "start": 819, "end": 835, "text": "Wu et al. (2011)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Query-based multilingual opinion mining was addressed in several NTCIR shared tasks (Seki et al., 2007; Seki et al., 2010) . 4 However, to the best of our knowledge, a cross-lingual approach has never been attempted. 
Some steps were taken by Mihalcea et al. (2007) and Banea et al. (2008) , who translated an English lexicon and the MPQA corpus to Romanian and Spanish, but for the much simpler task of sentence-level subjectivity analysis. Cross-lingual sentiment classification was addressed by Wan (2009) , Prettenhofer and Stein (2010) and Wei and Pal (2010) at document level, and by Lu et al. (2011) at sentence level. Recently, Gui et al. (2013) applied projection learning for opinion mining in Chinese. However, this work only addresses agent detection and requires translating the MPQA corpus. While all these works are relevant, none addresses fine-grained opinion mining in its full generality, where the goal is to predict full opinion frames.", "cite_spans": [ { "start": 84, "end": 103, "text": "(Seki et al., 2007;", "ref_id": "BIBREF40" }, { "start": 104, "end": 122, "text": "Seki et al., 2010)", "ref_id": "BIBREF41" }, { "start": 235, "end": 257, "text": "Mihalcea et al. (2007)", "ref_id": "BIBREF33" }, { "start": 262, "end": 281, "text": "Banea et al. (2008)", "ref_id": "BIBREF4" }, { "start": 490, "end": 500, "text": "Wan (2009)", "ref_id": "BIBREF50" }, { "start": 503, "end": 532, "text": "Prettenhofer and Stein (2010)", "ref_id": "BIBREF38" }, { "start": 537, "end": 555, "text": "Wei and Pal (2010)", "ref_id": "BIBREF52" }, { "start": 582, "end": 598, "text": "Lu et al. (2011)", "ref_id": "BIBREF28" }, { "start": 628, "end": 645, "text": "Gui et al. (2013)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "This work addresses various elements of subjectivity annotated in the MPQA corpus, namely:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-Based Opinion Mining", "sec_num": "3" }, { "text": "\u2022 direct-subjective expressions (henceforth, opinions) that are direct mentions of a private state, e.g. 
opinions, beliefs, emotions, sentiments, speculations, goals, etc.;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-Based Opinion Mining", "sec_num": "3" }, { "text": "\u2022 the opinion agent, i.e., the holder of the opinion;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-Based Opinion Mining", "sec_num": "3" }, { "text": "\u2022 the opinion target, i.e., what is being argued about;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-Based Opinion Mining", "sec_num": "3" }, { "text": "\u2022 the opinion polarity, i.e., the sentiment (positive, negative or neutral) towards the target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-Based Opinion Mining", "sec_num": "3" }, { "text": "As an example, consider the sentence in Figure 1, which has two opinions, expressed by the spans \"is believed\" (O 1 ) and \"are against\" (O 2 ). The first opinion has an implicit agent and a neutral polarity toward the target \"the rich elites\" (T 1 ). This target is also the agent (A 2 ) of the second opinion, which has a negative polarity toward \"Hugo Ch\u00e1vez\" (T 2 ).", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 46, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Dependency-Based Opinion Mining", "sec_num": "3" }, { "text": "As noted in prior work (Choi et al., 2005; Kim and Hovy, 2006; Johansson and Moschitti, 2010) , one source of difficulty when learning opinion miners on MPQA is with the boundaries of the entity spans. The fact that no criterion for choosing these boundaries is explicitly defined in the annotation guidelines leads to a low inter-annotator agreement. To circumvent this problem and make the learning task easier, we depart from the classical span-based approaches toward dependency-based opinion mining. 
This decision is inspired by the success of dependency models for syntax and semantics (Buchholz and Marsi, 2006; Surdeanu et al., 2008) . These dependency relations can be further converted to opinion spans (as described in \u00a73.3), or directly used as features in downstream applications. As we will see, a compact representation based on dependencies can achieve state-of-the-art results and has the advantage of being easily transferred to other languages through a parallel corpus. Figure 1 depicts a sentence-level dependency representation for fine-grained opinion mining. The overall structure is a graph whose nodes are head words (plus two special nodes, root and null), connected by labeled arcs, as outlined below.", "cite_spans": [ { "start": 23, "end": 42, "text": "(Choi et al., 2005;", "ref_id": "BIBREF8" }, { "start": 43, "end": 62, "text": "Kim and Hovy, 2006;", "ref_id": "BIBREF23" }, { "start": 63, "end": 93, "text": "Johansson and Moschitti, 2010)", "ref_id": "BIBREF20" }, { "start": 592, "end": 618, "text": "(Buchholz and Marsi, 2006;", "ref_id": "BIBREF6" }, { "start": 619, "end": 641, "text": "Surdeanu et al., 2008)", "ref_id": null } ], "ref_spans": [ { "start": 990, "end": 998, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Motivation", "sec_num": "3.1" }, { "text": "Determining head nodes. The three opinion elements that we want to detect (opinions, agents and targets) are each represented by a head node, which corresponds to a single word (underlined in Figure 1 ). When converting the MPQA corpus to dependencies, we determine this \"representative\" word automatically, by using the following simple heuristic: we first parse the sentence using the Stanford dependency parser (Socher et al., 2013a) ; then, we pick the last word in the span whose syntactic parent is outside the span (if the span is a syntactic phrase, there is only one word whose parent is outside the span, which is the lexical head). 
The same heuristic has been used for identifying the heads of mention spans in coreference resolution (Durrett and Klein, 2013) .", "cite_spans": [ { "start": 414, "end": 436, "text": "(Socher et al., 2013a)", "ref_id": "BIBREF42" }, { "start": 745, "end": 770, "text": "(Durrett and Klein, 2013)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 192, "end": 200, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Dependency Graph", "sec_num": "3.2" }, { "text": "Defining labeled arcs. The opinion relations are represented as labeled arcs that link these head nodes. Two artificial nodes are added: a root node, which links to all nodes that represent opinion words, with the label OPINION; and a null node, which is used for representing implicit relations. To represent opinion-agent relations, we draw an arc labeled AGENT toward the agent word. For opinion-target relations, the arc is toward the target word and has one of the labels TARGET:0, TARGET:+, or TARGET:-; this encodes the polarity in addition to the type of relation. We also include implicit arcs for opinion elements whose agent or target is not mentioned inside the sentence-these are modeled as arcs pointing to the null node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Graph", "sec_num": "3.2" }, { "text": "Dependency opinion graph. We have the following requirements for a well-formed dependency opinion graph:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Graph", "sec_num": "3.2" }, { "text": "1. No self-arcs or arcs linking root to null.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Graph", "sec_num": "3.2" }, { "text": "comes from the root node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An arc is labeled as OPINION if and only if it", "sec_num": "2." }, { "text": "3. 
Arcs labeled as AGENT or TARGET must come from an opinion node (i.e., a node with an incoming OPINION arc).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An arc is labeled as OPINION if and only if it", "sec_num": "2." }, { "text": "4. Every opinion node has exactly one outgoing AGENT arc and one outgoing TARGET arc (possibly implicit). 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An arc is labeled as OPINION if and only if it", "sec_num": "2." }, { "text": "Similarly to prior work (Choi and Cardie, 2010; Johansson and Moschitti, 2011; Johansson and Moschitti, 2013) , we map the MPQA's polarity into three levels: positive, negative and neutral, where the latter includes spans without polarity annotation or annotated as \"both\". As in Johansson and Moschitti (2013), we also ignore the \"uncertain\" aspect of the annotated polarities.", "cite_spans": [ { "start": 24, "end": 47, "text": "(Choi and Cardie, 2010;", "ref_id": "BIBREF7" }, { "start": 48, "end": 78, "text": "Johansson and Moschitti, 2011;", "ref_id": "BIBREF21" }, { "start": 79, "end": 109, "text": "Johansson and Moschitti, 2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "An arc is labeled as OPINION if and only if it", "sec_num": "2." }, { "text": "To evaluate the opinion miner against manual annotations and compare with other systems, we need a procedure to convert back from predicted dependencies to spans. In this work, we used a very simple procedure that we next describe, Figure 1 : Example of an opinion mining graph in our dependency formalism. 
Heads are underlined.", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 240, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Dependency-to-Span Conversion", "sec_num": "3.3" }, { "text": "which assumes the sentence was previously parsed using a syntactic dependency parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-to-Span Conversion", "sec_num": "3.3" }, { "text": "To generate agent and target spans, we compute the largest span containing the head word whose words are all descendants of the head in the dependency parse tree and, at the same time, are not punctuation. To generate opinion spans, we start with the head word and expand the span by adding all neighbouring verbal words. In the case of English, we also allow adverbs, adjectives, modal verbs and the word to, when expanding to the left.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-to-Span Conversion", "sec_num": "3.3" }, { "text": "The application of this simple approach to the gold dependency graphs in the training partition of the MPQA leads to oracle F 1 scores of 86.0%, 95.8% and 93.0% in the reconstruction of opinion, agent and target spans, respectively, according to the proportional scores described in \u00a75.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-to-Span Conversion", "sec_num": "3.3" }, { "text": "One of the advantages of the dependency representation is that we can easily decode opinion-agent-target relations without the need for complicated constrained sequence models or integer programming, as done in prior work (Choi et al., 2006; Yang and Cardie, 2012; Yang and Cardie, 2013) .", "cite_spans": [ { "start": 220, "end": 239, "text": "(Choi et al., 2006;", "ref_id": "BIBREF9" }, { "start": 240, "end": 262, "text": "Yang and Cardie, 2012;", "ref_id": "BIBREF57" }, { "start": 263, "end": 285, "text": "Yang and Cardie, 2013)", "ref_id": "BIBREF58" } ], "ref_spans": [], 
"eq_spans": [], "section": "Arc-Factored Model", "sec_num": "4" }, { "text": "We model dependency-based opinion mining as a structured classification problem. Let x be a sentence and y \u2208 Y(x) a well-formed dependency graph, where Y(x) denotes the set of graphs satisfying the constraints stated in \u00a73. We define a score function that decomposes as a sum of labeled arc scores,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(x, y) = \\\\sum_{a \\\\in y} f_a(x, y_a)", "eq_num": "(1)" } ], "section": "Decoding", "sec_num": "4.1" }, { "text": "where y a is a labeled arc and the sum is over the arcs of the graph y. We use a linear model with weight vector w and local features \u03c6 a (x, y a ):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f_a(x, y_a) = w \\\\cdot \\\\phi_a(x, y_a).", "eq_num": "(2)" } ], "section": "Decoding", "sec_num": "4.1" }, { "text": "For making predictions, we need to compute", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\hat{y} = \\\\arg\\\\max_{y \\\\in Y(x)} f(x, y).", "eq_num": "(3)" } ], "section": "Decoding", "sec_num": "4.1" }, { "text": "Under the assumptions stated in \u00a73, this problem decouples into independent maximization problems (one for each possible opinion word in the sentence). The detailed procedure is as follows, where arcs a can take the form o \u2192 h (opinion to agent) and o \u2192 t (opinion to target). 
For every candidate opinion word o:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "1. Obtain the most compatible agent word, h := arg max h f o\u2192h (x, AGENT);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "2. Obtain the best target word and its polarity, ( t, p) := arg max t,p f o\u2192t (x, TARGET:p);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "3. Compute the total score of this candidate opinion as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "s o := f root\u2192o (x, OPINION) + f o\u2192 h (x, AGENT) + f o\u2192 t (x, TARGET: p).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "Then, if s o \u2265 0, add the arcs root \u2192 o, o \u2192 h, and o \u2192 t to the dependency graph, respectively with labels OPINION, AGENT, and TARGET: p.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "For a sentence with L words, this decoding procedure takes O(L 2 ) time. In practice, we speed up this process by pruning from the candidate list arcs whose connected POS tags were not observed in the training set and whose lengths exceed the maximum observed in the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "4.1" }, { "text": "We now describe our features \u03c6 a , which are computed after processing the sentence to predict POS tags, syntactic dependency trees, lemmas and voice (active or passive) information. For English, we used the Stanford dependency parser (Socher et al., 2013a) for the syntactic annotations, the Porter stemmer to compute word stems, and a set of rules for computing the voice of each word. 
Our Portuguese corpus includes all these preprocessing elements ( \u00a76.3), with the exception of the voice information (features depending on voice were only used for English).", "cite_spans": [ { "start": 235, "end": 257, "text": "(Socher et al., 2013a)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "We also used the Subjectivity Lexicon 6 of Wilson et al. (2005) , which we translated to Portuguese ( \u00a76.3), and a set of negation words (e.g. not, never, nor) and quantity words (e.g. very, much, less) collected for both languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "Our arc-factored features are described below; they are inspired by prior work on dependency parsing (Martins et al., 2013) and fine-grained opinion mining (Breck et al., 2007; Johansson and Moschitti, 2013) .", "cite_spans": [ { "start": 156, "end": 176, "text": "(Breck et al., 2007;", "ref_id": "BIBREF5" }, { "start": 177, "end": 207, "text": "Johansson and Moschitti, 2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "Opinion features. We define a set of features that only look at the opinion word; special symbols are used if the opinion is connected to a root or null node. The features below are also conjoined with the arc label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "\u2022 OPINION WORD. The word itself, the lemma, the POS, and the voice. Conjunction of the word with the POS, and of the lemma with the POS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "\u2022 BIGRAMS. 
Bigrams of words and POS corresponding to the opinion word conjoined with its previous (and next) word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "\u2022 LEXICON (BASIC). Conjunction of the strength and polarity of the opinion word in the Subjectivity Lexicon 6 (e.g., \"weaksubj+neg\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "\u2022 LEXICON (COUNT). Number of subjective words (total, positive and negative) in a sentence, with and without being conjoined with the polarity of the opinion word in the lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "\u2022 LEXICON (CONTEXT). For each word that is in the lexicon and within the 4-word context of the opinion, the form and the polarity of that word in the lexicon, with and without being conjoined with the form and the polarity in the lexicon of the opinion word. Besides the 4-word context, we also used the next/previous word in the sentence which is in the lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "\u2022 NEGATION AND QUANTITY WORDS. Within the 4-word context, features indicating if a word is a negation or quantity word, conjoined with the word itself and the opinion word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "\u2022 SYNTACTIC PATH. The number of words up to the top of the syntactic dependency tree, and the sequence of POS tags in that path.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "Opinion-Argument features. 
For arcs that connect neither to null nor to root, the features above are also conjoined with the binned distance between the two words. For these arcs, we did not use the LEXICON (COUNT)/(CONTEXT) features, but we added features regarding the pair of opinion-argument words (below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "\u2022 OPINION-ARGUMENT WORD PAIR. Several conjunctions of word form, POS, voice and syntactic dependency relations corresponding to the opinion-argument pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "\u2022 OPINION-ARGUMENT SYNTACTIC PATH. The syntactic path from the opinion word to the argument, conjoined with the POS and the dependency relations in the path (in Figure 1 , for the agent \"elites\" headed by \"are\" with relation nsubj, we have: \"VBP\u2193NNS\" and \"nsubj\u2193\").", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 169, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "For arcs that connect neither to null nor to root, we conjoin voice features with the label, distance, and the direction of the arc. For these arcs, we also include back-off features where the polarity information is removed from the (target) labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "In a first set of experiments, we evaluated the performance of our dependency-based model for opinion mining ( \u00a73) on the MPQA English corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English Monolingual Experiments", "sec_num": "5" }, { "text": "We trained arc-factored models by running 25 epochs of max-loss MIRA (Crammer et al., 2006) . 
Our cost function takes into account mismatches between predicted and gold dependencies, with a cost C P on labeled arcs incorrectly predicted (false positives) and a cost C R = 1 \u2212 C P on missed gold labeled arcs (false negatives). The cost C P , the regularization constant, and the number of epochs were tuned on the development set.", "cite_spans": [ { "start": 69, "end": 91, "text": "(Crammer et al., 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "5.1" }, { "text": "Opinion spans (Op.) are evaluated with F 1 scores, according to two matching criteria commonly used in the literature: overlap matching (OM), where a predicted span is counted as correct if it overlaps a gold one, and proportional matching (PM), proposed by Johansson and Moschitti (2010) . For the latter, we use the following formula for the recall, where we consider the sets of gold (G) and predicted (P) opinion spans: 7", "cite_spans": [ { "start": 258, "end": 288, "text": "Johansson and Moschitti (2010)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R(G, P) = \\frac{1}{|P|} \\sum_{p \\in P} \\max_{g \\in G} \\frac{|g \\cap p|}{|p|} ;", "eq_num": "(4)" } ], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "the precision is P(G, P) = R(P, G). We also report metrics based on a head matching (HM) criterion, where a predicted span is considered correct if its syntactic head matches the head of the gold span. We consider that a pair opinion-agent (Op-Ag.) or opinion-target (Op-Tg.) is correctly extracted according to the OM or the HM criteria if both elements satisfy these criteria and the relation holds in the gold data.
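The cost-sensitive loss used for training ( \u00a75.1) can be sketched as below; the function name and the (head, modifier, label) arc representation are illustrative assumptions, not the paper's actual code:

```python
def weighted_arc_cost(predicted, gold, c_p):
    """Sketch of the cost used in max-loss MIRA training: false positives
    (predicted labeled arcs absent from the gold set) cost c_p each, and
    false negatives (missed gold arcs) cost c_r = 1 - c_p each.
    `predicted` and `gold` are sets of (head, modifier, label) triples."""
    c_r = 1.0 - c_p
    false_positives = len(predicted - gold)  # arcs predicted but not gold
    false_negatives = len(gold - predicted)  # gold arcs that were missed
    return c_p * false_positives + c_r * false_negatives
```

With this parameterization, a single constant C P trades off precision against recall, which is why it is tuned on the development set.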
We also compute the metric described in Johansson and Moschitti (2010) , which measures how well opinion arguments are predicted based on a proportional matching (PM) criterion; we apply it to the extraction of both agents and targets. Finally, to evaluate the opinions' polarities (Op-Pol. metric), we consider an opinion correct only when both its span and its polarity match the gold ones.", "cite_spans": [ { "start": 465, "end": 495, "text": "Johansson and Moschitti (2010)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "We assess the quality of our monolingual dependency-based model by comparing it to the recent state-of-the-art approach of Johansson and Moschitti (2013) , whose code is available online. 8 That paper reports the performance of a basic span-based pipeline system (which extracts opinions with a CRF, followed by two separate classifiers to detect polarities and agents), and of a more sophisticated system that applies a reranking procedure to account for more complex features that consider interactions across opinion elements.", "cite_spans": [ { "start": 123, "end": 153, "text": "Johansson and Moschitti (2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Results: Dependency-Based Model", "sec_num": "5.3" }, { "text": "We ran experiments using the same data and MPQA partitions as Johansson and Moschitti (2013) . However, since our system is designed for predicting opinions, agents, and targets together, we removed the documents that were not annotated with targets. The final train/development/test sets have a total of 6,774/1,404/2,559 sentences and 3,834/881/1,426 opinions, respectively. Table 1 reports the results; since the systems of Johansson and Moschitti (2013) do not predict targets, Table 1 omits target scores.
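The proportional matching scores of Eq. (4) can be sketched as follows, representing each span as a set of token positions (an illustrative choice):

```python
def pm_recall(gold_spans, pred_spans):
    """Proportional-matching recall as in Eq. (4): each predicted span
    contributes its best proportional overlap with any gold span, so no
    single span can contribute more than 1. Spans are sets of positions."""
    if not pred_spans:
        return 0.0
    total = sum(max((len(g & p) / len(p) for g in gold_spans), default=0.0)
                for p in pred_spans)
    return total / len(pred_spans)

def pm_precision(gold_spans, pred_spans):
    # The paper defines precision by swapping the arguments: P(G, P) = R(P, G).
    return pm_recall(pred_spans, gold_spans)
```

Taking the max (rather than summing over all gold spans) is exactly the fix discussed in footnote 7.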
9 We observe that our dependency-based system achieves results competitive with the best results of Johansson and Moschitti (2013) and clearly above the ones reached by their basic system that does not use re-ranking features. Though the two systems are not fully comparable, 10 the results in Table 1 show that our dependency-based approach ( \u00a73.2) followed by a simple dependency-to-span conversion ( \u00a73.3) is, despite its simplicity, on par with a top-performing opinion mining system. We conjecture that this is due to the ability to extract opinions, agents, and targets jointly using exact decoding. Note that our proposed dependency scheme would also be able to include additional global features relating pairs of opinions (by adding scores to pairs of opinion arcs) or two opinions having the same agent (by adding scores to pairs of agent arcs sharing its argument), similar to the reranking features used by Johansson and Moschitti (2013) . Similar second-order scores have been used in syntactic and semantic dependency parsing (Martins et al., 2013; , but with an increase in the complexity of the model and of the decoder.", "cite_spans": [ { "start": 62, "end": 92, "text": "Johansson and Moschitti (2013)", "ref_id": "BIBREF22" }, { "start": 509, "end": 510, "text": "9", "ref_id": null }, { "start": 1428, "end": 1458, "text": "Johansson and Moschitti (2013)", "ref_id": "BIBREF22" }, { "start": 1549, "end": 1571, "text": "(Martins et al., 2013;", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 375, "end": 382, "text": "Table 1", "ref_id": null }, { "start": 803, "end": 810, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results: Dependency-Based Model", "sec_num": "5.3" }, { "text": "We now turn to the problem of learning an opinion mining system for a resource-poor language (Portuguese), in a cross-lingual manner.
We use a bitext projection approach ( \u00a76.1), whose only requirements are a model for a resource-rich language (English) and parallel data ( \u00a76.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Opinion Mining", "sec_num": "6" }, { "text": "Our methodology is outlined as Algorithm 1. For simplicity, we call the source and target languages English (e) and \"foreign\" (f ), respectively. The procedure is inspired by the idea of bitext projection (Yarowsky and Ngai, 2001) . We start by training an English system on the labeled data L e (line 1), which in our case is the MPQA v.2.0 corpus. This system is then used to label the English side of the parallel data, automatically identifying opinion frames (line 2). The next step is to run a word aligner on the parallel data (line 3). The automatic alignments are then used to project the opinion frames to the target language (along with some filtering), yielding an automatic corpus D (f ) (line 4), which finally serves to train a system for the target language (line 5).", "cite_spans": [ { "start": 205, "end": 230, "text": "(Yarowsky and Ngai, 2001)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Bitext Projection", "sec_num": "6.1" }, { "text": "We use an English-Portuguese parallel corpus based on the scientific news Brazilian magazine Revista Pesquisa FAPESP, collected by Aziz and has access not only to direct subjective spans but also to subjective expressions annotations with their agents and polarity information. Table 1 : Method comparison: F 1 scores obtained in the MPQA corpus, for our dependency based method and the approaches in Johansson and Moschitti (2013) , with and without reranking. The symbol * indicates that the best system beats the other systems with statistical significance, with p < 0.05 and according to a bootstrap resampling test (Koehn, 2004) .
Algorithm 1 Cross-Lingual Opinion Mining Input: Labeled data L e , parallel data D e and D f . Output: Target opinion mining system S f . 1: S e \u2190 LEARNOPINIONMINER(L e ) 2: D e \u2190 RUNOPINIONMINER(S e , D e ) 3:", "cite_spans": [ { "start": 432, "end": 462, "text": "Johansson and Moschitti (2013)", "ref_id": "BIBREF22" }, { "start": 651, "end": 664, "text": "(Koehn, 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Parallel Data", "sec_num": "6.2" }, { "text": "D e\u2194f \u2190 RUNWORDALIGNER(D e , D f ) 4: D f \u2190 PROJECTANDFILTER(D e\u2194f , D e ) 5: S f \u2190 LEARNOPINIONMINER( D f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data", "sec_num": "6.2" }, { "text": "Specia (2011). Though this corpus is in Brazilian Portuguese (while our validation corpus is in European Portuguese), we preferred FAPESP over other commonly used parallel corpora (such as the Europarl and UN datasets), since it is closer to our newswire target domain, with a smaller prominence of direct speech. We computed word alignments using the Berkeley aligner , intersected them and filtered out all the alignments whose confidence is below 0.95. After annotating the English side of FAPESP with the pre-trained system ( D e in Algorithm 1, with a total of 166,719 sentences and 81,492 opinions), the high confidence alignments (D e\u2194f ) are used to project the annotations to the Portuguese side of the corpus. 
The automatic annotations produced by our dependency-based system are easily transferred at a word level (for words with high confidence alignments), as illustrated in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 888, "end": 896, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Parallel Data", "sec_num": "6.2" }, { "text": "To improve the quality of the resulting corpus, we excluded sentences whose alignments cover less than 70% of the words in the target side of the corpus, or sentences whose opinion elements were not fully projected through high confidence alignments. At this point, we obtain an automatically annotated corpus in Portuguese ( D f ), with 106,064 sentences and 32,817 opinions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data", "sec_num": "6.2" }, { "text": "For validation purposes, we also created a Portuguese corpus with manually annotated finegrained opinions. The corpus consists of a subset of the documents of the Priberam Compressive Summarization Corpus 11 , which contains 80 news topics with 10 documents each, collected from several Portuguese newspapers, TV and radio websites in the biennia 2010-2011 and 2012-2013. In the scope of the current work, we selected and annotated one document of each of the 80 topics. The first biennium was selected as the test set and the second biennium was split into development and training sets (see Ta Table 3 : Inter-annotator agreement in the test partition (shown are F 1 scores).", "cite_spans": [], "ref_spans": [ { "start": 593, "end": 595, "text": "Ta", "ref_id": null }, { "start": 596, "end": 603, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Portuguese Opinion Mining Corpus", "sec_num": "6.3" }, { "text": "The corpus was annotated in a similar vein as the MPQA , with the addition of the head node for each element of the opinion frame. 
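The word-level projection step can be sketched as below; the arc representation and the alignment format (source index mapped to a (target index, confidence) pair) are illustrative assumptions, not the paper's actual data structures:

```python
def project_arcs(source_arcs, alignment, min_conf=0.95):
    """Project opinion dependencies through high-confidence word alignments:
    an arc (head, modifier, label) over source-token indices is kept only
    when both endpoints are aligned with confidence >= min_conf."""
    projected = []
    for head, mod, label in source_arcs:
        if head in alignment and mod in alignment:
            h_tgt, h_conf = alignment[head]
            m_tgt, m_conf = alignment[mod]
            if h_conf >= min_conf and m_conf >= min_conf:
                projected.append((h_tgt, m_tgt, label))
    return projected
```

Sentences in which an opinion element cannot be fully projected this way are discarded in the filtering step.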
It includes spans for direct-subjective expressions with intensity and polarity information; agent spans; and target spans. The annotation was carried out by three linguists, after reading the MPQA annotation guidelines Wilson, 2008) and having a small practice period using the provided examples and some MPQA annotated sentences. Each document was annotated by two of the three linguists and then revised by the third linguist, who (in case of any doubts) discussed with the initial annotators to reach a final consensus. Scores for inter-annotator agreement are shown in Table 3 .", "cite_spans": [ { "start": 351, "end": 364, "text": "Wilson, 2008)", "ref_id": "BIBREF55" } ], "ref_spans": [ { "start": 711, "end": 718, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Portuguese Opinion Mining Corpus", "sec_num": "6.3" }, { "text": "The corpus was annotated with automatic POS tags and dependency parse trees using TurboParser (Martins et al., 2013) . 12 We used an in-house lemmatizer to obtain lemmas for each inflected word in the corpus. A Portuguese lexicon of subjectivity was created by translating the words in the Subjectivity Lexicon of . The annotated corpus and the translated subjectivity lexicon are available at http://labs.priberam.com/Resources/Fine-Grained-Opinion-Corpus, and http://labs.priberam.com/Resources/Subjectivity-Lexicon-PT, respectively. Table 4 : F 1 scores obtained in English (MPQA), for our full system and the DELEXICALIZED one.", "cite_spans": [ { "start": 94, "end": 116, "text": "(Martins et al., 2013)", "ref_id": "BIBREF30" }, { "start": 119, "end": 121, "text": "12", "ref_id": null } ], "ref_spans": [ { "start": 538, "end": 545, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Portuguese Opinion Mining Corpus", "sec_num": "6.3" }, { "text": "In a final set of experiments, we compare three systems of fine-grained opinion mining for Portuguese.
All were trained as described in \u00a75.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Experiments", "sec_num": "7" }, { "text": "Baseline #1: Supervised System. A SUPERVISED system was trained on the small Portuguese training set described in \u00a76.3. Though small, this is, to the best of our knowledge, the only existing corpus with fine-grained opinions in Portuguese. We used the same arc-factored model and features described in \u00a74.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "7.1" }, { "text": "Baseline #2: Delexicalized System with Bilingual Embeddings. This baseline consists of a direct model transfer: a DELEXICALIZED system is trained in the source language, without language specific features, so that it can be directly applied to the target language. Despite its simplicity, this strategy managed to provide a fairly strong baseline in several NLP tasks (Zeman and Resnik, 2008; McDonald et al., 2011; S\u00f8gaard, 2011) .", "cite_spans": [ { "start": 368, "end": 392, "text": "(Zeman and Resnik, 2008;", "ref_id": "BIBREF62" }, { "start": 393, "end": 415, "text": "McDonald et al., 2011;", "ref_id": "BIBREF32" }, { "start": 416, "end": 430, "text": "S\u00f8gaard, 2011)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "7.1" }, { "text": "To achieve a unified feature representation, we mapped all language-specific POS tags to universal tags (Petrov et al., 2012) , and removed all features that depend on the dependency relations, while keeping those based on the syntactic path. In addition, we replaced the lexical features by 128-dimensional cross-lingual word embeddings. 13 To obtain these bilingual neural embeddings, we ran the method of Hermann and Blunsom (2014) on the parallel data ( \u00a76.1).
We scaled the embeddings by a factor of 2.0 (selected on the dev-set), following the procedure described in Turian et al. (2010) .", "cite_spans": [ { "start": 104, "end": 125, "text": "(Petrov et al., 2012)", "ref_id": "BIBREF37" }, { "start": 391, "end": 393, "text": "13", "ref_id": null }, { "start": 625, "end": 645, "text": "Turian et al. (2010)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "7.1" }, { "text": "We trained the English delexicalized system on the MPQA corpus, using the same test documents Table 5 : Comparison of cross-lingual approaches. F 1 scores obtained in our Portuguese validation corpus using: a SUPERVISED system trained on the small available data, a DELEXICALIZED system trained with universal POS tags and multilingual embeddings and our BITEXT PROJECTION OF DEPENDENCIES. The symbol * indicates that the best system beats the other systems with statistical significance, with p < 0.05 and according to a bootstrap resampling test (Koehn, 2004) .", "cite_spans": [ { "start": 548, "end": 561, "text": "(Koehn, 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 94, "end": 101, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "System Description", "sec_num": "7.1" }, { "text": "as Riloff and Wiebe (2003) and whose list is available with the corpus, but selecting only documents annotated with targets. We randomly split the remaining documents into train and development sets, respectively with a total of 6,471 and 782 sentences. 14 Table 4 shows the performance of the delexicalized baseline in English, compared with a lexicalized system. 
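A minimal sketch of the delexicalization step described above, assuming a toy tag mapping and an illustrative helper function (neither taken from the actual implementation):

```python
# Toy fragment of a Penn-Treebank-to-universal-POS mapping (illustrative).
UNIVERSAL_TAG = {"NN": "NOUN", "NNS": "NOUN", "VBP": "VERB", "JJ": "ADJ"}

def delexicalize(pos_tag, embedding, scale=2.0):
    """Replace a language-specific POS tag by its universal tag, and scale
    the word's cross-lingual embedding by a factor tuned on the dev set
    (2.0 in the paper's setting)."""
    universal = UNIVERSAL_TAG.get(pos_tag, "X")
    return universal, [scale * x for x in embedding]
```

Because both languages share the universal tagset and the bilingual embedding space, a model trained on such features in English can be applied directly to Portuguese.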
We will see how this model behaves in a cross-lingual setting in \u00a77.2.", "cite_spans": [ { "start": 3, "end": 26, "text": "Riloff and Wiebe (2003)", "ref_id": "BIBREF39" }, { "start": 254, "end": 256, "text": "14", "ref_id": null } ], "ref_spans": [ { "start": 257, "end": 264, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "System Description", "sec_num": "7.1" }, { "text": "Our System: Bitext Projection of Opinion Dependencies. Finally, we implemented our crosslingual BITEXT approach ( \u00a76). We trained the (lexicalized) English model on the MPQA corpus (the performance of this model is shown in Table 4). Then, we ran this model on the English side of the parallel corpus, generating automatic annotations, and projected these annotations to the Portuguese side, as described in \u00a76.2. Finally, a Portuguese model was trained on these projected annotations using the arc-factored model and features described in \u00a74. Table 5 shows the F 1 scores obtained by the three systems on the Portuguese test partition. We observe that the BITEXT approach outperformed the SUPERVISED and the DELEXICALIZED ones in all metrics with a considerable margin, which shows the effectiveness of our proposed method. The SUPERVISED system suffers from the fact that the training set is too small to allow good generalization; the bitext projection method, in contrast, can create arbitrarily large training corpora without any annotation effort. The performance of 14 Note that this split is different from the one we used in \u00a75. 
There we used the same split as Johansson and Moschitti (2013) , for a fair comparison with their system; here, we follow the standard MPQA test partition.", "cite_spans": [ { "start": 1170, "end": 1200, "text": "Johansson and Moschitti (2013)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 544, "end": 551, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "System Description", "sec_num": "7.1" }, { "text": "the DELEXICALIZED system is rather disappointing. This result is justified by a decrease of performance in English due to the delexicalization (cf. Table 4), followed by an extra loss of quality due to language differences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison", "sec_num": "7.2" }, { "text": "Though our BITEXT approach scores the best, the scores are behind the range of values obtained for English (Table 4) , and far from the interannotator agreement numbers (Table 3) , suggesting room for improvement. The polarity scores in Table 5 appear to be relatively low. This fact is probably be justified with the annotator agreement scores (Table 3) which are considerably lower for these metrics.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 116, "text": "(Table 4)", "ref_id": null }, { "start": 169, "end": 178, "text": "(Table 3)", "ref_id": null }, { "start": 237, "end": 244, "text": "Table 5", "ref_id": null }, { "start": 345, "end": 354, "text": "(Table 3)", "ref_id": null } ], "eq_spans": [], "section": "Comparison", "sec_num": "7.2" }, { "text": "We presented a cross-lingual framework for finegrained opinion mining. We used a bitext projection technique to transfer dependency-based opinion frames from English to Portuguese. 
Experimentally, our dependency model achieved state-of-the-art results for English, and the Portuguese system trained with bitext projection outperformed two baselines: a supervised system trained on a small dataset, and a delexicalized model with bilingual word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "http://mpqa.cs.pitt.edu/corpora/mpqa_ corpus.2 Besides English, monolingual systems have also been developed for Chinese and Japanese(Seki et al., 2007), German(Clematide et al., 2012) and Bengali(Das and Bandyopadhyay, 2010).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Portuguese corpus and the lexicon are available at http://labs.priberam.com/Resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "NTCIR-8 had a cross-lingual track but in a very different sense: there, queries and documents are in different languages; in contrast, we transfer a model accross languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Even though this assumption is not always met in practice, it is typical in MPQA (only 10% of the opinions have multiple agents, typically coreferent; and only 13% have multiple targets). When multiple agents or targets exist, we keep the ones that are closest to the opinion expression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://mpqa.cs.pitt.edu/lexicons/ subj_lexicon/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This metric is slightly different from the PM metric of Johansson and Moschitti (2010), in which recall was computed as R(G, P) = p\u2208P g\u2208G |g\u2229p|/|p| |P|. 
The reason why we replace the \"sum\" by a \"max\" is that each predicted span p in (4) could contribute to the recall with a value greater than 1. Since most of the predicted spans only overlap a single gold span, this fix has a very small effect on the final scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://demo.spraakdata.gu.se/richard/unitn_opinion/details.html 9 We will report target scores later in \u00a77. 10 Our system makes use of target annotations to predict the opinion frames, while Johansson and Moschitti (2013)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://labs.priberam.com/Resources/PCSC", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A delexicalized system trained without the word embeddings had a worse performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers for their insightful comments, and Richard Johansson for sharing his code and for answering several questions. This work was partially supported by the EU/FEDER programme, QREN/POR Lisboa (Portugal), under the Intelligo project (contract 2012/24803) and by FCT grants UID/EEA/50008/2013 and PTDC/EEI-SII/2312/2012.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "9" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Emotions from text: machine learning for text-based emotion prediction", "authors": [ { "first": "Cecilia", "middle": [], "last": "Ovesdotter Alm", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2005, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cecilia Ovesdotter
Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: machine learning for text- based emotion prediction. In EMNLP.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Priberam compressive summarization corpus: A new multi-document summarization corpus for european portuguese", "authors": [ { "first": "Miguel", "middle": [ "B" ], "last": "Almeida", "suffix": "" }, { "first": "Mariana", "middle": [ "S C" ], "last": "Almeida", "suffix": "" }, { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "Helena", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Figueira", "suffix": "" }, { "first": "Cl\u00e1udia", "middle": [], "last": "Mendes", "suffix": "" }, { "first": "", "middle": [], "last": "Pinto", "suffix": "" } ], "year": 2014, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miguel B. Almeida, Mariana S. C. Almeida, Andr\u00e9 F. T. Mar- tins, Helena Figueira, Pedro Mendes, and Cl\u00e1udia Pinto. 2014. Priberam compressive summarization corpus: A new multi-document summarization corpus for european portuguese. In LREC.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Fully automatic compilation of a Portuguese-English parallel corpus for statistical machine translation", "authors": [ { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2011, "venue": "STIL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilker Aziz and Lucia Specia. 2011. Fully automatic com- pilation of a Portuguese-English parallel corpus for statis- tical machine translation. 
In STIL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Why are they excited?: Identifying and explaining spikes in blog mood levels", "authors": [ { "first": "Krisztian", "middle": [], "last": "Balog", "suffix": "" }, { "first": "Gilad", "middle": [], "last": "Mishne", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2006, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krisztian Balog, Gilad Mishne, and Maarten de Rijke. 2006. Why are they excited?: Identifying and explaining spikes in blog mood levels. In EACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multilingual subjectivity analysis using machine translation", "authors": [ { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Samer", "middle": [], "last": "Hassan", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen Banea, Rada Mihalcea, Janyce Wiebe, and Samer Hassan. 2008. Multilingual subjectivity analysis using machine translation. In EMNLP.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Identifying expressions of opinion in context", "authors": [ { "first": "Eric", "middle": [], "last": "Breck", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2007, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. 
In IJCAI.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "CoNLL-X shared task on multilingual dependency parsing", "authors": [ { "first": "Sabine", "middle": [], "last": "Buchholz", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In CoNLL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hierarchical sequential learning for extracting opinions and their attributes", "authors": [ { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yejin Choi and Claire Cardie. 2010. Hierarchical sequential learning for extracting opinions and their attributes. In ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Identifying sources of opinions with conditional random fields and extraction patterns", "authors": [ { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" } ], "year": 2005, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Pat- wardhan. 2005. Identifying sources of opinions with con- ditional random fields and extraction patterns. 
In EMNLP.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Joint extraction of entities and relations for opinion recognition", "authors": [ { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Breck", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2006, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yejin Choi, Eric Breck, and Claire Cardie. 2006. Joint ex- traction of entities and relations for opinion recognition. In EMNLP.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "MLSA A Multilayered Reference Corpus for German Sentiment Analysis", "authors": [ { "first": "Simon", "middle": [], "last": "Clematide", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Gindl", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Klenner", "suffix": "" }, { "first": "Stefanos", "middle": [], "last": "Petrakis", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Remus", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Ulli", "middle": [], "last": "Waltinger", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Clematide, Stefan Gindl, Manfred Klenner, Ste- fanos Petrakis, Robert Remus, Josef Ruppenhofer, Ulli Waltinger, and Michael Wiegand. 2012. MLSA A Multi- layered Reference Corpus for German Sentiment Analy- sis. 
In LREC.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Shai Shalev-Shwartz, and Yoram Singer", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Keshet", "suffix": "" } ], "year": 2006, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online Passive- Aggressive Algorithms. Journal of Machine Learning Re- search.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Labeling emotion in bengali blog corpus a fine grained tagging at sentence level", "authors": [ { "first": "Dipankar", "middle": [], "last": "Das", "suffix": "" }, { "first": "Sivaji", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2010, "venue": "(ALR8), COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dipankar Das and Sivaji Bandyopadhyay. 2010. Labeling emotion in bengali blog corpus a fine grained tagging at sentence level. In (ALR8), COLING.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A holistic lexicon-based approach to opinion mining", "authors": [ { "first": "Xiaowen", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Philip S", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "WSDM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaowen Ding, Bing Liu, and Philip S Yu. 2008. A holistic lexicon-based approach to opinion mining. 
In WSDM.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Easy victories and uphill battles in coreference resolution", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In EMNLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A cross-lingual approach for opinion holder extraction", "authors": [ { "first": "Lin", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Ruifeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chenxiang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "Journal of Computational Information Systems", "volume": "", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin Gui, Ruifeng Xu, Jun Xu, and Chenxiang Liu. 2013. A cross-lingual approach for opinion holder extraction. Journal of Computational Information Systems, 9(6).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multilingual Models for Compositional Distributional Semantics", "authors": [ { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual Models for Compositional Distributional Semantics.
In ACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Mining opinion features in customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining opinion features in customer reviews. In AAAI.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Bootstrapping parsers via syntactic projection across parallel texts", "authors": [ { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Weinberg", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Cabezas", "suffix": "" }, { "first": "Okan", "middle": [], "last": "Kolak", "suffix": "" } ], "year": 2005, "venue": "Natural Language Engineering", "volume": "", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Opinion mining with deep recurrent neural networks", "authors": [ { "first": "Ozan", "middle": [], "last": "\u0130rsoy", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ozan \u0130rsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks.
In EMNLP.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Reranking models in fine-grained opinion analysis", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2010, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Alessandro Moschitti. 2010. Reranking models in fine-grained opinion analysis. In COLING.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Extracting opinion expressions and their polarities: exploration of pipelines and joint models", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2011, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Alessandro Moschitti. 2011. Extracting opinion expressions and their polarities: exploration of pipelines and joint models. In ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Relational features in fine-grained opinion analysis", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Alessandro Moschitti. 2013. Relational features in fine-grained opinion analysis.
Computational Linguistics, 39(3).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Extracting opinions, opinion holders, and topics expressed in online news media text", "authors": [ { "first": "Soo-Min", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2006, "venue": "SST", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soo-Min Kim and Eduard Hovy. 2006. Extracting opinions, opinion holders, and topics expressed in online news media text. In SST.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn. 2004. Statistical significance tests for machine translation evaluation. In ACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Opinion extraction, summarization and tracking in news and blog corpora", "authors": [ { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Yu-Ting", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2006, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lun-Wei Ku, Yu-Ting Liang, and Hsin-Hsi Chen. 2006. Opinion extraction, summarization and tracking in news and blog corpora.
In AAAI.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Alignment by agreement", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In NAACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Sentiment analysis and opinion mining", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2012, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "5", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Joint bilingual sentiment classification with unlabeled parallel corpora", "authors": [ { "first": "Bin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Chenhao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Benjamin", "middle": [ "K" ], "last": "Tsou", "suffix": "" } ], "year": 2011, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bin Lu, Chenhao Tan, Claire Cardie, and Benjamin K. Tsou. 2011. Joint bilingual sentiment classification with unlabeled parallel corpora.
In ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Priberam: A turbo semantic parser with second order features", "authors": [ { "first": "Andr\u00e9", "middle": [ "F", "T" ], "last": "Martins", "suffix": "" }, { "first": "M", "middle": [ "S", "C" ], "last": "Almeida", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 F. T. Martins and M. S. C. Almeida. 2014. Priberam: A turbo semantic parser with second order features. In SemEval.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Turning on the turbo: Fast third-order non-projective turbo parsers", "authors": [ { "first": "Andr\u00e9", "middle": [ "F", "T" ], "last": "Martins", "suffix": "" }, { "first": "Miguel", "middle": [ "B" ], "last": "Almeida", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 F. T. Martins, Miguel B. Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers. In ACL.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Transferring coreference resolvers with posterior regularization", "authors": [ { "first": "Andr\u00e9", "middle": [ "F", "T" ], "last": "Martins", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 F. T. Martins. 2015. Transferring coreference resolvers with posterior regularization.
In ACL.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Multi-source transfer of delexicalized dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In EMNLP.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Learning multilingual subjective language via cross-lingual projections", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2007, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea, Carmen Banea, and Janyce Wiebe. 2007. Learning multilingual subjective language via cross-lingual projections. In ACL.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Cross-lingual annotation projection for semantic roles", "authors": [ { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2009, "venue": "Journal of Artificial Intelligence Research", "volume": "36", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2009. Cross-lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36(1).", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Opinion mining and sentiment analysis", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "Foundations and Trends in Information Retrieval", "volume": "2", "issue": "1-2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Thumbs up?: Sentiment classification using machine learning techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In EMNLP.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A universal part-of-speech tagset", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset.
In LREC.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Cross-language text classification using structural correspondence learning", "authors": [ { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Prettenhofer and Benno Stein. 2010. Cross-language text classification using structural correspondence learning. In ACL.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Learning extraction patterns for subjective expressions", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2003, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In EMNLP.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Overview of opinion analysis pilot task at NTCIR-6", "authors": [ { "first": "Yohei", "middle": [], "last": "Seki", "suffix": "" }, { "first": "David", "middle": [ "Kirk" ], "last": "Evans", "suffix": "" }, { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Kando", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yohei Seki, David Kirk Evans, Lun-Wei Ku, Hsin-Hsi Chen, Noriko Kando, and Chin-Yew Lin. 2007. Overview of opinion analysis pilot task at NTCIR-6.
In NTCIR-6.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Overview of opinion analysis pilot task at NTCIR-8: A Step Toward Cross Lingual Opinion Analysis", "authors": [ { "first": "Yohei", "middle": [], "last": "Seki", "suffix": "" }, { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Kando", "suffix": "" } ], "year": 2010, "venue": "NTCIR-8", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yohei Seki, Lun-Wei Ku, Le Sun, Hsin-Hsi Chen, and Noriko Kando. 2010. Overview of opinion analysis pilot task at NTCIR-8: A Step Toward Cross Lingual Opinion Analysis. In NTCIR-8.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Parsing with compositional vector grammars", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013a. Parsing with compositional vector grammars.
In ACL.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [ "Y" ], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Data point selection for cross-language adaptation of dependency parsers", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2011, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard. 2011. Data point selection for cross-language adaptation of dependency parsers. In ACL.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Shared Task on Joint Parsing of Syntactic and Semantic Dependencies", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shared Task on Joint Parsing of Syntactic and Semantic Dependencies.
In CoNLL.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Token and type constraints for cross-lingual part-of-speech tagging", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "Trans. of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Trans. of the Association for Computational Linguistics.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Word representations: a simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In ACL.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2002, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney. 2002.
Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews. In ACL.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Co-training for cross-lingual sentiment classification", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2009, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In ACL.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Cross-lingual projected expectation regularization for weakly supervised learning", "authors": [ { "first": "Mengqiu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Trans. of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengqiu Wang and Chris Manning. 2014. Cross-lingual projected expectation regularization for weakly supervised learning. Trans. of the Association for Computational Linguistics, 2.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Cross lingual adaptation: an experiment on sentiment classifications", "authors": [ { "first": "Bin", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Pal", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bin Wei and Christopher Pal. 2010. Cross lingual adaptation: an experiment on sentiment classifications.
In ACL.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Annotating expressions of opinions and emotions in language", "authors": [ { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2005, "venue": "Language Resources and Evaluation", "volume": "39", "issue": "2-3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3).", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Recognizing contextual polarity in phrase-level sentiment analysis", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2005, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In EMNLP.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Fine-Grained Subjectivity Analysis", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson. 2008. Fine-Grained Subjectivity Analysis. Ph.D.
thesis, University of Pittsburgh.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Structural opinion mining for graph-based sentiment representation", "authors": [ { "first": "Yuanbin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lide", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2011. Structural opinion mining for graph-based sentiment representation. In EMNLP.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Extracting opinion expressions with semi-markov conditional random fields", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2012, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishan Yang and Claire Cardie. 2012. Extracting opinion expressions with semi-markov conditional random fields. In EMNLP.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Joint inference for fine-grained opinion extraction", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2013, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction.
In ACL.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Joint modeling of opinion expression extraction and attribute classification", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2014, "venue": "Trans. of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishan Yang and Claire Cardie. 2014. Joint modeling of opinion expression extraction and attribute classification. Trans. of the Association for Computational Linguistics.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" } ], "year": 2001, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky and Grace Ngai. 2001. Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora. In NAACL.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences", "authors": [ { "first": "Hong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" } ], "year": 2003, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences.
In EMNLP.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Cross-language parser adaptation between related languages", "authors": [ { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Zeman and Philip Resnik. 2008. Cross-language parser adaptation between related languages. In IJCNLP.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Excerpt of a bitext document from FAPESP, with automatic opinion dependencies. The annotations are directly projected to Portuguese via automatic word alignments.", "type_str": "figure" }, "TABREF2": { "html": null, "type_str": "table", "num": null, "text": "Number of documents, sentences and opinions in the Portuguese Corpus.", "content": "
        HM    PM    OM
Op.     77.0  76.7  79.2
Op-Ag.  69.1  72.3  73.5
Op-Tg.  61.9  65.4  71.4
Op-Pol. 49.4  49.1  50.7
" }, "TABREF3": { "html": null, "type_str": "table", "num": null, "text": "12 http://www.ark.cs.cmu.edu/TurboParser
         OUR SYSTEM        DELEXICALIZED
         HM   PM   OM      HM   PM   OM
Op.      65.7 63.5 69.8    50.1 45.8 52.7
Op-Ag.   47.6 48.8 51.1    33.8 34.8 35.7
Op-Tg.   34.9 44.8 50.3    19.9 28.0 32.1
Op-Pol.  51.5 50.2 54.4    36.7 34.7 38.8", "content": "" } } } }