{ "paper_id": "P11-1033", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:49:12.644960Z" }, "title": "Joint Bilingual Sentiment Classification with Unlabeled Parallel Corpora", "authors": [ { "first": "Bin", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "City University of Hong Kong", "location": { "settlement": "Hong Kong" } }, "email": "" }, { "first": "Chenhao", "middle": [], "last": "Tan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cornell University", "location": { "settlement": "Ithaca", "region": "NY", "country": "USA" } }, "email": "chenhao@cs.cornell.edu" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cornell University", "location": { "settlement": "Ithaca", "region": "NY", "country": "USA" } }, "email": "cardie@cs.cornell.edu" }, { "first": "Benjamin", "middle": [ "K" ], "last": "Tsou", "suffix": "", "affiliation": { "laboratory": "", "institution": "City University of Hong Kong", "location": { "settlement": "Hong Kong" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Most previous work on multilingual sentiment analysis has focused on methods to adapt sentiment resources from resource-rich languages to resource-poor languages. We present a novel approach for joint bilingual sentiment classification at the sentence level that augments available labeled data in each language with unlabeled parallel data. We rely on the intuition that the sentiment labels for parallel sentences should be similar and present a model that jointly learns improved monolingual sentiment classifiers for each language. Experiments on multiple data sets show that the proposed approach (1) outperforms the monolingual baselines, significantly improving the accuracy for both languages by 3.44%-8.12%; (2) outperforms two standard approaches for leveraging unlabeled data; and (3) produces (albeit smaller) performance gains when employing pseudo-parallel data from machine translation engines.", "pdf_parse": { "paper_id": "P11-1033", "_pdf_hash": "", "abstract": [ { "text": "Most previous work on multilingual sentiment analysis has focused on methods to adapt sentiment resources from resource-rich languages to resource-poor languages. We present a novel approach for joint bilingual sentiment classification at the sentence level that augments available labeled data in each language with unlabeled parallel data. We rely on the intuition that the sentiment labels for parallel sentences should be similar and present a model that jointly learns improved monolingual sentiment classifiers for each language. Experiments on multiple data sets show that the proposed approach (1) outperforms the monolingual baselines, significantly improving the accuracy for both languages by 3.44%-8.12%; (2) outperforms two standard approaches for leveraging unlabeled data; and (3) produces (albeit smaller) performance gains when employing pseudo-parallel data from machine translation engines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The field of sentiment analysis has quickly attracted the attention of researchers and practitioners alike (e.g. Pang et al., 2002; Turney, 2002; Hu and Liu, 2004; Wiebe et al., 2005; Breck et al., 2007; Pang and Lee, 2008) . 1 Indeed, sentiment analysis systems, which mine opinions from textual sources (e.g. 
news, blogs, and reviews), can be used in a wide variety of applications, including interpreting product reviews, opinion retrieval and political polling.", "cite_spans": [ { "start": 113, "end": 131, "text": "Pang et al., 2002;", "ref_id": "BIBREF25" }, { "start": 132, "end": 145, "text": "Turney, 2002;", "ref_id": "BIBREF34" }, { "start": 146, "end": 163, "text": "Hu and Liu, 2004;", "ref_id": "BIBREF12" }, { "start": 164, "end": 183, "text": "Wiebe et al., 2005;", "ref_id": "BIBREF37" }, { "start": 184, "end": 203, "text": "Breck et al., 2007;", "ref_id": "BIBREF7" }, { "start": 204, "end": 223, "text": "Pang and Lee, 2008)", "ref_id": "BIBREF24" }, { "start": 226, "end": 227, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Not surprisingly, most methods for sentiment classification are supervised learning techniques, which require training data annotated with the appropriate sentiment labels (e.g. document-level or sentence-level positive vs. negative polarity). This data is difficult and costly to obtain, and must be acquired separately for each language under consideration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work in multilingual sentiment analysis has therefore focused on methods to adapt sentiment resources (e.g. lexicons) from resourcerich languages (typically English) to other languages, with the goal of transferring sentiment or subjectivity analysis capabilities from English to other languages (e.g. Mihalcea et al. (2007) ; Banea et al. (2008; ; Wan (2008; 2009) ; Prettenhofer and Stein (2010) ). In recent years, however, sentiment-labeled data is gradually becoming available for languages other than English (e.g. Seki et al. (2007; 2008) ; Nakagawa et al. (2010) ; Schulz et al. (2010) ). In addition, there is still much room for improvement in existing monolingual (including English) sentiment classifiers, especially at the sentence level (Pang and Lee, 2008) . This paper tackles the task of bilingual sentiment analysis. In contrast to previous work, we (1) assume that some amount of sentimentlabeled data is available for the language pair under study, and (2) investigate methods to simultaneously improve sentiment classification for both languages. Given the labeled data in each language, we propose an approach that exploits an unlabeled parallel corpus with the following intuition: two sentences or documents that are parallel (i.e. translations of one another) should exhibit the same sentimenttheir sentiment labels (e.g. polarity, subjectivity, intensity) should be similar. The proposed maximum entropy-based EM approach jointly learns two monolingual sentiment classifiers by treating the sentiment labels in the unlabeled parallel text as unobserved latent variables, and maximizes the regularized joint likelihood of the language-specific labeled data together with the inferred sentiment labels of the parallel text. Although our approach should be applicable at the document-level and for additional sentiment tasks, we focus on sentence-level polarity classification in this work.", "cite_spans": [ { "start": 311, "end": 333, "text": "Mihalcea et al. (2007)", "ref_id": "BIBREF19" }, { "start": 336, "end": 355, "text": "Banea et al. 
(2008;", "ref_id": "BIBREF2" }, { "start": 358, "end": 368, "text": "Wan (2008;", "ref_id": "BIBREF35" }, { "start": 369, "end": 374, "text": "2009)", "ref_id": "BIBREF36" }, { "start": 377, "end": 406, "text": "Prettenhofer and Stein (2010)", "ref_id": "BIBREF26" }, { "start": 530, "end": 548, "text": "Seki et al. (2007;", "ref_id": "BIBREF30" }, { "start": 549, "end": 554, "text": "2008)", "ref_id": "BIBREF35" }, { "start": 557, "end": 579, "text": "Nakagawa et al. (2010)", "ref_id": "BIBREF21" }, { "start": 582, "end": 602, "text": "Schulz et al. (2010)", "ref_id": "BIBREF28" }, { "start": 760, "end": 780, "text": "(Pang and Lee, 2008)", "ref_id": "BIBREF24" }, { "start": 1350, "end": 1390, "text": "(e.g. polarity, subjectivity, intensity)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate our approach for English and Chinese on two dataset combinations (see Section 4) and find that the proposed approach outperforms the monolingual baselines (i.e. maximum entropy and SVM classifiers) as well as two alternative methods for leveraging unlabeled data (transductive SVMs (Joachims, 1999b) and cotraining (Blum and Mitchell, 1998) ). Accuracy is significantly improved for both languages, by 3.44%-8.12%. We furthermore find that improvements, albeit smaller, are obtained when the parallel data is replaced with a pseudo-parallel (i.e. automatically translated) corpus. To our knowledge, this is the first multilingual sentiment analysis study to focus on methods for simultaneously improving sentiment classification for a pair of languages based on unlabeled data rather than resource adaptation from one language to another.", "cite_spans": [ { "start": 294, "end": 311, "text": "(Joachims, 1999b)", "ref_id": "BIBREF15" }, { "start": 327, "end": 352, "text": "(Blum and Mitchell, 1998)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. Section 2 introduces related work. In Section 3, the proposed joint model is described. Sections 4 and 5, respectively, provide the experimental setup and results; the conclusion (Section 6) follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multilingual Sentiment Analysis. There is a growing body of work on multilingual sentiment analysis. Most approaches focus on resource adaptation from one language (usually English) to other languages with few sentiment resources. Mihalcea et al. (2007) , for example, generate subjectivity analysis resources in a new language from English sentiment resources by leveraging a bilingual dictionary or a parallel corpus. Banea et al. (2008; instead automatically translate the English resources using automatic machine translation engines for subjectivity classification. Prettenhofer and Stein (2010) investigate crosslingual sentiment classification from the perspective of domain adaptation based on structural correspondence learning (Blitzer et al., 2006) .", "cite_spans": [ { "start": 231, "end": 253, "text": "Mihalcea et al. (2007)", "ref_id": "BIBREF19" }, { "start": 420, "end": 439, "text": "Banea et al. 
(2008;", "ref_id": "BIBREF2" }, { "start": 571, "end": 600, "text": "Prettenhofer and Stein (2010)", "ref_id": "BIBREF26" }, { "start": 737, "end": 759, "text": "(Blitzer et al., 2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Approaches that do not explicitly involve resource adaptation include Wan (2009) , which uses co-training (Blum and Mitchell, 1998) with English vs. Chinese features comprising the two independent -views\u2016 to exploit unlabeled Chinese data and a labeled English corpus and thereby improves Chinese sentiment classification. Another notable approach is the work of Boyd-Graber and Resnik (2010) , which presents a generative model ---supervised multilingual latent Dirichlet allocation ---that jointly models topics that are consistent across languages, and employs them to better predict sentiment ratings.", "cite_spans": [ { "start": 70, "end": 80, "text": "Wan (2009)", "ref_id": "BIBREF36" }, { "start": 106, "end": 131, "text": "(Blum and Mitchell, 1998)", "ref_id": "BIBREF5" }, { "start": 363, "end": 392, "text": "Boyd-Graber and Resnik (2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Unlike the methods described above, we focus on simultaneously improving the performance of sentiment classification in a pair of languages by developing a model that relies on sentimentlabeled data in each language as well as unlabeled parallel text for the language pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Semi-supervised Learning. Another line of related work is semi-supervised learning, which combines labeled and unlabeled data to improve the performance of the task of interest (Zhu and Goldberg, 2009) . Among the popular semisupervised methods (e.g. EM on Na\u00efve Bayes (Nigam et al., 2000) , co-training (Blum and Mitchell, 1998) , transductive SVMs (Joachims, 1999b) , and co-regularization (Sindhwani et al., 2005; Amini et al., 2010) ), our approach employs the EM algorithm, extending it to the bilingual case based on maximum entropy. We compare to co-training and transductive SVMs in Section 5.", "cite_spans": [ { "start": 177, "end": 201, "text": "(Zhu and Goldberg, 2009)", "ref_id": "BIBREF40" }, { "start": 269, "end": 289, "text": "(Nigam et al., 2000)", "ref_id": "BIBREF22" }, { "start": 304, "end": 329, "text": "(Blum and Mitchell, 1998)", "ref_id": "BIBREF5" }, { "start": 350, "end": 367, "text": "(Joachims, 1999b)", "ref_id": "BIBREF15" }, { "start": 392, "end": 416, "text": "(Sindhwani et al., 2005;", "ref_id": "BIBREF31" }, { "start": 417, "end": 436, "text": "Amini et al., 2010)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Multilingual NLP for Other Tasks. Finally, there exists related work using bilingual resources to help other NLP tasks, such as word sense disambiguation (e.g. Ido and Itai (1994)), parsing (e.g. Burkett and Klein (2008) ; Zhao et al. (2009) ; Burkett et al. (2010)), information retrieval (Gao et al., 2009) , named entity detection (Burkett et al., 2010) ; topic extraction (e.g. Zhang et al., 2010 ), text classification (e.g. Amini et al., 2010) , and hyponym-relation acquisition (e.g. Oh et al., 2009) .", "cite_spans": [ { "start": 196, "end": 220, "text": "Burkett and Klein (2008)", "ref_id": "BIBREF9" }, { "start": 223, "end": 241, "text": "Zhao et al. 
(2009)", "ref_id": "BIBREF39" }, { "start": 290, "end": 308, "text": "(Gao et al., 2009)", "ref_id": "BIBREF11" }, { "start": 334, "end": 356, "text": "(Burkett et al., 2010)", "ref_id": "BIBREF8" }, { "start": 382, "end": 400, "text": "Zhang et al., 2010", "ref_id": "BIBREF38" }, { "start": 430, "end": 449, "text": "Amini et al., 2010)", "ref_id": "BIBREF0" }, { "start": 491, "end": 507, "text": "Oh et al., 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In these cases, multilingual models increase performance because different languages contain different ambiguities and therefore present complementary views on the shared underlying labels. Our work shares a similar motivation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We propose a maximum entropy-based statistical model. Maximum entropy (MaxEnt) models 1 have been widely used in many NLP tasks (Berger et al., 1996; Ratnaparkhi, 1997; Smith, 2006) . The models assign the conditional probability of the label given the observation as follows:", "cite_spans": [ { "start": 128, "end": 149, "text": "(Berger et al., 1996;", "ref_id": "BIBREF3" }, { "start": 150, "end": 168, "text": "Ratnaparkhi, 1997;", "ref_id": "BIBREF27" }, { "start": 169, "end": 181, "text": "Smith, 2006)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "A Joint Model with Unlabeled Parallel Text", "sec_num": "3" }, { "text": "( 1)where is a real-valued vector of feature weights and is a feature function that maps pairs to a nonnegative real-valued feature vector. Each feature has an associated parameter, , which is called its weight; and is the corresponding normalization factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Model with Unlabeled Parallel Text", "sec_num": "3" }, { "text": "Maximum likelihood parameter estimation (training) for such a model, with a set of labeled examples , amounts to solving the following optimization problem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Model with Unlabeled Parallel Text", "sec_num": "3" }, { "text": "(2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Model with Unlabeled Parallel Text", "sec_num": "3" }, { "text": "Given two languages and , suppose we have two distinct (i.e. not parallel) sets of sentimentlabeled data, and written in and respectively. In addition, we have unlabeled (w.r.t. sentiment) bilingual (in and ) parallel data that are defined as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "where denotes the polarity of the -th instance (positive or negative); and are respectively the numbers of labeled instances in and ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "and are parallel instances in and , respectively (i.e. they are supposed to be translations of one another), whose labels and are unobserved, but according to the intuition outlined in Section 1, should be similar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "Given the input data and , our task is to jointly learn two monolingual sentiment classifiers one for and one for . 
With MaxEnt, we learn from the input data:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "where and are the vectors of feature weights for and , respectively (for brevity we denote them as and in the remaining sections). In this study, we focus on sentence-level sentiment classification, i.e. each is a sentence, and and are parallel sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "Given the problem definition above, we now present a novel model to exploit the correspondence of parallel sentences in unlabeled bilingual text. The model maximizes the following joint likelihood with respect to and :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Joint Model", "sec_num": "3.2" }, { "text": "(3) where denotes or ; the first term on the right-hand side is the likelihood of labeled data for both and ; and the second term is the likelihood of the unlabeled parallel data .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Joint Model", "sec_num": "3.2" }, { "text": "If we assume that parallel sentences are perfect translations, the two sentences in each pair should have the same polarity label, which gives us:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Joint Model", "sec_num": "3.2" }, { "text": "(4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Joint Model", "sec_num": "3.2" }, { "text": "where is the unobserved class label for the -th instance in the unlabeled data. This probability directly models the sentiment label agreement between and . However, there could be considerable noise in real-world parallel data, i.e. the sentence pairs may be noisily parallel (or even comparable) instead of fully parallel (Munteanu and Marcu, 2005) . In such noisy cases, the labels (positive or negative) could be different for the two monolingual sentences in a sentence pair. Although we do not know the exact probability that a sentence pair exhibits the same label, we can approximate it using their translation probabilities, which can be computed using word alignment toolkits such as Giza++ (Och and Ney, 2003) or the Berkeley word aligner (Liang et al., 2006) . The intuition here is that if the translation probability of two sentences is high, the probability that they have the same sentiment label should be high as well. Therefore, by considering the noise in parallel data, we get:", "cite_spans": [ { "start": 324, "end": 350, "text": "(Munteanu and Marcu, 2005)", "ref_id": "BIBREF20" }, { "start": 701, "end": 720, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF23" }, { "start": 750, "end": 770, "text": "(Liang et al., 2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "The Joint Model", "sec_num": "3.2" }, { "text": "(5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Joint Model", "sec_num": "3.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Joint Model", "sec_num": "3.2" }, { "text": "is the translation probability of the -th sentence pair in ; 2 is the opposite of ; the first term models the probability that and have the same label; and the second term models the probability that they have different labels. By further considering the weight to ascribe to the unlabeled data vs. 
the labeled data (and the weight for the L2-norm regularization), we get the following regularized joint log likelihood to be maximized: 6where the first term on the right-hand side is the log likelihood of the labeled data from both and the second is the log likelihood of the unlabeled parallel data , multiplied by , a constant that controls the contribution of the unlabeled data; and is a regularization constant that penalizes model complexity or large feature weights. When is 0, the algorithm ignores the unlabeled data and degenerates to two MaxEnt models trained on only the labeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Joint Model", "sec_num": "3.2" }, { "text": "To solve the optimization problem for the model, we need to jointly estimate the optimal parameters for the two monolingual classifiers by finding:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The EM Algorithm on MaxEnt", "sec_num": "3.3" }, { "text": "(7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The EM Algorithm on MaxEnt", "sec_num": "3.3" }, { "text": "This can be done with an EM algorithm, whose steps are summarized in Algorithm 1. First, the MaxEnt parameters, and , are estimated from just the labeled data. Then, in the E-step, the classifiers, based on current values of and , compute for each labeled example and assign probabilistically-weighted class labels to each unlabeled example. Next, in the M-step, the parameters, and , are updated using both the original labeled data ( and ) and the newly labeled data . These last two steps are iterated until convergence or a predefined iteration limit . In the M-step, we can optimize the regularized joint log likelihood using any gradient-based optimization technique (Malouf, 2002) . The gradient for Equation 3 based on Equation 4 is shown in Appendix A; those for Equations 5 and 6 can be derived similarly. In our experiments, we use the L-BFGS algorithm (Liu et al., 1989) and run EM until the change in regularized joint log likelihood is less than 1e-5 or we reach 100 iterations. 3", "cite_spans": [ { "start": 673, "end": 687, "text": "(Malouf, 2002)", "ref_id": "BIBREF18" }, { "start": 864, "end": 882, "text": "(Liu et al., 1989)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "The EM Algorithm on MaxEnt", "sec_num": "3.3" }, { "text": "We also consider the case where a parallel corpus is not available: to obtain a pseudo-parallel corpus (i.e. sentences in one language with their corresponding automatic translations), we use an automatic machine translation system (e.g. 
Google machine translation 4 ) to translate unlabeled indomain data from to or vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pseudo-Parallel Labeled and Unlabeled Data", "sec_num": "3.4" }, { "text": "Since previous work (Banea et al., 2008; Wan, 2009) has shown that it could be useful to automatically translate the labeled data from the source language into the target language, we can further incorporate such translated labeled data into the joint model by adding the following component into Equation 6: 8where is the alternative class of , is the automatically translated example from ; and is a constant that controls the weight of the translated labeled data.", "cite_spans": [ { "start": 20, "end": 40, "text": "(Banea et al., 2008;", "ref_id": "BIBREF2" }, { "start": 41, "end": 51, "text": "Wan, 2009)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Pseudo-Parallel Labeled and Unlabeled Data", "sec_num": "3.4" }, { "text": "The following labeled datasets are used in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Preprocessing", "sec_num": "4.1" }, { "text": "MPQA (Labeled English Data): The Multi-Perspective Question Answering (MPQA) corpus (Wiebe et al., 2005) consists of newswire documents manually annotated with phrase-level subjectivity information. We extract all sentences containing strong (i.e. intensity is medium or higher), sentiment-bearing (i.e. polarity is positive or negative) expressions following Choi and Cardie (2008) . Sentences with both positive and negative strong expressions are then discarded, and the polarity of each remaining sentence is set to that of its sentiment-bearing expression(s).", "cite_spans": [ { "start": 84, "end": 104, "text": "(Wiebe et al., 2005)", "ref_id": "BIBREF37" }, { "start": 360, "end": 382, "text": "Choi and Cardie (2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Preprocessing", "sec_num": "4.1" }, { "text": "NTCIR-EN (Labeled English Data) and NTCIR-CH (Labeled Chinese Data): The NTCIR Opinion Analysis task (Seki et al., 2007; 2008) provides sentiment-labeled news data in Chinese, Japanese and English. Only those sentences with a polarity label (positive or negative) agreed to by at least two annotators are extracted. We use the Chinese data from NTCIR-6 as our Chinese labeled data. Since far fewer sentences in the English data pass the annotator agreement filter, we combine the English data from NTCIR-6 and NTCIR-7. The Chinese sentences are segmented using the Stanford Chinese word segmenter (Tseng et al., 2005) .", "cite_spans": [ { "start": 101, "end": 120, "text": "(Seki et al., 2007;", "ref_id": "BIBREF30" }, { "start": 121, "end": 126, "text": "2008)", "ref_id": "BIBREF35" }, { "start": 597, "end": 617, "text": "(Tseng et al., 2005)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Preprocessing", "sec_num": "4.1" }, { "text": "The number of sentences in each of these datasets is shown in Table 1 . In our experiments, we evaluate two settings of the data: (1) MPQA+NTCIR-CH, and (2) NTCIR-EN+NTCIR-CH. In each setting, the English labeled data constitutes and the Chinese labeled data, . , 2003) to obtain a new translation probability for each sentence pair, and select the 100,000 pairs with the highest translation probabilities. 
5 We also try to remove neutral sentences from the parallel data since they can introduce noise into our model, which deals only with positive and negative examples. To do this, we train a single classifier from the combined Chinese and English labeled data for each data setting above by concatenating the original English and Chinese feature sets. We then classify each unlabeled sentence pair by combining the two sentences in each pair into one. We choose the most confidently predicted 10,000 positive and 10,000 negative pairs to constitute the unlabeled parallel corpus for each data setting.", "cite_spans": [ { "start": 262, "end": 269, "text": ", 2003)", "ref_id": null }, { "start": 407, "end": 408, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 62, "end": 69, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data Sets and Preprocessing", "sec_num": "4.1" }, { "text": "In our experiments, the proposed joint model is compared with the following baseline methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "4.2" }, { "text": "MaxEnt: This method learns a MaxEnt classifier for each language given the monolingual labeled data; the unlabeled data is not used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "4.2" }, { "text": "SVM: This method learns an SVM classifier for each language given the monolingual labeled data; the unlabeled data is not used. SVM-light (Joachims, 1999a) is used for all the SVM-related experiments.", "cite_spans": [ { "start": 138, "end": 155, "text": "(Joachims, 1999a)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "4.2" }, { "text": "Monolingual TSVM (TSVM-M): This method learns two transductive SVM (TSVM) classifiers given the monolingual labeled data and the monolingual unlabeled data for each language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "4.2" }, { "text": "Bilingual TSVM (TSVM-B): This method learns one TSVM classifier given the labeled training data in two languages together with the unlabeled sentences by combining the two sentences in each unlabeled pair into one. We expect this method to perform better than TSVM-M since the combined (bilingual) unlabeled sentences could be more helpful than the unlabeled monolingual sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "4.2" }, { "text": "This method applies SVM-based co-training given both the labeled training data and the unlabeled parallel data following Wan (2009) . First, two monolingual SVM classifiers are built based on only the corresponding labeled data, and then they are bootstrapped by adding the most confident predicted examples from the unlabeled data into the training set. We run bootstrapping for 100 iterations. In each iteration, we select the most confidently predicted 50 positive and 50 negative sentences from each of the two classifiers, and take the union of the resulting 200 sentence pairs as the newly labeled training data. 
(Examples with conflicting labels within the pair are not included.)", "cite_spans": [ { "start": 121, "end": 131, "text": "Wan (2009)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Co-Training with SVMs (Co-SVM):", "sec_num": null }, { "text": "In our experiments, the methods are tested in the two data settings with the corresponding unlabeled parallel corpus as mentioned in Section 4. 6 We use 6 The results reported in this section employ Equation 4. Preliminary experiments showed that Equation 5 does not significantly improve the performance in our case, which is reasonable since we choose only sentence pairs with the highest translation probabilities to be our unlabeled data (see Section 4.1). 5-fold cross-validation and report average accuracy (also MicroF1 in this case) and MacroF1 scores. Unigrams are used as binary features for all models, as Pang et al. (2002) showed that binary features perform better than frequency features for sentiment classification. The weights for unlabeled data and regularization, and , are set to 1 unless otherwise stated. Later, we will show that the proposed approach performs well with a wide range of parameter values. 7", "cite_spans": [ { "start": 144, "end": 145, "text": "6", "ref_id": null }, { "start": 153, "end": 154, "text": "6", "ref_id": null }, { "start": 617, "end": 635, "text": "Pang et al. (2002)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5" }, { "text": "We first compare the proposed joint model (Joint) with the baselines in Table 2 . As seen from the table, the proposed approach outperforms all five baseline methods in terms of both accuracy and MacroF1 for both English and Chinese and in both of the data settings. 8 By making use of the unlabeled parallel data, our proposed approach improves the accuracy, compared to MaxEnt, by 8.12% (or 33.27% error reduction) on English and 3.44% (or 16.92% error reduction) on Chinese in the first setting, and by 5.07% (or 19.67% error reduction) on English and 3.87% (or 19.4% error reduction) on Chinese in the second setting.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Method Comparison", "sec_num": "5.1" }, { "text": "Among the baselines, the best is Co-SVM; TSVMs do not always improve performance using the unlabeled data compared to the standalone SVM; and TSVM-B outperforms TSVM-M except for Chinese in the second setting. The MPQA data is more difficult in general compared to the NTCIR data. Without unlabeled parallel data, the performance on the Chinese data is better than on the English data, which is consistent with results reported in NTCIR-6 (Seki et al., 2007) .", "cite_spans": [ { "start": 439, "end": 458, "text": "(Seki et al., 2007)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Method Comparison", "sec_num": "5.1" }, { "text": "Overall, the unlabeled parallel data improves classification accuracy for both languages when using our proposed joint model and Co-SVM. The joint model makes better use of the unlabeled parallel data than Co-SVM or TSVMs presumably because of its attempt to jointly optimize the two monolingual models via soft (probabilistic) assignments of the unlabeled instances to classes in each iteration, instead of the hard assignments in Co-SVM and TSVMs. 
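To make the soft-assignment point concrete, here is a minimal sketch (not the authors' released code) of how the E-step weight for each candidate label of an unlabeled parallel pair can be computed under the perfect-translation assumption of Equation 4; the array names and the binary positive/negative setup are illustrative assumptions.

```python
import numpy as np

def estep_soft_labels(probs_lang1, probs_lang2):
    """Soft label assignment for unlabeled parallel pairs (E-step sketch).

    probs_lang1, probs_lang2: arrays of shape (n_pairs, n_classes) holding each
    monolingual MaxEnt model's posterior P(c | sentence) for the two sides of
    every pair.  Under the perfect-translation assumption (Equation 4: both
    sentences share one latent polarity label), the posterior over that shared
    label is proportional to the product of the two monolingual posteriors.
    """
    joint = probs_lang1 * probs_lang2                 # elementwise product per class
    return joint / joint.sum(axis=1, keepdims=True)   # renormalize per pair

# toy example: one pair on which both models lean positive
p_en = np.array([[0.7, 0.3]])   # [P(pos), P(neg)] from the English model
p_ch = np.array([[0.6, 0.4]])   # [P(pos), P(neg)] from the Chinese model
print(estep_soft_labels(p_en, p_ch))   # ~[[0.78 0.22]], the probabilistically-weighted label
```

The normalized product of the two monolingual posteriors is the probabilistically-weighted label that the M-step then treats as fractional training data, in contrast to the hard 0/1 assignments made by Co-SVM and TSVMs.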
Although English sentiment classification alone is more difficult than Chinese for our datasets, we obtain greater performance gains for English by exploiting unlabeled parallel data as well as the Chinese labeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method Comparison", "sec_num": "5.1" }, { "text": "Unlabeled Data Figure 1 shows the accuracy curve of the proposed approach for the two data settings when varying the weight for the unlabeled data, , from 0 to 1. When is set to 0, the joint model degenerates to two MaxEnt models trained with only the labeled data.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 23, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Varying the Weight and Amount of", "sec_num": "5.2" }, { "text": "We can see that the performance gains for the proposed approach are quite remarkable even when is set to 0.1; performance is largely stable after reaches 0.4. Although MPQA is more difficult in general compared to the NTCIR data, we still see steady improvements in performance with unlabeled parallel data. Overall, the proposed approach performs quite well for a wide range of parameter values of . Figure 2 shows the accuracy curve of the proposed approach for the two data settings when varying the amount of unlabeled data from 0 to 20,000 instances. We see that the performance of the proposed approach improves steadily by adding more and more unlabeled data. However, even with only 2,000 unlabeled sentence pairs, the proposed approach still produces large performance gains.", "cite_spans": [], "ref_spans": [ { "start": 401, "end": 409, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Varying the Weight and Amount of", "sec_num": "5.2" }, { "text": "As discussed in Section 3.4, we generate pseudoparallel data by translating the monolingual sentences in each setting using Google's machine translation system. Figures 3 and 4 show the performance of our model using the pseudoparallel data versus the real parallel data, in the two settings, respectively. The EN->CH pseudoparallel data consists of the English unlabeled data and its automatic Chinese translation, and vice versa.", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 176, "text": "Figures 3 and 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results on Pseudo-Parallel Unlabeled Data", "sec_num": "5.3" }, { "text": "Although not as significant as those with parallel data, we can still obtain improvements using the pseudo-parallel data, especially in the first setting. The difference between using parallel versus pseudo-parallel data is around 2-4% in Figures 3 and 4 , which is reasonable since the quality of the pseudo-parallel data is not as good as that of the parallel data. Therefore, the performance using pseudo-parallel data is better with a small weight (e.g. = 0.1) in some cases. ", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 255, "text": "Figures 3 and 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results on Pseudo-Parallel Unlabeled Data", "sec_num": "5.3" }, { "text": "In this section, we investigate how adding automatically translated labeled data might influence the performance as mentioned in Section 3.4. We use only the translated labeled data to train classifiers, and then directly classify the test data. 
The average accuracies in setting 1 are 66.61% and 63.11% on English and Chinese, respectively; while the accuracies in setting 2 are 58.43% and 54.07% on English and Chinese, respectively. This result is reasonable because of the language gap between the original language and the translated language. In addition, the class distributions of the English labeled data and the Chinese are quite different (30% vs. 55% for positive as shown in Table 1 ).", "cite_spans": [], "ref_spans": [ { "start": 688, "end": 695, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Adding Pseudo-Parallel Labeled Data", "sec_num": "5.4" }, { "text": "Figures 5 and 6 show the accuracies when varying the weight of the translated labeled data vs. the labeled data, with and without the unlabeled parallel data. From Figure 5 for setting 1, we can see that the translated data can be helpful given the labeled data and even the unlabeled data, as long as is small; while in Figure 6 , the translated data decreases the performance in most cases for setting 2. One possible reason is that in the first data setting, the NTCIR English data covers the same topics as the NTCIR Chinese data and thus direct translation is helpful, while the English and Chinese topics are quite different in the second data setting, and thus direct translation hurts the performance given the existing labeled data in each language.", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 172, "text": "Figure 5", "ref_id": null }, { "start": 321, "end": 329, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Adding Pseudo-Parallel Labeled Data", "sec_num": "5.4" }, { "text": "To further understand what contributions our proposed approach makes to the performance gain, we look inside the parameters in the MaxEnt models learned before and after adding the parallel unlabeled data. Table 3 shows the features in the model learned from the labeled data that have the largest weight change after adding the parallel data; Table 4 shows the newly learned features from the unlabeled data with the largest weights. From Table 3 10 we can see that the weight changes of the original features are quite reasonable, e.g. the top words in the positive class are obviously positive and the proposed approach gives them higher weights. The new features also seem reasonable given the knowledge that the labeled and unlabeled data includes negative news about for specific topics (e.g. Germany, Taiwan),.", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 213, "text": "Table 3", "ref_id": null }, { "start": 344, "end": 351, "text": "Table 4", "ref_id": null }, { "start": 440, "end": 447, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5.5" }, { "text": "We also examine the process of joint training by checking the performance on test data and the agreement of the two monolingual models on the unlabeled parallel data in both settings. The average agreement across 5 folds is 85.06% and 73.87% in settings 1 and 2, respectively, before the joint training, and increases to 100% and 99.89%, respectively, after 100 iterations of joint training. 
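For clarity, the agreement statistic reported here can be computed as in the following sketch; the function and the toy labels are hypothetical and only illustrate what is being measured.

```python
def agreement_rate(preds_lang1, preds_lang2):
    """Fraction of unlabeled parallel pairs on which the two monolingual
    classifiers currently predict the same polarity label -- the agreement
    statistic tracked across EM iterations in this section."""
    assert len(preds_lang1) == len(preds_lang2)
    same = sum(a == b for a, b in zip(preds_lang1, preds_lang2))
    return same / len(preds_lang1)

# toy example: the two models agree on 3 of 4 pairs
print(agreement_rate(["pos", "neg", "pos", "neg"],
                     ["pos", "neg", "neg", "neg"]))   # 0.75
```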
Although the average agreement has already increased to 99.50% and 99.02% in settings 1 and 2, respectively, after 30 iterations, the performance on the test set steadily improves in both settings until around 50-60 iterations, and then becomes relatively stable after that.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.5" }, { "text": "Examination of those sentence pairs in setting 2 for which the two monolingual models still 9 This is an abbreviation for the Organization of African Unity. 10 The features and weights in Tables 3 and 4 are extracted from the English model in the first fold of setting 1. disagree after 100 iterations of joint training often produces sentences that are not quite parallel, e.g.:", "cite_spans": [ { "start": 157, "end": 159, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 188, "end": 202, "text": "Tables 3 and 4", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5.5" }, { "text": "English: The two sides attach great importance to international cooperation on protection and promotion of human rights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.5" }, { "text": "Chinese: \u53cc\u65b9\u8ba4\u4e3a,\u5728\u4eba\u6743\u95ee\u9898\u4e0a\u4e0d\u80fd\u91c7\u53d6-\u53cc\u91cd\u6807\u51c6\u2016,\u53cd\u5bf9\u5728 \u56fd\u9645\u5173\u7cfb\u4e2d\u5229\u7528\u4eba\u6743\u95ee\u9898\u65bd\u538b\u3002(Both sides agree that double standards on the issue of human rights are to be avoided, and are opposed to using pressure on human rights issues in international relations.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.5" }, { "text": "Since the two sentences discuss human rights from very different perspectives, it is reasonable that the two monolingual models will classify them with different polarities (i.e. positive for the English sentence and negative for the Chinese sentence) even after joint training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.5" }, { "text": "In this paper, we study bilingual sentiment classification and propose a joint model to simultaneously learn better monolingual sentiment classifiers for each language by exploiting an unlabeled parallel corpus together with the labeled data available for each language. Our experiments show that the proposed approach can significantly improve sentiment classification for both languages. Moreover, the proposed approach continues to produce (albeit smaller) performance gains when employing pseudo-parallel data from machine translation engines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In future work, we would like to apply the joint learning idea to other learning frameworks (e.g. SVMs), and to extend the proposed model to handle word-level parallel information, e.g. bilingual dictionaries or word alignment information. 
Another issue is to investigate how to improve multilingual sentiment analysis by exploiting comparable corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "They are sometimes referred to as log-linear models, but also known as exponential models, generalized linear models, or logistic regression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The probability should be rescaled within the range of [0, 1], where 0.5 means that we are completely unsure if the sentences are translations of each other or not, and only those translation pairs with a probability larger than 0.5 are meaningful for our purpose.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Since the EM-based algorithm may find a local maximum of the objective function, the initialization of the parameters is important. Our experiments show that an effective maximum can usually be found by initializing the parameters with those learned from the labeled data; performance would be much worse if we initialize all the parameters to 0 or 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://translate.google.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We removed sentence pairs with an original confidence score (given in the corpus) smaller than 0.98, and also removed the pairs that are too long (more than 60 characters in one sentence) to facilitate Giza++. We first obtain translation probabilities for both directions (i.e. Chinese to English and English to Chinese) with Giza++, take the log of the product of those two probabilities, and then divide it by the sum of lengths of the two sentences in each pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The code is at http://sites.google.com/site/lubin2010. 8 Significance is tested using paired t-tests with <0.05: \u20ac denotes statistical significance compared to the corresponding performance of MaxEnt; * denotes statistical significance compared to SVM; and \u0393 denotes statistical significance compared to Co-SVM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Shuo Chen, Long Jiang, Thorsten Joachims, Lillian Lee, Myle Ott, Yan Song, Xiaojun Wan, Ainur Yessenalina, Jingbo Zhu and the anonymous reviewers for many useful comments and discussion. This work was supported in part by National Science Foundation Grants BCS-0904822, BCS-0624277, IIS-0968450; and by a gift from Google. Chenhao Tan is supported by NSF (DMS-0808864), ONR (YIP-N000140910911), and a grant from Microsoft.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "In this appendix, we derive the gradient for the objective function in Equation 3, which is used in parameter estimation. As mentioned in Section 3.3, the parameters can be learned by finding:Since the first term on the right-hand side is just the expression for the standard MaxEnt problem, we will focus on the gradient for the second term, and denote as ( ).Let denote or , and be the th weight in the vector . For brevity, we drop the in the above notation, and write to denote . 
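The derivation that follows lost its mathematical symbols in extraction. As a hedged reconstruction under assumed notation (theta_v for the weight vector of the model for language v, f for the feature function, x_i^(1) and x_i^(2) for the i-th parallel pair, C for the label set), the unlabeled-data term of Equation 3 under Equation 4 and its gradient, obtained by standard MaxEnt gradient algebra, are as follows; this sketch may differ cosmetically from the original.

```latex
% Unlabeled-data term under Eq. (4): both sides of pair i share one latent label c
\ell_U(\theta_1,\theta_2) \;=\; \sum_i \log \sum_{c \in C}
    P(c \mid x_i^{(1)};\theta_1)\, P(c \mid x_i^{(2)};\theta_2)

% Posterior over the shared label (the E-step weight)
q_i(c) \;=\; \frac{P(c \mid x_i^{(1)};\theta_1)\, P(c \mid x_i^{(2)};\theta_2)}
                  {\sum_{c'} P(c' \mid x_i^{(1)};\theta_1)\, P(c' \mid x_i^{(2)};\theta_2)}

% Gradient w.r.t. \theta_1 (symmetric for \theta_2): expected features under q_i
% minus expected features under the language-1 model alone
\frac{\partial \ell_U}{\partial \theta_1}
  \;=\; \sum_i \sum_{c} q_i(c)\,
        \Bigl[ f(x_i^{(1)},c) \;-\; \sum_{c'} P(c' \mid x_i^{(1)};\theta_1)\, f(x_i^{(1)},c') \Bigr]
```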
Then the partial derivative of (*) based on Equation 4 with respect to is as follows:(1) Further, we obtain:(2) Merge (2) into (1), we get:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Combining coregularization and consensusbased self-training for multilingual text categorization", "authors": [ { "first": "Massih-Reza", "middle": [], "last": "Amini", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Goutte", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" } ], "year": 2010, "venue": "Proceeding of SIGIR'10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Massih-Reza Amini, Cyril Goutte, and Nicolas Usunier. 2010. Combining coregularization and consensus- based self-training for multilingual text categorization. In Proceeding of SIGIR'10.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multilingual subjectivity: Are more languages better?", "authors": [ { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2010, "venue": "Proceedings of COLING'10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen Banea, Rada Mihalcea, and Janyce Wiebe. 2010. Multilingual subjectivity: Are more languages better? In Proceedings of COLING'10.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multilingual subjectivity analysis using machine translation", "authors": [ { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Samer", "middle": [], "last": "Hassan", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP'08", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen Banea, Rada Mihalcea, Janyce Wiebe, and Samer Hassan. 2008. Multilingual subjectivity analysis using machine translation. In Proceedings of EMNLP'08.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "Stephen", "middle": [ "A Della" ], "last": "Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "J Della" ], "last": "Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam L. Berger, Stephen A. Della Pietra and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Domain adaptation with structural corresponddence learning", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EMNLP'06", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. 
Domain adaptation with structural correspond- dence learning. In Proceedings of EMNLP'06.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Combining labeled and unlabeled data with co-training", "authors": [ { "first": "Avrim", "middle": [], "last": "Blum", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLT'98", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of COLT'98.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Holistic sentiment analysis across languages: Multilingual supervised Latent Dirichlet Allocation", "authors": [ { "first": "Jordan", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "-", "middle": [], "last": "Graber", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2010, "venue": "Proceedings of EMNLP'10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordan Boyd-Graber and Philip Resnik. 2010. Holistic sentiment analysis across languages: Multilingual supervised Latent Dirichlet Allocation. In Proceedings of EMNLP'10.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Identifying expressions of opinion in context", "authors": [ { "first": "Eric", "middle": [], "last": "Breck", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2007, "venue": "Proceedings of IJCAI'07", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In Proceedings of IJCAI'07.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning better monolingual models with unannotated bilingual text", "authors": [ { "first": "David", "middle": [], "last": "Burkett", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Proceedings of CoNLL'10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Burkett, Slav Petrov, John Blitzer, and Dan Klein. 2010. Learning better monolingual models with unannotated bilingual text. In Proceedings of CoNLL'10.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Two languages are better than one (for syntactic parsing)", "authors": [ { "first": "David", "middle": [], "last": "Burkett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP'08", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Burkett and Dan Klein. 2008. Two languages are better than one (for syntactic parsing). 
In Proceedings of EMNLP'08.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning with compositional semantics as structural inference for subsentential sentiment analysis", "authors": [ { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP'08", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yejin Choi and Claire Cardie. 2008. Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Proceedings of EMNLP'08.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Exploiting bilingual information to improve web search", "authors": [ { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL/IJCNLP'09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Gao, John Blitzer, Ming Zhou, and Kam-Fai Wong. 2009. Exploiting bilingual information to improve web search. In Proceedings of ACL/IJCNLP'09.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Mining opinion features in customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of AAAI'04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining opinion features in customer reviews. In Proceedings of AAAI'04.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Word sense disambiguation using a second language monolingual corpus", "authors": [ { "first": "Alon", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "", "middle": [], "last": "Itai", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "4", "pages": "563--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus, Computational Linguistics, 20(4): 563-596.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Making Large-Scale SVM Learning Practical", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Advances in Kernel Methods -Support Vector Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 1999a. Making Large-Scale SVM Learning Practical. In: Advances in Kernel Methods - Support Vector Learning, B. Sch\u00f6lkopf, C. Burges, and A. Smola (ed.), MIT Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Transductive inference for text classification using support vector machines", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ICML'99", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 1999b. Transductive inference for text classification using support vector machines. 
In Proceedings of ICML'99.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Alignment by agreement", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of NAACL'06", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of NAACL'06.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "On the limited memory BFGS method for large scale optimization", "authors": [ { "first": "C", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Nocedal", "suffix": "" } ], "year": 1989, "venue": "Mathematical Programming", "volume": "", "issue": "45", "pages": "503--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, (45): 503-528.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A comparison of algorithms for maximum entropy parameter estimation", "authors": [ { "first": "Robert", "middle": [], "last": "Malouf", "suffix": "" } ], "year": 2002, "venue": "Proceedings of CoNLL'02", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of CoNLL'02.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning multilingual subjective language via cross-lingual projections", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL'07", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea, Carmen Banea, and Janyce Wiebe. 2007. Learning multilingual subjective language via cross-lingual projections. In Proceedings of ACL'07.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Improving machine translation performance by exploiting non-parallel corpora", "authors": [ { "first": "S", "middle": [], "last": "Dragos", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Munteanu", "suffix": "" }, { "first": "", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "4", "pages": "477--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragos S. Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. 
Computational Linguistics, 31(4): 477-504.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Dependency tree-based sentiment classification using CRFs with hidden variables", "authors": [ { "first": "Tetsuji", "middle": [], "last": "Nakagawa", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of NAACL/HLT '10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using CRFs with hidden variables. In Proceedings of NAACL/HLT '10.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Text classification from labeled and unlabeled documents using EM", "authors": [ { "first": "Kamal", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "Andrew", "middle": [ "K" ], "last": "Mccallum", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Thrun", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2000, "venue": "Machine Learning", "volume": "39", "issue": "", "pages": "103--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kamal Nigam, Andrew K. Mccallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2): 103-134.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "J", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1): 19-51.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Opinion mining and sentiment analysis, Foundations and Trends in Information Retrieval", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis, Foundations and Trends in Information Retrieval, Now Publishers.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Thumbs up? Sentiment classification using machine learning techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of EMNLP'02", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. 
In Proceedings of EMNLP'02.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Cross-language text classification using structural correspondence learning", "authors": [ { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL'10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Prettenhofer and Benno Stein. 2010. Cross-language text classification using structural correspondence learning. In Proceedings of ACL'10.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A simple introduction to maximum entropy models for natural language processing", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adwait Ratnaparkhi. 1997. A simple introduction to maximum entropy models for natural language processing. Technical Report 97-08, University of Pennsylvania.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Multilingual corpus development for opinion mining", "authors": [ { "first": "Julia", "middle": [ "M" ], "last": "Schulz", "suffix": "" }, { "first": "Christa", "middle": [], "last": "Womser-Hacker", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mandl", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC'10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia M. Schulz, Christa Womser-Hacker, and Thomas Mandl. 2010. Multilingual corpus development for opinion mining. In Proceedings of LREC'10.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Overview of multilingual opinion analysis task at NTCIR-7", "authors": [ { "first": "Yohei", "middle": [], "last": "Seki", "suffix": "" }, { "first": "David", "middle": [ "Kirk" ], "last": "Evans", "suffix": "" }, { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Kando", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the NTCIR-7 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yohei Seki, David Kirk Evans, Lun-Wei Ku, Le Sun, Hsin-Hsi Chen, and Noriko Kando. 2008. Overview of multilingual opinion analysis task at NTCIR-7. In Proceedings of the NTCIR-7 Workshop.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Overview of opinion analysis pilot task at NTCIR-6", "authors": [ { "first": "Yohei", "middle": [], "last": "Seki", "suffix": "" }, { "first": "David", "middle": [ "K" ], "last": "Evans", "suffix": "" }, { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the NTCIR-6 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yohei Seki, David K. Evans, Lun-Wei Ku, Le Sun, Hsin-Hsi Chen, Noriko Kando, and Chin-Yew Lin. 2007. Overview of opinion analysis pilot task at NTCIR-6.
In Proceedings of the NTCIR-6 Workshop.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A co-regularization approach to semi-supervised learning with multiple views", "authors": [ { "first": "Vikas", "middle": [], "last": "Sindhwani", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Niyogi", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Belkin", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ICML'05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. 2005. A co-regularization approach to semi-supervised learning with multiple views. In Proceedings of ICML'05.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Novel estimation methods for unsupervised discovery of latent structure in natural language text", "authors": [ { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noah A. Smith. 2006. Novel estimation methods for unsupervised discovery of latent structure in natural language text. Ph.D. thesis, Department of Computer Science, Johns Hopkins University.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A conditional random field word segmenter", "authors": [ { "first": "Huihsin", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Pichuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 4th SIGHAN Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter. In Proceedings of the 4th SIGHAN Workshop.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews", "authors": [ { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL'02", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of ACL'02.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Using Bilingual Knowledge and Ensemble Techniques for Unsupervised Chinese Sentiment Analysis", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP'08", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan. 2008. Using Bilingual Knowledge and Ensemble Techniques for Unsupervised Chinese Sentiment Analysis.
In Proceedings of EMNLP'08.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Co-training for cross-lingual sentiment classification", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL/AFNLP'09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of ACL/AFNLP'09.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Annotating expressions of opinions and emotions in language", "authors": [ { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2005, "venue": "Language Resources and Evaluation", "volume": "39", "issue": "2-3", "pages": "165--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3): 165-210.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Cross-lingual latent topic extraction", "authors": [ { "first": "Duo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qiaozhu", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL'10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duo Zhang, Qiaozhu Mei, and ChengXiang Zhai. 2010. Cross-lingual latent topic extraction. In Proceedings of ACL'10.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Cross language dependency parsing using a bilingual lexicon", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL/IJCNLP'09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao, Yan Song, Chunyu Kit, and Guodong Zhou. 2009. Cross language dependency parsing using a bilingual lexicon. In Proceedings of ACL/IJCNLP'09.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Introduction to Semi-Supervised Learning", "authors": [ { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Andrew", "middle": [ "B" ], "last": "Goldberg", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojin Zhu and Andrew B. Goldberg. 2009. Introduction to Semi-Supervised Learning. Morgan & Claypool Publishers.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The joint training algorithm: two monolingual MaxEnt classifiers, one per language, each with its own parameter vector. 1. Train two initial monolingual models: train and initialize both models on the labeled data. 2. Jointly optimize the two monolingual models: for t = 1 to T do (T: number of iterations) an EM-style update over the unlabeled parallel data; if the increase of the joint log likelihood is sufficiently small, break; end for. 3. Output the two resulting monolingual classifiers. (An illustrative code sketch of this loop is appended after these entries.)", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Accuracy vs. Weight of Unlabeled Data; Figure 2. Accuracy vs. 
Amount of Unlabeled Data. Legend: Chinese and English classifiers on NTCIR-EN+NTCIR-CH and on MPQA+NTCIR-CH.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Accuracy with Pseudo-Parallel Unlabeled Data in Setting 1; Figure 4. Accuracy with Pseudo-Parallel Unlabeled Data in Setting 2", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "text": "", "html": null, "num": null, "content": "", "type_str": "table" }, "TABREF3": { "text": "Comparison of Results", "html": null, "num": null, "content": "
", "type_str": "table" } } } }