{ "paper_id": "D10-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:51:28.404018Z" }, "title": "Holistic Sentiment Analysis Across Languages: Multilingual Supervised Latent Dirichlet Allocation", "authors": [ { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "", "affiliation": { "laboratory": "", "institution": "UMIACS University of Maryland College Park", "location": { "region": "MD" } }, "email": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "", "affiliation": { "laboratory": "", "institution": "UMIACS University of Maryland", "location": { "settlement": "College Park", "region": "MD" } }, "email": "resnik@umd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "D10-1005", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "In this paper, we develop multilingual supervised latent Dirichlet allocation (MLSLDA), a probabilistic generative model that allows insights gleaned from one language's data to inform how the model captures properties of other languages. MLSLDA accomplishes this by jointly modeling two aspects of text: how multilingual concepts are clustered into thematically coherent topics and how topics associated with text connect to an observed regression variable (such as ratings on a sentiment scale). Concepts are represented in a general hierarchical framework that is flexible enough to express semantic ontologies, dictionaries, clustering constraints, and, as a special, degenerate case, conventional topic models. Both the topics and the regression are discovered via posterior inference from corpora. We show MLSLDA can build topics that are consistent across languages, discover sensible bilingual lexical correspondences, and leverage multilingual corpora to better predict sentiment. Sentiment analysis (Pang and Lee, 2008) offers the promise of automatically discerning how people feel about a product, person, organization, or issue based on what they write online, which is potentially of great value to businesses and other organizations. However, the vast majority of sentiment resources and algorithms are limited to a single language, usually English (Wilson, 2008; Baccianella and Sebastiani, 2010) . Since no single language captures a majority of the content online, adopting such a limited approach in an increasingly global community risks missing important details and trends that might only be available when text in multiple languages is taken into account.", "cite_spans": [ { "start": 1009, "end": 1029, "text": "(Pang and Lee, 2008)", "ref_id": "BIBREF33" }, { "start": 1364, "end": 1378, "text": "(Wilson, 2008;", "ref_id": "BIBREF49" }, { "start": 1379, "end": 1412, "text": "Baccianella and Sebastiani, 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Up to this point, multiple languages have been addressed in sentiment analysis primarily by transferring knowledge from a resource-rich language to a less rich language (Banea et al., 2008) , or by ignoring differences in languages via translation into English (Denecke, 2008) . These approaches are limited to a view of sentiment that takes place through an English-centric lens, and they ignore the potential to share information between languages. 
Ideally, learning sentiment cues holistically, across languages, would result in a richer and more globally consistent picture.", "cite_spans": [ { "start": 169, "end": 189, "text": "(Banea et al., 2008)", "ref_id": "BIBREF2" }, { "start": 261, "end": 276, "text": "(Denecke, 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In this paper, we introduce Multilingual Supervised Latent Dirichlet Allocation (MLSLDA), a model for sentiment analysis on a multilingual corpus. MLSLDA discovers a consistent, unified picture of sentiment across multiple languages by learning \"topics,\" probabilistic partitions of the vocabulary that are consistent in terms of both meaning and relevance to observed sentiment. Our approach makes few assumptions about available resources, requiring neither parallel corpora nor machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The rest of the paper proceeds as follows. In Section 1, we describe the probabilistic tools that we use to create consistent topics bridging across languages and the MLSLDA model. In Section 2, we present the inference process. We discuss our set of semantic bridges between languages in Section 3, and our experiments in Section 4 demonstrate that this approach functions as an effective multilingual topic model, discovers sentiment-biased topics, and uses multilingual corpora to make better sentiment predictions across languages. Sections 5 and 6 discuss related research and future work, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As its name suggests, MLSLDA is an extension of Latent Dirichlet allocation (LDA) (Blei et al., 2003) , a modeling approach that takes a corpus of unannotated documents as input and produces two outputs, a set of \"topics\" and assignments of documents to topics. Both the topics and the assignments are probabilistic: a topic is represented as a probability distribution over words in the corpus, and each document is assigned a probability distribution over all the topics. Topic models built on the foundations of LDA are appealing for sentiment analysis because the learned topics can cluster together sentiment-bearing words, and because topic distributions are a parsimonious way to represent a document. 1 LDA has been used to discover latent structure in text (e.g. for discourse segmentation (Purver et al., 2006) and authorship (Rosen-Zvi et al., 2004) ). MLSLDA extends the approach by ensuring that this latent structure - the underlying topics - is consistent across languages. We discuss multilingual topic modeling in Section 1.1, and in Section 1.2 we show how this enables supervised regression regardless of a document's language.", "cite_spans": [ { "start": 82, "end": 101, "text": "(Blei et al., 2003)", "ref_id": null }, { "start": 708, "end": 709, "text": "1", "ref_id": null }, { "start": 798, "end": 819, "text": "(Purver et al., 2006)", "ref_id": "BIBREF35" }, { "start": 835, "end": 859, "text": "(Rosen-Zvi et al., 2004)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Predictions from Multilingual Topics", "sec_num": "1" }, { "text": "Topic models posit a straightforward generative process that creates an observed corpus. For each document d, some distribution \u03b8 d over unobserved topics is chosen. Then, for each word position in the document, a topic z is selected. 
Finally, the word for that position is generated by selecting from the topic indexed by z. (Recall that in LDA, a \"topic\" is a distribution over words).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "In monolingual topic models, the topic distribution is usually drawn from a Dirichlet distribution. Using Dirichlet distributions makes it easy to specify sparse priors, and it also simplifies posterior inference because Dirichlet distributions are conjugate to multinomial distributions. However, drawing topics from Dirichlet distributions will not suffice if our vocabulary includes multiple languages. If we are working with English, German, and Chinese at the same time, a Dirichlet prior has no way to favor distributions z such that p(good|z), p(gut|z), and p(h\u01ceo|z) all tend to be high at the same time, or low at the same time. More generally, the structure of our model must encourage topics to be consistent across languages, and Dirichlet distributions cannot encode correlations between elements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "One possible solution to this problem is to use the multivariate normal distribution, which can produce correlated multinomials (Blei and Lafferty, 2005) , in place of the Dirichlet distribution. This has been done successfully in multilingual settings (Cohen and Smith, 2009) . However, such models complicate inference by not being conjugate.", "cite_spans": [ { "start": 128, "end": 153, "text": "(Blei and Lafferty, 2005)", "ref_id": "BIBREF3" }, { "start": 253, "end": 276, "text": "(Cohen and Smith, 2009)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "Instead, we appeal to tree-based extensions of the Dirichlet distribution, which have been used to induce correlation in semantic ontologies (Boyd-Graber et al., 2007) and to encode clustering constraints (Andrzejewski et al., 2009) . The key idea in this approach is to assume the vocabularies of all languages are organized according to some shared semantic structure that can be represented as a tree. For concreteness in this section, we will use WordNet (Miller, 1990) as the representation of this multilingual semantic bridge, since it is well known, offers convenient and intuitive terminology, and demonstrates the full flexibility of our approach. However, the model we describe generalizes to any tree-structured representation of multilingual knowledge; we discuss some alternatives in Section 3.", "cite_spans": [ { "start": 140, "end": 166, "text": "(Boyd-Graber et al., 2007)", "ref_id": "BIBREF7" }, { "start": 204, "end": 231, "text": "(Andrzejewski et al., 2009)", "ref_id": "BIBREF0" }, { "start": 458, "end": 472, "text": "(Miller, 1990)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "WordNet organizes a vocabulary into a rooted, directed acyclic graph of nodes called synsets, short for \"synonym sets.\" A synset is a child of another synset if it satisfies a hyponymy relationship; each child \"is a\" more specific instantiation of its parent concept (thus, hyponymy is often called an \"isa\" relationship). For example, a \"dog\" is a \"canine\" is an \"animal\" is a \"living thing,\" etc. 
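To make these \"isa\" chains concrete, the following minimal sketch prints one hypernym path of the kind our latent paths \u03bb traverse, using NLTK's WordNet interface (Loper and Bird, 2002); it is an illustration only, not part of the model implementation:

```python
# Print one root-to-synset hypernym path for "dog" using NLTK's WordNet
# interface; requires nltk.download('wordnet') to have been run once.
from nltk.corpus import wordnet as wn

dog = wn.synsets('dog', pos=wn.NOUN)[0]   # synset 'dog.n.01'
path = dog.hypernym_paths()[0]            # one path from the root down
print(' -> '.join(s.name() for s in path))
# e.g. entity.n.01 -> ... -> animal.n.01 -> ... -> canine.n.02 -> dog.n.01
```
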
As an approximation, it is not unreasonable to assume that WordNet's structure of meaning is language independent, i.e., the concept encoded by a synset can be realized using terms in different languages that share the same meaning. In practice, this organization has been used to create many alignments of international WordNets to the original English WordNet (Ordan and Wintner, 2007; Sagot and Fi\u0161er, 2008; Isahara et al., 2008) .", "cite_spans": [ { "start": 760, "end": 785, "text": "(Ordan and Wintner, 2007;", "ref_id": "BIBREF31" }, { "start": 786, "end": 808, "text": "Sagot and Fi\u0161er, 2008;", "ref_id": "BIBREF41" }, { "start": 809, "end": 830, "text": "Isahara et al., 2008)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "Using the structure of WordNet, we can now describe a generative process that produces a distribution over a multilingual vocabulary, which encourages correlations between words with similar meanings regardless of what language each word is in. For each synset h, we create a multilingual word distribution for that synset as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "1. Draw transition probabilities \u03b2 h \u223c Dir(\u03c4 h ). 2. Draw stop probabilities \u03c9 h \u223c Dir(\u03ba h ). 3. For each language l, draw emission probabilities for that synset \u03c6 h,l \u223c Dir(\u03c0 h,l ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "For conciseness in the rest of the paper, we will refer to this generative process as a multilingual Dirichlet hierarchy, or MULTDIRHIER(\u03c4 , \u03ba, \u03c0). 2 Each observed token can be viewed as the end result of a sequence of visited synsets \u03bb. At each node in the tree, the path can end at node i with probability \u03c9 i,1 , or it can continue to a child synset with probability \u03c9 i,0 . If the path continues to another child synset, it visits child j with probability \u03b2 i,j . If the path ends at a synset, it generates word k with probability \u03c6 i,l,k . 3 The probability of a word being emitted from a path with visited synsets r and final synset h in language l is therefore", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w, \\lambda = r, h \\mid l, \\beta, \\omega, \\phi) = \\left( \\prod_{(i,j) \\in r} \\beta_{i,j} \\, \\omega_{i,0} \\right) (1 - \\omega_{h,1}) \\, \\phi_{h,l,w}.", "eq_num": "(1)" } ], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "Note that the stop probability \u03c9 h is independent of language, but the emission \u03c6 h,l is dependent on the language. This is done to prevent the following scenario: while synset A is highly probable in a topic and words in language 1 attached to that synset have high probability, words in language 2 have low probability. 
If this could happen for many synsets in a topic, an entire language would be effectively silenced, which would lead to inconsistent topics (e.g.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "2 Variables \u03c4 h , \u03c0 h,l , and \u03ba h are hyperparameters. Their mean is fixed, but their magnitude is sampled during inference (i.e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "\u03c4 h,i / \u2211 k \u03c4 h,k is constant, but \u03c4 h,i is not)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": ". For the bushier bridges (e.g. dictionary and flat), their mean is uniform. For GermaNet, we took frequencies from two balanced corpora of German and English: the British National Corpus (University of Oxford, 2006) and the Kern Corpus of the Digitales W\u00f6rterbuch der Deutschen Sprache des 20. Jahrhunderts project (Geyken, 2007) . We took these frequencies and propagated them through the multilingual hierarchy, following LDAWN's (Boyd-Graber et al., 2007) formulation of information content (Resnik, 1995) as a Bayesian prior. The variance of the priors was initialized to be 1.0, but could be sampled during inference.", "cite_spans": [ { "start": 204, "end": 217, "text": "Oxford, 2006)", "ref_id": null }, { "start": 317, "end": 331, "text": "(Geyken, 2007)", "ref_id": "BIBREF13" }, { "start": 434, "end": 460, "text": "(Boyd-Graber et al., 2007)", "ref_id": "BIBREF7" }, { "start": 496, "end": 510, "text": "(Resnik, 1995)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "3 Note that the language and word are taken as given, but the path through the semantic hierarchy is a latent random variable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "Topic 1 is about baseball in English and about travel in German). Separating path from emission helps ensure that topics are consistent across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "Having defined topic distributions in a way that can preserve cross-language correspondences, we now use this distribution within a larger model that can discover cross-language patterns of use that predict sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capturing Semantic Correlations", "sec_num": "1.1" }, { "text": "We will view sentiment analysis as a regression problem: given an input document, we want to predict a real-valued observation y that represents the sentiment of a document. Specifically, we build on supervised latent Dirichlet allocation (SLDA, (Blei and McAuliffe, 2007) ), which makes predictions based on the topics expressed in a document; this can be thought of as projecting the words in a document into a low-dimensional space whose dimension equals the number of topics. Blei et al. showed that using this latent topic structure can offer improved predictions over regressions based on words alone, and the approach fits well with our current goals, since word-level cues are unlikely to be identical across languages. 
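As a toy illustration of this projection (the numbers below are made up, not values from our experiments), the regression sees only a document's K-dimensional topic proportions, so a single \u03b7 applies no matter which language a document is written in:

```python
import numpy as np

eta = np.array([1.2, -0.8, 0.1])       # per-topic regression weights (K = 3)
zbar_en = np.array([0.7, 0.1, 0.2])    # empirical topics of an English review
zbar_de = np.array([0.6, 0.2, 0.2])    # empirical topics of a German review

# Both predictions use the same eta; the language never enters the regression.
print(eta @ zbar_en, eta @ zbar_de)
```
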
In addition to text, SLDA has been successfully applied to other domains such as social networks (Chang and Blei, 2009) and image classification (Wang et al., 2009) . The key innovation in this paper is to extend SLDA by creating topics that are globally consistent across languages, using the bridging approach above.", "cite_spans": [ { "start": 246, "end": 272, "text": "(Blei and McAuliffe, 2007)", "ref_id": "BIBREF4" }, { "start": 817, "end": 839, "text": "(Chang and Blei, 2009)", "ref_id": "BIBREF8" }, { "start": 865, "end": 884, "text": "(Wang et al., 2009)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "We express our model in the form of a probabilistic generative latent-variable model that generates documents in multiple languages and assigns a real-valued score to each document. The score comes from a normal distribution whose mean is the dot product between a regression parameter \u03b7, which encodes the influence of each topic on the observation, and the document's empirical topic distribution; the distribution's variance is \u03c3 2 . With this model in hand, we use statistical inference to determine the distribution over latent variables that, given the model, best explains observed data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "The generative model is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "1. For each topic i = 1 . . . K, draw a topic distribution {\u03b2 i , \u03c9 i , \u03c6 i } from MULTDIRHIER(\u03c4 , \u03ba, \u03c0). 2. For each document d = 1 . . . M with language l d :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "(a) Choose a distribution over topics \u03b8 d \u223c Dir (\u03b1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "(b) For each word in the document n = 1 . . . N d , choose a topic assignment z d,n \u223c Mult (\u03b8 d ) and a path \u03bb d,n ending at word w d,n according to Equation 1 using", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "{\u03b2 z d,n , \u03c9 z d,n , \u03c6 z d,n }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "3. Choose a response variable from y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "\u223c Norm(\u03b7 \u00b7 z\u0304 d , \u03c3 2 ), where z\u0304 d \u2261 (1/N) \u2211 n=1..N z d,n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "Crucially, note that the topics are not independent of the sentiment task; the regression encourages terms with similar effects on the observation y to be in the same topic. The consistency of topics described above allows the same regression to be done for the entire corpus regardless of the language of the underlying document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLSLDA Model", "sec_num": "1.2" }, { "text": "Finding the model parameters most likely to explain the data is a problem of statistical inference. We employ stochastic EM (Diebolt and Ip, 1996) , using a Gibbs sampler for the E-step to assign words to paths and topics. 
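The overall loop can be sketched as follows; sample_topic_and_path is a hypothetical stand-in for drawing from the full conditional in Equation 2 below, and the M-step is the least squares fit described later in this section:

```python
import numpy as np

def stochastic_em(docs, y, K, n_iters, sample_topic_and_path):
    """Sketch of MLSLDA inference: a Gibbs E-step over topic (and path)
    assignments alternating with a least-squares M-step for eta."""
    eta = np.zeros(K)
    z = [np.random.randint(K, size=len(doc)) for doc in docs]  # random init
    for _ in range(n_iters):
        # E-step: resample each word's topic (the path resample is elided).
        for d in range(len(docs)):
            for n in range(len(docs[d])):
                z[d][n] = sample_topic_and_path(d, n, z, eta)
        # M-step: fit eta by least squares on empirical topic proportions.
        zbar = np.array([np.bincount(zd, minlength=K) / len(zd) for zd in z])
        eta = np.linalg.lstsq(zbar, y, rcond=None)[0]
    return eta, z
```
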
After randomly initializing the topics, we alternate between sampling the topic and path of a word (z d,n , \u03bb d,n ) and finding the regression parameters \u03b7 that maximize the likelihood. We jointly sample the topic and path conditioning on all of the other path and document assignments in the corpus, selecting a path and topic with probability", "cite_spans": [ { "start": 124, "end": 146, "text": "(Diebolt and Ip, 1996)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(z_n = k, \\lambda_n = r \\mid z_{-n}, \\lambda_{-n}, w_n, \\eta, \\sigma, \\Theta) = p(y_d \\mid z, \\eta, \\sigma) \\, p(\\lambda_n = r \\mid z_n = k, \\lambda_{-n}, w_n, \\tau, \\kappa, \\pi) \\, p(z_n = k \\mid z_{-n}, \\alpha).", "eq_num": "(2)" } ], "section": "Inference", "sec_num": "2" }, { "text": "Each of these three terms reflects a different influence on the topics from the vocabulary structure, the document's topics, and the response variable. In the next paragraphs, we will expand each of them to derive the full conditional topic distribution. As discussed in Section 1.1, the structure of the topic distribution encourages terms with the same meaning to be in the same topic, even across languages. During inference, we marginalize over possible multinomial distributions \u03b2, \u03c9, and \u03c6, using the observed transitions from i to j in topic k, B k,i,j ; stop counts in synset i in topic k, O k,i,0 ; continue counts in synset i in topic k, O k,i,1 ; and emission counts in synset i in language l in topic k, F k,i,l . The", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "2" }, { "text": "[Figure 1: Plate diagram of MLSLDA, relating the text documents (\u03b8 d , z d,n , \u03bb d,n , w d,n ), the sentiment prediction (\u03b7, \u03c3, y d ), and the K multilingual topic hierarchies (\u03b2 i,h , \u03c9 i,h , \u03c6 i,h,l ) with their priors (\u03b1, \u03c4 h , \u03ba h , \u03c0 h,l ).]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "2" }, { "text": "probability of taking a path r is then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topics", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(\\lambda_n = r \\mid z_n = k, \\lambda_{-n}) = \\underbrace{\\prod_{(i,j) \\in r} \\frac{B_{k,i,j} + \\tau_{i,j}}{\\sum_{j'} (B_{k,i,j'} + \\tau_{i,j'})} \\cdot \\frac{O_{k,i,1} + \\omega_{i,1}}{\\sum_{s \\in \\{0,1\\}} (O_{k,i,s} + \\omega_{i,s})}}_{\\text{Transition}} \\cdot \\frac{O_{k,r_{end},0} + \\omega_{r_{end},0}}{\\sum_{s \\in \\{0,1\\}} (O_{k,r_{end},s} + \\omega_{r_{end},s})} \\cdot \\underbrace{\\frac{F_{k,r_{end},w_n} + \\pi_{r_{end},l,w_n}}{\\sum_{w'} (F_{k,r_{end},w'} + \\pi_{r_{end},l,w'})}}_{\\text{Emission}}.", "eq_num": "(3)" } ], "section": "Multilingual Topics", "sec_num": null }, { "text": "Equation 3 reflects the multilingual aspect of this model. The conditional topic distribution for SLDA (Blei and McAuliffe, 2007) replaces this term with the standard Multinomial-Dirichlet. 
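As a concrete (toy) rendering of Equation 3, the sketch below computes this path term for one topic from hypothetical smoothed count arrays; following the definitions above, index 0 holds stop counts and index 1 continue counts, and F and pi are the emission counts and prior for the language of the sampled word:

```python
import numpy as np

def path_prob(path, word, B, O, F, tau, omega, pi):
    """Toy version of Equation 3 for a single topic. `path` is a list of
    (parent, child) synset edges; the last child emits `word`."""
    p = 1.0
    for (i, j) in path:  # interior nodes: transition to child j, continue
        p *= (B[i, j] + tau[i, j]) / (B[i].sum() + tau[i].sum())
        p *= (O[i, 1] + omega[i, 1]) / (O[i].sum() + omega[i].sum())
    h = path[-1][1]      # final synset: stop there and emit the word
    p *= (O[h, 0] + omega[h, 0]) / (O[h].sum() + omega[h].sum())
    p *= (F[h, word] + pi[h, word]) / (F[h].sum() + pi[h].sum())
    return p
```
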
However, we believe this is the first published SLDA-style model using MCMC inference, as prior work has used variational inference (Blei and McAuliffe, 2007; Chang and Blei, 2009; Wang et al., 2009) .", "cite_spans": [ { "start": 103, "end": 129, "text": "(Blei and McAuliffe, 2007)", "ref_id": "BIBREF4" }, { "start": 322, "end": 348, "text": "(Blei and McAuliffe, 2007;", "ref_id": "BIBREF4" }, { "start": 349, "end": 370, "text": "Chang and Blei, 2009;", "ref_id": "BIBREF8" }, { "start": 371, "end": 389, "text": "Wang et al., 2009)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topics", "sec_num": null }, { "text": "Because the observed response variable depends on the topic assignments of a document, the conditional topic distribution is shifted toward topics that explain the observed response. Topics that move the predicted response \u0177 d toward the true y d will be favored. We drop terms that are constant across all topics for the effect of the response variable,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topics", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y_d \\mid z, \\eta, \\sigma) \\propto \\underbrace{\\exp\\left( \\frac{1}{\\sigma^2} \\left( y_d - \\frac{\\sum_k N_{d,k} \\eta_k}{\\sum_k N_{d,k}} \\right) \\frac{\\eta_z}{\\sum_k N_{d,k}} \\right)}_{\\text{Other words' influence}} \\underbrace{\\exp\\left( -\\frac{\\eta_z^2}{2 \\sigma^2 \\left( \\sum_k N_{d,k} \\right)^2} \\right)}_{\\text{This word's influence}}.", "eq_num": "(4)" } ], "section": "Multilingual Topics", "sec_num": null }, { "text": "The above equation represents the supervised aspect of the model, which is inherited from SLDA. Finally, there is the effect of the topics already assigned to a document; the conditional distribution favors topics already assigned in a document,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topics", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(z_n = k \\mid z_{-n}, \\alpha) = \\frac{T_{d,k} + \\alpha_k}{\\sum_{k'} (T_{d,k'} + \\alpha_{k'})}.", "eq_num": "(5)" } ], "section": "Multilingual Topics", "sec_num": null }, { "text": "This term represents the document focus of this model; it is present in all Gibbs sampling inference schemes for LDA. Multiplying together Equations 3, 4, and 5 allows us to sample a topic using the conditional distribution from Equation 2, based on the topic and path of the other words in all languages. After sampling the path and topic for each word in a document, we then find new regression parameters \u03b7 that maximize the likelihood conditioned on the current state of the sampler. This is simply a least squares regression using the topic assignments z\u0304 d to predict y d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topics", "sec_num": null }, { "text": "Prediction on documents for which we don't have an observed y d is equivalent to marginalizing over y d and sampling topics for the document from Equations 3 and 5. The prediction for y d is then the dot product of \u03b7 and the empirical topic distribution z\u0304 d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topics", "sec_num": null }, { "text": "We initially optimized all hyperparameters using slice sampling. However, we found that the regression variance \u03c3 2 was not stable. 
Optimizing \u03c3 2 seems to balance modeling the language in the documents against the prediction, and thus is sensitive to documents' length. Given this sensitivity, we did not optimize \u03c3 2 for our prediction experiments in Section 4, but instead kept it fixed at 0.25. We leave optimizing this variable, either through cross validation or adapting the model, to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Topics", "sec_num": null }, { "text": "In Section 1.1, we described connections across languages as offered by semantic networks in a general way, using WordNet as an example. In this section, we provide more specifics, as well as alternative ways of building semantic connections across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bridges Across Languages", "sec_num": "3" }, { "text": "Flat First, we can consider a degenerate mapping that is nearly equivalent to running SLDA independently across multiple languages, relating topics only based on the impact on the response variable. Consider a degenerate tree with only one node, with all words in all languages associated with that node. This is consistent with our model, but there is really no shared semantic space, as all emitted words must come from this degenerate \"synset\" and the model only represents the output distribution for this single node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bridges Across Languages", "sec_num": "3" }, { "text": "WordNet We took the alignment of GermaNet to WordNet 1.6 (Kunze and Lemnitzer, 2002) and removed all synsets that had no mapped German words. Any German synsets that did not have English translations had their words mapped to the lowest extant English hypernym (e.g. \"beinbruch,\" a broken leg, was mapped to \"fracture\"). We stemmed all words to account for inflected forms not being present (Porter and Boulton, 1970) . An example of the paths for the German word \"wunsch\" (wish, request) is shown in Figure 2 (a).", "cite_spans": [ { "start": 57, "end": 84, "text": "(Kunze and Lemnitzer, 2002)", "ref_id": "BIBREF21" }, { "start": 396, "end": 422, "text": "(Porter and Boulton, 1970)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 506, "end": 514, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Bridges Across Languages", "sec_num": "3" }, { "text": "Dictionaries A dictionary can be viewed as a many-to-many mapping, where each entry e i maps one or more words in one language s i to one or more words t i in another language. Entries were taken from an English-German dictionary (Richter, 2008) , a Chinese-English dictionary (Denisowski, 1997), and a Chinese-German dictionary (Hefti, 2005) . As with WordNet, the words in entries for English and German were stemmed to improve coverage. 
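As a sketch of how dictionary entries become such a \"bushy\" bridge (the entries below are hypothetical; the real ones come from the dictionaries cited above), each entry yields one internal node whose emissions are split by language:

```python
# Build a two-level hierarchy: a root whose children are one node per
# dictionary entry, each emitting that entry's words in each language.
entries = [
    ({'good'}, {'gut'}),                 # English words <-> German words
    ({'wish', 'request'}, {'wunsch'}),
]

root = {'children': []}
for eng, ger in entries:
    root['children'].append({'emissions': {'en': sorted(eng),
                                           'de': sorted(ger)}})
print(len(root['children']), 'synset-like nodes under the root')
```
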
An example for German is shown in Figure 2(b) .", "cite_spans": [ { "start": 230, "end": 245, "text": "(Richter, 2008)", "ref_id": "BIBREF38" }, { "start": 327, "end": 340, "text": "(Hefti, 2005)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 472, "end": 483, "text": "Figure 2(b)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Bridges Across Languages", "sec_num": null }, { "text": "Algorithmic Connections In addition to hand-curated connections across languages, one could also consider automatic means of mapping across languages, such as using edit distance or local context (Haghighi et al., 2008; Rapp, 1995) or using a lexical translation table obtained from parallel text (Melamed, 1998) . In preliminary experiments with these techniques, constructing appropriate hierarchies from these resources required many arbitrary decisions about cutoffs and which words to include. Thus, we do not consider them in this paper.", "cite_spans": [ { "start": 195, "end": 218, "text": "(Haghighi et al., 2008;", "ref_id": "BIBREF16" }, { "start": 219, "end": 230, "text": "Rapp, 1995)", "ref_id": "BIBREF36" }, { "start": 296, "end": 310, "text": "(Melamed, 1998", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Bridges Across Languages", "sec_num": null }, { "text": "We evaluate MLSLDA on three criteria: how well it can discover consistent topics across languages for matching parallel documents, how well it can discover sentiment-correlated word lists from nonaligned text, and how well it can predict sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We took the 1996 documents from the Europarl corpus (Koehn, 2005 ) using three bridges: GermaNet, dictionary, and the uninformative flat matching. 4 The model is unaware that the translations of documents in one language are present in the other language. Note that this does not use the supervised framework (as there is no associated response variable for Europarl documents); this experiment is to demonstrate the effectiveness of the multilingual aspect of the model. To test whether the topics learned by the model are consistent across languages, we represent each document using the probability distribution \u03b8 d over topic assignments. Each \u03b8 d is a vector of length K and is a language-independent representation of the document. For each document in one language, we computed the Hellinger distance between it and all of the documents in the other language and sorted the documents by decreasing distance. The translation of the document is somewhere in that set; the higher the normalized rank (the percentage of documents with a rank lower than the translation of the document), the better the underlying topic model connects languages.", "cite_spans": [ { "start": 52, "end": 64, "text": "(Koehn, 2005", "ref_id": "BIBREF20" }, { "start": 147, "end": 148, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Matching on Multilingual Topics", "sec_num": "4.1" }, { "text": "We compare three bridges against what is to our knowledge the only other topic model for unaligned text, Multilingual Topics for Unaligned Text (Boyd-Graber and Blei, 2009). 5 Figure 3 shows the results of this experiment, reporting the average parallel document rank. The dictionary-based bridge had the best performance on the task, ranking a large proportion of documents (0.95) below the translated document once enough topics were available. 
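This evaluation is easy to make concrete; below is a small sketch (with toy \u03b8 vectors, not our data) of the Hellinger distance and the normalized rank just described:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two topic distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def normalized_rank(theta_src, thetas_tgt, true_idx):
    """Fraction of target-language documents farther from the source
    document than its translation; higher is better."""
    d = np.array([hellinger(theta_src, t) for t in thetas_tgt])
    return float(np.mean(d > d[true_idx]))
```
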
Although GermaNet is richer, its coverage is incomplete; the dictionary structure had a much larger vocabulary and could build more complete multilingual topics. Using comparable input information, this more flexible model performed better on the matching task than the existing multilingual topic model available for unaligned text. The degenerate flat bridge did no better than the baseline of random guessing, as expected.", "cite_spans": [ { "start": 144, "end": 175, "text": "(Boyd-Graber and Blei, 2009). 5", "ref_id": null } ], "ref_spans": [ { "start": 207, "end": 215, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Matching on Multilingual Topics", "sec_num": "4.1" }, { "text": "One of the key tasks in sentiment analysis has been the collection of lists of words that convey sentiment (Wilson, 2008; Riloff et al., 2003) . These resources are often created using or in reference to resources like WordNet (Whitelaw et al., 2005; Baccianella and Sebastiani, 2010) . MLSLDA provides a method for extracting topical and sentiment-correlated word lists from multilingual corpora. If", "cite_spans": [ { "start": 107, "end": 121, "text": "(Wilson, 2008;", "ref_id": "BIBREF49" }, { "start": 122, "end": 142, "text": "Riloff et al., 2003)", "ref_id": "BIBREF39" }, { "start": 227, "end": 250, "text": "(Whitelaw et al., 2005;", "ref_id": "BIBREF48" }, { "start": 251, "end": 284, "text": "Baccianella and Sebastiani, 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Qualitative Sentiment-Correlated Topics", "sec_num": "4.2" }, { "text": "a WordNet-like resource is used as the bridge, the resulting topics are distributions over synsets, not just over words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Sentiment-Correlated Topics", "sec_num": "4.2" }, { "text": "As our demonstration corpus, we used the Amherst Sentiment Corpus (Constant et al., 2009) , as it has documents in multiple languages (English, Chinese, and German) with numerical assessments of sentiment (number of stars assigned to the review). We segmented the Chinese text (Tseng et al., 2005) and used a classifier trained on character n-grams to remove English-language documents that were mixed in among the Chinese- and German-language reviews. Figure 4 shows extracted topics from German-English and German-Chinese corpora. MLSLDA is able to distinguish sentiment-bearing topics from content-bearing topics. For example, in the German-English corpus, \"food\" and \"children\" topics are not associated with a consistent sentiment signal, while \"religion\" is associated with a more negative sentiment. In contrast, in the German-Chinese corpus, the \"religion/society\" topic is more neutral, and the gender-oriented topic is viewed more negatively. Negative sentiment-bearing topics have reasonable words such as \"pages,\" \"k\u01d2ng p\u00e0\" (Chinese for \"I'm afraid that . . . 
\") and \"tuo\" (Chienese for \"discard\"), and positive sentiment-bearing topics have reasonable words such as \"great,\" \"good,\" and \"juwel\" (German for \"jewel\").", "cite_spans": [ { "start": 66, "end": 89, "text": "(Constant et al., 2009)", "ref_id": "BIBREF10" }, { "start": 277, "end": 297, "text": "(Tseng et al., 2005)", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 452, "end": 460, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Qualitative Sentiment-Correlated Topics", "sec_num": "4.2" }, { "text": "The qualitative topics also betray some of the weaknesses of the model. For example, in one of the negative sentiment topics, the German word \"gut\" (good) is present. Because topics are distributions over words, they can encode the presence of negations like \"kein\" (no) and \"nicht\" (not), but not collocations like \"nicht gut.\" More elaborate topic models that can model local syntax and collocations (Johnson, 2010) provide options for addressing such problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Sentiment-Correlated Topics", "sec_num": "4.2" }, { "text": "We do not report the results for sentiment prediction for this corpus because the baseline of predicting a positive review is so strong; most algorithms do extremely well by always predicting a positive review, ours included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Sentiment-Correlated Topics", "sec_num": "4.2" }, { "text": "We gathered 330 film reviews from a German film review site (Vetter et al., 2000) and combined them with a much larger English film review corpus of over Figure 4 : Topics, along with associated regression coefficient \u03b7 from a learned 25-topic model on German-English (left) and German-Chinese (right) documents. Notice that theme-related topics have regression parameter near zero, topics discussing the number of pages have negative regression parameters, topics with \"good,\" \"great,\" \"h\u01ceo\" (good) and \"\u00fcberzeugt\" (convinced) have positive regression parameters. For the German-Chinese corpus, note the presence of \"gut\" (good) in one of the negative sentiment topics, showing the difficulty of learning collocations. 5000 film reviews (Pang and Lee, 2005) to create a multilingual film review corpus. 6 The results for predicting sentiment in German documents with 25 topics are presented in Table 1 . On a small monolingual corpus, prediction is very poor. The model over-fits, especially when it has the entire vocabulary to select from. The slightly better performance using GermaNet and a dictionary as topic priors can be viewed as basic feature selection, removing proper names from the vocabulary to 6 We followed Pang and Lee's method for creating a numerical score between 0 and 1 from a star rating. We then converted that to an integer by multiplying by 100; this was done because initial data preprocessing assumed integer values (although downstream processing did not assume integer values). 
The German movie review corpus is available at http://www.umiacs.umd.edu/~jbg/static/downloads_and_media.html.", "cite_spans": [ { "start": 60, "end": 81, "text": "(Vetter et al., 2000)", "ref_id": "BIBREF44" }, { "start": 738, "end": 758, "text": "(Pang and Lee, 2005)", "ref_id": "BIBREF32" }, { "start": 804, "end": 805, "text": "6", "ref_id": null }, { "start": 1210, "end": 1211, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 154, "end": 162, "text": "Figure 4", "ref_id": null }, { "start": 895, "end": 902, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Sentiment Prediction", "sec_num": "4.3" }, { "text": "One would expect that prediction improves with a larger training set. For this model, such an improvement is seen even when the training set includes no documents in the target language. Note that even the degenerate flat bridge across languages provides useful information. After introducing English data, the model learns to prefer smaller regression parameters (this can be seen as a form of regularization).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Prediction", "sec_num": "4.3" }, { "text": "Performance is best when a reasonably large corpus is available, including some data in the target language. For each bridge, performance improves dramatically, showing that MLSLDA successfully incorporates information learned from both languages to build a single, coherent picture of how sentiment is expressed in both languages. With the GermaNet bridge, performance is better than with both the degenerate and dictionary-based bridges, showing that the model is sharing information both through the multilingual topics and the regression parameters. Performance on English prediction is comparable to previously published results on this dataset (Blei and McAuliffe, 2007) ; with enough data, a monolingual model is no longer helped by additional multilingual data.", "cite_spans": [ { "start": 654, "end": 680, "text": "(Blei and McAuliffe, 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment Prediction", "sec_num": "4.3" }, { "text": "The advantages of MLSLDA reside largely in the assumptions that it makes and does not make: documents need not be parallel, sentiment is a normally distributed document-level property, words are exchangeable, and sentiment can be predicted as a regression on a K-dimensional vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationship to Previous Research", "sec_num": "5" }, { "text": "By not assuming parallel text, this approach can be applied to a broad class of corpora. Other multilingual topic models require parallel text, either at the document level (Ni et al., 2009; Mimno et al., 2009) or the word level (Kim and Khudanpur, 2004; Zhao and Xing, 2006) . Similarly, other multilingual sentiment approaches also require parallel text, often supplied via automatic translation; after the translated text is available, either monolingual analysis (Denecke, 2008) or co-training is applied (Wan, 2009) . 
In contrast, our approach requires fewer resources for a language: a dictionary (or similar knowledge structure relating words to nodes in a graph) and comparable text, instead of parallel text or a machine translation system.", "cite_spans": [ { "start": 167, "end": 184, "text": "(Ni et al., 2009;", "ref_id": "BIBREF30" }, { "start": 185, "end": 204, "text": "Mimno et al., 2009)", "ref_id": "BIBREF29" }, { "start": 219, "end": 244, "text": "(Kim and Khudanpur, 2004;", "ref_id": "BIBREF19" }, { "start": 245, "end": 265, "text": "Zhao and Xing, 2006)", "ref_id": "BIBREF50" }, { "start": 457, "end": 472, "text": "(Denecke, 2008)", "ref_id": "BIBREF11" }, { "start": 499, "end": 510, "text": "(Wan, 2009)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Relationship to Previous Research", "sec_num": "5" }, { "text": "Rather than viewing one language through the lens of another language, MLSLDA views all languages through the lens of the topics present in a document. This is a modeling decision with pros and cons. It allows a language-agnostic decision about sentiment to be made, but it restricts the expressiveness of the model in terms of sentiment in two ways. First, it throws away information important to sentiment analysis like syntactic constructions (Greene and Resnik, 2009) and document structure (McDonald et al., 2007) that may impact the sentiment rating. Second, a single real number is not always sufficient to capture the nuances of sentiment. Less critically, assuming that sentiment is normally distributed is not true of all real-world corpora; review corpora often have a skew toward positive reviews. We standardize responses by the mean and variance of the training data to partially address this issue, but other response distributions are possible, such as generalized linear models (Blei and McAuliffe, 2007) and support vector machines, which would allow more traditional classification predictions.", "cite_spans": [ { "start": 446, "end": 471, "text": "(Greene and Resnik, 2009)", "ref_id": "BIBREF14" }, { "start": 495, "end": 518, "text": "(McDonald et al., 2007)", "ref_id": "BIBREF25" }, { "start": 995, "end": 1021, "text": "(Blei and McAuliffe, 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Relationship to Previous Research", "sec_num": "5" }, { "text": "Other probabilistic models for sentiment classification view sentiment as a word-level feature. Some models use sentiment word lists, either given or learned from a corpus, as a prior to seed topics so that they attract other sentiment-bearing words (Mei et al., 2007; Lin and He, 2009) . Other approaches view sentiment or perspective as a perturbation of a log-linear topic model (Lin et al., 2008) . Such techniques could be combined with the multilingual approach presented here by using distributions over words that not only bridge different languages but also encode additional information. For example, the vocabulary hierarchies could be structured to encourage topics that correlate similar sentiment-bearing words (e.g. clustering words associated with price, size, etc.). 
Future work could also validate more rigorously, via human judgments, that the multilingual topics discovered by MLSLDA are sentiment-bearing.", "cite_spans": [ { "start": 250, "end": 268, "text": "(Mei et al., 2007;", "ref_id": "BIBREF26" }, { "start": 269, "end": 286, "text": "Lin and He, 2009)", "ref_id": "BIBREF22" }, { "start": 382, "end": 400, "text": "(Lin et al., 2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Relationship to Previous Research", "sec_num": "5" }, { "text": "In contrast, MLSLDA draws on techniques that view sentiment as a regression problem based on the topics used in a document, as in supervised latent Dirichlet allocation (SLDA) (Blei and McAuliffe, 2007) or in finer-grained parts of a document (Titov and McDonald, 2008) . Extending these models to multilingual data would be more straightforward.", "cite_spans": [ { "start": 176, "end": 202, "text": "(Blei and McAuliffe, 2007)", "ref_id": "BIBREF4" }, { "start": 243, "end": 269, "text": "(Titov and McDonald, 2008)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Relationship to Previous Research", "sec_num": "5" }, { "text": "MLSLDA is a \"holistic\" statistical model for multilingual corpora that does not require parallel text or expensive multilingual resources. It discovers connections across languages that can recover latent structure in parallel corpora, discover sentiment-correlated word lists in multiple languages, and make accurate predictions across languages that improve with more multilingual data, as demonstrated in the context of sentiment analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "More generally, MLSLDA provides a formalism that can be used to apply the many insights of topic modeling-driven sentiment analysis to multilingual corpora by tying together word distributions across languages. MLSLDA can also contribute to the development of word list-based sentiment systems: the topics discovered by MLSLDA can serve as a first-pass means of building sentiment-based word lists for languages that might lack annotated resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "MLSLDA also can be viewed as a sentiment-informed multilingual word sense disambiguation (WSD) algorithm. When the multilingual bridge is an explicit representation of sense such as WordNet, part of the generative process is an explicit assignment of every word to a sense (the path latent variable \u03bb); this is discovered during inference. The dictionary-based technique may be viewed as a disambiguation via a transfer dictionary. How sentiment prediction impacts the implicit WSD is left to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Better capturing local syntax and meaningful collocations would also improve the model's ability to predict sentiment and model multilingual topics, as would providing a better mechanism for representing words not included in our bridges. 
We intend to develop such models as future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "This research was funded in part by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of ARL, IARPA, the ODNI, or the U.S. Government. The authors thank the anonymous reviewers, Jonathan Chang, Christiane Fellbaum, and Lawrence Watts for helpful comments. The authors especially thank Chris Potts for providing help in obtaining and processing reviews.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7" }, { "text": "The latter property has also made LDA popular for information retrieval(Wei and Croft, 2006)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For English and German documents in all experiments, we removed stop words(Loper and Bird, 2002), stemmed words(Porter and Boulton, 1970), and created a vocabulary of the most frequent 5000 words per language (this vocabulary limit was mostly done to ensure that the dictionary-based bridge was of manageable size). Documents shorter than fifty content words were excluded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The bipartite matching was initialized with the dictionary weights as specified by the Multilingual Topics for Unaligned Text algorithm. The matching size was limited to 250 and the bipartite matching was only updated on the initial iteration then held fixed. This yielded results comparable to when the matching", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Incorporating domain knowledge into topic modeling via Dirichlet forest priors", "authors": [ { "first": "David", "middle": [], "last": "Andrzejewski", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" } ], "year": 2009, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Andrzejewski, Xiaojin Zhu, and Mark Craven. 2009. Incorporating domain knowledge into topic mod- eling via Dirichlet forest priors. In ICML.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining", "authors": [ { "first": "Andrea", "middle": [ "Esuli" ], "last": "", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Baccianella", "suffix": "" }, { "first": "Fabrizio", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2010, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Esuli Stefano Baccianella and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. 
In LREC.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multilingual subjectivity analysis using machine translation", "authors": [ { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Samer", "middle": [], "last": "Hassan", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen Banea, Rada Mihalcea, Janyce Wiebe, and Samer Hassan. 2008. Multilingual subjectivity analysis using machine translation. In EMNLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Correlated topic models", "authors": [ { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "John", "middle": [ "D" ], "last": "Blei", "suffix": "" }, { "first": "", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2005, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei and John D. Lafferty. 2005. Correlated topic models. In NIPS.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Supervised topic models", "authors": [ { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "Jon", "middle": [ "D" ], "last": "Blei", "suffix": "" }, { "first": "", "middle": [], "last": "Mcauliffe", "suffix": "" } ], "year": 2007, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei and Jon D. McAuliffe. 2007. Supervised topic models. In NIPS. MIT Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multilingual topic models for unaligned text", "authors": [ { "first": "Jordan", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "-", "middle": [], "last": "Graber", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2009, "venue": "UAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordan Boyd-Graber and David M. Blei. 2009. Multilingual topic models for unaligned text. In UAI.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A topic model for word sense disambiguation", "authors": [ { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2007, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordan Boyd-Graber, David M. Blei, and Xiaojin Zhu. 2007. A topic model for word sense disambiguation. In EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Relational topic models for document networks", "authors": [ { "first": "Jonathan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2009, "venue": "AISTATS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Chang and David M. Blei. 2009. Relational topic models for document networks. 
In AISTATS.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction", "authors": [ { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay B. Cohen and Noah A. Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In NAACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The pragmatics of expressive content: Evidence from large corpora", "authors": [ { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Schwarz", "suffix": "" } ], "year": 2009, "venue": "Sprache und Datenverarbeitung", "volume": "33", "issue": "", "pages": "1--2", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noah Constant, Christopher Davis, Christopher Potts, and Florian Schwarz. 2009. The pragmatics of expressive content: Evidence from large corpora. Sprache und Datenverarbeitung, 33(1-2).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Using SentiWordNet for multilingual sentiment analysis", "authors": [ { "first": "Kerstin", "middle": [], "last": "Denecke", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kerstin Denecke. 2008. Using SentiWordNet for multilingual sentiment analysis. In ICDEW 2008. Paul Denisowski. 1997. CEDICT. http://www.mdbg.net/chindict/.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Markov Chain Monte Carlo in Practice, chapter Stochastic EM: method and application", "authors": [ { "first": "Jean", "middle": [], "last": "Diebolt", "suffix": "" }, { "first": "Eddie", "middle": [ "H S" ], "last": "Ip", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Diebolt and Eddie H.S. Ip, 1996. Markov Chain Monte Carlo in Practice, chapter Stochastic EM: method and application. Chapman and Hall, London.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The DWDS corpus: A reference corpus for the German language of the 20th century", "authors": [ { "first": "Alexander", "middle": [], "last": "Geyken", "suffix": "" } ], "year": 2007, "venue": "Idioms and Collocations: Corpus-based Linguistic", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Geyken. 2007. The DWDS corpus: A reference corpus for the German language of the 20th century. In Idioms and Collocations: Corpus-based Linguistic, Lexicographic Studies. Continuum Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "More than words: Syntactic packaging and implicit sentiment", "authors": [ { "first": "Stephan", "middle": [], "last": "Greene", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2009, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Greene and Philip Resnik. 2009. 
Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. PNAS, 101(Suppl 1):5228-5235.
Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In ACL, Columbus, Ohio.
Jan Hefti. 2005. HanDeDict. http://chdw.de.
Hitoshi Isahara, Francis Bond, Kiyotaka Uchimoto, Masao Utiyama, and Kyoko Kanzaki. 2008. Development of the Japanese WordNet. In LREC.
Mark Johnson. 2010. PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In ACL.
Woosung Kim and Sanjeev Khudanpur. 2004. Lexical triggers and latent semantic analysis for cross-lingual language model adaptation. TALIP, 3(2):94-112.
Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit. http://www.statmt.org/europarl/.
Claudia Kunze and Lothar Lemnitzer. 2002. Standardizing WordNets in a web-compliant format: The case of GermaNet. In Workshop on Wordnets Structures and Standardisation.
Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In CIKM.
Wei-Hao Lin, Eric Xing, and Alexander Hauptmann. 2008. A joint topic and perspective model for ideological discourse. In ECML PKDD.
Edward Loper and Steven Bird. 2002. NLTK: the Natural Language Toolkit. In Tools and Methodologies for Teaching. ACL.
Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In ACL.
In ACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Topic sentiment mixture: modeling facets and opinions in weblogs", "authors": [ { "first": "Qiaozhu", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Wondra", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Su", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2007, "venue": "WWW", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In WWW.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Empirical methods for exploiting parallel texts", "authors": [ { "first": "Melamed", "middle": [], "last": "Ilya Dan", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Dan Melamed. 1998. Empirical methods for exploit- ing parallel texts. Ph.D. thesis, University of Pennsyl- vania.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Nouns in WordNet: A lexical inheritance system", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "245--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller. 1990. Nouns in WordNet: A lexical inheritance system. International Journal of Lexicog- raphy, 3(4):245-264.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Polylingual topic models", "authors": [ { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Naradowsky", "suffix": "" }, { "first": "David", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Mimno, Hanna Wallach, Jason Naradowsky, David Smith, and Andrew McCallum. 2009. Polylingual topic models. In EMNLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Mining multilingual topics from Wikipedia", "authors": [ { "first": "Xiaochuan", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Jian-Tao", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen. 2009. Mining multilingual topics from Wikipedia. In WWW.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Hebrew Word-Net: a test case of aligning lexical databases across languages", "authors": [ { "first": "Noam", "middle": [], "last": "Ordan", "suffix": "" }, { "first": "Shuly", "middle": [], "last": "Wintner", "suffix": "" } ], "year": 2007, "venue": "International Journal of Translation", "volume": "19", "issue": "1", "pages": "39--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Ordan and Shuly Wintner. 2007. 
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL.
Bo Pang and Lillian Lee. 2008. Opinion Mining and Sentiment Analysis. Now Publishers Inc.
Martin Porter and Richard Boulton. Snowball stemmer. http://snowball.tartarus.org/credits.php.
Matthew Purver, Konrad Körding, Thomas L. Griffiths, and Joshua Tenenbaum. 2006. Unsupervised topic modelling for multi-party spoken discourse. In ACL.
Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In ACL, pages 320-322.
Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In IJCAI, pages 448-453.
Frank Richter. 2008. Dictionary nice grep. http://www-user.tu-chemnitz.de/~fri/ding/.
Ellen Riloff, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In NAACL.
Michal Rosen-Zvi, Thomas L. Griffiths, Mark Steyvers, and Padhraic Smyth. 2004. The author-topic model for authors and documents. In UAI.
Benoît Sagot and Darja Fišer. 2008. Building a free French WordNet from multilingual resources. In OntoLex.
Ivan Titov and Ryan McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In ACL, pages 308-316.
Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter. In SIGHAN Workshop on Chinese Language Processing.
University of Oxford. 2006. British National Corpus. http://www.natcorp.ox.ac.uk/.
Tobias Vetter, Manfred Sauer, and Philipp Wallutat. 2000. Filmrezension.de: Online-Magazin für Filmkritik. http://www.filmrezension.de.
Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In ACL.
Chong Wang, David Blei, and Li Fei-Fei. 2009. Simultaneous image classification and annotation. In CVPR.
Xing Wei and Bruce Croft. 2006. LDA-based document models for ad-hoc retrieval. In SIGIR.
Casey Whitelaw, Navendu Garg, and Shlomo Argamon. 2005. Using appraisal groups for sentiment analysis. In CIKM.
Theresa Ann Wilson. 2008. Fine-grained Subjectivity and Sentiment Analysis: Recognizing the Intensity, Polarity, and Attitudes of Private States. Ph.D. thesis, University of Pittsburgh.
Bing Zhao and Eric P. Xing. 2006. BiTAM: Bilingual topic admixture models for word alignment. In ACL.
Jun Zhu, Amr Ahmed, and Eric P. Xing. 2009. MedLDA: maximum margin supervised topic models for regression and classification. In ICML.

[Figure: Graphical model representing MLSLDA. Shaded nodes represent observations, plates denote replication, and lines show probabilistic dependencies.]

[Figure: Two methods for constructing multilingual distributions over words. On the left, paths to the German word "wunsch" in GermaNet are shown; on the right, paths to the English word "room". Both English and German words are shown; some internal nodes in GermaNet have been omitted for space (represented by dashed lines). Note that different senses are denoted by different internal paths, and that internal paths are distinct from the per-language expression.]

[Figure: Average rank of paired translation document recovered from the multilingual topic model. Random guessing would yield 0.5; MLSLDA with dictionary-based matching performed best.]

[Table]