{ "paper_id": "D10-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:51:19.986644Z" }, "title": "Modeling Perspective using Adaptor Grammars", "authors": [ { "first": "Eric", "middle": [ "A" ], "last": "Hardisty", "suffix": "", "affiliation": { "laboratory": "", "institution": "UMIACS University of Maryland College Park", "location": { "region": "MD" } }, "email": "hardisty@cs.umd.edu" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "", "affiliation": { "laboratory": "UMD iSchool", "institution": "UMIACS University of Maryland", "location": { "settlement": "College Park", "region": "MD" } }, "email": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "", "affiliation": { "laboratory": "", "institution": "UMIACS University of Maryland", "location": { "settlement": "College Park", "region": "MD" } }, "email": "resnik@umd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Strong indications of perspective can often come from collocations of arbitrary length; for example, someone writing get the government out of my X is typically expressing a conservative rather than progressive viewpoint. However, going beyond unigram or bigram features in perspective classification gives rise to problems of data sparsity. We address this problem using nonparametric Bayesian modeling, specifically adaptor grammars (Johnson et al., 2006). We demonstrate that an adaptive na\u00efve Bayes model captures multiword lexical usages associated with perspective, and establishes a new state-of-the-art for perspective classification results using the Bitter Lemons corpus, a collection of essays about mid-east issues from Israeli and Palestinian points of view.", "pdf_parse": { "paper_id": "D10-1028", "_pdf_hash": "", "abstract": [ { "text": "Strong indications of perspective can often come from collocations of arbitrary length; for example, someone writing get the government out of my X is typically expressing a conservative rather than progressive viewpoint. However, going beyond unigram or bigram features in perspective classification gives rise to problems of data sparsity. We address this problem using nonparametric Bayesian modeling, specifically adaptor grammars (Johnson et al., 2006). We demonstrate that an adaptive na\u00efve Bayes model captures multiword lexical usages associated with perspective, and establishes a new state-of-the-art for perspective classification results using the Bitter Lemons corpus, a collection of essays about mid-east issues from Israeli and Palestinian points of view.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Most work on the computational analysis of sentiment and perspective relies on lexical features. This makes sense, since an author's choice of words is often used to express overt opinions (e.g. describing healthcare reform as idiotic or wonderful) or to frame a discussion in order to convey a perspective more implicitly (e.g. using the term death tax instead of estate tax). 
Moreover, it is easy and efficient to represent texts as collections of the words they contain, in order to apply a well known arsenal of supervised techniques (Laver et al., 2003; Mullen and Malouf, 2006; Yu et al., 2008) .", "cite_spans": [ { "start": 538, "end": 558, "text": "(Laver et al., 2003;", "ref_id": "BIBREF9" }, { "start": 559, "end": 583, "text": "Mullen and Malouf, 2006;", "ref_id": "BIBREF15" }, { "start": 584, "end": 600, "text": "Yu et al., 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At the same time, standard lexical features have their limitations for this kind of analysis. Such features are usually created by selecting some small n-gram size in advance. Indeed, it is not uncommon to see the feature space for sentiment analysis limited to unigrams. However, important indicators of perspective can also be longer (get the government out of my). Trying to capture these using standard machine learning approaches creates a problem, since allowing n-grams as features for larger n gives rise to problems of data sparsity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we employ nonparametric Bayesian models (Orbanz and Teh, 2010) in order to address this limitation. In contrast to parametric models, for which a fixed number of parameters are specified in advance, nonparametric models can \"grow\" to the size best suited to the observed data. In text analysis, models of this type have been employed primarily for unsupervised discovery of latent structure -for example, in topic modeling, when the true number of topics is not known (Teh et al., 2006) ; in grammatical inference, when the appropriate number of nonterminal symbols is not known (Liang et al., 2007) ; and in coreference resolution, when the number of entities in a given document is not specified in advance (Haghighi and Klein, 2007) . Here we use them for supervised text classification.", "cite_spans": [ { "start": 55, "end": 77, "text": "(Orbanz and Teh, 2010)", "ref_id": "BIBREF16" }, { "start": 483, "end": 501, "text": "(Teh et al., 2006)", "ref_id": "BIBREF20" }, { "start": 594, "end": 614, "text": "(Liang et al., 2007)", "ref_id": "BIBREF11" }, { "start": 724, "end": 750, "text": "(Haghighi and Klein, 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, we use adaptor grammars (Johnson et al., 2006) , a formalism for nonparametric Bayesian modeling that has recently proven useful in unsupervised modeling of phonemes (Johnson, 2008) , grammar induction (Cohen et al., 2010) , and named entity structure learning (Johnson, 2010) , to make supervised na\u00efve Bayes classification nonparametric in order to improve perspective modeling. Intuitively, na\u00efve Bayes associates each class or label with a probability distribution over a fixed vocabulary. We introduce adaptive na\u00efve Bayes (ANB), for which in principle the vocabulary can grow as needed to include collocations of arbitrary length, as determined by the properties of the dataset. 
We show that using adaptive na\u00efve Bayes improves on state of the art classification using the Bitter Lemons corpus (Lin et al., 2006) , a document collection that has been used by a variety of authors to evaluate perspective classification.", "cite_spans": [ { "start": 38, "end": 60, "text": "(Johnson et al., 2006)", "ref_id": "BIBREF6" }, { "start": 180, "end": 195, "text": "(Johnson, 2008)", "ref_id": "BIBREF7" }, { "start": 216, "end": 236, "text": "(Cohen et al., 2010)", "ref_id": "BIBREF1" }, { "start": 275, "end": 290, "text": "(Johnson, 2010)", "ref_id": "BIBREF8" }, { "start": 814, "end": 832, "text": "(Lin et al., 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Section 2, we review adaptor grammars, show how na\u00efve Bayes can be expressed within the formalism, and describe how -and how easily -an adaptive na\u00efve Bayes model can be created. Section 3 validates the approach via experimentation on the Bitter Lemons corpus. In Section 4, we summarize the contributions of the paper and discuss directions for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we apply the adaptor grammar formalism introduced by Johnson, Griffiths, and Goldwater (Johnson et al., 2006) . Adaptor grammars are a generalization of probabilistic context free grammars (PCFGs) that make it particularly easy to express nonparametric Bayesian models of language simply and readably using context free rules. Moreover, Johnson et al. provide an inference procedure based on Markov Chain Monte Carlo techniques that makes parameter estimation straightforward for all models that can be expressed using adaptor grammars. 1 Variational inference for adaptor grammars has also been recently introduced (Cohen et al., 2010) .", "cite_spans": [ { "start": 100, "end": 122, "text": "(Johnson et al., 2006)", "ref_id": "BIBREF6" }, { "start": 629, "end": 649, "text": "(Cohen et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Adapting Na\u00efve Bayes to be Less Na\u00efve", "sec_num": "2" }, { "text": "Briefly, adaptor grammars allow nonterminals to be rewritten to entire subtrees. In contrast, a nonterminal in a PCFG rewrites only to a collection of grammar symbols; their subsequent productions are independent of each other. For instance, a traditional PCFG might learn probabilities for the rewrite rule PP \u2192 P NP. In contrast, an adaptor grammar can learn (or \"cache\") the production PP \u2192 (P up)(NP(DET a)(N tree)). It does this by positing that the distribution over children for an adapted non-terminal comes from a Pitman-Yor distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adapting Na\u00efve Bayes to be Less Na\u00efve", "sec_num": "2" }, { "text": "A Pitman-Yor distribution (Pitman and Yor, 1997 ) is a distribution over distributions. It has three parameters: the discount, a, such that 0 \u2264 a < 1, the strength, b, a real number such that \u2212a < b, 1 And, better still, they provide code that implements the inference algorithm; see http://www.cog.brown.edu/ mj/Software.htm. and a probability distribution G 0 known as the base distribution. Adaptor grammars allow distributions over subtrees to come from a Pitman-Yor distribution with the PCFG's original distribution over trees as the base distribution. 
The generative process for obtaining draws from a distribution drawn from a Pitman-Yor distribution can be described by the \"Chinese restaurant process\" (CRP). We will use the CRP to describe how to obtain a distribution over observations composed of sequences of n-grams, the key to our model's ability to capture perspective-bearing n-grams.", "cite_spans": [ { "start": 26, "end": 47, "text": "(Pitman and Yor, 1997", "ref_id": "BIBREF17" }, { "start": 200, "end": 201, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Adapting Na\u00efve Bayes to be Less Na\u00efve", "sec_num": "2" }, { "text": "Suppose that we have a base distribution \u2126 that is some distribution over all sequences of words (the exact structure of such a distribution is unimportant; such a distribution will be defined later in Table 1 ). Suppose further we have a distribution \u03c6 drawn from P Y (a, b, \u2126), and we wish to draw a series of observations w from \u03c6. The CRP gives us a generative process for doing those draws from \u03c6, marginalizing out \u03c6. Following the restaurant metaphor, we imagine the i th customer in the series entering the restaurant to take a seat at a table. The customer sits by making a choice that determines the value of the n-gram w i for that customer: she can either sit at an existing table or start a new table of her own. 2 If she sits at a new table j, that table is assigned a draw y j from the base distribution, \u2126; note that, since \u2126 is a distribution over n-grams, y j is an ngram. The value of w i is therefore assigned to be y j , and y j becomes the sequence of words assigned to that new table. On the other hand, if she sits at an existing table, then w i simply takes the sequence of words already associated with that table (assigned as above when it was first occupied).", "cite_spans": [ { "start": 726, "end": 727, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Adapting Na\u00efve Bayes to be Less Na\u00efve", "sec_num": "2" }, { "text": "The probability of joining an existing table j, with c j patrons already seated at table j, is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adapting Na\u00efve Bayes to be Less Na\u00efve", "sec_num": "2" }, { "text": "c j \u2212a c\u2022+b ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adapting Na\u00efve Bayes to be Less Na\u00efve", "sec_num": "2" }, { "text": "where c \u2022 is the number of patrons seated at all tables: Notice that \u03c6 is a distribution over the same space as \u2126, but it can drastically shift the mass of the distribution, compared with \u2126, as more and more pa-trons are seated at tables. However, there is always a chance of drawing from the base distribution, and therefore every word sequence can also always be drawn from \u03c6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adapting Na\u00efve Bayes to be Less Na\u00efve", "sec_num": "2" }, { "text": "c \u2022 = j c j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adapting Na\u00efve Bayes to be Less Na\u00efve", "sec_num": "2" }, { "text": "In the next section we will write a na\u00efve Bayes-like generative process using PCFGs. 
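As a concrete illustration (not the implementation used in this work), the following sketch draws a sequence of n-grams from a distribution drawn from PY(a, b, Ω), with that distribution marginalized out via the seating process just described; the function name, the seed, and the base-distribution sampler draw_from_base are assumptions for illustration, and b is assumed positive.

```python
import random

def crp_pitman_yor_draws(n_draws, a, b, draw_from_base, seed=0):
    """Draw a sequence of n-grams from phi ~ PY(a, b, base), with phi
    marginalized out via the Chinese restaurant process described above."""
    rng = random.Random(seed)
    tables = []   # tables[j]: the n-gram (tuple of words) labelling table j
    counts = []   # counts[j]: number of customers already seated at table j
    draws = []
    for _ in range(n_draws):
        c_total = sum(counts)            # c., customers over all tables
        t = len(tables)                  # number of occupied tables
        # Open a new table with probability (b + t*a) / (c_total + b) ...
        if rng.random() * (c_total + b) < b + t * a:
            ngram = draw_from_base()     # new table labelled by a base-distribution draw
            tables.append(ngram)
            counts.append(1)
        else:
            # ... otherwise join existing table j with probability (counts[j] - a) / (c_total + b).
            r = rng.random() * (c_total - t * a)
            acc, j = 0.0, 0
            for j, c_j in enumerate(counts):
                acc += c_j - a
                if r < acc:
                    break
            counts[j] += 1
            ngram = tables[j]
        draws.append(ngram)
    return draws
```

Frequently reused n-grams accumulate customers and become cheap to regenerate, while the base distribution always leaves non-zero probability for novel sequences.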
We will then use the PCFG distribution as the base distribution for a Pitman-Yor distribution, adapting the na\u00efve Bayes process to give us a distribution over n-grams, thus learning new language substructures that are useful for modeling the differences in perspective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adapting Na\u00efve Bayes to be Less Na\u00efve", "sec_num": "2" }, { "text": "Na\u00efve Bayes is a venerable and popular mechanism for text classification (Lewis, 1998) . It posits that there are K distinct categories of text -each with a distinct distribution over words -and that every document, represented as an exchangeable bag of words, is drawn from one (and only one) of these distributions. Learning the per-category word distributions and global prevalence of the classes is a problem of posterior inference which can be approached using a variety of inference techniques (Lowd and Domingos, 2005) .", "cite_spans": [ { "start": 73, "end": 86, "text": "(Lewis, 1998)", "ref_id": "BIBREF10" }, { "start": 500, "end": 525, "text": "(Lowd and Domingos, 2005)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Classification Models as PCFGs", "sec_num": "2.1" }, { "text": "More formally, na\u00efve Bayes models can be expressed via the following generative process: 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models as PCFGs", "sec_num": "2.1" }, { "text": "1. Draw a global distribution over classes \u03b8 \u223c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models as PCFGs", "sec_num": "2.1" }, { "text": "Dir (\u03b1) 2. For each class i \u2208 {1, . . . , K}, draw a word distribution \u03c6 i \u223c Dir (\u03bb) 3. For each document d \u2208 {1, . . . , M }: (a) Draw a class assignment z d \u223c Mult (\u03b8) (b) For each word position n \u2208 {1, . . . , N d , draw w d,n \u223c Mult (\u03c6 z d )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models as PCFGs", "sec_num": "2.1" }, { "text": "A variant of the na\u00efve Bayes generative process can be expressed using the adaptor grammar formalism ( Table 1) . The left hand side of each rule represents a nonterminal which can be expanded, and the right hand side represents the rewrite rule. The rightmost indices show replication; for instance, there are |V | rules that allow WORD i to rewrite to each word in the vocabulary. One can assume a symmetric Dirichlet prior of Dir (1) over the production choices unless otherwise specified -as with the DOC d production rule above, where a sparse prior is used. Notice that the distribution over expansions for WORD i corresponds directly to \u03c6 i in Figure 1 (a). There are, however, some differences between the model that we have described above and the standard na\u00efve Bayes model depicted in Figure 1(a) . In particular, there is no longer a single choice of class per document; each sentence is assigned a class. If the distribution over per-sentence labels is sparse (as it is above for DOC d ), this will closely approximate na\u00efve Bayes, since it will be very unlikely for the sentences in a document to have different labels. 
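For illustration, the generative story in steps 1-3 above can be simulated directly; the toy vocabulary, the document count, and the hyperparameter values below are arbitrary assumptions, not settings from our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, V = 2, 5, 6                             # classes, documents, vocabulary size
alpha, lam = 1.0, 1.0                         # symmetric Dirichlet hyperparameters
vocab = np.array(["health", "care", "tax", "government", "reform", "freedom"])

theta = rng.dirichlet(alpha * np.ones(K))      # 1. global distribution over classes
phi = rng.dirichlet(lam * np.ones(V), size=K)  # 2. one word distribution per class

docs, labels = [], []
for d in range(M):                             # 3. generate each document
    z_d = rng.choice(K, p=theta)               # (a) class assignment
    N_d = rng.integers(5, 10)                  # document length (toy choice)
    words = rng.choice(vocab, size=N_d, p=phi[z_d])  # (b) words drawn i.i.d.
    docs.append(words.tolist())
    labels.append(int(z_d))
```

Posterior inference runs this story in reverse, recovering the class proportions, the per-class word distributions, and the document labels from the observed words.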
A non-sparse prior leads to behavior more like models that allow parts of a document to express sentiment or perspective differently.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Table 1)", "ref_id": "TABREF1" }, { "start": 651, "end": 659, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 796, "end": 807, "text": "Figure 1(a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Classification Models as PCFGs", "sec_num": "2.1" }, { "text": "SENT \u2192 DOC d d = 1, . . . , m DOC d 0.001 \u2192 ID d WORDSi d = 1, . . . , m; i \u2208 {1, K} WORDSi \u2192 WORDSi WORDi i \u2208 {1, K} WORDSi \u2192 WORDi i \u2208 {1, K} WORDi \u2192 v v \u2208 V ; i \u2208 {1, K}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Models as PCFGs", "sec_num": "2.1" }, { "text": "The na\u00efve Bayes generative distribution posits that when writing a document, the author selects a distribution of categories z d for the document from \u03b8. The author then generates words one at a time: each word is selected independently from a flat multinomial distribution \u03c6 z d over the vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "However, this is a very limited picture of how text is related to underlying perspectives. Clearly words are often connected with each other as collocations, and, just as clearly, extending a flat vocabulary to include bigram collocations does not suffice, since sometimes relevant perspective-bearing phrases are longer than two words. Consider phrases like health care for all or government takeover of health care, connected with progressive and conservative positions, respectively, during the national debate on healthcare reform. Simply applying na\u00efve Bayes, or any other model, to a bag of n-grams for high n is going to lead to unworkable levels of data sparsity; a model should be flexible enough to support both unigrams and longer phrases as needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "K M \u03b1 z d N d W d,n \u03bb \u03b8 \u03c6 i (a) Na\u00efve Bayes K M \u03b1 z d N d W d,n a \u03b8 \u03c6 i b \u03a9 \u03c4 (b) Adaptive Na\u00efve Bayes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "Following Johnson (2010) , however, we can use adaptor grammars to extend na\u00efve Bayes flexibly to include richer structure like collocations when they improve the model, and not including them when they do not. This can be accomplished by introducing adapted nonterminal rules: in a revised generative process, the author can draw from Pitman-Yor distribution whose base distribution is over word sequences of arbitrary length. 4 Thus in a setting where, say, K = 2, and our two classes are PROGRESSIVE and CONSERVATIVE, the sequence health care for all might be generated as a single unit for the progressive perspective, but in the conservative perspective the same sequence might be generated as three separate draws: health care, for, all. Such a model is presented in Figure 1(b) . 
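To make this intuition concrete, the sketch below compares, under the Chinese restaurant process probabilities given earlier, how readily each class regenerates health care for all as a single cached draw versus three separate draws. The table counts, the hyperparameter values, and the uniform base distribution are invented purely for illustration, and count updates between the three draws are ignored for brevity.

```python
def crp_predictive(ngram, tables, a, b, base_prob):
    """Predictive probability of drawing `ngram` next given the current seating:
    reuse that n-gram's existing table, or open a new table labelled by a base
    draw (one table per distinct n-gram assumed for simplicity)."""
    c_total = sum(tables.values())
    t = len(tables)
    p_old = max(tables.get(ngram, 0) - a, 0.0) / (c_total + b)
    p_new = (b + t * a) / (c_total + b) * base_prob(ngram)
    return p_old + p_new

a, b = 0.01, 10.0
base_prob = lambda ngram: (1.0 / 50.0) ** len(ngram)  # toy base over a 50-word vocabulary

# Hypothetical cached tables (n-gram -> customer count) for each perspective.
progressive = {("health", "care", "for", "all"): 20, ("health", "care"): 30,
               ("for",): 50, ("all",): 40}
conservative = {("health", "care"): 25, ("for",): 60, ("all",): 45}

phrase = ("health", "care", "for", "all")
parts = [("health", "care"), ("for",), ("all",)]
for name, tables in (("PROGRESSIVE", progressive), ("CONSERVATIVE", conservative)):
    as_one = crp_predictive(phrase, tables, a, b, base_prob)
    as_three = 1.0
    for part in parts:
        as_three *= crp_predictive(part, tables, a, b, base_prob)
    print(f"{name}: one cached draw {as_one:.4f} vs. three draws {as_three:.6f}")
```

Under these invented counts, the whole phrase is the cheaper explanation for the progressive class, while the conservative class prefers the three shorter draws, mirroring the example above.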
Note the following differences between Figures 1(a) and 1(b): \u2022 \u2126 is the Pitman-Yor base distribution with \u03c4 as its uniform hyperparameter.", "cite_spans": [ { "start": 10, "end": 24, "text": "Johnson (2010)", "ref_id": "BIBREF8" }, { "start": 428, "end": 429, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 773, "end": 784, "text": "Figure 1(b)", "ref_id": "FIGREF0" }, { "start": 826, "end": 838, "text": "Figures 1(a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "\u2022 z d selects", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "Returning to the CRP metaphor discussed when we introduced the Pitman-Yor distribution, there are two restaurants, one for the PROGRESSIVE distribution and one for the CONSERVATIVE distribution. Health care for all has its own table in the PROGRESSIVE restaurant, and enough people are sitting at it to make it popular. There is no such table in the CONSERVA-TIVE restaurant, so in order to generate those words, the phrase health care for all would need to come from a new table; however, it is more easily explained by three customers sitting at three existing, popular tables: health care, for, and all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "We follow the convention of Johnson (2010) by writing adapted nonterminals as underlined. The grammar for adaptive na\u00efve Bayes is shown in Table 2. The adapted COLLOC i rule means that every time we need to generate that nonterminal, we are actually drawing from a distribution drawn from a Pitman-Yor distribution. The distribution over the possible yields of the WORDS i rule serves as the base distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "Given this generative process for documents, we can now use statistical inference to uncover the posterior distribution over the latent variables, thus discovering the tables and seating assignments of our metaphorical restaurants that each cater to a specific perspective filled with tables populated by words and n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "The model presented in Table 2 is the most straightforward way of extending na\u00efve Bayes to collocations. For completeness, we also consider the alternative of using a shared base distribution rather than distinguishing the base distributions of the two classes. Briefly, using a shared base distribution posits that the two classes use similar word distributions, but generate collocations unique to each class, whereas using separate base distributions assumes that the distribution of words is unique to each class.", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 30, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "SENT \u2192 DOC d d = 1, . . . , m DOC d 0.001 \u2192 ID d SPANi d = 1, . . . 
, m; i \u2208 {1, K} SPANi \u2192 SPANi COLLOCi i \u2208 {1, K} SPANi \u2192 COLLOCi i \u2208 {1, K} COLLOCi \u2192 WORDSi i \u2208 {1, K} WORDSi \u2192 WORDSi WORDi i \u2208 {1, K} WORDSi \u2192 WORDi i \u2208 {1, K} WORDi \u2192 v v \u2208 V ; i \u2208 {1, K}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "\u2192 ID d SPANi d = 1, . . . , m; i \u2208 {1, K} SPANi \u2192 SPANi COLLOCi i \u2208 {1, K} SPANi \u2192 COLLOCi i \u2208 {1, K} COLLOCi \u2192 WORDS i \u2208 {1, K} WORDS \u2192 WORDS WORD WORDS \u2192 WORD WORD \u2192 v v \u2208 V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Moving Beyond the Bag of Words", "sec_num": "2.2" }, { "text": "We conducted our classification experiments on the Bitter Lemons (BL) corpus, which is a collection of 297 essays averaging 700-800 words in length, on various Middle East issues, written from both the Israeli and Palestinian perspectives. The BL corpus was compiled by Lin et al. (2006) and is derived from a website that invites weekly discussions on a topic and publishes essays from two sets of authors each week. 5 Two of the authors are guests, one from each perspective, and two essays are from the site's regular contributors, also one from each perspective, for a 5 http://www.bitterlemons.org total of four essays on each topic per week. We chose this corpus to allow us to directly compare our results with Greene and Resnik's (2009) Consistent with prior work, we prepared the corpus by dividing it into two groups, one group containing all of the essays written by the regular site contributors, which we call the Editor set, and one group comprised of all the essays written by the guest contributors, which we call the Guest set. Similar to the above mentioned prior work, we perform classification using one group as training data and the other as test data and perform two folds of classification. The overall experimental setup and corpus preparation process is presented in Figure 3 .", "cite_spans": [ { "start": 270, "end": 287, "text": "Lin et al. (2006)", "ref_id": "BIBREF12" }, { "start": 718, "end": 744, "text": "Greene and Resnik's (2009)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1293, "end": 1301, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Corpus Description", "sec_num": "3.1" }, { "text": "K M \u03b1 z d N d W d,n a \u03b8 \u03c6 i b \u03a9 \u03c4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Description", "sec_num": "3.1" }, { "text": "The vocabulary generator determines the vocabulary used by a given experiment by converting the training set to lower case, stemming with the Porter stemmer, and filtering punctuation. We remove from the vocabulary any words that appeared in only one document regardless of frequency within that document, words with frequencies lower than a threshold, and stop words. 6 The vocabulary is then passed to a grammar generator and a corpus filter.", "cite_spans": [ { "start": 369, "end": 370, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "The grammar generator uses the vocabulary to generate the terminating rules of the grammar from the ANB grammar presented in Tables 2 and 3 . The corpus filter takes in a set of documents and replaces all words not in the vocabulary with \"out of vocabulary\" markers. 
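A rough sketch of the vocabulary generator and corpus filter just described follows; the stemming callable (e.g., NLTK's PorterStemmer), the stop-word list, and the out-of-vocabulary marker are assumptions, and the default threshold of 4 follows footnote 6.

```python
import string
from collections import Counter

def build_vocabulary(train_docs, stem, stopwords, min_freq=4):
    """train_docs: list of token lists from the training set. Lower-case, strip
    punctuation, and stem each token, then drop words that appear in only one
    document, words below the frequency threshold, and stop words."""
    def normalize(doc):
        toks = []
        for tok in doc:
            tok = tok.lower().strip(string.punctuation)
            if tok:
                toks.append(stem(tok))
        return toks

    normalized = [normalize(doc) for doc in train_docs]
    term_freq = Counter(t for doc in normalized for t in doc)
    doc_freq = Counter(t for doc in normalized for t in set(doc))
    vocab = {t for t, c in term_freq.items()
             if c >= min_freq and doc_freq[t] > 1 and t not in stopwords}
    return vocab, normalized

def filter_corpus(docs, vocab, stem, oov="_OOV_"):
    """Replace every token whose normalized form is outside the training
    vocabulary with an out-of-vocabulary marker."""
    filtered = []
    for doc in docs:
        toks = []
        for tok in doc:
            tok = tok.lower().strip(string.punctuation)
            tok = stem(tok) if tok else tok
            toks.append(tok if tok in vocab else oov)
        filtered.append(toks)
    return filtered

# e.g., with NLTK: vocab, train = build_vocabulary(train_docs, PorterStemmer().stem, stop_list)
```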
This process ensures that in all experiments the vocabulary is composed entirely of words from the training set. After the groups have been filtered, the group used as the test set has its labels removed. The test and training set are then sent, along with the grammar, into the adaptor grammar inference engine.", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 139, "text": "Tables 2 and 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "Each experiment ran for 3000 iterations. For the runs where adaptation was used we set the initial Pitman-Yor a and b parameters to 0.01 and 10 respectively, then slice sample (Johnson and Goldwater, 2009) .", "cite_spans": [ { "start": 176, "end": 205, "text": "(Johnson and Goldwater, 2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "We use the resulting sentence parses for classification. By design of the grammar, each sentence's words will belong to one and only one distribution. We identify that distribution from each of the test set sentence parses and use it as the sentence level classification for that particular sentence. We then use majority rule on the individual sentence classifications in a document to obtain the document classification. (In most cases the sentence-level assignments are overwhelmingly dominated by one class.) Table 4 gives the results and compares to prior work. The support vector machine (SVM), NB-B and LSPM results are taken directly from Lin et al. (2006) . NB-B indicates na\u00efve Bayes with full Bayesian inference. LSPM is the Latent Sentence Perspective Model, also from Lin et al. (2006) . OPUS results are taken from Greene and Resnik (2009) . Briefly, OPUS features are generated from observable grammatical relations that come from dependency parses of the corpus. Use of these features provided the best classification accuracy for this task prior to this work. ANB* refers to the grammar from Table 2 , but with adaptation disabled. The reported accuracy values for ANB*, ANB with a common base distribution (see Table 3 ), and ANB with separate base distributions (see Table 2 ) are the mean values from five separate sampling chains. Bold face indicates statistical signficance (p < 0.05) by unpaired t-test between the reported value and ANB*. Consistent with all prior work on this corpus we found that the classification accuracy for training on editors and testing on guests was lower than the other direction since the larger number of editors in the guest set allows for greater generalization. The difference between ANB* and ANB with a common base distribution is not statistically significant. Also of note is that the classification accuracy improves for testing on Guests when the ANB grammar is allowed to adapt and a separate base distribution is used for the two classes (88.28% versus 84.98% without adaptation). learned once inference is complete. The column labeled unique unigrams cached indicates the number of unique unigrams that appear on the right hand side of the adapted rules. Similarly, unique n-grams cached indicates the number of unique n-grams that appear on the right hand side of the adapted rules. The rightmost column indicates the percentage of terms from the group vocabulary that appear on the right hand side of adapted rules as unigrams. Values less than 100% indicate that the remaining vocabulary terms are cached in n-grams. 
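These per-class cache statistics can be read off the yields of the adapted rules; the short sketch below assumes each cached yield is available as a tuple of words and counts "n-grams" as yields of length greater than one.

```python
def cache_statistics(adapted_yields, vocab):
    """adapted_yields: set of word sequences (tuples) on the right-hand side of
    one class's adapted rules; vocab: the group vocabulary (set of words)."""
    unigrams = {y[0] for y in adapted_yields if len(y) == 1}
    ngrams = {y for y in adapted_yields if len(y) > 1}   # yields longer than one word
    pct_vocab_as_unigrams = 100.0 * len(unigrams & vocab) / len(vocab)
    return len(unigrams), len(ngrams), pct_vocab_as_unigrams
```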
As the table shows, a significant number of the rules learned during inference are n-grams of various sizes. Inspection of the captured bigrams showed that it captured sequences that a human might associate with one perspective over the other. Table 6 lists just a few of the more charged bigrams that were captured in the adapted rules.", "cite_spans": [ { "start": 647, "end": 664, "text": "Lin et al. (2006)", "ref_id": "BIBREF12" }, { "start": 781, "end": 798, "text": "Lin et al. (2006)", "ref_id": "BIBREF12" }, { "start": 840, "end": 853, "text": "Resnik (2009)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 513, "end": 520, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 1109, "end": 1116, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1229, "end": 1236, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 1286, "end": 1293, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 2830, "end": 2837, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "More specific caching information on the individual groups and classes is provided in Table 7 . This data clearly demonstrates that raw n-gram frequency alone is not indicative of how many times an n-gram is used as a cached rule. For example, consider the bigram people go, which is used as a cached bigram only three times, yet appears in the corpus 407 times. Compare that with isra palestinian, which is cached the same number of times but appears only 18 times in the corpus. In other words, the sequence people go is more easily explained by two sequential unigrams, not a bigram. The ratio of cache use counts to raw bigrams gives a measure of strength of collocation between the terms of the n-gram. We conjecture that the rareness of caching for n > 2 is a function of the small corpus size. Also of note is the improvement in performance of ANB* over NB-B when training on guests, which we suspect is due to our use of sampled versus fixed hyperparameters.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "3.3" }, { "text": "In this paper, we have applied adaptor grammars in a supervised setting to model lexical properties of text and improve document classification according to perspective, by allowing nonparametric discovery of collocations that aid in perspective classification. The adaptive na\u00efve Bayes model improves on state of the art supervised classification performance in head-to-head comparisons with previous approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4" }, { "text": "Although there have been many investigations on the efficacy of using multiword collocations in text classification (Bekkerman and Allan, 2004) , usually such approaches depend on a preprocessing step such as computing tf-idf or other measures of frequency based on either word bigrams (Tan et al., 2002) or character n-grams (Raskutti et al., 2001 ). In contrast, our approach combines phrase discovery with the probabilistic model of the text. 
This allows for the collocation selection and classification to be expressed in a single model, which can then be extended later; it also is truly generative, as compared with feature induction and selection algorithms that either under-or over-generate data.", "cite_spans": [ { "start": 116, "end": 143, "text": "(Bekkerman and Allan, 2004)", "ref_id": "BIBREF0" }, { "start": 286, "end": 304, "text": "(Tan et al., 2002)", "ref_id": null }, { "start": 326, "end": 348, "text": "(Raskutti et al., 2001", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4" }, { "text": "There are a number of interesting directions in which to take this research. As Johnson et al. (2006) argue, and as we have confirmed here, the adaptor of ARL, IARPA, the ODNI, or the U.S. Government.", "cite_spans": [ { "start": 80, "end": 101, "text": "Johnson et al. (2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4" }, { "text": "Note that we are abusing notation by allowing wi to correspond to a word sequence of length \u2265 1 rather than a single word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Here \u03b1 and \u03bb are hyperparameters used to specify priors for the class distribution and classes' word distributions, respectively; \u03b1 is a symmetric K-dimensional vector where each element is \u03c0. N d is the length of document d.Resnik and Hardisty (2010) provide a tutorial introduction to the na\u00efve Bayes generative process and underlying concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As defined above, the base distribution is that of the PCFG production rule WORDSi. Although it has non-zero probability of producing any sequence of words, it is biased toward shorter word sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In these experiments, a frequency threshold of 4 was selected prior to testing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was funded in part by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies The authors thank Mark Johnson and the anonymous reviewers for their helpful comments and discussions. We are particularly grateful to Mark Johnson for making his adaptor grammar code available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "grammar formalism makes it quite easy to work with latent variable models, in order to automatically discover structures in the data that have predictive value. For example, it is easy to imagine a model where in addition to a word distribution for each class, there is also an additional shared \"neutral\" distribution: for each sentence, the words in that sentence can either come from the class-specific content distribution or the shared neutral distribution. This turns out to be the Latent Sentence Perspective Model of Lin et al. 
(2006) , which is straightforward to encode using the adaptor grammar formalism simply by introducing two new nonterminals to represent the neutral distribution:Running this grammar did not produce improvements consistent with those reported by Lin et al. We plan to investigate this further, and a natural follow-on would be to experiment with adaptation for this variety of latent structure, to produce an adapted LSPM-like model analogous to adaptive na\u00efve Bayes. Viewed in a larger context, computational classi-fication of perspective is closely connected to social scientists' study of framing, which Entman (1993) characterizes as follows: \"To frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item described.\" Here and in other work (e.g. (Laver et al., 2003; Mullen and Malouf, 2006; Yu et al., 2008; Monroe et al., 2008) ), it is clear that lexical evidence is one key to understanding how language is used to frame discussion from one perspective or another; Resnik and Greene (2009) have shown that syntactic choices can provide important evidence, as well. Another promising direction for this work is the application of adaptor grammar models as a way to capture both lexical and grammatical aspects of framing in a unified model.", "cite_spans": [ { "start": 525, "end": 542, "text": "Lin et al. (2006)", "ref_id": "BIBREF12" }, { "start": 1143, "end": 1156, "text": "Entman (1993)", "ref_id": "BIBREF2" }, { "start": 1482, "end": 1502, "text": "(Laver et al., 2003;", "ref_id": "BIBREF9" }, { "start": 1503, "end": 1527, "text": "Mullen and Malouf, 2006;", "ref_id": "BIBREF15" }, { "start": 1528, "end": 1544, "text": "Yu et al., 2008;", "ref_id": "BIBREF21" }, { "start": 1545, "end": 1565, "text": "Monroe et al., 2008)", "ref_id": "BIBREF14" }, { "start": 1705, "end": 1729, "text": "Resnik and Greene (2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Guest", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Using bigrams in text categorization", "authors": [ { "first": "R", "middle": [], "last": "Bekkerman", "suffix": "" }, { "first": "J", "middle": [], "last": "Allan", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bekkerman and J. Allan. 2004. Using bigrams in text categorization. Technical Report IR-408, Center of Intelligent Information Retrieval, UMass Amherst.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Variational inference for adaptor grammars", "authors": [ { "first": "S", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Conference of the North American Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. B. Cohen, D. M. Blei, and N. A. Smith. 2010. Varia- tional inference for adaptor grammars. 
In Conference of the North American Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Framing: Toward Clarification of a Fractured Paradigm", "authors": [ { "first": "R", "middle": [ "M" ], "last": "Entman", "suffix": "" } ], "year": 1993, "venue": "The Journal of Communication", "volume": "43", "issue": "4", "pages": "51--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "R.M. Entman. 1993. Framing: Toward Clarification of a Fractured Paradigm. The Journal of Communication, 43(4):51-58.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "More than words: Syntactic packaging and implicit sentiment", "authors": [ { "first": "Stephan", "middle": [], "last": "Greene", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2009, "venue": "Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "503--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Conference of the North American Chapter of the Asso- ciation for Computational Linguistics, pages 503-511.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unsupervised coreference resolution in a nonparametric bayesian model", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "848--855", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Dan Klein. 2007. Unsupervised coref- erence resolution in a nonparametric bayesian model. In Proceedings of the 45th Annual Meeting of the Asso- ciation of Computational Linguistics, pages 848-855, Prague, Czech Republic, June.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor grammars", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2009, "venue": "Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "317--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor gram- mars. In Conference of the North American Chapter of the Association for Computational Linguistics, pages 317-325, Boulder, Colorado, June.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adaptor grammars: A framework for specifying compositional nonparametric Bayesian models", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Gold- water. 2006. 
Adaptor grammars: A framework for specifying compositional nonparametric Bayesian mod- els. In Proceedings of Advances in Neural Information Processing Systems.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Using adaptor grammars to identify synergies in the unsupervised acquisition of linguistic structure", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "398--406", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson. 2008. Using adaptor grammars to identify synergies in the unsupervised acquisition of linguistic structure. In Proceedings of ACL-08: HLT, pages 398- 406, Columbus, Ohio, June.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1148--1157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson. 2010. PCFGs, topic models, adaptor gram- mars and learning topical collocations and the structure of proper names. In Proceedings of the Association for Computational Linguistics, pages 1148-1157, Uppsala, Sweden, July.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Extracting policy positions from political texts using words as data", "authors": [ { "first": "Michael", "middle": [], "last": "Laver", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Benoit", "suffix": "" }, { "first": "John", "middle": [], "last": "Garry", "suffix": "" } ], "year": 2003, "venue": "American Political Science Review", "volume": "", "issue": "", "pages": "311--331", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Laver, Kenneth Benoit, and John Garry. 2003. Extracting policy positions from political texts using words as data. American Political Science Review, pages 311-331.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Naive (bayes) at forty: The independence assumption in information retrieval", "authors": [ { "first": "D", "middle": [], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ECML-98, 10th European Conference on Machine Learning, number 1398", "volume": "", "issue": "", "pages": "4--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "David D. Lewis. 1998. Naive (bayes) at forty: The inde- pendence assumption in information retrieval. In Claire N\u00e9dellec and C\u00e9line Rouveirol, editors, Proceedings of ECML-98, 10th European Conference on Machine Learning, number 1398, pages 4-15, Chemnitz, DE. 
Springer Verlag, Heidelberg, DE.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The infinite PCFG using hierarchical Dirichlet processes", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Jordan", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Emperical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "688--697", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Slav Petrov, Michael Jordan, and Dan Klein. 2007. The infinite PCFG using hierarchical Dirichlet processes. In Proceedings of Emperical Methods in Natural Language Processing, pages 688-697.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Which side are you on? Identifying perspectives at the document and sentence levels", "authors": [ { "first": "Wei-Hao", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Hauptmann", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Conference on Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei-Hao Lin, Theresa Wilson, Janyce Wiebe, and Alexan- der Hauptmann. 2006. Which side are you on? Identi- fying perspectives at the document and sentence levels. In Proceedings of the Conference on Natural Language Learning (CoNLL).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Naive bayes models for probability estimation", "authors": [ { "first": "Daniel", "middle": [], "last": "Lowd", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Domingos", "suffix": "" } ], "year": 2005, "venue": "ICML '05: Proceedings of the 22nd international conference on Machine learning", "volume": "", "issue": "", "pages": "529--536", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Lowd and Pedro Domingos. 2005. Naive bayes models for probability estimation. In ICML '05: Pro- ceedings of the 22nd international conference on Ma- chine learning, pages 529-536, New York, NY, USA. ACM.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Fightin' Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict", "authors": [ { "first": "L", "middle": [], "last": "Burt", "suffix": "" }, { "first": "Michael", "middle": [ "P" ], "last": "Monroe", "suffix": "" }, { "first": "Kevin", "middle": [ "M" ], "last": "Colaresi", "suffix": "" }, { "first": "", "middle": [], "last": "Quinn", "suffix": "" } ], "year": 2008, "venue": "Political Analysis", "volume": "16", "issue": "", "pages": "372--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burt L. Monroe, Michael P. Colaresi, and Kevin M. Quinn. 2008. Fightin' Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Con- flict. Political Analysis, Vol. 16, Issue 4, pp. 
372-403, 2008.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A preliminary investigation into sentiment analysis of informal political discourse", "authors": [ { "first": "Tony", "middle": [], "last": "Mullen", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Malouf", "suffix": "" } ], "year": 2006, "venue": "AAAI Symposium on Computational Approaches to Analysing Weblogs (AAAI-CAAW)", "volume": "", "issue": "", "pages": "159--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tony Mullen and Robert Malouf. 2006. A preliminary in- vestigation into sentiment analysis of informal political discourse. In AAAI Symposium on Computational Ap- proaches to Analysing Weblogs (AAAI-CAAW), pages 159-162.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bayesian nonparametric models", "authors": [ { "first": "P", "middle": [], "last": "Orbanz", "suffix": "" }, { "first": "Y", "middle": [ "W" ], "last": "Teh", "suffix": "" } ], "year": 2010, "venue": "Encyclopedia of Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Orbanz and Y. W. Teh. 2010. Bayesian nonparamet- ric models. In Encyclopedia of Machine Learning. Springer.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator", "authors": [ { "first": "J", "middle": [], "last": "Pitman", "suffix": "" }, { "first": "M", "middle": [], "last": "Yor", "suffix": "" } ], "year": 1997, "venue": "Annals of Probability", "volume": "25", "issue": "2", "pages": "855--900", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pitman and M. Yor. 1997. The two-parameter Poisson- Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25(2):855-900.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Second order features for maximising text classification performance", "authors": [ { "first": "Bhavani", "middle": [], "last": "Raskutti", "suffix": "" }, { "first": "Herman", "middle": [ "L" ], "last": "Ferr\u00e1", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Kowalczyk", "suffix": "" } ], "year": 2001, "venue": "EMCL '01: Proceedings of the 12th European Conference on Machine Learning", "volume": "", "issue": "", "pages": "419--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhavani Raskutti, Herman L. Ferr\u00e1, and Adam Kowal- czyk. 2001. Second order features for maximising text classification performance. In EMCL '01: Proceedings of the 12th European Conference on Machine Learning, pages 419-430, London, UK. Springer-Verlag.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The use of bigrams to enhance text categorization", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Hardisty", "suffix": "" } ], "year": 2002, "venue": "Inf. Process. Manage", "volume": "38", "issue": "4", "pages": "529--546", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and Eric Hardisty. 2010. Gibbs sampling for the uninitiated. Technical Re- port UMIACS-TR-2010-04, University of Maryland. http://www.lib.umd.edu/drum/handle/1903/10058. Chade-Meng Tan, Yuan-Fang Wang, and Chan-Do Lee. 2002. The use of bigrams to enhance text categoriza- tion. Inf. Process. 
Manage., 38(4):529-546.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Hierarchical Dirichlet processes", "authors": [ { "first": "Yee Whye", "middle": [], "last": "Teh", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "Matthew", "middle": [ "J" ], "last": "Beal", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2006, "venue": "Journal of the American Statistical Association", "volume": "101", "issue": "476", "pages": "1566--1581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet pro- cesses. Journal of the American Statistical Association, 101(476):1566-1581.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Classifying party affiliation from political speech", "authors": [ { "first": "B", "middle": [], "last": "Yu", "suffix": "" }, { "first": "S", "middle": [], "last": "Kaufmann", "suffix": "" }, { "first": "D", "middle": [], "last": "Diermeier", "suffix": "" } ], "year": 2008, "venue": "Journal of Information Technology and Politics", "volume": "5", "issue": "1", "pages": "33--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Yu, S. Kaufmann, and D. Diermeier. 2008. Classify- ing party affiliation from political speech. Journal of Information Technology and Politics, 5(1):33-48.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "A plate diagram for na\u00efve Bayes and adaptive na\u00efve Bayes. Nodes represent random variables and parameters; shaded nodes represent observations; lines represent probabilistic dependencies; and the rectangular plates denote replication." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "which Pitman-Yor distribution to draw from for document d. \u2022 \u03c6 i is the distribution over n-grams that comes from the Pitman-Yor distribution. \u2022 W d,n represents an n-gram draw from \u03c6 i \u2022 a, b are the Pitman-Yor strength and discount parameters." }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "An alternative adaptive na\u00efve Bayes with a common base distribution for both classes. Corpus preparation and experimental setup." }, "TABREF0": { "type_str": "table", "num": null, "text": "The probability of starting a new table is b+t * a c\u2022+b , where t is the number of tables presently occupied.", "html": null, "content": "" }, "TABREF1": { "type_str": "table", "num": null, "text": "A na\u00efve Bayes-inspired model expressed as a PCFG.", "html": null, "content": "
" }, "TABREF2": { "type_str": "table", "num": null, "text": "An adaptive na\u00efve Bayes grammar. The COLLOC", "html": null, "content": "
i nonterminal's distribution over yields is drawn from a Pitman-Yor distribution rather than a Dirichlet over production rules.
SENT → DOC d    d = 1, . . . , m
DOC d 0.001 → ID d SPANi    d = 1, . . . , m; i ∈ {1, K}
SPANi → SPANi COLLOCi    i ∈ {1, K}
SPANi → COLLOCi    i ∈ {1, K}
COLLOCi → WORDSi    i ∈ {1, K}
WORDSi → WORDSi WORDi    i ∈ {1, K}
WORDSi → WORDi    i ∈ {1, K}
WORDi → v    v ∈ V ; i ∈ {1, K}
" }, "TABREF3": { "type_str": "table", "num": null, "text": "", "html": null, "content": "" }, "TABREF6": { "type_str": "table", "num": null, "text": "Classification results. ANB* indicates the same grammar as Adapted Na\u00efve Bayes, but with adaptation disabled. Com and Sep refer to whether the base distribution was common to both classes or separate.", "html": null, "content": "
" }, "TABREF7": { "type_str": "table", "num": null, "text": "", "html": null, "content": "
presents some data on adapted rules
" }, "TABREF8": { "type_str": "table", "num": null, "text": "Counts of cached unigrams and n-grams for the two classes compared to the vocabulary sizes.", "html": null, "content": "
Israeli | Palestinian
zionist dream | american jew
zionist state | achieve freedom
zionist movement | palestinian freedom
american leadership | support palestinian
american victory | palestinian suffer
abandon violence | palestinian territory
freedom (of the) press | palestinian statehood
palestinian violence | palestinian refugee
" }, "TABREF9": { "type_str": "table", "num": null, "text": "Charged bigrams captured by the framework.", "html": null, "content": "" } } } }