{ "paper_id": "D08-1004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:09.174202Z" }, "title": "Modeling Annotators: A Generative Approach to Learning from Annotator Rationales *", "authors": [ { "first": "Omar", "middle": [ "F" ], "last": "Zaidan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University Baltimore", "location": { "postCode": "21218", "region": "MD", "country": "USA" } }, "email": "ozaidan@cs.jhu.edu" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University Baltimore", "location": { "postCode": "21218", "region": "MD", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A human annotator can provide hints to a machine learner by highlighting contextual \"rationales\" for each of his or her annotations (Zaidan et al., 2007). How can one exploit this side information to better learn the desired parameters \u03b8? We present a generative model of how a given annotator, knowing the true \u03b8, stochastically chooses rationales. Thus, observing the rationales helps us infer the true \u03b8. We collect substring rationales for a sentiment classification task (Pang and Lee, 2004) and use them to obtain significant accuracy improvements for each annotator. Our new generative approach exploits the rationales more effectively than our previous \"masking SVM\" approach. It is also more principled, and could be adapted to help learn other kinds of probabilistic classifiers for quite different tasks.", "pdf_parse": { "paper_id": "D08-1004", "_pdf_hash": "", "abstract": [ { "text": "A human annotator can provide hints to a machine learner by highlighting contextual \"rationales\" for each of his or her annotations (Zaidan et al., 2007). How can one exploit this side information to better learn the desired parameters \u03b8? We present a generative model of how a given annotator, knowing the true \u03b8, stochastically chooses rationales. Thus, observing the rationales helps us infer the true \u03b8. We collect substring rationales for a sentiment classification task (Pang and Lee, 2004) and use them to obtain significant accuracy improvements for each annotator. Our new generative approach exploits the rationales more effectively than our previous \"masking SVM\" approach. It is also more principled, and could be adapted to help learn other kinds of probabilistic classifiers for quite different tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many recent papers aim to reduce the amount of annotated data needed to train the parameters of a statistical model. Well-known paradigms include active learning, semi-supervised learning, and either domain adaptation or cross-lingual transfer from existing annotated data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "A rather different paradigm is to change the actual task that is given to annotators, giving them a greater hand in shaping the learned classifier. After all, human annotators themselves are more than just black-box classifiers to be run on training data. They possess some introspective knowledge about their own classification procedure. The hope is to mine this knowledge rapidly via appropriate questions and use it to help train a machine classifier. 
How to do this, however, is still being explored.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "An obvious option is to have the annotators directly express their knowledge by hand-crafting rules. This approach remains \"data-driven\" if the annotators repeatedly refine their system against a corpus of labeled or unlabeled examples. This achieves high performance in some domains, such as NP chunking (Brill and Ngai, 1999) , but requires more analytical skill from the annotators. One empirical study (Ngai and Yarowsky, 2000) found that it also required more annotation time than active learning.", "cite_spans": [ { "start": 305, "end": 327, "text": "(Brill and Ngai, 1999)", "ref_id": "BIBREF0" }, { "start": 406, "end": 431, "text": "(Ngai and Yarowsky, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Hand-crafted rules", "sec_num": "1.1" }, { "text": "More recent work has focused on statistical classifiers. Training such classifiers faces the \"credit assignment problem.\" Given a training example x with many features, which features are responsible for its annotated class y? It may take many training examples to distinguish useful vs. irrelevant features. 1 To reduce the number of training examples needed, one can ask annotators to examine or propose some candidate features. This is possible even for the very large feature sets that are typically used in NLP. In document classification, Raghavan et al. (2006) show that feature selection by an oracle could be helpful, and that humans are both rapid and reasonably good at distinguishing highly useful n-gram features from randomly chosen ones, even when viewing these n-grams out of context. Druck et al. (2008) show annotators some features f from a fixed feature set, and ask them to choose a class label y such that p(y | f ) is as high as possible. Haghighi and Klein (2006) do the reverse: for each class label y, they ask the annotators to propose a few \"prototypical\" features f such that p(y | f ) is as high as possible.", "cite_spans": [ { "start": 309, "end": 310, "text": "1", "ref_id": null }, { "start": 545, "end": 567, "text": "Raghavan et al. (2006)", "ref_id": "BIBREF10" }, { "start": 801, "end": 820, "text": "Druck et al. (2008)", "ref_id": "BIBREF2" }, { "start": 962, "end": 987, "text": "Haghighi and Klein (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Feature selection by humans", "sec_num": "1.2" }, { "text": "The above methods consider features out of context. An annotator might have an easier time examining features in context to recognize whether they appear relevant. This is particularly true for features that are only modestly or only sometimes helpful, which may be abundant in NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection in context", "sec_num": "1.3" }, { "text": "Thus, Raghavan et al. (2006) propose an active learning method in which, while classifying a training document, the annotator also identifies some features of that document as particularly relevant. E.g., the annotator might highlight particular unigrams as he or she reads the document. In their proposal, a feature that is highlighted in any document is assumed to be globally more relevant. 
Its dimension in feature space is scaled by a factor of 10 so that this feature has more influence on distances or inner products, and hence on the learned classifier.", "cite_spans": [ { "start": 6, "end": 28, "text": "Raghavan et al. (2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Feature selection in context", "sec_num": "1.3" }, { "text": "Despite the success of the above work, we have several concerns about asking annotators to identify globally relevant features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concerns about marking features", "sec_num": "1.4" }, { "text": "First, a feature in isolation really does not have a well-defined worth. A feature may be useful only in conjunction with other features, 2 or be useful only to the extent that other correlated features are not selected to do the same work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concerns about marking features", "sec_num": "1.4" }, { "text": "Second, it is not clear how an annotator would easily view and highlight features in context, except for the simplest feature sets. In the phrase Apple shares up 3%, there may be several features that fire on the substring Apple-responding to the string Apple, its case-invariant form apple, its lemma apple-(which would also respond to apples), its context-dependent sense Apple 2 , its part of speech noun, etc. How does the annotator indicate which of these features are relevant?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concerns about marking features", "sec_num": "1.4" }, { "text": "Third, annotating features is only appropriate when the feature set can be easily understood by a human. This is not always the case. It would be hard for annotators to read, write, or evaluate a description of a complex syntactic configuration in NLP or a convolution filter in machine vision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concerns about marking features", "sec_num": "1.4" }, { "text": "Fourth, traditional annotation efforts usually try to remain agnostic about the machine learning methods and features to be used. The project's cost is justified by saying that the annotations will be reused by many researchers (perhaps in a \"shared task\"), who are free to compete on how they tackle the learning problem. Unfortunately, feature annotation commits to a particular feature set at annotation time. Subsequent research cannot easily adjust the definition of the features, or obtain annotation of new features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concerns about marking features", "sec_num": "1.4" }, { "text": "To solve these problems, we propose that annotators should not select features but rather mark relevant portions of the example. In earlier work (Zaidan et al., 2007) , we called these markings \"rationales.\"", "cite_spans": [ { "start": 145, "end": 166, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Annotating Rationales", "sec_num": "2" }, { "text": "For example, when classifying a movie review as positive or negative, the annotator would also highlight phrases that supported that judgment. 
Figure 1 shows two such rationales.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 151, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Annotating Rationales", "sec_num": "2" }, { "text": "A multi-annotator timing study (Zaidan et al., 2007) found that highlighting rationale phrases while reading movie reviews only doubled annotation time, although annotators marked 5-11 rationale substrings in addition to the simple binary class. The benefit justified the extra time. Furthermore, much of the benefit could have been obtained by giving rationales for only a fraction of the reviews.", "cite_spans": [ { "start": 31, "end": 52, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Annotating Rationales", "sec_num": "2" }, { "text": "In the visual domain, when classifying an image as containing a zoo, the annotator might circle some animals or cages and the sign reading \"Zoo.\" The Peekaboom game (von Ahn et al., 2006) was in fact built to elicit such approximate yet relevant regions of images. Further scenarios were discussed in (Zaidan et al., 2007) : rationale annotation for named entities, linguistic relations, or handwritten digits.", "cite_spans": [ { "start": 165, "end": 187, "text": "(von Ahn et al., 2006)", "ref_id": "BIBREF11" }, { "start": 301, "end": 322, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Annotating Rationales", "sec_num": "2" }, { "text": "Annotating rationales does not require the annotator to think about the feature space, nor even to know anything about it. Arguably this makes annotation easier and more flexible. It also preserves the reusability of the annotated data. Anyone is free to reuse our collected rationales (section 4) to aid in learning a classifier with richer features, or a different kind of classifier altogether, using either our procedures or novel procedures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotating Rationales", "sec_num": "2" }, { "text": "We wish to learn the parameters \u03b8 of some classifier. How can the annotator's rationales help us to do this without many training examples? We will have to exploit a presumed relationship between the rationales and the optimal value of \u03b8 (i.e., the value that we would learn on an infinite training set).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotating Rationales", "sec_num": "2" }, { "text": "This paper exploits an explicit, parametric model of that relationship. The model's parameters \u03c6 are intended to capture what that annotator is doing when he or she marks rationales. Most importantly, they capture how he or she is influenced by the true \u03b8. Given this, our learning method will prefer values of \u03b8 that would adequately explain the rationales (as well as the training classifications).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotating Rationales", "sec_num": "2" }, { "text": "For concreteness, we will assume that the task is document classification. Our training data consists of n triples", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A generative approach", "sec_num": "3.1" }, { "text": "{(x 1 , y 1 , r 1 ), ..., (x n , y n , r n )})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A generative approach", "sec_num": "3.1" }, { "text": ", where x i is a document, y i is its annotated class, and r i is its rationale markup. 
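(To make this representation concrete, the following is a minimal sketch, with hypothetical toy values, of one training triple; reducing rationale substrings to per-token I/O tags anticipates the encoding introduced in section 5.2.)

```python
# A minimal sketch of one training triple (x_i, y_i, r_i); toy values only.

def spans_to_tags(tokens, spans, text):
    """Tag each token I if it overlaps any rationale span, else O."""
    tags, pos = [], 0
    for tok in tokens:
        start = text.index(tok, pos)  # character offset of this token
        end = start + len(tok)
        pos = end
        inside = any(s < end and start < e for (s, e) in spans)
        tags.append("I" if inside else "O")
    return tags

text = "you will enjoy the wit of this film"
x = text.split()     # the document, as a token sequence
y = +1               # annotated class: positive review
spans = [(9, 22)]    # one rationale: the highlighted substring "enjoy the wit"
r = spans_to_tags(x, spans, text)
# r == ['O', 'O', 'I', 'I', 'I', 'O', 'O', 'O']
```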
At test time we will have to predict y n+1 from x n+1 , without any r n+1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A generative approach", "sec_num": "3.1" }, { "text": "We propose to jointly choose parameter vectors \u03b8 and \u03c6 to maximize the following regularized conditional likelihood: 3 \\Big[ \\prod_{i=1}^{n} p(y_i, r_i \\mid x_i, \\theta, \\phi) \\Big] \\cdot p_{\\mathrm{prior}}(\\theta, \\phi) \\stackrel{\\mathrm{def}}{=} \\Big[ \\prod_{i=1}^{n} p_\\theta(y_i \\mid x_i) \\cdot p_\\phi(r_i \\mid x_i, y_i, \\theta) \\Big] \\cdot p_{\\mathrm{prior}}(\\theta, \\phi) \\qquad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A generative approach", "sec_num": "3.1" }, { "text": "Here we are trying to model all the annotations, both y i and r i . The first factor predicts y i using an ordinary probabilistic classifier p \u03b8 , while the novel second factor predicts r i using a model p \u03c6 of how annotators generate the rationale annotations. The crucial point is that the second factor depends on \u03b8 (since r i is supposed to reflect the relation between x i and y i that is modeled by \u03b8). As a result, the learner has an incentive to modify \u03b8 in a way that increases the second factor, even if this somewhat decreases the first factor on training data. 4 After training, one should simply use the first factor p \u03b8 (y | x) to classify test documents x. The second factor is irrelevant for test documents, since they have not been annotated with rationales r.", "cite_spans": [ { "start": 573, "end": 574, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A generative approach", "sec_num": "3.1" }, { "text": "The second factor may likewise be omitted for any training documents i that have not been annotated with rationales, as there is no r i to predict in those cases. In the extreme case where no documents are annotated with rationales, equation (1) reduces to the standard training procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A generative approach", "sec_num": "3.1" }, { "text": "Like ordinary class annotations, rationale annotations present us with a \"credit assignment problem,\" albeit a smaller one that is limited to features that fire \"in the vicinity\" of the rationale r. Some of these \u03b8-features were likely responsible for the classification y and hence triggered the rationale. Other such \u03b8-features were just innocent bystanders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "Thus, the interesting part of our model is p \u03c6 (r | x, y, \u03b8), which models the rationale annotation process. The rationales r reflect \u03b8, but in noisy ways.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "Taking this noisy channel idea seriously, p \u03c6 (r | x, y, \u03b8) should consider two questions when assessing whether r is a plausible set of rationales given \u03b8. First, it needs a \"language model\" of rationales: does r consist of rationales that are well-formed a priori, i.e., before \u03b8 is considered?
Second, it needs a \"channel model\": does r faithfully signal the features of \u03b8 that strongly support classifying x as y?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "If a feature contributes heavily to the classification of document x as class y, then the channel model should tell us which parts of document x tend to be highlighted as a result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "The channel model must know about the particular kinds of features that are extracted by f and scored by \u03b8. Suppose the feature not . . . gripping, 5 with weight \u03b8 h , is predictive of the annotated class y. This raises the probabilities of the annotator's highlighting each of various words, or combinations of words, in a phrase like not the most gripping banquet on film. The channel model parameters in \u03c6 should specify how much each of these probabilities is raised, based on the magnitude of \u03b8 h \u2208 R, the class y, and the fact that the feature is an instance of the template . . . . (Thus, \u03c6 has no parameters specific to the word gripping; it is a low-dimensional vector that only describes the annotator's general style in translating \u03b8 into r.)", "cite_spans": [ { "start": 602, "end": 603, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "5 Our current experiments use only unigram features, to match past work, but we use this example to outline how our approach generalizes to complex linguistic (or visual) features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "The language model, however, is independent of the feature set \u03b8. It models what rationales tend to look like in the input domain, e.g., documents or images. In the document case, \u03c6 should describe: How frequent and how long are typical rationales? Do their edges tend to align with punctuation or major syntactic boundaries in x? Are they rarer in the middle of a document, or in certain documents? 6 Thanks to the language model, we do not need to posit high \u03b8 features to explain every word in a rationale. The language model can \"explain away\" some words as having been highlighted only because this annotator prefers not to end a rationale in mid-phrase, or prefers to sweep up close-together features with a single long rationale rather than many short ones. Similarly, the language model can help explain why some words, though important, might not have been included in any rationale of r.", "cite_spans": [ { "start": 399, "end": 400, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "If there are multiple annotators, one can learn different \u03c6 parameters for each annotator, reflecting their different annotation styles.
7 We found this to be useful (section 8.2).", "cite_spans": [ { "start": 137, "end": 138, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "We remark that our generative modeling approach (equation (1)) would also apply if r were not rationale markup, but some other kind of so-called \"side information,\" such as the feature annotations discussed in section 1. For example, Raghavan et al. (2006) assume that a feature h is relevant (a binary distinction) iff it was selected in at least one document. But it might be more informative to observe that h was selected in 3 of the 10 documents where it appeared, and to predict this via a model p \u03c6 (3 of 10 | \u03b8 h ), where \u03c6 describes (e.g.) how to derive a binomial parameter nonlinearly from \u03b8 h . This approach would note how often h was marked and infer how relevant feature h is (i.e., infer \u03b8 h ). In this case, p \u03c6 is a simple channel that transforms relevant features into direct indicators of the feature. Our side information merely requires a more complex transformation: from relevant features into well-formed rationales, modulated by documents.", "cite_spans": [ { "start": 232, "end": 254, "text": "Raghavan et al. (2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "6 Our current experiments do not model this last point. However, we imagine that if the document only has a few \u03b8-features that support the classification, the annotator will probably mark most of them, whereas if such features are abundant, the annotator may lazily mark only a few of the strongest ones. A simple approach would equip \u03c6 with a different \"bias\" or \"threshold\" parameter \u03c6 x for each rationale training document x, to modulate the a priori probability of marking a rationale in x. By fitting this bias parameter, we deduce how lazy the annotator was (for whatever reason) on document x. If desired, a prior on \u03c6 x could consider whether x has many strong \u03b8-features, whether the annotator has recently had a coffee break, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "7 Given insufficient rationale data to recover some annotator's \u03c6 well, one could smooth using data from other annotators. But in our situation, \u03c6 had relatively few parameters to learn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy channel design of rationale models", "sec_num": "3.2" }, { "text": "In Zaidan et al. (2007) , we introduced the \"Movie Review Polarity Dataset Enriched with Annotator Rationales.\" 8 It is based on the dataset of Pang and Lee (2004) , 9 which consists of 1000 positive and 1000 negative movie reviews, tokenized and divided into 10 folds (F 0 -F 9 ). All our experiments use F 9 as their final blind test set.", "cite_spans": [ { "start": 3, "end": 23, "text": "Zaidan et al. (2007)", "ref_id": "BIBREF12" }, { "start": 144, "end": 163, "text": "Pang and Lee (2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Data: Movie Reviews", "sec_num": "4" }, { "text": "The enriched dataset adds rationale annotations produced by an annotator A0, who annotated folds F 0 -F 8 of the movie review set with rationales (in the form of textual substrings) that supported the gold-standard classifications.
We will use A0's data to determine the improvement of our method over a (log-linear) baseline model without rationales. We also use A0's data to compare against the \"masking SVM\" method and SVM baseline of Zaidan et al. (2007) .", "cite_spans": [ { "start": 430, "end": 450, "text": "Zaidan et al. (2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Data: Movie Reviews", "sec_num": "4" }, { "text": "Since \u03c6 can be tuned to a particular annotator, we would also like to know how well this works with data from annotators other than A0. We randomly selected 100 reviews (50 positive and 50 negative) and collected both class and rationale annotation data from each of six new annotators A3-A8, 10 following the same procedures as (Zaidan et al., 2007) . We report results using only data from A3-A5, since we used the data from A6-A8 as development data in the early stages of our work.", "cite_spans": [ { "start": 329, "end": 350, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Data: Movie Reviews", "sec_num": "4" }, { "text": "We use this new rationale-enriched dataset 8 to determine whether our method works well across annotators. We will only be able to carry out that comparison at small training set sizes, due to limited data from A3-A8. The larger A0 dataset will still allow us to evaluate our method on a range of training set sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Data: Movie Reviews", "sec_num": "4" }, { "text": "We define the basic classifier p \u03b8 in equation (1) to be a standard conditional log-linear model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling class annotations with p \u03b8", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_\\theta(y \\mid x) \\stackrel{\\mathrm{def}}{=} \\frac{\\exp(\\theta \\cdot f(x, y))}{Z_\\theta(x)} \\stackrel{\\mathrm{def}}{=} \\frac{u(x, y)}{Z_\\theta(x)}", "eq_num": "(2)" } ], "section": "Modeling class annotations with p \u03b8", "sec_num": "5.1" }, { "text": "where f (\u2022) extracts a feature vector from a classified document, \u03b8 are the corresponding weights of those features, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling class annotations with p \u03b8", "sec_num": "5.1" }, { "text": "Z_\\theta(x) \\stackrel{\\mathrm{def}}{=} \\sum_{y} u(x, y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling class annotations with p \u03b8", "sec_num": "5.1" }, { "text": "is a normalizer. We use the same set of binary features as in previous work on this dataset (Pang et al., 2002; Pang and Lee, 2004; Zaidan et al., 2007) . Specifically, let V = {v 1 , ..., v 17744 } be the set of word types with count \u2265 4 in the full 2000-document corpus. Define f h (x, y) to be y if v h appears at least once in x, and 0 otherwise. Thus \u03b8 \u2208 R 17744 , and positive weights in \u03b8 favor class label y = +1 and equally discourage y = \u22121, while negative weights do the opposite. This standard unigram feature set is linguistically impoverished, but serves as a good starting point for studying rationales.
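(As a minimal sketch of this classifier, assuming the y \u2208 {+1, \u22121} encoding and the unigram features just defined, with hypothetical toy weights:)

```python
import math

def p_theta(y, doc, theta):
    """p_theta(y | x) for the unigram model, where f_h(x, y) = y if word
    type v_h occurs in x (else 0).  Hence theta . f(x, y) is y times the
    summed weights of the word types present.  y is +1 or -1."""
    s = sum(theta.get(v, 0.0) for v in set(doc))      # sum of theta_v over x
    u = {c: math.exp(c * s) for c in (+1, -1)}        # unnormalized u(x, y)
    return u[y] / (u[+1] + u[-1])                     # normalize by Z_theta(x)

theta = {"gripping": 0.4, "boring": -0.7}             # toy weights
print(p_theta(+1, "a gripping film".split(), theta))  # ~0.69
```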
Future work should consider more complex features and how they are signaled by rationales, as discussed in section 3.2.", "cite_spans": [ { "start": 92, "end": 111, "text": "(Pang et al., 2002;", "ref_id": "BIBREF8" }, { "start": 112, "end": 131, "text": "Pang and Lee, 2004;", "ref_id": "BIBREF7" }, { "start": 132, "end": 152, "text": "Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling class annotations with p \u03b8", "sec_num": "5.1" }, { "text": "The rationales collected in this task are textual segments of a document to be classified. The document itself is a word token sequence x = (x 1 , ..., x M ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling rationale annotations with p \u03c6", "sec_num": "5.2" }, { "text": "We encode its rationales as a corresponding tag sequence r = (r 1 , ..., r M ), as illustrated in Figure 1 . Here r m \u2208 {I, O} according to whether the token x m is in a rationale (i.e., x m was at least partly highlighted) or outside all rationales. x 1 and x M are special boundary symbols, tagged with O.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 104, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Modeling rationale annotations with p \u03c6", "sec_num": "5.2" }, { "text": "We predict the full tag sequence r at once using a conditional random field (Lafferty et al., 2001) . A CRF is just another conditional log-linear model:", "cite_spans": [ { "start": 76, "end": 99, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling rationale annotations with p \u03c6", "sec_num": "5.2" }, { "text": "p_\\phi(r \\mid x, y, \\theta) \\stackrel{\\mathrm{def}}{=} \\frac{\\exp(\\phi \\cdot g(r, x, y, \\theta))}{Z_\\phi(x, y, \\theta)} \\stackrel{\\mathrm{def}}{=} \\frac{u(r, x, y, \\theta)}{Z_\\phi(x, y, \\theta)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling rationale annotations with p \u03c6", "sec_num": "5.2" }, { "text": "where g(\u2022) extracts a feature vector, \u03c6 are the corresponding weights of those features, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling rationale annotations with p \u03c6", "sec_num": "5.2" }, { "text": "Z_\\phi(x, y, \\theta) \\stackrel{\\mathrm{def}}{=} \\sum_{r} u(r, x, y, \\theta)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling rationale annotations with p \u03c6", "sec_num": "5.2" }, { "text": "is a normalizer. As usual for linear-chain CRFs, g(\u2022) extracts two kinds of features: first-order \"emission\" features that relate r m to (x m , y, \u03b8), and second-order \"transition\" features that relate r m to r m\u22121 (although some of these also look at x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling rationale annotations with p \u03c6", "sec_num": "5.2" }, { "text": "These two kinds of features respectively capture the \"channel model\" and \"language model\" of section 3.2. The former says r m is I because x m is associated with a relevant \u03b8-feature. The latter says r m is I simply because it is next to another I.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling rationale annotations with p \u03c6", "sec_num": "5.2" }, { "text": "Recall that our \u03b8-features (at present) correspond to unigrams.
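(Before specifying those emission features, a minimal sketch of the CRF's forward algorithm, which evaluates the normalizer Z \u03c6 over all 2^M tag sequences in time linear in M; the score inputs are hypothetical placeholders, and the recursion is left numerically unstabilized for brevity:)

```python
import math

def log_Z(M, log_emit, log_trans):
    """Forward algorithm for a linear-chain CRF over tags {I, O}.
    log_emit[m][t] scores tag t at position m (1..M);
    log_trans[(t1, t2)] scores a t1-to-t2 transition."""
    tags = ("I", "O")
    alpha = {t: log_emit[1][t] for t in tags}          # position 1
    for m in range(2, M + 1):
        # alpha[t2] = log sum over all prefixes ending in tag t2 at position m
        alpha = {t2: math.log(sum(math.exp(alpha[t1]
                                           + log_trans[(t1, t2)]
                                           + log_emit[m][t2])
                                  for t1 in tags))
                 for t2 in tags}
    return math.log(sum(math.exp(a) for a in alpha.values()))
```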
Given (x, y, \u03b8), let us say that a unigram w \u2208 x is relevant, irrelevant, or anti-relevant if y \u2022 \u03b8 w is respectively \u226b 0, \u2248 0, or \u226a 0. That is, w is relevant if its presence in x strongly supports the annotated class y, and anti-relevant if its presence strongly supports the opposite class \u2212y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emission \u03c6-features (\"channel model\")", "sec_num": "5.3" }, { "text": "Figure 2 : The function family B s in equation (3), shown for s \u2208 {10, 2, \u22122, \u221210}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emission \u03c6-features (\"channel model\")", "sec_num": "5.3" }, { "text": "We would like to learn the extent \u03c6 rel to which annotators try to include relevant unigrams in their rationales, and the (usually lesser) extent \u03c6 antirel to which they try to exclude anti-relevant unigrams. This will help us infer \u03b8 from the rationales.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emission \u03c6-features (\"channel model\")", "sec_num": "5.3" }, { "text": "The details are as follows. \u03c6 rel and \u03c6 antirel are the weights of two emission features extracted by g:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emission \u03c6-features (\"channel model\")", "sec_num": "5.3" }, { "text": "g_{\\mathrm{rel}}(x, y, r, \\theta) \\stackrel{\\mathrm{def}}{=} \\sum_{m=1}^{M} I(r_m = \\mathrm{I}) \\cdot B_{10}(y \\cdot \\theta_{x_m}) \\qquad g_{\\mathrm{antirel}}(x, y, r, \\theta) \\stackrel{\\mathrm{def}}{=} \\sum_{m=1}^{M} I(r_m = \\mathrm{I}) \\cdot B_{-10}(y \\cdot \\theta_{x_m})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emission \u03c6-features (\"channel model\")", "sec_num": "5.3" }, { "text": "Here I(\u2022) denotes the indicator function, returning 1 or 0 according to whether its argument is true or false. Relevance and negated anti-relevance are respectively measured by the differentiable nonlinear functions B 10 and B \u221210 , which are defined by Figure 2 . Sample values of B 10 and g rel are shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 254, "end": 262, "text": "Figure 2", "ref_id": null }, { "start": 310, "end": 318, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Emission \u03c6-features (\"channel model\")", "sec_num": "5.3" }, { "text": "How does this work? The g rel feature is a sum over all unigrams in the document x. It does not fire strongly on the irrelevant or anti-relevant unigrams, since B 10 is close to zero there. 11 But it fires positively on relevant unigrams w if they are tagged with I, and the strength of such firing increases approximately linearly with \u03b8 w . Since the weight \u03c6 rel > 0 in practice, this means that raising a relevant unigram's \u03b8 w (if y = +1) will proportionately raise its log-odds of being tagged with I. Symmetrically, since \u03c6 antirel > 0 in practice, lowering an anti-relevant unigram's \u03b8 w (if y = +1) will proportionately lower its log-odds of being tagged with I, though not necessarily at the same rate as for relevant unigrams. 12 Should \u03c6 also include traditional CRF emission features, which would recognize that particular words like great tend to be tagged as I? No! Such features would undoubtedly do a better job predicting the rationales and hence increasing equation (1). However, crucially, our true goal is not to predict the rationales but to recover the classifier parameters \u03b8. Thus, if great tends to be highlighted, then the model should not be permitted to explain this directly by increasing some feature \u03c6 great , but only indirectly by increasing \u03b8 great .
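(Figure 2's definition of B s in equation (3) is not reproduced in this version of the paper. One differentiable family consistent with the description above, and with footnote 12's identity B s (a) + B \u2212s (a) = a, is the scaled softplus B s (a) = log(1 + exp(s \u2022 a))/s; the sketch below assumes that form, so treat it as a hypothesis rather than the paper's exact definition.)

```python
import math

def B(s, a):
    """Assumed form of B_s: a soft threshold that is ~0 for a << 0 and ~a for
    a >> 0 (when s > 0), satisfying B_s(a) + B_{-s}(a) = a (footnote 12)."""
    return math.log1p(math.exp(s * a)) / s

def emission_features(words, tags, y, theta):
    """g_rel and g_antirel: sums over the tokens tagged I."""
    g_rel = sum(B(10, y * theta.get(w, 0.0))
                for w, t in zip(words, tags) if t == "I")
    g_antirel = sum(B(-10, y * theta.get(w, 0.0))
                    for w, t in zip(words, tags) if t == "I")
    return g_rel, g_antirel

# Sanity check against section 8.2: with phi_rel = 14.68 and
# phi_antirel = 15.30, a word with theta_w = 0.13 in a positive document
# should raise its odds of an I tag by roughly a factor of 7 over theta_w = 0.
phi_rel, phi_antirel = 14.68, 15.30
delta = (phi_rel * (B(10, 0.13) - B(10, 0.0))
         + phi_antirel * (B(-10, 0.13) - B(-10, 0.0)))
print(math.exp(delta))  # ~6.9
```

(That this assumed form reproduces the factor-of-7 example reported in section 8.2 is encouraging, but it remains an assumption.)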
We therefore permit our rationale prediction model to consider only the two emission features g rel and g antirel , which see the words in x only through their \u03b8-values.", "cite_spans": [ { "start": 737, "end": 739, "text": "12", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Emission \u03c6-features (\"channel model\")", "sec_num": "5.3" }, { "text": "Annotators highlight more than just the relevant unigrams. (After all, they aren't told that our current \u03b8-features are unigrams.) They tend to mark full phrases, though perhaps taking care to exclude anti-relevant portions. \u03c6 models these phrases' shape, via weights for several \"language model\" features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition \u03c6-features (\"language model\")", "sec_num": "5.4" }, { "text": "Most important are the 4 traditional CRF tag transition features g O-O , g O-I , g I-I , g I-O . For example, g O-I counts the number of O-to-I transitions in r (see Figure 1 ). Other things equal, an annotator with high \u03c6 O-I is predicted to have many rationales per 1000 words. And if \u03c6 I-I is high, rationales are predicted to be long phrases (including more irrelevant unigrams around or between the relevant ones).", "cite_spans": [], "ref_spans": [ { "start": 166, "end": 174, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transition \u03c6-features (\"language model\")", "sec_num": "5.4" }, { "text": "We also learn more refined versions of these features, which consider how the transition probabilities are influenced by the punctuation and syntax of the document x (independent of \u03b8). These refined features are more specific and hence more sparsely trained. Their weights reflect deviations from the simpler, \"backed-off\" transition features such as g O-I . (Again, see Figure 1 for examples.)", "cite_spans": [], "ref_spans": [ { "start": 372, "end": 380, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transition \u03c6-features (\"language model\")", "sec_num": "5.4" }, { "text": "Conditioning on left word. A feature of the form g t 1 (v)-t 2 is specified by a pair of tag types t 1 , t 2 \u2208 {I, O} and a vocabulary word type v. It counts the number of times a t 1 -t 2 transition occurs in r conditioned on v appearing as the first of the two word tokens where the transition occurs. Our experiments include g t 1 (v)-t 2 features that tie I-O and O-I transitions to the 4 most frequent punctuation marks v (comma, period, ?, !).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition \u03c6-features (\"language model\")", "sec_num": "5.4" }, { "text": "Conditioning on right word. A feature g t 1 -t 2 (v) is similar, but v must appear as the second of the two word tokens where the transition occurs. Here again, we use g t 1 -t 2 (v) features that tie I-O and O-I transitions to the four punctuation marks mentioned above. We also include five features that tie O-I transitions to the words no, not, so, very, and quite, since in our development data, those words were more likely than others to start rationales. 13", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition \u03c6-features (\"language model\")", "sec_num": "5.4" }, { "text": "Conditioning on syntactic boundary. We parsed each rationale-annotated training document (no parsing is needed at test time).
14 We then marked each word bigram x 1 -x 2 with three nonterminals: N End is the nonterminal of the largest constituent that contains x 1 and not x 2 , N Start is the nonterminal of the largest constituent that contains x 2 and not x 1 , and N Cross is the nonterminal of the smallest constituent that contains both x 1 and x 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition \u03c6-features (\"language model\")", "sec_num": "5.4" }, { "text": "For a nonterminal N and pair of tag types (t 1 , t 2 ), we define three features, g t 1 -t 2 /E=N , g t 1 -t 2 /S=N , and g t 1 -t 2 /C=N , which count the number of times a t 1 -t 2 transition occurs in r with N matching the N End , N Start , or N Cross nonterminal, respectively. Our experiments include these features for 11 common nonterminal types N (DOC, TOP, S, SBAR, FRAG, PRN, NP, VP, PP, ADJP, QP).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition \u03c6-features (\"language model\")", "sec_num": "5.4" }, { "text": "To train our model, we use L-BFGS to locally maximize the log of the objective function (1): 15", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training: Joint Optimization of \u03b8 and \u03c6", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\sum_{i=1}^{n} \\log p_\\theta(y_i \\mid x_i) - \\frac{\\|\\theta\\|^2}{2\\sigma_\\theta^2} + C \\Big( \\sum_{i=1}^{n} \\log p_\\phi(r_i \\mid x_i, y_i, \\theta) \\Big) - \\frac{\\|\\phi\\|^2}{2\\sigma_\\phi^2}", "eq_num": "(4)" } ], "section": "Training: Joint Optimization of \u03b8 and \u03c6", "sec_num": "6" }, { "text": "13 These are the function words with count \u2265 40 in a random sample of 100 documents, which were associated with the O-I tag transition at more than twice the average rate. We do not use any other lexical \u03c6-features that reference x, for fear that they would enable the learner to explain the rationales without changing \u03b8 as desired (see the end of section 5.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training: Joint Optimization of \u03b8 and \u03c6", "sec_num": "6" }, { "text": "14 We parse each sentence with the Collins parser (Collins, 1999) . Then the document has one big parse tree, whose root is DOC, with each sentence being a child of DOC.", "cite_spans": [ { "start": 50, "end": 65, "text": "(Collins, 1999)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Training: Joint Optimization of \u03b8 and \u03c6", "sec_num": "6" }, { "text": "15 One might expect this function to be convex because p \u03b8 and p \u03c6 are both log-linear models with no hidden variables. However, log p \u03c6 (r i | x i , y i , \u03b8) is not necessarily convex in \u03b8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training: Joint Optimization of \u03b8 and \u03c6", "sec_num": "6" }, { "text": "This defines p prior from (1) to be a standard diagonal Gaussian prior, with variances \u03c3 2 \u03b8 and \u03c3 2 \u03c6 for the two sets of parameters. We optimize \u03c3 2 \u03b8 in our experiments. As for \u03c3 2 \u03c6 , different values did not affect the results, since we have a large number of {I,O} rationale tags to train relatively few \u03c6 weights; so we simply use \u03c3 2 \u03c6 = 1 in all of our experiments. Note the new C factor in equation (4). Our initial experiments showed that optimizing equation (4) without C led to an increase in the likelihood of the rationale data at the expense of classification accuracy, which degraded noticeably.
This is because the second sum in (4) has a much larger magnitude than the first: in a set of 100 documents, it predicts around 74,000 binary {I,O} tags, versus the one hundred binary class labels. While we are willing to reduce the log-likelihood of the training classifications (the first sum) to a certain extent, focusing too much on modeling rationales (the second sum) is clearly not our ultimate goal, and so we optimize C on development data to achieve some balance between the two terms of equation (4). Typical values of C range from 1/300 to 1/50. 16 We perform alternating optimization on \u03b8 and \u03c6:", "cite_spans": [ { "start": 1166, "end": 1168, "text": "16", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training: Joint Optimization of \u03b8 and \u03c6", "sec_num": "6" }, { "text": "1. Initialize \u03b8 to maximize equation (4) but with C = 0 (i.e., based only on class data). 2. Fix \u03b8, and find \u03c6 that maximizes equation (4). 3. Fix \u03c6, and find \u03b8 that maximizes equation (4). 4. Repeat steps 2 and 3 until convergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training: Joint Optimization of \u03b8 and \u03c6", "sec_num": "6" }, { "text": "The L-BFGS method requires calculating the gradient of the objective function (4). The partial derivatives with respect to components of \u03b8 and \u03c6 involve calculating expectations of the feature functions, which can be computed in linear time (with respect to the size of the training set) using the forward-backward algorithm for CRFs. The partial derivatives also involve the derivative of (3), to determine how changing \u03b8 will affect the firing strength of the emission features g rel and g antirel .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training: Joint Optimization of \u03b8 and \u03c6", "sec_num": "6" }, { "text": "We report on two sets of experiments. In the first set, we use the annotation data that A3-A5 provided for the small set of 100 documents (as well as the data from A0 on those same 100 documents). In the second set, we use A0's abundant annotation data to evaluate our method with training set sizes up to 1600 documents, and compare it with three other methods: the log-linear baseline, the SVM baseline, and the SVM masking method of (Zaidan et al., 2007) .", "cite_spans": [ { "start": 429, "end": 450, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Procedures", "sec_num": "7" }, { "text": "The learning curves reported in section 8.1 are generated exactly as in (Zaidan et al., 2007) . Each curve shows classification accuracy at training set sizes T = 1, 2, ..., 8 folds (i.e., 200, 400, ..., 1600 training documents). For a given size T , the reported accuracy is an average of 9 experiments with different subsets of the entire training set, each of size T :", "cite_spans": [ { "start": 72, "end": 93, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Learning curves", "sec_num": "7.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\frac{1}{9} \\sum_{i=0}^{8} \\mathrm{acc}(F_9 \\mid F_{i+1} \\cup \\cdots \\cup F_{i+T})", "eq_num": "(5)" } ], "section": "Learning curves", "sec_num": "7.1" }, { "text": "where F j denotes the fold numbered j mod 9, and acc(F 9 | Y ) means classification accuracy on the held-out test set F 9 after training on set Y .
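(A minimal sketch of this averaging scheme, with hypothetical train and accuracy functions:)

```python
def learning_curve_point(T, folds, train, accuracy):
    """Equation (5): mean accuracy on F9 over the 9 rotated training sets.
    folds[0..8] are the training folds; folds[9] is the blind test set F9."""
    total = 0.0
    for i in range(9):
        train_set = [doc
                     for j in range(i + 1, i + T + 1)  # folds F_{i+1}..F_{i+T}
                     for doc in folds[j % 9]]          # fold numbered j mod 9
        model = train(train_set)       # hypothetical: fit theta (and phi)
        total += accuracy(model, folds[9])
    return total / 9
```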
We use an appropriate paired permutation test, detailed in (Zaidan et al., 2007) , to test differences in (5). We call a difference significant at p < 0.05.", "cite_spans": [ { "start": 207, "end": 228, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Learning curves", "sec_num": "7.1" }, { "text": "We compare our method to the \"masking SVM\" method of (Zaidan et al., 2007) . Briefly, that method used rationales to construct several so-called contrast examples from every training example. A contrast example is obtained by \"masking out\" one of the rationales highlighted to support the training example's class. A good classifier should have more trouble on this modified example. Hence, Zaidan et al. (2007) required the learned SVM to classify each contrast example with a smaller margin than the corresponding original example (and did not require it to be classified correctly).", "cite_spans": [ { "start": 53, "end": 74, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" }, { "start": 391, "end": 411, "text": "Zaidan et al. (2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to \"masking SVM\" method", "sec_num": "7.2" }, { "text": "The masking SVM learner relies on a simple geometric principle, is trivial to implement on top of an existing SVM learner, and works well. However, we believe that the generative method we present here is more interesting and should apply more broadly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to \"masking SVM\" method", "sec_num": "7.2" }, { "text": "Figure 3 : Classification accuracy curves for the 4 methods: the two baseline learners that only utilize class data, and the two learners that also utilize rationale annotations. The SVM curves are from (Zaidan et al., 2007) .", "cite_spans": [ { "start": 456, "end": 477, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 253, "end": 261, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Comparison to \"masking SVM\" method", "sec_num": "7.2" }, { "text": "First, the masking method is specific to improving an SVM learner, whereas our method can be used to improve any classifier by adding a rationale-based regularizer (the second half of equation (4)) to its objective function during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to \"masking SVM\" method", "sec_num": "7.2" }, { "text": "More important, there are tasks where it is unclear how to generate contrast examples. For the movie review task, it was natural to mask out a rationale by pretending its words never occurred in the document. After all, most word types do not appear in most documents, so it is natural to consider the nonpresence of a word as a \"default\" state to which we can revert. But in an image classification task, how should one modify the image's features to ignore some spatial region marked as a rationale? There is usually no natural \"default\" value to which we could set the pixels. Our method, on the other hand, eliminates contrast examples altogether.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to \"masking SVM\" method", "sec_num": "7.2" }, { "text": "Fig. 3 shows learning curves for the four methods. A log-linear model shows large and significant improvements, at all training sizes, when we incorporate rationales into its training via equation (4).
Moreover, the resulting classifier consistently outperforms 17 prior work, the masking SVM, which starts with a slightly better baseline classifier (an SVM) but incorporates the rationales more crudely.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 42, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "The added benefit of rationales", "sec_num": "8.1" }, { "text": "Table 1 : Accuracy rates using each annotator's data. In a given column, a value in italics is not significantly different from the highest value in that column, which is boldfaced. The size=20 results average over 5 experiments.", "cite_spans": [], "ref_spans": [ { "start": 437, "end": 444, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results and Analysis", "sec_num": "8" }, { "text": "To confirm that we could successfully model annotators other than A0, we performed the same comparison for annotators A3-A5; each had provided class and rationale annotations on a small 100-document training set. We trained a separate \u03c6 for each annotator. Table 1 shows improvements over baseline, usually significant, at two training set sizes.", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 263, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results and Analysis", "sec_num": "8" }, { "text": "Examining the learned weights \u03c6 gives insight into annotator behavior. High weights include I-O and O-I transitions conditioned on punctuation, e.g., \u03c6 I(.)-O = 3.55, 18 as well as rationales ending at the end of a major phrase, e.g., \u03c6 I-O/E=VP = 1.88.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "8.2" }, { "text": "The large emission feature weights, e.g., \u03c6 rel = 14.68 and \u03c6 antirel = 15.30, tie rationales closely to \u03b8 values, as hoped. For example, in Figure 1 , the word w = succeeds, with \u03b8 w = 0.13, drives up p(I)/p(O) by a factor of 7 (in a positive document) relative to a word with \u03b8 w = 0.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 149, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Analysis", "sec_num": "8.2" }, { "text": "In fact, feature ablation experiments showed that almost all the classification benefit from rationales can be obtained by using only these 2 emission \u03c6-features and the 4 unconditioned transition \u03c6-features. Our full \u03c6 (115 features) merely improves our ability to predict the rationales (whose likelihood does increase significantly with more features).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "8.2" }, { "text": "We also checked that annotators' styles differ enough that it helps to tune \u03c6 to the \"target\" annotator A who gave the rationales. Table 3 shows that a \u03c6 model trained on A's own rationales does best at predicting new rationales from A.", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "8.2" }, { "text": "18 When trained on folds F4-F8 with A0's rationales.
", "cite_spans": [ { "start": 259, "end": 261, "text": "18", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "8.2" }, { "text": "\u03c6 A0 \u03c6 A3 \u03c6 A4 \u03c6 A5 Baseline\n\u03b8 A0 76.0 73.0 74.0 73.0 71.0\n\u03b8 A3 73.0 76.0 74.0 73.0 73.0\n\u03b8 A4 75.0 73.0 77.0 74.0 71.0\n\u03b8 A5 74.0 71.0 72.0 74.0 70.0\nTable 2 : Accuracy rate for an annotator's \u03b8 (rows) obtained when using some other annotator's \u03c6 (columns).", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 244, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "8.2" }, { "text": "Table 2 shows that, as a result, classification performance on the test set is usually best if it was A's own \u03c6 that was used to help learn \u03b8 from A's rationales. In both cases, however, a different annotator's \u03c6 is better than nothing.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "8.2" }, { "text": "Notice that the diagonal entries and the baseline column are taken from rows of Table 1 (size=100).", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "8.2" }, { "text": "\u03c6 A0 \u03c6 A3 \u03c6 A4 \u03c6 A5 Trivial model\n\u2212L(r A0 ) 0.073 0.086 0.077 0.088 0.135\n\u2212L(r A3 ) 0.084 0.068 0.071 0.068 0.130\n\u2212L(r A4 ) 0.088 0.084 0.075 0.085 0.153\n\u2212L(r A5 ) 0.058 0.044 0.047 0.044 0.111\nTable 3 : Cross-entropy per tag of rationale annotations r for each annotator (rows), when predicted from that annotator's x and \u03b8 via a possibly different annotator's \u03c6 (columns). For comparison, the trivial model is a bigram model of r, which is trained on the target annotator but ignores x and \u03b8. 5-fold cross-validation on the 100-document set was used to prevent testing on training data.", "cite_spans": [], "ref_spans": [ { "start": 196, "end": 203, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "8.2" }, { "text": "1 Most NLP systems use thousands or millions of features, because it is helpful to include lexical features over a large vocabulary, often conjoined with lexical or non-lexical context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "2 For example, a linear classifier can learn that most training examples satisfy A \u2192 B by setting \u03b8 A = \u22125 and \u03b8 A\u2227B = +5, but this solution requires selecting both A and A\u2227B as features. More simply, a polynomial kernel can consider the conjunction A \u2227 B only if both A and B are selected as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Modeling Rationale Annotations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Rationale Annotations", "sec_num": "3" }, { "text": "3 As rationales are more indirect than explicit features, they present a trickier machine learning problem. It would be preferable to integrate out \u03c6 (and even \u03b8), but more difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "4 Interestingly, even examples where the annotation y i is wrong or unhelpful can provide useful information about \u03b8 via the pair (y i , r i ). Two annotators marking the same movie review might disagree on whether it is overall a positive or negative review, but the second factor still allows learning positive features from the first annotator's positive rationales, and negative features from the second annotator's negative rationales.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "8 Available at http://cs.jhu.edu/\u223cozaidan/rationales.
9 Polarity dataset version 2.0. 10 We avoid annotator names A1-A2, which were already used in (Zaidan et al., 2007).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "11 B 10 sets the threshold for relevance to be about 0. One could also include versions of the g rel feature that set a higher threshold, using B 10 (y \u2022 \u03b8 x m \u2212 threshold).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "12 If the two rates are equal (\u03c6 rel = \u03c6 antirel ), we get a simpler model in which the log-odds change exactly linearly with \u03b8 w for each w, regardless of w's relevance/irrelevance/anti-relevance. This follows from the fact that B s (a) + B \u2212s (a) simplifies to a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "16 C also balances our confidence in the classifications y against our confidence in the rationales r; either may be noisy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "17 Differences are not significant at sizes 200, 1000, and 1600.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We have demonstrated an effective method for eliciting extra knowledge from naive annotators, in the form of lightweight \"rationales\" for their annotations. By explicitly modeling the annotator's rationale-marking process, we are able to infer a better model of the original annotations. We showed that our method performs significantly better than two strong baseline classifiers, and also outperforms our previous discriminative method for exploiting rationales (Zaidan et al., 2007) . We also saw that it worked across four annotators who have different rationale-marking styles. In future, we are interested in new domains that can adaptively solicit rationales for some or all training examples. Our new method, being essentially Bayesian inference, is potentially extensible to many other situations: other tasks, classifier architectures, and more complex features.", "cite_spans": [ { "start": 462, "end": 483, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Man [and woman] vs. machine: A case study in base noun phrase learning", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th ACL Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Brill and Grace Ngai. 1999. Man [and woman] vs. machine: A case study in base noun phrase learning. In Proceedings of the 37th ACL Conference.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.", "links": null },
{ "text": "C also balances our confidence in the classifications y against our confidence in the rationales r; either may be noisy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Differences are not significant at sizes 200, 1000, and 1600.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We have demonstrated an effective method for eliciting extra knowledge from naive annotators, in the form of lightweight \"rationales\" for their annotations. By explicitly modeling the annotator's rationale-marking process, we are able to infer a better model of the original annotations. We showed that our method performs significantly better than two strong baseline classifiers, and also outperforms our previous discriminative method for exploiting rationales (Zaidan et al., 2007). We also saw that it worked across four annotators who have different rationale-marking styles. In future work, we are interested in new domains that could adaptively solicit rationales for some or all training examples. Our new method, being essentially Bayesian inference, is potentially extensible to many other situations: other tasks, classifier architectures, and more complex features.", "cite_spans": [ { "start": 462, "end": 483, "text": "(Zaidan et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Man [and woman] vs. machine: A case study in base noun phrase learning", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th ACL Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Brill and Grace Ngai. 1999. Man [and woman] vs. machine: A case study in base noun phrase learning. In Proceedings of the 37th ACL Conference.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "Ph.D. thesis, University of Pennsylvania", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning from labeled features using generalized expectation criteria", "authors": [ { "first": "G", "middle": [], "last": "Druck", "suffix": "" }, { "first": "G", "middle": [], "last": "Mann", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACM Special Interest Group on Information Retrieval (SIGIR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Druck, G. Mann, and A. McCallum. 2008. Learning from labeled features using generalized expectation criteria. In Proceedings of ACM Special Interest Group on Information Retrieval (SIGIR).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Prototype-driven learning for sequence models", "authors": [ { "first": "A", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", "volume": "", "issue": "", "pages": "320-327", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Haghighi and D. Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 320-327, New York City, USA, June. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Rule writing or annotation: Cost-efficient resource usage for base noun phrase chunking", "authors": [ { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "117-125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grace Ngai and David Yarowsky. 2000. Rule writing or annotation: Cost-efficient resource usage for base noun phrase chunking. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 117-125, Hong Kong.", "links": null },
of ACL", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proc. of ACL, pages 271- 278.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Thumbs up? Sentiment classification using machine learning techniques", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proc. of EMNLP, pages 79-86.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An interactive algorithm for asking and incorporating feature feedback into support vector machines", "authors": [ { "first": "Hema", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "James", "middle": [], "last": "Allan", "suffix": "" } ], "year": 2007, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hema Raghavan and James Allan. 2007. An interactive algorithm for asking and incorporating feature feed- back into support vector machines. In Proceedings of SIGIR.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Active learning on both features and instances", "authors": [ { "first": "Omid", "middle": [], "last": "Hema Raghavan", "suffix": "" }, { "first": "Rosie", "middle": [], "last": "Madani", "suffix": "" }, { "first": "", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2006, "venue": "Journal of Machine Learning Research", "volume": "7", "issue": "", "pages": "1655--1686", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hema Raghavan, Omid Madani, and Rosie Jones. 2006. Active learning on both features and instances. Jour- nal of Machine Learning Research, 7:1655-1686, Aug.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Peekaboom: A game for locating objects", "authors": [ { "first": "Ruoran", "middle": [], "last": "Luis Von Ahn", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Blum", "suffix": "" } ], "year": 2006, "venue": "CHI '06: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "55--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luis von Ahn, Ruoran Liu, and Manuel Blum. 2006. Peekaboom: A game for locating objects. In CHI '06: Proceedings of the SIGCHI Conference on Hu- man Factors in Computing Systems, pages 55-64.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Using \"annotator rationales\" to improve machine learning for text categorization", "authors": [ { "first": "Omar", "middle": [], "last": "Zaidan", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Piatko", "suffix": "" } ], "year": 2007, "venue": "HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using \"annotator rationales\" to improve machine learning for text categorization. 
"BIBREF12": { "ref_id": "b12", "title": "Using \"annotator rationales\" to improve machine learning for text categorization", "authors": [ { "first": "Omar", "middle": [], "last": "Zaidan", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Piatko", "suffix": "" } ], "year": 2007, "venue": "NAACL HLT 2007; Proceedings of the Main Conference", "volume": "", "issue": "", "pages": "260-267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using \"annotator rationales\" to improve machine learning for text categorization. In NAACL HLT 2007; Proceedings of the Main Conference, pages 260-267, April.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Rationales as sequence annotation: the annotator highlighted two textual segments as rationales for a positive class. Highlighted words in x are tagged I in r, and other words are tagged O. The figure also shows some \u03c6-features. For instance, g_O(,)-I is a count of O-I transitions that occur with a comma as the left word. Notice also that g_rel is the sum of the underlined values.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Figure 2:", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "B_s(a) = (log(1 + exp(a \u2022 s)) \u2212 log(2))/s (3), and graphed in", "type_str": "figure", "num": null } } } }
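For readers reconstructing Eq. (3) from the caption above: a small sketch (ours, not the authors' code) of the squashing function B_s, checking that B_s(0) = 0, that B_s behaves like a softened hinge (near 0 for a well below 0, near a for a well above 0), and that B_s(a) + B_{\u2212s}(a) = a numerically:

```python
# Sketch (not the authors' code) of the squashing function in Eq. (3):
#   B_s(a) = (log(1 + exp(a*s)) - log 2) / s
import math

def B(a: float, s: float) -> float:
    # log1p is used for numerical care; for a=0 this is exactly 0.
    return (math.log1p(math.exp(a * s)) - math.log(2.0)) / s

for a in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"a={a:+.1f}  B_10(a)={B(a, 10):+.4f}  B_10(a)+B_-10(a)={B(a, 10) + B(a, -10):+.4f}")
# B_10(a) is ~0 for a << 0 and ~a for a >> 0 (a softened hinge),
# and the last column reproduces a, confirming B_s(a) + B_{-s}(a) = a.
```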