{ "paper_id": "D15-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:29:14.011263Z" }, "title": "Noise or additional information? Leveraging crowdsource annotation item agreement for natural language tasks", "authors": [ { "first": "Emily", "middle": [ "K" ], "last": "Jamison", "suffix": "", "affiliation": { "laboratory": "Ubiquitous Knowledge Processing Lab (UKP-TUDA)", "institution": "Technische Universit\u00e4t Darmstadt", "location": {} }, "email": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "", "affiliation": { "laboratory": "Ubiquitous Knowledge Processing Lab (UKP-TUDA)", "institution": "Technische Universit\u00e4t Darmstadt", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In order to reduce noise in training data, most natural language crowdsourcing annotation tasks gather redundant labels and aggregate them into an integrated label, which is provided to the classifier. However, aggregation discards potentially useful information from linguistically ambiguous instances. For five natural language tasks, we pass item agreement on to the task classifier via soft labeling and low-agreement filtering of the training dataset. We find a statistically significant benefit from low item agreement training filtering in four of our five tasks, and no systematic benefit from soft labeling.", "pdf_parse": { "paper_id": "D15-1035", "_pdf_hash": "", "abstract": [ { "text": "In order to reduce noise in training data, most natural language crowdsourcing annotation tasks gather redundant labels and aggregate them into an integrated label, which is provided to the classifier. However, aggregation discards potentially useful information from linguistically ambiguous instances. For five natural language tasks, we pass item agreement on to the task classifier via soft labeling and low-agreement filtering of the training dataset. We find a statistically significant benefit from low item agreement training filtering in four of our five tasks, and no systematic benefit from soft labeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Crowdsourcing is a cheap and increasinglyutilized source of annotation labels. In a typical annotation task, five or ten labels are collected for an instance, and are aggregated together into an integrated label. The high number of labels is used to compensate for worker bias, task misunderstanding, lack of interest, incompetance, and malicious intent (Wauthier and Jordan, 2011) .", "cite_spans": [ { "start": 354, "end": 381, "text": "(Wauthier and Jordan, 2011)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Majority voting for label aggregation has been found effective in filtering noisy labels (Nowak and R\u00fcger, 2010) . Labels can be aggregated under weighted conditions reflecting the reliability of the annotator (Whitehill et al., 2009; Welinder et al., 2010) . Certain classifiers are also robust to random (unbiased) label noise (Tibshirani and Manning, 2014; Beigman and Beigman Klebanov, 2009) . However, minority label information is discarded by aggregation, and when the labels were gathered under controlled circumstances, these labels may reflect linguistic intuition and contain useful information (Plank et al., 2014b) . 
Two alternative strategies that allow the classifier to learn from the item agreement include training instance filtering and soft labeling. Filtering training instances by item agreement removes low-agreement instances from the training set. Soft labeling assigns a classifier weight to a training instance based on the item agreement.", "cite_spans": [ { "start": 89, "end": 112, "text": "(Nowak and R\u00fcger, 2010)", "ref_id": "BIBREF19" }, { "start": 210, "end": 234, "text": "(Whitehill et al., 2009;", "ref_id": "BIBREF38" }, { "start": 235, "end": 257, "text": "Welinder et al., 2010)", "ref_id": "BIBREF37" }, { "start": 329, "end": 359, "text": "(Tibshirani and Manning, 2014;", "ref_id": "BIBREF35" }, { "start": 360, "end": 395, "text": "Beigman and Beigman Klebanov, 2009)", "ref_id": "BIBREF0" }, { "start": 606, "end": 627, "text": "(Plank et al., 2014b)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Consider two Affect Recognition instances and their Krippendorff (1970) \u03b1 item agreement, shown in Figures 1 and 2 . In Figure 1 , annotators mostly agreed that the headline expresses little sadness. But in Figure 2 , the low item agreement may be caused by instance difficulty (i.e., Is a war zone sad or just bad?): a Hard Case (Zeman, 2010) . Previous work (Beigman Klebanov and Beigman, 2014; Beigman and Beigman Klebanov, 2009) has shown that training strategy may affect Hard and Easy Case test instances differently. In this work, for five natural language tasks, we examine the impact of passing crowdsource item agreement on to the task classifier, by means of training instance filtering and soft labeling. We construct classifiers for Biased Text Detection, Stemming Classification, Recognizing Textual Entailment, Twitter POS Tagging, and Affect Recognition, and evaluate the effect of our different training strategies on the accuracy of each task. These tasks represent a wide range of machine learning tasks typical in NLP: sentence-level SVM regression using n-grams; word pairs with character-based features and binary SVM classification; pairwise sentence binary SVM classification with similarity score features; CRF sequence word classification with a range of feature types; and sentence-level regression using token-weight averaging, respectively. 
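As a rough sketch of these two uses of item agreement (the instances, cutoff values, and function names below are hypothetical; the experiments use per-item Krippendorff \u03b1 rather than this simple percentage agreement), low-agreement instances can be separated out and filtered from training like so:

```python
from collections import Counter

def item_agreement(labels):
    """Share of redundant labels matching the modal label; a simple stand-in
    for the per-item Krippendorff alpha used in the experiments."""
    return Counter(labels).most_common(1)[0][1] / len(labels)

def split_and_filter(instances, cutoff=0.6):
    """Mark Hard Cases (agreement below cutoff) and keep only high-agreement training instances."""
    hard = [x for x, labels in instances if item_agreement(labels) < cutoff]
    kept = [(x, labels) for x, labels in instances if item_agreement(labels) >= cutoff]
    return hard, kept

# Hypothetical (text, crowd labels) pairs: the first is an Easy Case, the second a Hard Case.
data = [("Taj Mahal gets facelift", ["low", "low", "low", "low", "high"]),
        ("Clinton proposes war limits", ["low", "high", "high", "low", "mid"])]
print(split_and_filter(data))
```
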
We use preexisting, freely-available crowdsourced datasets and post all our experiment code on GitHub 1 .", "cite_spans": [ { "start": 52, "end": 71, "text": "Krippendorff (1970)", "ref_id": "BIBREF13" }, { "start": 285, "end": 298, "text": "(Zeman, 2010)", "ref_id": "BIBREF40" }, { "start": 315, "end": 351, "text": "(Beigman Klebanov and Beigman, 2014;", "ref_id": "BIBREF1" }, { "start": 352, "end": 387, "text": "Beigman and Beigman Klebanov, 2009)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 75, "end": 83, "text": "Figure 1", "ref_id": null }, { "start": 162, "end": 170, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contributions This is the first work (1) to apply item-agreement-weighted soft labeling from crowdsourced labels to multiple real natural language tasks; (2) to filter training instances by item agreement from crowdsourced labels, for multiple natural language tasks; (3) to evaluate classifier performance on high item agreement (Easy Case) instances and low item agreement (Hard Case) instances across multiple natural language tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dekel and Shamir (2009) calculated integrated labels for an information retrieval crowdsourced dataset, and identified low-quality workers by deviation from the integrated label. Removal of these workers' labels improved classifier performance on data that was not similarly filtered. While much work (Dawid and Skene, 1979; Ipeirotis et al., 2010; Dalvi et al., 2013) has explored techniques to model worker ability, bias, and instance difficulty while aggregating labels, there is no evaluation comparing classifiers trained on the new integrated labels with other options, on their respective NLP tasks.", "cite_spans": [ { "start": 301, "end": 324, "text": "(Dawid and Skene, 1979;", "ref_id": "BIBREF5" }, { "start": 325, "end": 348, "text": "Ipeirotis et al., 2010;", "ref_id": "BIBREF10" }, { "start": 349, "end": 368, "text": "Dalvi et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Training instance filtering aims to remove mislabeled instances from the training dataset. Sculley and Cormack (2008) learned a logistic regression classifier to identify and filter noisy labels in a spam email filtering task. They also proposed a label correcting technique that replaces identified noisy labels with \"corrected\" labels, at the risk of introducing noise into the corpus. Rebbapragada et al. (2009) developed a label noise detection technique to cluster training instances and remove label outliers. Raykar et al. (2010) jointly learned a classifier/regressor, annotator accuracy, and the integrated label on datasets with multiple noisy labels, outperforming Smyth et al. (1995) 's model of estimating ground truth labels.", "cite_spans": [ { "start": 388, "end": 414, "text": "Rebbapragada et al. (2009)", "ref_id": "BIBREF26" }, { "start": 676, "end": 695, "text": "Smyth et al. (1995)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Soft labeling, or the association of one training instance with multiple, weighted, conflicting labels, is a technique to model noisy training data. 
Thiel (2008) found that soft-labeled training data produced more accurate classifiers than hard-labeled training data, with both Radial Basis Function Networks and Fuzzy-Input Fuzzy-Output SVMs. Shen and Lapata (2007) used soft labeling to model their semantic frame structures in a question answering task, to represent that the semantic frames can bear multiple semantic roles.", "cite_spans": [ { "start": 149, "end": 161, "text": "Thiel (2008)", "ref_id": "BIBREF34" }, { "start": 344, "end": 366, "text": "Shen and Lapata (2007)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Previous research has found that, for a few individual NLP tasks, training while incorporating label noise weight may produce a better model. Mart\u00ednez Alonso et al. (2015) show that informing a parser of annotator disagreement via the loss function reduced error in labeled attachments by 6.4%. Plank et al. (2014a) incorporate annotator disagreement in POS tags into the loss function of a POS-tag machine learner, resulting in improved performance on downstream chunking. Beigman Klebanov and Beigman (2014) observed that, on a task classifying text as semantically old or new, the inclusion of Hard Cases in training data resulted in reduced classifier performance on Easy Cases.", "cite_spans": [ { "start": 288, "end": 308, "text": "Plank et al. (2014a)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We built systems for the five NLP tasks, and trained them using aggregation, soft labeling, and instance screening strategies. When labels were numeric, the integrated label was the average 2 . When labels were nominal, the integrated label was the majority vote. Krippendorff (1970) 's \u03b1 item agreement was used to filter ambiguous training instances. For soft labeling, percentage item agreement was used to assign instance weights. We followed Sheng et al. (2008) 's suggested Multiplied Examples procedure: for each unlabeled instance x_i and each existing label y_i \u2208 L_i = {y_ij} (as annotated by worker j), we create one replica of x_i, assign it y_i, and weight the instance according to the count of y_i in L_i (i.e., the percentage item agreement). For each training strategy (SoftLabel, etc.), the training instances were changed by the strategy, but the test instances were unaffected. For the division of test instances into Hard and Easy Cases, the training instances were unaffected, but the test instances were filtered by \u03b1 item agreement. Hard/Easy Case parameters were chosen to divide the corpus by item agreement into roughly equal portions 3 , relative to the corpus, for post-hoc error analysis.", "cite_spans": [ { "start": 260, "end": 279, "text": "Krippendorff (1970)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Overview of Experiments", "sec_num": "3" }, { "text": "All systems except Affect Recognition were constructed using DKPro Text Classification (Daxenberger et al., 2014) , and used Weka's SMO (Platt, 1999) or SMOreg (Shevade et al., 2000) implementations with default parameters, with 10-fold (or 5-fold, for computationally-intensive POS Tagging) cross-validation. 
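As a minimal sketch of the Multiplied Examples weighting just described (the instance and function name are hypothetical; the actual experiments run inside the DKPro TC / Weka pipeline rather than as a standalone snippet), each distinct crowd label yields one weighted replica of the instance:

```python
from collections import Counter

def multiplied_examples(x, labels):
    """One weighted replica of instance x per distinct crowd label, weighted by
    that label's relative frequency among the redundant labels (Sheng et al., 2008)."""
    counts = Counter(labels)
    n = len(labels)
    return [(x, label, count / n) for label, count in counts.items()]

# Hypothetical stemming pair labeled by five workers (1 = correctly stemmed, 0 = not).
print(multiplied_examples(("running", "run"), [1, 1, 1, 0, 1]))
# -> [(('running', 'run'), 1, 0.8), (('running', 'run'), 0, 0.2)]
```
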
More details are available in the Supplemental Notes document.", "cite_spans": [ { "start": 87, "end": 113, "text": "(Daxenberger et al., 2014)", "ref_id": "BIBREF6" }, { "start": 136, "end": 149, "text": "(Platt, 1999)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Overview of Experiments", "sec_num": "3" }, { "text": "HighAgree and VeryHigh utilize agreement cutoff parameters that vary per corpus. These strategies are a discretized approximation of the gradual effect of filtering low agreement instances from the training data. For any given corpus, we could not use a cutoff value equal to no filtering, or that eliminated a class. If there were only 2 remaining cutoffs, we used these. If there were more candidate cutoff values, we trained and evaluated a classifier on a development set and chose the value for HighAgree that maximized Hard Case performance on the development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreement Parameters Training strategies", "sec_num": null }, { "text": "Percentage Agreement In this paper, we follow Beigman Klebanov and Beigman (2014) in using the nominal agreement categories Hard Cases and Easy Cases to separate instances by item agreement. However, unlike Beigman Klebanov and Beigman (2014) who use simple percentage agreement, we calculate item-specific agreement via Krippendorff (1970) 's \u03b1 item agreement 4 , with Nominal, Ordinal, or Ratio distance metrics as appropriate. The agreement is expressed in the range (-1.0 -1.0); 1.0 is perfect agreement.", "cite_spans": [ { "start": 321, "end": 340, "text": "Krippendorff (1970)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Agreement Parameters Training strategies", "sec_num": null }, { "text": "This task detects the use of bias in political text. The corpus (Yano et al., 2010) 5 consists of 1,041 sentences from American political blogs. For each sentence, five crowdsource annotators chose a label no bias, some bias, and very biased. We follow Yano et al. (2010) in representing the amount of bias on a numerical scale (1-3). Hard/Easy Case cutoffs were <-.21 and >.20. Of 1041 total instances, 161 were Hard Cases (<-.21) and 499 were Easy Cases (>.20).", "cite_spans": [ { "start": 64, "end": 83, "text": "(Yano et al., 2010)", "ref_id": "BIBREF39" }, { "start": 253, "end": 271, "text": "Yano et al. (2010)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Biased Language Detection", "sec_num": "3.1" }, { "text": "We built an SVM regression task using unigrams, to predict the numerical amount of bias. The gold standard was the integrated labels. Itemspecific agreement was calculated with Ordinal Distance Function (Krippendorff, 1980) . We used the following training strategies: VeryHigh Filtered for agreement >0.4. HighAgree Filtered for agreement >-0.2. SoftLabel One training instance is generated for each label from a text, and weighted by how many times that label occurred with the text. SLLimited SoftLabel, except that training instances with a label distance >1.0 from the original text label average are discarded.", "cite_spans": [ { "start": 203, "end": 223, "text": "(Krippendorff, 1980)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Biased Language Detection", "sec_num": "3.1" }, { "text": "The goal of this binary classification task is to predict, given an original word and a stemmed version of the word, whether the stemmed version has been correctly stemmed. 
The word pair was correct if: the stemmed word contained one fewer affix; or if the original word was a compound, the stemmed word had a space inserted between the components; or if the original word was misspelled, the stemmed word was deleted; or if the original word had no affixes and was not a compound and was not misspelled, then the stemmed word had no changes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphological Stemming", "sec_num": "3.2" }, { "text": "This dataset was compiled by Carpenter et al. (2009) 6 . The dataset contains 6679 word pairs; most pairs have 5 labels each. In the cross-validation division, no pairs with the same original word could be split across training and test data. The gold standard was the integrated label, with 4898 positive and 1781 negative pairs. Hard/Easy Case cutoffs were <-.5 and >.5. Of 6679 total instances, 822 were Hard Cases (<-.5) and 3615 were Easy Cases (>.5). Features used are combinations of the characters after the removal of the longest common substring between the word pair, including 0-2 additional characters from the substring; word boundaries are marked.", "cite_spans": [ { "start": 29, "end": 52, "text": "Carpenter et al. (2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Morphological Stemming", "sec_num": "3.2" }, { "text": "Training strategies new to the Stemming task include: HighAgree Filtered for agreement >-0.1. SLLimited MajVote with instances weighted by the frequency of the label for the text pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphological Stemming", "sec_num": "3.2" }, { "text": "Recognizing textual entailment is the process of determining whether, given two sentences text and hypothesis, the meaning of the hypothesis can be inferred from the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recognising Textual Entailment", "sec_num": "3.3" }, { "text": "We used the dataset from the PASCAL RTE-1 challenge, which contains 800 sentence pairs. The crowdsource annotations of 10 labels per pair were obtained by Snow et al. (2008) 7 . We reproduced the basic system described in (Dagan et al., 2006) of TF-IDF weighted Cosine Similarity between lemmas of the text and hypothesis. The weight of each word i in document j, with N total documents, is the log-plus-one term i frequency normalized by raw term i document frequency, with Euclidean normalization.", "cite_spans": [ { "start": 145, "end": 165, "text": "Snow et al. (2008) 7", "ref_id": null }, { "start": 212, "end": 232, "text": "(Dagan et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Recognising Textual Entailment", "sec_num": "3.3" }, { "text": "weight(i, j) = (1 + log(tf_{i,j})) * (N / df_i) if tf_{i,j} \u2265 1, and weight(i, j) = 0 if tf_{i,j} = 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recognising Textual Entailment", "sec_num": "3.3" }, { "text": "Additionally, we used features including the difference in noun chunk character and token length, the difference in number of tokens, shared named entities, and subtask names. The gold standard was the original labels from Dagan et al. (2006) . Hard/Easy Case cutoffs were <0.0 and >.3. Training strategies are from the Biased Language (VeryHigh) and Stemming (others) experiments, except the HighAgree cutoff was 0.0 and the VeryHigh cutoff was 0.3. Of 800 total instances, 230 were Hard Cases (<0.0) and 207 were Easy Cases (>.30).", "cite_spans": [ { "start": 223, "end": 242, "text": "Dagan et al. 
(2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Recognising Textual Entailment", "sec_num": "3.3" }, { "text": "We built a POS-tagger for Twitter posts. We used the training section of the dataset from Gimpel et al. (2011) . The POS tagset was the universal tag set (Petrov et al., 2012) ; we converted Gimpel et al. (2011) 's tags to the universal tagset using Hovy et al. (2014)'s mapping. Crowdsource labels for this data came from Hovy et al. (2014) 8 , who obtained 5 labels for each tweet. After aligning and cleaning, our dataset consisted of 953 tweets of 14,439 tokens.", "cite_spans": [ { "start": 90, "end": 110, "text": "Gimpel et al. (2011)", "ref_id": "BIBREF8" }, { "start": 154, "end": 175, "text": "(Petrov et al., 2012)", "ref_id": "BIBREF21" }, { "start": 191, "end": 211, "text": "Gimpel et al. (2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "POS tagging", "sec_num": "3.4" }, { "text": "We followed Hovy et al. (2014) in constructing a CRF classifier (Lafferty et al., 2001) , using a list of English affixes, Hovy et al. (2014)'s set of orthographic features, and word clusters (Owoputi et al., 2013) . In the cross-validation division, individual tweets were assigned to folds. The gold standard was the integrated label. Hard/Easy Case 7 Available at sites.google.com/site/ nlpannotations/ 8 Available at lowlands.ku.dk/results/ cutoffs were <0.0 and >.49. Of 14,439 tokens, 649 were Hard Cases (<0.0) and 10830 were Easy Cases (>.49).", "cite_spans": [ { "start": 12, "end": 30, "text": "Hovy et al. (2014)", "ref_id": "BIBREF9" }, { "start": 64, "end": 87, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF15" }, { "start": 192, "end": 214, "text": "(Owoputi et al., 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "POS tagging", "sec_num": "3.4" }, { "text": "We used the following strategies: VeryHigh For each token t in sequence s where agreement(t) <0.5, s is broken into two separate sequences s 1 and s 2 and t is deleted (i.e. filtered). HighAgree VeryHigh with agreement <0.2. SoftLabel For each proto-sequence s, we generate 5 sequences {s 0 , s 1 , ..., s i }, in which each token t is assigned a crowdsource label drawn at random:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS tagging", "sec_num": "3.4" }, { "text": "l t,s i \u2208 L t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS tagging", "sec_num": "3.4" }, { "text": "SLLimited, Each token t in sequence s is assigned its MajVote label. Then s is given a weight representing the average item agreement for all t \u2208 s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS tagging", "sec_num": "3.4" }, { "text": "Our Affect Recognition experiments are based on the affective text annotation task in Strapparava and Mihalcea (2007) , using the Sadness dataset. Each headline is rated for \"sadness\" using a scale of 0-100. Examples are in Figures 1 and 2 . We use the crowdsourced annotation for a 100headline sample of this dataset provided by Snow et al. (2008) 9 , with 10 annotations per emotion per headline. Of 100 total instances, 20 were Hard Cases (<0.0) and 49 were Easy Cases (>.30).", "cite_spans": [ { "start": 86, "end": 117, "text": "Strapparava and Mihalcea (2007)", "ref_id": null }, { "start": 330, "end": 350, "text": "Snow et al. 
(2008) 9", "ref_id": null } ], "ref_spans": [ { "start": 224, "end": 239, "text": "Figures 1 and 2", "ref_id": null } ], "eq_spans": [], "section": "Affect Recognition", "sec_num": "3.5" }, { "text": "Our system design is identical to Snow et al. (2008) , which is similar to the SWAT system (Katz et al., 2007) , a top-performing system on the Se-mEval Affective Text task. Hard/Easy Case cutoffs were <0.0 and >.3. Training strategies are the same as for the Biased Language experiments, except: VeryHigh Filtered for agreement >0.3. HighAgree Filtered for agreement >0. SLLimited SoftLabel, except that instances with a label distance >20.0 from the original label average are discarded.", "cite_spans": [ { "start": 34, "end": 52, "text": "Snow et al. (2008)", "ref_id": "BIBREF31" }, { "start": 91, "end": 110, "text": "(Katz et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Affect Recognition", "sec_num": "3.5" }, { "text": "Our results on all five tasks, using each of the training strategies and variously evaluating on all, Easy, or Hard Cases, can be seen in Table 1 .472 Table 1 : Results (Pearson or micro F1) with different training strategies and all, Hard, and Easy Cases. did not significantly outperform Integrated. However, HighAgree does outperform Integrated on 4 or the 5 tasks, especially for Hard Cases: Hard Case improvements for Biased Language and POS Tagging, and Affective Text, and overall improvements for RTE, POS Tagging, and Affective Text were significant (Paired TTest, p < 0.05, for numerical output, or McNemar's Test 10 (McNemar, 1947) , p < 0.05, for nominal classes). The fifth task, Stemming, had the lowest number of item agreement categories of the five tasks, preventing fine-grained agreement training filtering, which explains why filtering shows no benefit.", "cite_spans": [ { "start": 609, "end": 642, "text": "McNemar's Test 10 (McNemar, 1947)", "ref_id": null } ], "ref_spans": [ { "start": 138, "end": 145, "text": "Table 1", "ref_id": null }, { "start": 151, "end": 158, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "All training strategies used the same amount of annotated data as input, and for filtering strategies such as HighAgree, a reduced number of strategyoutput instances are used to train the model. As a higher cutoff is used for HighAgree, the lack of training data results in a worse model; this can be seen in the downward curves of Figures 3 -6 , where the curved line is HighAgree and the matching pattern straight line is Integrated. (Due to the low number of item agreement categories, Stemming results are not displayed in an item agreement cutoff table.) However, Figures 4 -6 show the overall performance boost, and Figure 3 shows the Hard Case performance boost, right before the downward curves from too little training data, using HighAgree.", "cite_spans": [], "ref_spans": [ { "start": 332, "end": 344, "text": "Figures 3 -6", "ref_id": null }, { "start": 569, "end": 581, "text": "Figures 4 -6", "ref_id": null }, { "start": 622, "end": 630, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Comparability We found the accuracy of our systems was similar to that reported in previous literature. Dagan et al. (2006) report performance of the RTE system, on a different data division, with accuracy=0.568. Hovy et al. 
(2014) report majority vote results (from acc=0.805 to acc=0.837 on a different data section) similar to our result of 0.790 micro-F1. For Affective Text, Snow et al. (2008) report results on a different data section of r=0.174, a merged result from systems trained on combinations of crowdsource labels and evaluated against expert-trained systems. The SWAT system (Katz et al., 2007) , which also used lexical resources and additional training data, achieved r=0.3898 on a different section of data. These results are comparable with ours, which range from r=0.326 to r=0.453.", "cite_spans": [ { "start": 104, "end": 123, "text": "Dagan et al. (2006)", "ref_id": "BIBREF3" }, { "start": 439, "end": 457, "text": "Snow et al. (2008)", "ref_id": "BIBREF31" }, { "start": 650, "end": 669, "text": "(Katz et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "In this work, for five natural language tasks, we have examined the impact of informing the classifier of crowdsource item agreement, by means of soft labeling and removal of low-agreement training instances. We found a statistically significant benefit from low-agreement training filtering in four of our five tasks, and strongest improvements for Hard Cases. Previous work (Beigman Klebanov and Beigman, 2014) found a similar effect, but only evaluated a single task, so generalizability was unknown. We also found that soft labeling was not beneficial compared to aggregation. Our findings suggest that the best crowdsource label training strategy is to remove low item agreement instances from the training set.", "cite_spans": [ { "start": 376, "end": 412, "text": "(Beigman Klebanov and Beigman, 2014)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "github.com/EmilyKJamison/crowdsourcing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We followed Yano et al. (2010) and Strapparava and Mihalcea (2007) in using the mean as gold standard. Although another aggregation such as the median might be more representative, such discussion is beyond the scope of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Limited by the discrete nature of our agreement. 4 From the DKPro Statistics library (Meyer et al., 2014). 5 Available at sites.google.com/site/amtworkshop2010/data-1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at github.com/bob-carpenter/anno", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at sites.google.com/site/nlpannotations/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Japkowicz and Shah (2011) for usage description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. 
I/82806, and by the Center for Advanced Security Research (www.cased.de).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning with annotation noise", "authors": [ { "first": "Eyal", "middle": [], "last": "Beigman", "suffix": "" }, { "first": "Beata", "middle": [], "last": "Beigman Klebanov", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "280--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eyal Beigman and Beata Beigman Klebanov. 2009. Learning with annotation noise. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 280-287, Suntec, Singapore.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Difficult cases: From data to learning, and back", "authors": [ { "first": "Eyal", "middle": [], "last": "Beata Beigman Klebanov", "suffix": "" }, { "first": "", "middle": [], "last": "Beigman", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "390--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov and Eyal Beigman. 2014. Difficult cases: From data to learning, and back. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics, pages 390- 396, Baltimore, Maryland.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Building a stemming corpus: Coding standards", "authors": [ { "first": "Bob", "middle": [], "last": "Carpenter", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Jamison", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bob Carpenter, Emily Jamison, and Breck Baldwin. 2009. Building a stemming corpus: Coding stan- dards. http://lingpipe-blog.com/2009/ 02/25/stemming-morphology-corpus- coding-standards/.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The PASCAL Recognising Textual Entailment Challenge. In Machine learning challenges. Evaluating predictive uncertainty, visual object classification, and recognising textual entailment", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "177--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL Recognising Textual En- tailment Challenge. In Machine learning chal- lenges. Evaluating predictive uncertainty, visual ob- ject classification, and recognising textual entail- ment, pages 177-190. 
Springer.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Aggregating crowdsourced binary ratings", "authors": [ { "first": "Nilesh", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Anirban", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Ravi", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Vibhor", "middle": [], "last": "Rastogi", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 22nd International Conference on World Wide Web", "volume": "", "issue": "", "pages": "285--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nilesh Dalvi, Anirban Dasgupta, Ravi Kumar, and Vib- hor Rastogi. 2013. Aggregating crowdsourced bi- nary ratings. In Proceedings of the 22nd Interna- tional Conference on World Wide Web, pages 285- 294, Rio de Janeiro, Brazil.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Maximum likelihood estimation of observer errorrates using the EM algorithm", "authors": [ { "first": "Alexander", "middle": [], "last": "Philip Dawid", "suffix": "" }, { "first": "Allan", "middle": [ "M" ], "last": "Skene", "suffix": "" } ], "year": 1979, "venue": "Journal of the Royal Statistical Society. Series C (Applied Statistics)", "volume": "28", "issue": "1", "pages": "20--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Philip Dawid and Allan M. Skene. 1979. Maximum likelihood estimation of observer error- rates using the EM algorithm. Journal of the Royal Statistical Society. Series C (Applied Statis- tics), 28(1):20-28.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "DKPro TC: A Java-based Framework for Supervised Learning Experiments on Textual Data", "authors": [ { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Ferschke", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "61--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Daxenberger, Oliver Ferschke, Iryna Gurevych, and Torsten Zesch. 2014. DKPro TC: A Java-based Framework for Supervised Learning Experiments on Textual Data. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics, pages 61-66, Baltimore, Maryland.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Vox populi: Collecting high-quality labels from a crowd", "authors": [ { "first": "Ofer", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "Ohad", "middle": [], "last": "Shamir", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Twenty-Second Annual Conference on Learning Theory", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ofer Dekel and Ohad Shamir. 2009. Vox populi: Col- lecting high-quality labels from a crowd. In Pro- ceedings of the Twenty-Second Annual Conference on Learning Theory, Montreal, Canada. 
Online pro- ceedings.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Part-of-speech tagging for Twitter: Annotation, features, and experiments", "authors": [ { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Connor", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Das", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Mills", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Flanigan", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "42--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tag- ging for Twitter: Annotation, features, and exper- iments. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguis- tics: Human Language Technologies, pages 42-47, Portland, Oregon.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Experiments with crowdsourced re-annotation of a pos tagging data set", "authors": [ { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "377--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dirk Hovy, Barbara Plank, and Anders S\u00f8gaard. 2014. Experiments with crowdsourced re-annotation of a pos tagging data set. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics, pages 377-382, Baltimore, Maryland.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Quality management on Amazon Mechanical Turk", "authors": [ { "first": "G", "middle": [], "last": "Panagiotis", "suffix": "" }, { "first": "Foster", "middle": [], "last": "Ipeirotis", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Provost", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACM SIGKDD Workshop on Human Computation", "volume": "", "issue": "", "pages": "64--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Panagiotis G. Ipeirotis, Foster Provost, and Jing Wang. 2010. Quality management on Amazon Mechanical Turk. 
In Proceedings of the ACM SIGKDD Work- shop on Human Computation, pages 64-67, Wash- ington DC, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Evaluating learning algorithms: a classification perspective", "authors": [ { "first": "Nathalie", "middle": [], "last": "Japkowicz", "suffix": "" }, { "first": "Mohak", "middle": [], "last": "Shah", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathalie Japkowicz and Mohak Shah. 2011. Evalu- ating learning algorithms: a classification perspec- tive. Cambridge University Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SWAT-MP:The SemEval-2007 Systems for Task 5 and Task 14", "authors": [ { "first": "Phil", "middle": [], "last": "Katz", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Singleton", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Wicentowski", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)", "volume": "", "issue": "", "pages": "308--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phil Katz, Matt Singleton, and Richard Wicentowski. 2007. SWAT-MP:The SemEval-2007 Systems for Task 5 and Task 14. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 308-313, Prague, Czech Re- public.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Estimating the reliability, systematic error and random error of interval data", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 1970, "venue": "Educational and Psychological Measurement", "volume": "30", "issue": "1", "pages": "61--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus Krippendorff. 1970. Estimating the reliabil- ity, systematic error and random error of interval data. Educational and Psychological Measurement, 30(1):61-70.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Content analysis: An introduction to its methodology", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus Krippendorff. 1980. Content analysis: An in- troduction to its methodology. Sage, Beverly Hills, California.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 18th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Andrew McCallum, and Fernando C.N. Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. 
In Proceedings of the 18th Interna- tional Conference on Machine Learning, pages 282- 289, Williamstown, Massachusetts.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning to parse with IAA-weighted loss", "authors": [ { "first": "Barbara", "middle": [], "last": "H\u00e9ctor Mart\u00ednez Alonso", "suffix": "" }, { "first": "Arne", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Skjaerholt", "suffix": "" }, { "first": "", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1357--1361", "other_ids": {}, "num": null, "urls": [], "raw_text": "H\u00e9ctor Mart\u00ednez Alonso, Barbara Plank, Arne Skjaerholt, and Anders S\u00f8gaard. 2015. Learning to parse with IAA-weighted loss. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 1357-1361, Denver, Colorado.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Note on the sampling error of the difference between correlated proportions or percentages", "authors": [ { "first": "Quinn", "middle": [], "last": "Mcnemar", "suffix": "" } ], "year": 1947, "venue": "Psychometrika", "volume": "12", "issue": "2", "pages": "153--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "DKPro Agreement: An open-source java library for measuring interrater agreement", "authors": [ { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "" }, { "first": "Margot", "middle": [], "last": "Mieskes", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "105--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian M. Meyer, Margot Mieskes, Christian Stab, and Iryna Gurevych. 2014. DKPro Agreement: An open-source java library for measuring inter- rater agreement. In Proceedings of the 25th Inter- national Conference on Computational Linguistics (COLING), pages 105-109, Dublin, Ireland.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "How reliable are annotations via crowdsourcing: A study about inter-annotator agreement for multi-label image annotation", "authors": [ { "first": "Stefanie", "middle": [], "last": "Nowak", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "R\u00fcger", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the International Conference on Multimedia Information Retrieval", "volume": "", "issue": "", "pages": "557--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefanie Nowak and Stefan R\u00fcger. 2010. How reliable are annotations via crowdsourcing: A study about inter-annotator agreement for multi-label image an- notation. 
In Proceedings of the International Con- ference on Multimedia Information Retrieval, pages 557-566, Philadelphia, Pennsylvania.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Improved part-of-speech tagging for online conversational text with word clusters", "authors": [ { "first": "Olutobi", "middle": [], "last": "Owoputi", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Connor", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Schneider", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "380--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 380-390, Atlanta, Georgia.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A universal part-of-speech tagset", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)", "volume": "", "issue": "", "pages": "2089--2096", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Pro- ceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2089-2096, Istanbul, Turkey.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning part-of-speech taggers with interannotator agreement loss", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "742--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014a. Learning part-of-speech taggers with inter- annotator agreement loss. 
In Proceedings of the 14th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 742- 751, Gothenburg, Sweden.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Linguistically debatable or just plain wrong?", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "507--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014b. Linguistically debatable or just plain wrong? In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 507-511, Baltimore, Maryland.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Fast training of support vector machines using sequential minimal optimization", "authors": [ { "first": "John", "middle": [], "last": "Platt", "suffix": "" } ], "year": 1999, "venue": "Advances in kernel methods -support vector learning", "volume": "", "issue": "", "pages": "185--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Platt. 1999. Fast training of support vector ma- chines using sequential minimal optimization. In Advances in kernel methods -support vector learn- ing, pages 185-208. MIT Press.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning from crowds", "authors": [ { "first": "C", "middle": [], "last": "Vikas", "suffix": "" }, { "first": "Shipeng", "middle": [], "last": "Raykar", "suffix": "" }, { "first": "Linda", "middle": [ "H" ], "last": "Yu", "suffix": "" }, { "first": "Gerardo", "middle": [ "Hermosillo" ], "last": "Zhao", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Valadez", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Florin", "suffix": "" }, { "first": "Linda", "middle": [], "last": "Bogoni", "suffix": "" }, { "first": "", "middle": [], "last": "Moy", "suffix": "" } ], "year": 2010, "venue": "The Journal of Machine Learning Research", "volume": "11", "issue": "", "pages": "1297--1322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikas C. Raykar, Shipeng Yu, Linda H. Zhao, Ger- ardo Hermosillo Valadez, Charles Florin, Luca Bo- goni, and Linda Moy. 2010. Learning from crowds. The Journal of Machine Learning Re- search, 11:1297-1322.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Improving onboard analysis of hyperion images by filtering mislabeled training data examples", "authors": [ { "first": "Umaa", "middle": [], "last": "Rebbapragada", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Mandrake", "suffix": "" }, { "first": "Kiri", "middle": [ "L" ], "last": "Wagstaff", "suffix": "" }, { "first": "Damhnait", "middle": [], "last": "Gleeson", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Castano", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Chien", "suffix": "" }, { "first": "Carla", "middle": [ "E" ], "last": "Brodley", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 IEEE Aerospace Conference", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Umaa Rebbapragada, Lukas Mandrake, Kiri L. Wagstaff, Damhnait Gleeson, Rebecca Castano, Steve Chien, and Carla E. Brodley. 2009. 
Improv- ing onboard analysis of hyperion images by filtering mislabeled training data examples. In Proceedings of the 2009 IEEE Aerospace Conference, pages 1-9, Big Sky, Montana.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Filtering email spam in the presence of noisy user feedback", "authors": [ { "first": "D", "middle": [], "last": "Sculley", "suffix": "" }, { "first": "Gordon", "middle": [ "V" ], "last": "Cormack", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Email and Antispam (CEAS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Sculley and Gordon V. Cormack. 2008. Filtering email spam in the presence of noisy user feedback. In Proceedings of the Conference on Email and Anti- spam (CEAS), Mountain View, CA, USA. Online proceedings.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Using semantic roles to improve question answering", "authors": [ { "first": "Dan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "volume": "", "issue": "", "pages": "12--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Shen and Mirella Lapata. 2007. Using seman- tic roles to improve question answering. In Pro- ceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP- CoNLL), pages 12-21, Prague, Czech Republic.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Get another label? Improving data quality and data mining using multiple, noisy labelers", "authors": [ { "first": "S", "middle": [], "last": "Victor", "suffix": "" }, { "first": "Foster", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Panagiotis", "middle": [ "G" ], "last": "Provost", "suffix": "" }, { "first": "", "middle": [], "last": "Ipeirotis", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "11", "issue": "", "pages": "1188--1193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis. 2008. Get another label? Improving data quality and data mining using multiple, noisy label- ers. In Proceedings of the 14th ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining, pages 614-622, Las Vegas, Nevada. Shirish Krishnaj Shevade, S. Sathiya Keerthi, Chiranjib Bhattacharyya, and Karaturi Radha Krishna Murthy. 2000. Improvements to the SMO algorithm for SVM regression. 
IEEE Transactions on Neural Net- works, 11(5):1188-1193.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Inferring ground truth from subjective labelling of Venus images", "authors": [ { "first": "Padhraic", "middle": [], "last": "Smyth", "suffix": "" }, { "first": "Usama", "middle": [], "last": "Fayyad", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Burl", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Perona", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Baldi", "suffix": "" } ], "year": 1995, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1085--1092", "other_ids": {}, "num": null, "urls": [], "raw_text": "Padhraic Smyth, Usama Fayyad, Michael Burl, Pietro Perona, and Pierre Baldi. 1995. Inferring ground truth from subjective labelling of Venus images. Ad- vances in Neural Information Processing Systems, pages 1085-1092.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Cheap and fast -but is it good? Evaluating non-expert annotations for natural language tasks", "authors": [ { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Connor", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "254--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast -but is it good? Evaluating non-expert annotations for natural lan- guage tasks. In Proceedings of the 2008 Confer- ence on Empirical Methods in Natural Language Processing, pages 254-263, Honolulu, Hawaii.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "70--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "SemEval-2007 Task 14: Affective Text. In Proceed- ings of the Fourth International Workshop on Se- mantic Evaluations (SemEval-2007), pages 70-74, Prague, Czech Republic.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Classification on soft labels is robust against label noise", "authors": [ { "first": "Christian", "middle": [], "last": "Thiel", "suffix": "" } ], "year": 2008, "venue": "Knowledge-Based Intelligent Information and Engineering Systems", "volume": "", "issue": "", "pages": "65--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Thiel. 2008. Classification on soft labels is robust against label noise. 
In Knowledge-Based Intelligent Information and Engineering Systems, pages 65-73, Wellington, New Zealand.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Robust logistic regression using shift parameters", "authors": [ { "first": "Julie", "middle": [], "last": "Tibshirani", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "124--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julie Tibshirani and Christopher D. Manning. 2014. Robust logistic regression using shift parameters. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics, pages 124- 129, Baltimore, Maryland.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Bayesian bias mitigation for crowdsourcing", "authors": [ { "first": "L", "middle": [], "last": "Fabian", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Wauthier", "suffix": "" }, { "first": "", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2011, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1800--1808", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian L. Wauthier and Michael I. Jordan. 2011. Bayesian bias mitigation for crowdsourcing. In Ad- vances in Neural Information Processing Systems, pages 1800-1808.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The multidimensional wisdom of crowds", "authors": [ { "first": "Peter", "middle": [], "last": "Welinder", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Branson", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Perona", "suffix": "" }, { "first": "Serge", "middle": [ "J" ], "last": "Belongie", "suffix": "" } ], "year": 2010, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2424--2432", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Welinder, Steve Branson, Pietro Perona, and Serge J. Belongie. 2010. The multidimensional wis- dom of crowds. In Advances in Neural Information Processing Systems, pages 2424-2432.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Whose vote should count more: Optimal integration of labels from labelers of unknown expertise", "authors": [ { "first": "Jacob", "middle": [], "last": "Whitehill", "suffix": "" }, { "first": "", "middle": [], "last": "Ting Fan", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Javier", "middle": [ "R" ], "last": "Bergsma", "suffix": "" }, { "first": "Paul", "middle": [ "L" ], "last": "Movellan", "suffix": "" }, { "first": "", "middle": [], "last": "Ruvolo", "suffix": "" } ], "year": 2009, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2035--2043", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Whitehill, Ting fan Wu, Jacob Bergsma, Javier R. Movellan, and Paul L. Ruvolo. 2009. Whose vote should count more: Optimal integra- tion of labels from labelers of unknown expertise. In Y. Bengio, D. Schuurmans, J.D. Lafferty, C.K.I. Williams, and A. Culotta, editors, Advances in Neu- ral Information Processing Systems, pages 2035- 2043. 
Curran Associates, Inc.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Shedding (a thousand points of) light on biased language", "authors": [ { "first": "Tae", "middle": [], "last": "Yano", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", "volume": "", "issue": "", "pages": "152--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tae Yano, Philip Resnik, and Noah A. Smith. 2010. Shedding (a thousand points of) light on biased lan- guage. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 152-158, Los Angeles, California.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Hard problems of tagset conversion", "authors": [ { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Second International Conference on Global Interoperability for Language Resources", "volume": "", "issue": "", "pages": "181--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Zeman. 2010. Hard problems of tagset con- version. In Proceedings of the Second International Conference on Global Interoperability for Language Resources, pages 181-185, Hong Kong, China.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "'s \u03b1 item agreement : Text: India's Taj Mahal gets facelift Sadness Rating (0-100): 8.0 \u03b1 Agreement (-1.0 -1.0): 0.7 Figure 1: Affect Recognition Easy Case. Text: After Iraq trip, Clinton proposes war limits Sadness Rating (0-100): 12.5 \u03b1 Agreement (-1.0 -1.0): -0.1 Figure 2: Affect Recognition Hard Case.", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "Figure 3: Biased Language.", "type_str": "figure" } } } }