{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:53:23.290017Z" }, "title": "OptSLA: an Optimization-Based Approach for Sequential Label Aggregation", "authors": [ { "first": "Nasim", "middle": [], "last": "Sabetpour", "suffix": "", "affiliation": { "laboratory": "", "institution": "Iowa State University", "location": {} }, "email": "" }, { "first": "Adithya", "middle": [], "last": "Kulkarni", "suffix": "", "affiliation": { "laboratory": "", "institution": "Iowa State University", "location": {} }, "email": "" }, { "first": "Qi", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Iowa State University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The need for the annotated training dataset on which data-hungry machine learning algorithms feed has increased dramatically with advanced acclaim of machine learning applications. To annotate the data, people with domain expertise are needed, but they are seldom available and expensive to hire. This has lead to the thriving of crowdsourcing platforms such as Amazon Mechanical Turk (AMT). However, the annotations provided by one worker cannot be used directly to train the model due to the lack of expertise. Existing literature in annotation aggregation focuses on binary and multi-choice problems. In contrast, little work has been done on complex tasks such as sequence labeling with imbalanced classes, a ubiquitous task in Natural Language Processing (NLP), and Bio-Informatics. We propose OPTSLA, an Optimization-based Sequential Label Aggregation method, that jointly considers the characteristics of sequential labeling tasks, workers reliabilities, and advanced deep learning techniques to conquer the challenge. We evaluate our model on crowdsourced data for named entity recognition task. Our results show that the proposed OPTSLA outperforms the state-of-the-art aggregation methods, and the results are easier to interpret.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The need for the annotated training dataset on which data-hungry machine learning algorithms feed has increased dramatically with advanced acclaim of machine learning applications. To annotate the data, people with domain expertise are needed, but they are seldom available and expensive to hire. This has lead to the thriving of crowdsourcing platforms such as Amazon Mechanical Turk (AMT). However, the annotations provided by one worker cannot be used directly to train the model due to the lack of expertise. Existing literature in annotation aggregation focuses on binary and multi-choice problems. In contrast, little work has been done on complex tasks such as sequence labeling with imbalanced classes, a ubiquitous task in Natural Language Processing (NLP), and Bio-Informatics. We propose OPTSLA, an Optimization-based Sequential Label Aggregation method, that jointly considers the characteristics of sequential labeling tasks, workers reliabilities, and advanced deep learning techniques to conquer the challenge. We evaluate our model on crowdsourced data for named entity recognition task. 
Our results show that the proposed OPTSLA outperforms state-of-the-art aggregation methods, and its results are easier to interpret.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Crowdsourcing (Howe, 2008) is a popular platform for annotating massive corpora inexpensively. It has attracted substantial interest in machine learning and deep learning tasks. However, when workers provide annotations, the results may be noisier compared with labels provided by experts. Thus, it becomes essential to conduct truth inference from the noisy annotations.", "cite_spans": [ { "start": 14, "end": 25, "text": "(Howe, 2008", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One common annotation aggregation approach is Majority Voting (MV) (Lam and Suen, 1997), in which the annotation with the highest number of occurrences is deemed the truth. Another naive approach is to regard an annotation as correct if a certain number of workers provide the same annotation. The concern with these methods is that they assume all workers are of the same quality, which is usually invalid in practice. In this paper, we study the annotation aggregation problem for sequential labeling tasks, a common class of NLP tasks.", "cite_spans": [ { "start": 67, "end": 87, "text": "(Lam and Suen, 1997)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many existing crowdsourcing label aggregation methods may suffer from performance loss because they assume that data instances are independent (Zheng et al., 2017). New approaches have recently been proposed to handle the particular characteristics of sequential labeling tasks, where tokens in one sentence have complex dependencies (Rodrigues et al., 2014; Simpson and Gurevych, 2019; Nguyen et al., 2017). In this line of approaches, probabilistic models are adopted to model the workers' labeling behavior and the dependencies between adjacent tokens. There are several drawbacks to the probabilistic models. First, they make strong statistical assumptions when modeling the sequence annotations, limiting the flexibility of the models. Second, these models need to infer complex parameters, making it hard to interpret the relations between workers' qualities and tokens' true labels. Third, these aggregation methods cannot fully unleash the power of deep learning in sequential labeling tasks.", "cite_spans": [ { "start": 143, "end": 163, "text": "(Zheng et al., 2017)", "ref_id": "BIBREF17" }, { "start": 329, "end": 353, "text": "(Rodrigues et al., 2014;", "ref_id": "BIBREF8" }, { "start": 354, "end": 381, "text": "Simpson and Gurevych, 2019;", "ref_id": "BIBREF11" }, { "start": 382, "end": 402, "text": "Nguyen et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address these challenges, we propose an optimization framework to improve aggregation performance. Our method OPTSLA estimates workers' reliability and models the label dependencies to infer the true labels from noisy annotations.
OPTSLA handles the complex sequential label aggregation problem with fewer parameters than the state-of-the-art methods and produces easy-to-understand results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We further incorporate the state-of-the-art deep learning approach into OPTSLA, where the deep learning component and the aggregation component mutually enhance each other. To ensure high-quality training data, OPTSLA chooses sentences with high confidence from the aggregation component. The deep learning model is incrementally trained with the iteratively updated aggregation results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Data aggregation and label inference tasks have received considerable attention over the past decade, and many methods have been developed to handle various challenges (Li et al., 2016; Zheng et al., 2017). Earlier works such as (Dawid and Skene, 1979; Yin et al., 2008; Snow et al., 2008; Whitehill et al., 2009; Groot et al., 2011) proposed to model worker qualities and infer labels using statistical methods. Later, optimization-based methods were proposed (Zhou et al., 2012; Li et al., 2014). Extensive experiments in many applications and tasks have shown that these methods generally outperform MV, which indicates that worker quality estimation can play an essential role in label inference. However, in these methods, the annotation instances are assumed to be independent. More recently, methods have been developed to handle various types of correlations among annotation instances. For example, the methods in (Yao et al., 2018; Zhi et al., 2018) handle spatial-temporal dependencies among instances, and the methods in (Rodrigues et al., 2014; Nguyen et al., 2017; Simpson and Gurevych, 2019) handle sequential labeling tasks in NLP, which are most related to this paper. Rodrigues et al. (2014) proposed a probabilistic approach using Conditional Random Fields (CRF) to model the sequential annotations. In this model, a worker's reliability is modeled by his/her F1 score, but only one worker is assumed to be correct for any instance. Nguyen et al. (2017) relaxed this assumption and proposed a hidden Markov model (HMM) extension. This model uses J parameters per worker to model their reliabilities, where J is the number of classes. Recently, Simpson and Gurevych
(2019) proposed a fully Bayesian approach, where J \u00d7 J \u00d7 J parameters are used to model workers' reliabilities.", "cite_spans": [ { "start": 157, "end": 174, "text": "(Li et al., 2016;", "ref_id": "BIBREF5" }, { "start": 175, "end": 194, "text": "Zheng et al., 2017)", "ref_id": "BIBREF17" }, { "start": 230, "end": 242, "text": "Skene, 1979;", "ref_id": "BIBREF0" }, { "start": 243, "end": 260, "text": "Yin et al., 2008;", "ref_id": "BIBREF16" }, { "start": 261, "end": 279, "text": "Snow et al., 2008;", "ref_id": "BIBREF12" }, { "start": 280, "end": 303, "text": "Whitehill et al., 2009;", "ref_id": "BIBREF14" }, { "start": 304, "end": 323, "text": "Groot et al., 2011)", "ref_id": "BIBREF1" }, { "start": 457, "end": 476, "text": "(Zhou et al., 2012;", "ref_id": "BIBREF19" }, { "start": 477, "end": 493, "text": "Li et al., 2014)", "ref_id": "BIBREF4" }, { "start": 916, "end": 933, "text": "Yao et al., 2018;", "ref_id": "BIBREF15" }, { "start": 934, "end": 951, "text": "Zhi et al., 2018)", "ref_id": "BIBREF18" }, { "start": 1041, "end": 1065, "text": "(Rodrigues et al., 2014;", "ref_id": "BIBREF8" }, { "start": 1066, "end": 1086, "text": "Nguyen et al., 2017;", "ref_id": "BIBREF7" }, { "start": 1087, "end": 1114, "text": "Simpson and Gurevych, 2019)", "ref_id": "BIBREF11" }, { "start": 1214, "end": 1255, "text": "Rodrigues et al. (Rodrigues et al., 2014)", "ref_id": "BIBREF8" }, { "start": 1591, "end": 1612, "text": "(Nguyen et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "The three models mentioned above are probabilistic models with significantly more parameters to tune, and they are harder to interpret than optimization-based methods (Zheng et al., 2017). Moreover, the existing methods do not fully unleash the power of deep learning approaches in sequential labeling tasks. In this paper, we propose an optimization-based aggregation method to address the interpretability challenge and further include a deep learning module to boost performance.", "cite_spans": [ { "start": 160, "end": 180, "text": "(Zheng et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "The sequential label aggregation task aims to combine the annotations provided by different workers to infer the ground-truth sequential labels. In this section, we describe our approach, an optimization-based sequential label aggregation method (OPTSLA), which aggregates multiple workers' annotations with deep learning results by estimating the reliability of workers and modeling the dependencies among tokens in the sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "We first introduce the notation. Suppose m workers (indexed by j) are hired to annotate s sentences (indexed by k) with n tokens in total in the corpus. Let i_k indicate the i-th token in the k-th sentence. y^j_{i_k} is a one-hot vector that denotes the annotation given by the j-th worker on the i-th token in the k-th sentence. y^*_{i_k} is the inferred aggregation label for the corresponding token. Each worker has a weight parameter w_j to reflect his/her annotation quality, and W = {w_1, w_2, ..., w_m} refers to the set of all worker weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OPTSLA", "sec_num": "3.1" },
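{ "text": "To make the notation concrete, the following minimal Python sketch (ours, not the authors' code; all variable names are hypothetical) shows one way to store the annotations and worker weights:

import numpy as np

m, s = 47, 5985   # workers and sentences, the sizes reported in Section 4
J = 9             # assumed number of label classes, e.g., BIO tags for NER
# annotations[j][k] is an (l_k x J) array of one-hot rows: worker j's
# labels for the l_k tokens of sentence k (absent if j skipped k).
annotations = {j: {} for j in range(m)}
# y_star[k] holds the inferred (soft) aggregation labels of sentence k.
y_star = {}
# One weight per worker, initialized equally (Section 3.2), plus the
# weight w_dl of the deep learning model, which acts as an extra worker.
w = np.ones(m)
w_dl = 1.0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OPTSLA", "sec_num": "3.1" },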
{ "text": "A higher weight implies that the worker is of higher reliability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OPTSLA", "sec_num": "3.1" }, { "text": "Our goal is to minimize the overall weighted loss of the inferred aggregation labels y^*_{i_k} with respect to the reliable workers' annotations y^j_{i_k} and the deep learning predictions \u0177^*_{i_k}, together with the loss from inconsistencies in the sequential labels. Mathematically, we formulate the aggregation problem as an optimization problem with respect to the set of worker weights W, the weight of the deep learning model w_{dl}, the aggregated annotations y^*_{i_k}, and the deep learning parameters \u03b8, as shown in Eq. (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OPTSLA", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\min f(W, w_{dl}, \\\\{y^*_{i_k}\\\\}_{i_k=1}^{n}, \\\\theta) = \\\\sum_j w_j \\\\sum_k \\\\xi(y^*_k) \\\\sum_{i_k} H(y^j_{i_k}, y^*_{i_k}) + w_{dl} \\\\sum_k \\\\xi(y^*_k) \\\\sum_{i_k} H(y^*_{i_k}, \\\\hat{y}^*_{i_k}) - \\\\Big( \\\\sum_j |\\\\{y^j_{i_k}\\\\}_{i_k}| \\\\log(w_j) + n \\\\log(w_{dl}) \\\\Big) + \\\\sum_{i_k} \\\\big( g(y^*_{i_k-1}, y^*_{i_k}) + g(y^*_{i_k}, y^*_{i_k+1}) \\\\big),", "eq_num": "(1)" } ], "section": "OPTSLA", "sec_num": "3.1" }, { "text": "where H(\u2022, \u2022) is the cross-entropy loss function, \u03be(y^*_k) is the confidence level of the k-th sentence, |{y^j_{i_k}}_{i_k}| refers to the number of annotations provided by worker j, and g(\u2022, \u2022) is a loss function that maintains the consistency between token labels. More specifically, \u03be(y^*_k) = (1/l_k) \u03a3_{i_k} margin(y^*_{i_k}), where l_k is the number of tokens in sentence k and margin(y^*_{i_k}) is the probability difference between the two most likely labels of y^*_{i_k}. In Eq. (1), \u03a3_j w_j \u03a3_k \u03be(y^*_k) \u03a3_{i_k} H(y^j_{i_k}, y^*_{i_k}) is the weighted cross-entropy loss between the inferred aggregation labels and the workers' annotations. The loss is adjusted by the confidence measure \u03be(y^*_k). Intuitively, if a worker is highly reliable (i.e., w_j is high) and the annotations agree with high confidence, a high penalty is incurred when his/her annotations differ substantially from the inferred aggregation labels. To minimize the objective function, the inferred aggregation labels y^*_{i_k} will therefore rely more on the workers with high weights. The term w_{dl} \u03a3_k \u03be(y^*_k) \u03a3_{i_k} H(y^*_{i_k}, \u0177^*_{i_k}) is the weighted cross-entropy loss between y^*_{i_k} and the predicted labels \u0177^*_{i_k} from a trained deep learning model, where w_{dl} is the reliability of the deep learning model. In our model, the deep learning model is essentially treated as an additional worker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OPTSLA", "sec_num": "3.1" },
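{ "text": "As an illustration, here is a minimal Python sketch (ours, not the authors' implementation; the helper names are hypothetical) of the confidence measure \u03be and the two weighted cross-entropy terms of Eq. (1), restricted to a single sentence:

import numpy as np

def margin(p):
    # difference between the two largest label probabilities of one token
    top2 = np.sort(p)[-2:]
    return top2[1] - top2[0]

def xi(y_star_k):
    # sentence confidence \u03be(y^*_k): average token margin
    return float(np.mean([margin(p) for p in y_star_k]))

def cross_entropy(target, p, eps=1e-12):
    # H(target, p) = -\u03a3 target * log(p); target may be one-hot or soft
    return float(-np.sum(target * np.log(p + eps)))

def sentence_loss(k, y_star_k, y_hat_k, annotations, w, w_dl):
    # first two terms of Eq. (1), restricted to sentence k
    conf = xi(y_star_k)
    loss = w_dl * conf * sum(cross_entropy(a, p)
                             for a, p in zip(y_star_k, y_hat_k))
    for j, ann in annotations.items():
        if k in ann:  # worker j annotated sentence k
            loss += w[j] * conf * sum(cross_entropy(a, p)
                                      for a, p in zip(ann[k], y_star_k))
    return loss", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OPTSLA", "sec_num": "3.1" },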
{ "text": "The training of the deep learning model is discussed in Section 3.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OPTSLA", "sec_num": "3.1" }, { "text": "The term -(\u03a3_j |{y^j_{i_k}}_{i_k}| log(w_j) + n log(w_{dl})) ensures that the calculated weights are positive. The final term \u03a3_{i_k} (g(y^*_{i_k-1}, y^*_{i_k}) + g(y^*_{i_k}, y^*_{i_k+1})) is a loss function that gives penalties when the inferred aggregation labels are not consistent with the sequential label rules. One simple example of g(\u2022, \u2022) is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OPTSLA", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(y^*_{i_k-1}, y^*_{i_k}) = \\\\mathbb{1}\\\\big[ P(y_{i_k} \\\\mid y_{i_k-1}) = 0 \\\\big].", "eq_num": "(2)" } ], "section": "OPTSLA", "sec_num": "3.1" }, { "text": "This function gives 0 loss if the sequence y^*_{i_k-1}, y^*_{i_k} is valid according to the sequential label rules, and 1 if the sequence is invalid. Taking the NER task as an example, P(y_{i_k} = 'I-LOC' | y_{i_k-1} = 'B-PER') = 0, so g(y^*_{i_k-1} = 'B-PER', y^*_{i_k} = 'I-LOC') = 1. Therefore, in g(y^*_{i_k-1}, y^*_{i_k}) + g(y^*_{i_k}, y^*_{i_k+1}), both y^*_{i_k-1} and y^*_{i_k+1} are considered. The inferred aggregation labels y^*_{i_k}, the worker weights W and w_{dl}, and the deep learning model are learned simultaneously by optimizing Eq. (1). To solve the problem, we adopt the block coordinate descent method (Tseng, 2001), which keeps reducing the value of the objective function. To minimize the objective function in Eq. (1), we iteratively conduct the following three steps.", "cite_spans": [ { "start": 268, "end": 280, "text": "(Tseng, 2001", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "OPTSLA", "sec_num": "3.1" }, { "text": "We initialize all the workers with equal weights. To update the weights in each iteration, we treat the other variables as fixed. Then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Workers' Weight Update", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W \\\\leftarrow \\\\operatorname*{argmin}_{W} f(\\\\{y^*_{i_k}\\\\}, W, \\\\theta).", "eq_num": "(3)" } ], "section": "Workers' Weight Update", "sec_num": "3.2" }, { "text": "W has a closed-form solution, obtained by taking the derivative of Eq. (1) with respect to W. The solution is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Workers' Weight Update", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w_j = \\\\frac{|\\\\{y^j_{i_k}\\\\}_{i_k}|}{\\\\sum_k \\\\xi(y^*_k) \\\\sum_{i_k} H(y^j_{i_k}, y^*_{i_k})}.", "eq_num": "(4)" } ], "section": "Workers' Weight Update", "sec_num": "3.2" }, { "text": "w_{dl} is updated similarly, with n in place of |{y^j_{i_k}}_{i_k}|.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Workers' Weight Update", "sec_num": "3.2" },
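{ "text": "For concreteness, a minimal sketch (ours, not the authors' code) of this closed-form update, reusing the xi and cross_entropy helpers from the sketch in Section 3.1:

def update_worker_weight(worker_annotations, y_star):
    # Eq. (4): number of annotations provided by the worker, divided by
    # the confidence-weighted cross-entropy against the current aggregation
    n_annotations = sum(len(ann_k) for ann_k in worker_annotations.values())
    loss = sum(xi(y_star[k]) * sum(cross_entropy(a, p)
                                   for a, p in zip(ann_k, y_star[k]))
               for k, ann_k in worker_annotations.items())
    return n_annotations / loss  # higher loss -> lower reliability weight

Workers whose annotations consistently disagree with confidently aggregated sentences thus receive small weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Workers' Weight Update", "sec_num": "3.2" },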
{ "text": "In the second step, once the workers' weights are updated, the inferred aggregation labels y^*_{i_k} are updated to minimize Eq. (1) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aggregated Annotation Update", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\operatorname*{argmin}_{y^*_{i_k}} \\\\Big( \\\\sum_j w_j \\\\sum_k \\\\xi(y^*_k) \\\\sum_{i_k} H(y^j_{i_k}, y^*_{i_k}) + w_{dl} \\\\sum_k \\\\xi(y^*_k) \\\\sum_{i_k} H(y^*_{i_k}, \\\\hat{y}^*_{i_k}) \\\\Big) + \\\\sum_{i_k} \\\\big( g(y^*_{i_k-1}, y^*_{i_k}) + g(y^*_{i_k}, y^*_{i_k+1}) \\\\big).", "eq_num": "(5)" } ], "section": "Aggregated Annotation Update", "sec_num": "3.3" }, { "text": "This problem does not have a closed-form solution. In fact, for a general label consistency loss function g(\u2022, \u2022), it may be non-trivial to solve Eq. (5) because the variables are correlated. Therefore, we apply the gradient descent method to calculate y^*_{i_k} while fixing all other variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aggregated Annotation Update", "sec_num": "3.3" }, { "text": "With the updated aggregation results, we update the deep learning model. To maintain a high-quality model, we select sentences with high \u03be(y^*_k) (e.g., \u03be(y^*_k) > 0.9) as training data. Since y^*_{i_k} is updated iteratively, the training data change as well. However, retraining the deep learning model can be time-consuming. Therefore, we adopt an incremental deep learning approach (Sarwar et al., 2019) to improve algorithm efficiency.", "cite_spans": [ { "start": 393, "end": 414, "text": "(Sarwar et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental Deep Learning", "sec_num": "3.4" }, { "text": "Many sequential labeling tasks have a class imbalance problem. For example, in the NER task, \"O\" dominates the entity annotations. To handle this problem, class priorities (\u03c1's) can be used to re-weight the classes. A higher \u03c1 increases the weight of entity labels when calculating y^*_{i_k}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class Priority (\u03c1)", "sec_num": "3.5" },
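{ "text": "Putting Sections 3.2-3.4 together, the following minimal sketch (ours, building on the sketches above; gradient_step_on_labels and dl_model are hypothetical placeholders for the Eq. (5) update and the incrementally trained deep learning model) shows the overall block coordinate descent loop:

CONF_THRESHOLD = 0.9  # \u03be cutoff for adding a sentence to the training set
train_set = set()

while True:
    # Step 1 (Section 3.2): closed-form worker weight updates, Eq. (4)
    for j in range(m):
        w[j] = update_worker_weight(annotations[j], y_star)
    # Step 2 (Section 3.3): gradient descent on the aggregated labels, Eq. (5)
    for k in y_star:
        y_star[k] = gradient_step_on_labels(k, y_star, annotations, w, w_dl)
    # Step 3 (Section 3.4): incrementally train the deep model on
    # confidently aggregated sentences only
    confident = {k for k in y_star if xi(y_star[k]) > CONF_THRESHOLD}
    new_sentences = confident - train_set
    if not new_sentences:  # stop when no sentence can be added (Section 4)
        break
    train_set |= new_sentences
    dl_model.incremental_fit(new_sentences, y_star)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental Deep Learning", "sec_num": "3.4" },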
{ "text": "4 Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Datasets. We use real-world data to demonstrate the effectiveness of the proposed method OPTSLA. The NER dataset (Sang and De Meulder, 2003)[1] consists of 5985 sentences, and 47 workers were hired to identify the named entities in the sentences and annotate them as persons, locations, organizations, or miscellaneous. To make the task more challenging, we use the 4515 sentences where workers had conflicting annotations, and for comparison we evaluate on 3466 sentences, which is the same as the test set of the NER dataset.[2] To evaluate the proposed OPTSLA, we compare the span-level precision, recall, and F1 score[3] of the inferred aggregation labels with three state-of-the-art baseline methods: HMM-Crowd (Nguyen et al., 2017), CRF-MA (Rodrigues et al., 2014), and BSC-seq (Simpson and Gurevych, 2019). For OPTSLA, a Convolutional Neural Network (CNN) is employed as the deep learning component for the NER dataset. To evaluate the effect of the deep learning module, we also compare against OPTSLA without the deep learning component, denoted as OPTSLA (W/O DL).", "cite_spans": [ { "start": 692, "end": 723, "text": "HMM-crowd (Nguyen et al., 2017)", "ref_id": null }, { "start": 733, "end": 757, "text": "(Rodrigues et al., 2014)", "ref_id": "BIBREF8" }, { "start": 789, "end": 817, "text": "(Simpson and Gurevych, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The results are shown in Table 1.[4] It is clear that the proposed OPTSLA method outperforms the state-of-the-art baseline methods. The results also show that the deep learning component can indeed enhance aggregation performance: the loss H(\u2022, \u2022) and the confidence \u03be(y^*_k) help estimate worker reliability properly, which in turn helps aggregation, and because OPTSLA only uses sentences with high \u03be(y^*_k) for training, the deep learning model is trained properly.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "As the workers' reliability estimation is key to obtaining high-quality aggregation results, we further show the weights estimated for the workers with respect to their actual F1 scores in Figure 1. It can be observed that there is a strong positive correlation between worker weights and their actual F1 scores. Because OPTSLA uses one parameter per worker, the results are more straightforward to interpret and justify compared with the baseline methods.", "cite_spans": [ { "start": 562, "end": 583, "text": "(Nguyen et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 186, "end": 194, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "[1] The dataset can be found at http://amilab.dei.uc.pt/fmpr/crf-ma-datasets.tar.gz. [2] All code, experiment scripts, datasets, and results are in a public repository: https://github.com/NasimISU/OptSLA. [3] https://github.com/allenai/allennlp/tree/master/allennlp. [4] The results for CRF-MA and HMM-Crowd come from (Nguyen et al., 2017), and the BSC-seq results come from (Simpson and Gurevych, 2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We observe that OPTSLA converges quickly. The algorithm stops when no more sentences can be added to the training set. Figure 2 illustrates the size of the training dataset with respect to the number of iterations.", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 127, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In this paper, we propose an innovative optimization-based approach, OPTSLA, for the sequential label aggregation problem. Our model jointly considers different factors in the objective function, including the workers' annotations, the workers' reliability, the deep learning model, and the characteristics of sequential labeling tasks.
Our experimental results show that OPTSLA outperforms state-of-the-art sequential label aggregation methods, including CRF-MA, HMM-Crowd, and Bayesian Sequence Combination (BSC), in terms of F1 score. For future work, we will evaluate more factors, such as task assignment, that may affect the aggregation performance of the deep learning model and the workers' behaviors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "5" } ], "back_matter": [ { "text": "The work was supported in part by the National Science Foundation under Grant NSF IIS-2007941. Any opinions, findings, and conclusions or recommendations expressed in this document are those of the author(s) and should not be interpreted as the views of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Maximum likelihood estimation of observer error-rates using the EM algorithm", "authors": [ { "first": "A", "middle": [], "last": "", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Dawid", "suffix": "" }, { "first": "Allan", "middle": [], "last": "Skene", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Philip Dawid and Allan Skene. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning from multiple annotators with Gaussian processes", "authors": [ { "first": "Perry", "middle": [], "last": "Groot", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Birlutiu", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Heskes", "suffix": "" } ], "year": 2011, "venue": "International Conference on Artificial Neural Networks", "volume": "", "issue": "", "pages": "159--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Perry Groot, Adriana Birlutiu, and Tom Heskes. 2011. Learning from multiple annotators with Gaussian processes. In International Conference on Artificial Neural Networks, pages 159-164. Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Crowdsourcing: How the power of the crowd is driving the future of business", "authors": [ { "first": "Jeff", "middle": [], "last": "Howe", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Howe. 2008. Crowdsourcing: How the power of the crowd is driving the future of business. Random House.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Application of majority voting to pattern recognition: an analysis of its behavior and performance", "authors": [ { "first": "Louisa", "middle": [], "last": "Lam", "suffix": "" }, { "first": "", "middle": [], "last": "Sy Suen", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans", "volume": "27", "issue": "", "pages": "553--568", "other_ids": {}, "num": null, "urls": [], "raw_text": "Louisa Lam and SY Suen. 1997. Application of majority voting to pattern recognition: an analysis of its behavior and performance.
IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 27(5):553-568.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Resolving conflicts in heterogeneous data by truth discovery and source reliability estimation", "authors": [ { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yaliang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2014, "venue": "Proc. of the ACM SIGMOD International Conference on Management of Data (SIG-MOD'14)", "volume": "", "issue": "", "pages": "1187--1198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Li, Yaliang Li, Jing Gao, Bo Zhao, Wei Fan, and Jiawei Han. 2014. Resolving conflicts in heteroge- neous data by truth discovery and source reliability estimation. In Proc. of the ACM SIGMOD Inter- national Conference on Management of Data (SIG- MOD'14), pages 1187-1198.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A survey on truth discovery", "authors": [ { "first": "Yaliang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Chuishi", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2016, "venue": "ACM Sigkdd Explorations Newsletter", "volume": "17", "issue": "2", "pages": "1--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaliang Li, Jing Gao, Chuishi Meng, Qi Li, Lu Su, Bo Zhao, Wei Fan, and Jiawei Han. 2016. A sur- vey on truth discovery. ACM Sigkdd Explorations Newsletter, 17(2):1-16.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Tackling the redundancy and sparsity in crowd sensing applications", "authors": [ { "first": "Chuishi", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Houping", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yun", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM", "volume": "", "issue": "", "pages": "150--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chuishi Meng, Houping Xiao, Lu Su, and Yun Cheng. 2016. Tackling the redundancy and sparsity in crowd sensing applications. In Proceedings of the 14th ACM Conference on Embedded Network Sen- sor Systems CD-ROM, pages 150-163.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Aggregating and predicting sequence labels from crowd annotations", "authors": [ { "first": "Byron", "middle": [ "C" ], "last": "An T Nguyen", "suffix": "" }, { "first": "Junyi", "middle": [ "Jessy" ], "last": "Wallace", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Li", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "", "middle": [], "last": "Lease", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the conference. Association for Computational Linguistics. 
Meeting", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "An T Nguyen, Byron C Wallace, Junyi Jessy Li, Ani Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd anno- tations. In Proceedings of the conference. Associ- ation for Computational Linguistics. Meeting, vol- ume 2017, page 299. NIH Public Access.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sequence labeling with multiple annotators", "authors": [ { "first": "Filipe", "middle": [], "last": "Rodrigues", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Bernardete", "middle": [], "last": "Ribeiro", "suffix": "" } ], "year": 2014, "venue": "Machine learning", "volume": "95", "issue": "2", "pages": "165--181", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filipe Rodrigues, Francisco Pereira, and Bernardete Ribeiro. 2014. Sequence labeling with multiple an- notators. Machine learning, 95(2):165-181.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition", "authors": [ { "first": "F", "middle": [], "last": "Erik", "suffix": "" }, { "first": "Fien", "middle": [], "last": "Sang", "suffix": "" }, { "first": "", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F Sang and Fien De Meulder. 2003. Intro- duction to the conll-2003 shared task: Language- independent named entity recognition. arXiv preprint cs/0306050.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Incremental learning in deep convolutional neural networks using partial network sharing", "authors": [ { "first": "Aayush", "middle": [], "last": "Syed Shakib Sarwar", "suffix": "" }, { "first": "Kaushik", "middle": [], "last": "Ankit", "suffix": "" }, { "first": "", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Syed Shakib Sarwar, Aayush Ankit, and Kaushik Roy. 2019. Incremental learning in deep convolu- tional neural networks using partial network sharing. IEEE Access.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Bayesian approach for sequence tagging with crowds", "authors": [ { "first": "D", "middle": [], "last": "Edwin", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Simpson", "suffix": "" }, { "first": "", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1093--1104", "other_ids": { "DOI": [ "10.18653/v1/D19-1101" ] }, "num": null, "urls": [], "raw_text": "Edwin D. Simpson and Iryna Gurevych. 2019. A Bayesian approach for sequence tagging with crowds. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 1093-1104, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Cheap and fast -but is it good? 
Evaluating non-expert annotations for natural language tasks", "authors": [ { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Connor", "suffix": "" }, { "first": "Andrew Y", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2008, "venue": "Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP'08)", "volume": "", "issue": "", "pages": "254--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y Ng. 2008. Cheap and fast -but is it good? Evaluating non-expert annotations for natu- ral language tasks. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP'08), pages 254-263.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Convergence of a block coordinate descent method for nondifferentiable minimization", "authors": [ { "first": "Paul", "middle": [], "last": "Tseng", "suffix": "" } ], "year": 2001, "venue": "Journal of optimization theory and applications", "volume": "109", "issue": "3", "pages": "475--494", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Tseng. 2001. Convergence of a block coordi- nate descent method for nondifferentiable minimiza- tion. Journal of optimization theory and applica- tions, 109(3):475-494.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Whose vote should count more: Optimal integration of labels from labelers of unknown expertise", "authors": [ { "first": "Jacob", "middle": [], "last": "Whitehill", "suffix": "" }, { "first": "", "middle": [], "last": "Ting-Fan", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Paul", "middle": [ "L" ], "last": "Javier R Movellan", "suffix": "" }, { "first": "", "middle": [], "last": "Ruvolo", "suffix": "" } ], "year": 2009, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2035--2043", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Whitehill, Ting-fan Wu, Jacob Bergsma, Javier R Movellan, and Paul L Ruvolo. 2009. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in neural information processing systems, pages 2035- 2043.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Online truth discovery on time series data", "authors": [ { "first": "Liuyi", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yaliang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fenglong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Aidong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 SIAM International Conference on Data Mining", "volume": "", "issue": "", "pages": "162--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liuyi Yao, Lu Su, Qi Li, Yaliang Li, Fenglong Ma, Jing Gao, and Aidong Zhang. 2018. Online truth discovery on time series data. In Proceedings of the 2018 SIAM International Conference on Data Min- ing, pages 162-170. 
SIAM.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Truth discovery with multiple conflicting information providers on the web", "authors": [ { "first": "Xiaoxin", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Philip", "middle": [ "S" ], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "20", "issue": "6", "pages": "796--808", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoxin Yin, Jiawei Han, and Philip S. Yu. 2008. Truth discovery with multiple conflicting information providers on the web. IEEE Transactions on Knowledge and Data Engineering, 20(6):796-808.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Truth inference in crowdsourcing: Is the problem solved?", "authors": [ { "first": "Yudian", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Guoliang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yuanbing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Caihua", "middle": [], "last": "Shan", "suffix": "" }, { "first": "Reynold", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the VLDB Endowment", "volume": "10", "issue": "", "pages": "541--552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yudian Zheng, Guoliang Li, Yuanbing Li, Caihua Shan, and Reynold Cheng. 2017. Truth inference in crowdsourcing: Is the problem solved? Proceedings of the VLDB Endowment, 10(5):541-552.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Dynamic truth discovery on numerical data", "authors": [ { "first": "Shi", "middle": [], "last": "Zhi", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zheyi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhaoran", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE International Conference on Data Mining (ICDM)", "volume": "", "issue": "", "pages": "817--826", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shi Zhi, Fan Yang, Zheyi Zhu, Qi Li, Zhaoran Wang, and Jiawei Han. 2018. Dynamic truth discovery on numerical data. In 2018 IEEE International Conference on Data Mining (ICDM), pages 817-826. IEEE.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning from the wisdom of crowds by minimax entropy", "authors": [ { "first": "Dengyong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Basu", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Mao", "suffix": "" }, { "first": "John C", "middle": [], "last": "Platt", "suffix": "" } ], "year": 2012, "venue": "Advances in Neural Information Processing Systems (NIPS'12)", "volume": "", "issue": "", "pages": "2195--2203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dengyong Zhou, Sumit Basu, Yi Mao, and John C Platt. 2012. Learning from the wisdom of crowds by minimax entropy. In Advances in Neural Information Processing Systems (NIPS'12), pages 2195-2203.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Figure 1: Worker weights w.r.t. their F1 scores. Figure 2: Training size w.r.t. iterations." }, "TABREF0": { "html": null, "num": null, "type_str": "table", "text": "", "content": "
Performance Comparison
Method            Prec.   Rec.    F1
MV                79.9    55.3    65.4
CRF-MA            80.29   51.20   62.53
HMM-Crowd         77.40   72.29   74.76
BSC-seq           80.3    74.8    77.4
OPTSLA (W/O DL)   76.61   74.14   75.36
OPTSLA            79.42   77.59   78.49
" } } } }