{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:45.040953Z" }, "title": "An Approach to the Frugal Use of Human Annotators to Scale up Auto-coding for Text Classification Tasks", "authors": [ { "first": "Li", "middle": [ "' An" ], "last": "Chen", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Hanna", "middle": [], "last": "Suominen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Commonwealth Scientific and Industrial Research Organisation / Canberra", "location": { "country": "Australia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Human annotation for establishing the training data is often a very costly process in natural language processing (NLP) tasks, which has led to frugal NLP approaches becoming an important research topic. Many research teams struggle to complete projects with limited funding, labor, and computational resources. Driven by the Move-Step analytic framework theorized in the applied linguistics field, our study offers a rigorous approach to the frugal use of two human annotators to scale up autocoding for text classification tasks. We applied the Linear Support Vector Machine algorithm to text classification of a job ad corpus. Our Cohen's Kappa for inter-rater agreement and Area Under the Curve (AUC) values reached averages of 0.76 and 0.80, respectively. The calculated time consumption for our human training process was 36 days. The results indicated that even the strategic and frugal use of only two human annotators could enable the efficient training of classifiers with reasonably good performance. This study does not aim to provide generalizability of the results. Rather, it is proposed that the annotation strategies arising from this study be considered by our readers only if they are fit for one's specific research purposes.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Human annotation for establishing the training data is often a very costly process in natural language processing (NLP) tasks, which has led to frugal NLP approaches becoming an important research topic. Many research teams struggle to complete projects with limited funding, labor, and computational resources. Driven by the Move-Step analytic framework theorized in the applied linguistics field, our study offers a rigorous approach to the frugal use of two human annotators to scale up autocoding for text classification tasks. We applied the Linear Support Vector Machine algorithm to text classification of a job ad corpus. Our Cohen's Kappa for inter-rater agreement and Area Under the Curve (AUC) values reached averages of 0.76 and 0.80, respectively. The calculated time consumption for our human training process was 36 days. The results indicated that even the strategic and frugal use of only two human annotators could enable the efficient training of classifiers with reasonably good performance. This study does not aim to provide generalizability of the results. Rather, it is proposed that the annotation strategies arising from this study be considered by our readers only if they are fit for one's specific research purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In natural language processing (NLP), human annotation is an indispensable and decisive step. 
The human annotation process directly influences the quality of the training data in NLP tasks, and consequently, it influences the quality of machinegenerated results. In this regard, Song et al. (2020) have revealed how significant the risk of reaching an incorrect conclusion could be if the quality of human annotation used for validation cannot be guaranteed. Unfortunately, the science of annotation is progressing very slowly (Hovy and Lavid, 2010; Song et al., 2020) . In many NLP studies, methodological details concerning the human annotation process have not been fully disclosed (Song et al., 2020) . Such a lack of disclosure may hinder readers' judgment of the soundness of human annotation procedures (Hovy and Lavid, 2010; Song et al., 2020) . It is time for NLP researchers to attach greater importance to the methodological rigor of human annotation in NLP tasks.", "cite_spans": [ { "start": 279, "end": 297, "text": "Song et al. (2020)", "ref_id": "BIBREF29" }, { "start": 527, "end": 549, "text": "(Hovy and Lavid, 2010;", "ref_id": "BIBREF14" }, { "start": 550, "end": 568, "text": "Song et al., 2020)", "ref_id": "BIBREF29" }, { "start": 685, "end": 704, "text": "(Song et al., 2020)", "ref_id": "BIBREF29" }, { "start": 810, "end": 832, "text": "(Hovy and Lavid, 2010;", "ref_id": "BIBREF14" }, { "start": 833, "end": 851, "text": "Song et al., 2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Where the funding and labor are limited, institutions or researchers might have to turn to the 'frugal' use of human annotators for text labelling tasks. For instance, Andreotta et al. (2019) acknowledged the limitation of not being able to afford high computational and labor costs in their machine learning (ML)-assisted analysis of Tweeter commentary. Johnson et al. (2018) also point out cost control that many engineering teams may need to deal with and emphasize the importance of minimizing labor cost and required training data to meet target results in NLP projects. Therefore, well-planned investment of labor and training resources for NLP and ML tasks is a topic worth considerable scholarly attention. We need to investigate how to make the best use of limited labor and monetary resource to achieve the optimal machine-generated outcomes, while preserving methodological rigor.", "cite_spans": [ { "start": 168, "end": 191, "text": "Andreotta et al. (2019)", "ref_id": "BIBREF0" }, { "start": 355, "end": 376, "text": "Johnson et al. (2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Crowdsourcing is often put forward as a solution to the human coder resource problem. Aside from the fact that crowds are often not experts, this kind of human annotation is allowed only in some national contexts, such as in the US (e.g., Munro et al., 2010; Pavlick et al., 2014) . This solution is not broadly applicable and has ethical implications with respect to researchers exploiting free or cheap labor. For instance, such option does not conform to the requirement for minimum hourly salaries under employment laws in national contexts such as Australia (Australian Government, 2020) . 
Under circumstances of regulatory limitations and within ethical constraints, it becomes necessary to resort to the frugal use of human annotators to scale up data analyses.", "cite_spans": [ { "start": 239, "end": 258, "text": "Munro et al., 2010;", "ref_id": "BIBREF23" }, { "start": 259, "end": 280, "text": "Pavlick et al., 2014)", "ref_id": "BIBREF26" }, { "start": 575, "end": 592, "text": "Government, 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unlike human annotation tasks for ordinary image annotation (e.g., dog vs. cat recognition), many text annotations require expert knowledge because they are simply more demanding. For instance, the labelling of research skills in job ads involved human annotators who worked as researchers and educators at universities in Mewburn et al. (2020) . These researchers point out that it can be extremely time and money-consuming to hire multiple expert human annotators. In many cases, if annotation procedures were well-devised, the frugal option generated results that were as good as the more costly option (Chang et al., 2017; Cocos et al., 2015) . From the perspective of cost control, a better option would be to also involve non-expert annotators with well-designed annotation schemes to reach the optimal annotation outcomes (Chang et al., 2017) . Therefore, it is in the interest of textual-data scientists to investigate if there is a way to guarantee the quality of manual annotation with the frugal use of human coders for automatic textual data analyses at scale. As many social science disciplines (e.g., applied linguistics or sociology) have a record of excellent human annotation frameworks, it is worth considering if annotation frameworks in any of these fields could help us enhance the methodological soundness for human annotation process in NLP tasks.", "cite_spans": [ { "start": 323, "end": 344, "text": "Mewburn et al. (2020)", "ref_id": "BIBREF19" }, { "start": 606, "end": 626, "text": "(Chang et al., 2017;", "ref_id": "BIBREF4" }, { "start": 627, "end": 646, "text": "Cocos et al., 2015)", "ref_id": "BIBREF7" }, { "start": 829, "end": 849, "text": "(Chang et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The research questions of this study are posed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. For automatic text classification tasks, how could we design human annotators' workshop frugally and at the same time maintain good performance of the machine?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. How could we design the human annotators' workshop to enable easy identification and fixation of problems in the human annotation schema?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. If multiple human annotators were involved, which annotator's labelled data should be adopted for training?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The primary outcomes of this study were as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. 
The frugal use of an expert annotator and a non-expert annotator generated an averaged Cohen's Kappa of 0.76.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. The total time investment of our frugal approach to human annotation was 376 hours (the time consumed by two human annotators).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. The frugal use of only two human annotators plus a limited amount of labelled data resulted in an averaged area under the receiver operating characteristic (ROC) curve (AUC) score of 0.80. 4. Differentiation of coarse-grained and fine-grained labels allowed for enhanced interpretability of the ML performance. It also allowed for the strategic hybrid use of multiple human annotators' labels to optimize the ML performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our human coders annotated job ad data from a corpus of high research skill intensive job ads of computing and healthcare positions 1 . In total, 1,800 job ads were chosen randomly from a large corpus consisting of health-domain and computing-domain job postings. The total word count of the 1,800 job ads was 680,367. The randomly chosen job ads contained 900 health-domain job ads and 900 computing-domain job ads. As we aimed to minimize the labor and time cost, as well as the amount of data used for training and validation, the selection of only 1,800 job ads was based on a balanced consideration of the machine's performance and the time investment in manual annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2.1" }, { "text": "The job ad corpus was purchased from Burning Glass Technology Inc. Due to legal constraints, the data used for this study cannot be shared. However, we assume that our audience does not necessarily need to conduct analyses of job ads and may instead work on other text classification tasks. Alternatively, readers interested in obtaining the same data to verify the results could contact Burning Glass Technology Inc. directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2.1" }, { "text": "We went through the necessary ethics procedures to avoid potential conflicts of interest. We obtained approval for the data to be used for our research purpose. The manuscript of the paper was read by a legal consultant in our team and a representative from Burning Glass Technology Inc. to ensure our publication met contractual agreements. We also signed an agreement with our human annotators to clarify responsibilities and task specifications. The agreement with the human annotators was approved by our ethics delegate. Thus, we believe that ethical issues were mitigated to the best of our abilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethics", "sec_num": "2.2" }, { "text": "Our study involved two human annotators for the labelling of requirements in job ads. The first human annotator, N1, was one of the authors of the paper. N1 was an expert annotator and a PhD candidate who held a master's degree in applied linguistics with extensive experience in identifying job requirements from textual data. The second annotator, N2, was hired as a volunteer for our task. 
N2 held a master's degree in finance with experience in classifying news information, her experience was less relevant compared to N1. Hence, N2 played the role of a novice human annotator in the annotators' team.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "Before assigning the job ads to N1 and N2, the job ad texts were segmented into sentences to be labelled by the annotators. The purpose of segmenting the job ad data into sentences was to reduce cognitive burdens for both annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "It was decided that there should be both coarsegrained labels and fine-grained labels. The decision was theoretically driven and inspired by an inductive analytic framework called 'Move-Step analysis' pioneered by the renowned applied linguist John Swales (1990) . Move-Step analysis is a widely adopted linguistic approach to the systematic examination of different genres (or text types). Genre theorists (Miller, 1984; Bhatia, 2014; Moreno and Swales, 2018) advocate that writing is a social action, and so a specific genre serves as a tool to achieve a social purpose that is shared among a community of practice. In our case, the purpose of the job ad genre is the communication of various skills, qualifications and capabilities required of a particular job vacancy, by the employer to potential hirees. To achieve an overarching purpose of a genre, writers need to involve conventionally acknowledged components in their writing (Swales, 1990) . Swalesian genre theorists differentiated the conventional textual components of a genre into coarse-grained moves and fine-grained steps. The intention of differentiating granularity levels derives from the pedagogical orientation shared among the Swalesian genre theorists (Bhatia, 2014; Maswana et al., 2015; Moreno and Swales, 2018) for clarifying concepts more clearly in class. Move-step analysis has previously been applied by NLP researchers such as Chen et al. (2020) for projects with a strong pedagogical orientation. As argued by Chen et al. (2020) , the provision of coarse-grained and fine-grained con-ventions embedded in the writing of a genre would allow students to learn more efficiently. The pedagogical orientation of move-step analysis aligns well with our intention to identify job requirements to enrich employability training 2 .", "cite_spans": [ { "start": 249, "end": 262, "text": "Swales (1990)", "ref_id": "BIBREF31" }, { "start": 407, "end": 421, "text": "(Miller, 1984;", "ref_id": "BIBREF20" }, { "start": 422, "end": 435, "text": "Bhatia, 2014;", "ref_id": "BIBREF3" }, { "start": 436, "end": 460, "text": "Moreno and Swales, 2018)", "ref_id": "BIBREF21" }, { "start": 936, "end": 950, "text": "(Swales, 1990)", "ref_id": "BIBREF31" }, { "start": 1227, "end": 1241, "text": "(Bhatia, 2014;", "ref_id": "BIBREF3" }, { "start": 1242, "end": 1263, "text": "Maswana et al., 2015;", "ref_id": "BIBREF18" }, { "start": 1264, "end": 1288, "text": "Moreno and Swales, 2018)", "ref_id": "BIBREF21" }, { "start": 1410, "end": 1428, "text": "Chen et al. (2020)", "ref_id": "BIBREF6" }, { "start": 1494, "end": 1512, "text": "Chen et al. 
(2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "To give the readers a clearer sense of what we meant by a coarse-grained/move-level job requirement label and its associated fine-grained/step-level labels, we give the example of the job requirement 'Continuous education' below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "Coarse-grained/Move-step label:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "\u2022 Continuous education.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "Its associated fine-grained labels:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "\u2022 Passion & Self-motivation, \u2022 Participation in training,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "\u2022 Sharing of knowledge, \u2022 Seeking advice, and \u2022 Self-reflection. Moreover, we assumed that the differentiation between coarse-grained and fine-grained labels might have other potential benefits. Having coarseand fine-grained labels may speed up the annotation process. In this regard, Tange et al. (1998) showed that the combination of coarse and finegrained labels helped the readers of informatics process information faster and more accurately.", "cite_spans": [ { "start": 285, "end": 304, "text": "Tange et al. (1998)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "After introducing move-step analysis and assigning the task to the two annotators, N1 conducted the first round of annotation of 200 job ads, as she had the expert skills and knowledge relevant to the task. It was then decided that the unit to be annotated could contain multiple labels, as N1 found that the employers sometimes put multiple requirements in one sentence. Hence, our task was multi-label text classification. After N1 finished the first round of annotation, she came up with a coding schema that listed all the coarse-grained and fine-grained job requirement categories, and she gave the schema to N2. From the second to the last round of annotation, both N1 and N2 were involved in the task. N1 and N2 conducted their annotation tasks individually. The two annotators used the annotation tool Dataturks to label the texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "Overall, there were nine rounds of annotation. In between every two rounds of annotation, the two annotators met once to discuss their compared results. If a high level of inconsistency measured by Cohen's Kappa was found regarding a particular fine-grained label (e.g., Continuous education -Passion & Self-motivation), N1 and N2 would randomly scan through several inconsistent instances and give their justifications about why they labeled in their ways. If the agreement was reached concerning how to label similar instances in the future, both of them would write the agreed approach in their notepads. 
However, if an agreement was not reached after their justifications were given, they would note down the dubious items and leave them for the next meeting when they labeled more data and had further justifications to convince each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "The inter-rater reliability between the two human annotators was measured by Cohen's Kappa. For assessing coders' agreement on the annotation of categorical variables, Hallgren (2013) recommends Cohen's Kappa as the measurement. The Cohen's Kappa equation was given in (1) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K = P (a) \u2212 P (e) 1 \u2212 P (e)", "eq_num": "(1)" } ], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "where P(a) denotes the observed percentage of the human annotators' agreement and P(e) refers to the probability that the agreement is met by chance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "After the Kappa was calculated for each coarsegrained and fine-grained category, we also calculated the standard error for the calculation of the 95% confidence intervals for the Kappa. The standard error equation is given in (2) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 K = P (a)(1 \u2212 P (e)) N (1 \u2212 P (e)) 2", "eq_num": "(2)" } ], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "where N refers to the overall numbers of classified tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Annotators' Workshop", "sec_num": "2.3" }, { "text": "The algorithm chosen for running the auto-coding task was the Support Vector Machine (SVM) (Cortes and Vapnik, 1995) with the linear kernel. Linear SVM is a good choice in a low-resource context (Zhang et al., 2012) , such as in ours. Linear SVM had also a low computational cost and at the same time good prediction results (Vijayan et al., 2017) . For multi-label text classification tasks, Linear SVM could have good ability to generate prediction results close to those generated by manual efforts (Qin and Wang, 2009; Yang et al., 2009; Wang and Chiang, 2011) .", "cite_spans": [ { "start": 91, "end": 116, "text": "(Cortes and Vapnik, 1995)", "ref_id": "BIBREF8" }, { "start": 195, "end": 215, "text": "(Zhang et al., 2012)", "ref_id": "BIBREF36" }, { "start": 325, "end": 347, "text": "(Vijayan et al., 2017)", "ref_id": "BIBREF33" }, { "start": 502, "end": 522, "text": "(Qin and Wang, 2009;", "ref_id": "BIBREF27" }, { "start": 523, "end": 541, "text": "Yang et al., 2009;", "ref_id": "BIBREF35" }, { "start": 542, "end": 564, "text": "Wang and Chiang, 2011)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Methods", "sec_num": "2.4" }, { "text": "We involved several steps in preprocessing the data. As mentioned in the description of the human coders' workshop, we segmented the job ad texts into sentences as labeling units. 
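As a concrete illustration of the agreement measure in Equations (1) and (2), the following minimal Python sketch computes Cohen's Kappa and an approximate 95% confidence interval for a single fine-grained category, given two aligned per-sentence label vectors (one per annotator). The variable names and toy labels are hypothetical, and the interval uses the common large-sample approximation of the standard error rather than claiming to reproduce our exact implementation.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def kappa_with_ci(labels_n1, labels_n2, z=1.96):
    """Cohen's Kappa (Eq. 1) with an approximate 95% CI based on a
    large-sample standard error (cf. Eq. 2) for one category."""
    a, b = np.asarray(labels_n1), np.asarray(labels_n2)
    n = len(a)
    p_a = np.mean(a == b)                       # observed agreement P(a)
    classes = np.union1d(a, b)                  # label values used by either annotator
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in classes)  # chance agreement P(e)
    kappa = (p_a - p_e) / (1 - p_e)             # Eq. (1)
    se = np.sqrt(p_a * (1 - p_a) / (n * (1 - p_e) ** 2))  # large-sample approximation
    return kappa, (kappa - z * se, kappa + z * se)

# hypothetical per-sentence binary labels for one fine-grained category
n1 = [1, 0, 1, 1, 0, 0, 1, 0]
n2 = [1, 0, 1, 0, 0, 0, 1, 1]
print(kappa_with_ci(n1, n2))
print(cohen_kappa_score(n1, n2))  # cross-check against scikit-learn (0.5 on this toy example)
```
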
The classification task hence was also at sentence level. There were 63,504 sentence units overall. The average number of labels per sentence was 1.8. The segmentation into sentences supported calculation of the job requirements more accurately. Additionally, we removed stop words (e.g., conjunctions, articles) from the texts via the stop-word list given in the Natural Language Toolkit (NLTK) corpus v.3.5. The data were then put in a machine-readable format with the word representation tool TfidfVectorizer (term frequency times inverse document frequency) from the Scikit-learn v0.24.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Methods", "sec_num": "2.4" }, { "text": "We separated the processed data into 70%, 15%, and 15% chunks for the training, testing, and validation purposes. The ratio of the training/test/validation sets was based on the conventional practice suggested in Muller and Guido (2016) and Ng (2020) . We were aware of other validation approaches such as K-fold cross-validation (CV). Considering that the tuning of the hyperparameters (e.g., K value and ratio) in other CV approaches could be time-consuming and computationally expensive whilst their gain limited (as in Anguita et al., 2012 and Racz et al., 2021) , we chose to proceed with the frugal option of 70%, 15% ,15% split of the data for train/test/validation.", "cite_spans": [ { "start": 213, "end": 236, "text": "Muller and Guido (2016)", "ref_id": "BIBREF22" }, { "start": 241, "end": 250, "text": "Ng (2020)", "ref_id": "BIBREF25" }, { "start": 523, "end": 547, "text": "Anguita et al., 2012 and", "ref_id": "BIBREF1" }, { "start": 548, "end": 566, "text": "Racz et al., 2021)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Methods", "sec_num": "2.4" }, { "text": "For the parameter-tuning function of the Linear SVM classifier, we adopted the GridSearchCV tool from the Scikit-Learn v0.24.1. More specifically, the parameters tuned were 1) Loss, 2) Maxiteration, 3) Tolerance, 4) Fit intercept, and 5) Intercept scaling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Methods", "sec_num": "2.4" }, { "text": "The performance of the Linear SVM classifier was measured by the AUC. The reason that we chose the AUC is that it, compared to the accuracy, F1 or other such measurements, was less prone to biased results from class imbalance (Suominen et al., 2008; Narkhede, 2018) .", "cite_spans": [ { "start": 226, "end": 249, "text": "(Suominen et al., 2008;", "ref_id": "BIBREF30" }, { "start": 250, "end": 265, "text": "Narkhede, 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Methods", "sec_num": "2.4" }, { "text": "After the AUC values were calculated, we also computed the 95% confidence intervals for our automatic classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Methods", "sec_num": "2.4" }, { "text": "The inter-rater agreement measured by Cohen's K reached an average of 0.76 (see Section 3.1), meaning that most of our manually labeled categories can be used for making at least tentative conclusions. The results related to the total time investment in the human annotation process (see Section 3.2) suggested that two human annotators, each working 5 hours a day, would need approximately 36 days to complete the task. 
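For readers who wish to set up a pipeline comparable to the one described in Section 2.4, the sketch below outlines the main steps: NLTK stop-word removal, TfidfVectorizer features, a 70/15/15 train/test/validation split, GridSearchCV over the named LinearSVC parameters with a one-vs-rest wrapper for multi-label classification, and per-category AUC. It is a minimal sketch under stated assumptions: load_segmented_job_ads is a hypothetical placeholder for the reader's own sentence-segmented, multi-labelled data (our Burning Glass corpus cannot be shared), and the grid values are illustrative rather than the exact settings we used.

```python
import numpy as np
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# hypothetical loader: returns sentence units and their sets of fine-grained labels
sentences, label_sets = load_segmented_job_ads()

Y = MultiLabelBinarizer().fit_transform(label_sets)   # one binary column per category
X = TfidfVectorizer(stop_words=stopwords.words("english")).fit_transform(sentences)

# 70/15/15 split: hold out 30%, then split the held-out part evenly into test and validation
X_train, X_rest, y_train, y_rest = train_test_split(X, Y, test_size=0.30, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

# grid over the LinearSVC parameters named in Section 2.4 (values are illustrative only)
param_grid = {
    "estimator__loss": ["hinge", "squared_hinge"],
    "estimator__max_iter": [1000, 5000],
    "estimator__tol": [1e-4, 1e-3],
    "estimator__fit_intercept": [True, False],
    "estimator__intercept_scaling": [0.5, 1.0],
}
search = GridSearchCV(OneVsRestClassifier(LinearSVC()), param_grid, cv=3)
search.fit(X_train, y_train)

# per-category AUC on the test split, macro-averaged (skipping degenerate columns)
scores = search.decision_function(X_test)
aucs = [roc_auc_score(y_test[:, j], scores[:, j])
        for j in range(Y.shape[1]) if 0 < y_test[:, j].sum() < y_test.shape[0]]
print("macro-averaged AUC:", np.mean(aucs))
```
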
Section 3.3 is concerned with the performance of the two automatic classifiers trained with data labeled by our two annotators. Although the two classifiers both reached an averaged AUC of 0.80, a closer examination of fine-grained categories revealed potential room for further improvement to the human annotation schema. These findings posed the question of whether high inter-rater agreement is more important than the interpretability of the ML results. Moreover, Section 3.3 introduces the strategic hybrid use of the two classifiers for optimization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "The averaged inter-rater reliability measured by Kappa for all the identified categories reached 0.76 (see Table 1 ). For the fine-grained categories, the Kappa ranged from a minimum of 0.60 to a maximum of 0.94. At the coarse-grained level, the Kappa ranged from 0.68 to 0.83. Based on the Kappa interpretation guidelines suggested by Krippendorff (2018) , Kappa values under 0.67 indicate that no conclusions should be drawn. Values ranging from 0.67 to 0.80 allow tentative conclusions. Values above 0.80 indicate that definite conclusions can be made. Based on Krippendorff's guidelines, it is safe to claim that only 9 out of 72, or 12.5%, of the fine-grained categories did not reach the standard for making a tentative conclusion. The remaining 87.5% of fine-grained categories reached the 'Pass' Kappa threshold defined by Krippendorff, which has been deemed among the strictest (Hallgren, 2013 ). If we use the guidelines defined by Landis and Koch (1977) , who viewed Kappa values below 0.61 as still indicating moderate agreement between two annotators, most of our fine-grained categories can be used for making at least tentative conclusions.", "cite_spans": [ { "start": 336, "end": 355, "text": "Krippendorff (2018)", "ref_id": "BIBREF16" }, { "start": 902, "end": 917, "text": "(Hallgren, 2013", "ref_id": "BIBREF11" }, { "start": 953, "end": 975, "text": "Landis and Koch (1977)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Inter-rater Agreement", "sec_num": "3.1" }, { "text": "The annotators reported that, on average, they spent ten seconds annotating each sentence token when fully concentrating on the task. The two annotators both labelled 63,504 sentence tokens. Therefore, the total time investment for a single-person annotation task was approximately 177 hours. Suppose a research team hires two annotators to do the coding task concurrently, and both annotators work five hours a day. A project of a size comparable to ours would then need about 36 days for the manual labeling to be completed. We considered this time span reasonably moderate. In addition, if the hired annotators could work for more than five hours each day, the manual labeling process could be completed even faster. The exact hours allocated to a human annotator per day may vary based on each research team's considerations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Investment in Human Annotation", "sec_num": "3.2" }, { "text": "The total labeling hours of the two annotators were 354 hours. Our corpus contained 826,891 words. Therefore, the approximate time investment per word for our labelling task was 1.6 s. 
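For transparency, the per-annotator time figures reported above follow from simple arithmetic (a sketch, with the same rounding as in the text):

\[
\begin{aligned}
t_{\text{one annotator}} &\approx 63{,}504 \text{ sentences} \times 10\,\text{s} \approx 176.4\,\text{h} \approx 177\,\text{h},\\
t_{\text{both annotators}} &\approx 2 \times 177\,\text{h} = 354\,\text{h of labeling},\\
\text{calendar time} &\approx 177\,\text{h} \div 5\,\text{h/day} \approx 36\ \text{days (two annotators working concurrently)}.
\end{aligned}
\]
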
There were nine rounds of meetings (one hour for every meeting) plus the two-hour orientation time. Hence, two-person efforts for orientation and meetings cost 22 hours. In total, our two-annotator labeling task incurred a 376-hour time investment. Any team that wants to use a similarly frugal approach to their human-labeling task may find our results of interest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Investment in Human Annotation", "sec_num": "3.2" }, { "text": "The two automatic classifiers trained and tested with the data labeled by our two human annotators both reached an averaged gold-standard AUC value of 0.80. Table 2 suggests that 58% of the coarse-grained categories reached AUC values above 0.80 with Machine N1 on data labeled by N1. Around 57% of step-level categories reached AUC values above 0.8 with Machine N2 on data labeled by N2. The AUC scores given by the machine trained and tested on data labeled by annotator N1 ranged from 0.52 to 1.00. The AUC scores given by the machine trained and tested on data labeled by annotator N2 ranged from 0.58 to 0.99. Interestingly, when we calculated the average of the AUC results given by Machine N1 trained and tested on Data N1 for all the fine-grained categories, the value reached 0.80. Similarly, the averaged AUC results given by Machine N2 trained and tested on Data N2 also reached 0.80. This reminded us that even when a machine's performance seems outstanding at a coarse-grained level, potential problems at a fine-grained level might be invisible.", "cite_spans": [], "ref_spans": [ { "start": 157, "end": 164, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Performance of the Automatic Classifier", "sec_num": "3.3" }, { "text": "Certain coarse-grained categories such as 'Decision makers' and 'Public welfare' were low in AUC scores. We will pay particular attention to these categories in our future attempts at continuous improvement. Our approach of identifying both the fine- and coarse-grained categories proved to increase the interpretability of the results. More specifically, if we had not differentiated between the fine- and coarse-grained categories, we would not have been able to know where the problems lay in the human annotation schema. With the information about which fine-grained categories did well and which did not, future attempts to drive continuous improvement of the human coding schema can be more efficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance of the Automatic Classifier", "sec_num": "3.3" }, { "text": "When classifier Ni was tested with data labeled by Nj, most of our fine-grained categories did not show a large decrease in the AUC. When the drop was small, we assumed that the two ML classifiers trained by the two annotators performed almost equally well. We only found 15 fine-grained categories with a relatively large decrease in the AUC. 
We used an averaged decrease of 0.05 as the threshold (a threshold used in Hiissa et al., 2006) to denote a large decrease in classifier Ni's performance when tested with data labeled by Nj.", "cite_spans": [ { "start": 422, "end": 442, "text": "Hiissa et al., 2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Performance of the Automatic Classifier", "sec_num": "3.3" }, { "text": "These 15 fine-grained categories, which showed a large decrease in performance were 'Peer practitioners', 'Interpersonal skills', 'Safety awareness', 'Agility', 'Passion Motivation', 'Problem understanding & solving', 'Unspecific payment', 'Residency', 'Refined design', 'Change management', 'Risk management', 'Conflict management', 'Working in harsh environment', 'Resource allocation', and 'Medical science subject knowledge' (Table 2) .", "cite_spans": [], "ref_spans": [ { "start": 429, "end": 438, "text": "(Table 2)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Performance of the Automatic Classifier", "sec_num": "3.3" }, { "text": "These 15 fine-grained categories had good performance with Machine Ni tested on data labeled by Ni, but Machine Ni on data labeled by Nj gave a worse performance. This could indicate that the two human annotators' inner-rater reliability was high, but their inter-rater reliability was not as high. When human annotators face categories like these 15 ones in our study, we recommend a check regarding which features the human annotator Ni deemed as relevant to a category, but the human annotator Nj deemed as not. For the rest categories that did not show a large decrease, we recommend that researchers put Machine Ni into the formal use if Machine Ni on Data Nj results in less decrease in the AUC whilst Machine Ni's performance on Data Ni is also good. Instead of relying on the use of a single classifier for classifying all the fine-grained categories, the hybrid usage of Machine N1 and Machine N2 could optimize the classifier's performance even if the annotators' workshop was frugally designed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance of the Automatic Classifier", "sec_num": "3.3" }, { "text": "Our study showed that even the frugal use of only two human annotators plus a limited amount of labeled data resulted in an averaged AUC score of 0.80. Nonetheless, the differentiation between the fine-grained and coarse-grained categories in our coding schema revealed even the averaged AUC of 0.80 did not necessarily mean the quality of human annotation was as good 3 . The differentiation of fine and coarse granularities could enhance the interpretability of the results. In particular, such a differentiation provided a straightforward indication as to where the machine performed well or not and also where the problems lay in the human annotators' coding schema.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Our study had limitations. Although we provided justifications for all the choices we made in our methods, there is room to refine our project's design (e.g., involving classification of other genres) when we have more resources. Compared to most previous coding schemas where no differentiation of granularity levels was made, our approach could allow more to-the-point and efficient fixation of the human annotation for continuous improvement. Our findings regarding the benefits of having two granularity levels echo the results in Chen et al. (2020) . 
Our choice of making the differentiation between granularity levels counters the suggestion given by Hovy and Lavid (2010) . They argue that coarser granularity would improve the accuracy of human annotation results. Nonetheless, Hovy and Lavid (2010) have mostly used examples of semantic recognition tasks such as verb-sense annotation to support their argument. Our task of text classification is different from semantic recognition. Therefore, it is worth further investigating whether 3 The point of constantly mentioning the coarse-grained categories in this paper is to emphasize how coarse granularity alone was unable to ensure the optimal performance for our specific annotation task. Single granularity level has been pervasively used in many text classication tasks (Chen et al., 2018; Da San Martino et al., 2019; Heinisch and Cimiano, 2021) . Nonetheless, recent studies (Chen et al., 2018; Da San Martino et al., 2019; Heinisch and Cimiano, 2021) suggest that single granularity cannot guarantee the optimal performance for certain tasks, which echo our findings here. In addition, we feel it necessary to keep the coarse granularity because the high-level categories are always useful when presenting complex results to the public it is reasonable to always opt for 'neutering' for all NLP tasks only for the sake of reaching a high inter-rater agreement regardless of the research purpose.", "cite_spans": [ { "start": 535, "end": 553, "text": "Chen et al. (2020)", "ref_id": "BIBREF6" }, { "start": 657, "end": 678, "text": "Hovy and Lavid (2010)", "ref_id": "BIBREF14" }, { "start": 786, "end": 807, "text": "Hovy and Lavid (2010)", "ref_id": "BIBREF14" }, { "start": 1046, "end": 1047, "text": "3", "ref_id": null }, { "start": 1334, "end": 1353, "text": "(Chen et al., 2018;", "ref_id": "BIBREF5" }, { "start": 1354, "end": 1382, "text": "Da San Martino et al., 2019;", "ref_id": "BIBREF9" }, { "start": 1383, "end": 1410, "text": "Heinisch and Cimiano, 2021)", "ref_id": "BIBREF12" }, { "start": 1441, "end": 1460, "text": "(Chen et al., 2018;", "ref_id": "BIBREF5" }, { "start": 1461, "end": 1489, "text": "Da San Martino et al., 2019;", "ref_id": "BIBREF9" }, { "start": 1490, "end": 1517, "text": "Heinisch and Cimiano, 2021)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Our frugal use of one expert annotator and one non-expert annotators proved to cost moderate annotation time whilst generating reasonably good results. Compared to the recruitment of multiple expert-annotators, our approach certainly was much less costly. The strategically hybrid use of automatic classifiers trained by our two annotators is perhaps comparable to a classifier trained by only expert annotators. However, such an assumption is subject to future investigations where appropriate measures are involved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Future scholarly attempts could explore this topic of frugal hybrids of machines and human experts further to verify our assumption. In this regard, Fort (2016) and Chen et al. (2020) echo our thoughts by arguing that a well-devised nonexpert annotator workshop could allow the labeling quality to be as good as when only expert annotators generate the labeling. Chang et al. 
(2017) expressed the concern that writing guidelines for even simple concepts for non-expert coders can be very prohibitive, but our approach of mixing both expert and non-expert coders is less likely to incur uncertainties and unexpected costs. To drive the progress of the science of annotation, scholars in the future might find it interesting to compare labeling results generated by pure experts, a mixture of experts non-experts, and crowdsourced workers for the same NLP project.", "cite_spans": [ { "start": 149, "end": 160, "text": "Fort (2016)", "ref_id": "BIBREF10" }, { "start": 165, "end": 183, "text": "Chen et al. (2020)", "ref_id": "BIBREF6" }, { "start": 363, "end": 382, "text": "Chang et al. (2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "In this study, we advocate a methodologically sound approach to the frugal use of two annotators to conduct human annotation tasks for NLP projects. Our approach has multiple benefits. Specifically, the time and resource consumption of our frugal approach were moderate compared to the more expensive choice of hiring multiple expert annotators. Having multiple rounds of annotation activities and ongoing meetings makes it possible to make timely justification and adjustments for the annotation schema. Moderate cost, timely communication of dubious labels, joint development of the annotation schema, and reasonably good ML outcomes are the features of our frugal but theoretically sound approach to human annotation. These features make the frugal use of minimally two hu- man annotators a good alternative to crowdsourcing and expert annotation. Regarding whether or not to differentiate granularity levels and whether or not to resort to human annotation frameworks from non-NLP disciplines in the human annotation process, our suggestion is that researchers should make the decision based on specific research purposes. We hope this study could serve as a point to drive reflection upon the science of annotation within our NLP community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We only analyzed computing-domain and health-domain job postings because the current paper is part of a large project to contextualize high-RSI job requirements for pedagogical purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "How to use the identified job requirements to enrich employability training is not covered in the current paper. Our main focus in this study is still the demonstration of the frugal use of human annotators. The point of mentioning the alignment between our pedagogical aim and the use of move-step analysis is to advocate a well-justified selection of analytic framework to be used in human annotators' workshop to fit one's specific research aim.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful for the support from Emsi Burning Glass Inc, PostAc\u00ae, and ANU CV Discovery Translation Fund2.0. Our thanks also go to Prof. Inger Mewburn, Dr. Will Grant, and the anonymous paper reviewers for their insightful comments on this paper. We thank Dr. Lindsay Hogan and Chenchen Xu for offering us advise on the technical and legal requirements involved in this study. We appreciate the anonymous annotator's contribution in our coders' workshop. 
Finally, the first author would like to thank Australian Government Research Training Program International Scholarship for supporting her PhD studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Analyzing social media data: A mixed-methods framework combining computational and qualitative text analysis", "authors": [ { "first": "M", "middle": [], "last": "Andreotta", "suffix": "" }, { "first": "R", "middle": [], "last": "Nugroho", "suffix": "" }, { "first": "M", "middle": [], "last": "Hurlstone", "suffix": "" }, { "first": "F", "middle": [], "last": "Boschetti", "suffix": "" }, { "first": "S", "middle": [], "last": "Farrell", "suffix": "" }, { "first": "I", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Paris", "middle": [], "last": "", "suffix": "" }, { "first": "C", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "Behavior Research Methods", "volume": "51", "issue": "4", "pages": "1776--1781", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreotta, M., Nugroho, R., Hurlstone, M., Boschetti, F., Farrell, S., Walker, I., and Paris, C. (2019). An- alyzing social media data: A mixed-methods frame- work combining computational and qualitative text analysis. Behavior Research Methods, 51(4):1776- 1781.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The 'k' in k-fold cross validation", "authors": [ { "first": "D", "middle": [], "last": "Anguita", "suffix": "" }, { "first": "L", "middle": [], "last": "Ghelardoni", "suffix": "" }, { "first": "A", "middle": [], "last": "Ghio", "suffix": "" }, { "first": "L", "middle": [], "last": "Oneto", "suffix": "" }, { "first": "S", "middle": [], "last": "Ridella", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning", "volume": "", "issue": "", "pages": "441--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anguita, D., Ghelardoni, L., Ghio, A., Oneto, L., and Ridella, S. (2012). The 'k' in k-fold cross valida- tion. In Proceedings of the 2012 European Sympo- sium on Artificial Neural Networks, Computational Intelligence and Machine Learning, pages 441-446.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Fair work: Minimum wages", "authors": [ { "first": "Australian", "middle": [], "last": "Government", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "2021--2028", "other_ids": {}, "num": null, "urls": [], "raw_text": "Australian Government (2020). Fair work: Minimum wages. Accessed: 2021-07-23.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Analysing genre: Language use in professional settings", "authors": [ { "first": "V", "middle": [], "last": "Bhatia", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhatia, V. (2014). Analysing genre: Language use in professional settings. 
Routledge, London, UK.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Revolt: Collaborative crowdsourcing for labeling machine learning datasets", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Chang", "suffix": "" }, { "first": "S", "middle": [], "last": "Amershi", "suffix": "" }, { "first": "E", "middle": [], "last": "Kamar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "2334--2346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, C. J., Amershi, S., and Kamar, E. (2017). Re- volt: Collaborative crowdsourcing for labeling ma- chine learning datasets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 2334-2346.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Short text entity linking with fine-grained topics", "authors": [ { "first": "L", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "Liang", "suffix": "" }, { "first": "C", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "", "suffix": "" }, { "first": "Y", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th ACM International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "457--466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, L., Liang, J., Xie, C., and Xiao, Y. (2018). Short text entity linking with fine-grained topics. In Pro- ceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 457-466.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A machine-learning based model to identify phd-level skills in job ads", "authors": [ { "first": "L", "middle": [], "last": "Chen", "suffix": "" }, { "first": "H", "middle": [], "last": "Suominen", "suffix": "" }, { "first": "I", "middle": [], "last": "Mewburn", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association", "volume": "", "issue": "", "pages": "72--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, L., Suominen, H., and Mewburn, I. (2020). A machine-learning based model to identify phd-level skills in job ads. In Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association, pages 72-80.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Crowd control: effectively utilizing unscreened crowd workers for biomedical data annotation", "authors": [ { "first": "A", "middle": [], "last": "Cocos", "suffix": "" }, { "first": "T", "middle": [], "last": "Qian", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "A", "middle": [ "J" ], "last": "Masino", "suffix": "" } ], "year": 2015, "venue": "Journal of biomedical informatics", "volume": "69", "issue": "", "pages": "86--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cocos, A., Qian, T., Callison-Burch, C., and Masino, A. J. (2015). Crowd control: effectively utilizing un- screened crowd workers for biomedical data annota- tion. 
Journal of biomedical informatics, 69:86-92.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Support-vector networks", "authors": [ { "first": "C", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "V", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "Machine learning", "volume": "20", "issue": "3", "pages": "273--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cortes, C. and Vapnik, V. (1995). Support-vector net- works. Machine learning, 20(3):273-297.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Fine-grained analysis of propaganda in news articles", "authors": [ { "first": "G", "middle": [], "last": "Da San Martino", "suffix": "" }, { "first": "S", "middle": [], "last": "Yu", "suffix": "" }, { "first": "A", "middle": [], "last": "Barron-Cedeno", "suffix": "" }, { "first": "R", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "P", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "5636--5646", "other_ids": {}, "num": null, "urls": [], "raw_text": "Da San Martino, G., Yu, S., Barron-Cedeno, A., Petrov, R., and Nakov, P. (2019). Fine-grained analysis of propaganda in news articles. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5636-5646.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Collaborative Annotation for Reliable Natural Language Processing: Technical and Sociological Aspects", "authors": [ { "first": "K", "middle": [], "last": "Fort", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fort, K. (2016). Collaborative Annotation for Reliable Natural Language Processing: Technical and Socio- logical Aspects. John Wiley Sons, London, UK.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Computing inter-rater reliability for observational data: an overview and tutorial. Tutorials in quantitative methods for psychology", "authors": [ { "first": "K", "middle": [], "last": "Hallgren", "suffix": "" } ], "year": 2013, "venue": "", "volume": "8", "issue": "", "pages": "23--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hallgren, K. (2013). Computing inter-rater reliabil- ity for observational data: an overview and tuto- rial. Tutorials in quantitative methods for psychol- ogy, 8(1):23-24.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A multi-task approach to argument frame classification at variable granularity levels", "authors": [ { "first": "P", "middle": [], "last": "Heinisch", "suffix": "" }, { "first": "P", "middle": [], "last": "Cimiano", "suffix": "" } ], "year": 2021, "venue": "Information Technology", "volume": "63", "issue": "1", "pages": "59--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heinisch, P. and Cimiano, P. (2021). A multi-task approach to argument frame classification at vari- able granularity levels. 
Information Technology, 63(1):59-72.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Towards automated classification of intensive care nursing narratives", "authors": [ { "first": "M", "middle": [], "last": "Hiissa", "suffix": "" }, { "first": "T", "middle": [], "last": "Pahikkala", "suffix": "" }, { "first": "H", "middle": [], "last": "Suominen", "suffix": "" }, { "first": "T", "middle": [], "last": "Lehtikunnas", "suffix": "" }, { "first": "B", "middle": [], "last": "Back", "suffix": "" }, { "first": "H", "middle": [], "last": "Karsten", "suffix": "" }, { "first": "S", "middle": [], "last": "Salantera", "suffix": "" }, { "first": "T", "middle": [], "last": "Salakoski", "suffix": "" } ], "year": 2006, "venue": "Studies in health technology and informatics", "volume": "124", "issue": "", "pages": "789--794", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiissa, M., Pahikkala, T., Suominen, H., Lehtikun- nas, T., Back, B., Karsten, H., Salantera, S., and Salakoski, T. (2006). Towards automated classifica- tion of intensive care nursing narratives. Studies in health technology and informatics, 124:789-794.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Towards a 'science'of corpus annotation: a new methodological challenge for corpus linguistics", "authors": [ { "first": "E", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "J", "middle": [], "last": "Lavid", "suffix": "" } ], "year": 2010, "venue": "International journal of translation", "volume": "22", "issue": "1", "pages": "13--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hovy, E. and Lavid, J. (2010). Towards a 'science'of corpus annotation: a new methodological challenge for corpus linguistics. International journal of trans- lation, 22(1):13-16.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Predicting accuracy on large datasets from smaller pilot data", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "P", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "M", "middle": [], "last": "Dras", "suffix": "" }, { "first": "M", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "450--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johnson, M., Anderson, P., Dras, M., and Steedman, M. (2018). Predicting accuracy on large datasets from smaller pilot data. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics, pages 450-455.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Content analysis: An introduction to its methodology", "authors": [ { "first": "K", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, K. (2018). Content analysis: An intro- duction to its methodology. Sage publications., New York, USA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The measurement of observer agreement for categorical data", "authors": [ { "first": "R", "middle": [], "last": "Landis", "suffix": "" }, { "first": "G", "middle": [], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "Biometrics", "volume": "33", "issue": "1", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landis, R. and Koch, G. (1977). 
The measurement of observer agreement for categorical data. Biometrics, 33(1):159-174.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Move analysis of research articles across five engineering fields: What they share and what they do not. Ampersand", "authors": [ { "first": "S", "middle": [], "last": "Maswana", "suffix": "" }, { "first": "T", "middle": [], "last": "Kanamaru", "suffix": "" }, { "first": "A", "middle": [], "last": "Tajino", "suffix": "" } ], "year": 2015, "venue": "", "volume": "2", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maswana, S., Kanamaru, T., and Tajino, A. (2015). Move analysis of research articles across five engineering fields: What they share and what they do not. Ampersand, 2:1-11.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A machine learning analysis of the non-academic employment opportunities for PhD graduates in Australia", "authors": [ { "first": "I", "middle": [], "last": "Mewburn", "suffix": "" }, { "first": "W", "middle": [ "J" ], "last": "Grant", "suffix": "" }, { "first": "H", "middle": [], "last": "Suominen", "suffix": "" }, { "first": "S", "middle": [], "last": "Kizimchuk", "suffix": "" } ], "year": 2020, "venue": "Higher Education Policy", "volume": "33", "issue": "4", "pages": "799--813", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mewburn, I., Grant, W. J., Suominen, H., and Kizimchuk, S. (2020). A machine learning analysis of the non-academic employment opportunities for PhD graduates in Australia. Higher Education Policy, 33(4):799-813.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Genre as social action", "authors": [ { "first": "C", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1984, "venue": "Quarterly journal of speech", "volume": "70", "issue": "2", "pages": "151--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, C. (1984). Genre as social action. Quarterly journal of speech, 70(2):151-167.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Strengthening move analysis methodology towards bridging the function-form gap. English for Specific Purposes", "authors": [ { "first": "A", "middle": [ "I" ], "last": "Moreno", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Swales", "suffix": "" } ], "year": 2018, "venue": "", "volume": "50", "issue": "", "pages": "40--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moreno, A. I. and Swales, J. M. (2018). Strengthening move analysis methodology towards bridging the function-form gap. English for Specific Purposes, 50:40-63.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Introduction to machine learning with Python: a guide for data scientists", "authors": [ { "first": "A", "middle": [ "C" ], "last": "Muller", "suffix": "" }, { "first": "S", "middle": [], "last": "Guido", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muller, A. C. and Guido, S. (2016). Introduction to machine learning with Python: a guide for data scientists. 
O'Reilly, Newton, US.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Crowdsourcing and language studies: the new generation of linguistic data", "authors": [ { "first": "R", "middle": [], "last": "Munro", "suffix": "" }, { "first": "S", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "V", "middle": [], "last": "Kuperman", "suffix": "" }, { "first": "V", "middle": [ "T" ], "last": "Lai", "suffix": "" }, { "first": "R", "middle": [], "last": "Melnick", "suffix": "" }, { "first": "C", "middle": [], "last": "Potts", "suffix": "" }, { "first": "T", "middle": [], "last": "Schnoebelen", "suffix": "" }, { "first": "H", "middle": [], "last": "Tily", "suffix": "" } ], "year": 2010, "venue": "NAACL Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", "volume": "", "issue": "", "pages": "122--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Munro, R., Bethard, S., Kuperman, V., Lai, V. T., Melnick, R., Potts, C., Schnoebelen, T., and Tily, H. (2010). Crowdsourcing and language studies: the new generation of linguistic data. In NAACL Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 122-130.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Understanding AUC-ROC curve", "authors": [ { "first": "S", "middle": [], "last": "Narkhede", "suffix": "" } ], "year": 2018, "venue": "Towards Data Science", "volume": "26", "issue": "", "pages": "220--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Narkhede, S. (2018). Understanding AUC-ROC curve. Towards Data Science, 26:220-227.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Coursera: Machine learning by Stanford University", "authors": [ { "first": "A", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "2021--2028", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, A. (2020). Coursera: Machine learning by Stanford University. Accessed: 2021-07-23.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The language demographics of Amazon Mechanical Turk", "authors": [ { "first": "E", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "M", "middle": [], "last": "Post", "suffix": "" }, { "first": "A", "middle": [], "last": "Irvine", "suffix": "" }, { "first": "D", "middle": [], "last": "Kachaev", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "79--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pavlick, E., Post, M., Irvine, A., Kachaev, D., and Callison-Burch, C. (2014). The language demographics of Amazon Mechanical Turk. Transactions of the Association for Computational Linguistics, 2:79-92.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Study on multi-label text classification based on SVM", "authors": [ { "first": "Y", "middle": [ "P" ], "last": "Qin", "suffix": "" }, { "first": "X", "middle": [ "K" ], "last": "Wang", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Sixth International Conference on Fuzzy Systems and Knowledge Discovery", "volume": "", "issue": "", "pages": "333--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin, Y. P. and Wang, X. K. (2009). Study on multi-label text classification based on SVM. 
In Proceedings of the Sixth International Conference on Fuzzy Systems and Knowledge Discovery, pages 333-304.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Effect of dataset size and train/test split ratios in QSAR/QSPR multiclass classification", "authors": [ { "first": "A", "middle": [], "last": "Racz", "suffix": "" }, { "first": "D", "middle": [], "last": "Bajusz", "suffix": "" }, { "first": "K", "middle": [], "last": "Heberger", "suffix": "" } ], "year": 2021, "venue": "Molecules", "volume": "26", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Racz, A., Bajusz, D., and Heberger, K. (2021). Effect of dataset size and train/test split ratios in QSAR/QSPR multiclass classification. Molecules, 26(4):1111.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "In validations we trust? the impact of imperfect human annotations as a gold standard on the quality of validation of automated content analysis", "authors": [ { "first": "H", "middle": [], "last": "Song", "suffix": "" }, { "first": "P", "middle": [], "last": "Tolochko", "suffix": "" }, { "first": "J", "middle": [], "last": "Eberl", "suffix": "" }, { "first": "O", "middle": [], "last": "Eisele", "suffix": "" }, { "first": "E", "middle": [], "last": "Greussing", "suffix": "" }, { "first": "T", "middle": [], "last": "Heidenreich", "suffix": "" }, { "first": "F", "middle": [], "last": "Lind", "suffix": "" }, { "first": "S", "middle": [], "last": "Galyga", "suffix": "" }, { "first": "H", "middle": [], "last": "Boomgaarden", "suffix": "" } ], "year": 2020, "venue": "Political Communication", "volume": "37", "issue": "4", "pages": "550--572", "other_ids": {}, "num": null, "urls": [], "raw_text": "Song, H., Tolochko, P., Eberl, J., Eisele, O., Greussing, E., Heidenreich, T., Lind, F., Galyga, S., and Boomgaarden, H. (2020). In validations we trust? the impact of imperfect human annotations as a gold standard on the quality of validation of automated content analysis. Political Communication, 37(4):550-572.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Performance evaluation measures for text mining", "authors": [ { "first": "H", "middle": [], "last": "Suominen", "suffix": "" }, { "first": "S", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "M", "middle": [], "last": "Hiissa", "suffix": "" }, { "first": "F", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "S", "middle": [], "last": "Liu", "suffix": "" }, { "first": "D", "middle": [], "last": "Marghescu", "suffix": "" }, { "first": "T", "middle": [], "last": "Pahikkala", "suffix": "" }, { "first": "B", "middle": [], "last": "Back", "suffix": "" }, { "first": "H", "middle": [], "last": "Karsten", "suffix": "" }, { "first": "T", "middle": [], "last": "Salakoski", "suffix": "" } ], "year": 2008, "venue": "Handbook of Research on Text and Web Mining Technologies", "volume": "", "issue": "", "pages": "724--747", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suominen, H., Pyysalo, S., Hiissa, M., Ginter, F., Liu, S., Marghescu, D., Pahikkala, T., Back, B., Karsten, H., and Salakoski, T. (2008). Performance evaluation measures for text mining. In Song, M. and Wu, Y., editors, Handbook of Research on Text and Web Mining Technologies, pages 724-747. 
IGI Global, Hershey, USA.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Genre analysis: English in academic and research settings", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Swales", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swales, J. M. (1990). Genre analysis: English in academic and research settings. Cambridge University Press, Cambridge, UK.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The granularity of medical narratives and its effect on the speed and completeness of information retrieval", "authors": [ { "first": "H", "middle": [ "J" ], "last": "Tange", "suffix": "" }, { "first": "H", "middle": [ "C" ], "last": "Schouten", "suffix": "" }, { "first": "A", "middle": [ "D" ], "last": "Kester", "suffix": "" }, { "first": "A", "middle": [], "last": "Hasman", "suffix": "" } ], "year": 1998, "venue": "Journal of the American Medical Informatics Association", "volume": "5", "issue": "6", "pages": "571--582", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tange, H. J., Schouten, H. C., Kester, A. D., and Hasman, A. (1998). The granularity of medical narratives and its effect on the speed and completeness of information retrieval. Journal of the American Medical Informatics Association, 5(6):571-582.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A comprehensive study of text classification algorithms", "authors": [ { "first": "V", "middle": [ "K" ], "last": "Vijayan", "suffix": "" }, { "first": "K", "middle": [ "R" ], "last": "Bindu", "suffix": "" }, { "first": "L", "middle": [], "last": "Parameswaran", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI)", "volume": "", "issue": "", "pages": "1109--1113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijayan, V. K., Bindu, K. R., and Parameswaran, L. (2017). A comprehensive study of text classification algorithms. In Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pages 1109-1113.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Multi-label text categorization problem using support vector machine approach with membership function", "authors": [ { "first": "T", "middle": [ "Y" ], "last": "Wang", "suffix": "" }, { "first": "H", "middle": [ "M" ], "last": "Chiang", "suffix": "" } ], "year": 2011, "venue": "Neurocomputing", "volume": "74", "issue": "17", "pages": "3682--3689", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, T. Y. and Chiang, H. M. (2011). Multi-label text categorization problem using support vector machine approach with membership function. 
Neurocomputing, 74(17):3682-3689.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Effective multi-label active learning for text classification", "authors": [ { "first": "B", "middle": [], "last": "Yang", "suffix": "" }, { "first": "J", "middle": [ "T" ], "last": "Sun", "suffix": "" }, { "first": "T", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "916--926", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, B., Sun, J. T., Wang, T., and Chen, Z. (2009). Effective multi-label active learning for text classification. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 916-926.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Scaling up kernel SVM on limited resources: A low-rank linearization approach", "authors": [ { "first": "K", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Z", "middle": [], "last": "Wang", "suffix": "" }, { "first": "F", "middle": [], "last": "Moerchen", "suffix": "" } ], "year": 2012, "venue": "Artificial intelligence and statistics", "volume": "22", "issue": "", "pages": "1425--1434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, K., Lan, L., Wang, Z., and Moerchen, F. (2012). Scaling up kernel SVM on limited resources: A low-rank linearization approach. Artificial intelligence and statistics, 22:1425-1434.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "text": "Cohen's Kappa and the respective 95% confidence interval (CI) for the inter-rater agreement.", "num": null, "html": null, "content": "