{ "paper_id": "J08-4004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:19:31.217769Z" }, "title": "Inter-Coder Agreement for Computational Linguistics", "authors": [ { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California", "location": { "addrLine": "13274 Fiji Way, Marina Del Rey", "postCode": "90292", "region": "CA" } }, "email": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California", "location": { "addrLine": "13274 Fiji Way, Marina Del Rey", "postCode": "90292", "region": "CA" } }, "email": "poesio@essex.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This article is a survey of methods for measuring agreement among corpus annotators. It exposes the mathematics and underlying assumptions of agreement coefficients, covering Krippendorff's alpha as well as Scott's pi and Cohen's kappa; discusses the use of coefficients in several annotation tasks; and argues that weighted, alpha-like coefficients, traditionally less used than kappalike measures in computational linguistics, may be more appropriate for many corpus annotation tasks-but that their use makes the interpretation of the value of the coefficient even harder.", "pdf_parse": { "paper_id": "J08-4004", "_pdf_hash": "", "abstract": [ { "text": "This article is a survey of methods for measuring agreement among corpus annotators. It exposes the mathematics and underlying assumptions of agreement coefficients, covering Krippendorff's alpha as well as Scott's pi and Cohen's kappa; discusses the use of coefficients in several annotation tasks; and argues that weighted, alpha-like coefficients, traditionally less used than kappalike measures in computational linguistics, may be more appropriate for many corpus annotation tasks-but that their use makes the interpretation of the value of the coefficient even harder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Since the mid 1990s, increasing effort has gone into putting semantics and discourse research on the same empirical footing as other areas of computational linguistics (CL). This soon led to worries about the subjectivity of the judgments required to create annotated resources, much greater for semantics and pragmatics than for the aspects of language interpretation of concern in the creation of early resources such as the Brown corpus (Francis and Kucera 1982) , the British National Corpus (Leech, Garside, and Bryant 1994) , or the Penn Treebank (Marcus, Marcinkiewicz, and Santorini 1993) . Problems with early proposals for assessing coders' agreement on discourse segmentation tasks (such as Passonneau and Litman 1993) led Carletta (1996) to suggest the adoption of the K coefficient of agreement, a variant of Cohen's \u03ba (Cohen 1960) , as this had already been used for similar purposes in content analysis for a long time. 1 Carletta's proposals were enormously influential, and K quickly became the de facto standard for measuring agreement in computational linguistics not only in work on discourse (Carletta et al. 1997; Core and Allen 1997; Hearst 1997; Poesio and Vieira 1998; Di Eugenio 2000; Stolcke et al. 
2000; Carlson, Marcu, and Okurowski 2003) but also for other annotation tasks (e.g., V\u00e9ronis 1998; Bruce and Wiebe 1998; Stevenson and Gaizauskas 2000; Craggs and McGee Wood 2004; Mieskes and Strube 2006) . During this period, however, a number of questions have also been raised about K and similar coefficients-some already in Carletta's own work (Carletta et al. 1997 )-ranging from simple questions about the way the coefficient is computed (e.g., whether it is really applicable when more than two coders are used), to debates about which levels of agreement can be considered 'acceptable ' (Di Eugenio 2000; Craggs and McGee Wood 2005) , to the realization that K is not appropriate for all types of agreement (Poesio and Vieira 1998; Marcu, Romera, and Amorrortu 1999; Di Eugenio 2000; Stevenson and Gaizauskas 2000) . Di Eugenio raised the issue of the effect of skewed distributions on the value of K and pointed out that the original \u03ba developed by Cohen is based on very different assumptions about coder bias from the K of Siegel and Castellan (1988) , which is typically used in CL. This issue of annotator bias was further debated in Di Eugenio and Glass (2004) and Craggs and McGee Wood (2005) . Di Eugenio and Glass pointed out that the choice of calculating chance agreement by using individual coder marginals (\u03ba) or pooled distributions (K) can lead to reliability values falling on different sides of the accepted 0.67 threshold, and recommended reporting both values. Craggs and McGee Wood argued, following Krippendorff (2004a,b) , that measures like Cohen's \u03ba are inappropriate for measuring agreement. Finally, Passonneau has been advocating the use of Krippendorff's \u03b1 (Krippendorff 1980 (Krippendorff , 2004a for coding tasks in CL which do not involve nominal and disjoint categories, including anaphoric annotation, wordsense tagging, and summarization (Passonneau 2004 (Passonneau , 2006 Nenkova and Passonneau 2004; Passonneau, Habash, and Rambow 2006) . Now that more than ten years have passed since Carletta's original presentation at the workshop on Empirical Methods in Discourse, it is time to reconsider the use of coefficients of agreement in CL in a systematic way. In this article, a survey of coefficients of agreement and their use in CL, we have three main goals. First, we discuss in some detail the mathematics and underlying assumptions of the coefficients used or mentioned in the CL and content analysis literatures. Second, we also cover in some detail Krippendorff's \u03b1, often mentioned but never really discussed in detail in previous CL literature other than in the papers by Passonneau just mentioned. Third, we review the past ten years of experience with coefficients of agreement in CL, reconsidering the issues that have been raised also from a mathematical perspective. 2", "cite_spans": [ { "start": 440, "end": 465, "text": "(Francis and Kucera 1982)", "ref_id": "BIBREF38" }, { "start": 496, "end": 529, "text": "(Leech, Garside, and Bryant 1994)", "ref_id": "BIBREF59" }, { "start": 553, "end": 596, "text": "(Marcus, Marcinkiewicz, and Santorini 1993)", "ref_id": "BIBREF63" }, { "start": 702, "end": 729, "text": "Passonneau and Litman 1993)", "ref_id": "BIBREF76" }, { "start": 734, "end": 749, "text": "Carletta (1996)", "ref_id": "BIBREF17" }, { "start": 832, "end": 844, "text": "(Cohen 1960)", "ref_id": "BIBREF21" }, { "start": 935, "end": 936, "text": "1", "ref_id": null }, { "start": 1113, "end": 1135, "text": "(Carletta et al. 
1997;", "ref_id": "BIBREF18" }, { "start": 1136, "end": 1156, "text": "Core and Allen 1997;", "ref_id": "BIBREF23" }, { "start": 1157, "end": 1169, "text": "Hearst 1997;", "ref_id": "BIBREF43" }, { "start": 1170, "end": 1193, "text": "Poesio and Vieira 1998;", "ref_id": "BIBREF83" }, { "start": 1194, "end": 1210, "text": "Di Eugenio 2000;", "ref_id": "BIBREF27" }, { "start": 1211, "end": 1231, "text": "Stolcke et al. 2000;", "ref_id": "BIBREF100" }, { "start": 1232, "end": 1267, "text": "Carlson, Marcu, and Okurowski 2003)", "ref_id": "BIBREF19" }, { "start": 1311, "end": 1324, "text": "V\u00e9ronis 1998;", "ref_id": "BIBREF106" }, { "start": 1325, "end": 1346, "text": "Bruce and Wiebe 1998;", "ref_id": "BIBREF10" }, { "start": 1347, "end": 1377, "text": "Stevenson and Gaizauskas 2000;", "ref_id": "BIBREF99" }, { "start": 1378, "end": 1405, "text": "Craggs and McGee Wood 2004;", "ref_id": "BIBREF24" }, { "start": 1406, "end": 1430, "text": "Mieskes and Strube 2006)", "ref_id": "BIBREF66" }, { "start": 1575, "end": 1596, "text": "(Carletta et al. 1997", "ref_id": "BIBREF18" }, { "start": 1820, "end": 1839, "text": "' (Di Eugenio 2000;", "ref_id": "BIBREF27" }, { "start": 1840, "end": 1867, "text": "Craggs and McGee Wood 2005)", "ref_id": "BIBREF25" }, { "start": 1942, "end": 1966, "text": "(Poesio and Vieira 1998;", "ref_id": "BIBREF83" }, { "start": 1967, "end": 2001, "text": "Marcu, Romera, and Amorrortu 1999;", "ref_id": "BIBREF62" }, { "start": 2002, "end": 2018, "text": "Di Eugenio 2000;", "ref_id": "BIBREF27" }, { "start": 2019, "end": 2049, "text": "Stevenson and Gaizauskas 2000)", "ref_id": "BIBREF99" }, { "start": 2261, "end": 2288, "text": "Siegel and Castellan (1988)", "ref_id": "BIBREF97" }, { "start": 2377, "end": 2401, "text": "Eugenio and Glass (2004)", "ref_id": "BIBREF28" }, { "start": 2406, "end": 2434, "text": "Craggs and McGee Wood (2005)", "ref_id": "BIBREF25" }, { "start": 2755, "end": 2777, "text": "Krippendorff (2004a,b)", "ref_id": null }, { "start": 2920, "end": 2938, "text": "(Krippendorff 1980", "ref_id": "BIBREF54" }, { "start": 2939, "end": 2960, "text": "(Krippendorff , 2004a", "ref_id": "BIBREF56" }, { "start": 3107, "end": 3123, "text": "(Passonneau 2004", "ref_id": "BIBREF73" }, { "start": 3124, "end": 3142, "text": "(Passonneau , 2006", "ref_id": "BIBREF74" }, { "start": 3143, "end": 3171, "text": "Nenkova and Passonneau 2004;", "ref_id": "BIBREF71" }, { "start": 3172, "end": 3208, "text": "Passonneau, Habash, and Rambow 2006)", "ref_id": "BIBREF75" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Motivations", "sec_num": "1." }, { "text": "We begin with a quick recap of the goals of agreement studies, inspired by Krippendorff (2004a, Section 11.1) . Researchers who wish to use hand-coded data-that is, data in which items are labeled with categories, whether to support an empirical claim or to develop and test a computational model-need to show that such data are reliable.", "cite_spans": [ { "start": 75, "end": 109, "text": "Krippendorff (2004a, Section 11.1)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Agreement, Reliability, and Validity", "sec_num": "2.1" }, { "text": "The fundamental assumption behind the methodologies discussed in this article is that data are reliable if coders can be shown to agree on the categories assigned to units to an extent determined by the purposes of the study (Krippendorff 2004a; Craggs and McGee Wood 2005) . 
If different coders produce consistently similar results, then we can infer that they have internalized a similar understanding of the annotation guidelines, and we can expect them to perform consistently under this understanding.", "cite_spans": [ { "start": 225, "end": 245, "text": "(Krippendorff 2004a;", "ref_id": "BIBREF56" }, { "start": 246, "end": 273, "text": "Craggs and McGee Wood 2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Agreement, Reliability, and Validity", "sec_num": "2.1" }, { "text": "Reliability is thus a prerequisite for demonstrating the validity of the coding scheme-that is, to show that the coding scheme captures the \"truth\" of the phenomenon being studied, in case this matters: If the annotators are not consistent then either some of them are wrong or else the annotation scheme is inappropriate for the data. (Just as in real life, the fact that witnesses to an event disagree with each other makes it difficult for third parties to know what actually happened.) However, it is important to keep in mind that achieving good agreement cannot ensure validity: Two observers of the same event may well share the same prejudice while still being objectively wrong.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreement, Reliability, and Validity", "sec_num": "2.1" }, { "text": "It is useful to think of a reliability study as involving a set of items (markables), a set of categories, and a set of coders (annotators) who assign to each item a unique category label. The discussions of reliability in the literature often use different notations to express these concepts. We introduce a uniform notation, which we hope will make the relations between the different coefficients of agreement clearer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Common Notation", "sec_num": "2.2" }, { "text": "Thesetofitems is { i | i \u2208 I } and is of cardinality i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "\u2022 Thesetofcategories is { k | k \u2208 K } and is of cardinality k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "\u2022 Thesetofcoders is { c | c \u2208 C } and is of cardinality c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "Confusion also arises from the use of the letter P, which is used in the literature with at least three distinct interpretations, namely \"proportion,\" \"percent,\" and \"probability.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "We will use the following notation uniformly throughout the article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "\u2022 A o is observed agreement and D o is observed disagreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "\u2022 A e and D e are expected agreement and expected disagreement, respectively. 
The relevant coefficient will be indicated with a superscript when an ambiguity may arise (for example, A \u03c0 e is the expected agreement used for calculating \u03c0, and A \u03ba e is the expected agreement used for calculating \u03ba).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "\u2022 P(\u2022) is reserved for the probability of a variable, andP(\u2022) is an estimate of such probability from observed data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "Finally, we use n with a subscript to indicate the number of judgments of a given type:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "\u2022 n ik is the number of coders who assigned item i to category k;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "\u2022 n ck is the number of items assigned by coder c to category k;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "\u2022 n k is the total number of items assigned by all coders to category k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022", "sec_num": null }, { "text": "The simplest measure of agreement between two coders is percentage of agreement or observed agreement, defined for example by Scott (1955, page 323) as \"the percentage of judgments on which the two analysts agree when coding the same data independently.\" This is the number of items on which the coders agree divided by the total number of items. More precisely, and looking ahead to the following discussion, observed agreement is the arithmetic mean of the agreement value agr i for all items i \u2208 I, defined as follows:", "cite_spans": [ { "start": 126, "end": 148, "text": "Scott (1955, page 323)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Agreement Without Chance Correction", "sec_num": "2.3" }, { "text": "agr i = 1 if the two coders assign i to the same category 0 if the two coders assign i to different categories", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreement Without Chance Correction", "sec_num": "2.3" }, { "text": "Observed agreement over the values agr i for all items i \u2208 I is then:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreement Without Chance Correction", "sec_num": "2.3" }, { "text": "A o = 1 i \u2211 i\u2208I agr i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreement Without Chance Correction", "sec_num": "2.3" }, { "text": "For example, let us assume a very simple annotation scheme for dialogue acts in information-seeking dialogues which makes a binary distinction between the categories statement and info-request, as in the DAMSL dialogue act scheme (Allen and Core 1997) . Two coders classify 100 utterances according to this scheme as shown in Table 1 . Percentage agreement for this data set is obtained by summing up the cells on the diagonal and dividing by the total number of items: A o = (20 + 50)/100 = 0.7. Observed agreement enters in the computation of all the measures of agreement we consider, but on its own it does not yield values that can be compared across studies, because some agreement is due to chance, and the amount of chance agreement is affected by two factors that vary from one study to the other. 
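To make the computation concrete, here is a minimal Python sketch (ours, not code from the article) that derives observed agreement for the two-coder data in Table 1 from a contingency table; the dictionary layout and variable names are illustrative assumptions.

```python
# Contingency table from Table 1; we read rows as coder B and columns as coder A,
# so table[(b, a)] is the number of utterances labelled b by coder B and a by coder A.
table = {
    ("STAT", "STAT"): 20, ("STAT", "IREQ"): 20,
    ("IREQ", "STAT"): 10, ("IREQ", "IREQ"): 50,
}

total_items = sum(table.values())                            # i = 100
agreeing = sum(n for (b, a), n in table.items() if b == a)   # items with agr_i = 1
A_o = agreeing / total_items                                 # (20 + 50) / 100
print(A_o)                                                   # 0.7
```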
First of all, as Scott (1955, page 322 ) points out, \"[percentage agreement] is biased in favor of dimensions with a small number of categories.\" In other words, given two coding schemes for the same phenomenon, the one with fewer categories will result in higher percentage agreement just by chance. If two coders randomly classify utterances in a uniform manner using the scheme of Table 1 , we would expect an equal number of items to fall in each of the four cells in the table, and therefore pure chance will cause the coders to agree on half of the items (the two cells on the diagonal: 1 4 + 1 4 ). But suppose we want to refine the simple binary coding scheme by introducing a new category, check, as in the MapTask coding scheme (Carletta et al. 1997) . If two coders randomly classify utterances in a uniform manner using the three categories in the second scheme, they would only agree on a third of the items ( 1 9 + 1 9 + 1 9 ).", "cite_spans": [ { "start": 230, "end": 251, "text": "(Allen and Core 1997)", "ref_id": "BIBREF0" }, { "start": 824, "end": 845, "text": "Scott (1955, page 322", "ref_id": null }, { "start": 1545, "end": 1567, "text": "(Carletta et al. 1997)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 326, "end": 333, "text": "Table 1", "ref_id": null }, { "start": 1191, "end": 1198, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Agreement Without Chance Correction", "sec_num": "2.3" }, { "text": "A simple example of agreement on dialogue act tagging. STAT IREQ TOTAL STAT 20 20 40 CODER B IREQ 10 50 60 TOTAL 30 70 100 The second reason percentage agreement cannot be trusted is that it does not correct for the distribution of items among categories: We expect a higher percentage agreement when one category is much more common than the other. This problem, already raised by Hsu and Field (2003, page 207) among others, can be illustrated using the following example (Di Eugenio and Glass 2004, example 3, pages 98-99) . Suppose 95% of utterances in a particular domain are statement, and only 5% are inforequest. We would then expect by chance that 0.95 \u00d7 0.95 = 0.9025 of the utterances would be classified as statement by both coders, and 0.05 \u00d7 0.05 = 0.0025 as inforequest, so the coders would agree on 90.5% of the utterances. Under such circumstances, a seemingly high observed agreement of 90% is actually worse than expected by chance.", "cite_spans": [ { "start": 395, "end": 425, "text": "Hsu and Field (2003, page 207)", "ref_id": null }, { "start": 491, "end": 538, "text": "Eugenio and Glass 2004, example 3, pages 98-99)", "ref_id": null } ], "ref_spans": [ { "start": 55, "end": 135, "text": "STAT IREQ TOTAL STAT 20 20 40 CODER B IREQ 10 50 60 TOTAL 30 70 100", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "The conclusion reached in the literature is that in order to get figures that are comparable across studies, observed agreement has to be adjusted for chance agreement. These are the measures we will review in the remainder of this article. We will not look at the variants of percentage agreement used in CL work on discourse before the introduction of kappa, such as percentage agreement with an expert and percentage agreement with the majority; see Carletta (1996) for discussion and criticism. 
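The chance-agreement figures quoted in this section are easy to verify; the following snippet is only an illustration of that arithmetic (ours, with illustrative names), not code from the article.

```python
# Two coders picking categories uniformly at random agree with probability
# sum over k of (1/k)^2 = 1/k, where k is the number of categories.
def uniform_chance_agreement(k):
    return sum((1 / k) ** 2 for _ in range(k))

print(uniform_chance_agreement(2))   # 0.5 for the binary scheme
print(uniform_chance_agreement(3))   # 0.333... once a third category (check) is added

# Two coders drawing independently from a skewed distribution
# (95% statement, 5% info-request) agree by chance on 90.5% of the items.
skewed = [0.95, 0.05]
print(sum(p * p for p in skewed))    # 0.9025 + 0.0025 = 0.905
```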
3", "cite_spans": [ { "start": 453, "end": 468, "text": "Carletta (1996)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "CODER A", "sec_num": null }, { "text": "All of the coefficients of agreement discussed in this article correct for chance on the basis of the same idea. First we find how much agreement is expected by chance: Let us call this value A e . The value 1 \u2212 A e will then measure how much agreement over and above chance is attainable; the value A o \u2212 A e will tell us how much agreement beyond chance was actually found. The ratio between A o \u2212 A e and 1 \u2212 A e will then tell us which proportion of the possible agreement beyond chance was actually observed. This idea is expressed by the following formula.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chance-Corrected Coefficients for Measuring Agreement between Two Coders", "sec_num": "2.4" }, { "text": "A o \u2212 A e 1 \u2212 A e", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S, \u03c0, \u03ba =", "sec_num": null }, { "text": "The three best-known coefficients, S (Bennett, Alpert, and Goldstein 1954) , \u03c0 (Scott 1955) , and \u03ba (Cohen 1960) , and their generalizations, all use this formula; whereas Krippendorff's \u03b1 is based on a related formula expressed in terms of disagreement (see Section 2.6). All three coefficients therefore yield values of agreement between \u2212A e /1 \u2212 A e (no observed agreement) and 1 (observed agreement = 1), with the value 0 signifying chance agreement (observed agreement = expected agreement). Note also that whenever agreement is less than perfect (A o < 1), chance-corrected agreement will be strictly lower than observed agreement, because some amount of agreement is always expected by chance. Observed agreement A o is easy to compute, and is the same for all three coefficients-the proportion of items on which the two coders agree. But the notion of chance agreement, or the probability that two coders will classify an arbitrary item as belonging to the same category by chance, requires a model of what would happen if coders' behavior was only by chance. 
All three coefficients assume independence of the two coders-that is, that the chance of c 1 and c 2 agreeing on any given category k Table 2 The value of different coefficients applied to the data from Table 1 .", "cite_spans": [ { "start": 37, "end": 74, "text": "(Bennett, Alpert, and Goldstein 1954)", "ref_id": "BIBREF7" }, { "start": 79, "end": 91, "text": "(Scott 1955)", "ref_id": "BIBREF94" }, { "start": 100, "end": 112, "text": "(Cohen 1960)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 1203, "end": 1210, "text": "Table 2", "ref_id": null }, { "start": 1272, "end": 1279, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "S, \u03c0, \u03ba =", "sec_num": null }, { "text": "Expected agreement Chance-corrected agreement", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coefficient", "sec_num": null }, { "text": "S 2 \u00d7 ( 1 2 ) 2 = 0.5 (0.7 \u2212 0.5)/(1 \u2212 0.5) = 0.4 \u03c0 0.35 2 + 0.65 2 = 0.545 (0.7 \u2212 0.545)/(1 \u2212 0.545) \u2248 0.341 \u03ba 0.3 \u00d7 0.4 + 0.6 \u00d7 0.7 = 0.54 (0.7 \u2212 0.54)/(1 \u2212 0.54) \u2248 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coefficient", "sec_num": null }, { "text": "348 Observed agreement for all the coefficients is 0.7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coefficient", "sec_num": null }, { "text": "is the product of the chance of each of them assigning an item to that category: P(k|c 1 ) \u2022 P(k|c 2 ). 4 Expected agreement is then the probability of c 1 and c 2 agreeing on any category, that is, the sum of the product over all categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coefficient", "sec_num": null }, { "text": "A S e = A \u03c0 e = A \u03ba e = \u2211 k\u2208K P(k|c 1 ) \u2022 P(k|c 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coefficient", "sec_num": null }, { "text": "The difference between S, \u03c0, and \u03ba lies in the assumptions leading to the calculation of P(k|c i ), the chance that coder c i will assign an arbitrary item to category k (Zwick 1988; Hsu and Field 2003) .", "cite_spans": [ { "start": 170, "end": 182, "text": "(Zwick 1988;", "ref_id": "BIBREF108" }, { "start": 183, "end": 202, "text": "Hsu and Field 2003)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Coefficient", "sec_num": null }, { "text": "This coefficient is based on the assumption that if coders were operating by chance alone, we would get a uniform distribution: That is, for any two coders c m , c n and any two categories k j , k l , P(k j |c m ) = P(k l |c n ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S:", "sec_num": null }, { "text": "If coders were operating by chance alone, we would get the same distribution for each coder: For any two coders c m , c n and any category k, P(k|c m ) = P(k|c n ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u03c0:", "sec_num": null }, { "text": "If coders were operating by chance alone, we would get a separate distribution for each coder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u03ba:", "sec_num": null }, { "text": "Additionally, the lack of independent prior knowledge of the distribution of items among categories means that the distribution of categories (for \u03c0) and the priors for the individual coders (for \u03ba) have to be estimated from the observed data. Table 2 demonstrates the effect of the different chance models on the coefficient values. 
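The effect of the three chance models can be reproduced with a short script. The sketch below is our illustration (the helper names are not from the article): it recomputes the expected agreement values of Table 2 from the marginal totals of Table 1 and applies the common correction formula.

```python
from collections import Counter

# Contingency table from Table 1: table[(b, a)] = items labelled b by coder B and a by coder A.
table = {("STAT", "STAT"): 20, ("STAT", "IREQ"): 20,
         ("IREQ", "STAT"): 10, ("IREQ", "IREQ"): 50}
items = sum(table.values())                                     # i = 100
categories = {"STAT", "IREQ"}

A_o = sum(n for (b, a), n in table.items() if b == a) / items   # 0.7

# Marginal totals: how many items each coder assigned to each category.
n_A, n_B = Counter(), Counter()
for (b, a), n in table.items():
    n_B[b] += n
    n_A[a] += n

def chance_corrected(A_o, A_e):
    return (A_o - A_e) / (1 - A_e)

# S: uniform chance model, every category equally likely for every coder.
A_e_S = 1 / len(categories)
# pi: one shared distribution, estimated from the judgments of both coders pooled together.
A_e_pi = sum(((n_A[k] + n_B[k]) / (2 * items)) ** 2 for k in categories)
# kappa: a separate distribution per coder, estimated from each coder's own judgments.
A_e_kappa = sum((n_A[k] / items) * (n_B[k] / items) for k in categories)

print(chance_corrected(A_o, A_e_S))       # S     = 0.4
print(chance_corrected(A_o, A_e_pi))      # pi    ~ 0.341   (A_e = 0.545)
print(chance_corrected(A_o, A_e_kappa))   # kappa ~ 0.348   (A_e = 0.54)
```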
The remainder of this section explains how the three coefficients are calculated when the reliability data come from two coders; we will discuss a variety of proposed generalizations starting in Section 2.5.", "cite_spans": [], "ref_spans": [ { "start": 244, "end": 251, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "\u03ba:", "sec_num": null }, { "text": "Equally Likely: S. The simplest way of discounting for chance is the one adopted to compute the coefficient S (Bennett, Alpert, and Goldstein 1954) , also known in the literature as C, \u03ba n , G, and RE (see Zwick 1988; Hsu and Field 2003) . As noted previously, the computation of S is based on an interpretation of chance as a random choice of category from a uniform distribution-that is, all categories are equally likely. If coders classify the items into k categories, then the chance P(k|c i ) of any coder assigning an item to category k under the uniformity assumption is 1 k ; hence the total agreement expected by chance is", "cite_spans": [ { "start": 110, "end": 147, "text": "(Bennett, Alpert, and Goldstein 1954)", "ref_id": "BIBREF7" }, { "start": 206, "end": 217, "text": "Zwick 1988;", "ref_id": "BIBREF108" }, { "start": 218, "end": 237, "text": "Hsu and Field 2003)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "All Categories Are", "sec_num": "2.4.1" }, { "text": "A S e = \u2211 k\u2208K 1 k \u2022 1 k = k \u2022 1 k 2 = 1 k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "All Categories Are", "sec_num": "2.4.1" }, { "text": "The calculation of the value of S for the figures in Table 1 is shown in Table 2 . The coefficient S is problematic in many respects. The value of the coefficient can be artificially increased simply by adding spurious categories which the coders would never use (Scott 1955, pages 322-323) . In the case of CL, for example, S would reward designing extremely fine-grained tagsets, provided that most tags are never actually encountered in real data. Additional limitations are noted by Hsu and Field (2003) . It has been argued that uniformity is the best model for a chance distribution of items among categories if we have no independent prior knowledge of the distribution (Brennan and Prediger 1981) . However, a lack of prior knowledge does not mean that the distribution cannot be estimated post hoc, and this is what the other coefficients do.", "cite_spans": [ { "start": 263, "end": 290, "text": "(Scott 1955, pages 322-323)", "ref_id": null }, { "start": 487, "end": 507, "text": "Hsu and Field (2003)", "ref_id": "BIBREF45" }, { "start": 677, "end": 704, "text": "(Brennan and Prediger 1981)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 53, "end": 80, "text": "Table 1 is shown in Table 2", "ref_id": null } ], "eq_spans": [], "section": "All Categories Are", "sec_num": "2.4.1" }, { "text": "Distribution: \u03c0. All of the other methods for discounting chance agreement we discuss in this article attempt to overcome the limitations of S's strong uniformity assumption using an idea first proposed by Scott (1955) : Use the actual behavior of the coders to estimate the prior distribution of the categories. As noted earlier, Scott based his characterization of \u03c0 on the assumption that random assignment of categories to items, by any coder, is governed by the distribution of items among categories in the actual world. The best estimate of this distribution isP(k), the observed proportion of items assigned to category k by both coders. 
P(k|c 1 ) = P(k|c 2 ) =P(k) P(k), the observed proportion of items assigned to category k by both coders, is the total number of assignments to k by both coders n k , divided by the overall number of assignments, which for the two-coder case is twice the number of items i:", "cite_spans": [ { "start": 206, "end": 218, "text": "Scott (1955)", "ref_id": "BIBREF94" } ], "ref_spans": [], "eq_spans": [], "section": "A Single", "sec_num": "2.4.2" }, { "text": "P(k) = n k 2i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Single", "sec_num": "2.4.2" }, { "text": "Given the assumption that coders act independently, expected agreement is computed as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Single", "sec_num": "2.4.2" }, { "text": "A \u03c0 e = \u2211 k\u2208KP (k) \u2022P(k) = \u2211 k\u2208K n k 2i 2 = 1 4i 2 \u2211 k\u2208K n 2 k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Single", "sec_num": "2.4.2" }, { "text": "It is easy to show that for any set of coding data, A \u03c0 e \u2265 A S e and therefore \u03c0 \u2264 S, with the limiting case (equality) obtaining when the observed distribution of items among categories is uniform.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Single", "sec_num": "2.4.2" }, { "text": "Coder Distributions: \u03ba. The method proposed by Cohen (1960) to calculate expected agreement A e in his \u03ba coefficient assumes that random assignment of categories to items is governed by prior distributions that are unique to each coder, and which reflect individual annotator bias. An individual coder's prior distribution is estimated by looking at her actual distribution: P(k|c i ), the probability that coder c i will classify an arbitrary item into category k, is estimated by usingP(k|c i ), the proportion of items actually assigned by coder c i to category k; this is the number of assignments to k by c i , n c i k , divided by the number of items i.", "cite_spans": [ { "start": 47, "end": 59, "text": "Cohen (1960)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Individual", "sec_num": "2.4.3" }, { "text": "P(k|c i ) =P(k|c i ) = n c i k i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Individual", "sec_num": "2.4.3" }, { "text": "As in the case of S and \u03c0, the probability that the two coders c 1 and c 2 assign an item to a particular category k \u2208 K is the joint probability of each coder making this assignment independently. For \u03ba this joint probability isP(k|c 1 ) \u2022P(k|c 2 ); expected agreement is then the sum of this joint probability over all the categories k \u2208 K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Individual", "sec_num": "2.4.3" }, { "text": "A \u03ba e = \u2211 k\u2208KP (k|c 1 ) \u2022P(k|c 2 ) = \u2211 k\u2208K n c 1 k i \u2022 n c 2 k i = 1 i 2 \u2211 k\u2208K n c 1 k n c 2 k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Individual", "sec_num": "2.4.3" }, { "text": "It is easy to show that for any set of coding data, A \u03c0 e \u2265 A \u03ba e and therefore \u03c0 \u2264 \u03ba, with the limiting case (equality) obtaining when the observed distributions of the two coders are identical. 
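The two estimators just described can also be applied directly to raw annotation data rather than to a contingency table. The function below is a sketch under the article's assumptions for this section (two coders, one label per item); the function name, variable names, and example data are ours.

```python
from collections import Counter

def two_coder_pi_and_kappa(labels_1, labels_2):
    """Scott's pi and Cohen's kappa for two coders who labelled the same items.

    labels_1[i] and labels_2[i] are the categories assigned to item i by each coder.
    """
    assert len(labels_1) == len(labels_2)
    i = len(labels_1)

    A_o = sum(a == b for a, b in zip(labels_1, labels_2)) / i

    n_c1 = Counter(labels_1)          # per-coder counts n_ck
    n_c2 = Counter(labels_2)
    categories = set(n_c1) | set(n_c2)

    # pi: single pooled distribution, P(k) = n_k / 2i.
    A_e_pi = sum(((n_c1[k] + n_c2[k]) / (2 * i)) ** 2 for k in categories)
    # kappa: individual coder distributions, P(k|c) = n_ck / i.
    A_e_kappa = sum((n_c1[k] / i) * (n_c2[k] / i) for k in categories)

    pi = (A_o - A_e_pi) / (1 - A_e_pi)
    kappa = (A_o - A_e_kappa) / (1 - A_e_kappa)
    return pi, kappa

# Hypothetical example: three items labelled by two coders.
print(two_coder_pi_and_kappa(["STAT", "IREQ", "STAT"], ["STAT", "IREQ", "IREQ"]))
```

On this toy input the pooled estimator gives a lower value than the per-coder estimator, consistent with the inequality π ≤ κ noted above.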
The relationship between \u03ba and S is not fixed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Individual", "sec_num": "2.4.3" }, { "text": "In corpus annotation practice, measuring reliability with only two coders is seldom considered enough, except for small-scale studies. Sometimes researchers run reliability studies with more than two coders, measure agreement separately for each pair of coders, and report the average. However, a better practice is to use generalized versions of the coefficients. A generalization of Scott's \u03c0 is proposed in Fleiss (1971) , and a generalization of Cohen's \u03ba is given in Davies and Fleiss (1982) . We will call these coefficients multi-\u03c0 and multi-\u03ba, respectively, dropping the multi-prefixes when no confusion is expected to arise. 5 2.5.1 Fleiss's Multi-\u03c0. With more than two coders, the observed agreement A o can no longer be defined as the percentage of items on which there is agreement, because inevitably there will be items on which some coders agree and others disagree. The solution proposed in the literature is to measure pairwise agreement (Fleiss 1971) : Define the amount of agreement on a particular item as the proportion of agreeing judgment pairs out of the total number of judgment pairs for that item.", "cite_spans": [ { "start": 410, "end": 423, "text": "Fleiss (1971)", "ref_id": "BIBREF36" }, { "start": 472, "end": 496, "text": "Davies and Fleiss (1982)", "ref_id": "BIBREF26" }, { "start": 955, "end": 968, "text": "(Fleiss 1971)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "Multiple coders also pose a problem for the visualization of the data. When the number of coders c is greater than two, judgments cannot be shown in a contingency table like Table 1 , because each coder has to be represented in a separate dimension.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 181, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "5 Due to historical accident, the terminology in the literature is confusing. Fleiss (1971) proposed a coefficient of agreement for multiple coders and called it \u03ba, even though it calculates expected agreement based on the cumulative distribution of judgments by all coders and is thus better thought of as a generalization of Scott's \u03c0. This unfortunate choice of name was the cause of much confusion in subsequent literature: Often, studies which claim to give a generalization of \u03ba to more than two coders actually report Fleiss's coefficient (e.g., Bartko and Carpenter 1976; Siegel and Castellan 1988; Di Eugenio and Glass 2004) . Since Carletta (1996) introduced reliability to the CL community based on the definitions of Siegel and Castellan (1988) , the term \"kappa\" has been usually associated in this community with Siegel and Castellan's K, which is in effect Fleiss's coefficient, that is, a generalization of Scott's \u03c0. Fleiss (1971) therefore uses a different type of table which lists each item with the number of judgments it received for each category; Siegel and Castellan (1988) use a similar table, which Di Eugenio and Glass (2004) call an agreement table. Table 3 is an example of an agreement table, in which the same 100 utterances from Table 1 are labeled by three coders instead of two. 
Di Eugenio and Glass (page 97) note that compared to contingency tables like Table 1, agreement tables like Table 3 lose information because they do not say which coder gave each judgment. This information is not used in the calculation of \u03c0, but is necessary for determining the individual coders' distributions in the calculation of \u03ba. (Agreement tables also add information compared to contingency tables, namely, the identity of the items that make up each contingency class, but this information is not used in the calculation of either \u03ba or \u03c0.) Let n ik stand for the number of times an item i is classified in category k (i.e., the number of coders that make such a judgment): For example, given the distribution in Table 3 , n Utt 1 Stat = 2 and n Utt 1 IReq = 1. Each category k contributes ( n ik 2 ) pairs of agreeing judgments for item i; the amount of agreement agr i for item i is the sum of ( n ik 2 ) over all categories k \u2208 K, divided by ( c 2 ), the total number of judgment pairs per item.", "cite_spans": [ { "start": 78, "end": 91, "text": "Fleiss (1971)", "ref_id": "BIBREF36" }, { "start": 553, "end": 579, "text": "Bartko and Carpenter 1976;", "ref_id": "BIBREF5" }, { "start": 580, "end": 606, "text": "Siegel and Castellan 1988;", "ref_id": "BIBREF97" }, { "start": 607, "end": 633, "text": "Di Eugenio and Glass 2004)", "ref_id": "BIBREF28" }, { "start": 642, "end": 657, "text": "Carletta (1996)", "ref_id": "BIBREF17" }, { "start": 729, "end": 756, "text": "Siegel and Castellan (1988)", "ref_id": "BIBREF97" }, { "start": 934, "end": 947, "text": "Fleiss (1971)", "ref_id": "BIBREF36" }, { "start": 1071, "end": 1098, "text": "Siegel and Castellan (1988)", "ref_id": "BIBREF97" }, { "start": 1129, "end": 1153, "text": "Eugenio and Glass (2004)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 1179, "end": 1270, "text": "Table 3 is an example of an agreement table, in which the same 100 utterances from Table 1", "ref_id": "TABREF0" }, { "start": 1392, "end": 1430, "text": "Table 1, agreement tables like Table 3", "ref_id": "TABREF0" }, { "start": 2038, "end": 2045, "text": "Table 3", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "agr i = 1 ( c 2 ) \u2211 k\u2208K n ik 2 = 1 c(c \u2212 1) \u2211 k\u2208K n ik (n ik \u2212 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "For example, given the results in Table 3 , we find the agreement value for Utterance 1 as follows.", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 3", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "agr 1 = 1 ( 3 2 ) n Utt 1 Stat 2 + n Utt 1 IReq 2 = 1 3 (1 + 0) \u2248 0.33", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "The overall observed agreement is the mean of agr i for all items i \u2208 I.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "A o = 1 i \u2211 i\u2208I agr i = 1 ic(c \u2212 1) \u2211 i\u2208I \u2211 k\u2208K n ik (n ik \u2212 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "(Notice that this definition of observed agreement is equivalent to the mean of the two-coder observed agreement values from Section 2.4 for all coder pairs.) 
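As an illustration of the pairwise definition of agr_i, the following sketch (ours, not from the article) computes per-item agreement from rows of an agreement table, that is, from the per-category judgment counts n_ik. Apart from the Utterance 1 row quoted above, the example rows are hypothetical, since Table 3 is not reproduced here in full.

```python
from math import comb

def item_agreement(counts):
    """agr_i: proportion of agreeing judgment pairs among all judgment pairs for one item.

    `counts` maps each category to n_ik, the number of coders who chose it for this item.
    """
    c = sum(counts.values())                           # number of coders judging the item
    agreeing_pairs = sum(comb(n, 2) for n in counts.values())
    return agreeing_pairs / comb(c, 2)

# Utterance 1 of Table 3: two coders said Stat, one said IReq.
print(item_agreement({"Stat": 2, "IReq": 1}))          # 1/3, i.e. about 0.33

# Observed agreement is the mean of agr_i over all items; the remaining rows here
# are hypothetical.
rows = [{"Stat": 2, "IReq": 1}, {"Stat": 3, "IReq": 0}, {"Stat": 0, "IReq": 3}]
A_o = sum(item_agreement(row) for row in rows) / len(rows)
print(A_o)
```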
If observed agreement is measured on the basis of pairwise agreement (the proportion of agreeing judgment pairs), it makes sense to measure expected agreement in terms of pairwise comparisons as well, that is, as the probability that any pair of judgments for an item would be in agreement-or, said otherwise, the probability that two arbitrary coders would make the same judgment for a particular item by chance. This is the approach taken by Fleiss (1971) . Like Scott, Fleiss interprets \"chance agreement\" as the agreement expected on the basis of a single distribution which reflects the combined judgments of all coders, meaning that expected agreement is calculated usingP(k), the overall proportion of items assigned to category k, which is the total number of such assignments by all coders n k divided by the overall number of assignments. The latter, in turn, is the number of items i multiplied by the number of coders c.", "cite_spans": [ { "start": 603, "end": 616, "text": "Fleiss (1971)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "P(k) = 1 ic n k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "As in the two-coder case, the probability that two arbitrary coders assign an item to a particular category k \u2208 K is assumed to be the joint probability of each coder making this assignment independently, that is (P(k)) 2 . The expected agreement is the sum of this joint probability over all the categories k \u2208 K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "A \u03c0 e = \u2211 k\u2208K P (k) 2 = \u2211 k\u2208K 1 ic n k 2 = 1 (ic) 2 \u2211 k\u2208K n 2 k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "Multi-\u03c0 is the coefficient that Siegel and Castellan (1988) call K.", "cite_spans": [ { "start": 32, "end": 59, "text": "Siegel and Castellan (1988)", "ref_id": "BIBREF97" } ], "ref_spans": [], "eq_spans": [], "section": "More Than Two Coders", "sec_num": "2.5" }, { "text": "It is fairly straightforward to adapt Fleiss's proposal to generalize Cohen's \u03ba proper to more than two coders, calculating expected agreement based on individual coder marginals. A detailed proposal can be found in Davies and Fleiss (1982) , or in the extended version of this article.", "cite_spans": [ { "start": 216, "end": 240, "text": "Davies and Fleiss (1982)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-\u03ba.", "sec_num": "2.5.2" }, { "text": "A serious limitation of both \u03c0 and \u03ba is that all disagreements are treated equally. But especially for semantic and pragmatic features, disagreements are not all alike. Even for the relatively simple case of dialogue act tagging, a disagreement between an accept and a reject interpretation of an utterance is clearly more serious than a disagreement between an info-request and a check. For tasks such as anaphora resolution, where reliability is determined by measuring agreement on sets (coreference chains), allowing for degrees of disagreement becomes essential (see Section 4.4). Under such circumstances, \u03c0 and \u03ba are not very useful. 
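Putting the observed and expected components of Section 2.5 together gives Fleiss's multi-π (Siegel and Castellan's K). The following Python sketch is our illustration rather than the authors' code; it assumes every item receives the same number of judgments, and the example data are hypothetical.

```python
from math import comb

def fleiss_multi_pi(item_counts):
    """Fleiss's multi-coder generalization of Scott's pi.

    item_counts[i][k] is n_ik, the number of coders who assigned item i to category k;
    every item is assumed to be judged by the same number of coders.
    """
    i = len(item_counts)
    c = sum(item_counts[0].values())                   # coders per item

    # Observed agreement: mean proportion of agreeing judgment pairs per item.
    A_o = sum(
        sum(comb(n, 2) for n in counts.values()) / comb(c, 2)
        for counts in item_counts
    ) / i

    # Expected agreement: P(k) = n_k / (i * c), A_e = sum over k of P(k)^2.
    n_k = {}
    for counts in item_counts:
        for k, n in counts.items():
            n_k[k] = n_k.get(k, 0) + n
    A_e = sum((n / (i * c)) ** 2 for n in n_k.values())

    return (A_o - A_e) / (1 - A_e)

# Hypothetical data: three coders labelling four utterances.
data = [{"Stat": 3}, {"Stat": 2, "IReq": 1}, {"IReq": 3}, {"Stat": 1, "IReq": 2}]
print(fleiss_multi_pi(data))
```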
In this section we discuss two coefficients that make it possible to differentiate between types of disagreements: \u03b1 (Krippendorff 1980 (Krippendorff , 2004a , which is a coefficient defined in a general way that is appropriate for use with multiple coders, different magnitudes of disagreement, and missing values, and is based on assumptions similar to those of \u03c0; and weighted kappa \u03ba w (Cohen 1968 ), a generalization of \u03ba.", "cite_spans": [ { "start": 758, "end": 776, "text": "(Krippendorff 1980", "ref_id": "BIBREF54" }, { "start": 777, "end": 798, "text": "(Krippendorff , 2004a", "ref_id": "BIBREF56" }, { "start": 1031, "end": 1042, "text": "(Cohen 1968", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "2.6.1 Krippendorff's \u03b1. The coefficient \u03b1 (Krippendorff 1980 (Krippendorff , 2004a is an extremely versatile agreement coefficient based on assumptions similar to \u03c0, namely, that expected agreement is calculated by looking at the overall distribution of judgments without regard to which coders produced these judgments. It applies to multiple coders, and it allows for different magnitudes of disagreement. When all disagreements are considered equal it is nearly identical to multi-\u03c0, correcting for small sample sizes by using an unbiased estimator for expected agreement. In this section we will present Krippendorff's \u03b1 and relate it to the other coefficients discussed in this article, but we will start with \u03b1's origins as a measure of variance, following a long tradition of using variance to measure reliability (see citations in Rajaratnam 1960; Krippendorff 1970) .", "cite_spans": [ { "start": 42, "end": 60, "text": "(Krippendorff 1980", "ref_id": "BIBREF54" }, { "start": 61, "end": 82, "text": "(Krippendorff , 2004a", "ref_id": "BIBREF56" }, { "start": 839, "end": 855, "text": "Rajaratnam 1960;", "ref_id": "BIBREF87" }, { "start": 856, "end": 874, "text": "Krippendorff 1970)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "A sample's variance s 2 is defined as the sum of square differences from the mean SS = \u2211(x \u2212x) 2 divided by the degrees of freedom df . Variance is a useful way of looking at agreement if coders assign numerical values to the items, as in magnitude estimation tasks. Each item in a reliability study can be considered a separate level in a single-factor analysis of variance: The smaller the variance around each level, the higher the reliability. When agreement is perfect, the variance within the levels (s 2 within ) is zero; when agreement is at chance, the variance within the levels is equal to the variance between the levels, in which case it is also equal to the overall variance of the data: s 2 within = s 2 between = s 2 total . The ratios s 2 within /s 2 between (that is, 1/F) and s 2 within /s 2 total are therefore 0 when agreement is perfect and 1 when agreement is at chance. Additionally, the latter ratio is bounded at 2: SS within \u2264 SS total by definition, and df total < 2 df within because each item has at least two judgments. 
Subtracting the ratio s 2 within /s 2 total from 1 yields a coefficient which ranges between \u22121 and 1, where 1 signifies perfect agreement and 0 signifies chance agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "\u03b1 = 1 \u2212 s 2 within s 2 total = 1 \u2212 SS within /df within SS total /df total", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "We can unpack the formula for \u03b1 to bring it to a form which is similar to the other coefficients we have looked at, and which will allow generalizing \u03b1 beyond simple numerical values. The first step is to get rid of the notion of arithmetic mean which lies at the heart of the measure of variance. We observe that for any set of numbers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "x 1 , . . . , x N with a meanx = 1 N \u2211 N n=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "x n , the sum of square differences from the mean SS can be expressed as the sum of square of differences between all the (ordered) pairs of numbers, scaled by a factor of 1/2N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "SS = N \u2211 n=1 (x n \u2212x) 2 = 1 2N N \u2211 n=1 N \u2211 m=1 (x n \u2212 x m ) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "For calculating \u03b1 we considered each item to be a separate level in an analysis of variance; the number of levels is thus the number of items i, and because each coder marks each item, the number of observations for each item is the number of coders c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "Within-level variance is the sum of the square differences from the mean of each item,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "SS within = \u2211 i \u2211 c (x ic \u2212x i ) 2 , divided by the degrees of freedom df within = i(c \u2212 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": ". We can express this as the sum of the squares of the differences between all of the judgment pairs for each item, summed over all items and scaled by the appropriate factor. 
We use the notation x ic for the value given by coder c to item i, andx i for the mean of all the values given to item i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "s 2 within = SS within df within = 1 i(c \u2212 1) \u2211 i\u2208I \u2211 c\u2208C (x ic \u2212x i ) 2 = 1 2ic(c \u2212 1) \u2211 i\u2208I c \u2211 m=1 c \u2211 n=1 (x ic m \u2212 x ic n ) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "The total variance is the sum of the square differences of all judgments from the grand mean,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "SS total = \u2211 i \u2211 c (x ic \u2212x) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": ", divided by the degrees of freedom df total = ic \u2212 1. This can be expressed as the sum of the squares of the differences between all of the judgments pairs without regard to items, again scaled by the appropriate factor. The notation x is the overall mean of all the judgments in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "s 2 total = SS total df total = 1 ic \u2212 1 \u2211 i\u2208I \u2211 c\u2208C (x ic \u2212x) 2 = 1 2ic(ic \u2212 1) i \u2211 j=1 c \u2211 m=1 i \u2211 l=1 c \u2211 n=1 (x i j c m \u2212 x i l c n ) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "Now that we have removed references to means from our formulas, we can abstract over the measure of variance. We define a distance function d which takes two numbers and returns the square of their difference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "d ab = (a \u2212 b) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "We also simplify the computation by counting all the identical value assignments together. Each unique value used by the coders will be considered a category k \u2208 K. We use n ik for the number of times item i is given the value k, that is, the number of coders that make such a judgment. For every (ordered) pair of distinct values k a , k b \u2208 K there are n ik a n ik b pairs of judgments of item i, whereas for non-distinct values there are n ik a (n ik a \u2212 1) pairs. We use this notation to rewrite the formula for the within-level variance. D \u03b1 o , the observed disagreement for \u03b1, is defined as twice the variance within the levels in order to get rid of the factor 2 in the denominator; we also simplify the formula by using the multiplier n ik a n ik a for identical categories-this is allowed because", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "d kk = 0 for all k. 
D \u03b1 o = 2 s 2 within = 1 ic(c \u2212 1) \u2211 i\u2208I k \u2211 j=1 k \u2211 l=1 n ik j n ik l d k j k l", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "We perform the same simplification for the total variance, where n k stands for the total number of times the value k is assigned to any item by any coder. The expected disagreement for \u03b1, D \u03b1 e , is twice the total variance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "D \u03b1 e = 2 s 2 total = 1 ic(ic \u2212 1) k \u2211 j=1 k \u2211 l=1 n k j n k l d k j k l", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "Because both expected and observed disagreement are twice the respective variances, the coefficient \u03b1 retains the same form when expressed with the disagreement values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "\u03b1 = 1 \u2212 D o D e", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "Now that \u03b1 has been expressed without explicit reference to means, differences, and squares, it can be generalized to a variety of coding schemes in which the labels cannot be interpreted as numerical values: All one has to do is to replace the square difference function d with a different distance function. Krippendorff (1980 Krippendorff ( , 2004a offers distance metrics suitable for nominal, interval, ordinal, and ratio scales. Of particular interest is the function for nominal categories, that is, a function which considers all distinct labels equally distant from one another.", "cite_spans": [ { "start": 310, "end": 328, "text": "Krippendorff (1980", "ref_id": "BIBREF54" }, { "start": 329, "end": 351, "text": "Krippendorff ( , 2004a", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "d ab = 0 if a = b 1 if a = b", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "It turns out that with this distance function, the observed disagreement D \u03b1 o is exactly the complement of the observed agreement of Fleiss's multi-\u03c0, 1 \u2212 A \u03c0 o , and the expected disagreement D \u03b1 e differs from 1 \u2212 A \u03c0 e by a factor of (ic \u2212 1)/ic; the difference is due to the fact that \u03c0 uses a biased estimator of the expected agreement in the population whereas \u03b1 uses an unbiased estimator. 
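The disagreement-based formulation lends itself to a compact implementation with a pluggable distance function. The sketch below (ours) follows the formulas for observed and expected disagreement given above and assumes fully coded data, that is, every coder labels every item; Krippendorff's treatment of missing values is not covered. Function names and example data are illustrative.

```python
from collections import Counter

def nominal_distance(a, b):
    """d_ab = 0 if the categories are identical, 1 otherwise."""
    return 0.0 if a == b else 1.0

def interval_distance(a, b):
    """d_ab = (a - b)^2, the squared-difference metric for numerical values."""
    return float(a - b) ** 2

def krippendorff_alpha(judgments, distance=nominal_distance):
    """Krippendorff's alpha for fully coded data.

    judgments[i] is the list of values assigned to item i, one per coder.
    """
    i = len(judgments)
    c = len(judgments[0])

    # Per-item and overall value counts (n_ik and n_k in the article's notation).
    item_counts = [Counter(values) for values in judgments]
    total_counts = Counter(v for values in judgments for v in values)

    # Observed disagreement: pairwise distances within each item.
    D_o = sum(
        counts[a] * counts[b] * distance(a, b)
        for counts in item_counts for a in counts for b in counts
    ) / (i * c * (c - 1))

    # Expected disagreement: pairwise distances across all judgments.
    D_e = sum(
        total_counts[a] * total_counts[b] * distance(a, b)
        for a in total_counts for b in total_counts
    ) / (i * c * (i * c - 1))

    return 1 - D_o / D_e

# Hypothetical example: three coders labelling four items.
data = [["Stat", "Stat", "Stat"], ["Stat", "IReq", "IReq"],
        ["Chck", "Chck", "Chck"], ["Stat", "Stat", "IReq"]]
print(krippendorff_alpha(data))                         # nominal-scale alpha
print(krippendorff_alpha([[1, 1, 2], [3, 3, 3], [2, 2, 1], [1, 2, 2]],
                         distance=interval_distance))   # interval-scale alpha
```

Swapping in a different distance function changes only the weighting of disagreements, not the overall structure of the computation, which is what makes the coefficient so flexible.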
The following equation shows that given the correspondence between observed and expected agreement and disagreement, the coefficients themselves are nearly equivalent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "\u03b1 = 1 \u2212 D \u03b1 o D \u03b1 e \u2248 1 \u2212 1 \u2212 A \u03c0 o 1 \u2212 A \u03c0 e = 1 \u2212 A \u03c0 e \u2212 (1 \u2212 A \u03c0 o ) 1 \u2212 A \u03c0 e = A \u03c0 o \u2212 A \u03c0 e 1 \u2212 A \u03c0 e = \u03c0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "For nominal data, the coefficients \u03c0 and \u03b1 approach each other as either the number of items or the number of coders approaches infinity. Krippendorff's \u03b1 will work with any distance metric, provided that identical categories always have a distance of zero (d kk = 0 for all k). Another useful constraint is symmetry (d ab = d ba for all a, b). This flexibility affords new possibilities for analysis, which we will illustrate in Section 4. We should also note, however, that the flexibility also creates new pitfalls, especially in cases where it is not clear what the natural distance metric is. For example, there are different ways to measure dissimilarity between sets, and any of these measures can be justifiably used when the category labels are sets of items (as in the annotation of anaphoric relations). The different distance metrics yield different values of \u03b1 for the same annotation data, making it difficult to interpret the resulting values. We will return to this problem in Section 4.4. Cohen (1968) . The implementation of weights is similar to that of Krippendorff's \u03b1-each pair of categories k a , k b \u2208 K is associated with a weight d k a k b , where a larger weight indicates more disagreement (Cohen uses the notation v; he does not place any general constraints on the weights-not even a requirement that a pair of identical categories have a weight of zero, or that the weights be symmetric across the diagonal). The coefficient is defined for two coders: The disagreement for a particular item i is the weight of the pair of categories assigned to it by the two coders, and the overall observed disagreement is the (normalized) mean disagreement of all the items. Let k(c n , i) denote the category assigned by coder c n to item i; then the disagreement for item i", "cite_spans": [ { "start": 1006, "end": 1018, "text": "Cohen (1968)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Krippendorff's \u03b1 and Other Weighted Agreement Coefficients", "sec_num": "2.6" }, { "text": "'s \u03ba w . 
A weighted variant of Cohen's \u03ba is presented in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "is disagr i = d k(c 1 ,i)k(c 2 ,i) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "The observed disagreement D o is the mean of disagr i for all items i, normalized to the interval [0, 1] through division by the maximal weight d max .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "D \u03ba w o = 1 d max 1 i \u2211 i\u2208I disagr i = 1 d max 1 i \u2211 i\u2208I d k(c 1 ,i)k(c 2 ,i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "If we take all disagreements to be of equal weight, that is d k a k a = 0 for all categories k a and d k a k b = 1 for all k a = k b , then the observed disagreement is exactly the complement of the observed agreement as calculated in Section 2.4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "D \u03ba w o = 1 \u2212 A \u03ba o .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "Like \u03ba, the coefficient \u03ba w interprets expected disagreement as the amount expected by chance from a distinct probability distribution for each coder. These individual distributions are estimated byP(k|c), the proportion of items assigned by coder c to category k, that is the number of such assignments n ck divided by the number of items i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "P(k|c) = 1 i n ck", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "The probability that coder c 1 assigns an item to category k a and coder c 2 assigns it to category k b is the joint probability of each coder making this assignment independently, namely,P(k a |c 1 )P(k b |c 2 ). The expected disagreement is the mean of the weights for all (ordered) category pairs, weighted by the probabilities of the category pairs and normalized to the interval [0, 1] through division by the maximal weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "D \u03ba w e = 1 d max k \u2211 j=1 k \u2211 l=1P (k j |c 1 )P(k l |c 2 )d k j k l = 1 d max 1 i 2 k \u2211 j=1 k \u2211 l=1 n c 1 k j n c 2 k l d k j k l", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "If we take all disagreements to be of equal weight then the expected disagreement is exactly the complement of the expected agreement for \u03ba as calculated in Section 2.4: D \u03ba w e = 1 \u2212 A \u03ba e . Finally, the coefficient \u03ba w itself is the ratio of observed disagreement to expected disagreement, subtracted from 1 in order to yield a final value in terms of agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "\u03ba w = 1 \u2212 D o D e", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cohen", "sec_num": "2.6.2" }, { "text": "We end this section with an example illustrating how all of the agreement coefficients just discussed are computed. To facilitate comparisons, all computations will be based on the annotation statistics in Table 4 . 
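Before turning to the worked example, the definition just given can be summarized in a short sketch for two coders; the function and variable names are ours, and the weight argument is any function assigning larger values to more serious disagreements (in the examples of this article, identical categories receive weight 0).
```python
def weighted_kappa(coder1, coder2, weight):
    """coder1, coder2: parallel lists of labels, one per item."""
    n = len(coder1)
    labels = sorted(set(coder1) | set(coder2))
    d_max = max(weight(a, b) for a in labels for b in labels)
    # observed disagreement: mean weight of the label pairs actually assigned
    d_o = sum(weight(a, b) for a, b in zip(coder1, coder2)) / (d_max * n)
    # expected disagreement: weights averaged over the two coders' individual distributions
    p1 = {k: coder1.count(k) / n for k in labels}
    p2 = {k: coder2.count(k) / n for k in labels}
    d_e = sum(p1[a] * p2[b] * weight(a, b) for a in labels for b in labels) / d_max
    return 1.0 - d_o / d_e

# with a 0/1 weight function this reduces to unweighted Cohen's kappa
nominal = lambda a, b: 0.0 if a == b else 1.0
```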
This confusion matrix reports the results of an experiment where two coders classify a set of utterances into three categories.", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 213, "text": "Table 4", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "An Integrated Example", "sec_num": "2.7" }, { "text": "Observed agreement for all of the unweighted coefficients (S, \u03ba, and \u03c0) is calculated by counting the items on which the coders agree (the figures on the diagonal of the confusion matrix in Table 4 ) and dividing by the total number of items.", "cite_spans": [], "ref_spans": [ { "start": 190, "end": 197, "text": "Table 4", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The Unweighted Coefficients.", "sec_num": "2.7.1" }, { "text": "A o = 46 + 32 + 10 100 = 0.88", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unweighted Coefficients.", "sec_num": "2.7.1" }, { "text": "The expected agreement values and the resulting values for the coefficients are shown in Table 5 . The values of \u03c0 and \u03ba are very similar, which is to be expected when agreement is high, because this implies similar marginals. Notice that A \u03ba e < A \u03c0 e , hence \u03ba > \u03c0; this reflects a general property of \u03ba and \u03c0, already mentioned in Section 2.4, which will be elaborated in Section 3.1.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "The Unweighted Coefficients.", "sec_num": "2.7.1" }, { "text": "Suppose we notice that whereas Statement and Info-Request are clearly distinct classifications, Check is somewhere between the two. We therefore opt to weigh the distances between the categories as follows (recall that 1 denotes maximal disagreement, and identical categories are in full agreement and thus have a distance of 0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighted Coefficients.", "sec_num": "2.7.2" }, { "text": "Statement 0 1 0.5 Info-Request 1 0 0.5 Check 0.5 0.5 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statement Info-Request Check", "sec_num": null }, { "text": "The observed disagreement is calculated by summing up all the cells in the contingency table, multiplying each cell by its respective weight, and dividing the total by the number of items (in the following calculation we ignore cells with zero items).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statement Info-Request Check", "sec_num": null }, { "text": "D o = 46 \u00d7 0 + 6 \u00d7 1 + 32 \u00d7 0 + 6 \u00d7 0.5 + 10 \u00d7 0 100 = 6 + 3 100 = 0.09", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statement Info-Request Check", "sec_num": null }, { "text": "The only sources of disagreement in the coding example of Table 4 are the six utterances marked as Info-Requests by coder A and Statements by coder B, which receive the maximal weight of 1, and the six utterances marked as Info-Requests by coder A and Checks by coder B, which are given a weight of 0.5. 
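The entire example can be checked with a few lines of code. The sketch below rebuilds the joint counts implied by the figures just given (the diagonal counts 46, 32, and 10, and the two groups of six disagreements) together with the coder marginals used in the expected-disagreement calculations that follow; the exact layout of Table 4 is therefore a reconstruction, and the label abbreviations are ours. The maximal weight is 1, so no further normalization is needed.
```python
from itertools import product

# joint counts (coder A label, coder B label) -> utterances, reconstructed from the text
counts = {("stat", "stat"): 46, ("ireq", "ireq"): 32, ("check", "check"): 10,
          ("ireq", "stat"): 6, ("ireq", "check"): 6}
weight = {("stat", "ireq"): 1.0, ("ireq", "stat"): 1.0, ("stat", "check"): 0.5,
          ("check", "stat"): 0.5, ("ireq", "check"): 0.5, ("check", "ireq"): 0.5}
labels = ["stat", "ireq", "check"]
n = sum(counts.values())                                              # 100 utterances

marg_a = {k: sum(v for (a, b), v in counts.items() if a == k) for k in labels}
marg_b = {k: sum(v for (a, b), v in counts.items() if b == k) for k in labels}

a_o = sum(v for (a, b), v in counts.items() if a == b) / n            # 0.88
s = (a_o - 1/3) / (1 - 1/3)                                           # 0.82
a_e_pi = sum(((marg_a[k] + marg_b[k]) / (2 * n)) ** 2 for k in labels)
a_e_k = sum(marg_a[k] * marg_b[k] for k in labels) / n ** 2
pi, kappa = (a_o - a_e_pi) / (1 - a_e_pi), (a_o - a_e_k) / (1 - a_e_k)  # ~0.7995, ~0.8013

d_o = sum(v * weight.get((a, b), 0.0) for (a, b), v in counts.items()) / n        # 0.09
d_e_kw = sum(marg_a[a] * marg_b[b] * weight.get((a, b), 0.0)
             for a, b in product(labels, labels)) / n ** 2                        # 0.49
pooled = {k: marg_a[k] + marg_b[k] for k in labels}
d_e_alpha = sum(pooled[a] * pooled[b] * weight.get((a, b), 0.0)
                for a, b in product(labels, labels)) / (2 * n * (2 * n - 1))      # ~0.4879
print(s, pi, kappa, 1 - d_o / d_e_kw, 1 - d_o / d_e_alpha)   # kappa_w ~0.8163, alpha ~0.8156
```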
The calculation of expected disagreement for the weighted coefficients is shown in Table 6 , and is the sum of the expected disagreement for each category pair multiplied Table 5 Unweighted coefficients for the data from Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 65, "text": "Table 4", "ref_id": "TABREF1" }, { "start": 387, "end": 394, "text": "Table 6", "ref_id": null }, { "start": 475, "end": 482, "text": "Table 5", "ref_id": null }, { "start": 525, "end": 532, "text": "Table 4", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Statement Info-Request Check", "sec_num": null }, { "text": "Chance-corrected agreement Table 6 Expected disagreement of the weighted coefficients for the data from Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 6", "ref_id": null }, { "start": 104, "end": 111, "text": "Table 4", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Expected agreement", "sec_num": null }, { "text": "S 3 \u00d7 ( 1 3 ) 2 = 1 3 (0.88 \u2212 1 3 )/(1 \u2212 1 3 ) = 0.82 \u03c0 0.46+0.52 2 +", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expected agreement", "sec_num": null }, { "text": "D \u03b1 e (46+52)\u00d7(46+52) 2\u00d7100\u00d7(2\u00d7100\u22121) \u00d7 0 + (44+32)\u00d7(46+52) 2\u00d7100\u00d7(2\u00d7100\u22121) \u00d7 1 + (10+16)\u00d7(46+52) 2\u00d7100\u00d7(2\u00d7100\u22121) \u00d7 1 2 + (46+52)\u00d7(44+32) 2\u00d7100\u00d7(2\u00d7100\u22121) \u00d7 1 + (44+32)\u00d7(44+32) 2\u00d7100\u00d7(2\u00d7100\u22121) \u00d7 0 + (10+16)\u00d7(44+32) 2\u00d7100\u00d7(2\u00d7100\u22121) \u00d7 1 2 + (46+52)\u00d7(10+16) 2\u00d7100\u00d7(2\u00d7100\u22121) \u00d7 1 2 + (44+32)\u00d7(10+16) 2\u00d7100\u00d7(2\u00d7100\u22121) \u00d7 1 2 + (10+16)\u00d7(10+16) 2\u00d7100\u00d7(2\u00d7100\u22121) \u00d7 0 0.4879 D \u03ba w e 46\u00d752 100\u00d7100 \u00d7 0 + 44\u00d752 100\u00d7100 \u00d7 1 + 10\u00d752 100\u00d7100 \u00d7 1 2 + 46\u00d732 100\u00d7100 \u00d7 1 + 44\u00d732 100\u00d7100 \u00d7 0 + 10\u00d732 100\u00d7100 \u00d7 1 2 + 46\u00d716 100\u00d7100 \u00d7 1 2 + 44\u00d716 100\u00d7100 \u00d7 1 2 + 10\u00d716 100\u00d7100 \u00d7 0 0.49", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expected agreement", "sec_num": null }, { "text": "by its weight. The value of the weighted coefficients is given by the formula 1 \u2212 D o D e , so \u03b1 \u2248 1 \u2212 0.09 0.4879 \u2248 0.8156, and \u03ba w = 1 \u2212 0.09 0.49 \u2248 0.8163.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expected agreement", "sec_num": null }, { "text": "Two issues recently raised by Di Eugenio and Glass (2004) concern the behavior of agreement coefficients when the annotation data are severely skewed. One issue, which Di Eugenio and Glass call the bias problem, is that \u03c0 and \u03ba yield quite different numerical values when the annotators' marginal distributions are widely divergent; the other issue, the prevalence problem, is the exceeding difficulty in getting high agreement values when most of the items fall under one category. Looking at these two problems in detail is useful for understanding the differences between the coefficients.", "cite_spans": [ { "start": 33, "end": 57, "text": "Eugenio and Glass (2004)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Bias and Prevalence", "sec_num": "3." 
}, { "text": "The difference between \u03c0 and \u03b1 on the one hand and \u03ba on the other hand lies in the interpretation of the notion of chance agreement, whether it is the amount expected from the the actual distribution of items among categories (\u03c0) or from individual coder priors (\u03ba). As mentioned in Section 2.4, this difference has been the subject of much debate (Fleiss 1975; Krippendorff 1978 Krippendorff , 2004b Byrt, Bishop, and Carlin 1993; Zwick 1988; Hsu and Field 2003; Di Eugenio and Glass 2004; Craggs and McGee Wood 2005) . A claim often repeated in the literature is that single-distribution coefficients like \u03c0 and \u03b1 assume that different coders produce similar distributions of items among categories, with the implication that these coefficients are inapplicable when the annotators show substantially different distributions. Recommendations vary: Zwick (1988) suggests testing the individual coders' distributions using the modified \u03c7 2 test of Stuart (1955) , and discarding the annotation as unreliable if significant systematic discrepancies are observed. In contrast, Hsu and Field (2003, page 214) recommend reporting the value of \u03ba even when the coders produce different distributions, because it is \"the only [index] . . . that could legitimately be applied in the presence of marginal heterogeneity\"; likewise, Di Eugenio and Glass (2004, page 96) recommend using \u03ba in \"the vast majority . . . of discourse-and dialogue-tagging efforts\" where the individual coders' distributions tend to vary. All of these proposals are based on a misconception: that single-distribution coefficients require similar distributions by the individual annotators in order to work properly. This is not the case. The difference between the coefficients is only in the interpretation of \"chance agreement\": \u03c0-style coefficients calculate the chance of agreement among arbitrary coders, whereas \u03ba-style coefficients calculate the chance of agreement among the coders who produced the reliability data. Therefore, the choice of coefficient should not depend on the magnitude of the divergence between the coders, but rather on the desired interpretation of chance agreement.", "cite_spans": [ { "start": 348, "end": 361, "text": "(Fleiss 1975;", "ref_id": "BIBREF37" }, { "start": 362, "end": 379, "text": "Krippendorff 1978", "ref_id": "BIBREF53" }, { "start": 380, "end": 400, "text": "Krippendorff , 2004b", "ref_id": "BIBREF57" }, { "start": 401, "end": 431, "text": "Byrt, Bishop, and Carlin 1993;", "ref_id": "BIBREF16" }, { "start": 432, "end": 443, "text": "Zwick 1988;", "ref_id": "BIBREF108" }, { "start": 444, "end": 463, "text": "Hsu and Field 2003;", "ref_id": "BIBREF45" }, { "start": 464, "end": 490, "text": "Di Eugenio and Glass 2004;", "ref_id": "BIBREF28" }, { "start": 491, "end": 518, "text": "Craggs and McGee Wood 2005)", "ref_id": "BIBREF25" }, { "start": 850, "end": 862, "text": "Zwick (1988)", "ref_id": "BIBREF108" }, { "start": 948, "end": 961, "text": "Stuart (1955)", "ref_id": "BIBREF101" }, { "start": 1075, "end": 1105, "text": "Hsu and Field (2003, page 214)", "ref_id": null }, { "start": 1219, "end": 1226, "text": "[index]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "Another common claim is that individual-distribution coefficients like \u03ba \"reward\" annotators for disagreeing on the marginal distributions. 
For example, Di Eugenio and Glass (2004, page 99) say that \u03ba suffers from what they call the bias problem, described as \"the paradox that \u03ba Co [our \u03ba] increases as the coders become less similar.\" Similar reservations about the use of \u03ba have been noted by Brennan and Prediger (1981) and Zwick (1988) . However, the bias problem is less paradoxical than it sounds. Although it is true that for a fixed observed agreement, a higher difference in coder marginals implies a lower expected agreement and therefore a higher \u03ba value, the conclusion that \u03ba penalizes coders for having similar distributions is unwarranted. This is because A o and A e are not independent: Both are drawn from the same set of observations. What \u03ba does is discount some of the disagreement resulting from different coder marginals by incorporating it into A e . Whether this is desirable depends on the application for which the coefficient is used.", "cite_spans": [ { "start": 156, "end": 189, "text": "Eugenio and Glass (2004, page 99)", "ref_id": null }, { "start": 396, "end": 423, "text": "Brennan and Prediger (1981)", "ref_id": "BIBREF9" }, { "start": 428, "end": 440, "text": "Zwick (1988)", "ref_id": "BIBREF108" } ], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "The most common application of agreement measures in CL is to infer the reliability of a large-scale annotation, where typically each piece of data will be marked by just one coder, by measuring agreement on a small subset of the data which is annotated by multiple coders. In order to make this generalization, the measure must reflect the reliability of the annotation procedure, which is independent of the actual annotators used. Reliability, or reproducibility of the coding, is reduced by all disagreements-both random and systematic. The most appropriate measures of reliability for this purpose are therefore single-distribution coefficients like \u03c0 and \u03b1, which generalize over the individual coders and exclude marginal disagreements from the expected agreement. This argument has been presented recently in much detail by Krippendorff (2004b) and reiterated by Craggs and McGee Wood (2005) .", "cite_spans": [ { "start": 832, "end": 852, "text": "Krippendorff (2004b)", "ref_id": "BIBREF57" }, { "start": 871, "end": 899, "text": "Craggs and McGee Wood (2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "At the same time, individual-distribution coefficients like \u03ba provide important information regarding the trustworthiness (validity) of the data on which the annotators agree. As an intuitive example, think of a person who consults two analysts when deciding whether to buy or sell certain stocks. If one analyst is an optimist and tends to recommend buying whereas the other is a pessimist and tends to recommend selling, they are likely to agree with each other less than two more neutral analysts, so overall their recommendations are likely to be less reliable-less reproducible-than those that come from a population of like-minded analysts. This reproducibility is measured by \u03c0. But whenever the optimistic and pessimistic analysts agree on a recommendation for a particular stock, whether it is \"buy\" or \"sell,\" the confidence that this is indeed the right decision is higher than the same advice from two like-minded analysts. 
This is why \u03ba \"rewards\" biased annotators: it is not a matter of reproducibility (reliability) but rather of trustworthiness (validity).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "Having said this, we should point out that, first, in practice the difference between \u03c0 and \u03ba doesn't often amount to much (see discussion in Section 4). Moreover, the difference becomes smaller as agreement increases, because all the points of agreement contribute toward making the coder marginals similar (it took a lot of experimentation to create data for Table 4 so that the values of \u03c0 and \u03ba would straddle the conventional cutoff point of 0.80, and even so the difference is very small). Finally, one would expect the difference between \u03c0 and \u03ba to diminish as the number of coders grows; this is shown subsequently. 6 We define B, the overall annotator bias in a particular set of coding data, as the difference between the expected agreement according to (multi)-\u03c0 and the expected agreement according to (multi)-\u03ba. Annotator bias is a measure of variance: If we take c to be a random variable with equal probabilities for all coders, then the annotator bias B is the sum of the variances of P(k|c) for all categories k \u2208 K, divided by the number of coders c less one (see Artstein and Poesio [2005] for a proof).", "cite_spans": [ { "start": 624, "end": 625, "text": "6", "ref_id": null }, { "start": 1082, "end": 1108, "text": "Artstein and Poesio [2005]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "B = A \u03c0 e \u2212 A \u03ba e = 1 c \u2212 1 \u2211 k\u2208K \u03c3 2 P(k|c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "Annotator bias can be used to express the difference between \u03ba and \u03c0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "\u03ba \u2212 \u03c0 = A o \u2212 (A \u03c0 e \u2212 B) 1 \u2212 (A \u03c0 e \u2212 B) \u2212 A o \u2212 A \u03c0 e 1 \u2212 A \u03c0 e = B \u2022 (1 \u2212 A o ) (1 \u2212 A \u03ba e )(1 \u2212 A \u03c0 e )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "This allows us to make the following observations about the relationship between \u03c0 and \u03ba.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "Observation 1. The difference between \u03ba and \u03c0 grows as the annotator bias grows: For a constant A o and A \u03c0 e , a greater B implies a greater value for \u03ba \u2212 \u03c0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "Observation 2. The greater the number of coders, the lower the annotator bias B, and hence the lower the difference between \u03ba and \u03c0, because the variance ofP(k|c) does not increase in proportion to the number of coders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "In other words, provided enough coders are used, it should not matter whether a single-distribution or individual-distribution coefficient is used. 
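Observation 2 is easy to check numerically. The sketch below (our own construction, reusing the optimist/pessimist image from the preceding discussion) computes the expected agreement of multi-\u03c0 and of multi-\u03ba, the latter taken as the mean pairwise chance agreement over all coder pairs, for the same two divergent coder distributions replicated over larger and larger annotator pools; the printed annotator bias B shrinks roughly like 1/(c \u2212 1).
```python
from itertools import combinations

def expected_agreements(coder_dists):
    """coder_dists: one dict per coder, mapping category -> proportion of items.
    Returns (multi-pi expected agreement, multi-kappa expected agreement)."""
    c = len(coder_dists)
    cats = set().union(*coder_dists)
    mean = {k: sum(d.get(k, 0.0) for d in coder_dists) / c for k in cats}
    a_e_pi = sum(p ** 2 for p in mean.values())
    pairs = list(combinations(coder_dists, 2))
    a_e_kappa = sum(d1.get(k, 0.0) * d2.get(k, 0.0)
                    for d1, d2 in pairs for k in cats) / len(pairs)
    return a_e_pi, a_e_kappa

# two "biased" coder profiles, replicated to simulate larger annotator pools
optimist = {"buy": 0.7, "sell": 0.3}
pessimist = {"buy": 0.3, "sell": 0.7}
for c in (2, 4, 8, 16):
    a_pi, a_kappa = expected_agreements([optimist, pessimist] * (c // 2))
    print(c, round(a_pi - a_kappa, 4))   # B: 0.08, 0.0267, 0.0114, 0.0053
```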
This is not to imply that multiple coders increase reliability: The variance of the individual coders' distributions can be just as large with many coders as with few coders, but its effect on the value of \u03ba decreases as the number of coders grows, and becomes more similar to random noise. The same holds for weighted measures too; see the extended version of this article for definitions and proof. In an annotation study with 18 subjects, we compared \u03b1 with a variant which uses individual coder distributions to calculate expected agreement, and found that the values never differed beyond the third decimal point (Poesio and Artstein 2005) .", "cite_spans": [ { "start": 766, "end": 792, "text": "(Poesio and Artstein 2005)", "ref_id": "BIBREF80" } ], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "We conclude with a summary of our views concerning the difference between \u03c0style and \u03ba-style coefficients. First of all, keep in mind that empirically the difference is small, and gets smaller as the number of annotators increases. Then instead of reporting two coefficients, as suggested by Di Eugenio and Glass (2004) , the appropriate coefficient should be chosen based on the task (not on the observed differences between coder marginals). When the coefficient is used to assess reliability, a single-distribution coefficient like \u03c0 or \u03b1 should be used; this is indeed already the practice in CL, because Siegel and Castellan's K is identical with (multi-)\u03c0. It is also good practice to test reliability with more than two coders, in order to reduce the likelihood of coders sharing a deviant reading of the annotation guidelines.", "cite_spans": [ { "start": 295, "end": 319, "text": "Eugenio and Glass (2004)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Annotator Bias", "sec_num": "3.1" }, { "text": "We touched upon the matter of skewed data in Section 2.3 when we motivated the need for chance correction: If a disproportionate amount of the data falls under one category, then the expected agreement is very high, so in order to demonstrate high reliability an even higher observed agreement is needed. This leads to the so-called paradox that chance-corrected agreement may be low even though A o is high (Cicchetti and Feinstein 1990; Feinstein and Cicchetti 1990; Di Eugenio and Glass 2004) . Moreover, when the data are highly skewed in favor of one category, the high agreement also corresponds to high accuracy: If, say, 95% of the data fall under one category label, then random coding would cause two coders to jointly assign this category label to 90.25% of the items, and on average 95% of these labels would be correct, for an overall accuracy of at least 85.7%. This leads to the surprising result that when data are highly skewed, coders may agree on a high proportion of items while producing annotations that are indeed correct to a high degree, yet the reliability coefficients remain low. 
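The arithmetic behind this example is easily verified; in the sketch below the observed agreement of 0.96 is our own illustrative figure, added to show how modest the chance-corrected value remains even for nearly perfect raw agreement.
```python
p_common = 0.95
joint_common = p_common ** 2                  # 0.9025: both coders assign the majority label
accuracy_floor = joint_common * p_common      # ~0.857: of those, the share that is also correct
a_e = p_common ** 2 + (1 - p_common) ** 2     # 0.905: expected agreement with a 95/5 split
a_o = 0.96                                    # high raw agreement...
print(joint_common, round(accuracy_floor, 3), round((a_o - a_e) / (1 - a_e), 2))
# 0.9025 0.857 0.58  -- ...yet the chance-corrected value stays low
```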
(For an illustration, see the discussion of agreement results on coding discourse segments in Section 4.3.1.)", "cite_spans": [ { "start": 408, "end": 438, "text": "(Cicchetti and Feinstein 1990;", "ref_id": "BIBREF20" }, { "start": 439, "end": 468, "text": "Feinstein and Cicchetti 1990;", "ref_id": "BIBREF34" }, { "start": 469, "end": 495, "text": "Di Eugenio and Glass 2004)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Prevalence", "sec_num": "3.2" }, { "text": "This surprising result is, however, justified. Reliability implies the ability to distinguish between categories, but when one category is very common, high accuracy and high agreement can also result from indiscriminate coding. The test for reliability in such cases is the ability to agree on the rare categories (regardless of whether these are the categories of interest). Indeed, chance-corrected coefficients are sensitive to agreement on rare categories. This is easiest to see with a simple example of two coders and two categories, one common and the other one rare; to further simplify the calculation we also assume that the coder marginals are identical, so that \u03c0 and \u03ba yield the same values. We can thus represent the judgments in a contingency table with just two parameters: \u03b5 is half the proportion of items on which there is disagreement, and \u03b4 is the proportion of agreement on the Rare category. Both of these proportions are assumed to be small, so the bulk of the items (a proportion of 1 \u2212 (\u03b4 + 2\u03b5)) are labeled with the Common category by both coders (Table 7) . From this table we can calculate A o = 1 \u2212 2\u03b5 and A e = 1 \u2212 2(\u03b4 + \u03b5) + 2(\u03b4 + \u03b5)^2 , as well as \u03c0 and \u03ba.", "cite_spans": [], "ref_spans": [ { "start": 1073, "end": 1082, "text": "(Table 7)", "ref_id": null } ], "eq_spans": [], "section": "Prevalence", "sec_num": "3.2" }, { "text": "\\pi, \\kappa = \\frac{(1 - 2\\varepsilon) - (1 - 2(\\delta + \\varepsilon) + 2(\\delta + \\varepsilon)^2)}{1 - (1 - 2(\\delta + \\varepsilon) + 2(\\delta + \\varepsilon)^2)} = \\frac{\\delta}{\\delta + \\varepsilon} - \\frac{\\varepsilon}{1 - (\\delta + \\varepsilon)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prevalence", "sec_num": "3.2" }, { "text": "When \u03b5 and \u03b4 are both small, the fraction after the minus sign is small as well, so \u03c0 and \u03ba are approximately \u03b4/(\u03b4 + \u03b5): the value we get if we take all the items marked by one particular coder as Rare, and calculate what proportion of those items were labeled Rare by the other coder. This is a measure of the coders' ability to agree on the rare category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prevalence", "sec_num": "3.2" }, { "text": "Table 7 A simple example of agreement on dialogue act tagging. Rows give coder B, columns coder A (Common, Rare, Total): Common row: 1 \u2212 (\u03b4 + 2\u03b5), \u03b5, 1 \u2212 (\u03b4 + \u03b5); Rare row: \u03b5, \u03b4, \u03b4 + \u03b5; Total row: 1 \u2212 (\u03b4 + \u03b5), \u03b4 + \u03b5, 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prevalence", "sec_num": "3.2" }, { "text": "In this section we review the use of intercoder agreement measures in CL since Carletta's original paper in light of the discussion in the previous sections. 
We begin with a summary of Krippendorff's recommendations about measuring reliability (Krippendorff 2004a, Chapter 11) , then discuss how coefficients of agreement have been used in CL to measure the reliability of annotation schemes, focusing in particular on the types of annotation where there has been some debate concerning the most appropriate measures of agreement.", "cite_spans": [ { "start": 244, "end": 276, "text": "(Krippendorff 2004a, Chapter 11)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Using Agreement Measures for CL Annotation Tasks", "sec_num": "4." }, { "text": "Krippendorff (2004a, Chapter 11) notes with regret the fact that reliability is discussed in only around 69% of studies in content analysis. In CL as well, not all annotation projects include a formal test of intercoder agreement. Some of the best known annotation efforts, such as the creation of the Penn Treebank (Marcus, Marcinkiewicz, and Santorini 1993) and the British National Corpus (Leech, Garside, and Bryant 1994) , do not report reliability results as they predate the Carletta paper; but even among the more recent efforts, many only report percentage agreement, as for the creation of the PropBank (Palmer, Dang, and Fellbaum 2007) or the ongoing OntoNotes annotation (Hovy et al. 2006) . Even more importantly, very few studies apply a methodology as rigorous as that envisaged by Krippendorff and other content analysts. We therefore begin this discussion of CL practice with a summary of the main recommendations found in Chapter 11 of Krippendorff (2004a) , even though, as we will see, we think that some of these recommendations may not be appropriate for CL.", "cite_spans": [ { "start": 316, "end": 359, "text": "(Marcus, Marcinkiewicz, and Santorini 1993)", "ref_id": "BIBREF63" }, { "start": 392, "end": 425, "text": "(Leech, Garside, and Bryant 1994)", "ref_id": "BIBREF59" }, { "start": 613, "end": 646, "text": "(Palmer, Dang, and Fellbaum 2007)", "ref_id": "BIBREF72" }, { "start": 683, "end": 701, "text": "(Hovy et al. 2006)", "ref_id": "BIBREF44" }, { "start": 954, "end": 974, "text": "Krippendorff (2004a)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology and Interpretation of the Results: General Issues", "sec_num": "4.1" }, { "text": "Reproducibility. Krippendorff's recommendations were developed for the field of content analysis, where coding is used to draw conclusions from the texts. A coded corpus is thus akin to the result of a scientific experiment, and it can only be considered valid if it is reproducible-that is, if the same coded results can be replicated in an independent coding exercise. 
Krippendorff therefore argues that any study using observed agreement as a measure of reproducibility must satisfy the following requirements:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Data to Measure", "sec_num": "4.1.1" }, { "text": "\u2022 It must employ an exhaustively formulated, clear, and usable coding scheme together with step-by-step instructions on how to use it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Data to Measure", "sec_num": "4.1.1" }, { "text": "\u2022 It must use clearly specified criteria concerning the choice of coders (so that others may use such criteria to reproduce the data).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Data to Measure", "sec_num": "4.1.1" }, { "text": "\u2022 It must ensure that the coders that generate the data used to measure reproducibility work independently of each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Data to Measure", "sec_num": "4.1.1" }, { "text": "Some practices that are common in CL do not satisfy these requirements. The first requirement is violated by the practice of expanding the written coding instructions and including new rules as the data are generated. The second requirement is often violated by using experts as coders, particularly long-term collaborators, as such coders may agree not because they are carefully following written instructions, but because they know the purpose of the research very well-which makes it virtually impossible for others to reproduce the results on the basis of the same coding scheme (the problems arising when using experts were already discussed at length in Carletta [1996] ). Practices which violate the third requirement (independence) include asking coders to discuss their judgments with each other and reach their decisions by majority vote, or to consult with each other when problems not foreseen in the coding instructions arise. Any of these practices make the resulting data unusable for measuring reproducibility. Krippendorff's own summary of his recommendations is that to obtain usable data for measuring reproducibility a researcher must use data generated by three or more coders, chosen according to some clearly specified criteria, and working independently according to a written coding scheme and coding instructions fixed in advance. Krippendorff also discusses the criteria to be used in the selection of the sample, from the minimum number of units (obtained using a formula from Bloch and Kraemer [1989] , reported in Krippendorff [2004a, page 239] ), to how to make the sample representative of the data population (each category should occur in the sample often enough to yield at least five chance agreements), to how to ensure the reliability of the instructions (the sample should contain examples of all the values for the categories). These recommendations are particularly relevant in light of the comments of Craggs and McGee Wood (2005, page 290) , which discourage researchers from testing their coding instructions on data from more than one domain. 
Given that the reliability of the coding instructions depends to a great extent on how complications are dealt with, and that every domain displays different complications, the sample should contain sufficient examples from all domains which have to be annotated according to the instructions.", "cite_spans": [ { "start": 661, "end": 676, "text": "Carletta [1996]", "ref_id": "BIBREF17" }, { "start": 1506, "end": 1530, "text": "Bloch and Kraemer [1989]", "ref_id": "BIBREF8" }, { "start": 1545, "end": 1575, "text": "Krippendorff [2004a, page 239]", "ref_id": null }, { "start": 1945, "end": 1983, "text": "Craggs and McGee Wood (2005, page 290)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Generating Data to Measure", "sec_num": "4.1.1" }, { "text": "In hypothesis testing, it is common to test for the significance of a result against a null hypothesis of chance behavior; for an agreement coefficient this would mean rejecting the possibility that a positive value of agreement is nevertheless due to random coding. We can rely on the statement by Siegel and Castellan (1988, Section 9.8 .2) that when sample sizes are large, the sampling distribution of K (Fleiss's multi-\u03c0) is approximately normal and centered around zero-this allows testing the obtained value of K against the null hypothesis of chance agreement by using the z statistic. It is also easy to test Krippendorff's \u03b1 with the interval distance metric against the null hypothesis of chance agreement, because the hypothesis \u03b1 = 0 is identical to the hypothesis F = 1 in an analysis of variance.", "cite_spans": [ { "start": 299, "end": 338, "text": "Siegel and Castellan (1988, Section 9.8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Establishing Significance.", "sec_num": "4.1.2" }, { "text": "However, a null hypothesis of chance agreement is not very interesting, and demonstrating that agreement is significantly better than chance is not enough to establish reliability. This has already been pointed out by Cohen (1960, page 44) : \"to know merely that \u03ba is beyond chance is trivial since one usually expects much more than this in the way of reliability in psychological measurement.\" The same point has been repeated and stressed in many subsequent works (e.g., Posner et al. 1990; Di Eugenio 2000; Krippendorff 2004a ): The reason for measuring reliability is not to test whether coders perform better than chance, but to ensure that the coders do not deviate too much from perfect agreement (Krippendorff 2004a, page 237) .", "cite_spans": [ { "start": 218, "end": 239, "text": "Cohen (1960, page 44)", "ref_id": null }, { "start": 474, "end": 493, "text": "Posner et al. 1990;", "ref_id": null }, { "start": 494, "end": 510, "text": "Di Eugenio 2000;", "ref_id": "BIBREF27" }, { "start": 511, "end": 529, "text": "Krippendorff 2004a", "ref_id": "BIBREF56" }, { "start": 705, "end": 735, "text": "(Krippendorff 2004a, page 237)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Establishing Significance.", "sec_num": "4.1.2" }, { "text": "The relevant notion of significance for agreement coefficients is therefore a confidence interval. Cohen (1960, pages 43-44) implies that when sample sizes are large, the sampling distribution of \u03ba is approximately normal for any true population value of \u03ba, and therefore confidence intervals for the observed value of \u03ba can be determined using the usual multiples of the standard error. 
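In practice, a simple way to obtain such an interval for any of the coefficients is a nonparametric bootstrap over items; the sketch below is a generic illustration of our own (not the specific procedures proposed in the works cited in this section) and assumes some function that computes the chosen coefficient from item-level labels, such as the sketches given earlier.
```python
import random

def bootstrap_ci(data, coefficient, n_boot=1000, level=0.95, seed=0):
    """Percentile bootstrap interval for an agreement coefficient.
    data: list of items (each a list of labels, one per coder);
    coefficient: a function from such data to a value, e.g. an alpha or K implementation.
    (Degenerate resamples containing a single category would need special handling.)"""
    rng = random.Random(seed)
    values = sorted(coefficient([data[rng.randrange(len(data))] for _ in range(len(data))])
                    for _ in range(n_boot))
    lo = values[int(n_boot * (1 - level) / 2)]
    hi = values[int(n_boot * (1 + level) / 2) - 1]
    return lo, hi
```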
Donner and Eliasziw (1987) propose a more general form of significance test for arbitrary levels of agreement. In contrast, Krippendorff (2004a, Section 11.4 .2) states that the distribution of \u03b1 is unknown, so confidence intervals must be obtained by bootstrapping; a software package for doing this is described in Hayes and Krippendorff (2007) .", "cite_spans": [ { "start": 99, "end": 124, "text": "Cohen (1960, pages 43-44)", "ref_id": null }, { "start": 388, "end": 414, "text": "Donner and Eliasziw (1987)", "ref_id": "BIBREF31" }, { "start": 512, "end": 545, "text": "Krippendorff (2004a, Section 11.4", "ref_id": null }, { "start": 705, "end": 734, "text": "Hayes and Krippendorff (2007)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Establishing Significance.", "sec_num": "4.1.2" }, { "text": "Even after testing significance and establishing confidence intervals for agreement coefficients, we are still faced with the problem of interpreting the meaning of the resulting values. Suppose, for example, we establish that for a particular task, K = 0.78 \u00b1 0.05. Is this good or bad? Unfortunately, deciding what counts as an adequate level of agreement for a specific purpose is still little more than a black art: As we will see, different levels of agreement may be appropriate for resource building and for more linguistic purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreting the Value of Kappa-Like Coefficients.", "sec_num": "4.1.3" }, { "text": "The problem is not unlike that of interpreting the values of correlation coefficients, and in the area of medical diagnosis, the best known conventions concerning the value of kappa-like coefficients, those proposed by Landis and Koch (1977) and reported in Figure 1 , are indeed similar to those used for correlation coefficients, where values above 0.4 are also generally considered adequate (Marion 2004) . Many medical researchers feel that these conventions are appropriate, and in language studies, a similar interpretation of the values has been proposed by Rietveld and van Hout (1993) . In CL, however, most researchers follow the more stringent conventions from content analysis proposed by Krippendorff (1980, page 147) , as reported by Carletta (1996, page 252) : \"content analysis researchers generally think of K > .8 as good reliability, with .67 < K < .8 allowing tentative conclusions to be drawn\" (Krippendorff was discussing values of \u03b1 rather than K, but the coefficients are nearly equivalent for categorical labels). As a result, ever since Carletta's influential paper, CL researchers have attempted to achieve a value of K (more seldom, of \u03b1) above the 0.8 threshold, or, failing that, the 0.67 level allowing for \"tentative conclusions.\" However, the description of the 0.67 boundary in Krippendorff (1980) was actually \"highly tentative and cautious,\" and in later work Krippendorff clearly considers 0.8 the absolute minimum value of \u03b1 to accept for any serious purpose: \"Even a cutoff point of \u03b1 = .800 . . . is a pretty low standard\" (Krippendorff 2004a, page 242) . 
Recent content analysis practice seems to have settled for even more stringent requirements: A recent textbook, Neuendorf (2002, page 3), analyzing several proposals concerning \"acceptable\" reliability, concludes that \"reliability coefficients of .90 or greater would be acceptable to all, .80 or greater would be acceptable in most situations, and below that, there exists great disagreement.\" This is clearly a fundamental issue. Ideally we would want to establish thresholds which are appropriate for the field of CL, but as we will see in the rest of this section, a decade of practical experience hasn't helped in settling the matter. In fact, weighted coefficients, while arguably more appropriate for many annotation tasks, make the issue of deciding when the value of a coefficient indicates sufficient agreement even more complicated because of the problem of determining appropriate weights (see Section 4.4). We will return to the issue of interpreting the value of the coefficients at the end of this article.", "cite_spans": [ { "start": 219, "end": 241, "text": "Landis and Koch (1977)", "ref_id": "BIBREF58" }, { "start": 394, "end": 407, "text": "(Marion 2004)", "ref_id": "BIBREF64" }, { "start": 565, "end": 593, "text": "Rietveld and van Hout (1993)", "ref_id": "BIBREF92" }, { "start": 701, "end": 730, "text": "Krippendorff (1980, page 147)", "ref_id": null }, { "start": 748, "end": 773, "text": "Carletta (1996, page 252)", "ref_id": null }, { "start": 1312, "end": 1331, "text": "Krippendorff (1980)", "ref_id": "BIBREF54" }, { "start": 1563, "end": 1593, "text": "(Krippendorff 2004a, page 242)", "ref_id": null } ], "ref_spans": [ { "start": 258, "end": 266, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Interpreting the Value of Kappa-Like Coefficients.", "sec_num": "4.1.3" }, { "text": "Kappa values and strength of agreement according to Landis and Koch (1977) .", "cite_spans": [ { "start": 52, "end": 74, "text": "Landis and Koch (1977)", "ref_id": "BIBREF58" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Learning. In a recent article, Reidsma and Carletta (2008) point out that the goals of annotation in CL differ from those of content analysis, where agreement coefficients originate. A common use of an annotated corpus in CL is not to confirm or reject a hypothesis, but to generalize the patterns using machine-learning algorithms. Through a series of simulations, Reidsma and Carletta demonstrate that agreement coefficients are poor predictors of machine-learning success: Even highly reproducible annotations are difficult to generalize when the disagreements contain patterns that can be learned, whereas highly noisy and unreliable data can be generalized successfully when the disagreements do not contain learnable patterns. These results show that agreement coefficients should not be used as indicators of the suitability of annotated data for machine learning. However, the purpose of reliability studies is not to find out whether annotations can be generalized, but whether they capture some kind of observable reality. Even if the pattern of disagreement allows generalization, we need evidence that this generalization would be meaningful. The decision whether a set of annotation guidelines are appropriate or meaningful is ultimately a qualitative one, but a baseline requirement is an acceptable level of agreement among the annotators, who serve as the instruments of measurement. 
Reliability studies test the soundness of an annotation scheme and guidelines, which is not to be equated with the machine-learnability of data produced by such guidelines.", "cite_spans": [ { "start": 31, "end": 58, "text": "Reidsma and Carletta (2008)", "ref_id": "BIBREF88" } ], "ref_spans": [], "eq_spans": [], "section": "Agreement and Machine", "sec_num": "4.1.4" }, { "text": "The simplest and most common coding in CL involves labeling segments of text with a limited number of linguistic categories: Examples include part-of-speech tagging, dialogue act tagging, and named entity tagging. The practices used to test reliability for this type of annotation tend to be based on the assumption that the categories used in the annotation are mutually exclusive and equally distinct from one another; this assumption seems to have worked out well in practice, but questions about it have been raised even for the annotation of parts of speech (Babarczy, Carroll, and Sampson 2006) , let alone for discourse coding tasks such as dialogue act coding. We concentrate here on this latter type of coding, but a discussion of issues raised for POS, named entity, and prosodic coding can be found in the extended version of the article. Dialogue act tagging is a type of linguistic annotation with which by now the CL community has had extensive experience: Several dialogue-act-annotated spoken language corpora now exist, such as MapTask (Carletta et al. 1997) , Switchboard (Stolcke et al. 2000) , Verbmobil (Jekat et al. 1995) , and Communicator (e.g., Doran et al. 2001) , among others. Historically, dialogue act annotation was also one of the types of annotation that motivated the introduction in CL of chance-corrected coefficients of agreement (Carletta et al. 1997 ) and, as we will see, it has been the type of annotation that has generated the most discussion concerning annotation methodology and measuring agreement.", "cite_spans": [ { "start": 563, "end": 600, "text": "(Babarczy, Carroll, and Sampson 2006)", "ref_id": "BIBREF4" }, { "start": 1053, "end": 1075, "text": "(Carletta et al. 1997)", "ref_id": "BIBREF18" }, { "start": 1090, "end": 1111, "text": "(Stolcke et al. 2000)", "ref_id": "BIBREF100" }, { "start": 1124, "end": 1143, "text": "(Jekat et al. 1995)", "ref_id": "BIBREF47" }, { "start": 1170, "end": 1188, "text": "Doran et al. 2001)", "ref_id": "BIBREF32" }, { "start": 1367, "end": 1388, "text": "(Carletta et al. 1997", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Labeling Units with a Common and Predefined Set of Categories: The Case of Dialogue Act Tagging", "sec_num": "4.2" }, { "text": "A number of coding schemes for dialogue acts have achieved values of K over 0.8 and have therefore been assumed to be reliable: For example, K = 0.83 for the 13-tag MapTask coding scheme (Carletta et al. 1997) , K = 0.8 for the 42-tag Switchboard-DAMSL scheme (Stolcke et al. 2000) , K = 0.90 for the smaller 20-tag subset of the CSTAR scheme used by Doran et al. (2001) . All of these tests were based on the same two assumptions: that every unit (utterance) is assigned to exactly one category (dialogue act), and that these categories are distinct. Therefore, again, unweighted measures, and in particular K, tend to be used for measuring inter-coder agreement.", "cite_spans": [ { "start": 187, "end": 209, "text": "(Carletta et al. 1997)", "ref_id": "BIBREF18" }, { "start": 260, "end": 281, "text": "(Stolcke et al. 
2000)", "ref_id": "BIBREF100" }, { "start": 351, "end": 370, "text": "Doran et al. (2001)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Labeling Units with a Common and Predefined Set of Categories: The Case of Dialogue Act Tagging", "sec_num": "4.2" }, { "text": "However, these assumptions have been challenged based on the observation that utterances tend to have more than one function at the dialogue act level (Traum and Hinkelman 1992; Allen and Core 1997; Bunt 2000) ; for a useful survey, see Popescu-Belis (2005) . An assertion performed in answer to a question, for instance, typically performs at least two functions at different levels: asserting some information-the dialogue act that we called Statement in Section 2.3, operating at what Traum and Hinkelman called the \"core speech act\" level-and confirming that the question has been understood, a dialogue act operating at the \"grounding\" level and usually known as Acknowledgment (Ack). In older dialogue act tagsets, acknowledgments and statements were treated as alternative labels at the same \"level\", forcing coders to choose one or the other when an utterance performed a dual function, according to a well-specified set of instructions. By contrast, in the annotation schemes inspired from these newer theories such as DAMSL (Allen and Core 1997) , coders are allowed to assign tags along distinct \"dimensions\" or \"levels\".", "cite_spans": [ { "start": 151, "end": 177, "text": "(Traum and Hinkelman 1992;", "ref_id": "BIBREF104" }, { "start": 178, "end": 198, "text": "Allen and Core 1997;", "ref_id": "BIBREF0" }, { "start": 199, "end": 209, "text": "Bunt 2000)", "ref_id": "BIBREF13" }, { "start": 237, "end": 257, "text": "Popescu-Belis (2005)", "ref_id": "BIBREF84" }, { "start": 1034, "end": 1055, "text": "(Allen and Core 1997)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Labeling Units with a Common and Predefined Set of Categories: The Case of Dialogue Act Tagging", "sec_num": "4.2" }, { "text": "Two annotation experiments testing this solution to the \"multi-tag\" problem with the DAMSL scheme were reported in Core and Allen (1997) and Di Eugenio et al. (1998) . In both studies, coders were allowed to mark each communicative function independently: That is, they were allowed to choose for each utterance one of the Statement tags (or possibly none), one of the Influencing-Addressee-Future-Action tags, and so forth-and agreement was evaluated separately for each dimension using (unweighted) K. Core and Allen found values of K ranging from 0.76 for answer to 0.42 for agreement to 0.15 for Committing-Speaker-Future-Action. Using different coding instructions and on a different corpus, Di Eugenio et al. observed higher agreement, ranging from K = 0.93 (for other-forward-function) to 0.54 (for the tag agreement).", "cite_spans": [ { "start": 115, "end": 136, "text": "Core and Allen (1997)", "ref_id": "BIBREF23" }, { "start": 141, "end": 165, "text": "Di Eugenio et al. (1998)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Labeling Units with a Common and Predefined Set of Categories: The Case of Dialogue Act Tagging", "sec_num": "4.2" }, { "text": "These relatively low levels of agreement led many researchers to return to \"flat\" tagsets for dialogue acts, incorporating however in their schemes some of the insights motivating the work on schemes such as DAMSL. 
The best known example of this type of approach is the development of the SWITCHBOARD-DAMSL tagset by Jurafsky, Shriberg, and Biasca (1997) , which incorporates many ideas from the \"multi-dimensional\" theories of dialogue acts, but does not allow marking an utterance as both an acknowledgment and a statement; a choice has to be made. This tagset results in overall agreement of K = 0.80. Interestingly, subsequent developments of SWITCHBOARD-DAMSL backtracked on some of these decisions. For instance, the ICSI-MRDA tagset developed for the annotation of the ICSI Meeting Recorder corpus reintroduces some of the DAMSL ideas, in that annotators are allowed to assign multiple SWITCHBOARD-DAMSL labels to utterances (Shriberg et al. 2004) . Shriberg et al. achieved a comparable reliability to that obtained with SWITCHBOARD-DAMSL, but only when using a tagset of just five \"class-maps\". Shriberg et al. (2004) also introduced a hierarchical organization of tags to improve reliability. The dimensions of the DAMSL scheme can be viewed as \"superclasses\" of dialogue acts which share some aspect of their meaning. For instance, the dimension of Influencing-Addressee-Future-Action (IAFA) includes the two dialogue acts Open-option (used to mark suggestions) and Directive, both of which bring into consideration a future action to be performed by the addressee. At least in principle, an organization of this type opens up the possibility for coders to mark an utterance with the superclass (IAFA) in case they do not feel confident that the utterance satisfies the additional requirements for Open-option or Directive. This, in turn, would do away with the need to make a choice between these two options. This possibility wasn't pursued in the studies using the original DAMSL that we are aware of (Core and Allen 1997; Di Eugenio 2000; Stent 2001 ), but was tested by Shriberg et al. (2004) and subsequent work, in particular Geertzen and Bunt (2006) , who were specifically interested in the idea of using hierarchical schemes to measure partial agreement, and in addition experimented with weighted coefficients of agreement for their hierarchical tagging scheme, specifically \u03ba w .", "cite_spans": [ { "start": 317, "end": 354, "text": "Jurafsky, Shriberg, and Biasca (1997)", "ref_id": "BIBREF48" }, { "start": 932, "end": 954, "text": "(Shriberg et al. 2004)", "ref_id": "BIBREF95" }, { "start": 1104, "end": 1126, "text": "Shriberg et al. (2004)", "ref_id": "BIBREF95" }, { "start": 2015, "end": 2036, "text": "(Core and Allen 1997;", "ref_id": "BIBREF23" }, { "start": 2037, "end": 2053, "text": "Di Eugenio 2000;", "ref_id": "BIBREF27" }, { "start": 2054, "end": 2064, "text": "Stent 2001", "ref_id": "BIBREF98" }, { "start": 2086, "end": 2108, "text": "Shriberg et al. (2004)", "ref_id": "BIBREF95" }, { "start": 2144, "end": 2168, "text": "Geertzen and Bunt (2006)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Labeling Units with a Common and Predefined Set of Categories: The Case of Dialogue Act Tagging", "sec_num": "4.2" }, { "text": "Geertzen and Bunt tested intercoder agreement with Bunt's DIT++ (Bunt 2005 ), a scheme with 11 dimensions that builds on ideas from DAMSL and from Dynamic Interpretation Theory (Bunt 2000) . In DIT++, tags can be hierarchically related: For example, the class information-seeking is viewed as consisting of two classes, yesno question (ynq) and wh-question (whq). 
The hierarchy is explicitly introduced in order to allow coders to leave some aspects of the coding undecided. For example, check is treated as a subclass of ynq in which, in addition, the speaker has a weak belief that the proposition that forms the belief is true. A coder who is not certain about the dialogue act performed using an utterance may simply choose to tag it as ynq.", "cite_spans": [ { "start": 64, "end": 74, "text": "(Bunt 2005", "ref_id": "BIBREF14" }, { "start": 177, "end": 188, "text": "(Bunt 2000)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Labeling Units with a Common and Predefined Set of Categories: The Case of Dialogue Act Tagging", "sec_num": "4.2" }, { "text": "The distance metric d proposed by Geertzen and Bunt is based on the criterion that two communicative functions are related (d(c 1 , c 2 ) < 1) if they stand in an ancestor-offspring relation within a hierarchy. Furthermore, they argue, the magnitude of d(c 1 , c 2 ) should be proportional to the distance between the functions in the hierarchy. A level-dependent correction factor is also proposed so as to leave open the option to make disagreements at higher levels of the hierarchy matter more than disagreements at the deeper level (for example, the distance between information-seeking and ynq might be considered greater than the distance between check and positive-check).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Labeling Units with a Common and Predefined Set of Categories: The Case of Dialogue Act Tagging", "sec_num": "4.2" }, { "text": "The results of an agreement test with two annotators run by Geertzen and Bunt show that taking into account partial agreement leads to values of \u03ba w that are higher than the values of \u03ba for the same categories, particularly for feedback, a class for which Core and Allen (1997) got low agreement. Of course, even assuming that the values of \u03ba w and \u03ba were directly comparable-we remark on the difficulty of interpreting the values of weighted coefficients of agreement in Section 4.4-it remains to be seen whether these higher values are a better indication of the extent of agreement between coders than the values of unweighted \u03ba.", "cite_spans": [ { "start": 256, "end": 277, "text": "Core and Allen (1997)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Labeling Units with a Common and Predefined Set of Categories: The Case of Dialogue Act Tagging", "sec_num": "4.2" }, { "text": "This discussion of coding schemes for dialogue acts introduced issues to which we will return for other CL annotation tasks as well. There are a number of wellestablished schemes for large-scale dialogue act annotation based on the assumption of mutual exclusivity between dialogue act tags, whose reliability is also well known; if one of these schemes is appropriate for modeling the communicative intentions found in a task, we recommend to our readers to use it. They should also realize, however, that the mutual exclusivity assumption is somewhat dubious. If a multi-dimensional or hierarchical tagset is used, readers should also be aware that weighted coefficients do capture partial agreement, and need not automatically result in lower reliability or in an explosion in the number of labels. 
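As an illustration of how a tag hierarchy can be turned into a distance metric for a weighted coefficient, the sketch below assigns a reduced distance to ancestor-offspring pairs, growing with their separation in the hierarchy, and the maximal distance of 1 otherwise; the hierarchy fragment and the discount factor are our own assumptions and do not reproduce Geertzen and Bunt's actual definition, which also includes a level-dependent correction factor. A function of this kind can be plugged into the weighted coefficients sketched earlier.
```python
# toy fragment of a dialogue-act hierarchy: child -> parent
PARENT = {"ynq": "info-seeking", "whq": "info-seeking", "check": "ynq",
          "positive-check": "check", "info-seeking": None}

def ancestors(tag):
    chain = []
    while tag is not None:
        chain.append(tag)
        tag = PARENT.get(tag)
    return chain

def hier_distance(a, b, discount=0.5):
    """0 for identical tags, a distance below 1 for ancestor-offspring pairs
    (growing with the number of levels between them), 1 for unrelated tags."""
    if a == b:
        return 0.0
    if a in ancestors(b):
        return 1.0 - discount ** ancestors(b).index(a)
    if b in ancestors(a):
        return 1.0 - discount ** ancestors(a).index(b)
    return 1.0

print(hier_distance("ynq", "check"))           # 0.5: one level apart
print(hier_distance("info-seeking", "check"))  # 0.75: two levels apart
print(hier_distance("ynq", "whq"))             # 1.0: siblings, fully distinct
```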
However, a hierarchical scheme may not reflect genuine annotation difficulties: For example, in the case of DIT++, one might argue that it is more difficult to confuse yes-no questions with wh-questions than with statements. We will also see in a moment that interpreting the results with weighted coefficients is difficult. We will return to both of these problems in what follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Labeling Units with a Common and Predefined Set of Categories: The Case of Dialogue Act Tagging", "sec_num": "4.2" }, { "text": "Before labeling can take place, the units of annotation, or markables, need to be identified-a process Krippendorff (1995 Krippendorff ( , 2004a calls unitizing. The practice in CL for the forms of annotation discussed in the previous section is to assume that the units are linguistic constituents which can be easily identified, such as words, utterances, or noun phrases, and therefore there is no need to check the reliability of this process. We are aware of few exceptions to this assumption, such as Carletta et al. (1997) on unitization for move coding and our own work on the GNOME corpus (Poesio 2004b) . In cases such as text segmentation, however, the identification of units is as important as their labeling, if not more important, and therefore checking agreement on unit identification is essential. In this section we discuss current CL practice with reliability testing of these types of annotation, before briefly summarizing Krippendorff's proposals concerning measuring reliability for unitizing.", "cite_spans": [ { "start": 103, "end": 121, "text": "Krippendorff (1995", "ref_id": "BIBREF55" }, { "start": 122, "end": 144, "text": "Krippendorff ( , 2004a", "ref_id": "BIBREF56" }, { "start": 507, "end": 529, "text": "Carletta et al. (1997)", "ref_id": "BIBREF18" }, { "start": 598, "end": 612, "text": "(Poesio 2004b)", "ref_id": "BIBREF80" } ], "ref_spans": [], "eq_spans": [], "section": "Marking Boundaries and Unitizing", "sec_num": "4.3" }, { "text": "Marking. Discourse segments are portions of text that constitute a unit either because they are about the same \"topic\" (Hearst 1997; Reynar 1998) or because they have to do with achieving the same intention (Grosz and Sidner 1986) or performing the same \"dialogue game\" (Carletta et al. 1997) . 7 The analysis of discourse structure-and especially the identification of discourse segments-is the type of annotation that, more than any other, led CL researchers to look for ways of measuring reliability and agreement, as it made them aware of the extent of disagreement on even quite simple judgments (Kowtko, Isard, and Doherty 1992; Passonneau and Litman 1993; Carletta et al. 1997; Hearst 1997) . Subsequent research identified a number of issues with discourse structure annotation, above all the fact that segmentation, though problematic, is still much easier than marking more complex aspects of discourse structure, such as identifying the most important segments or the \"rhetorical\" relations between segments of different granularity. As a result, many efforts to annotate discourse structure concentrate only on segmentation.", "cite_spans": [ { "start": 119, "end": 132, "text": "(Hearst 1997;", "ref_id": "BIBREF43" }, { "start": 133, "end": 145, "text": "Reynar 1998)", "ref_id": "BIBREF90" }, { "start": 207, "end": 230, "text": "(Grosz and Sidner 1986)", "ref_id": "BIBREF41" }, { "start": 270, "end": 292, "text": "(Carletta et al. 
1997)", "ref_id": "BIBREF18" }, { "start": 295, "end": 296, "text": "7", "ref_id": null }, { "start": 601, "end": 634, "text": "(Kowtko, Isard, and Doherty 1992;", "ref_id": "BIBREF51" }, { "start": 635, "end": 662, "text": "Passonneau and Litman 1993;", "ref_id": "BIBREF76" }, { "start": 663, "end": 684, "text": "Carletta et al. 1997;", "ref_id": "BIBREF18" }, { "start": 685, "end": 697, "text": "Hearst 1997)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation and Topic", "sec_num": "4.3.1" }, { "text": "The agreement results for segment coding tend to be on the lower end of the scale proposed by Krippendorff and recommended by Carletta. Hearst (1997) , for instance, found K = 0.647 for the boundary/not boundary distinction; Reynar (1998) , measuring agreement between his own annotation and the TREC segmentation of broadcast news, reports K = 0.764 for the same task; Ries (2002) reports even lower agreement of K = 0.36. Teufel, Carletta, and Moens (1999) , who studied agreement on the identification of argumentative zones, found high reliability (K = 0.81) for their three main zones (own, other, background), although lower for the whole scheme (K = 0.71). For intention-based segmentation, Passonneau and Litman (1993) in the pre-K days reported an overall percentage agreement with majority opinion of 89%, but the agreement on boundaries was only 70%. For conversational games segmentation, Carletta et al. (1997) reported \"promising but not entirely reassuring agreement on where games began (70%),\" whereas the agreement on transaction boundaries was K = 0.59. Exceptions are two segmentation efforts carried out as part of annotations of rhetorical structure. Moser, Moore, and Glendening (1996) achieved an agreement of K = 0.9 for the highest level of segmentation of their RDA annotation (Poesio, Patel, and Di Eugenio 2006) . Carlson, Marcu, and Okurowski (2003) reported very high agreement over the identification of the boundaries of discourse units, the building blocks of their annotation of rhetorical structure. (Agreement was measured several times; initially, they obtained K = 0.87, and in the final analysis K = 0.97.) This, however, was achieved by employing experienced annotators, and with considerable training.", "cite_spans": [ { "start": 126, "end": 149, "text": "Carletta. Hearst (1997)", "ref_id": null }, { "start": 225, "end": 238, "text": "Reynar (1998)", "ref_id": "BIBREF90" }, { "start": 370, "end": 381, "text": "Ries (2002)", "ref_id": "BIBREF91" }, { "start": 424, "end": 458, "text": "Teufel, Carletta, and Moens (1999)", "ref_id": "BIBREF102" }, { "start": 698, "end": 726, "text": "Passonneau and Litman (1993)", "ref_id": "BIBREF76" }, { "start": 901, "end": 923, "text": "Carletta et al. (1997)", "ref_id": "BIBREF18" }, { "start": 1173, "end": 1208, "text": "Moser, Moore, and Glendening (1996)", "ref_id": "BIBREF69" }, { "start": 1304, "end": 1340, "text": "(Poesio, Patel, and Di Eugenio 2006)", "ref_id": "BIBREF82" }, { "start": 1343, "end": 1379, "text": "Carlson, Marcu, and Okurowski (2003)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation and Topic", "sec_num": "4.3.1" }, { "text": "One important reason why most agreement results on segmentation are on the lower end of the reliability scale is the fact, known to researchers in discourse analysis from as early as Levin and Moore (1978) , that although analysts generally agree on the \"bulk\" of segments, they tend to disagree on their exact boundaries. 
This phenomenon was also observed in more recent studies: See for example the discussion in Passonneau and Litman (1997) , the comparison of the annotations produced by seven coders of the same text in Figure 5 of Hearst (1997, page 55) , or the discussion by Carlson, Marcu, and Okurowski (2003) , who point out that the boundaries between elementary discourse units tend to be \"very blurry.\" See also Pevzner and Hearst (2002) for similar comments made in the context of topic segmentation algorithms, and Klavans, Popper, and Passonneau (2003) for selecting definition phrases.", "cite_spans": [ { "start": 183, "end": 205, "text": "Levin and Moore (1978)", "ref_id": "BIBREF60" }, { "start": 415, "end": 443, "text": "Passonneau and Litman (1997)", "ref_id": "BIBREF78" }, { "start": 537, "end": 559, "text": "Hearst (1997, page 55)", "ref_id": null }, { "start": 583, "end": 619, "text": "Carlson, Marcu, and Okurowski (2003)", "ref_id": "BIBREF19" }, { "start": 726, "end": 751, "text": "Pevzner and Hearst (2002)", "ref_id": "BIBREF79" }, { "start": 831, "end": 869, "text": "Klavans, Popper, and Passonneau (2003)", "ref_id": "BIBREF50" } ], "ref_spans": [ { "start": 525, "end": 533, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Segmentation and Topic", "sec_num": "4.3.1" }, { "text": "This \"blurriness\" of boundaries, combined with the prevalence effects discussed in Section 3.2, also explains the fact that topic annotation efforts which were only concerned with roughly dividing a text into segments (Passonneau and Litman 1993; Carletta et al. 1997; Hearst 1997; Reynar 1998; Ries 2002) generally report lower agreement than the studies whose goal is to identify smaller discourse units. When disagreement is mostly concentrated in one class ('boundary' in this case), if the total number of units to annotate remains the same, then expected agreement on this class is lower when a greater proportion of the units to annotate belongs to this class. When in addition this class is much less numerous than the other classes, overall agreement tends to depend mostly on agreement on this class.", "cite_spans": [ { "start": 218, "end": 246, "text": "(Passonneau and Litman 1993;", "ref_id": "BIBREF76" }, { "start": 247, "end": 268, "text": "Carletta et al. 1997;", "ref_id": "BIBREF18" }, { "start": 269, "end": 281, "text": "Hearst 1997;", "ref_id": "BIBREF43" }, { "start": 282, "end": 294, "text": "Reynar 1998;", "ref_id": "BIBREF90" }, { "start": 295, "end": 305, "text": "Ries 2002)", "ref_id": "BIBREF91" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation and Topic", "sec_num": "4.3.1" }, { "text": "For instance, suppose we are testing the reliability of two different segmentation schemes-into broad \"discourse segments\" and into finer \"discourse units\"-on a text of 50 utterances, and that we obtain the results in Table 8 . Case 1 would be a situation in which Coder A and Coder B agree that the text consists of two segments, obviously agree on its initial and final boundaries, but disagree by one position on the intermediate boundary-say, one of them places it at utterance 25, the other at utterance 26. Nevertheless, because expected agreement is so high-the coders agree on the classification of 98% of the utterances-the value of K is fairly low. 
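The effect can be illustrated with a small sketch that computes a two-coder K with expected agreement taken from the pooled label distribution, in the style of Siegel and Castellan. The boundary sequences below are invented for illustration (they are not the data of Table 8); they show how the same observed agreement yields very different values of K depending on how rare the boundary class is.

```python
# A schematic two-coder K with expected agreement from the pooled label
# distribution (Siegel and Castellan style). The boundary sequences are
# invented and are NOT the data of Table 8; they only illustrate how
# prevalence of the 'boundary' class drives the coefficient.

def k_pooled(labels_a, labels_b):
    n = len(labels_a)
    a_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    pooled = labels_a + labels_b
    a_e = sum((pooled.count(c) / (2 * n)) ** 2 for c in set(pooled))
    return (a_o - a_e) / (1 - a_e)

# Sparse boundaries: one internal boundary, placed one utterance apart.
coarse_a = ['B' if i in (0, 24) else 'N' for i in range(50)]
coarse_b = ['B' if i in (0, 25) else 'N' for i in range(50)]

# Dense boundaries: a unit every five utterances, same two-utterance shift.
fine_a = ['B' if i % 5 == 0 else 'N' for i in range(50)]
fine_b = ['B' if (i % 5 == 0 and i != 20) or i == 21 else 'N' for i in range(50)]

print(k_pooled(coarse_a, coarse_b))   # observed 0.96, K ~ 0.48
print(k_pooled(fine_a, fine_b))       # observed 0.96, K ~ 0.88
```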
In case 2, the coders disagree on three times as many utterances, but K is higher than in the first case because expected agreement is substantially lower (A e = 0.53).", "cite_spans": [], "ref_spans": [ { "start": 218, "end": 225, "text": "Table 8", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Segmentation and Topic", "sec_num": "4.3.1" }, { "text": "The fact that coders mostly agree on the \"bulk\" of discourse segments, but tend to disagree on their boundaries, also makes it likely that an all-or-nothing coefficient like K calculated on individual boundaries would underestimate the degree of agreement, suggesting low agreement even among coders whose segmentations are mostly similar. A weighted coefficient of agreement like \u03b1 might produce values more in keeping with intuition, but we are not aware of any attempts at measuring agreement on segmentation using weighted coefficients. We see two main options. We suspect that the methods proposed by Krippendorff (1995) for measuring agreement on unitizing (see Section 4.3.2, subsequently) may be appropriate for the purpose of measuring agreement on discourse segmentation. A second option would be to measure agreement not on individual boundaries but on windows spanning several units, as done in the methods proposed to evaluate the performance of topic detection algorithms such as (Beeferman, Berger, and Lafferty 1999) or WINDOWDIFF (Pevzner and Hearst 2002) (which are, however, raw agreement scores not corrected for chance).", "cite_spans": [ { "start": 606, "end": 625, "text": "Krippendorff (1995)", "ref_id": "BIBREF55" }, { "start": 994, "end": 1032, "text": "(Beeferman, Berger, and Lafferty 1999)", "ref_id": "BIBREF6" }, { "start": 1047, "end": 1072, "text": "(Pevzner and Hearst 2002)", "ref_id": "BIBREF79" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation and Topic", "sec_num": "4.3.1" }, { "text": ". It is often assumed in CL annotation practice that the units of analysis are \"natural\" linguistic objects, and therefore there is no need to check agreement on their identification. As a result, agreement is usually measured on the labeling of units rather than on the process of identifying them (unitizing, Krippendorff 1995) . We have just seen, however, two coding tasks for which the reliability of unit identification is a crucial part of the overall reliability, and the problem of markable identification is more pervasive than is generally acknowledged. For example, when the units to be labeled are syntactic constituents, it is common practice to use a parser or chunker to identify the markables and then to allow the coders to correct the parser's output. In such cases one would want to know how reliable the coders' corrections are. We thus need a general method of testing relibility on markable identification. The one proposal for measuring agreement on markable identification we are aware of is the \u03b1 U coefficient, a non-trivial variant of \u03b1 proposed by Krippendorff (1995) . A full presentation of the proposal would require too much space, so we will just present the core idea. Unitizing is conceived of as consisting of two separate steps: identifying boundaries between units, and selecting the units of interest. 
If a unit identified by one coder overlaps a unit identified by the other coder, the amount of disagreement is the square of the lengths of the non-overlapping segments (see Figure 2) ; if a unit identified by one coder does not overlap any unit of interest identified by the other coder, the amount of disagreement is the square of the length of the whole unit. This distance metric is used in calculating observed and expected disagreement, and \u03b1 U itself. We refer the reader to Krippendorff (1995) for details.", "cite_spans": [ { "start": 311, "end": 329, "text": "Krippendorff 1995)", "ref_id": "BIBREF55" }, { "start": 1077, "end": 1096, "text": "Krippendorff (1995)", "ref_id": "BIBREF55" }, { "start": 1824, "end": 1843, "text": "Krippendorff (1995)", "ref_id": "BIBREF55" } ], "ref_spans": [ { "start": 1516, "end": 1525, "text": "Figure 2)", "ref_id": null } ], "eq_spans": [], "section": "Unitizing (Or, Agreement on Markable Identification)", "sec_num": "4.3.2" }, { "text": "Krippendorff's \u03b1 U is not applicable to all CL tasks. For example, it assumes that units may not overlap in a single coder's output, yet in practice there are many", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unitizing (Or, Agreement on Markable Identification)", "sec_num": "4.3.2" }, { "text": "Coder A Coder B s \u2212 \u271b \u2732 s \u271b \u2732 s + \u271b \u2732 Figure 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unitizing (Or, Agreement on Markable Identification)", "sec_num": "4.3.2" }, { "text": "The difference between overlapping units is d(A, B) = s 2 \u2212 + s 2 + (adapted from Krippendorff 1995, Figure 4, page 61) .", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 51, "text": "d(A, B)", "ref_id": null }, { "start": 101, "end": 119, "text": "Figure 4, page 61)", "ref_id": null } ], "eq_spans": [], "section": "Unitizing (Or, Agreement on Markable Identification)", "sec_num": "4.3.2" }, { "text": "annotation schemes which require coders to label nested syntactic constituents. For continuous segmentation tasks, \u03b1 U may be inappropriate because when a segment identified by one annotator overlaps with two segments identified by another annotator, the distance is smallest when the one segment is centered over the two rather than aligned with one of them. Nevertheless, we feel that when the non-overlap assumption holds, and the units do not cover the text exhaustively, testing the reliabilty of unit identification may prove beneficial. To our knowledge, this has never been tested in CL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unitizing (Or, Agreement on Markable Identification)", "sec_num": "4.3.2" }, { "text": "The annotation tasks discussed so far involve assigning a specific label to each category, which allows the various agreement measures to be applied in a straightforward way. Anaphoric annotation differs from the previous tasks because annotators do not assign labels, but rather create links between anaphors and their antecedents. It is therefore not clear what the \"labels\" should be for the purpose of calculating agreement. One possibility would be to consider the intended referent (real-world object) as the label, as in named entity tagging, but it wouldn't make sense to predefine a set of \"labels\" applicable to all texts, because different objects are mentioned in different texts. An alternative is to use the marked antecedents as \"labels\". 
However, we do not want to count as a disagreement every time two coders agree on the discourse entity realized by a particular noun phrase but just happen to mark different words as antecedents. Consider the reference of the underlined pronoun it in the following dialogue excerpt (TRAINS 1991 [Gross, Allen, and Traum 1993] , dialogue d91-3.2). 8", "cite_spans": [ { "start": 1036, "end": 1048, "text": "(TRAINS 1991", "ref_id": null }, { "start": 1049, "end": 1079, "text": "[Gross, Allen, and Traum 1993]", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Anaphora", "sec_num": "4.4" }, { "text": "1.1 M: .... 1.4 first thing I'd like you to do 1.5 is send engine E2 off with a boxcar to Corning to pick up oranges 1.6 as soon as possible 2.1 S: okay 3.1 M: and while it's there it should pick up the tanker Some of the coders in a study we carried out (Poesio and Artstein 2005) indicated the noun phrase engine E2 as antecedent for the second it in utterance 3.1, whereas others indicated the immediately preceding pronoun, which they had previously marked as having engine E2 as antecedent. Clearly, we do not want to consider these coders to be in disagreement. A solution to this dilemma has been proposed by Passonneau (2004) : Use the emerging coreference sets as the 'labels' for the purpose of calculating agreement. This requires using weighted measures for calculating agreement on such sets, and consequently it raises serious questions about weighted measures-in particular, about the interpretability of the results, as we will see shortly. Passonneau's Proposal. Passonneau (2004) recommends measuring agreement on anaphoric annotation by using sets of mentions of discourse entities as labels, that is, the emerging anaphoric/coreference chains. This proposal is in line with the methods developed to evaluate anaphora resolution systems (Vilain et al. 1995) . But using anaphoric chains as labels would not make unweighted measures such as K a good measure for agreement. Practical experience suggests that, except when a text is very short, few annotators will catch all mentions of a discourse entity: Most will forget to mark a few, with the result that the chains (that is, category labels) differ from coder to coder and agreement as measured with K is always very low. What is needed is a coefficient that also allows for partial disagreement between judgments, when two annotators agree on part of the coreference chain but not on all of it. Passonneau (2004) suggests solving the problem by using \u03b1 with a distance metric that allows for partial agreement among anaphoric chains. Passonneau proposes a distance metric based on the following rationale: Two sets are minimally distant when they are identical and maximally distant when they are disjoint; between these extremes, sets that stand in a subset relation are closer (less distant) than ones that merely intersect. This leads to the following distance metric between two sets A and B.", "cite_spans": [ { "start": 255, "end": 281, "text": "(Poesio and Artstein 2005)", "ref_id": "BIBREF80" }, { "start": 616, "end": 633, "text": "Passonneau (2004)", "ref_id": "BIBREF73" }, { "start": 957, "end": 997, "text": "Passonneau's Proposal. Passonneau (2004)", "ref_id": null }, { "start": 1256, "end": 1276, "text": "(Vilain et al. 
1995)", "ref_id": "BIBREF107" }, { "start": 1868, "end": 1885, "text": "Passonneau (2004)", "ref_id": "BIBREF73" } ], "ref_spans": [], "eq_spans": [], "section": "Anaphora", "sec_num": "4.4" }, { "text": "d P = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 0 if A = B 1 / 3 if A \u2282 B or B \u2282 A 2 / 3 if A \u2229 B = \u2205, but A \u2282 B and B \u2282 A 1 if A \u2229 B = \u2205", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.4.1", "sec_num": null }, { "text": "Alternative distance metrics take the size of the anaphoric chain into account, based on measures used to compare sets in Information Retrieval, such as the coefficient of community of Jaccard (1912) and the coincidence index of Dice (1945) (Manning and Sch\u00fctze 1999) .", "cite_spans": [ { "start": 185, "end": 199, "text": "Jaccard (1912)", "ref_id": "BIBREF46" }, { "start": 229, "end": 240, "text": "Dice (1945)", "ref_id": "BIBREF30" }, { "start": 241, "end": 267, "text": "(Manning and Sch\u00fctze 1999)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "4.4.1", "sec_num": null }, { "text": "Jaccard: d J = 1 \u2212 |A \u2229 B| |A \u222a B| Dice: d D = 1 \u2212 2 |A \u2229 B| |A| + |B|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.4.1", "sec_num": null }, { "text": "In later work, Passonneau (2006) offers a refined distance metric which she called MASI (Measuring Agreement on Set-valued Items), obtained by multiplying Passonneau's original metric d P by the metric derived from Jaccard d J .", "cite_spans": [ { "start": 15, "end": 32, "text": "Passonneau (2006)", "ref_id": "BIBREF74" } ], "ref_spans": [], "eq_spans": [], "section": "4.4.1", "sec_num": null }, { "text": "d M = d P \u00d7 d J 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.4.1", "sec_num": null }, { "text": ".4.2 Experience with \u03b1 for Anaphoric Annotation. In the experiment mentioned previously (Poesio and Artstein 2005) we used 18 coders to test \u03b1 and K under a variety of conditions. We found that even though our coders by and large agreed on the interpretation of anaphoric expressions, virtually no coder ever identified all the mentions of a discourse entity. As a result, even though the values of \u03b1 and K obtained by using the ID of the antecedent as label were pretty similar, the values obtained when using anaphoric chains as labels were drastically different. The value of \u03b1 increased, because examples where coders linked a markable to different antecedents in the same chain were no longer considered as disagreements. However, the value of K was drastically reduced, because hardly any coder identified all the mentions of discourse entities (Figure 3 ). The study also looked at the matter of individual annotator bias, and as mentioned in Section 3.1, we did not find differences between \u03b1 and a \u03ba-style version of \u03b1 beyond the third decimal point. This similarity is what one would expect, given the result about annotator bias from Section 3.1 and given that in this experiment we used 18 annotators. These very small differences should be contrasted with the differences resulting from the choice of distance metrics, where values for the full-chain condition ranged from \u03b1 = 0.642 using Jaccard as distance metric, to \u03b1 = 0.654 using Passonneau's metric, to the value for Dice reported in Figure 3 , \u03b1 = 0.691. 
These differences raise an important issue concerning the application of \u03b1-like measures for CL tasks: Using \u03b1 makes it difficult to compare the results of different annotation experiments, in that a \"poor\" value or a \"high\" value might result from \"too strict\" or \"too generous\" distance metrics, making it even more important to develop a methodology to identify appropriate values for these coefficients. This issue is further emphasized by the study reported next.", "cite_spans": [ { "start": 88, "end": 114, "text": "(Poesio and Artstein 2005)", "ref_id": "BIBREF80" } ], "ref_spans": [ { "start": 851, "end": 860, "text": "(Figure 3", "ref_id": "FIGREF1" }, { "start": 1504, "end": 1512, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "4.4.1", "sec_num": null }, { "text": "A second annotation study we carried out (Artstein and Poesio 2006) shows even more clearly the possible side effects of using weighted coefficients. This study was concerned with the annotation of the antecedents of references to abstract objects, such as the example of the pronoun that in utterance 7.6 (TRAINS 1991, dialogue d91-2.2).", "cite_spans": [ { "start": 41, "end": 67, "text": "(Artstein and Poesio 2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Deixis.", "sec_num": "4.4.3" }, { "text": "7.3 : so we ship one 7.4 : boxcar 7.5 : of oranges to Elmira 7.6 : and that takes another 2 hours Previous studies of discourse deixis annotation showed that these are extremely difficult judgments to make (Eckert and Strube 2000; Navarretta 2000; Byron 2002) , except perhaps for identifying the type of object (Poesio and Modjeska 2005) , so we simplified the task by only requiring our participants to identify the boundaries of the area of text in which the antecedent was introduced. Even so, we found a great variety in how these boundaries were marked: Exactly as in the case of discourse segmentation discussed earlier, our participants broadly agreed on the area of text, but disagreed on its exact boundary. For instance, in this example, nine out of ten annotators marked the antecedent of that as a text segment ending with the word Elmira, but some started with the word so, some started with we, some with ship, and some with one.", "cite_spans": [ { "start": 206, "end": 230, "text": "(Eckert and Strube 2000;", "ref_id": "BIBREF33" }, { "start": 231, "end": 247, "text": "Navarretta 2000;", "ref_id": "BIBREF70" }, { "start": 248, "end": 259, "text": "Byron 2002)", "ref_id": "BIBREF15" }, { "start": 312, "end": 338, "text": "(Poesio and Modjeska 2005)", "ref_id": "BIBREF81" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Deixis.", "sec_num": "4.4.3" }, { "text": "We tested a number of ways to measure partial agreement on this task, and obtained widely different results. First of all, we tested three set-based distance metrics inspired by the Passonneau proposals that we just discussed: We considered discourse segments to be sets of words, and computed the distance between them using Passonneau's metric, Jaccard, and Dice. Using these three metrics, we obtained \u03b1 values of 0.55 (with Passonneau's metric), 0.45 (with Jaccard), and 0.55 (with Dice). 
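To make explicit how a distance metric enters the coefficient, here is a minimal sketch of α computed as 1 − D_o/D_e from pairwise distances. It follows the standard formulation of Krippendorff's α rather than any particular implementation used in these studies, and the three items with their set-valued labels are invented; only the distance function changes when a different metric is substituted.

```python
# A minimal sketch of alpha = 1 - D_o / D_e with an arbitrary distance
# metric, following the standard formulation of Krippendorff's alpha
# (not the specific implementation used in the studies reported here).
# `items` holds, for each expression, the set-valued label assigned by
# each coder; the data are invented.

from itertools import combinations

def d_jaccard(a, b):
    return 1 - len(a & b) / len(a | b)

def alpha(items, dist):
    # Observed disagreement: mean pairwise distance within items.
    within = [dist(a, b) for labels in items for a, b in combinations(labels, 2)]
    d_o = sum(within) / len(within)
    # Expected disagreement: mean pairwise distance over all labels pooled.
    pooled = [lab for labels in items for lab in labels]
    across = [dist(a, b) for a, b in combinations(pooled, 2)]
    d_e = sum(across) / len(across)
    return 1 - d_o / d_e

items = [
    (frozenset({3, 4, 5}), frozenset({4, 5})),      # partial overlap
    (frozenset({10, 11}), frozenset({10, 11})),     # identical
    (frozenset({20}), frozenset({21, 22})),         # disjoint
]
print(alpha(items, d_jaccard))    # 0.5 with these invented labels
```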
We should note that because antecedents of different expressions rarely overlapped, the expected disagreement was close to 1 (maximal), so the value of \u03b1 turned out to be very close to the complement of the observed disagreement as calculated by the different distance metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Deixis.", "sec_num": "4.4.3" }, { "text": "Next, we considered methods based on the position of words in the text. The first method computed differences between absolute boundary positions: Each antecedent was associated with the position of its first or last word in the dialogue, and agreement was calculated using \u03b1 with the interval distance metric. This gave us \u03b1 values of 0.998 for the beginnings of the antecedent-evoking area and 0.999 for the ends. This is because expected disagreement is exceptionally low: Coders tend to mark discourse antecedents close to the referring expression, so the average distance between antecedents of the same expression is smaller than the size of the dialogue by a few orders of magnitude. The second method associated each antecedent with the position of its first or last word relative to the beginning of the anaphoric expression. This time we found extremely low values of \u03b1 = 0.167 for beginnings of antecedents and 0.122 for endsbarely in the positive side. This shows that agreement among coders is not dramatically better than what would be expected if they just marked discourse antecedents at a fixed distance from the referring expression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Deixis.", "sec_num": "4.4.3" }, { "text": "The three ranges of \u03b1 that we observed (middle, high, and low) show agreement on the identity of discourse antecedents, their position in the dialogue, and their position relative to referring expressions, respectively. The middle range shows variability of up to 10 percentage points, depending on the distance metric chosen. The lesson is that once we start using weighted measures we cannot anymore interpret the value of \u03b1 using traditional rules of thumb such as those proposed by Krippendorff or by Landis and Koch. This is because depending on the way we measure agreement, we can report \u03b1 values ranging from 0.122 to 0.998 for the very same experiment! New interpretation methods have to be developed, which will be task-and distance-metric specific. We'll return to this issue in the conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Deixis.", "sec_num": "4.4.3" }, { "text": "Word sense tagging is one of the hardest annotation tasks. Whereas in the case of partof-speech and dialogue act tagging the same categories are used to classify all units, in the case of word sense tagging different categories must be used for each word, which makes writing a single coding manual specifying examples for all categories impossible: The only option is to rely on a dictionary. Unfortunately, different dictionaries make different distinctions, and often coders can't make the fine-grained distinctions that trained lexicographers can make. 
The problem is particularly serious for verbs, which tend to be polysemous rather than homonymous (Palmer, Dang, and Fellbaum 2007) .", "cite_spans": [ { "start": 655, "end": 688, "text": "(Palmer, Dang, and Fellbaum 2007)", "ref_id": "BIBREF72" } ], "ref_spans": [], "eq_spans": [], "section": "Word Senses", "sec_num": "4.5" }, { "text": "These difficulties, and in particular the difficulty of tagging senses with a finegrained repertoire of senses such as that provided by dictionaries or by WordNet (Fellbaum 1998), have been highlighted by the three SENSEVAL initiatives. Already during the first SENSEVAL, V\u00e9ronis (1998) carried out two studies of intercoder agreement on word sense tagging in the so-called ROMANSEVAL task. One study was concerned with agreement on polysemy-that is, the extent to which coders agreed that a word was polysemous in a given context. Six naive coders were asked to make this judgment about 600 French words (200 nouns, 200 verbs, 200 adjectives) using the repertoire of senses in the Petit Larousse. On this task, a (pairwise) percentage agreement of 0.68 for nouns, 0.74 for verbs, and 0.78 for adjectives was observed, corresponding to K values of 0.36, 0.37, and 0.67, respectively. The 20 words from each category perceived by the coders in this first experiment to be most polysemous were then used in a second study, of intercoder agreement on the sense tagging task, which involved six different naive coders. Interestingly, the coders in this second experiment were allowed to assign multiple tags to words, although they did not make much use of this possibility; so \u03ba w was used to measure agreement. In this experiment, V\u00e9ronis observed (weighted) pairwise agreement of 0.63 for verbs, 0.71 for adjectives, and 0.73 for nouns, corresponding to \u03ba w values of 0.41, 0.41, and 0.46, but with a wide variety of values when measured per word-ranging from 0.007 for the adjective correct to 0.92 for the noun d\u00e9tention. Similarly mediocre results for intercoder agreement between naive coders were reported in the subsequent editions of SENSEVAL. Agreement studies for SENSEVAL-2, where WordNet senses were used as tags, reported a percentage agreement for verb senses of around 70%, whereas for SENSEVAL-3 (English Lexical Sample Task), Mihalcea, Chklovski, and Kilgarriff (2004) report a percentage agreement of 67.3% and average K of 0.58.", "cite_spans": [ { "start": 272, "end": 286, "text": "V\u00e9ronis (1998)", "ref_id": "BIBREF106" }, { "start": 605, "end": 643, "text": "(200 nouns, 200 verbs, 200 adjectives)", "ref_id": null }, { "start": 1941, "end": 1983, "text": "Mihalcea, Chklovski, and Kilgarriff (2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word Senses", "sec_num": "4.5" }, { "text": "Two types of solutions have been proposed for the problem of low agreement on sense tagging. The solution proposed by Kilgarriff (1999) is to use professional lexicographers and arbitration. The study carried out by Kilgarriff does not therefore qualify as a true study of replicability in the sense of the terms used by Krippendorff, but it did show that this approach makes it possible to achieve percentage agreement of around 95.5%. An alternative approach has been to address the problem of the inability of naive coders to make fine-grained distinctions by introducing coarser-grained classification schemes which group together dictionary senses (Bruce and Wiebe, 1998; Buitelaar 1998; V\u00e9ronis 1998; Palmer, Dang, and Fellbaum 2007) . 
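The grouping strategy itself is straightforward; the sketch below, with invented sense labels and an invented grouping, simply maps fine-grained senses onto coarser groups before recomputing agreement (raw percent agreement here, for simplicity).

```python
# A minimal sketch of post hoc sense grouping: fine-grained sense labels
# are mapped onto coarser groups before agreement is recomputed. Sense
# names and the grouping are invented; as discussed below, merging
# categories after the fact is not equivalent to annotating with the
# coarser tagset from the start.

GROUPS = {'sense1': 'groupA', 'sense2': 'groupA', 'sense3': 'groupA',
          'sense4': 'groupB', 'sense5': 'groupB'}

def percent_agreement(labels_a, labels_b, mapping=None):
    if mapping:
        labels_a = [mapping.get(l, l) for l in labels_a]
        labels_b = [mapping.get(l, l) for l in labels_b]
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

coder_a = ['sense1', 'sense2', 'sense4', 'sense5', 'sense1']
coder_b = ['sense2', 'sense2', 'sense5', 'sense5', 'sense4']
print(percent_agreement(coder_a, coder_b))          # 0.4 on fine senses
print(percent_agreement(coder_a, coder_b, GROUPS))  # 0.8 after grouping
```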
Hierarchical tagsets were also developed, such as HECTOR (Atkins 1992) or, indeed, WordNet itself (where senses are related by hyponymy links). In the case of Buitelaar and Palmer, Dang, and Fellbaum, the \"supersenses\" were identified by hand, whereas Bruce and Wiebe and V\u00e9ronis used clustering methods such as those from Bruce and Wiebe (1999) to collapse some of the initial sense distinctions. 9 Palmer, Dang, and Fellbaum (2007) illustrate this practice with the example of the verb call, which has 28 fine-grained senses in WordNet 1.7: They conflate these senses into a small number of groups using various criteria-for example, four senses can be grouped in a group they call Group 1 on the basis of subcategorization frame similarities (Table 9) . Palmer, Dang, and Fellbaum (2007) achieved for the English Verb Lexical Sense task of SENSEVAL-2 a percentage agreement among coders of 82% with grouped senses, as opposed to 71% with the original WordNet senses. Bruce and Wiebe (1998) found that collapsing the senses of their test word (interest) on the basis of their use by coders and merging the two classes found to be harder to distinguish resulted in an increase of Table 9 Group 1 of senses of call in Palmer, Dang, and Fellbaum (2007, page 149 the value of K from 0.874 to 0.898. Using a related technique, V\u00e9ronis (1998) found that agreement on noun word sense tagging went up from a K of around 0.45 to a K of 0.86. We should note, however, that the post hoc merging of categories is not equivalent to running a study with fewer categories to begin with. Attempts were also made to develop techniques to measure partial agreement with hierarchical tagsets. A first proposal in this direction was advanced by Melamed and Resnik (2000) , who developed a coefficient for hierarchical tagsets that could be used in SENSEVAL for measuring agreement with tagsets such as HECTOR. Melamed and Resnik proposed to \"normalize\" the computation of observed and expected agreement by taking each label which is not a leaf in the tag hierarchy and distributing it down to the leaves in a uniform way, and then only computing agreement on the leaves. For example, with a tagset like the one in Table 9 , the cases in which the coders used the label 'Group 1' would be uniformly \"distributed down\" and added in equal measure to the number of cases in which the coders assigned each of the four WordNet labels. The method proposed in the paper has, however, problematic properties when used to measure intercoder agreement. For example, suppose tag A dominates two sub-tags A1 and A2, and that two coders mark a particular item as A. Intuitively, we would want to consider this a case of perfect agreement, but this is not what the method proposed by Melamed and Resnik yields. The annotators' marks are distributed over the two sub-tags, each with probability 0.5, and then the agreement is computed by summing the joint probabilities over the two subtags (Equation (4) of Melamed and Resnik 2000) , with the result that the agreement over the item turns out to be 0.5 2 + 0.5 2 = 0.5 instead of 1. To correct this, Dan Melamed (personal communication) suggested replacing the product in Equation (4) with a minimum operator. 
However, the calculation of expected agreement (Equation (5) of Melamed and Resnik 2000) still gives the amount of agreement which is expected if coders are forced to choose among leaf nodes, which makes this method inappropriate for coding schemes that do not force coders to do this.", "cite_spans": [ { "start": 118, "end": 135, "text": "Kilgarriff (1999)", "ref_id": "BIBREF49" }, { "start": 653, "end": 676, "text": "(Bruce and Wiebe, 1998;", "ref_id": "BIBREF10" }, { "start": 677, "end": 692, "text": "Buitelaar 1998;", "ref_id": "BIBREF12" }, { "start": 693, "end": 706, "text": "V\u00e9ronis 1998;", "ref_id": "BIBREF106" }, { "start": 707, "end": 739, "text": "Palmer, Dang, and Fellbaum 2007)", "ref_id": "BIBREF72" }, { "start": 799, "end": 812, "text": "(Atkins 1992)", "ref_id": "BIBREF3" }, { "start": 1065, "end": 1087, "text": "Bruce and Wiebe (1999)", "ref_id": "BIBREF11" }, { "start": 1140, "end": 1175, "text": "9 Palmer, Dang, and Fellbaum (2007)", "ref_id": null }, { "start": 1499, "end": 1532, "text": "Palmer, Dang, and Fellbaum (2007)", "ref_id": "BIBREF72" }, { "start": 1712, "end": 1734, "text": "Bruce and Wiebe (1998)", "ref_id": "BIBREF10" }, { "start": 1960, "end": 2002, "text": "Palmer, Dang, and Fellbaum (2007, page 149", "ref_id": null }, { "start": 2066, "end": 2080, "text": "V\u00e9ronis (1998)", "ref_id": "BIBREF106" }, { "start": 2469, "end": 2494, "text": "Melamed and Resnik (2000)", "ref_id": "BIBREF65" }, { "start": 3717, "end": 3741, "text": "Melamed and Resnik 2000)", "ref_id": "BIBREF65" }, { "start": 4034, "end": 4058, "text": "Melamed and Resnik 2000)", "ref_id": "BIBREF65" } ], "ref_spans": [ { "start": 1487, "end": 1496, "text": "(Table 9)", "ref_id": null }, { "start": 1923, "end": 1930, "text": "Table 9", "ref_id": null }, { "start": 2939, "end": 2946, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Word Senses", "sec_num": "4.5" }, { "text": "One way to use Melamed and Resnik's proposal while avoiding the discrepancy between observed and expected agreement is to treat the proposal not as a new coefficient, but rather as a distance metric to be plugged into a weighted coefficient like \u03b1. Let A and B be two nodes in a hierarchical tagset, let L be the set of all leaf nodes in the tagset, and let P(l|T) be the probability of selecting a leaf node l given an arbitrary node T when the probability mass of T is distributed uniformly to all the nodes dominated by T. We can reinterpret Melamed's modification of Equation (4) in Melamed and Resnik (2000) as a metric measuring the distance between nodes A and B.", "cite_spans": [ { "start": 587, "end": 612, "text": "Melamed and Resnik (2000)", "ref_id": "BIBREF65" } ], "ref_spans": [], "eq_spans": [], "section": "Word Senses", "sec_num": "4.5" }, { "text": "d M+R = 1 \u2212 \u2211 l\u2208L min(P(l|A), P(l|B))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Senses", "sec_num": "4.5" }, { "text": "This metric has the desirable properties-it is 0 when tags A and B are identical, 1 when the tags do not overlap, and somewhere in between in all other cases. 
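A minimal sketch of this derived metric, with a hypothetical tagset fragment in which a single grouped sense dominates four leaf senses:

```python
# A minimal sketch of the distance metric just derived from Melamed and
# Resnik's proposal: every tag spreads its probability mass uniformly
# over the leaf senses it dominates, and the distance is one minus the
# summed minimum leaf probabilities. The tagset fragment (a grouped
# sense dominating four leaf senses) is hypothetical.

LEAVES = {
    'WN1': ['WN1'], 'WN3': ['WN3'], 'WN19': ['WN19'], 'WN28': ['WN28'],
    'Group1': ['WN1', 'WN3', 'WN19', 'WN28'],
}

def p_leaf(tag):
    """Uniform distribution over the leaves dominated by `tag`."""
    leaves = LEAVES[tag]
    return {leaf: 1 / len(leaves) for leaf in leaves}

def d_mr(a, b):
    pa, pb = p_leaf(a), p_leaf(b)
    overlap = sum(min(pa.get(leaf, 0.0), pb.get(leaf, 0.0))
                  for leaf in set(pa) | set(pb))
    return 1 - overlap

print(d_mr('WN1', 'WN1'))      # 0.0  identical tags
print(d_mr('WN1', 'WN3'))      # 1.0  disjoint leaf sets
print(d_mr('WN1', 'Group1'))   # 0.75 leaf against its grouped sense
```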
If we use this metric for Krippendorff's \u03b1 we find that observed agreement is exactly the same as in Melamed and Resnik (2000) with the product operator replaced by minimum (Melamed's modification) .", "cite_spans": [ { "start": 260, "end": 285, "text": "Melamed and Resnik (2000)", "ref_id": "BIBREF65" }, { "start": 332, "end": 356, "text": "(Melamed's modification)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word Senses", "sec_num": "4.5" }, { "text": "We can also use other distance metrics with \u03b1. For example, we could associate with each sense an extended sense-a set es(s) including the sense itself and its grouped sense-and then use set-based distance metrics from Section 4.4, for example Passonneau's d P . To illustrate how this approach could be used to measure (dis)agreement on word sense annotation, suppose that two coders have to annotate the use of call in the following sentence (from the WSJ part of the Penn Treebank, section 02, text w0209):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Senses", "sec_num": "4.5" }, { "text": "This gene, called \"gametocide,\" is carried into the plant by a virus that remains active for a few days.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Senses", "sec_num": "4.5" }, { "text": "The standard guidelines (in SENSEVAL, say) require coders to assign a WN sense to words. Under such guidelines, if coder A classifies the use of called in the above example as an instance of WN1, whereas coder B annotates it as an instance of WN3, we would find total disagreement (d k a k b = 1) which seems excessively harsh as the two senses are clearly related. However, by using the broader senses proposed by Palmer, Dang, and Fellbaum (2007) in combination with a distance metric such as the one just proposed, it is possible to get more flexible and, we believe, more realistic assessments of the degree of agreement in situations such as this. For instance, in case the reliability study had already been carried out under the standard SENSEVAL guidelines, the distance metric proposed above could be used to identify post hoc cases of partial agreement by adding to each WN sense its hypernyms according to the groupings proposed by Palmer, Dang, and Fellbaum. For example, A's annotation could be turned into a new set label {WN1,LABEL} and B's mark into the set table {WN3,LABEL}, which would give a distance d = 2/3, indicating a degree of overlap. The method for computing agreement proposed here could could also be used to allow coders to choose either a more specific label or one of Palmer, Dang, and Fellbaum's superlabels. For example, suppose A sticks to WN1, but B decides to mark the use above using Palmer, Dang, and Fellbaum's LABEL category, then we would still find a distance d = 1/3. An alternative way of using \u03b1 for word sense annotation was developed and tested by Passonneau, Habash, and Rambow (2006) . Their approach is to allow coders to assign multiple labels (WordNet synsets) for wordsenses, as done by V\u00e9ronis (1998) and more recently by Rosenberg and Binkowski (2004) for text classification labels and by Poesio and Artstein (2005) for anaphora. 
These multi-label sets can then be compared using the MASI distance metric for \u03b1 (Passonneau 2006) .", "cite_spans": [ { "start": 415, "end": 448, "text": "Palmer, Dang, and Fellbaum (2007)", "ref_id": "BIBREF72" }, { "start": 1597, "end": 1634, "text": "Passonneau, Habash, and Rambow (2006)", "ref_id": "BIBREF75" }, { "start": 1742, "end": 1756, "text": "V\u00e9ronis (1998)", "ref_id": "BIBREF106" }, { "start": 1778, "end": 1808, "text": "Rosenberg and Binkowski (2004)", "ref_id": "BIBREF93" }, { "start": 1847, "end": 1873, "text": "Poesio and Artstein (2005)", "ref_id": "BIBREF80" }, { "start": 1969, "end": 1986, "text": "(Passonneau 2006)", "ref_id": "BIBREF74" } ], "ref_spans": [], "eq_spans": [], "section": "Word Senses", "sec_num": "4.5" }, { "text": "The purpose of this article has been to expose the reader to the mathematics of chancecorrected coefficients of agreement as well as the current state of the art of using these coefficients in CL. Our hope is that readers come to view agreement studies not as an additional chore or hurdle for publication, but as a tool for analysis which offers new insights into the annotation process. We conclude by summarizing what in our view are the main recommendations emerging from ten years of experience with coefficients of agreement. These can be grouped under three main headings: methodology, choice of coefficients, and interpretation of coefficients.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "Our first recommendation is that annotation efforts should perform and report rigorous reliability testing. The last decade has already seen considerable improvement, from the absence of any tests for the Penn Treebank (Marcus, Marcinkiewicz, and Santorini 1993) or the British National Corpus (Leech, Garside, and Bryant 1994) to the central role played by reliability testing in the Penn Discourse Treebank (Miltsakaki et al. 2004) and OntoNotes (Hovy et al. 2006) . But even the latter efforts only measure and report percent agreement. We believe that part of the reluctance to report chance-corrected measures is the difficulty in interpreting them. However, our experience is that chancecorrected coefficients of agreement do provide a better indication of the quality of the resulting annotation than simple percent agreement, and moreover, the detailed calculations leading to the coefficients can be very revealing as to where the disagreements are located and what their sources may be.", "cite_spans": [ { "start": 219, "end": 262, "text": "(Marcus, Marcinkiewicz, and Santorini 1993)", "ref_id": "BIBREF63" }, { "start": 294, "end": 327, "text": "(Leech, Garside, and Bryant 1994)", "ref_id": "BIBREF59" }, { "start": 409, "end": 433, "text": "(Miltsakaki et al. 2004)", "ref_id": "BIBREF68" }, { "start": 448, "end": 466, "text": "(Hovy et al. 2006)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5.1" }, { "text": "A rigorous methodology for reliability testing does not, in our opinion, exclude the use of expert coders, and here we feel there may be a motivated difference between the fields of content analysis and CL. There is a clear tradeoff between the complexity of the judgments that coders are required to make and the reliability of such judgments, and we should strive to devise annotation schemes that are not only reliable enough to be replicated, but also sophisticated enough to be useful (cf. Krippendorff 2004a, pages 213-214) . 
In content analysis, conclusions are drawn directly from annotated corpora, so the emphasis is more on replicability; whereas in CL, corpora constitute a resource which is used by other processes, so the emphasis is more towards usefulness. There is also a tradeoff between the sophistication of judgments and the availability of coders who can make such judgments. Consequently, annotation by experts is often the only practical way to get useful corpora for CL. Current practice achieves high reliability either by using professionals (Kilgarriff 1999) or through intensive training (Hovy et al. 2006; Carlson, Marcu, and Okurowski 2003) ; this means that results are not replicable across sites, and are therefore less reliable than annotation by naive coders adhering to written instructions. We feel that inter-annotator agreement studies should still be carried out, as they serve as an assurance that the results are replicable when the annotators are chosen from the same population as the original annotators. An important additional assurance should be provided in the form of an independent evaluation of the task for which the corpus is used (cf. Passonneau 2006).", "cite_spans": [ { "start": 495, "end": 529, "text": "Krippendorff 2004a, pages 213-214)", "ref_id": null }, { "start": 1069, "end": 1086, "text": "(Kilgarriff 1999)", "ref_id": "BIBREF49" }, { "start": 1117, "end": 1135, "text": "(Hovy et al. 2006;", "ref_id": "BIBREF44" }, { "start": 1136, "end": 1171, "text": "Carlson, Marcu, and Okurowski 2003)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5.1" }, { "text": "One of the goals of this article is to help authors make an informed choice regarding the coefficients they use for measuring agreement. While coefficients other than K, specifically Cohen's \u03ba and Krippendorff's \u03b1, have appeared in the CL literature as early as Carletta (1996) and Passonneau and Litman (1996) , they hadn't sprung into general awareness until the publication of Di Eugenio and Glass (2004) and Passonneau (2004) . Regarding the question of annotator bias, there is an overwhelming consensus in CL practice: K and \u03b1 are used in the vast majority of the studies we reported. We agree with the view that K and \u03b1 are more appropriate, as they abstract away from the bias of specific coders. But we also believe that ultimately this issue of annotator bias is of little consequence because the differences get smaller and smaller as the number of annotators grows (Artstein and Poesio 2005) . We believe that increasing the number of annotators is the best strategy, because it reduces the chances of accidental personal biases.", "cite_spans": [ { "start": 262, "end": 277, "text": "Carletta (1996)", "ref_id": "BIBREF17" }, { "start": 282, "end": 310, "text": "Passonneau and Litman (1996)", "ref_id": "BIBREF77" }, { "start": 383, "end": 407, "text": "Eugenio and Glass (2004)", "ref_id": "BIBREF28" }, { "start": 412, "end": 429, "text": "Passonneau (2004)", "ref_id": "BIBREF73" }, { "start": 877, "end": 903, "text": "(Artstein and Poesio 2005)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Choosing a Coefficient", "sec_num": "5.2" }, { "text": "However, Krippendorff's \u03b1 is indispensable when the category labels are not equally distinct from one another. 
We think there are at least two types of coding schemes in which this is the case: (i) hierarchical tagsets and (ii) set-valued interpretations such as those proposed for anaphora. At least in the second case, weighted coefficients are almost unavoidable. We therefore recommend using \u03b1, noting however that the specific choice of weights will affect the overall numerical result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Choosing a Coefficient", "sec_num": "5.2" }, { "text": "We view the lack of consensus on how to interpret the values of agreement coefficients as a serious problem with current practice in reliability testing, and as one of the main reasons for the reluctance of many in CL to embark on reliability studies. Unlike significance values which report a probability (that an observed effect is due to chance), agreement coefficients report a magnitude, and it is less clear how to interpret such magnitudes. Our own experience is consistent with that of Krippendorff: Both in our earlier work (Poesio and Vieira 1998; Poesio 2004a ) and in the more recent efforts (Poesio and Artstein 2005) we found that only values above 0.8 ensured an annotation of reasonable quality (Poesio 2004a) . We therefore feel that if a threshold needs to be set, 0.8 is a good value.", "cite_spans": [ { "start": 533, "end": 557, "text": "(Poesio and Vieira 1998;", "ref_id": "BIBREF83" }, { "start": 558, "end": 570, "text": "Poesio 2004a", "ref_id": "BIBREF80" }, { "start": 604, "end": 630, "text": "(Poesio and Artstein 2005)", "ref_id": "BIBREF80" }, { "start": 711, "end": 725, "text": "(Poesio 2004a)", "ref_id": "BIBREF80" } ], "ref_spans": [], "eq_spans": [], "section": "Interpreting the Values", "sec_num": "5.3" }, { "text": "That said, we doubt that a single cutoff point is appropriate for all purposes. For some CL studies, particularly on discourse, useful corpora have been obtained while attaining reliability only at the 0.7 level. We agree therefore with Craggs and McGee Wood (2005) that setting a specific agreement threshold should not be a prerequisite for publication. Instead, as recommended by Di Eugenio and Glass (2004) and others, researchers should report in detail on the methodology that was followed in collecting the reliability data (number of coders, whether they coded independently, whether they relied exclusively on an annotation manual), whether agreement was statistically significant, and provide a confusion matrix or agreement table so that readers can find out whether overall figures of agreement hide disagreements on less common categories. For an example of good practice in this respect, see Teufel and Moens (2002) . The decision whether a corpus is good enough for publication should be based on more than the agreement score-specifically, an important consideration is an independent evaluation of the results that are based on the corpus.", "cite_spans": [ { "start": 237, "end": 265, "text": "Craggs and McGee Wood (2005)", "ref_id": "BIBREF25" }, { "start": 386, "end": 410, "text": "Eugenio and Glass (2004)", "ref_id": "BIBREF28" }, { "start": 906, "end": 929, "text": "Teufel and Moens (2002)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Interpreting the Values", "sec_num": "5.3" }, { "text": "Only part of our material could fit in this article. 
An extended version of the survey is available from http:/ /cswww.essex.ac.uk/Research/nle/arrau/ .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The extended version of the article also includes a discussion of why \u03c7 2 and correlation coefficients are not appropriate for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The independence assumption has been the subject of much criticism, for example by John S. Uebersax. http:/ /ourworld.compuserve.com/homepages/jsuebersax/agree.htm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Craggs and McGee Wood (2005) also suggest increasing the number of coders in order to overcome individual annotator bias, but do not provide a mathematical justification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The notion of \"topic\" is notoriously difficult to define and many competing theoretical proposals exist(Reinhart 1981;Vallduv\u00ed 1993). As it is often the case with annotation, fairly simple definitions tend to be used in discourse annotation work: For example, in TDT topic is defined for annotation purposes as \"an event or activity, along with all directly related events and activities\" (TDT-2 Annotation Guide, http:/ /projects.ldc.upenn.edu/TDT2/Guide/label-instr.html).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "ftp:/ /ftp.cs.rochester.edu/pub/papers/ai/92.tn1.trains 91 dialogues.txt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The methodology proposed inBruce and Wiebe (1999) is in our view the most advanced technique to \"make sense\" of the results of agreement studies available in the literature. The extended version of this article contains a fuller introduction to these methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by EPSRC grant GR/S76434/01, ARRAU. We wish to thank four anonymous reviewers and Jean Carletta, Mark Core, Barbara Di Eugenio, Ruth Filik, Michael Glass, George Hripcsak, Adam Kilgarriff, Dan Melamed, Becky Passonneau, Phil Resnik, Tony Sanford, Patrick Sturt, and David Traum for helpful comments and discussion. Special thanks to Klaus Krippendorff for an extremely detailed review of an earlier version of this article. We are also extremely grateful to the British Library in London, which made accessible to us virtually every paper we needed for this research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "DAMSL: Dialogue act markup in several layers. Draft contribution for the Discourse Resource Initiative", "authors": [ { "first": "James", "middle": [], "last": "Allen", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Core", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allen, James and Mark Core. 1997. DAMSL: Dialogue act markup in several layers. Draft contribution for the Discourse Resource Initiative, University of Rochester. 
Available at http://www.cs.rochester.edu/ research/cisd/resources/damsl/.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bias decreases in proportion to the number of annotators", "authors": [ { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2005, "venue": "Proceedings of FG-MoL 2005", "volume": "", "issue": "", "pages": "141--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artstein, Ron and Massimo Poesio. 2005. Bias decreases in proportion to the number of annotators. In Proceedings of FG-MoL 2005, pages 141-150, Edinburgh.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Identifying reference to abstract objects in dialogue", "authors": [ { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2006, "venue": "brandial 2006: Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue", "volume": "", "issue": "", "pages": "56--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artstein, Ron and Massimo Poesio. 2006. Identifying reference to abstract objects in dialogue. In brandial 2006: Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, pages 56-63, Potsdam.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Tools for computer-aided corpus lexicography: The Hector project", "authors": [ { "first": "Sue", "middle": [], "last": "Atkins", "suffix": "" } ], "year": 1992, "venue": "Acta Linguistica Hungarica", "volume": "41", "issue": "", "pages": "5--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atkins, Sue. 1992. Tools for computer-aided corpus lexicography: The Hector project. Acta Linguistica Hungarica, 41:5-71.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Definitional, personal, and mechanical constraints on part of speech annotation performance", "authors": [ { "first": "Anna", "middle": [], "last": "Babarczy", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 2006, "venue": "Natural Language Engineering", "volume": "12", "issue": "1", "pages": "77--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Babarczy, Anna, John Carroll, and Geoffrey Sampson. 2006. Definitional, personal, and mechanical constraints on part of speech annotation performance. Natural Language Engineering, 12(1):77-90.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On the methods and theory of reliability", "authors": [ { "first": "John", "middle": [ "J" ], "last": "Bartko", "suffix": "" }, { "first": "William", "middle": [ "T" ], "last": "Carpenter", "suffix": "" } ], "year": 1976, "venue": "Journal of Nervous and Mental Disease", "volume": "163", "issue": "5", "pages": "307--317", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bartko, John J. and William T. Carpenter, Jr. 1976. On the methods and theory of reliability. 
Journal of Nervous and Mental Disease, 163(5):307-317.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical models for text segmentation", "authors": [ { "first": "Doug", "middle": [], "last": "Beeferman", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Berger", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "34", "issue": "", "pages": "177--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beeferman, Doug, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1-3):177-210.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Communications through limited questioning", "authors": [ { "first": "E", "middle": [ "M" ], "last": "Bennett", "suffix": "" }, { "first": "R", "middle": [], "last": "Alpert", "suffix": "" }, { "first": "A", "middle": [ "C" ], "last": "Goldstein", "suffix": "" } ], "year": 1954, "venue": "Public Opinion Quarterly", "volume": "18", "issue": "3", "pages": "303--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bennett, E. M., R. Alpert, and A. C. Goldstein. 1954. Communications through limited questioning. Public Opinion Quarterly, 18(3):303-308.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "2 \u00d7 2 kappa coefficients: Measures of agreement or association", "authors": [ { "first": "Daniel", "middle": [ "A" ], "last": "Bloch", "suffix": "" }, { "first": "Helena Chmura", "middle": [], "last": "Kraemer", "suffix": "" } ], "year": 1989, "venue": "Biometrics", "volume": "45", "issue": "1", "pages": "269--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bloch, Daniel A. and Helena Chmura Kraemer. 1989. 2 \u00d7 2 kappa coefficients: Measures of agreement or association. Biometrics, 45(1):269-287.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Coefficient kappa: Some uses, misuses, and alternatives", "authors": [ { "first": "Robert", "middle": [ "L" ], "last": "Brennan", "suffix": "" }, { "first": "J", "middle": [], "last": "Dale", "suffix": "" }, { "first": "", "middle": [], "last": "Prediger", "suffix": "" } ], "year": 1981, "venue": "Educational and Psychological Measurement", "volume": "41", "issue": "3", "pages": "687--699", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brennan, Robert L. and Dale J. Prediger. 1981. Coefficient kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement, 41(3):687-699.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Word-sense distinguishability and inter-coder agreement", "authors": [ { "first": "Rebecca", "middle": [], "last": "Bruce", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 1998, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "53--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bruce, Rebecca and Janyce Wiebe. 1998. Word-sense distinguishability and inter-coder agreement. In Proceedings of EMNLP, pages 53-60, Granada.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Recognizing subjectivity: A case study in manual tagging", "authors": [ { "first": "Rebecca", "middle": [ "F" ], "last": "Bruce", "suffix": "" }, { "first": "Janyce", "middle": [ "M" ], "last": "Wiebe", "suffix": "" } ], "year": 1999, "venue": "Natural Language Engineering", "volume": "5", "issue": "2", "pages": "187--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bruce, Rebecca F. 
and Janyce M. Wiebe. 1999. Recognizing subjectivity: A case study in manual tagging. Natural Language Engineering, 5(2):187-205.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "CoreLex : Systematic Polysemy and Underspecification", "authors": [ { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Buitelaar, Paul. 1998. CoreLex : Systematic Polysemy and Underspecification. Ph.D. thesis, Brandeis University, Waltham, MA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Dynamic interpretation and dialogue theory", "authors": [ { "first": "Harry", "middle": [ "C" ], "last": "Bunt", "suffix": "" } ], "year": 2000, "venue": "The Structure of Multimodal Dialogue II. John Benjamins", "volume": "", "issue": "", "pages": "139--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bunt, Harry C. 2000. Dynamic interpretation and dialogue theory. In Martin M. Taylor, Fran\u00e7oise N\u00e9el, and Don G. Bouwhuis, editors, The Structure of Multimodal Dialogue II. John Benjamins, Amsterdam, pages 139-166.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A framework for dialogue act specification", "authors": [ { "first": "Harry", "middle": [ "C" ], "last": "Bunt", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Joint ISO-ACL Workshop on the Representation and Annotation of Semantic Information", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bunt, Harry C. 2005. A framework for dialogue act specification. In Proceedings of the Joint ISO-ACL Workshop on the Representation and Annotation of Semantic Information, Tilburg. Available at: http://let.uvt.nl/research/ti/ sigsem/wg/discussionnotes4.htm.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Resolving pronominal reference to abstract entities", "authors": [ { "first": "Donna", "middle": [ "K" ], "last": "Byron", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "80--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Byron, Donna K. 2002. Resolving pronominal reference to abstract entities. In Proceedings of the 40th Annual Meeting of the ACL, pages 80-87, Philadelphia, PA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bias, prevalence and kappa", "authors": [ { "first": "Ted", "middle": [], "last": "Byrt", "suffix": "" }, { "first": "Janet", "middle": [], "last": "Bishop", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Carlin", "suffix": "" } ], "year": 1993, "venue": "Journal of Clinical Epidemiology", "volume": "46", "issue": "5", "pages": "423--429", "other_ids": {}, "num": null, "urls": [], "raw_text": "Byrt, Ted, Janet Bishop, and John B. Carlin. 1993. Bias, prevalence and kappa. Journal of Clinical Epidemiology, 46(5):423-429.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Assessing agreement on classification tasks: The kappa statistic", "authors": [ { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "2", "pages": "249--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carletta, Jean. 1996. Assessing agreement on classification tasks: The kappa statistic. 
Computational Linguistics, 22(2):249-254.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The reliability of a dialogue structure coding scheme", "authors": [ { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Jacqueline", "middle": [ "C" ], "last": "Kowtko", "suffix": "" }, { "first": "Gwyneth", "middle": [], "last": "Doherty-Sneddon", "suffix": "" }, { "first": "Anne", "middle": [ "H" ], "last": "Anderson", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "1", "pages": "13--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carletta, Jean, Amy Isard, Stephen Isard, Jacqueline C. Kowtko, Gwyneth Doherty-Sneddon, and Anne H. Anderson. 1997. The reliability of a dialogue structure coding scheme. Computational Linguistics, 23(1):13-32.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory", "authors": [ { "first": "Lynn", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Mary", "middle": [ "Ellen" ], "last": "Okurowski", "suffix": "" } ], "year": 2003, "venue": "Current and New Directions in Discourse and Dialogue", "volume": "", "issue": "", "pages": "85--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlson, Lynn, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Jan C. J. van Kuppevelt and Ronnie W. Smith, editors, Current and New Directions in Discourse and Dialogue. Kluwer, Dordrecht, pages 85-112.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "High agreement but low kappa: II. Resolving the paradoxes", "authors": [ { "first": "Domenic", "middle": [ "V" ], "last": "Cicchetti", "suffix": "" }, { "first": "R", "middle": [], "last": "Alvan", "suffix": "" }, { "first": "", "middle": [], "last": "Feinstein", "suffix": "" } ], "year": 1990, "venue": "Journal of Clinical Epidemiology", "volume": "43", "issue": "6", "pages": "551--558", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicchetti, Domenic V. and Alvan R. Feinstein. 1990. High agreement but low kappa: II. Resolving the paradoxes. Journal of Clinical Epidemiology, 43(6):551-558.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A coefficient of agreement for nominal scales", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "Educational and Psychological Measurement", "volume": "20", "issue": "1", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Jacob. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1968, "venue": "Psychological Bulletin", "volume": "70", "issue": "4", "pages": "213--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Jacob. 1968. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. 
Psychological Bulletin, 70(4):213-220.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Coding dialogs with the DAMSL annotation scheme", "authors": [ { "first": "Mark", "middle": [ "G" ], "last": "Core", "suffix": "" }, { "first": "James", "middle": [ "F" ], "last": "Allen", "suffix": "" }, { "first": ";", "middle": [], "last": "Aaai", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Cambridge", "suffix": "" } ], "year": 1997, "venue": "Working Notes of the AAAI Fall Symposium on Communicative Action in Humans and Machines", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Core, Mark G. and James F. Allen. 1997. Coding dialogs with the DAMSL annotation scheme. In Working Notes of the AAAI Fall Symposium on Communicative Action in Humans and Machines, AAAI, Cambridge, MA. Available at: http://www. cs.umd.edu/\u223ctraum/CA/fpapers.html.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A two-dimensional annotation scheme for emotion in dialogue", "authors": [ { "first": "Richard", "middle": [], "last": "Craggs", "suffix": "" }, { "first": "Mary", "middle": [ "Mcgee" ], "last": "Wood", "suffix": "" } ], "year": 2004, "venue": "Papers from the 2004 AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications", "volume": "", "issue": "", "pages": "44--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Craggs, Richard and Mary McGee Wood. 2004. A two-dimensional annotation scheme for emotion in dialogue. In Papers from the 2004 AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications, Stanford, pages 44-49.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Evaluating discourse and dialogue coding schemes", "authors": [ { "first": "Richard", "middle": [], "last": "Craggs", "suffix": "" }, { "first": "Mary", "middle": [ "Mcgee" ], "last": "Wood", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "3", "pages": "289--295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Craggs, Richard and Mary McGee Wood. 2005. Evaluating discourse and dialogue coding schemes. Computational Linguistics, 31(3):289-295.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Measuring agreement for multinomial data", "authors": [ { "first": "Mark", "middle": [], "last": "Davies", "suffix": "" }, { "first": "Joseph", "middle": [ "L" ], "last": "Fleiss", "suffix": "" } ], "year": 1982, "venue": "Biometrics", "volume": "38", "issue": "4", "pages": "1047--1051", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davies, Mark and Joseph L. Fleiss. 1982. Measuring agreement for multinomial data. Biometrics, 38(4):1047-1051.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "On the usage of Kappa to evaluate agreement on coding tasks", "authors": [ { "first": "Di", "middle": [], "last": "Eugenio", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "", "suffix": "" } ], "year": 2000, "venue": "Proceedings of LREC", "volume": "1", "issue": "", "pages": "441--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di Eugenio, Barbara. 2000. On the usage of Kappa to evaluate agreement on coding tasks. 
In Proceedings of LREC, volume 1, pages 441-444, Athens.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The kappa statistic: A second look", "authors": [ { "first": "Di", "middle": [], "last": "Eugenio", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "1", "pages": "95--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di Eugenio, Barbara and Michael Glass. 2004. The kappa statistic: A second look. Computational Linguistics, 30(1):95-101.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "An empirical investigation of proposals in collaborative dialogues", "authors": [ { "first": "Di", "middle": [], "last": "Eugenio", "suffix": "" }, { "first": "Pamela", "middle": [ "W" ], "last": "Barbara", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Jordan", "suffix": "" }, { "first": "Richmond", "middle": [ "H" ], "last": "Moore", "suffix": "" }, { "first": "", "middle": [], "last": "Thomason", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 36th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "325--329", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di Eugenio, Barbara, Pamela W. Jordan, Johanna D. Moore, and Richmond H. Thomason. 1998. An empirical investigation of proposals in collaborative dialogues. In Proceedings of 36th Annual Meeting of the ACL, pages 325-329, Montreal.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Measures of the amount of ecologic association between species", "authors": [ { "first": "Lee", "middle": [ "R" ], "last": "Dice", "suffix": "" } ], "year": 1945, "venue": "Ecology", "volume": "26", "issue": "3", "pages": "297--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dice, Lee R. 1945. Measures of the amount of ecologic association between species. Ecology, 26(3):297-302.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Sample size requirements for reliability studies", "authors": [ { "first": "Allan", "middle": [], "last": "Donner", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Eliasziw", "suffix": "" } ], "year": 1987, "venue": "Statistics in Medicine", "volume": "6", "issue": "", "pages": "441--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Donner, Allan and Michael Eliasziw. 1987. Sample size requirements for reliability studies. Statistics in Medicine, 6:441-448.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Comparing several aspects of human-computer and human-human dialogues", "authors": [ { "first": "Christine", "middle": [], "last": "Doran", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "Laurie", "middle": [], "last": "Damianos", "suffix": "" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 2nd SIGdial Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doran, Christine, John Aberdeen, Laurie Damianos, and Lynette Hirschman. 2001. Comparing several aspects of human-computer and human-human dialogues. In Proceedings of the 2nd SIGdial Workshop on Discourse and Dialogue, Aalborg, Denmark. 
Available at: http://www.sigdial.org/workshops/ workshop2/proceedings.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Dialogue acts, synchronizing units, and anaphora resolution", "authors": [ { "first": "Miriam", "middle": [], "last": "Eckert", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2000, "venue": "Journal of Semantics", "volume": "17", "issue": "1", "pages": "51--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eckert, Miriam and Michael Strube. 2000. Dialogue acts, synchronizing units, and anaphora resolution. Journal of Semantics, 17(1):51-89.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "High agreement but low kappa: I. The problems of two paradoxes", "authors": [ { "first": "Alvan", "middle": [ "R" ], "last": "Feinstein", "suffix": "" }, { "first": "Domenic", "middle": [ "V" ], "last": "Cicchetti", "suffix": "" } ], "year": 1990, "venue": "Journal of Clinical Epidemiology", "volume": "43", "issue": "6", "pages": "543--549", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feinstein, Alvan R. and Domenic V. Cicchetti. 1990. High agreement but low kappa: I. The problems of two paradoxes. Journal of Clinical Epidemiology, 43(6):543-549.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, Christiane, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Measuring nominal scale agreement among many raters", "authors": [ { "first": "Joseph", "middle": [ "L" ], "last": "Fleiss", "suffix": "" } ], "year": 1971, "venue": "Psychological Bulletin", "volume": "76", "issue": "5", "pages": "378--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fleiss, Joseph L. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378-382.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Measuring agreement between two judges on the presence or absence of a trait", "authors": [ { "first": "Joseph", "middle": [ "L" ], "last": "Fleiss", "suffix": "" } ], "year": 1975, "venue": "Biometrics", "volume": "31", "issue": "3", "pages": "651--659", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fleiss, Joseph L. 1975. Measuring agreement between two judges on the presence or absence of a trait. Biometrics, 31(3):651-659.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Frequency Analysis of English Usage: lexicon and grammar", "authors": [ { "first": "W", "middle": [], "last": "Francis", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Nelson", "suffix": "" }, { "first": "", "middle": [], "last": "Kucera", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis, W. Nelson and Henry Kucera. 1982. Frequency Analysis of English Usage: lexicon and grammar. 
Houghton Mifflin, Boston, MA.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Measuring annotator agreement in a complex hierarchical dialogue act annotation scheme", "authors": [ { "first": "Jeroen", "middle": [], "last": "Geertzen", "suffix": "" }, { "first": "Harry", "middle": [], "last": "Bunt", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "126--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geertzen, Jeroen and Harry Bunt. 2006. Measuring annotator agreement in a complex hierarchical dialogue act annotation scheme. In Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, pages 126-133, Sydney.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "The Trains 91 dialogues. TRAINS Technical Note 92-1", "authors": [ { "first": "Derek", "middle": [], "last": "Gross", "suffix": "" }, { "first": "James", "middle": [ "F" ], "last": "Allen", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Traum", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gross, Derek, James F. Allen, and David R. Traum. 1993. The Trains 91 dialogues. TRAINS Technical Note 92-1, University of Rochester Computer Science Department, Rochester, NY.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Attention, intentions, and the structure of discourse", "authors": [ { "first": "Barbara", "middle": [ "J" ], "last": "Grosz", "suffix": "" }, { "first": "Candace", "middle": [ "L" ], "last": "Sidner", "suffix": "" } ], "year": 1986, "venue": "Computational Linguistics", "volume": "12", "issue": "3", "pages": "175--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grosz, Barbara J. and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Answering the call for a standard reliability measure for coding data", "authors": [ { "first": "Andrew", "middle": [ "F" ], "last": "Hayes", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2007, "venue": "Communication Methods and Measures", "volume": "1", "issue": "1", "pages": "77--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hayes, Andrew F. and Klaus Krippendorff. 2007. Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1):77-89.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "TextTiling: Segmenting text into multi-paragraph subtopic passages", "authors": [ { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "1", "pages": "33--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hearst, Marti A. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. 
Computational Linguistics, 23(1):33-64.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "OntoNotes: The 90% solution", "authors": [ { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2006, "venue": "Proceedings of HLT-NAACL, Companion Volume: Short Papers", "volume": "", "issue": "", "pages": "57--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hovy, Eduard, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of HLT-NAACL, Companion Volume: Short Papers, pages 57-60, New York.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Interrater agreement measures: Comments on kappa n , Cohen's kappa, Scott's \u03c0, and Aickin's \u03b1", "authors": [ { "first": "Louis", "middle": [ "M" ], "last": "Hsu", "suffix": "" }, { "first": "Ronald", "middle": [], "last": "Field", "suffix": "" } ], "year": 2003, "venue": "Understanding Statistics", "volume": "2", "issue": "3", "pages": "205--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsu, Louis M. and Ronald Field. 2003. Interrater agreement measures: Comments on kappa n , Cohen's kappa, Scott's \u03c0, and Aickin's \u03b1. Understanding Statistics, 2(3):205-219.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "The distribution of the flora in the Alpine zone", "authors": [ { "first": "Paul", "middle": [], "last": "Jaccard", "suffix": "" } ], "year": 1912, "venue": "New Phytologist", "volume": "11", "issue": "2", "pages": "37--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaccard, Paul. 1912. The distribution of the flora in the Alpine zone. New Phytologist, 11(2):37-50.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Dialogue acts in VERBMOBIL", "authors": [ { "first": "Susanne", "middle": [], "last": "Jekat", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Maier", "suffix": "" }, { "first": "Ilona", "middle": [], "last": "Maleck", "suffix": "" }, { "first": "Marion", "middle": [], "last": "Mast", "suffix": "" }, { "first": "J. Joachim", "middle": [], "last": "Quantz", "suffix": "" }, { "first": ";", "middle": [], "last": "", "suffix": "" }, { "first": "T", "middle": [ "U" ], "last": "Berlin", "suffix": "" } ], "year": 1995, "venue": "", "volume": "65", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jekat, Susanne, Alexandra Klein, Elisabeth Maier, Ilona Maleck, Marion Mast, and J. Joachim Quantz. 1995. Dialogue acts in VERBMOBIL. VM-Report 65, Universit\u00e4t Hamburg, DFKI GmbH, Universit\u00e4t Erlangen, and TU Berlin.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Switchboard SWBD-DAMSL shallow-discoursefunction annotation coders manual, draft 13", "authors": [ { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Debra", "middle": [], "last": "Biasca", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jurafsky, Daniel, Elizabeth Shriberg, and Debra Biasca. 1997. 
Switchboard SWBD-DAMSL shallow-discourse- function annotation coders manual, draft 13. Technical Report 97-02, University of Colorado at Boulder, Institute for Cognitive Science.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "95% replicability for manual word sense tagging", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "277--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kilgarriff, Adam. 1999. 95% replicability for manual word sense tagging. In Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics, pages 277-278, Bergen, Norway.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Tackling the internet glossary glut: Automatic extraction and evaluation of genus phrases", "authors": [ { "first": "Judith", "middle": [ "L" ], "last": "Klavans", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Popper", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Passonneau", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the SIGIR-2003 Workshop on the Semantic Web", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klavans, Judith L., Samuel Popper, and Rebecca Passonneau. 2003. Tackling the internet glossary glut: Automatic extraction and evaluation of genus phrases. In Proceedings of the SIGIR-2003 Workshop on the Semantic Web, Toronto.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Conversational games within dialogue", "authors": [ { "first": "Jacqueline", "middle": [ "C" ], "last": "Kowtko", "suffix": "" }, { "first": "D", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "Gwyneth", "middle": [ "M" ], "last": "Isard", "suffix": "" }, { "first": "", "middle": [], "last": "Doherty", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kowtko, Jacqueline C., Stephen D. Isard, and Gwyneth M. Doherty. 1992. Conversational games within dialogue. Research Paper HCRC/RP-31, Human Communication Research Centre, University of Edinburgh.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Estimating the reliability, systematic error and random error of interval data", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 1970, "venue": "Educational and Psychological Measurement", "volume": "30", "issue": "1", "pages": "61--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, Klaus. 1970. Estimating the reliability, systematic error and random error of interval data. Educational and Psychological Measurement, 30(1):61-70.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Reliability of binary attribute data", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 1978, "venue": "Biometrics", "volume": "34", "issue": "1", "pages": "142--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, Klaus. 1978. Reliability of binary attribute data. Biometrics, 34(1):142-144. Letter to the editor, with a reply by Joseph L. 
Fleiss.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Content Analysis: An Introduction to Its Methodology, chapter 12", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, Klaus. 1980. Content Analysis: An Introduction to Its Methodology, chapter 12. Sage, Beverly Hills, CA.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "On the reliability of unitizing contiguous data", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 1995, "venue": "Sociological Methodology", "volume": "25", "issue": "", "pages": "47--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, Klaus. 1995. On the reliability of unitizing contiguous data. Sociological Methodology, 25:47-76.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Content Analysis: An Introduction to Its Methodology", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, Klaus. 2004a. Content Analysis: An Introduction to Its Methodology, second edition, chapter 11. Sage, Thousand Oaks, CA.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Reliability in content analysis: Some common misconceptions and recommendations", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2004, "venue": "Human Communication Research", "volume": "30", "issue": "3", "pages": "411--433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, Klaus. 2004b. Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research, 30(3):411-433.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "The measurement of observer agreement for categorical data", "authors": [ { "first": "J", "middle": [], "last": "Landis", "suffix": "" }, { "first": "Gary", "middle": [ "G" ], "last": "Richard", "suffix": "" }, { "first": "", "middle": [], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "Biometrics", "volume": "33", "issue": "1", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landis, J. Richard and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159-174.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "CLAWS4: The tagging of the British National Corpus", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Leech", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Garside", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bryant", "suffix": "" } ], "year": 1994, "venue": "Proceedings of COLING 1994: The 15th International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "622--628", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leech, Geoffrey, Roger Garside, and Michael Bryant. 1994. CLAWS4: The tagging of the British National Corpus. 
In Proceedings of COLING 1994: The 15th International Conference on Computational Linguistics, Volume 1, pages 622-628, Kyoto.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Dialogue-games: Metacommunication structures for natural language interaction", "authors": [ { "first": "James", "middle": [ "A" ], "last": "Levin", "suffix": "" }, { "first": "James", "middle": [ "A" ], "last": "Moore", "suffix": "" } ], "year": 1978, "venue": "Cognitive Science", "volume": "1", "issue": "4", "pages": "395--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levin, James A. and James A. Moore. 1978. Dialogue-games: Metacommunication structures for natural language interaction. Cognitive Science, 1(4):395-420.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Schuetze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, Christopher D. and Hinrich Schuetze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Experiments in constructing a corpus of discourse trees: Problems, annotation choices, issues", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Magdalena", "middle": [], "last": "Romera", "suffix": "" }, { "first": "Estibaliz", "middle": [], "last": "Amorrortu", "suffix": "" } ], "year": 1999, "venue": "Workshop on Levels of Representation in Discourse", "volume": "", "issue": "", "pages": "71--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcu, Daniel, Magdalena Romera, and Estibaliz Amorrortu. 1999. Experiments in constructing a corpus of discourse trees: Problems, annotation choices, issues. In Workshop on Levels of Representation in Discourse, pages 71-78, University of Edinburgh.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Building a large annotated corpus of English: the Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Mary Ann Marcinkiewicz", "suffix": "" }, { "first": "", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, Mitchell P., Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "The whole art of deduction", "authors": [ { "first": "Rodger", "middle": [], "last": "Marion", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marion, Rodger. 2004. The whole art of deduction. 
Unpublished manuscript.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Tagger evaluation given hierarchical tagsets", "authors": [ { "first": "I", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Dan", "suffix": "" }, { "first": "", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2000, "venue": "Computers and the Humanities", "volume": "34", "issue": "1-2", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melamed, I. Dan and Philip Resnik. 2000. Tagger evaluation given hierarchical tagsets. Computers and the Humanities, 34(1-2):79-84. Available at: http://www.sahs/utmb.edu/PELLINORE/ Intro to research/wad/wad/ home.htm.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Part-of-speech tagging of transcribed speech", "authors": [ { "first": "Margot", "middle": [], "last": "Mieskes", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "935--938", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mieskes, Margot and Michael Strube. 2006. Part-of-speech tagging of transcribed speech. In Proceedings of LREC, pages 935-938, Genoa.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Annotating discourse connectives and their arguments", "authors": [ { "first": "", "middle": [], "last": "Barcelona", "suffix": "" }, { "first": "Eleni", "middle": [], "last": "Miltsakaki", "suffix": "" }, { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the HLT-NAACL Workshop on Frontiers in Corpus Annotation", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "English lexical sample task. In Proceedings of SENSEVAL-3, pages 25-28, Barcelona. Miltsakaki, Eleni, Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2004. Annotating discourse connectives and their arguments. In Proceedings of the HLT-NAACL Workshop on Frontiers in Corpus Annotation, pages 9-16, Boston, MA.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Instructions for Coding Explanations: Identifying Segments, Relations and Minimal Units", "authors": [ { "first": "Megan", "middle": [ "G" ], "last": "Moser", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Moore", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Glendening", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moser, Megan G., Johanna D. Moore, and Erin Glendening. 1996. Instructions for Coding Explanations: Identifying Segments, Relations and Minimal Units. Technical Report 96-17, University of Pittsburgh, Department of Computer Science.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Abstract anaphora resolution in Danish", "authors": [ { "first": "Costanza", "middle": [], "last": "Navarretta", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 1st SIGdial Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "56--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Navarretta, Costanza. 2000. Abstract anaphora resolution in Danish. 
In Proceedings of the 1st SIGdial Workshop on Discourse and Dialogue, Hong Kong, pages 56-65.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Evaluating content selection in summarization: The pyramid method", "authors": [ { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Passonneau", "suffix": "" } ], "year": 2002, "venue": "Proceedings of HLT-NAACL 2004", "volume": "", "issue": "", "pages": "145--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nenkova, Ani and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of HLT-NAACL 2004, pages 145-152, Boston, MA. Neuendorf, Kimberly A. 2002. The Content Analysis Guidebook. Sage, Thousand Oaks, CA.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Making fine-grained and coarse-grained sense distinctions, both manually and automatically", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" }, { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 2007, "venue": "Natural Language Engineering", "volume": "13", "issue": "2", "pages": "137--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Palmer, Martha, Hoa Trang Dang, and Christiane Fellbaum. 2007. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering, 13(2):137-163.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Computing reliability for coreference annotation", "authors": [ { "first": "Rebecca", "middle": [ "J" ], "last": "Passonneau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of LREC", "volume": "4", "issue": "", "pages": "1503--1506", "other_ids": {}, "num": null, "urls": [], "raw_text": "Passonneau, Rebecca J. 2004. Computing reliability for coreference annotation. In Proceedings of LREC, volume 4, pages 1503-1506, Lisbon.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation", "authors": [ { "first": "Rebecca", "middle": [ "J" ], "last": "Passonneau", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "831--836", "other_ids": {}, "num": null, "urls": [], "raw_text": "Passonneau, Rebecca J. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation. In Proceedings of LREC, Genoa, pages 831-836.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Inter-annotator agreement on a multilingual semantic annotation task", "authors": [ { "first": "Rebecca", "middle": [ "J" ], "last": "Passonneau", "suffix": "" }, { "first": "Nizar", "middle": [], "last": "Habash", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "1951--1956", "other_ids": {}, "num": null, "urls": [], "raw_text": "Passonneau, Rebecca J., Nizar Habash, and Owen Rambow. 2006. Inter-annotator agreement on a multilingual semantic annotation task. 
In Proceedings of LREC, Genoa, pages 1951-1956.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Intention-based segmentation: Human reliability and correlation with linguistic cues", "authors": [ { "first": "Rebecca", "middle": [ "J" ], "last": "Passonneau", "suffix": "" }, { "first": "Diane", "middle": [ "J" ], "last": "Litman", "suffix": "" } ], "year": 1993, "venue": "Proceedings of 31st Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "148--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Passonneau, Rebecca J. and Diane J. Litman. 1993. Intention-based segmentation: Human reliability and correlation with linguistic cues. In Proceedings of 31st Annual Meeting of the ACL, pages 148-155, Columbus, OH.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Empirical analysis of three dimensions of spoken discourse: Segmentation, coherence and linguistic devices", "authors": [ { "first": "Rebecca", "middle": [ "J" ], "last": "Passonneau", "suffix": "" }, { "first": "Diane", "middle": [ "J" ], "last": "Litman", "suffix": "" } ], "year": 1996, "venue": "Computational and Conversational Discourse: Burning Issues -An Interdisciplinary Account", "volume": "151", "issue": "", "pages": "161--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Passonneau, Rebecca J. and Diane J. Litman. 1996. Empirical analysis of three dimensions of spoken discourse: Segmentation, coherence and linguistic devices. In Eduard H. Hovy and Donia R. Scott, editors, Computational and Conversational Discourse: Burning Issues - An Interdisciplinary Account, volume 151 of NATO ASI Series F: Computer and Systems Sciences. Springer, Berlin, chapter 7, pages 161-194.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Discourse segmentation by human and automated means", "authors": [ { "first": "Rebecca", "middle": [ "J" ], "last": "Passonneau", "suffix": "" }, { "first": "Diane", "middle": [ "J" ], "last": "Litman", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "1", "pages": "103--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Passonneau, Rebecca J. and Diane J. Litman. 1997. Discourse segmentation by human and automated means. Computational Linguistics, 23(1):103-139.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "A critique and improvement of an evaluation metric for text segmentation", "authors": [ { "first": "Lev", "middle": [], "last": "Pevzner", "suffix": "" }, { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "1", "pages": "19--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pevzner, Lev and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. 
Computational Linguistics, 28(1):19-36.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "The reliability of anaphoric annotation, reconsidered: Taking ambiguity into account", "authors": [ { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Cambridge", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Workshop on Frontiers in Corpus Annotation II: Pie in the Sky", "volume": "", "issue": "", "pages": "76--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Poesio, Massimo. 2004a. Discourse annotation and semantic annotation in the GNOME corpus. In Proceedings of the 2004 ACL Workshop on Discourse Annotation, pages 72-79, Barcelona. Poesio, Massimo. 2004b. The MATE/GNOME proposals for anaphoric annotation, revisited. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue, pages 154-162, Cambridge, MA. Poesio, Massimo and Ron Artstein. 2005. The reliability of anaphoric annotation, reconsidered: Taking ambiguity into account. In Proceedings of the Workshop on Frontiers in Corpus Annotation II: Pie in the Sky, pages 76-83, Ann Arbor, MI.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Focus, activation, and this-noun phrases: An empirical study", "authors": [ { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "Natalia", "middle": [ "N" ], "last": "Modjeska", "suffix": "" } ], "year": 2005, "venue": "Anaphora Processing", "volume": "263", "issue": "", "pages": "429--442", "other_ids": {}, "num": null, "urls": [], "raw_text": "Poesio, Massimo and Natalia N. Modjeska. 2005. Focus, activation, and this-noun phrases: An empirical study. In Ant\u00f3nio Branco, Tony McEnery, and Ruslan Mitkov, editors, Anaphora Processing, volume 263 of Current Issues in Linguistic Theory. John Benjamins, pages 429-442, Amsterdam and Philadelphia.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "Discourse structure and anaphora in tutorial dialogues: An empirical analysis of two theories of the global focus", "authors": [ { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "A", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Barbara", "middle": [ "Di" ], "last": "Eugenio", "suffix": "" } ], "year": 2006, "venue": "Research in Language and Computation", "volume": "4", "issue": "2-3", "pages": "229--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Poesio, Massimo, A. Patel, and Barbara Di Eugenio. 2006. Discourse structure and anaphora in tutorial dialogues: An empirical analysis of two theories of the global focus. Research in Language and Computation, 4(2-3):229-257.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "A corpus-based investigation of definite description use", "authors": [ { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "Renata", "middle": [], "last": "Vieira", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "2", "pages": "183--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Poesio, Massimo and Renata Vieira. 1998. A corpus-based investigation of definite description use. Computational Linguistics, 24(2):183-216.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Dialogue acts: One or more dimensions? 
Working Paper 62", "authors": [ { "first": "Andrei", "middle": [], "last": "Popescu-Belis", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Popescu-Belis, Andrei. 2005. Dialogue acts: One or more dimensions? Working Paper 62, ISSCO, University of Geneva.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Measuring interrater reliability among multiple raters: An example of methods for nominal data", "authors": [ { "first": "", "middle": [], "last": "Cheney", "suffix": "" } ], "year": 1990, "venue": "Statistics in Medicine", "volume": "9", "issue": "", "pages": "1103--1115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheney. 1990. Measuring interrater reliability among multiple raters: An example of methods for nominal data. Statistics in Medicine, 9:1103-1115.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Reliability formulas for independent decision data when reliability data are matched", "authors": [ { "first": "Nageswari", "middle": [], "last": "Rajaratnam", "suffix": "" } ], "year": 1960, "venue": "Psychometrika", "volume": "25", "issue": "3", "pages": "261--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajaratnam, Nageswari. 1960. Reliability formulas for independent decision data when reliability data are matched. Psychometrika, 25(3):261-271.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Reliability measurement without limits", "authors": [ { "first": "Dennis", "middle": [], "last": "Reidsma", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "3", "pages": "319--326", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reidsma, Dennis and Jean Carletta. 2008. Reliability measurement without limits. Computational Linguistics, 34(3):319-326.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "Pragmatics and linguistics: An analysis of sentence topics", "authors": [ { "first": "T", "middle": [], "last": "Reinhart", "suffix": "" } ], "year": 1981, "venue": "Philosophica", "volume": "27", "issue": "1", "pages": "53--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhart, T. 1981. Pragmatics and linguistics: An analysis of sentence topics. Philosophica, 27(1):53-93.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Topic Segmentation: Algorithms and Applications", "authors": [ { "first": "Jeffrey", "middle": [ "C" ], "last": "Reynar", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reynar, Jeffrey C. 1998. Topic Segmentation: Algorithms and Applications. Ph.D. thesis, University of Pennsylvania, Philadelphia.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "Segmenting conversations by topic, initiative and style", "authors": [ { "first": "Klaus", "middle": [], "last": "Ries", "suffix": "" } ], "year": 2002, "venue": "Information Retrieval Techniques for Speech Applications", "volume": "2273", "issue": "", "pages": "51--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ries, Klaus. 2002. Segmenting conversations by topic, initiative and style. In Anni R. Coden, Eric W. Brown, and Savitha Srinivasan, editors, Information Retrieval Techniques for Speech Applications, volume 2273 of Lecture Notes in Computer Science. 
Springer, Berlin, pages 51-66.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Statistical Techniques for the Study of Language and Language Behaviour", "authors": [ { "first": "Toni", "middle": [], "last": "Rietveld", "suffix": "" }, { "first": "Roeland", "middle": [], "last": "Van Hout", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rietveld, Toni and Roeland van Hout. 1993. Statistical Techniques for the Study of Language and Language Behaviour. Mouton de Gruyter, Berlin.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Augmenting the kappa statistic to determine interannotator reliability for multiply labeled data points", "authors": [ { "first": "Andrew", "middle": [], "last": "Rosenberg", "suffix": "" }, { "first": "Ed", "middle": [], "last": "Binkowski", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT-NAACL 2004: Short Papers", "volume": "", "issue": "", "pages": "77--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosenberg, Andrew and Ed Binkowski. 2004. Augmenting the kappa statistic to determine interannotator reliability for multiply labeled data points. In Proceedings of HLT-NAACL 2004: Short Papers, pages 77-80, Boston, MA.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "Reliability of content analysis: The case of nominal scale coding", "authors": [ { "first": "William", "middle": [ "A" ], "last": "Scott", "suffix": "" } ], "year": 1955, "venue": "Public Opinion Quarterly", "volume": "19", "issue": "3", "pages": "321--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott, William A. 1955. Reliability of content analysis: The case of nominal scale coding. Public Opinion Quarterly, 19(3):321-325.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "The ICSI meeting recorder dialog act (MRDA) corpus", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Raj", "middle": [], "last": "Dhillon", "suffix": "" }, { "first": "Sonali", "middle": [], "last": "Bhagat", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Ang", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Carvey", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 5th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shriberg, Elizabeth, Raj Dhillon, Sonali Bhagat, Jeremy Ang, and Hannah Carvey. 2004. The ICSI meeting recorder dialog act (MRDA) corpus. In Proceedings of the 5th", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Nonparametric Statistics for the Behavioral Sciences", "authors": [ { "first": "Sidney", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "N. John", "middle": [], "last": "Castellan", "suffix": "" }, { "first": "Jr", "middle": [], "last": "", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siegel, Sidney and N. John Castellan, Jr. 1988. Nonparametric Statistics for the Behavioral Sciences, 2nd edition, chapter 9.8. 
McGraw-Hill, New York.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "Dialogue Systems as Conversational Partners: Applying Conversation Acts Theory to Natural Language Generation for Task-Oriented Mixed-Initiative Spoken Dialogue", "authors": [ { "first": "Amanda", "middle": [ "J" ], "last": "Stent", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stent, Amanda J. 2001. Dialogue Systems as Conversational Partners: Applying Conversation Acts Theory to Natural Language Generation for Task-Oriented Mixed-Initiative Spoken Dialogue. Ph.D. thesis, Department of Computer Science, University of Rochester.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "Experiments on sentence boundary detection", "authors": [ { "first": "Mark", "middle": [], "last": "Stevenson", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Gaizauskas", "suffix": "" } ], "year": 2000, "venue": "Proceedings of 6th ANLP", "volume": "", "issue": "", "pages": "84--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stevenson, Mark and Robert Gaizauskas. 2000. Experiments on sentence boundary detection. In Proceedings of 6th ANLP, pages 84-89, Seattle, WA.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Bates", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Carol", "middle": [], "last": "Van Ess-Dykema", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Ries", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Meteer", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "3", "pages": "339--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, Andreas, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339-373.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "A test for homogeneity of the marginal distributions in a two-way classification", "authors": [ { "first": "Alan", "middle": [], "last": "Stuart", "suffix": "" } ], "year": 1955, "venue": "Biometrika", "volume": "42", "issue": "3/4", "pages": "412--416", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart, Alan. 1955. A test for homogeneity of the marginal distributions in a two-way classification. 
Biometrika, 42(3/4):412-416.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "An annotation scheme for discourse-level argumentation in research articles", "authors": [ { "first": "Simone", "middle": [], "last": "Teufel", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Moens", "suffix": "" } ], "year": 1999, "venue": "Proceedings of Ninth Conference of the EACL", "volume": "", "issue": "", "pages": "110--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teufel, Simone, Jean Carletta, and Marc Moens. 1999. An annotation scheme for discourse-level argumentation in research articles. In Proceedings of Ninth Conference of the EACL, pages 110-117, Bergen.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "Summarizing scientific articles: Experiments with relevance and rhetorical status", "authors": [ { "first": "Simone", "middle": [], "last": "Teufel", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "4", "pages": "409--445", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teufel, Simone and Marc Moens. 2002. Summarizing scientific articles: Experiments with relevance and rhetorical status. Computational Linguistics, 28(4):409-445.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Conversation acts in task-oriented spoken dialogue", "authors": [ { "first": "David", "middle": [ "R" ], "last": "Traum", "suffix": "" }, { "first": "Elizabeth", "middle": [ "A" ], "last": "Hinkelman", "suffix": "" } ], "year": 1992, "venue": "Computational Intelligence", "volume": "8", "issue": "3", "pages": "575--599", "other_ids": {}, "num": null, "urls": [], "raw_text": "Traum, David R. and Elizabeth A. Hinkelman. 1992. Conversation acts in task-oriented spoken dialogue. Computational Intelligence, 8(3):575-599.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "Information packaging: A survey", "authors": [ { "first": "Enric", "middle": [], "last": "Vallduv\u00ed", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vallduv\u00ed, Enric. 1993. Information packaging: A survey. Research Paper RP-44, University of Edinburgh, HCRC.", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "A study of polysemy judgments and inter-annotator agreement", "authors": [ { "first": "Jean", "middle": [], "last": "V\u00e9ronis", "suffix": "" } ], "year": 1998, "venue": "Proceedings of SENSEVAL-1", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V\u00e9ronis, Jean. 1998. A study of polysemy judgments and inter-annotator agreement. In Proceedings of SENSEVAL-1, Herstmonceux Castle, England. 
Available at: http://www.itri.brighton.ac.uk/ events/senseval/ARCHIVE/PROCEEDINGS/.", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "A model-theoretic coreference scoring scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "Dennis", "middle": [], "last": "Connolly", "suffix": "" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth Message Understanding Conference", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vilain, Marc, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message Understanding Conference, pages 45-52, Columbia, MD.", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "Another look at interrater agreement", "authors": [ { "first": "Rebecca", "middle": [], "last": "Zwick", "suffix": "" } ], "year": 1988, "venue": "Psychological Bulletin", "volume": "103", "issue": "3", "pages": "374--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zwick, Rebecca. 1988. Another look at interrater agreement. Psychological Bulletin, 103(3):374-378.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "text": "comparison of the values of \u03b1 and K for anaphoric annotation(Poesio and Artstein 2005).", "type_str": "figure", "num": null }, "TABREF0": { "html": null, "num": null, "type_str": "table", "text": "Agreement table with three coders.", "content": "
           STAT       IREQ
Utt 1        2          1
Utt 2        0          3
. . .
Utt 100      1          2
TOTAL     90 (0.3)   210 (0.7)
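The table above is an item-by-category count matrix: each cell records how many of the three coders assigned that utterance to STAT or IREQ, and the pooled column proportions (0.3 and 0.7) provide the single chance distribution used by pi-style coefficients such as Fleiss's multi-pi (the K of Siegel and Castellan). The following minimal sketch, in Python, assumes a matrix of this form as input; the function name multi_pi and the toy counts at the end are ours, for illustration only, not data from the article.

def multi_pi(counts):
    """counts: per-item category counts [n_i1, ..., n_iK]; every item is labeled by the same number of coders."""
    items = len(counts)
    coders = sum(counts[0])
    pairs = coders * (coders - 1)
    # Observed agreement: mean proportion of agreeing coder pairs per item.
    A_o = sum(sum(n * (n - 1) for n in row) / pairs for row in counts) / items
    # Expected agreement from pooled category proportions (like the 0.3 / 0.7 totals above).
    totals = [sum(row[k] for row in counts) for k in range(len(counts[0]))]
    grand = items * coders
    A_e = sum((t / grand) ** 2 for t in totals)
    return A_o, A_e, (A_o - A_e) / (1 - A_e)

# Toy three-coder table over categories (STAT, IREQ); illustrative numbers only.
toy = [[2, 1], [0, 3], [1, 2], [3, 0]]
print(multi_pi(toy))   # -> roughly (0.67, 0.50, 0.33)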
" }, "TABREF1": { "html": null, "num": null, "type_str": "table", "text": "An integrated coding example.", "content": "
                          CODER A
                 STAT   IREQ   CHCK   TOTAL
         STAT     46      6      0      52
CODER B  IREQ      0     32      0      32
         CHCK      0      6     10      16
         TOTAL    46     44     10     100
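Because the two coders' marginal distributions in this table differ (coder B uses STAT more often than coder A, for example), the chance term, and hence the coefficient, depends on whether it is computed from individual coder marginals (Cohen's kappa) or from pooled marginals (Scott's pi). The sketch below recomputes both directly from the counts above; the variable names are ours, and on these particular counts the two coefficients differ only in the third decimal place.

# Rows = CODER B (STAT, IREQ, CHCK), columns = CODER A, counts from the table above.
table = [[46,  6,  0],
         [ 0, 32,  0],
         [ 0,  6, 10]]

n = sum(sum(r) for r in table)
A_o = sum(table[k][k] for k in range(3)) / n                   # observed agreement: 0.88

p_A = [sum(r[k] for r in table) / n for k in range(3)]         # coder A marginals
p_B = [sum(r) / n for r in table]                              # coder B marginals

Ae_kappa = sum(p_A[k] * p_B[k] for k in range(3))              # individual marginals (Cohen)
Ae_pi = sum(((p_A[k] + p_B[k]) / 2) ** 2 for k in range(3))    # pooled marginals (Scott)

kappa = (A_o - Ae_kappa) / (1 - Ae_kappa)
pi = (A_o - Ae_pi) / (1 - Ae_pi)
print(round(kappa, 3), round(pi, 3))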
" }, "TABREF3": { "html": null, "num": null, "type_str": "table", "text": "Fewer boundaries, higher expected agreement.Case 1: Broad segments A o = 0.96, A e = 0.89, K = 0.65", "content": "
Case 1: Broad segments
A_o = 0.96, A_e = 0.89, K = 0.65

                                   CODER A
                       BOUNDARY   NO BOUNDARY   TOTAL
         BOUNDARY          2            1          3
CODER B  NO BOUNDARY       1           46         47
         TOTAL             3           47         50

Case 2: Fine discourse units
A_o = 0.88, A_e = 0.53, K = 0.75

                                   CODER A
                       BOUNDARY   NO BOUNDARY   TOTAL
         BOUNDARY         16            3         19
CODER B  NO BOUNDARY       3           28         31
         TOTAL            19           31         50
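The contrast between the two cases can be checked directly: observed agreement is higher with broad segments, but boundaries are then so rare that expected agreement rises even faster, and K drops. A small sketch follows; it uses the pooled (single-distribution) chance term, and because both tables have identical marginals for the two coders, individual-coder marginals would give the same values. The function name kappa_2x2 is ours.

def kappa_2x2(a, b, c, d):
    """Table [[a, b], [c, d]]: rows = CODER B, columns = CODER A, first row/column = BOUNDARY."""
    n = a + b + c + d
    A_o = (a + d) / n
    # Pooled chance term (pi-style); with identical marginals, Cohen-style
    # individual marginals yield the same value.
    p_boundary = ((a + b) + (a + c)) / (2 * n)
    A_e = p_boundary ** 2 + (1 - p_boundary) ** 2
    return A_o, A_e, (A_o - A_e) / (1 - A_e)

print(kappa_2x2(2, 1, 1, 46))    # Case 1 -> about (0.96, 0.89, 0.65)
print(kappa_2x2(16, 3, 3, 28))   # Case 2 -> about (0.88, 0.53, 0.75)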
" }, "TABREF4": { "html": null, "num": null, "type_str": "table", "text": "The verb named appears in the original WordNet example for the verb call.", "content": "
SENSE   DESCRIPTION            EXAMPLE                                          HYPERNYM
WN1     name, call             "They named their son David"                     LABEL
WN3     call, give a quality   "She called her children lazy and ungrateful"    LABEL
WN19    call, consider         "I would not call her beautiful"                 SEE
WN22    address, call          "Call me mister"                                 ADDRESS
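Sense inventories of this kind are one motivation for weighted, alpha-like coefficients: coders who choose WN1 and WN3 disagree less severely than coders who choose WN1 and WN22, since the former pair shares the hypernym LABEL. The sketch below illustrates only the weighted observed-disagreement term under one assumed distance scheme (identical sense 0, shared hypernym 0.5, otherwise 1); this scheme is ours, not the article's or MASI's, and a full alpha would also require an expected-disagreement term computed from pooled label frequencies.

HYPERNYM = {"WN1": "LABEL", "WN3": "LABEL", "WN19": "SEE", "WN22": "ADDRESS"}

def sense_distance(s1, s2):
    if s1 == s2:
        return 0.0          # same sense: no disagreement
    if HYPERNYM[s1] == HYPERNYM[s2]:
        return 0.5          # related senses (shared hypernym): partial disagreement
    return 1.0              # unrelated senses: full disagreement

def weighted_observed_disagreement(tags_a, tags_b):
    """Mean weighted disagreement of two coders' sense tags over the same items."""
    return sum(sense_distance(a, b) for a, b in zip(tags_a, tags_b)) / len(tags_a)

# Hypothetical annotations: the coders differ on the second and fourth item.
print(weighted_observed_disagreement(["WN1", "WN1", "WN19", "WN22"],
                                     ["WN1", "WN3", "WN19", "WN1"]))   # -> 0.375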
" } } } }