{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:29:16.480438Z" }, "title": "A Methodology for the Comparison of Human Judgments With Metrics for Coreference Resolution", "authors": [ { "first": "Mariya", "middle": [], "last": "Borovikova", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 Sorbonne Nouvelle", "location": {} }, "email": "mariya.borovikova@sorbonne-nouvelle.fr" }, { "first": "Lo\u00efc", "middle": [], "last": "Grobol", "suffix": "", "affiliation": { "laboratory": "LIFO", "institution": "Universit\u00e9 d'Orl\u00e9ans", "location": {} }, "email": "lgrobol@parisnanterre.fr" }, { "first": "Ana\u00efs", "middle": [], "last": "Lefeuvre-Halftermeyer", "suffix": "", "affiliation": { "laboratory": "LIFO", "institution": "Universit\u00e9 d'Orl\u00e9ans", "location": {} }, "email": "" }, { "first": "Sylvie", "middle": [], "last": "Billot", "suffix": "", "affiliation": { "laboratory": "LIFO", "institution": "Universit\u00e9 d'Orl\u00e9ans", "location": {} }, "email": "sylvie.billot@univ-orleans.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a method for investigating the interpretability of metrics used for the coreference resolution task through comparisons with human judgments. We provide a corpus with annotations of different error types and human evaluations of their gravity. Our preliminary analysis shows that metrics considerably overlook several error types and overlook errors in general in comparison to humans. This study is conducted on French texts, but the methodology should be language-independent.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "We propose a method for investigating the interpretability of metrics used for the coreference resolution task through comparisons with human judgments. We provide a corpus with annotations of different error types and human evaluations of their gravity. Our preliminary analysis shows that metrics considerably overlook several error types and overlook errors in general in comparison to humans. This study is conducted on French texts, but the methodology should be language-independent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Coreference resolution is still one of the most challenging tasks in Natural Language Processing. Several metrics have been proposed to evaluate the task, each of them meant to rectify the weaknesses of the previous ones. However, neither their correctness nor their ability to reflect the real quality of algorithms is easily be provable from their mathematical definition. Consequently, some additional tests should be conducted in order to confirm their pertinence. This work aims to compare the evaluation measures used for coreference resolution task with human judgments, i.e. to study them in terms of interpretability. 
More precisely, B-CUBED (Bagga and Baldwin, 1998) , LEA (Moosavi and Strube, 2016) , CEAFe and CEAFm (Luo, 2005) , CoNLL-2012 (MELA) (Denis and Baldridge, 2009) , BLANC (Recasens and Hovy, 2011) and MUC (Vilain et al., 1995) metrics will be analysed.", "cite_spans": [ { "start": 651, "end": 676, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF1" }, { "start": 683, "end": 709, "text": "(Moosavi and Strube, 2016)", "ref_id": "BIBREF13" }, { "start": 728, "end": 739, "text": "(Luo, 2005)", "ref_id": "BIBREF11" }, { "start": 742, "end": 759, "text": "CoNLL-2012 (MELA)", "ref_id": null }, { "start": 760, "end": 787, "text": "(Denis and Baldridge, 2009)", "ref_id": "BIBREF2" }, { "start": 796, "end": 821, "text": "(Recasens and Hovy, 2011)", "ref_id": "BIBREF19" }, { "start": 830, "end": 851, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although some properties of coreference resolution quality measures have already been studied in Lion-Bouton et al. (2020) , Moosavi (2020) , Kummerfeld and Klein (2013) and others, to the best of our knowledge, there are no works dedicated to the comparison between automatic measurements and human evaluation of performance for this task. However, a few similar studies have been conducted in other domains. Doshi-Velez and Kim (2017) study the interpretability of machine learning models, in general, using application-grounded, human-grounded, and functionally-grounded approaches. Foster (2008) describes an experiment in evaluating the non-verbal behaviour of an embodied conversational agent. People were asked to choose the more appropriate of two talking heads generated using different strategies. Then the \u03b2 inter-annotator agreement measure (Artstein and Poesio, 2008) was calculated.", "cite_spans": [ { "start": 97, "end": 122, "text": "Lion-Bouton et al. (2020)", "ref_id": "BIBREF10" }, { "start": 125, "end": 139, "text": "Moosavi (2020)", "ref_id": "BIBREF12" }, { "start": 142, "end": 169, "text": "Kummerfeld and Klein (2013)", "ref_id": "BIBREF6" }, { "start": 408, "end": 434, "text": "Doshi-Velez and Kim (2017)", "ref_id": "BIBREF3" }, { "start": 584, "end": 597, "text": "Foster (2008)", "ref_id": "BIBREF4" }, { "start": 851, "end": 878, "text": "(Artstein and Poesio, 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In Plank et al. (2015) , the correlation between metrics for the dependency parsing task and human judgments was examined. Several models were tested for different languages. The annotators had to choose the better of the two annotations predicted by two different models without knowing the correct option. The obtained results were normalised using Spearman's \u03c1 and compared with standard metrics. Novikova et al. (2017) explore Natural Language Generation (NLG) evaluation measures. The annotation process is organised as follows: an annotator should score an example using three Likert scales from 0 to 6 based on informativeness, naturalness and quality criteria. The obtained results were normalised using Spearman and intra-class correlation coefficients and compared with NLG metrics.", "cite_spans": [ { "start": 3, "end": 22, "text": "Plank et al. (2015)", "ref_id": "BIBREF17" }, { "start": 398, "end": 420, "text": "Novikova et al. 
(2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Considering these studies, for the present research, we will use an approach similar to Novikova et al. (2017) , where the annotators evaluate a system on a Likert scale. Despite possible difficulties with the Likert scale treatment (too many mid-point answers, a broad spectrum of responses for one question, etc.), this method seems more appropriate for our purposes.", "cite_spans": [ { "start": 88, "end": 110, "text": "Novikova et al. (2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Two main reasons make us choose this approach: (1) we do not test particular systems and, therefore, have no alternative annotations and (2) a scaled approach is more accurate and exact while evaluating a system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "This section is dedicated to the theoretical description of the methods used in the experiments within the scope of this study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "In order to correctly evaluate the quality of the algorithm, it is necessary to consider all the types of errors it can produce and, therefore, to define those types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Errors typology", "sec_num": "3.1" }, { "text": "For our purposes, we have chosen the typology of Landragin and Oberle (2018):1. Border errors occur when limits of referential expressions are marked inaccurately;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Errors typology", "sec_num": "3.1" }, { "text": "2. Type errors occur when a referential expression is assigned to a false chain;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Errors typology", "sec_num": "3.1" }, { "text": "3. Noise errors occur when irrelevant linguistic expressions are marked as a part of a coreference chain;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Errors typology", "sec_num": "3.1" }, { "text": "4. Silence errors occur when a system ignores referential expressions which are included in a relevant coreference chain;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Errors typology", "sec_num": "3.1" }, { "text": "5. Tendency of irrelevant coreference chains construction occurs when a system composes a new chain from several unrelated mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Errors typology", "sec_num": "3.1" }, { "text": "We use this typology because it is more comprehensive than others and reflects the semantic aspect of the problem. However, we need to introduce an additional error type which we call \"chain absence\". This error may be regarded as a form of the \"silence\" error, and it occurs when the whole coreference chain (entity) is missing. The necessity of introducing a new error type arose after the experimentation phase of this study as it allowed to explain some patterns in the behaviour of the metrics. 
You can find the examples for each error type in the appendix section 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Errors typology", "sec_num": "3.1" }, { "text": "Our corpus consists of a series of texts, each with two coreference annotations: one is a manual gold annotation, and the other is a purposefully erroneous annotation containing one or more manually introduced errors of one of the types defined in section 3.1. There are also a few examples with errors of different types. Two existing coreference resolution corpora for French were used as a basis for the corpus. 52 texts were taken from the DEMOCRAT corpus (Landragin, 2018) and 4 examples from the ANCOR corpus (Muzerelle et al., 2014). More precisely, we have selected self-standing passages that are understandable out of context. The corpora are collected in the CoNLL-2012 format (Pradhan et al., 2012) 1 . The final dataset consists of 127 passages of 90-130 words each. 108 examples contain only one error, allowing us to analyse to what extent each error reduces the overall system quality. The rest of the samples are needed to adjust the annotations. Coreference chain lengths vary from 2 to 20 mentions. The mentions that contain an error were chosen at random. The total number of each error type in the 108 single-error samples varies between 16 and 28, and between 44 and 97 in the whole corpus.", "cite_spans": [ { "start": 450, "end": 467, "text": "(Landragin, 2018)", "ref_id": "BIBREF8" }, { "start": 506, "end": 530, "text": "(Muzerelle et al., 2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus creation", "sec_num": "3.2" }, { "text": "As the primary goal of this study is to evaluate the interpretability of the metrics, it is necessary to compare them to human opinions about the correctness of the system's responses. Even though the metrics' output values are between 0 and 1, we will not use this range as it is more natural for people to evaluate the quality on an integer scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation scale", "sec_num": "3.3" }, { "text": "For our study, we use a Likert scale (Likert, 1932) with an even number of choices in order to avoid too many mid-point answers. Usually, coreference resolution is only a part of a pipeline of a more complex system, and the appropriate way of evaluating it depends on the task being addressed. In this study, an information retrieval task has been chosen as a global framework. These conditions require some changes in the classic scale; namely, we introduce a notion of the \"importance\" of an element. We distinguish two types of elements: peripheral elements and key elements. Peripheral elements can be removed from a text without severely affecting its general sense. Key elements constitute the core of a text, so their removal will lead to the total loss of meaning. 
Thus, the gravity of an error and the importance of an element with an error are taken into account.", "cite_spans": [ { "start": 37, "end": 51, "text": "(Likert, 1932)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation scale", "sec_num": "3.3" }, { "text": "The scale also allocates two points to each intermediate category to allow differentiation between similar examples with subtle nuances: (0) The presumed system's annotation contains significant errors on key elements; (1-2) The presumed system's annotation contains significant errors on peripheral elements; (3-4) The presumed system's annotation contains insignificant errors on key elements; (5-6) The presumed system's annotation contains insignificant errors on peripheral elements; (7) The presumed system's annotation does not contain any errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation scale", "sec_num": "3.3" }, { "text": "Every annotation sample contains a correct annotation and an annotation with mistakes. In order to detect inconsistent annotators, three samples appear twice. The objective given to the annotators is to evaluate coreference resolution samples as a part of an information retrieval system using the Likert scale described in section 3.3. General instructions given before the annotations explain all the necessary concepts 2 (the Google form with the instructions is available at https://forms.gle/cgpsfZvKg5zasnqd6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.4" }, { "text": "As an inter-annotator agreement measure, Krippendorff's alpha (Krippendorff, 1970) has been chosen and used to identify annotators whose answers differ greatly from the others', using a new algorithm (see algorithm 1 in the appendix). Krippendorff's alpha is computed for all the possible combinations of annotators. Then, these combinations and their scores are sorted by ascending alpha score. We assume that those annotators whose rank is below the others are more important. In order to consider the differences between the alpha scores, the ranks are multiplied by their corresponding alpha scores. The final score of each annotator is the sum of the values obtained. These values allow us to understand the annotators' ranking as better annotators have a higher score, but even with these values it remains unclear how to detect the outliers. In order to do this, we divide all the scores by the maximal value.", "cite_spans": [ { "start": 62, "end": 82, "text": "(Krippendorff, 1970)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.4" }, { "text": "The coefficients obtained by the algorithm (hereinafter the trust coefficients) allow us to detect outliers (an annotator is considered an outlier if their score is less than or equal to 0.5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.4" }, { "text": "In order to interpret the reasoning of each respondent, regressors have been trained to imitate the annotators' and metrics' behaviours. Each model should predict a score having the number of occurrences of each error type as input features. We have trained one model for each annotator and metric. Once the models are trained, the weights assigned to each feature (error type) are extracted and used for further interpretation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.4" },
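To make the outlier-detection procedure of section 3.4 concrete, here is a minimal sketch of how the trust coefficients could be computed. It assumes ratings stored as an annotators-by-samples matrix, the krippendorff Python package for the alpha computation, and that "all the possible annotators combinations" means every subset of at least two annotators; these choices, and all the names used, are our assumptions rather than details given by the authors.

```python
from itertools import combinations

import krippendorff  # assumed dependency, not specified in the paper
import numpy as np


def trust_coefficients(ratings: np.ndarray) -> np.ndarray:
    """Trust coefficients for the annotators, as described in section 3.4.

    `ratings` has shape (n_annotators, n_samples); missing answers are np.nan.
    """
    n_annotators = ratings.shape[0]
    scores = np.zeros(n_annotators)
    # Krippendorff's alpha for every combination of at least two annotators.
    alphas = []
    for size in range(2, n_annotators + 1):
        for combo in combinations(range(n_annotators), size):
            alpha = krippendorff.alpha(
                reliability_data=ratings[list(combo)],
                level_of_measurement="ordinal",
            )
            alphas.append((combo, alpha))
    # Sort the combinations by ascending alpha; each annotator accumulates
    # rank * alpha for every combination they belong to.
    alphas.sort(key=lambda pair: pair[1])
    for rank, (combo, alpha) in enumerate(alphas, start=1):
        for annotator in combo:
            scores[annotator] += rank * alpha
    # Normalise by the best score so that the coefficients lie at or below 1.
    return scores / scores.max()
```

Under this reading, an annotator would then be flagged as an outlier when their trust coefficient is less than or equal to 0.5.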
{ "text": "Human evaluation analysis. Since participation in this study was not rewarded and the questionnaire contained many questions, the study involved only 12 participants, 9 of whom were linguists and 8 of whom had already worked with coreference. The analysis of the three duplicated questions showed that none of the annotators answered at random. Krippendorff's alpha is rather low, so we supposed that some questions in our questionnaire raised more confusion among the respondents than others. Therefore, we eliminated the questions for which the annotators gave more than three different answers and computed the results only for the remaining simple questions. The total number of questions used in the main analysis is 97. We also decided to compute the inter-annotator agreement on a reduced scale from 0 to 4 points (0 \u2192 0, 1 and 2 \u2192 1, 3 and 4 \u2192 2, 5 and 6 \u2192 3, 7 \u2192 4) and on the gravity (no errors -insignificant error(s) -significant error(s)) and elements importance (no errors -error(s) on peripheral element -error(s) on key element) scales. These agreements are presented in table 1. Human-machine correlation analysis. In order to compare the obtained scores with human judgments, we calculated the average and the mode of the human evaluations, having previously transformed them to a scale from 0 to 1. Every metric was compared with the annotators' assessment on the standard scale, on the reduced scale and on the scale with error gravity evaluation only. According to the data distributions, in general, the difference between a metric and humans is about 0.33. The averages of the differences for all the examples are given in table 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "4" }, { "text": "Analysis by error type. In order to analyse the influence of a particular error type on a score, we train a linear regression model with the number of errors of each type as the input features and the reversed scores 3 as the outputs. All the input features were centered and reduced (standardised) in order to obtain more stable results. The coefficients that were assigned to each input feature (and which correspond to one of the error types) during the training have been used as a measure of the importance of an error in the process of deciding the example's score (see tables 3 and 4). Table 3 : Coefficients of error importances. \"Humans\" is the average of all the coefficients of models trained on humans' evaluations. See a more detailed version in the appendix (table 4).", "cite_spans": [], "ref_spans": [ { "start": 471, "end": 478, "text": "Table 3", "ref_id": null }, { "start": 651, "end": 660, "text": "(table 4)", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Experiments and results", "sec_num": "4" },
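As an illustration of the analysis by error type described just above, the sketch below fits one linear regression per rater (an annotator or a metric) on standardised error counts and reads the fitted coefficients off as error-importance weights. The use of scikit-learn, the function and variable names, and the interpretation of "centered and reduced" as standardisation are our assumptions, not details given in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# The six error types of section 3.1 (the order is arbitrary here).
ERROR_TYPES = ["border", "type", "noise", "silence", "irrelevant chains", "chain absence"]


def error_type_weights(error_counts: np.ndarray, scores: np.ndarray) -> dict:
    """Fit one model for a single rater and return its per-error-type weights.

    `error_counts` has shape (n_samples, 6): how many errors of each type were
    introduced in each sample.  `scores` holds that rater's scores for the same
    samples (for the annotators, the paper uses the reversed Likert scores,
    cf. footnote 3).
    """
    features = StandardScaler().fit_transform(error_counts)  # centered and reduced
    model = LinearRegression().fit(features, scores)
    # The fitted coefficients are used as the weight the rater gives to each
    # error type when scoring a sample (cf. tables 3 and 4).
    return dict(zip(ERROR_TYPES, model.coef_))
```

One such model would be trained for every annotator and every metric, and their coefficient vectors compared directly.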
{ "text": "Human evaluation analysis. Table 1 reports the inter-annotator agreement on the different scales and reveals several interesting properties of the task. Firstly, we may observe that the reduced scale results are better than those on the standard scale. This can be explained by the fact that even if people agree on the characteristics of the suggested categories, each of them has their own bias about the task, so they pay attention to different annotation nuances. Secondly, the inter-annotator agreement increased when we eliminated the annotators identified as outliers by the trust coefficient. Human-machine correlation analysis. One may notice that the average scores of all annotators are relatively high (see table 2). The average difference between all metrics and the annotators is usually above 0 and varies from 0.2 to 0.4 after normalisation, which shows that, generally, metrics tend to significantly overestimate the actual quality of a model.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 708, "end": 715, "text": "table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Analysis by error type. In order to perform the analysis regarding the error types, we modified table 4 by removing all positive and null coefficients, as they indicate either the absence of answers for a particular error type or insufficient training quality of some models. These modifications can be justified by the fact that every coefficient of the model should be negative. Otherwise, it would mean that the presence of an error improves a score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "As our analysis shows, the border, silence and irrelevant chains construction errors are treated correctly. This is supported by the fact that the metrics' coefficients are similar to the human ones. The type, noise and chain absence errors are underestimated by the metrics, as their coefficients are usually higher for the metrics than for the humans (see the corresponding columns of table 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "We can analyse each metric separately as well. Firstly, we have noticed that the MUC metric considerably underestimates all types of errors except for the \"silence\" and the \"irrelevant chains\" ones. Secondly, the B-CUBED measure assigns relevant scores only to the examples which contain \"border\" and \"silence\" errors. The CEAFe score correctly estimates only the examples with \"border\" and \"irrelevant chains\" errors. Similarly, the CEAFm metric also underestimates all error types except for the \"border\" and \"irrelevant chains\" ones. The BLANC measure treats properly only texts with \"silence\" errors. We observe that the CoNLL-2012 metric tends to overstate the results of a model when the examples contain any errors except for \"border\" errors. Likewise, the LEA metric considerably underestimates all error types except for \"border\", \"silence\" and \"irrelevant chains\" errors (see the corresponding rows of table 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "This study aims to investigate the extent to which we may understand the results produced by the coreference resolution metrics. The preliminary results on the limited corpus show that metrics underestimate error gravity compared to humans and add approximately 0.33 points to the final score on a scale from 0 to 1. 
However, these results need to be confirmed with a larger number of annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This work's contributions consist in the creation of a corpus with various error types and its annotation with human judgments about the gravity of these errors, the proposal of a new algorithm for automatically identifying outlying annotators, and the suggestion of a methodology for comparing human evaluations with automatic metrics. All the code and corpus are available at https://github.com/project178/coref-metrics-vs-humans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Possible future work directions include involving more people in the annotation of the proposed corpus in order to verify the obtained results, and developing a new metric that takes into consideration the identified shortcomings of the existing measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://github.com/boberle/coreference_databases.git", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We replaced 7 by 0, 6 by 1, 5 by 2, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was funded by R\u00e9gion Centre-Val-de-Loire through the RTR DIAMS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Inter-coder Agreement for Computational Linguistics", "authors": [ { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "4", "pages": "555--596", "other_ids": { "DOI": [ "10.1162/coli.07-034-R2" ] }, "num": null, "urls": [], "raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder Agreement for Computational Linguistics. Computational Linguistics, 34(4):555-596.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Algorithms for scoring coreference chains", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "The first international conference on language resources and evaluation workshop on linguistics coreference", "volume": "1", "issue": "", "pages": "563--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. 
In The first international conference on language resources and evaluation workshop on linguistics coreference, volume 1, pages 563-566. Citeseer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Global joint models for coreference resolution and named entity classification", "authors": [ { "first": "Pascal", "middle": [], "last": "Denis", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2009, "venue": "Procesamiento del lenguaje natural", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascal Denis and Jason Baldridge. 2009. Global joint models for coreference resolution and named entity classification. Procesamiento del lenguaje natural, 42.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Towards a rigorous science of interpretable machine learning", "authors": [ { "first": "Finale", "middle": [], "last": "Doshi", "suffix": "" }, { "first": "-", "middle": [], "last": "Velez", "suffix": "" }, { "first": "Been", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.08608" ] }, "num": null, "urls": [], "raw_text": "Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automated metrics that agree with human judgements on generated output for an embodied conversational agent", "authors": [ { "first": "Mary", "middle": [ "Ellen" ], "last": "Foster", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Fifth International Natural Language Generation Conference", "volume": "", "issue": "", "pages": "95--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mary Ellen Foster. 2008. Automated metrics that agree with human judgements on generated output for an embodied conversational agent. In Proceedings of the Fifth International Natural Language Generation Conference, pages 95-103, Salt Fork, Ohio, USA. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bivariate agreement coefficients for reliability of data", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 1970, "venue": "Sociological methodology", "volume": "2", "issue": "", "pages": "139--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus Krippendorff. 1970. Bivariate agreement coeffi- cients for reliability of data. Sociological methodology, 2:139-150.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Errordriven analysis of challenges in coreference resolution", "authors": [ { "first": "Jonathan", "middle": [ "K" ], "last": "Kummerfeld", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "265--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan K. Kummerfeld and Dan Klein. 2013. Error- driven analysis of challenges in coreference resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 265-277, Seattle, Washington, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Identification automatique de cha\u00eenes de cor\u00e9f\u00e9rences : vers une analyse des erreurs pour mieux cibler l'apprentissage. In Journ\u00e9e commune AFIA-ATALA sur le Traitement Automatique des Langues et l'Intelligence Artificielle pendant la onzi\u00e8me \u00e9dition de la plate-forme Intelligence Artificielle", "authors": [ { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Landragin", "suffix": "" }, { "first": "Bruno", "middle": [], "last": "Oberle", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fr\u00e9d\u00e9ric Landragin and Bruno Oberle. 2018. Identification automatique de cha\u00eenes de cor\u00e9f\u00e9rences : vers une analyse des erreurs pour mieux cibler l'apprentissage. In Journ\u00e9e commune AFIA-ATALA sur le Traitement Automatique des Langues et l'Intelligence Artifi- cielle pendant la onzi\u00e8me \u00e9dition de la plate-forme Intelligence Artificielle (PFIA 2018), Nancy, France.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "LIVRABLE L2 \"Manuel d'annotation du corpus et organisation de formations sur l'annotation\" du projet DEMOCRAT. Research report, Lattice and LiLPa and ICAR and IHRIM", "authors": [ { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Landragin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fr\u00e9d\u00e9ric Landragin. 2018. LIVRABLE L2 \"Manuel d'annotation du corpus et organisation de formations sur l'annotation\" du projet DEMOCRAT. Research report, Lattice and LiLPa and ICAR and IHRIM.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A technique for the measurement of attitudes. Archives of psychology", "authors": [ { "first": "Rensis", "middle": [], "last": "Likert", "suffix": "" } ], "year": 1932, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of psychology.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Comment arpenter sans m\u00e8tre : les scores de r\u00e9solution de cha\u00eenes de cor\u00e9f\u00e9rences sont-ils des m\u00e9triques ? (do the standard scores of evaluation of coreference resolution constitute metrics ?)", "authors": [ { "first": "Adam", "middle": [], "last": "Lion-Bouton", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Grobol", "suffix": "" }, { "first": "Jean-Yves", "middle": [], "last": "Antoine", "suffix": "" }, { "first": "Sylvie", "middle": [], "last": "Billot", "suffix": "" }, { "first": "Ana\u00efs", "middle": [], "last": "Lefeuvre-Halftermeyer", "suffix": "" } ], "year": 2020, "venue": "Rencontre des \u00c9tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R\u00c9CITAL, 22e \u00e9dition). 2e atelier \u00c9thique et TRaitemeNt Automatique des Langues (ETeR-NAL)", "volume": "", "issue": "", "pages": "10--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Lion-Bouton, Lo\u00efc Grobol, Jean-Yves Antoine, Sylvie Billot, and Ana\u00efs Lefeuvre-Halftermeyer. 2020. Comment arpenter sans m\u00e8tre : les scores de r\u00e9solution de cha\u00eenes de cor\u00e9f\u00e9rences sont-ils des m\u00e9triques ? (do the standard scores of evaluation of coreference resolution constitute metrics ?). 
In Actes de la 6e conf\u00e9rence conjointe Journ\u00e9es d'\u00c9tudes sur la Parole (JEP, 33e \u00e9dition), Traitement Automatique des Langues Naturelles (TALN, 27e \u00e9dition), Rencontre des \u00c9tudiants Chercheurs en Informatique pour le Traitement Automa- tique des Langues (R\u00c9CITAL, 22e \u00e9dition). 2e atelier \u00c9thique et TRaitemeNt Automatique des Langues (ETeR- NAL), pages 10-18, Nancy, France. ATALA et AFCP.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On coreference resolution performance metrics", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution perfor- mance metrics. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25-32, Vancouver, British Columbia, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Robustness in Coreference Resolution", "authors": [ { "first": "Nafise Sadat", "middle": [], "last": "Moosavi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nafise Sadat Moosavi. 2020. Robustness in Coreference Resolution. Ph.D. thesis.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Which coreference evaluation metric do you trust? a proposal for a link-based entity aware metric", "authors": [ { "first": "Sadat", "middle": [], "last": "Nafise", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Moosavi", "suffix": "" }, { "first": "", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "632--642", "other_ids": { "DOI": [ "10.18653/v1/P16-1060" ] }, "num": null, "urls": [], "raw_text": "Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? a proposal for a link-based entity aware metric. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 632-642, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "ANCOR_Centre, a large free spoken French coreference corpus: description of the resource and reliability measures", "authors": [ { "first": "Judith", "middle": [], "last": "Muzerelle", "suffix": "" }, { "first": "Ana\u00efs", "middle": [], "last": "Lefeuvre", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Schang", "suffix": "" }, { "first": "Jean-Yves", "middle": [], "last": "Antoine", "suffix": "" }, { "first": "Aurore", "middle": [], "last": "Pelletier", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "843--847", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judith Muzerelle, Ana\u00efs Lefeuvre, Emmanuel Schang, Jean-Yves Antoine, Aurore Pelletier, Denis Maurel, Iris Eshkol, and Jeanne Villaneau. 2014. 
ANCOR_Centre, a large free spoken French coreference corpus: de- scription of the resource and reliability measures. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 843-847, Reykjavik, Iceland. European Language Resources Association (ELRA).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Why we need new evaluation metrics for NLG", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D17-1238" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evalu- ation metrics for NLG. In Proceedings of the 2017", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Conference on Empirical Methods in Natural Language Processing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "2241--2252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conference on Empirical Methods in Natural Language Processing, pages 2241-2252, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Do dependency parsing metrics correlate with human judgments?", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "\u017deljko", "middle": [], "last": "H\u00e9ctor Mart\u00ednez Alonso", "suffix": "" }, { "first": "Danijela", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Merkler", "suffix": "" }, { "first": "", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "315--320", "other_ids": { "DOI": [ "10.18653/v1/K15-1033" ] }, "num": null, "urls": [], "raw_text": "Barbara Plank, H\u00e9ctor Mart\u00ednez Alonso, \u017deljko Agi\u0107, Danijela Merkler, and Anders S\u00f8gaard. 2015. Do dependency parsing metrics correlate with human judgments? In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 315-320, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sameer Pradhan", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", "volume": "", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coref- erence in OntoNotes. In Joint Conference on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea. 
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "BLANC: Implementing the Rand index for coreference evaluation", "authors": [ { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2011, "venue": "Natural Language Engineering", "volume": "17", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Recasens and Eduard Hovy. 2011. BLANC: Im- plementing the Rand index for coreference evaluation. Natural Language Engineering, 17(4):485.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A modeltheoretic coreference scoring scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "D", "middle": [], "last": "John", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "Dennis", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "Lynette", "middle": [], "last": "Connolly", "suffix": "" }, { "first": "", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 1995, "venue": "Sixth Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John D Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in Columbia, Maryland, November 6-8, 1995.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Error types examples.", "type_str": "figure", "uris": null }, "TABREF1": { "type_str": "table", "content": "", "text": "Krippendorff's alphas. An arrow shows that there are outlier annotators on the particular scale and set of examples. A value on the right of an arrow is an alpha after removing outlier annotators.", "html": null, "num": null }, "TABREF3": { "type_str": "table", "content": "
Name | Border | Type | Noise | Silence | Irrelevant chains | Chain absence
MUC | \u22120.242 | \u22120.249 | \u22120.121 | \u22120.58 | \u22120.345 | \u22120.076
B-CUBED | \u22120.662 | \u22120.15 | - | \u22120.889 | - | \u22120.264
CEAFm | \u22120.325 | \u22120.34 | \u22120.139 | \u22120.408 | \u22120.353 | \u22120.101
CEAFe | \u22120.458 | \u22120.283 | \u22120.322 | \u22120.447 | \u22120.222 | -
CoNLL | \u22120.382 | \u22120.217 | \u22120.083 | \u22120.556 | \u22120.179 | \u22120.187
BLANC | \u22120.174 | \u22120.385 | \u22120.233 | \u22120.973 | \u22120.074 | \u22120.56
LEA | \u22120.425 | \u22120.22 | \u22120.207 | 0.73 | \u22120.432 | -
Humans | \u22120.343 | \u22120.629 | \u22120.598 | \u22120.513 | \u22120.467 | \u22120.727
", "text": "Differences between humans evaluations and metrics on the scale from 0 to 1.", "html": null, "num": null }, "TABREF4": { "type_str": "table", "content": "
A Appendix
", "text": "1. Border errors. Whales are marine mammals . instead of Whales are marine mammals . 2. Type errors. John likes his brother because he is funny instead of John likes his brother because he is funny. 3. Noise errors. The dog barked. It 's time to go. instead of The dog barked. It's time to go. 4. Silence errors. A phone is on the table. It rings. I pick it up instead of A phone is on the table. It rings. I pick it up. 5. Tendency of irrelevant coreference chains construction. A cat and a dog are playing together instead of A cat and a dog are playing together. 6. Chain absence. A phone is on the table. It rings. I pick it up instead of A phone is on the table. It rings. I pick it up.", "html": null, "num": null }, "TABREF7": { "type_str": "table", "content": "", "text": "Coefficients of error importances obtained during the regressors training for all metrics and annotators. Values in bold are reported by metrics' regressors. Values in italic are reported by a regressor trained on a mean answer on the gravity scale.", "html": null, "num": null } } } }