{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:51:30.733938Z" }, "title": "Flamingos and Hedgehogs in the Croquet-Ground: Teaching Evaluation of NLP Systems for Undergraduate Students", "authors": [ { "first": "Brielen", "middle": [], "last": "Madureira", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Potsdam", "location": {} }, "email": "madureiralasota@uni-potsdam.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This report describes the course Evaluation of NLP Systems, taught for Computational Linguistics undergraduate students during the winter semester 20/21 at the University of Potsdam, Germany. It was a discussion-based seminar that covered different aspects of evaluation in NLP, namely paradigms, common procedures, data annotation, metrics and measurements, statistical significance testing, best practices and common approaches in specific NLP tasks and applications. 1 Motivation \"Alice soon came to the conclusion that it was a very difficult game indeed.\" 1 When the Queen of Hearts invited Alice to her croquet-ground, Alice had no idea how to play that strange game with flamingos and hedgehogs. NLP newcomers may be as puzzled as her when they enter the Wonderland of NLP and encounter a myriad of strange new concepts: Baseline, F1 score, glass box, ablation, diagnostic, extrinsic and intrinsic, performance, annotation, metrics, humanbased, test suite, shared task.. . Although experienced researchers and practitioners may easily relate them to the evaluation of NLP", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This report describes the course Evaluation of NLP Systems, taught for Computational Linguistics undergraduate students during the winter semester 20/21 at the University of Potsdam, Germany. 
It was a discussion-based seminar that covered different aspects of evaluation in NLP, namely paradigms, common procedures, data annotation, metrics and measurements, statistical significance testing, best practices and common approaches in specific NLP tasks and applications. 1 Motivation \"Alice soon came to the conclusion that it was a very difficult game indeed.\" 1 When the Queen of Hearts invited Alice to her croquet-ground, Alice had no idea how to play that strange game with flamingos and hedgehogs. NLP newcomers may be as puzzled as she was when they enter the Wonderland of NLP and encounter a myriad of strange new concepts: baseline, F1 score, glass box, ablation, diagnostic, extrinsic and intrinsic, performance, annotation, metrics, human-based, test suite, shared task... Although experienced researchers and practitioners may easily relate them to the evaluation of NLP", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "models and systems, for newcomers like undergraduate students it is not simply a matter of looking up their definition. It is necessary to show them the big picture of what and how we play in the croquet-ground of evaluation in NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The NLP community clearly cares for doing proper evaluation. From earlier works like the book by Karen Sp\u00e4rck Jones and Julia R. Galliers (1995) to the winner of ACL 2020 best paper award (Ribeiro et al., 2020) and recent dedicated workshops, e.g. Eger et al. (2020) , the formulation of evaluation methodologies has been a prominent topic in the field.", "cite_spans": [ { "start": 129, "end": 144, "text": "Galliers (1995)", "ref_id": "BIBREF20" }, { "start": 188, "end": 210, "text": "(Ribeiro et al., 2020)", "ref_id": "BIBREF30" }, { "start": 248, "end": 266, "text": "Eger et al. 
(2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Despite its importance, evaluation is usually covered very briefly in NLP courses due to a tight schedule. Teachers barely have time to discuss dataset splits, simple metrics like accuracy, precision, recall and F1 Score, and some techniques like cross validation. As a result, students end up learning about evaluation on-the-fly as they begin their careers in NLP. The lack of structured knowledge may cause them to be unacquainted with the multifaceted metrics and procedures, which can render them partially unable to evaluate models critically and responsibly. The leap from that one lecture to what is expected in good NLP papers and software should not be underestimated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The course Evaluation of NLP Systems, which I taught for undergraduate Computational Linguistics students in the winter semester of 20/21 at the University of Potsdam, Germany, was a reading and discussion-based learning approach with three main goals: i) helping participants become aware of the importance of evaluation in NLP; ii) discussing different evaluation methods, metrics and techniques; and iii) showing how evaluation is being done for different NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The following sections provide an overview of the course content and structure. With some adaptation, this course can also be suitable for more advanced students.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Kinds of evaluation and main steps, e.g. 
intrinsic and extrinsic, manual and automatic, black box and glass box.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paradigms", "sec_num": null }, { "text": "Overview of the use of measurements, baselines, dataset splits, cross-validation, error analysis, ablation, human evaluation and comparisons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Procedures", "sec_num": null }, { "text": "How to annotate linguistic data, how to evaluate the annotation, and how the annotation scheme can affect the evaluation of a system's performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": null }, { "text": "Outline of the different metrics commonly used in NLP, what they aim to quantify and how to interpret them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics and Measurements", "sec_num": null }, { "text": "Hypothesis testing for comparing the performance of two systems on the same dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Significance Testing", "sec_num": null }, { "text": "The linguistic aspect of NLP, reproducibility and the social impact of NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Best Practices", "sec_num": null }, { "text": "Group presentations about specific approaches in four NLP tasks/applications (machine translation, natural language generation, dialogue and speech synthesis) and related themes (the history of evaluation, shared tasks, ethics and ACL's code of conduct and the replication crisis). 2 Course Content and Format The course took place entirely online due to the pandemic. It was divided into two parts. In the first half of the semester, students learned about the evaluation methods used in NLP in general and, to some extent, in machine learning. After each meeting, I posted a pre-recorded short lecture, slides and a reading list about the next week's content. 
The participants thus had one week to work through the material at any time before the next meeting slot. I provided diverse sources like papers, blog posts, tutorials, slides and videos.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NLP Case Studies", "sec_num": null }, { "text": "I started the online meetings with a wrap-up and feedback about the previous week's content. Then, I randomly split the participants into groups of 3 or 4 in breakout sessions so that they could discuss a worksheet together for about 45 minutes. I encouraged them to use this occasion to profit from the interaction and brainstorming with their peers and exchange arguments and thoughts (schedule: https://briemadu.github.io/evalNLP/schedule). After the meeting, they had one week to write down their solutions individually and submit them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NLP Case Studies", "sec_num": null }, { "text": "In the second half of the semester, they were divided into four groups to analyze how evaluation is being done in specific NLP tasks. For larger classes, other NLP tasks can be added. They prepared group presentations and discussion topics according to general guidelines and an initial bibliography that they could expand. Students provided anonymous feedback about each other's presentations to me, and I then shared it with the presenters, which gave me the chance to filter out abusive or offensive comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NLP Case Studies", "sec_num": null }, { "text": "The last lecture was a tutorial about useful metrics available in scikit-learn and nltk Python libraries using Jupyter Notebook (Kluyver et al., 2016) .", "cite_spans": [ { "start": 128, "end": 150, "text": "(Kluyver et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "NLP Case Studies", "sec_num": null }, { "text": "Finally, they had six weeks to work on a final project. 
Students could select one of the following three options: i) a critical essay on the development and current state of evaluation in NLP, discussing the positive and negative aspects and where to go from here; ii) a detailed hands-on evaluation of an NLP system of their choice, which could be, for example, an algorithm they implemented for another course; or iii) a summary of the course in the format of a small newspaper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NLP Case Studies", "sec_num": null }, { "text": "Seventeen bachelor students of Computational Linguistics attended the course. At the University of Potsdam, this seminar falls into the category of a module called Methods of Computational Linguistics, which is intended for students in the 5th semester of their bachelor's course. Still, one student in the 3rd semester and many students in higher semesters also took part.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants", "sec_num": "3" }, { "text": "By the 5th semester, students are expected to have completed introductory courses on linguistics (phonetics and phonology, syntax, morphology, semantics and psycho- and neurolinguistics), computational linguistics techniques, computer science and programming (finite state automata, advanced Python and other courses of their choice), introduction to statistics and empirical methods and foundations of mathematics and logic, as well as varying seminars related to computational linguistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants", "sec_num": "3" }, { "text": "Although there were no formal requirements for taking this course, students should preferably be familiar with some common tasks and practices in NLP and the basics of statistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants", "sec_num": "3" }, { "text": "I believe this course successfully introduced students to several fundamental principles of evaluation 
in NLP. The quality of their submissions, especially the final project, was, in general, very high. By knowing how to properly manage flamingos and hedgehogs, they will hopefully be spared the sentence \"off with their heads!\" as they continue their careers in NLP. The game is not very difficult when one learns the rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outcomes", "sec_num": "4" }, { "text": "Students gave very positive feedback at the end of the semester about the content, the literature and the format. They particularly enjoyed the opportunity to discuss with each other, saying it was good to exchange what they recalled from the reading. They also stated that what they learned contributed to their understanding in other courses and improved their ability to document and evaluate models they implement. The course was also useful for them to start reading more scientific literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outcomes", "sec_num": "4" }, { "text": "In terms of improvements, they mentioned that the weekly workload could be reduced. They also reported that the reading for the week when we covered statistical significance testing was too advanced. Still, they could do the worksheet since it did not dive deep into the theory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outcomes", "sec_num": "4" }, { "text": "The syllabus, slides and suggested readings are available on the course's website. 3 The references here list the papers and books used to put together the course and have no ambition of being exhaustive. If this course is replicated, the references should be updated with the most recent papers. I can share the worksheets and guidelines for the group presentation and the project upon request. 
Feedback from readers is very welcome.", "cite_spans": [ { "start": 83, "end": 84, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Outcomes", "sec_num": "4" }, { "text": "Alice in Wonderland by Lewis Carroll, public domain. Illustration by John Tenniel, public domain, via Wikimedia Commons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://briemadu.github.io/evalNLP/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "In this course, I was inspired by and used material made available online by many people, to whom I am thankful. I also thank the students, who were very engaged during the semester and made it a rewarding experience for me. Moreover, I am grateful to the anonymous reviewers for their detailed and encouraging feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Verification and validation of language processing systems: Is it evaluation?", "authors": [ { "first": "Valerie", "middle": [], "last": "Barr", "suffix": "" }, { "first": "Judith", "middle": [ "L" ], "last": "Klavans", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the ACL 2001 Workshop on Evaluation Methodologies for Language and Dialogue Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valerie Barr and Judith L. Klavans. 2001. Verification and validation of language processing systems: Is it evaluation? 
In Proceedings of the ACL 2001 Workshop on Evaluation Methodologies for Language and Dialogue Systems.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analysis methods in neural language processing: A survey", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "49--72", "other_ids": { "DOI": [ "10.1162/tacl_a_00254" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "That's nice... what can you do with it? Computational Linguistics", "authors": [ { "first": "Anja", "middle": [], "last": "Belz", "suffix": "" } ], "year": 2009, "venue": "", "volume": "35", "issue": "", "pages": "111--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anja Belz. 2009. That's nice... what can you do with it? Computational Linguistics, 35(1):111-118.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Batya", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "587--604", "other_ids": { "DOI": [ "10.1162/tacl_a_00041" ] }, "num": null, "urls": [], "raw_text": "Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. 
Transactions of the Association for Computational Linguistics, 6:587-604.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An empirical investigation of statistical significance in NLP", "authors": [ { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "David", "middle": [], "last": "Burkett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "995--1005", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995-1005, Jeju Island, Korea. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "(meta-) evaluation of machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "136--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (meta-) evaluation of machine translation. 
In Proceedings of the Second Workshop on Statistical Machine Translation, pages 136-158, Prague, Czech Republic. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Evaluation of text generation: A survey", "authors": [ { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.14799" ] }, "num": null, "urls": [], "raw_text": "Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "How to evaluate machine translation: A review of automated and human metrics", "authors": [ { "first": "Eirini", "middle": [], "last": "Chatzikoumi", "suffix": "" } ], "year": 2020, "venue": "Natural Language Engineering", "volume": "26", "issue": "2", "pages": "137--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eirini Chatzikoumi. 2020. How to evaluate machine translation: A review of automated and human metrics. 
Natural Language Engineering, 26(2):137-161.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Survey on evaluation methods for dialogue systems", "authors": [ { "first": "Jan", "middle": [], "last": "Deriu", "suffix": "" }, { "first": "Alvaro", "middle": [], "last": "Rodrigo", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Otegi", "suffix": "" }, { "first": "Guillermo", "middle": [], "last": "Echegoyen", "suffix": "" }, { "first": "Sophie", "middle": [], "last": "Rosset", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Cieliebak", "suffix": "" } ], "year": 2021, "venue": "Artificial Intelligence Review", "volume": "54", "issue": "1", "pages": "755--810", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, and Mark Cieliebak. 2021. Survey on evaluation methods for dialogue systems. Artificial Intelligence Review, 54(1):755-810.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Replicability analysis for natural language processing: Testing significance with multiple datasets", "authors": [ { "first": "Rotem", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Gili", "middle": [], "last": "Baumer", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Bogomolov", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "471--486", "other_ids": { "DOI": [ "10.1162/tacl_a_00074" ] }, "num": null, "urls": [], "raw_text": "Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. 2017. Replicability analysis for natural language processing: Testing significance with multiple datasets. 
Transactions of the Association for Computational Linguistics, 5:471-486.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The hitchhiker's guide to testing statistical significance in natural language processing", "authors": [ { "first": "Rotem", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Gili", "middle": [], "last": "Baumer", "suffix": "" }, { "first": "Segev", "middle": [], "last": "Shlomov", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1383--1392", "other_ids": { "DOI": [ "10.18653/v1/P18-1128" ] }, "num": null, "urls": [], "raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Statistical significance testing for natural language processing", "authors": [ { "first": "Rotem", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Lotem", "middle": [], "last": "Peled-Cohen", "suffix": "" }, { "first": "Segev", "middle": [], "last": "Shlomov", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2020, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "13", "issue": "2", "pages": "1--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rotem Dror, Lotem Peled-Cohen, Segev Shlomov, and Roi Reichart. 2020. Statistical significance testing for natural language processing. Synthesis Lectures on Human Language Technologies, 13(2):1-116.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "2020. 
Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems. Association for Computational Linguistics", "authors": [ { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steffen Eger, Yang Gao, Maxime Peyrard, Wei Zhao, and Eduard Hovy, editors. 2020. Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems. Association for Computational Linguistics, Online.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Last words: Amazon Mechanical Turk: Gold mine or coal mine? Computational Linguistics", "authors": [ { "first": "Kar\u00ebn", "middle": [], "last": "Fort", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Adda", "suffix": "" }, { "first": "K", "middle": [], "last": "Bretonnel Cohen", "suffix": "" } ], "year": 2011, "venue": "", "volume": "37", "issue": "", "pages": "413--420", "other_ids": { "DOI": [ "10.1162/COLI_a_00057" ] }, "num": null, "urls": [], "raw_text": "Kar\u00ebn Fort, Gilles Adda, and K. Bretonnel Cohen. 2011. Last words: Amazon Mechanical Turk: Gold mine or coal mine? Computational Linguistics, 37(2):413-420.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Evaluating Natural Language Processing Systems", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Galliers", "suffix": "" }, { "first": "K", "middle": [ "S" ], "last": "Jones", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R. Galliers, K.S. Jones, and University of Cambridge. Computer Laboratory. 1993. 
Evaluating Natural Language Processing Systems. Computer Laboratory Cambridge: Technical report. University of Cambridge, Computer Laboratory.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation", "authors": [ { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2018, "venue": "Journal of Artificial Intelligence Research", "volume": "61", "issue": "", "pages": "65--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65-170.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "We need to talk about standard splits", "authors": [ { "first": "Kyle", "middle": [], "last": "Gorman", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bedrick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2786--2791", "other_ids": { "DOI": [ "10.18653/v1/P19-1267" ] }, "num": null, "urls": [], "raw_text": "Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786-2791, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Overview of Evaluation in Speech and Natural Language Processing", "authors": [ { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" }, { "first": "Henry", "middle": [ "S" ], "last": "Thompson", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "409--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lynette Hirschman and Henry S. Thompson. 1997. Overview of Evaluation in Speech and Natural Language Processing, pages 409-414. Cambridge University Press, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The social impact of natural language processing", "authors": [ { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Shannon", "middle": [ "L" ], "last": "Spruit", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "591--598", "other_ids": { "DOI": [ "10.18653/v1/P16-2096" ] }, "num": null, "urls": [], "raw_text": "Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591-598, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. 
Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Howcroft", "suffix": "" }, { "first": "Anya", "middle": [], "last": "Belz", "suffix": "" }, { "first": "Miruna-Adriana", "middle": [], "last": "Clinciu", "suffix": "" }, { "first": "Dimitra", "middle": [], "last": "Gkatzia", "suffix": "" }, { "first": "A", "middle": [], "last": "Sadid", "suffix": "" }, { "first": "Saad", "middle": [], "last": "Hasan", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Mahamood", "suffix": "" }, { "first": "", "middle": [], "last": "Mille", "suffix": "" } ], "year": null, "venue": "Proceedings of the 13th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "169--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions. In Proceedings of the 13th International Conference on Natural Language Generation, pages 169-182, Dublin, Ireland. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Evaluating natural language processing systems: An analysis and review", "authors": [ { "first": "Karen Sparck Jones", "middle": [], "last": "", "suffix": "" }, { "first": "Julia", "middle": [ "R" ], "last": "Galliers", "suffix": "" } ], "year": 1995, "venue": "Lecture Notes in Artificial Intelligence", "volume": "1083", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Sparck Jones and Julia R Galliers. 1995. Evaluating natural language processing systems: An analysis and review, volume 1083 of Lecture Notes in Artificial Intelligence. 
Springer-Verlag Berlin Heidelberg.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Evaluating natural language processing systems", "authors": [ { "first": "Margaret", "middle": [], "last": "King", "suffix": "" } ], "year": 1996, "venue": "Communications of the ACM", "volume": "39", "issue": "1", "pages": "73--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret King. 1996. Evaluating natural language processing systems. Communications of the ACM, 39(1):73-79.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Jupyter notebooks - a publishing format for reproducible computational workflows", "authors": [ { "first": "Thomas", "middle": [], "last": "Kluyver", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Ragan-Kelley", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "P\u00e9rez", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Granger", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Bussonnier", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Frederic", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Kelley", "suffix": "" }, { "first": "Jessica", "middle": [], "last": "Hamrick", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Grout", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Corlay", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Ivanov", "suffix": "" }, { "first": "Dami\u00e1n", "middle": [], "last": "Avila", "suffix": "" } ], "year": 2016, "venue": "Positioning and Power in Academic Publishing: Players, Agents and Agendas", "volume": "", "issue": "", "pages": "87--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Kluyver, Benjamin Ragan-Kelley, Fernando P\u00e9rez, Brian Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica Hamrick, Jason Grout, Sylvain Corlay, Paul Ivanov, Dami\u00e1n Avila, Safia Abdalla, and Carol Willing. 2016. 
Jupyter notebooks - a publishing format for reproducible computational workflows. In Positioning and Power in Academic Publishing: Players, Agents and Agendas, pages 87-90. IOS Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Troubling trends in machine learning scholarship: Some ML papers suffer from flaws that could mislead the public and stymie future research", "authors": [ { "first": "Zachary", "middle": [ "C" ], "last": "Lipton", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Steinhardt", "suffix": "" } ], "year": 2019, "venue": "Queue", "volume": "17", "issue": "1", "pages": "45--77", "other_ids": { "DOI": [ "10.1145/3317287.3328534" ] }, "num": null, "urls": [], "raw_text": "Zachary C. Lipton and Jacob Steinhardt. 2019. Troubling trends in machine learning scholarship: Some ML papers suffer from flaws that could mislead the public and stymie future research. Queue, 17(1):45-77.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", "authors": [ { "first": "Chia-Wei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Iulian", "middle": [], "last": "Serban", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Noseworthy", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Charlin", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2122--2132", "other_ids": { "DOI": [ "10.18653/v1/D16-1230" ] }, "num": null, "urls": [], "raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. 
How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Why we need new evaluation metrics for NLG", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Amanda", "middle": [ "Cercas" ], "last": "Curry", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2241--2252", "other_ids": { "DOI": [ "10.18653/v1/D17-1238" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Principles of evaluation in natural language processing", "authors": [ { "first": "Patrick", "middle": [], "last": "Paroubek", "suffix": "" }, { "first": "St\u00e9phane", "middle": [], "last": "Chaudiron", "suffix": "" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 2007, "venue": "Traitement Automatique des Langues", "volume": "48", "issue": "1", "pages": "7--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Paroubek, St\u00e9phane Chaudiron, and Lynette Hirschman. 2007. Principles of evaluation in natural language processing. 
Traitement Automatique des Langues, 48(1):7-31.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Ethical considerations in NLP shared tasks", "authors": [ { "first": "Carla", "middle": [], "last": "Parra Escart\u00edn", "suffix": "" }, { "first": "Wessel", "middle": [], "last": "Reijers", "suffix": "" }, { "first": "Teresa", "middle": [], "last": "Lynn", "suffix": "" }, { "first": "Joss", "middle": [], "last": "Moorkens", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" }, { "first": "Chao-Hong", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing", "volume": "", "issue": "", "pages": "66--73", "other_ids": { "DOI": [ "10.18653/v1/W17-1608" ] }, "num": null, "urls": [], "raw_text": "Carla Parra Escart\u00edn, Wessel Reijers, Teresa Lynn, Joss Moorkens, Andy Way, and Chao-Hong Liu. 2017. Ethical considerations in NLP shared tasks. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 66-73, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "An investigation into the validity of some metrics for automatically evaluating natural language generation systems", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Anja", "middle": [], "last": "Belz", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics", "volume": "35", "issue": "4", "pages": "529--558", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529-558.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Evaluation of NLP systems. 
The handbook of computational linguistics and natural language processing", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and Jimmy Lin. 2010. Evaluation of NLP systems. The handbook of computational linguistics and natural language processing. Chapter 11.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Beyond accuracy: Behavioral testing of NLP models with CheckList", "authors": [ { "first": "Marco Tulio", "middle": [], "last": "Ribeiro", "suffix": "" }, { "first": "Tongshuang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4902--4912", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.442" ] }, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902-4912, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "On some pitfalls in automatic evaluation and significance testing for MT", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "John", "middle": [ "T" ], "last": "Maxwell", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", "volume": "", "issue": "", "pages": "57--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Riezler and John T. Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing for MT. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 57-64, Ann Arbor, Michigan. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "What I've learned about annotating informal text (and why you shouldn't take my word for it)", "authors": [ { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" } ], "year": 2015, "venue": "Proceedings of The 9th Linguistic Annotation Workshop", "volume": "", "issue": "", "pages": "152--157", "other_ids": { "DOI": [ "10.3115/v1/W15-1618" ] }, "num": null, "urls": [], "raw_text": "Nathan Schneider. 2015. What I've learned about annotating informal text (and why you shouldn't take my word for it). In Proceedings of The 9th Linguistic Annotation Workshop, pages 152-157, Denver, Colorado, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Linguistic structure prediction", "authors": [ { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "", "volume": "4", "issue": "", "pages": "1--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noah A. Smith. 2011. Linguistic structure prediction. Synthesis lectures on human language technologies, 4(2):1-274.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "What's in a p-value in NLP?", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Johannsen", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Hector Mart\u00ednez", "middle": [], "last": "Alonso", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.3115/v1/W14-1601" ] }, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and Hector Mart\u00ednez Alonso. 2014. What's in a p-value in NLP? In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 1-10, Ann Arbor, Michigan. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Towards better NLP system evaluation", "authors": [ { "first": "Karen Sparck", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1994, "venue": "Human Language Technology: Proceedings of a Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Sparck Jones. 1994. Towards better NLP system evaluation. 
In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Best practices for the human evaluation of automatically generated text", "authors": [ { "first": "Chris", "middle": [], "last": "Van Der Lee", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Van Miltenburg", "suffix": "" }, { "first": "Sander", "middle": [], "last": "Wubben", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "355--368", "other_ids": { "DOI": [ "10.18653/v1/W19-8643" ] }, "num": null, "urls": [], "raw_text": "Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, pages 355-368, Tokyo, Japan. 
Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Speech synthesis evaluation-state-of-the-art assessment and suggestion for a novel research program", "authors": [ { "first": "Petra", "middle": [], "last": "Wagner", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Beskow", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Betz", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Edlund", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Gustafson", "suffix": "" }, { "first": "Gustav", "middle": [ "Eje" ], "last": "Henter", "suffix": "" }, { "first": "S\u00e9bastien", "middle": [ "Le" ], "last": "Maguer", "suffix": "" }, { "first": "Zofia", "middle": [], "last": "Malisz", "suffix": "" }, { "first": "\u00c9va", "middle": [], "last": "Sz\u00e9kely", "suffix": "" }, { "first": "Christina", "middle": [], "last": "T\u00e5nnander", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 10th Speech Synthesis Workshop (SSW10)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petra Wagner, Jonas Beskow, Simon Betz, Jens Edlund, Joakim Gustafson, Gustav Eje Henter, S\u00e9bastien Le Maguer, Zofia Malisz, \u00c9va Sz\u00e9kely, Christina T\u00e5nnander, et al. 2019. Speech synthesis evaluation-state-of-the-art assessment and suggestion for a novel research program. In Proceedings of the 10th Speech Synthesis Workshop (SSW10).", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "text": "Overview of the course content.", "type_str": "table", "content": "