{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:51:32.271831Z" }, "title": "From back to the roots into the gated woods: Deep learning for NLP", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "", "affiliation": { "laboratory": "", "institution": "IT University of Copenhagen", "location": {} }, "email": "bplank@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Deep neural networks have revolutionized many fields, including Natural Language Processing. This paper outlines teaching materials for an introductory lecture on deep learning in Natural Language Processing (NLP). The main submitted material covers a summer school lecture on encoder-decoder models. Complementary to this is a set of jupyter notebook slides from earlier teaching, on which parts of the lecture were based on. The main goal of this teaching material is to provide an overview of neural network approaches to natural language processing, while linking modern concepts back to the roots showing traditional essential counterparts. The lecture departs from count-based statistical methods and spans up to gated recurrent networks and attention, which is ubiquitous in today's NLP.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Deep neural networks have revolutionized many fields, including Natural Language Processing. This paper outlines teaching materials for an introductory lecture on deep learning in Natural Language Processing (NLP). The main submitted material covers a summer school lecture on encoder-decoder models. Complementary to this is a set of jupyter notebook slides from earlier teaching, on which parts of the lecture were based on. The main goal of this teaching material is to provide an overview of neural network approaches to natural language processing, while linking modern concepts back to the roots showing traditional essential counterparts. The lecture departs from count-based statistical methods and spans up to gated recurrent networks and attention, which is ubiquitous in today's NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In 2015, the \"deep learning tsunami\" hit our field . In 2011, neural networks were not really \"a thing\" yet in NLP; they were even hardly taught. I remember when in 2011 Andrew Ng introduced the free online course on machine learning, which soon after led Daphne Koller and him to start massive online education: \"Many scientists think the best way to make progress [..] is through learning algorithms called neural networks\". 1 Ten years later, it is hard to imagine any NLP education not touching upon neural networks and in particular, representation learning. While neural networks have undoubtedly pushed the field, it has led to a more homogenized field; some lecturers even question whether to include pre-neural methods. I believe it is essential to teach statistical foundations besides modern DL, ie., to go back to the roots, to better be equipped for the future. 2 1 Accessed March 15, 2021: https://bit.ly/ 2OZH2MZ", "cite_spans": [ { "start": 366, "end": 370, "text": "[..]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Disclaimer: This submitted teaching material is limited as it spans a single lecture. 
However, I believe it to be essential to teach traditional methods besides modern neural approaches (e.g., count-based vs. prediction-based word representations; statistical n-gram vs. neural LMs; naive Bayes and logistic regression vs. neural classifiers, to name a few examples). 2 Structure of the summer school lecture This paper outlines teaching material (Keynote slides, Jupyter notebooks), which I would like to provide to the community for reuse when teaching an introductory course on deep learning for NLP. 3 The main material outlined in this paper is a Keynote presentation slide deck for a 3-hour lecture on encoder-decoder models. An overview of the lecture (in non-temporal form), from foundations, to representations, to neural networks (NNs) beyond vanilla NNs, is given in Figure 1 . Moreover, I provide complementary teaching material in the form of Jupyter notebooks (convertible to slides). I have used these notebooks (slides and exercises) during earlier teaching, and in part they formed the basis of the summer school lecture (see Section 3). The lecture covers NLP methods broadly, from introductory concepts of more traditional NLP methods to recent deep learning methods. A particular focus is to contrast traditional approaches from the statistical NLP literature (sparse representations, n-grams) with their deep learning-based counterparts (dense representations, 'soft' n-grams in CNNs and RNN-based encoders).", "cite_spans": [ { "start": 365, "end": 366, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 634, "end": 642, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3 Available at: https://github.com/bplank/teaching-dl4nlp Consequently, the following topics are covered in the lecture: \u2022 Deep contextualized embeddings (Embeddings as LMs, ELMo)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This 3-hour lecture was held at the first Athens in NLP (AthNLP) summer school in 2019. In the overall schedule of the summer school, it was the third of six lectures. It was scheduled after an introduction to classification (by Ryan McDonald) and a lecture on structured prediction (by Xavier Carreras). The outlined encoder-decoder lecture was then followed by lectures on machine translation (by Arianna Bisazza; this lecture built upon the one presented here and included the transformer), machine reading (by Sebastian Riedel) and dialogue (by Vivian Chen). 4 Each lecture was accompanied by unified lab exercises. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Attention", "sec_num": null }, { "text": "4 Videos of all lectures are available at: http://athnlp2019.iit.demokritos.gr/schedule/index.html 5 Exercises were kindly provided as a unified framework by the AthNLP team, and hence are not provided here. They included lab 1: part-of-speech tagging with the perceptron; lab 2: POS tagging with the structured perceptron; lab 3: neural encoding for text classification; lab 4: neural language modeling; lab 5: machine translation; lab 6: question answering. As key textbook references, I would like to refer the reader to chapters 3-9 of Jurafsky and Martin's 3rd edition (under development) (Jurafsky and Martin, 2020), 6 and Yoav Goldberg's NLP primer (Goldberg, 2015). 
Besides these textbooks, key papers include (Kim, 2014) for CNNs on text, (Luong et al., 2015) for attention, and (Peters et al., 2018) for ELMo.", "cite_spans": [ { "start": 594, "end": 621, "text": "(Jurafsky and Martin, 2020)", "ref_id": "BIBREF1" }, { "start": 624, "end": 625, "text": "6", "ref_id": null }, { "start": 657, "end": 673, "text": "(Goldberg, 2015)", "ref_id": "BIBREF0" }, { "start": 720, "end": 731, "text": "(Kim, 2014)", "ref_id": "BIBREF2" }, { "start": 761, "end": 781, "text": "(Luong et al., 2015)", "ref_id": "BIBREF3" }, { "start": 791, "end": 812, "text": "(Peters et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "\u2022 Attention", "sec_num": null }, { "text": "This lecture evolved from a series of lectures given earlier, among them a short course given in Malta in 2019 and an MSc-level course I taught at the University of Groningen (Language Technology Project). To complement the Keynote slides of the summer school lecture provided here, the earlier Jupyter notebooks can be found on the website. These cover a subset of the material above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complementary notebooks of earlier material", "sec_num": "3" }, { "text": "This short paper outlines teaching material for an introductory lecture on deep learning for NLP. By releasing it, I hope to provide material that fellow researchers find useful when teaching introductory courses on DL for NLP. For comments and suggestions to improve upon this material, please reach out to me.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4" }, { "text": "https://web.stanford.edu/~jurafsky/slp3/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "I would like to thank Arianna Bisazza for many fruitful discussions in preparing this summer school lecture, and the AthNLP 2019 organizers for the invitation, with special thanks to Andreas Vlachos, Yannis Konstas and Ion Androutsopoulos. I would like to thank the many colleagues who inspired me throughout the years in so many ways, including those who provide excellent teaching material online. Thanks go to Abigail See, Anders Johannsen, Alexander Koller, Chris Manning, Dan Jurafsky, Dirk Hovy, Graham Neubig, Greg Durrett, Malvina Nissim, Richard Johansson, Ryan McDonald and Philipp Koehn. Special thanks to those who inspired me in one way or another to teach the beauty and challenges of computational linguistics, NLP or deep learning (in chronological order): Raffaella Bernardi, Reut Tsarfaty and Khalil Sima'an, Gertjan van Noord, Ryan McDonald and Yoav Goldberg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A primer on neural network models for natural language processing", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg. 2015. 
A primer on neural network models for natural language processing.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition", "authors": [ { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "James", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Jurafsky and James H. Martin. 2020. Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition. Upper Saddle River, N.J.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": { "DOI": [ "10.3115/v1/D14-1181" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": { "DOI": [ "10.18653/v1/D15-1166" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Computational linguistics and deep learning", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Computational Linguistics", "volume": "41", "issue": "4", "pages": "701--707", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning. 2015. Computational linguistics and deep learning. 
Computational Linguistics, 41(4):701-707.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Overview of the core concepts covered." } } } }