{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:51:40.146241Z" }, "title": "Naive Bayes versus BERT: Jupyter notebook assignments for an introductory NLP course", "authors": [ { "first": "Jennifer", "middle": [], "last": "Foster", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dublin City University", "location": {} }, "email": "" }, { "first": "Joachim", "middle": [], "last": "Wagner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dublin City University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe two Jupyter notebooks that form the basis of two assignments in an introductory Natural Language Processing (NLP) module taught to final year undergraduate students at Dublin City University. The notebooks show the students how to train a bag-of-words polarity classifier using multinomial Naive Bayes, and how to fine-tune a polarity classifier using BERT. The students take the code as a starting point for their own experiments.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We describe two Jupyter notebooks that form the basis of two assignments in an introductory Natural Language Processing (NLP) module taught to final year undergraduate students at Dublin City University. The notebooks show the students how to train a bag-of-words polarity classifier using multinomial Naive Bayes, and how to fine-tune a polarity classifier using BERT. The students take the code as a starting point for their own experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We describe two Jupyter 1 notebooks that form the basis of two assignments in a new introductory Natural Language Processing (NLP) module taught to final year students on the B.Sc. in Data Science programme at Dublin City University. As part of a prior module on this programme, the students have some experience with the NLP problem of quality estimation for machine translation. They have also studied machine learning and are competent Python programmers. Since this is the first Data Science cohort, there are only seven students. Four graduate students are also taking the module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The course textbook is the draft 3 rd edition of (Jurafsky and Martin, 2009 The course is fully online for the 2020/2021 academic year. Lectures are pre-recorded and there are weekly live sessions where students anonymously answer comprehension questions via zoom polls and spend 20-30 minutes in breakout rooms working on exercises. These involve working out toy examples, or using online tools such as the AllenNLP online demo 3 (Gardner et al., 2018) to examine the behaviour of neural NLP systems.", "cite_spans": [ { "start": 49, "end": 75, "text": "(Jurafsky and Martin, 2009", "ref_id": "BIBREF4" }, { "start": 431, "end": 453, "text": "(Gardner et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Assessment takes the form of an online end-ofsemester open-book exam worth 60% and three assignments worth 40%. The first assignment is worth 10% and involves coding a bigram language model from scratch. The second and third assignments are worth 15% each and involve experimentation, using Google Colab 4 as a platform. 
For the second and third assignments, a Jupyter notebook is provided to the students, which they are invited to use as a basis for their experiments. We describe both of these notebooks in turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We describe the assignment objectives, the notebooks we provide to the students 5 and the experiments they carried out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notebooks", "sec_num": "2" }, { "text": "The assignment The aim of this assignment is to help students feel comfortable carrying out text classification experiments using scikit-learn (Pedregosa et al., 2011). Sentiment analysis of movie reviews is chosen as the application since it is a familiar and easily understood task and domain, requiring little linguistic expertise. We use the dataset of Pang and Lee (2004) because its relatively small size (2,000 documents) makes it quicker to train on. The documents are provided in tokenised form and have been split into ten cross-validation folds. We provide a Jupyter notebook implementing a baseline bag-of-words Naive Bayes classifier that assigns a positive or negative label to each review. The students are asked to experiment with this baseline model and to attempt to improve its accuracy by exploring 1. different learning algorithms, e.g. logistic regression, decision trees and support vector machines", "cite_spans": [ { "start": 357, "end": 376, "text": "Pang and Lee (2004)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Notebook One: Sentiment Polarity with Naive Bayes", "sec_num": "2.1" }, { "text": "2. different feature sets, such as handling negation, including bigrams and trigrams, using sentiment lexicons and performing linguistic analysis of the input", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notebook One: Sentiment Polarity with Naive Bayes", "sec_num": "2.1" }, { "text": "They are asked to use the same cross-validation set-up as the baseline system. Marks are awarded for the breadth of experimentation, the experiment descriptions, code clarity, average 10-fold cross-validation accuracy and accuracy on a 'hidden' test set (also movie reviews).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notebook One: Sentiment Polarity with Naive Bayes", "sec_num": "2.1" }, { "text": "The notebook We implement document-level sentiment polarity prediction for movie reviews with multinomial Naive Bayes and bag-of-words features. We first build and test the functionality to load the dataset into a nested list of documents, sentences and tokens, each document annotated with its polarity label. Then we show code to collect the training data vocabulary and assign a unique ID to each entry. Documents are then encoded as bag-of-words feature vectors in NumPy (Harris et al., 2020), optionally clipped at frequency one to produce binary vectors. Finally, we show how to train a multinomial Naive Bayes model with scikit-learn, obtain a confusion matrix, measure accuracy and report cross-validation results.
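In outline, and with toy documents standing in for the movie reviews, these steps amount to the following minimal sketch (the function and variable names here are our own illustrations, not the notebook's):

```python
# Sketch of the core pipeline: bag-of-words count vectors in NumPy and a
# multinomial Naive Bayes classifier in scikit-learn. The toy documents
# below stand in for the Pang and Lee (2004) reviews loaded from disk.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.naive_bayes import MultinomialNB

# Each document is a list of sentences; each sentence is a list of tokens.
train_docs = [([["a", "truly", "wonderful", "film"]], "pos"),
              ([["a", "dull", "and", "tedious", "mess"]], "neg")]
test_docs = [([["wonderful", "and", "moving"]], "pos")]

# Collect the training vocabulary and assign a unique ID to each entry.
vocab = {}
for doc, _ in train_docs:
    for sentence in doc:
        for token in sentence:
            vocab.setdefault(token, len(vocab))

def encode(docs, binary=False):
    # Bag-of-words count vectors, optionally clipped at frequency one.
    matrix = np.zeros((len(docs), len(vocab)), dtype=np.int64)
    for row, (doc, _) in enumerate(docs):
        for sentence in doc:
            for token in sentence:
                if token in vocab:  # tokens unseen in training are skipped
                    matrix[row, vocab[token]] += 1
    return np.clip(matrix, 0, 1) if binary else matrix

model = MultinomialNB().fit(encode(train_docs),
                            [label for _, label in train_docs])
gold = [label for _, label in test_docs]
predicted = model.predict(encode(test_docs))
print(confusion_matrix(gold, predicted))
print("accuracy:", accuracy_score(gold, predicted))
```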
The functionality is demonstrated using a series of increasingly specific Python classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notebook One: Sentiment Polarity with Naive Bayes", "sec_num": "2.1" }, { "text": "What the students did Most students carried out an extensive range of experiments, for the most part following the suggestions we provided at the assignment briefing and the strategies outlined in the lectures. The baseline accuracy of 83% was improved in most projects by about 3-5 points. The algorithm that gave the best results was logistic regression, whose default hyper-parameters worked well. The students who reported the highest accuracy scores used a combination of token unigrams, bigrams and trigrams, whereas most students directly compared each n-gram order. The students were free to change the code structure, and indeed some of them took the opportunity to refactor the code to a style that better suited them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notebook One: Sentiment Polarity with Naive Bayes", "sec_num": "2.1" }, { "text": "The assignment The aim of this second assignment is to help students feel comfortable using BERT (Devlin et al., 2019). We provide a sample notebook that shows how to fine-tune BERT on the same task and dataset as in the previous assignment. The students are asked to do one of three things:", "cite_spans": [ { "start": 97, "end": 118, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Notebook Two: Sentiment Polarity with BERT", "sec_num": "2.2" }, { "text": "1. Perform a comparative error analysis of the output of the BERT system(s) and the systems from the previous assignment. The aim here is to get the students thinking about interpreting system output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notebook Two: Sentiment Polarity with BERT", "sec_num": "2.2" }, { "text": "2. Using the code in this notebook and online resources as examples, fine-tune BERT on a different task. The aim here is to 1) allow the students to experiment with something other than movie review polarity classification and explore their own interests, and 2) test their research and problem-solving skills.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notebook Two: Sentiment Polarity with BERT", "sec_num": "2.2" }, { "text": "3. Attempt to improve the BERT-based system provided in the notebook by experimenting with different ways of overcoming the input length restriction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notebook Two: Sentiment Polarity with BERT", "sec_num": "2.2" }, { "text": "The notebook We exemplify how to fine-tune BERT on the (Pang and Lee, 2004) dataset, using Hugging Face Transformers (Wolf et al., 2020) and PyTorch Lightning (Falcon, 2019). We introduce the concept of subword units, showing BERT's token IDs for sample text input, the matching vocabulary entries, the mapping to the original input tokens and BERT's special [CLS] and [SEP] tokens. Then, we show the length distribution of documents in the dataset and sketch strategies to address the limited sequence length of BERT.
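As a rough illustration, the following sketch shows the subword inspection step and the third of the truncation strategies described next; the checkpoint name and slice sizes are assumptions made for the example, not necessarily the notebook's actual settings:

```python
# Inspecting BERT subword tokenisation and truncating a long document by
# combining a slice from the start with a slice from the end. Assumes the
# bert-base-uncased checkpoint; the notebook's settings may differ.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Token IDs for a sample input and the matching vocabulary entries,
# including the special [CLS] and [SEP] tokens added around the text.
ids = tokenizer("an unforgettable film")["input_ids"]
print(ids)                                   # integer token IDs
print(tokenizer.convert_ids_to_tokens(ids))  # subword strings with [CLS]/[SEP]

# Work around the 512-token limit: keep 384 subwords from the start and
# 126 from the end, leaving room for the two special tokens (384+126+2=512).
document = "a long review " * 400            # stand-in for a real review
subwords = tokenizer.tokenize(document)
kept = subwords[:384] + subwords[-126:]
ids = tokenizer.build_inputs_with_special_tokens(
    tokenizer.convert_tokens_to_ids(kept))
print(len(ids))                              # 512
```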
We implement three strategies: 1) taking a slice from the start of each document, 2) taking a slice from the end, and 3) combining a slice from the start with a slice from the end.", "cite_spans": [ { "start": 55, "end": 75, "text": "(Pang and Lee, 2004)", "ref_id": "BIBREF5" }, { "start": 117, "end": 136, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF7" }, { "start": 159, "end": 173, "text": "(Falcon, 2019)", "ref_id": null }, { "start": 360, "end": 365, "text": "[CLS]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Notebook Two: Sentiment Polarity with BERT", "sec_num": "2.2" }, { "text": "In doing so, we show the students how a dataset can be read from a custom file format into the data loader objects expected by the framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notebook Two: Sentiment Polarity with BERT", "sec_num": "2.2" }, { "text": "What the students did Of the ten students who completed the assignment, three chose the first option of analysing system output and seven chose the second option of fine-tuning BERT on a task of their choosing. These included detection of hate speech in tweets, sentence-level acceptability judgements, document-level human rights violation detection, and sentiment polarity classification applied to tweets instead of movie reviews. No student opted for the third option of examining ways to overcome the input length limit in BERT for the (Pang and Lee, 2004) dataset.", "cite_spans": [ { "start": 541, "end": 561, "text": "(Pang and Lee, 2004)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Notebook Two: Sentiment Polarity with BERT", "sec_num": "2.2" }, { "text": "We surveyed the students to see what they thought of the assignments. On the positive side, they found them challenging and interesting, and they appreciated the flexibility provided in the third assignment. On the negative side, they felt that the assignments involved more effort than the marks warranted, and they found the code in the notebooks to be unnecessarily complicated. The object-oriented nature of the code was also highlighted as a negative by some. For next year, we plan to 1) streamline the code, hiding some of the messy details, 2) reduce the scope of the assignments, and 3) provide more BERT fine-tuning example notebooks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Improvements", "sec_num": "3" }, { "text": "3 https://demo.allennlp.org/ 4 https://colab.research.google.com 5 An updated version of the notebooks will be made available in the materials repository of the Teaching-NLP 2021 workshop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The second author's contribution to this work was funded by Science Foundation Ireland through the SFI Frontiers for the Future programme (19/FFP/6942).
We thank the reviewers for their helpful suggestions, and the DCU CA4023 students for their hard work and patience!", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "AllenNLP: A deep semantic natural language processing platform", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Grus", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)", "volume": "", "issue": "", "pages": "1--6", "other_ids": { "DOI": [ "10.18653/v1/W18-2501" ] }, "num": null, "urls": [], "raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1-6, Melbourne, Australia.
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Array programming with NumPy", "authors": [ { "first": "Charles", "middle": [ "R" ], "last": "Harris", "suffix": "" }, { "first": "K", "middle": [ "Jarrod" ], "last": "Millman", "suffix": "" }, { "first": "St\u00e9fan", "middle": [ "J" ], "last": "van der Walt", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Gommers", "suffix": "" }, { "first": "Pauli", "middle": [], "last": "Virtanen", "suffix": "" }, { "first": "David", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Wieser", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Berg", "suffix": "" }, { "first": "Nathaniel", "middle": [ "J" ], "last": "Smith", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Kern", "suffix": "" }, { "first": "Matti", "middle": [], "last": "Picus", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Hoyer", "suffix": "" }, { "first": "Marten", "middle": [ "H" ], "last": "van Kerkwijk", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Brett", "suffix": "" }, { "first": "Allan", "middle": [], "last": "Haldane", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Fern\u00e1ndez del R\u00edo", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Pearu", "middle": [], "last": "Peterson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "G\u00e9rard-Marchant", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Sheppard", "suffix": "" }, { "first": "Tyler", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Warren", "middle": [], "last": "Weckesser", "suffix": "" }, { "first": "Hameer", "middle": [], "last": "Abbasi", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Gohlke", "suffix": "" }, { "first": "Travis", "middle": [ "E" ], "last": "Oliphant", "suffix": "" } ], "year": 2020, "venue": "Nature", "volume": "585", "issue": "7825", "pages": "357--362", "other_ids": { "DOI": [ "10.1038/s41586-020-2649-2" ] }, "num": null, "urls": [], "raw_text": "Charles R. Harris, K. Jarrod Millman, St\u00e9fan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fern\u00e1ndez del R\u00edo, Mark Wiebe, Pearu Peterson, Pierre G\u00e9rard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. 2020. Array programming with NumPy. Nature, 585(7825):357-362.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Speech and language processing", "authors": [ { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "James", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Jurafsky and James H. Martin. 2009. Speech and language processing.
Pearson Prentice Hall, Upper Saddle River, N.J.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)", "volume": "", "issue": "", "pages": "271--278", "other_ids": { "DOI": [ "10.3115/1218955.1218990" ] }, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271-278, Barcelona, Spain.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python.
Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [], "last": "Le Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "content": "
The following topics are covered:
1. Pre-processing
2. N-gram Language Modelling
3. Text Classification using Naive Bayes and Logistic Regression
4. Sequence Labelling using Hidden Markov Models and Conditional Random Fields
5. Word Vectors
6. Neural Net Architectures (feed-forward, recurrent, transformer)
7. Ethical Issues in NLP
2 https://web.stanford.edu/~jurafsky/slp3/
", "text": "It is impossible to teach the entire book in a twelve-week module and so we concentrate on the first ten chapters.", "num": null, "type_str": "table" } } } }