|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:43:58.036137Z" |
|
}, |
|
"title": "iNLTK: Natural Language Toolkit for Indic Languages", |
|
"authors": [ |
|
{ |
|
"first": "Gaurav", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "gaurav@haptik.ai" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present iNLTK, an open-source NLP library consisting of pre-trained language models and out-of-the-box support for Data Augmentation, Textual Similarity, Sentence Embeddings, Word Embeddings, Tokenization and Text Generation in 13 Indic Languages. By using pre-trained models from iNLTK for text classification on publicly available datasets, we significantly outperform previously reported results. On these datasets, we also show that by using pre-trained models and data augmentation from iNLTK, we can achieve more than 95% of the previous best performance by using less than 10% of the training data. iNLTK is already being widely used by the community and has 40,000+ downloads, 600+ stars and 100+ forks on GitHub. The library is available at https://github.com/goru001/inltk.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present iNLTK, an open-source NLP library consisting of pre-trained language models and out-of-the-box support for Data Augmentation, Textual Similarity, Sentence Embeddings, Word Embeddings, Tokenization and Text Generation in 13 Indic Languages. By using pre-trained models from iNLTK for text classification on publicly available datasets, we significantly outperform previously reported results. On these datasets, we also show that by using pre-trained models and data augmentation from iNLTK, we can achieve more than 95% of the previous best performance by using less than 10% of the training data. iNLTK is already being widely used by the community and has 40,000+ downloads, 600+ stars and 100+ forks on GitHub. The library is available at https://github.com/goru001/inltk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Deep learning offers a way to harness large amounts of computation and data with little engineering by hand (LeCun et al., 2015) . With distributed representation, various deep models have become the new state-of-the-art methods for NLP problems. Pre-trained language models (Devlin et al., 2019) can model syntactic/semantic relations between words and reduce feature engineering. These pre-trained models are useful for initialization and/or transfer learning for NLP tasks. Pre-trained models are typically learned using unsupervised approaches from large, diverse monolingual corpora (Kunchukuttan et al., 2020 ). While we have seen exciting progress across many tasks in natural language processing over the last years, most such results have been achieved in English and a small set of other high-resource languages (Ruder, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 128, |
|
"text": "(LeCun et al., 2015)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 296, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 614, |
|
"text": "(Kunchukuttan et al., 2020", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 822, |
|
"end": 835, |
|
"text": "(Ruder, 2020)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Indic languages, widely spoken by more than a billion speakers, lack pre-trained deep language models, trained on a large corpus, which can provide a headstart for downstream tasks using transfer learning. Availability of such models is critical to build a system that can achieve good results in \"lowresource\" settings -where labeled data is scarce and computation is expensive, which is the biggest challenge for working on NLP in Indic Languages. Additionally, there's lack of Indic languages support in NLP libraries like spacy 1 , nltk 2 -creating a barrier to entry for working with Indic languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "iNLTK, an open-source natural language toolkit for Indic languages, is designed to address these problems and to significantly lower barriers to doing NLP in Indic Languages by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 sharing pre-trained deep language models, which can then be fine-tuned and used for downstream tasks like text classification,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 providing out-of-the-box support for Data Augmentation, Textual Similarity, Sentence Embeddings, Word Embeddings, Tokenization and Text Generation built on top of pretrained language models, lowering the barrier for doing applied research and building products in Indic languages iNLTK library supports 13 Indic languages, including English, as shown in Table 2 . GitHub repository 3 for the library contains source code, links to download pre-trained models, datasets and API documentation 4 . It includes reference implementations for reproducing text-classification results shown in Section 2.4, which can also be easily adapted to new data. The library has a permissive MIT License and is easy to download and install via pip or by cloning the GitHub repository. (Howard and Ruder, 2018) and TransformerXL (Dai et al., 2019) language models for 13 Indic languages. All the language models (LMs) were trained from scratch using PyTorch (Paszke et al., 2017) and Fastai 5 , except for English. Pre-trained LMs were then evaluated on downstream task of text classification on public datasets. Pre-trained LMs for English were borrowed from Fastai directly. This section describes training of language models and their evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 769, |
|
"end": 793, |
|
"text": "(Howard and Ruder, 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 812, |
|
"end": 830, |
|
"text": "(Dai et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 356, |
|
"end": 363, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
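
{

"text": "As an illustration of the intended workflow (this snippet is an editorial sketch, not text from the paper): after installing the library with pip, a single setup call downloads the pre-trained models for a language, after which the utilities described below can be used. The function names follow the iNLTK API documentation 4 and the example sentence is illustrative.\n\n# pip install inltk\nfrom inltk.inltk import setup, tokenize\n\n# Download the pre-trained Hindi models (run once per language).\nsetup('hi')\n\n# Tokenize a Hindi sentence with the pre-trained SentencePiece model.\nprint(tokenize('\u0906\u092a \u0915\u0948\u0938\u0947 \u0939\u0948\u0902', 'hi'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},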
|
{ |
|
"text": "We obtained a monolingual corpora for each one of the languages from Wikipedia for training LMs from scratch. We used the wiki extractor 6 tool and BeautifulSoup 7 for text extraction from Wikipedia. Wikipedia articles were then cleaned and split into train-validation sets. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset preparation", |
|
"sec_num": "2.1" |
|
}, |
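
{

"text": "A minimal sketch of the cleaning and splitting step described above; the file names and the 90/10 split ratio are illustrative assumptions, not details taken from the paper.\n\nimport random\n\n# Articles previously extracted from the Wikipedia dump (e.g. with wikiextractor).\nwith open('hi_wiki_articles.txt', encoding='utf-8') as f:\n    articles = [line.strip() for line in f if line.strip()]\n\nrandom.seed(0)\nrandom.shuffle(articles)\nsplit = int(0.9 * len(articles))\ntrain, valid = articles[:split], articles[split:]\n\nfor name, part in [('train.txt', train), ('valid.txt', valid)]:\n    with open(name, 'w', encoding='utf-8') as out:\n        out.write('\\n'.join(part))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset preparation",

"sec_num": "2.1"

},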
|
{ |
|
"text": "We create subword vocabulary for each one of the languages by training a SentencePiece 8 tokenization model on Wikipedia articles dataset, using unigram segmentation algorithm (Kudo and Richardson, 2018 ). An important property of Sen-tencePiece tokenization, necessary for us to obtain a valid subword-based language model, is its reversibility. We do not use subword regularization as the available training dataset is large enough to avoid overfitting. Table 3 shows subword vocabulary size of the tokenization model for each one of the languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 202, |
|
"text": "(Kudo and Richardson, 2018", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 463, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tokenization", |
|
"sec_num": "2.2" |
|
}, |
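
{

"text": "A minimal sketch of training the unigram SentencePiece model described above with the sentencepiece Python package; the input file name, the model prefix and the 30,000 vocabulary size (the Hindi value from Table 3) are assumptions for illustration.\n\nimport sentencepiece as spm\n\n# Train a unigram subword model on the Wikipedia training split.\nspm.SentencePieceTrainer.train(\n    input='train.txt', model_prefix='hi_tokenizer',\n    vocab_size=30000, model_type='unigram')\n\n# Tokenization is reversible: decoding the pieces recovers the original text.\nsp = spm.SentencePieceProcessor(model_file='hi_tokenizer.model')\npieces = sp.encode('\u0928\u092e\u0938\u094d\u0924\u0947', out_type=str)\nprint(pieces, sp.decode(pieces))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tokenization",

"sec_num": "2.2"

},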
|
{ |
|
"text": "Our model is based on the ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Model Training", |
|
"sec_num": "2.3" |
|
}, |
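
{

"text": "A rough sketch of how such a language model can be trained from scratch with a fastai v1-style API, added for illustration; the data loading calls, architecture choice and hyperparameters below are assumptions and not the paper's actual training configuration.\n\nfrom fastai.text import TextLMDataBunch, language_model_learner, AWD_LSTM\n\n# Build a language-modelling data bunch from the tokenized Wikipedia articles.\ndata_lm = TextLMDataBunch.from_csv('.', 'hi_wiki.csv', text_cols='text')\n\n# Train an AWD-LSTM (ULMFiT-style) language model from scratch.\nlearn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3, pretrained=False)\nlearn.fit_one_cycle(10, 1e-2)\nlearn.save('hi_lm')\nlearn.save_encoder('hi_lm_encoder')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Language Model Training",

"sec_num": "2.3"

},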
|
{ |
|
"text": "We evaluated pre-trained ULMFiT language models on downstream task of text-classification using following publicly available datasets: (a) IIT-Patna Sentiment Analysis dataset (Akhtar et al., 2016), Table 6 shows statistics of these datasets. iNLTK results were compared against results reported in (Kunchukuttan et al., 2020) for pre-trained embeddings released by the Fast-Text project trained on Wikipedia (FT-W) (Bojanowski et al., 2016), Wiki+CommonCrawl (FT-WC) (Grave et al., 2018) and INLP embeddings (Kunchukuttan et al., 2020) . iNLTK is designed to be simple for practitioners in order to lower the barrier for doing applied research and building products in Indic languages. This section discusses various NLP tasks for which iNLTK provides out-of-the-box support, under a unified API. Data Augmentation helps in improving the performance of NLP models (Duboue and Chu-Carroll, 2006; Marton et al., 2009) . It is even more important in \"low-resource\" settings, where labeled data is scarce. iNLTK provides augmentations 14 for a sentence while preserving its semantics following a two step process. Firstly, it generates candidate paraphrases by replacing original sentence tokens with tokens which have closest embeddings from the embedding layer of pre-trained language model. And then, it chooses top paraphrases which are similar to original sentence, where similarity between sentences is calculated as the cosine similarity of sentence embeddings, obtained from pre-trained language model's encoder.", |
|
"cite_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 327, |
|
"text": "(Kunchukuttan et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 469, |
|
"end": 489, |
|
"text": "(Grave et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 537, |
|
"text": "(Kunchukuttan et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 866, |
|
"end": 896, |
|
"text": "(Duboue and Chu-Carroll, 2006;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 897, |
|
"end": 917, |
|
"text": "Marton et al., 2009)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 207, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Text Classification Evaluation", |
|
"sec_num": "2.4" |
|
}, |
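
{

"text": "A minimal usage sketch of the augmentation utility referenced in footnote 14, assuming the Hindi models have already been downloaded with setup('hi'); the example sentence and the number of paraphrases are illustrative.\n\nfrom inltk.inltk import get_similar_sentences\n\n# Generate 5 paraphrases of a Hindi sentence: candidate tokens are swapped with\n# their nearest neighbours in the LM embedding layer, and candidates are ranked\n# by cosine similarity of their sentence encodings.\nparaphrases = get_similar_sentences('\u092e\u0941\u091d\u0947 \u092f\u0939 \u092b\u093f\u0932\u094d\u092e \u092a\u0938\u0902\u0926 \u0906\u0908', 5, 'hi')\nprint(paraphrases)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Text Classification Evaluation",

"sec_num": "2.4"

},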
|
{ |
|
"text": "To evaluate the effectiveness of using data augmentation from iNLTK in low resource settings, we prepare 15 reduced train sets of publicly available text-classification datasets by picking first N examples from the full train set 16 , where N is equal to size of reduced train set and compare accuracy of the classifier trained with vs without data augmen-14 https://inltk.readthedocs.io/en/latest/api docs.html#getsimilar-sentences 15 Notebooks to prepare reduced datasets are accessible from the GitHub repository of the library 16 Labels in publicly available full train sets were not grouped together, instead were randomly shuffled tation. Table 7 shows reduced dataset statistics and comparison of results obtained on full and reduced datasets using iNLTK. Using data augmentation from iNLTK gives significant increase in accuracy on Hindi, Bengali, Malayalam and Tamil dataset, and minor improvements in Gujarati and Marathi datasets. Additionally, Table 7 compares previously obtained best results on these datasets using INLP embeddings (Kunchukuttan et al., 2020) with results obtained using iNLTK pretrained models and iNLTK's data augmentation utility. On an average, with iNLTK we are able to achieve more than 95% of the previous accuracy using less than 10% of the training data 17 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 232, |
|
"text": "16", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1046, |
|
"end": 1073, |
|
"text": "(Kunchukuttan et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 645, |
|
"end": 652, |
|
"text": "Table 7", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 956, |
|
"end": 963, |
|
"text": "Table 7", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Text Classification Evaluation", |
|
"sec_num": "2.4" |
|
}, |
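
{

"text": "A small sketch of how a reduced train set of the kind described above could be prepared; the file names, the column layout and N = 496 (the Hindi value from Table 7) are assumptions for illustration.\n\nimport pandas as pd\n\nN = 496  # size of the reduced train set\nfull = pd.read_csv('hi_train.csv')  # labels are randomly shuffled, not grouped\nreduced = full.head(N)  # pick the first N examples\nreduced.to_csv('hi_train_reduced.csv', index=False)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Text Classification Evaluation",

"sec_num": "2.4"

},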
|
{ |
|
"text": "Semantic Textual Similarity (STS) assesses the degree to which the underlying semantics of two segments of text are equivalent to each other (Agirre et al., 2016) . iNLTK compares 18 sentence embeddings of the two segments of text, obtained from pre-trained language model's encoder, using a comparison function, to evaluate semantic textual similarity. Cosine similarity between sentence embeddings is used as the default comparison function.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 162, |
|
"text": "(Agirre et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Classification Evaluation", |
|
"sec_num": "2.4" |
|
}, |
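
{

"text": "A minimal usage sketch of the textual-similarity utility referenced in footnote 18, assuming the Hindi models are set up; the two example sentences are illustrative.\n\nfrom inltk.inltk import get_sentence_similarity\n\n# Cosine similarity of the encoder's sentence embeddings is the default comparison function.\nscore = get_sentence_similarity('\u092e\u0948\u0902 \u0918\u0930 \u091c\u093e \u0930\u0939\u093e \u0939\u0942\u0902', '\u092e\u0948\u0902 \u0918\u0930 \u091c\u093e\u090a\u0902\u0917\u093e', 'hi')\nprint(score)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Text Classification Evaluation",

"sec_num": "2.4"

},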
|
{ |
|
"text": "Distributed representations are the cornerstone of modern NLP, which have led to significant advances in many NLP tasks. iNLTK provides utilities to obtain distributed representations for words 19 , sentences and documents 20 obtained 17 Refer GitHub repository of the library for instructions to reproduce results on full and reduced dataset 18 https://inltk.readthedocs.io/en/latest/api docs.html#getsentence-similarity 19 https://inltk.readthedocs.io/en/latest/api docs.html#getembedding-vectors 20 https://inltk.readthedocs.io/en/latest/api docs.html#get-from embedding layer and encoder output of pretrained language models, respectively. Additionally, iNLTK provides utilities to generate text 21 given a prompt, using pre-trained language models, tokenize 22 text using sentencepiece tokenization models described in Section 2.2, identify 23 which one of the supported Indic languages is given text in and remove tokens of a foreign language 24 from given text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 237, |
|
"text": "17", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Classification Evaluation", |
|
"sec_num": "2.4" |
|
}, |
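
{

"text": "A combined usage sketch of the utilities listed above, assuming the Hindi models have been set up; the function names follow the API documentation linked in the footnotes and the example inputs are illustrative.\n\nfrom inltk.inltk import (get_embedding_vectors, predict_next_words,\n                         identify_language, remove_foreign_languages)\n\nvectors = get_embedding_vectors('\u0928\u092e\u0938\u094d\u0924\u0947 \u0926\u0941\u0928\u093f\u092f\u093e', 'hi')  # one vector per subword token\ngenerated = predict_next_words('\u092d\u093e\u0930\u0924 \u090f\u0915', 5, 'hi')  # continue the prompt by 5 tokens\nlang = identify_language('\u0928\u092e\u0938\u094d\u0924\u0947')  # detect the language of the text\ncleaned = remove_foreign_languages('\u0928\u092e\u0938\u094d\u0924\u0947 world', 'hi')  # flag non-Hindi tokens\nprint(vectors, generated, lang, cleaned)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Text Classification Evaluation",

"sec_num": "2.4"

},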
|
{ |
|
"text": "NLP and ML communities have a strong culture of building open-source tools. There are lots of easyto-use, user-facing libraries for general-purpose NLP like NLTK (Loper and Bird, 2002) , Stanford CoreNLP (Manning et al., 2014) , Spacy (Honnibal and Montani, 2017), AllenNLP (Gardner et al., 2018) , Flair (Akbik et al., 2019) , Stanza (Qi et al., 2020) and Huggingface Transformers (Wolf et al., 2019) . But most of these libraries have limited or no support for Indic languages, creating a barrier to entry for working with Indic languages. Additionally, for many Indic languages word embeddings have been trained, but they still lack richer pretrained representations from deep language models (Kunchukuttan et al., 2020) . iNLTK tries to solve these problems by providing pre-trained language models and out-of-the-box support for a variety of NLP tasks in 13 Indic languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 184, |
|
"text": "(Loper and Bird, 2002)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 226, |
|
"text": "(Manning et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 296, |
|
"text": "(Gardner et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 325, |
|
"text": "(Akbik et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 352, |
|
"text": "(Qi et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 401, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 696, |
|
"end": 723, |
|
"text": "(Kunchukuttan et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "iNLTK provides pre-trained language models and supports Data Augmentation, Textual Similarity, Sentence Embeddings, Word Embeddings, Tokenization and Text Generation in 13 Indic Languages. Our results significantly outperform other methods on text-classification benchmarks, using pre-trained models from iNLTK. These pre-trained models from iNLTK can be used as-is for a variety of NLP tasks, or can be fine-tuned on domain specific datasets. iNLTK is being widely 25 used 26 and sentence-encoding 21 https://inltk.readthedocs.io/en/latest/api docs.html#predictnext-n-words 22 https://inltk.readthedocs.io/en/latest/api docs.html#tokenize 23 https://inltk.readthedocs.io/en/latest/api docs.html#identifylanguage 24 https://inltk.readthedocs.io/en/latest/api docs.html#removeforeign-languages 25 https://github.com/goru001/inltk/network/members 26 https://pepy.tech/project/inltk appreciated 27 by the community 28 . We are working on expanding the supported languages in iNLTK to include other Indic languages like Telugu, Maithili; code mixed languages like Hinglish (Hindi and English), Manglish (Malayalam and English) and Tanglish (Tamil and English); expanding supported model architectures to include BERT. Additionally, we want to mitigate any possible unwarranted biases which might exist in pre-trained language models (Lu et al., 2019) , because of training data, which might propagate into downstream systems using these models. While these tasks are work in progress, we hope this library will accelerate NLP research and development in Indic languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1329, |
|
"end": 1346, |
|
"text": "(Lu et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://spacy.io/ 2 https://www.nltk.org/ 3 https://github.com/goru001/inltk 4 https://inltk.readthedocs.io/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/google/sentencepiece", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/AI4Bharat/indicnlp corpus13 Refer GitHub repository of the library for instructions to reproduce results", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We are thankful to Anurag Singh 29 and Ravi Annaswamy 30 for their contributions to support Urdu and Tamil in the iNLTK library, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carmen", |
|
"middle": [], |
|
"last": "Banea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aitor", |
|
"middle": [], |
|
"last": "Gonzalez-Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "German", |
|
"middle": [], |
|
"last": "Rigau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "497--511", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S16-1081" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 497-511, San Diego, California. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "FLAIR: An easy-to-use framework for state-of-theart NLP", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanja", |
|
"middle": [], |
|
"last": "Bergmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kashif", |
|
"middle": [], |
|
"last": "Rasul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schweter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--59", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-4010" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the- art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A hybrid deep learning architecture for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Ayush", |
|
"middle": [], |
|
"last": "Md Shad Akhtar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asif", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Ekbal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "482--493", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Md Shad Akhtar, Ayush Kumar, Asif Ekbal, and Push- pak Bhattacharyya. 2016. A hybrid deep learning architecture for sentiment analysis. In Proceedings of COLING 2016, the 26th International Confer- ence on Computational Linguistics: Technical Pa- pers, pages 482-493, Osaka, Japan. The COLING 2016 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. CoRR, abs/1607.04606. 27 https://github.com/goru001/inltk/stargazers 28 https://github.com/goru001/inltk#inltks-appreciation 29 https://github.com/anuragshas 30 https://github.com/ravi-annaswamy", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Transformer-xl: Attentive language models beyond a fixed", |
|
"authors": [ |
|
{ |
|
"first": "Zihang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Car- bonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. CoRR, abs/1901.02860.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Answering the question you wish they had asked: The impact of paraphrasing for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Pablo", |
|
"middle": [], |
|
"last": "Duboue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Chu-Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pablo Duboue and Jennifer Chu-Carroll. 2006. An- swering the question you wish they had asked: The impact of paraphrasing for question answering. In Proceedings of the Human Language Technol- ogy Conference of the NAACL, Companion Volume: Short Papers, pages 33-36.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Allennlp: A deep semantic natural language processing platform", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Grus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oyvind", |
|
"middle": [], |
|
"last": "Tafjord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pradeep", |
|
"middle": [], |
|
"last": "Dasigi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nelson", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Schmitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. CoRR, abs/1803.07640.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning word vectors for 157 languages", |
|
"authors": [ |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prakhar", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learn- ing word vectors for 157 languages. CoRR, abs/1802.06893.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Honnibal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ines", |
|
"middle": [], |
|
"last": "Montani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Finetuned language models for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Howard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Fine- tuned language models for text classification. CoRR, abs/1801.06146.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. CoRR, abs/1808.06226.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Ai4bharat-indicnlp corpus: Monolingual corpora and word embeddings for indic languages", |
|
"authors": [ |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Kunchukuttan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Divyanshu", |
|
"middle": [], |
|
"last": "Kakwani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satish", |
|
"middle": [], |
|
"last": "Golla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Gokul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Avik", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mitesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pratyush", |
|
"middle": [], |
|
"last": "Khapra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.00085" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anoop Kunchukuttan, Divyanshu Kakwani, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. Ai4bharat- indicnlp corpus: Monolingual corpora and word embeddings for indic languages. arXiv preprint arXiv:2005.00085.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yann Lecun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Nature", |
|
"volume": "521", |
|
"issue": "", |
|
"pages": "436--480", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1038/nature14539" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yann LeCun, Y. Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521:436-44.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Nltk: the natural language toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1118108.1118117" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: the natural language toolkit. CoRR, cs.CL/0205028.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Preetam Amancharla, and Anupam Datta", |
|
"authors": [ |
|
{ |
|
"first": "Kaiji", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Mardziel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fangjing", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Gender bias in neural natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Aman- charla, and Anupam Datta. 2019. Gender bias in neural natural language processing.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-5010" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60, Bal- timore, Maryland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Improved statistical machine translation using monolingually-derived paraphrases", |
|
"authors": [ |
|
{ |
|
"first": "Yuval", |
|
"middle": [], |
|
"last": "Marton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "381--390", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuval Marton, Chris Callison-Burch, and Philip Resnik. 2009. Improved statistical machine translation us- ing monolingually-derived paraphrases. In Proceed- ings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 381-390.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Automatic differentiation in pytorch", |
|
"authors": [ |
|
{ |
|
"first": ",", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Adam Paszke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Soumith Chintala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Chanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeming", |
|
"middle": [], |
|
"last": "Devito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alban", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Desmaison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Antiga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lerer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Paszke, S. Gross, Soumith Chintala, G. Chanan, E. Yang, Zachary Devito, Zeming Lin, Alban Des- maison, L. Antiga, and A. Lerer. 2017. Automatic differentiation in pytorch.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Stanza: A python natural language processing toolkit for many human languages", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Bolton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.07082" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. arXiv preprint arXiv:2003.07082.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Why You Should Do NLP Beyond English", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder. 2020. Why You Should Do NLP Beyond English. http://ruder.io/ nlp-beyond-english.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Huggingface's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R'emi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"text": "Statistics of Wikipedia Articles Dataset used for training Language Models", |
|
"content": "<table><tr><td colspan=\"4\">Language Code Language Code</td></tr><tr><td>Hindi</td><td>hi</td><td>Marathi</td><td>mr</td></tr><tr><td>Punjabi</td><td>pa</td><td>Bengali</td><td>bn</td></tr><tr><td>Gujarati</td><td>gu</td><td>Tamil</td><td>ta</td></tr><tr><td>Kannada</td><td>kn</td><td>Urdu</td><td>ur</td></tr><tr><td colspan=\"2\">Malayalam ml</td><td>Nepali</td><td>ne</td></tr><tr><td>Oriya</td><td>or</td><td>Sanskrit</td><td>sa</td></tr><tr><td>English</td><td>en</td><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "", |
|
"content": "<table><tr><td>: Languages supported in iNLTK</td></tr><tr><td>2 iNLTK Pretrained Language Models</td></tr><tr><td>iNLTK has pre-trained ULMFiT</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"content": "<table><tr><td>Language</td><td>Vocab size</td><td>Language</td><td>Vocab size</td></tr><tr><td>Hindi</td><td>30,000</td><td>Marathi</td><td>30,000</td></tr><tr><td>Punjabi</td><td>30,000</td><td>Bengali</td><td>30,000</td></tr><tr><td>Gujarati</td><td>20,000</td><td>Tamil</td><td>8,000</td></tr><tr><td>Kannada</td><td>25,000</td><td>Urdu</td><td>30,000</td></tr><tr><td colspan=\"2\">Malayalam 10,000</td><td>Nepali</td><td>15,000</td></tr><tr><td>Oriya</td><td>15,000</td><td>Sanskrit</td><td>20,000</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"text": "Text classification accuracy on public datasets", |
|
"content": "<table><tr><td>Language</td><td/><td>Perplexity</td></tr><tr><td/><td colspan=\"2\">ULMFiT TransformerXL</td></tr><tr><td>Hindi</td><td>34.0</td><td>26.0</td></tr><tr><td>Bengali</td><td>41.2</td><td>39.3</td></tr><tr><td>Gujarati</td><td>34.1</td><td>28.1</td></tr><tr><td>Malayalam</td><td>26.3</td><td>25.7</td></tr><tr><td>Marathi</td><td>17.9</td><td>17.4</td></tr><tr><td>Tamil</td><td>19.8</td><td>17.2</td></tr><tr><td>Punjabi</td><td>24.4</td><td>14.0</td></tr><tr><td>Kannada</td><td>70.1</td><td>61.9</td></tr><tr><td>Oriya</td><td>26.5</td><td>26.8</td></tr><tr><td>Sanskrit</td><td>5.5</td><td>2.7</td></tr><tr><td>Nepali</td><td>31.5</td><td>29.3</td></tr><tr><td>Urdu</td><td>13.1</td><td>12.5</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"text": "", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"text": "", |
|
"content": "<table><tr><td>: Statistics of publicly available classification</td></tr><tr><td>datasets (N is the number of classes)</td></tr><tr><td>News Category classification dataset (Kunchukut-</td></tr><tr><td>tan et al., 2020). Train and test splits, derived by the</td></tr><tr><td>authors (Kunchukuttan et al., 2020) from the above</td></tr><tr><td>mentioned corpora and used for benchmarking, are</td></tr><tr><td>available on the IndicNLP corpus website 12 .</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF10": { |
|
"text": "shows that iNLTK significantly outperforms other models across all languages and datasets 13 .", |
|
"content": "<table><tr><td>Language</td><td>Dataset</td><td colspan=\"2\"># Training Examples</td><td>%age reduction</td><td>INLP Accuracy</td><td/><td colspan=\"2\">iNLTK Accuracy</td></tr><tr><td/><td/><td>Full</td><td>Reduced</td><td/><td>Full</td><td>Full</td><td colspan=\"2\">Reduced</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>Without</td><td>With</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>Data Aug</td><td>Data Aug</td></tr><tr><td>Hindi</td><td colspan=\"2\">IITP+Movie 2,480</td><td>496</td><td>80%</td><td>45.81</td><td>57.74</td><td>47.74</td><td>56.13</td></tr><tr><td>Bengali</td><td>Soham Articles</td><td>11,284</td><td>112</td><td>99%</td><td>72.50</td><td>90.71</td><td>69.88</td><td>74.06</td></tr><tr><td>Gujarati</td><td/><td>5,269</td><td>526</td><td>90%</td><td>90.90</td><td>91.05</td><td>80.88</td><td>81.03</td></tr><tr><td>Malayalam</td><td>iNLTK</td><td>5,036</td><td>503</td><td>90%</td><td>93.49</td><td>95.56</td><td>82.38</td><td>84.29</td></tr><tr><td>Marathi</td><td>Headlines</td><td>9,672</td><td>483</td><td>95%</td><td>89.92</td><td>92.40</td><td>84.13</td><td>84.55</td></tr><tr><td>Tamil</td><td/><td>5,346</td><td>267</td><td>95%</td><td>93.57</td><td>95.22</td><td>86.25</td><td>89.84</td></tr><tr><td/><td>Average</td><td>6514.5</td><td>397.8</td><td>91.5%</td><td>81.03</td><td>87.11</td><td>75.21</td><td>78.31</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF11": { |
|
"text": "Comparison of Accuracy on INLP trained on Full Training set vs Accuracy on iNLTK, using data augmentation, trained on reduced training set", |
|
"content": "<table><tr><td>3 iNLTK API</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |