{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:00:49.134230Z" }, "title": "Attention-based Domain Adaption Using Transfer Learning for Part-of-Speech Tagging: An Experiment on the Hindi Language", "authors": [ { "first": "Rajesh", "middle": [], "last": "Kumar Mundotiya", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT(BHU)", "location": { "settlement": "Varanasi", "country": "India" } }, "email": "" }, { "first": "Vikrant", "middle": [], "last": "Kumar", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT(BHU)", "location": { "settlement": "Varanasi", "country": "India" } }, "email": "vikrantkumar.cse18@iitbhu.ac.in" }, { "first": "Arpit", "middle": [], "last": "Mehta", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT(BHU)", "location": { "settlement": "Varanasi", "country": "India" } }, "email": "arpitmehta.cse18@iitbhu.ac.in" }, { "first": "Anil", "middle": [ "Kumar" ], "last": "Singh", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT(BHU)", "location": { "settlement": "Varanasi", "country": "India" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Part-of-Speech (POS) tagging is considered a preliminary task for parsing any language, which in turn is required for many Natural Language Processing (NLP) applications. Existing work on the Hindi language for this task reported results on either the General or the News domain from the Hindi-Urdu Treebank that relied on a reasonably large annotated corpus. Since the Hindi datasets of the Disease and the Tourism domain have less annotated corpus, using domain adaptation seems to be a promising approach. In this paper, we describe an attention-based model with selfattention as well as monotonic chunk-wise attention, which successfully leverage syntactic relations through training on a small dataset. The accuracy of the Hindi Disease dataset performed by the attention-based model using transfer learning is 93.86%, an improvement on the baseline model (93.64%). In terms of F 1-score, however, the baseline model (93.65%) seems to do better than the monotonic-chunk-wise attention model (94.05%).", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Part-of-Speech (POS) tagging is considered a preliminary task for parsing any language, which in turn is required for many Natural Language Processing (NLP) applications. Existing work on the Hindi language for this task reported results on either the General or the News domain from the Hindi-Urdu Treebank that relied on a reasonably large annotated corpus. Since the Hindi datasets of the Disease and the Tourism domain have less annotated corpus, using domain adaptation seems to be a promising approach. In this paper, we describe an attention-based model with selfattention as well as monotonic chunk-wise attention, which successfully leverage syntactic relations through training on a small dataset. The accuracy of the Hindi Disease dataset performed by the attention-based model using transfer learning is 93.86%, an improvement on the baseline model (93.64%). 
In terms of F1-score, however, the baseline model (94.05%) seems to do better than the monotonic chunk-wise attention model (93.65%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Deep learning has been consistently providing promising results on a large variety of language processing problems. Textual processing includes diverse applications of NLP such as text classification, dialect identification and classification, sequence labelling problems (such as Named Entity Recognition and Extraction, Chunking and POS tagging) and machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, to improve performance on the preliminary NLP tasks -POS tagging and Chunking -especially under a low-resource scenario, Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN) have been the most widely used models. Gated Recurrent Units (GRU) and Long Short Term Memory (LSTM), variants of the RNN, have also been tried as an efficient way of modelling sequential information.", "cite_spans": [ { "start": 179, "end": 213, "text": "Convolutional Neural Network (CNN)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Earlier work on POS tagging for canonical Hindi text achieved considerable results of about 97.10% on the Universal Dependencies dataset (Plank et al., 2016), which belongs to a single domain. The performance drops radically when such a trained model is deployed on domain-specific or out-of-domain data. Domain-specific data such as Tourism and Disease has its own distribution and only a minimal amount of annotated data, so it is considered a low-resource setting, which also causes an out-of-vocabulary (OOV) word issue.", "cite_spans": [ { "start": 130, "end": 150, "text": "(Plank et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "OOV is a major problem in low-resource text processing, faced when a model trained on one domain of a language is applied to another domain of the same language. This problem is partly countered by incorporating character-level information into the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lately, Transfer Learning has been shown to enhance model performance by transferring the learned features (general as well as domain-specific features) obtained while training the model. The general features are transferred to the target domain through an initializer or a feature extractor. These methods are beneficial as they reuse the learned parameters of the pre-trained model (Zennaki et al., 2019). Yang et al. (2017) and Meftah et al. (2018) have followed the Transfer Learning approach for English (which follows a Subject-Verb-Object sentence structure), while there is not much work using such models for Hindi (which follows a Subject-Object-Verb sentence structure).", "cite_spans": [ { "start": 402, "end": 424, "text": "(Zennaki et al., 2019)", "ref_id": "BIBREF17" }, { "start": 427, "end": 445, "text": "Yang et al. (2017)", "ref_id": "BIBREF15" }, { "start": 448, "end": 468, "text": "Meftah et al. 
(2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The proposed architecture of (Ma and Hovy, 2016) is employed as a baseline model for the purposes of our work. It encodes character level information by CNN. Authors have strengthened the baseline model through attention mechanism: selfattention and monotonic chunk-wise attention as the contribution. The motivation behind using these attention mechanisms is that it exhibits adequate improvement on neural machine translation, especially for low resource regime (Chiu and Raffel, 2017; Bahdanau et al., 2014; Goyal et al., 2020) . Also, the experimental datasets required can be smaller in size. The improvement in capturing syntactic information is due to the attention mechanisms. The results obtained by the attention mechanism provide an improvement over the original baseline results.", "cite_spans": [ { "start": 29, "end": 48, "text": "(Ma and Hovy, 2016)", "ref_id": "BIBREF8" }, { "start": 464, "end": 487, "text": "(Chiu and Raffel, 2017;", "ref_id": null }, { "start": 488, "end": 510, "text": "Bahdanau et al., 2014;", "ref_id": "BIBREF0" }, { "start": 511, "end": 530, "text": "Goyal et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use as our baseline the above mentioned model using a discriminative tagging model proposed by Ma et al. (2016) In this model, the preservation of both syntactic and semantic information of words is achieved by a combination of two vectors obtained at word-level and character-level (Murthy et al., 2018) .", "cite_spans": [ { "start": 98, "end": 114, "text": "Ma et al. (2016)", "ref_id": "BIBREF8" }, { "start": 286, "end": 307, "text": "(Murthy et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2" }, { "text": "The character-level information captures orthographic and morphological features by applying CNN (Murthy et al., 2018) , where characters are initially represented by a one-hot encoder and passed to convolution layer. The convolution layer holds n-gram information followed by max-pooling layer, where n is given by filter size. Maximum relevant information over the different features perceived through this layer, which are the distinct features of the word, represented at the character level, are passed to a fully connected layer. This layer used a Rectifier Linear Unit (ReLU) as a non-linear activation function to produce character-level word vector. The word vector is assigned by random initialization which is learnt during model training. The concatenated character and word-level vector is fed to the Bidirectional GRU. The obtained output from forward and backward GRUs at each time-step are combined before being fed to a Conditional Random Fields (CRF) layer. The CRF layer generate a probability score over the labels at each time-step.", "cite_spans": [ { "start": 97, "end": 118, "text": "(Murthy et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "2" }, { "text": "Since the last few years, attention mechanisms have been providing promising results in NLP applications as well, e.g. Machine Translation gets a better alignment between the source and the targets words after applying the attention mechanism (Bahdanau et al., 2014; Chiu and Raffel, 2017) . 
Here, we incorporate two attention mechanisms into the baseline model: self-attention (Cheng et al., 2016) and Monotonic Chunkwise Attention (MOCHA) (Chiu and Raffel, 2017), to enhance its ability to capture syntactic relations from the input words.", "cite_spans": [ { "start": 243, "end": 266, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF0" }, { "start": 267, "end": 289, "text": "Chiu and Raffel, 2017)", "ref_id": null }, { "start": 370, "end": 390, "text": "(Cheng et al., 2016)", "ref_id": "BIBREF2" }, { "start": 433, "end": 456, "text": "(Chiu and Raffel, 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Attention Based Model", "sec_num": "3" }, { "text": "Self-attention or intra-attention (Cheng et al., 2016) became popular after the Transformer model was introduced for Neural Machine Translation (Vaswani et al., 2017). The Transformer proposed by Vaswani et al. (2017) relies entirely on self-attention, which uses different positions of the input to obtain the attention score. Self-attention is called intra-attention because the score depends only on the input itself: it is calculated by applying a softmax over the additive or dot product of the current vector with the previous attention score. These intra-word dependencies help capture the syntactic relations among words during labelling. Monotonic chunk-wise attention (Chiu and Raffel, 2017) is an extension of hard monotonic attention that adds flexibility to the attention score calculation. In this method, the energy score is calculated over a chunk (a static word window of fixed size) rather than over the entire input (as in soft attention) or a single time-step of the input (as in hard monotonic attention). The energy score combines a chunk energy (soft attention over a limited window) and a monotonic energy (Bahdanau attention (Bahdanau et al., 2014) with a sigmoid function instead of a softmax) to calculate the attention score. This attention score is calculated for each input time-step.", "cite_spans": [ { "start": 34, "end": 54, "text": "(Cheng et al., 2016)", "ref_id": "BIBREF2" }, { "start": 147, "end": 169, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF14" }, { "start": 200, "end": 221, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF14" }, { "start": 716, "end": 739, "text": "(Chiu and Raffel, 2017)", "ref_id": null }, { "start": 1202, "end": 1245, "text": "(Bahdanau attention (Bahdanau et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Attention Mechanism", "sec_num": "3.1" }, { "text": "The previous extensions of the attention mechanisms are based on the encoder-decoder architecture prevalent in end-to-end neural machine translation systems. In our work, two Bidirectional GRU layers are used to incorporate the attention mechanisms into the baseline model for POS tagging. The first GRU layer is treated as the encoder for attention and the remaining layer as the decoder in the attention-based extended baseline model. A dropout layer is also used between the attention input and output to prevent overfitting (a minimal sketch of this wiring is given below). 
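A minimal sketch of this wiring follows (hypothetical PyTorch-style code, not the authors' implementation). Plain scaled dot-product self-attention stands in for both attention variants; the chunk-wise and monotonic energies of MOCHA are not reproduced, and all names are illustrative.

import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    # Scaled dot-product self-attention over the whole sequence of GRU states.
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, h):                                    # h: (batch, seq, dim)
        scores = self.q(h) @ self.k(h).transpose(1, 2) / math.sqrt(h.size(-1))
        return torch.softmax(scores, dim=-1) @ self.v(h)     # attention-weighted context

class AttentiveEncoder(nn.Module):
    # Encoder BiGRU -> attention (with dropout) -> decoder BiGRU.
    def __init__(self, in_dim, hidden=128, p_drop=0.5):
        super().__init__()
        self.enc = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = SelfAttention(2 * hidden)
        self.drop = nn.Dropout(p_drop)
        self.dec = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                                    # x: concatenated word+char vectors
        h, _ = self.enc(x)                                   # first BiGRU acts as the encoder
        a = self.drop(self.attn(self.drop(h)))               # dropout between attention input and output
        out, _ = self.dec(a)                                 # second BiGRU acts as the decoder
        return out

# Example: two sentences of length 5 with 164-dimensional input vectors.
out = AttentiveEncoder(in_dim=164)(torch.randn(2, 5, 164))   # (2, 5, 256)

The output of the second (decoder) BiGRU then feeds the CRF layer exactly as in the baseline model.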
The rest of the model architecture, from the CNN-encoded input and word vectors to the predictions by the CRF, is the same as in the baseline model, as shown in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 669, "end": 677, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Attention-based Model", "sec_num": "3.2" }, { "text": "Domain adaption has so far been performed in supervised, unsupervised and semi-supervised settings for many tasks, including POS tagging. We use relatively little annotated data to build a robust POS tagger for the target domain by using Transfer Learning. The Transfer Learning procedure closely follows the settings of Meftah et al. (2018). While performing transfer learning, the attention-based model is first trained for POS tagging on one domain. The optimal parameters learned during this training are then used to initialize the training on the other domain. This is the standard transfer learning procedure, in which all labels are treated equally. Since the Disease domain dataset is smaller than the Tourism dataset, Tourism is considered the source domain, while Disease is considered the target domain for domain adaption.", "cite_spans": [ { "start": 315, "end": 335, "text": "Meftah et al. (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Domain Adaption model", "sec_num": "3.3" }, { "text": "The source and target domain datasets are each divided in a 70%-30% ratio for training and validation of the model. The maximum lengths of sentences and words are fixed for training the model at 52 and 22, respectively; however, the gradient calculation ignores the padded positions of sentences and words, which in turn prevents overfitting on the padding. A character-level word vector of size 32 is obtained by applying two stacked convolution layers with 64 and 124 filters, each of size 3, with a dropout of 30%. The model is trained with a word vector size of 100 and 128 GRU units. As the annotated corpus is tiny, the model tends to overfit quickly. Hence, a dropout of 50% and early stopping with a patience of 30 are applied. The parameters and hyper-parameters used in training are briefly listed in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 796, "end": 803, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Settings", "sec_num": "4.2" }, { "text": "The baseline model is also robust for POS tagging, as the results obtained on the Disease dataset with isolated training improve with domain adaption even though the overlapping vocabulary is relatively small (1579 types). The baseline model's accuracy rises from 93.64% with isolated training to 94.29% with domain adaption training, which is the highest accuracy among the results reported in Table 3.", "cite_spans": [], "ref_spans": [ { "start": 396, "end": 403, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Result and Analysis", "sec_num": "5" }, { "text": "The self-attention-based model degrades the performance due to the nature of its attention score calculation and the limitation on sentence length. On the other hand, the MOCHA-based model improves the POS tagging performance due to its consideration of chunks during the attention score calculation; we have used a chunk size of 8 in the model setup. The MOCHA-based model obtains an accuracy of 93.86%, a slight improvement over the baseline model, as depicted in Table 3. The baseline model and the monotonic chunk-wise attention model achieve 94.05% and 93.65%, respectively, as their best F1-scores for domain adaption. However, after tuning the hyper-parameters for DA, namely the learning rate (0.01 for the Baseline, 0.02 for MOCHA and 0.004 for Self-attention), the performance of these models improved. 
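These DA runs follow the parameter-copy procedure of Section 3.3, which the sketch below summarizes (hypothetical Python code; the tagger, the data and the learning rates are toy placeholders, not the tuned values reported here).

import copy
import torch
import torch.nn as nn

def fine_tune(model, batches, lr, epochs=1):
    # One supervised pass over a domain with standard cross-entropy training.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in batches:
            opt.zero_grad()
            loss = loss_fn(model(x).transpose(1, 2), y)   # (batch, tags, seq) vs (batch, seq)
            loss.backward()
            opt.step()
    return model

# Toy stand-ins for the tagger and the two domains.
tagger = nn.Sequential(nn.Linear(16, 30))                           # placeholder architecture
tourism = [(torch.randn(4, 7, 16), torch.randint(0, 30, (4, 7)))]   # source-domain batches
disease = [(torch.randn(4, 7, 16), torch.randint(0, 30, (4, 7)))]   # target-domain batches

source_model = fine_tune(tagger, tourism, lr=1e-3)         # learn the source parameters
target_model = copy.deepcopy(source_model)                 # initialize the target model with them
target_model = fine_tune(target_model, disease, lr=1e-3)   # fine-tune on the target domain

In practice, the early stopping and dropout of Section 4.2 would wrap this fine-tuning loop.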
We trained these models for DA with varying training sizes (200, 400, 600 and 900 sentences). The Self-attention model performs better (94.63% F1-score at a training size of 900) than the other models (94.20% and 93.65% F1-score at a training size of 900 for the Baseline and MOCHA models, respectively), as illustrated in Figure 4. As evident from Table 3, the MOCHA-based model is more precise than the baseline model. From an analysis of the prediction files of the baseline model and the MOCHA-based model, we found that the error rate is reduced on selected tags. Postposition (PSP), Main Verb (V VM), Punctuation (RD PUNC), Cardinal Quantifier (QT QTC) and Co-ordinator Conjunction (CC CCD), and General Quantifier (QT QTF) and Common Noun (N NN) are the selected most frequent and less frequent POS tags, respectively. The error rate is reduced on these tags; the differences are shown in Figure 5. Hence, the MOCHA-based model is also more accurate at predicting the right POS tags for scarce words. On the other POS tags, the error rate of the MOCHA-based model is found to be comparable to that of the baseline model.", "cite_spans": [], "ref_spans": [ { "start": 748, "end": 756, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 775, "end": 782, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1296, "end": 1304, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Result and Analysis", "sec_num": "5" }, { "text": "A short chronological overview of the related work is presented here to provide the context of our work. Blitzer et al. (2006) used Structural Correspondence Learning (SCL) to automatically induce correspondences between the features of different domains in order to transfer a POS tagger from the Wall Street Journal (financial news) to MEDLINE (biomedical abstracts). Collobert et al. (2011) presented a task-independent learning algorithm and a unified convolutional neural network architecture for various NLP tasks such as POS tagging, Chunking, Named Entity Recognition and Semantic Role Labelling. They jointly trained models for the POS tagging, Chunking and NER tasks, with additional linkage of trainable parameters for transferring knowledge learned in one task to another. Zhang et al. (2014) showed type-supervised domain adaptation for Chinese word segmentation and POS tagging using domain-specific tag dictionaries. Combining an unlabeled target-domain dataset with an annotated source-domain dataset improved target-domain accuracy; they obtained a 33% error reduction on target-domain tagging using unlabeled sentences and a lexicon of 3000 words. Yu et al. (2015) used an effective confidence-based self-training approach to select additional training samples for domain adaptation of a dependency parser and were able to improve parsing accuracy by 1.6% on out-of-domain texts from a chemical domain. Mishra et al. (2017) used unlabeled data for POS tagging, applying feature transfer via transfer learning from resource-rich to resource-poor languages across eight Indian languages, each having 25K sentences, and gained an average accuracy of 81%. Yang et al. (2017) explored transfer learning for neural sequence tagging, where a source task with a large annotated dataset is exploited to enhance the performance of a target task with a smaller dataset. They examined the effect of Transfer Learning on recurrent neural networks across domains, applications and languages, and obtained significant improvements. Meftah et al. 
(2018) used a GRU, a CRF and a CNN for character-level feature representation as model components for POS tagging treated as a sequence labelling problem. To address data scarcity, they examined the effectiveness of Cross-Domain and Cross-Task Transfer Learning. Li et al. (2019) proposed a domain embedding approach to merge the source and the target domain training data. The results demonstrated that it is more effective than both multi-task learning approaches and direct corpus concatenation (the traditional approach). Contextualized word representations with fine-tuning were used to exploit unlabeled target-domain data, which further increased cross-domain parsing accuracy.", "cite_spans": [ { "start": 102, "end": 123, "text": "Blitzer et al. (2006)", "ref_id": "BIBREF1" }, { "start": 356, "end": 379, "text": "Collobert et al. (2011)", "ref_id": "BIBREF4" }, { "start": 772, "end": 791, "text": "Zhang et al. (2014)", "ref_id": "BIBREF18" }, { "start": 1156, "end": 1172, "text": "Yu et al. (2015)", "ref_id": "BIBREF16" }, { "start": 1421, "end": 1440, "text": "Mishra et al (2017)", "ref_id": "BIBREF10" }, { "start": 1670, "end": 1688, "text": "Yang et al. (2017)", "ref_id": "BIBREF15" }, { "start": 2031, "end": 2051, "text": "Meftah et al. (2018)", "ref_id": "BIBREF9" }, { "start": 2298, "end": 2314, "text": "Li et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We have used a CNN architecture similar to that proposed by Meftah et al. (2018), except that we have applied different sizes of stacked convolution layers. We have also used the same transfer settings across domains for performing domain adaption on the Hindi Treebank dataset.", "cite_spans": [ { "start": 55, "end": 75, "text": "Meftah et al. (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Distributed word representations usually learn semantic and syntactic information about a word but ignore word length and morphological features. Part-of-speech tagging requires intra-word information when dealing with a morphologically rich language. Santos et al. (2014) demonstrated that a CNN is an effective approach for extracting morphological features and encoding them into neural representations. Singh et al. (2018) used a CRF and LSTM Recurrent Neural Networks to model POS tagging on a Hindi-English code-mixed dataset from Twitter and achieved an overall F1-score of 90.20%. These works are related to our use of character-level information in the models that we used.", "cite_spans": [ { "start": 250, "end": 270, "text": "Santos et al. (2014)", "ref_id": "BIBREF5" }, { "start": 407, "end": 426, "text": "Singh et al. (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "The attention-based extended baseline model is a simple model for domain adaption to perform Part-of-Speech (POS) tagging when annotated corpora are scarce. It is an extension of the LSTM-CNN-CRF model, replacing the LSTM with a GRU and appending attention mechanisms (self-attention and monotonic chunk-wise attention). This model was used to perform domain adaption on the Hindi Treebank dataset, where the Tourism domain was considered the source domain and Disease the target domain in the Transfer Learning scenario. The results show an improvement over the baseline model from the monotonic chunk-wise attention mechanism. 
As part of future work, the scarcity of annotated corpora in both domains can be overcome to some extent by using available pre-trained word embeddings, or raw corpora to obtain better embeddings, for this model. In addition, further linguistic information can be fused into the model to leverage the advantages of additional accessible annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "http://tdil-dc.in/index.php?lang=en", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Domain adaptation with structural correspondence learning", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "120--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspon- dence learning. In Proceedings of the 2006 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 120-128, Sydney, Australia, July. As- sociation for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Long short-term memory-networks for machine reading", "authors": [ { "first": "Jianpeng", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "551--561", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine read- ing. In Proceedings of the 2016 Conference on Empir- ical Methods in Natural Language Processing, pages 551-561, Austin, Texas, November. 
Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of machine learning research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12(ARTICLE):2493-2537.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning character-level representations for part-of-speech tagging", "authors": [ { "first": "Santos", "middle": [], "last": "Cicero Dos", "suffix": "" }, { "first": "Bianca", "middle": [], "last": "Zadrozny", "suffix": "" } ], "year": 2014, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1818--1826", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicero Dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tag- ging. In International Conference on Machine Learn- ing, pages 1818-1826.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Efficient neural machine translation for lowresource languages via exploiting related languages", "authors": [ { "first": "Vikrant", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Sourav", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Dipti Misra", "middle": [], "last": "Sharma", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop", "volume": "", "issue": "", "pages": "162--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikrant Goyal, Sourav Kumar, and Dipti Misra Sharma. 2020. Efficient neural machine translation for low- resource languages via exploiting related languages. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics: Student Re- search Workshop, pages 162-168.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semi-supervised domain adaptation for dependency parsing", "authors": [ { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xue", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2386--2395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenghua Li, Xue Peng, Min Zhang, Rui Wang, and Luo Si. 2019. Semi-supervised domain adaptation for de- pendency parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguis- tics, pages 2386-2395, Florence, Italy, July. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1064--1074", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany, August. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A neural network model for part-of-speech tagging of social media texts", "authors": [ { "first": "Sara", "middle": [], "last": "Meftah", "suffix": "" }, { "first": "Nasredine", "middle": [], "last": "Semmar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Meftah and Nasredine Semmar. 2018. A neural net- work model for part-of-speech tagging of social media texts. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "POS tagging for resource poor languages through feature projection", "authors": [ { "first": "Pruthwik", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Vandan", "middle": [], "last": "Mujadia", "suffix": "" }, { "first": "Dipti Misra", "middle": [], "last": "Sharma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 14th International Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "50--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pruthwik Mishra, Vandan Mujadia, and Dipti Misra Sharma. 2017. POS tagging for resource poor languages through feature projection. In Proceed- ings of the 14th International Conference on Natu- ral Language Processing (ICON-2017), pages 50-55, Kolkata, India, December. NLP Association of India.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improving ner tagging performance in low-resource languages via multilingual learning", "authors": [ { "first": "Rudra", "middle": [], "last": "Murthy", "suffix": "" }, { "first": "M", "middle": [], "last": "Mitesh", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2018, "venue": "ACM Trans. Asian Low-Resour. Lang. Inf. Process", "volume": "18", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudra Murthy, Mitesh M. Khapra, and Pushpak Bhat- tacharyya. 2018. Improving ner tagging performance in low-resource languages via multilingual learning. ACM Trans. Asian Low-Resour. Lang. Inf. 
Process., 18(2), December.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "412--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidi- rectional long short-term memory models and auxil- iary loss. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412-418, Berlin, Germany, August. Association for Computational Lin- guistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A twitter corpus for hindi-english code mixed pos tagging", "authors": [ { "first": "Kushagra", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Indira", "middle": [], "last": "Sen", "suffix": "" }, { "first": "Ponnurangam", "middle": [], "last": "Kumaraguru", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media", "volume": "", "issue": "", "pages": "12--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kushagra Singh, Indira Sen, and Ponnurangam Ku- maraguru. 2018. A twitter corpus for hindi-english code mixed pos tagging. In Proceedings of the Sixth International Workshop on Natural Language Pro- cessing for Social Media, pages 12-17.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information process- ing systems, pages 5998-6008.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Transfer learning for sequence tagging with hierarchical recurrent networks", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Co", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W Co- hen. 2017. 
Transfer learning for sequence tagging with hierarchical recurrent networks.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Domain adaptation for dependency parsing via selftraining", "authors": [ { "first": "Juntao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Mohab", "middle": [], "last": "Elkaref", "suffix": "" }, { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 14th International Conference on Parsing Technologies", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juntao Yu, Mohab Elkaref, and Bernd Bohnet. 2015. Domain adaptation for dependency parsing via self- training. In Proceedings of the 14th International Conference on Parsing Technologies, pages 1-10, Bil- bao, Spain, July. Association for Computational Lin- guistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A neural approach for inducing multilingual resources and natural language processing tools for low-resource languages", "authors": [ { "first": "Othman", "middle": [], "last": "Zennaki", "suffix": "" }, { "first": "Nasredine", "middle": [], "last": "Semmar", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" } ], "year": 2019, "venue": "Natural Language Engineering", "volume": "25", "issue": "1", "pages": "43--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Othman Zennaki, Nasredine Semmar, and Laurent Be- sacier. 2019. A neural approach for inducing multilin- gual resources and natural language processing tools for low-resource languages. Natural Language Engi- neering, 25(1):43-67.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Type-supervised domain adaptation for joint segmentation and POS-tagging", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "588--597", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Type-supervised domain adaptation for joint segmentation and POS-tagging. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 588-597, Gothenburg, Sweden, April. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": ", together with character-level information encoded by CNN, illustrated in Figure 1." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Baseline model for POS Tagging" }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "Attention-based extended baseline model for POS Tagging Here, the optimal parameters \u03b8 s from the training of source domain are used for initialization of the target domain's parameters \u03b8 t . After this initialization (\u03b8 s \u2192 \u03b8 t ), the model is fine-tuned for the target domain, as shown inFigure 3." 
}, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "Domain adaption via transfer learning approach" }, "FIGREF4": { "num": null, "type_str": "figure", "uris": null, "text": "Accuracy and F 1 -score comparison on Variable length of training data size for DA on the Baseline, Selfattention and MOCHA-based modelFigure 5: Error-rate comparison between selective most and less frequent POS tags obtained from predictions of baseline and MOCHA-based model" }, "TABREF0": { "html": null, "type_str": "table", "content": "
Domain       Sentences   Types
Tourism      3022        7100
Disease      1494        4987
Overlapping  -           1579
4 Experimental Setup
4.1 Dataset
", "text": "For performing the experiments of domain adaption, we have used Disease and Tourism domains of the Hindi Treebank dataset 1 . The dataset follows the Bureau of Indian Standards (BIS) tagset. The statistics of the dataset are mentioned inTable 1. As the size of the dataset is small, and out of which overlapped types are 1579, we have extracted Treebank for our experiments.", "num": null }, "TABREF1": { "html": null, "type_str": "table", "content": "", "text": "Hindi Treebank data statistics according to domain", "num": null }, "TABREF3": { "html": null, "type_str": "table", "content": "
Parameter                      Value
Maximum sentence length        52
Maximum word length            22
Character vector size          32
Convolution filters            64 and 124 (filter size 3)
CNN dropout                    30%
Word vector size               100
GRU units                      128
Dropout                        50%
Early stopping patience        30
MOCHA chunk size               8
Learning rate (tuned for DA)   0.01 (Baseline), 0.02 (MOCHA), 0.004 (Self-attention)
", "text": "The values of parameters and hyper-parameters used in model training (as listed in Section 4.2)", "num": null }, "TABREF4": { "html": null, "type_str": "table", "content": "
Model                 Accuracy (%)   F1-score (%)
Baseline Model        93.64          94.05
Baseline Model + DA   94.29          94.20
Self-Attention + DA   91.11          90.46
MOCHA + DA            93.86          93.65
", "text": "Obtained results from the baseline model and the attention-based models, where DA indicates domain adaption settings", "num": null } } } }