{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:20:59.961002Z" }, "title": "IRLAB-DAIICT@DravidianLangTech-EACL2021: Neural Machine Translation", "authors": [ { "first": "Raj", "middle": [], "last": "Prajapati", "suffix": "", "affiliation": {}, "email": "prajapatiraj.97@gmail.com" }, { "first": "Vedant", "middle": [], "last": "Parikh", "suffix": "", "affiliation": {}, "email": "vedant.parikh.6299@gmail.com" }, { "first": "Prasenjit", "middle": [], "last": "Majumder", "suffix": "", "affiliation": {}, "email": "prasenjit.majumder@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes our team's submission of the EACL DravidianLangTech-2021's shared task on Machine Translation of Dravidian languages.We submitted our translations for English to Malayalam , Tamil , and Telugu. The submissions mainly focus on having adequate amount of data backed up by good pre-processing of it to produce quality translations,which includes some custom made rules to remove unnecessary sentences. We conducted several experiments on these models by tweaking the architecture, Byte Pair Encoding (BPE) and other hyperparameters.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper describes our team's submission of the EACL DravidianLangTech-2021's shared task on Machine Translation of Dravidian languages.We submitted our translations for English to Malayalam , Tamil , and Telugu. The submissions mainly focus on having adequate amount of data backed up by good pre-processing of it to produce quality translations,which includes some custom made rules to remove unnecessary sentences. We conducted several experiments on these models by tweaking the architecture, Byte Pair Encoding (BPE) and other hyperparameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We participated in the shared task on Machine Translation in Dravidian languages Dravidian-LangTech, EACL 2021 .The advancement of technology has increased our internet usage and majority of the languages have acclimatised to the growing digital world. However, there are many regional languages which are under-resourced languages and still lack development.One such language family is the Dravidian languages , these languages are majorly spoken in south India ,Nepal, Pakistan, Sri Lanka and South Asia, we have submitted our translations for three language pairs namely:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. English-Malayalam 2. English-Tamil", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our implementations uses Transformer architecture and for that we have used OpenNMT-py (Klein et al., 2017) framework and BLEU (Papineni et al., 2002) score as the evaluation metric for our translation system.", "cite_spans": [ { "start": 87, "end": 107, "text": "(Klein et al., 2017)", "ref_id": "BIBREF2" }, { "start": 127, "end": 150, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "English-Telugu", "sec_num": "3." }, { "text": "Our main focus was on proper pre-processing of the data and often we have seen that improper preprocessing has led to horrendous translations. 
We performed extensive data pre-processing, ranging from basic cleaning of punctuation symbols to language-specific script normalization, and added some custom rules on top of this. This is followed by tokenization, truecasing, and Byte Pair Encoding (BPE).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English-Telugu", "sec_num": "3." }, { "text": "For Indic languages, especially Dravidian languages, we often face the problem of out-of-vocabulary (OOV) words, which is taken care of by word segmentation using BPE, so we deal with subwords instead of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English-Telugu", "sec_num": "3." }, { "text": "This paper is arranged as follows: first we describe the task undertaken, followed by an in-depth explanation of the model architecture; next we describe the experimental setup, which includes information on the provided data set, the pre-processing steps, and statistics of the cleaned data. After that, we describe the experiments conducted on the different language pairs and analyse the results produced. Finally, we draw some conclusions and propose future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English-Telugu", "sec_num": "3." }, { "text": "The task focuses on improving access to and production of information for speakers of Dravidian languages. Due to the low resources available, the research community has not developed much interest in this domain; the main focus of this task is to promote research in this area and to build machine translation systems for native monolingual speakers of this group of languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "In the era of digitization, a large population remains disconnected from the digital world because they cannot access it in their native language; this task tries to address that gap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "The experimental setup section contains detailed information about our experiments, data, and overall approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "3 Architecture", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "Given two parallel sentences (a, b), the NMT model tries to learn the parameters \u03b8 by maximizing the probability P(b | a; \u03b8). The encoder maps the input sentence to a set of hidden representations h, and the decoder generates each target token b_t using the previously generated target tokens b_k, where k < t." }, "TABREF2": { "html": null, "type_str": "table", "text": "Cleaned training data statistics", "num": null, "content": "
Language Pair | No. of sentences
English-Malayalam | 2K
English-Tamil | 1.5K
English-Telugu | 1.3K
" }, "TABREF3": { "html": null, "type_str": "table", "text": "Cleaned validation data statistics", "num": null, "content": "" }, "TABREF5": { "html": null, "type_str": "table", "text": "The main model configuration", "num": null, "content": "
" }, "TABREF6": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
Training Parameters
Language Pair | BLEU Score
English-Malayalam | 24.89
English-Tamil | 7.00
English-Telugu | 15.79
" }, "TABREF7": { "html": null, "type_str": "table", "text": "Results on Validation data", "num": null, "content": "" }, "TABREF9": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
" } } } }