{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:15:12.321773Z" }, "title": "Low Resource Multimodal Neural Machine Translation of EnglishHindi in News Domain", "authors": [ { "first": "Loitongbam", "middle": [], "last": "Sanayai Meetei", "suffix": "", "affiliation": { "laboratory": "", "institution": "NIT Silchar", "location": { "country": "India" } }, "email": "" }, { "first": "Thoudam", "middle": [], "last": "Doren Singh", "suffix": "", "affiliation": { "laboratory": "", "institution": "NIT Silchar", "location": { "country": "India" } }, "email": "thoudam.doren@gmail.com" }, { "first": "Sivaji", "middle": [], "last": "Bandyopadhyay", "suffix": "", "affiliation": { "laboratory": "", "institution": "NIT Silchar", "location": { "country": "India" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Incorporating multiple input modalities in a machine translation (MT) system is gaining popularity among MT researchers. Unlike the publicly available dataset for Multimodal Ma chine Translation (MMT) tasks, where the cap tions are short image descriptions, the news captions provide a more detailed description of the contents of the images. As a result, nu merous named entities relating to specific per sons, locations, etc., are found. In this paper, we acquire two monolingual news datasets re ported in English and Hindi paired with the images to generate a synthetic EnglishHindi parallel corpus. The parallel corpus is used to train the EnglishHindi Neural Machine Trans lation (NMT) and an EnglishHindi MMT sys tem by incorporating the image feature paired with the corresponding parallel corpus. We also conduct a systematic analysis to evaluate the EnglishHindi MT systems with 1) more synthetic data and 2) by adding backtranslated data. Our finding shows improvement in terms of BLEU scores for both the NMT (+8.05) and MMT (+11.03) systems.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Incorporating multiple input modalities in a machine translation (MT) system is gaining popularity among MT researchers. Unlike the publicly available dataset for Multimodal Ma chine Translation (MMT) tasks, where the cap tions are short image descriptions, the news captions provide a more detailed description of the contents of the images. As a result, nu merous named entities relating to specific per sons, locations, etc., are found. In this paper, we acquire two monolingual news datasets re ported in English and Hindi paired with the images to generate a synthetic EnglishHindi parallel corpus. The parallel corpus is used to train the EnglishHindi Neural Machine Trans lation (NMT) and an EnglishHindi MMT sys tem by incorporating the image feature paired with the corresponding parallel corpus. We also conduct a systematic analysis to evaluate the EnglishHindi MT systems with 1) more synthetic data and 2) by adding backtranslated data. Our finding shows improvement in terms of BLEU scores for both the NMT (+8.05) and MMT (+11.03) systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the implementation of encoderdecoder ar chitecture (Cho et al., 2014\u037e Luong et al., 2015\u037e Vaswani et al., 2017 , MT systems have under gone quality enhancement. 
Instead of using text as the only input to an MT system, the current trend has also started exploring Multimodal Machine Translation (MMT), where multiple input modalities, such as the visual modality, are incorporated along with the text as input to the MT system. Using an MMT system has shown improvement in the translated text output as compared to an NMT system (Huang et al., 2016; Caglayan et al., 2016; Elliott and Kádár, 2017; Caglayan et al., 2019). To analyse the benefits of using multiple modalities of input, various shared tasks have been organized (WAT2019 Multimodal Translation Task 1 , WMT2018 2 , VMT Challenge 3 ). However, the image descriptions in the majority of current datasets are user-captioned or created by crowdsourcing. On the other hand, the captions present in news detail the contents of the image with better clarity and, as a result, contain many named entities relating to specific individuals, locations, organizations, etc. For example, in Figure 1 , the caption \"The old looking ship is sailing at sunset\" correctly depicts the image on some levels, yet it fails to portray the picture's higher-level scenario as described in the caption on the left. Limited data resources set back the development of a machine learning-based system. The lack of a high-quality parallel training dataset poses a considerable challenge in developing an MT system for low resource languages. For an extremely low resource language pair, training with NMT, which is a data-driven approach, often leads to poor performance of the MT system (Singh and Hujon, 2020; Singh and Singh, 2020). As such, researchers have investigated various methods to augment the dataset using a monolingual corpus, such as back-translation (Sennrich et al., 2015a) , incorporating a language model trained on a monolingual dataset (Gulcehre et al., 2015) , etc. The approaches reported in (Sennrich et al., 2015a; Calixto et al., 2017a) acquire an additional training dataset by back-translating from a monolingual target dataset. In this paper, we acquire monolingual news datasets reported in English and Hindi, which are used to generate a synthetic parallel corpus. English→Hindi NMT systems are trained by using the parallel corpus. We train English→Hindi MMT systems by incorporating the image as a feature paired with the corresponding parallel corpus. We also conduct a systematic analysis to evaluate the MT systems by training with 1) more synthetic data and 2) adding back-translated data. 
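As a minimal sketch of the back-translation idea used throughout this paper (the translate_hi_to_en function below is a hypothetical stand-in for any Hindi→English MT system, e.g. the IndicTrans system used in Section 3.1; it is not a real API):

```python
# Sketch of back-translation data augmentation (Sennrich et al., 2015a).
# `translate_hi_to_en` is a hypothetical stand-in for a Hindi->English MT system.

def back_translate(hindi_monolingual, translate_hi_to_en):
    """Turn monolingual target-side (Hindi) text into synthetic
    (English, Hindi) training pairs for an English->Hindi system."""
    pairs = []
    for hi_sentence in hindi_monolingual:
        en_synthetic = translate_hi_to_en(hi_sentence)  # synthetic source side
        pairs.append((en_synthetic, hi_sentence))       # real target kept as-is
    return pairs

# The synthetic pairs are then mixed with the existing parallel data, e.g.:
# train_data = real_pairs + back_translate(hindi_headlines, translate_hi_to_en)
```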
Although belonging to the same language family, Indo-European, English and Hindi follow different word orders: Subject Verb Object (SVO) and Subject Object Verb (SOV), respectively.", "cite_spans": [ { "start": 56, "end": 115, "text": "(Cho et al., 2014; Luong et al., 2015; Vaswani et al., 2017)", "ref_id": null }, { "start": 536, "end": 626, "text": "(Huang et al., 2016; Caglayan et al., 2016; Elliott and Kádár, 2017; Caglayan et al., 2019)", "ref_id": null }, { "start": 1740, "end": 1763, "text": "(Singh and Hujon, 2020; Singh and Singh, 2020)", "ref_id": null }, { "start": 1899, "end": 1923, "text": "(Sennrich et al., 2015a)", "ref_id": "BIBREF21" }, { "start": 1985, "end": 2008, "text": "(Gulcehre et al., 2015)", "ref_id": "BIBREF9" }, { "start": 2042, "end": 2089, "text": "(Sennrich et al., 2015a; Calixto et al., 2017a)", "ref_id": null } ], "ref_spans": [ { "start": 1160, "end": 1168, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows: Section 2 discusses the previous related works, followed by the framework of our model in Section 3. Section 4 details our system setup and Section 5 illustrates the analysis of our results. Section 6 sums up the conclusion and future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A review of machine translation-related works is discussed in this section. Based on the encoder-decoder NMT model, various architectures have been built to improve the performance of MT systems (Sutskever et al., 2014; Bahdanau et al., 2014). For both the encoder and the decoder, Sutskever et al. (2014) stacked numerous layers of an RNN with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) hidden units. Bahdanau et al. (2014) introduced an attention mechanism where the decoder attends to various sections of the source text at each step of generation of the output. While the model enhances the translation of long sentences, due to its sequential nature each hidden state depends on the output of the previous hidden state, resulting in a large consumption of computational power. Gage (1994) introduced a method for data compression, BPE, which iteratively substitutes single, unused bytes for common byte pairs in a sequence. Sennrich et al. (2015b) proposed a method for word segmentation to deal with the open vocabulary problem. Instead of common byte pairs, the method merges characters or character sequences. Provilkov et al. (2019) achieve a better MT system by introducing a dropout into BPE (Sennrich et al., 2015b) : BPE-dropout excludes some merges randomly, resulting in the same word receiving different segmentations. Vinyals et al. (2015) introduced a neural and probabilistic framework to generate image captions. The model comprises a vision Convolutional Neural Network (CNN), which is followed by a Recurrent Neural Network (RNN) to generate language. Extracting global features from an image to incorporate into attention-based NMT, Calixto et al. (2017b) introduced various multimodal neural machine translation models. Using the features of an image to initialize the encoder hidden state is reported to be the best performing among these models. Using the Hindi Visual Genome (Parida et al., 2019) dataset, Meetei et al. (2019) carried out an MMT task for the English-Hindi language pair. 
The authors reported that the use of multiple modalities as input improves the MT system.", "cite_spans": [ { "start": 206, "end": 252, "text": "(Sutskever et al., 2014; Bahdanau et al., 2014)", "ref_id": null }, { "start": 294, "end": 317, "text": "Sutskever et al. (2014)", "ref_id": "BIBREF26" }, { "start": 389, "end": 423, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF10" }, { "start": 794, "end": 805, "text": "Gage (1994)", "ref_id": "BIBREF8" }, { "start": 945, "end": 968, "text": "Sennrich et al. (2015b)", "ref_id": "BIBREF22" }, { "start": 1137, "end": 1160, "text": "Provilkov et al. (2019)", "ref_id": "BIBREF20" }, { "start": 1222, "end": 1246, "text": "(Sennrich et al., 2015b)", "ref_id": "BIBREF22" }, { "start": 1351, "end": 1372, "text": "Vinyals et al. (2015)", "ref_id": "BIBREF28" }, { "start": 1674, "end": 1696, "text": "Calixto et al. (2017b)", "ref_id": "BIBREF4" }, { "start": 1917, "end": 1937, "text": "(Parida et al., 2019)", "ref_id": "BIBREF19" }, { "start": 1938, "end": 1968, "text": "Meetei et al. (2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Various approaches to data augmentation are applied to mitigate the scarcity of parallel training datasets for MT tasks. Gulcehre et al. (2015) used a language model trained on a monolingual dataset, achieving an improvement of up to 1.96 BLEU on Turkish-English, a low resource language pair. The authors also reported that domain similarity between the monolingual dataset and the target task was the key factor in using an external language model to improve the MT system. Sennrich et al. (2015a) carried out back-translation of monolingual target text into the source language, thereby generating an additional training dataset. The authors reported that even a limited amount of back-translated in-domain monolingual data could be utilized efficiently for domain adaptation. Calixto et al. (2017a) used a text-only NMT model trained on the Multi30k (Elliott et al., 2016) dataset (German-English), without images, to back-translate the German descriptions in Multi30k into English and included them as additional training data.", "cite_spans": [ { "start": 122, "end": 144, "text": "Gulcehre et al. (2015)", "ref_id": "BIBREF9" }, { "start": 464, "end": 487, "text": "Sennrich et al. (2015a)", "ref_id": "BIBREF21" }, { "start": 771, "end": 793, "text": "Calixto et al. (2017a)", "ref_id": "BIBREF3" }, { "start": 838, "end": 860, "text": "(Elliott et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "In our experiment, news articles reported in English and Hindi, along with the corresponding images in the articles, are collected. After a preprocessing step, the collected dataset is machine translated to generate a synthetic parallel dataset (Table 1). MT systems are trained with various settings by using the synthetic parallel dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "News articles reported in English and Hindi along with their corresponding images are collected from a national news channel, India TV 4 , for the period June 2010 to May 2020. After filtering out the articles where the image is absent, the collected dataset comprises 80900 and 42400 news articles reported in English and Hindi, respectively. 
The dataset is gathered by utilizing a web scraper built in-house. In order to prepare the experimental dataset, we separate the headline from each of the news article items, which is considered as the description for the corresponding image. Apart from a standard single sentence, an image description may comprise a single phrase or multiple phrases. Using IndicTrans 5 (Kakwani et al., 2020) , which is an NMT system, we build two machine translated parallel datasets, namely English-Hindi (enhi st ) and Hindi-English (hien st ), Table 1 .", "cite_spans": [ { "start": 690, "end": 712, "text": "(Kakwani et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 850, "end": 857, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Building synthetic English-Hindi and Hindi-English dataset", "sec_num": "3.1" }, { "text": "Our experiment used both NMT and MMT approaches to train English→Hindi MT systems on our parallel news dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Translation Systems", "sec_num": "3.2" }, { "text": "For a source sentence x, the translation task tries to find a target sentence y that maximizes the conditional probability of y given x. We followed the attention model of Bahdanau et al. (2014) by using a biLSTM (Sutskever et al., 2014) in the encoder and an alignment model paired with an LSTM in the decoder model. The biLSTM generates a sequence of annotations (h 1 , h 2 , ..., h N ) for each input sentence x = (x 1 , x 2 , ..., x N ), where h i = [\overrightarrow{h_i}; \overleftarrow{h_i}] is the concatenation of the forward hidden state \overrightarrow{h_i} and the backward hidden state \overleftarrow{h_i} of the encoder at time step i. The attention mechanism focuses on specific input vectors in the input sequence based on the attention weights.", "cite_spans": [ { "start": 191, "end": 215, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation (NMT)", "sec_num": "3.2.1" }, { "text": "We train the MMT systems using the image from the news article paired with the English-Hindi parallel dataset. Following the multimodal neural machine translation (MNMT) model (Calixto et al., 2017b) , a deep CNN-based model is utilized to extract global features from the image, Figure 2 . 
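As a minimal sketch of this feature-extraction step (assuming VGG19, which Section 4.4 names, accessed through the torchvision API; the exact fully connected layer tapped for the 4096-D vector is our assumption, FC7 being the common choice):

```python
# Sketch: extracting a 4096-D global image feature with VGG19
# (Simonyan and Zisserman, 2014) via torchvision.
import torch
from torchvision import models, transforms
from PIL import Image

vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
# Keep the classifier up to the second 4096-unit layer (drop the 1000-way output).
fc_head = torch.nn.Sequential(*list(vgg19.classifier.children())[:-1])

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def global_feature(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        conv = vgg19.features(x)                 # convolutional feature maps
        pooled = vgg19.avgpool(conv).flatten(1)  # -> (1, 25088)
        q = fc_head(pooled)                      # -> (1, 4096) global feature
    return q.squeeze(0)
```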
The global image feature vector (q ∈ R 4096 ) is used to compute a vector d as follows:", "cite_spans": [ { "start": 177, "end": 200, "text": "(Calixto et al., 2017b)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 280, "end": 288, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Multimodal Machine Translation (MMT)", "sec_num": "3.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d = W^2_I \cdot (W^1_I \cdot q + b^1_I) + b^2_I", "eq_num": "(1)" } ], "section": "Multimodal Machine Translation (MMT)", "sec_num": "3.2.2" }, { "text": "where W 1 I and W 2 I are image transformation matrices, and b 1 I and b 2 I are bias vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multimodal Machine Translation (MMT)", "sec_num": "3.2.2" }, { "text": "Instead of using the zero vector to initialize the encoder hidden states, two new single-layer feed-forward networks are utilized to initialize the states as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multimodal Machine Translation (MMT)", "sec_num": "3.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\overrightarrow{h}_{init} = \tanh(W_f d + b_f) (2) \overleftarrow{h}_{init} = \tanh(W_b d + b_b)", "eq_num": "(3)" } ], "section": "Multimodal Machine Translation (MMT)", "sec_num": "3.2.2" }, { "text": "where W f and W b are the multimodal projection matrices that project the image features d into the encoder forward and backward hidden state dimensionality, respectively, and b f and b b are bias vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multimodal Machine Translation (MMT)", "sec_num": "3.2.2" }, { "text": "With the limited availability of parallel corpora, it is often difficult to train a data-driven NMT system. Using the generated synthetic parallel dataset, we carry out a systematic analysis to evaluate the MT system by training with four experimental data settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "• T b : 45000 parallel sentence pairs randomly selected from enhi st , used as the baseline training dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "• T ad : T b + an additional 30900 pairs randomly selected from the remaining enhi st dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "• T bt : T b + an additional back-translated dataset of 30900 pairs randomly selected from the 42400 sentences of hien st .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "• T all : the combination of the above three training datasets, i.e. T b + T ad + T bt (see the sketch below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "We use two held-out test datasets, t 1 and t 2 , from enhi st and hien st (back-translated), respectively. The development dataset, however, is taken from enhi st only. 
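A minimal sketch of how these four training settings can be assembled (the variable names are illustrative, not the authors' code; the sampling mirrors the counts in Table 2):

```python
# Sketch: assembling the four experimental training settings of Section 4.1.
import random

# Illustrative placeholders: in practice these hold (English, Hindi) pairs.
enhi_st = [("en sent %d" % i, "hi sent %d" % i) for i in range(80900)]
hien_st_bt = [("en bt %d" % i, "hi sent %d" % i) for i in range(42400)]

random.seed(0)  # assumption: a fixed seed for reproducibility
random.shuffle(enhi_st)

t_b = enhi_st[:45000]                              # baseline: 45000 pairs
extra_syn = random.sample(enhi_st[45000:], 30900)  # remaining synthetic pairs
extra_bt = random.sample(hien_st_bt, 30900)        # back-translated pairs

t_ad = t_b + extra_syn                 # T_ad
t_bt = t_b + extra_bt                  # T_bt
t_all = t_b + extra_syn + extra_bt     # T_all = T_b + 61800 (cf. Table 2)
```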
Detailed statistics of our dataset are shown in Table 2 , where enhi st is split into training, development, and test datasets, whereas hien st is used for the training and test datasets.", "cite_spans": [], "ref_spans": [ { "start": 213, "end": 220, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Normalization and tokenization of English sentences are carried out by using the Moses toolkit of Koehn et al. (2007) , and for Hindi sentences we use Indic NLP 6 . By employing BPE-dropout (Provilkov et al., 2019) , words in the preprocessed parallel corpus are segmented into subword units for word embedding representation before training the MT systems. Regularization is applied with a dropout of 0.1 on the training dataset. Following the system design described in Subsection 3.2, we train our NMT and MMT systems using the processed dataset.", "cite_spans": [ { "start": 74, "end": 93, "text": "Koehn et al. (2007)", "ref_id": "BIBREF15" }, { "start": 164, "end": 188, "text": "(Provilkov et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "6 https://anoopkunchukuttan.github.io/indic_nlp_library/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Based on the four settings of the training dataset, we train the following eight MT systems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT systems", "sec_num": "4.2" }, { "text": "• NMT(T b ) and MMT(T b ): NMT and MMT systems trained with T b , respectively (baseline models).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT systems", "sec_num": "4.2" }, { "text": "• NMT(T ad ) and MMT(T ad ): NMT and MMT systems trained with T ad , respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT systems", "sec_num": "4.2" }, { "text": "• NMT(T bt ) and MMT(T bt ): NMT and MMT systems trained with T bt , respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT systems", "sec_num": "4.2" }, { "text": "• NMT(T all ) and MMT(T all ): NMT and MMT systems trained with T all , respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT systems", "sec_num": "4.2" }, { "text": "The size of our encoder and decoder LSTM hidden states is set to 512. We use a batch size of 128 and a word embedding size of 512D for both source and target. The normalization method of the gradient is set to tokens. Along with other parameters, such as a learning rate of 0.01, the Adam optimizer (Kingma and Ba, 2014) and a dropout rate of 0.1, we train the system using early stopping, where training is stopped if a model does not improve on the validation set for more than 10 epochs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NMT system settings", "sec_num": "4.3" }, { "text": "A CNN-based pretrained model, VGG19 (Simonyan and Zisserman, 2014), is used to extract the global features of an image. By incorporating the features from the image and the processed text, we train our MMT systems with stochastic gradient descent and a batch size of 128. Early stopping is applied to stop the training when the MT system does not improve for 10 epochs on the development set. We carry out the implementation of our MT systems by using an open-source NMT tool based on OpenNMT (Klein et al., 2017) . 
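As a sketch of the subword step described next (assuming the subword-nmt Python API; the exact invocation may differ, and the code files named here are illustrative):

```python
# Sketch: subword segmentation with BPE-dropout (Provilkov et al., 2019)
# via subword-nmt. The merges are learned once beforehand, e.g.:
#   subword-nmt learn-bpe -s 10000 < train.en > codes.en
import codecs
from subword_nmt.apply_bpe import BPE

with codecs.open("codes.en", encoding="utf-8") as codes:
    bpe = BPE(codes)

# dropout=0.1 randomly skips some merges, so the same word can be segmented
# differently across epochs; dropout=0 reproduces standard deterministic BPE.
print(bpe.process_line("incorporating multiple input modalities", dropout=0.1))
```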
Subword-nmt 7 is used for encoding and decoding of the text dataset to and from subword units.", "cite_spans": [ { "start": 495, "end": 515, "text": "(Klein et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "MMT system settings", "sec_num": "4.4" }, { "text": "The automatic evaluation of our MT systems is reported using BLEU (Papineni et al., 2002) . Table 3 shows a detailed evaluation of our MT systems on the two test datasets, t 1 and t 2 .", "cite_spans": [ { "start": 67, "end": 90, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 93, "end": 101, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Based on Evaluation Metric", "sec_num": "5.1" }, { "text": "• NMT systems: NMT(T all ) outperforms the remaining NMT systems on both test datasets, t 1 and t 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Based on Evaluation Metric", "sec_num": "5.1" }, { "text": "• MMT systems: MMT(T all ) outperforms the other MMT systems on both test datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Based on Evaluation Metric", "sec_num": "5.1" }, { "text": "• NMT vs MMT systems: The best MMT system, MMT(T all ), outperforms the best NMT system, NMT(T all ), by up to 9.62 BLEU in t 1 and up to 5.34 BLEU in t 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Based on Evaluation Metric", "sec_num": "5.1" }, { "text": "Training with additional datasets shows improvements in terms of BLEU scores for both the NMT and MMT systems. Although improvements are observed for the MT systems trained with the data augmentation approach, the BLEU score increases only by 1.23 when training with the additional back-translated dataset T bt for the MMT system. This shows that using an additional back-translated dataset improves our MMT system only by a small margin. It is observed that the performance of NMT(T ad ) and NMT(T all ) is almost comparable in terms of BLEU score, which indicates the poor effectiveness of T bt in our experimental settings. In the MMT setting, however, MMT(T all ) outperforms the other MMT systems in terms of BLEU score by a reasonable margin. This indicates that incorporating image features in the MT system negates, to some extent, the bias introduced by the synthetic dataset. Furthermore, a large gap in terms of BLEU score is observed between t 1 and t 2 . A likely cause is that more of the training dataset comes from enhi st and the development dataset is taken from enhi st only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Based on Evaluation Metric", "sec_num": "5.1" }, { "text": "Bucket analysis: Figure 3 and Figure 4 show a bucket analysis where salient statistics are computed by assigning sentences to buckets. After computing the BLEU score based on the length of the reference sentence, the analysis displays how well a system performs on shorter and longer sentences.
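As a minimal sketch of this per-bucket evaluation (sacrebleu is an assumption, since the paper does not name its BLEU implementation, and the bucket edges are illustrative):

```python
# Sketch: per-length-bucket BLEU, as in Figures 3 and 4. Sentences are
# grouped by reference length, then BLEU is computed within each group.
import sacrebleu

def bucket_bleu(hypotheses, references, edges=(0, 10, 20, 30, 1000)):
    scores = {}
    for lo, hi in zip(edges, edges[1:]):
        idx = [i for i, r in enumerate(references) if lo <= len(r.split()) < hi]
        if not idx:
            continue
        hyp = [hypotheses[i] for i in idx]
        ref = [references[i] for i in idx]
        scores["[%d,%d)" % (lo, hi)] = sacrebleu.corpus_bleu(hyp, [ref]).score
    return scores
```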
", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 25, "text": "Figure 3", "ref_id": null }, { "start": 30, "end": 38, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Based on Evaluation Metric", "sec_num": "5.1" }, { "text": "• t 1 : Sentences in the t 1 dataset are grouped into four buckets, as shown in Figure 3 . MMT(T all ) outperforms all the other MT systems by a large margin in most of the cases, i.e. sentences with length less than 30. Although the overall BLEU score of MMT(T bt ) is better than MMT(T b ), MMT(T b ) is observed to perform better than MMT(T bt ) when the sentence length is less than 10.", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 83, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Based on Evaluation Metric", "sec_num": "5.1" }, { "text": "• t 2 : As the maximum length of a sentence in the t 2 dataset is not more than 20, t 2 is grouped into two buckets, Figure 4 . MMT(T all ) outperforms the other MT systems by a large margin irrespective of the sentence length. MMT(T ad ) is almost comparable with MMT(T bt ) for sentences with length [10-20).", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 126, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Based on Evaluation Metric", "sec_num": "5.1" }, { "text": "Overall, the MT systems perform better when the length of a sentence is up to 10, and the performance declines as the length of a sentence increases above 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Based on Evaluation Metric", "sec_num": "5.1" }, { "text": "Using adequacy and fluency indicators, we carried out human evaluations of our machine translated outputs. Adequacy indicates the information retained in the generated translations, whereas fluency analyses the generated translations primarily with respect to grammatical rules. In our experiment, both adequacy and fluency are scored in the range of 0 to 4. The meanings of the various scores are summarized in Table 4 and Table 5 ; for adequacy, 2 = moderately retained information, 3 = most of the information is retained, and 4 = all information is retained, while for fluency, 0 = incomprehensible, 1 = disfluent, 2 = non-native, 3 = acceptable in terms of grammatical rules, and 4 = flawless and correct in terms of grammatical rules. We use a sample output of 100 randomly selected sentences from each MT system to evaluate adequacy and fluency scores. The average score of each individual MT system is considered as our final score. Table 6 shows the adequacy and fluency scores reported by our human evaluators. Comparison among the different NMT systems indicates no correlation between the manual evaluation and the BLEU score. In the NMT system, adding a back-translated dataset in the T all setting shows a negative effect on the fluency score, with NMT(T all ) scoring less than NMT(T bt ). A similar observation is also made among the different MMT systems. The adequacy and fluency scores of MMT(T ad ) are better than those of MMT(T all ) by a small margin. When comparing between the NMT and MMT systems, a correlation is found between the manual evaluation and the BLEU score. Overall, in terms of adequacy and fluency scores, the MMT system is more robust than the NMT system. Table 7 and Table 8 show a qualitative analysis of two examples from the test dataset (t 1 ) for the NMT and MMT systems. 
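Before turning to the qualitative examples, a small sketch of how the reported human-evaluation scores are aggregated (the names are illustrative, and the judging function stands in for a human evaluator):

```python
# Sketch: aggregating human adequacy/fluency judgments (Section 5.2).
# Each system gets 100 randomly sampled outputs, each rated on a 0-4 scale;
# the per-system mean is the final score reported in Table 6.
import random

def final_score(system_outputs, rate):  # `rate` stands in for a human judge
    sample = random.sample(system_outputs, 100)
    ratings = [rate(sentence) for sentence in sample]  # 0..4 per sentence
    return sum(ratings) / len(ratings)
```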
The words in \"blue\" highlights incorrect word(s) or gram \"Pune: One nurse tested positive for COVID19, 30 others in quarantine\" T bt \u092a\u0941 \u0923\u0947 \u092e\u0947\u0902 \u090f\u0915 \u0928\u0938\u0930\u094d \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u0938\u0947 \u0938\u0902 \u0915\u094d\u0930\u093f\u092e\u0924, \u0969\u0966 \u0905\u0928\u094d\u092f \u0915\u094b \u0915\u094d\u0935\u093e\u0930\u0902 \u091f\u0940\u0928 \u093f\u0915\u092f\u093e \u0917\u092f\u093e (pune mein ek nurse covid19 se sankramit, 30 anya ko quarantine kiya gaya) \"One nurse infected with Covid19 in Pune, 30 others were quarantined\" T all \u092a\u0941 \u0923\u0947 \u092e\u0947\u0902 \u0928\u0938\u0930\u094d \u0915\u093e \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u091f\u0947 \u0938\u094d\u091f \u092a\u0949 \u091c\u093f\u091f\u0935 \u0906\u092f\u093e, \u0969\u0966 \u0905\u0928\u094d\u092f \u0915\u094b \u0915\u094d\u0935\u093e\u0930\u0902 \u091f\u0940\u0928 \u093f\u0915\u092f\u093e \u0917\u092f\u093e (pune mein nurse ka covid19 test positive aya, 30 anya ko quarantine kiya gaya) \"Covid19 test of a nurse came out positive in Pune, 30 others were quarantined\" matically error in the translation output whereas, words in \"magenta\" highlights incorrectly trans lated word(s). NMT(T b ) and NMT(T bt ) generate translations with low fluency, where part of the sentence is grammatically incorrect, which in turn affects the adequacy of the translated text. Though the MMT(T b ) and MMT(T bt ) generates a transla tion with good fluency, the model fails to convey the words like \"quarantined\", \"landslide\" and in stead translates as \"quarantined\"\u2192 \"infected\" , \"landslide\"\u2192 \"disturbance\", and \"landslide\"\u2192 \"collision\" thereby reducing the adequacy of the translated text. As reported in the adequacy and fluency evaluation in Table 6 , T all performs poorly in the NMT system with the translation output missing part of the source sentence as shown in the sample examples. MMT(T ad ) and NMT(T all ) generate translations that are grammatically cor rect and convey correct meaning as the input sen tence.", "cite_spans": [], "ref_spans": [ { "start": 404, "end": 524, "text": "Level Interpretation 0 Incomprehensible 1 Disfluent 2 Nonnative 3 Acceptable in terms of grammatical rules 4", "ref_id": "TABREF0" }, { "start": 580, "end": 599, "text": "Table 4 and Table 5", "ref_id": "TABREF5" }, { "start": 801, "end": 808, "text": "Table 6", "ref_id": "TABREF7" }, { "start": 1554, "end": 1573, "text": "Table 7 and Table 8", "ref_id": "TABREF9" }, { "start": 2933, "end": 2940, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Adequacy and Fluency Analysis", "sec_num": "5.2" }, { "text": "The lack of a highquality parallel dataset for the MT tasks is one of the major challenges, espe cially for low resource languages. 
In this work, we collected two monolingual news datasets reported in English and Hindi, paired with the corresponding images.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "1 https://ufal.mff.cuni.cz/hindi-visual-genome/wat-2019-multimodal-task 2 http://www.statmt.org/wmt18/multimodal-task.html 3 https://eric-xw.github.io/vatex-website/translation_2020.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "4 https://www.indiatvnews.com/ 5 https://indicnlp.ai4bharat.org/indic-trans/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "7 https://github.com/rsennrich/subword-nmt", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the Scheme for Promotion of Academic and Research Collaboration (SPARC) Project Code: P995 of No: SPARC/2018-2019/119/SL(IN) under the Ministry of Education (erstwhile MHRD), Govt. of India. The authors thank the anonymous reviewers for their careful reading and their many insightful comments. The authors also thank the volunteers for their help in the human evaluation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "(Reference for the second example: \"Major landslide on Mumbai-Pune Expressway, Railways announces special trains\") An analysis of dataset augmentation under the lack of a parallel dataset is carried out. We observe an improvement in BLEU scores for both the NMT and MMT systems with the data augmentation approach. Our results also show that when the training dataset comprises synthetic data from both the English→Hindi and Hindi→English directions, the back-translated dataset in the T all setting is more effective in the MMT system as compared to the NMT system. In the future, we would like to incorporate multiple modalities to improve the MT system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Does multimodality help human and machine for translation and image captioning?
arXiv preprint", "authors": [ { "first": "Ozan", "middle": [], "last": "Caglayan", "suffix": "" }, { "first": "Walid", "middle": [], "last": "Aransa", "suffix": "" }, { "first": "Yaxing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Masana", "suffix": "" }, { "first": "Mercedes", "middle": [], "last": "Garc\u00edamart\u00ednez", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Joost", "middle": [], "last": "Van De Weijer", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.09186" ] }, "num": null, "urls": [], "raw_text": "Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes Garc\u00edaMart\u00ednez, Fethi Bougares, Lo\u00efc Barrault, and Joost Van de Weijer. 2016. Does multimodality help human and machine for trans lation and image captioning? arXiv preprint arXiv:1605.09186.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Probing the need for visual context in multimodal machine translation", "authors": [ { "first": "Ozan", "middle": [], "last": "Caglayan", "suffix": "" }, { "first": "Pranava", "middle": [], "last": "Madhyastha", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.08678" ] }, "num": null, "urls": [], "raw_text": "Ozan Caglayan, Pranava Madhyastha, Lucia Specia, and Lo\u00efc Barrault. 2019. Probing the need for visual context in multimodal machine translation. arXiv preprint arXiv:1903.08678.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Doublyattentive decoder for multimodal neural ma chine translation", "authors": [ { "first": "Iacer", "middle": [], "last": "Calixto", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.01287" ] }, "num": null, "urls": [], "raw_text": "Iacer Calixto, Qun Liu, and Nick Campbell. 2017a. Doublyattentive decoder for multimodal neural ma chine translation. arXiv preprint arXiv:1702.01287.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Incorporating global visual features into attention based neural machine translation", "authors": [ { "first": "Iacer", "middle": [], "last": "Calixto", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1701.06521" ] }, "num": null, "urls": [], "raw_text": "Iacer Calixto, Qun Liu, and Nick Campbell. 2017b. Incorporating global visual features into attention based neural machine translation. 
arXiv preprint arXiv:1701.06521.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merriënboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1406.1078" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multi30k: Multilingual English-German image descriptions", "authors": [ { "first": "Desmond", "middle": [], "last": "Elliott", "suffix": "" }, { "first": "Stella", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Khalil", "middle": [], "last": "Sima'an", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.00459" ] }, "num": null, "urls": [], "raw_text": "Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30k: Multilingual English-German image descriptions. arXiv preprint arXiv:1605.00459.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Imagination improves multimodal translation", "authors": [ { "first": "Desmond", "middle": [], "last": "Elliott", "suffix": "" }, { "first": "Ákos", "middle": [], "last": "Kádár", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "130--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Desmond Elliott and Ákos Kádár. 2017. Imagination improves multimodal translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 130-141, Taipei, Taiwan. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A new algorithm for data compression", "authors": [ { "first": "Philip", "middle": [], "last": "Gage", "suffix": "" } ], "year": 1994, "venue": "The C Users Journal", "volume": "12", "issue": "2", "pages": "23--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Gage. 1994. A new algorithm for data compression. 
The C Users Journal, 12(2):23-38.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On using monolingual corpora in neural machine translation", "authors": [ { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Kelvin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Loic", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Huei-Chi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.03535" ] }, "num": null, "urls": [], "raw_text": "Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "Jürgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Attention-based multimodal neural machine translation", "authors": [ { "first": "Po-Yao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Frederick", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Sz-Rung", "middle": [], "last": "Shiang", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the First Conference on Machine Translation", "volume": "2", "issue": "", "pages": "639--645", "other_ids": {}, "num": null, "urls": [], "raw_text": "Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based multimodal neural machine translation. 
In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 639-645.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pretrained Multilingual Language Models for Indian Languages", "authors": [ { "first": "Divyanshu", "middle": [], "last": "Kakwani", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Kunchukuttan", "suffix": "" }, { "first": "Satish", "middle": [], "last": "Golla", "suffix": "" }, { "first": "N", "middle": [ "C" ], "last": "Gokul", "suffix": "" }, { "first": "Avik", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Mitesh", "middle": [ "M" ], "last": "Khapra", "suffix": "" }, { "first": "Pratyush", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2020, "venue": "Findings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pretrained Multilingual Language Models for Indian Languages. In Findings of EMNLP.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "OpenNMT: Open-source toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1701.02810" ] }, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. 
arXiv preprint arXiv:1701.02810.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages 177-180.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.04025" ] }, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "WAT2019: English-Hindi translation on Hindi Visual Genome dataset", "authors": [ { "first": "Loitongbam", "middle": [], "last": "Sanayai Meetei", "suffix": "" }, { "first": "Thoudam Doren", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Sivaji", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 6th Workshop on Asian Translation", "volume": "", "issue": "", "pages": "181--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Loitongbam Sanayai Meetei, Thoudam Doren Singh, and Sivaji Bandyopadhyay. 2019. WAT2019: English-Hindi translation on Hindi Visual Genome dataset. 
In Proceedings of the 6th Workshop on Asian Translation, pages 181-188.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Hindi Visual Genome: A dataset for multimodal English-to-Hindi machine translation", "authors": [ { "first": "Shantipriya", "middle": [], "last": "Parida", "suffix": "" }, { "first": "Ondřej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Satya Ranjan", "middle": [], "last": "Dash", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.08948" ] }, "num": null, "urls": [], "raw_text": "Shantipriya Parida, Ondřej Bojar, and Satya Ranjan Dash. 2019. Hindi Visual Genome: A dataset for multimodal English-to-Hindi machine translation. arXiv preprint arXiv:1907.08948.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "BPE-dropout: Simple and effective subword regularization", "authors": [ { "first": "Ivan", "middle": [], "last": "Provilkov", "suffix": "" }, { "first": "Dmitrii", "middle": [], "last": "Emelianenko", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.13267" ] }, "num": null, "urls": [], "raw_text": "Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2019. BPE-dropout: Simple and effective subword regularization. arXiv preprint arXiv:1910.13267.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.06709" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. 
arXiv preprint arXiv:1511.06709.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.07909" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Very deep convolutional networks for large-scale image recognition", "authors": [ { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.1556" ] }, "num": null, "urls": [], "raw_text": "Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Unsupervised neural machine translation for English and Manipuri", "authors": [ { "first": "Salam Michael", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Thoudam Doren", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "69--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salam Michael Singh and Thoudam Doren Singh. 2020. Unsupervised neural machine translation for English and Manipuri. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 69-78.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Low resource and domain specific English to Khasi SMT and NMT systems", "authors": [ { "first": "Thoudam Doren", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Aiusha Vellintihun", "middle": [], "last": "Hujon", "suffix": "" } ], "year": 2020, "venue": "2020 International Conference on Computational Performance Evaluation (ComPE)", "volume": "", "issue": "", "pages": "733--737", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thoudam Doren Singh and Aiusha Vellintihun Hujon. 2020. Low resource and domain specific English to Khasi SMT and NMT systems. In 2020 International Conference on Computational Performance Evaluation (ComPE), pages 733-737. IEEE.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in neural information processing systems, pages 3104-3112.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Łukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Show and tell: A neural image caption generator", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "3156--3164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156-3164.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Examples from our dataset (left) and Multi30k (right).", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Figure 2: MMT model", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "Evaluation on test dataset t 2", "num": null }, "TABREF0": { "text": "Machine translated datasets, enhi st and hien st", "type_str": "table", "num": null, "html": null, "content": "sentences images / enhi st 80900 / hien st 42400 42400" }, "TABREF1": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
Split | Text/Image | types | unique types | Avg SL
train (T b ) | 45000 | en:472496, hi:548172 | en:52841, hi:33617 | en:10, hi:12
train (T ad ) | T b + 30900 | en:796074, hi:923313 | en:70371, hi:43945 | en:10, hi:12
train (T bt ) | T b + 30900 | en:806767, hi:1005549 | en:64057, hi:52992 | en:10, hi:13
train (T all ) | T b + 61800 | en:1130335, hi:1380674 | en:79574, hi:61419 | en:10, hi:12
dev | 3000 | en:31437, hi:36576 | en:10673, hi:7871 | en:10, hi:12
test (t 1 ) | 2000 | en:21038, hi:24415 | en:8089, hi:6176 | en:10, hi:12
test (t 2 ) | 2000 | en:17105, hi:20854 | en:5439, hi:5874 | en:8, hi:10
" }, "TABREF2": { "text": "Statistics of our dataset and data partitioning.", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF3": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
Figure 3: Evaluation on test dataset t 1
BLEU
train | t 1 | t 2
NMT T b | 15.56 | 8.96
NMT T ad | 23.26 (↑7.7) | 13.47 (↑4.51)
NMT T bt | 19.23 (↑3.67) | 11.70 (↑2.74)
NMT T all | 23.61 (↑8.05) | 13.75 (↑4.79)
MMT T b | 22.20 | 14.54
MMT T ad | 32.30 (↑10.1) | 17.13 (↑2.59)
MMT T bt | 23.43 (↑1.23) | 17.12 (↑2.58)
MMT T all | 33.23 (↑11.03) | 19.09 (↑4.55)
" }, "TABREF4": { "text": "Evaluation of NMT and MMT systems in terms of BLEU score.", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF5": { "text": "Scale for Adequacy score.", "type_str": "table", "num": null, "html": null, "content": "
" }, "TABREF6": { "text": "Scale for Fluency score.", "type_str": "table", "num": null, "html": null, "content": "
train | Adequacy | Fluency
NMT T b | 1 | 1.5
NMT T ad | 1.55 | 2.1
NMT T bt | 1.35 | 1.85
NMT T all | 1.4 | 1.77
MMT T b | 1.4 | 1.92
MMT T ad | 1.95 | 2.35
MMT T bt | 1.68 | 1.87
MMT T all | 1.8 | 2.1
" }, "TABREF7": { "text": "Evaluation of NMT and MMT systems in terms of Adequacy and Fluency score.", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF8": { "text": "Pune: One nurse tests COVID19 positive, 30 others quarantined Ref: \u092a\u0941 \u0923\u0947 \u092e\u0947\u0902 \u090f\u0915 \u0928\u0938\u0930\u094d \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u092a\u0949 \u091c\u093f\u091f\u0935, \u0969\u0966 \u0905\u0928\u094d\u092f \u092a\u0943 \u0925\u0915-\u0935\u093e\u0938 \u092e\u0947\u0902 (pune mein ek nurse covid19 positive, 30 anay prthakvaas mein) \u092a\u0941 \u0923\u0947 \u092e\u0947\u0902 \u090f\u0915 \u092e\u0939\u0940\u0928\u0947 \u0915\u093e \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u092a\u0949 \u091c\u093f\u091f\u0935 \u092a\u0949 \u091c\u093f\u091f\u0935 \u0906\u092f\u093e, \u0969\u0966 \u0905\u0928\u094d\u092f \u0932\u094b\u0917\u094b\u0902 \u0915\u094b \u0915\u094d\u0935\u093e\u0930\u0902 \u091f\u0940\u0928 \u093f\u0915\u092f\u093e \u0917\u092f\u093e (pune mein ek maheene ka covid19 positive positive anya, 30 anya logon ko quarantine kiya gaya) \"One month of Covid19 positive positive others came in Pune, 30 others were quarantined\" T ad \u092a\u0941 \u0923\u0947 \u092e\u0947\u0902 \u090f\u0915 \u0935\u094d\u092f\u093f\u0915\u094d\u0924 \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u0938\u0947 \u0938\u0902 \u0915\u094d\u0930\u093f\u092e\u0924, \u0969\u0966 \u0905\u0928\u094d\u092f \u0932\u094b\u0917\u094b\u0902 \u0915\u094b \u0915\u094d\u0935\u093e\u0930\u0902 \u091f\u0940\u0928 \u093f\u0915\u092f\u093e \u0917\u092f\u093e (pune mein ek vyakti covid19 se sankramit, 30 anya logon ko quarantine kiya gaya) \"One person infected with Covid19 in Pune, 30 others have been quarantined\" T bt \u092a\u0941 \u0923\u0947 \u092e\u0947\u0902 \u090f\u0915 \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u0938\u0947 \u0938\u0902 \u0915\u094d\u0930\u093f\u092e\u0924 \u092a\u093e\u090f \u0917\u090f \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u0938\u0947 \u0938\u0902 \u0915\u094d\u0930\u093f\u092e\u0924 (pune mein ek covid19 se sankramit pae gae covid19 se sankramit) \"One found infected with Covid19 in Pune infected with Covid19\" T all \u092a\u0941 \u0923\u0947 \u092e\u0947\u0902 \u090f\u0915 \u0928\u0938\u0930\u094d \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u092a\u0949 \u091c\u093f\u091f\u0935 \u092a\u093e\u090f \u0917\u090f \u092a\u0941 \u0923\u0947 \u092e\u0947\u0902 \u090f\u0915 \u0928\u0938\u0930\u094d \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u0938\u0947 \u0938\u0902 \u0915\u094d\u0930\u093f\u092e\u0924, \u0969\u0966 \u0905\u0928\u094d\u092f \u0938\u0902 \u0915\u094d\u0930\u093f\u092e\u0924 (pune mein ek nurse Covid19 se sankramit, 30 anya sankramit) \"One nurse infected with Covid19 in Pune, 30 others infected\" T ad \u092a\u0941 \u0923\u0947 : \u090f\u0915 \u0928\u0938\u0930\u094d \u0915\u093e \u0915\u094b\u093f\u0935\u0921-\u0967\u096f \u091f\u0947 \u0938\u094d\u091f \u092a\u0949 \u091c\u093f\u091f\u0935 \u0906\u092f\u093e, \u0969\u0966 \u0905\u0928\u094d\u092f \u0932\u094b\u0917 \u0915\u094d\u0935\u093e\u0930\u0902 \u091f\u0940\u0928 (pune: ek nurse ka Covid19 test positive aaya, 30 anya log quarantine)", "type_str": "table", "num": null, "html": null, "content": "
NMT Model Outputs
T b (pune mein ek nurse covid19 positive pae gae )
\"One nurse in Pune found to be Covid19 positive\"
MMT Model Outputs
T b
" }, "TABREF9": { "text": "Input Output sample 1", "type_str": "table", "num": null, "html": null, "content": "" } } } }