{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:46:42.028863Z" }, "title": "Towards End-to-End In-Image Neural Machine Translation", "authors": [ { "first": "Elman", "middle": [], "last": "Mansimov", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "mansimov@cs.nyu.edu" }, { "first": "Mitchell", "middle": [], "last": "Stern", "suffix": "", "affiliation": {}, "email": "mitchell@berkeley.edu" }, { "first": "Mia", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Puneet", "middle": [], "last": "Jain", "suffix": "", "affiliation": {}, "email": "" }, { "first": "U", "middle": [ "C" ], "last": "Berkeley", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Google", "middle": [], "last": "Research", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we offer a preliminary investigation into the task of in-image machine translation: transforming an image containing text in one language into an image containing the same text in another language. We propose an end-to-end neural model for this task inspired by recent approaches to neural machine translation, and demonstrate promising initial results based purely on pixel-level supervision. We then offer a quantitative and qualitative evaluation of our system outputs and discuss some common failure modes. Finally, we conclude with directions for future work.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we offer a preliminary investigation into the task of in-image machine translation: transforming an image containing text in one language into an image containing the same text in another language. We propose an end-to-end neural model for this task inspired by recent approaches to neural machine translation, and demonstrate promising initial results based purely on pixel-level supervision. We then offer a quantitative and qualitative evaluation of our system outputs and discuss some common failure modes. Finally, we conclude with directions for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "End-to-end neural models have emerged in recent years as the dominant approach to a wide variety of sequence generation tasks in natural language processing, including speech recognition, machine translation, and dialog generation, among many others. While highly accurate, these models typically operate by outputting tokens from a predetermined symbolic vocabulary, and require integration into larger pipelines for use in user-facing applications such as voice assistants where neither the input nor output modality is text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the speech domain, neural methods have recently been successfully applied to end-to-end speech translation (Jia et al., 2019; Liu et al., 2019; Inaguma et al., 2019) , in which the goal is to translate directly from speech in one language to speech in another language. We propose to study the analogous problem of in-image machine translation. 
Specifically, an image containing text in one language is to be transformed into an image containing the same text in another language, removing the dependency on any predetermined symbolic vocabulary or intermediate processing.", "cite_spans": [ { "start": 110, "end": 128, "text": "(Jia et al., 2019;", "ref_id": "BIBREF4" }, { "start": 129, "end": 146, "text": "Liu et al., 2019;", "ref_id": "BIBREF7" }, { "start": 147, "end": 168, "text": "Inaguma et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In-image neural machine translation is a compelling test-bed for both the research and engineering communities for a variety of reasons. First, although there are existing commercial products that address this problem, such as the image translation feature of Google Translate 1 , the underlying technical solutions are unknown. By leveraging large amounts of data and compute, an end-to-end neural system could potentially improve on the overall quality of pipelined approaches to image translation. Second, and arguably more importantly, working directly with pixels has the potential to sidestep issues related to vocabularies, segmentation, and tokenization, allowing for the possibility of more universal approaches to neural machine translation by unifying the input and output spaces via pixels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why In-Image Neural Machine Translation?", "sec_num": null }, { "text": "Text preprocessing and vocabulary construction have been an active research area, leading to work on neural machine translation systems operating on subword units (Sennrich et al., 2016), characters (Lee et al., 2017), and even bytes. Vocabulary construction has also been highlighted as one of the major challenges when dealing with many languages simultaneously in multilingual machine translation (Arivazhagan et al., 2019) and cross-lingual natural language understanding (Conneau et al., 2019). Pixels serve as a straightforward way to share a vocabulary among all languages, at the expense of posing a significantly harder learning task for the underlying models.", "cite_spans": [ { "start": 175, "end": 198, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF11" }, { "start": 212, "end": 230, "text": "(Lee et al., 2017)", "ref_id": "BIBREF6" }, { "start": 389, "end": 415, "text": "(Arivazhagan et al., 2019)", "ref_id": "BIBREF0" }, { "start": 466, "end": 488, "text": "(Conneau et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Why In-Image Neural Machine Translation?", "sec_num": null }, { "text": "In this work, we propose an end-to-end neural approach to in-image machine translation that combines elements from recent neural approaches to the relevant sub-tasks in an end-to-end differentiable manner. We provide an initial problem definition and demonstrate promising first qualitative results using only pixel-level supervision on the target side. We then analyze some of the errors made by our models, and in the process of doing so uncover a common deficiency that suggests a path forward for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why In-Image Neural Machine Translation?", "sec_num": null }, { "text": "To our knowledge, there are no publicly available datasets for the in-image machine translation task. Since collecting aligned natural data for in-image translation would be a difficult and costly process, a more practical approach is to bootstrap by generating pairs of rendered images containing sentences from the WMT 2014 German-English parallel corpus. The dataset consists of 4.5M German-English parallel sentence pairs. We use newstest-2013 as a development set. For each sentence pair, we create a minimal web page for the source and target, then render each using Headless Chrome 2 to obtain a pair of images. The text is displayed in a black 16-pixel sans-serif font on a white background inside of a fixed-size 1024x32-pixel frame. For simplicity, all sentences are vertically centered and left-aligned without any line-wrapping. The consistent position and styling of the text in our synthetic dataset represents an ideal scenario for in-image translation, serving as a good test-bed for initial attempts. Later, one could generalize to more realistic settings by varying the location, size, typeface, and perspective of the text and by using non-uniform backgrounds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Generation", "sec_num": "2" },
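{ "text": "For concreteness, the following is a minimal sketch of the rendering setup described above. It is an illustration under stated assumptions rather than the actual pipeline: the paper renders minimal web pages with Headless Chrome, whereas this sketch uses the Pillow imaging library, and the font file name is a placeholder that depends on the local system.

# Minimal sketch of the synthetic data rendering (illustration only; the real
# pipeline renders web pages with Headless Chrome). The font path below is an
# assumption that depends on the local system.
from PIL import Image, ImageDraw, ImageFont

WIDTH, HEIGHT, FONT_SIZE = 1024, 32, 16

def render_sentence(sentence: str, font_path: str = 'DejaVuSans.ttf') -> Image.Image:
    # Black 16px sans-serif text, vertically centered and left-aligned,
    # on a white 1024x32 grayscale canvas with no line-wrapping.
    image = Image.new('L', (WIDTH, HEIGHT), color=255)  # white background
    draw = ImageDraw.Draw(image)
    font = ImageFont.truetype(font_path, FONT_SIZE)
    left, top, right, bottom = draw.textbbox((0, 0), sentence, font=font)
    y = (HEIGHT - (bottom - top)) // 2 - top  # vertical centering
    draw.text((0, y), sentence, font=font, fill=0)  # black text
    return image

# Example: one source/target training pair.
render_sentence('Guten Morgen, wie geht es Ihnen?').save('source.png')
render_sentence('Good morning, how are you?').save('target.png')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Generation", "sec_num": "2" },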
{ "text": "Our goal is to build a neural model for the in-image translation task that can be trained end-to-end on example image pairs $(X^*, Y^*)$ of height $H$ and width $W$ using only pixel-level supervision. We evaluate two approaches for this task: a convolutional encoder-decoder model, and a full model that combines soft versions of the stages of the traditional pipeline in order to arrive at a modular yet fully differentiable solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Inspired by the success of convolutional encoder-decoder architectures for medical image segmentation (Ronneberger et al., 2015), we begin with a U-net style convolutional baseline. In this version of the model, the source image $X^*$ is first compressed into a single continuous vector $h_{enc}$ using a convolutional encoder: $h_{enc} = \mathrm{enc}(X^*)$.", "cite_spans": [ { "start": 101, "end": 127, "text": "(Ronneberger et al., 2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Convolutional Baseline", "sec_num": "3.1" }, { "text": "Then, the compressed representation is used as the input to a convolutional decoder that aims to predict all target pixels in parallel. The decoder outputs a probability for each pixel: $p(Y) = \prod_{i=1}^{H} \prod_{j=1}^{W} \mathrm{softmax}(\mathrm{dec}(h_{enc}))_{ij}$. The convolutional encoder consists of four residual blocks with the dimensions shown in Table 1, and the convolutional decoder uses the same network structure in reverse order, composing a simple encoder-decoder architecture with a representational bottleneck. We threshold the grayscale value of each pixel in the ground-truth output image at 0.5 to obtain a binary black-and-white target, and use a binary cross-entropy loss on the pixels of the model output as our loss function for training. In order to solve the proposed task, this baseline must address the combined challenges of recognizing and rendering text at a pixel level, capturing the meaning of a sentence in a single vector as in early sequence-to-sequence models (Sutskever et al., 2014), and performing non-autoregressive translation (Gu et al., 2018). Although the model can sometimes produce the first few words of the output, it is unable to learn much beyond that; see Figure 1 for a representative example.", "cite_spans": [ { "start": 956, "end": 980, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF12" }, { "start": 1029, "end": 1046, "text": "(Gu et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 314, "end": 321, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1169, "end": 1177, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Convolutional Baseline", "sec_num": "3.1" }, { "text": "Figure 1: Example predictions made by the baseline convolutional model from Section 3.1. We show two pairs of ground-truth target images followed by generated target images. Although it successfully predicts one or two words, it quickly devolves into noise thereafter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Baseline", "sec_num": "3.1" },
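{ "text": "To make the baseline concrete, the following is a minimal PyTorch sketch of the encoder-decoder structure. This is a sketch under stated assumptions, not the exact implementation: the true block dimensions are those of Table 1, while the residual block design and the channel sizes below are illustrative placeholders, and the bottleneck here is a small feature map rather than literally a single vector.

# Minimal PyTorch sketch of the convolutional baseline (illustration only;
# channel sizes and the residual block design are placeholders, not Table 1).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, c_in, c_out, stride):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, stride, 1)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, 1, 1)
        self.skip = nn.Conv2d(c_in, c_out, 1, stride)  # projection shortcut
    def forward(self, x):
        h = F.relu(self.conv1(x))
        return F.relu(self.conv2(h) + self.skip(x))

class ConvBaseline(nn.Module):
    def __init__(self, channels=(32, 64, 128, 256)):  # placeholder dimensions
        super().__init__()
        chans = (1,) + channels  # grayscale input
        # Encoder: four stride-2 residual blocks compress the source image
        # into a low-resolution bottleneck representation.
        self.encoder = nn.Sequential(
            *[ResBlock(chans[i], chans[i + 1], stride=2) for i in range(4)])
        # Decoder: the same structure in reverse order, upsampling back to
        # the input resolution so all target pixels are predicted in parallel.
        self.decoder = nn.Sequential(
            *[nn.Sequential(nn.Upsample(scale_factor=2),
                            ResBlock(chans[i + 1], chans[i], stride=1))
              for i in reversed(range(1, 4))],
            nn.Upsample(scale_factor=2))
        self.to_logits = nn.Conv2d(chans[1], 1, kernel_size=1)  # raw logits
    def forward(self, x):
        h_enc = self.encoder(x)                      # compressed representation
        return self.to_logits(self.decoder(h_enc))  # per-pixel logits

# Training step on a dummy batch: targets binarized at 0.5, pixel-level
# binary cross-entropy loss.
model = ConvBaseline()
source = torch.rand(8, 1, 32, 1024)
target = (torch.rand(8, 1, 32, 1024) > 0.5).float()
loss = F.binary_cross_entropy_with_logits(model(source), target)
loss.backward()", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Baseline", "sec_num": "3.1" },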
{ "text": "To better take advantage of the problem structure, we next propose a modular neural model that breaks the problem down into more manageable sub-tasks while still being trainable end-to-end. Intuitively, one would expect a model that can successfully carry out the in-image machine translation task to first recognize the text represented in the input image, next perform some computation over its internal representation to obtain a soft translation, and finally generate the output image through a learned rendering process. Moreover, just as modern neural machine translation systems predict the output over the span of multiple time steps in an auto-regressive way rather than all at once, it stands to reason that such a decomposition would be of use here as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Full Model", "sec_num": "3.2" }, { "text": "To this end, we propose a revised model that receives as input both the source image $X^*$ and a partial (or proposal) target image $Y^*$