{ "paper_id": "J19-2003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:58:22.736028Z" }, "title": "Incorporating Source-Side Phrase Structures into Neural Machine Translation", "authors": [ { "first": "Akiko", "middle": [], "last": "Eriguchi", "suffix": "", "affiliation": {}, "email": "akikoe@microsoft.com" }, { "first": "Kazuma", "middle": [], "last": "Hashimoto", "suffix": "", "affiliation": {}, "email": "k.hashimoto@salesforce.com" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "", "affiliation": {}, "email": "tsuruoka@logos.t.u-tokyo.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural machine translation (NMT) has shown great success as a new alternative to the traditional Statistical Machine Translation model in multiple languages. Early NMT models are based on sequence-to-sequence learning that encodes a sequence of source words into a vector space and generates another sequence of target words from the vector. In those NMT models, sentences are simply treated as sequences of words without any internal structure. In this article, we focus on the role of the syntactic structure of source sentences and propose a novel end-to-end syntactic NMT model, which we call a tree-to-sequence NMT model, extending a sequence-to-sequence model with the source-side phrase structure. Our proposed model has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence. We have empirically compared the proposed model with sequence-to-sequence models in various settings on Chinese-to-Japanese and English-to-Japanese translation tasks. Our experimental results suggest that the use of syntactic structure can be beneficial when the training data set is small, but is not as effective as using a bi-directional encoder. As the size of training data set increases, the benefits of using a syntactic tree tends to diminish.", "pdf_parse": { "paper_id": "J19-2003", "_pdf_hash": "", "abstract": [ { "text": "Neural machine translation (NMT) has shown great success as a new alternative to the traditional Statistical Machine Translation model in multiple languages. Early NMT models are based on sequence-to-sequence learning that encodes a sequence of source words into a vector space and generates another sequence of target words from the vector. In those NMT models, sentences are simply treated as sequences of words without any internal structure. In this article, we focus on the role of the syntactic structure of source sentences and propose a novel end-to-end syntactic NMT model, which we call a tree-to-sequence NMT model, extending a sequence-to-sequence model with the source-side phrase structure. Our proposed model has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence. We have empirically compared the proposed model with sequence-to-sequence models in various settings on Chinese-to-Japanese and English-to-Japanese translation tasks. Our experimental results suggest that the use of syntactic structure can be beneficial when the training data set is small, but is not as effective as using a bi-directional encoder. 
As the size of the training data set increases, the benefits of using a syntactic tree tend to diminish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine translation has traditionally been one of the most complex language processing tasks, but recent advances in neural machine translation (NMT) make it possible to perform translation using a simple end-to-end architecture. In the Encoder-Decoder model (Cho et al. 2014b; Sutskever, Vinyals, and Le 2014) , a recurrent neural network (RNN) called an encoder reads the whole sequence of source words to produce a fixed-length vector, and then another RNN called a decoder generates a sequence of target words from the vector. The Encoder-Decoder model has been extended with an attention mechanism (Bahdanau, Cho, and Bengio 2015; Luong, Pham, and Manning 2015) , which allows the model to jointly learn soft alignments between the source words and the target words. Recently, NMT models have achieved state-of-the-art results in a variety of language pairs (Wu et al. 2016; Zhou et al. 2016; Gehring et al. 2017; Vaswani et al. 2017) .", "cite_spans": [ { "start": 259, "end": 277, "text": "(Cho et al. 2014b;", "ref_id": "BIBREF5" }, { "start": 278, "end": 310, "text": "Sutskever, Vinyals, and Le 2014)", "ref_id": "BIBREF37" }, { "start": 602, "end": 634, "text": "(Bahdanau, Cho, and Bengio 2015;", "ref_id": "BIBREF1" }, { "start": 635, "end": 665, "text": "Luong, Pham, and Manning 2015)", "ref_id": "BIBREF26" }, { "start": 862, "end": 878, "text": "(Wu et al. 2016;", "ref_id": "BIBREF42" }, { "start": 879, "end": 896, "text": "Zhou et al. 2016;", "ref_id": "BIBREF45" }, { "start": 897, "end": 917, "text": "Gehring et al. 2017;", "ref_id": null }, { "start": 918, "end": 938, "text": "Vaswani et al. 2017)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this work, we consider how to incorporate syntactic information into NMT. Figure 1 illustrates the phrase structure of an English sentence, which is represented as a binary tree. Each node of the tree corresponds to a grammatical phrase of the English sentence. Figure 1 also shows its translation in Japanese. The two languages are linguistically distant from each other in many respects; they have different syntactic constructions, and words and phrases are defined in different lexical units. In this example, the Japanese word \" \" is aligned with the English word \"movie.\" The indefinite article \"a\" in English, however, is not explicitly translated into any Japanese words. One way to solve this mismatch problem is to consider the phrase structure of the English sentence and align the phrase \"a movie\" with the Japanese word \"", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 1", "ref_id": null }, { "start": 265, "end": 273, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": ".\" The verb phrase \"went to see a movie last night\" is also related to the eight-word sequence \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": ".\" Since Yamada and Knight (2001) proposed the first syntax-based alignment model, various approaches to leveraging the syntactic structures have been adopted in statistical machine translation (SMT) models (Liu, Liu, and Lin 2006) . 
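As a concrete illustration of the attention mechanism introduced above (Bahdanau, Cho, and Bengio 2015; Luong, Pham, and Manning 2015), the following minimal NumPy sketch shows how one decoder hidden state can be softly aligned with the encoder's hidden states. This is not the authors' implementation; a simple dot-product (Luong-style global) attention is assumed: scores are normalized with a softmax into alignment weights, and the weighted sum of encoder states forms the context vector used when predicting the next target word.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(decoder_state, encoder_states):
    """Softly align one decoder state with all encoder hidden states.

    decoder_state:  shape (d,)   -- current decoder RNN hidden state
    encoder_states: shape (n, d) -- hidden states for the n source words
    Returns the alignment weights over source positions and the context vector.
    """
    scores = encoder_states @ decoder_state   # (n,) dot-product scores
    weights = softmax(scores)                 # soft alignment over source positions
    context = weights @ encoder_states        # (d,) weighted sum of encoder states
    return weights, context

# Toy example: 5 source positions, hidden size 4 (random vectors stand in
# for the states an RNN encoder would produce).
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5, 4))
decoder_state = rng.normal(size=4)
weights, context = global_attention(decoder_state, encoder_states)
print(weights.round(3), context.round(3))
```

In the tree-to-sequence model proposed in this article, the same kind of soft alignment is computed not only over word-level states but also over phrase vectors of the source sentence.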
In SMT, it is known that incorporating source-side syntactic constituents into the models improves word alignment (Yamada and Knight 2001) and translation accuracy (Liu, Liu, and Lin 2006; Neubig and Duh 2014) . However, the aforementioned NMT models do not allow one to perform this kind of alignment.", "cite_spans": [ { "start": 9, "end": 33, "text": "Yamada and Knight (2001)", "ref_id": "BIBREF43" }, { "start": 207, "end": 231, "text": "(Liu, Liu, and Lin 2006)", "ref_id": "BIBREF25" }, { "start": 348, "end": 372, "text": "(Yamada and Knight 2001)", "ref_id": "BIBREF43" }, { "start": 398, "end": 422, "text": "(Liu, Liu, and Lin 2006;", "ref_id": "BIBREF25" }, { "start": 423, "end": 443, "text": "Neubig and Duh 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To take advantage of syntactic information on the source side, we propose a syntactic NMT model. Following the phrase structure of a source sentence, we encode the sentence recursively in a bottom-up fashion to produce a sentence vector by using a tree-structured recursive neural network (RvNN) (Pollack 1990 ) as well as a sequential RNN (Elman 1990) . We also introduce an attention mechanism to let the decoder generate each target word while aligning the input phrases and words with the output.", "cite_spans": [ { "start": 296, "end": 309, "text": "(Pollack 1990", "ref_id": "BIBREF34" }, { "start": 340, "end": 352, "text": "(Elman 1990)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The phrase structure of the English sentence \"Mary and John went to see a movie last night\" and its Japanese translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "This article extends our conference paper on tree-to-sequence NMT (Eriguchi, Hashimoto, and Tsuruoka 2016) in two significant ways. In addition to an English-to-Japanese translation task, we have newly experimented with our tree-to-sequence NMT model in a Chinese-to-Japanese translation task, and observed that a bi-directional encoder was more effective than our tree-based encoder in both tasks. We also provide detailed analyses of our model and discuss the differences between the syntax-based and sequence-based NMT models. The article is structured as follows. We explain the basics of sequence-to-sequence NMT models in Section 2 and define our proposed tree-to-sequence NMT model in Section 3. After introducing the experimental design in Section 4, we first conduct small-scale experiments on the two tasks of {Chinese, English}-to-Japanese translation and carry out a series of analyses to understand the key components underlying our proposed method in Section 5. Moreover, we report large-scale experimental results and analyses in the English-to-Japanese translation task in Section 6. 
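To make the tree-based encoding described above concrete, the sketch below builds word vectors at the leaves of a binary parse and composes a vector for each phrase bottom-up, collecting every word and phrase vector so that the decoder's attention can later align target words with source phrases as well as source words. This is not the authors' code: the plain tanh composition, the random toy embeddings, and the hand-binarized parse are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 4
# Hypothetical parameters: composition weights and toy word embeddings.
W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
b = np.zeros(DIM)
embed = {w: rng.normal(size=DIM) for w in
         ["Mary", "and", "John", "went", "to", "see", "a", "movie", "last", "night"]}

def encode(node, nodes_out):
    """Encode a binarized phrase-structure tree bottom-up.

    `node` is either a word (str) or a pair (left_subtree, right_subtree).
    Every word vector (leaf) and phrase vector (internal node) is appended
    to `nodes_out`, so an attention mechanism can later align each target
    word with source phrases as well as source words.
    """
    if isinstance(node, str):                      # leaf: word embedding
        vec = embed[node]
    else:                                          # internal node: compose the two children
        left = encode(node[0], nodes_out)
        right = encode(node[1], nodes_out)
        vec = np.tanh(W @ np.concatenate([left, right]) + b)
    nodes_out.append(vec)
    return vec

# A simplified, hand-binarized parse of the example sentence from Figure 1.
tree = (("Mary", ("and", "John")),
        (("went", ("to", "see")), (("a", "movie"), ("last", "night"))))
nodes = []
sentence_vector = encode(tree, nodes)   # root vector summarizes the whole sentence
print(len(nodes), "word/phrase vectors; root:", sentence_vector.round(3))
```

As stated above, the proposed encoder also runs a sequential RNN over the words; the composition function here is only a stand-in for whatever recurrent units the actual tree encoder uses, and is meant solely to show the bottom-up flow and which vectors the attention can attend to.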
In Section 7, we survey recent studies related to syntax-based NMT models and conclude in Section 8 by summarizing the contributions of our work.", "cite_spans": [ { "start": 66, "end": 106, "text": "(Eriguchi, Hashimoto, and Tsuruoka 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Sequence-to-sequence NMT models are built on the idea of an Encoder-Decoder model, where an encoder converts each input sequence $\\boldsymbol{x} = (x_1, x_2, \\cdots, x_n)$ into a vector space, and a decoder generates an output sequence $\\boldsymbol{y} = (y_1, y_2, \\cdots, y_m)$ from the vector, following the conditional probability of P \u03b8 (y j |y y y