{ "paper_id": "D14-1017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:54:54.133201Z" }, "title": "Unsupervised Word Alignment Using Frequency Constraint in Posterior Regularized EM", "authors": [ { "first": "Hidetaka", "middle": [], "last": "Kamigaito", "suffix": "", "affiliation": { "laboratory": "Precision and Intelligence Laboratory", "institution": "Tokyo Institute of Technology", "location": { "postCode": "4259", "settlement": "Nagatsuta-cho Midori-ku Yokohama", "country": "Japan" } }, "email": "" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Information and Communication Technology", "location": { "addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun", "settlement": "Kyoto", "country": "Japan" } }, "email": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "", "affiliation": { "laboratory": "Precision and Intelligence Laboratory", "institution": "Tokyo Institute of Technology", "location": { "postCode": "4259", "settlement": "Nagatsuta-cho Midori-ku Yokohama", "country": "Japan" } }, "email": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "", "affiliation": { "laboratory": "Precision and Intelligence Laboratory", "institution": "Tokyo Institute of Technology", "location": { "postCode": "4259", "settlement": "Nagatsuta-cho Midori-ku Yokohama", "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Generative word alignment models, such as IBM Models, are restricted to oneto-many alignment, and cannot explicitly represent many-to-many relationships in a bilingual text. The problem is partially solved either by introducing heuristics or by agreement constraints such that two directional word alignments agree with each other. In this paper, we focus on the posterior regularization framework (Ganchev et al., 2010) that can force two directional word alignment models to agree with each other during training, and propose new constraints that can take into account the difference between function words and content words. Experimental results on French-to-English and Japanese-to-English alignment tasks show statistically significant gains over the previous posterior regularization baseline. We also observed gains in Japanese-to-English translation tasks, which prove the effectiveness of our methods under grammatically different language pairs.", "pdf_parse": { "paper_id": "D14-1017", "_pdf_hash": "", "abstract": [ { "text": "Generative word alignment models, such as IBM Models, are restricted to oneto-many alignment, and cannot explicitly represent many-to-many relationships in a bilingual text. The problem is partially solved either by introducing heuristics or by agreement constraints such that two directional word alignments agree with each other. In this paper, we focus on the posterior regularization framework (Ganchev et al., 2010) that can force two directional word alignment models to agree with each other during training, and propose new constraints that can take into account the difference between function words and content words. Experimental results on French-to-English and Japanese-to-English alignment tasks show statistically significant gains over the previous posterior regularization baseline. 
We also observed gains in Japanese-to-English translation tasks, which prove the effectiveness of our methods under grammatically different language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word alignment is an important component in statistical machine translation (SMT). For instance phrase-based SMT (Koehn et al., 2003) is based on the concept of phrase pairs that are automatically extracted from bilingual data and rely on word alignment annotation. Similarly, the model for hierarchical phrase-based SMT is built from exhaustively extracted phrases that are, in turn, heavily reliant on word alignment.", "cite_spans": [ { "start": 113, "end": 133, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Generative word alignment models, such as the IBM Models (Brown et al., 1993) and HMM (Vogel et al., 1996) , are popular methods for automatically aligning bilingual texts, but are restricted to represent one-to-many correspondence of each word. To resolve this weakness, various symmetrization methods are proposed. Och and Ney (2003) and Koehn et al. (2003) propose various heuristic methods to combine two directional models to represent many-to-many relationships. As an alternative to heuristic methods, filtering methods employ a threshold to control the trade-off between precision and recall based on a score estimated from the posterior probabilities from two directional models. Matusov et al. (2004) proposed arithmetic means of two models as a score for the filtering, whereas Liang et al. (2006) reported better results using geometric means. The joint training method (Liang et al., 2006) enforces agreement between two directional models. Posterior regularization (Ganchev et al., 2010) is an alternative agreement method which directly encodes agreement during training. DeNero and Macherey (2011) and Chang et al. (2014) also enforce agreement during decoding.", "cite_spans": [ { "start": 54, "end": 81, "text": "Models (Brown et al., 1993)", "ref_id": null }, { "start": 90, "end": 110, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF18" }, { "start": 321, "end": 339, "text": "Och and Ney (2003)", "ref_id": "BIBREF14" }, { "start": 344, "end": 363, "text": "Koehn et al. (2003)", "ref_id": "BIBREF9" }, { "start": 693, "end": 714, "text": "Matusov et al. (2004)", "ref_id": "BIBREF12" }, { "start": 793, "end": 812, "text": "Liang et al. (2006)", "ref_id": "BIBREF11" }, { "start": 886, "end": 906, "text": "(Liang et al., 2006)", "ref_id": "BIBREF11" }, { "start": 983, "end": 1005, "text": "(Ganchev et al., 2010)", "ref_id": "BIBREF6" }, { "start": 1091, "end": 1117, "text": "DeNero and Macherey (2011)", "ref_id": "BIBREF4" }, { "start": 1122, "end": 1141, "text": "Chang et al. 
(2014)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, these agreement models do not take into account the difference in language pairs, which is crucial for linguistically different language pairs, such as Japanese and English: although content words may be aligned with each other by introducing some agreement constraints, function words are difficult to align.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on the posterior regularization framework and improve upon the previous work by proposing new constraint functions that take into account the difference in languages in terms of content words and function words. In particular, we differentiate between content words and function words by frequency in bilingual data, following Setiawan et al. (2007) .", "cite_spans": [ { "start": 336, "end": 358, "text": "Setiawan et al. (2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experimental results show that the proposed methods achieved better alignment qualities on the French-English Hansard data and the Japanese-English Kyoto free translation task (KFTT) measured by AER and F-measure. In translation evaluations, we achieved statistically significant gains in BLEU scores in the NTCIR10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given a bilingual sentence x = (x s , x t ) where x s and x t denote a source and target sentence, respectively, the bilingual sentence is aligned by a manyto-many alignment of y. We represent posterior probabilities from two directional word alignment models as \u2212 \u2192 p \u03b8 ( \u2212 \u2192 y |x) and \u2190 \u2212 p \u03b8 ( \u2190 \u2212 y |x) with each arrow indicating a particular direction, and use \u03b8 to denote the parameters of the models. For instance, \u2212 \u2192 y is a subset of y for the alignment from x s to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "x t under the model of p(x t , \u2212 \u2192 y |x s ). 
In the case of IBM Model 1, the model is represented as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "p(x t , \u2212 \u2192 y |x s ) = i 1 |x s | + 1 pt(x t i |x s \u2212 \u2192 y i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "( 1)where we define the index of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "x t , x s as i, j(1 \u2264 i \u2264 |x t |, 1 \u2264 j \u2264 |x s |)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "and the posterior probability for the word pair (x t i , x s j ) is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2212 \u2192 p (i, j|x) = pt(x t i |x s j ) j pt(x t i |x s j ) .", "eq_num": "(2)" } ], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "Herein, we assume that the posterior probability for wrong directional alignment is zero (i.e., \u2212 \u2192 p ( \u2190 \u2212 y |x) = 0). 1 Given the two directional models, Ganchev et al. defined a symmetric feature for each target/source position pair, i, j as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "\u03c6i,j(x, y) = \uf8f1 \uf8f2 \uf8f3 +1 ( \u2212 \u2192 y \u2282 y) \u2229 ( \u2212 \u2192 y i = j), \u22121 ( \u2190 \u2212 y \u2282 y) \u2229 ( \u2190 \u2212 y j = i), 0 otherwise. (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "The feature assigns 1 for the subset of word alignment for \u2212 \u2192 y , but assigns \u22121 for \u2190 \u2212 y . As a result, if a word pair i, j is aligned with equal posterior probabilities in two directions, the expectation of the feature value will be zero. Ganchev et al. 
defined a joint model that combines two directional models using arithmetic means:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p \u03b8 (y|x) = 1 2 \u2212 \u2192 p \u03b8 (y|x) + 1 2 \u2190 \u2212 p \u03b8 (y|x).", "eq_num": "(4)" } ], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "Under the posterior regularization framework, we instead use q that is derived by maximizing the following posterior probability parametrized by \u03bb for each bilingual data x as follows (Ganchev et al., 2010) :", "cite_spans": [ { "start": 184, "end": 206, "text": "(Ganchev et al., 2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "q \u03bb (y|x) = \u2212 \u2192 p \u03b8 ( \u2212 \u2192 y |x) + \u2190 \u2212 p \u03b8 ( \u2190 \u2212 y |x) 2 (5) \u2022 exp{\u2212\u03bb \u2022 \u03c6(x, y)} Z 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "No alignment is represented by alignment into a special token \"null\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "= \u2212 \u2192 q ( \u2212 \u2192 y |x) Z\u2212 \u2192 q \u2212 \u2192 p \u03b8 (x) + \u2190 \u2212 q ( \u2190 \u2212 y |x) Z\u2190 \u2212 q \u2190 \u2212 p \u03b8 (x) 2Z , Z = 1 2 ( Z\u2212 \u2192 q \u2212 \u2192 p \u03b8 + Z\u2190 \u2212 q \u2190 \u2212 p \u03b8 ), \u2212 \u2192 q ( \u2212 \u2192 y |x) = 1 Z\u2212 \u2192 q \u2212 \u2192 p \u03b8 ( \u2212 \u2192 y , x)exp{\u2212\u03bb \u2022 \u03c6(x, y)}, Z\u2212 \u2192 q = \u2212 \u2192 y \u2212 \u2192 p \u03b8 ( \u2212 \u2192 y , x)exp{\u2212\u03bb \u2022 \u03c6(x, y)}, \u2190 \u2212 q ( \u2190 \u2212 y |x) = 1 Z\u2190 \u2212 q \u2190 \u2212 p \u03b8 ( \u2190 \u2212 y , x)exp{\u2212\u03bb \u2022 \u03c6(x, y)}, Z\u2190 \u2212 q = \u2190 \u2212 y \u2190 \u2212 p \u03b8 ( \u2190 \u2212 y , x)exp{\u2212\u03bb \u2022 \u03c6(x, y)}, such that E q \u03bb [\u03c6 i,j (x, y)] = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "In the E-step of EM-algorithm, we employ q \u03bb instead of p \u03b8 to accumulate fractional counts for its use in the Mstep. \u03bb is efficiently estimated by the gradient ascent for each bilingual sentence x. Note that posterior regularization is performed during parameter estimation, and not during testing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical word alignment with posterior regularization framework", "sec_num": "2" }, { "text": "The symmetric constraint method represented in Equation 3assumes a strong one-to-one relation for any word, and does not take into account the divergence in language pairs. For linguistically different language pairs, such as Japanese-English, content words may be easily aligned oneto-one, but function words are not always aligned together. 
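To make the baseline projection concrete before turning to our modifications, the following is a minimal sketch of the E-step projection of Equation 5 under the symmetric feature of Equation 3 for one sentence pair. It is an illustration rather than the actual implementation (word alignment training in our experiments uses the cicada toolkit): it assumes posteriors that factorize over positions, as in IBM Model 1, and weights the two directions equally rather than by their exact partition functions; the names pr_project, p_fwd, p_bwd, lr and n_iter are ours.

```python
import numpy as np

def pr_project(p_fwd, p_bwd, lr=1.0, n_iter=50):
    """Project two directional posteriors onto the agreement constraint
    E_q[phi_ij] = 0 (Eq. 3) by gradient ascent on the dual variables lambda.
    Simplified sketch: position-wise factorized posteriors, equal weighting
    of the two directions.

    p_fwd: (I, J+1) array; p_fwd[i, j] = posterior that target word i aligns
           to source word j, column 0 reserved for the null word.
    p_bwd: (I+1, J) array; p_bwd[i, j] = posterior that source word j aligns
           to target word i, row 0 reserved for the null word.
    Returns an (I, J) matrix of symmetrized link posteriors for word pairs.
    """
    I, J = p_fwd.shape[0], p_bwd.shape[1]
    lam = np.zeros((I, J))                        # one dual variable per word pair
    for _ in range(n_iter):
        # phi = +1 on forward links, so exp(-lambda) damps the forward posterior ...
        q_f = p_fwd.copy()
        q_f[:, 1:] *= np.exp(-lam)
        q_f /= q_f.sum(axis=1, keepdims=True)     # renormalize per target word
        # ... and phi = -1 on backward links, so exp(+lambda) boosts the backward one.
        q_b = p_bwd.copy()
        q_b[1:, :] *= np.exp(lam)
        q_b /= q_b.sum(axis=0, keepdims=True)     # renormalize per source word
        grad = 0.5 * q_f[:, 1:] - 0.5 * q_b[1:, :]   # approx. E_q[phi_ij]
        lam += lr * grad                             # dual ascent; zero at agreement
    return 0.5 * (q_f[:, 1:] + q_b[1:, :])
```

At convergence the expectation of every feature is close to zero, i.e., the two renormalized directional posteriors agree, and the returned matrix supplies the fractional counts for the M-step. The constraints proposed below only change the feature phi that enters this projection.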
In addition, Japanese is a pro-drop language which can easily violate the symmetric constraint when proper nouns in the English side have to be aligned with a \"null\" word. In addition, low frequency words may cause unreliable estimates for adjusting the weighing parameters \u03bb. In order to solve the problem, we improve Ganchev's symmetric constraint so that it can consider the difference between content words and function words in each language. In particular, we follow the frequency-based idea of Setiawan et al. (2007) that discriminates content words and function words by their frequencies. We propose constraint features that take into account the difference between content words and function words, determined by a frequency threshold.", "cite_spans": [ { "start": 844, "end": 866, "text": "Setiawan et al. (2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Posterior Regularization with Frequency Constraint", "sec_num": "3" }, { "text": "First, we propose a mismatching constraint that penalizes word alignment between content words and function words by decreasing the corresponding posterior probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mismatching constraint", "sec_num": "3.1" }, { "text": "The constraint is represented as f2c (function to content) constraint:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mismatching constraint", "sec_num": "3.1" }, { "text": "\u03c6 f2c i,j (x, y) = (6) \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 +1 ( \u2212 \u2192 y \u2282 y) \u2229 ( \u2212 \u2192 y i = j) \u2229 ((x t i \u2208 C t \u2229 x s j \u2208 F s ) \u222a(x t i \u2208 F t \u2229 x s j \u2208 C s )) \u2229 (\u03b4i,j(x, y) > 0), 0 ( \u2190 \u2212 y \u2282 y) \u2229 ( \u2190 \u2212 y j = i) \u2229 ((x t i \u2208 C t \u2229 x s j \u2208 F s ) \u222a(x t i \u2208 F t \u2229 x s j \u2208 C s )) \u2229 (\u03b4i,j(x, y) > 0), 0 ( \u2212 \u2192 y \u2282 y) \u2229 ( \u2212 \u2192 y i = j) \u2229 ((x t i \u2208 C t \u2229 x s j \u2208 F s ) \u222a(x t i \u2208 F t \u2229 x s j \u2208 C s )) \u2229 (\u03b4i,j(x, y) < 0), \u22121 ( \u2190 \u2212 y \u2282 y) \u2229 ( \u2190 \u2212 y j = i) \u2229 ((x t i \u2208 C t \u2229 x s j \u2208 F s ) \u222a(x t i \u2208 F t \u2229 x s j \u2208 C s )) \u2229 (\u03b4i,j(x, y) < 0). where \u03b4 i,j (x, y) = \u2212 \u2192 p \u03b8 (i, j|x) \u2212 \u2190 \u2212 p \u03b8 (i, j|x) is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mismatching constraint", "sec_num": "3.1" }, { "text": "the difference in the posterior probabilities between the source-to-target and the target-to-source alignment. C s and C t represent content words in the source sentence and target sentence, respectively. Similarly, F s and F t are function words in the source and target sentence, respectively. Intuitively, when there exists a mismatch in content word and function word for a word pair (i, j), the constraint function returns a non-zero value for the model with the highest posterior probability. When coupled with the constraint such that the expectation of the feature value is zero, the constraint function decreases the posterior probability of the highest direction and discourages agreement with each other. 
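Written out as a predicate, the case analysis of Equation 6 is the following; this is an illustrative re-statement with names of our own choosing (phi_f2c, link_dir, C_t, C_s, F_t, F_s, delta), not code from the alignment toolkit.

```python
def phi_f2c(i, j, link_dir, x_t, x_s, C_t, C_s, F_t, F_s, delta):
    """Mismatching (f2c) constraint feature of Eq. 6.

    link_dir is "fwd" for a source-to-target link (y_i = j) and "bwd" for a
    target-to-source link (y_j = i); delta is
    delta_ij = p_fwd(i, j | x) - p_bwd(i, j | x).
    C_* / F_* are the content and function word sets of each language."""
    mismatch = ((x_t[i] in C_t and x_s[j] in F_s) or
                (x_t[i] in F_t and x_s[j] in C_s))
    if not mismatch:
        return None              # constraint not fired: the feature of Eq. 3 applies
    if link_dir == "fwd" and delta > 0:
        return +1                # forward posterior is the larger one: push it down
    if link_dir == "bwd" and delta < 0:
        return -1                # backward posterior is the larger one: push it down
    return 0                     # the weaker direction is left untouched
```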
Note that when this constraint is not fired, we fall back to the constraint function in Equation 3for each word pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mismatching constraint", "sec_num": "3.1" }, { "text": "In contrast to the mismatching constraint, our second constraint function rewards alignment for function to function word matching, namely f2f. The f2f constraint function is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Matching constraint", "sec_num": "3.2" }, { "text": "\u03c6 f2f i,j (x, y) = (7) \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 +1 ( \u2212 \u2192 y \u2282 y) \u2229 ( \u2212 \u2192 y i = j)\u2229 (x t i \u2208 F t \u2229 x s j \u2208 F s ) \u2229 (\u03b4i,j(x, y) > 0), 0 ( \u2190 \u2212 y \u2282 y) \u2229 ( \u2190 \u2212 y j = i)\u2229 (x t i \u2208 F t \u2229 x s j \u2208 F s ) \u2229 (\u03b4i,j(x, y) > 0), 0 ( \u2212 \u2192 y \u2282 y) \u2229 ( \u2212 \u2192 y i = j)\u2229 (x t i \u2208 F t \u2229 x s j \u2208 F s ) \u2229 (\u03b4i,j(x, y) < 0), \u22121 ( \u2190 \u2212 y \u2282 y) \u2229 ( \u2190 \u2212 y j = i)\u2229 (x t i \u2208 F t \u2229 x s j \u2208 F s ) \u2229 (\u03b4i,j(x, y) < 0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Matching constraint", "sec_num": "3.2" }, { "text": "This constraint function returns a non-zero value for a word pair (i, j) when they are function words. As a result, the pair of function words are encouraged to agree with each other, but not other pairs. The content to content word matching function c2c can be defined similarly by replacing F s and F t by C s and C t , respectively. Likewise, the function to content word matching func-tion f2c is defined by considering the matching of content words and function words in two languages. As noted in the mismatch function, when no constraint is fired, we fall back to Eq (3) for each word pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Matching constraint", "sec_num": "3.2" }, { "text": "The data sets used in our experiments are the French-English Hansard Corpus, and two data sets for Japanese-English tasks: the Kyoto free translation task (KFTT) and NTCIR10. The Hansard Corpus consists of parallel texts drawn from official records of the proceedings of the Canadian Parliament. The KFTT (Neubig, 2011) is derived from Japanese Wikipedia articles related to Kyoto, which is professionally translated into English. NTCIR10 comes from patent data employed in a machine translation shared task (Goto et al., 2013) . The statistics of these data are presented in Table 1 . Sentences of over 40 words on both source and target sides are removed for training alignment models. We used a word alignment toolkit cicada 2 for training the IBM Model 4 with our proposed methods. Training is bootstrapped from IBM Model 1, followed by HMM and IBM Model 4. 
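The sketch below shows how the posterior-regularized E-step plugs into one such bootstrap stage, reusing the pr_project sketch given earlier. It is a simplified illustration rather than the cicada implementation: it covers IBM Model 1 only, omits null-word counts in the M-step, and the helper names model1_posteriors and train_model1_with_pr are ours; the HMM and Model 4 stages follow the same E-step/M-step pattern with their own posterior computations.

```python
import numpy as np
from collections import defaultdict

def model1_posteriors(words, given_with_null, t_table):
    """Eq. 2: for each word on one side, the posterior over positions of the
    other side (index 0 = the null word), given a lexical table t_table[(w, v)]."""
    post = np.array([[t_table[(w, v)] for v in given_with_null] for w in words],
                    dtype=float)
    return post / post.sum(axis=1, keepdims=True)

def train_model1_with_pr(bitext, n_em_iter=5):
    """IBM Model 1 EM with the posterior-regularized E-step (sketch).
    Unseen pairs (including the null word after the first M-step) keep a
    small floor probability instead of receiving proper null counts."""
    t_fwd = defaultdict(lambda: 1e-6)   # p(target word | source word)
    t_bwd = defaultdict(lambda: 1e-6)   # p(source word | target word)
    for _ in range(n_em_iter):
        c_fwd, c_bwd = defaultdict(float), defaultdict(float)
        for src, trg in bitext:                                     # lists of words
            p_fwd = model1_posteriors(trg, ["<null>"] + src, t_fwd)    # (I, J+1)
            p_bwd = model1_posteriors(src, ["<null>"] + trg, t_bwd).T  # (I+1, J)
            q = pr_project(p_fwd, p_bwd)       # agreement-projected posteriors (I, J)
            for i, e in enumerate(trg):
                for j, f in enumerate(src):
                    c_fwd[(e, f)] += q[i, j]   # the same symmetrized fractional
                    c_bwd[(f, e)] += q[i, j]   # counts feed both directions
        for counts, table in ((c_fwd, t_fwd), (c_bwd, t_bwd)):
            totals = defaultdict(float)        # M-step: renormalize the counts
            for (w, v), c in counts.items():   # per conditioning word v
                totals[v] += c
            for (w, v), c in counts.items():
                table[(w, v)] = c / totals[v]
    return t_fwd, t_bwd
```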
When generating the final bidirectional word alignment, we use a grow-diag-final heuristic for the Japanese-English tasks and an intersection heuristic in the French-English task, judged by preliminary studies.", "cite_spans": [ { "start": 305, "end": 319, "text": "(Neubig, 2011)", "ref_id": "BIBREF13" }, { "start": 508, "end": 527, "text": "(Goto et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 576, "end": 583, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Following Bisazza and Federico (2012) , we automatically decide the threshold for word frequency to discriminate between content words and function words. Specifically, the threshold is determined by the ratio of highly frequent words. The threshold th is the maximum frequency that satisfies the following equation:", "cite_spans": [ { "start": 10, "end": 37, "text": "Bisazza and Federico (2012)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w\u2208(f req(w)>th) f req(w) w\u2208all f req(w) > r.", "eq_num": "(8)" } ], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Here, we empirically set r = 0.5 by preliminary studies. This method is based on the intuition that content words and function words exist in a document at a constant rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "We measure the impact of our proposed methods on the quality of word alignment measured by AER and F-measure (Och and Ney, 2003) .", "cite_spans": [ { "start": 91, "end": 128, "text": "AER and F-measure (Och and Ney, 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word alignment evaluation", "sec_num": "4.2" }, { "text": "Since there exists no distinction for sure-possible alignments in the KFTT data, we use only sure alignment for our evaluation, both for the French-English and the Japanese-English tasks. Table 2 summarizes our results. The baseline method is symmetric constraint (Ganchev et al., 2010) shown in Table 2 . The numbers in bold and in italics indicate the best score and the second best score, respectively. The differences between f2f,f2c and baseline in KFTT are statistically significant at p < 0.05 using the signtest, but in hansard corpus, there exist no significant differences between the baseline and the proposed methods. In terms of F-measure, it is clear that the f2f method is the most effective method in KFTT, and both f2f and f2c methods exceed the original posterior regularized model of Ganchev et al. (2010) .", "cite_spans": [ { "start": 264, "end": 286, "text": "(Ganchev et al., 2010)", "ref_id": "BIBREF6" }, { "start": 803, "end": 824, "text": "Ganchev et al. (2010)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 188, "end": 195, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 296, "end": 303, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Word alignment evaluation", "sec_num": "4.2" }, { "text": "We also compared these methods with filtering methods (Liang et al., 2006) , in addition to heuristic methods. We plot precision/recall curves and AER by varying the threshold between 0.1 and 0.9 with 0.1 increments. 
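For reference, the word-frequency threshold of Equation 8 with r = 0.5, which determines the content/function split used in the constraints above, can be computed as in the following sketch; the function and variable names are illustrative and not taken from the toolkit.

```python
from collections import Counter

def frequency_threshold(corpus_tokens, r=0.5):
    """Eq. 8: the largest frequency th such that words occurring more than th
    times cover more than a fraction r of all tokens.  Words above th are
    treated as function words, the rest as content words (sketch)."""
    freq = Counter(corpus_tokens)
    total = sum(freq.values())
    th = 0
    # Coverage only changes at observed frequency values, so scan candidate
    # thresholds from the most frequent word downwards and keep the largest
    # one whose "> th" coverage still exceeds r.
    for cand in sorted(set(freq.values()), reverse=True):
        covered = sum(c for c in freq.values() if c > cand)
        if covered > r * total:
            th = cand
            break
    function_words = {w for w, c in freq.items() if c > th}
    content_words = set(freq) - function_words
    return th, function_words, content_words
```

Applying this to each side of the training data yields the sets F and C that appear in Equations 6 and 7.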
From Figures, it can be seen that our proposed methods are superior to the baseline in terms of both precision-recall and AER.", "cite_spans": [ { "start": 54, "end": 74, "text": "(Liang et al., 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Word alignment evaluation", "sec_num": "4.2" }, { "text": "Next, we performed a translation evaluation, measured by BLEU (Papineni et al., 2002) . We compared the grow-diag-final and filtering method (Liang et al., 2006) for creating phrase tables. The threshold for the filtering factor was set to 0.1 which was the best setting in the word alignment experiment in section 4.2 under KFTT. From the English side of the training data, we trained a word using the 5-gram model with SRILM (Stolcke and others, 2002) . \"Moses\" toolkit was used as a decoder (Koehn et al., 2007) and the model parameters were tuned by k-best MIRA (Cherry and Foster, 2012) . In order to avoid tuning instability, we evaluated the average of five runs (Hopkins and May, 2011). The results are summarized in Table 3 . Our proposed methods achieved large gains in NTCIR10 task with the filtered method, but observed no gain in the KFTT with the filtered method. In NTCIR10 task with GDF, the gain in BLEU was smaller than that of KFTT. We calculate p-values and the difference between symmetric and c2c (the most effective proposed constraint) are lower than 0.05 in kftt with GDF and NTCIR10 with filtered method. There seems to be no clear tendency in the improved alignment qualities and the translation qualities, as shown in numerous previous studies (Ganchev et al., 2008) .", "cite_spans": [ { "start": 62, "end": 85, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF15" }, { "start": 141, "end": 161, "text": "(Liang et al., 2006)", "ref_id": "BIBREF11" }, { "start": 427, "end": 453, "text": "(Stolcke and others, 2002)", "ref_id": null }, { "start": 494, "end": 514, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF10" }, { "start": 566, "end": 591, "text": "(Cherry and Foster, 2012)", "ref_id": "BIBREF3" }, { "start": 1272, "end": 1294, "text": "(Ganchev et al., 2008)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 725, "end": 732, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Translation evaluation", "sec_num": "4.3" }, { "text": "In this paper, we proposed new constraint functions under the posterior regularization framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Our constraint functions introduce a fine-grained agreement constraint considering the frequency of words, a assuming that the high frequency words correspond to function words whereas the less frequent words may be treated as content words, based on the previous work of Setiawan et al. (2007) . Experiments on word alignment tasks showed better alignment qualities measured by F-measure and AER on both the Hansard task and KFTT. We also observed large gain in BLEU, 0.2 on average, when compared with the previous posterior regularization method under NTCIR10 task.", "cite_spans": [ { "start": 272, "end": 294, "text": "Setiawan et al. 
(2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "As our future work, we will investigate more precise methods for deciding function words and content words for better alignment and translation qualities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://github.com/tarowatanabe/cicada", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Cutting the long tail: Hybrid language models for translation style adaptation", "authors": [ { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "439--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arianna Bisazza and Marcello Federico. 2012. Cutting the long tail: Hybrid language models for translation style adaptation. In Proceedings of the 13th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 439-448. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "Vincent J Della", "middle": [], "last": "Peter F Brown", "suffix": "" }, { "first": "Stephen A Della", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "Robert L", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathemat- ics of statistical machine translation: Parameter esti- mation. Computational linguistics, 19(2):263-311.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A constrained viterbi relaxation for bidirectional word alignment", "authors": [ { "first": "Yin-Wen", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1481--1490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yin-Wen Chang, Alexander M. Rush, John DeNero, and Michael Collins. 2014. A constrained viterbi relaxation for bidirectional word alignment. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1481-1490, Baltimore, Maryland, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Batch tuning strategies for statistical machine translation", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "427--436", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Cherry and George Foster. 2012. Batch tun- ing strategies for statistical machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 427-436. Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Modelbased aligner combination using dual decomposition", "authors": [ { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "420--429", "other_ids": {}, "num": null, "urls": [], "raw_text": "John DeNero and Klaus Macherey. 2011. Model- based aligner combination using dual decomposi- tion. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 420-429, Port- land, Oregon, USA, June. Association for Computa- tional Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Better alignments = better translations?", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [ "V" ], "last": "Gra\u00e7a", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "986--993", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Jo\u00e3o V. Gra\u00e7a, and Ben Taskar. 2008. Better alignments = better translations? In Proceedings of ACL-08: HLT, pages 986-993, Columbus, Ohio, June. Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Posterior regularization for structured latent variable models", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Joao", "middle": [], "last": "Gra\u00e7a", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2010, "venue": "The Journal of Machine Learning Research", "volume": "99", "issue": "", "pages": "2001--2049", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Joao Gra\u00e7a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. 
The Journal of Machine Learning Research, 99:2001-2049.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Overview of the patent machine translation task at the ntcir-10 workshop", "authors": [ { "first": "Isao", "middle": [], "last": "Goto", "suffix": "" }, { "first": "Ka", "middle": [ "Po" ], "last": "Chow", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Benjamin", "middle": [ "K" ], "last": "Tsou", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 10th NTCIR Workshop Meeting on Evaluation of Information Access Technologies: Information Retrieval, Question Answering and Cross-Lingual Information Access", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isao Goto, Ka Po Chow, Bin Lu, Eiichiro Sumita, and Benjamin K Tsou. 2013. Overview of the patent machine translation task at the ntcir-10 workshop. In Proceedings of the 10th NTCIR Workshop Meet- ing on Evaluation of Information Access Technolo- gies: Information Retrieval, Question Answering and Cross-Lingual Information Access, NTCIR-10.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Tuning as ranking", "authors": [ { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1352--1362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Process- ing, pages 1352-1362, Edinburgh, Scotland, UK., July. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "1", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology- Volume 1, pages 48-54. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177-180. Association for Computational Lin- guistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Alignment by agreement", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- ment by agreement. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 104-111, New York City, USA, June. Association for Computational Linguis- tics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Symmetric Word Alignments for Statistical Machine Translation", "authors": [ { "first": "E", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING 2004", "volume": "", "issue": "", "pages": "219--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Matusov, R. Zens, and H. Ney. 2004. Symmetric Word Alignments for Statistical Machine Transla- tion. In Proceedings of COLING 2004, pages 219- 225, Geneva, Switzerland, August 23-27.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Kyoto free translation task", "authors": [ { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graham Neubig. 2011. The Kyoto free translation task. 
http://www.phontron.com/kftt.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A sys- tematic comparison of various statistical alignment models. Computational linguistics, 29(1):19-51.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Ordering phrases with function words", "authors": [ { "first": "Hendra", "middle": [], "last": "Setiawan", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "712--719", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hendra Setiawan, Min-Yen Kan, and Haizhou Li. 2007. Ordering phrases with function words. In Proceedings of the 45th annual meeting on associ- ation for computational linguistics, pages 712-719. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Srilm-an extensible language modeling toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke et al. 2002. Srilm-an extensible lan- guage modeling toolkit. In INTERSPEECH.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Hmm-based word alignment in statistical translation", "authors": [ { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th conference on Computational linguistics", "volume": "2", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. Hmm-based word alignment in statistical translation. In Proceedings of the 16th conference on Computational linguistics-Volume 2, pages 836- 841. 
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Precision Recall graph in Hansard French-English Figure 2: Precision Recall graph in KFTT Figure 3: AER in Hansard French-English Figure 4: AER in KFTT", "type_str": "figure" }, "TABREF0": { "html": null, "text": "The statistics of the data sets", "content": "
                      hansard                  kftt                     NTCIR10
                      French     English       Japanese    English      Japanese    English
train    sentence     1.13M                    329.88K                  2.02M
         word         23.3M      19.8M         6.08M       5.91M        53.4M       49.4M
         vocabulary   78.1K      57.3K         114K        138K         114K        183K
dev      sentence                              1.17K                    2K
         word                                  26.8K       24.3K        73K         67.3K
         vocabulary                            4.51K       4.78K        4.38K       5.04K
test WA  sentence     447                      582
         word         7.76K      7.02K         14.4K       12.6K
         vocabulary   1.92K      1.69K         2.57K       2.65K
test TR  sentence                              1.16K                    8.6K
         word                                  28.5K       26.7K        334K        310K
         vocabulary                            4.91K       4.57K        10.4K       12.7K
", "num": null, "type_str": "table" }, "TABREF1": { "html": null, "text": "Results of word alignment evaluation with the heuristics-based method (GDF)", "content": "
                       KFTT                                        Hansard (French-English)
method       precision   recall    AER      F             precision   recall    AER     F
symmetric    0.4595      0.5942    48.18    0.5182        0.7029      0.8816    7.29    0.7822
f2f          0.4633      0.5997    47.73    0.5227        0.7042      0.8851    7.29    0.7844
c2c          0.4606      0.5964    48.02    0.5198        0.7001      0.8816    7.34    0.7804
f2c          0.4630      0.5998    47.74    0.5226        0.7037      0.8871    7.10    0.7848
", "num": null, "type_str": "table" }, "TABREF2": { "html": null, "text": "Results of translation evaluation", "content": "
               KFTT                    NTCIR10
method         GDF       Filtered      GDF       Filtered
symmetric      19.06     19.28         28.3      29.71
f2f            19.15     19.17         28.36     29.74
c2c            19.26     19.02         28.36     29.92
f2c            18.91     19.20         28.36     29.67
", "num": null, "type_str": "table" } } } }