{ "paper_id": "Y16-2012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:47:04.680164Z" }, "title": "Yet Another Symmetrical and Real-time Word Alignment Method: Hierarchical Sub-sentential Alignment using F-measure", "authors": [ { "first": "Hao", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Waseda University", "location": { "addrLine": "2-7 Hibikino, Wakamatsu-ku", "postCode": "808-0135", "settlement": "Kitakyushu, Fukuoka", "country": "Japan" } }, "email": "" }, { "first": "Yves", "middle": [], "last": "Lepage", "suffix": "", "affiliation": { "laboratory": "", "institution": "Waseda University", "location": { "addrLine": "2-7 Hibikino, Wakamatsu-ku", "postCode": "808-0135", "settlement": "Kitakyushu, Fukuoka", "country": "Japan" } }, "email": "yves.lepage@waseda.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Symmetrization of word alignments is a fundamental issue in statistical machine translation (SMT). In this paper, we describe a novel reformulation of the Hierarchical Sub-sentential Alignment (HSSA) method using the F-measure. Starting with a soft alignment matrix, we use the F-measure to recursively split the matrix into two soft alignment submatrices; a direction is chosen at the same time on the basis of Inversion Transduction Grammar (ITG). In other words, our method reduces word alignment to recursive segmentation of a bipartite graph, which is simple and easy to implement. It can be considered an alternative to the grow-diag-final-and heuristic. We show its application in phrase-based SMT systems in combination with state-of-the-art approaches. In addition, when fed with word-to-word associations, it can also serve as a real-time word aligner. Our experiments show that, given a reliable lexical translation table, this simple method yields results comparable to state-of-the-art approaches.", "pdf_parse": { "paper_id": "Y16-2012", "_pdf_hash": "", "abstract": [ { "text": "Symmetrization of word alignments is a fundamental issue in statistical machine translation (SMT). In this paper, we describe a novel reformulation of the Hierarchical Sub-sentential Alignment (HSSA) method using the F-measure. Starting with a soft alignment matrix, we use the F-measure to recursively split the matrix into two soft alignment submatrices; a direction is chosen at the same time on the basis of Inversion Transduction Grammar (ITG). In other words, our method reduces word alignment to recursive segmentation of a bipartite graph, which is simple and easy to implement. It can be considered an alternative to the grow-diag-final-and heuristic. We show its application in phrase-based SMT systems in combination with state-of-the-art approaches. In addition, when fed with word-to-word associations, it can also serve as a real-time word aligner. Our experiments show that, given a reliable lexical translation table, this simple method yields results comparable to state-of-the-art approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Since most state-of-the-art Statistical Machine Translation (SMT) approaches require word-to-word aligned data from a parallel corpus, word alignment is a fundamental task that needs to be performed rapidly.
In order to extract translation fragments for various purposes, e.g., word pairs, phrase pairs, hierarchical rules (Chiang, 2005), tree-to-tree correspondences (Zhang et al., 2007), reliable and accurate word aligners are essential.", "cite_spans": [ { "start": 320, "end": 334, "text": "(Chiang, 2005)", "ref_id": "BIBREF11" }, { "start": 367, "end": 387, "text": "(Zhang et al., 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There exist several problems in state-of-the-art methods for word alignment. Current word alignment approaches are usually based on the IBM models (Brown et al., 1993), whose parameters are estimated using the Expectation-Maximization (EM) algorithm. Sometimes, they are augmented with an HMM-based model (Vogel et al., 1996). Since the IBM models are restricted to one-to-many alignments, some multi-word units cannot be correctly aligned. Training models in both directions and merging the two mono-directional alignments with a symmetrization method can overcome this deficiency to some degree.", "cite_spans": [ { "start": 143, "end": 163, "text": "(Brown et al., 1993)", "ref_id": null }, { "start": 302, "end": 322, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As a result, even the standard open-source aligner GIZA++ 1 (Och, 2003), which implements the widely used IBM models and their extensions, still requires a considerable amount of time to produce word alignments. A more recent word aligner, fast align 2 (Dyer et al., 2013), based on a variation of IBM Model 2, has been reported to be faster than the GIZA++ baseline while producing comparable results. However, both of these approaches generate asymmetric alignments. In order to obtain symmetrical word alignments, the alignments produced in the forward and reverse directions are combined using a symmetrization heuristic called grow-diag-final-and (Och, 2003). Starting with the intersection of the alignment points found in both directional alignments, grow-diag-final-and expands the alignment towards their union. Although it has been shown to be most effective for phrase extraction in phrase-based SMT (Wu and Wang, 2007), it lacks a principled explanation.", "cite_spans": [ { "start": 78, "end": 89, "text": "(Och, 2003)", "ref_id": "BIBREF20" }, { "start": 276, "end": 295, "text": "(Dyer et al., 2013)", "ref_id": "BIBREF17" }, { "start": 677, "end": 688, "text": "(Och, 2003)", "ref_id": "BIBREF20" }, { "start": 1001, "end": 1020, "text": "(Wu and Wang, 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, developments in mining large parallel patent or document collections have increased the need for fast word alignment methods.
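To make the grow-diag-final-and heuristic discussed above concrete, here is a minimal sketch in Python; the function and variable names are ours, and actual implementations (for instance the one shipped with Moses) differ in details.

```python
# Sketch of the grow-diag-final-and symmetrization heuristic.
# e2f and f2e are the two directional alignments, given as sets of (i, j)
# links between source position i and target position j.
def grow_diag_final_and(e2f, f2e):
    neighbors = [(-1, 0), (0, -1), (1, 0), (0, 1),
                 (-1, -1), (-1, 1), (1, -1), (1, 1)]
    alignment = set(e2f & f2e)      # start from the intersection
    union = e2f | f2e

    def src_aligned(i):
        return any(x == i for x, _ in alignment)

    def tgt_aligned(j):
        return any(y == j for _, y in alignment)

    # grow-diag: repeatedly add union points adjacent to current points,
    # as long as one of the two words they connect is still unaligned
    added = True
    while added:
        added = False
        for (i, j) in sorted(alignment):
            for (di, dj) in neighbors:
                cand = (i + di, j + dj)
                if (cand in union and cand not in alignment
                        and (not src_aligned(cand[0]) or not tgt_aligned(cand[1]))):
                    alignment.add(cand)
                    added = True

    # final-and: add remaining union points whose two words are both unaligned
    for (i, j) in sorted(union):
        if (i, j) not in alignment and not src_aligned(i) and not tgt_aligned(j):
            alignment.add((i, j))
    return alignment
```

In the standard pipeline, the symmetrized alignment produced this way is what phrase extraction consumes.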
Besides, in the real scenario of Computer Assisted Translation (CAT) (Kay, 1997), used in conjunction with an SMT system (Farajian et al., 2014) for translation or post-editing (Guerberof, 2009), real-time word alignment methods become necessary.", "cite_spans": [ { "start": 202, "end": 213, "text": "(Kay, 1997)", "ref_id": "BIBREF8" }, { "start": 236, "end": 270, "text": "SMT system (Farajian et al., 2014)", "ref_id": null }, { "start": 315, "end": 332, "text": "(Guerberof, 2009)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a novel method based on the F-measure for the symmetrization of word alignments, which can at the same time be regarded as a real-time word alignment approach. We justify this approach with mathematical principles. The paper is organized as follows: in Section 2, we discuss the motivation. In Section 3, we summarize related work such as Viterbi alignment and inversion transduction grammars. In Section 4, we formulate our method and give a mathematical justification. Section 5 reports experiments and results. Finally, we draw conclusions and outline directions for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several considerations drive us to introduce a new method that differs from previous approaches. Time cost is our first concern. Consider the case where a huge collection of parallel documents is handed to the computer: the question is how to align the parallel sentences in this large number of documents in as little time as possible. Since most publicly available word aligners are based on the EM algorithm in order to obtain globally optimal alignments, the time they need to produce word alignments cannot be bounded, which prevents real-time processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "Another realistic problem is that, in most practical machine translation settings, a bilingual lexicon, or even a table of longer phrase translation fragments, is already given or available. Reusing this pre-built knowledge in a real-time aligner that aligns words automatically is a more advisable solution than guessing the probable Viterbi alignment from the data again with some machine learning technique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "There are also some drawbacks in previous approaches such as the IBM models and their variations. All these models are based on restricted alignments in the sense that a source word can be aligned to at most one target word. This constraint is necessary to reduce the computational complexity of the models, but it makes it impossible to align a phrase in the target language (English) such as \"a car\" to a single word in the source language (Japanese/Chinese) \"\u8eca/\u8f66\". Besides, a variation of IBM Model 2 is used in fast align.
It introduces a \"tension\" to model the overall accordance of word orders, but it has proved by (Ding et al., 2015 ) that it performs not well when applied to the very distinct language pairs, e.g., English and Japanese.", "cite_spans": [ { "start": 616, "end": 634, "text": "(Ding et al., 2015", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "3 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "The basic idea of the previous approaches is to develop a model treating the word alignment as a hidden variables (Och, 2003) , by applying some statistical estimation theory to obtain the most possible/Viterbi alignments. The problem of translation can be defined as:", "cite_spans": [ { "start": 114, "end": 125, "text": "(Och, 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Viterbi alignment and symmetrization", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P r(f J 1 |e I 1 ) = a J 1 P r(f J 1 , a J 1 |e I 1 )", "eq_num": "(1)" } ], "section": "Viterbi alignment and symmetrization", "sec_num": "3.1" }, { "text": "Here we use the symbol P r(\u2022) to denote general probability distributions. a J 1 is a \"hidden\" alignment which is mapping from a source position j to a target position a j . It is always possible to find a best alignment by maximizing the likelihood on the given parallel training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Viterbi alignment and symmetrization", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a J 1 = argmax a J 1 P r(f J 1 , a J 1 |e I 1 )", "eq_num": "(2)" } ], "section": "Viterbi alignment and symmetrization", "sec_num": "3.1" }, { "text": "Since Viterbi alignment model is based on conditional probabilities, it only returns one directional alignment in each direction (F \u2192 E and viceversa). In other words, this process is asymmetric. The complementary part of Viterbi alignment model before phrase extraction is grow-diag-final-and, in which the symmetrical word alignments are generated using simple growing heuristics. Given two sets of alignments\u00e2 J 1 andb J 1 , in order to increase the quality of the alignments, they combine\u00e2 J 1 andb J 1 into one alignment matrix A using grow-diag-finaland algorithm. A widely used approach to get word alignments is estimating the alignment using IBM models because word alignments are the by-production of estimating lexicon translation probabilities. However, this generative story looks like a \"chicken or the egg\" problem. On the one hand, given alignments with probabilities it is possible to compute translation probabilities. On the other hand, if knowing which words are a probable translation of another one makes it possible to guess which alignment is probable and which one is improbable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Viterbi alignment and symmetrization", "sec_num": "3.1" }, { "text": "Since the search space of word alignment will grow exponentially with the length of source and target sentences (Brown et al., 1993) , Wu (1997) proposed an approach to constraining the search space for word alignment, namely inversion transduction grammars (ITG). 
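As a concrete reading of Equation 2 under the simplest lexical model (a Model-1-like table t(f|e) with no distortion or fertility terms), the Viterbi alignment decomposes into an independent argmax per source position. A minimal sketch; the table format and names are our assumptions, not the paper's code:

```python
# Sketch: Viterbi alignment under a Model-1-like lexical table (Equation 2
# restricted to lexical translation probabilities only).  t_f_given_e is
# assumed to be a dictionary {(f_word, e_word): probability}; target index 0
# stands for the NULL word.
def viterbi_alignment(f_words, e_words, t_f_given_e, p_null=1e-6):
    """Return a_1..a_J: for each source word f_j, the target position a_j
    (0 = NULL) maximizing t(f_j | e_i)."""
    alignment = []
    for f in f_words:
        best_i, best_p = 0, p_null
        for i, e in enumerate(e_words, start=1):
            p = t_f_given_e.get((f, e), 0.0)
            if p > best_p:
                best_i, best_p = i, p
        alignment.append(best_i)
    return alignment
```

Running such a decoder in both directions and merging the two outputs (for example with the grow-diag-final-and sketch above) gives the asymmetric-then-symmetrized pipeline described in the introduction.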
Generally, ITG is a family of grammars in which the right part of the rule is either two non-terminals or a terminal sequence. ITG is a special case of synchronous context-free grammar, also called Bracketing Transduction Grammar (BTG). There are three simple generation rules, S (straight), I (inverted) and terminal (T ).", "cite_spans": [ { "start": 112, "end": 132, "text": "(Brown et al., 1993)", "ref_id": null }, { "start": 135, "end": 144, "text": "Wu (1997)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "ITG-based word alignment", "sec_num": "3.2" }, { "text": "(3) I :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S : \u03b3 \u2192 [XY ]", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b3 \u2192< XY > (4) T : \u03b3 \u2192 w = (s, t)", "eq_num": "(5)" } ], "section": "S : \u03b3 \u2192 [XY ]", "sec_num": null }, { "text": "The algorithm used by (Wu, 1997 ) synchronously parses the source and the target sentence to build a synchronous parse tree. This ITG tree indicates the same underlying structure but the ordering of constituents may be different. Due to its simplicity and effectiveness of modelling bilingual correspondence, ITG can be used to model the bilingual sentences in very distinct ordering. In fact, an ITG-style Tree is a bitree consists of one tree in the source side and another tree in the target side (see Figure 1 .a), here, two trees are compressed as a single tree. Besides, an ITG-style Tree is also able to be displayed in a soft alignment matrix (see Figure 2 ) with the representation of bipartite graph (see Figure 1 .b) .", "cite_spans": [ { "start": 22, "end": 31, "text": "(Wu, 1997", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 505, "end": 513, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 656, "end": 664, "text": "Figure 2", "ref_id": null }, { "start": 715, "end": 723, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "S : \u03b3 \u2192 [XY ]", "sec_num": null }, { "text": "Hierarchical sub-sentential alignment (HSSA) is yet another alignment approach, introduced by (Lardilleux et al., 2012) . This method does not rely on the EM algorithm as other alignment models. With a recursive binary segmentation process of searching the segment point in a soft alignment matrix (as Figure 2 ) between a source sentence and its corresponding target sentence, this approach aims to minimize Ncut score , which can yield acceptable and accurate 1-to-many or manyto-1 word alignments.", "cite_spans": [ { "start": 94, "end": 119, "text": "(Lardilleux et al., 2012)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 302, "end": 310, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Hierarchical sub-sentential alignment", "sec_num": "3.3" }, { "text": "In order to build soft alignment matrices before the step of aligning words, Lardilleux et al. (2012) employed Anymalign 3 to obtain the prepared translation table of lexicon translation probabilities. Since the training times and the quality of translation table changed considerably depending on the timeouts for Anymalign, an easy and fair comparison to state-of-the-art approaches is difficult.", "cite_spans": [ { "start": 77, "end": 101, "text": "Lardilleux et al. 
(2012)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical sub-sentential alignment", "sec_num": "3.3" }, { "text": "Given the grey-scale graph of soft alignment, Hierarchical Sub-sentential Alignment (hereafter referred to as HSSA) approach takes all cells in the soft alignment matrix into consideration and seeks the precise criterion for a good partition same as image segmentation. It makes use of a popular modern clustering algorithm called normalized cuts Shi and Malik, 2000) , i.e., spectral clustering, or Ncut for short, to binary segment the matrix recursively.", "cite_spans": [ { "start": 347, "end": 367, "text": "Shi and Malik, 2000)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical sub-sentential alignment", "sec_num": "3.3" }, { "text": "In the following section, we will refine the proposal of hierarchical sub-sentential alignment. We will not use the notion of Ncut, so as to give a sim- Figure 2 : Translation strengths on a logarithmic scale in a English-Japanese sentence pair matrix as a grey graph.", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 161, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Hierarchical sub-sentential alignment", "sec_num": "3.3" }, { "text": "(X, Y ) (X,Y ) (X,\u0232 ) (X,\u0232 ) (X, Y ) (X, Y ) (X,\u0232 ) T = t 0 , t 1 , . . . , t j . . . , t n S = s 0 , s 1 , . . . , s i , . . . , s m (m, n) (m, n) (X,\u0232 ) Inverted Straight", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical sub-sentential alignment", "sec_num": "3.3" }, { "text": "ple and convincing justification using F-measure for this symmetrical word alignment approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical sub-sentential alignment", "sec_num": "3.3" }, { "text": "We propose to regard the alignment associations between a source sentence S and a target word T as a contingency matrix (Matusov et al., 2004; Moore, 2005) as in Figure 2 , noted as M(I, J), in which I is the length of source sentence in words and J for target side. We define a function w which measuring the strength of the translation link between any source and target pair of words (s i , t j ). The symmetric alignment between word s i and t j presents a greyed cell (i, j) in this matrix. In this paper, the score w(s i , t j ) is defined as the geometric mean of the bidirectional lexical translation probabilities p(s i |t j ) and p(t j |s i ). 
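A minimal sketch of how such a soft alignment matrix can be filled from the two directional lexical translation tables; the dictionary format, names, and the floor value for unseen pairs are our assumptions:

```python
import math

# Sketch: fill the soft alignment matrix of Section 4.1.  Each cell holds
# w(s_i, t_j), the geometric mean of the two lexical translation
# probabilities p(s_i|t_j) and p(t_j|s_i).  Both tables are assumed to be
# plain dictionaries keyed as (predicted_word, given_word); `floor` stands
# in for the smoothing of unseen pairs mentioned in Section 4.3.
def soft_alignment_matrix(src_words, tgt_words, p_s_given_t, p_t_given_s,
                          floor=1e-7):
    matrix = []
    for s in src_words:
        row = []
        for t in tgt_words:
            p1 = p_s_given_t.get((s, t), floor)
            p2 = p_t_given_s.get((t, s), floor)
            row.append(math.sqrt(p1 * p2))
        matrix.append(row)
    return matrix
```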
For a given sub-sentential alignment A(X, Y ) \u2286 I \u00d7 J, we define the weight of this alignment W(X,Y) as the summation of association scores between each source and target words of a block (X, Y ) in such a matrix.", "cite_spans": [ { "start": 120, "end": 142, "text": "(Matusov et al., 2004;", "ref_id": "BIBREF26" }, { "start": 143, "end": 155, "text": "Moore, 2005)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 162, "end": 170, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Soft alignment matrices", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W (X, Y ) = s\u2208X t\u2208Y w(s, t)", "eq_num": "(6)" } ], "section": "Soft alignment matrices", "sec_num": "4.1" }, { "text": "Since we have to calculate all cells in the block (X, Y ), the time complexity here is in O(I \u00d7 J).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Soft alignment matrices", "sec_num": "4.1" }, { "text": "Ncut can be computed as the following formula same as in , :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "N cut(X, Y ) = cut(X ,Y ) cut(X ,Y )+2\u00d7W (X ,Y ) + cut(X ,Y ) cut(X ,Y )+2\u00d7W (X ,Y ) cut(X, Y ) = W (X, Y ) + W (X, Y ) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "Actually, minimizing Ncut(X, Y ) is equivalent to maximizing the arithmetic mean of the F-measure (also called F-score) of X relatively to Y andX relatively to\u0232 . It can be derived as following. In general, F 1 -measure (Kim et al., 1999) of block (X, Y ) is defined as the harmonic mean of precision P and recall R:", "cite_spans": [ { "start": 220, "end": 238, "text": "(Kim et al., 1999)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 F 1 (X, Y ) = 1 2 \u00d7 ( 1 P (X, Y ) + 1 R(X, Y ) )", "eq_num": "(8)" } ], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "To interpret sentence pair matrices as contingency matrices, it suffices to read trans-lation strengths as reflecting the contribution of a source word to a target word and reciprocally. With this interpretation, the precision (P ) and the recall (R) for two sub-parts of the source and the target sentences can easily be expressed using the sum of all the translation strengths inside a block. 
These two measures can thus be defined as following Equations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (X, Y ) = W (X, Y ) W (X, Y ) + W (X, Y ) (9) R(X, Y ) = W (X, Y ) W (X, Y ) + W (X, Y )", "eq_num": "(10)" } ], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "Now, it suffices to replace precision and recall by their values in terms of cut to derive the following formula.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 F 1 (X, Y ) = 1 2 \u00d7 ( W (X ,Y )+W (X ,Y ) W (X ,Y ) + W (X ,Y )+W (X ,Y ) W (X ,Y ) ) (11) = 2\u00d7W (X ,Y )+W (X ,Y )+W (X ,Y ) 2\u00d7W (X ,Y ) (12) = 2\u00d7W (X ,Y )+cut(X ,Y ) 2\u00d7W (X ,Y )", "eq_num": "(13)" } ], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "By using Equation 13 and Equation 7, for (X, Y ), we obtain:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F 1 (X, Y )= 1 \u2212 W (X ,Y )+W (X ,Y ) 2\u00d7W (X ,Y )+W (X ,Y )+W (X ,Y ) (14) = 1 \u2212 Ncut lef t (X, Y )", "eq_num": "(15)" } ], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "In a contingency matrix, where balanced F 1 -score can be used regularly for binary classification, especially on the scenario of binary segmentation of bilingual sentence pair under the ITG framework. With this interpretation, for the straight case of ITG, we can get the F 1 -score for the remaining block (X, Y ) as,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F 1 (X, Y ) = 1 \u2212 Ncut right (X, Y )", "eq_num": "(16)" } ], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "Absolutely, an equivalent way of writing is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "N cut(X, Y ) = 2 \u00d7 [1 \u2212 F1(X ,Y )+F1(X ,Y ) 2 ]", "eq_num": "(17)" } ], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "To summarize, minimizing Ncut equals finding the best point with the maximum value in the matrix of arithmetic means of F 1 -score. This in fact makes sense intuitively if we look for the best possible way for parts of the source and target sentences to correspond. 
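Putting the pieces together, a minimal sketch of the resulting segmentation procedure: at every step, try each split point with both ITG orientations (straight and inverted), keep the one that maximizes the arithmetic mean of the two block F1-scores, and recurse. All names are ours, and this is a simplified reading of the method, not the authors' implementation.

```python
# Sketch of the F-measure formulation of HSSA (Sections 4.1-4.2).
# `m` is the soft alignment matrix; blocks are half-open index ranges.
def W(m, i0, i1, j0, j1):
    """Block weight of Equation 6 (naive O(I x J) sum; Section 4.3
    replaces this with a summed area table so each call becomes O(1))."""
    return sum(m[i][j] for i in range(i0, i1) for j in range(j0, j1))

def f1(w_xy, w_xbar_y, w_x_ybar):
    """F1 of a block, as in Equation 13: 2 W(X,Y) / (2 W(X,Y) + cut(X,Y))."""
    denom = 2.0 * w_xy + w_xbar_y + w_x_ybar
    return 2.0 * w_xy / denom if denom > 0 else 0.0

def segment(m, i0, i1, j0, j1, links):
    """Recursively split the block [i0,i1) x [j0,j1); emit word links at leaves."""
    if i1 - i0 <= 1 or j1 - j0 <= 1:
        # leaf: a 1-to-many or many-to-1 alignment
        links.extend((i, j) for i in range(i0, i1) for j in range(j0, j1))
        return
    best = None
    for i in range(i0 + 1, i1):          # source split point
        for j in range(j0 + 1, j1):      # target split point
            w_xy = W(m, i0, i, j0, j)           # (X, Y)
            w_xbar_ybar = W(m, i, i1, j, j1)    # (Xbar, Ybar)
            w_x_ybar = W(m, i0, i, j, j1)       # (X, Ybar)
            w_xbar_y = W(m, i, i1, j0, j)       # (Xbar, Y)
            # straight: pair (X, Y) with (Xbar, Ybar)
            straight = 0.5 * (f1(w_xy, w_xbar_y, w_x_ybar)
                              + f1(w_xbar_ybar, w_x_ybar, w_xbar_y))
            # inverted: pair (X, Ybar) with (Xbar, Y)
            inverted = 0.5 * (f1(w_x_ybar, w_xbar_ybar, w_xy)
                              + f1(w_xbar_y, w_xy, w_xbar_ybar))
            for score, orient in ((straight, "S"), (inverted, "I")):
                if best is None or score > best[0]:
                    best = (score, orient, i, j)
    _, orient, i, j = best
    if orient == "S":
        segment(m, i0, i, j0, j, links)
        segment(m, i, i1, j, j1, links)
    else:
        segment(m, i0, i, j, j1, links)
        segment(m, i, i1, j0, j, links)

# Usage: links = []; segment(matrix, 0, len(matrix), 0, len(matrix[0]), links)
```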
These parts should cover one another in both directions as much as possible, that is to say, they should exhibit the best recall and precision at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation: from Ncut to F-measure", "sec_num": "4.2" }, { "text": "In order to reduce the time complexity in calculate the value of W(X,Y ), we make use of a specialized data structure for fast computation. For each given sentence pair, a summed area table (SAT) was created for fast calculating the summation of cells in the corresponding soft alignment matrix M(I, J). The preprocessing step is to build a new (I + 1, J + 1) matrix M , where each entry is the sum of the submatrix to the upper-left of that entry. Any arbitrary sub-matrix sum can be calculated by looking up and mixing only 4 entries in the SAT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reducing time complexity", "sec_num": "4.3" }, { "text": "Assume X, Y starts from point (i 0 , j 0 ), where X,X and Y,\u0232 are splitting at i 1 and j 1 separately. We have,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reducing time complexity", "sec_num": "4.3" }, { "text": "W (X, Y ) = i 0 < i < i 1 j 0 < j < j 1 w(i, j) = M (i 1 , j 1 ) \u2212 M (i 0 , j 1 ) \u2212 M (i 1 , j 0 ) + M (i 0 , j 0 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reducing time complexity", "sec_num": "4.3" }, { "text": "Time complexity here is reduced from O(I \u00d7 J) to O(1) when calculating the summation of cells in the block of X, Y , and similar to the remaining. Due to data sparsity, a simple Laplace smoothing was used here to handle the unseen alignments with a very small smoothing parameter \u03b1 = 10 \u22127 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reducing time complexity", "sec_num": "4.3" }, { "text": "We evaluate the performance of our proposed methods. We conduct the experiments on KFTT corpus 4 , in which applied Japanese-to-English word alignment. We report the performance of various alignment approach in terms of precision, recall and alignment error rate (AER) as (Och, 2003) defined. The quality of an alignment A = {(j, a j )|a j > 0} is then computed by appropriately redefined precision and recall measures:", "cite_spans": [ { "start": 272, "end": 283, "text": "(Och, 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Alignment Experiments", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Recall = |A \u2229 S| |S| , P recision = |A \u2229 P | |P | , S \u2286 P", "eq_num": "(18)" } ], "section": "Alignment Experiments", "sec_num": "5.1" }, { "text": "and the following alignment error rate:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Experiments", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "AER(S, P ; A) = 1 \u2212 |A \u2229 S| + |A \u2229 P | |A| + |S|", "eq_num": "(19)" } ], "section": "Alignment Experiments", "sec_num": "5.1" }, { "text": "The details are shown in Table 1 . Figure 3 plots the average run-time of the currently available alignment approaches as a function of the number of input English-French sentence pairs. The HSSA approach is far more efficient. In total, aligning the The test sentence pair is sampled from KFTT corpus. 
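The summed area table described in Section 4.3 above can be sketched as a standard 2-D prefix sum; the names are ours:

```python
# Sketch of the summed area table (SAT) of Section 4.3: an (I+1) x (J+1)
# matrix of 2-D prefix sums over the soft alignment matrix m, so that any
# block weight W(X, Y) is recovered from four lookups in O(1).
def build_sat(m):
    I, J = len(m), len(m[0])
    sat = [[0.0] * (J + 1) for _ in range(I + 1)]
    for i in range(I):
        for j in range(J):
            sat[i + 1][j + 1] = (m[i][j] + sat[i][j + 1]
                                 + sat[i + 1][j] - sat[i][j])
    return sat

def block_weight(sat, i0, i1, j0, j1):
    """Sum of m over rows [i0, i1) and columns [j0, j1), by inclusion-exclusion."""
    return sat[i1][j1] - sat[i0][j1] - sat[i1][j0] + sat[i0][j0]
```

Substituting block_weight for the naive W in the segmentation sketch above yields the O(1) block sums claimed in Section 4.3.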
We fed HSSA with the lexical translation table relying on the output of GIZA++. In this example, our proposed approach (GIZA++ + HSSA) generates a better alignment than GIZA++ + GDFA or fast align + GDFA. Remember that, given the lexical translation probabilities, HSSA runs only in one iteration.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 1", "ref_id": null }, { "start": 35, "end": 43, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Alignment Experiments", "sec_num": "5.1" }, { "text": "10K sentence pairs in the corpus completed in nearly 20 second with the HSSA approach but required more time with the other EM-based approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Experiments", "sec_num": "5.1" }, { "text": "In Table 1 , Precision of our proposed approach are lower than baseline system, but Recall are better than fast align + GDFA. However, it has been proved (Fraser and Marcu, 2007; Ganchev et al., 2008 ) that AER does not imply a better translation accuracy (see Table 3 ).", "cite_spans": [ { "start": 154, "end": 178, "text": "(Fraser and Marcu, 2007;", "ref_id": "BIBREF22" }, { "start": 179, "end": 199, "text": "Ganchev et al., 2008", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": null }, { "start": 261, "end": 268, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Alignment Experiments", "sec_num": "5.1" }, { "text": "In this section, we first describe the data used in our experiments. We then perform to extract the lexical translation probabilities. Finally, we conduct translation experiments using both the baseline system (GIZA++) and the system using HSSA approach combined with to show, given a reliable lexical translation table for soft alignment matrix, the effectiveness of our proposed integrated system. We also investigate the time cost and the influence on the SMT frameworks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.2" }, { "text": "In order to evaluate the proposed method, we conducted translation experiments on two corpora: Europarl Corpus and KFTT corpus. For English-Japanese (en-ja) and Japanese-English (ja-en), we evaluated on the KFTT corpus. For English-Finnish (en-fi), Spanish-Portuguese (es-pt) and English-French (en-fr), we measure the translation metrics on Europarl Corpus v7 5 . The baseline systems are using GIZA++ to train as generally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.2" }, { "text": "In our experiments, standard phrase-based statistical machine translation systems were built by using the Moses toolkit (Koehn et al., 2007) , Minimum Error Rate Training (Och, 2003) , and the KenLM language model (Heafield, 2011) . Default training pipeline for phrase-based SMT in is adopt with default distortion-limit 6. For the evaluation of machine translation quality, some standard automatic evaluation metrics have been used, like BLEU (Papineni et al., 2002) , NIST (Doddington, 2002) and RIBES (Isozaki et al., 2010) in all experiments. 
When compared with the baseline system (GIZA++ + GDFA), there is no significant difference on the final results of machine translation between using the alignments output by the proposed approach and GIZA++.", "cite_spans": [ { "start": 120, "end": 140, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF13" }, { "start": 171, "end": 182, "text": "(Och, 2003)", "ref_id": "BIBREF20" }, { "start": 214, "end": 230, "text": "(Heafield, 2011)", "ref_id": "BIBREF33" }, { "start": 445, "end": 468, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF12" }, { "start": 476, "end": 494, "text": "(Doddington, 2002)", "ref_id": "BIBREF16" }, { "start": 505, "end": 527, "text": "(Isozaki et al., 2010)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.2" }, { "text": "In this paper, we studied an ITG-based bilingual word alignment method which recursively segments the sentence pair on the basis of a soft alignment matrix. There are several advantages in our proposed method. Firstly, when combining the proposed method with word association probabilities (lexical translation table), it is more reasonable to obtain symmetrical alignments using the proposed method rather than grow-diag-final-and. In other words, this method provides an alternative to grow-diagfinal-and for symmetrization of word alignments. It achieves a similar speed compared to the simplest IBM model 1. Second, HSSA points a new way to real-time word alignment. For the tasks of processing same domain document, HSSA makes it possible to reuse the pre-built crossing-language information, likes bilingual lexical translation table. In our experiment, it has demonstrated that our proposed method achieves comparable accuracies compared Table 3 : Comparison of translation results using various configurations, GIZA++ or fast align with grow-diagfinal-and (GDFA) or hierarchical subsentential alignment (HSSA). Bold surfaces indicate the best BlEU score in each group. No significant difference between directly GIZA++ + GDFA with our proposed method except en-fi. Statistical significantly difference in BLEU score at \u2021 : p < 0.01 and \u2020 : p < 0.05 compared with GIZA++ + GDFA.", "cite_spans": [], "ref_spans": [ { "start": 945, "end": 952, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "with a state-of-the-art baseline. Finally, compared with original HSSA, the advantages of our implementation includes well-formulated, shorter computation times spent, armed with smoothing technique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "For future work, we think of designing a beamsearch variation to make it possible to generate several parsing derivations during recursive segmentation. 
This will allow us to investigate recombinations of different derivations in order to obtain more possible alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "30th Pacific Asia Conference on Language, Information and Computation (PACLIC 30)Seoul, Republic of Korea, October 28-30, 2016", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://anymalign.limsi.fr/ PACLIC 30 Proceedings", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.phontron.com/kftt/index.html PACLIC 30 Proceedings", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.statmt.org/europarl/ PACLIC 30 Proceedings", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported in part by China Scholarship Council (CSC) under the CSC Grant No.201406890026. We also thank the anonymous reviewers for their insightful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Statistical Approach to Language Translation. Proceedings of the International Conference on Computational Linguistics (COLING)", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "John", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Cocke", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della-Pietra", "suffix": "" }, { "first": "Frederick", "middle": [], "last": "Della-Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Jelinek", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Mercer", "suffix": "" }, { "first": "", "middle": [], "last": "Rossin", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown and John Cocke and Stephen A. Della- Pietra and Vincent J. Della-Pietra and Frederick Je- linek and Robert L. Mercer and Paul Rossin. 1988. A Statistical Approach to Language Translation. Pro- ceedings of the International Conference on Computa- tional Linguistics (COLING).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The mathematics of statistical machine translation: Parameter estimation. Computational linguistic", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [ "Della" ], "last": "Vincent", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Pietra", "suffix": "" }, { "first": "Della", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1988, "venue": "", "volume": "19", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A Della Pietra and Robert L. Mercer 1988. The mathemat- ics of statistical machine translation: Parameter es- timation. Computational linguistic. volume 19 (2): pp.263-311. MIT Press M. Amin Farajian, Nicola Bertoldi and Marcello Fed- erico. 2014. Online word alignment for online adap- tive machine translation. EACL. 
84.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2003. Statistical machine translation. Cambridge University Press , 2009.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Alignment by agreement", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Ben Taskar and Dan Klein. 2006. Align- ment by agreement. Proceedings of the main con- ference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Association for Computa- tional Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A systematic comparison of various statistical alignment models. Computational linguistic", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "", "volume": "29", "issue": "", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment mod- els. Computational linguistic. 29.1: pp.19-51", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Franz", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Josef", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, Franz Josef, and Hermann Ney. \"A systematic comparison of various statistical alignment models.\" .", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och and Daniel Marcu. 2003. Statistical phrase-based translation. Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Better alignments= better translations? ACL-08: HLT. 
Association for Computational Linguistics", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Joao", "middle": [ "V" ], "last": "Grac A", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Joao V Grac a and Ben Taskar. 2008. Better alignments= better translations? ACL-08: HLT. Association for Computational Linguistics. page 986.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The proper place of men and machines in language translation. machine translation", "authors": [ { "first": "Martin", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1997, "venue": "", "volume": "12", "issue": "", "pages": "3--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Kay 1997. The proper place of men and machines in language translation. machine translation, 12, 1-2. Springer. pp.3-23", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A tree-to-tree alignment-based model for statistical machine translation. MT-Summit-07", "authors": [ { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hongfei", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Aiti", "middle": [], "last": "Aw", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Chew Limu Tan", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "535--542", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Zhang, Hongfei Jiang and AiTi Aw and Jun Sun and Sheng Li and Chew Limu Tan. 2003. A tree-to-tree alignment-based model for statistical machine transla- tion. MT-Summit-07. pp.535-542", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improving fast align by Reordering", "authors": [ { "first": "Chenchen", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chenchen Ding, Masao Utiyama and Eiichiro Sumita. 2015. Improving fast align by Reordering. Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. Proceedings of the 43rd Annual Meeting on Association for Computa- tional Linguistics. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, et al. 2002. BLEU: a method for automatic evaluation of machine translation. In Pro- ceedings of the 40th annual meeting on association for computational linguistics. Association for Compu- tational Linguistics. p. 311-318.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "", "middle": [], "last": "Philipp Koehn", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, et al. 2007. Moses: Open source toolkit for statistical machine translation. Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions. Association for Compu- tational Linguistics, 2007.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Hierarchical sub-sentential alignment with Anymalign", "authors": [ { "first": "Adrien", "middle": [], "last": "Lardilleux", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Lepage", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adrien Lardilleux, Fran\u00e7ois Yvon and Yves Lepage. 2012. Hierarchical sub-sentential alignment with Anymalign. 16th annual conference of the European Association for Machine Translation (EAMT).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "HMM-based word alignment in statistical translation", "authors": [ { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th conference on Computational linguistics", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Vogel, Hermann Ney and Christoph Tillmann. 1996. HMM-based word alignment in statistical trans- lation. Proceedings of the 16th conference on Compu- tational linguistics-Volume 2. Association for Com- putational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", "authors": [ { "first": "George", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the second international conference on Human Language Technology Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Doddington. 2002. Automatic evaluation of ma- chine translation quality using n-gram co-occurrence statistics. Proceedings of the second international con- ference on Human Language Technology Research. 
Morgan Kaufmann Publishers Inc.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Simple, Fast, and Effective Reparameterization of IBM Model", "authors": [ { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "V", "middle": [], "last": "Chahuneau", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "644--649", "other_ids": {}, "num": null, "urls": [], "raw_text": "C Dyer, V Chahuneau and NA Smith 2013. A Simple, Fast, and Effective Reparameterization of IBM Model. Proceedings of the Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies. pp.644-649.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bipartite graph partitioning and data clustering", "authors": [ { "first": "Hongyuan", "middle": [], "last": "Zha", "suffix": "" }, { "first": "Xiaofeng", "middle": [], "last": "He", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Horst", "middle": [], "last": "Simon", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Gu", "suffix": "" } ], "year": 2001, "venue": "Proc. of the 10th international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyuan Zha, Xiaofeng He, Chris Ding, Horst Simon, and Ming Gu. 2001. Bipartite graph partitioning and data clustering. In Proc. of the 10th international con- ference on Information and knowledge management, pages 25-32, Atlanta.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum error rate train- ing in statistical machine translation.. Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics-Volume 1. Association for Compu- tational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Comparative study of word alignment heuristics and phrase-based SMT", "authors": [ { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the MT Summit XI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hua Wu and Haifeng Wang. 2007. Comparative study of word alignment heuristics and phrase-based SMT. Proceedings of the MT Summit XI.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Measuring word alignment quality for statistical machine translation", "authors": [ { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Fraser and Daniel Marcu. 2007. Measuring word alignment quality for statistical machine transla- tion. 
Computational Linguistics. 33(3)", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "", "volume": "23", "issue": "", "pages": "377--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics.23.3: pp.377-403.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Normalized cuts and image segmentation. Pattern Analysis and Machine Intelligence", "authors": [ { "first": "Jianbo", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jitendra", "middle": [], "last": "Malik", "suffix": "" } ], "year": 2000, "venue": "IEEE Transactions", "volume": "", "issue": "", "pages": "888--905", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianbo Shi and Jitendra Malik 2000. Normalized cuts and image segmentation. Pattern Analysis and Ma- chine Intelligence, IEEE Transactions on.22.8. IEEE. pp.888-905", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Bipartite graph partitioning and data clustering Proceedings of the tenth international conference on Information and knowledge management", "authors": [ { "first": "Hongyuan", "middle": [], "last": "Zha", "suffix": "" }, { "first": "Xiaofeng", "middle": [], "last": "He", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Horst", "middle": [], "last": "Simon", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Gu", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyuan Zha, Xiaofeng He, Chris Ding and Horst Si- mon and Ming Gu. 2001. Bipartite graph partitioning and data clustering Proceedings of the tenth interna- tional conference on Information and knowledge man- agement. Association for Computational Linguistics. pp.25-32.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Symmetric word alignments for statistical machine translation", "authors": [ { "first": "Evgeny", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th international conference on Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evgeny Matusov, Richard Zens, and Hermann Ney. 2004. Symmetric word alignments for statistical ma- chine translation. Proceedings of the 20th interna- tional conference on Computational Linguistics. As- sociation for Computational Linguistics. pp.219.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Performance measures for information extraction", "authors": [ { "first": "J", "middle": [], "last": "Makhoul", "suffix": "" }, { "first": "F", "middle": [], "last": "Kubala", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 1999, "venue": "Proceedings of DARPA broadcast news workshop", "volume": "", "issue": "", "pages": "249--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Makhoul, F. Kubala, R. 
Schwartz and R. Weischedel,. 1999. Performance measures for information extrac- tion. In Proceedings of DARPA broadcast news work- shop. pp. 249-252.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Performance measures for information extraction", "authors": [ { "first": "J", "middle": [], "last": "Makhoul", "suffix": "" }, { "first": "F", "middle": [], "last": "Kubala", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 1999, "venue": "Proceedings of DARPA broadcast news workshop", "volume": "", "issue": "", "pages": "249--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Makhoul, F. Kubala, R. Schwartz and R. Weischedel,. 1999. Performance measures for information extrac- tion. In Proceedings of DARPA broadcast news work- shop. pp. 249-252.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Symmetric probabilistic alignment for example-based translation", "authors": [ { "first": "Jae", "middle": [ "Dong" ], "last": "Kim", "suffix": "" }, { "first": "Ralf", "middle": [ "D" ], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Jansen", "suffix": "" }, { "first": "", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Tenth Workshop of the European Assocation for Machine Translation (EAMT-05)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jae Dong Kim, Ralf D. Brown, Peter J. Jansen and Jaime G. Carbonell. 2005. Symmetric probabilistic align- ment for example-based translation. In Proceedings of the Tenth Workshop of the European Assocation for Machine Translation (EAMT-05), May.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Association-based bilingual word alignment", "authors": [ { "first": "C", "middle": [], "last": "Robert", "suffix": "" }, { "first": "", "middle": [], "last": "Moore", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert C Moore. 2005. Association-based bilingual word alignment. Proceedings of the ACL Workshop on Building and Using Parallel Texts. Association for Computational Linguistics,", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Productivity and quality in MT post-editing MT Summit XII-Workshop: Beyond Translation Memories: New Tools for Translators MT", "authors": [ { "first": "Ana", "middle": [], "last": "Guerberof", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ana Guerberof 2009. 
Productivity and quality in MT post-editing MT Summit XII-Workshop: Beyond Translation Memories: New Tools for Translators MT.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Automatic evaluation of translation quality for distant language pairs", "authors": [ { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "" }, { "first": "Tsutomu", "middle": [], "last": "Hirao", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Katsuhito", "middle": [], "last": "Sudoh", "suffix": "" }, { "first": "Hajime", "middle": [], "last": "Tsukada", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "944--952", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hideki Isozaki, Tsutomu Hirao,Kevin Duh, Katsuhito Su- doh and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. Pro- ceedings of the 2010 Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics. pp. 944-952.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "KenLM: Faster and smaller language model queries Proceedings of the Sixth Workshop on Statistical Machine Translation", "authors": [ { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Heafield. 2011. KenLM: Faster and smaller lan- guage model queries Proceedings of the Sixth Work- shop on Statistical Machine Translation. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Alignments representations using ITG and bipartite graph. None of the structure contains cycles. The Japanese phrase \u5099\u4e2d \u56fd \u306b \u751f\u307e\u308c means born in bicchu province in English.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Comparison of alignments output by various tools.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Average word alignment run-time (in seconds) as a function of the size of a corpus (in sentence pairs).", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Our proposed approach starts from alignment associations with some probabilities, which is different from the standard phrase-based SMT pipeline.", "type_str": "figure", "num": null, "uris": null }, "TABREF2": { "text": "Statistics on the parallel corpus used in the experiments (K=1,000 lines).", "type_str": "table", "content": "