{ "paper_id": "D14-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:53:52.358054Z" }, "title": "Syntax-Augmented Machine Translation using Syntax-Label Clustering", "authors": [ { "first": "Hideya", "middle": [], "last": "Mino", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikaridai, Seika-cho, Soraku-gun", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "hideya.mino@nict.go.jp" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikaridai, Seika-cho, Soraku-gun", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "taro.watanabe@nict.go.jp" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikaridai, Seika-cho, Soraku-gun", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "eiichiro.sumita@nict.go.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recently, syntactic information has helped significantly to improve statistical machine translation. However, the use of syntactic information may have a negative impact on the speed of translation because of the large number of rules, especially when syntax labels are projected from a parser in syntax-augmented machine translation. 
In this paper, we propose a syntax-label clustering method that uses an exchange algorithm in which syntax labels are clustered together to reduce the number of rules. The proposed method achieves clustering by directly maximizing the likelihood of synchronous rules, whereas previous work considered only the similarity of probabilistic distributions of labels. We tested the proposed method on Japanese-English and Chinese-English translation tasks and found order-of-magnitude faster label clustering and gains in translation quality compared with the previous clustering method.", "pdf_parse": { "paper_id": "D14-1019", "_pdf_hash": "", "abstract": [ { "text": "Recently, syntactic information has helped significantly to improve statistical machine translation. However, the use of syntactic information may have a negative impact on the speed of translation because of the large number of rules, especially when syntax labels are projected from a parser in syntax-augmented machine translation. In this paper, we propose a syntax-label clustering method that uses an exchange algorithm in which syntax labels are clustered together to reduce the number of rules. The proposed method achieves clustering by directly maximizing the likelihood of synchronous rules, whereas previous work considered only the similarity of probabilistic distributions of labels. We tested the proposed method on Japanese-English and Chinese-English translation tasks and found order-of-magnitude faster label clustering and gains in translation quality compared with the previous clustering method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, statistical machine translation (SMT) models that use syntactic information have received significant research attention. 
These models use syntactic information on the source side (Liu et al., 2006; Mylonakis and Sima'an, 2011) , the target side (Galley et al., 2006; Huang and Knight, 2006) or both sides (Chiang, 2010; Hanneman and Lavie, 2013) to produce syntactically correct translations. Zollmann and Venugopal (2006) proposed syntax-augmented MT (SAMT), which is an MT system that uses the syntax labels of a parser. The SAMT grammar directly encodes syntactic information into the synchronous context-free grammar (SCFG) of Hiero (Chiang, 2007) , which relies on two nonterminal labels. One problem in adding syntax labels to Hiero-style rules is that only partial phrases are assigned labels. It is common practice to address this problem by extending the labels with the idea of combinatory categorial grammar (CCG) (Steedman, 2000) . Although this extended syntactic information may improve the coverage of rules and syntactic correctness in translation, the increased grammar size causes serious speed and data-sparseness problems. 
To address these problems, Hanneman and Lavie (2013) coarsened syntactic labels using the similarity of the probabilistic distributions of labels in synchronous rules and showed that performance improved.", "cite_spans": [ { "start": 197, "end": 215, "text": "(Liu et al., 2006;", "ref_id": "BIBREF11" }, { "start": 216, "end": 244, "text": "Mylonakis and Sima'an, 2011)", "ref_id": "BIBREF13" }, { "start": 263, "end": 284, "text": "(Galley et al., 2006;", "ref_id": "BIBREF5" }, { "start": 285, "end": 308, "text": "Huang and Knight, 2006)", "ref_id": "BIBREF8" }, { "start": 323, "end": 337, "text": "(Chiang, 2010;", "ref_id": "BIBREF3" }, { "start": 338, "end": 363, "text": "Hanneman and Lavie, 2013)", "ref_id": "BIBREF7" }, { "start": 408, "end": 437, "text": "Zollmann and Venugopal (2006)", "ref_id": "BIBREF22" }, { "start": 646, "end": 660, "text": "(Chiang, 2007)", "ref_id": "BIBREF2" }, { "start": 907, "end": 923, "text": "(Steedman, 2000)", "ref_id": "BIBREF17" }, { "start": 1168, "end": 1193, "text": "Hanneman and Lavie (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the present work, we follow the idea of label-set coarsening and propose a new method to group syntax labels. First, as an optimization criterion, we use the logarithm of the likelihood of synchronous rules instead of the similarity of probabilistic distributions of syntax labels. Second, we use exchange clustering (Uszkoreit and Brants, 2008) , which is faster than the agglomerative-clustering algorithm used in the previous work. 
We tested our proposed method on Japanese-English and Chinese-English translation tasks and observed gains comparable to those of previous work with similar reductions in grammar size.", "cite_spans": [ { "start": 319, "end": 347, "text": "(Uszkoreit and Brants, 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where X \u2208 N is a nonterminal, \u03b1 \u2208 (N \u222a T \u03c3 ) * is a sequence of nonterminals or source-side terminals, and \u03b2 \u2208 (N \u222a T \u03c4 ) * is a sequence of nonterminals or target-side terminals. The number #N T (\u03b1) of nonterminals in \u03b1 is equal to the number #N T (\u03b2) of nonterminals in \u03b2, and \u223c: {1, ..., #N T (\u03b1)} \u2192 {1, ..., #N T (\u03b2)} is a one-to-one mapping from nonterminals in \u03b1 to nonterminals in \u03b2. For each synchronous rule, a nonnegative real-valued weight w(X \u2192 \u27e8\u03b1, \u03b2, \u223c\u27e9) is assigned, and the sum of the weights of all rules sharing the same left-hand side in a grammar is unity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Hierarchical phrase-based SMT (Hiero) (Chiang, 2007) translates by using synchronous rules that have only two nonterminal labels X and S and no linguistic information. SAMT augments the Hiero-style rules with syntax labels from a parser and extends these labels based on CCG. 
Although the use of extended syntax labels may increase the coverage of rules and improve the potential for syntactically correct translations, the growth in the number of nonterminal symbols significantly affects the speed of decoding and causes a serious data-sparseness problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address these problems, Hanneman and Lavie (2013) proposed a label-collapsing algorithm, in which syntax labels are clustered by using the similarity of the probabilistic distributions of clustered labels in synchronous rules. First, Hanneman and Lavie defined the label-alignment distribution as", "cite_spans": [ { "start": 27, "end": 52, "text": "Hanneman and Lavie (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (s|t) = #(s, t) / #(t)", "eq_num": "(1)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "where N \u03c3 and N \u03c4 are the source- and target-side nonterminals in synchronous rules, s \u2208 N \u03c3 and t \u2208 N \u03c4 are syntax labels from the source and target sides, #(s, t) denotes the number of left-hand-side label pairs, and #(t) denotes the number of target-side labels. 
Second, for each target-side label pair (t i , t j ), we calculate the total distance d of the absolute differences in the likelihood of labels that are aligned to a source-side label s:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d(t i , t j ) = \u2211 s\u2208N\u03c3 |P (s|t i ) \u2212 P (s|t j )|", "eq_num": "(2)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "Next, the closest syntax-label pair is combined into a single new label. The agglomerative clustering is applied iteratively until the number of syntax labels reaches a given value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The clustering of Hanneman and Lavie proved successful in decreasing the grammar size and providing a statistically significant improvement in translation quality. However, their method relies on agglomerative clustering with a worst-case time complexity of O(|N | 2 log |N |). Also, clustering based on label distributions does not always imply higher-quality rules, because it does not consider the interactions of the nonterminals on the left-hand side and the right-hand side of each synchronous rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As an alternative to using the similarity of probabilistic distributions as a criterion for syntax-label clustering, we propose a clustering method based on the maximum likelihood of the synchronous rules in the training data D. We use the idea of maximizing the Bayesian posterior probability P (M |D) of the overall model structure M given data D (Stolcke and Omohundro, 1994) . 
While their goal is to maximize the posterior", "cite_spans": [ { "start": 348, "end": 377, "text": "(Stolcke and Omohundro, 1994)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Syntax-Label Clustering", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (M |D) \u221d P (M )P (D|M )", "eq_num": "(3)" } ], "section": "Syntax-Label Clustering", "sec_num": "3" }, { "text": "we omit the prior term P (M ) and directly maximize P (D|M ). A model M is a clustering structure 1 . The synchronous rule in the data D for SAMT with target-side syntax labels is represented as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-Label Clustering", "sec_num": "3" }, { "text": "X \u2192 \u27e8a 1 Y (1) a 2 Z (2) a 3 , b 1 Y (1) b 2 Z (2) b 3 \u27e9 (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-Label Clustering", "sec_num": "3" }, { "text": "where a 1 , a 2 , a 3 and b 1 , b 2 , b 3 are the source- and target-side terminals, respectively, X, Y , and Z are nonterminal syntax labels, and the superscript number indicates alignment between the source- and target-side nonterminals. 
Using Equation (4), we maximize the likelihood P (D|M ), which we define as the probability of the right-hand side given the syntax label X on the left-hand side of each rule in the training data, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-Label Clustering", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2211 X\u2192\u27e8\u03b1,\u03b2,\u223c\u27e9\u2208D log P r(\u27e8\u03b1, \u03b2, \u223c\u27e9|X)", "eq_num": "(5)" } ], "section": "Syntax-Label Clustering", "sec_num": "3" }, { "text": "For the sake of simplicity, we assume that the generative probability for each rule does not depend on the existence of terminal symbols and that the reordering in the target side may be ignored. Therefore, Equation (5) simplifies to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-Label Clustering", "sec_num": "3" }, { "text": "\u2211 X\u2192\u27e8a 1 Y (1) a 2 Z (2) a 3 ,b 1 Y (1) b 2 Z (2) b 3 \u27e9 log p(Y, Z|X) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-Label Clustering", "sec_num": "3" }, { "text": "The generative probability of each rule of the form of Equation (6) can be approximated by clustering nonterminal symbols as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(Y, Z|X) \u2248 p(Y |c(Y )) \u2022 p(Z|c(Z)) \u2022 p(c(Y ), c(Z)|c(X))", "eq_num": "(7)" } ], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "where we map a syntax label X to its equivalence cluster c(X). This can be regarded as the clustering criterion usually used in a class-based n-gram language model (Brown et al., 1992) . 
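The class-based factorization in Equation (7) can be estimated directly from counts. The following is a minimal Python sketch; the function name, argument layout, and count dictionaries are illustrative assumptions, not part of the paper:

```python
def rule_prob(Y, Z, X, cluster, N, N_joint):
    """Class-based estimate of p(Y, Z | X) as in Equation (7):
    p(Y | c(Y)) * p(Z | c(Z)) * p(c(Y), c(Z) | c(X)).

    cluster maps a syntax label to its cluster id; N holds the
    frequencies of labels and of cluster ids; N_joint[(c(X), c(Y), c(Z))]
    counts cluster triples observed in synchronous rules.
    All names here are illustrative, not from the paper.
    """
    cY, cZ, cX = cluster[Y], cluster[Z], cluster[X]
    return (N[Y] / N[cY]) * (N[Z] / N[cZ]) * (N_joint[(cX, cY, cZ)] / N[cX])
```

With toy counts, the estimate is simply the product of the three relative frequencies, exactly mirroring the factorization used in class-based n-gram models.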
If the labels on the right-hand side of a synchronous rule (4) are independent of each other, we can factor the joint model as follows:", "cite_spans": [ { "start": 164, "end": 184, "text": "(Brown et al., 1992)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "p(Y, Z|X) \u2248 p(Y |c(Y )) \u2022 p(Z|c(Z)) \u2022 p(c(Y )|c(X)) \u2022 p(c(Z)|c(X)) (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "We introduce the predictive idea of Uszkoreit and Brants (2008) into Equation (8), which conditions not on the clustered label c(X) but directly on the syntax label X:", "cite_spans": [ { "start": 36, "end": 63, "text": "Uszkoreit and Brants (2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "p(Y, Z|X) \u2248 p(Y |c(Y )) \u2022 p(Z|c(Z)) \u2022 p(c(Y )|X) \u2022 p(c(Z)|X) (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "The objective in Equation (9) is represented using the frequencies in the training data as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "N (Y )/N (c(Y )) \u2022 N (X, c(Y ))/N (X) \u2022 N (Z)/N (c(Z)) \u2022 N (X, c(Z))/N (X)", "eq_num": "(10)" } ], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "where N (X) and N (c(X)) denote the frequency 2 of X and c(X), and N (X, K) denotes the frequency of cluster K on the right-hand side of a synchronous rule whose left-hand-side syntax label is X. 
By replacing the rule probabilities in Equation (9) with Equation (10) and plugging the result into Equation (6), our objective becomes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "F (C) = \u2211 Y \u2208N N (Y ) \u2022 log (N (Y )/N (c(Y ))) + \u2211 X\u2208N ,K\u2208C N (X, K) \u2022 log (N (X, K)/N (X)) = \u2211 Y \u2208N N (Y ) \u2022 log N (Y ) \u2212 \u2211 Y \u2208N N (Y ) \u2022 log N (c(Y )) + \u2211 X\u2208N ,K\u2208C N (X, K) \u2022 log N (X, K) \u2212 \u2211 X\u2208N ,K\u2208C N (X, K) \u2022 log N (X) (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "We use a fractional count (Chiang, 2007) , which adds up to one, as a frequency. Table 1: start with the initial mapping (label X \u2192 c(X)); compute the objective function F (C); for each label X do: remove label X from c(X); for each cluster K do: move label X tentatively to cluster K and compute F (C) for this exchange; move label X to the cluster with maximum F (C); repeat until the cluster mapping does not change. In Equation (11), the last summation is equivalent to the sum of the occurrences of all syntax labels, and is canceled out by the first summation. K in the third summation considers the clusters in a synchronous rule whose left-hand-side label is X, and we let ch(X) denote the set of those clusters. The second summation equals", "cite_spans": [ { "start": 26, "end": 40, "text": "(Chiang, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2211 K\u2208C N (K) \u2022 log N (K). 
As a result, Equation (11) simplifies to F (C) = \u2211 X\u2208N ,K\u2208ch(X) N (X, K) \u2022 log N (X, K) \u2212 \u2211 K\u2208C N (K) \u2022 log N (K)", "eq_num": "(12)" } ], "section": "Optimization Criterion", "sec_num": "3.1" }, { "text": "We used an exchange clustering algorithm (Uszkoreit and Brants, 2008) , which has proven very efficient in word clustering with a vocabulary of over 1 million words. The exchange clustering for words begins with an initial clustering of words and greedily exchanges words from one cluster to another such that an optimization criterion is maximized after the move. While agglomerative clustering requires recalculating all pairwise distances between words, exchange clustering demands only computing the difference in the objective for the move of a particular word. We applied this exchange clustering to syntax-label clustering. Table 1 shows the outline. For the initial clustering, we partitioned all the syntax labels into clusters according to the frequency of syntax labels in synchronous rules. If remove and move are as computationally intensive as computing the change in F (C) in Equation (12), then the time complexity of remove and move is O(K) (Martin et al., 1998) , where K is the number of clusters. Since the remove procedure is called once for each label and, for a given label, the move procedure is called K \u2212 1 times Table 2 : Data sets: The \"sent\" column indicates the number of sentences. 
The \"src-tokens\" and \"tgt-tokens\" columns indicate the number of words in the source- and target-side sentences.", "cite_spans": [ { "start": 41, "end": 69, "text": "(Uszkoreit and Brants, 2008)", "ref_id": "BIBREF21" }, { "start": 982, "end": 1003, "text": "(Martin et al., 1998)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 659, "end": 666, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1163, "end": 1170, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Exchange Clustering", "sec_num": "3.2" }, { "text": "to find the maximum F (C), the worst-case time complexity for one iteration of the syntax-label clustering is O (|N |K 2 ) . The exchange procedure is continued until the cluster mapping is stable or the number of iterations reaches a threshold value of 100.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 117, "text": "(|N |K 2 )", "ref_id": null } ], "eq_spans": [], "section": "Exchange Clustering", "sec_num": "3.2" }, { "text": "We conducted experiments on Japanese-English (ja-en) and Chinese-English (zh-en) translation tasks. The ja-en data comes from IWSLT07 (Fordyce, 2007) in a spoken travel domain. The tuning set has seven English references and the test set has six English references. For the zh-en data, we prepared two kinds of data. One is extracted from FBIS 3 , which is a collection of news articles. The other is 1M sentences extracted randomly from the NIST Open MT 2008 task (NIST08). We use the NIST Open MT 2006 data for tuning and the MT 2003 data for testing. The tuning and test sets have four English references. Table 2 shows the details for each corpus. Each corpus is tokenized and lowercased, and sentences with over 40 tokens on either side are removed from the training data. We use KyTea (Neubig et al., 2011) to tokenize the Japanese data and the Stanford Word Segmenter (Tseng et al., 2005) to tokenize the Chinese data. 
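The exchange procedure outlined in Table 1 (Section 3.2) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it recomputes the full objective F (C) of Equation (12) for every tentative move, whereas the actual algorithm computes only the change in F (C), which is what yields the O(|N |K 2 ) per-iteration complexity. All names and the initialization scheme are illustrative:

```python
import math
from collections import defaultdict

def exchange_cluster(pair_counts, num_clusters, max_iters=100):
    """Greedy exchange clustering of syntax labels (a sketch of Table 1).

    pair_counts[(X, Y)] is the frequency of label Y on the right-hand
    side of synchronous rules whose left-hand-side label is X.
    Maximizes F(C) = sum_{X,K} N(X,K) log N(X,K) - sum_K N(K) log N(K).
    """
    labels = sorted({l for pair in pair_counts for l in pair})
    # Initial mapping: distribute labels over clusters by frequency rank.
    freq = defaultdict(int)
    for (_, y), n in pair_counts.items():
        freq[y] += n
    cluster = {l: i % num_clusters
               for i, l in enumerate(sorted(labels, key=lambda l: -freq[l]))}

    def objective():
        nxk = defaultdict(float)  # N(X, K)
        nk = defaultdict(float)   # N(K)
        for (x, y), n in pair_counts.items():
            nxk[(x, cluster[y])] += n
            nk[cluster[y]] += n
        return (sum(n * math.log(n) for n in nxk.values())
                - sum(n * math.log(n) for n in nk.values()))

    for _ in range(max_iters):
        changed = False
        for l in labels:
            orig_k, best_k, best_f = cluster[l], cluster[l], None
            for k in range(num_clusters):
                cluster[l] = k  # tentative move
                f = objective()
                if best_f is None or f > best_f:
                    best_f, best_k = f, k
            cluster[l] = best_k  # keep the cluster with maximum F(C)
            changed |= best_k != orig_k
        if not changed:  # cluster mapping is stable
            break
    return cluster
```

On a toy grammar, labels that occur in identical rule contexts end up in the same cluster, while labels with different contexts are kept apart.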
We parse the English data with the Berkeley parser (Petrov and Klein, 2007) .", "cite_spans": [ { "start": 134, "end": 149, "text": "(Fordyce, 2007)", "ref_id": "BIBREF4" }, { "start": 780, "end": 801, "text": "(Neubig et al., 2011)", "ref_id": "BIBREF14" }, { "start": 860, "end": 880, "text": "(Tseng et al., 2005)", "ref_id": "BIBREF20" }, { "start": 962, "end": 986, "text": "(Petrov and Klein, 2007)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 594, "end": 601, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments 4.1 Data", "sec_num": "4" }, { "text": "We ran experiments with the SAMT model (Zollmann and Venugopal, 2006) implemented in Moses (Koehn et al., 2007) . For the SAMT model, we conducted experiments with two label sets. One is extracted from the phrase-structure parses and the other is extended with CCG 4 . We applied the proposed method (+clustering) and the baseline method (+coarsening), which uses the Hanneman and Lavie label-collapsing algorithm described in Section 2, to the SAMT models with CCG for syntax-label clustering. (Table 4 : SAMT grammars in the zh-en experiments.) The number of clusters for each clustering was set to 80. The language models were built using the SRILM toolkit (Stolcke, 2002) . The language model for the IWSLT07 is a 5-gram model trained on the training data, and the language model for the FBIS and NIST08 is a 5-gram model trained on the Xinhua portion of English GigaWord. For word alignments, we used MGIZA++ (Gao and Vogel, 2008) . 
To tune the weights for BLEU (Papineni et al., 2002) , we used n-best batch MIRA (Cherry and Foster, 2012) .", "cite_spans": [ { "start": 33, "end": 63, "text": "(Zollmann and Venugopal, 2006)", "ref_id": "BIBREF22" }, { "start": 85, "end": 105, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF10" }, { "start": 634, "end": 649, "text": "(Stolcke, 2002)", "ref_id": "BIBREF19" }, { "start": 889, "end": 910, "text": "(Gao and Vogel, 2008)", "ref_id": "BIBREF6" }, { "start": 942, "end": 965, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF15" }, { "start": 998, "end": 1023, "text": "(Cherry and Foster, 2012)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 371, "end": 378, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiment design", "sec_num": "4.2" }, { "text": "Tables 3 and 4 present the details of the SAMT grammars with each label set learned in the experiments using the IWSLT07 (ja-en), FBIS and NIST08 (zh-en) data, including the number of syntax labels and synchronous rules, the value of the objective (F (C)), and the standard deviation (SD) of the number of labels assigned to each cluster. For NIST08 we applied only +clustering because +coarsening needs a huge amount of computation time. Table 5 shows the BLEU score and the number of rules for each number of clusters when using the IWSLT07 dataset.", "cite_spans": [], "ref_spans": [ { "start": 445, "end": 452, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results and analysis", "sec_num": "5" }, { "text": "Since +clustering maximizes the likelihood of synchronous rules, it can introduce appropriate rules adapted to the training data given a fixed number of clusters. For each experiment, the SAMT grammars with +clustering have a greater number of rules than those with +coarsening and, as shown in Table 5 , the number of synchronous rules with +clustering increases with the number of clusters. 
For +clustering with eight clusters and +coarsening with 80 clusters, which both have almost 2.4M rules, the BLEU score of +clustering with eight clusters is higher. Also, the SD of the number of labels, which indicates how evenly labels are balanced among clusters, is smaller with +clustering than with +coarsening. These results suggest that +clustering maintains a large variety of synchronous rules for high performance by balancing the number of labels in each cluster.", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 300, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results and analysis", "sec_num": "5" }, { "text": "The number of synchronous rules grows as one progresses from +coarsening to +clustering and finally to raw labels with CCG. To confirm the effect of the number of rules, we measured the decoding time per sentence for translating the test set by taking the average of ten runs on the FBIS corpus. +coarsening takes 0.14 s and +clustering takes 0.16 s, while raw labels with CCG take 0.37 s. Thus the increase in the number of synchronous rules adversely affects the decoding speed. Table 6 presents the results of the experiments 5 using ja-en and zh-en with the BLEU metric. SAMT with parse labels has the lowest BLEU scores. It appears that the linguistic information of the raw syntax labels of the phrase-structure parses is not enough to improve the translation performance. Hiero has a higher BLEU score than SAMT with CCG on zh-en. This is likely due to the low accuracy of the parses, on which SAMT relies while Hiero does not. SAMT with +clustering has a higher BLEU score than raw labels with CCG. For SAMT with CCG using IWSLT07 and FBIS, though the differences were not statistically significant at p < 0.05, +clustering has higher BLEU scores than +coarsening. From these results, the performance of +clustering is comparable to that of +coarsening. 
Regarding the complexity of the two clustering algorithms, though it is difficult to evaluate directly because the speed 5 As another baseline, we also used Phrase-based SMT (Koehn et al., 2003) and Hiero (Chiang, 2007 ", "cite_spans": [ { "start": 1376, "end": 1377, "text": "5", "ref_id": null }, { "start": 1429, "end": 1449, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF9" }, { "start": 1460, "end": 1473, "text": "(Chiang, 2007", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 474, "end": 481, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results and analysis", "sec_num": "5" }, { "text": "In this paper, we propose syntax-label clustering for SAMT, which uses syntax-label information to generate syntactically correct translations. One of the problems of SAMT is the large grammar size when a CCG-style extended label set is used in the grammar, which makes decoding slower. We cluster syntax labels with a very fast exchange algorithm in which the generative probabilities of synchronous rules are maximized. We demonstrate the effectiveness of the proposed method by applying it to Japanese-English and Chinese-English translation tasks and measuring the decoding speed, the translation accuracy, and the clustering speed. Future work involves improving the optimization criterion. We expect to build a new objective that includes the terminal symbols and the reordering of nonterminal symbols, which were ignored in this work. 
Another interesting direction is to determine the appropriate number of clusters for each corpus and the initialization method for clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "P (M ) is reflected by the number of clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their suggestions and helpful comments on an early version of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Class-based n-gram models of natural language", "authors": [ { "first": "Peter", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "Peter", "middle": [ "V" ], "last": "Desouza", "suffix": "" }, { "first": "Jenifer", "middle": [ "C" ], "last": "Lai", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. 
Computational Linguistics, 18(4):467-479.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Batch tuning strategies for statistical machine translation", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "427--436", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Cherry and George Foster. 2012. Batch tuning strategies for statistical machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 427-436, Montr\u00e9al, Canada, June. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "201--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, pages 201-228, June.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning to translate with source and target syntax", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1443--1452", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2010. Learning to translate with source and target syntax. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1443-1452, Uppsala, Sweden, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Overview of the 4th international workshop on spoken language translation iwslt 2007 evaluation campaign", "authors": [ { "first": "Cameron", "middle": [], "last": "Shaw Fordyce", "suffix": "" } ], "year": 2007, "venue": "Proceedings of IWSLT 2007", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cameron Shaw Fordyce. 2007. Overview of the 4th international workshop on spoken language translation IWSLT 2007 evaluation campaign. In Proceedings of IWSLT 2007, pages 1-12, Trento, Italy, October.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Scalable inference and training of context-rich syntactic translation models", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ignacio", "middle": [], "last": "Thayer", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "961--968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 961-968, Sydney, Australia, July. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Parallel implementations of word alignment tool", "authors": [ { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2008, "venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing", "volume": "", "issue": "", "pages": "49--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Gao and Stephan Vogel. 2008. Parallel implemen- tations of word alignment tool. In Software Engi- neering, Testing, and Quality Assurance for Natu- ral Language Processing, pages 49-57, Columbus, Ohio, June. Association for Computational Linguis- tics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving syntax-augmented machine translation by coarsening the label set", "authors": [ { "first": "Greg", "middle": [], "last": "Hanneman", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "288--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Hanneman and Alon Lavie. 2013. Improving syntax-augmented machine translation by coarsen- ing the label set. In Proceedings of the 2013 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 288-297, Atlanta, Geor- gia, June. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Relabeling syntax trees to improve syntax-based machine translation quality", "authors": [ { "first": "Bryant", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", "volume": "", "issue": "", "pages": "240--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryant Huang and Kevin Knight. 2006. Relabeling syntax trees to improve syntax-based machine trans- lation quality. In Proceedings of the Human Lan- guage Technology Conference of the NAACL, Main Conference, pages 240-247, New York City, USA, June. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Statistical phrase-based translation", "authors": [ { "first": "Phillip", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phillip Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. 
In In Proceedings of HLT-NAACL, pages 48-54, Edmon- ton, Canada, May/July.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexan- dra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine transla- tion. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Com- panion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Treeto-string alignment template for statistical machine translation", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shouxun", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "609--616", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree- to-string alignment template for statistical machine translation. In Proceedings of the 21st Interna- tional Conference on Computational Linguistics and 44th Annual Meeting of the Association for Compu- tational Linguistics, pages 609-616, Sydney, Aus- tralia, July. Association for Computational Linguis- tics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Algorithms for bigram and trigram word clustering", "authors": [ { "first": "Sven", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Jorg", "middle": [], "last": "Liermann", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1998, "venue": "Speech Communication", "volume": "", "issue": "", "pages": "19--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sven Martin, Jorg Liermann, and Hermann Ney. 1998. Algorithms for bigram and trigram word clustering. 
In Speech Communication, pages 19-37.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning hierarchical translation structure with linguistic annotations", "authors": [ { "first": "Markos", "middle": [], "last": "Mylonakis", "suffix": "" }, { "first": "Khalil", "middle": [], "last": "Sima'an", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "642--652", "other_ids": {}, "num": null, "urls": [], "raw_text": "Markos Mylonakis and Khalil Sima'an. 2011. Learn- ing hierarchical translation structure with linguis- tic annotations. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 642-652, Portland, Oregon, USA, June. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Pointwise prediction for robust, adaptable japanese morphological analysis", "authors": [ { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Yosuke", "middle": [], "last": "Nakata", "suffix": "" }, { "first": "Shinsuke", "middle": [], "last": "Mori", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "529--533", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable japanese morphological analysis. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 529-533, Portland, Oregon, USA, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July. Association for Computa- tional Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improved inference for unlexicalized parsing", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov and Dan Klein. 2007. Improved infer- ence for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computa- tional Linguistics; Proceedings of the Main Confer- ence, pages 404-411, Rochester, New York, April. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The syntactic process", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steedman. 2000. The syntactic process, vol- ume 27. MIT Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Inducing probabilistic grammars by bayesian model merging", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Omohundro", "suffix": "" } ], "year": 1994, "venue": "Grammatical Inference and Applications (ICGI-94)", "volume": "", "issue": "", "pages": "106--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke and Stephen Omohundro. 1994. In- ducing probabilistic grammars by bayesian model merging. In R. C. Carrasco and J. Oncina, editors, Grammatical Inference and Applications (ICGI-94), pages 106-118. Berlin, Heidelberg.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Srilm an extensible language modeling toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Seventh International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke. 2002. Srilm an extensible language modeling toolkit. 
In In Proceedings of the Seventh International Conference on Spoken Language Pro- cessing, pages 901-904.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A conditional random field word segmenter", "authors": [ { "first": "Huihsin", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Pichuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Fourth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "168--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A con- ditional random field word segmenter. In Fourth SIGHAN Workshop on Chinese Language Process- ing, pages 168-171. Jeju Island, Korea.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Distributed word clustering for large scale class-based language modeling in machine translation", "authors": [ { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "755--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jakob Uszkoreit and Thorsten Brants. 2008. Dis- tributed word clustering for large scale class-based language modeling in machine translation. In Pro- ceedings of ACL-08: HLT, pages 755-762, Colum- bus, Ohio, June. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Syntax augmented machine translation via chart parsing", "authors": [ { "first": "Andreas", "middle": [], "last": "Zollmann", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Venugopal", "suffix": "" } ], "year": 2006, "venue": "Proceedings on the Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "138--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Zollmann and Ashish Venugopal. 2006. Syn- tax augmented machine translation via chart parsing. In Proceedings on the Workshop on Statistical Ma- chine Translation, pages 138-141, New York City, June. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "text": "Outline of syntax-label clustering method where C denotes all clusters and N denotes all syntax labels. For Equation", "num": null, "type_str": "table", "content": "", "html": null }, "TABREF3": { "text": "SAMT grammars on ja-en experiments", "num": null, "type_str": "table", "content": "
Label set | Label | Rule | F(C) | SD
FBIS
parse | 70 | 2.1 M | - | -
CCG | 5,460 | 60 M | - | -
+ coarsening | 80 | 32 M | -1.5e+10 | 526
+ clustering | 80 | 38 M | -7.9e+09 | 154
NIST08
parse | 70 | 12 M | - | -
CCG | 7,328 | 120 M | - | -
+ clustering | 80 | 100 M | -2.6e+10 | 218
", "html": null }, "TABREF5": { "text": "BLEU score and rule number for each cluster number using IWSLT07", "num": null, "type_str": "table", "content": "
ja-en | zh-en
Model | parse | CCG | parse | CCG | parse | CCG
SAMT | 42.58 | 48.77 | 23.66 | 26.97 | 24.67 | 27.28
+coarsening | - | 49.54 | - | 27.12 | - | -
+clustering | - | 50.21 | - | 27.47 | - | 27.29
Hiero | 48.91 | 28.31 | 27.62
PB-SMT | 49.14 | 26.88 | 26.71
", "html": null }, "TABREF6": { "text": "BLEU scores on each experiments depends on how each algorithm is implemented, +clustering is an order of magnitude faster than +coarsening. For the clustering experiment that groups 5460 raw labels with CCG into 80 clusters using FBIS corpus, +coarsening takes about 1 week whereas +clustering takes about 10 minutes.", "num": null, "type_str": "table", "content": "", "html": null } } } }