{ "paper_id": "C04-1017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:20:14.486708Z" }, "title": "Splitting Input Sentence for Machine Translation Using Language Model with Sentence Similarity", "authors": [ { "first": "Takao", "middle": [], "last": "Doi", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Kansai Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "Japan" } }, "email": "takao.doi@atr.jp" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Kansai Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "Japan" } }, "email": "eiichiro.sumita@atr.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In order to boost the translation quality of corpus-based MT systems for speech translation, the technique of splitting an input sentence appears promising. In previous research, many methods used N-gram clues to split sentences. In this paper, to supplement N-gram based splitting methods, we introduce another clue using sentence similarity based on edit-distance. In our splitting method, we generate candidates for sentence splitting based on N-grams, and select the best one by measuring sentence similarity. We conducted experiments using two EBMT systems, one of which uses a phrase and the other of which uses a sentence as a translation unit. The translation results on various conditions were evaluated by objective measures and a subjective measure. 
The experimental results show that the proposed method is valuable for both systems.", "pdf_parse": { "paper_id": "C04-1017", "_pdf_hash": "", "abstract": [ { "text": "In order to boost the translation quality of corpus-based MT systems for speech translation, the technique of splitting an input sentence appears promising. In previous research, many methods used N-gram clues to split sentences. In this paper, to supplement N-gram based splitting methods, we introduce another clue using sentence similarity based on edit-distance. In our splitting method, we generate candidates for sentence splitting based on N-grams, and select the best one by measuring sentence similarity. We conducted experiments using two EBMT systems, one of which uses a phrase and the other of which uses a sentence as a translation unit. The translation results on various conditions were evaluated by objective measures and a subjective measure. The experimental results show that the proposed method is valuable for both systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We are exploring methods to boost the translation quality of corpus-based Machine Translation (MT) systems for speech translation. Among them, the technique of splitting an input sentence and translating the split sentences appears promising (Doi and Sumita, 2003 ).", "cite_spans": [ { "start": 242, "end": 263, "text": "(Doi and Sumita, 2003", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An MT system sometimes fails to translate an input correctly. Such a failure occurs particularly when an input is long. In such a case, by splitting the input, translation may be successfully performed for each portion. Particularly in a dialogue, sentences tend not to have complicated nested structures, and many long sentences can be split into mutually independent portions. 
Therefore, if the splitting positions and the translations of the split portions are adequate, the possibility that the arrangement of the translations can provide an adequate translation of the complete input is relatively high. For example, the input sentence, \"This is a medium size jacket I think it's a good size for you try it on please\" 1 can be split into three portions, \"This is a medium size jacket\", \"I think it's a good size for you\" and \"try it on please\". In this case, translating the three portions and arranging the results in the same order gives us the translation of the input sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In previous research on splitting sentences, many methods have been based on word-sequence characteristics like N-gram (Lavie et al., 1996; Berger et al., 1996; Nakajima and Yamamoto, 2001; Gupta et al., 2002) . Some research efforts have achieved high performance in recall and precision against correct splitting positions. Despite such a high performance, from the viewpoint of translation, MT systems are not always able to translate the split sentences well.", "cite_spans": [ { "start": 119, "end": 139, "text": "(Lavie et al., 1996;", "ref_id": "BIBREF9" }, { "start": 140, "end": 160, "text": "Berger et al., 1996;", "ref_id": "BIBREF0" }, { "start": 161, "end": 189, "text": "Nakajima and Yamamoto, 2001;", "ref_id": "BIBREF10" }, { "start": 190, "end": 209, "text": "Gupta et al., 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to supplement sentence splitting based on word-sequence characteristics, this paper introduces another measure of sentence similarity. In our splitting method, we generate candidates for splitting positions based on N-grams, and select the best combination of positions by measuring sentence similarity. 
This selection is based on the assumption that a corpus-based MT system can correctly translate a sentence that is similar to a sentence in its training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The following sections describe the proposed splitting method, present experiments using two Example-Based Machine Translation (EBMT) systems, and evaluate the effect of introducing the similarity measure on translation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We define the term sentence-splitting as the result of splitting a sentence. A sentence-splitting is expressed as a list of sub-sentences that are portions of the sentence; it includes one or several portions. We use an N-gram Language Model (NLM) to generate sentence-splitting candidates, and we use the NLM and sentence similarity to select one of the candidates. The configuration of the method is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 411, "end": 419, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Splitting Method", "sec_num": "2" }, { "text": "The probability of a sentence can be calculated by an NLM of a corpus. 
The probability of a sentence-splitting, P rob, is defined as the product of the probabilities of the sub-sentences in equation (1), where P is the probability of a sentence based on an NLM, S is a sentence-splitting, that is, a list of sub-sentences that are portions of a sentence, and P is applied to the sub-sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Based on N-gram Language Model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P rob(S) = \\prod_{s \\in S} P (s)", "eq_num": "(1)" } ], "section": "Probability Based on N-gram Language Model", "sec_num": "2.1" }, { "text": "To judge whether a sentence is split at a position, we compare the probabilities of the sentence-splittings before and after splitting. When calculating the probability of a sentence including a sub-sentence, we put pseudo words at the head and tail of the sentence to evaluate the probabilities of the head word and the tail word. For example, the probability of the sentence, \"This is a medium size jacket\" based on a trigram language model is calculated as follows. Here, p(z | x y) indicates the probability that z occurs after the sequence x y, and SOS and EOS indicate the pseudo words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Based on N-gram Language Model", "sec_num": "2.1" }, { "text": "= p(this | SOS SOS) \u00d7 p(is | SOS this) \u00d7 p(a | this is) \u00d7 ... p(jacket | medium size) \u00d7 p(EOS | size jacket) \u00d7 p(EOS | jacket EOS)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P (this is a medium size jacket)", "sec_num": null }, { "text": "This causes a tendency for the probability of the sentence-splitting after adding a splitting position to be lower than that of the sentence-splitting before adding the splitting position. 
Therefore, when we find a position that makes the probability higher, it is plausible that the position divides the sentence into sub-sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P (this is a medium size jacket)", "sec_num": null }, { "text": "An NLM suggests where we should split a sentence by using the local clue of several words around the splitting position. To supplement it with a wider view, we introduce another clue based on similarity to the sentences of the parallel corpus from which translation knowledge is automatically acquired. It is reasonably expected that MT systems can correctly translate a sentence that is similar to a sentence in the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity", "sec_num": "2.2" }, { "text": "Here, the similarity between two sentences is defined using the edit-distance between word sequences. The edit-distance used here is extended to consider a semantic factor. The edit-distance is normalized between 0 and 1, and the similarity is 1 minus the edit-distance. The definition of the similarity is given in equation (2). In this equation, L is the word count of the corresponding sentence. I and D are the counts of insertions and deletions respectively. Substitutions are permitted only between content words of the same part of speech. Substitution is considered as the semantic distance between two substituted words, described as Sem, which is defined using a thesaurus and ranges from 0 to 1. 
Sem is the division of K (the level of the least common abstraction in the thesaurus of two words) by N (the height of the thesaurus) according to equation (3) (Sumita and Iida, 1991) .", "cite_spans": [ { "start": 867, "end": 890, "text": "(Sumita and Iida, 1991)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sim_0(s_1, s_2) = 1 \u2212 \\frac{I + D + 2 \\cdot Sem}{L_{s_1} + L_{s_2}} \\quad (2) \\qquad Sem = \\frac{K}{N}", "eq_num": "(3)" } ], "section": "Sentence Similarity", "sec_num": "2.2" }, { "text": "Using Sim_0, the similarity of a sentence-splitting to a corpus is defined as Sim in equation (4). In this equation, S is a sentence-splitting and C is a given corpus that is a set of sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity", "sec_num": "2.2" }, { "text": "Sim is the mean similarity of the sub-sentences against the corpus, weighted by the length of each sub-sentence. The similarity of a sentence (including a sub-sentence) to a corpus is the greatest similarity between the sentence and any sentence in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity", "sec_num": "2.2" }, { "text": "Sim(S) = \\frac{\\sum_{s \\in S} L_s \\cdot \\max\\{Sim_0(s, c) \\mid c \\in C\\}}{\\sum_{s \\in S} L_s} \\quad (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity", "sec_num": "2.2" }, { "text": "2.3 Generating Sentence-Splitting Candidates To calculate Sim is similar to retrieving the most similar sentence from a corpus. The retrieval procedure can be efficiently implemented by the techniques of clustering (Cranias et al., 1997) or using A* search algorithm on word graphs (Doi et al., 2004) . However, calculating Sim is still more costly than calculating P rob when the corpus is large. 
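As a concrete illustration, the similarity of equations (2) and (3) can be sketched in a few lines of code. This is a minimal sketch under simplifying assumptions: words are compared at the surface level, every substitution is given the fixed semantic distance Sem = 0.5 (the thesaurus-free setting also examined later in the paper), and the function name sim0 is ours, not the paper's.

```python
# Word-level edit distance with substitution cost 2 * Sem, normalized
# by the summed sentence lengths as in equation (2); Sem is fixed at
# 0.5 here instead of the thesaurus-based K / N of equation (3).
def sim0(s1, s2, sem=0.5):
    n, m = len(s1), len(s2)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = float(i)          # i deletions
    for j in range(m + 1):
        d[0][j] = float(j)          # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if s1[i - 1] == s2[j - 1] else 2.0 * sem
            d[i][j] = min(d[i - 1][j] + 1.0,      # deletion
                          d[i][j - 1] + 1.0,      # insertion
                          d[i - 1][j - 1] + sub)  # match or substitution
    return 1.0 - d[n][m] / (n + m)  # similarity = 1 - normalized distance
```

For example, sim0 of two identical word lists is 1.0, and each unmatched word lowers the score in proportion to the combined sentence lengths.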
Therefore, in the splitting method, we first generate sentence-splitting candidates by P rob alone. In the generating process, for a given sentence, the sentence itself is a candidate. For each sentence-splitting into two portions whose P rob does not decrease, the generating process is recursively executed with one of the two portions and then with the other. The results of recursive execution are combined into candidates for the given sentence. Through this process, sentence-splittings whose P rob is lower than that of the original sentence are filtered out.", "cite_spans": [ { "start": 215, "end": 237, "text": "(Cranias et al., 1997)", "ref_id": "BIBREF1" }, { "start": 282, "end": 300, "text": "(Doi et al., 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity", "sec_num": "2.2" }, { "text": "Next, among the candidates, we select the one with the highest score using not only P rob but also Sim. We use the product of P rob and Sim as the measure for selecting a sentence-splitting. The measure is defined as Score in equation (5), where \u03bb, ranging from 0 to 1, gives the weight of Sim. In particular, the method uses only P rob when \u03bb is 0, and the method generates candidates by P rob and selects a candidate by Sim alone when \u03bb is 1. 
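The generate-and-select procedure described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: the NLM probability and the corpus similarity are assumed to be supplied by the caller as functions (named prob and sim here, our own names), and only the final selection by Score of equation (5) is shown.

```python
# Select the sentence-splitting candidate maximizing
# Score = Prob ** (1 - lam) * Sim ** lam, as in equation (5).
# Each candidate is a list of sub-sentence strings; prob(s) returns the
# NLM probability of one sub-sentence, sim(splitting) the corpus
# similarity of a whole candidate.
def select_splitting(candidates, prob, sim, lam=0.5):
    def score(splitting):
        p = 1.0
        for s in splitting:   # equation (1): product over sub-sentences
            p *= prob(s)
        return p ** (1.0 - lam) * sim(splitting) ** lam
    return max(candidates, key=score)
```

With lam = 0 this reduces to selection by the language model alone, and with lam = 1 candidates generated by P rob are re-ranked purely by similarity, matching the weighting scheme described in the text.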
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting the Best Sentence-Splitting", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Score = P rob 1\u2212\u03bb \u2022 Sim \u03bb", "eq_num": "(5" } ], "section": "Selecting the Best Sentence-Splitting", "sec_num": "2.4" }, { "text": "We evaluated the splitting method through experiments, whose conditions are as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Conditions", "sec_num": "3" }, { "text": "We investigated the splitting method using MT systems in English-to-Japanese translation, to determine what effect the method had on translation. We used two different EBMT systems as test beds. One of the systems was Hierarchical Phrase Alignment-based Translator (HPAT) (Imamura, 2002), whose unit of translation expression is a phrase. HPAT translates an input sentence by combining phrases. The HPAT system is equipped with another sentence splitting method based on parsing trees (Furuse et al., 1998 ). The other system was DP-match Driven transDucer (D 3 ) (Sumita, 2001) , whose unit of expression is a sentence. For both systems, translation knowledge is automatically acquired from a parallel corpus.", "cite_spans": [ { "start": 485, "end": 505, "text": "(Furuse et al., 1998", "ref_id": "BIBREF5" }, { "start": 564, "end": 578, "text": "(Sumita, 2001)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "MT Systems", "sec_num": "3.1" }, { "text": "We used Japanese-and-English parallel corpora, i.e., a Basic Travel Expression Corpus (BTEC) and a bilingual travel conversation corpus of Spoken Language (SLDB) for training, and English sentences in Machine-Translation-Aided bilingual Dialogues (MAD) for a test set . 
BTEC is a collection of Japanese sentences and their English translations usually found in phrase-books for foreign tourists. The contents of SLDB are transcriptions of spoken dialogues between Japanese and English speakers through human interpreters. The Japanese and English parts of the corpora correspond to each other sentence-to-sentence. The dialogues of MAD took place between Japanese and English speakers through human typists and an experimental MT system. It has been shown that BTEC and SLDB are both required for handling MAD-type tasks. Therefore, in order to translate test sentences in MAD, we merged the parallel corpora, 152,170 sentence pairs of BTEC and 72,365 of SLDB, into a training corpus for HPAT and D 3 . The English part of the training corpus was also used to make an NLM and to calculate similarities for the sentence-splitting method. The statistics of the training corpus are shown in Table 1 . The perplexity in the table is word trigram perplexity.", "cite_spans": [], "ref_spans": [ { "start": 1188, "end": 1195, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Linguistic Resources", "sec_num": "3.2" }, { "text": "The test set of this experiment was 505 English sentences uttered by human speakers in MAD, including no sentences generated by the MT system. The average length of the sentences in the test set was 9.52 words per sentence. The word trigram perplexity of the test set against the training corpus was 63.66.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Resources", "sec_num": "3.2" }, { "text": "We also used a thesaurus whose hierarchies are based on the Kadokawa Ruigo-shin-jiten (Ohno and Hamanishi, 1984) ", "cite_spans": [ { "start": 86, "end": 112, "text": "(Ohno and Hamanishi, 1984)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Resources", "sec_num": "3.2" }, { "text": "For the splitting method, the NLM was the word trigram model using Good-Turing discounting. 
The number of split portions was limited to 4 per sentence. The weight of Sim, \u03bb, in equation (5) was assigned one of 5 values: 0, 1/2, 2/3, 3/4 or 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Instantiation of the Method", "sec_num": "3.3" }, { "text": "We compared translation quality under the conditions with and without splitting. To evaluate translation quality, we used objective measures and a subjective measure as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.4" }, { "text": "The objective measures used were the BLEU score (Papineni et al., 2001) , the NIST score (Doddington, 2002) and Multi-reference Word Error Rate (mWER) (Ueffing et al., 2002) . They were calculated with the test set. Both BLEU and NIST compare the system output translation with a set of reference translations of the same source text by finding sequences of words in the reference translations that match those in the system output translation. Therefore, achieving higher scores by these measures means that the translation results can be regarded as being more adequate translations. mWER indicates the error rate based on the edit-distance between the system output and the reference translations. Therefore, achieving a lower score by mWER means that the translation results can be regarded as more adequate translations. The number of references was 15 for the three measures.", "cite_spans": [ { "start": 48, "end": 71, "text": "(Papineni et al., 2001)", "ref_id": "BIBREF12" }, { "start": 89, "end": 107, "text": "(Doddington, 2002)", "ref_id": "BIBREF2" }, { "start": 151, "end": 173, "text": "(Ueffing et al., 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.4" }, { "text": "In the subjective measure (SM), the translation results of the test set under two different conditions were evaluated by paired comparison. 
Sentence-by-sentence, a Japanese native speaker with a sufficient command of English judged which result was better or whether they were of the same quality. SM was calculated relative to a baseline. As in equation (6), the measure was the gain per sentence, where the gain was the number of wins minus the number of defeats, as judged by the human evaluator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.4" }, { "text": "(# of wins \u2212 # of defeats) / # of test sentences (6) 4 Effect of Splitting for MT Table 2 shows evaluations of the translation results of two MT systems, HPAT and D 3 , under six conditions. In 'original', the input sentences of the systems were the test set itself without any splitting. In the other conditions, the test set sentences were split using P rob into sentence-splitting candidates, and a sentence-splitting per input sentence was selected with Score. The weights of P rob and Sim in the definition of Score in equation (5) were varied from P rob only to Sim only. The baseline of SM was the original. The number of input sentences that had multiple candidates generated with P rob was 237, where the average and the maximum number of candidates were respectively 5.07 and 64. The average length of the 237 sentences was 12.79 words per sentence. The word trigram perplexity of the set of the 237 sentences against the training corpus was 73.87. The table shows certain tendencies. The differences in the evaluation scores between the original and the cases with splitting are significant for both systems and especially for D 3 . Although the differences among the cases with splitting are not so significant, SM steadily increases when using Sim compared to using only P rob, by 3.2% for HPAT and by 2.4% for D 3 . Among objective measures, the NIST score corresponds well to SM. 
Table 3 allows us to focus on the effect of Sim in the sentence-splitting selection. The table shows the evaluations on the 237 sentences of the test set for which selection was required. In this table, the number of changes is the number of cases where a candidate other than the best candidate using P rob was selected. The table also shows the average and maximum P rob ranking of candidates which were not the best using P rob but were selected as the best using Score. The 'IDEAL' condition selects the candidate whose translation achieves the best mWER among all candidates. In IDEAL, the selections differ between the MT systems. The two values of the number of changes are for HPAT and for D 3 . The baseline of SM was the condition of using only P rob.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1442, "end": 1449, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "SM =", "sec_num": null }, { "text": "4.1 Translation Quality", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "4.1" }, { "text": "From the table, we can extract certain tendencies. The number of changes is very small when using both P rob and Sim in the experiment. In these cases, the procedure selects the best or the second-best candidates in the measure of P rob. Although the change is small when the weights of P rob and Sim are equal, SM shows that most of the changed translations become better, some remain even and none become worse. The heavier the weight of Sim is, the higher the SM score is. The NIST score also increases, especially for D 3 , when the weight of Sim increases. The IDEAL condition outperforms most of the other conditions, as expected, except that the SM score and the NIST score of D 3 are worse than those in the condition using only Sim. For D 3 , the sentence-splitting selection with Sim is a match for the ideal selection. 
So far, we have observed that SM and NIST tend to correspond to each other, although SM and BLEU or SM and mWER do not. The NIST score uses information weights when comparing the result of an MT system and reference translations. We can infer that the translation of a sentence-splitting, which was judged as being superior to another by the human evaluator, is more informative than the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Selection Using Similarity", "sec_num": "4.2" }, { "text": "Furthermore, we conducted an experiment without using a thesaurus in calculating Sim. In the definition of Sim, all semantic distances Sem were assumed to be equal to 0.5. Table 4 shows evaluations on the 237 sentences.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 179, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Effect of Using Thesaurus", "sec_num": "4.3" }, { "text": "Compared to Table 3 , the SM score is worse when the weight of Sim in Score is small, and better when the weight of Sim is large. However, the difference between the conditions of using or not using a thesaurus is not so significant.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Effect of Using Thesaurus", "sec_num": "4.3" }, { "text": "In order to boost the translation quality of corpus-based MT systems for speech translation, the technique of splitting an input sentence appears promising. In previous research, many methods used N-gram clues to split sentences. 
To supplement N-gram based splitting methods, we introduce another clue using sentence similarity based on edit-distance. In our splitting method, we generate sentence-splitting candidates based on N-grams, and select the best one by the measure of sentence similarity. The experimental results show that the method is valuable for two kinds of EBMT systems, one of which uses a phrase and the other of which uses a sentence as a translation unit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding Remarks", "sec_num": "5" }, { "text": "Although we used English-to-Japanese translation in the experiments, the method does not depend on any particular language. It can be applied to multilingual translation. Because the semantic distance used in the similarity definition did not show any significant effect, we need to find another factor to enhance the similarity measure. Furthermore, as future work, we would like to make the splitting method cooperate with sentence simplification methods like (Siddharthan, 2002) in order to further boost the translation quality.", "cite_spans": [ { "start": 462, "end": 481, "text": "(Siddharthan, 2002)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Concluding Remarks", "sec_num": "5" }, { "text": "Punctuation marks are not used in translation input in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors' heartfelt thanks go to Kadokawa-Shoten for providing the Ruigo-Shin-Jiten. 
The research reported here was supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled, \"A study of speech dialogue translation technology based on a large corpus\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "A", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "1--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.L. Berger, S. A. Della Pietra, and V. J. Della Pietra. 1996. A maximum entropy ap- proach to natural language processing. Compu- tational Linguistics, 22(1):1-36.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Example retrieval from a translation memory", "authors": [ { "first": "L", "middle": [], "last": "Cranias", "suffix": "" }, { "first": "H", "middle": [], "last": "Papageorgiou", "suffix": "" }, { "first": "S", "middle": [], "last": "Piperidis", "suffix": "" } ], "year": 1997, "venue": "Natural Language Engineering", "volume": "3", "issue": "4", "pages": "255--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Cranias, H. Papageorgiou, and S. Piperidis. 1997. Example retrieval from a transla- tion memory. Natural Language Engineering, 3(4):255-277.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic evaluation of machine translation quality using n-gram cooccurrence statistics", "authors": [ { "first": "G", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proc. 
of the HLT 2002 Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. Proc. of the HLT 2002 Conference.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Input sentence splitting and translating", "authors": [ { "first": "T", "middle": [], "last": "Doi", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2003, "venue": "Proc. of Workshop on Building and Using Parallel Texts, HLT-NAACL 2003", "volume": "", "issue": "", "pages": "104--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Doi and E. Sumita. 2003. Input sentence split- ting and translating. Proc. of Workshop on Building and Using Parallel Texts, HLT-NAACL 2003, pages 104-110.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Efficient retrieval method and performance evaluation of example-based machine translation using edit-distance", "authors": [ { "first": "T", "middle": [], "last": "Doi", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "H", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2004, "venue": "Transactions of IPSJ", "volume": "", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Doi, E. Sumita, and H. Yamamoto. 2004. Ef- ficient retrieval method and performance evalu- ation of example-based machine translation us- ing edit-distance (in Japanese). Transactions of IPSJ, 45(6).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Splitting long or ill-formed input for robust spoken-language translation", "authors": [ { "first": "O", "middle": [], "last": "Furuse", "suffix": "" }, { "first": "S", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "K", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 1998, "venue": "Proc. 
of COLING-ACL'98", "volume": "", "issue": "", "pages": "421--427", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Furuse, S. Yamada, and K. Yamamoto. 1998. Splitting long or ill-formed input for robust spoken-language translation. Proc. of COLING-ACL'98, pages 421-427.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Extracting clauses for spoken language understanding in conversational systems", "authors": [ { "first": "N", "middle": [ "K" ], "last": "Gupta", "suffix": "" }, { "first": "S", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "M", "middle": [], "last": "Rahim", "suffix": "" } ], "year": 2002, "venue": "Proc. of IC-SLP 2002", "volume": "", "issue": "", "pages": "361--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "N.K. Gupta, S. Bangalore, and M. Rahim. 2002. Extracting clauses for spoken language under- standing in conversational systems. Proc. of IC- SLP 2002, pages 361-364.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Application of translation knowledge acquired by hierarchical phrase alignment for pattern-based mt", "authors": [ { "first": "K", "middle": [], "last": "Imamura", "suffix": "" } ], "year": 2002, "venue": "Proc. of TMI-2002", "volume": "", "issue": "", "pages": "74--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Imamura. 2002. Application of transla- tion knowledge acquired by hierarchical phrase alignment for pattern-based mt. Proc. of TMI- 2002, pages 74-84.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Creating corpora for speechto-speech translation", "authors": [ { "first": "G", "middle": [], "last": "Kikui", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "S", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2003, "venue": "Proc. 
of EUROSPEECH", "volume": "", "issue": "", "pages": "381--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Kikui, E. Sumita, T. Takezawa, and S. Yamamoto. 2003. Creating corpora for speech-to-speech translation. Proc. of EUROSPEECH, pages 381-384.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Input segmentation of spontaneous speech in JANUS: a speech-to-speech translation system", "authors": [ { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "D", "middle": [], "last": "Gates", "suffix": "" }, { "first": "N", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "L", "middle": [], "last": "Levin", "suffix": "" } ], "year": 1996, "venue": "Proc. of ECAI-96 Workshop on Dialogue Processing in Spoken Language Systems", "volume": "", "issue": "", "pages": "86--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Lavie, D. Gates, N. Coccaro, and L. Levin. 1996. Input segmentation of spontaneous speech in JANUS: a speech-to-speech translation system. Proc. of ECAI-96 Workshop on Dialogue Processing in Spoken Language Systems, pages 86-99.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The statistical language model for utterance splitting in speech recognition", "authors": [ { "first": "H", "middle": [], "last": "Nakajima", "suffix": "" }, { "first": "H", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2001, "venue": "Transactions of IPSJ", "volume": "42", "issue": "11", "pages": "2681--2688", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Nakajima and H. Yamamoto. 2001. The statistical language model for utterance splitting in speech recognition (in Japanese). Transactions of IPSJ, 42(11):2681-2688.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bleu: a method for automatic evaluation of machine translation.
", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. RC22176, September 17, 2001, Computer Science.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An architecture for a text simplification system", "authors": [ { "first": "A", "middle": [], "last": "Siddharthan", "suffix": "" } ], "year": 2002, "venue": "Proc. of LEC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Siddharthan. 2002. An architecture for a text simplification system. Proc. of LEC 2002.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Experiments and prospects of example-based machine translation", "authors": [ { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "H", "middle": [], "last": "Iida", "suffix": "" } ], "year": 1991, "venue": "Proc. of 29th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "185--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Sumita and H. Iida. 1991. Experiments and prospects of example-based machine translation. Proc. of 29th Annual Meeting of ACL, pages 185-192.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Example-based machine translation using DP-matching between word sequences", "authors": [ { "first": "E", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2001, "venue": "Proc. of 39th ACL Workshop on DDMT", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Sumita. 2001.
Example-based machine translation using DP-matching between word sequences. Proc. of 39th ACL Workshop on DDMT, pages 1-8.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Collecting machine-translation-aided bilingual dialogues for corpus-based speech translation", "authors": [ { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "G", "middle": [], "last": "Kikui", "suffix": "" } ], "year": 2003, "venue": "Proc. of EUROSPEECH", "volume": "", "issue": "", "pages": "2757--2760", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Takezawa and G. Kikui. 2003. Collecting machine-translation-aided bilingual dialogues for corpus-based speech translation. Proc. of EUROSPEECH, pages 2757-2760.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Generation of word graphs in statistical machine translation", "authors": [ { "first": "N", "middle": [], "last": "Ueffing", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proc. of Conf. on Empirical Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "156--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Ueffing, F.J. Och, and H. Ney. 2002. Generation of word graphs in statistical machine translation. Proc. of Conf. on Empirical Methods for Natural Language Processing, pages 156-163.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Figure 1: Configuration", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "num": null, "html": null, "text": "Example. Here, we show an example of generating sentence-splitting candidates with Prob and selecting one by Score. For the input sentence, \"This is a medium size jacket I think it's a good size for you try it on please\", there may be many candidates. Below, five candidates, whose Prob is not less than that of the original sentence, are generated.
A '|' indicates a splitting position. The left numbers indicate the ranking based on Prob. The 5th candidate is the input sentence itself. For each candidate, Sim and then Score are calculated. Among the candidates, the 2nd is selected because its Score is the highest.", "content": "
1. This is a medium size jacket | I think it's a good size for you try it on please
2. This is a medium size jacket | I think it's a good size for you | try it on please
3. This is a medium size jacket | I think it's a good size | for you try it on please
4. This is a medium size jacket | I think it's a good size | for you | try it on please
5. This is a medium size jacket I think it's a good size for you try it on please
", "type_str": "table" }, "TABREF3": { "num": null, "html": null, "text": "", "content": "", "type_str": "table" }, "TABREF5": { "num": null, "html": null, "text": "MT Quality: Using similarity vs. not using similarity, on the test set of 237 sentences (P indicates Prob and S indicates Sim)", "content": "
", "type_str": "table" }, "TABREF7": { "num": null, "html": null, "text": "MT Quality: Using similarity vs. not using similarity, on the test set of 237 sentences, without a thesaurus (P indicates Prob and S indicates Sim)", "content": "
", "type_str": "table" } } } }