{
"paper_id": "N03-2006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:07:33.439642Z"
},
"title": "Adaptation Using Out-of-Domain Corpus within EBMT",
"authors": [
{
"first": "Takao",
"middle": [],
"last": "Doi",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Translation Research Laboratories",
"institution": "Kansai Science City",
"location": {
"addrLine": "2-2-2 Hikaridai",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "takao.doi@atr.co.jp"
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Translation Research Laboratories",
"institution": "Kansai Science City",
"location": {
"addrLine": "2-2-2 Hikaridai",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "eiichiro.sumita@atr.co.jp"
},
{
"first": "Hirofumi",
"middle": [],
"last": "Yamamoto",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Translation Research Laboratories",
"institution": "Kansai Science City",
"location": {
"addrLine": "2-2-2 Hikaridai",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "hirofumi.yamamoto@atr.co.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In order to boost the translation quality of EBMT based on a small-sized bilingual corpus, we use an out-of-domain bilingual corpus and, in addition, the language model of an indomain monolingual corpus. We conducted experiments with an EBMT system. The two evaluation measures of the BLEU score and the NIST score demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the language model.",
"pdf_parse": {
"paper_id": "N03-2006",
"_pdf_hash": "",
"abstract": [
{
"text": "In order to boost the translation quality of EBMT based on a small-sized bilingual corpus, we use an out-of-domain bilingual corpus and, in addition, the language model of an indomain monolingual corpus. We conducted experiments with an EBMT system. The two evaluation measures of the BLEU score and the NIST score demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Example-Based Machine Translation (EBMT) is adaptable to new domains. If you simply prepare a bilingual corpus of a new domain, you'll get a translation system for the domain. However, if only a small-sized corpus is available, low translation quality is obtained. We explored methods to boost translation quality based on a small-sized bilingual corpus in the domain. Among these methods, we use an out-of-domain bilingual corpus and, in addition, the language model (LM) of an indomain monolingual corpus. For accuracy of the LM, a larger training set is better. The training set is a target language corpus, which can be more easily prepared than a bilingual corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In prior works, statistical machine translation (Brown, 1993) used not only LM but also translation models. However, making a translation model requires a bilingual corpus. On the other hand, in some studies on multiple-translation selection, the LM of the target language is used to calculate translation scores (Kaki, 1999; Callison-Burch, 2001) . For adaptation, we use the LM of an in-domain target language.",
"cite_spans": [
{
"start": 48,
"end": 61,
"text": "(Brown, 1993)",
"ref_id": "BIBREF4"
},
{
"start": 313,
"end": 325,
"text": "(Kaki, 1999;",
"ref_id": "BIBREF5"
},
{
"start": 326,
"end": 347,
"text": "Callison-Burch, 2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following sections, we describe the methods using an out-of-domain bilingual corpus and an indomain monolingual corpus. Moreover, we report on our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EBMT (Nagao, 1984) retrieves the translation examples that are most similar to an input expression and adjusts the examples to obtain the translation. The EBMT system in our approach retrieves not only indomain examples, but also out-of-domain examples. When using out-of-domain examples, suitability to the target domain is considered. We tried the following three types of adaptation methods.",
"cite_spans": [
{
"start": 5,
"end": 18,
"text": "(Nagao, 1984)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation Methods",
"sec_num": "2"
},
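The retrieval step above is the core of the EBMT system. As a concrete illustration, here is a minimal sketch of similarity-based example retrieval, assuming a word-level edit-distance measure in the spirit of the DP-matching used by D³ (Sumita, 2001); the function names are illustrative, not the actual system's API.

```python
# Minimal sketch of example retrieval; assumes examples are
# (source_tokens, target_tokens) pairs and the example set is non-empty.

def edit_distance(a, b):
    """Word-level edit distance between two token sequences (DP matching)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def similarity(src, example_src):
    """1.0 for identical word sequences, lower as the sequences diverge."""
    if not src and not example_src:
        return 1.0
    return 1.0 - edit_distance(src, example_src) / max(len(src), len(example_src))

def retrieve_most_similar(src, examples):
    """Return the translation examples most similar to the input."""
    best = max(similarity(src, ex_src) for ex_src, _ in examples)
    return [(ex_src, ex_tgt) for ex_src, ex_tgt in examples
            if similarity(src, ex_src) == best]
```

For example, `retrieve_most_similar("i want to reserve a room".split(), examples)` returns every stored pair whose source side is closest to the input under this measure.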
{
"text": "(1) Merging equally An in-domain corpus and an out-of-domain corpus are simply merged and used without distinction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation Methods",
"sec_num": "2"
},
{
"text": "(2) Merging with preference for in-domain corpus An in-domain corpus and an out-of-domain corpus are merged. However, when multiple examples with the same similarity are retrieved, the in-domain examples are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation Methods",
"sec_num": "2"
},
{
"text": "(3) Using LM Beforehand, we make an LM of an in-domain target language corpus and, according to the LM, assign a probability to the target sentence of each out-of-domain example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation Methods",
"sec_num": "2"
},
{
"text": "In the example retrieval phase of the EBMT system, two types of examples are handled differently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation Methods",
"sec_num": "2"
},
{
"text": "(3-1) From in-domain examples, the most similar examples are retrieved. (3-2) From out-of-domain examples, not only the most similar examples but also other examples that are nearly as similar are retrieved. In the retrieved examples, examples with the highest probabilities of their target sentences by the LM are selected. (3-3) From the results of both (3-1) and (3-2), the most similar examples are selected. Examples of (3-1) are used when the similarities are equal to each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation Methods",
"sec_num": "2"
},
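To make the three strategies concrete, the sketch below reuses `similarity` and `retrieve_most_similar` from the previous sketch. The `margin` that operationalizes "nearly as similar" and the `lm_logprob` scoring function are assumptions; the paper does not specify these details.

```python
def merge_equally(src, in_domain, out_of_domain):
    # (1) Pool both corpora with no distinction between them.
    return retrieve_most_similar(src, in_domain + out_of_domain)

def merge_with_preference(src, in_domain, out_of_domain):
    # (2) Pool both corpora, but on a similarity tie keep the
    #     in-domain examples only.
    candidates = retrieve_most_similar(src, in_domain + out_of_domain)
    preferred = [ex for ex in candidates if ex in in_domain]
    return preferred or candidates

def select_with_lm(src, in_domain, out_of_domain, lm_logprob, margin=0.1):
    # (3-1) Most similar in-domain examples.
    best_in = retrieve_most_similar(src, in_domain)
    # (3-2) Out-of-domain examples that are most similar or nearly so
    #       (margin is a hypothetical knob), keeping those whose target
    #       sentences the in-domain LM scores highest.
    scored = [(similarity(src, ex_src), ex_src, ex_tgt)
              for ex_src, ex_tgt in out_of_domain]
    top = max(s for s, _, _ in scored)
    near = [(ex_src, ex_tgt) for s, ex_src, ex_tgt in scored
            if s >= top - margin]
    best_lm = max(lm_logprob(ex_tgt) for _, ex_tgt in near)
    best_out = [(ex_src, ex_tgt) for ex_src, ex_tgt in near
                if lm_logprob(ex_tgt) == best_lm]
    # (3-3) Combine both result sets; in-domain examples win ties.
    return merge_with_preference(src, best_in, best_out)
```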
{
"text": "In order to evaluate the adaptability of an EBMT with out-of-domain examples, we applied the methods described in Section 2 to the EBMT and evaluated the translation quality in Japanese-to-English translation. We used an EBMT, DP-match Driven transDucer (D 3 , Sumita, 2001 ) as a test bed. We used two Japanese-and-English bilingual corpora. In this experiment on adaptation, as an out-ofdomain corpus, we used Basic Travel Expression Corpus (BTEC, described as BE-corpus in Takezawa, 2002) ; as an in-domain corpus, we used a telephone conversation corpus (TEL). The statistics of the corpora are shown in Table 1 . TEL is split into two parts: a test set of 1,653 sentence pairs and a training set of 9,918. Perplexities reveal the large difference between the indomain and out-of-domain corpora.",
"cite_spans": [
{
"start": 261,
"end": 273,
"text": "Sumita, 2001",
"ref_id": "BIBREF3"
},
{
"start": 476,
"end": 491,
"text": "Takezawa, 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 608,
"end": 615,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conditions",
"sec_num": "3.1"
},
{
"text": "The translation qualities were evaluated by the BLEU score (Papineni, 2001) and the NIST score (Doddington, 2002) . The evaluation methods compare the system output translation with a set of reference translations of the same source text by finding sequences of words in the reference translations that match those in the system output translation. We used the English sentence corresponding to each input Japanese sentence in the test set as the reference translation. Therefore, achieving a better score by the evaluation means that the translation results can be regarded as more adequate translations for the domain.",
"cite_spans": [
{
"start": 59,
"end": 75,
"text": "(Papineni, 2001)",
"ref_id": "BIBREF1"
},
{
"start": 95,
"end": 113,
"text": "(Doddington, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditions",
"sec_num": "3.1"
},
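Both metrics rest on counting word n-grams shared between the system output and the reference. The following sketch shows the clipped n-gram precision at the heart of BLEU; the full BLEU and NIST formulas (brevity penalty, geometric averaging, NIST's information weighting) are omitted, so this is only an illustration of the matching step.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_ngram_precision(candidate, reference, n):
    """Fraction of candidate n-grams that also occur in the reference,
    with counts clipped to the reference counts, as in BLEU."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    overlap = sum(min(count, ref[g]) for g, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

# Illustrative sentences, not taken from the corpora of the paper.
cand = "i would like to reserve a room".split()
ref = "i want to reserve a room".split()
print([round(clipped_ngram_precision(cand, ref, n), 3) for n in (1, 2, 3)])
```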
{
"text": "In order to simulate incremental expansion of an indomain bilingual corpus and to observe the relationship between corpus size and translation quality, translations were performed with some subsets of the training corpus. The numbers of the sentence pairs are 0, 1000, .. , 5000 and 9918, adding randomly selected examples from the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditions",
"sec_num": "3.1"
},
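The paper states only that the subsets were built by randomly adding examples; the helper below is one way to realize that under this assumption (the sizes follow Tables 2 and 3, and the fixed seed is arbitrary).

```python
import random

def incremental_subsets(pairs, sizes=(0, 1000, 2000, 3000, 4000, 5000, 9918),
                        seed=0):
    """Nested random subsets that simulate incremental corpus expansion:
    each larger subset extends the smaller ones by randomly chosen pairs."""
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    return {k: shuffled[:k] for k in sizes}
```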
{
"text": "The LM of the domain's target language was the word trigram model of the English sentences of the training set of TEL. We tried two patterns of training set quantities in making the LM: 1) all of the training set, and 2) the part of the set used for translation examples according to the numbers mentioned above. Table 2 shows the BLEU scores from the translation experiment, which show certain tendencies. Generally, by using more in-domain examples, the translation results steadily achieve better scores. The score when using 4,000 in-domain examples exceeded that when using 152,172 out-of-domain examples. Equal merging outperformed using only out-of-domain examples. Merging with in-domain preference outperformed equal merging, and using LM outperformed merging with indomain preference. Comparing the two cases using LM, using LM made from all of the training set got a slightly better scores than the other, which implies that better LM is made from a larger corpus. All of the adaptation methods are more effective when a smaller-sized indomain corpus is available. When using no in-domain examples, the effect of using LM made from the entire training set was relatively large. Table 3 shows the NIST scores for the same experiment. We can observe the same tendencies as in the table of BLEU scores, except that the advantage of using LM made from all of the training set over that from a partial set was not observed.",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 320,
"text": "Table 2",
"ref_id": null
},
{
"start": 1189,
"end": 1196,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Conditions",
"sec_num": "3.1"
},
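As an illustration of the word trigram model used above, the sketch below trains a trigram LM on English sentences and scores a target sentence. Add-one smoothing is an assumption made here for self-containedness; the paper does not state which smoothing method was used.

```python
from collections import Counter
import math

class TrigramLM:
    """Word trigram model; add-one smoothing is illustrative only."""

    def __init__(self, sentences):
        self.tri = Counter()
        self.bi = Counter()
        self.vocab = set()
        for s in sentences:
            toks = ["<s>", "<s>"] + s + ["</s>"]
            self.vocab.update(toks)
            for i in range(2, len(toks)):
                self.tri[tuple(toks[i - 2:i + 1])] += 1
                self.bi[tuple(toks[i - 2:i])] += 1

    def logprob(self, sentence):
        """Log probability of a tokenized sentence under the model."""
        toks = ["<s>", "<s>"] + sentence + ["</s>"]
        v = len(self.vocab)
        lp = 0.0
        for i in range(2, len(toks)):
            tri = tuple(toks[i - 2:i + 1])
            lp += math.log((self.tri[tri] + 1) / (self.bi[tri[:2]] + v))
        return lp

# Toy usage: score a target sentence with an LM built from one sentence.
lm = TrigramLM([["i", "want", "to", "reserve", "a", "room"]])
print(lm.logprob(["i", "want", "a", "room"]))
```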
{
"text": "A corpus-based approach is able to quickly build a machine translation system for a new domain if a bilingual corpus of that domain is available. However, if only a small-sized corpus is available, a low translation quality is obtained. In order to boost the performance, several methods using out-of-domain data were explored in this paper. The experimental results showed the effect of using an out-of-domain corpus by two evaluation measures, i.e., the BLEU score and the NIST score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
},
{
"text": "We also showed the possibility of increasing the translation quality by using the LM of the domain's target language. However, the gains from using the LM in the evaluation scores were not significant. We must continue experiments with other corpora and under various conditions. In addition, though we've implicitly assumed a high-quality in-domain corpus, next we'd like to investigate using a low-quality corpus. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "The research reported here was supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled, \"A study of speech dialogue translation technology based on a large corpus\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Toward a Broad-coverage Bilingual Corpus for Speech Translation of Travel Conversations in the Real World",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of LREC-2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takezawa, T. et al. 2002. Toward a Broad-coverage Bilingual Corpus for Speech Translation of Travel Conversations in the Real World, Proc. of LREC- 2002",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation, RC22176",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papineni, K. et al. 2001. Bleu: a Method for Automatic Evaluation of Machine Translation, RC22176, Sep- tember 17, 2001, Computer Science",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics",
"authors": [
{
"first": "G",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 1984,
"venue": "Proc. of the HLT 2002 Conference Nagao",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doddington, G. 2002. Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics. Proc. of the HLT 2002 Conference Nagao, M. 1984. A Framework of a Mechanical Trans- lation between Japanese and English by Analogy Principle, in Artificial and Human Intelligence, Elithorn, A. and Banerji, R. (eds.). North-Holland",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Example-based machine translation using DP-matching between word sequences",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of DDMT Workshop of 39th ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumita, E. 2001 Example-based machine translation using DP-matching between word sequences, Proc. of DDMT Workshop of 39th ACL",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P. F. et al. 1993. The mathematics of statistical machine translation: Parameter estimation, Computa- tional Linguistics, 19(2)",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Scoring multiple translations using character N-gram",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kaki",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of NLPRS-99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaki, S. et al. 1999. Scoring multiple translations using character N-gram, Proc. of NLPRS-99",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Program for Automatically Selecting the Best Output from Multiple Machine Translation Engines",
"authors": [
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of MT Summit VIII",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Callison-Burch, C. et al. 2001. A Program for Auto- matically Selecting the Best Output from Multiple Machine Translation Engines, Proc. of MT Summit VIII",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Framework of a Mechanical Translation between Japanese and English by Analogy Principle",
"authors": [
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1984,
"venue": "Artificial and Human Intelligence, Elithorn, A. and Banerji, R. (eds.), North-Holland",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nagao, M. 1984. A Framework of a Mechanical Translation between Japanese and English by Analogy Principle, in Artificial and Human Intelligence, Elithorn, A. and Banerji, R. (eds.). North-Holland",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td/><td/><td colspan=\"2\">BTEC</td><td>TEL</td><td/></tr><tr><td/><td/><td>Japanese</td><td colspan=\"3\">English Japanese English</td></tr><tr><td colspan=\"2\"># of sentences</td><td colspan=\"2\">152,172</td><td>11,571</td><td/></tr><tr><td># of words</td><td/><td colspan=\"2\">1,045,694 909,270</td><td>103,860</td><td>92,749</td></tr><tr><td colspan=\"2\">Vocabulary size</td><td>19,999</td><td>12,268</td><td>5,242</td><td>4,086</td></tr><tr><td>Average tence length</td><td>sen-</td><td>6.87</td><td>5.98</td><td>8.98</td><td>8.02</td></tr><tr><td colspan=\"2\">Perplexity (word trigram)</td><td colspan=\"2\">24.19 TEL language model 28.85 190.77 142.04</td><td colspan=\"2\">37.22 BTEC language model 40.04 57.27 81.26</td></tr></table>",
"type_str": "table",
"text": "Corpus Statistics Experimental results of translation by BLEU scores",
"num": null,
"html": null
},
"TABREF1": {
"content": "<table><tr><td># of in-domain examples</td><td>0</td><td>1,000</td><td>2,000</td><td>3,000</td><td>4,000</td><td>5,000</td><td>9,918</td></tr><tr><td>Using in-domain examples</td><td>---</td><td colspan=\"6\">0.0037 0.1130 0.4168 0.7567 1.1619 2.7400</td></tr><tr><td>Using out-of-domain examples</td><td/><td/><td/><td>1.1126</td><td/><td/><td/></tr><tr><td>Merging equally</td><td/><td colspan=\"6\">1.4283 1.7367 2.0690 2.3405 2.6142 3.5772</td></tr><tr><td>Merging with preference for in-domain</td><td>1.1126</td><td colspan=\"6\">1.4580 1.7975 2.1343 2.4045 2.7088 3.6255</td></tr><tr><td>Using LM of partial training set Using LM of all training set</td><td colspan=\"6\">1.7454 2.0449 2.3639 2.5825 2.9304 1.4404 1.7007 2.0125 2.3484 2.5992 2.8973</td><td>3.7544</td></tr></table>",
"type_str": "table",
"text": "Experimental results of translation by NIST scores",
"num": null,
"html": null
}
}
}
}