{ "paper_id": "C96-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:51:45.009637Z" }, "title": "Example-Based Machine Translation in the Pangloss System", "authors": [ { "first": "Ralf", "middle": [ "D" ], "last": "Brown", "suffix": "", "affiliation": {}, "email": "ralf@cs@cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The Pangloss Example-Based Machine Translation engine (I'anEI3MT) l is a translation system reql,iring essentially no knowledge of the structure of a language, merely a large parallel corpus of example sentences atn[ a bilingual dictionary. Input texts are segmented into sequences of words occurring in the corpus, for which translations are determined by subsententia[ alignment of the sentence pairs containing those sequences. These partial translations are then combined with the results of other translation en gines to form the final translation produced by the Pangloss system. In an internal evaluation, PanEBMT achieved 70.2% coverage of unrestricted Spanish news-wire text, despite a simplistic subsententia[ alignment algorithm, a subop ritual dictionary, and a corpus Dora a different domain than the evalual, ion texts.", "pdf_parse": { "paper_id": "C96-1030", "_pdf_hash": "", "abstract": [ { "text": "The Pangloss Example-Based Machine Translation engine (I'anEI3MT) l is a translation system reql,iring essentially no knowledge of the structure of a language, merely a large parallel corpus of example sentences atn[ a bilingual dictionary. Input texts are segmented into sequences of words occurring in the corpus, for which translations are determined by subsententia[ alignment of the sentence pairs containing those sequences. These partial translations are then combined with the results of other translation en gines to form the final translation produced by the Pangloss system. In an internal evaluation, PanEBMT achieved 70.2% coverage of unrestricted Spanish news-wire text, despite a simplistic subsententia[ alignment algorithm, a subop ritual dictionary, and a corpus Dora a different domain than the evalual, ion texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Pangloss (Nirenburg el; is a multiengine machine translation system, in which several translation engines are. run in parallel to propose translations of various portions of the input, Dora which the final translation is selected by a statistical language model. Panl'3BMT is one of the translation engines used by Pangloss. EBMT is essentially translation-by-analogy: given a source-language passage S and a collection of aligned source/target text pairs, lind the \"best\" match for S in the source-language half of the text collection, and accept the target-language half of that match as the translation. PanEBMT, like other example-based translation systems, uses essentially no knowledge about its source or target languages; what little knowledge it does use is optional, and is supplied in a eonIiguration file. Its 1This work as part of the l'angloss project was supported I)y tim U.S. I)epartment of Defense three main knowledge sources arc: a sententiallyaligned parallel bilingual corpus; a bilingual dictionary; and a target-language root/synonym list,. 
The fourth (minor and optional) knowledge source is the language-specific information provided in the configuration file, which consists of a list of tokenizations equating words within classes such as weekdays, a list of words which may be elided during alignment (such as articles), and a list of words which may be inserted.", "cite_spans": [ { "start": 9, "end": 23, "text": "(Nirenburg, 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The corpus used by PanEBMT consists of a set of source/target sentence pairs, and is fully indexed on the source-language sentences. The corpus is not aligned at any granularity finer than the sentence pair; subsentential alignment is performed at run-time based on the sentence fragments selected and the other knowledge sources. The corpus index lists all occurrences of every word and punctuation mark in the source-language sentences contained in the corpus. The index has been designed to permit incremental updates, allowing new sentence pairs to be added to the corpus as they become available (for example, to implement a translation memory with the system's own output). The text is tokenized prior to indexing, so that words in any of the equivalence classes defined in the EBMT configuration file (such as month names, countries, or measuring units), as well as the predefined equivalence class, are indexed under the equivalence class rather than their own names. For each distinct token, the index contains a list of the token's occurrences, consisting of a sentence identifier and the word number within the sentence. At translation time, PanEBMT back-substitutes the appropriate target-language word into any translation which involves any tokenized words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Bilingual Corpus", "sec_num": null }, { "text": "The bilingual corpus used for the results reported here consists of 726,406 Spanish-English sentence pairs drawn primarily from the UN Multilingual Corpus available from the Linguistic Data Consortium (Graff and Finch, 1994) (Figure 1), with a small admixture of texts from the Pan-American Health Organization and prior project evaluations, indexed as described above. (Figure 1, Corpus Sentence Pairs, shows two example pairs: \"Las fuentes de esos comentarios y recomendaciones son las siguientes :\" / \"The sources of these comments and recommendations are :\" and \"El informe de la Junta de Auditores a la Asamblea General que incluye las observaciones del Director Ejecutivo del UNICEF sobre los comentarios y recomendaciones de la Junta de Auditores ;\" / \"The report of the Board of Auditors to the General Assembly which incorporates the observations of the Executive Director of UNICEF on the comments and recommendations of the Board of Auditors ;\")", "cite_spans": [ { "start": 193, "end": 232, "text": "(Graff and Finch, 1994)", "ref_id": null } ], "ref_spans": [ { "start": 233, "end": 244, "text": "(Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Parallel Bilingual Corpus", "sec_num": null }, { "text": "Together, the bilingual dictionary and target-language list of roots and synonyms (extracted from WordNet when translating into English) provide the necessary information to find associations between source-language and target-language words in the selected sentence pairs. These associations are used in performing subsentential alignment.
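As a rough illustration of how these two resources could be combined (a minimal Python sketch only; the real engine is written in C++, and the dictionary and root/synonym structures shown here are hypothetical), a source word can be linked to a target word whenever the target word, or one of its roots or synonyms, appears among the dictionary translations of the source word; the precise criterion actually used is stated in the next paragraph.

```python
# Hypothetical knowledge sources: a bilingual dictionary giving possible
# single-word translations, and a target-language root/synonym list
# (e.g. extracted from WordNet).  Both shapes are illustrative only.
DICTIONARY = {
    "banco": {"bank", "bench"},
    "comprar": {"buy", "purchase"},
}
ROOTS_AND_SYNONYMS = {
    "buying": {"buy", "purchase"},
    "banks": {"bank"},
}

def associated(source_word, target_word):
    """A source word and a target word are associated if the target word,
    or any of its roots/synonyms, is a dictionary translation of the source."""
    translations = DICTIONARY.get(source_word, set())
    candidates = {target_word} | ROOTS_AND_SYNONYMS.get(target_word, set())
    return bool(candidates & translations)

def unique_anchors(source_words, target_words):
    """Source positions associated with exactly one target position; at least
    one such anchor is needed before alignment of the pair is attempted."""
    anchors = []
    for i, src in enumerate(source_words):
        matches = [j for j, tgt in enumerate(target_words) if associated(src, tgt)]
        if len(matches) == 1:
            anchors.append((i, matches[0]))
    return anchors
```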
A source word is considered to be associated with a target-language word whenever either the target word itself or any of the words in its root/synonym list appear in the list of possible translations for the source word given by the dictionary. Not all words will be associated one-to-one; however, the current implementation requires that at least one such unique association be found in order to provide an anchor for the alignment process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Bilingual Corpus", "sec_num": null }, { "text": "PanEBMT is implemented in C++, using the FramepaC library (Brown, 1996) for accessing Lisp data structures stored in files or sent from the main Pangloss module via Unix pipes. PanEBMT consists of approximately 13,300 lines of code, including the code for a glossary mode which will not be described here.", "cite_spans": [ { "start": 58, "end": 71, "text": "(Brown, 1996)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": null }, { "text": "PanEBMT uses a re-processed version of the bilingual dictionary used by Pangloss's dictionary translation engine (Figure 2). The re-processing consists of removing various high-frequency words and splitting all multi-word definitions into a list of single words, needed to find one-to-one associations. (Of the corpus described earlier, 10,250 sentence pairs stem from the PAHO corpus and 552 pairs from evaluations.)", "cite_spans": [], "ref_spans": [ { "start": 115, "end": 123, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Implementation", "sec_num": null }, { "text": "PanEBMT is merely one of the translation engines used by Pangloss; the others are transfer engines (dictionaries and glossaries) and a knowledge-based machine translation engine (Figure 3). Each of these produces a set of candidate translations for various segments of the input, which are then combined into a chart (Figure 3). The chart is passed through a statistical language model to determine the best path through the chart, which is then output as the translation of the original input sentence.", "cite_spans": [ { "start": 178, "end": 188, "text": "(Figure 3)", "ref_id": null } ], "ref_spans": [ { "start": 320, "end": 329, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "EBMT's Place in Pangloss", "sec_num": "4" }, { "text": "The EBMT engine produces translations in two phases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EBMT Operation", "sec_num": "5" }, { "text": "1. find chunks by searching the corpus index for occurrences of consecutive words from the input text; 2. perform subsentential alignment on each sentence pair found in the first phase to determine the translation of the chunk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EBMT Operation", "sec_num": "5" }, { "text": "In contrast with other work on example-based translation, such as (Maruyama and Watanabe, 1992) or early Pangloss EBMT experiments (Nirenburg et al., 1993), PanEBMT does not find an optimal partitioning of the input. Instead, it attempts to produce translations of every word sequence in the input sentence which appears in its corpus. The final selection of the \"correct\" cover for the input is left for the statistical language model, as is the case for all of the other translation engines in Pangloss.
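The chunk-finding phase itself can be sketched as follows (illustrative Python over a toy occurrence index; all names here are hypothetical, and the real engine additionally culls all but the last few occurrences of each distinct word sequence, as described below):

```python
from collections import defaultdict

def build_index(source_sentences):
    """Toy inverted index: token -> list of (sentence_id, word_position)."""
    index = defaultdict(list)
    for sid, sentence in enumerate(source_sentences):
        for pos, token in enumerate(sentence.split()):
            index[token].append((sid, pos))
    return index

def find_chunks(input_words, index):
    """All substrings of the input (two or more words) that occur verbatim in
    the corpus.  Each result is (input_start, sentence_id, corpus_start, length)."""
    active, chunks = [], []
    for i, word in enumerate(input_words):
        occurrences = set(index.get(word, []))
        extended = []
        for (start, sid, corpus_start, length) in active:
            # extend a match only if this word is adjacent in the same sentence
            if (sid, corpus_start + length) in occurrences:
                extended.append((start, sid, corpus_start, length + 1))
        chunks.extend(extended)          # record every match of length >= 2
        # any occurrence of the current word can begin a new match
        active = extended + [(i, sid, pos, 1) for (sid, pos) in occurrences]
    return chunks
```

Because every matching word sequence is kept, overlapping chunks are expected at this stage; choosing among them is left to the later chart search.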
An advantage of this approach is that it avoids discarding possible chunks merely because they are not part of the \"optimal\" cover for the input, instead selecting the input coverage by how well the translations fit together to form a complete translation.", "cite_spans": [ { "start": 66, "end": 95, "text": "(Maruyama and Watanabe, 1992)", "ref_id": "BIBREF3" }, { "start": 131, "end": 155, "text": "(Nirenburg et al., 1993)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "EBMT Operation", "sec_num": "5" }, { "text": "[Figure 3: Pangloss Machine Translation System Architecture (diagram). Labels include Transfer MT, Post-edit, Source Text, and Target Text.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EBMT Operation", "sec_num": "5" }, { "text": "To find chunks, the engine sequentially looks up each word of the input in the index. The occurrence list for each word is compared against the occurrence list for the prior word and against the list of chunks extending to the prior word. For each occurrence which is adjacent to an occurrence of the prior word, a new chunk is created or an existing chunk is extended as appropriate. After processing all input words in this manner, the engine has determined all possible substrings of the input containing at least two words which are present in the corpus. Since the more frequent word sequences can occur hundreds of times in the corpus, the list of chunks is culled to eliminate all but the last five (by default) occurrences of any distinct word sequence. By selecting the last occurrences of each word sequence, one effectively gives the most recent additions to the corpus the highest weight, precisely what is needed for a translation memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EBMT Operation", "sec_num": "5" }, { "text": "Next, the sentence pairs containing the chunks found in the first phase are read from disk, and alignment is performed on each in order to determine the translation of the chunk, unless the match is against the entire corpus entry, in which case the entire target-language sentence is taken as the translation. Alignment currently uses a rather simplistic brute-force approach very similar to that of (Nirenburg et al., 1994), which identifies the minimum and maximum possible segments of the target-language sentence which could possibly correspond to the chunk, and then applies a scoring function to every possible substring of the maximum segment containing at least the minimum segment. The substring with the best score is then selected as the aligned match for the chunk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EBMT Operation", "sec_num": "5" }, { "text": "The alignment scoring function is computed from the weighted sum of a number of extremely simple test functions. The weights can be changed for differing lengths of the source chunk in order to adapt to varying impacts of the tests with varying numbers of words in the chunk, as well as varying impacts as some or all of the raw test scores change.
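A heavily simplified sketch of this brute-force alignment (Python; the weight names, weight values, and the two tests shown are invented for illustration, and the engine's actual test functions are enumerated next) might look like this:

```python
def chunk_score(chunk_words, candidate_words, associated, weights):
    """Weighted sum of a few simple tests; higher scores are worse."""
    src_unmatched = sum(1 for s in chunk_words
                        if not any(associated(s, t) for t in candidate_words))
    tgt_unmatched = sum(1 for t in candidate_words
                        if not any(associated(s, t) for s in chunk_words))
    length_diff = abs(len(chunk_words) - len(candidate_words))
    return (weights["src_unmatched"] * src_unmatched
            + weights["tgt_unmatched"] * tgt_unmatched
            + weights["length_diff"] * length_diff)

def align_chunk(chunk_words, target_words, max_seg, min_seg, associated, weights):
    """Score every substring of the maximal target segment max_seg = (lo, hi)
    that still contains the minimal segment min_seg, and keep the best one."""
    (max_lo, max_hi), (min_lo, min_hi) = max_seg, min_seg
    best, best_score = None, float("inf")
    for start in range(max_lo, min_lo + 1):
        for end in range(max(min_hi, start + 1), max_hi + 1):
            candidate = target_words[start:end]
            score = chunk_score(chunk_words, candidate, associated, weights)
            if score < best_score:
                best, best_score = candidate, score
    return best, best_score
```

In the real engine the weight set also depends on the length of the source chunk, as noted above.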
The test functions include (in approximate order of importance) such measures as a) the number of source words without correspondences in the target, b) the number of target words without correspondences in the source, c) matching words in source/target without correspondences, d) the number of words with correspondences in the full target but not the candidate chunk, e) common sentence boundaries, f) elidable source words, g) insertable target words, and h) the difference in length between source and target chunks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EBMT Operation", "sec_num": "5" }, { "text": "There is one exception to the above procedure for retrieving and aligning chunks. If any of the chunks covers the entire input string and the entire source-language half of a corpus sentence pair, then all other chunks are discarded and the target-language half of the pair is produced as the translation. This speeds up the system when operating in translation memory mode, as would be the case in a system used to translate revisions of previous texts. Unlike a pure translation memory, however, PanEBMT does not require an exact match with a memorized translation. Figure 4 shows the set of translations generated from one sentence. The output is shown in the format used for standalone testing, which generates only the best translation for each distinct chunk; when integrated with the rest of Pangloss, PanEBMT also includes information indicating which portion of the input sentence and which pair from the corpus were used, and can produce multiple translations for each chunk. The number next to the source-language chunk in the output indicates the value of the scoring function, where higher values are worse. Very poor alignments (scores greater than five times the source chunk length) have already been omitted from the output.", "cite_spans": [], "ref_spans": [ { "start": 573, "end": 581, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "EBMT Operation", "sec_num": "5" }, { "text": "The EBMT engine described here is a completely new implementation in C++ replacing an earlier Lisp version. The previous version had performed very poorly (to the point where its results were essentially ignored when combining the outputs of the various translation engines), for two main reasons: inadequate corpus size and incomplete indexing. [Figure 4, Sample Translations, shows the chunks generated for the input \"El Banco de Santander habia sido elegido el lunes por las autoridades monetarias espanolas para comprar el Banco Espanol de Credito (Banesto), cuarto banco espanol.\": \"El Banco de\" (0) (\"the Bank of\"); \"El Banco de Santander\" (1) (\"the Bank of Santander\"); \"Banco de\" (0) (\"Bank of\"); \"Banco de Santander\" (1) (\"Bank of Santander\"); \"de Santander\" (0) (\"of Santander\"); \"habia sido\" (0.5) (\"been\"); \"elegido el\" (0) (\"chosen the\"); \"el lunes por\" (0) (\"Monday by the\"); \"por las\" (0) (\"by the\"); \"por las autoridades\" (14.2) (\"by the health authorities\"); \"por las autoridades monetarias\" (0) (\"by the monetary authorities\"); \"las autoridades monetarias\" (0) (\"the monetary authorities\"); \"comprar el\" (0) (\"buying the\"); \"Espanol de Credito\" (13.2) (\"Spanish Institute of Credit for\"); \"de Credito\" (0) (\"of credit\"); \"de Credito (\" (1) (\"of credit (\"); \"Credito (\" (0) (\"credit (\"); \", cuarto\" (0) (\", fourth\"); \"banco espanol\" (0) (\"Spanish bank\"); \"espanol .\" (0) (\"Spanish .\")] The earlier incarnation had used a corpus of considerably less than 40 megabytes of text, compared to the 270 megabytes used for the results described herein.
The seven-fold increase in corpus size produces a proportional increase in matches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recent Enhancements", "sec_num": "6" }, { "text": "Not only was the corpus fairly small, the text which was used was not fully indexed. To limit the size of the index file, a long list of the most frequent words was omitted from the index, as were punctuation marks. Although allowances were made for the words on the stop list, the missing punctuation marks always forced a break in chunks, frequently limiting the size of chunks which could be found. Further, allowance was made for the un-indexed frequent words by permitting any sequence of frequent words between two indexed words, producing many erroneous matches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recent Enhancements", "sec_num": "6" }, { "text": "The newer implementation fully indexes the corpus, and thus examines only exact matches with the input, ensuring that only good matches are actually processed. Further, PanEBMT can index certain word pairs to, in effect, precompute some two-word chunks. When applied to the five to ten most frequent words, this pairing can reduce processing time during translation by dramatically reducing the amount of data which must be read from the index file (for example, there might be 10,000 occurrences of a word pair instead of 1,000,000 occurrences of one of the words and 100,000 of the other word), and thus the number of adjacency comparisons which must be made. The above timings represent a variety of speed optimizations which have been applied since the August 1995 evaluation, resulting in a doubling of the indexing speed and a tripling of translation speed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recent Enhancements", "sec_num": "6" }, { "text": "Lack of complete input coverage is a severe obstacle to using PanEBMT as a stand-alone translation system. The engine cannot generate a chunk for a word unless it both co-occurs with either the preceding or following word somewhere in the corpus, and at least one occurrence can be successfully aligned. Additionally, candidate chunks are omitted if the alignment was successful but the scoring function indicates a poor match. Unless all of these conditions are met, a gap in output occurs for the particular input word. In the context of the Pangloss system, such gaps are not a problem, since one of the other engines can usually supply a translation covering each gap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strengths and Weaknesses", "sec_num": null }, { "text": "As currently implemented, the EBMT engine is unable to properly deal with translations that do not involve one-for-one correspondences between source and target words (e.g. Spanish \"mil millones\" corresponding to English \"billions\"). Lack of a one-to-one correspondence between source-language and target-language expressions can often cause the alignment to be incorrect or fail altogether under the current alignment algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strengths and Weaknesses", "sec_num": null }, { "text": "Since the corpus used in the experiments described here was based almost entirely on the UN proceedings rather than newswire text, PanEBMT did not find many long chunks during the evaluation. In fact, the average chunk was just over three words in length, and less than three percent of the chunks were more than six words long.
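Returning briefly to the word-pair pre-indexing mentioned earlier in this section, a minimal sketch of the idea (illustrative Python; the rule used here for selecting which pairs to index is an assumption of the sketch, not a documented detail of the engine) is:

```python
from collections import defaultdict

def build_pair_index(source_sentences, frequent_words):
    """Pre-compute occurrences of adjacent word pairs involving the most
    frequent words, so that chunk search can read one short pair list instead
    of two very long single-word occurrence lists."""
    pair_index = defaultdict(list)
    for sid, sentence in enumerate(source_sentences):
        words = sentence.split()
        for pos in range(len(words) - 1):
            if words[pos] in frequent_words or words[pos + 1] in frequent_words:
                pair_index[(words[pos], words[pos + 1])].append((sid, pos))
    return pair_index

# Hypothetical use during chunk search: one lookup replaces the adjacency
# comparison between the (very long) occurrence lists of "de" and "la".
# occurrences = pair_index.get(("de", "la"), [])
```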
The predominance of short chunks quite naturally affects the quality of the final translation, since many short pieces must be assembled into a translation rather than one or two long segments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strengths and Weaknesses", "sec_num": null }, { "text": "Despite all these difficulties, PanEBMT was able to cover 70.2% of the input it was presented with good chunks, and generate some translation for more than 84% of the input (counting chunks with scores so poor that they would ordinarily not be output at all). Integrating the hand-crafted glossaries from Pangloss into the corpus, thus adding 148,600 effectively pre-aligned phrases to the corpus, improved the matches against the corpus from 90.4% to 90.9% of the input, and the coverage with good chunks to 73.3%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strengths and Weaknesses", "sec_num": null }, { "text": "Future Enhancements", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "9", "sec_num": null }, { "text": "Since PanEBMT is a fairly new implementation, there is still much that could be done to enhance it. Among the improvements being considered are: improving the quality of the dictionary (in progress); supporting one-to-many or many-to-one associations for alignment; optimizing the test-function weights; other alignment algorithms; using linguistic information such as morphological variants and source-language synonymy to increase the number of matches against the corpus; using approximate matchings when no exact matches exist in the corpus; and using a classifier algorithm to remove redundancy from the corpus (suggested by C. Domashnev).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "9", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "FramepaC User's Manual", "authors": [ { "first": "Ralf", "middle": [], "last": "Brown", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralf Brown (in preparation). FramepaC User's Manual. Carnegie Mellon University Center for Machine Translation technical memorandum. http://www.cs.cmu.edu/afs/cs.cmu.edu/-", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Applying Statistical English Language Modeling to Symbolic Machine Translation", "authors": [ { "first": "Ralf", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frederking", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-95)", "volume": "", "issue": "", "pages": "221--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralf Brown and Robert Frederking 1995. Applying Statistical English Language Modeling to Symbolic Machine Translation. In Proceedings of the Sixth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-95), pages 221-239.
Leuven, Belgium.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multilingual Text Resources at the Linguistic Data Consortium", "authors": [ { "first": "David", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Finch", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 1994 ARPA Human Language Technology Workshop, Morgan Kaufmann", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Graff and Rebecca Finch 1994. Multilingual Text Resources at the Linguistic Data Consortium. In Proceedings of the 1994 ARPA Human Language Technology Workshop. Morgan Kaufmann.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Tree Cover Search Algorithm for Example-Based Translation", "authors": [ { "first": "H", "middle": [], "last": "Maruyama", "suffix": "" }, { "first": "H", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-92)", "volume": "", "issue": "", "pages": "173--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Maruyama and H. Watanabe 1992. Tree Cover Search Algorithm for Example-Based Translation. In Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-92), pages 173-184. Montreal.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Framework of a Mechanical Translation between Japanese and English by Analogy Principle", "authors": [ { "first": "M", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1984, "venue": "Artificial and Human Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Nagao 1984. A Framework of a Mechanical Translation between Japanese and English by Analogy Principle. In Artificial and Human Intelligence, A. Elithorn and R. Banerji (eds). NATO Publications.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Pangloss Mark III Machine Translation System", "authors": [], "year": 1995, "venue": "Center for Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergei Nirenburg, (ed.). 1995. \"The Pangloss Mark III Machine Translation System.\" Joint Technical Report, Computing Research Laboratory (New Mexico State University), Center for Machine Translation (Carnegie Mellon University), Information Sciences Institute (University of Southern California). Issued as CMU technical report CMU-CMT-95-145.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Full-Text Experiment in Example-Based Machine Translation", "authors": [ { "first": "Sergei", "middle": [], "last": "Nirenburg", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Beale", "suffix": "" }, { "first": "Constantine", "middle": [], "last": "Domashnev", "suffix": "" } ], "year": 1994, "venue": "New Methods in Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergei Nirenburg, Stephen Beale, and Constantine Domashnev 1994. A Full-Text Experiment in Example-Based Machine Translation.
In New Methods in Language Processing, Manchester, England.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Two Approaches to Matching in EBMT", "authors": [ { "first": "Sergei", "middle": [], "last": "Nirenburg", "suffix": "" }, { "first": "Constantine", "middle": [], "last": "Domashnev", "suffix": "" }, { "first": "Dean", "middle": [ "J" ], "last": "Grannes", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the Fifth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-93)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergei Nirenburg, Constantine Domashnev, and Dean J. Grannes 1993. Two Approaches to Matching in EBMT. In Proceedings of the Fifth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-93).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Evaluation in the ARPA Machine Translation Program: 1993 Methodology", "authors": [ { "first": "J", "middle": [ "S" ], "last": "White", "suffix": "" }, { "first": "T", "middle": [], "last": "O'connell", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the ARPA HLT Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "White, J.S. and T. O'Connell. 1994. \"Evaluation in the ARPA Machine Translation Program: 1993 Methodology.\" In Proceedings of the ARPA HLT Workshop. Plainsboro, NJ.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Figure 1: Corpus Sentence Pairs", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Sample Translations", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "EBMT's association dictionary, used here primarily to provide coverage for words not otherwise covered. Indexing a 270 megabyte corpus requires approximately 45 minutes on a Sun SparcStation LX when all files are located on local disks, and another [...] minutes to pack the index (not required, but improves speed at run time). Incremental addition of new data to the corpus proceeds at a rate of roughly six megabytes per minute. A sample text of 15 sentences totalling 414 words and punctuation marks can be processed in just under three minutes. The 20 texts used in the evaluation can be completely processed in two hours, including separate passes for dictionary lookups and statistical modeling by a separate program (described in (Brown and Frederking, 1995)); PanEBMT accounts for about 80 minutes of those two hours.", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "As currently implemented, PanEBMT has to [...] tokenization (along with the required re-indexing of the corpus) and to adjust the scoring function weights. Number and quality of translations degrades gradually as the size and quality of the bilingual dictionary and synonym list decrease. An incomplete dictionary or root/synonym list merely causes PanEBMT to miss some potential translations. Similarly, a smaller corpus produces fewer potential matches, but there is no point for any of the three knowledge sources at which the engine suddenly ceases to function.
One can take advantage of this gradual behavior by building the knowledge sources incrementally and using EBMT for translations even before the knowledge sources have been completed. In particular, by adding post-edited output of the MT system back into the corpus, the system can be bootstrapped from a relatively modest initial corpus (precisely the idea behind a translation memory). During preparation of this paper, several extraneous lines were discovered in the corpus files, which caused more than 29,000 sentence pairs (over 4% of the corpus) to be corrupted. Due to the extra lines, the corrupted pairs consisted of the English target sentence from one pair and the Spanish source sentence from the following pair. This error had not been discovered earlier because it had no obvious effect on PanEBMT's performance, a clear example of the system's graceful degradation property.", "num": null, "uris": null }, "TABREF1": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
: Coverage and Sentence Alignability
Engine      Proposed          Selected
Name        Arcs     Words    Arcs    Words    Cover (%)
DICT        27482    27482    3451    3451     91.67
EBMT        11005    34992    1527    4768     64.39
GLOSS       17663    19249    1567    1774     57.80
Overall     46580    71998    5415    9169     91.69
" }, "TABREF2": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
: Contributions of Pangloss Engines
" } } } }