{ "paper_id": "C94-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:48:07.064059Z" }, "title": "BUILDING AN MT I)ICTIONARY FROM PARAI~LEI~ TEXTS BASED ON LINGUISTIC AND STATISTICAL INIi'ORMATION", "authors": [ { "first": "Akira", "middle": [], "last": "Kumano", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toshiba Corporation 1", "location": { "addrLine": "Komukai Toshiba-cho, Saiwai-ku", "postCode": "210", "settlement": "Kawasaki", "country": "JAPAN" } }, "email": "" }, { "first": "Ltidcki", "middle": [], "last": "Ltirakawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toshiba Corporation 1", "location": { "addrLine": "Komukai Toshiba-cho, Saiwai-ku", "postCode": "210", "settlement": "Kawasaki", "country": "JAPAN" } }, "email": "hirakawa@ist.rdc.toshiba.co.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A method for generating a machine translation (MT) dictionary from parallel texts is described. This method utilizes both statistical information and linguistic information to obtain corresponding words or phrases in parallel texts. By combining these two types of information, translation pairs which cannot be obtained by a linguistic-based method can be extntcted. Over 70% accurate translations of compound nouns and over 50% of unknown words are obtained as tbe first candidate from small Japanese/Englisb parallel texts containing severe distortions.", "pdf_parse": { "paper_id": "C94-1009", "_pdf_hash": "", "abstract": [ { "text": "A method for generating a machine translation (MT) dictionary from parallel texts is described. This method utilizes both statistical information and linguistic information to obtain corresponding words or phrases in parallel texts. By combining these two types of information, translation pairs which cannot be obtained by a linguistic-based method can be extntcted. Over 70% accurate translations of compound nouns and over 50% of unknown words are obtained as tbe first candidate from small Japanese/Englisb parallel texts containing severe distortions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Parallel texts (corpora) are useful resources for acquiring a variety of linguistic knowledge (Dangan, 1991; Matsumoto, 1993) , especially for machine translation systems which inherently require customizations.", "cite_spans": [ { "start": 94, "end": 108, "text": "(Dangan, 1991;", "ref_id": null }, { "start": 109, "end": 125, "text": "Matsumoto, 1993)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "Translation dictionaries are, needless to say, the most basic and powerful knowledge source for improving and customizing translation systems. Our research interest lies in automatic generation of translation dictionaries from parallel texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "In this perspective, finding corresponding words or phrases in bilingual texts will be the fundamental factor for accurate translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "Statistics-based processing has proven to be very powerful for aligning sentences and words in parallel corpora (Brown, 1991; Gale, 1993; Chen, 1993) . Kupiec proposes an Mgorithm for finding ~loun phrases in bilingual corpora (Kupiec, 1993) . 
In this algo o rithm, noui~-phrase candidates are extracted from tagged and aligned parallel texts using a noun phrase recognizer and tile correspondences of these nonn phrases are calculated based on the EM algorithm. Accuracy of around 90% has been attained for the Imndred highest ranking con'espondenccs. Statisticsbased processing is effective when a relatively large amount of parallel texts is available, i.e. when high frequencies are obtained.", "cite_spans": [ { "start": 112, "end": 125, "text": "(Brown, 1991;", "ref_id": "BIBREF0" }, { "start": 126, "end": 137, "text": "Gale, 1993;", "ref_id": "BIBREF3" }, { "start": 138, "end": 149, "text": "Chen, 1993)", "ref_id": "BIBREF1" }, { "start": 227, "end": 241, "text": "(Kupiec, 1993)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "On the other hand, existing linguistic knowledge can be used for finding corresponding words or phrases in parallel texts. For example, possible tar-get expressions for a source expression provided by a translation system (linguistic knowledge source) can be a key in searching the corresponding expressions in a corpus (Nogami, 1991; Katoh, 1993) . Yanramoto (1993) proposes a method for generating a translation dictionary from Japanese/English parallel texts. In this method, English and Japanese compound noun phrases are extracted from parallel texts and their correspondences are searched by matching their possible translations generated by tile existing translation dictionary. However, acquirable noun phrases are limited by tile linguistic generative power of the translation dictionary. Furthernlore, tiffs method utilizes no sentence alignmeat information which can reduce errors in finding noun phrase correspondences.", "cite_spans": [ { "start": 320, "end": 334, "text": "(Nogami, 1991;", "ref_id": null }, { "start": 335, "end": 347, "text": "Katoh, 1993)", "ref_id": "BIBREF4" }, { "start": 350, "end": 366, "text": "Yanramoto (1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "This paper proposes a new method for generating an MT dictionary from parallel texts. It utilizes both statistical and linguistic information to obtain corresponding words or phrases in parallel texts. By combining these two types of information, translation pairs which cannot be obtained by the above linguistic-based method can be extracted, and a highly accurate translation dictionary is generated from relatively small par:dlel texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1" }, { "text": "TO BUILDING AN MT 1)ICTIONARY Our goal in building an MT dictionary from parallcl texts is to develop a robust method which enables highly accurate extraction of translation pairs from a relatively small amount of parallel texts as well as from parallel texts containing severe distortions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPROACtt", "sec_num": "2" }, { "text": "In real-world applications, generally it is extremely difficult especially for MT users to obtain a large amount of high quality parallel texts of one specific domain. 
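To make the dictionary-driven matching described in the introduction concrete, here is a minimal sketch (an illustration, not the authors' or Yamamoto's implementation) of composing possible English translations of a Japanese compound noun from an existing bilingual dictionary and searching for them in the English text; the dictionary entries, the example term, and the function names are assumptions.

```python
from itertools import product

# Toy bilingual dictionary: Japanese component word -> possible English translations.
# (Illustrative entries only; a real MT dictionary would be far larger.)
BILINGUAL_DICT = {
    "ビット": ["bit"],
    "線": ["line", "wire"],
    "方式": ["method", "configuration", "system"],
}

def candidate_translations(jp_compound_words):
    """Compose candidate English phrases for a Japanese compound noun by
    combining the dictionary translations of its component words."""
    per_word = [BILINGUAL_DICT.get(w, []) for w in jp_compound_words]
    if not all(per_word):          # a component word has no dictionary entry
        return []
    return [" ".join(combo) for combo in product(*per_word)]

def found_in_text(candidates, english_text):
    """Return the candidate phrases that actually occur in the English text."""
    text = english_text.lower()
    return [c for c in candidates if c in text]

print(found_in_text(candidate_translations(["ビット", "線", "方式"]),
                    "An open bit line configuration is adopted."))
# -> ['bit line configuration']
```

As the text notes, such a purely linguistic method can only recover translations that the existing dictionary is able to generate, which is exactly the limitation the proposed combination with statistical information is meant to overcome.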
If source and target languages do not belong to the same linguistic family, like Japanese and Fnglish, tile situation becomes grave.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPROACtt", "sec_num": "2" }, { "text": "As one typical example of MT dictionary compilation, we have selected Japanese and English patent doemnents which contain many state-of-the-m~t technical terms. Althougb thes~ documents are not cul-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPROACtt", "sec_num": "2" }, { "text": "Japanese [--English 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "APPROACtt", "sec_num": "2" }, { "text": "; ,,nit extractio,, I", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text l Text", "sec_num": null }, { "text": "[Corresponding ]-. To solve this problem, we explored the appropriate integration method considering the use of linguistic information and statistical information to this end. Lingt, istic information is useful in making an intelligent judgment about correspondence between two languages even from partial texts because of its lexical, syntactic, and semantic knowledge; statistical information is characterized by its robustness against noise because it can tnmsform many actual examples into an abstract fom~.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text l Text", "sec_num": null }, { "text": ". ~---> ' L~nil", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text l Text", "sec_num": null }, { "text": "Below is the flow of ot, r method illustrated in Fig. 1 : (1) Unit Extraction: Pmls of documents (\"units\") are extracted from both Japanese and English texts.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 57, "text": "Fig. 1 :", "ref_id": null } ], "eq_spans": [], "section": "Text l Text", "sec_num": null }, { "text": "(2) Unit Mapping:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text l Text", "sec_num": null }, { "text": "I&mh Japanese nnit is mapped into English units. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text l Text", "sec_num": null }, { "text": "The plausible hypothesis that parallel sentences cont,,in corresponding linguistic expressions is the major premise in Kupiec (1993) . This type of info,mation should be wklely used. The problem is that tim alignment method based on tile sentence bead model (Brown, 1991) is not applicable to patent documents due to their severe disto,fions in doculnent strtlctures and selltence correspolldences.", "cite_spans": [ { "start": 119, "end": 132, "text": "Kupiec (1993)", "ref_id": "BIBREF5" }, { "start": 258, "end": 271, "text": "(Brown, 1991)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "FORMING UNIT CORRI{SPON-DENCES", "sec_num": "3" }, { "text": "Conse-quently, we have introduced a concept called \"unit\" which corresponds to a pa~t of sentence and adopted a new method to extract corresponding units by using linguistic knowledge as a primaxy source of hi formation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FORMING UNIT CORRI{SPON-DENCES", "sec_num": "3" }, { "text": "First, units are extracted from parallel texts. The unit corresponds to sentences or phrases ill tile text. Terms which should be extracted can be found within a unit. \"File rest of words in the unit is called contextual infommtion for tile extracted term. Tile size of units determines tile effectiveness of the st,eceeding unit mapping process. 
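Before the discussion of unit size continues, here is a minimal sketch of the unit extraction step with sentences taken as units, the first approximation adopted later in this section; the sentence-splitting heuristics below are simplifications, not the authors' implementation.

```python
import re

def english_units(text):
    """Approximate sentence-level units for English text: split after
    '.', '!' or '?' followed by whitespace (no handling of abbreviations)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def japanese_units(text):
    """Approximate sentence-level units for Japanese text: split on the
    ideographic full stop."""
    return [s.strip() + "。" for s in text.split("。") if s.strip()]

print(english_units("A bit line is formed. The open bit line configuration is adopted."))
print(japanese_units("ビット線を形成する。オープンビット線方式を採用する。"))
```

Each extracted unit keeps the words surrounding a term as its contextual information, which is what the later mapping and candidate-generation steps rely on.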
For exa,npie, if we set noun phrases (enny words in a dictionaly) as :.1 unit, no contextual information is available, and thus tim probability that corresponding relations hold decreases. In our present implementation, we set sentences as a unit for tile first approximation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "l,:xh'aclion of Units", "sec_num": "3.1" }, { "text": "Next, the unit mapping process creates a conesponding unit table from Japanese ~,nd English vails. This table stores the correslmndenee relationship between milts and its likelihood. The likeli.. hood is calculated based on the linguistic information in an MT bilingual dictionary, Our trait mapping algorithm is given below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mal)ping of Uniis", "sec_num": "3.2" }, { "text": "(1) l,ct ,1 be a set of all content words in tile ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mal)ping of Uniis", "sec_num": "3.2" }, { "text": "Errors in the extraction of terms and phrases from parallel texts eventually lead to a failure in acquiring the correct term/phrase correspondences. In Kupiec (1993) and Yamamoto (1993) , term and phrase extraction is applied to both of parallel texts.", "cite_spans": [ { "start": 152, "end": 165, "text": "Kupiec (1993)", "ref_id": "BIBREF5" }, { "start": 170, "end": 185, "text": "Yamamoto (1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Extraction of Japanese Terms", "sec_num": "4.1" }, { "text": "In contrast, we extract from units only Japanese terms, thereby reducing the errors caused by term/phrase recognizer. Japanese NP's can be recognized more accurately than English NP's because Japanese has considerably less multi-category words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Japanese Terms", "sec_num": "4.1" }, { "text": "In the current implementation, the following two types of term candidates are extracted by the NP recognizer: (A) Compound nouns (including verbal nouns)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Japanese Terms", "sec_num": "4.1" }, { "text": "Examples: \"~-7\" y e\" :, l-~'~3i~\" (=open bit line colfiguration) \"/i~4-/JiJm~l-fJ~\" (=minimum featuring size) (B) Unknown words (nouns, verbal nouns)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Japanese Terms", "sec_num": "4.1" }, { "text": "Examples: \"~-J-~\" (=to laminate, to form) \" ,l-t 1. 
1 .~, 9 :.\" \"Y'\" (=polishing)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Japanese Terms", "sec_num": "4.1" }, { "text": "Our NP recognizer utilizes the sentence awdyzer of a practical MT system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Japanese Terms", "sec_num": "4.1" }, { "text": "The word dictionary includes approximately 70,000 Japanese entries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Japanese Terms", "sec_num": "4.1" }, { "text": "Generation of English translation candidates for a Japanese term is essentially based on the following hypothesis:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding Translation Candidates", "sec_num": "4.2" }, { "text": "The English translation of an extracted term in a Japanese unit is contained in the English cormsponding unit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis 1", "sec_num": null }, { "text": "Now an arbitrary word sequence in corresponding units can be a translation candidate of the Japanese term. We extract English translation candidates in two steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis 1", "sec_num": null }, { "text": "Step 1 : Select English corresponding units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis 1", "sec_num": null }, { "text": "Step 2: Extract n-gram data from the units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis 1", "sec_num": null }, { "text": "Step 1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis 1", "sec_num": null }, { "text": "When the extracted term appears in N Japanese units, N\u00d7M English units will be stored in the corresponding unit table with their correspondence likelihood. The N highest corresponding units within N\u00d7M combinations are extracted. When N is less than M, the M highest combinations arc selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis 1", "sec_num": null }, { "text": "Step 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis 1", "sec_num": null }, { "text": "Suppose that tile correct English translation of the Japanese term JW is EW, and that the mnnber of Japanese units in which JW appears is FJU(JW) (= N). From ltypothesis 1 that the translation is contained in the corresponding units EU I, EU 2 .....", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis 1", "sec_num": null }, { "text": ", EW would be a word sequence which often appears in corresponding units. In order to get such EW, we use n-gram data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EUFJU(JW )", "sec_num": null }, { "text": "The frequency of each n-gram (1 <_ n _< 2 x (the number of component words in JW)) data in FJU(JW) English units is calculated and then EW candidates are ranked by the frequency as EWC 1, EWC 2 .... EWCj. Because EWC with a low frequency in the corresponding units is unlikely to be the correct wanslation, the data with a frequency less than FJU(JW) 4 are heuristically excluded from the candidates. 
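A minimal sketch of Step 2 above: n-gram candidates (1 ≤ n ≤ 2 × the number of component words in JW) are collected from the selected corresponding English units and ranked by frequency. The frequency cut-off is taken here to be FJU(JW)/4, and the further linguistic filters described next in the text are only indicated by a comment; all names are illustrative assumptions.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ranked_candidates(corresponding_units, jw_len, fju):
    """Collect and rank n-gram translation candidates (1 <= n <= 2*jw_len)
    from the corresponding English units, dropping candidates whose
    frequency falls below the heuristic threshold FJU(JW)/4."""
    counts = Counter()
    for unit in corresponding_units:
        tokens = unit.lower().split()
        for n in range(1, 2 * jw_len + 1):
            counts.update(ngrams(tokens, n))
    # Candidates containing a be-verb, or starting/ending with an article or
    # a preposition, would also be filtered out here, as described in the text.
    return [(" ".join(c), f) for c, f in counts.most_common() if f >= fju / 4]

units = ["the open bit line configuration is adopted",
         "an open bit line configuration reduces noise",
         "the bit line is precharged"]
for cand, freq in ranked_candidates(units, jw_len=2, fju=3)[:5]:
    print(freq, cand)
```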
The data containing be verb and the data which starts or ends with a preposition or an article are also excluded from the candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EUFJU(JW )", "sec_num": null }, { "text": "The translation likelihood (TL) of one translation candidate EWCi for the term JW is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ESTIMATING ENGLISH TRANSLA-TIONS", "sec_num": "5" }, { "text": "TL(JW, EWCi) = F(TLS(JW, EWCi), TLL(JW, EWCi))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ESTIMATING ENGLISH TRANSLA-TIONS", "sec_num": "5" }, { "text": "where TI~S(JW, EWCi) is \"'Franslation Likelihood based on Statistical information,\" and TLL(JW, EWCi) \"Translatiou Likelihood based on Linguistic info rmat ion 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ESTIMATING ENGLISH TRANSLA-TIONS", "sec_num": "5" }, { "text": "Statistical hfformation TLS(JW, EWCi) is the frequency score based on the statistical information from Hypothesis 1 that a word which appears as often in tile corresponding units as JW in Japanese units is more likely to be EW. It is quantitatively defined as tile probability in which the translation candidate appears in the corresponding traits. Then we use the following hypottmsis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1", "sec_num": null }, { "text": "Hypothesis 2 (1) wJl=*we 1, wJ2~we 2 ...... wJk~We k More generally, tim relation of each word (w j) in term JW and each word (we) in translation candidate EWCi is classified into the following four classes: i) wj~ we ii) wj --* we iii) wj -4 iv) ~ ---> we (qb indicates no word) it) shows a pair whose correspondence is not described in the bilingual dictionary, iii) and iv) indicate that the corresponding word for wj or we is missing. In iii), JW is longer than EWCi; and vice versa in iv).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1", "sec_num": null }, { "text": "In order to estimate correspondence between JW and EWCi, i) and it) are scored by similarity to the virtual translation which holds the relation (I). When the nmnber of words is the same, score Q (constant) is given, c~Q (ct>0) is added to Q when there is a translation relation to reflect higher reliability of i). Therefore, Q+aQ=(I-,c~)Q is given to the word pair of i), and Q to the word pair of it). Now since we disregard the word order of a term, JW and EWCi are represented as sets of words: JW = wJl, wJ2,.., wJk ~-{wJl, w j2,.., wJk } EWCi = we I , we2,.., we I -{wel, we2,.., wel}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1", "sec_num": null }, { "text": "The number of words with a lexical correspondence relation in wj and we, the number of words in wj without a relation and the number of words in we without a relation are counted as x, y, z respectively. 
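The sketch below illustrates this counting: x is the number of word pairs with a correspondence in the bilingual dictionary, y the number of words of JW left without a correspondent, and z the number of words of EWCi left without one, so that x + y = k and x + z = l as noted next. The toy dictionary entries and the greedy matching strategy are assumptions.

```python
def count_xyz(jw_words, ewc_words, bilingual_dict):
    """Count x (dictionary-supported word pairs), y (leftover Japanese words)
    and z (leftover English words) between term JW and candidate EWCi,
    treating both as bags of words (word order is disregarded)."""
    remaining = list(ewc_words)
    x = 0
    for wj in jw_words:
        for we in remaining:
            if we in bilingual_dict.get(wj, ()):
                x += 1
                remaining.remove(we)
                break
    return x, len(jw_words) - x, len(remaining)

# Toy entries for the example term オープンビット線方式; 方式 maps to
# method/process, so the pair 方式-configuration has no dictionary support.
d = {"オープン": ["open"], "ビット": ["bit"], "線": ["line"], "方式": ["method", "process"]}
print(count_xyz(["オープン", "ビット", "線", "方式"],
                ["open", "bit", "line", "configuration"], d))   # -> (3, 1, 1)
```

These counts, (3, 1, 1) with k = 4, are the ones behind the TLL value of 0.83 for open bit line configuration in Table 1.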
That is, x + y = k and x + z = l.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1", "sec_num": null }, { "text": "TLL(JW, EWCi) is given as the ratio of the score of EWCi to the score of the virtual translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1", "sec_num": null }, { "text": "When y ≥ z,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1", "sec_num": null }, { "text": "TLL(JW, EWCi) = (x(1+α)Q + zQ) / ((x+y)(1+α)Q)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.1", "sec_num": null }, { "text": "We define the translation likelihood TL(JW, EWCi) as below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "TL(JW, EWCi) = (m TLS(JW, EWCi) + n TLL(JW, EWCi)) / (m + n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "Examining the value with the ratio n/m held constant, a low value of TLS(JW, EWCi) adversely affects the total score, especially when the frequency FJU(JW) is 5 or less. This shows that TLS(JW, EWCi) should be weighted heavily for JW's which appear often, but not for JW's with a low frequency. Therefore we tentatively define β = n/m as a function of the frequency FJU(JW), because β should be higher when FJU(JW) is low.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "β = G(FJU(JW)) = p / ({FJU(JW)}^q - r) + s, where r is the possible minimum frequency, and s is the limit of β as the word frequency becomes high enough.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "Values p=4, q=1, r=1, and s=0.5 are used in the following experiments. By introducing β, F is rewritten as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "F(TLS(JW, EWCi), TLL(JW, EWCi)) = (TLS(JW, EWCi) + β TLL(JW, EWCi)) / (1 + β)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "In case {FJU(JW)}^q is equal to or less than r, β is meaningless. For such JW's, TL(JW, EWCi) is redefined simply as: TL(JW, EWCi) = TLL(JW, EWCi).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "Finally, the translation candidate EWCi with the largest value of TL(JW, EWCi) is assumed to be the correct English translation. Table 1 shows the translation candidates for JW = オープンビット線方式 (open bit line configuration) with the best three TL's. Its frequency in the Japanese text is FJU(JW) = 19, so β = 4/(19-1) + 0.5 = 0.72. Consequently, the correct translation EWC3, open bit line configuration, is obtained. To evaluate this method, we have estimated English translations of Japanese terms in seven parallel texts (Japanese specifications of patents on semiconductors and their English translations by human translators) and compared the translations with the correct data given by experts in building an MT dictionary. The size of a Japanese text is 7,508 to 26,927 characters in 127 to 616 sentences; 99,286 characters in 2,148 sentences in total. 
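Before the results are presented, here is a minimal sketch that puts the Section 5 scoring together: TLL computed from the x, y, z counts, the weight β = p/({FJU(JW)}^q - r) + s with p=4, q=1, r=1, s=0.5, and the combined score TL = (TLS + β TLL)/(1 + β). The TLS value below is taken from Table 1 rather than recomputed, and the function and parameter names are assumptions.

```python
def tll(x, y, z, k, alpha=2.0, Q=1.0):
    """Linguistic likelihood: score of candidate EWCi relative to the ideal
    (virtual) translation of the k-word term JW (alpha = 2 in the text)."""
    virtual = k * (1 + alpha) * Q
    if y >= z:
        score = x * (1 + alpha) * Q + z * Q
    else:
        score = x * (1 + alpha) * Q + y * Q - (z - y) * Q
    return score / virtual

def beta(fju, p=4.0, q=1.0, r=1.0, s=0.5):
    """Frequency-dependent weight of the linguistic score."""
    return p / (fju ** q - r) + s

def tl(tls, tll_value, fju):
    """Combined translation likelihood TL = (TLS + beta*TLL) / (1 + beta);
    falls back to TLL alone when beta is undefined."""
    if fju <= 1:                    # i.e. FJU(JW)^q <= r with q = 1, r = 1
        return tll_value
    b = beta(fju)
    return (tls + b * tll_value) / (1 + b)

# Table 1 example, open bit line configuration: x=3, y=1, z=1, k=4,
# FJU(JW) = 19, TLS = 0.95.
t = tll(x=3, y=1, z=1, k=4)
print(round(t, 2), round(tl(tls=0.95, tll_value=t, fju=19), 2))   # -> 0.83 0.9
```

With these inputs the sketch reproduces the TLL = 0.83 and TL = 0.90 reported for open bit line configuration in Table 1.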
Examples of correct translation pairs estimated with the highest TL Table 2 shows the ranking of the correctly estimated translation pairs in seven sample texts. The upper row shows the average of seven individual texts; the lower shows the result using all seven texts in one time. The translation of over 70% of compound nouns is obtained as the first candidate, and over 80% in the top three.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 136, "text": "Table 1", "ref_id": "TABREF3" }, { "start": 897, "end": 904, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "The result for unknown words is 54.0% and 65.0%. Though the accuracy for tile unknown words is relatively low, the estimation has been impossible for Yamamoto (1993) . itere, tile terms whose cor,ect translations are not found in English texts are excepted from evaluation. .Such data occur when human experts give a noun translation for Japanese verbal noun term which is translated as a verb in the actual text. Tile ratio of this kind of translation pairs is abot, t 3%. Tile rate of the correct data is calculated by the ratio of the total occurrences.", "cite_spans": [ { "start": 159, "end": 165, "text": "(1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "The accuracy for the average of unknown words is 52.4% in the top three. The result using all texts is significantly better than tile average because tile statistical information is the major factor in the current implementation. Use of more linguistic information such as in Dangan (1991) and Matsumoto (1993) would improve the total performance.", "cite_spans": [ { "start": 276, "end": 289, "text": "Dangan (1991)", "ref_id": null }, { "start": 294, "end": 310, "text": "Matsumoto (1993)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "Linguistic information has proven effective to estimate translations of low-frequency terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "Of terms which appeared only once in a Japanese text, 215 translations are obtained correctly as the first candidate from 327 terms (65.7%) in seven texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "The fourth example of compound nouns in Fig. 2 shows the advantage of statistical information because the correct translation was obtained in spite of the wrong word segmentation. The Japanese term really consists of three words (~J 9 A, 7\" F 1t ~, .z ]. ~ -.7\" ), each of whicb corresponds to \"cohtmn,\" \"address\" and \"strobe\" respectively. But word segmentation output four word.~ (~J 5' ],, T F 1t ~, l., ~ -.7\") because \":< I. ~--7\"\" is unknown and \"-~ 1-\" is known as \"strike.\"", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 47, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "The CASES where no correct translatkm has been obtained needs to be examined. 
The major reasons for faih, res are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "1. Errors in mappi,lg conesponding units. 2. Errors in word segmentation of unknown compound wo,ds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "Mapping unit errm.'s occur when the one-to-one nnit correspondence does not exist. The experiment using one text shows that 12 out of 98 Japanese sentences have no onE-to-one corresponding English sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "For better unit correspondence, the trails should be smaller, for example, a clause or a verb phrase, so as to make the corresponding accuracy and frequency in text higher and statistical infornmtion more effective. It would improve the unit mapl)ing when one Japanese sentence is tnmslatcd into several English sentences or vice vmsa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "ThE segmentation errors of unknown words arise often in case of Katakana compotmd word. Katakana is the phonetic alphabet in Jal)anese for spelling foreign words\u2022 Since many compound nourLs in a technical field consist of Katakana's with no space between component words, much larger lexicon will contribute to more accurate segmelltation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of Statistical and Linguistic Information", "sec_num": "5.3" }, { "text": "An MT dictionary has been generated from Japanese and English parallel texts. The method proposed in this paper assumes t, nit correspondence and utilizes linguistic information in an MT bilingual dictionary as well as statistical information, namely, word frequency, to estimate the English translatio,L Over 70% accun~te translations for compound nouns are obtained as the first candidate from small (about 300 sentences) Japanese/Fnglish parallel texts (patent specifications) containing severe distortions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "7" }, { "text": "The accnracy of the first translaticm candidates Ior unknown words, which calmot be obtained by a linguistic-based method, is over 50%\u2022 Tim current implementation shows promising results for a cliff let, It target (patent texts) despite relatively shnple linguistic knowledge\u2022 The overall lmfformance will be imlnOved by using more linguistic knowledge and optimizing panuneters calculated by sh~tistical information\u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "7" } ], "back_matter": [ { "text": " (1993) . 
\"Extraction of teclmical te,'m bilingual dictionary from bilingual corpus.\" IPSJ SIG Notes, N1, ", "cite_spans": [ { "start": 1, "end": 7, "text": "(1993)", "ref_id": null }, { "start": 95, "end": 101, "text": "Notes,", "ref_id": null }, { "start": 102, "end": 105, "text": "N1,", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Aligning sentences in parallel corlx),a", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Mercer", "suffix": "" }, { "first": "R", "middle": [], "last": "", "suffix": "" } ], "year": 1991, "venue": "Proe. of the 29th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "16--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P. F.; l,ai, J. C.; and MErcer, R. 1, (1991). \"Aligning sentences in parallel corlx),a.\" In Proe. of the 29th Annual Meeting of the ACL, 16%176.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Aligning sentences in bilingual corpora using Iexical informatio,L", "authors": [ { "first": "S", "middle": [ "F" ], "last": "Chen", "suffix": "" } ], "year": 1993, "venue": "Proc. of the 3 lxt A tmual Meeting of the A CL", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, S. F. (1993). \"Aligning sentences in bilingual corpora using Iexical informatio,L\" In Proc. of the 3 lxt A tmual Meeting of the A CL, 9-16.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Two languages are mo,'e intkmnative than one", "authors": [ { "first": "I", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "A", "middle": [], "last": "Ltai", "suffix": "" }, { "first": "U", "middle": [], "last": "Schwall", "suffix": "" } ], "year": 1991, "venue": "Proc. of the 29th Ammal Meeting of the ACL", "volume": "", "issue": "", "pages": "130--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagan, I.; ltai, A.; and Schwall, U. (1991). \"Two languages are mo,'e intkmnative than one.\" In Proc. of the 29th Ammal Meeting of the ACL, 130-137.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A program for aligning sentences in bilingt,al corpora", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Chnrcb", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "75--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, W. A., and Chnrcb, K. W. (1993). \"A pro- gram for aligning sentences in bilingt,al corpo- ra.\" Computational Linguistics, 19(1 ), 75-90.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word selection by searching the translation candidates on monolingnal texts in target language", "authors": [ { "first": "N", "middle": [], "last": "Katoh", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "93--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katoh, N. (1993). \"Word selection by searching the translation candidates on monolingnal texts in target language.\" 7>chuieal Report of IEICE, NLC93-32. (in Japanese)", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An algorithm for finding noun phrase correspondences in bilingual corpora", "authors": [ { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" } ], "year": 1993, "venue": "I'roc. 
e( the 31st Ammal Meeting rg\" the ACL", "volume": "", "issue": "", "pages": "17--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kupiec, J. (1993). \"An algorithm for finding noun phrase correspondences in bilingual corpora.\" In I'roc. e( the 31st Ammal Meeting rg\" the ACL, 17-22.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Structural Matching", "authors": [ { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" }, { "first": "T", "middle": [], "last": "Utsuro", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matsumoto, Y.; [shimoto, ll.; and Utsuro, T. (1993). \"Structural Matching", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Japanese unit JU. (m iS tim number of words) ,1 ={ Jl'J2 ..... lm} (2) l.et E be a set of all content words in the F, nglish unit [{[J. (n is tile number of words) E=:{ E 1,1{2...F; n} J (3) .v is the number of .li's whose translation candi-date list includes some Ej in E. (4) y is the number of Ej's which is included in the translation candidate list of some Ji in J. (5) The correspondence likelihood CL is given by CL(JU, EU) = -x + y m+n For each JU, M (currently 3) English units with the highest CL(JU, EU) are stored in the corresponding unit table.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "If the length of EWCi is close to the length of JW, JW and EWCi are likely to correspond each other. (b) JW and EWCi with more word translation correspondences are likely to correspond each other. Under this hypothesis, the following correspondence relation (1) is the best. Term JW and translation candidate EWCi have the same length k(-I), and all of their component words correspond in the dictionary, wJi:~we i indicates that we i is included in wJi's translation candidates in the MT bilingual dictionary.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "TI2_.(JW, EWCi) = (x l y)(l -t a.)Q Otherwise, Thus, Tl.l.(JW, EWCi) = x(1-I a)Q + yO -(z -y)Q (x-ly)(l-~ c*)Q TI.I.(JW, Ewci) -. TI.L(JW, t!WCi) < 1. The value of c~ is determined as 2 by evaluating sample tnmslalion pairs. Followings are the TLI,'s of three EWC's for JW:vk --7\" :./ ff .:t I. ~Jy:,~ which consists of four component words (k=4); \":,l---7\" :/(=open),\" \"tf .~, I-(-bit),\" \"~(=line),\" and \"Jj3~.(-method, process).\" bit line configuration x:2,y-2, z=l .'.T[.I~ -(2x3+l)/4x3 =0.58 open bit line x::3, y: 1, z:-O .'. Tl.l. = (3x3)/4x3 = 0.75 open bit line configuration x=3,y:l,z-I .'. TLL = (3\u00d73+1)/4x3 =0.83", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "-'J\")3ll ~-\" J\" ~./2 minimum featuring size ~ -j'-5}l~f[i~.t~t~ element separation region 71---':7\" :-\" t::\" 'u I\" ~7,t)':,:~ open bit line configuration", "uris": null, "num": null }, "TABREF0": { "text": "", "content": "
Fig. 1: Flow of building an MT dictionary from parallel texts (Japanese and English texts; unit extraction; mapping of corresponding units; translation candidate generation using linguistic information from an MT bilingual dictionary and statistical information; output of translation pairs)
turally biased, in many cases, the organization between Japanese and English greatly differs and extensive changes are made in translating from Japanese to English text and vice versa. Hence the difficulty of word extraction from patents.
", "num": null, "type_str": "table", "html": null }, "TABREF3": { "text": "Estimation of English translation", "content": "
EWCi | FEU | TLS | TLL | TL
bit line configuration | 19 | 1.00 | 0.58 | 0.82
open bit line | 18 | 0.95 | 0.75 | 0.86
open bit line configuration | 18 | 0.95 | 0.83 | 0.90
6 EVALUATION AND DISCUSSION
", "num": null, "type_str": "table", "html": null }, "TABREF4": { "text": "Aeeur'lcy of transl'dion estimates Compound nouns (occurrences)-total Tl-i~'t cstq,n--at~'~ to,;~-e~-tq m:ZteT\"", "content": "
Unknown words (occurrences): total | first estimate | top 3 estimates
1 text (average) | 55.6 | 30.1% (16.7) | 52.4% (29.1)
7 texts | compound nouns: 3,224 | 72.9% (2,349) | 83.3% (2,680); unknown words: 389 | 54.0% (210) | 65.0% (253)
", "num": null, "type_str": "table", "html": null } } } }