{ "paper_id": "Y11-1021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:39:36.933450Z" }, "title": "Semi-Automatic Identification of Bilingual Synonymous Technical Terms from Phrase Tables and Parallel Patent Sentences", "authors": [ { "first": "Bing", "middle": [], "last": "Liang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tsukuba", "location": { "postCode": "305-8573", "settlement": "Tsukuba", "country": "JAPAN" } }, "email": "" }, { "first": "Takehito", "middle": [], "last": "Utsuro", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tsukuba", "location": { "postCode": "305-8573", "settlement": "Tsukuba", "country": "JAPAN" } }, "email": "" }, { "first": "Mikio", "middle": [], "last": "Yamamoto", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tsukuba", "location": { "postCode": "305-8573", "settlement": "Tsukuba", "country": "JAPAN" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In the research field of machine translation of patent documents, the issue of acquiring technical term translation equivalent pairs automatically from parallel patent documents is one of those most important. We take an approach of utilizing the phrase table of a state-of-the-art phrase-based statistical machine translation model. In this task, we consider situations where a technical term is observed in many parallel patent sentences and is translated into many translation equivalents. We apply SVM to the task of identifying synonymous translation equivalent pairs and achieve almost 98% precision and over 40% Fmeasure. Then, in order to improve recall, we introduce a semi-automatic framework, where we employ the strategy of selecting more than one seeds for each set of candidates bilingual synonymous term pairs. By manually judging whether each pair of two seeds is synonymous or not, we achieve over 95% precision and 50% recall.", "pdf_parse": { "paper_id": "Y11-1021", "_pdf_hash": "", "abstract": [ { "text": "In the research field of machine translation of patent documents, the issue of acquiring technical term translation equivalent pairs automatically from parallel patent documents is one of those most important. We take an approach of utilizing the phrase table of a state-of-the-art phrase-based statistical machine translation model. In this task, we consider situations where a technical term is observed in many parallel patent sentences and is translated into many translation equivalents. We apply SVM to the task of identifying synonymous translation equivalent pairs and achieve almost 98% precision and over 40% Fmeasure. Then, in order to improve recall, we introduce a semi-automatic framework, where we employ the strategy of selecting more than one seeds for each set of candidates bilingual synonymous term pairs. By manually judging whether each pair of two seeds is synonymous or not, we achieve over 95% precision and 50% recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "For both high quality machine and human translation, a large scale and high quality bilingual lexicon is the most important key resource. Since manual compilation of bilingual lexicon requires plenty of time and huge manual labor, in the research area of knowledge acquisition from natural language text, automatic bilingual lexicon compilation have been studied. 
Techniques invented so far include translation term pair acquisition based on statistical co-occurrence measure from parallel sentences (Matsumoto and Utsuro, 2000) , translation term pair acquisition from comparable corpora (Fung and Yee, 1998) , compositional translation generation based on an existing bilingual lexicon for human use (Tonoike et al., 2006) , and translation term pair acquisition by collecting partially bilingual texts through the search engine (Huang et al., 2005) .", "cite_spans": [ { "start": 500, "end": 528, "text": "(Matsumoto and Utsuro, 2000)", "ref_id": "BIBREF7" }, { "start": 589, "end": 609, "text": "(Fung and Yee, 1998)", "ref_id": "BIBREF1" }, { "start": 702, "end": 724, "text": "(Tonoike et al., 2006)", "ref_id": "BIBREF10" }, { "start": 831, "end": 851, "text": "(Huang et al., 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Among those efforts of acquiring bilingual lexicon from text, Morishita et al. (2008) studied to acquire technical term translation lexicon from phrase tables, which are trained by a phrasebased statistical machine translation model with parallel sentences automatically extracted from parallel patent documents. Recently, we further studied to require the acquired technical term translation equivalents to be consistent with word alignment in parallel sentences and achieved 91.9% precision with almost 70% recall. This technique has been actually adopted by a Japanese organization which is responsible for translating Japanese patent applications published by the Japanese Patent Office (JPO) into English, where it has been utilized in the process of semiautomatically compiling bilingual technical term lexicon from parallel patent sentences. In this process, persons who are working on compiling bilingual technical term lexicon judge whether to accept or not candidates of bilingual technical term pairs presented by the system. Based on the achievement so far, in this paper, we consider situations where a technical term is observed in many parallel patent sentences and is translated into many translation equivalents. More specifically, in the task of acquiring technical term translation equivalent pairs, this paper studies the issue of identifying synonymous translation equivalent pairs. First, we collect candidates of synonymous translation equivalent pairs from parallel patent sentences. Then, we analyze features for identifying synonymous translation equivalent pairs. Finally, we apply the Support Vector Machines (SVMs) (Vapnik, 1998) to the task of identifying bilingual synonymous technical terms, and achieve the performance of almost 98% precision and over 40% F-measure. Then, in order to improve recall, we introduce a semi-automatic framework, where we employ the strategy of selecting more than one seeds for each set of candidates bilingual synonymous term pairs. By manually judging whether each pair of two seeds is synonymous or not, we achieve over 95% precision and 50% recall.", "cite_spans": [ { "start": 62, "end": 85, "text": "Morishita et al. (2008)", "ref_id": "BIBREF8" }, { "start": 1644, "end": 1658, "text": "(Vapnik, 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the NTCIR-7 workshop, the Japanese-English patent translation task is organized (Fujii et al., 2008) , where parallel patent documents and sentences are provided by the organizer. 
Those parallel patent documents are collected from the 10 years of unexamined Japanese patent applications published by the Japanese Patent Office (JPO) and the 10 years patent grant data published by the U.S. Patent & Trademark Office (USPTO) in 1993-2000. The numbers of documents are approximately 3,500,000 for Japanese and 1,300,000 for English. Because the USPTO documents consist of only patent that have been granted, the number of these documents is smaller than that of the JPO documents. From these document sets, patent families are automatically extracted and the fields of \"Background of the Invention\" and \"Detailed Description of the Preferred Embodiments\" are selected. This is because the text of those fields is usually translated on a sentence-by-sentence basis. Then, the method of Utiyama and Isahara (2007) is applied to the text of those fields, and Japanese and English sentences are aligned.", "cite_spans": [ { "start": 83, "end": 103, "text": "(Fujii et al., 2008)", "ref_id": "BIBREF0" }, { "start": 986, "end": 1012, "text": "Utiyama and Isahara (2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Japanese-English Parallel Patent Documents", "sec_num": "2" }, { "text": "As a toolkit of a phrase-based statistical machine translation model, we use Moses (Koehn et al., 2007) and apply it to the whole 1.8M parallel patent sentences. In Moses, first, word alignment of parallel sentences are obtained by GIZA++ (Och and Ney, 2003) in both translation directions and then the two alignments are symmetrised. Next, any phrase pair that is consistent with word alignment is collected into the phrase table and a phrase translation probability is assigned to each pair. More specifically, we construct a phrase table in the direction of Japanese to English translation, and another one in the opposite direction of English to Japanese translation. In the direction of Japanese to English translation, we finally obtain 76M translation pairs with 33M unique Japanese phrases, i.e., 2.29 English translations per Japanese phrase on average, with Japanese to English phrase translation probabilities P (p E | p J ) of translating a Japanese phrase p J into an English phrase p E . For each Japanese phrase, those multiple translation candidates in the phrase table are ranked in descending order of Japanese to English phrase translation probabilities. In the similar way, in the phrase table in the opposite direction of English to Japanese translation, for each English phrase, multiple Japanese translation candidates are ranked in descending order of English to Japanese phrase translation probabilities.", "cite_spans": [ { "start": 83, "end": 103, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF4" }, { "start": 239, "end": 258, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "Those two phrase tables are then referred to when identifying a bilingual technical term pair, given a parallel sentence pair S J , S E and a Japanese technical term t J , or an English technical term t E . In the direction of Japanese to English, given a parallel sentence pair S J , S E containing a Japanese technical term t J , the Japanese to English phrase table is referred to when identifying a bilingual technical term pair. From the Japanese to English phrase table, candidates of translating t J into English which are consistent with word alignment are collected. 
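As a rough illustration of how a phrase table is consulted here, the sketch below ranks the English candidates of a Japanese phrase by the phrase translation probability P(p E | p J); the toy phrase table and the example phrase are hypothetical stand-ins, not taken from the actual Moses output described above.

```python
# Minimal sketch with hypothetical data: rank the English translation candidates
# of a Japanese phrase by the phrase translation probability P(p_E | p_J) read
# from a Japanese-to-English phrase table.

# Toy phrase table: Japanese phrase -> {English phrase: P(p_E | p_J)}
phrase_table_j2e = {
    "半導体装置": {"semiconductor device": 0.71, "semiconductor apparatus": 0.22, "device": 0.07},
}

def ranked_translations(term_j, table):
    """Return the English candidates of term_j in descending order of P(p_E | p_J)."""
    candidates = table.get(term_j, {})
    return sorted(candidates.items(), key=lambda item: item[1], reverse=True)

for term_e, prob in ranked_translations("半導体装置", phrase_table_j2e):
    print(f"{term_e}\t{prob:.2f}")
```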
Then, those English translation candidates are matched against the English sentence S E of the parallel sentence pair, and those which are not found in S E are filtered out. Finally, among the remaining translation candidates,t E with the largest translation probability P (t E | t J ) is selected and the bilingual technical term pair t J ,t E is identified. The precision of identifying bilingual technical term pair here is 91.9%. Similarly, in the opposite direction of English to Japanese, given a parallel sentence pair S J , S E containing an English technical term t E , the English to Japanese phrase table is referred to when identifying a bilingual technical term pair. The following describes the procedure of developing a reference set of bilingual synonymous technical terms from the whole 1.8M parallel patent sentences and the Japanese to English / English to Japanese phrase tables. Figure 1 illustrates the whole procedure.", "cite_spans": [], "ref_spans": [ { "start": 1476, "end": 1484, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "1. First, a initial Japanese noun phrase t 0 J is randomly selected from the Japanese part of the 1.8M parallel patent sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "2. Then, to the initial Japanese noun phrase t 0 J , the following \"Iteration: Generating Candidates Bilingual Synonymous Term Pairs\" is applied, where the iteration is repeated steps of translation generation from the 1.8M parallel patent sentences and the Japanese to English / English to Japanese phrase tables 1 . Next, the initial set CBP (t 0 J ) of candidate bilingual synonymous term pairs is generated as in the left half of Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 434, "end": 442, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "Iteration: Generating Candidates of Bilingual Synonymous Term Pairs 1st step Given the input Japanese term t J , collect all the parallel sentence pairs which contain t J from the 1.8M parallel patent sentences. Next, from each parallel sentence pair, t J is translated into English according to the procedure in the previous section, referring to the Japanese to English phrase table. Then, all the bilingual term pairs t J , t i E are collected into the initial set CBP (t J ) of candidates bilingual synonymous term pairs 2 . 2nd step Similarly, for each English term t E in CBP (t J ), collect all the parallel sentence pairs which contain t E from the 1.8M parallel patent sentences, and translate t E into Japanese, referring to the English to Japanese phrase table. Then, all the bilingual term pairs t i J , t E are added to CBP (t J ). 3rd step Similarly, for each Japanese term t J in CBP (t J ), collect all the parallel sentence pairs which contain t J from the 1.8M parallel patent sentences, and translate t J into English, referring to the Japanese to English phrase table. Then, all the bilingual term pairs t J , t i E are added to CBP (t J ). 4th step Repeat the procedure of the \"2nd step\". 5th step Repeat the procedure of the \"3rd step\". 
6th step Repeat the procedure of the \"2nd step\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "After the candidate generation iteration, we restrict the set CBP (t 0 J ) as having more than or equal to 10 members (i.e., | CBP (t 0 J ) |\u2265 10). In the evaluation of this paper, out of 4,000 randomly selected initial Japanese noun phrases and corresponding initial sets CBP (t 0 J ), about 350 sets satisfy the lower bound of the number of members.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "3. Next, out of the members of the initial set CBP (t 0 J ) of candidates bilingual synonymous term pairs for the initial Japanese noun phrase t 0 J , we select the seed bilingual term pair s JE = s J , s E as below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "First, in order to distinguish technical terms and general terms and to select bilingual technical term pairs as seeds, we assume the candidates of seeds to satisfy at least one of the following requirements:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "(a) The co-occurring frequency of the bilingual term pair in the 1.8M parallel patent sentences is less than 500. (b) The character length of the Japanese term is more than two when it contains kanji (Chinese characters) or hiragana (Japanese characters). The Japanese term consists of more than one morpheme when all of its characters are katakana (Japanese characters for foreign words). (c) The English term consists of more than one word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "Then, we manually examine the bilingual term pair with the largest co-occurring frequency in the 1.8M parallel patent sentences. If the one with the largest co-occurring frequency is appropriate as a pair of technical terms, we select it as seed. Otherwise, we manually examine all the members of the initial set CBP (t 0 J ) and select the most appropriate pair as seed. If the initial set CBP (t 0 J ) does not include any pair of bilingual technical terms, we discard the set CBP (t 0 J ) at this step. In the evaluation of this paper, out of all the initial sets CBP (t 0 J ), for about 29% of the initial sets, we keep the bilingual term pair with the largest co-occurring frequency as seed, for about 14% of them, we manually select as seed the pair other than the one with the largest co-occurring frequency, and for the remaining 57%, we discard the initial sets CBP (t 0 J ). It took about 5.5 minutes on average to manually examine all the members of each initial set CBP (t 0 J ). 4. To the Japanese technical term s J of the seed bilingual technical term pair s JE = s J , s E , \"Iteration: Generating Candidates Bilingual Synonymous Term Pairs\" is applied. As the result of this iteration, the set CBP (s J ) of candidates of bilingual synonymous technical term pairs is generated as in the right half of Figure 1 . Here, we again restrict the set CBP (s J ) as having more than or equal to 10 members (i.e., | CBP (s J ) |\u2265 10). In the evaluation of this paper, about 90% of the sets CBP (s J ) satisfy the lower bound of the number of members. 
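The candidate generation iteration above can be summarized with the following sketch. It is a simplified illustration only: translate_j2e and translate_e2j are placeholders for the phrase-table-based identification of Section 3 applied to every parallel sentence containing the input term, and the co-occurrence frequency filtering mentioned in the footnotes is omitted.

```python
# Simplified sketch of the candidate generation iteration (6 alternating steps).
# translate_j2e(t_j) / translate_e2j(t_e) are assumed to return the set of
# counterpart terms identified via the phrase tables from all parallel sentences
# containing the input term.

def generate_candidates(t_j0, translate_j2e, translate_e2j, n_steps=6):
    """Return the set CBP(t_j0) of candidate bilingual term pairs."""
    cbp = set()                          # pairs (japanese_term, english_term)
    ja_frontier, en_frontier = {t_j0}, set()
    for step in range(n_steps):
        new_pairs = set()
        if step % 2 == 0:                # 1st, 3rd, 5th steps: Japanese -> English
            for t_j in ja_frontier:
                new_pairs |= {(t_j, t_e) for t_e in translate_j2e(t_j)}
        else:                            # 2nd, 4th, 6th steps: English -> Japanese
            for t_e in en_frontier:
                new_pairs |= {(t_j, t_e) for t_j in translate_e2j(t_e)}
        new_pairs -= cbp                 # avoid duplicate generation of term pairs
        cbp |= new_pairs
        ja_frontier = {t_j for t_j, _ in new_pairs}
        en_frontier = {t_e for _, t_e in new_pairs}
    return cbp
```

Tracking only the newly generated terms at each step is a simplification, but with duplicate removal it yields the same set as re-translating every term already in CBP at each step.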
Finally, we have 134 seed bilingual technical term pairs, where the number of bilingual technical terms in total and their average are shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 1318, "end": 1326, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1703, "end": 1710, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "5. Finally, for each seed bilingual technical term pair s JE = s J , s E , we manually divide the set CBP (s J ) of candidates of bilingual synonymous technical term pairs into SBP (s JE ), those of which are synonymous with s JE , and the remaining NSBP (s JE ). As in Table 1 , the number of bilingual technical terms included in SBP (s JE ) in total for all of the 134 seed bilingual technical term pairs is 1,680, which amounts to 12.5 per seed on average.", "cite_spans": [], "ref_spans": [ { "start": 270, "end": 277, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Phrase Table of an SMT Model", "sec_num": "3" }, { "text": "In this section, we apply the SVMs to the task of identifying bilingual synonymous technical terms, which we originally proposed in Liang et al. (2011) .", "cite_spans": [ { "start": 132, "end": 151, "text": "Liang et al. (2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Identification of Bilingual Synonymous Technical Terms by Machine Learning", "sec_num": "5" }, { "text": "First, let CBP be the union of the sets CBP (s J ) of candidates of bilingual synonymous technical term pairs for all of the 134 seed bilingual technical term pairs. In the training and testing of the classifier for identifying bilingual synonymous technical terms, we first divide the set of 134 seed bilingual technical term pairs into 10 subsets. Here, for each i-th subset (i = 1, . . . , 10), we construct the union CBP i of the sets CBP (s J ) of candidates of bilingual synonymous technical term pairs, where CBP 1 , . . . , CBP 10 are 10 disjoint subsets 3 of CBP . As a tool for learning SVMs, we use TinySVM (http://chasen.org/\u02dctaku/ software/TinySVM/). As the kernel function, we use the polynomial (2nd order) kernel. In the testing of a SVMs classifier, we regard the distance from the separating hyperplane to each test instance as a confidence measure, and return test instances satisfying confidence measures over a certain lower bound only as positive samples (i.e., synonymous with the seed). In the training of SVMs, we use 8 subsets out of the whole 10 subsets CBP 1 , . . . , CBP 10 . Then, we tune the lower bound of the confidence measure with one of the remaining two subsets (henceforth named as the development set). With this subset, we also tune the parameter of TinySVM for trade-off between training error and margin. Finally, we test the trained classifier against another one of the remaining two subsets (henceforth named as the evaluation set). We repeat this procedure of training / tuning / testing 10 times, and average the 10 results of test performance. Table 2 lists all the features used for training and testing of SVMs for identifying bilingual synonymous technical terms. Features are roughly divided into two types: those of the first type f 1 , . . . 
, f 6 simply represent various characteristics of the input bilingual technical term t J , t E , given t E , log of the rank of t J with respect to the descending order of the conditional translation probability", "cite_spans": [], "ref_spans": [ { "start": 1593, "end": 1600, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The Procedure", "sec_num": "5.1" }, { "text": "P(t J | t E ) technical terms t J , t E", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "f 3 : rank of the English term given t J , log of the rank of t E with respect to the descending order of the conditional translation probability P(t E | t J ) f 4 : number of Japanese characters number of characters in t J f 5 : number of English words number of words in t E f 6 : number of times generating translation by applying the phrase tables the number of times repeating the procedure of generating translation by applying the phrase tables until generating t E or t J from s J , as in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "s J \u2192 \u2022 \u2022 \u2022 \u2192 t J \u2192 t E , or, s J \u2192 \u2022 \u2022 \u2022 \u2192 t E \u2192 t J f 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": ": identity of Japanese terms returns 1 when t J = s J f 8 : identity of English terms returns 1 when t E = s E features for the relation of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "f 9 : edit distance similarity of monolingual terms f 9 (t X , s X ) = 1 \u2212 ED(tX ,sX ) max(|tX |,|sX |) (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "where ED is the edit distance of t X and s X , and | t | denotes the number of characters of t.) bilingual technical terms f 10 : character bigram similarity of monolingual terms", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "f 10 (t X , s X ) = |bigram(tX )\u2229bigram(sX )| max(|tX |,|sX |)+1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "(where bigram(t) is the set of character bigrams of the term t.) t J , t E and the seed f 11 : rate of identical morphemes (for Japanese) / words (for English)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "f 11 (t X , s X ) = |const(tX )\u2229const(sX )| max(|const(tX )|,|const(sX )|)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "(where const(t) is the set of morphemes (for Japanese) / words (for English) in the term t.) s J , s E f 12 : subsumption relation of strings / variants relation of surface forms (for Japanese terms ) returns 1 when the difference of t J and s J is only in their suffixes, or only whether or not having the prolonged sound \" \", or only in their hiragana parts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "f 13 : identical stem (for English terms) returns 1 when the numbers of constituent words of t E and s E are the same, and their corresponding constituents have the same stem. 
f 14 : hyphen / space (for English terms)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "returns 1 when the difference of t E and s E is only whether having hyphen or space. f 15 : compositional translation with an existing bilingual lexicon returns 1 when s J can be compositionally generated by translating constituents of t E with an existing bilingual lexicon, or, s E can be compositionally generated by translating constituents of t J with an existing bilingual lexicon (Tonoike et al., 2006) . f 16 : translation by the phrase table returns 1 when s J can be generated by translating t E with the phrase table, or, s E can be generated by translating t J with the phrase table.", "cite_spans": [ { "start": 387, "end": 409, "text": "(Tonoike et al., 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "while those of the second type f 7 , . . . , f 16 represent relation of the input bilingual technical term t J , t E and the seed bilingual technical term pair", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "s JE = s J , s E .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "Among the features of the first type are the frequency (f 1 ), ranks of terms with respect to the conditional translation probabilities (f 2 and f 3 ), length of terms (f 4 and f 5 ), and the number of times repeating the procedure of generating translation with the phrase tables until generating input terms t J and t E from the Japanese seed term s J (f 6 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "Among the features of the second type are identity of monolingual terms (f 7 and f 8 ), edit distance of monolingual terms (f 9 ), character bigram similarity of monolingual terms (f 10 ), rate of identical morphemes / words (f 11 ), string subsumption and variants for Japanese (f 12 ), identical stems for English (f 13 ), hyphen / space of English terms (f 14 ), compositional translation with an existing bilingual lexicon 4 (f 15 ), and translation by the phrase tables (f 16 ). Table 3 shows the evaluation results for a baseline as well as for SVMs. As the baseline, we simply judge the input bilingual term pair t J , t E as synonymous with the seed bilingual technical term pair s JE = s J , s E when t J and s J are identical, or, t E and s E are identical. When training / testing a SVMs classifier, we tune the lower bound of the confidence measure of the distance from the separating hyperplane in two ways: i.e., for maximizing precision and for maximizing Fmeasure. When maximizing precision, we achieve almost 98% precision where F-measure is over 40%. When maximizing F-measure, we achieve over 70% F-measure with over 73% precision and over 68% recall. Table 4 also show examples of improving the baseline by SVMs. Table 4 (a) shows the case of correctly judging as \"synonym\" only by the proposed method. Here, the baseline judges as \"not synonym\", since neither t J and s J nor t E and s E are identical. With the proposed method, on the other hand, f 13 returns 1 since \"holding\" and \"hold\" have the same stem. Also, f 16 returns 1 since, by the phrase tables, \" \" can be generated by translating \"holding circuit\", and \" \" can be generated by translating \"hold circuit\". 
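As a concrete rendering of two of the similarity features defined above, the sketch below computes the edit distance similarity f 9 and the character bigram similarity f 10 exactly as given in Table 2; the Levenshtein routine is a standard dynamic-programming implementation added here for self-containedness, not something prescribed by the paper, and the example terms are the English terms of Table 4(a).

```python
# Sketch of the string similarity features f_9 and f_10, following the
# definitions in Table 2.

def edit_distance(a, b):
    """Levenshtein distance ED(a, b) between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute ca -> cb
        prev = cur
    return prev[-1]

def f9_edit_distance_similarity(t, s):
    """f_9(t, s) = 1 - ED(t, s) / max(|t|, |s|)."""
    return 1.0 - edit_distance(t, s) / max(len(t), len(s))

def character_bigrams(t):
    return {t[i:i + 2] for i in range(len(t) - 1)}

def f10_bigram_similarity(t, s):
    """f_10(t, s) = |bigram(t) & bigram(s)| / (max(|t|, |s|) + 1)."""
    shared = character_bigrams(t) & character_bigrams(s)
    return len(shared) / (max(len(t), len(s)) + 1)

# English terms from Table 4(a):
print(f9_edit_distance_similarity("holding circuit", "hold circuit"))  # 0.8
print(f10_bigram_similarity("holding circuit", "hold circuit"))        # 0.625
```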
Table 4 (b) shows the case of correctly judging as \"not synonym\" only by the proposed method. Here, the baseline judges as \"synonym\", since t E and s E are identical. With the proposed method, on the other hand, both edit distance similarity f 9 and character bigram similarity f 10 return 0 for the Japanese terms \" \" and \" \". Also, f 15 returns 0 since, by compositional translation with an existing bilingual lexicon, \"", "cite_spans": [], "ref_spans": [ { "start": 484, "end": 491, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 1171, "end": 1178, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 1233, "end": 1240, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 1692, "end": 1699, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "\" cannot be generated by translating \"transfer unit\", nor \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.3" }, { "text": "\" cannot be generated by translating \"transfer unit\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.3" }, { "text": "Evaluation results of the previous section is satisfactory in terms of precision of identifying bilingual synonymous technical terms. However, its recall is relatively low, which needs to be improved. In this section, we allow semi-automatic approach to the task of identifying bilingual synonymous technical terms. In this approach, we assume that the SVM classifier trained by the procedure of section 5.1 is tuned so that it can achieve high precision against relatively easier instances, while it judges relatively harder instances as not synonymous, resulting in relatively low recall. Based on this assumption, we design a manual process which is responsible for examining whether pairs of bilingual technical terms judged by the SVM classifier as not synonymous are not actually synonymous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Automatic Approach to Transitive Identification of Bilingual Synonymous Technical Terms", "sec_num": "6" }, { "text": "The detailed procedure of the semi-automatic approach is presented in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Procedure", "sec_num": "6.1" }, { "text": "1st step Suppose that we are given the SVM classifier trained by the procedure of section 5.1. Then, for a set CBP (s J ) of candidates of bilingual synonymous technical terms, apply the SVM classifier to every pair", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Procedure", "sec_num": "6.1" }, { "text": "u JE = u J , u E and v JE = v J , v E of the members of CBP (s J ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Procedure", "sec_num": "6.1" }, { "text": "2nd step Next, for each member u JE = u J , u E of CBP (s J ), collect u JE itself and other", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Procedure", "sec_num": "6.1" }, { "text": "member v JE = v J , v E of CBP (s J )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Procedure", "sec_num": "6.1" }, { "text": "judged as synonymous with u JE by SVM into a set X(u JE ) (Figure 2 (a) ) 5 . 6 , then manually examine whether u JE is synonymous with the seed bilingual technical term pair s JE , i.e., whether u JE \u2208 SBP (s JE ) holds (Figure 2 (b) ). If so, then collect the members of X(u JE ) into XX(s JE ) (Figure 2 (c) ). 
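A compact sketch of the bookkeeping in the 1st and 2nd steps is given below; svm_synonymous and manually_synonymous_with_seed are hypothetical stand-ins for the trained classifier's pairwise judgment and for the human judgment against the seed pair, respectively.

```python
# Sketch of the semi-automatic transitive identification. svm_synonymous(u, v)
# stands in for the trained SVM classifier, and manually_synonymous_with_seed(u)
# for the manual check of u against the seed pair s_JE; both are assumptions of
# this illustration.

def transitive_identification(cbp, svm_synonymous, manually_synonymous_with_seed):
    """Return XX(s_JE), the union of the sets X(u_JE) whose representative
    u_JE is manually judged synonymous with the seed pair."""
    xx = set()
    for u in cbp:
        # X(u): u itself plus every other candidate the SVM judges synonymous with u
        x_u = {u} | {v for v in cbp if v != u and svm_synonymous(u, v)}
        # only candidates with at least one SVM-confirmed partner reach the human check
        if len(x_u) > 1 and manually_synonymous_with_seed(u):
            xx |= x_u
    return xx
```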
Finally, the set XX(s JE ) can be regarded as the final output of the procedure for semiautomatic transitive identification of bilingual technical terms that are judged as synonymous with the seed bilingual technical term pair s JE , and can be denoted as the following formula:", "cite_spans": [ { "start": 74, "end": 75, "text": "5", "ref_id": null }, { "start": 78, "end": 79, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 58, "end": 71, "text": "(Figure 2 (a)", "ref_id": "FIGREF1" }, { "start": 221, "end": 234, "text": "(Figure 2 (b)", "ref_id": "FIGREF1" }, { "start": 297, "end": 310, "text": "(Figure 2 (c)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Procedure", "sec_num": "6.1" }, { "text": "X(u JE ) = v JE = v J , v E (\u2208 CBP (s J )) v JE = u JE , or, v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Procedure", "sec_num": "6.1" }, { "text": "u JE = u J , u E (\u2208 CBP (s J )), if |X(u JE )| > 1 holds", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Procedure", "sec_num": "6.1" }, { "text": "XX(s JE ) = u JE \u2208SBP (s JE ) X(u JE )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Procedure", "sec_num": "6.1" }, { "text": "In the evaluation, for each of the 134 seed bilingual technical term pairs, we evaluate the precision, recall, and F-measure of the set XX(s JE ). As in the case of the procedure of section 5.1, with the development set, we tune the lower bound of the confidence measure as well as the parameter of TinySVM for trade-off between training error and margin, so that we can control the precision of the set XX(s JE ) as over 80%, 85%, 90%, and 95%. Evaluation results against the evaluation set are shown in Table 5 . Here, we achieve over 95% precision with more than 50% recall, and over 90% precision with almost 70% recall. As can be clearly seen from these results, by simply transitively merging the results of identifying bilingual synonymous technical terms by SVM, we can improve the recall of bilingual synonym identification task 7 . ", "cite_spans": [], "ref_spans": [ { "start": 505, "end": 512, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "6.2" }, { "text": "Among related works on acquiring bilingual lexicon from text, Itagaki et al. (2007) focused on automatic validation of translation pairs available in the phrase table learned by a statistical machine translation model, where their study differs with this paper in that Itagaki et al. (2007) did not study the issue of synonymous bilingual technical terms. Tsunakawa and Tsujii (2008) is mostly related to our study, in that they also proposed to apply machine learning technique to the task of identifying synonymous bilingual technical terms and that the features of machine learning studied in Tsunakawa and Tsujii (2008) are closely related those studied in this paper. However, Tsunakawa and Tsujii (2008) studied the issue of identifying synonymous bilingual technical terms only within manually compiled bilingual technical term lexicon and thus are quite limited in its applicability. Our study in this paper, on the other hand, is quite advantageous in that we start from parallel patent documents which continue to be published every year and then, that we can generate candidates of synonymous bilingual technical terms automatically. 6 This condition means that at least one technical term pair vJE is judged as synonymous with uJE by SVM. 
Otherwise, we skip the process of manually examining the pair vJE. 7 In this evaluation, we simply measure the average number of the members uJE of SBP (sJE) for each of which \u00ac \u00ac \u00acX (uJE) \u00ac \u00ac \u00ac = 1 holds. This number represents that, for how many members of SBP (sJE), we can actually skip examining whether uJE is synonymous with the seed bilingual technical term pair sJE. Out of the average 12.5 members per seed, in Table 5 , the numbers are 0.9 for \"> 80%\", 1.4 for \"> 85%\", 2.2 for \"> 90%\", and 4.0 for \"> 95%\", respectively.", "cite_spans": [ { "start": 62, "end": 83, "text": "Itagaki et al. (2007)", "ref_id": "BIBREF3" }, { "start": 269, "end": 290, "text": "Itagaki et al. (2007)", "ref_id": "BIBREF3" }, { "start": 356, "end": 383, "text": "Tsunakawa and Tsujii (2008)", "ref_id": "BIBREF11" }, { "start": 596, "end": 623, "text": "Tsunakawa and Tsujii (2008)", "ref_id": "BIBREF11" }, { "start": 682, "end": 709, "text": "Tsunakawa and Tsujii (2008)", "ref_id": "BIBREF11" }, { "start": 1145, "end": 1146, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 1672, "end": 1679, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Related Works", "sec_num": "7" }, { "text": "Our study in this paper is also different from previous works on identifying synonyms based on bilingual and monolingual resources (e.g. Lin and Zhao (2003) ) in that we learn synonymous bilingual technical terms from phrase tables of a phrase-based statistical machine translation model trained with very large parallel sentences.", "cite_spans": [ { "start": 137, "end": 156, "text": "Lin and Zhao (2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "7" }, { "text": "In Liang et al. (2011) , we proposed the framework of applying machine learning technique to the task of identifying bilingual synonymous technical terms in the process of acquiring technical term translation equivalent pairs from parallel patent documents. The major drawback of the framework of Liang et al. (2011) is in its low recall when preferring precision as over 90%. The framework proposed in Liang et al. (2011) is also employed in the first half of this paper, where the major contribution of this paper is in showing that the approach of manually merging synonym candidate sets identified by SVM in the first half of this paper is quite effective in improving low recall reported in Liang et al. (2011) .", "cite_spans": [ { "start": 3, "end": 22, "text": "Liang et al. (2011)", "ref_id": "BIBREF5" }, { "start": 297, "end": 316, "text": "Liang et al. (2011)", "ref_id": "BIBREF5" }, { "start": 403, "end": 422, "text": "Liang et al. (2011)", "ref_id": "BIBREF5" }, { "start": 696, "end": 715, "text": "Liang et al. (2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "7" }, { "text": "In the task of acquiring technical term translation equivalent pairs, this paper studied the issue of identifying synonymous translation equivalent pairs. We applied the SVMs to this task and achieved the performance of almost 98% precision and over 40% F-measure. Then, in order to improve recall, we simply introduced a semi-automatic framework, where we employed the strategy of selecting more than one seeds for each set of candidates bilingual synonymous term pairs. By manually judging whether each pair of two seeds is synonymous or not, we achieved over 95% precision and 50% recall. 
We are planning to incorporate the results of judgment by SVM when judging whether each pair of two seeds is synonymous or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Copyright 2011 by Bing Liang, Takehito Utsuro, and Mikio Yamamoto 25th Pacific Asia Conference on Language, Information and Computation, pages 196-205", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The number of iteration 6 here is based on our preliminary evaluation, and is decided so that most synonymous bilingual technical terms are generated from the initial Japanese phrase t 0 J , while the number of candidates other than true synonyms is minimized. Throughout those steps, we simply avoid duplicate generation of terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Throughout the steps from the \"1st\" to the \"6th\", we only keep bilingual term pairs which satisfy the lower bound 6 as well as the upper bound 800 of the co-occurring frequency in the 1.8M parallel patent sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Here, we divide the set of 134 seed bilingual technical term pairs into 10 subsets so that the numbers of positive (i.e., synonymous with the seed) / negative (i.e., not synonymous with the seed) samples in each CBPi (i = 1, . . . , 10) are comparative among the 10 subsets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As the existing Japanese-English bilingual lexicon, Eijiro (http://www.eijiro.jp/, Ver.79, with 1.6M translation pairs, is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Here, if both u 1 JE and u 2 JE are judged as synonymous with vJE, but u 1 JE and u 2 JE are judged as not synonymous, we simply include both u 1 JE and u 2 JE in X(vJE).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Toward the evaluation of machine translation using patent information", "authors": [ { "first": "A", "middle": [], "last": "Fujii", "suffix": "" }, { "first": "M", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "M", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "T", "middle": [], "last": "Utsuro", "suffix": "" } ], "year": 2008, "venue": "Proc. 8th AMTA", "volume": "", "issue": "", "pages": "97--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fujii, A., M. Utiyama, M. Yamamoto, and T. Utsuro. 2008. Toward the evaluation of machine translation using patent information. In Proc. 8th AMTA, pp. 97-106.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An IR approach for translating new words from nonparallel, comparable texts", "authors": [ { "first": "P", "middle": [], "last": "Fung", "suffix": "" }, { "first": "L", "middle": [ "Y" ], "last": "Yee", "suffix": "" } ], "year": 1998, "venue": "Proc. 17th COLING and 36th ACL", "volume": "", "issue": "", "pages": "414--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fung, P. and L. Y. Yee. 1998. An IR approach for translating new words from nonparallel, comparable texts. In Proc. 17th COLING and 36th ACL, pp. 
414-420.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Mining key phrase translations from Web corpora", "authors": [ { "first": "F", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2005, "venue": "Proc. HLT/EMNLP", "volume": "", "issue": "", "pages": "483--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, F., Y. Zhang, and S. Vogel. 2005. Mining key phrase translations from Web corpora. In Proc. HLT/EMNLP, pp. 483-490.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic validation of terminology translation consistency with statistical method", "authors": [ { "first": "M", "middle": [], "last": "Itagaki", "suffix": "" }, { "first": "T", "middle": [], "last": "Aikawa", "suffix": "" }, { "first": "X", "middle": [], "last": "He", "suffix": "" } ], "year": 2007, "venue": "Proc. MT Summit XI", "volume": "", "issue": "", "pages": "269--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Itagaki, M., T. Aikawa, and X. He. 2007. Automatic validation of terminology translation consistency with statistical method. In Proc. MT Summit XI, pp. 269-274.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "H", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "A", "middle": [], "last": "Birch", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "M", "middle": [], "last": "Federico", "suffix": "" }, { "first": "N", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "B", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "W", "middle": [], "last": "Shen", "suffix": "" }, { "first": "C", "middle": [], "last": "Moran", "suffix": "" }, { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "O", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "A", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "E", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proc. 45th ACL, Companion Volume", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P., H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine transla- tion. In Proc. 45th ACL, Companion Volume, pp. 177-180.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Identifying bilingual synonymous technical terms from phrase tables and parallel patent sentences", "authors": [ { "first": "B", "middle": [], "last": "Liang", "suffix": "" }, { "first": "T", "middle": [], "last": "Utsuro", "suffix": "" }, { "first": "M", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2011, "venue": "Proc. 12th PACLING, #7", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang, B., T. Utsuro, and M. Yamamoto. 2011. Identifying bilingual synonymous technical terms from phrase tables and parallel patent sentences. In Proc. 
12th PACLING, #7.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Identifying synonyms among distributionally similar words", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" }, { "first": "S", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2003, "venue": "Proc. 18th IJCAI", "volume": "", "issue": "", "pages": "1492--1493", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D. and S. Zhao. 2003. Identifying synonyms among distributionally similar words. In Proc. 18th IJCAI, pp. 1492-1493.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Lexical knowledge acquisition", "authors": [ { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" }, { "first": "T", "middle": [], "last": "Utsuro", "suffix": "" } ], "year": 2000, "venue": "Handbook of Natural Language Processing", "volume": "24", "issue": "", "pages": "563--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matsumoto, Y. and T. Utsuro. 2000. Lexical knowledge acquisition. In R. Dale, H. Moisl, and H. Somers, eds., Handbook of Natural Language Processing, ch. 24, pp. 563-610. Marcel Dekker Inc.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Integrating a phrase-based SMT model and a bilingual lexicon for human in semi-automatic acquisition of technical term translation lexicon", "authors": [ { "first": "Y", "middle": [], "last": "Morishita", "suffix": "" }, { "first": "T", "middle": [], "last": "Utsuro", "suffix": "" }, { "first": "M", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2008, "venue": "Proc. 8th AMTA", "volume": "", "issue": "", "pages": "153--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morishita, Y., T. Utsuro, and M. Yamamoto. 2008. Integrating a phrase-based SMT model and a bilingual lexicon for human in semi-automatic acquisition of technical term translation lexicon. In Proc. 8th AMTA, pp. 153-162.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F. J. and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 19-51.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A comparative study on compositional translation estimation using a domain/topic-specific corpus collected from the web", "authors": [ { "first": "M", "middle": [], "last": "Tonoike", "suffix": "" }, { "first": "M", "middle": [], "last": "Kida", "suffix": "" }, { "first": "T", "middle": [], "last": "Takagi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Sasaki", "suffix": "" }, { "first": "T", "middle": [], "last": "Utsuro", "suffix": "" }, { "first": "S", "middle": [], "last": "Sato", "suffix": "" } ], "year": 2006, "venue": "Proc. 2nd Intl. Workshop on Web as Corpus", "volume": "", "issue": "", "pages": "11--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tonoike, M., M. Kida, T. Takagi, Y. Sasaki, T. Utsuro, and S. Sato. 2006. A comparative study on compositional translation estimation using a domain/topic-specific corpus collected from the web. In Proc. 2nd Intl. Workshop on Web as Corpus, pp. 
11-18.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bilingual synonym identification with spelling variations", "authors": [ { "first": "T", "middle": [], "last": "Tsunakawa", "suffix": "" }, { "first": "J", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2008, "venue": "Proc. 3rd IJCNLP", "volume": "", "issue": "", "pages": "457--464", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsunakawa, T. and J. Tsujii. 2008. Bilingual synonym identification with spelling variations. In Proc. 3rd IJCNLP, pp. 457-464.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A Japanese-English patent parallel corpus", "authors": [ { "first": "M", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "H", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 1998, "venue": "Statistical Learning Theory", "volume": "", "issue": "", "pages": "475--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "Utiyama, M. and H. Isahara. 2007. A Japanese-English patent parallel corpus. In Proc. MT Summit XI, pp. 475-482. Vapnik, V. N. 1998. Statistical Learning Theory. Wiley-Interscience.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Developing a Reference Set of Bilingual Synonymous Technical Terms 4 Developing a Reference Set of Bilingual Synonymous Technical Terms", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "The Procedure of Semi-Automatic Transitive Identification of Bilingual Technical Terms", "type_str": "figure", "num": null }, "TABREF0": { "html": null, "type_str": "table", "text": "Number of Bilingual Technical Terms: Candidates and Reference of Synonyms", "num": null, "content": "
 | # of bilingual technical terms for the total 134 seeds | average per seed
Candidates of Synonyms CBP (s J ) | 22,473 | 167.7
Reference of Synonyms SBP (s JE ) | 1,680 | 12.5
" }, "TABREF1": { "html": null, "type_str": "table", "text": "Features for Identifying Bilingual Synonymous Technical Terms by Machine Learning", "num": null, "content": "
class: features for bilingual technical terms t J , t E
(definitions: X denotes J or E, and s J , s E denotes the seed bilingual technical term pair)
f 1 : frequency -- log of the frequency of t J , t E within the whole parallel patent sentences
f 2 : rank of the Japanese term -- given t E , log of the rank of t J with respect to the descending order of the conditional translation probability P(t J | t E )
" }, "TABREF2": { "html": null, "type_str": "table", "text": "Evaluation Results of Automatic Identification of Bilingual Synonymous Technical Terms (%) J and s J are identical, or, t E and s E are identical.", "num": null, "content": "
Precision Recall F-measure
" }, "TABREF3": { "html": null, "type_str": "table", "text": "Examples of Improvement in Automatic Identification of Bilingual Synonymous Technical Terms", "num": null, "content": "
by SVM(a) Correctly Judging as \"Synonym\" only by SVM
seed s J , s Et J , t EReferenceBaselineSVM (Maximum Precision)
hold circuit,, holding circuitsynonym not synonymsynonym
(b) Correctly Judging as \"Not Synonym\" only by SVM
seed s J , s Et J , t EReferenceBaseline SVM (Maximum Precision)
, transfer unittransfer unit,not synonym synonymnot synonym
" }, "TABREF4": { "html": null, "type_str": "table", "text": "JE is judged as synonymous with u JE by SVM.", "num": null, "content": "" }, "TABREF5": { "html": null, "type_str": "table", "text": "Evaluation Results of Transitive Identification of Bilingual Synonymous Technical Terms (%)", "num": null, "content": "
Requirement for Precision against the Development Set | Precision | Recall | F-measure
> 80% | 81.3 | 89.9 | 85.1
> 85% | 86.9 | 80.9 | 83.4
> 90% | 91.3 | 69.1 | 78.2
> 95% | 95.2 | 53.1 | 67.9
" } } } }