{ "paper_id": "C08-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:24:49.836406Z" }, "title": "Retrieving Bilingual Verb-noun Collocations by Integrating Cross-Language Category Hierarchies", "authors": [ { "first": "Fumiyo", "middle": [], "last": "Fukumoto", "suffix": "", "affiliation": {}, "email": "fukumoto@yamanashi.ac.jp" }, { "first": "Yoshimi", "middle": [], "last": "Suzuki", "suffix": "", "affiliation": {}, "email": "ysuzuki@yamanashi.ac.jp" }, { "first": "Kazuyuki", "middle": [], "last": "Yamashita", "suffix": "", "affiliation": {}, "email": "kazuyuki@yamanashi.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a method of retrieving bilingual collocations of a verb and its objective noun from cross-lingual documents with similar contents. Relevant documents are obtained by integrating crosslanguage hierarchies. The results showed a 15.1% improvement over the baseline nonhierarchy model, and a 6.0% improvement over use of relevant documents retrieved from a single hierarchy. Moreover, we found that some of the retrieved collocations were domain-specific.", "pdf_parse": { "paper_id": "C08-1030", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a method of retrieving bilingual collocations of a verb and its objective noun from cross-lingual documents with similar contents. Relevant documents are obtained by integrating crosslanguage hierarchies. The results showed a 15.1% improvement over the baseline nonhierarchy model, and a 6.0% improvement over use of relevant documents retrieved from a single hierarchy. Moreover, we found that some of the retrieved collocations were domain-specific.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A bilingual lexicon is important for cross-lingual NLP applications, such as CLIR, and multilingual topic tracking. Much of the previous work on finding bilingual lexicons has made use of comparable corpora, which exhibit various degrees of parallelism. Fung et al. (2004) described corpora ranging from noisy parallel, to comparable, and finally to very non-parallel. Obviously, the latter are easy to collect because very non-parallel corpora consist of sets of documents in two different languages from the same period of dates. However, a good solution is required to produce a higher quality of lexicon retrieval.", "cite_spans": [ { "start": 254, "end": 272, "text": "Fung et al. (2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on English and Japanese bilingual verb-objective noun collocations which we call verb-noun collocations and retrieve them using very non-parallel corpora. The method first finds cross-lingual relevant document pairs with similar contents from non-parallel corpora, and c 2008.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported license (http://creativecommons.org/licenses/by-nc-sa/3.0/). Some rights reserved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "then we estimate bilingual verb-noun collocations within these relevant documents. 
Relevant documents are defined here as pairs of English and Japanese documents that report identical or closely related contents, e.g., a pair of documents describing an aircraft crash and the ensuing investigation to compensate the victims' families or any safety measures proposed as a result of the crash. In the task of retrieving cross-lingual relevant documents, it is crucial to identify an event as something occurs at some specific place and time associated with some specific action. One solution is to use a topic, i.e., category in the hierarchical structure, such as Internet directories. Although a topic is not an event, it can be a broader class of event. Therefore, it is helpful for retrieving relevant documents, and thus bilingual verb-noun collocations. Consider the Reuters'96 and Mainichi newspaper documents shown in Figure 1 . The documents report on the same event, \"Russian space station collides with cargo craft,\" were published within two days of each other, and have overlapping content. Moreover, as indicated by the double-headed arrows in the figure, there are a number of bilingual collocations. However, as shown in Figure 1 , the Reuters document is classified into \"Science and Technology,\" while the Mainichi document is classified into \"Space Navigation\". This is natural because categories in the hierarchical structures are defined by different human experts. Therefore, a hierarchy tends to have some bias in both defining hierarchical structure and classifying documents, and as a result some hierarchies written in one language are coarse-grained, while others written in other languages are fine-grained. Our attempt using the results of integrating different hierarchies for retrieving relevant documents was postulated to be able to solve this defect of the differences in ", "cite_spans": [], "ref_spans": [ { "start": 924, "end": 932, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1235, "end": 1243, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The method consists of three steps: integrating category hierarchies, retrieving cross-lingual relevant documents, and retrieving collocations from relevant documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "The method for integrating different category hierarchies does not simply merge two different hierarchies into a large hierarchy, but instead retrieves pairs of categories, where each category is relevant to each other. 1 The procedure consists of two substeps: Cross-language text classification (CLTC) and estimating category correspondences.", "cite_spans": [ { "start": 220, "end": 221, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Integrating Hierarchies", "sec_num": "2.1" }, { "text": "The corpora we used are the Reuters'96 and the RWCP of the Mainichi Japanese newspapers. In the CLTC task, we used English and Japanese data to train the Reuters'96 categorical hierarchy and the Mainichi UDC code hierarchy (Mainichi hierarchy), respectively. In the Reuters'96 hierarchy, the system was trained using labeled English documents, and classified translated labeled Japanese Figure 2 : Cross-language text classification documents. Similarly, for Mainichi hierarchy, the system was trained using labeled Japanese documents, and classified translated labeled English documents. 
We used Japanese-English and English-Japanese MT software.", "cite_spans": [], "ref_spans": [ { "start": 387, "end": 395, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Cross-language text classification", "sec_num": "2.1.1" }, { "text": "We used a learning model, Support Vector Machines (SVMs) (Vapnik, 1995) , to classify documents, as SVMs have been shown to be effective for text classification. We used the \"One-againstthe-Rest\" version of the SVMs at each level of a hierarchy. We classify test documents using a hierarchy by learning separate classifiers at each internal node of the hierarchy. We used a Boolean function", "cite_spans": [ { "start": 57, "end": 71, "text": "(Vapnik, 1995)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-language text classification", "sec_num": "2.1.1" }, { "text": "b(L 1 )&&\u2022 \u2022 \u2022&&b(L m ), where b(L i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-language text classification", "sec_num": "2.1.1" }, { "text": "is a decision threshold value of the i-th hierarchical level. The process is repeated by greedily selecting subbranches until a leaf is reached.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-language text classification", "sec_num": "2.1.1" }, { "text": "We classified translated Mainichi documents with Mainichi category m into Reuters categories using SVMs classifiers. Similarly, each translated Reuters document with category r was classified into Mainichi categories. Figure 2 illustrates the classification of Reuters and Mainichi documents. A document with Mainichi category \"m1\" is classified into Reuters category \"r12\", and a document with Reuters category \"r1\" is classified into Mainichi category \"m21\". As a result, we obtained category pairs, e.g., (r12, m1), and (m21, r1), from the documents assigned to the categories in each hierarchy.", "cite_spans": [], "ref_spans": [ { "start": 218, "end": 226, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Cross-language text classification", "sec_num": "2.1.1" }, { "text": "The assumption of category correspondences is that semantically similar categories, such as \"Equity markets\" and \"Bond markets\" exhibit similar statistical properties than dissimilar categories, such as \"Equity markets\" and \"Sports\". We applied \u03c7 2 statistics to the results of CLTC. Let us take a look at the Reuters'96 hierarchy. Sup-pose that the translated Mainichi document with Mainichi category m \u2208 M (where M is a set of Mainichi categories) is assigned to Reuters category r \u2208 R (R is a set of Reuters'96 categories). We can retrieve Reuters and Mainichi category pairs, and estimate category correspondences according to the \u03c7 2 statistics shown in Eq. 
(1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating category correspondences", "sec_num": "2.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c7 2 (r, m) = f (r, m) \u2212 E(r, m) E(r, m)", "eq_num": "(1)" } ], "section": "Estimating category correspondences", "sec_num": "2.1.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating category correspondences", "sec_num": "2.1.2" }, { "text": "E(r, m) = Sr \u00d7 Sm SR , Sr = k\u2208M f (r, k), SR = r\u2208R Sr .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating category correspondences", "sec_num": "2.1.2" }, { "text": "Here, the co-occurrence frequency of r and m, f (r, m) is equal to the number of category m documents assigned to r. Similar to the Reuters hierarchy, we can estimate category correspondences from Mainichi hierarchy, and extract a pair (r, m) according to the \u03c7 2 value. We note that the similarity obtained by each hierarchy does not have a fixed range. Thus, we apply the normalization strategy shown in Eq. (2) to the results obtained by each hierarchy to bring the similarity value into the range [0,1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating category correspondences", "sec_num": "2.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c7 2 new (r, m) = \u03c7 2 old (r, m) \u2212 \u03c7 2 min (r, m) \u03c7 2 max (r, m) \u2212 \u03c7 2 min (r, m) .", "eq_num": "(2)" } ], "section": "Estimating category correspondences", "sec_num": "2.1.2" }, { "text": "Let SP r and SP m are a set of pairs obtained by Reuters hierarchy and Mainichi hierarchy, respectively. We construct the set of r and m category pairs,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating category correspondences", "sec_num": "2.1.2" }, { "text": "SP (r,m) = {(r,m) | (r,m) \u2208 SP r \u2229 SP m },", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating category correspondences", "sec_num": "2.1.2" }, { "text": "where each pair is sorted in descending order of \u03c7 2 value. For each pair of SP (r,m) , if the value of \u03c7 2 is higher than a lower bound L \u03c7 2 , two categories, r and m, are regarded as similar. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating category correspondences", "sec_num": "2.1.2" }, { "text": "We used the results of category correspondences from the Reuters and Mainichi hierarchies to retrieve relevant documents. Recall that we used English and Japanese documents with quite different hierarchical structures. The task thus consists of two criteria: retrieving relevant documents based on English (we call this Int hi & Eng) and in Japanese ( Figure 3 , in \"Int hi & Eng\" with a set of similar categories consisting of r and m, for each Reuters and translated Mainichi document, we calculate BM25 similarities between them.", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 360, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Retrieval of Relevant Documents", "sec_num": "2.2" }, { "text": "Int hi & Jap). Let d r i (1 \u2264 i \u2264 s) be a Reuters document that is classified into the Reuters category r. 
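As a rough illustration of the category-correspondence scoring above, the following Python sketch computes Eq. (1) and the min-max normalization of Eq. (2) from a table of classification counts; the function names, the dictionary interface, and the analogous definition of S_m are our assumptions, not the authors' implementation.

```python
from collections import defaultdict

def category_chi2(f):
    """Score Reuters/Mainichi category pairs with the statistic of Eq. (1).

    f[(r, m)] is the number of Mainichi-category-m documents that the
    classifier assigned to Reuters category r.  S_m is defined analogously
    to S_r (an assumption; the paper only spells out S_r and S_R).
    """
    S_r, S_m = defaultdict(float), defaultdict(float)
    for (r, m), freq in f.items():
        S_r[r] += freq
        S_m[m] += freq
    S_R = sum(S_r.values())
    chi2 = {}
    for (r, m), freq in f.items():
        expected = S_r[r] * S_m[m] / S_R             # E(r, m)
        chi2[(r, m)] = (freq - expected) / expected  # Eq. (1) as printed
    return chi2

def minmax_normalize(chi2):
    """Bring the scores into the range [0, 1] as in Eq. (2)."""
    lo, hi = min(chi2.values()), max(chi2.values())
    return {pair: (v - lo) / (hi - lo) for pair, v in chi2.items()}
```

Pairs that appear in both SP_r and SP_m and whose score exceeds the lower bound L_chi2 are then kept as similar categories, as described above.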
Let d m j (1 \u2264 j \u2264 t) be a Mainichi", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retrieval of Relevant Documents", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "BM25(d r i , d m mt j ) = w\u2208d m mt j w (1) (k1 + 1)tf K + tf (k3 + 1)qtf k3 + qtf ,", "eq_num": "(3)" } ], "section": "Retrieval of Relevant Documents", "sec_num": "2.2" }, { "text": "where w is a word within d m mt j , and w (1) is the weight of w, w (1) = log (N \u2212n+0.5) (n+0.5) . N is the number of Reuters documents within the same category r, and n is the number of documents which contains w. K refers to k 1 ((1 \u2212 b) + b dl avdl ). k 1 , b, and k 3 are parameters and set to 1, 1, and 1,000, respectively. dl is the document length of d r i and avdl is the average document length in words. tf and qtf are the frequency of occurrence of w in d r i , and d m mt j , respectively. If the similarity value between them is higher than a lower bound L \u03b8 , we regarded these as relevant documents. The procedure is applied to all documents belonging to the sets of similar categories. \"Int hi & Jap\" is the same as \"Int hi & Eng\" except for the use of d r mt i and d m j for comparison. We compared the performance of these tasks, and found that \"Int hi & Eng\" was better than \"Int hi & Jap\". In section 3, we show results with \"Int hi & Eng\" due to lack of space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retrieval of Relevant Documents", "sec_num": "2.2" }, { "text": "The final step is to estimate bilingual correspondences from relevant documents. All Japanese documents were parsed using the syntactic analyzer CaboCha (Kudo and Matsumoto, 2003) . English documents were parsed with the syntactic analyzer (Lin, 1993) . In both English and Japanese, we extracted all the dependency triplets(obj, n, v). Here, n refers to a noun which is an object(obj) of a verb v in a sentence. 3 Hereafter, we describe the Reuters English dependency triplet as vn r , and that of Mainichi as vn m . The method to retrieve bilingual correspondences consists of two sub-steps: document-based retrieval and sentencebased retrieval.", "cite_spans": [ { "start": 153, "end": 179, "text": "(Kudo and Matsumoto, 2003)", "ref_id": "BIBREF3" }, { "start": 240, "end": 251, "text": "(Lin, 1993)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Acquisition of Bilingual Collocations", "sec_num": "2.3" }, { "text": "We extract vn r and vn m pairs from the results of relevant documents:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document-based retrieval", "sec_num": "2.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "{vnr, vnm} s.t. \u2203 d r i vnr, \u2203 d m j vnm BM25(d r i , d m mt j ) \u2265 L \u03b8 .", "eq_num": "(4)" } ], "section": "Document-based retrieval", "sec_num": "2.3.1" }, { "text": "Next, we estimate the bilingual correspondences according to the \u03c7 2 (vn r , vn m ) statistics shown in Eq. (1). In Eq. (1), we replace r by vn r and m by vn m . 
f (r, m) is replaced by f (vn r , vn m ), i.e., the co-occurrence frequency of vn r and vn m .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document-based retrieval", "sec_num": "2.3.1" }, { "text": "We note that bilingual correspondences obtained by document-based retrieval are not reliable. This is because many verb-noun collocations appear in a pair of relevant documents, as can be seen from Figure 1 . Therefore, we applied sentence-based retrieval to the results obtained by document-based retrieval. First, we extract vn r and vn m pairs the \u03c7 2 values of which are higher than 0. Next, for each vn r and vn m pair, we assign sentence-based similarity: Here, Set r and Set m are a set of sentences that include vn r and vn m , respectively. The similarity between S vn r and S vn m is shown in Eq. (6).", "cite_spans": [], "ref_spans": [ { "start": 198, "end": 206, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Sentence-based retrieval", "sec_num": "2.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sim(S vnr, S vnm) = co(S vnr \u2229 S mt vnm) | S vnr | + | S mt vnm | \u22122co(S vnr \u2229 S mt vnm) + 2 ,", "eq_num": "(6)" } ], "section": "Sentence-based retrieval", "sec_num": "2.3.2" }, { "text": "where |X| is the number of content words in a sentence X, and co(S vn r \u2229 S mt vn m ) refers to the number of content words that appear in both S vn r and S mt vn m . S mt vn m is a translation result of S vm m . We retrieved vn r and vn m as a bilingual lexicon that satisfies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-based retrieval", "sec_num": "2.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "{vnr, vnm} = a r g m a x {vn r \u2208BP (vnm), vnm} S sim(vn r , vnm) ,", "eq_num": "(7)" } ], "section": "Sentence-based retrieval", "sec_num": "2.3.2" }, { "text": "where BP (vnm) is a set of bilingual verb-noun pairs, each of which includes vn m on the Japanese side.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-based retrieval", "sec_num": "2.3.2" }, { "text": "3.1 Integrating hierarchies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We used Reuters'96 and UDC code hierarchies. The Reuters'96 corpus from 20th Aug. 1996 to 19th Aug. 1997 consists of 806,791 documents organized into coarse-grained categories, i.e., 126 categories with a four-level hierarchy. The RWCP corpus labeled with UDC codes selected from 1994 Mainichi newspaper consists of 27,755 documents organized into a fine-grained categories, i.e., 9,951 categories with a seven-level hierarchy (RWCP., 1998). We used Japanese-English and English-Japanese MT software (Internet Honyakuno-Ousama for Linux, Ver.5, IBM Corp.) for CLTC. We divided both Reuters'96 (from 20th Aug. 1996 to 19th May 1997) and RWCP corpora into two equal sets: a training set to train SVM classifiers, and a test set for TC to generate pairs of similar categories. We divided the test set into two parts: the first was used to estimate thresholds, i.e., a decision threshold b used in CLTC, and lower bound L \u03c7 2 ; and the second was used to generate pairs of similar categories using the threshold. We chose b = 0 for each level of a hierarchy. 
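The greedy top-down classification of Section 2.1.1, with its per-level decision threshold b, can be sketched as follows; children() and svm_score() are assumed interfaces standing in for the trained one-against-the-rest SVM classifiers, not the authors' code.

```python
def classify_top_down(doc, root, children, svm_score, b=0.0):
    """Greedy top-down descent through a category hierarchy (Section 2.1.1).

    children(node) returns the child categories of a node; svm_score(c, doc)
    returns the one-against-the-rest SVM decision value of doc for child c.
    A branch is followed only while the per-level decision threshold b is
    cleared, mirroring the Boolean b(L_1) && ... && b(L_m).
    """
    path, node = [], root
    while children(node):                            # stop once a leaf is reached
        scored = [(svm_score(c, doc), c) for c in children(node)]
        best_score, best = max(scored, key=lambda t: t[0])
        if best_score <= b:                          # threshold not cleared: stop here
            break
        path.append(best)
        node = best
    return path                                      # assigned categories, top to bottom
```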
The lower bound L \u03c7 2 was .003.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.1.1" }, { "text": "We selected 109 categories from Reuters and 4,739 categories from Mainichi, which have at least five documents in each set. We used content words for both English and Japanese documents. We compared the results obtained by hierarchical approach to those obtained by the flat non-hierarchical approach. Moreover, in the hierarchical approach, we applied a Boolean function to each test document. For evaluation of category correspondences, we used F1-score (F1) which is a measure that balances precision (Prec) and recall (Rec). Let Cor be a set of correct category pairs. 4 The precise definitions of the precision and recall of the task are given below: Table 1 shows F1 of category correspondences with L \u03c7 2 = .003. \"Mai & Reu\" shows the results obtained by our method. \"Mai\" and \"Reu\" show the results using only one hierarchy. For example, \"Mai\" shows the results in which both Mainichi and translated Reuters documents are classified into categories with Mainichi hierarchy, and estimated category correspondences.", "cite_spans": [ { "start": 573, "end": 574, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 656, "end": 663, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.1.1" }, { "text": "Prec = | {(r, m) | (r, m) \u2208 Cor, \u03c7 2 (r, m) \u2265 L \u03c7 2 } | | {(r, m) | \u03c7 2 (r, m) \u2265 L \u03c7 2 } | Rec = | {(r, m) | (r, m) \u2208 Cor, \u03c7 2 (r, m) \u2265 L \u03c7 2 } | | {(r, m) | (r, m) \u2208 Cor} |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.1.1" }, { "text": "Integrating hierarchies is more effective than only a single hierarchy. Moreover, we found advantages in the F1 for the hierarchical approach (\"Hierarchy\" in Table 1 ) in comparison with a baseline flat approach (\"Flat\"). We note that the result of \"Mai\" was worse than that of \"Reu\" in both approaches. One reason is that the accuracy of TC. The micro-average F1 of TC for Reuters hierarchy was .815, while that of Mainichi was .673, as Mainichi hierarchy consists of many categories, and the number of training data for each category were smaller than those of Reuters. The results obtained by our method depend on the performance of TC. Therefore, it will be necessary to examine some semi-supervised learning techniques to improve classification accuracy. 3.2 Relevant document retrieval", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "3.1.2" }, { "text": "The training data for choosing the lower bound L \u03b8 used in the relevant document retrieval is Reuters and RWCP from 13th to 21st Jun. 1997. The difference in dates between them is less than \u00b1 3 days. For example, when the date of the RWCP document is 18th Jun., the corresponding Reuters date is from 15th to 21st Jun. We chose L \u03b8 that maximized the average F1 among them. Table 2 shows the test data, i.e., the total number of collected documents and the number of related documents collected manually for the evaluation. 
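As a sketch of how a Reuters document and a translated Mainichi document are scored in this retrieval step, the following function implements the BM25 similarity of Eq. (3) with k1 = 1, b = 1, and k3 = 1,000; the argument names and dictionary interface are ours.

```python
import math
from collections import Counter

def bm25(reuters_doc, mt_mainichi_doc, doc_freq, N, avdl,
         k1=1.0, b=1.0, k3=1000.0):
    """BM25 similarity of Eq. (3).

    reuters_doc and mt_mainichi_doc are token lists (d^r_i and d^m_mt_j),
    doc_freq[w] is the number of Reuters documents in category r that contain
    w, N is the number of Reuters documents in that category, and avdl is the
    average document length in words.
    """
    tf_d = Counter(reuters_doc)
    dl = len(reuters_doc)
    K = k1 * ((1 - b) + b * dl / avdl)
    score = 0.0
    for w, qtf in Counter(mt_mainichi_doc).items():
        n = doc_freq.get(w, 0)
        w1 = math.log((N - n + 0.5) / (n + 0.5))         # w^(1)
        tf = tf_d.get(w, 0)
        score += w1 * (k1 + 1) * tf / (K + tf) * (k3 + 1) * qtf / (k3 + qtf)
    return score
```

Document pairs whose score exceeds the lower bound L_theta are treated as relevant, as in Section 2.2.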
5 We implemented the following approaches including related work, and compared these results with those obtained by our methods, Int hi & Eng.", "cite_spans": [ { "start": 524, "end": 525, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 374, "end": 381, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.2.1" }, { "text": "chy are not used in the approach. The approach is the same as the method reported by Collier et al. (1998) except for term weights and similarities. We calculate similarities between Reuters and translated Mainichi documents, where the difference in dates is less than \u00b1 3 days. (No hi & Eng).", "cite_spans": [ { "start": 85, "end": 106, "text": "Collier et al. (1998)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "No hierarchy: Categories with each hierar-", "sec_num": "1." }, { "text": "The approach uses only Reuters hierarchy (we call this Reu Hierarchy).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchy:", "sec_num": "2." }, { "text": "Reuters documents and translated Mainichi documents are classified into categories with Reuters hierarchy. We calculate BM25 between Reuters and Mainichi documents within the same category. The procedure is applied for all categories of the hierarchies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchy:", "sec_num": "2." }, { "text": "The judgment of relevant documents was the same as our method: if the value of similarity between two documents is higher than a lower bound L \u03b8 , we regarded them as relevant documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchy:", "sec_num": "2." }, { "text": "The retrieval results are shown in Table 3 and Figure 4 . Table 3 shows best performance of each method against L \u03b8 . As can be seen clearly from Table 3 and Figure 4 , the results with integrating hierarchies improved overall performance. Figure 4: F1 of retrieving relevant documents Table 4 shows the total number of document pairs (P), Reuters (E), and Mainichi documents (J), which satisfied the similarity lower bound L \u03b8 . As shown in Table 4 , the number of retrieved pairs by non-hierarchy approach was much greater than that of \"Int hi & Eng\" at all L \u03b8 values. This is because pairs are retrieved by using only the BM25. Therefore, many of the document pairs retrieved do not have closely related contents, even if L \u03b8 is set to a higher value.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 42, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 47, "end": 55, "text": "Figure 4", "ref_id": null }, { "start": 58, "end": 65, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 146, "end": 153, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 158, "end": 166, "text": "Figure 4", "ref_id": null }, { "start": 286, "end": 293, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 442, "end": 449, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "3.2.2" }, { "text": "The results of a single hierarchy showed recall of .544, while that of the integrating hierarchies was .585 at the same L \u03b8 value (20), as shown in Table 3 . This is because in the single hierarchy method, there are some translated Mainichi documents that are not correctly classified into categories with the Reuters hierarchy. 
For example, \"Hashimoto remarks on fx rates\" in Mainichi documents should be classified into Reuters category \"Forex markets,\" but it was classified into \"Government\". As a result, \"U.S. Treasury has no comment on Hashimoto fx remarks\" in Reuters category \"Forex markets\" and the document \"Hashimoto\" are not retrieved by a single hierarchy approach. In contrast, in the integrating method, these two documents are classified correctly into a pair of similar categories, i.e., the \"U.S Treasury\" is classified into Reuters category \"Forex markets\", and the \"Hashimoto\" is classified into Mainichi category \"Money and banking\". These observations show that our method contributes to the retrieval of relevant documents. ", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 155, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "3.2.2" }, { "text": "Finally, we report the results of bilingual verbnoun collocations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Verb-noun Collocations", "sec_num": "3.3" }, { "text": "The data for relevant document retrieval was the Reuters and Mainichi corpora from the same period, i.e., 20th Aug. 1996 to 19th Aug. 1997. The total number of Reuters documents was 806,791, and that of Mainichi was 119,822. As the number of Reuters documents was far greater than that of Mainichi documents, we estimated collocations from the results of cross-lingually retrieving relevant English documents with Japanese query documents. The difference in dates between them was less than \u00b1 3 days. Table 5 shows retrieved relevant documents that showed best performance of each method against L \u03b8 . From these data, we extracted bilingual verb-noun collocations. Table 6 shows the numbers of English and Japanese monolingual verb-noun collocations, those of candidate collocations against which bilingual correspondences were estimated, and those of correct collocations. \"D & S\" of candidate collocations indicates the number of collocations when we applied both document-and sentencebased retrieval. \"Doc\" indicates the number of collocations when we applied only document-based retrieval. \"D & S\" and \"Doc\" of correct collocations show the number of correct collocations in the topmost 1,000 according to sentence similarity and the \u03c7 2 statistics, respectively. As shown in Table 6 , the results obtained by integrating hierarchies showed a 15.1% (32.8 -17.7) improvement over the baseline non-hierarchy model, and a 6.0% (32.8 -26.8 ) improvement over use of a single hierarchy. We manually compared those 328 bilingual collocations with an existing bilingual lexicon where 78 of them (23.8%) were not included in it. 6 Moreover, 168 of 328 (51.2%) were not correctly translated by Japanese-English MT software. 
7 These observations clearly support the usefulness of the method.", "cite_spans": [ { "start": 1429, "end": 1440, "text": "(32.8 -26.8", "ref_id": null }, { "start": 1626, "end": 1627, "text": "6", "ref_id": null }, { "start": 1720, "end": 1721, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 501, "end": 508, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 666, "end": 673, "text": "Table 6", "ref_id": "TABREF5" }, { "start": 1281, "end": 1288, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.3.1" }, { "text": "It is very important to compare the column \"rate\" for the numbers of candidate collocations with that for the numbers of correct collocations. In all approaches, sentence-based retrieval was effective in removing useless collocations, especially in our method, about 1.5% of the size obtained by \"Doc\" was retrieved, while about 4.6(328/72) times the number of correct collocations were obtained in the topmost 1,000 collocations. These observations showed that sentencebased retrieval contributes to a marked reduction in the number of useless collocations without a decrease in accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.3.2" }, { "text": "The last column in Table 6 shows the results using Inverse Rank Score (IRS), which is a measure of system performance by considering the rank of correct bilingual collocations within the candidate collocations. It is the sum of the inverse rank of each matching collocations, e.g., correct collocations by manual evaluation matches at ranks 2 and 4 give an IRS of 1 2 + 1 4 = 0.75. With at most 1,000 collocations, the maximum IRS score is 7.485, and the higher the IRS value, the better the system performance. As shown in Table 6 , the performance by integrating hierarchies was much better than that of the non-hierarchical approach, and slightly better than those obtained by a single hierarchy. However, correct retrieved collocations were different from each other. Table 7 lists examples of bilingual collocations obtained by a single hierarchy and integrating hierarchies. The category is \"Sport\". 8 (x,y) of category pair in Table 7 refer to Reuters and Mainichi category correspondences. Examples in Table 7 denote only English verb-noun collocations. It is interesting to note that 12 of 154 collocations, such as \"earn medal\" and \"block shot\" obtained by integrating hierarchies were also obtained by a single hierarchy approach. However, other collocations such as \"get strikeout\" and \"make birdie\" which were obtained in a particular category (Sport, Baseball) and (Sport, Golf), did not appear in either of the results using a single hierarchy or a non hierarchical approach. These observations again clearly support the usefulness of our method.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 26, "text": "Table 6", "ref_id": "TABREF5" }, { "start": 524, "end": 531, "text": "Table 6", "ref_id": "TABREF5" }, { "start": 772, "end": 779, "text": "Table 7", "ref_id": "TABREF6" }, { "start": 934, "end": 941, "text": "Table 7", "ref_id": "TABREF6" }, { "start": 1010, "end": 1017, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "3.3.2" }, { "text": "Much of the previous work on finding bilingual lexicons used comparable corpora. One attempt involved directly retrieving bilingual lexicons from corpora. 
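For reference, the Inverse Rank Score reported in Table 6 can be computed as below; the function name and the list-of-flags interface are illustrative only.

```python
def inverse_rank_score(correct_flags):
    """Inverse Rank Score: sum of 1/rank over the (1-based) ranks at which a
    correct bilingual collocation appears among the ranked candidates."""
    return sum(1.0 / rank
               for rank, correct in enumerate(correct_flags, start=1) if correct)

# Correct collocations at ranks 2 and 4 give 1/2 + 1/4 = 0.75, as in the text.
print(inverse_rank_score([False, True, False, True, False]))   # 0.75
```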
One approach focused on extracting word translations (Gaussier et al., 2004) . The techniques were based on the idea that semantically similar words appear in similar contexts. Unlike parallel corpora, the position of a word in a document is useless for translation into the other language. In these techniques, the frequency of words in the monolingual document is calculated and their contextual similarity is measured across languages. Another approach focused on sentence extraction (Fung and Cheung, 2004) . One limitation of all these methods is that they need to control the experimental evaluation to avoid estimation of every bilingual lexicon appearing in comparable corpora.", "cite_spans": [ { "start": 208, "end": 231, "text": "(Gaussier et al., 2004)", "ref_id": "BIBREF2" }, { "start": 642, "end": 665, "text": "(Fung and Cheung, 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "4" }, { "text": "The alternative consists of two steps: first, crosslingual relevant documents are retrieved from comparable corpora, then bilingual term correspondences within these relevant documents are estimated. Thus, the accuracy depends on the performance of relevant documents retrieval. Much of the previous work in finding relevant documents used MT systems or existing bilingual lexicons to translate one language into another. Document pairs are then retrieved using some measure of document similarity. Another approach to retrieving relevant documents involves the collection of relevant document URLs from the WWW (Resnik and Smith, 2003) . Utsuro et al. (2003) proposed a method for acquiring bilingual lexicons that involved retrieval of relevant English and Japanese documents from news sites on the WWW. Our work is also applicable to retrieval of relevant documents on the web because it estimates every bilingual lexicon only appearing in a set of smaller documents belonging to pairs of similar categories. Munteanu and Marcu (2006) proposed a method for extracting parallel subsentential fragments from very non-parallel bilingual corpora. The method is based on the fact that very non-parallel corpora has none or few good sentence pairs, while existing methods for exploiting comparable corpora look for parallel data at the sentence level. Their methodology is the first aimed at detecting sub-sentential correspondences, while they have not reported that the method is also applicable for large amount of data with good performance, especially in the case of large-scale evaluation such as that presented in this paper.", "cite_spans": [ { "start": 612, "end": 636, "text": "(Resnik and Smith, 2003)", "ref_id": "BIBREF6" }, { "start": 639, "end": 659, "text": "Utsuro et al. (2003)", "ref_id": "BIBREF9" }, { "start": 1012, "end": 1037, "text": "Munteanu and Marcu (2006)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "4" }, { "text": "We have developed an approach to bilingual verbnoun collocations from non-parallel corpora. The results showed the effectiveness of the method. 
Future work will include: (i) applying the method to retrieve other types of collocations (Smadja, 1993) , and (ii) evaluating the method using Internet directories.", "cite_spans": [ { "start": 234, "end": 248, "text": "(Smadja, 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The reason for retrieving pairs of categories is that each categorical hierarchy is defined by individual human experts, and different linguists often identify different numbers of categories for the same concepts. Therefore, it is impossible to handle full integration of hierarchies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We set \u03c7 2 value of each element of SP (r,m) to a higher value of either (r,m) \u2208 SPr or (r,m) \u2208 SPm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the particle \"wo\" as an object relationship in Japanese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The classification was determined to be correct if the two human judges agreed on the evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The classification was determined by two human.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used an existing bilingual lexicon, Eijiro on the Web, 1.91 million words, (http://www.alc.co.jp) for evaluation. If collocations were not included, the estimation was determined by two human judges.7 The number of words in the Japanese-English dictionary (Internet Honyaku-no-Ousama for Linux, Ver.5, IBM Corp.) was about 250,000.8 We obtained 98 category pairs in the Sport category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Machine Translation vs. Dictionary Term Translation -a Comparison for English-Japanese News Article Alignment", "authors": [ { "first": "N", "middle": [], "last": "Collier", "suffix": "" }, { "first": "H", "middle": [], "last": "Hirakawa", "suffix": "" }, { "first": "A", "middle": [], "last": "Kumano", "suffix": "" } ], "year": 1998, "venue": "Proc. of 36th ACL and 17th COLING", "volume": "", "issue": "", "pages": "263--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collier, N., H. Hirakawa, and A. Kumano. 1998. Machine Translation vs. Dictionary Term Translation -a Compar- ison for English-Japanese News Article Alignment. In Proc. of 36th ACL and 17th COLING., pages 263-267.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Mining Very Non-Parallel Corpora: Parallel Sentence and Lexicon Extraction vie Bootstrapping and EM", "authors": [ { "first": "P", "middle": [], "last": "Fung", "suffix": "" }, { "first": "P", "middle": [], "last": "Cheung", "suffix": "" } ], "year": 2004, "venue": "Proc. of EMNLP2004", "volume": "", "issue": "", "pages": "57--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fung, P. and P. Cheung. 2004. Mining Very Non-Parallel Corpora: Parallel Sentence and Lexicon Extraction vie Bootstrapping and EM. In Proc. 
of EMNLP2004., pages 57-63.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Geometric View on Bilingual Lexicon Extraction from Comparable Corpora", "authors": [ { "first": "E", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "I", "middle": [], "last": "H-M. Renders", "suffix": "" }, { "first": "C", "middle": [], "last": "Matveeva", "suffix": "" }, { "first": "H", "middle": [], "last": "Goutte", "suffix": "" }, { "first": "", "middle": [], "last": "D\u00e9jean", "suffix": "" } ], "year": 2004, "venue": "Proc. of 42nd ACL", "volume": "", "issue": "", "pages": "527--534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gaussier, E., H-M. Renders, I. Matveeva, C. Goutte, and H. D\u00e9jean. 2004. A Geometric View on Bilingual Lex- icon Extraction from Comparable Corpora. In Proc. of 42nd ACL, pages 527-534.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Fast Methods for Kernelbased Text Analysis", "authors": [ { "first": "T", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proc. of 41th ACL", "volume": "", "issue": "", "pages": "24--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kudo, T. and Y. Matsumoto. 2003. Fast Methods for Kernel- based Text Analysis. In Proc. of 41th ACL, pages 24-31.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Principle-based Parsing without Overgeneration", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1993, "venue": "Proc. of 31st ACL", "volume": "", "issue": "", "pages": "112--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D. 1993. Principle-based Parsing without Overgenera- tion. In Proc. of 31st ACL, pages 112-120.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Extracting Parallel Sub-Sentential Fragments from Non-Parallel Corpora", "authors": [ { "first": "D", "middle": [ "S" ], "last": "Munteanu", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2006, "venue": "Proc. of 21st COLING and 44th ACL", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Munteanu, D. S. and D. Marcu. 2006. Extracting Parallel Sub-Sentential Fragments from Non-Parallel Corpora. In Proc. of 21st COLING and 44th ACL., pages 81-88.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Web as a Parallel Corpus", "authors": [ { "first": "P", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "3", "pages": "349--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, P. and N. A. Smith. 2003. The Web as a Parallel Corpus. Computational Linguistics., 29(3):349-380.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Rwc Text Database", "authors": [ { "first": "", "middle": [], "last": "Rwcp", "suffix": "" } ], "year": 1998, "venue": "Real World Computing Partnership", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "RWCP. 1998. Rwc Text Database. 
In Real World Computing Partnership.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Effect of Cross-Language IR in Bilingual Lexicon Acquisition from Comparable Corpora", "authors": [ { "first": "T", "middle": [], "last": "Utsuro", "suffix": "" }, { "first": "T", "middle": [], "last": "Horiuchi", "suffix": "" }, { "first": "T", "middle": [], "last": "Hamamoto", "suffix": "" }, { "first": "K", "middle": [], "last": "Hino", "suffix": "" }, { "first": "T", "middle": [], "last": "Nakayama", "suffix": "" } ], "year": 2003, "venue": "Proc. of 10th EACL", "volume": "", "issue": "", "pages": "355--362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Utsuro, T., T. Horiuchi, T. Hamamoto, K. Hino, and T. Nakayama. 2003. Effect of Cross-Language IR in Bilingual Lexicon Acquisition from Comparable Corpora. In Proc. of 10th EACL., pages 355-362.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Nature of Statistical Learning Theory", "authors": [ { "first": "V", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vapnik, V. 1995. The Nature of Statistical Learning Theory. Springer.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Relevant document pairs hierarchies, and to improve the efficiency and efficacy of retrieving collocations.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Retrieving relevant documents document that belongs to the Mainichi category m. Here, s and t are the number of documents classified into r and m, respectively. Each Reuters document d r i is translated into a Japanese document d r mt i by an MT system. Each Mainichi document d m j is translated into an English document d m mt j . Retrieving relevant documents itself is quite simple. As illustrated in", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "S sim(vnr, vnm) = max S vnr \u2208Setr ,S vnm\u2208Setm sim(S vnr, S vnm) . (5)", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "content": "
Method | Hierarchy (Prec / Rec / F1) | Flat (Prec / Rec / F1)
Mai & Reu | .503 / .463 / .482 | .462 / .389 / .422
Reu | .342 / .329 / .335 | .240 / .296 / .265
Mai | .157 / .293 / .204 | .149 / .277 / .194
", "text": "Performance of category correspondences", "num": null, "type_str": "table" }, "TABREF1": { "html": null, "content": "
Jap \u2192 Eng (\u00b1 3) | Total # of doc.: Jap | Total # of doc.: Eng | Total # of relevant doc.
26/06/97 | 391 | 15,482 | 513
", "text": "Data for retrieving documents", "num": null, "type_str": "table" }, "TABREF2": { "html": null, "content": "
Approach | Prec | Rec | F1-score | L \u03b8
No hi & Eng | .417 | .322 | .363 | 40
Reu Hierarchy | .356 | .544 | .430 | 20
Int hi & Eng | .839 | .585 | .689 | 20
", "text": "Retrieval performance", "num": null, "type_str": "table" }, "TABREF3": { "html": null, "content": "
Approach | P/E/J | Lower Bound L \u03b8: 100 | 80 | 60 | 40 | 20
No hi & Eng | P | 188 | 319 | 630 | 1,229 | 3,000
No hi & Eng | E | 150 | 272 | 543 | 987 | 2,053
No hi & Eng | J | 13 | 16 | 19 | 22 | 25
Reu Hierarchy | P | 12 | 17 | 25 | 47 | 186
Reu Hierarchy | E | 8 | 12 | 19 | 36 | 142
Reu Hierarchy | J | 8 | 10 | 12 | 18 | 25
Int hi & Eng | P | 46 | 61 | 83 | 135 | 218
Int hi & Eng | E | 32 | 43 | 60 | 99 | 158
Int hi & Eng | J | 4 | 4 | 5 | 7 | 9
", "text": "# of documents vs L \u03b8", "num": null, "type_str": "table" }, "TABREF4": { "html": null, "content": "
Approach & (L \u03b8 ) | pairs | Eng | Jap
No hi & Eng (40) | 3,042,166 | 428,042 | 70,080
Reu Hierarchy (20) | 27,181,243 | 43,018 | 199,452
Int hi & Eng (20) | 81,904,243 | 45,965 | 654,787
", "text": "# of J/E document pairs with L \u03b8", "num": null, "type_str": "table" }, "TABREF5": { "html": null, "content": "
Approach & (L \u03b8 ) | Monolingual patterns: Jap / Eng | Candidate collocations: D & S / Doc / rate (D & S/Doc, %) | Correct collocations (top 1,000): D & S / Doc / rate (D & S/Doc) | Inverse rank score (top 1,000): D & S / Doc
No hi & Eng (40) | 25,163 / 44,762 | 25,163 / 6,976,214 / .361 | 177 / 62 / 2.9 | 1.35 / 0.71
Reu Hierarchy (20) | 10,576 / 37,022 | 10,576 / 1,272,102 / .831 | 268 / 64 / 4.2 | 2.24 / 1.41
Int hi & Eng (20) | 8,347 / 21,524 | 8,347 / 560,472 / 1.489 | 328 / 72 / 4.6 | 2.33 / 1.46
", "text": "Numbers of monolingual and bilingual verb-noun collocations", "num": null, "type_str": "table" }, "TABREF6": { "html": null, "content": "
Approach & (L \u03b8 ) | Category or category pair | # of collocations: D & S / Doc | # of correct collocations (%) | Examples (English)
Reu Hierarchy (20) | Sport | 262 / 19,391 | 36 (13.7) | create chance, earn medal, feel pressure, block shot, establish record, take chance
Int hi & Eng (20) | (Sport, Baseball) | 110 / 8,838 | 24 (21.8) | get strikeout, leave base, throw pitch
Int hi & Eng (20) | (Sport, Relay) | 177 / 3,418 | 18 (10.2) | lead ranking, run km, win athletic
Int hi & Eng (20) | (Sport, Tennis) | 115 / 2,656 | 32 (27.8) | lose prize money, play exhibition game
Int hi & Eng (20) | (Sport, Golf) | 131 / 2,654 | 28 (21.4) | make birdie, have birdie, hole putt, miss putt
Int hi & Eng (20) | (Sport, Soccer) | 86 / 1,317 | 34 (39.5) | block shot, score defender, give free kick
Int hi & Eng (20) | (Sport, Sumo) | 75 / 773 | 2 (2.7) | lead sumo, set championship
Int hi & Eng (20) | (Sport, Ski jump) | 68 / 661 | 10 (14.7) | postpone downhill, earn medal
Int hi & Eng (20) | (Sport, Football) | 37 / 461 | 6 (16.2) | play football, lease football stadium
", "text": "Examples of bilingual verb-noun collocations", "num": null, "type_str": "table" } } } }