{ "paper_id": "P09-1049", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:54:05.005980Z" }, "title": "Bilingual Co-Training for Monolingual Hyponymy-Relation Acquisition", "authors": [ { "first": "Jong-Hoon", "middle": [], "last": "Oh", "suffix": "", "affiliation": { "laboratory": "Language Infrastructure Group", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikaridai Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "" }, { "first": "Kiyotaka", "middle": [], "last": "Uchimoto", "suffix": "", "affiliation": { "laboratory": "Language Infrastructure Group", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikaridai Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "uchimoto@nict.go.jp" }, { "first": "Kentaro", "middle": [], "last": "Torisawa", "suffix": "", "affiliation": { "laboratory": "Language Infrastructure Group", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikaridai Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "torisawa@nict.go.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes a novel framework called bilingual co-training for a largescale, accurate acquisition method for monolingual semantic knowledge. In this framework, we combine the independent processes of monolingual semanticknowledge acquisition for two languages using bilingual resources to boost performance. We apply this framework to largescale hyponymy-relation acquisition from Wikipedia. Experimental results show that our approach improved the F-measure by 3.6-10.3%. We also show that bilingual co-training enables us to build classifiers for two languages in tandem with the same combined amount of data as required for training a single classifier in isolation while achieving superior performance.", "pdf_parse": { "paper_id": "P09-1049", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes a novel framework called bilingual co-training for a largescale, accurate acquisition method for monolingual semantic knowledge. In this framework, we combine the independent processes of monolingual semanticknowledge acquisition for two languages using bilingual resources to boost performance. We apply this framework to largescale hyponymy-relation acquisition from Wikipedia. Experimental results show that our approach improved the F-measure by 3.6-10.3%. We also show that bilingual co-training enables us to build classifiers for two languages in tandem with the same combined amount of data as required for training a single classifier in isolation while achieving superior performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Acquiring and accumulating semantic knowledge are crucial steps for developing high-level NLP applications such as question answering, although it remains difficult to acquire a large amount of highly accurate semantic knowledge. This paper proposes a novel framework for a large-scale, accurate acquisition method for monolingual semantic knowledge, especially for semantic relations between nominals such as hyponymy and meronymy. 
We call the framework bilingual cotraining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "The acquisition of semantic relations between nominals can be seen as a classification task of semantic relations -to determine whether two nominals hold a particular semantic relation (Girju et al., 2007) . Supervised learning methods, which have often been applied to this classification task, have shown promising results. In those methods, however, a large amount of training data is usually required to obtain high performance, and the high costs of preparing training data have always been a bottleneck.", "cite_spans": [ { "start": 185, "end": 205, "text": "(Girju et al., 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "Our research on bilingual co-training sprang from a very simple idea: perhaps training data in a language can be enlarged without much cost if we translate training data in another language and add the translation to the training data in the original language. We also noticed that it may be possible to further enlarge the training data by translating the reliable part of the classification results in another language. Since the learning settings (feature sets, feature values, training data, corpora, and so on) are usually different in two languages, the reliable part in one language may be overlapped by an unreliable part in another language. Adding the translated part of the classification results to the training data will improve the classification results in the unreliable part. This process can also be repeated by swapping the languages, as illustrated in Figure 1 . Actually, this is nothing other than a bilingual version of co-training (Blum and Mitchell, 1998 ", "cite_spans": [ { "start": 955, "end": 979, "text": "(Blum and Mitchell, 1998", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 872, "end": 880, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "Let us show an example in our current task: hyponymy-relation acquisition from Wikipedia. Our original approach for this task was super-vised learning based on the approach proposed by , which was only applied for Japanese and achieved around 80% in F-measure. In their approach, a common substring in a hypernym and a hyponym is assumed to be one strong clue for recognizing that the two words constitute a hyponymy relation. For example, recognizing a proper hyponymy relation between two Japanese words, \u00de (kouso meaning enzyme) and A\u00c4$ F\u00de (kasuibunkaikouso meaning hydrolase), is relatively easy because they share a common suffix: kouso. On the other hand, judging whether their English translations (enzyme and hydrolase) have a hyponymy relation is probably more difficult since they do not share any substrings. A classifier for Japanese will regard the hyponymy relation as valid with high confidence, while a classifier for English may not be so positive. In this case, we can compensate for the weak part of the English classifier by adding the English translation of the Japanese hyponymy relation, which was recognized with high confidence, to the English training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1: Concept of bilingual co-training", "sec_num": null }, { "text": "In addition, if we repeat this process by swapping English and Japanese, further improvement may be possible. 
Furthermore, the reliable parts that are automatically produced by a classifier can be larger than manually tailored training data. If this is the case, the effect of adding the translation to the training data can be quite large, and the same level of effect may not be achievable by a reasonable amount of labor for preparing the training data. This is the whole idea.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1: Concept of bilingual co-training", "sec_num": null }, { "text": "Through a series of experiments, this paper shows that the above idea is valid at least for one task: large-scale monolingual hyponymy-relation acquisition from English and Japanese Wikipedia. Experimental results showed that our method based on bilingual co-training improved the performance of monolingual hyponymy-relation acquisition about 3.6-10.3% in the F-measure. Bilingual co-training also enables us to build classifiers for two languages in tandem with the same combined amount of data as would be required for training a single classifier in isolation while achieving superior performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1: Concept of bilingual co-training", "sec_num": null }, { "text": "People probably expect that a key factor in the success of this bilingual co-training is how to translate the training data. We actually did translation by a simple look-up procedure in the existing translation dictionaries without any machine trans-lation systems or disambiguation processes. Despite this simple approach, we obtained consistent improvement in our task using various translation dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1: Concept of bilingual co-training", "sec_num": null }, { "text": "This paper is organized as follows. Section 2 presents bilingual co-training, and Section 3 precisely describes our system. Section 4 describes our experiments and presents results. Section 5 discusses related work. Conclusions are drawn and future work is mentioned in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1: Concept of bilingual co-training", "sec_num": null }, { "text": "Let S and T be two different languages, and let CL be a set of class labels to be obtained as a result of learning/classification. To simplify the discussion, we assume that a class label is binary; i.e., the classification results are \"yes\" or \"no.\" Thus, CL = {yes, no}. Also, we denote the set of all nonnegative real numbers by R + .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "Assume X = X S \u222a X T is a set of instances in languages S and T to be classified. In the context of a hyponymy-relation acquisition task, the instances are pairs of nominals. Then we assume that classifier c assigns class label cl in CL and confidence value r for assigning the label, i.e., c(x) = (x, cl, r), where x \u2208 X, cl \u2208 CL, and r \u2208 R + . Note that we used support vector machines (SVMs) in our experiments and (the absolute value of) the distance between a sample and the hyperplane determined by the SVMs was used as confidence value r. The training data are denoted by L \u2282 X \u00d7CL, and we denote the learning by function LEARN ; if classifier c is trained by training data L, then c = LEARN (L). Particularly, we denote the training sets for S and T that are manually prepared by L S and L T , respectively. 
Also, bilingual instance dictionary D BI is defined as the translation pairs of instances in X S and X T . Thus,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "D BI = {(s, t)} \u2282 X S \u00d7 X T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "In the case of hyponymy-relation acquisition in English and Japanese, (s, t) \u2208 D BI could be (s=(enzyme, hydrolase), t=(\u00de (meaning enzyme), A\u00c4$F \u00de (meaning hydrolase))).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "Our bilingual co-training is given in Figure 2 . In the initial stage, c 0 S and c 0 T are learned with manually labeled instances L S and L T (lines 2-5). Then c i S and c i T are applied to classify instances in X S and X T (lines 6-7). Denote CR i S as a set of the classification results of c i S on instances X S that is not in L i S and is registered in D BI . Lines 10-18 describe a way of selecting from CR i S newly la-", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 46, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "1: i = 0 2: L 0 S = L S ; L 0 T = L T 3: repeat 4: c i S := LEARN (L i S )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "c i T := LEARN (L i T )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "CR i S := {c i S (x S )|x S \u2208 X S , \u2200cl (x S , cl) / \u2208 L i S , \u2203x T (x S , x T ) \u2208 D BI } 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "CR i T := {c i T (x T )|x T \u2208 X T , \u2200cl (x T , cl) / \u2208 L i T , \u2203x S (x S , x T ) \u2208 D BI } 8: L (i+1) S := L i S 9: L (i+1) T := L i T 10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": ":", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "for each (x S , cl S , r S ) \u2208 T opN(CR i S ) do", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "for each x T such that (x S , x T ) \u2208 D BI and (x T , cl T , r T ) \u2208 CR i T do 12: if r S > \u03b8 then 13: if r T < \u03b8 or cl S = cl T then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "14: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "L (i+1) T := L (i+1) T \u222a {(x T , cl S )}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "for each (x T , cl T , r T ) \u2208 T opN(CR i T ) do 20: for each x S such that (x S , x T ) \u2208 D BI and (x S , cl S , r S ) \u2208 CR i S do 21: if r T > \u03b8 then 22: 
if r S < \u03b8 or cl S = cl T then 23: L (i+1) S := L (i+1) S \u222a {(x S , cl T )}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "i = i + 1 29: until a fixed number of iterations is reached", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "in T . T opN(CR i S ) is a set of c i S (x), whose r S is top-N highest in CR i S . (In our experiments, N = 900.) During the selection, c i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "S acts as a teacher and c i T as a student. The teacher instructs his student in the class label of x T , which is actually a translation of x S by bilingual instance dictionary D BI , through cl S only if he can do it with a certain level of confidence, say r S > \u03b8, and if one of two other condition meets (r T < \u03b8 or cl S = cl T ). cl S = cl T is a condition to avoid problems, especially when the student also has a certain level of confidence in his opinion on a class label but disagrees with the teacher: r T > \u03b8 and cl S = cl T . In that case, the teacher does nothing and ignores the instance. Condition r T < \u03b8 enables the teacher to instruct his student in the class label of x T in spite of their disagreement in a class label. If every condition is satisfied,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "(x T , cl S ) is added to existing labeled instances L (i+1) T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": ". The roles are reversed in lines 19-27 so that c i T becomes a teacher and c i S a student. Similar to co-training (Blum and Mitchell, 1998) , one classifier seeks another's opinion to select new labeled instances. One main difference between co-training and bilingual co-training is the space of instances: co-training is based on different features of the same instances, and bilingual co-training is based on different spaces of instances divided by languages. Since some of the instances in different spaces are connected by a bilingual instance dictionary, they seem to be in the same space. Another big difference lies in the role of the two classifiers. The two classifiers in co-training work on the same task, but those in bilingual co-training do the same type of task rather than the same task.", "cite_spans": [ { "start": 116, "end": 141, "text": "(Blum and Mitchell, 1998)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Bilingual Co-Training", "sec_num": "2" }, { "text": "Our system, which acquires hyponymy relations from Wikipedia based on bilingual co-training, is described in Figure 3 . The following three main parts are described in this section: candidate extraction, hyponymy-relation classification, and bilingual instance dictionary construction. ", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 117, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Acquisition of Hyponymy Relations from Wikipedia", "sec_num": "3" }, { "text": "We follow to extract hyponymy-relation candidates from English and Japanese Wikipedia. A layout structure is chosen as a source of hyponymy relations because it can provide a huge amount of them 1 , and recognition of the layout structure is easy regardless of languages. 
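As a rough illustration of what this extraction produces (the procedure itself is described in the next paragraph), the following sketch, which is our own Python illustration with a hypothetical dict-based tree encoding rather than the authors' code, pairs every node of a layout tree with each of its descendant nodes.

# Minimal sketch (our own illustration): hyponymy-relation candidates are obtained
# by pairing every node of an article's layout tree with each of its descendants.
def candidate_pairs(tree):
    def descendants(node):
        for child in tree.get(node, []):
            yield child
            yield from descendants(child)
    return [(hyper, hypo) for hyper in tree for hypo in descendants(hyper)]

# toy layout tree for the article 'Tiger' (cf. Figure 4)
tree = {'Tiger': ['Taxonomy', 'Subspecies'], 'Subspecies': ['Siberian tiger']}
print(candidate_pairs(tree))
# [('Tiger', 'Taxonomy'), ('Tiger', 'Subspecies'), ('Tiger', 'Siberian tiger'),
#  ('Subspecies', 'Siberian tiger')]
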
Every English and Japanese Wikipedia article was transformed into a tree structure like Figure 4 , where layout items title, (sub)section headings, and list items in an article were used as nodes in a tree structure. found that some pairs consisting of a node and one of its descendants constituted a proper hyponymy relation (e.g., (TIGER, SIBERIAN TIGER)), and this could be a knowledge source of hyponymy relation acquisition. A hyponymy-relation candidate is then extracted from the tree structure by regarding a node as a hypernym candidate and all its subordinate nodes as hyponym candidates of the hypernym candidate (e.g., (TIGER, TAXON-OMY) and (TIGER, SIBERIAN TIGER) from Figure 4) . 39 M English hyponymy-relation candidates and 10 M Japanese ones were extracted from Wikipedia. These candidates are classified into proper hyponymy relations and others by using the classifiers described below.", "cite_spans": [ { "start": 955, "end": 964, "text": "Figure 4)", "ref_id": null } ], "ref_spans": [ { "start": 360, "end": 368, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Candidate Extraction", "sec_num": "3.1" }, { "text": "We use SVMs (Vapnik, 1995) as classifiers for the classification of the hyponymy relations on the hyponymy-relation candidates. Let hyper be a hypernym candidate, hypo be a hyper's hyponym candidate, and (hyper, hypo) be a hyponymyrelation candidate. The lexical, structure-based, and infobox-based features of (hyper, hypo) in Table 1 are used for building English and Japanese classifiers. Note that SF 3 -SF 5 and IF were not used in but LF 1 -LF 5 and SF 1 -SF 2 are the same as their feature set.", "cite_spans": [ { "start": 12, "end": 26, "text": "(Vapnik, 1995)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Hyponymy-Relation Classification", "sec_num": "3.2" }, { "text": "Let us provide an overview of the feature sets used in . See for more details. Lexical features LF 1 -LF 5 are used to recognize the lexical evidence encoded in hyper and hypo for hyponymy relations. For example, (hyper,hypo) is often a proper hyponymy relation if hyper and hypo share the same head morpheme or word. In LF 1 and LF 2 , such information is provided along with the words/morphemes and the parts of speech of hyper and hypo, which can be multiword/morpheme nouns. TagChunk (Daum\u00e9 III et al., 2005) for English and MeCab (MeCab, 2008) for Japanese were used to provide the lexical features. Several simple lexical patterns 2 were also applied to hyponymy-relation candidates. For example, \"List of artists\" is converted into \"artists\" by lexical pattern \"list of X.\" Hyponymy-relation candidates whose hypernym candidate matches such a lexical pattern are likely to be valid (e.g., (List of artists, Leonardo da Vinci)). We use LF 4 for dealing with these cases. If a typical or frequently used section heading in a Wikipedia article, such as \"History\" or \"References,\" is used as a hyponym candidate in a hyponymy-relation candidate, the hyponymy-relation candidate is usually not a hyponymy relation. 
LF 5 is used to recognize these hyponymy-relation candidates.", "cite_spans": [ { "start": 488, "end": 512, "text": "(Daum\u00e9 III et al., 2005)", "ref_id": "BIBREF4" }, { "start": 517, "end": 548, "text": "English and MeCab (MeCab, 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Hyponymy-Relation Classification", "sec_num": "3.2" }, { "text": "Structure-based features are related to the tree structure of Wikipedia articles from which hyponymy-relation candidate (hyper,hypo) is extracted. SF 1 provides the distance between hyper and hypo in the tree structure. SF 2 represents the type of layout items from which hyper and hypo are originated. These are the feature sets used in .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyponymy-Relation Classification", "sec_num": "3.2" }, { "text": "We also added some new items to the above feature sets. SF 3 represents the types of tree nodes including root, leaf, and others. For example, (hyper,hypo) is seldom a hyponymy relation if hyper is from a root node (or title) and hypo is from a hyper's child node (or section headings). SF 4 and SF 5 represent the structural contexts of hyper and hypo in a tree structure. They can provide evidence related to similar hyponymyrelation candidates in the structural contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyponymy-Relation Classification", "sec_num": "3.2" }, { "text": "An infobox-based feature, IF , is based on a Figure 4 .", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 53, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Hyponymy-Relation Classification", "sec_num": "3.2" }, { "text": "Wikipedia infobox, a special kind of template, that describes a tabular summary of an article subject expressed by attribute-value pairs. An attribute type coupled with the infobox name to which it belongs provides the semantic properties of its value that enable us to easily understand what the attribute value means (Auer and Lehmann, 2007; Wu and Weld, 2007) . For example, infobox template City Japan in Wikipedia article Kyoto contains several attribute-value pairs such as \"Mayor=Daisaku Kadokawa\" as attribute=its value. What Daisaku Kadokawa, the attribute value of mayor in the example, represents is hard to understand alone if we lack knowledge, but its attribute type, mayor, gives a clue-Daisaku Kadokawa is a mayor related to Kyoto. These semantic properties enable us to discover semantic evidence for hyponymy relations. We extract triples (infobox name, attribute type, attribute value) from the Wikipedia infoboxes and encode such information related to hyper and hypo in our feature set IF . 3 ", "cite_spans": [ { "start": 319, "end": 343, "text": "(Auer and Lehmann, 2007;", "ref_id": "BIBREF0" }, { "start": 344, "end": 362, "text": "Wu and Weld, 2007)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Hyponymy-Relation Classification", "sec_num": "3.2" }, { "text": "We used the MAY 2008 version of English Wikipedia and the JUNE 2008 version of Japanese Wikipedia for our experiments. 24,000 hyponymy-relation candidates, randomly selected in both languages, were manually checked to build training, development, and test sets 6 . Around 8,000 hyponymy relations were found in the manually checked data for both languages 7 . 20,000 of the manually checked data were used as a training set for training the initial classifier. The rest were equally divided into development and test sets. 
The development set was used to select the optimal parameters in bilingual co-training and the test set was used to evaluate our system. We used TinySVM (TinySVM, 2002) with a polynomial kernel of degree 2 as a classifier. The maximum iteration number in the bilingual cotraining was set as 100. Two parameters, \u03b8 and T opN, were selected through experiments on the development set. \u03b8 = 1 and T opN=900 showed the best performance and were used as the optimal parameter in the following experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We conducted three experiments to show effects of bilingual co-training, training data size, and bilingual instance dictionaries. In the first two experiments, we experimented with a bilingual instance dictionary derived from Wikipedia crosslanguage links. Comparison among systems based on three different bilingual instance dictionaries is shown in the third experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Precision (P ), recall (R), and F 1 -measure (F 1 ), as in Eq (1), were used as the evaluation measures, where Rel represents a set of manually checked hyponymy relations and HRbyS represents a set of hyponymy-relation candidates classified as hyponymy relations by the system: Table 2 shows the comparison results of the four systems. SYT represents the system that we implemented and tested with the same data as ours. INIT is a system based on initial classifier c 0 in bilingual co-training. We translated training data in one language by using our bilingual instance dictionary and added the translation to the existing training data in the other language like bilingual co-training did. The size of the English and Japanese training data reached 20,729 and 20,486. We trained initial classifier c 0 with the new training data. TRAN is a system based on the classifier. BICO is a system based on bilingual co-training.", "cite_spans": [], "ref_spans": [ { "start": 278, "end": 285, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "P = |Rel \u2229 HRbyS|/|HRbyS| (1) R = |Rel \u2229 HRbyS|/|Rel| F 1 = 2 \u00d7 (P \u00d7 R)/(P + R) 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For Japanese, SYT showed worse performance than that reported in , probably due to the difference in training data size (ours is 20,000 and was 29,900). The size of the test data was also different -ours is 2,000 and was 1,000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Comparison between INIT and SYT shows the effect of SF 3 -SF 5 and IF , newly introduced feature types, in hyponymy-relation classification. INIT consistently outperformed SYT, although the difference was merely around 0.5-1.8% in F 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "BICO showed significant performance improvement (around 3.6-10.3% in F 1 ) over SYT, INIT, and TRAN regardless of the language. Comparison between TRAN and BICO showed that bilingual co-training is useful for enlarging the training data and that the performance gain by bilingual co-training cannot be achieved by simply translating the existing training data. 
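For reference, the evaluation measures in Eq. (1) can be computed as in the minimal Python sketch below; the names rel and hr_by_s are our own stand-ins for Rel and HRbyS and are not part of the system.

def evaluate(rel, hr_by_s):
    # rel: set of manually checked hyponymy relations (Rel)
    # hr_by_s: set of candidates the system classified as hyponymy relations (HRbyS)
    correct = len(rel & hr_by_s)      # |Rel intersection HRbyS|
    p = correct / len(hr_by_s)        # precision
    r = correct / len(rel)            # recall
    f1 = 2 * p * r / (p + r)          # F1-measure
    return p, r, f1

# toy usage with nominal pairs as instances
rel = {('tiger', 'siberian tiger'), ('enzyme', 'hydrolase')}
hr_by_s = {('tiger', 'siberian tiger'), ('tiger', 'taxonomy')}
print(evaluate(rel, hr_by_s))         # (0.5, 0.5, 0.5)
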
English Japanese Figure 5 : F 1 curves based on the increase of training data size during bilingual co-training Figure 5 shows F 1 curves based on the size of the training data including those manually tailored and automatically obtained through bilingual co-training. The curve starts from 20,000 and ends around 55,000 in Japanese and 62,000 in English. As the training data size increases, the F 1 curves tend to go upward in both languages. This indicates that the two classifiers cooperate well to boost their performance through bilingual cotraining.", "cite_spans": [], "ref_spans": [ { "start": 378, "end": 386, "text": "Figure 5", "ref_id": null }, { "start": 473, "end": 481, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We recognized 5.4 M English and 2.41 M Japanese hyponymy relations from the classification results of BICO on all hyponymy-relation candidates in both languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We performed two tests to investigate the effect of the training data size on bilingual co-training. The first test posed the following question: \"If we build 2n training samples by hand and the building cost is the same in both languages, which is better from the monolingual aspects: 2n monolingual training samples or n bilingual training samples?\" Table 3 and Figure 6 show the results.", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 359, "text": "Table 3", "ref_id": null }, { "start": 364, "end": 372, "text": "Figure 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Effect of Training Data Size", "sec_num": "4.2" }, { "text": "In INIT-E and INIT-J, a classifier in each language, which was trained with 2n monolingual training samples, did not learn through bilingual co-training. In BICO-E and BICO-J, bilingual cotraining was applied to the initial classifiers trained with n training samples in both languages. As shown in Table 3 , BICO, with half the size of the training samples used in INIT, always performed better than INIT in both languages. This indicates that bilingual co-training enables us to build classifiers for two languages in tandem with the same combined amount of data as required for training a single classifier in isolation while achieving superior performance. 
Table 3 : F 1 based on training data size: with/without bilingual co-training (%)", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 306, "text": "Table 3", "ref_id": null }, { "start": 661, "end": 668, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Effect of Training Data Size", "sec_num": "4.2" }, { "text": "The second test asked: \"Can we always improve performance through bilingual co-training with one strong and one weak classifier?\" If the answer is yes, then we can apply our framework to acquisition of hyponymy-relations in other languages, i.e., German and French, without much effort for preparing a large amount of training data, because our strong classifier in English or Japanese can boost the performance of a weak classifier in other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Training Data Size", "sec_num": "4.2" }, { "text": "To answer the question, we tested the performance of classifiers by using all training data (20,000) for a strong classifier and by changing the training data size of the other from 1,000 to 15,000 ({1,000, 5,000, 10,000, 15,000}) for a weak classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Training Data Size", "sec_num": "4.2" }, { "text": "INIT-J BICO-J Tables 4 and 5 show the results, where \"INIT\" represents a system based on the initial classifier in each language and \"BICO\" represents a system based on bilingual co-training. The results were encouraging because the classifiers showed better performance than their initial ones in every setting. In other words, a strong classifier always taught a weak classifier well, and the strong one also got help from the weak one, regardless of the size of the training data with which the weaker one learned. The test showed that bilingual co-training can work well if we have one strong classifier.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 28, "text": "Tables 4 and 5", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "INIT-E BICO-E", "sec_num": null }, { "text": "We tested our method with different bilingual instance dictionaries to investigate their effect. We built bilingual instance dictionaries based on different translation dictionaries whose translation entries came from different domains (i.e., general domain, technical domain, and Wikipedia) and had a different degree of translation ambiguity. In Table 6 , D1 and D2 correspond to systems based on a bilingual instance dictionary derived from two handcrafted translation dictionaries, EDICT (Breen, 2008 ) (a general-domain dictionary) and \"The Japan Science and Technology Agency Dictionary,\" (a translation dictionary for technical terms) respectively. D3, which is the same as BICO in Table 2 , is based on a bilingual instance dictionary derived from Wikipedia. EN-TRY represents the number of translation dictionary entries used for building a bilingual instance dictionary. E2J (or J2E) represents the average translation ambiguities of English (or Japanese) terms in the entries. To show the effect of these translation ambiguities, we used each dictionary under two different conditions, \u03b1=5 and ALL. 
\u03b1=5 represents the condition where only translation entries with less than five translation ambiguities are used; ALL represents no restriction on translation ambiguities.", "cite_spans": [ { "start": 492, "end": 504, "text": "(Breen, 2008", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 348, "end": 355, "text": "Table 6", "ref_id": null }, { "start": 689, "end": 696, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Effect of Bilingual Instance Dictionaries", "sec_num": "4.3" }, { "text": "F 1 DIC STATISTICS TYPE E J ENTRY E2J J2E D1 \u03b1=5 76.5 78.4 588K 1.80 1.77 D1 ALL 75.0 77.2 990K 7.17 2.52 D2 \u03b1=5 76.9 78.5 667K 1.89 1.55 D2 ALL 77.0 77.9 750K 3.05 1.71 D3 \u03b1=5 80.7 81.6 197K 1.03 1.02 D3 ALL 80.7 81.6 197K 1.03 1.02 Table 6 : Effect of different bilingual instance dictionaries", "cite_spans": [], "ref_spans": [ { "start": 234, "end": 241, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "DIC", "sec_num": null }, { "text": "The results showed that D3 was the best and that the performances of the others were similar to each other. The differences in the F 1 scores between \u03b1=5 and ALL were relatively small within the same system triggered by translation ambiguities. The performance gap between D3 and the other systems might explain the fact that both hyponymy-relation candidates and the translation dictionary used in D3 were extracted from the same dataset (i.e., Wikipedia), and thus the bilingual instance dictionary built with the translation dictionary in D3 had better coverage of the Wikipedia entries consisting of hyponymyrelation candidates than the other bilingual instance dictionaries. Although D1 and D2 showed lower performance than D3, the experimental results showed that bilingual co-training was always effective no matter which dictionary was used (Note that F 1 of INIT in Table 2 was 72.2 in English and 76.6 in Japanese.) Li and Li (2002) proposed bilingual bootstrapping for word translation disambiguation. Similar to bilingual co-training, classifiers for two languages cooperated in learning with bilingual resources in bilingual bootstrapping. However, the two classifiers in bilingual bootstrapping were for a bilingual task but did different tasks from the monolingual viewpoint. A classifier in each language is for word sense disambiguation, where a class label (or word sense) is different based on the languages. On the contrary, classifiers in bilingual co-training cooperate in doing the same type of tasks.", "cite_spans": [ { "start": 926, "end": 942, "text": "Li and Li (2002)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 875, "end": 882, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "DIC", "sec_num": null }, { "text": "Bilingual resources have been used for monolingual tasks including verb classification and noun phrase semantic interpolation (Merlo et al., 2002; Girju, 2006) . However, unlike ours, their focus was limited to bilingual features for one monolingual classifier based on supervised learning.", "cite_spans": [ { "start": 126, "end": 146, "text": "(Merlo et al., 2002;", "ref_id": "BIBREF12" }, { "start": 147, "end": 159, "text": "Girju, 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Recently, there has been increased interest in semantic relation acquisition from corpora. 
Some regarded Wikipedia as the corpora and applied hand-crafted or machine-learned rules to acquire semantic relations (Herbelot and Copestake, 2006; Kazama and Torisawa, 2007; Ruiz-casado et al., 2005; Nastase and Strube, 2008; Suchanek et al., 2007) . Several researchers who participated in SemEval-07 (Girju et al., 2007) proposed methods for the classification of semantic relations between simple nominals in English sentences. However, the previous work seldom considered the bilingual aspect of semantic relations in the acquisition of monolingual semantic relations.", "cite_spans": [ { "start": 210, "end": 240, "text": "(Herbelot and Copestake, 2006;", "ref_id": "BIBREF8" }, { "start": 241, "end": 267, "text": "Kazama and Torisawa, 2007;", "ref_id": "BIBREF9" }, { "start": 268, "end": 293, "text": "Ruiz-casado et al., 2005;", "ref_id": "BIBREF14" }, { "start": 294, "end": 319, "text": "Nastase and Strube, 2008;", "ref_id": "BIBREF13" }, { "start": 320, "end": 342, "text": "Suchanek et al., 2007)", "ref_id": "BIBREF15" }, { "start": 396, "end": 416, "text": "(Girju et al., 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We proposed a bilingual co-training approach and applied it to hyponymy-relation acquisition from Wikipedia. Experiments showed that bilingual co-training is effective for improving the performance of classifiers in both languages. We further showed that bilingual co-training enables us to build classifiers for two languages in tandem, outperforming classifiers trained individually for each language while requiring no more training data in total than a single classifier trained in isolation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We showed that bilingual co-training is also helpful for boosting the performance of a weak classifier in one language with the help of a strong classifier in the other language without lowering the performance of either classifier. This indicates that the framework can reduce the cost of preparing training data in new languages with the help of our English and Japanese strong classifiers. 
Our future work focuses on this issue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "reported that they obtained 171 K, 420 K, and 1.48 M hyponymy relations from a definition sentence, a category system, and a layout structure in Japanese Wikipedia, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the same Japanese lexical patterns in to build English lexical patterns with them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We obtained 1.6 M object-attribute-value triples in Japanese and 5.9 M in English.4 197 K translation pairs were extracted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also used redirection links in English and Japanese Wikipedia for recognizing the variations of terms when we built a bilingual instance dictionary with Wikipedia crosslanguage links.6 It took about two or three months to check them in each language.7 Regarding a hyponymy relation as a positive sample and the others as a negative sample for training SVMs, \"positive sample:negative sample\" was about 8,000:16,000=1:2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "What have Innsbruck and Leipzig in common? Extracting semantics from wiki content", "authors": [ { "first": "S\u00f6ren", "middle": [], "last": "Auer", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Lehmann", "suffix": "" } ], "year": 2007, "venue": "Proc. of the", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S\u00f6ren Auer and Jens Lehmann. 2007. What have Innsbruck and Leipzig in common? Extracting se- mantics from wiki content. In Proc. of the 4th", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "European Semantic Web Conference", "authors": [], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "503--517", "other_ids": {}, "num": null, "urls": [], "raw_text": "European Semantic Web Conference (ESWC 2007), pages 503-517. Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Combining labeled and unlabeled data with co-training", "authors": [ { "first": "Avrim", "middle": [], "last": "Blum", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 1998, "venue": "COLT' 98: Proceedings of the eleventh annual conference on Computational learning theory", "volume": "", "issue": "", "pages": "92--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining la- beled and unlabeled data with co-training. In COLT' 98: Proceedings of the eleventh annual conference on Computational learning theory, pages 92-100.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "EDICT Japanese/English dictionary file, The Electronic Dictionary Research and Development Group", "authors": [ { "first": "Jim", "middle": [], "last": "Breen", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jim Breen. 2008. 
EDICT Japanese/English dictionary file, The Electronic Dictionary Research and Devel- opment Group, Monash University.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Search-based structured prediction as classification", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "John", "middle": [], "last": "Langford", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Proc. of NIPS Workshop on Advances in Structured Learning for Text and Speech Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III, John Langford, and Daniel Marcu. 2005. Search-based structured prediction as classi- fication. In Proc. of NIPS Workshop on Advances in Structured Learning for Text and Speech Processing, Whistler, Canada.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A bilingual dictionary extracted from the Wikipedia link structure", "authors": [ { "first": "Maike", "middle": [], "last": "Erdmann", "suffix": "" }, { "first": "Kotaro", "middle": [], "last": "Nakayama", "suffix": "" }, { "first": "Takahiro", "middle": [], "last": "Hara", "suffix": "" }, { "first": "Shojiro", "middle": [], "last": "Nishio", "suffix": "" } ], "year": 2008, "venue": "Proc. of DASFAA", "volume": "", "issue": "", "pages": "686--689", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maike Erdmann, Kotaro Nakayama, Takahiro Hara, and Shojiro Nishio. 2008. A bilingual dictionary extracted from the Wikipedia link structure. In Proc. of DASFAA, pages 686-689.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semeval-2007 task 04: Classification of semantic relations between nominals", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Vivi", "middle": [], "last": "Nastase", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" } ], "year": 2007, "venue": "Proc. of the Fourth International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "13--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Girju, Preslav Nakov, Vivi Nastase, Stan Sz- pakowicz, Peter Turney, and Deniz Yuret. 2007. Semeval-2007 task 04: Classification of semantic re- lations between nominals. In Proc. of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 13-18.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Out-of-context noun phrase semantic interpretation with cross-linguistic evidence", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" } ], "year": 2006, "venue": "CIKM '06: Proceedings of the 15th ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "268--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Girju. 2006. Out-of-context noun phrase se- mantic interpretation with cross-linguistic evidence. 
In CIKM '06: Proceedings of the 15th ACM inter- national conference on Information and knowledge management, pages 268-276.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Acquiring ontological relationships from Wikipedia using RMRS", "authors": [ { "first": "Aurelie", "middle": [], "last": "Herbelot", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" } ], "year": 2006, "venue": "Proc. of the ISWC 2006 Workshop on Web Content Mining with Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aurelie Herbelot and Ann Copestake. 2006. Acquir- ing ontological relationships from Wikipedia using RMRS. In Proc. of the ISWC 2006 Workshop on Web Content Mining with Human Language Tech- nologies.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Exploiting Wikipedia as external knowledge for named entity recognition", "authors": [ { "first": "Kentaro", "middle": [], "last": "Jun'ichi Kazama", "suffix": "" }, { "first": "", "middle": [], "last": "Torisawa", "suffix": "" } ], "year": 2007, "venue": "Proc. of Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "698--707", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun'ichi Kazama and Kentaro Torisawa. 2007. Ex- ploiting Wikipedia as external knowledge for named entity recognition. In Proc. of Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, pages 698-707.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Word translation disambiguation using bilingual bootstrapping", "authors": [ { "first": "Cong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "343--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cong Li and Hang Li. 2002. Word translation disam- biguation using bilingual bootstrapping. In Proc. of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 343-351.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "MeCab: Yet another part-of-speech and morphological analyzer", "authors": [ { "first": "", "middle": [], "last": "Mecab", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MeCab. 2008. MeCab: Yet another part-of-speech and morphological analyzer. http://mecab. sourceforge.net/.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A multilingual paradigm for automatic verb classification", "authors": [ { "first": "Paola", "middle": [], "last": "Merlo", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" }, { "first": "Vivian", "middle": [], "last": "Tsang", "suffix": "" }, { "first": "Gianluca", "middle": [], "last": "Allaria", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "207--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paola Merlo, Suzanne Stevenson, Vivian Tsang, and Gianluca Allaria. 2002. A multilingual paradigm for automatic verb classification. In Proc. 
of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 207-214.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Decoding Wikipedia categories for knowledge acquisition", "authors": [ { "first": "Vivi", "middle": [], "last": "Nastase", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2008, "venue": "Proc. of AAAI 08", "volume": "", "issue": "", "pages": "1219--1224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivi Nastase and Michael Strube. 2008. Decoding Wikipedia categories for knowledge acquisition. In Proc. of AAAI 08, pages 1219-1224.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automatic extraction of semantic relationships for Wordnet by means of pattern learning from Wikipedia", "authors": [ { "first": "Maria", "middle": [], "last": "Ruiz-Casado", "suffix": "" }, { "first": "Enrique", "middle": [], "last": "Alfonseca", "suffix": "" }, { "first": "Pablo", "middle": [], "last": "Castells", "suffix": "" } ], "year": 2005, "venue": "Proc. of NLDB", "volume": "", "issue": "", "pages": "67--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Ruiz-casado, Enrique Alfonseca, and Pablo Castells. 2005. Automatic extraction of semantic relationships for Wordnet by means of pattern learn- ing from Wikipedia. In Proc. of NLDB, pages 67- 79. Springer Verlag.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Yago: A Core of Semantic Knowledge", "authors": [ { "first": "Fabian", "middle": [ "M" ], "last": "Suchanek", "suffix": "" }, { "first": "Gjergji", "middle": [], "last": "Kasneci", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2007, "venue": "Proc. of the 16th international conference on World Wide Web", "volume": "", "issue": "", "pages": "697--706", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A Core of Semantic Knowl- edge. In Proc. of the 16th international conference on World Wide Web, pages 697-706.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Hacking Wikipedia for hyponymy relation acquisition", "authors": [ { "first": "Asuka", "middle": [], "last": "Sumida", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Torisawa", "suffix": "" } ], "year": 2008, "venue": "Proc. of the Third International Joint Conference on Natural Language Processing (IJCNLP)", "volume": "", "issue": "", "pages": "883--888", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asuka Sumida and Kentaro Torisawa. 2008. Hack- ing Wikipedia for hyponymy relation acquisition. In Proc. of the Third International Joint Conference on Natural Language Processing (IJCNLP), pages 883-888, January.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Boosting precision and recall of hyponymy relation acquisition from hierarchical layouts in Wikipedia", "authors": [ { "first": "Asuka", "middle": [], "last": "Sumida", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Yoshinaga", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Torisawa", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asuka Sumida, Naoki Yoshinaga, and Kentaro Tori- sawa. 2008. 
Boosting precision and recall of hy- ponymy relation acquisition from hierarchical lay- outs in Wikipedia. In Proceedings of the 6th In- ternational Conference on Language Resources and Evaluation.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The nature of statistical learning theory", "authors": [ { "first": "Vladimir", "middle": [ "N" ], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir N. Vapnik. 1995. The nature of statistical learning theory. Springer-Verlag New York, Inc., New York, NY, USA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Autonomously semantifying Wikipedia", "authors": [ { "first": "Fei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2007, "venue": "CIKM '07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management", "volume": "", "issue": "", "pages": "41--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Wu and Daniel S. Weld. 2007. Autonomously se- mantifying Wikipedia. In CIKM '07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 41- 50.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Pseudo-code of bilingual co-training beled instances to be added to a new training set", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "System architecture", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Wikipedia article and its layout structure", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "F 1 based on training data size: with/without bilingual co-training n 2n n", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "num": null, "html": null, "content": "
Figure 1 diagram: training data for Language 1 and Language 2; the reliable parts of each classifier's results are translated and added to the other language's training data.
", "text": ").", "type_str": "table" }, "TABREF4": { "num": null, "html": null, "content": "
Type | Description | Example
LF 4 | Used lexical patterns | hyper: \"List of X\", hypo: \"Notable X\"
LF 5 | Typical section headings | hyper: History, hypo: Reference
SF 1 | Distance between hyper and hypo | 3
SF 2 | Type of layout items | hyper: title, hypo: bulleted list
SF 3 | Type of tree nodes | hyper: root node, hypo: leaf node
SF 4
", "text": "Morphemes/words hyper: tiger * , hypo: Siberian, hypo: tiger * LF 2 POS of morphemes/words hyper: NN * , hypo: NP, hypo: NN * LF 3 hyper and hypo, themselves hyper: Tiger, hypo: Siberian tiger LF LF 1 and LF 3 of hypo's parent node LF 3 :Subspecies SF 5 LF 1 and LF 3 of hyper's child node LF", "type_str": "table" }, "TABREF5": { "num": null, "html": null, "content": "", "text": "Feature type and its value. * in LF 1 and LF 2 represent the head morpheme/word and its POS. Except those in LF 4 and LF 5 , examples are derived from (TIGER, SIBERIAN TIGER) in", "type_str": "table" }, "TABREF10": { "num": null, "html": null, "content": "
", "text": "F 1 based on training data size: when Japanese classifier is strong one", "type_str": "table" } } } }