|
{ |
|
"paper_id": "Y16-3009", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:47:08.744627Z" |
|
}, |
|
"title": "Automatic Identifying Entity Type in Linked Data", |
|
"authors": [ |
|
{ |
|
"first": "Qingliang", |
|
"middle": [], |
|
"last": "Miao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Jun Sun Fujitsu R&D Center Co., Ltd. Chaoyang District", |
|
"location": { |
|
"addrLine": "Shuangyong Song, Zhongguang Zheng, Lu Fang", |
|
"postCode": "100027", |
|
"settlement": "Yao Meng, Beijing", |
|
"country": "P. R. China" |
|
} |
|
}, |
|
"email": "qingliang.miao@cn.fujitsu.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Type information is an important component of linked data. Unfortunately, many linked datasets lack of type information, which obstructs linked data applications such as question answering and recommendation. In this paper, we study how to automatically identify entity type information from Chinese linked data and present a novel approach by integrating classification and entity linking techniques. In particular, entity type information is inferred from internal clues such as entity's abstract, infobox and subject using classifiers. Moreover, external evidence is obtained from other knowledge bases using entity linking techniques. To evaluate the effectiveness of the approach, we conduct preliminary experiments on a real-world linked dataset from Zhishi.me 1. Experimental results indicate that our approach is effective in identifying entity types.", |
|
"pdf_parse": { |
|
"paper_id": "Y16-3009", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Type information is an important component of linked data. Unfortunately, many linked datasets lack of type information, which obstructs linked data applications such as question answering and recommendation. In this paper, we study how to automatically identify entity type information from Chinese linked data and present a novel approach by integrating classification and entity linking techniques. In particular, entity type information is inferred from internal clues such as entity's abstract, infobox and subject using classifiers. Moreover, external evidence is obtained from other knowledge bases using entity linking techniques. To evaluate the effectiveness of the approach, we conduct preliminary experiments on a real-world linked dataset from Zhishi.me 1. Experimental results indicate that our approach is effective in identifying entity types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "An increasing number of linked datasets is published on the Web. At present, there have been more than 200 datasets in the LOD cloud. Among these datasets, DBpedia (Bizer, C. et al., 2009 ) and 1 http://zhishi.me/ Yago (Suchanek, F.M. et al., 2007) serve as hubs in LOD cloud. As the first effort of Chinese LOD, Zhishi.me (Niu, X. et al., 2011) extracted RDF triples from three largest Chinese encyclopedia web sites i.e. Chinese Wikipedia, Baidu Baike 2 and Hudong Baike 3 . However, type information is incomplete or missing in these linked datasets. For example, more than 36% of type information is missing in DBpedia (Kenza Kellou-Menouer and Zoubida Kedad, 2012) . Zhishi.me only uses the SKOS vocabulary to represent the category system and does not strictly define the \"rdf:type\" relation between instances and classes. Type information is an important component of linked datasets. Knowing what a certain entity is, e.g., a person, organization, place, etc., is crucial for enabling a number of desirable applications such as query understanding (Tonon, A. et al., 2013) , question answering (Kalyanpur, A. et al., 2011; Welty, C. et al., 2012) , recommendation (Lee, T. et al., 2006; Hepp, M. 2008) , and automatic linking (Aldo . Since it is often not feasible to manually assign types to all instances in a large linked data, automatic identifying type information is desirable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 187, |
|
"text": "(Bizer, C. et al., 2009", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 248, |
|
"text": "(Suchanek, F.M. et al., 2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 345, |
|
"text": "(Niu, X. et al., 2011)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 669, |
|
"text": "Kellou-Menouer and Zoubida Kedad, 2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1056, |
|
"end": 1080, |
|
"text": "(Tonon, A. et al., 2013)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1102, |
|
"end": 1130, |
|
"text": "(Kalyanpur, A. et al., 2011;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1131, |
|
"end": 1154, |
|
"text": "Welty, C. et al., 2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1172, |
|
"end": 1194, |
|
"text": "(Lee, T. et al., 2006;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1195, |
|
"end": 1209, |
|
"text": "Hepp, M. 2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Furthermore, since open and crowd-sourced encyclopedia often contain noisy data, filtering out the incorrect type information is crucial as well (Heiko Paulheim and Christian Bizer, 2013) . Recently, more and more attention has been paid to extracting or mining type information from linked data. However, most of current techniques on obtaining type information are either languagedependent or inferring type information only from internal clues such as textual description of entity. Most existing work was mainly focused on mining entity type from internal clues, and missed out the point that the issue can be boosted by integrating external evidence. Our assumption is that given an entity e1 without type information, if we can find an equivalent entity e2 with type information, we can obtain the type information of e1 directly. In this paper, we investigate whether external evidence from other knowledge base could be helpful to entity type identification, and how to combine internal clues such as abstract, infobox and subject with external evidence. In particular, several learning features are extracted from entity abstract, infobox and subject, and then classifiers are trained to get entity type prediction models. Meanwhile, entity linking tools are utilized to link entities with external knowledge base e.g. DBpedia, where we can get type information. Finally, a voting mechanism is adopted to decide the final entity type. We have implemented our algorithms and present some experimental evaluation results to demonstrate the effectiveness of the approach. The remainder of the paper is organized as follows. In the following section we review the existing literature on entity type identification. Then, we introduce the proposed approach in section 3. We conduct comparative experiments and present the results in section 4. At last, we conclude the paper with a summary of our work and give our future working directions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 187, |
|
"text": "Paulheim and Christian Bizer, 2013)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the field of entity type inference, there are two dominant methods, namely, content-based (Aldo Gangemi et al., 2012; Tianxing Wu et al., 2014) and link-based methods (Andrea Giovanni Nuzzolese et al., 2012; Heiko Paulheim and Christian Bizer, 2013) . Next we will introduce these methods respectively. Content-based methods usually utilize entity descriptions such as abstract, infobox and properties to identify entity types. Several learning features are extracted from textual data and classification or clustering models are trained to predict entity types. For example, Aldo Gangemi et al., first extracted definitions from Wikipedia pages, used a natural language deep parser FRED to produce a logical RDF representation of definition sentences, and then select types and type-relations from the RDF graph based on graph patterns. Finally, a word sense disambiguation engine is used to identify the types of an entity and their taxonomical relations (Aldo Gangemi et al., 2012) . Tianxing Wu et al., also mined type information from abstracts, infobox and categories of article pages in Chinese encyclopedia Web sites. They presented an attribute propagation algorithm to generate attributes for categories and a graphbased random walk method to infer instance types from categories of entities (Tianxing Wu et al., 2014) . Man Zhu et al., transformed type assertion detection into multiclass classification of pairs of type assertions, and adopted Adaboost as the meta classifier with C4.5 as the base classifier (Man Zhu et. al., 2014) . Kenza Kellou-Menouer and Zoubida Kedad utilized a density-based clustering algorithm to discovery types in RDF datasets. They first adopted Jaccard similarity to measure the closeness between two entities. In particular, they calculated the similarity between two given entities by considering their respective sets of both incoming and outgoing properties. 
Then entities are grouped according to their similarity (Kenza Kellou-Menouer and Zoubida Kedad, 2015) . Link-based methods can also be used in entity type assignment. For example, Heiko Paulheim and Christian Bizer proposed a heuristic link-based type inference mechanism. They used each link from and to an instance as an indicator for the resource's type. For each link, they use the statistical distribution of types in the subject and object position of the property for predicting the instance's types (Heiko Paulheim and Christian Bizer, 2013) . Andrea Giovanni Nuzzolese et al., utilized k-Nearest Neighbor algorithm for classifying DBpedia entities based on the wikilinks (Andrea Giovanni Nuzzolese et al., 2012) . In this paper, we integrate content-based methods and external evidence to identify entity type. We views type identification as classification issue, and adopt classifiers to train type prediction models. Meanwhile, entity linking tools are adopted to link entities with external knowledge base, where we can get type information. Finally, we use a weighted voting approach to obtain the entity type.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 120, |
|
"text": "(Aldo Gangemi et al., 2012;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 146, |
|
"text": "Tianxing Wu et al., 2014)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 210, |
|
"text": "(Andrea Giovanni Nuzzolese et al., 2012;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 252, |
|
"text": "Heiko Paulheim and Christian Bizer, 2013)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 960, |
|
"end": 987, |
|
"text": "(Aldo Gangemi et al., 2012)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1305, |
|
"end": 1331, |
|
"text": "(Tianxing Wu et al., 2014)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1524, |
|
"end": 1547, |
|
"text": "(Man Zhu et. al., 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1971, |
|
"end": 2010, |
|
"text": "Kellou-Menouer and Zoubida Kedad, 2015)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 2423, |
|
"end": 2458, |
|
"text": "Paulheim and Christian Bizer, 2013)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 2589, |
|
"end": 2629, |
|
"text": "(Andrea Giovanni Nuzzolese et al., 2012)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
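The Jaccard measure used by Kellou-Menouer and Kedad can be sketched in a few lines; the entities and property names below are hypothetical stand-ins, since the original work operates on full RDF datasets.

```python
def jaccard(a, b):
    """Jaccard similarity of two property sets: |a & b| / |a | b|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical entities described by their incoming ("in:") and
# outgoing ("out:") RDF properties.
seoul   = {"out:area", "out:population", "in:capitalOf"}
beijing = {"out:area", "out:population", "out:climate", "in:capitalOf"}
song    = {"out:album", "out:artist"}

print(jaccard(seoul, beijing))  # 0.75 -> likely the same type cluster
print(jaccard(seoul, song))     # 0.0  -> different type cluster
```

Entities whose similarity exceeds a density threshold end up grouped into the same candidate type.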
|
{ |
|
"text": "In this section, we will introduce the architecture of the system as shown in figure 1. The inputs of the system are entity data as illustrated in figure 2, the outputs are entity types. In particular the system consists of two parallel parts: (1) classification module; (2) entity linking module;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In classification module, we first extract entity definition from its abstract. And then, we extract several learning features from its definition, infobox, and subject. We choose several classification models to train the entity type prediction model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In entity linking module, we first construct profile for each entity, and then entity linking tool (Qingliang Miao et al., 2015) is used as a bridge to get entity type information from other linked data i.e. DBpedia. Finally, a voting mechanism is used to get the final answer. In particular, if these two models' results are different, we use entity linking based results as the final answer. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 128, |
|
"text": "(Qingliang Miao et al., 2015)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Approach", |
|
"sec_num": "3" |
|
}, |
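A minimal sketch of the decision rule described above; the function name and the fallback to the classifier when no link is found are our assumptions, but the conflict rule (entity linking wins) is the paper's.

```python
def final_type(classifier_type, linked_type):
    """Combine the classifier prediction with the type obtained via
    entity linking. Per the paper, the entity-linking result is taken
    when the two disagree; falling back to the classifier when no
    equivalent entity was found is an assumption."""
    if linked_type is None:          # no equivalent entity found
        return classifier_type
    return linked_type               # entity linking wins on conflict

print(final_type("city", "city"))    # agreement -> "city"
print(final_type("scene", "city"))   # conflict  -> entity-linking result
print(final_type("city", None))      # no link   -> classifier result
```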
|
{ |
|
"text": "In this section, we mainly introduce learning features and feature selection method. In Linked Data, entities are usually descripted using Resource Description Framework (RDF) 4 . Each entity in Linked Data space is identified by a unique HTTP dereferenceable Uniform Resource Identifier (URI) and the relations of resources are described with simple subject-predicate-object triples. Figure 2 shows an example of entity \" /Seoul\". The task of this research is to identify the type of the entity using existing information as illustrated in figure 2. Table 2 shows top 5 type keywords of each entity type. The type keyword set is obtained from encyclopedia and Chinese corpus and we will detail the process in next section. The type keywords are selected from keyword set manually. The feature vector based on pattern is Qi, where N is the number of entity type. If the first k sentences x in abstract contain the patterns in Table 1 , we set the value G , otherwise the value is 0. In our experiment, we set G =1.0 empirically. For example, the definition of \" /Seoul\" we extracted from abstracts is \" Seoul \". And the feature value for type \"city\" is G and 0 for the other types.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 385, |
|
"end": 393, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 551, |
|
"end": 558, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 926, |
|
"end": 935, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classification based Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "` , ( , ) 1; 1, 2,... 0, ( , ) 0; i i i if f x t Q i N if f x t G \u00ae K eyword feature", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification based Model", |
|
"sec_num": "3.1" |
|
}, |
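The pattern feature defined above can be sketched as follows; the paper's Table 1 contains Chinese definition patterns, so the English regexes and type names here are illustrative stand-ins.

```python
import re

G = 1.0  # empirically chosen feature value, as in the paper

# Illustrative stand-ins for the Chinese definition patterns of Table 1.
PATTERNS = {
    "city":       re.compile(r"is (the capital|a city)"),
    "university": re.compile(r"is a university"),
    "song":       re.compile(r"is a song"),
}

def pattern_features(definition):
    """Q_i = G if the definition matches type t_i's pattern, else 0."""
    return [G if pat.search(definition) else 0.0
            for pat in PATTERNS.values()]

vec = pattern_features("Seoul is the capital of the Republic of Korea.")
print(vec)  # [1.0, 0.0, 0.0] -> fires only for "city"
```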
|
{ |
|
"text": "Besides pattern features described above, we use keywords features as well. To ensure high coverage and quality of keywords for each type, we use rule base method and statistic based method to mine related keywords. For rule based method, we first collect entity description page with type information from three Chinese encyclopedia. Through analyzing description page, we extract 4 types of contents to construct keyword set, \"Title\", \"Alias\", \"Category\", and \"Related Entity\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification based Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "x Title: The titles in Chinese encyclopedia are used as labels for the corresponding entities directly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification based Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "x Alias: The alias in Chinese encyclopedia is used to represent the same entity. For example,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification based Model", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "x Category: Categories describe the subjects of a given entity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification based Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "x Related Entities: In Chinese encyclopedia there are related entities of a given entity. For example, related entities of \" (university)\" are \" (Peking university)\", \"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification based Model", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "For statistic based method, we use word2vec model to obtain word vectors based on Chinese corpus and obtain similar word lists for each entity type. The final keyword list is obtained by a voting method. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification based Model", |
|
"sec_num": "3.1" |
|
}, |
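The paper does not spell out the voting rule that merges the rule-based and word2vec-based keyword lists; one plausible reading, keeping a keyword proposed by at least `min_votes` sources, can be sketched as:

```python
from collections import Counter

def vote_keywords(sources, min_votes=2):
    """Keep a candidate keyword proposed by at least `min_votes` of the
    keyword sources (an assumed reading of the paper's voting method)."""
    votes = Counter()
    for keywords in sources:
        votes.update(set(keywords))
    return {w for w, n in votes.items() if n >= min_votes}

rule_based = ["city", "capital", "metropolis"]       # from Title/Alias/Category
embedding_based = ["city", "capital", "town"]        # from word2vec neighbors
print(vote_keywords([rule_based, embedding_based]))  # {'city', 'capital'}
```

With two sources and `min_votes=2` this reduces to an intersection; more sources (one per extracted content type) would make the vote less strict.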
|
{ |
|
"text": "Since different entity types have different properties. For example, person has birthday and organization has locations. We extract property names from infobox and use them as infobox features. For example, in figure 1, property features of entity \" /Seoul\" is \" /region\", \" /area\", \" /population\", \" /climatic condition\", \" /famous scenery\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Infobox features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Besides infobox features, we collect entity subject information from zhishi.me. Subject information contains many domain-specific terms, which are indicator of entity types. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subject features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The learning features are all extracted empirically, therefore, effective feature selection is necessary. We design a feature selection scheme as below: we take 'maximum probability of a feature representing a category' as the indicator of the effectiveness of features, and remove features whose effectiveness is smaller than a threshold T.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature selection", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our experiment, we set T=0.85 empirically based on the development set. The changing curve of F-measure and threshold T is shown in figure 3. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature selection", |
|
"sec_num": null |
|
}, |
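Our reading of the selection rule, the "maximum probability of a feature representing a category", sketched with hypothetical feature sets and labels:

```python
from collections import defaultdict

def select_features(samples, T=0.85):
    """Keep feature f only if max over classes c of P(c | f) >= T,
    estimated by counting over (feature_set, label) training samples."""
    per_class = defaultdict(lambda: defaultdict(int))
    total = defaultdict(int)
    for features, label in samples:
        for f in features:
            per_class[f][label] += 1
            total[f] += 1
    return {f for f in total
            if max(per_class[f].values()) / total[f] >= T}

samples = [({"campus", "mayor"}, "university"),
           ({"campus"}, "university"),
           ({"mayor"}, "city")]
print(select_features(samples))  # {'campus'}; 'mayor' is ambiguous (0.5 < T)
```

A feature like "campus" that almost always co-occurs with one type survives; a feature split evenly across types is discarded.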
|
{ |
|
"text": "To use type information of DBpedia, we use entity linking tool to link entities with Chinese DBpedia. Since entities in Chinese DBpedia lack of \"rdf:type\" property, we use following steps to get type information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity linking based Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Since many entities in English DBpedia have \"rdf:type\" property, we can use \"owl:sameAs\" relation to obtain type information of Chinese DBpedia entities. Figure 4 : Example of \"sameAs\" relation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 162, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Using \"sameAs\" relation", |
|
"sec_num": null |
|
}, |
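The "sameAs" step can be sketched over a toy in-memory triple list (a stand-in for querying the real DBpedia endpoints; the URIs are abbreviated and hypothetical):

```python
def types_via_sameas(entity, triples):
    """Follow owl:sameAs links from `entity` and collect the rdf:type
    assertions of the equivalent entities."""
    same = {o for s, p, o in triples if s == entity and p == "owl:sameAs"}
    same.add(entity)
    return {o for s, p, o in triples if s in same and p == "rdf:type"}

TRIPLES = [
    ("zhdb:Seoul", "owl:sameAs", "dbr:Seoul"),   # zh DBpedia -> en DBpedia
    ("dbr:Seoul", "rdf:type", "dbo:City"),       # en DBpedia has the type
]
print(types_via_sameas("zhdb:Seoul", TRIPLES))   # {'dbo:City'}
```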
|
{ |
|
"text": "In some cases, we can use redirect relation to obtain the type. Figure 5 shows Figure 5 . Example of \"redirect\" relation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 78, |
|
"text": "Figure 5 shows", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 79, |
|
"end": 87, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Using redirect relation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Besides \"sameAs\" and \"redirection\" relation, we use entity category information to infer type information as well. Category information in DBPedia is usually a strong indicator for entity type. For example, person usually has category information \"People_from_Beijing\" or \"People_born_1960s\". Therefore, we can infer an entity's type from category. In particular, we use a simple method that match category information e.g. \"People\" with DBpedia ontology class.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using category information", |
|
"sec_num": null |
|
}, |
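The category-matching heuristic can be sketched as below; the class list and the plural-alias table are illustrative assumptions, not the full DBpedia ontology.

```python
DBO_CLASSES = {"Person", "City", "University", "Song"}   # tiny excerpt
ALIASES = {"People": "Person", "Cities": "City"}         # plural forms seen in categories

def type_from_categories(categories):
    """Match tokens of category labels (e.g. 'People_from_Beijing')
    against ontology class names; return the first hit, else None."""
    for category in categories:
        for token in category.split("_"):
            token = ALIASES.get(token, token)
            if token in DBO_CLASSES:
                return token
    return None

print(type_from_categories(["People_from_Beijing"]))  # Person
print(type_from_categories(["Born_in_1960s"]))        # None
```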
|
{ |
|
"text": "Since the DBPedia Ontology (dbo) is different from type information in Zhishi.me, we have to map dbo with entity type in Zhishi.me. In particular, given a dbo type, we use a type mapping table shown in table 4 to find the corresponding type in Zhishi.me. We use entity linking tools to link Zhishi.me training data with DBPedia, and obtain the type mapping relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Type mapping", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For example, if entity e1 in Zhishi.me with type \"Politician\" is linked with e2 in DBPedia with type \"Governor\", we can obtain a mapping relation between \"Politician\" and \"Governor\". ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Type mapping", |
|
"sec_num": null |
|
}, |
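Building the dbo-to-Zhishi.me mapping from linked training pairs might look like this; taking the most frequent Zhishi.me type per dbo type is our assumption, since the paper only says the mapping is obtained from linked training data.

```python
from collections import Counter, defaultdict

def build_type_mapping(linked_pairs):
    """linked_pairs: (zhishi_type, dbo_type) for training entities matched
    by the entity linker; map each dbo type to its most frequent
    Zhishi.me type."""
    counts = defaultdict(Counter)
    for zhishi_type, dbo_type in linked_pairs:
        counts[dbo_type][zhishi_type] += 1
    return {dbo: c.most_common(1)[0][0] for dbo, c in counts.items()}

mapping = build_type_mapping([
    ("politician", "Governor"),
    ("politician", "Governor"),
    ("actor", "Actor"),
])
print(mapping)  # {'Governor': 'politician', 'Actor': 'actor'}
```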
|
{ |
|
"text": "In order to evaluate the effectiveness of the proposed approach, we conduct our experiments by using test data from JIST15 type identification challenge 5 . The data includes 1397 entities with type information and 500 unlabeled entities that are used as test data. There are 10 classes including insect, university, game, politician, city, song, novel, scene, cartoon and actor. The statistics of the data is shown in Table 5 : The statistics of the test data Precision, Recall and F-measure are used as the evaluation metric. All of them are defined as follows where ai is the number of URLs that are actually in label i and also predicted in label i, bi is the number of URLs that are predicted in label i, ci is the number of URLs that are actually in label i. In experiment, we first evaluate the performance using internal information only, namely classification based method. And then we evaluate whether external knowledge is useful to improve type identification performance. We also compare with our method with state of the art method (Tianxing Wu. et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1046, |
|
"end": 1073, |
|
"text": "(Tianxing Wu. et al., 2014)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 426, |
|
"text": "Table 5", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "4" |
|
}, |
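The per-class metrics defined above (a_i correct in class i, b_i predicted in class i, c_i actually in class i) correspond to:

```python
def precision_recall_f(gold, pred, label):
    """Per-class precision = a/b, recall = a/c, F = harmonic mean."""
    a = sum(1 for g, p in zip(gold, pred) if g == p == label)
    b = sum(1 for p in pred if p == label)
    c = sum(1 for g in gold if g == label)
    precision = a / b if b else 0.0
    recall = a / c if c else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

gold = ["city", "city", "song", "actor"]   # hypothetical labels
pred = ["city", "song", "song", "actor"]
print(precision_recall_f(gold, pred, "city"))  # (1.0, 0.5, 0.666...)
```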
|
{ |
|
"text": "In this experiment, we have compared with four classification algorithms, Na\u00efve Bayes, Bayes Net, Random Forest and Support Vector Machine. Figure 6 shows experiment results, from which we can see F-measure is relative high in classification method, and Random Forest algorithm performs best among four classifiers and F-measure is above 0.98.This results indicate the learning features are very predictive for this task. To evaluate whether external evidence derived from other knowledge base is helpful, we have built and compared two kinds of type identification methods, one with utilizing entity linking techniques and the other without. Figure 7 shows the comparing results of type identification models with and without entity linking. From Figure 7 we can see that when incorporating entity linking results, the average Fmeasure can be improved by 1.5%. The improvement of F-measure is likely attributable to the external knowledge base. The improvement is not as much as expected. Through carefully analyze the results, we find two reasons. First, entity linking tools only link 40% entity in testing data. Second, most derived type from external knowledge base is consistent with classification results In order to validate whether the improvement is significant, we adopt pair-wise t-tests on Fmeasure. For all t-tests, p-values are all less than 0.01, therefore the improvement is significant. We confirm that the improvement of F-measure is due to incorporating external evidence and we believe that it will achieve better results if we incorporate enough and high quality external evidence. From the above analysis, it is evident that entity linking results can be incorporated as knowledge to improve the performance of entity type identification. We also use state of the art method (Tianxing Wu et al., 2014) as baseline and conduct experiment to compare our method with the baseline. Figure 8 shows the experiment results. 
From figure 8 we can see our best performance (Random forest with entity linking) outperform state of the art method by 1.1%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1799, |
|
"end": 1825, |
|
"text": "(Tianxing Wu et al., 2014)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 148, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 643, |
|
"end": 651, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 748, |
|
"end": 756, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 1902, |
|
"end": 1910, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "4" |
|
}, |
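The paired t statistic behind the significance test can be computed as below; the fold-level F-measures are invented for illustration, and the p-value would then be read from Student's t distribution with n-1 degrees of freedom (e.g. via scipy.stats.t).

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(xs, ys):
    """t = mean(d) / (stdev(d) / sqrt(n)) over paired differences d."""
    d = [x - y for x, y in zip(xs, ys)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Invented per-fold F-measures with and without entity linking.
with_el    = [0.970, 0.980, 0.985, 0.975, 0.990]
without_el = [0.960, 0.962, 0.970, 0.955, 0.978]
t = paired_t_statistic(with_el, without_el)
print(round(t, 2))  # large positive t -> the improvement is consistent
```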
|
{ |
|
"text": "In this paper, we study entity type information identification from Chinese linked data and present a novel approach by integrating classification and entity linking techniques. In particular, entity type information is inferred from internal clues using classifiers. Moreover, external evidence is obtained from other knowledge bases through entity linking techniques. Experimental results on real-world datasets show the learning features we selected are predictive. Moreover, results indicate external evidence derived by entity linking techniques is helpful to type identification as well. We believe that this study is just the first step in type identification and much more work needs to be done to further explore the issue. In our ongoing work, we plan to improve entity tools to find more equivalent entities in external knowledge base. We also plan to reduce the amount of training data, which is time consuming to obtain, by using entity linking results. For example, type information obtained by entity linking techniques could be used as training data directly. Another direction is to harvest external evidence from broader resources, e.g. text or web tables, not just from linked data or knowledge base. For instance, in sentence \"...including cities such as Birmingham, Montgomery, Huntsville...\", if we know the type information of \"Birmingham\", we can infer other entities' type as well. Similarly, if we know the type of an entity, the other entity types in the same column can also be obtained by reasoning. At last, we plan to study fine grained type identification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "http://baike.baidu.com/ 3 http://www.baike.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "30th Pacific Asia Conference on Language, Information and Computation (PACLIC 30)Seoul, Republic of Korea, October 28-30, 2016", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.w3.org/RDF/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ttp://www.jist2015.org/index.php?m=list&a=index&id=48&skip=50", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Automatic typing of DBpedia entities", |
|
"authors": [ |
|
{ |
|
"first": "Aldo", |
|
"middle": [], |
|
"last": "Gangemi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [ |
|
"Giovanni" |
|
], |
|
"last": "Nuzzolese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valentina", |
|
"middle": [], |
|
"last": "Presutti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Draicchio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Musetti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Ciancarini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 11th International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aldo Gangemi, Andrea Giovanni Nuzzolese, Valentina Presutti, Francesco Draicchio, Alberto Musetti, and Paolo Ciancarini. Automatic typing of DBpedia entities, In Proceedings of the 11th International Semantic Web Conference, 2012, pp. 65-81.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Type inference through the analysis of Wikipedia links", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Giovanni Nuzzolese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aldo", |
|
"middle": [], |
|
"last": "Gangemi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valentina", |
|
"middle": [], |
|
"last": "Presutti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Ciancarini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the LDOW2012", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Giovanni Nuzzolese, Aldo Gangemi, Valentina Presutti and Paolo Ciancarini, Type inference through the analysis of Wikipedia links, In Proceedings of the LDOW2012, 2012.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "DBpedia-a Crystallization Point for the Web of Data", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Bizer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Kobilarov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Auer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Becker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Cyganiak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hellmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Web Semantics: Science, Services and Agents on the World Wide Web", |
|
"volume": "7", |
|
"issue": "3", |
|
"pages": "154--165", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bizer, C., Lehmann, J., Kobilarov, G., Auer, S., Becker, C., Cyganiak, R., Hellmann, S.: DBpedia-a Crystallization Point for the Web of Data. Web Semantics: Science, Services and Agents on the World Wide Web 7(3), 2009, pp. 154-165.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Type Inference on Noisy RDF Data", |
|
"authors": [ |
|
{ |
|
"first": "Heiko", |
|
"middle": [], |
|
"last": "Paulheim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Bizer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 12th International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "510--525", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heiko Paulheim and Christian Bizer, Type Inference on Noisy RDF Data, In Proceedings of the 12th International Semantic Web Conference, 2013, pp. 510-525.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "GoodRelations: An ontology for describing products and services offers on the web", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hepp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "5268", |
|
"issue": "", |
|
"pages": "329--346", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hepp, M.: GoodRelations: An ontology for describing products and services offers on the web. In: EKAW 2008, Vol. 5268, 2008, pp. 329-346.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Leveraging Community-built Knowledge for Type Coercion in Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kalyanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 10th International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "144--156", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kalyanpur, A., Murdock, J.W., Fan, J., Welty, C.: Leveraging Community-built Knowledge for Type Coercion in Question Answering. In Proceedings of the 10th International Semantic Web Conference, pp. 144-156.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Discovering Types in RDF Datasets", |
|
"authors": [ |
|
{ |
|
"first": "Kenza", |
|
"middle": [], |
|
"last": "Kellou-Menouer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoubida", |
|
"middle": [], |
|
"last": "Kedad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 12th Extended Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenza Kellou-Menouer and Zoubida Kedad, Discovering Types in RDF Datasets, In Proceedings of the 12th Extended Semantic Web Conference, 2015, pp. 77-81.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "An Ontologybased Product Recommender System for B2B Marketplaces", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Shim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "International Journal of Electronic Commerce", |
|
"volume": "11", |
|
"issue": "2", |
|
"pages": "125--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee, T., Chun, J., Shim, J., Lee, S. G.: An Ontology- based Product Recommender System for B2B Marketplaces. International Journal of Electronic Commerce 11(2), 2006, pp. 125-155.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Noisy Type Assertion Detection in Semantic Datasets", |
|
"authors": [ |
|
{ |
|
"first": "Man", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiqiang", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhibin", |
|
"middle": [], |
|
"last": "Quan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 13th International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "373--388", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Man Zhu, Zhiqiang Gao, and Zhibin Quan, Noisy Type Assertion Detection in Semantic Datasets, In Proceedings of the 13th International Semantic Web Conference, 2014, pp. 373-388.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Zhishi.me -Weaving Chinese Linking Open Data", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Niu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Rong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 10th International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "205--220", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Niu, X., Sun, X., Wang, H., Rong, S., Qi, G., Yu, Y.: Zhishi.me -Weaving Chinese Linking Open Data. In Proceedings of the 10th International Semantic Web Conference, pp. 205-220.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Fumihito Nishino and Nobuyuki Igata, Link Scientific Publications using Linked Data", |
|
"authors": [ |
|
{ |
|
"first": "Qingliang", |
|
"middle": [], |
|
"last": "Miao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yao", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fumihito", |
|
"middle": [], |
|
"last": "Nishino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nobuyuki", |
|
"middle": [], |
|
"last": "Igata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 9th IEEE International Conference on Semantic Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qingliang Miao, Yao Meng, Lu Fang, Fumihito Nishino and Nobuyuki Igata, Link Scientific Publications using Linked Data. In Proceedings of the 9th IEEE International Conference on Semantic Computing, 2015.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Yago: a Core of Semantic Knowledge", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Suchanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Kasneci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 16th International Conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "697--706", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suchanek, F.M., Kasneci, G., Weikum, G.: Yago: a Core of Semantic Knowledge. In Proceedings of the 16th International Conference on World Wide Web, 2007 pp. 697-706.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Mining Type Information from Chinese Online Encyclopedias", |
|
"authors": [ |
|
{ |
|
"first": "Tianxing", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaowei", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guilin", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haofen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 4th Joint International Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "213--229", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianxing Wu, Shaowei Ling, Guilin Qi, and Haofen Wang, Mining Type Information from Chinese Online Encyclopedias, In Proceedings of the 4th Joint International Conference, 2014, pp 213-229.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "TRank: Ranking Entity Types Using the Web of Data", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Tonon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Catasta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Demartini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Cudr\u00b4e-Mauroux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Aberer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 12th International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "640--656", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tonon, A., Catasta, M., Demartini, G., Cudr\u00b4e-Mauroux, P., Aberer, K.: TRank: Ranking Entity Types Using the Web of Data. In Proceedings of the 12th International Semantic Web Conference, pp. 640-656", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A Comparison of Hard Filters and Soft Evidence for Answer Typing in Watson", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Welty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kalyanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 11th International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "243--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Welty, C., Murdock, J.W., Kalyanpur, A., Fan, J.: A Comparison of Hard Filters and Soft Evidence for Answer Typing in Watson. In Proceedings of the 11th International Semantic Web Conference, pp. 243-256.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "The workflow of the approach.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Linked Data example of entity \" /Seoul\"", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "F-measure changes with threshold T.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "Experiment results on precision.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"text": "Experiment results of models with and without entity linking on f-measure.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF6": { |
|
"uris": null, |
|
"text": "Experiment results of models with entity linking and baseline on f-measure", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Typically, the definition of an entity is in the first k sentences of its abstract. Inspired by(Aldo Gangemi et al., 2012), we use a set of heuristics based on lexico-syntactic patterns to extract entity definition. The pattern features are derived from entity definition text in the form of \"[entity] is/belongs [ti] [word1\u2026wordn]\", where ti is the type keyword of entity type i and n is the distance between the key word ti and the sentence's end.Table 1shows some examples of the patterns.", |
|
"content": "<table><tr><td>Entity type</td><td colspan=\"2\">Patterns</td><td/><td/></tr><tr><td>Insect</td><td colspan=\"3\">< .+ >,<is.+insect>,</td><td/></tr><tr><td/><td><</td><td colspan=\"3\">.+ ><belongs to.+species></td></tr><tr><td>University</td><td>< .+</td><td>>,< .+</td><td>></td><td/></tr><tr><td/><td colspan=\"3\"><is.+university/college></td><td/></tr><tr><td>Game</td><td>< .+</td><td colspan=\"2\">>,<is.+game></td><td/></tr><tr><td>City</td><td>< .+</td><td>>,<</td><td>.+</td><td>></td></tr><tr><td/><td colspan=\"2\"><is.+city></td><td/><td/></tr><tr><td>Scene</td><td>< .+</td><td>>,< .+</td><td>>,</td><td/></tr><tr><td/><td colspan=\"3\"><is.+attraction/scenic></td><td/></tr><tr><td colspan=\"5\">Table 1: Example of pattern features</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td/><td/><td colspan=\"4\">shows the top 5 keywords for</td></tr><tr><td>each type.</td><td/><td/><td/><td/><td/></tr><tr><td>Entity</td><td colspan=\"2\">Keywords</td><td/><td/><td/></tr><tr><td>type</td><td/><td/><td/><td/><td/></tr><tr><td>Insect</td><td>{</td><td>, ,</td><td>,</td><td>,</td><td>}/{insect,</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Top 5 keywords for each type", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "shows some example of subject features. In this study, all these above features are binary features.", |
|
"content": "<table><tr><td>Entity</td><td colspan=\"3\">Subject features</td><td/></tr><tr><td>/Summer</td><td>{</td><td>,</td><td>,</td><td>}/{park,</td></tr><tr><td>Palace</td><td colspan=\"4\">attraction, tourism}</td></tr><tr><td>/Shizuoka</td><td>{</td><td>,</td><td colspan=\"2\">}/{japan, city}</td></tr><tr><td>City</td><td/><td/><td/><td/></tr><tr><td/><td>{</td><td/><td colspan=\"2\">, }/{cartoon, cute}</td></tr><tr><td>/Anpanman</td><td/><td/><td/><td/></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Example of subject features", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Therefore, the type of <zhishi.me: > is city.Figure 4illustrates the process.", |
|
"content": "<table><tr><td/><td/><td/><td/><td/><td>owl sameAs</td></tr><tr><td/><td/><td/><td/><td/><td>zhishi.me:</td><td>zh.dbpedia:</td></tr><tr><td/><td/><td/><td/><td/><td>rdf type</td><td>owl sameAs</td></tr><tr><td/><td/><td/><td/><td/><td>dbpedia:City</td><td>en.dbpedia:Islamabad</td></tr><tr><td/><td/><td/><td/><td/><td>rdf type</td></tr><tr><td/><td/><td colspan=\"3\">For example, <zhishi.me:</td></tr><tr><td colspan=\"4\">> is linked with <zh.dbpedia:</td><td colspan=\"2\">> that is</td></tr><tr><td>same</td><td>as</td><td>English</td><td colspan=\"2\">DBpedia</td><td>entity:</td></tr><tr><td colspan=\"6\"><en.dbpedia:Islamabad>, and the type of</td></tr><tr><td colspan=\"3\"><en.dbpedia:Islamabad></td><td>is</td><td colspan=\"2\"><dbo:City>.</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td/><td>.</td><td/></tr><tr><td>Entity Type</td><td># training data</td><td># testing data</td></tr><tr><td>Insect</td><td>124</td><td>41</td></tr><tr><td>University</td><td>157</td><td>42</td></tr><tr><td>Game</td><td>143</td><td>59</td></tr><tr><td>Politician</td><td>134</td><td>43</td></tr><tr><td>City</td><td>139</td><td>59</td></tr><tr><td>Song</td><td>139</td><td>59</td></tr><tr><td>Novel</td><td>150</td><td>51</td></tr><tr><td>Scene</td><td>130</td><td>60</td></tr><tr><td>Cartoon</td><td>134</td><td>38</td></tr><tr><td>Actor</td><td>147</td><td>48</td></tr></table>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |