|
{ |
|
"paper_id": "2016", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:05:12.189017Z" |
|
}, |
|
"title": "Verifying Integrity Constraints of a RDF-based WordNet", |
|
"authors": [ |
|
{ |
|
"first": "Fabricio", |
|
"middle": [], |
|
"last": "Chalub", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM Research Avenida Pasteur", |
|
"location": { |
|
"postCode": "138", |
|
"settlement": "Rio de Janeiro", |
|
"country": "Brazil" |
|
} |
|
}, |
|
"email": "fchalub@br.ibm.com" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Rademaker", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM Research Avenida Pasteur", |
|
"location": { |
|
"postCode": "138", |
|
"settlement": "Rio de Janeiro", |
|
"country": "Brazil" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents our first attempt at verifying integrity constraints of our openWordnet-PT against the ontology for Wordnets encoding. Our wordnet is distributed in Resource Description Format (RDF) and we want to guarantee not only the syntax correctness but also its semantics soundness.", |
|
"pdf_parse": { |
|
"paper_id": "2016", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents our first attempt at verifying integrity constraints of our openWordnet-PT against the ontology for Wordnets encoding. Our wordnet is distributed in Resource Description Format (RDF) and we want to guarantee not only the syntax correctness but also its semantics soundness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Lexical databases are organized knowledge bases of information about words. These resources typically include information about the possible meanings of words, relations between these meanings, definitions and phrases that exemplify their use and maybe some numeric grades of confidence in the information provided. The Princeton English Wordnet (Fellbaum, 1998) , is probably the most popular model of a lexical knowledge base. Our main goal is to provide good quality lexical resources for Portuguese, making use, as much as possible, of the effort already spent creating similar resources for English. Thus we are working towards a Portuguese wordnet, based on the Princeton model (de Paiva et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 346, |
|
"end": 362, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 684, |
|
"end": 707, |
|
"text": "(de Paiva et al., 2012)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In a previous paper (Real et al., 2015) we reported the new web interface 1 for searching, browsing and collaborating on the improvement of OpenWordnet-PT. Correcting and improving linguistic data is a hard task, as the guidelines for what to aim for are not set in stone nor really known in advance. While the WordNet model has been paradigmatic in modern computational lexicography, this model is not without its failings and shortcomings, as far as specific tasks are concerned. Also it is easy and somewhat satisfying to provide copious quantitative descriptions of numbers of synsets, for different parts-of-speech, of triples associated to these synsets and of intersections with different subsets of Wordnet, etc. However, the whole community dedicated to creating wordnets in other languages, the Global WordNet Association 2 , has not come up with criteria for semantic evaluation of these resources nor has it produced, so far, ways of comparing their relative quality or accuracy. Thus qualitative assessment of a new wordnet seems, presently, a matter of judgment and art, more than a commonly agreed practice.", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 39, |
|
"text": "(Real et al., 2015)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Believing that this qualitative assessment is important, and so far rather elusive, we propose that having many eyes over the resource, with the ability to shape it in the directions wanted, is a main advantage. This notion of volunteer curated content, as first and foremost exemplified by Wikipedia, needs adaptation to work for lexical resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our openWordnet-PT was distributed since its beginning in RDF, following the Semantic Web standards proposed by Tim Berners-Lee (Berners-Lee, 1998). Nevertheless, so far, although we make available not only the data but also its model definition in OWL 3 , we have not addressed the task to confront the data with its model to guarantee that data is compliance with the defined model. This is the main contribution of this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The OpenWordnet-PT , abbreviated as OpenWN-PT, is a wordnet originally developed as a projection of the Universal WordNet (UWN) ( de Melo and Weikum, 2009) . Its long term goal is to serve as the main lexicon for a system of natural language processing focused on logical reasoning, based on representation of knowledge, using an ontology, such as SUMO (Pease and Fellbaum, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 155, |
|
"text": "de Melo and Weikum, 2009)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 353, |
|
"end": 379, |
|
"text": "(Pease and Fellbaum, 2010)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "OpenWN-PT has been constantly improved through linguistically motivated additions and removals, either manually or by making use of large corpora. This is also the case for the lexicon of nominalizations, called NomLex-PT, that is integrated to the OpenWN-PT (Freitas et al., 2014) . One of the features of both resources is to try to incorporate different kinds of quality data already produced and made available for the Portuguese language, independent of which variant of Portuguese one considers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 281, |
|
"text": "(Freitas et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The philosophy of OpenWN-PT is to maintain a close connection with Princeton's wordnet since this minimizes the impact of lexicographical decisions on the separation or grouping of senses in a given synset. Such disambiguation decisions are inherently arbitrary (Kilgarriff, 1997) , thus the multilingual alignment gives us a pragmatic and practical solution. It is practical because Princeton WordNet remains the most used lexical resource in the world. It is also pragmatic, since those decisions will be more useful, if they are similar to what other wordnets say. Of course this does not mean that all decisions will be sorted out for us. As part of our processing is automated and errorprone, we strive to remove the biggest mistakes created by automation, using linguistic skills and tools. In this endeavor we are much helped by the linked data philosophy and implementation, as keeping the alignment between synsets is facilitated by looking at the synsets in several different languages in parallel. For this we make use of the Open Multilingual WordNet's interface (Bond and Foster, 2013) through links from our interface.", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 280, |
|
"text": "(Kilgarriff, 1997)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1075, |
|
"end": 1098, |
|
"text": "(Bond and Foster, 2013)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This lexical enrichment process of OpenWN-PT reported in employs three language strategies: (1) translation; (2) corpus extraction; and (3) dictionaries. The interested reader will find more details in Real et al., 2015) . The essential fact is that given the constant release of new versions of our openWN-PT, we must ensure the quality of the data that we make available. By quality here we mean not only the data content but its encoding consistency.", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 220, |
|
"text": "Real et al., 2015)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As reported in , since its beginning OpenWN-PT is distributed using the Resource Description Format (RDF) (Cyganiak and Wood, 2003) . We have being following the increasingly popular way of addressing the is-sue of interoperability by relying on Linked Data and Semantic Web standards such as RDF and OWL (Hitzler et al., 2012) , which have led to the emergence of a number of Linked Data projects for lexical resources (de Melo and Weikum, 2008; Chiarcos et al., 2012) . The adoption of such standards not only allows us to publish both the data model and the actual data in the same format, they also provide for instant compatibility with a vast range of existing data processing tools and storage systems, triple stores, providing query interfaces based on the SPARQL standard (Harris and Seaborne, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 131, |
|
"text": "(Cyganiak and Wood, 2003)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 327, |
|
"text": "(Hitzler et al., 2012)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 446, |
|
"text": "(de Melo and Weikum, 2008;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 469, |
|
"text": "Chiarcos et al., 2012)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 781, |
|
"end": 808, |
|
"text": "(Harris and Seaborne, 2013)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To encode any data in RDF, one needs to decide which classes and properties (vocabulary) will be used. The adoption of already defined vocabularies helps on the data interoperability since these makes data easily integrate with other resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We chose to use the vocabulary for wordnets encoding proposed by (van Assem et al., 2006) which is based on Princeton Wordnet 2.0. Their work includes (1) a mapping of WordNet 2.0 concepts and data model to RDF/OWL; (2) conversion scripts from the WordNet 2.0 Prolog distribution to RDF/OWL files; and (3) the actual Word-Net 2.0 data. The suggested representation stayed as close to the original source as possible, that is, it reflects the original WordNet data model without interpretation. The WordNet schema proposed by (van Assem et al., 2006) has three main classes: Synset, WordSense and Word. The first two classes have subclasses for each lexical group present in WordNet. Each instance of Synset, WordSense and Word has its own URI.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 89, |
|
"text": "(van Assem et al., 2006)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 549, |
|
"text": "(van Assem et al., 2006)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Since (van Assem et al., 2006 ) is based on Princeton Wordnet 2.0, its use required few adaptations. Our first decision was to adapt the Word-Net 2.0 vocabulary to version 3.0, having our own URIs for all entities (classes and properties). We converted the WordNet 3.0 data to RDF in such a way that OpenWN-PT is an extension of Word-Net 3.0, with its instances, connected to Princeton instances through owl:sameAs relations. That is, for each Princeton WordNet synset, we created an equivalent synset in OpenWN-PT synset, with no additional synsets or relations so far. Given that OpenWN-PT's RDF is only useful together with an RDF version of Princeton WordNet and we wanted to ensure that all information in the WordNet 3.0 distribution was transformed to RDF, we wrote our own script to translate the Princeton WordNet 3.0 data files to RDF so they can be distributed alongside OpenWN-PT. 4 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 29, |
|
"text": "(van Assem et al., 2006", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 893, |
|
"end": 894, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
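
{

"text": "The resulting linkage can be sketched in Turtle (an illustrative sketch, not necessarily the exact triples shipped; the prefixes wn30-pt: and wn30-en: stand for the instance namespaces): wn30-pt:synset-00001740-n a wn30:Synset ; owl:sameAs wn30-en:synset-00001740-n . Each OpenWN-PT synset thus has its own URI and mirrors a Princeton WordNet 3.0 synset through owl:sameAs.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "OpenWordnet-PT in RDF",

"sec_num": "3"

},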
|
{ |
|
"text": "For the URI schema, we adopted a similar approach of (van Assem et al., 2006) of pattern for the URIs by classes. Moreover, we created the domain https://w3id.org/own-pt/ under our control as suggested by the Linked Data principles. In Table 1 , under the namespace [1] we have the classes and properties of our vocabulary (TBox), adapted from (van Assem et al., 2006) . The namespace [2] holds the instances of our openWordnet-PT and [3] holds the Princeton instances. Our Nomlex-PT (Freitas et al., 2014) data also has its vocabulary and data namespace, respectively, [4] and [5].", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 77, |
|
"text": "(van Assem et al., 2006)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 368, |
|
"text": "(van Assem et al., 2006)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 506, |
|
"text": "(Freitas et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 243, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1 https://w3id.org/own-pt/wn30/schema/ 2 https://w3id.org/own-pt/wn30-pt/instances/ 3 https://w3id.org/own-pt/wn30-en/instances/ 4 https://w3id.org/own-pt/nomlex/schema/ 5 https://w3id.org/own-pt/nomlex/instances/ (Baader, 2003) . DL are a family of logics that are decidable fragments of first-order logic with attractive and well-understood computational properties. A DL knowledge base is comprised by two components, TBox and ABox. The TBox contains intensional knowledge in the form of a terminology and is built through declarations of the general properties of concepts 6 . The ABox contains extensional knowledge, also called assertional knowledge. The knowledge that is specific to the individuals of the domain of discourse. Intensional knowledge is usually thought not to change and extensional knowledge is usually thought to be contingent, and therefore subject to occasional or even constant change.", |
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 228, |
|
"text": "(Baader, 2003)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Given an ontology encoded in OWL (Lite or DL) one can use DL reasoners for different tasks such as: concepts consistency checking, query answering, classification, etc. In particular, classification amounts to placing a new concept expression in the proper place in a taxonomic hierarchy of concepts, it can be accomplished by verifying the subsumption relation between each defined concept in the hierarchy and the new concept expression. Validating an ontology means to guarantee that all concepts are satisfiable, that is, the concepts definition do not contain contradictions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The basic reasoning task in an ABox is instance checking, which verifies whether a given individual is an instance of (or belongs to) a specified concept. Although other reasoning services are usually employed, they can be defined in terms of instance checking. Among them we find knowledge base consistency, which amounts to verifying whether every concept in the knowledge base admits at least one individual; realization, which finds the most specific concept an individual object is an instance of; and retrieval, which finds the individuals in the knowledge base that are instances of a given concept (query answering).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
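
{

"text": "As a toy illustration of these reasoning services (with a hypothetical individual :s): given the TBox axiom wn30:NounSynset rdfs:subClassOf wn30:Synset and the ABox assertion :s rdf:type wn30:NounSynset, instance checking confirms that :s is an instance of wn30:Synset; realization returns wn30:NounSynset as the most specific concept for :s; and retrieval for wn30:Synset returns :s among its instances.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "OpenWordnet-PT in RDF",

"sec_num": "3"

},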
|
{ |
|
"text": "In some use cases, we need a method to validating the RDF data regarding a given model. In this case, OWL users intend OWL axioms to be interpreted as constraints on RDF data (P\u00e9rez-Urbina et al., 2012) . For that, one has to define a semantics for OWL based on the Closed World Assumption and a weak variant of the Unique Name Assumption (Baader, 2003) . OWL default semantics adopts the Open World Assumption (OWA) and does not adopt the Unique Name Assumption (UNA). These design choices make it very difficult to treat these axioms as ICs. On the one hand, due to OWA, a statement must not be inferred to be false on the basis of failures to prove it; therefore, the fact that a piece of information has not been specified does not mean that such information does not exist. On the other hand, the absence of UNA allows two different constants to refer to the same individual.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 202, |
|
"text": "(P\u00e9rez-Urbina et al., 2012)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 353, |
|
"text": "(Baader, 2003)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the next section, we present some preliminary experiments with TBox and ABox consistency check and integrity constraints (IC) validation in our RDF/OWL data, reporting our experience with most well-know freely available tools. Nevertheless, it is important to emphasize the capabilities that semantic web technologies that ex-ceed the currently mainstream technologies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Most research groups that are still using XML for lexical resources distribution would argue that XML Schema (Fallside and Walmsley, 2004) can ensure some constraints that we verify in the next section. Relational database users would argue that SQL is an already mature and declarative query language. We argue that OWL/RDF brings much more expressivity allowing much more robust and semantics aware verification with queries such as: In the SPARQL query above, we are asking for words that occur repeated in the same branch of the hierarchy of synsets formed by the wn30:hyponymOf transitive closure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenWordnet-PT in RDF", |
|
"sec_num": "3" |
|
}, |
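
{

"text": "The repeated-word check mentioned above can be sketched in SPARQL as follows (an illustrative reconstruction, not necessarily the authors' exact query; it assumes the wn30:containsWordSense, wn30:word and wn30:lexicalForm property names): select distinct ?lf { ?s1 wn30:hyponymOf+ ?s2 . ?s1 wn30:containsWordSense/wn30:word/wn30:lexicalForm ?lf . ?s2 wn30:containsWordSense/wn30:word/wn30:lexicalForm ?lf } The property path wn30:hyponymOf+ traverses the transitive closure of the hyponymy relation, so the query returns lexical forms that appear both in a synset and in one of its direct or indirect ancestors.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "OpenWordnet-PT in RDF",

"sec_num": "3"

},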
|
{ |
|
"text": "We were interested in checking our RDF and OWL files against a wide variety of errors, both minor and major and to increase our coverage we opted to use a variety of reasoners.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Validating OpenWN-PT", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We started with Prot\u00e9g\u00e9 7 , which is an ontology editor that among other features has interface with two well-know DL reasoners: FaCT++ (Tsarkov and Horrocks, 2006) and HermiT (Shearer et al., 2008) . Starting in version 4, Prot\u00e9g\u00e9 also gives us the opportunity to search for explanations that caused an inconsistency (Horridge et al., 2008) . Racer (Haarslev et al., 2012) and Pellet (Sirin et al., 2007) are reasoners that have this feature builtin.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 164, |
|
"text": "(Tsarkov and Horrocks, 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 198, |
|
"text": "(Shearer et al., 2008)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 341, |
|
"text": "(Horridge et al., 2008)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 373, |
|
"text": "(Haarslev et al., 2012)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 405, |
|
"text": "(Sirin et al., 2007)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Validating OpenWN-PT", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In order to verify OWN-PT files we needed to combine all files in https://github. com/own-pt/openWordnet-PT and the Simple Knowledge Organization System (SKOS) 8 ontology file. There are a number of tools available for this, we chose RDF pro (Corcoglioniti et al., 2015) , which was the fastest in our benchmarks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 270, |
|
"text": "(Corcoglioniti et al., 2015)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Validating OpenWN-PT", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The errors found can be categorized in three different classes: datatype errors, domain and range errors, structural errors. 7 http://protege.stanford.edu/ 8 http://www.w3.org/TR/skos-reference/", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 126, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Validating OpenWN-PT", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Errors such as missing datatype declarations and wrongly typed literals were found by both Hermit and Pellet. Hermit identified the following missing classes:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datatype errors", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "wn30:AdjectiveWordSense rdfs:subClassOf wn30:WordSense .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datatype errors", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "wn30:VerbWordSense rdfs:subClassOf wn30:WordSense .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datatype errors", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "And the following verification fails due to incorrectly typed literals:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datatype errors", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Literal value \"00113726\" does not belong to datatype nonNegativeInteger", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datatype errors", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Literal value \"104\" does not belong to datatype nonNegativeInteger", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datatype errors", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "These errors were caused by the fact that wn30:synsetId and wn30:tagCount are defined as properties of synsets and word senses that are non-negative integers, but they were incorrectly stored without the type qualifier, for example: the literal in synset-13363970-n synsetId \"13363970\" should have been specified as \"13363970\"^^xsd:nonNegativeInteger.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datatype errors", |
|
"sec_num": "5.1" |
|
}, |
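
{

"text": "In Turtle, the fix amounts to adding the datatype qualifier to each such literal (a minimal sketch; the instance prefix wn30-en: is illustrative): wn30-en:synset-13363970-n wn30:synsetId \"13363970\"^^xsd:nonNegativeInteger . Without the ^^xsd:nonNegativeInteger qualifier the literal is treated as a plain string, which fails the declared range of the property.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Datatype errors",

"sec_num": "5.1"

},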
|
{ |
|
"text": "Pellet Lint, like lint tools for programming languages, aims to detect possibly incorrect constructions that generally indicate bugs. For brevity we omit the prefix https://w3id.org/own-pt/ from the individuals below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datatype errors", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "[Untyped classes] wn30/schema/BaseConcept nomlex/schema/Nominalization wn30/schema/CoreConcept [...] [Untyped datatype properties] wn30/schema/senseKey wn30/schema/syntacticMarker wn30/schema/lexicographerFile [...] [Untyped individuals] wn30-en/instances/wordsense-01362387-a-2 wn30-en/instances/wordsense-01362387-a-1 wn30-en/instances/wordsense-01722140-a-1 [...] What Pellet Lint calls an untyped class is an object of a triple involving rdf:type, but it was never formally defined as an OWL class. The same idea applies to untyped properties: these are never formally defined as an OWL property, and lacks any information about its domain and range. Untyped individuals also are used as objects, but never participate in triples as a subject, which seems like a mistake on some previous data import task. These likely need to be removed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 100, |
|
"text": "[...]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 215, |
|
"text": "[...]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 361, |
|
"end": 366, |
|
"text": "[...]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datatype errors", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Moving beyond these initial type checks, we used initially Prot\u00e9g\u00e9 with the FaCT++ reasoner to match our triple store statements against the OWL definition. The ontology was found to be inconsistent, with the following explanation: We now give a detailed analysis of this explanation; we'll omit such details from the other inconsistencies found later on this section. The relation wn30:classifiedByRegion was created from the ;r pointer symbol in Princeton WordNet data distribution, documented in wninput(5wn). 9 In the explanation above, current account is the label of wordsense-13363970-n-3 and Britain the label of wordsense-08860123-n-4. These two subjects are related via the following triple:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "wordsense-13363970-n-3 classifiedByRegion wordsense-08860123-n-4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "This triple was generated from the following line in original Princeton data.noun file (formatted for clarity):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "13363970 21 n 03 checking_account 0 chequing_account 0 current_account 1 004 @ 13359690 n 0000 ;r 08860123 n 0304 ;r 08820121 n 0201 ;r 09044862 n 0101 | a bank account against which the depositor can draw checks that are payable on demand", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Notice that the triple in the explanation above is a relationship between two word senses, while our definition of the wn30:classifiedByRegion property is as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "9 http://goo.gl/AbkdaZ wn30:classifiedByRegion a rdf:Property, owl:ObjectProperty ; rdfs:domain wn30:Synset ; rdfs:range wn30:NounSynset ; rdfs:subPropertyOf wn30:classifiedBy .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In other words, it is a property whose domain contains synsets and its range contains all noun synsets. This is contradicted by the example, where the rdfs:domain and rdfs:range restrictions were violated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To fix the inconsistency, we need to understand the source of the error: is the problem in our translation from the Wordnet file to RDF, the OWL definition of wn30:classifiedByRegion, or an issue in Wordnet itself?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In the excerpt from data.noun above, all three domain/region pointers are between word senses, which was preserved in the translation to RDF. Looking at the other entries there, we find that chequing account and Canada and checking account and United States are also word senses labels that are related by wn30:classifiedByRegion. This indicates a desire to differentiate between the different lexical forms and their regions of usage, which can be seen as a form of lexical relationship. This indicates an issue with the formalization of the relation wn30:classifiedByRegion. Going back to the original definition in wninput(5wn) we find the following (emphasis ours):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The following pointer types are usually used to indicate lexical relations: Antonym, Pertainym, Participle, Also See, Derivationally Related. The remaining pointer types are generally used to represent semantic relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "While generally a domain/region pointer is a semantic relationship, our examples show that this is not always the case. Also, by using words such as 'generally' and 'usually' the informal description above accommodates such cases. This leads us to think that wn30:classifiedByRegion is both a semantic and a lexical relation, unlike our formal definition states.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We can query for the statistics of the wn30:classifiedByRegion domain in our endpoint. 10 The SPARQL query below selects all individuals that are involved in wn30:classifiedByRegion relations, their types, and counts the number of individual by type. select ?t (count(?t) as ?ct) { ?s wn30:classifiedByRegion ?o ; a ?t } group by ?t", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 89, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The majority of the subjects -over 1200are synsets, but there are 15 word senses as well, meaning that wn30:classifiedByRegion is definitely not strictly a semantic relation. To fix this issue, the definition needed to be changed so that the domain and range contains both synsets and word senses. This is done using the owl:unionOf operator, which represents set union. We found similar problems with the properties wn30:frame, wn30:classifiedByUsage and wn30:classifiedByTopic. We selected the latter since it highlights one of the issues that we find while performing formal verifications, which is the complexity of the proofs/explanations. This is the explanation found for the issue: While this example can be understood, it definitely could be made simpler. For instance, synset-01220528-v found to be of type 'synset' due to the fact that it is the object of a triple containing the predicate wn30:hypernymOf combined with that fact that the range of this predicate is the set of all synsets. A more concise way is to realize that synset-01220528-v is a verb synset and that verb synsets are a subset of synsets. In any case, interpreting the explanation, we see that wn30:frame is being used as a relation whose domain contains a synset, but its definition prohibits this. We can query our triple store for the de facto domains of wn30:frame via a SPARQL query similar to the one used for wn30:classifiedByRegion. We again omit the results for brevity, but there are both word senses and synsets in the domain of this relation. Checking the definition of wn30:frame in wninput(5wn) we find that its original formal definition is too restrictive as it allows frames to exist between both synsets and word senses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "synset-01345109-v hypernymOf synset-01220528-v", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
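To see why the owl:unionOf fix has the right shape, the domain check can be emulated directly: a subject passes when its type belongs to the set union of allowed classes. A minimal sketch with hypothetical names, not our actual tooling:

```python
def domain_violations(triples, predicate, allowed_domain):
    """Return subjects of `predicate` triples whose type is not in the
    allowed domain, modelled as a plain set union of class names."""
    types = {s: o for s, p, o in triples if p == "type"}
    return sorted({s for s, p, o in triples
                   if p == predicate and types.get(s) not in allowed_domain})

# Hypothetical data mirroring the wn30:frame situation.
triples = [
    ("synset-x", "type", "Synset"),
    ("wordsense-y", "type", "WordSense"),
    ("synset-x", "frame", "frame-1"),
    ("wordsense-y", "frame", "frame-2"),
]

# A domain restricted to synsets flags the word sense ...
print(domain_violations(triples, "frame", {"Synset"}))             # ['wordsense-y']
# ... while the union domain, as with owl:unionOf, accepts both.
print(domain_violations(triples, "frame", {"Synset", "WordSense"}))  # []
```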
|
{ |
|
"text": "After fixing those, only a couple of issues remained: While wn30:adjectivePertainsTo is a relation between word senses, it was marked as a subproperty of wn30:meronymOf, which is a relationship between synsets. It was also marked as the inverse of wn30:holonymOf, which is also a semantic relation. Both restrictions are, of course, incorrect and were removed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The final issues were investigated using the Pellet reasoner. This allows us to verify our work and also experiment with the different implementations of the explanations for inconsistencies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Axiom: Thing subClassOf Nothing inSynset range Synset VerbWordSense subClassOf WordSense synset-00105023-a containsWordSense wordsense-00105023-a-2 synset-00105023-a seeAlso synset-00885415-a AdjectiveWordSense subClassOf WordSense seeAlso domain AdjectiveWordSense or VerbWordSense inSynset inverseOf containsWordSense Synset disjointWith WordSense Here, wn30:seeAlso usually indicates lexical relations, but the explanation shows relationship between two synsets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and range errors", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Our last example show cases yet another trap that should be avoided when designing ontologies, which is to assume that once it is consistent, there is nothing else to do. In our case, our modifications so far lead us to a consistent ontology, but unfortunately that doesn't mean that there weren't any issues left. In fact, there were two extremely serious errors in our RDF distribution that were not caught by the analyses so far and were found accidentally through a cursory look: during one of our post-processing jobs we mistakenly implemented a blank node renaming algorithm and ended up having two invalid situations: (a) two or more words associated to a single word sense subject; (b) two or more lexical forms associated to a single word subject.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural errors", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "After fixing our ontology to give the proper restrictions on word senses, words, and lexical forms, Pellet was able to identify the issues. The following excerpt describes a single word sense (wordsense-01860795-v-2) with two words associated ('deixar', 'parar').", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural errors", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "wordsense-01860795-v-2 type WordSense word-deixar lexicalForm \"deixar\"@pt word-parar lexicalForm \"parar\"@pt wordsense-01860795-v-2 word word-deixar Word subClassOf lexicalForm exactly 1 wordsense-01860795-v-2 word word-parar word-deixar type Word word-parar type Word WordSense subClassOf word exactly 1 Word", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural errors", |
|
"sec_num": "5.3" |
|
}, |
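Under a closed-world reading, the 'exactly 1' restrictions amount to counting links per subject. The following sketch (a hypothetical helper, not Pellet's API) flags the offending word sense:

```python
from collections import defaultdict

# Hypothetical triples reproducing the structural error from the text:
# one word sense linked to two words.
triples = [
    ("wordsense-01860795-v-2", "word", "word-deixar"),
    ("wordsense-01860795-v-2", "word", "word-parar"),
    ("wordsense-00001740-n-1", "word", "word-coisa"),
]

def cardinality_violations(triples, predicate, expected=1):
    """Subjects whose number of `predicate` links differs from `expected`:
    a closed-world reading of 'predicate exactly 1'."""
    counts = defaultdict(int)
    for s, p, o in triples:
        if p == predicate:
            counts[s] += 1
    return sorted(s for s, n in counts.items() if n != expected)

print(cardinality_violations(triples, "word"))
# ['wordsense-01860795-v-2']
```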
|
{ |
|
"text": "The last tool that we tested was Stardog 11 . Stardog is the only reasoner and database system that supports ICV. Under the ICV semantics, the axioms below from the wn30:WordSense class were taken as constraints rather than terminology definitions. In other words, if Stardog finds an instance of the class wn30:WordSense connected to more than one instance of wn30:Word, it will raise an exception instead of infer that the two different wn30:Word instances should be the same.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural errors", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "wn30:WordSense a rdfs:Class, owl:Class ; rdfs:subClassOf [ a owl:Restriction ; owl:onProperty wn30:inSynset ; owl:qualifiedCardinality \"1\"^^xsd:nonNegativeInteger ; owl:onClass wn30:Synset ], [ a owl:Restriction ; owl:onProperty wn30:word; owl:qualifiedCardinality \"1\"^^xsd:nonNegativeInteger ; owl:onClass wn30:Word ] .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural errors", |
|
"sec_num": "5.3" |
|
}, |
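To make the contrast concrete, the following toy sketch (our own illustration, not Stardog's API) shows how the same two wn30:word fillers are treated under OWL's open-world inference versus ICV's closed-world check:

```python
def owl_entailment(fillers):
    """Open world + 'word exactly 1': distinct filler names are not assumed
    different, so OWL infers they denote the same individual (owl:sameAs)."""
    return [("sameAs", a, b) for a in fillers for b in fillers if a < b]

def icv_check(fillers):
    """Closed world (ICV): more than one filler violates 'word exactly 1'."""
    return "violation" if len(set(fillers)) > 1 else "ok"

fillers = ["word-deixar", "word-parar"]
print(owl_entailment(fillers))  # [('sameAs', 'word-deixar', 'word-parar')]
print(icv_check(fillers))       # violation
```

The same data is thus silently explained away by a standard OWL reasoner but reported as an error by a constraint validator.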
|
{ |
|
"text": "Unfortunately, in all tests that we run, Stardog hung without producing any output, even when we executed it with few axioms of our ontology. We hope to investigate the problem in a future report.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural errors", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The use of different systems, with different functionalities, give us more confidence in our validations. Unfortunately, it required considerable ef-11 http://www.stardog.com. fort to prepare data in different formats and interpret the results. Racer and RDFUnit did not give us meaningful results. We could not use Stardog at all. We will continue to try them, though, as we believe the diversity of tools and techniques are beneficial to the coverage of potential problems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Performance is still an issue. Some of these experiment took hours to complete, in a relatively simple ontology. It looks like most of DL reasoners are not prepared to handle large ABoxes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Most DL reasoners are based on some variation of tableaux or other refutation based procedure (Baader, 2003) . Prove by refutation does not preserve information and tableaux proofs usually have exponential size. In the future, we hope to implement a proof-theoretical based reasoner for DL based on .", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 108, |
|
"text": "(Baader, 2003)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "It is also worthy to mention that the tools that we tested do not always have an user-friendly interface, making adoption for people outside the area difficult.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Reasoning with closed world assumption for ICV is a future work given the problems that we faced with Stardog. Finally, DL Learning (Lehmann, 2009) and Shapes Constraint Language (Knublauch and Ryman, 2016) are another possible interesting techniques to explorer. The former would allow us to extract the minimum required TBox for a given ABox, the latter would be an alternative language for expressing constraints.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 147, |
|
"text": "(Lehmann, 2009)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 206, |
|
"text": "(Knublauch and Ryman, 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Aknowledgements This work used Prot\u00e9g\u00e9 resource, which is supported by grant GM10331601 from the National Institute of General Medical Sciences of the United States National Institutes of Health.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://wnpt.brlcloud.com/wn/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://globalwordnet.org/ 3 https://github.com/own-pt/openWordnet-PT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/own-pt/wordnet2rdf 5 http://www.w3.org/OWL/ 6 In this paper the TBox is sometimes called the vocabulary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://goo.gl/ptPw6S", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The description logic handbook: theory, implementation, and applications", |
|
"authors": [ |
|
{ |
|
"first": "Franz", |
|
"middle": [], |
|
"last": "Baader", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Baader. 2003. The description logic handbook: theory, implementation, and appli- cations. Cambridge university press.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Linking and extending an open multilingual wordnet", |
|
"authors": [ |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Bond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1352--1362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Berners-Lee1998] Tim Berners-Lee. 1998. Semantic web road map. Technical report, W3C, September. [Bond and Foster2013] Francis Bond and Ryan Foster. 2013. Linking and extending an open multilingual wordnet. In Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 1352-1362, Sofia, Bulgaria, August. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Linked data in linguistics: Representing and connecting language data and language metadata", |
|
"authors": [ |
|
{ |
|
"first": "Chiarcos", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "ACM SAC 2015 Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Chiarcos et al.2012] Christian Chiarcos, Sebastian Nordhoff, and Sebastian Hellmann. 2012. Linked data in linguistics: Representing and connecting language data and language metadata. Springer. [Corcoglioniti et al.2015] Francesco Corcoglioniti, Marco Rospocher, Michele Mostarda, and Marco Amadori. 2015. Processing billions of rdf triples on a single machine using streaming and sorting. In ACM SAC 2015 Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Language as a foundation of the Semantic Web", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Cyganiak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Wood", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. of ISWC", |
|
"volume": "401", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Cyganiak and Wood2003] Richard Cyganiak and David Wood. 2003. RDF 1.1 concepts and abstract syntax. Technical Report Draft 23 July 2013, W3C. [de Melo and Weikum2008] Gerard de Melo and Ger- hard Weikum. 2008. Language as a foundation of the Semantic Web. In Proc. of ISWC 2008, volume 401.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Towards a universal wordnet by learning from combined evidence", |
|
"authors": [ |
|
{ |
|
"first": "Weikum2009] Gerard", |
|
"middle": [], |
|
"last": "Melo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "De Melo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "513--522", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melo and Weikum2009] Gerard de Melo and Ger- hard Weikum. 2009. Towards a universal wordnet by learning from combined evidence. In Proceed- ings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009), pages 513-522, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "OpenWordNet-PT: An open Brazilian wordnet for reasoning", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "De Paiva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of 24th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[de Paiva et al.2012] Valeria de Paiva, Alexandre Rade- maker, and Gerard de Melo. 2012. OpenWordNet- PT: An open Brazilian wordnet for reasoning. In Proceedings of 24th International Conference on Computational Linguistics, COLING (Demo Paper).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Xml schema part 0: primer second edition", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Priscilla", |
|
"middle": [], |
|
"last": "Fallside", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Walmsley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Fallside and Walmsley2004] David C. Fallside and Priscilla Walmsley. 2004. Xml schema part 0: primer second edition. Technical Report W3C Rec- ommendation 28 October 2004, W3C.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "WordNet: An electronic lexical database", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An electronic lexical database. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Extending a lexicon of portuguese nominalizations with data from corpora", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Freitas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computational Processing of the Portuguese Language, 11th International Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Freitas et al.2014] Cl\u00e1udia Freitas, Valeria de Paiva, Alexandre Rademaker, Gerard de Melo, Livy Real, and Anne de Araujo Correia da Silva. 2014. Ex- tending a lexicon of portuguese nominalizations with data from corpora. In Jorge Baptista, Nuno Mamede, Sara Candeias, Ivandr\u00e9 Paraboni, Thiago A. S. Pardo, and Maria das Gra\u00e7as Volpe Nunes, ed- itors, Computational Processing of the Portuguese Language, 11th International Conference, PROPOR 2014, S\u00e3o Carlos, Brazil, oct. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The RacerPro knowledge representation and reasoning system", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Haarslev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Semantic Web Journal", |
|
"volume": "3", |
|
"issue": "3", |
|
"pages": "267--277", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Haarslev et al.2012] Volker Haarslev, Kay Hidde, Ralf M\u00f6ller, and Michael Wessel. 2012. The RacerPro knowledge representation and reasoning system. Se- mantic Web Journal, 3(3):267-277.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "SPARQL 1.1 query language", |
|
"authors": [ |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Seaborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Harris and Seaborne2013] Steve Harris and Andy Seaborne. 2013. SPARQL 1.1 query language. Technical Report W3C Recommendation 21 March 2013, W3C.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Explanation of OWL entailments in prot\u00e9g\u00e9 4", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Hitzler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Poster and Demonstration Session at the 7th International Semantic Web Conference (ISWC2008)", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Hitzler et al.2012] Pascal Hitzler, Markus Krotzsch, Bijan Parsia, Peter F. Patel-Schneider, and Sebastian Rudolph. 2012. OWL 2 web ontology language primer. Technical Report W3C Rec 11 Dec 2012, W3C. [Horridge et al.2008] Matthew Horridge, Bijan Parsia, and Ulrike Sattler. 2008. Explanation of OWL en- tailments in prot\u00e9g\u00e9 4. In Proceedings of the Poster and Demonstration Session at the 7th International Semantic Web Conference (ISWC2008), Karlsruhe, Germany, October 28, 2008.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "I don't believe in word senses", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Computers and the Humanities", |
|
"volume": "31", |
|
"issue": "2", |
|
"pages": "91--113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Kilgarriff. 1997. I don't be- lieve in word senses. Computers and the Humani- ties, 31(2):91-113.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Shapes constraint language (shacl)", |
|
"authors": [ |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Knublauch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Ryman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Knublauch and Ryman2016] Holger Knublauch and Arthur Ryman. 2016. Shapes constraint language (shacl). Technical Report W3C Working Draft 28 January 2016, W3C. http://www.w3.org/TR/ shacl/.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "DL-Learner: learning concepts in description logics", |
|
"authors": [ |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Machine Learning Research (JMLR)", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "2639--2642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jens Lehmann. 2009. DL-Learner: learning concepts in description logics. Journal of Machine Learning Research (JMLR), 10:2639- 2642.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Formal ontology as interlingua: the SUMO and WordNet linking project and global WordNet linking project", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Pease", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Ontology and the Lexicon: A Natural Language Processing Perspective, Studies in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Pease and Fellbaum2010] Adam Pease and Christiane Fellbaum. 2010. Formal ontology as interlingua: the SUMO and WordNet linking project and global WordNet linking project. In Ontology and the Lex- icon: A Natural Language Processing Perspective, Studies in Natural Language Processing, chapter 2, pages 25-35. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Validating rdf with owl integrity constraints", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "P\u00e9rez-Urbina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P\u00e9rez-Urbina et al.2012] H\u00e9ctor P\u00e9rez-Urbina, Evren Sirin, and Kendall Clark. 2012. Validating rdf with owl integrity constraints. Technical report, Clark & Parsia, LLC.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Openwordnet-pt: A project report", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Rademaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 7th Global WordNet Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Rademaker et al.2014] Alexandre Rademaker, Valeria de Paiva, Gerard de Melo, Livy Maria Real Coelho, and Maira Gatti. 2014. Openwordnet-pt: A project report. In Heili Orav, Christiane Fellbaum, and Piek Vossen, editors, Proceedings of the 7th Global WordNet Conference, Tartu, Estonia, jan.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A Proof Theory for Description Logics. Springer-Briefs in Computer Science", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Rademaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Rademaker. 2012. A Proof Theory for Description Logics. Springer- Briefs in Computer Science. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Seeing is correcting: curating lexical resources using social interfaces", |
|
"authors": [ |
|
{ |
|
"first": "Livy", |
|
"middle": [], |
|
"last": "Real", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabricio", |
|
"middle": [], |
|
"last": "Chalub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valeria", |
|
"middle": [], |
|
"last": "Depaiva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Freitas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Rademaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 4th Workshop on Linked Data in Linguistics: Resources and Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "20--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Real et al.2015] Livy Real, Fabricio Chalub, Valeria dePaiva, Claudia Freitas, and Alexandre Rademaker. 2015. Seeing is correcting: curating lexical re- sources using social interfaces. In Proceedings of the 4th Workshop on Linked Data in Linguistics: Resources and Applications, pages 20-29, Beijing, China, July. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "HermiT: a highly efficient OWL reasoner", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Shearer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Fifth International Workshop on OWL: Experiences and Directions (OWLED)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Shearer et al.2008] R. Shearer, B. Motik, and I. Hor- rocks. 2008. HermiT: a highly efficient OWL reasoner. In Proceedings of the Fifth Interna- tional Workshop on OWL: Experiences and Direc- tions (OWLED).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Tsarkov and Horrocks2006] Dmitry Tsarkov and Ian Horrocks", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sirin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Third International Joint Conference on Automated Reasoning, IJCAR'06", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "292--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Sirin et al.2007] Evren Sirin, Bijan Parsia, Bernardo Cuenca Grau, Aditya Kalyanpur, and Yarden Katz. 2007. Pellet: A practical OWL-DL reasoner. Web Semant., 5(2):51-53, June. [Tsarkov and Horrocks2006] Dmitry Tsarkov and Ian Horrocks. 2006. FaCT++ description logic rea- soner: System description. In Proceedings of the Third International Joint Conference on Automated Reasoning, IJCAR'06, pages 292-297, Berlin, Hei- delberg. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "RDF/OWL representation of WordNet", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Van Assem", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[van Assem et al.2006] Mark van Assem, Aldo Gangemi, and Guus Schreiber. 2006. RDF/OWL representation of WordNet. Technical Report W3C Working Draft 19 June 2006, W3C.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "wn30:classifiedByRegion a rdf:Property, owl:ObjectProperty ; rdfs:subPropertyOf wn30:classifiedBy ; rdfs:range [ a owl:Class ; owl:unionOf (wn30:NounWordSense wn30:NounSynset)] ; rdfs:domain [ a owl:Class ; owl:unionOf (wn30:WordSense wn30:Synset)] .", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"text": "the used URIs 4 Consistency check of OWL and Integrity Constraints in RDF", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |