{
"paper_id": "2016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:04:30.129255Z"
},
"title": "WSD in monolingual dictionaries for Russian WordNet",
"authors": [
{
"first": "Daniil",
"middle": [],
"last": "Alexeyevsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research University Higher School of Economics",
"location": {
"addrLine": "21/4 Staraya Basmannaya Ulitsa",
"postCode": "105066",
"settlement": "Moscow",
"country": "Russia"
}
},
"email": "dalexeyevsky@hse.ru"
},
{
"first": "Anastasiya",
"middle": [
"V"
],
"last": "Temchenko",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research University Higher School of Economics",
"location": {
"addrLine": "21/4 Staraya Basmannaya Ulitsa",
"postCode": "105066",
"settlement": "Moscow",
"country": "Russia"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Russian Language is currently poorly supported with WordNet-like resources. One of the new efforts for building Russian WordNet involves mining the monolingual dictionaries. While most steps of the building process are straightforward, word sense disambiguation (WSD) is a source of problems. Due to limited word context specific WSD mechanism is required for each kind of relations mined. This paper describes the WSD method used for mining hypernym relations. First part of the paper explains the main reasons for choosing monolingual dictionaries as the primary source of information for Russian language WordNet and states some problems faced during the information extraction. The second part defines algorithm used to extract hyponym-hypernym pair. The third part describes the algorithm used for WSD",
"pdf_parse": {
"paper_id": "2016",
"_pdf_hash": "",
"abstract": [
{
"text": "Russian Language is currently poorly supported with WordNet-like resources. One of the new efforts for building Russian WordNet involves mining the monolingual dictionaries. While most steps of the building process are straightforward, word sense disambiguation (WSD) is a source of problems. Due to limited word context specific WSD mechanism is required for each kind of relations mined. This paper describes the WSD method used for mining hypernym relations. First part of the paper explains the main reasons for choosing monolingual dictionaries as the primary source of information for Russian language WordNet and states some problems faced during the information extraction. The second part defines algorithm used to extract hyponym-hypernym pair. The third part describes the algorithm used for WSD",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "After the development of Princeton WordNet (Fellbaum, 2012) , two main approaches were widely exploited to create WordNet for any given language: dictionary-based concept (Brazilian Portuguese WordNet, Dias-da-Silva et al., 2002) and translation-based approach (see for example, Turkish WordNet, Bilgin et al., 2004) . The last one assumes that there is a correlation between synset and hyponym hierarchy in different languages, even in the languages that come from distant families. Bilgin et al. employ bilingual dictionaries for building the Turkish WordNet using existing WordNets.",
"cite_spans": [
{
"start": 43,
"end": 59,
"text": "(Fellbaum, 2012)",
"ref_id": "BIBREF6"
},
{
"start": 171,
"end": 229,
"text": "(Brazilian Portuguese WordNet, Dias-da-Silva et al., 2002)",
"ref_id": null
},
{
"start": 296,
"end": 316,
"text": "Bilgin et al., 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multilingual resources represent the next stage in WordNet history. EuroWordNet, described by Vossen (1998) , was build for Dutch, Italian, Spanish, German, French, Czech, Estonian and English languages. Tufis et al. (2004) explain the methods used to create BalkaNet for Bulgarian, Greek, Romanian, Serbian and Turkish languages. These projects developed monolingual WordNets for a group of languages and aligned them to the structure of Princeton WordNet by the means of Inter-Lingual-Index.",
"cite_spans": [
{
"start": 94,
"end": 107,
"text": "Vossen (1998)",
"ref_id": "BIBREF22"
},
{
"start": 204,
"end": 223,
"text": "Tufis et al. (2004)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several attempts were made to create Russian WordNet. Azarova et al. (2002) attempted to create Russian WordNet from scratch using merge approach: first the authors created the core of the Base Concepts by combining the most frequent Russian words and so-called \"core of the national mental lexicon\", extracted from the Russian Word Association Thesaurus, and then proceeded with linking the structure of RussNet to EuroWordNet. The result, according to project's site 1 , contains more than 5500 synsets, which are not published for general use. Group of Balkova et al. (2004) started a large project based on bilingual and monolingual dictionaries and manual lexicographer work. As for 2004, the project is reported to have nearly 145 000 synsets (Balkova et al. 2004) , but no website is available (Loukachevitch and Dobrov, 2014) . Gelfenbeyn et al. (2003) used direct machine translation without any manual interference or proofreading to create a resource for Russian WordNet 2 . Project RuThes by Loukachevitch and Dobrov (2014) , which differs in structure from the canonical Princeton WordNet, is a linguistically motivated ontology and contains 158 000 words and 53 500 concepts at the moment of writing. YARN (Yet Another RussNet) project, described by Ustalov (2014) , is based on the crowdsourcing approach towards creating WordNetlike machine readable open online thesaurus and contains at the time of writing more than 46 500 synsets and more than 119 500 words, but lacks any type of relation between synsets.",
"cite_spans": [
{
"start": 54,
"end": 75,
"text": "Azarova et al. (2002)",
"ref_id": "BIBREF0"
},
{
"start": 556,
"end": 577,
"text": "Balkova et al. (2004)",
"ref_id": "BIBREF3"
},
{
"start": 749,
"end": 770,
"text": "(Balkova et al. 2004)",
"ref_id": "BIBREF3"
},
{
"start": 801,
"end": 833,
"text": "(Loukachevitch and Dobrov, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 836,
"end": 860,
"text": "Gelfenbeyn et al. (2003)",
"ref_id": "BIBREF7"
},
{
"start": 1004,
"end": 1035,
"text": "Loukachevitch and Dobrov (2014)",
"ref_id": "BIBREF14"
},
{
"start": 1264,
"end": 1278,
"text": "Ustalov (2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes one step of semiautomated effort towards building Russian WordNet. The work is based on the hypothesis that existing monolingual dictionaries are the most reliable resource for creating the core of Russian WordNet. Due to absence of open machine-readable dictionaries (MRD) for Russian Language the work involves shallow sectioning of a non machine-readable dictionary (non-MRD). This paper focuses on automatic extraction of hypernyms from Russian dictionary over a limited number of article types. Experts then evaluate the results manually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As far as our knowledge extends, there is no Russian monolingual dictionary that was designed and structured according to machinereadable dictionary (MRD) principles and is also available for public use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing the Dictionary",
"sec_num": "1.1"
},
{
"text": "There exist two Russian Government Standards that specify structure for machine readable thesauri (Standard, 2008) , but they are not widely obeyed.",
"cite_spans": [
{
"start": 98,
"end": 114,
"text": "(Standard, 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing the Dictionary",
"sec_num": "1.1"
},
{
"text": "Some printed monolingual dictionaries are available in form of scanned and proof-read texts or online resources. For example, http://dic.academic.ru/ offers online access to 5 monolingual Russian dictionaries and more than 100 theme-specific encyclopedias. Each dictionary article is presented as one unparsed text entry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing the Dictionary",
"sec_num": "1.1"
},
{
"text": "Resource http://www.lingvoda.ru/dictionaries/, supported by ABBYY, publishes user-created dictionaries in Dictionary Specification Language (DSL) format. DSL purpose is to describe how the article is displayed. DSL operates in terms of italic, sub-article, reference-to-article and contains no instrument to specify type of relations. This seems to be closest to MRD among available resources. Fully automated information extraction is out of the question in this case. When using non-MRD we have faced with number of problems that should be addressed before any future processing can be started:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing the Dictionary",
"sec_num": "1.1"
},
{
"text": "1. Words and word senses at the article head are not marked by unique numeric identifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing the Dictionary",
"sec_num": "1.1"
},
{
"text": "2. Words used in article definitions are not disambiguated, so creating a link from a word in a definition to article defining the word sense is not trivial task. 3. Many contractions and special symbols are used. 4. Circular references exist; this is expected for synonyms and base lexicon, but uncalled for in sister terms, hypernyms, and pairs of articles with more complex relations. 5. The lexicon used in definitions is nearly equal to or larger than the lexicon of the dictionary. In general, ordinary monolingual dictionaries, compiled by lexicographers, were not intended for future automated parsing and analysis. As stated in Ide and V\u00e9ronis (1994) , when converting typeset dictionaries to more suitable format researchers are forced to deal with:",
"cite_spans": [
{
"start": 637,
"end": 659,
"text": "Ide and V\u00e9ronis (1994)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing the Dictionary",
"sec_num": "1.1"
},
{
"text": "1. Difficulties when converting from the original format, that often requires development of complex dedicated grammar, as previously showed by Neff and Boguraev (1989) .",
"cite_spans": [
{
"start": 144,
"end": 168,
"text": "Neff and Boguraev (1989)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing the Dictionary",
"sec_num": "1.1"
},
{
"text": "tion format and meta-text; 3. Partiality of information, since some critical information in definitions is considered common knowledge and is omitted. Research by Ide and V\u00e9ronis (1994) gives us hope that using monolingual dictionaries is the best source of lexical information for WordNet. First they show that one dictionary may lack significant amount of relevant hypernym links (around 50-70%). Next they collect hypernym links from merged set of dictionaries and in the resulting set of hypernym links only 5% are missing or inconsistent as compared with expert created ontology.",
"cite_spans": [
{
"start": 163,
"end": 185,
"text": "Ide and V\u00e9ronis (1994)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inconsistencies and variations in defini-",
"sec_num": "2."
},
{
"text": "Their work is partly based on work by Hearst (1998) who introduced patterns for parsing definitions in traditional monolingual dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inconsistencies and variations in defini-",
"sec_num": "2."
},
{
"text": "One notable work for word sense disambiguation using text definitions from articles was performed by Lesk (1986) . The approach is based on intersecting set of words in word context with set of words in different definitions of the word being disambiguated. The approach was further extended by Navigli (2009) to use corpus bootstrapping to compensate for restricted context in dictionary articles.",
"cite_spans": [
{
"start": 101,
"end": 112,
"text": "Lesk (1986)",
"ref_id": "BIBREF13"
},
{
"start": 295,
"end": 309,
"text": "Navigli (2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inconsistencies and variations in defini-",
"sec_num": "2."
},
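As an illustration of the Lesk baseline mentioned above, a minimal sketch of definition-overlap disambiguation follows (the tokenizer and the function names are ours, not from the paper):

```python
def tokenize(text):
    # Naive tokenizer; the actual pipeline works with lemmas and morphological data.
    return [w.strip(".,;:()").lower() for w in text.split() if w.strip(".,;:()")]

def lesk_overlap(context_words, definition):
    # Classic Lesk score: number of words shared by the context and a sense definition.
    return len(set(context_words) & set(tokenize(definition)))

def lesk_disambiguate(context, candidate_senses):
    # candidate_senses: list of (sense_id, definition_text) pairs.
    context_words = tokenize(context)
    return max(candidate_senses, key=lambda sense: lesk_overlap(context_words, sense[1]))
```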
{
"text": "In this paper we propose yet another extension of Lesk's algorithm based on semantic similarity databases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inconsistencies and variations in defini-",
"sec_num": "2."
},
{
"text": "Specific aim of this work is to create a bulk of noun synsets and hypernym relations between them for further manual filtering and editing. To simplify the task we assume that every word sense defined in a dictionary represents a unique synset. Furthermore we only consider one kind of word definitions: such definitions that start with nominative case noun phrase. E. g.: rus. \u0412\u0415\u041d\u0422\u0418\u041b\u042f\u0301\u0426\u0418\u042f: \u041f\u0440\u043e\u0446\u0435\u0441\u0441 \u0432\u043e\u0437\u0434\u0443\u0445\u043e\u043e\u0431\u043c\u0435\u043d\u0430 \u0432 \u043b\u0451\u0433\u043a\u0438\u0445. eng.'VENTILATION: Process of gas exchange in lungs'. We adhere to hypothesis that in this kind of definitions top noun in the NP is hypernym. In order to build a relation between word sense and its hypernym we need to decide which sense of hypernym word is used in the definition. This step is the focus of this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building the Russian WordNet",
"sec_num": "2"
},
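A sketch of how the first nominative-case noun of such a definition could be picked out, assuming a morphological analyzer such as pymorphy2 (the paper does not name the tool it uses):

```python
import pymorphy2  # assumed analyzer; not specified in the paper

morph = pymorphy2.MorphAnalyzer()

def first_nominative_noun(definition):
    """Return the lemma of the first nominative-case noun in a definition, or None.

    Under the hypothesis above, for definitions that start with a nominative
    noun phrase this noun is the hypernym candidate.
    """
    for token in definition.split():
        analysis = morph.parse(token)[0]        # most probable analysis
        if {"NOUN", "nomn"} in analysis.tag:    # noun in the nominative case
            return analysis.normal_form
    return None
```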
{
"text": "The work is based on the Big Russian Explanatory Dictionary (BRED) by Kuznetsov S.A. (2008) . The dictionary has rich structure and includes morphological, word derivation, grammatical, phonetic, etymological information, three-level sense hierarchy, usage examples and quotes from classical literature and proverbs. The electronic version of the dictionary is produced by OCR and proofreading with very high quality (less than 1 error in 1000 words overall). The version also has sectioning markup of lower quality, with FPR in range 1~10 in 1000 tag uses for the section tags of our interest.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "Kuznetsov S.A. (2008)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Dictionary",
"sec_num": "2.1"
},
{
"text": "We developed specific preprocessor for the dictionary that extracts word, its definition and usage examples (if any) from each article. We call every such triplet word sense, and give it unique numeric ID. A article can have reference to derived word or synonym instead of text definition. Type of the reference is not annotated in the dictionary. We preserve such references in a special slot of word sense. The preprocessor produces a CSV table with senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dictionary",
"sec_num": "2.1"
},
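The paper only states that the preprocessor emits a CSV table of senses with unique numeric IDs and a special slot for references; the layout below is one possible sketch (column names and the English placeholder values are ours):

```python
import csv

# One row per word sense: the "reference" slot is empty for senses with a textual
# definition and holds the referenced word otherwise (the relation type is unknown).
rows = [
    {"sense_id": 1, "word": "ventilation",
     "definition": "Process of gas exchange in lungs.", "examples": "", "reference": ""},
    # A sense defined by a reference to another article instead of a definition:
    {"sense_id": 2, "word": "ventilating",
     "definition": "", "examples": "", "reference": "ventilation"},
]

with open("senses.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["sense_id", "word", "definition", "examples", "reference"])
    writer.writeheader()
    writer.writerows(rows)
```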
{
"text": "Given a word sense W we produce a list of all candidate hypernym senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym candidates",
"sec_num": "2.2"
},
{
"text": "Ideally under our assumption the first nominative case noun in W's definition is a hypernym. However, due to variance in article definition styles and imperfect morphological disambiguation used, some words before the actual hypernym are erroneously considered candidate hypernym. To mitigate this we consider each of the first three nominative nouns candidate hyper-nyms. For each such noun we add each of its senses as candidate hypernym senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym candidates",
"sec_num": "2.2"
},
{
"text": "If sense W is defined by reference rather than by textual definition, we add both every sense of referenced word and each of its candidate hypernym senses to the list of candidate hypernym senses of W.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym candidates",
"sec_num": "2.2"
},
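A sketch of the candidate collection described in this subsection (the data structures, the helper `first_nominative_nouns`, and the recursion depth cap are our assumptions; the cap only guards against the circular references noted earlier):

```python
def candidate_hypernym_senses(sense, senses_by_word, first_nominative_nouns, depth=0):
    """Collect candidate hypernym senses for one word sense W.

    senses_by_word maps a lemma to the list of its senses; each sense is expected
    to carry either a textual definition or a reference to another word.
    """
    candidates = []
    if sense.reference and depth < 2:                # W is defined by a reference
        for ref_sense in senses_by_word.get(sense.reference, []):
            candidates.append(ref_sense)             # every sense of the referenced word
            candidates.extend(candidate_hypernym_senses(
                ref_sense, senses_by_word, first_nominative_nouns, depth + 1))
    else:
        # Up to the first three nominative nouns of the definition are treated as
        # candidate hypernyms; every sense of each such noun is a candidate sense.
        for noun in first_nominative_nouns(sense.definition)[:3]:
            candidates.extend(senses_by_word.get(noun, []))
    return candidates
```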
{
"text": "We have developed a pipeline for massively testing different disambiguation setups. The pipeline is preceded by obtaining common data: word lemmas, morphological information, word frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguation pipeline",
"sec_num": "2.3"
},
{
"text": "For the pipeline we broke down the task of disambiguation into steps. For each step we presented several alternative implementations. These are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguation pipeline",
"sec_num": "2.3"
},
{
"text": "1. Represent candidate hyponym-hypernym sense pair as a Cartesian product of list of words in hyponym sense and list of words in hypernym sense, repeats retained. 2. Calculate numerical metric of words similarity. This is the point we strive to improve. As a baseline we used: random number, inverse dictionary definition number; classic Lesk algorithm. We also introduce several new metrics described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguation pipeline",
"sec_num": "2.3"
},
{
"text": "frequency. We assume that coincidence of frequent words in to definitions gives us much less information about their relatedness than coincidence of infrequent words. We try the following compensation functions: no compensation, divide by logarithm of word frequency, divide by word frequency. 4. Apply non-parametric normalization function to similarity measure. Some of the metrics produce values with very large variance. This leads to situations where one matching pair of words outweighs a lot of outright mismatching pairs. To mitigate this we attempted to apply these functions to reduce variance: linear (no normalization), logarithm, Gaussian, and logistic curve. 5. Apply adjustment function to prioritize the first noun in each definition. While extracting candidate hypernyms the algorithm retained up to three candidate nouns in each article. Our hypothesis states that the first one is most likely the hypernym. We apply penalty to the metric depending on candidate hypernym position within hyponym definition. We tested the following penalties: no penalty, divide by word number, divide by exponent of word number. 6. Aggregate weights of individual pairs of words. We test two aggregation functions: average weight and sum of best N weights. In the last case we repeat the sequence of weights if there were less than N pairs. We also tested the following values of N: 2, 4, 8, 16, 32. Finally, the algorithm returns candidate hypernym with the highest score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Apply compensation function for word",
"sec_num": "3."
},
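A condensed sketch of how steps 1-4 and 6 could combine into a single score for one hyponym-hypernym sense pair, with one concrete choice per step (the real pipeline enumerates all the alternatives listed above; step 5's position penalty is applied per candidate noun outside this function):

```python
import math
from itertools import product

def pair_score(hyponym_words, hypernym_words, word_sim, freq, best_n=8):
    """Score one candidate hyponym-hypernym sense pair."""
    weights = []
    for w1, w2 in product(hyponym_words, hypernym_words):   # step 1: Cartesian product
        s = word_sim(w1, w2)                                 # step 2: word similarity metric
        s /= math.log(freq.get(w2, 1) + 2)                   # step 3: frequency compensation
        s = 1.0 / (1.0 + math.exp(-s))                       # step 4: logistic normalization
        weights.append(s)
    if not weights:
        return 0.0
    while len(weights) < best_n:                             # step 6: sum of the best N weights,
        weights = weights * 2                                # repeating the list if too short
    return sum(sorted(weights, reverse=True)[:best_n])
```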
{
"text": "For testing the algorithms we selected words in several domains for manual markup. We determined domain as a connected component in a graph of word senses and hypernyms produced by one of the algorithms. Each annotator was given the task to disambiguate every sense for every word in such domain. Given a triplet an annotator assigns either no hypernyms or one hypernym; in exceptional cases assigning two hypernyms for a sense is allowed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing setup",
"sec_num": "2.4"
},
{
"text": "One domain with 175 senses defining 90 nouns and noun phrases was given to two annotators to estimate inter-annotator agreement. Both annotators assigned 145 hypernyms within the set. Of those only 93 matched, resulting in 64% inter-annotator agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing setup",
"sec_num": "2.4"
},
{
"text": "The 93 identically assigned hyponymhypernym pairs were used as a core dataset for testing results. Additional 300 word senses were marked up to verify the results on larger datasets. The algorithms described were tested on both of the datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing setup",
"sec_num": "2.4"
},
{
"text": "In this section we describe various alternatives to metric function on step 2 of the pipeline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach to Disambiguation",
"sec_num": "2.5"
},
{
"text": "One known problem with Lesk algorithm is that it uses only word co-occurrence when calculating overlap rate (Basile et al., 2004) and does not extract information from synonyms or inflected words. In our test it worked surprisingly well on the dictionary corpus, finding twice as many correct hypernym senses as the random baseline. We strive to improve that result for dictionary definition texts.",
"cite_spans": [
{
"start": 108,
"end": 129,
"text": "(Basile et al., 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach to Disambiguation",
"sec_num": "2.5"
},
{
"text": "Russian language has rich word derivation through variation of word suffixes. The first obvious enhancement to Lesk algorithm to account for this is to assign similarity scores to words based on length of common prefix. In the results we refer to this metric as advanced Lesk. Another approach to enhance Lesk algorithm is to detect cases where two different words are semantically related. To this end we picked up a database of word associations Serelex (Panchenko et al, 2013) . It assigns a score on a 0 to infinity scale to a pair of noun lemmas roughly describing their semantic similarity. As a possible way to score words that are not nouns in Serelex we truncate a few characters off the ends of both words and search for the best pair matching the prefixes in Serelex. (See prefix \"serelex\" in Table 1).",
"cite_spans": [
{
"start": 271,
"end": 276,
"text": "Lesk.",
"ref_id": null
},
{
"start": 456,
"end": 479,
"text": "(Panchenko et al, 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach to Disambiguation",
"sec_num": "2.5"
},
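A sketch of the two word-level scorers introduced here: "advanced Lesk" as common-prefix length, and a prefix back-off into a Serelex-like similarity table (the in-memory dict format is our assumption; Serelex itself is a separate resource):

```python
def advanced_lesk(w1, w2):
    # Common-prefix length as a crude proxy for a shared derivational stem.
    n = 0
    for a, b in zip(w1, w2):
        if a != b:
            break
        n += 1
    return n

def serelex_prefix_score(w1, w2, serelex, truncate=2):
    """Best Serelex score over pairs whose lemmas match the truncated prefixes.

    serelex is assumed to be a dict {(lemma, lemma): score} loaded from a
    Serelex-like database; non-noun inputs are matched by prefix only.
    """
    p1, p2 = w1[:-truncate] or w1, w2[:-truncate] or w2
    best = 0.0
    for (a, b), score in serelex.items():
        if a.startswith(p1) and b.startswith(p2):
            best = max(best, score)
    return best
```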
{
"text": "We tested several hypotheses on how these two metrics can be used to improve the resulting performance. The tests were: to use only Lesk; to use only Serelex; to use Serelex where possible and fallback to advanced Lesk for cases where no answer was available; and to sum the results of Serelex and Lesk. Since Serelex has a specific distribution of scores we adjusted the advanced Lesk score to produce similar distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach to Disambiguation",
"sec_num": "2.5"
},
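The tested combinations could then be expressed on top of the two scorers sketched above; the rescaling of advanced Lesk toward the Serelex score distribution is not specified in the paper, so the one below is only a placeholder:

```python
def adjusted_lesk(w1, w2):
    # Placeholder rescaling so the advanced Lesk scores roughly resemble
    # the heavy-tailed 0-to-infinity range of Serelex scores.
    return 2.0 ** advanced_lesk(w1, w2) - 1.0

def serelex_with_lesk_fallback(w1, w2, serelex):
    # Use Serelex where it has an answer, otherwise fall back to advanced Lesk.
    score = serelex_prefix_score(w1, w2, serelex)
    return score if score > 0 else adjusted_lesk(w1, w2)

def serelex_plus_lesk(w1, w2, serelex):
    # Sum of the two metrics.
    return serelex_prefix_score(w1, w2, serelex) + adjusted_lesk(w1, w2)
```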
{
"text": "For each estimator we performed full search through available variations on steps 3-6 of the pipeline and selected the best on the core set and estimated again on the larger dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach to Disambiguation",
"sec_num": "2.5"
},
{
"text": "Test results are given in the ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach to Disambiguation",
"sec_num": "2.5"
},
{
"text": "The low resulting quality of disambiguation seems to be a result of several factors: overall difficulty of the task (inter-annotator agreement is 64%), quality of input dictionaries, quality of used similarity database. We also seem to have missed some important linguistic or systemic features of text as well. Notably, the algorithms presented are still generically-applicable and do not use hypernym information. Despite the low precision in determining the exact hypernyms, the pipeline produces thematically related chains of words. Examples of chains, extracted by prefix Serelex algorithm are given below with English translation and comparison to Princeton WordNet (here \">>\" symbolises IS_A relation): quality appears to be crucial for the current work, and the dictionary we selected provides us with a huge set of difficulties: abbreviations; alternating language in sense definitions; not all head words are lemmas (e.g. plural for nouns that have singular); poor quality of sectioning in OCR. Sectioning within BRED presents a large problem due to underspecified vaguely nested nature of sections. Properly digitized openly published Russian dictionary is really wished for.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3"
},
{
"text": "Another problem with the dictionary is presence of nearly-identical definitions for the same term. Due to restricted context in dictionary in some cases it is difficult even for a human annotator to guess correctly whether a given pair of definitions describes the same concepts or two very distinct ones. This is especially true with abstract terms like time (rus.: \u0432\u0440\u0435\u043c\u044f), but physical entities like field (rus.: \u043f\u043e\u043b\u0435) also present such troubles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3"
},
{
"text": "One further step to building the Russian WordNet is to differentiate hypernyms from synonyms and co-hyponyms. Currently we hope to achieve this through classification of definitions and developing morphosyntactic templates to match different relation types within them. This is out of the scope of the current article though.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3"
},
{
"text": "In this work we present a new pipeline for disambiguating and testing disambiguation frame-works for building WordNet relations from raw dictionary data in Russian language 3 .",
"cite_spans": [
{
"start": 173,
"end": 174,
"text": "3",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "We described new algorithm for hypernym disambiguation which performs somewhat better than baseline in cases where annotators agree. The possibility for better disambiguation of specific relation types within dictionaries to be still open.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "The resulting network, though noisy, is very suitable for rapid manual filtering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "http://project.phil.spbgu.ru/RussNet/, last update June 14, 2005 2 \u0410vailable for download at http://www.wordnet.ru",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Russnet: Building a lexical database for the russian language",
"authors": [
{
"first": "I",
"middle": [],
"last": "Azarova",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Mitrofanova",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sinopalnikova",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yavorskaya",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Oparin",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of Workshop on Wordnet Structures and Standardisation and How this affect Wordnet Applications and Evaluation",
"volume": "",
"issue": "",
"pages": "60--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Azarova, I., Mitrofanova, O., Sinopalnikova, A., Ya- vorskaya, M., and Oparin, I. 2002. Russnet: Building a lexical database for the russian language. In Proceedings of Workshop on Word- net Structures and Standardisation and How this af- fect Wordnet Applications and Evaluation. Las Palmas: 60-64.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Enhanced Lesk Word Sense Disambiguation Algorithm through a Distributional Semantic Model",
"authors": [
{
"first": "P",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Caputo",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Semeraro",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "1591--1600",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basile, P., Caputo, A., and Semeraro, G. 2014. An Enhanced Lesk Word Sense Disambiguation Algorithm through a Distributional Semantic Model. In Proceedings of COLING: 1591-1600.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Building a wordnet for Turkish",
"authors": [
{
"first": "O",
"middle": [],
"last": "Bilgin",
"suffix": ""
},
{
"first": "\u00d6",
"middle": [],
"last": "\u00c7etino\u011flu",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Oflazer",
"suffix": ""
}
],
"year": 2004,
"venue": "Information Science and Technology",
"volume": "7",
"issue": "1-2",
"pages": "163--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bilgin, O., \u00c7etino\u011flu, \u00d6., and Oflazer, K. 2004. Building a wordnet for Turkish.Romanian Jour- nal of Information Science and Technology, 7(1- 2):163-172.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Russian wordnet. From UML-notation to Internet/Intranet Database Implementation",
"authors": [
{
"first": "V",
"middle": [],
"last": "Balkova",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sukhonogov",
"suffix": ""
},
{
"first": "Yablonsky",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Second Global Wordnet Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Balkova, V., Sukhonogov, A., and Yablonsky, S. 2004. Russian wordnet. From UML-notation to Internet/Intranet Database Implementation. In Proceedings of the Second Global Wordnet Con- ference.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Groundwork for the development of the Brazilian Portuguese Wordnet",
"authors": [
{
"first": "B",
"middle": [
"C"
],
"last": "Dias-Da-Silva",
"suffix": ""
},
{
"first": "M",
"middle": [
"F"
],
"last": "De Oliveira",
"suffix": ""
},
{
"first": "H",
"middle": [
"R"
],
"last": "De Moraes",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dias-da-Silva, B. C., de Oliveira, M. F., and de Moraes, H. R. 2002. Groundwork for the devel- opment of the Brazilian Portuguese Wordnet.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Advances in natural language processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Advances in natural language processing:189- 196.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Encyclopedia of Applied Linguistics",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum, C. 2012. WordNet. The Encyclopedia of Applied Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic translation of WordNet semantic network to Russian language",
"authors": [
{
"first": "I",
"middle": [],
"last": "Gelfenbeyn",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Goncharuk",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Lehelt",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lipatov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Shilo",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of International Conference on Computational Linguistics and Intellectual Technologies Dialog",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gelfenbeyn, I., Goncharuk, A., Lehelt, V., Lipatov, A. and Shilo, V. 2003. Automatic translation of WordNet semantic network to Russian lan- guage. In Proceedings of International Conference on Computational Linguistics and Intellectual Technologies Dialog-2003.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automated discovery of WordNet relations. WordNet: an electronic lexical database",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "131--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hearst, M. A. 1998. Automated discovery of WordNet relations. WordNet: an electronic lexi- cal database: 131-153.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Machine Readable Dictionaries: What have we learned",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ide",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "V\u00e9ronis",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ide, N., V\u00e9ronis, J. 1994. Machine Readable Dic- tionaries: What have we learned, where do we",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Proceedings of the International Workshop on the Future of Lexical Research",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "137--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Available at http://bitbucket.org/dendik/yarn-pipeline go. In Proceedings of the International Workshop on the Future of Lexical Research, Beijing, China: 137-146.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Refining taxonomies extracted from machine-readable dictionaries",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ide",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "V\u00e9ronis",
"suffix": ""
}
],
"year": 1993,
"venue": "in Humanities Computing 2",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ide, N., V\u00e9ronis, J. 1993. Refining taxonomies ex- tracted from machine-readable dictionaries. In Hockey, S., Ide, N. Research in Humanities Com- puting 2, Oxford University Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "\u041d\u043e\u0432\u0435\u0439\u0448\u0438\u0439 \u0431\u043e\u043b\u044c\u0448\u043e\u0439 \u0442\u043e\u043b\u043a\u043e\u0432\u044b\u0439 \u0441\u043b\u043e\u0432\u0430\u0440\u044c \u0440\u0443\u0441\u0441\u043a\u043e\u0433\u043e \u044f\u0437\u044b\u043a\u0430",
"authors": [
{
"first": "S",
"middle": [
"A"
],
"last": "Kuznetsov",
"suffix": ""
},
{
"first": "\u0421",
"middle": [
"\u0410"
],
"last": "\u041a\u0443\u0437\u043d\u0435\u0446\u043e\u0432",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuznetsov S.A. \u041a\u0443\u0437\u043d\u0435\u0446\u043e\u0432, \u0421. \u0410. 2008. \u041d\u043e\u0432\u0435\u0439\u0448\u0438\u0439 \u0431\u043e\u043b\u044c\u0448\u043e\u0439 \u0442\u043e\u043b\u043a\u043e\u0432\u044b\u0439 \u0441\u043b\u043e\u0432\u0430\u0440\u044c \u0440\u0443\u0441\u0441\u043a\u043e\u0433\u043e \u044f\u0437\u044b\u043a\u0430. \u0421\u041f\u0431.: \u0420\u0418\u041f\u041e\u041b-\u041d\u043e\u0440\u0438\u043d\u0442.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lesk",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the 5th annual international conference on Systems documentation",
"volume": "",
"issue": "",
"pages": "24--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lesk, M. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the 5th annual international confer- ence on Systems documentation: 24-26",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "RuThes linguistic ontology vs. Russian Wordnets",
"authors": [
{
"first": "N",
"middle": [],
"last": "Loukachevitch",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dobrov",
"suffix": ""
}
],
"year": 2014,
"venue": "GWC 2014: Proceedings of the 7th Global Wordnet Conference",
"volume": "",
"issue": "",
"pages": "154--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Loukachevitch, N., Dobrov, B. 2014. RuThes lin- guistic ontology vs. Russian Wordnets.GWC 2014: Proceedings of the 7th Global Wordnet Con- ference: 154-162.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using cycles and quasicycles to disambiguate dictionary glosses",
"authors": [
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navigli, R. (2009, March). Using cycles and quasi- cycles to disambiguate dictionary glosses.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "594--602",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the 12th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: 594-602.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dictionaries, dictionary grammars and dictionary entry parsing",
"authors": [
{
"first": "M",
"middle": [
"S"
],
"last": "Neff",
"suffix": ""
},
{
"first": "B",
"middle": [
"K"
],
"last": "Boguraev",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the 27th annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "91--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neff, M. S., and Boguraev, B. K. 1989. Dictionaries, dictionary grammars and dictionary entry parsing. In Proceedings of the 27th annual meet- ing on Association for Computational Linguistics: 91-101.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Serelex: Search and visualization of semantically related words",
"authors": [
{
"first": "A",
"middle": [],
"last": "Panchenko",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Morozova",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Naets",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Philippovich",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fairon",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "837--840",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Panchenko, A., Romanov, P., Morozova, O., Naets, H., Philippovich, A., Romanov, A., and Fairon, C. 2013. Serelex: Search and visualization of se- mantically related words. In Advances in Infor- mation Retrieval: 837-840.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Standard 7.0.47-2008, Format for representation on machinereadable media of information retrieval languages vocabularies and terminological data",
"authors": [
{
"first": "G",
"middle": [
"O S T"
],
"last": "Standard",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Standard, G. O. S. T. 2008. Standard 7.0.47-2008, Format for representation on machine- readable media of information retrieval lan- guages vocabularies and terminological data.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "BalkaNet: Aims, Methods, Results and Perspectives. A General Overview",
"authors": [
{
"first": "D",
"middle": [],
"last": "Tufis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cristea",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Stamou",
"suffix": ""
}
],
"year": 2004,
"venue": "Special Issue on BalkaNet. Romanian Journal on Science and Technology of Information",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tufis, D., Cristea, D., Stamou, S. 2004. BalkaNet: Aims, Methods, Results and Perspectives. A General Overview In: D. Tufi\u015f (ed): Special Issue on BalkaNet. Romanian Journal on Science and Technology of Information.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Enhancing Russian Wordnets Using the Force of the Crowd",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ustalov",
"suffix": ""
}
],
"year": 2014,
"venue": "Analysis of Images, Social Networks and Texts. Third International Conference",
"volume": "",
"issue": "",
"pages": "257--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ustalov, D. 2014. Enhancing Russian Wordnets Using the Force of the Crowd. In Analysis of Images, Social Networks and Texts. Third Interna- tional Conference, AIST 2014. Springer Interna- tional Publishing: 257-264.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "EuroWordNet: A Multilingual Database with Lexical Semantic Network",
"authors": [
{
"first": "P",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vossen, P. 1998. EuroWordNet: A Multilingual Database with Lexical Semantic Network. Dordrecht.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>:</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>PWN spiral &gt;&gt; curve, curved shape</td></tr><tr><td>&gt;&gt; line &gt;&gt; shape &gt;&gt; attribute &gt;&gt; ab-</td></tr><tr><td>straction &gt;&gt; entity</td></tr><tr><td>\uf0b7 rus. \u043f\u0435\u0440\u0435\u0434\u043d\u044f\u044f &gt;&gt; \u043a\u043e\u043c\u043d\u0430\u0442\u0430 &gt;&gt;</td></tr><tr><td>\u043f\u043e\u043c\u0435\u0449\u0435\u043d\u0438\u0435 eng. 'anteroom &gt;&gt; room &gt;&gt;</td></tr><tr><td>premises' compared to PWN ante-</td></tr><tr><td>room &gt;&gt; room &gt;&gt; area &gt;&gt; structure</td></tr><tr><td>&gt;&gt; artifact &gt;&gt; whole &gt;&gt; object &gt;&gt;</td></tr><tr><td>physical entity &gt;&gt; entity</td></tr><tr><td>\uf0b7 rus. \u0440\u043e\u0441\u0442 &gt;&gt; \u0432\u044b\u0441\u043e\u0442\u0430 &gt;&gt; \u0440\u0430\u0441\u0441\u0442\u043e\u044f\u043d\u0438\u0435</td></tr><tr><td>eng. 'stature, height &gt;&gt; height &gt;&gt; dis-</td></tr><tr><td>tance' compared to PWN stature, height</td></tr><tr><td>&gt;&gt; bodily property &gt;&gt; property &gt;&gt; at-</td></tr><tr><td>tribute &gt;&gt; abstraction &gt;&gt; entity</td></tr><tr><td>Dictionary parsing</td></tr></table>",
"num": null,
"html": null,
"text": "\u0441\u043f\u0438\u0440\u0430\u043b\u044c >> \u043a\u0440\u0438\u0432\u0430\u044f >> \u043b\u0438\u043d\u0438\u044f eng.'spiral >> curve >> line' compared to"
}
}
}
}