|
{ |
|
"paper_id": "2016", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:03:29.829909Z" |
|
}, |
|
"title": "Eliminating Fuzzy Duplicates in Crowdsourced Lexical Resources", |
|
"authors": [ |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Kiselev", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Ural Federal University", |
|
"location": { |
|
"addrLine": "19 Mira Str", |
|
"postCode": "620002", |
|
"settlement": "Yekaterinburg", |
|
"country": "Russia" |
|
} |
|
}, |
|
"email": "yurikiselev@yandex-team.ru" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Ustalov", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Ural Federal University", |
|
"location": { |
|
"addrLine": "19 Mira Str", |
|
"postCode": "620002", |
|
"settlement": "Yekaterinburg", |
|
"country": "Russia" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Porshnev", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Ural Federal University", |
|
"location": { |
|
"addrLine": "19 Mira Str", |
|
"postCode": "620002", |
|
"settlement": "Yekaterinburg", |
|
"country": "Russia" |
|
} |
|
}, |
|
"email": "s.v.porshnev@urfu.ru" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Collaboratively created lexical resources is a trending approach to creating high quality thesauri in a short time span at a remarkably low price. The key idea is to invite non-expert participants to express and share their knowledge with the aim of constructing a resource. However, this approach tends to be noisy and error-prone, thus making data cleansing a highly topical task to perform. In this paper, we study different techniques for synset deduplication including machineand crowd-based ones. Eventually, we put forward an approach that can solve the deduplication problem fully automatically, with the quality comparable to the expertbased approach.", |
|
"pdf_parse": { |
|
"paper_id": "2016", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Collaboratively created lexical resources is a trending approach to creating high quality thesauri in a short time span at a remarkably low price. The key idea is to invite non-expert participants to express and share their knowledge with the aim of constructing a resource. However, this approach tends to be noisy and error-prone, thus making data cleansing a highly topical task to perform. In this paper, we study different techniques for synset deduplication including machineand crowd-based ones. Eventually, we put forward an approach that can solve the deduplication problem fully automatically, with the quality comparable to the expertbased approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A WordNet-like thesaurus is a dictionary of a special type that represents different semantic relations between synsets-sets of quasi-synonyms (Miller et al., 1990) . It is a crucial resource for addressing such problems as word sense disambiguation, search query extension and many other problems in the fields of natural language processing (NLP) and artificial intelligence (AI). Typical semantic relations represented by thesauri are synonymy, antonomy (primarily for nouns and adjectives), troponymy (for verbs), hypo-/hypernymic relations, and meronymy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 164, |
|
"text": "(Miller et al., 1990)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A good linguistic resource should not contain duplicated lexical senses, because duplicates violate the data integrity and complicate addition of semantic relations to the resource. Therefore, removing duplicated synsets from thesauri is an important problem to be addressed, especially in collaboratively created lexical resources like Wiktionary, which is known to suffer this problem . However, deduplication is rather problematic because thesauri may contain fuzzy duplicated synsets composed of different words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The work, as described in this paper, makes the following contributions: (1) it proposes an automatic approach to synset deduplication, (2) presents a synonymic dictionary-based technique for assessing synset quality, and (3) compares the proposed approach with the crowdsourcing-based one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of this paper is organized as follows. Section 2 reviews the related work. Section 3 defines the problem of synset duplicates existing in thesauri. Section 4 presents a novel approach to synset deduplication. Section 5 describes the experimental setup. Section 6 shows the obtained results. Section 7 discusses the interesting findings. Section 8 concludes the paper and defines directions for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One of the most straightforward ways to clear a thesaurus of sense duplicates is to align its entries with another resource of proven quality, e.g. using the OntoClean methodology proposed by Guarino and Welty (2009) . Consequently, synsets that will be linked with one synset from another resource represent the same concepts, and should be merged. However, such alignment can be performed only manually. It is also a time-consuming process that requires careful examination of every synset by an expert. Therefore, it is crucial to fo-cus on methods that are either automatic or involve lesser amount of human intervention.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 216, |
|
"text": "Guarino and Welty (2009)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Many studies nowadays aim to evaluate the feasibility of crowdsourcing for various NLP problems. For instance, Snow et al. (2008) showed that non-expert annotators can produce the data whose quality may compete with the expert annotation in such tasks as word sense disambiguation and word similarity estimation (they conducted their study using Amazon Mechanical Turk 1 (AMT), a popular online labor marketplace). Sagot and Fi\u0161er (2012) assumed that semantically related words tend to co-occur in texts. Given such an assumption, they managed to find and eliminate the words that had been added to synsets by mistake. This approach can be used to find sense duplicates, but it requires a large amount of semantic relations to be present in a resource. It should be noted that some resources that contain synsets may not contain any links between them. For instance, Wiktionary represents certain words and relations between them, but it does not explicitly link its synsets. Sajous et al. (2013) presented a method for semi-automatic enrichment of the Wiktionaryderived synsets. First, they analyzed the contents of Wiktionary and produced new synonymy relations that had not been previously included in the resource. After that, they invited collaborators to manually process the data using a custom Firefox plugin to add missing synonyms to the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 129, |
|
"text": "Snow et al. (2008)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 437, |
|
"text": "Sagot and Fi\u0161er (2012)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 976, |
|
"end": 996, |
|
"text": "Sajous et al. (2013)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A similar approach was used by Braslavski et al. (2014) to bootstrap YARN (Yet Another Russ-Net) project, which aims at creating a large open WordNet-like machine-readable thesaurus for the Russian language by means of crowdsourcing. In this project, a dedicated collaborative synset editing tool was used by the annotators to construct synsets by adding and removing words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 55, |
|
"text": "Braslavski et al. (2014)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The most recognized crowdsourcing workflow is the Find-Fix-Verify pattern proposed by Bernstein et al. and used in Soylent, a Microsoft Word plugin that submits human intelligence tasks to AMT for rephrasing and improving the original text (Bernstein et al., 2010) . As the name implies, the workflow includes the three stages: (1) in the Find stage crowd workers find the text area that can be shortened without changing the meaning, (2) in the Fix stage the workers propose improvements for these text areas, and (3) in the Verify stage the workers select the worst proposed fixes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 264, |
|
"text": "(Bernstein et al., 2010)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Inspired by this pattern, Ustalov and Kiselev (2015) presented the Add-Remove-Confirm workflow for improving synset quality. Similarly, it contains three stages: (1) in the Add stage workers choose the words to be added to a synset from a given list of candidates, (2) in the Remove stage the workers choose the words that should be removed from a synset, (3) in the Confirm stage the workers choose which synset is better-the initial one or the fixed one.", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 52, |
|
"text": "Ustalov and Kiselev (2015)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In our study, we focus on the synsets represented in a WordNet-like thesaurus. Hence, we regard a thesaurus as a set of synsets S, where every synset s \u2208 S consists of different words and represents some sense or concept.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In lexical resources created by expert lexicographers, synsets usually correspond to different meanings, so synset duplicates never arise. Unfortunately, it is not true for the resources created by non-expert users, e.g. through the use of crowdsourcing. One approach to synset creation would be to combine manually constructed synsets with synsets that are imported from open resources. Obviously, it is going to lead to the situation where there is a plenty of synsets representing identical concepts. The crowdsourcing approach to synset creation is also prone to this drawback, as the crowd is likely to create duplicate synsets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The following example from the Russian Wiktionary 2 shows that it contains synsets with identical meanings. For example, the synset {\u0441\u0442\u043e\u043c\u0430\u0442\u043e\u043b\u043e\u0433 (stomatologist), \u0434\u0430\u043d\u0442\u0438\u0441\u0442 (dentist), \u0437\u0443\u0431\u043d\u043e\u0439 \u0432\u0440\u0430\u0447 (\"tooth doctor\")} and the synset {\u0434\u0430\u043d\u0442\u0438\u0441\u0442 (dentist), \u0441\u0442\u043e\u043c\u0430\u0442\u043e\u043b\u043e\u0433 (stomatologist)} definitely describe the same concept \"a person qualified to treat the diseases and conditions that affect the teeth\". Hence, such synsets should be combined, yet they both are present in the Russian Wiktionary. Note that in this example the second synset is a full subset of the first one; however, it is possible that two synsets may intersect only partly while sharing the same meaning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For a native speaker, it is relatively easy to detect whether two synsets share the same meanings. So, the detection may be done by non-experts via crowdsourcing. However, the key problem here is how to retrieve the pairs of synsets that presumably represent identical concepts. In the next section, we propose a simple, yet effective approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Suppose the word w has several meanings. According to Miller et al. (1990) , it is usually enough to provide one synonym for every meaning of w to a native speaker of a language to be able to distinguish the meanings from each other (provided that the speaker is familiar with the corresponding concepts). This phenomenon is widely exploited by explanatory dictionaries. It is also utilized in some thesauri which assume that a synset itself is enough to deduce its meaning, therefore definitions of synsets may be omitted.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 74, |
|
"text": "Miller et al. (1990)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Hence, we formulate the meaning deduplication problem as follows. Given a pair of different synsets s 1 \u2208 S and s 2 \u2208 S, we treat them as duplicates if they share at least two words:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2203s 1 \u2208 S, s 2 \u2208 S : s 1 = s 2 \u2227 |s1 \u2229 s2| \u2265 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Obviously, this is a strong criterion that may be violated, so we propose the following two-stage workflow for synset deduplication.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Filtering. In this stage, the possible duplicates are retrieved using the above described criterion resulting in the set of synset pairs (s 1 , s 2 ) for further validation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
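{

"text": "As a rough illustration of the Filtering stage, the following minimal Python sketch retrieves the candidate pairs under the assumption that each synset is modelled as a set of words; the function name find_duplicate_candidates and the toy synsets are illustrative only and are not part of YARN or Mechanical Tsar.

from itertools import combinations

def find_duplicate_candidates(synsets, min_shared=2):
    # Retrieve pairs of distinct synsets that share at least min_shared
    # words, i.e. the presumed duplicates passed on to the Voting stage.
    return [(s1, s2)
            for s1, s2 in combinations(synsets, 2)
            if len(s1 & s2) >= min_shared]

synsets = [frozenset({'think', 'opine', 'suppose', 'sleep'}),
           frozenset({'think', 'suppose', 'reckon'}),
           frozenset({'sleep', 'doze'})]

for s1, s2 in find_duplicate_candidates(synsets):
    print(sorted(s1), sorted(s2))  # only the first two synsets form a candidate pair",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Approach",

"sec_num": "4"

},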
|
{ |
|
"text": "Voting. In this stage, the obtained synset pairs are subject to manual verification. The pairs voted as equivalent are combined.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The assessment required in the Voting stage may be provided by expert lexicographers; in crowdsourced resources, the contributors may be invited not only to add the new data, but also to increase the quality of the created data and to deduplicate it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Since task submission to Amazon Mechanical Turk requires a U.S. billing address, this solution is not accessible to users from other countries. Although there are many other crowdsourcing platforms, e.g. CrowdFlower, Microworkers, Prolific Academic, etc., yet the proportion of Russian speakers on such platforms is still low (Pavlick et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 326, |
|
"end": 348, |
|
"text": "(Pavlick et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Given the fact that our workers are native Russian speakers, we decided to use the open source crowdsourcing engine Mechanical Tsar 3 , which is designed for rapid deployment of mechanized labor workflows (Ustalov, 2015) . Inspired by the similar annotation study conducted by Snow et al. (2008) , we used the default configuration, i.e. the majority voting strategy for answer aggregation, the fixed answer number per task strategy for task allocation, and the no worker ranking. The workers were invited from VK, Facebook and Twitter via a short-term open call for participation posted by us.", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 220, |
|
"text": "(Ustalov, 2015)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 295, |
|
"text": "Snow et al. (2008)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
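{

"text": "As a rough sketch of the default answer aggregation used here, the following Python snippet implements plain majority voting over yes/no judgements for each synset pair; it is a simplified stand-in written for illustration and does not reproduce the actual implementation of Mechanical Tsar.

from collections import Counter, defaultdict

def majority_vote(answers):
    # answers: an iterable of (pair_id, 'yes' or 'no') tuples,
    # one tuple per worker judgement.
    votes = defaultdict(Counter)
    for pair_id, answer in answers:
        votes[pair_id][answer] += 1
    # A pair is merged only if 'yes' answers outnumber 'no' answers.
    return {pair_id: c['yes'] > c['no'] for pair_id, c in votes.items()}

answers = [(1, 'yes'), (1, 'yes'), (1, 'no'),
           (2, 'no'), (2, 'no'), (2, 'yes')]
print(majority_vote(answers))  # {1: True, 2: False}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "5"

},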
|
{ |
|
"text": "We used two different electronic thesauri for the experiments. The first one was chosen from among crowdsourced lexical resouces. Selecting between the Russian Wiktionary and YARN, we settled on the latter because it comprises one and half time more synsets, and it is easier to parse because YARN 4 synsets are available in the CSV format.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stage \"Filtering\"", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We were also interested in applying the described approach to a resource created by expert lexicographers. The current situation with electronic thesauri for the Russian language is that there is only one resource that is large enough and is available for study. This resource is RuTheslite 5 , a publicly available version of the RuThes linguistic ontology, which has been developing for many years (Loukachevitch, 2011).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stage \"Filtering\"", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We retrieved 210 presumably duplicated synsets from each resource-70 synsets with exactly two common words, 70 synsets with three, and 70 synset with four or more common words. Such a stratification is motivated by the interest in analyzing how the number of shared words correlates with their meanings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stage \"Filtering\"", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "By randomly sampling pairs of possibly duplicated synsets from YARN, we concluded that the proposed criterion for synset equivalence is very robust. It appears that for YARN this approach may be used even without the Voting stage. Thus, we decided to study whether the manual annotation does increase the quality of synset deduplication. In order to do this, we selected synsets from YARN as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stage \"Filtering\"", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Since synsets in YARN are not always accompanied by sense definitions, we asked an expert to manually align the selected synsets with an expertbuilt lexical resource. We chose the Babenko dictionary (2011) (hereinafter referred to as BAB) as an expert-built lexical resource because it is a relatively recent dictionary with a wide language coverage. As a result of the alignment, each YARN synset s was provided with a corresponding synset s BAB defined by a sense definition d.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stage \"Filtering\"", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The goal of the Voting stage is to choose true equivalents among the prepared presumably equivalent synset. The input of this stage is a pair of synsets (s 1 , s 2 ) from a resource, and a worker is to determine if the synsets share the same meaning (Figure 1 ). ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 259, |
|
"text": "(Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Stage \"Voting\"", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We use precision and recall to measure the quality of synsets in a thesaurus S. Precision P (s) of a synset s \u2208 S is the fraction of the synset words with the meaning represented by s, compared to all the words in the language representing the meaning of the synset L(s).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metrics", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (s) = |s \u2229 L(s)| |s|", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Quality Metrics", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Recall R(s) of a synset s is the fraction of all words S in the language that have the meaning that s represents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metrics", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "R(s) = |s \u2229 L(s)| |L(s)|", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Quality Metrics", |
|
"sec_num": "6.1" |
|
}, |
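{

"text": "Under the definitions above, both measures reduce to simple set operations. The following Python sketch assumes that a synset s and the word set L(s) are both given as sets of strings; the function names are illustrative.

def precision(synset, language_words):
    # P(s) = |s & L(s)| / |s|: the share of the synset's words that
    # indeed carry the meaning represented by the synset.
    return len(synset & language_words) / len(synset)

def recall(synset, language_words):
    # R(s) = |s & L(s)| / |L(s)|: the share of all words carrying
    # that meaning which the synset actually covers.
    return len(synset & language_words) / len(language_words)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Quality Metrics",

"sec_num": "6.1"

},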
|
{ |
|
"text": "As may be easily noticed, it is impossible to precisely calculate the measure of synset recall R(s), since the whole set of words that can correspond to a particular meaning is unknown. In order to estimate L(\u2022), we used the data retrieved at the Filtering stage. We combined the YARN synsets in each pair (s 1 , s 2 ) into a new synset s. Then, we provided the resulting synset s with a corresponding definition d from the BAB and asked the same expert as in the Filtering stage to remove words from s, which do not correspond to the definition d. The fixed synsets s were then combined with the corresponding synsets s BAB . These combined synsets were used as the gold standard synsets s GS for concepts, as we considered that such synsets contained all the words representing the concepts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metrics", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Consider the following example in order to better understand the described process of data preparation and the further evaluations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example of Quality Calculation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Let say that YARN contains synset s 1 ={think, opine, suppose, sleep} and synset s 2 ={think, suppose, reckon}, and BAB contains synset s BAB ={think, opine, suppose, imagine} with definition d \"expect, believe, or suppose\" (|s 1 \u2229 s 2 | = |{think, suppose}| = 2 and |s 1 \u2229 s BAB | = |{think, opine, suppose}| = 3). Assume that the expert aligned s 1 and s BAB in the Filtering stage. In that case the expert would be provided with synset s = s 1 \u222a s 2 ={think, opine, suppose, sleep, reckon} and definition d from BAB. After fixing this synset s (by removing the wrong word sleep), it will be combined with the corresponding synset s BAB . So the synset that will be further treated as the gold standard for this concept is s GS ={think, opine, suppose, imagine, reckon}. This set will be used as L for calculating (1) and (2) (for the corresponding s 1 and s BAB , L(s 1 ) = L(s BAB )). According to this,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example of Quality Calculation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "P (s 1 ) = |s 1 \u2229 L(s 1 )| |s 1 | = 3 4 = 0.75, R(s BAB ) = |s BAB \u2229 L(s BAB )| |L(s BAB )| = 4 5 = 0.8.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example of Quality Calculation", |
|
"sec_num": "6.2" |
|
}, |
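{

"text": "The numbers in this example can be reproduced directly from the metric definitions in Section 6.1; the following self-contained Python sketch uses the same toy synsets.

s1 = {'think', 'opine', 'suppose', 'sleep'}
s2 = {'think', 'suppose', 'reckon'}
s_bab = {'think', 'opine', 'suppose', 'imagine'}

# Gold standard: the expert-fixed union of s1 and s2 (the wrong word
# 'sleep' removed), combined with the BAB synset.
s_gs = (s1 | s2) - {'sleep'} | s_bab
assert s_gs == {'think', 'opine', 'suppose', 'imagine', 'reckon'}

precision_s1 = len(s1 & s_gs) / len(s1)       # 3/4 = 0.75
recall_s_bab = len(s_bab & s_gs) / len(s_gs)  # 4/5 = 0.8
print(precision_s1, recall_s_bab)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Example of Quality Calculation",

"sec_num": "6.2"

},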
|
{ |
|
"text": "Note that in the proposed evaluation method, precision P of any synset from BAB s BAB is 1.0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example of Quality Calculation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The procedure described in Section 6.1 allowed us to calculate the suggested quality measures for the resources (Table 1) . The BAB row is calculated for 210 synsets from the Babenko dictionary, the YARN, aligned row-for 210 synsets s 1 from YARN that were aligned with the BAB by the expert, and the YARN, machine-for the automatically merged all 210 presumably equivalent synsets (s 1 , s 2 ) of YARN. The F 1 -measure for YARN is expectedly lower than for the BAB, yet, after a simple merging of the presumably equivalent synsets, its average F 1measure became higher than for the BAB. However, this result was due to the significant increase in the recall, while the precision dropped.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 121, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quality Assessment", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "To investigate how people's participation can improve the quality of automatic merging, we conducted a crowdsourcing experiment. Every task (Figure 1 ) was annotated by at least three different workers. The decision about merging was made by majority voting. Table 2 shows the share of synsets that the workers decided to merge. Quite expectedly, the two analyzed lexical resources proved very different. Our equivalence criterion worked only in one third of the cases for RuThes-lite. And even the stronger version of the criterion (the one considering synsets that share 4+ words as sense duplicates) was true only in 2 3 cases according to the annotators. However, for YARN the criterion proved to be rather robust, so that it can be applied without crowd checking, provided that the results of the merging will be verified by a moderator of the resource.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 149, |
|
"text": "(Figure 1", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 266, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quality Assessment", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "This conclusion agreed with the quality estimates of the merging performed according to human annotations (Table 3) . The first row (YARN, machine) corresponds to the automatic merge of all 210 synsets repeats the row of Table 1 with the same name, and the second row (YARN, crowd) corresponds to the selective merge performed according to the human judgements. So, 61+64+68 synset pairs (s 1 , s 2 ) were merged (Table 2) , and the 17 remained synsets we left as they were (s 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 115, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 221, |
|
"end": 228, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 422, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quality Assessment", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "The F 1 -measure shows no change after applying the Voting stage, yet the precision increases by 0.012 while the recall drops by 0.01. Despite the fact that the overall quality is constant regardless of the human annotations, it still presents an interesting finding, since people increase the precision of the merging. This is important because it allows to compensate, at least partially, for the reduction in the precision against the original synsets caused by the automatic merge. (Table 3) . It is also of interest that YARN contains 24.8 thousand synsets that presumably have a duplicate (58% of the synsets with two or more words), while the Russian Wiktionary has 13.2 thousand (40%), and RuThes-lite has only 6.3 thousand (28%). We may therefore conclude that the proposed approach should mainly be applied to resources that a priori are known to contain duplicate synsets rather than to improve the quality of expert-built resources.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 486, |
|
"end": 495, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The analysis of the results of the experiments and the annotations provided by our expert showed that in some cases it is almost impossible to derive a meaning from a synset. For instance, just a couple of synonyms is not enough to distinguish the meaning \"a woman thought to have evil magic powers\" from \"a woman who uses magic or sorcery\" (the latter definition does not imply an \"evil\" woman, which can be not obvious from a synonymy row).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synset Ambiguity", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Another example of such ambiguity are the concepts corresponding to \"a bed with a back\" and \"a bed without a back\". Given only a synset, it is barely possible to discern this shade of meaning and distinguish any of these two concepts from the more common one (simply \"a bed\"). With this observation in mind, we suggest that the authors of the wordnets for which the meanings of synsets are optional should take it into account and include definitions for vague concepts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synset Ambiguity", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Special attention should be given to the performance of the crowd workers. In our experiment, 25 workers provided 1262 answers to 420 pairwise comparison tasks (Figure 1) . The workers repeatedly reported that the tasks were time consuming due to data inconsistency. Suppose that synset sizes are n 1 and n 2 correspondingly, and an annotator spends O(n 1 + n 2 ) time to make a decision. Hence, even in the simplest case (Table 4 ) an annotator will perform 4 + 4 = 8 operations per pair, which is inconvenient. Further studies should avoid pairwise comparison in problems involving contextual or domain knowledge for making a decision by annotators. However, it still may be useful in various visual recognition tasks, especially when the workers are provided with an observable hint (Deng et al., 2013) . We should also note that this outcome agrees well with the study conducted by Wang et al. (2012) , when cluster-based task generation led to lower time spent rather than in pair-based tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 786, |
|
"end": 805, |
|
"text": "(Deng et al., 2013)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 886, |
|
"end": 904, |
|
"text": "Wang et al. (2012)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 170, |
|
"text": "(Figure 1)", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 430, |
|
"text": "(Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pairwise Annotation", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "We have analyzed all the cases when all the three workers gave the same answer to the task (Table 5). For YARN , the number of cases when all the workers agreed rises with the number of common words in synsets. This is quite expected considering that sharing more common words makes it more obvious that the synsets have common senses. However, we do not observe the same in RuThes-lite. Manual analyses of the data from RuThes-lite showed that its authors tend to discriminate meanings of synsets with common words by means of only one word, e.g. using a hyponym for a concept in one set and a corresponding hypernym in another. It is enough to emphasize the difference in meanings, but workers may find it problematic to detect the only pair of words that defines the difference in the pair of synsets. This task may become even more complicated in large synsets, as they grow in size along with the increase in the number of common words in them (Table 4) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 949, |
|
"end": 958, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Agreement & Issues", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "In this study, we presented an automated approach to synset deduplication. The results were obtained from expert labels and annotations provided by crowd work. At least three different annotations per every synset pair from two different resources (YARN and RuThes-lite) were used. The approach allows to significantly increase the synset quality in crowdsourcing lexical resources. Participation of people does not notably affect the average synset quality, though the precision slightly increases when people are involved.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The results showed that two synonyms are not sufficient for defining a meaning, but three words usually give a satisfactory result. So, it is three words that should be used as a threshold value for merging duplicate synsets when using the proposed deduplication approach in a fully automatic mode. Our results, including the crowd answers and the produced gold standard, are available 6 under the terms of Creative Commons Attribution-ShareAlike 3.0 license.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "As a possible future direction, we may suggest using more sophisticated similarity measures to select a threshold for fully automatic merging of synsets. Another possible way to improve the approach is to detect not just pairs, but clusters of synsets. This is hardly possible in resources that are manually crafted by a team of experts, but it is definitely worth exploring for crowdsourcing resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "https://www.mturk.com/mturk/welcome", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://ru.wiktionary.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://mtsar.nlpub.org/ 4 https://russianword.net/yarn-synsets. csv 5 http://www.labinform.ru/pub/ruthes/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://ustalov.imm.uran.ru/pub/ duplicates-gwc.tar.gz", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work is supported by the Russian Foundation for the Humanities, Project No. 13-04-12020 \"New Open Electronic Thesaurus for Russian\". The reported study was funded by RFBR according to the research project No. 16-37-00354 \u043c\u043e\u043b_a. The authors are grateful to Yulia Badryzlova for proofreading the text, and to Alisa Porshneva for labeling synsets. The authors would also like to thank all those who participated in the crowdsourced experiment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Dictionary of synonyms of the Russian Language", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Ljudmila", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Babenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "AST: Astrel", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ljudmila G. Babenko, editor. 2011. Dictionary of synonyms of the Russian Language. AST: Astrel, Moscow, Russia.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Soylent: A Word Processor with a Crowd Inside", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Bernstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Little", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bj\u00f6rn", |
|
"middle": [], |
|
"last": "Hartmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Ackerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Karger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Crowell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katrina", |
|
"middle": [], |
|
"last": "Panovich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23Nd Annual ACM Symposium on User Interface Software and Technology, UIST '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "313--322", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael S. Bernstein, Greg Little, Robert C. Miller, Bj\u00f6rn Hartmann, Mark S. Ackerman, David R. Karger, David Crowell, and Katrina Panovich. 2010. Soylent: A Word Processor with a Crowd Inside. In Proceedings of the 23Nd Annual ACM Symposium on User Interface Software and Tech- nology, UIST '10, pages 313-322, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A Spinning Wheel for YARN: User Interface for a Crowdsourced Thesaurus", |
|
"authors": [ |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Braslavski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Ustalov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--104", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pavel Braslavski, Dmitry Ustalov, and Mikhail Yu. Mukhin. 2014. A Spinning Wheel for YARN: User Interface for a Crowdsourced Thesaurus. In Pro- ceedings of the Demonstrations at the 14th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 101-104, Gothen- burg, Sweden. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Fine-Grained Crowdsourcing for Fine-Grained Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Krause", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "580--587", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jia Deng, Jonathan Krause, and Li Fei-Fei. 2013. Fine- Grained Crowdsourcing for Fine-Grained Recogni- tion. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 580-587.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "An Overview of OntoClean", |
|
"authors": [ |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Guarino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Handbook on Ontologies, International Handbooks on Information Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "201--220", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicola Guarino and Christopher A. Welty. 2009. An Overview of OntoClean. In Steffen Staab and Rudi Studer, editors, Handbook on Ontologies, Interna- tional Handbooks on Information Systems, pages 201-220. Springer Berlin Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Russian Lexicographic Landscape: a Tale of 12 Dictionaries", |
|
"authors": [ |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Kiselev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Krizhanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Braslavski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Computational Linguistics and Intellectual Technologies: papers from the Annual conference", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "254--271", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuri Kiselev, Andrew Krizhanovsky, Pavel Braslavski, et al. 2015. Russian Lexicographic Landscape: a Tale of 12 Dictionaries. In Computational Lin- guistics and Intellectual Technologies: papers from the Annual conference \"Dialogue\", volume 1, pages 254-271. RGGU, Moscow.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Thesauri in information retrieval tasks", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Natalia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Loukachevitch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Natalia V. Loukachevitch. 2011. Thesauri in infor- mation retrieval tasks. Moscow University Press, Moscow, Russia.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Introduction to WordNet: An On-line", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Beckwith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Lexical Database. Lexicography", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "235--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A. Miller, Richard Beckwith, Christiane Fell- baum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An On-line Lexical Database. Lexicography, 3:235-244.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The Language Demographics of Amazon Mechanical Turk", |
|
"authors": [ |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Irvine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Kachaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "79--92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellie Pavlick, Matt Post, Ann Irvine, Dmitry Kachaev, and Chris Callison-Burch. 2014. The Language De- mographics of Amazon Mechanical Turk. Transac- tions of the Association for Computational Linguis- tics, 2:79-92.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Cleaning noisy wordnets", |
|
"authors": [ |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Darja", |
|
"middle": [], |
|
"last": "Fi\u0161er", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beno\u00eet Sagot and Darja Fi\u0161er. 2012. Cleaning noisy wordnets. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Semi-Automatic Enrichment of Crowdsourced Synonymy Networks: The WISIGOTH System Applied to Wiktionary. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Franck", |
|
"middle": [], |
|
"last": "Sajous", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Navarro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Gaume", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Pr\u00e9vot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yannick", |
|
"middle": [], |
|
"last": "Chudy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "63--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franck Sajous, Emmanuel Navarro, Bruno Gaume, Laurent Pr\u00e9vot, and Yannick Chudy. 2013. Semi- Automatic Enrichment of Crowdsourced Synonymy Networks: The WISIGOTH System Applied to Wiktionary. Language Resources and Evaluation, 47(1):63-96.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Cheap and Fast-but is It Good?: Evaluating Non-expert Annotations for Natural Language Tasks", |
|
"authors": [ |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Brendan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "254--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and Fast-but is It Good?: Evaluating Non-expert Annotations for Nat- ural Language Tasks. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 254-263, Strouds- burg, PA, USA. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Add-Remove-Confirm: Crowdsourcing Synset Cleansing", |
|
"authors": [ |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Ustalov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Kiselev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Application of Information and Communication Technologies (AICT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dmitry Ustalov and Yuri Kiselev. 2015. Add-Remove- Confirm: Crowdsourcing Synset Cleansing. In Ap- plication of Information and Communication Tech- nologies (AICT), 2015 IEEE 9th International Con- ference on, pages 143-147. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A Crowdsourcing Engine for Mechanized Labor", |
|
"authors": [ |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Ustalov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Institute for System Programming", |
|
"volume": "27", |
|
"issue": "", |
|
"pages": "351--364", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dmitry Ustalov. 2015. A Crowdsourcing Engine for Mechanized Labor. Proceedings of the Institute for System Programming, 27(3):351-364.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "CrowdER: Crowdsourcing Entity Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Jiannan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Kraska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Franklin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianhua", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proc. VLDB Endow", |
|
"volume": "5", |
|
"issue": "11", |
|
"pages": "1483--1494", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiannan Wang, Tim Kraska, Michael J. Franklin, and Jianhua Feng. 2012. CrowdER: Crowdsourcing En- tity Resolution. Proc. VLDB Endow., 5(11):1483- 1494.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Do the following synsets have the same meanings: \"s1\" and \"s2\"?[ ] Yes [ ] No", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Task format for Voting stage (the original text was in Russian).", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"text": "Synset quality.", |
|
"content": "<table><tr><td/><td colspan=\"2\">Avg P Avg R Avg F 1</td></tr><tr><td>BAB</td><td>1.000 0.661</td><td>0.796</td></tr><tr><td colspan=\"2\">YARN, aligned 0.901 0.634</td><td>0.744</td></tr><tr><td colspan=\"2\">YARN, machine 0.840 0.774</td><td>0.805</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"text": "Crowdsourcing synset deduplication.", |
|
"content": "<table><tr><td># of common words</td><td>2</td><td>3</td><td>4+</td></tr><tr><td>YARN</td><td>61 / 70</td><td>64 / 70</td><td>68 / 70</td></tr><tr><td>RuThes-lite</td><td>25 / 70</td><td>40 / 70</td><td>51 / 70</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "YARN synset deduplication.", |
|
"content": "<table><tr><td/><td colspan=\"2\">Avg P Avg R Avg F 1</td></tr><tr><td colspan=\"2\">YARN, machine 0.840 0.774</td><td>0.805</td></tr><tr><td>YARN, crowd</td><td>0.852 0.764</td><td>0.805</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Average synset sizes.", |
|
"content": "<table><tr><td colspan=\"2\"># of common words 2</td><td>3</td><td>4+</td></tr><tr><td>YARN</td><td colspan=\"3\">4.2 4.6 5.5</td></tr><tr><td>RuThes-lite</td><td colspan=\"3\">4.3 5.0 5.8</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "# of merge decisions made unanimously.", |
|
"content": "<table><tr><td># of common words</td><td>2</td><td>3</td><td>4+</td></tr><tr><td>YARN</td><td>32 / 70</td><td>47 / 70</td><td>57 / 70</td></tr><tr><td>RuThes-lite</td><td>36 / 70</td><td>35 / 70</td><td>32 / 70</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |