{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:05:44.332703Z" }, "title": "Building Sense Representations in Danish by Combining Word Embeddings with Lexical Resources", "authors": [ { "first": "Ida", "middle": [ "R\u00f8rmann" ], "last": "Olsen", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Bolette", "middle": [ "S" ], "last": "Pedersen", "suffix": "", "affiliation": {}, "email": "bspedersen@hum.ku.dk" }, { "first": "Asad", "middle": [], "last": "Sayeed", "suffix": "", "affiliation": {}, "email": "asad.sayeed@gu.se" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Our aim is to identify suitable sense representations for NLP in Danish. We investigate sense inventories that correlate with human interpretations of word meaning and ambiguity as typically described in dictionaries and wordnets and that are well reflected distributionally as expressed in word embeddings. To this end, we study a number of highly ambiguous Danish nouns and examine the effectiveness of sense representations constructed by combining vectors from a distributional model with the information from a wordnet. We establish representations based on centroids obtained from wordnet synsets and example sentences as well as representations established via a clustering approach; these representations are tested in a word sense disambiguation task. We conclude that the more information extracted from the wordnet entries (example sentence, definition, semantic relations) the more successful the sense representation vector.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Our aim is to identify suitable sense representations for NLP in Danish. We investigate sense inventories that correlate with human interpretations of word meaning and ambiguity as typically described in dictionaries and wordnets and that are well reflected distributionally as expressed in word embeddings. To this end, we study a number of highly ambiguous Danish nouns and examine the effectiveness of sense representations constructed by combining vectors from a distributional model with the information from a wordnet. We establish representations based on centroids obtained from wordnet synsets and example sentences as well as representations established via a clustering approach; these representations are tested in a word sense disambiguation task. We conclude that the more information extracted from the wordnet entries (example sentence, definition, semantic relations) the more successful the sense representation vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The effective handling of sense ambiguity in Natural Language Processing (NLP) is an extremely challenging task, as is well described in the literature (Kilgarriff, 1997; Agirre and Edmonds, 2006; Palmer et al., 2004; Navigli and Di Marco, 2013; Edmonds and Kilgarriff, 2002; Mihalcea et al., 2004; Pradhan et al., 2007) . In this paper, we focus on a lower-resourced language, Danish, with the hypothesis that if we can compile sense inventories that both correlate well with human interpretations of word meaning and are well-reflected statistically in large corpora, we would have made a first and important step towards an improved and useful sense inventory: not too fine-grained, but still capturing the essential meaning differences that are relevant in language processing. 
We investigate this hypothesis by building sense representations from word embeddings using wordnet-associated data. In order to assess the performance of the proposed model, we study a number of Danish nouns with very high meaning complexity, i.e., nouns that are described in lexica as being extremely polysemous. We apply a central semantic NLP task as our test scenario, namely that of word sense disambiguation (WSD). For lower-resourced languages, obtaining performance better than a majority-class baseline in WSD tasks is very difficult due to the extremely unbalanced distribution of senses in small corpora. However, the task is an ideal platform for achieving our goal of examining different approaches to sense representation. Our aim is both to support a data-driven basis for distinguishing between senses when compiling new lexical resources and to enrich and supplement our lexical resource with distributional information from the word embedding model. In the following, we carry out a series of experiments and evaluate the sense representations in a WSD lexical sample task. For the experiments, we represent wordnet synset information from the Danish wordnet, DanNet (Pedersen et al., 2009), in a word embedding model. We test five different Bag-Of-Words (BOW) combinations, defined as 'sense-bags', that we derive from the synsets, including information such as the example sentence, the definition, and semantic relations. Generally speaking, the synsets incorporate associated concepts via the semantic relations which lexicographers have chosen as defining for each particular concept. This approach sheds light on the extent to which the hand-picked words in the synsets are actually representative of the processed corpus data. It is not possible at this stage to evaluate an unsupervised word sense induction (WSI) system for Danish with curated open-source data. However, with a knowledge-based system, where the sense representations are linked to lexical entries, it is possible to evaluate against the semantically annotated data available for Danish, the SemDaX corpus (Pedersen et al., 2016). This corpus is annotated with dictionary senses. The paper is structured as follows: Section 2 describes Danish as a lower-resourced language and presents existing semantic resources that are available for our task. In Section 3, we present related work, and in Section 4 we describe our five experiments in detail. Sections 5 and 6 describe and discuss our results, and in Section 7 we conclude and outline plans for future work.", "cite_spans": [ { "start": 152, "end": 170, "text": "(Kilgarriff, 1997;", "ref_id": "BIBREF16" }, { "start": 171, "end": 196, "text": "Agirre and Edmonds, 2006;", "ref_id": "BIBREF0" }, { "start": 197, "end": 217, "text": "Palmer et al., 2004;", "ref_id": "BIBREF25" }, { "start": 218, "end": 245, "text": "Navigli and Di Marco, 2013;", "ref_id": "BIBREF22" }, { "start": 246, "end": 275, "text": "Edmonds and Kilgarriff, 2002;", "ref_id": "BIBREF10" }, { "start": 276, "end": 298, "text": "Mihalcea et al., 2004;", "ref_id": "BIBREF20" }, { "start": 299, "end": 320, "text": "Pradhan et al., 2007)", "ref_id": "BIBREF34" }, { "start": 1974, "end": 1997, "text": "(Pedersen et al., 2009)", "ref_id": "BIBREF27" }, { "start": 2893, "end": 2916, "text": "(Pedersen et al., 2016)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
}, { "text": "Semantic processing of lower-resourced languages is a challenging enterprise typically calling for combined methods of applying both supervised and unsupervised methods in combination with language transfer from richerresourced languages. For Danish we have now a number of standard semantic resources and tools such as a wordnet and SemDaX corpus, a framenet lexicon (Pedersen et al., 2018b) , several word embedding models (S\u00f8rensen and Nimb, 2018) , and a preliminary sense tagger (Martinez Alonso et al., 2015) . However, the size and accessibility of the resources as well as the evaluation datasets accompanying them typically constitute a bottleneck. For instance, the wordnet, DanNet, which contains 65,000 synsets, is open-source, but the links from DanNet to the complete sense inventory of The Danish Dictionary is not. Our work requires this key, which necessitated connecting the dictionary labels to DanNet synsets through cumbersome manual compilation. 1", "cite_spans": [ { "start": 368, "end": 392, "text": "(Pedersen et al., 2018b)", "ref_id": "BIBREF30" }, { "start": 425, "end": 450, "text": "(S\u00f8rensen and Nimb, 2018)", "ref_id": "BIBREF38" }, { "start": 484, "end": 514, "text": "(Martinez Alonso et al., 2015)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Danish as a lower-resourced language", "sec_num": "2." }, { "text": "Both supervised and unsupervised methods to represent words and word senses have been widely explored in NLP, especially given the popularity of word embeddings. Unsupervised approaches to obtain not only word embeddings, but also sense embeddings (such as SenseGram (Pelevina et al., 2017) , Adagram (Bartunov et al., 2016) , and Neelakantan et al. (2014)) do not rely on existing large datasets; they are are thus suitable for lower-resourced languages. A downside is that the induced senses are not humanly readable or easy to link to lexical resources; this limits their applicability. An incorporation of valuable high-quality resources, e.g., wordnets, in unsupervised methods can augment the sense representations with additional lexical information, especially for non-frequent word senses. The combination of contextual and knowledge-based information can be established by joint training (Faralli et al., 2016; Johansson and Nieto-Pi\u00f1a, 2015; Mancini et al., 2017) , or by postprocessing normal word embeddings (Rothe and Sch\u00fctze, 2017; Bhingardive et al., 2015; Chen et al., 2014; Pilehvar and Collier, 2016; Camacho-Collados et al., 2016) . Alternatively, Saedi et al. (2018) successfully converted a semantic network (WordNet) into a semantic space, where the semantic affinity of two words is stronger when they are closer in the semantic network (in terms of paths). They tested the resulting representations in a semantic similarity task and found a significant improvement compared to a regular word2vec space. The study also indicated that the more semantic relations included from the semantic network, the better the result. Bhingardive et al. (2015) detected the most frequent senses by comparing the target word embedding in a word embedding model with constructed sense representations based on synset information represented in a word embedding model. Our work is also related to Ustalov et al. (2018) who proposed a synset-averaged sense-embedding approach to WSD for an under-resourced language (Russian). 
They evaluate the system's clustering on a gold standard with an average of 3.2 senses per word (Panchenko et al., 2018). Their results show that the task of building unsupervised sense embeddings this way is remarkably difficult. We estimate the quality of the sense representations in a lexical sample WSD task. The contribution of this paper is therefore a study of these methods on Danish data, evaluated in a WSD task rather than on most frequent sense detection or against a gold standard. The work provides a detailed investigation of which information types from DanNet improve our WSD results, with more focus on the role of example sentences than in related work.", "cite_spans": [ { "start": 267, "end": 290, "text": "(Pelevina et al., 2017)", "ref_id": "BIBREF32" }, { "start": 301, "end": 324, "text": "(Bartunov et al., 2016)", "ref_id": "BIBREF5" }, { "start": 898, "end": 920, "text": "(Faralli et al., 2016;", "ref_id": "BIBREF11" }, { "start": 921, "end": 952, "text": "Johansson and Nieto-Pi\u00f1a, 2015;", "ref_id": "BIBREF13" }, { "start": 953, "end": 974, "text": "Mancini et al., 2017)", "ref_id": "BIBREF18" }, { "start": 1021, "end": 1046, "text": "(Rothe and Sch\u00fctze, 2017;", "ref_id": "BIBREF36" }, { "start": 1047, "end": 1072, "text": "Bhingardive et al., 2015;", "ref_id": "BIBREF6" }, { "start": 1073, "end": 1091, "text": "Chen et al., 2014;", "ref_id": "BIBREF9" }, { "start": 1092, "end": 1119, "text": "Pilehvar and Collier, 2016;", "ref_id": "BIBREF33" }, { "start": 1120, "end": 1150, "text": "Camacho-Collados et al., 2016)", "ref_id": "BIBREF8" }, { "start": 1168, "end": 1187, "text": "Saedi et al. (2018)", "ref_id": "BIBREF37" }, { "start": 1645, "end": 1670, "text": "Bhingardive et al. (2015)", "ref_id": "BIBREF6" }, { "start": 1904, "end": 1925, "text": "Ustalov et al. (2018)", "ref_id": "BIBREF39" }, { "start": 2133, "end": 2157, "text": "(Panchenko et al., 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3." }, { "text": "For a number of years now, embeddings have been ubiquitous in computational approaches to numerous NLP tasks. While word embeddings, such as word2vec (Mikolov et al., 2013), have been central in NLP research touching on lexical semantics, other forms of embeddings, from character to paragraph to multimodal, have proven to be flexible, often multi-purpose forms of linguistic representation. Our overall idea is to build sense representations in vector spaces with information about associated words extracted from a lexical resource, namely a wordnet. We make use of word embeddings to construct a sense representation, a synset embedding. The wordnet synset information (i.e., words) associated with a given sense of a word is collected in a synset \"sense-bag\". The synset sense-bag is used to construct a unified sense representation, the synset embedding, inside a word embedding model. See Figure 1. Note that for each synset, DanNet provides the handpicked related concepts (as illustrated in Figure 2), one handpicked example sentence where the sense is used in context, and (part of) the sense definition from The Danish Dictionary. 
For example, the synset sense-bag of the polysemous Danish target word model (approximately the same concept as in English), in the sense of a representation of something (sometimes on a smaller scale), consists of the example sentence \"Faergen er en model i 1:4\" and the synset members Effekt, videnskab, fremstille, figur, afpr\u00f8ve, gengive, pynte, arbejdsmodel, gine, globus, mockup, modelbygning, modelfly, skalamodel, skibsmodel, modeljernbane, modelbil, modelskib, modeltog, kirkeskib 2 . In addition, the synset sense-bag of model in the sense of a schematic description or illustration of an abstract, complicated thing or relation has the example sentence \"Watson og Crick fremsatte deres model af DNA-molekylet som en dobbeltspiral, der kan visualiseres som en vredet stige\" and the synset members Anskueligg\u00f8relse, videnskab, atommodel, forklaringsmodel 3 .", "cite_spans": [ { "start": 156, "end": 178, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 896, "end": 904, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1006, "end": 1014, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Five word embedding experiments", "sec_num": "4." }, { "text": "First, we construct synset embeddings, represented in a word embedding model, by unifying the information extracted from DanNet for each sense of the target nouns. These synset embeddings are tested in a WSD task using cosine similarity. Second, we apply the synset embeddings to sense-tag new unannotated data via a clustering approach. By doing this, we build more corpus-influenced synset embeddings (i.e., synset embeddings not exclusively built from wordnet information) and, at the same time, obtain training data of a proper size to benefit from the advantages of machine learning models in future WSD experiments. See details of the method in section 4.1. The method will work when there is a correspondence between how the words in the knowledge base of the given lexical resource (DanNet) are distributed across senses and how those words are distributed in the word embedding model. If the words associated with each sense in DanNet are important for the concept's use in language, then collecting those words in the word embedding model is reasonable, since such a model represents word similarity based on the distribution of words in data. The approach can be seen as highly scalable, since the sense representations can be obtained without full annotation of a training corpus, and it is applicable to all word entries included in the input resource. The method would therefore also be applicable to other lower-resourced languages. It should be emphasized that we test our approach on a set of some of the most polysemous words found in Danish and operate on the most fine-grained version of the applied evaluation data (the SemDaX corpus). Working with this corpus, Pedersen et al. (2018a) suggested a principled approach to sense clustering. In that work, the coarsest sense granularity level proved to be the most operational (in a WSD task), obtaining the highest inter-annotator agreement score. In our work, however, we choose the finest level of granularity to assess the potential of the method on a genuinely hard task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Five word embedding experiments", "sec_num": "4."
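}, { "text": "To make the first step concrete before the details in section 4.1., the following is a minimal Python sketch (not the authors' released code) of how a synset sense-bag can be turned into a synset embedding as the centroid of its word vectors in a pre-trained word2vec space; the model file name and the example sense-bag are hypothetical:

import numpy as np
from gensim.models import KeyedVectors

# Load a pre-trained Danish word2vec model (hypothetical file name; the
# model of S\u00f8rensen and Nimb (2018) is distributed separately).
wv = KeyedVectors.load_word2vec_format(\"danish_word2vec.bin\", binary=True)

def synset_embedding(sense_bag):
    \"\"\"Centroid of the word vectors of the in-vocabulary words of a
    sense-bag; returns None when no word is covered by the model.\"\"\"
    vectors = [wv[w] for w in sense_bag if w in wv]
    return np.mean(vectors, axis=0) if vectors else None

# Hypothetical sense-bag for one sense of 'model' (exp. 2: the example
# sentence as a BOW, lowercased and stripped of punctuation).
bag = [\"faergen\", \"er\", \"en\", \"model\", \"i\"]
emb = synset_embedding(bag)
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Five word embedding experiments", "sec_num": "4."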
}, { "text": "We collect various synset information in synset sense-bags, and each word sense representation (synset embedding) is the centroid of the word embeddings from the corresponding sense-bag. The word embeddings originate from the word2vec word embedding model described in section 4.3., and the constructed synset embeddings live within that same vector space. The synset information varies for each experiment. More precisely, a synset sense-bag is a set, B = {w 1 , . . . w n }, where n is the number of words in B and the w's are the words selected 4 from the synset information. Each word, w i in B, can be represented by a word vector \u2212 \u2192 W i in the word embedding model. These word vectors in B are averaged into a mean vector,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Details", "sec_num": "4.1." }, { "text": "\u2212 \u2192 M , where \u2212 \u2192 M = \u2212 \u2192 W i n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Details", "sec_num": "4.1." }, { "text": "\u2212 \u2192 M is the resulting synset embedding of the given synset sense-bag, B. Therefore, for each sense of each targetword we can collect a synset sense-bag, B, from DanNet and construct a synset embedding, \u2212 \u2192 M , with the word embedding model. The extracted information from DanNet contain only words (not numbers). The words are not weighted when constructing the synset embedding with their word embeddings. Multi-word terms are treated as multiple words under word tokenization (these instances are rarer in Danish, than in English). In doing this, we examine whether the selected knowledge-based information from DanNet in combination with the distributional representation of the words in the synset sense-bags can construct appropriate sense representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Details", "sec_num": "4.1." }, { "text": "Four types of synset sense-bags for building synset embeddings are tested:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Details", "sec_num": "4.1." }, { "text": "1. Local synset members: Collection of hypernyms, hyponyms, synonyms, near-synonyms, used-for and made-by semantic relations, together with the bag-ofwords (BOW) of the word sense definition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Details", "sec_num": "4.1." }, { "text": "2. Example sentence: BOW from the example sentence using the sense in context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Details", "sec_num": "4.1." }, { "text": "3. Example sentence+: BOW collection of local raw example sentence and raw example sentences from the hypo-and hypernym synsets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Details", "sec_num": "4.1." }, { "text": "4. Combination: All collections from exp. 1-3 put together and the BOW of definitions of hypo-and hypernyms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Details", "sec_num": "4.1." }, { "text": "A fifth and final synset embedding is tested, in which the best performing synset embedding above is used as a seed in the k-means algorithm (Lloyd, 1982) to auto-tag unannotated example context sentences by a clustering approach:", "cite_spans": [ { "start": 141, "end": 154, "text": "(Lloyd, 1982)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Details", "sec_num": "4.1." 
}, { "text": "The idea is to tune the synset embeddings by adding more data than merely information from DanNet. The seeds bootstrap the resulting clusters to a category, and since each target word has a set number of senses (synset embeddings), the number of clusters per target word is pre-set. See figure 3 for a visualization. The new and unlabelled example context sentences are extracted from Korpus DK 5 and are simply word tokenized, lowercased, stripped of punctuation, considered as a BOW, and represented in the word embedding model (with the same method as decribed above for constructing synset embeddings from sense-bags). Around 1000 example sentences are extracted per targetword. We apply the K-means algorithm from the cluster package 6 included in the module Scikit-Learn (Pedregosa and Varoquaux, 2011) from Python. We set the parameter of number of clusters (n clusters) to the number of synset embeddings constructed for the current target word and set the synset embeddings as initial cluster centers (init). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cluster centroid: Centroid of clustered context vectors", "sec_num": "5." }, { "text": "WSI systems and sense embeddings have typically been evaluated by comparing to a gold standard or in a WSD task measuring the quality by performance. In our approach, we implicitly seek to find a gold standard for word sense representations, and the quality of the developed sense representations are measured here by performance in a WSD task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "Computational semantic analysis systems are typically evaluated on the data sets from the ongoing series of Se-mEval, the International Workshop on Semantic Evaluation (Kilgarriff and Palmer, 2000) . The evaluation data produced for SemEval 2013 task 13: Word Sense Induction for Graded and Non-graded Senses3 is the standard data used to test WSI systems and sense embeddings. Our evaluation data, SemDaX, contains unranked sense annotations, and annotators were asked to assign one sense to the given instance.", "cite_spans": [ { "start": 168, "end": 197, "text": "(Kilgarriff and Palmer, 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "5 A clustering of the annotated sentences in the evaluation data, SemDax, would be more precise, but would not be a scalable approach relying on as little annotated data as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "6 https://scikit-learn.org/stable/modules/ generated/sklearn.cluster.KMeans.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "Three test sentences from SemDaX for the Danish targetword model (approximately similar concept as in English) are shown below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "\u2022 Og s\u00e5 havde vi kursister den luksus ogs\u00e5 at have fire fantastiske modeller at arbejde med 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." 
}, { "text": "\u2022 Men s\u00e5dan er prisklassen konkret, og de fleste modeller bliver ofte kun produceret i et meget lille antal 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "\u2022 Jeg bryder mig ikke om ordet model 9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "It has been observed in the SemDaX corpus that almost all discrepancies among annotators were due to underspecified examples, i.e., examples where the precise word sense could not be deduced from the isolated corpus excerpt alone (Pedersen et al., 2018a) . In order to account for this fact, all diverging annotations in the data set are considered to be correct (and unranked). The systems applied do not detect groups of relevant senses; they merely rank by similarity and pick the most similar sense. Since the annotated data do not contain ranked senses, and our word sense representation system does not choose a set (or cluster) of relevant senses, a direct comparison with the systems developed for SemEval 2013 task 13 with the same measures is not straightforward. As indicated above, there might be multiple (correct) classes per instance. The combination of classes might change at every instance. We make use of an accuracy score that counts a \"miss\" for each instance where the system fails to identify any human-labelled sense, and a \"hit\" whenever it guesses at least one that matches a human label 10 . It should be noted that the system has \"an advantage\" in cases where annotators disagree (since more that one value is considered correct) so the results need to be analyzed together with the inter-annotator agreement. This measure is equally generous to the baselines as it is to the systems we tested.", "cite_spans": [ { "start": 230, "end": 254, "text": "(Pedersen et al., 2018a)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "WSD is done by maximizing the cosine similarity between the synset embeddings and the given test sentence represented as a context vector within the word embedding model. The test sentence context vector is the mean vector of the sentence considered as a bag-of-word vectors. The highest-similarity sense representation is chosen. We apply three baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "\u2022 Extended Lesk (E-lesk): WSD by cosine similarity between the centroid of the BOW from the wordnet definition of the word sense, and the evaluation text instance vector. (Banerjee and Pedersen, 2002) \u2022 Random: WSD by chance 7 and we as participants then had the luxury of having four fantastic models to work with 8 But that is how the price level actually is, and most models are produced in a very limited amount 9 I do not like the word model 10 We tested the Kullback-Leibler divergence score as an alternative \"soft\" evaluation measure to incorporate the fact that there can be multiple correct answers, but the human distributions are far more \"spiky\" than the normalized system scores, leading to statistically insignificant differences between systems. Table 1 : Target words with number of DanNet synsets (column 1) and number of senses actually encountered in the data (column 2). 
}, { "text": "\u2022 Most frequent sense (MF)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "The MF sense as default is usually a very hard baseline to beat, in particular for the most polysemous part of the vocabulary, which is what we consider here. See the discussion in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "4.2." }, { "text": "DanNet: The Danish wordnet, DanNet, was compiled semi-automatically from the Danish dictionary Den Danske Ordbog (Hjorth and Kristensen, 2005). These two resources are therefore highly related and possible to link. The 65,000 synsets in DanNet are interrelated via 325,000 semantic relations. All synsets are assigned an ontological type and a corresponding supersense, and come with a definition and an example sentence. The information extracted from DanNet consists of word collections: either words from relevant synsets (i.e., related concepts), or the words from the synset example sentence and definition considered as a BOW. The BOW (i.e., the synset sense-bag) is unified and represented as a centroid in the word embedding model according to the method described in 4.1. Evaluation data: As previously mentioned, the words of interest in our work are 17 of the most polysemous Danish nouns. These words were handpicked by language experts for lexical sample studies, as they are both extremely polysemous and frequent. See Table 1 . The SemDaX corpus is a subpart of the 45-million-word CLARIN Reference Corpus (Asmussen, 2012) and consists of different text types. We extract from SemDaX the 6,012 sentences containing our polysemous target nouns. These are annotated with dictionary senses by 2-6 annotators (advanced students and researchers). There are 355 sentences per target noun on average, and the more polysemous a word, the more sentences are included. For the WSD task, we include the window of 5 context words around the target noun in each annotated sentence. The text is simply lowercased and punctuation is removed. As mentioned above, every test sentence is considered as a BOW and represented as a centroid in the word embedding model (similarly to the synset sense-bags). Note that the nouns are highly ambiguous, so a Krippendorff's \u03b1 agreement of 0.80 is hard to reach here. The work of Pedersen et al. (2018a) finds an agreement of 0.67 useful, which is mostly met in the agreement statistics. For relatively fine-grained sense inventories, a lower agreement score is acceptable. The word embedding model was created by the Society for Danish Language and Literature (S\u00f8rensen and Nimb, 2018). They used the Gensim package (\u0158ehu\u0159ek and Sojka, 2010) to train a Word2Vec model (Mikolov et al., 2013) on a corpus of roughly 920 million running words. The corpus had 6.3 million token types, of which 5 million occurred fewer than 5 times. The CBOW word embeddings have 500 dimensions and were trained with a window size of 5 and a frequency threshold for rare words of 5. Korpus DK: a corpus 11 of different text types in Danish with a size of 56 million words. 
It consists of relatively recent, mostly everyday language use. For each target noun, around 1000 sentences containing that noun are extracted. A window of 5 words and no normalization are chosen, in line with the preprocessing of the other data in this project. Every sentence is considered as a BOW and represented as a centroid in the vector space. Software packages: With Python (van Rossum, 1995) we used the scikit-learn package (Pedregosa and Varoquaux, 2011), the NLTK package (Bird et al., 2009), and SciPy (Jones et al., 2001). Data mapping: As mentioned in the introduction, a key from the dictionary senses in the evaluation data to DanNet was manually created. For the 17 target nouns, which have 19.1 dictionary senses on average (of which 15.6 on average appear in the annotated data), 159 links were found, an average of 9.4 senses per word. See Table 1 for an overview across target words. The number of DanNet senses is slightly smaller than that of the dictionary. This is for the most part due to the many idiomatic expressions in the dictionary, which, as is normal practice, are not included in the wordnet. To avoid leaving these instances out, the dictionary labels of the target noun in the figurative expressions are merged with the synset that corresponds to the literal sense of the noun. This follows the principle for annotating idiomatic expressions (without a dictionary entry) and other figurative speech in the work of Pedersen et al. (2018a), where the annotation process is described.", "cite_spans": [ { "start": 113, "end": 142, "text": "(Hjorth and Kristensen, 2005)", "ref_id": "BIBREF12" }, { "start": 1112, "end": 1128, "text": "(Asmussen, 2012)", "ref_id": null }, { "start": 1904, "end": 1926, "text": "Pedersen et al. (2018)", "ref_id": "BIBREF29" }, { "start": 2183, "end": 2208, "text": "(S\u00f8rensen and Nimb, 2018)", "ref_id": "BIBREF38" }, { "start": 2240, "end": 2265, "text": "(\u0158ehu\u0159ek and Sojka, 2010)", "ref_id": null }, { "start": 2292, "end": 2313, "text": "(Mikolov et al., 2013", "ref_id": "BIBREF21" }, { "start": 3100, "end": 3131, "text": "(Pedregosa and Varoquaux, 2011)", "ref_id": "BIBREF31" }, { "start": 3151, "end": 3170, "text": "(Bird et al., 2009)", "ref_id": "BIBREF7" }, { "start": 3181, "end": 3200, "text": "(Jones et al., 2001", "ref_id": "BIBREF14" }, { "start": 4121, "end": 4144, "text": "Pedersen et al. (2018a)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 1023, "end": 1030, "text": "Table 1", "ref_id": null }, { "start": 3527, "end": 3534, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Materials", "sec_num": "4.3." }, { "text": "The results for all experiments are shown in Table 2 . Except for the cluster centroid experiment, the results show steady improvements from .21 to .34 and exceed the random and E-lesk baselines at .13 and .16, respectively. However, the performance does not reach the MF sense baseline at .56 (discussed in Section 6). When excluding the MF class in the data and the corresponding synset embedding, the experiments actually perform slightly better and show the same steady improvements (again, except for exp. 5). Interestingly, when working with less frequent senses, the performance of exp. 1 seems to be the most improved.", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 52, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "5."
}, { "text": "The best results for WSD with cosine similarity are achieved when combining all components (exp. 4): hypernyms, hyponyms, synonyms, near-synonyms, used-for, made-by semantic relations together with BOW word sense definition, the BOW example sentence, as well as and the BOW example sentences from hypo-and hypernym synsets. The more features used, the better the performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6." }, { "text": "Synset richness: The size of and shared proportion of information of the synsets seems to be important for the sense representations in experiment 4, where the example sentence information for experiment 2-3 works best for homonyms. Experiment 4 performs worse than experiment 2, in particular in the case of the words hold, and vold, but also for slag, stand, kontakt, and selskab. Investigation of the synset member size for hold shows that almost half of the synsets only have one concept associated with it in DanNet, namely one hypernym. This is rather little information for establishing a synset embedding, and further, hypernyms tend to be more general and thus less informative. Level of polysemy: Annotators report that the sentences often lack context and that the senses are highly polysemous (Pedersen et al., 2018a) . Worst results from the system are found for lys which has a high number of senses (16), but no huge evaluation advantage since the interannotator agreement is relatively high (.81). Also, though the sense number is high, the senses are related in meaning and the differences are often very subtle. The target nouns lys and kort both share word form with common adjectives in Danish, which possibly affects the word embeddings. This could explain why the system performs worse for these words. The words that generally are disambiguated most satisfactorily are blik, hold, stand, top and selskab. All of these words have low overlap in the DanNet synsets, are homonyms, or have non-subtle sense differences. For the word top and especially stand, the performance of experiment 4 is higher than for the other words. This might be due to the low number of senses of these words: stand has 4 senses, and top has 5, where the average number of senses is 9.4. Also, stand is often annotated with the same sense (and high inter-coder agreement) which suggests that there is one highly dominant sense. In experiment 1-4, the WSD of blik also works well compared to the other words considering the performance of the most frequent sense. This word has a relatively low inter-annotator agreement and \"only\" 6 senses, which could be an explanation. This word is also a case of homonymy (i.e., unrelated meanings) which is foreseen to increase the distance between the sense embeddings in the word embedding model. Idiomatic expressions: These expressions are relatively static in appearance. A BOW of an idiomatic expression as a sense representation vector will most likely disambiguate a corresponding context vector correctly. (See discussion of face below.) Now they are merged with the literal sense used in the expression, which creates bias and imprecise mapping between dictionary senses and DanNet synsets. Clusters: Experiment 5 was motivated by the hypothesis that the best synset embeddings from former experiments might work as seeds for the clustering of more example sentence data, where the cluster centroids could function as a new synset embedding. 
However, the results prove otherwise, suggesting that the constructed synset embeddings do not carry clear enough information to serve as a basis for clustering. A qualitative investigation of the sentences in the clusters confirms this, although some patterns do begin to emerge. The target word ansigt (face) has 6 senses. The non-literal senses were captured in the least satisfactory way: the clusters for face as a manifestation/appearance of a thing or phenomenon, and face as the character/nature of a person, contained many instances of the literal and simplest sense of face. The clusters of this literal sense proved to be the best and had fewer non-literal senses, although they still contained several errors. This sense was often mixed with face as an expression/state of mood, which can actually also be hard for annotators to distinguish. The cluster of face as a face-like front of an object contains mostly non-literal senses: the DanNet synset only contains form (same as in English) as the related lexeme and no person-related or physiological words. This cluster contains mostly sentences about God and the Bible, which could be because the clustering algorithm followed that gradient. Finally, face as a public profile/known face performs relatively well and captures most instances where kendt ansigt (known face) and ansigt udadtil (public/outward face) appear in the sentence. MF sense is hard to beat: As mentioned previously, beating a majority classifier is in general very difficult, and even more difficult when dealing with a lower-resourced language such as Danish. Our experiments indeed confirm this; however, it should be emphasized that we examine the performance of the approach when tested on the hardest task available: the most polysemous nouns in Danish. In other words, our model is expected to perform considerably better on closer-to-average polysemy words.", "cite_spans": [ { "start": 805, "end": 829, "text": "(Pedersen et al., 2018a)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6." }, { "text": "This study set out to determine the possibility of building appropriate sense representations for Danish by combining word embeddings with synset information from the Danish wordnet. The rationale is to combine corpus evidence with senses outlined by humans. We represented the data in a word embedding space and tested the process in a very hard WSD task. Thousands of example sentences were auto-tagged by sense clustering. As expected, wordnet-associated data prove to be quite informative for the WSD task. Generally speaking, the more semantic relations and information included from the wordnet, the better the results. However, the word sense representation system has room for improvement, in that the most-frequent-sense baseline is not yet overcome in these unbalanced datasets. Nevertheless, our sense representation system produces promising results. The best synset embeddings in our study are able to disambiguate well above chance, considering the highly polysemous selection of test words (almost 20 senses on average). We expect performance to increase when handling Danish vocabulary items with closer-to-average polysemy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "For future work, we plan to enrich the synset information with data from The Danish Thesaurus, and we foresee that these enriched data could potentially improve our model. 
Additionally, the technique of Nieto-Pi\u00f1a and Johansson (2018), linking word sense embedding models to lexical resources, is interesting and could be relevant for future improvements. Finally, it would be interesting in the future to experiment with the granularity level of senses, with the exclusion of idiomatic expressions from the WSD task, and with using our sense-based word clusters to create new evaluation materials.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "We build the sense representations with DanNet, but our evaluation data, SemDaX, is annotated with dictionary labels. The Danish Dictionary is not fully available for research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\"The ferry is a model in 1:4\", Effect, science, produce, figure, test, represent, decorate, working model, tailor's dummy, globe, mock-up, model building, airplane model, scale model, ship model, model railway, car model, ship model, train model, church ship", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\"Watson and Crick presented their model of the DNA molecule as a double spiral that can be visualized as a twisted ladder\", visualization, science, atom model, explanation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Selected according to the given experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://ordnet.dk/korpusdk", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is partly funded by the H2020 INFRAIA project ELEXIS and by the Swedish Research Council project 2014-39 that funds the Centre for Linguistic Theory and Studies in Probability (CLASP).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "8." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word Sense Disambiguation: Algorithms and Applications. Text Speech and Language Technology", "authors": [ { "first": "E", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "P", "middle": [], "last": "Edmonds", "suffix": "" } ], "year": 2006, "venue": "", "volume": "33", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agirre, E. and Edmonds, P. (2006). Word Sense Disambiguation: Algorithms and Applications. Text Speech and Language Technology, 33(33):384.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet", "authors": [ { "first": "S", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "T", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics and Intelligent Text Processing: Third International Conference, CICLing", "volume": "", "issue": "", "pages": "136--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Banerjee, S. and Pedersen, T. (2002). An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet. 
In Computational Linguistics and Intelligent Text Processing: Third International Conference, CICLing, pages 136-145, Mexico City, Mexico.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Breaking sticks and ambiguities with adaptive skip-gram", "authors": [ { "first": "S", "middle": [], "last": "Bartunov", "suffix": "" }, { "first": "D", "middle": [], "last": "Kondrashkin", "suffix": "" }, { "first": "A", "middle": [], "last": "Osokin", "suffix": "" }, { "first": "D", "middle": [], "last": "Vetrov", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bartunov, S., Kondrashkin, D., Osokin, A., and Vetrov, D. (2016). Breaking sticks and ambiguities with adaptive skip-gram. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), Cadiz, Spain.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised Most Frequent Sense Detection using Word Embeddings", "authors": [ { "first": "S", "middle": [], "last": "Bhingardive", "suffix": "" }, { "first": "D", "middle": [], "last": "Singh", "suffix": "" }, { "first": "R", "middle": [], "last": "Murthy", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1238--1243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhingardive, S., Singh, D., and Murthy, R. (2015). Unsupervised Most Frequent Sense Detection using Word Embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1238-1243.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural Language ToolKit (NLTK)", "authors": [ { "first": "S", "middle": [], "last": "Bird", "suffix": "" }, { "first": "E", "middle": [], "last": "Loper", "suffix": "" }, { "first": "E", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bird, S., Loper, E., and Klein, E. (2009). Natural Language ToolKit (NLTK) Book. O'Reilly Media Inc.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "NASARI: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities", "authors": [ { "first": "J", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "M", "middle": [ "T" ], "last": "Pilehvar", "suffix": "" }, { "first": "R", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2016, "venue": "Artificial Intelligence", "volume": "240", "issue": "", "pages": "36--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Camacho-Collados, J., Pilehvar, M. T., and Navigli, R. (2016). NASARI: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities. 
Artificial Intelligence, 240:36-64.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Unified Model for Word Sense Representation and Disambiguation", "authors": [ { "first": "X", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Z", "middle": [], "last": "Liu", "suffix": "" }, { "first": "M", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1025--1035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, X., Liu, Z., and Sun, M. (2014). A Unified Model for Word Sense Representation and Disambiguation. In Proceedings of EMNLP, pages 1025-1035, Doha, Qatar.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Introduction to the special issue on evaluating word sense disambiguation systems", "authors": [ { "first": "P", "middle": [], "last": "Edmonds", "suffix": "" }, { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2002, "venue": "Natural Language Engineering", "volume": "8", "issue": "", "pages": "279--291", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edmonds, P. and Kilgarriff, A. (2002). Introduction to the special issue on evaluating word sense disambiguation systems. Natural Language Engineering, 8:279-291.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Linked Disambiguated Distributional Semantic Networks", "authors": [ { "first": "S", "middle": [], "last": "Faralli", "suffix": "" }, { "first": "A", "middle": [], "last": "Panchenko", "suffix": "" }, { "first": "C", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "S", "middle": [ "P" ], "last": "Ponzetto", "suffix": "" } ], "year": 2016, "venue": "The 15th International Semantic Web Conference (ISWC)", "volume": "", "issue": "", "pages": "56--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Faralli, S., Panchenko, A., Biemann, C., and Ponzetto, S. P. (2016). Linked Disambiguated Distributional Semantic Networks. In The 15th International Semantic Web Conference (ISWC), pages 56-64, Kobe, Japan.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Den Danske Ordbog. Gyldendal", "authors": [ { "first": "E", "middle": [], "last": "Hjorth", "suffix": "" }, { "first": "K", "middle": [], "last": "Kristensen", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hjorth, E. and Kristensen, K. (2005). Den Danske Ordbog. Gyldendal, Copenhagen, 1 edition.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Embedding a Semantic Network in a Word Space", "authors": [ { "first": "R", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "L", "middle": [], "last": "Nieto-Pi\u00f1a", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johansson, R. and Nieto-Pi\u00f1a, L. (2015). Embedding a Semantic Network in a Word Space. NAACL 2015.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "SciPy: Open source scientific tools for Python", "authors": [ { "first": "E", "middle": [], "last": "Jones", "suffix": "" }, { "first": "T", "middle": [], "last": "Oliphant", "suffix": "" }, { "first": "P", "middle": [], "last": "Peterson", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jones, E., Oliphant, T., Peterson, P., and others. (2001). 
SciPy: Open source scientific tools for Python.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Introduction to the special issue on SENSEVAL. Computers and the Humanities", "authors": [ { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kilgarriff, A. and Palmer, M. (2000). Introduction to the special issue on SENSEVAL. Computers and the Humanities.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "I don't believe in word senses", "authors": [ { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 1997, "venue": "Computers and the Humanities", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kilgarriff, A. (1997). \"I don't believe in word senses\". Computers and the Humanities.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Least squares quantization in PCM", "authors": [ { "first": "S", "middle": [], "last": "Lloyd", "suffix": "" } ], "year": 1982, "venue": "IEEE Transactions on Information Theory", "volume": "28", "issue": "2", "pages": "129--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lloyd, S. (1982). Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, March.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Embedding Words and Senses Together via Joint Knowledge-Enhanced Training", "authors": [ { "first": "M", "middle": [], "last": "Mancini", "suffix": "" }, { "first": "J", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "I", "middle": [], "last": "Iacobacci", "suffix": "" }, { "first": "R", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "100--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mancini, M., Camacho-Collados, J., Iacobacci, I., and Navigli, R. (2017). Embedding Words and Senses Together via Joint Knowledge-Enhanced Training. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 100-111, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Supersense tagging for Danish", "authors": [ { "first": "H", "middle": [], "last": "Martinez Alonso", "suffix": "" }, { "first": "A", "middle": [], "last": "Johannsen", "suffix": "" }, { "first": "S", "middle": [], "last": "Olsen", "suffix": "" }, { "first": "S", "middle": [], "last": "Nimb", "suffix": "" }, { "first": "N", "middle": [], "last": "S\u00f8rensen", "suffix": "" }, { "first": "A", "middle": [], "last": "Braasch", "suffix": "" }, { "first": "A", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "B", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 20th Nordic Conference of Computational Linguistics NODALIDA 2015", "volume": "109", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martinez Alonso, H., Johannsen, A., Olsen, S., Nimb, S., S\u00f8rensen, N., Braasch, A., S\u00f8gaard, A., and Pedersen, B. (2015). Supersense tagging for Danish. In Proceedings of the 20th Nordic Conference of Computational Linguistics NODALIDA 2015, volume 109. 
Link\u00f6ping University Electronic Press.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The Senseval-3 English lexical sample task", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "T", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2004, "venue": "Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihalcea, R., Chklovski, T., and Kilgarriff, A. (2004). The Senseval-3 English lexical sample task. Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Efficient Estimation of Word Representations in Vector Space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. pages 1-12.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction", "authors": [ { "first": "R", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Di", "middle": [], "last": "Marco", "suffix": "" }, { "first": "A", "middle": [], "last": "", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "3", "pages": "709--754", "other_ids": {}, "num": null, "urls": [], "raw_text": "Navigli, R. and Di Marco, A. (2013). Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction. Computational Linguistics, 39(3):709-754.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Efficient non-parametric estimation of multiple embeddings per word in vector space", "authors": [ { "first": "A", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "J", "middle": [], "last": "Shankar", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1059--1069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neelakantan, A., Shankar, J., Passos, A., and McCallum, A. (2014). Efficient non-parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1059-1069, Doha, Qatar. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automatically Linking Lexical Resources with Word Sense Embedding Models", "authors": [ { "first": "L", "middle": [], "last": "Nieto-Pi\u00f1a", "suffix": "" }, { "first": "R", "middle": [], "last": "Johansson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of SemDeep-3, the 3rd Workshop on Semantic Deep Learning", "volume": "", "issue": "", "pages": "23--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nieto-Pi\u00f1a, L. and Johansson, R. (2018). Automatically Linking Lexical Resources with Word Sense Embedding Models. In Proceedings of SemDeep-3, the 3rd Workshop on Semantic Deep Learning, pages 23-29, Santa Fe, New Mexico, USA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Different sense granularities for different applications", "authors": [ { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "O", "middle": [], "last": "Babko-Malaya", "suffix": "" }, { "first": "H", "middle": [], "last": "Dang", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2nd Workshop on Scalable Natural Language Understanding Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Palmer, M., Babko-Malaya, O., and Dang, H. (2004). Different sense granularities for different applications. In Proceedings of the 2nd Workshop on Scalable Natural Language Understanding Systems, Boston, MA. HLT/NAACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "RUSSE'2018: A shared task on word sense induction for the Russian language", "authors": [ { "first": "A", "middle": [], "last": "Panchenko", "suffix": "" }, { "first": "A", "middle": [], "last": "Lopukhina", "suffix": "" }, { "first": "D", "middle": [], "last": "Ustalov", "suffix": "" }, { "first": "K", "middle": [], "last": "Lopukhin", "suffix": "" }, { "first": "N", "middle": [], "last": "Arefyev", "suffix": "" }, { "first": "A", "middle": [], "last": "Leontyev", "suffix": "" }, { "first": "N", "middle": [], "last": "Loukachevitch", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Panchenko, A., Lopukhina, A., Ustalov, D., Lopukhin, K., Arefyev, N., Leontyev, A., and Loukachevitch, N. (2018). RUSSE'2018: A shared task on word sense induction for the Russian language.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "DanNet: The challenge of compiling a wordnet for Danish by reusing a monolingual dictionary", "authors": [ { "first": "B", "middle": [ "S" ], "last": "Pedersen", "suffix": "" }, { "first": "S", "middle": [], "last": "Nimb", "suffix": "" }, { "first": "J", "middle": [], "last": "Asmussen", "suffix": "" }, { "first": "N", "middle": [ "H" ], "last": "S\u00f8rensen", "suffix": "" }, { "first": "L", "middle": [], "last": "Trap-Jensen", "suffix": "" }, { "first": "H", "middle": [], "last": "Lorentzen", "suffix": "" } ], "year": 2009, "venue": "Language Resources and Evaluation", "volume": "43", "issue": "3", "pages": "269--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedersen, B. S., Nimb, S., Asmussen, J., S\u00f8rensen, N. H., Trap-Jensen, L., and Lorentzen, H. (2009). DanNet: The challenge of compiling a wordnet for Danish by reusing a monolingual dictionary.
Language Resources and Evaluation, 43(3):269-299.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The SemDaX corpus - sense annotations with scalable sense inventories", "authors": [ { "first": "B", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "A", "middle": [], "last": "Braasch", "suffix": "" }, { "first": "A", "middle": [], "last": "Johannsen", "suffix": "" }, { "first": "H", "middle": [], "last": "Martinez Alonso", "suffix": "" }, { "first": "S", "middle": [], "last": "Nimb", "suffix": "" }, { "first": "S", "middle": [], "last": "Olsen", "suffix": "" }, { "first": "A", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "N", "middle": [], "last": "S\u00f8rensen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th edition of the Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "842--847", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedersen, B., Braasch, A., Johannsen, A., Martinez Alonso, H., Nimb, S., Olsen, S., S\u00f8gaard, A., and S\u00f8rensen, N. (2016). The SemDaX corpus - sense annotations with scalable sense inventories. In Proceedings of the 10th edition of the Language Resources and Evaluation Conference, pages 842-847. European Language Resources Association.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Towards a principled approach to sense clustering - a case study of wordnet and dictionary senses in Danish", "authors": [ { "first": "B", "middle": [ "S" ], "last": "Pedersen", "suffix": "" }, { "first": "M", "middle": [], "last": "Aguirrezabal Zabaleta", "suffix": "" }, { "first": "S", "middle": [], "last": "Nimb", "suffix": "" }, { "first": "S", "middle": [], "last": "Olsen", "suffix": "" }, { "first": "I", "middle": [], "last": "R\u00f8rmann", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Global WordNet Conference 2018", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedersen, B. S., Aguirrezabal Zabaleta, M., Nimb, S., Olsen, S., and R\u00f8rmann, I. (2018a). Towards a principled approach to sense clustering - a case study of wordnet and dictionary senses in Danish. In Proceedings of Global WordNet Conference 2018, pages 1-8, Singapore. Global WordNet Association.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A Danish FrameNet lexicon and an annotated corpus used for training and evaluating a semantic frame classifier", "authors": [ { "first": "B", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "S", "middle": [], "last": "Nimb", "suffix": "" }, { "first": "A", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "M", "middle": [], "last": "Hartmann", "suffix": "" }, { "first": "S", "middle": [], "last": "Olsen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th edition of the Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedersen, B., Nimb, S., S\u00f8gaard, A., Hartmann, M., and Olsen, S. (2018b). A Danish FrameNet lexicon and an annotated corpus used for training and evaluating a semantic frame classifier. In Proceedings of the 11th edition of the Language Resources and Evaluation Conference, Miyazaki, Japan.
European Language Resources Association.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedregosa, F. and Varoquaux, G. (2011). Scikit-learn: Machine learning in Python.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Making Sense of Word Embeddings", "authors": [ { "first": "M", "middle": [], "last": "Pelevina", "suffix": "" }, { "first": "N", "middle": [], "last": "Arefyev", "suffix": "" }, { "first": "C", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "A", "middle": [], "last": "Panchenko", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pelevina, M., Arefyev, N., Biemann, C., and Panchenko, A. (2017). Making Sense of Word Embeddings.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "De-conflated semantic representations", "authors": [ { "first": "M", "middle": [ "T" ], "last": "Pilehvar", "suffix": "" }, { "first": "N", "middle": [], "last": "Collier", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pilehvar, M. T. and Collier, N. (2016). De-conflated semantic representations. In Proceedings of EMNLP, Austin, Texas.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "SemEval-2007 task 17: English lexical sample, SRL and all words", "authors": [ { "first": "S", "middle": [ "S" ], "last": "Pradhan", "suffix": "" }, { "first": "E", "middle": [], "last": "Loper", "suffix": "" }, { "first": "D", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on Semantic Evaluations, SemEval '07", "volume": "", "issue": "", "pages": "87--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pradhan, S. S., Loper, E., Dligach, D., and Palmer, M. (2007). SemEval-2007 task 17: English lexical sample, SRL and all words. In Proceedings of the 4th International Workshop on Semantic Evaluations, SemEval '07, pages 87-92, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "R", "middle": [], "last": "\u0158eh\u016f\u0159ek", "suffix": "" }, { "first": "P", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u0158eh\u016f\u0159ek, R. and Sojka, P. (2010). Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta.
ELRA.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Autoextend: Combining Word Embeddings with Semantic Resources", "authors": [ { "first": "S", "middle": [], "last": "Rothe", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "", "pages": "593--617", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rothe, S. and Sch\u00fctze, H. (2017). Autoextend: Combining Word Embeddings with Semantic Resources. Computa- tional Linguistics, 43:3:593-617.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "WordNet Embeddings", "authors": [ { "first": "C", "middle": [], "last": "Saedi", "suffix": "" }, { "first": "A", "middle": [], "last": "Branco", "suffix": "" }, { "first": "J", "middle": [], "last": "Ant\u00f3nio Rodrigues", "suffix": "" }, { "first": "J", "middle": [], "last": "Silva", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 3rd Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "122--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saedi, C., Branco, A., Ant\u00f3nio Rodrigues, J., and Silva, J. (2018). WordNet Embeddings. In Proceedings of the 3rd Workshop on Representation Learning for NLP, pages 122-131, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Word2Dict -Lemma Selection and Dictionary Editing Assisted by Word Embeddings", "authors": [ { "first": "N", "middle": [ "H" ], "last": "S\u00f8rensen", "suffix": "" }, { "first": "S", "middle": [], "last": "Nimb", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 18th EURALEX International Congres: Lexocography in Global Contexts", "volume": "", "issue": "", "pages": "819--827", "other_ids": {}, "num": null, "urls": [], "raw_text": "S\u00f8rensen, N. H. and Nimb, S. (2018). Word2Dict -Lemma Selection and Dictionary Editing Assisted by Word Em- beddings. Proceedings of the 18th EURALEX Interna- tional Congres: Lexocography in Global Contexts, pages 819-827.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages", "authors": [ { "first": "D", "middle": [], "last": "Ustalov", "suffix": "" }, { "first": "D", "middle": [], "last": "Teslenko", "suffix": "" }, { "first": "A", "middle": [], "last": "Panchenko", "suffix": "" }, { "first": "M", "middle": [], "last": "Chernoskutov", "suffix": "" }, { "first": "C", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "S", "middle": [ "P" ], "last": "Ponzetto", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ustalov, D., Teslenko, D., Panchenko, A., Chernoskutov, M., Biemann, C., and Ponzetto, S. P. (2018). An Unsu- pervised Word Sense Disambiguation System for Under- Resourced Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan, 4. van Rossum, G. (1995). Python tutorial. Technical Re- port CS-R9526, Centrum voor Wiskunde en Informatica (CWI), Amsterdam, 5.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "The method used to build the synset embeddings." 
}, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "A synset of the targetword model (as in a model in industrial production). Semantic relations on the right." }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "A 2D plot of resulting clusters of the Korpus DK example sentences for stand and skade, which have 4 and 6 synsets, and therefore 4 and 6 clusters, respectively. The black crosses are the seeds. Dimensionality reduction with PCA." }, "TABREF2": { "html": null, "content": "", "text": "WSD results", "type_str": "table", "num": null } } } }