{ "paper_id": "S15-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:37:55.073306Z" }, "title": "Non-Orthogonal Explicit Semantic Analysis", "authors": [ { "first": "Nitish", "middle": [], "last": "Aggarwal", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Ireland", "location": { "settlement": "Galway", "country": "Ireland" } }, "email": "" }, { "first": "Kartik", "middle": [], "last": "Asooja", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Ireland", "location": { "settlement": "Galway", "country": "Ireland" } }, "email": "" }, { "first": "Georgeta", "middle": [], "last": "Bordea", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Ireland", "location": { "settlement": "Galway", "country": "Ireland" } }, "email": "" }, { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Ireland", "location": { "settlement": "Galway", "country": "Ireland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Explicit Semantic Analysis (ESA) utilizes the Wikipedia knowledge base to represent the semantics of a word by a vector where every dimension refers to an explicitly defined concept like a Wikipedia article. ESA inherently assumes that Wikipedia concepts are orthogonal to each other, therefore, it considers that two words are related only if they co-occur in the same articles. However, two words can be related to each other even if they appear separately in related articles rather than cooccurring in the same articles. This leads to a need for extending the ESA model to consider the relatedness between the explicit concepts (i.e. Wikipedia articles in Wikipedia based implementation) for computing textual relatedness. In this paper, we present Non-Orthogonal ESA (NESA) which represents more fine grained semantics of a word as a vector of explicit concept dimensions, where every such concept dimension further constitutes a semantic vector built in another vector space. Thus, NESA considers the concept correlations in computing the relatedness between two words. We explore different approaches to compute the concept correlation weights, and compare these approaches with other existing methods. Furthermore, we evaluate our model NESA on several word relatedness benchmarks showing that it outperforms the state of the art methods.", "pdf_parse": { "paper_id": "S15-1010", "_pdf_hash": "", "abstract": [ { "text": "Explicit Semantic Analysis (ESA) utilizes the Wikipedia knowledge base to represent the semantics of a word by a vector where every dimension refers to an explicitly defined concept like a Wikipedia article. ESA inherently assumes that Wikipedia concepts are orthogonal to each other, therefore, it considers that two words are related only if they co-occur in the same articles. However, two words can be related to each other even if they appear separately in related articles rather than cooccurring in the same articles. This leads to a need for extending the ESA model to consider the relatedness between the explicit concepts (i.e. Wikipedia articles in Wikipedia based implementation) for computing textual relatedness. 
In this paper, we present Non-Orthogonal ESA (NESA), which represents finer-grained semantics of a word as a vector of explicit concept dimensions, where every such concept dimension further constitutes a semantic vector built in another vector space. Thus, NESA considers the concept correlations in computing the relatedness between two words. We explore different approaches to compute the concept correlation weights, and compare these approaches with other existing methods. Furthermore, we evaluate our model NESA on several word relatedness benchmarks, showing that it outperforms state-of-the-art methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The significance of quantifying relatedness between two natural language texts has been shown in various tasks in information retrieval (IR), natural language processing (NLP), and other related fields. The semantics of a word can be obtained from existing lexical resources like WordNet and FrameNet. However, such lexical resources require domain expertise for defining the hierarchical structure, which makes their creation very expensive. Therefore, distributional semantic models (DSMs) have received much attention, as they utilize available document collections like Wikipedia and do not depend upon human expertise (Harris, 1954). DSMs represent the semantics of a word by transforming it to a high-dimensional distributional vector in a predefined concept space. Many models have been proposed that derive this concept space by using explicit concepts or implicit concepts. Explicit Semantic Analysis (ESA) (Gabrilovich and Markovitch, 2007) utilizes concepts which are explicitly derived under human cognition, like Wikipedia concepts (articles). In contrast, Latent Semantic Analysis (LSA) derives a latent concept space by performing dimensionality reduction (Landauer et al., 1998). Gabrilovich and Markovitch (2007) introduced the ESA model, in which Wikipedia and the Open Directory Project were used to obtain the explicit concepts; however, Wikipedia has been a popular choice in further ESA implementations (Polajnar et al., 2013; Gottron et al., 2011). ESA represents the semantics of a word with a high-dimensional vector over the Wikipedia concepts. The tf-idf weight of the word with respect to the textual content under a Wikipedia concept gives the magnitude of the corresponding vector dimension. To obtain the semantic relatedness between two words, it computes the dot product between their vectors. ESA considers the dimensions as orthogonal to each other. For instance, synonyms like \"soccer\" and \"football\" are highly related; however, they may not co-occur in many Wikipedia articles. Table 1 shows that the top 5 Wikipedia concepts retrieved for \"football\" and \"soccer\" do not share any concept, although the concepts may exhibit relatedness to each other. Consequently, the ESA model assumes that words can be related only if they co-occur in the same articles. However, two words can also be related even if they do not share the same articles at all, but appear in related ones. LSA resolves the orthogonality issue to some extent by building a latent concept space in an unsupervised way (Landauer et al., 1998). However, the resulting latent concepts are not as clearly interpretable as the human-labeled concepts in the ESA model. 
Previous studies (Gabrilovich and Markovitch, 2007; Cimiano et al., 2009; Hassan and Mihalcea, 2011) show that ESA performs better than LSA for computing text relatedness. Therefore, it is important to consider the relatedness between dimensions in the ESA model, rather than treating them as orthogonal, while at the same time preserving the explicit property of the ESA model.", "cite_spans": [ { "start": 631, "end": 645, "text": "(Harris, 1954)", "ref_id": "BIBREF11" }, { "start": 925, "end": 959, "text": "(Gabrilovich and Markovitch, 2007)", "ref_id": "BIBREF8" }, { "start": 1179, "end": 1202, "text": "(Landauer et al., 1998)", "ref_id": "BIBREF17" }, { "start": 1205, "end": 1238, "text": "Gabrilovich and Markovitch (2007)", "ref_id": "BIBREF8" }, { "start": 1425, "end": 1448, "text": "(Polajnar et al., 2013;", "ref_id": "BIBREF21" }, { "start": 1449, "end": 1470, "text": "Gottron et al., 2011;", "ref_id": "BIBREF10" }, { "start": 2578, "end": 2601, "text": "(Landauer et al., 1998)", "ref_id": "BIBREF17" }, { "start": 2741, "end": 2775, "text": "(Gabrilovich and Markovitch, 2007;", "ref_id": "BIBREF8" }, { "start": 2776, "end": 2797, "text": "Cimiano et al., 2009;", "ref_id": "BIBREF7" }, { "start": 2798, "end": 2824, "text": "Hassan and Mihalcea, 2011)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 2072, "end": 2079, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present the Non-Orthogonal ESA (NESA) model, an extension to ESA, which also uses relatedness between the explicit concepts for computing semantic relatedness between texts. The concepts in the ESA model are clearly interpretable, as they refer to the titles of Wikipedia articles. This characteristic provides an opportunity to investigate different concept relatedness measures, such as relatedness between articles' content (document relatedness) or relatedness between corresponding Wikipedia titles. In order to investigate the performance of these concept relatedness measures, we evaluate them on an entity relatedness benchmark called KORE (Hoffart et al., 2012), as a Wikipedia article title generally refers to an entity.", "cite_spans": [ { "start": 657, "end": 679, "text": "(Hoffart et al., 2012)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We then apply the different approaches for computing concept relatedness in our model NESA to compute text relatedness. We evaluate NESA on several word relatedness benchmarks to verify whether considering non-orthogonality in the ESA model improves its performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, there have been a variety of efforts to develop semantic relatedness measures. Classical approaches assess the relatedness scores by using existing knowledge bases or corpus statistics. Lexical resources such as WordNet or Roget's thesaurus (Jarmasz and Szpakowicz, 2004) are used as knowledge bases to compute the relatedness scores between two words. Most of these approaches make use of the hierarchical structure present in the lexical resources. 
For instance, Hirst and St-Onge (1998), Leacock and Chodorow (1998), and Wu and Palmer (1994) utilize the edges that define taxonomic relations between words; Banerjee and Pedersen (2002) compute the scores by obtaining the overlap between glosses associated with the words; and some of the other approaches (Resnik, 1995; Lin, 1998) use corpus evidence with the taxonomic structure of WordNet. These approaches are limited to lexical entries and thus do not work with non-dictionary words. Moreover, these measures rely on manually constructed lexical resources, and they are not portable across languages due to the unavailability of such resources in most languages.", "cite_spans": [ { "start": 256, "end": 286, "text": "(Jarmasz and Szpakowicz, 2004)", "ref_id": "BIBREF15" }, { "start": 506, "end": 533, "text": "Leacock and Chodorow (1998)", "ref_id": "BIBREF18" }, { "start": 540, "end": 560, "text": "Wu and Palmer (1994)", "ref_id": "BIBREF27" }, { "start": 626, "end": 654, "text": "Banerjee and Pedersen (2002)", "ref_id": "BIBREF4" }, { "start": 776, "end": 790, "text": "(Resnik, 1995;", "ref_id": "BIBREF23" }, { "start": 791, "end": 801, "text": "Lin, 1998)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Text Relatedness", "sec_num": "2.1" }, { "text": "Corpus-based methods such as LSA (Landauer et al., 1998), Latent Dirichlet Allocation (LDA) (Blei et al., 2003), and ESA (Gabrilovich and Markovitch, 2007) employ statistical models to build the semantic profile of a word. LSA and LDA generate unsupervised topics from a textual corpus, and represent the semantics of a word by its distribution over these topics. LSA performs singular value decomposition (SVD) to obtain a latent concept space. On the contrary, ESA directly uses supervised topics such as Wikipedia concepts that are built manually, and considers every concept to be orthogonal to every other. Polajnar et al. (2013) proposed an approach to improve ESA by considering the concept relatedness using word overlap in Wikipedia articles' content. Radinsky et al. (2011) introduced Temporal Semantic Analysis (TSA), which also considers the concept relatedness in the ESA model, computed by using the temporal distribution of concepts over the New York Times news archives from the last 100 years. Although these approaches consider relatedness between explicit concepts (Polajnar et al., 2013; Radinsky et al., 2011) and improve the accuracy, they either define a weak concept relatedness measure or require external corpus statistics. Our approach takes inspiration from them and uses more advanced concept relatedness measures that rely on the same corpus statistics used to build the ESA model.", "cite_spans": [ { "start": 29, "end": 56, "text": "LSA (Landauer et al., 1998)", "ref_id": null }, { "start": 93, "end": 112, "text": "(Blei et al., 2003)", "ref_id": "BIBREF5" }, { "start": 123, "end": 157, "text": "(Gabrilovich and Markovitch, 2007)", "ref_id": "BIBREF8" }, { "start": 615, "end": 637, "text": "Polajnar et al. 
(2013)", "ref_id": null }, { "start": 1072, "end": 1095, "text": "(Polajnar et al., 2013;", "ref_id": "BIBREF21" }, { "start": 1096, "end": 1118, "text": "Radinsky et al., 2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Text Relatedness", "sec_num": "2.1" }, { "text": "As NESA model requires a concept relatedness measure to overcome orthogonality, we address here the existing methods of computing it (Strube and Ponzetto, 2006; Witten and Milne, 2008; Polajnar et al., 2013) . Most of these approaches rely on Wikipedia and its derived knowledge bases such as DBpedia 1 , YAGO 2 and FreeBase 3 . These knowledge bases provide immense amount of information about millions of concepts or entities which can be utilized for computing concept relatedness. Strube and Ponzetto (2006) proposed WikiRelate that counts the edges between two concepts in Wikipedia link structure, and also considers the depth of a concept in the Wikipedia category structure. Witten and Milne (2008) applied Google distance metric (Cilibrasi and Vitanyi, 2007) on incoming links in Wikipedia. Hoffart at el. 2012utilized the textual content associated with the Wikipedia concepts. It observes the partial overlap between the concepts (key-phrases) appearing in the article content. The above mentioned approaches mainly exploit the article content or Wikipedia link structure for computing concept relatedness. In this paper, we also utilize the distributional information of the title and hyperlinks for computing concept relatedness.", "cite_spans": [ { "start": 133, "end": 160, "text": "(Strube and Ponzetto, 2006;", "ref_id": "BIBREF25" }, { "start": 161, "end": 184, "text": "Witten and Milne, 2008;", "ref_id": "BIBREF26" }, { "start": 185, "end": 207, "text": "Polajnar et al., 2013)", "ref_id": "BIBREF21" }, { "start": 485, "end": 511, "text": "Strube and Ponzetto (2006)", "ref_id": "BIBREF25" }, { "start": 683, "end": 706, "text": "Witten and Milne (2008)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Concept Relatedness", "sec_num": "2.2" }, { "text": "To compute text relatedness, NESA uses relatedness between the dimensions of the distributional vectors to overcome the orthogonality in ESA model. In addition to represent the words as distributional vectors, where each dimension is associated with a Wikipedia concept as in ESA model, NESA also utilizes a square matrix C n,n (n is the total number of dimensions) containing the correlation weights between the dimensions. Thus, to obtain the relatedness score between the words w1 and w2, NESA formulates the measure as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Orthogonal Explicit Semantic Analysis", "sec_num": "3" }, { "text": "rel N ESA (w1, w2) = w1 T 1,n .C n,n .w2 n,1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Orthogonal Explicit Semantic Analysis", "sec_num": "3" }, { "text": "(1) where w1 n,1 and w2 n,1 are the corresponding distributional vectors consisting of n dimensions. Every concept dimension can be further semantically interpreted as a distributional vector in some other vector space of m dimensions. This transformation allows the computation of the correlation weights between the concept dimensions. Thus, a transformation matrix E m,n can be built, where each column corresponds to a transformation vector for each concept dimension. Using the matrix E m,n , we can compute the matrix C n,n by multiplying E m,n with its transpose as in equation 2. 
In the next section, we discuss the different approaches used for computing C_{n,n}, which contains the relatedness between the concept dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Orthogonal Explicit Semantic Analysis", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C_{n,n} = E^{T}_{n,m} . E_{m,n}", "eq_num": "(2)" } ], "section": "Non-Orthogonal Explicit Semantic Analysis", "sec_num": "3" }, { "text": "4 Computing Concept Relatedness", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Orthogonal Explicit Semantic Analysis", "sec_num": "3" }, { "text": "NESA requires the relatedness scores between Wikipedia concepts (articles); therefore, we present the different approaches for computing the C_{n,n} matrix using E_{m,n}. Every Wikipedia article consists of different fields that represent the semantics of the concept dimensions, such as the Wikipedia title, textual description, and hyperlinks. We utilize this information to implement four different concept relatedness measures: VSM-Text, VSM-Hyperlink, ESA-WikiTitle, and DiSER. These approaches represent the semantics of a concept with a distributional vector of m dimensions. All such vectors, combined as column vectors for the n concept dimensions, form the matrix E_{m,n}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Orthogonal Explicit Semantic Analysis", "sec_num": "3" }, { "text": "This approach is based on the plain Vector Space Model (VSM) for text. It calculates the relatedness scores between concepts by taking the word overlap between their corresponding Wikipedia article content. The concept is transformed to a column vector of m×1, where m is the total number of unique words in the Wikipedia corpus. The magnitude of each dimension is calculated on the basis of the number of occurrences of the different words in the associated Wikipedia article content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VSM-Text", "sec_num": "4.1" }, { "text": "Similar to VSM-Text, this approach calculates the concept relatedness by taking the overlap between the hyperlinks present in the corresponding Wikipedia articles' content. The concept is transformed to a column vector of m×1, where m is the total number of hyperlinks in the whole of Wikipedia. The magnitude of each dimension is calculated on the basis of the number of occurrences of the different hyperlinks in the associated Wikipedia article content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VSM-Hyperlink", "sec_num": "4.2" }, { "text": "One intuitive way of obtaining concept relatedness scores is by using ESA itself for calculating the relatedness between the concepts. We use the associated Wikipedia article title for this purpose. ESA represents the semantics of a word with a high-dimensional vector over the Wikipedia concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ESA-WikiTitle", "sec_num": "4.3" }, { "text": "Therefore, each concept dimension is transformed into a column vector of m×1, where m is the total number of Wikipedia concepts. 
The magnitude of each dimension is computed by using the term frequency (tf) and inverse document frequency (idf) for the terms appearing in the Wikipedia article title over the Wikipedia corpus (Gabrilovich and Markovitch, 2007).", "cite_spans": [ { "start": 324, "end": 358, "text": "(Gabrilovich and Markovitch, 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "ESA-WikiTitle", "sec_num": "4.3" }, { "text": "Distributional Semantics for Entity Relatedness (DiSER) (Aggarwal and Buitelaar, 2014) is a model for computing relatedness scores between entities. DiSER considers every Wikipedia concept as an entity. Therefore, it can be used for computing the concept relatedness matrix C_{n,n}, as required by the NESA model. In contrast to text relatedness measures based on DSMs such as ESA, which do not distinguish between an entity and text, DiSER differentiates between an entity and its surface forms by using the unique hyperlinks referring to entities in Wikipedia for encoding entities while building DSMs. It uses only the distributional information of such hyperlinks over the whole Wikipedia corpus for representing a concept by a high-dimensional distributional vector. Therefore, each concept dimension is transformed into a column vector of m×1, where m is the total number of Wikipedia concepts. The magnitude of each dimension is computed by using the concept frequency (cf) and inverse document frequency (idf) for a concept in the Wikipedia corpus. The concept frequency (cf) is a slight variation of term frequency: it computes the frequency of a concept appearing as a hyperlink in the Wikipedia articles. To obtain the DiSER-based relatedness scores between Wikipedia concepts, we use the Entity Relatedness Graph (EnRG) 4 (Aggarwal et al., 2015), which is a focused related-entities explorer based on DiSER scores.", "cite_spans": [ { "start": 1313, "end": 1336, "text": "(Aggarwal et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "DiSER", "sec_num": "4.4" }, { "text": "In this section, we evaluate the different approaches for computing concept relatedness measures defined in the previous section. For our evaluation, we use a snapshot of English Wikipedia from 1st October 2013. This snapshot consists of 13,872,614 articles, of which 5,934,022 are Wikipedia redirects. We filtered out all the namespace pages 5 by using the articles' titles, as they have specific namespace patterns. There are 3,571,206 namespace pages in this snapshot. We remove all those articles which contain fewer than 100 unique words or fewer than 5 hyperlinks; such articles are too specific and may generate some noise. We perform further filtering by removing all the articles whose titles are numbers like \"19\" or dates like \"June 1\", or whose title starts with \"list\". We finally obtain a total of 3,635,833 Wikipedia articles for our experiment. We implement all the concept relatedness measures by using these obtained Wikipedia articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Concept Relatedness Measures", "sec_num": "5" }, { "text": "VSM-Text represents the semantics of a concept with a column vector of m×1, where m is the total number of unique words that appear in Wikipedia. Wikipedia contains more than 2.5 billion unique words; therefore, to reduce the matrix size, we use only the 5 million most frequent words. ESA-WikiTitle represents the semantics of a concept with a column vector of m×1, where m is 3,635,833 in our implementation. 
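For concreteness, a small sketch of building such tf-idf concept vectors over a toy corpus follows (our illustration; scikit-learn is an assumed tooling choice, not prescribed by the paper, and the articles are hypothetical placeholders):

```python
# Sketch of ESA-style tf-idf vectors: a word's ESA vector is its tf-idf
# weight under every Wikipedia concept (article). Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer

articles = [  # hypothetical (title, content) pairs for filtered articles
    ("Football", "football is a family of team sports involving a ball"),
    ("FIFA", "fifa governs international association football competitions"),
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(text for _, text in articles)  # articles x terms

def esa_vector(word):
    """ESA vector of a word: one tf-idf weight per concept (article)."""
    col = vectorizer.vocabulary_.get(word.lower())
    return X[:, col].toarray().ravel() if col is not None else None

print(esa_vector("football"))
```

In this sketch each column of the articles-by-terms matrix is a word's ESA vector; stacking such vectors for the terms of an article title would give the ESA-WikiTitle representation of that concept. 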
In order to obtain the hyperlinks for VSM-Hyperlink and DiSER, we retain only those text segments which have manually defined links provided by Wikipedia volunteers. However, the volunteers may not create a link for every surface form appearing in the article content. For instance, \"Apple\" occurs 213 times in the \"Steve Jobs\" Wikipedia page in our corpus, but only 7 out of these 213 are linked to the \"Apple Inc.\" Wikipedia page. The term frequency of \"Apple\" is calculated without considering partial string matches; for example, we do not count \"apple\" if it appears as a substring of any annotated text segment like \"Apple Store\" or \"Apple Lisa\". To obtain the actual frequency of every hyperlink for computing the magnitude of the dimension, we apply the \"one sense per discourse\" heuristic (Gale et al., 1992), which assumes that a term tends to have the same meaning in the same discourse. We link every additional un-linked occurrence of a text segment with the hyperlink that appears most often for the same segment in the article. The total number of possible hyperlinks in our corpus would be equal to the total number of Wikipedia articles, i.e. 3,635,833.", "cite_spans": [ { "start": 1195, "end": 1214, "text": "(Gale et al., 1992)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Concept Relatedness Measures", "sec_num": "5" }, { "text": "In order to evaluate the concept relatedness measures, we performed our experiments on the gold standard benchmark dataset KORE (Hoffart et al., 2012). The KORE dataset consists of 21 seed Wikipedia concepts selected from the YAGO knowledge base 6 . Every seed concept has a ranked list of 20 related Wikipedia concepts. In order to build this dataset, 20 concept candidates are selected and ranked by human evaluators on crowdsourcing platforms, who give the relative comparison between two candidates against the corresponding seed Wikipedia concept. For instance, human evaluators provide their judgement on whether \"Mark Zuckerberg\" is more related to \"Facebook\" than \"Sean Parker\". With the answers for such binary questions, a ranked list is prepared for every seed Wikipedia concept. The KORE dataset 7 consists of 21 seed candidates, thus forming 420 concept pairs with their relatedness scores assigned by 15 human evaluators.", "cite_spans": [ { "start": 128, "end": 150, "text": "(Hoffart et al., 2012)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "We compare the concept relatedness measures described in section 4 against other existing methods. Hoffart et al. (2012) proposed KORE and KPCS, which use the article content to compute the concept relatedness. They use Mutual Information (MI-weight) to capture the importance of a hyperlink for a Wikipedia concept. To evaluate the concept relatedness measures using the KORE dataset, we compute the concept relatedness scores for all the concept pairs and rank the list of 20 candidates for each seed Wikipedia concept. We calculated the Spearman rank correlation between the gold standard dataset and the results obtained from VSM-Text, VSM-Hyperlink, ESA-WikiTitle and DiSER.", "cite_spans": [ { "start": 99, "end": 120, "text": "Hoffart et al. (2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5.2" }, { "text": "Experimental results are shown in Table 2. 
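The ranking evaluation just described can be sketched as follows (illustrative code; the gold ranking and the relatedness measure are hypothetical stand-ins, not the KORE data or any measure from the paper):

```python
# Sketch of the KORE-style evaluation: rank each seed's candidates by a
# relatedness measure and compare against the gold ranking with Spearman.
from scipy.stats import spearmanr

gold = {  # seed concept -> candidates ordered from most to least related
    "Apple Inc.": ["Steve Jobs", "Steve Wozniak", "NeXT"],
}

def measure(seed, candidate):  # placeholder relatedness measure
    return 1.0 / (1 + abs(len(seed) - len(candidate)))

correlations = []
for seed, candidates in gold.items():
    gold_ranks = list(range(len(candidates), 0, -1))  # higher = more related
    scores = [measure(seed, c) for c in candidates]
    rho, _ = spearmanr(gold_ranks, scores)
    correlations.append(rho)

print(sum(correlations) / len(correlations))  # averaged over all seeds
```
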
We compare our results with the other existing methods of computing concept relatedness: WLM, KORE, and KPCS. WLM is the Wikipedia link-based approach by Witten and Milne (2008). KPCS and KORE are the approaches proposed in (Hoffart et al., 2012), where KPCS is the cosine similarity on MI-weighted keyphrases while KORE represents the keyphrase overlap relatedness. These keyphrases are the text segments with hyperlinks in the article content. Therefore, KPCS is a similar approach to VSM-Hyperlink, except that KPCS assigns MI-weights to capture the generality and specificity of a concept in the Wikipedia article. Many concepts in the gold standard dataset are defined by ambiguous surface forms such as \"NeXT\" and \"Nice\", or they have ambiguous text segments in their surface forms, like \"Jobs\" in \"Steve Jobs\" and \"Guitar\" in the \"Guitar Hero\" video game. Therefore, the effect of using only hyperlinks can be observed in the remarkable difference between the results obtained by ESA and DiSER: DiSER improves the accuracy over ESA by 20%. These scores illustrate that ESA fails to generate appropriate semantic profiles for ambiguous terms. VSM-Text does not capture the semantics of Wikipedia concepts well, as the textual description in a Wikipedia article also contains generic terms which are not enough to specify the precise semantics of Wikipedia concepts. Therefore, VSM-Hyperlink achieved a noticeable improvement over VSM-Text, as VSM-Hyperlink builds the semantic profile by using the hyperlinks in the article content. These hyperlinks are created by Wikipedia volunteers; therefore, it can be assumed that the text segments which are linked to other Wikipedia articles are more important than un-linked ones. However, KPCS and KORE achieved significantly higher accuracy in comparison to VSM-Hyperlink, which indicates that the generality and specificity of hyperlinks in the article content are very influential features for concept relatedness measures.", "cite_spans": [ { "start": 198, "end": 221, "text": "Witten and Milne (2008)", "ref_id": "BIBREF26" }, { "start": 233, "end": 255, "text": "(Hoffart et al., 2012)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5.3" }, { "text": "In this section, we evaluate NESA for word relatedness. We experiment by using the different concept relatedness measures explained in section 4 for building C_{n,n} in the NESA model, as shown in equations 1 and 2. We use the same filtered Wikipedia articles as used for evaluating the concept relatedness measures in the previous section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of NESA for Word Relatedness", "sec_num": "6" }, { "text": "We use 6 different word relatedness benchmarks to evaluate NESA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "6.1" }, { "text": "WN353 consists of 353 word pairs annotated by 13-15 human experts on a scale of 0-10, where 0 refers to unrelated and 10 stands for highly related or identical. This dataset mainly contains generic words like \"money\", \"drink\", \"movie\", etc. It also contains named entities such as \"Jerusalem\", \"Palestinian\" and \"Israel\", which makes this dataset more challenging for approaches that use only the lexical resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "6.1" }, { "text": "The WN353Rel and WN353Sim datasets are subsets of WN353. 
As WN353 contains both similar and related word pairs, Agirre et al. (2009) refined the WN353 gold standard by splitting it into two parts: related word pairs and similar word pairs. The notions of similarity and relatedness are defined as follows: two words are similar if they are connected through taxonomic relations like synonym or hyponym in lexical resources, while two words can be considered related if they are connected through other relations such as meronym and holonym. For instance, \"football\" and \"soccer\" are two similar words, while \"computer\" and \"software\" can be considered related. Finally, WN353Rel and WN353Sim contain 252 and 203 word pairs respectively.", "cite_spans": [ { "start": 107, "end": 127, "text": "Agirre et al. (2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "6.1" }, { "text": "MC30 is the dataset built by Miller and Charles (1991) that contains selected word pairs of WN353. The relatedness scores of these words are provided by 38 human experts on a scale of 0-4. RG65 is a collection of 65 non-technical word pairs. These word pairs are annotated by 51 human experts (see (Rubenstein and Goodenough, 1965) for more detail).", "cite_spans": [ { "start": 29, "end": 54, "text": "Miller and Charles (1991)", "ref_id": "BIBREF20" }, { "start": 270, "end": 303, "text": "(Rubenstein and Goodenough, 1965)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "6.1" }, { "text": "MT287 is a relatively new dataset that contains 287 word pairs. This dataset was prepared mainly to study the effect of the temporal distribution (Radinsky et al., 2011) of a word over several years. The relatedness scores of the word pairs are obtained from 15-20 Mechanical Turk workers.", "cite_spans": [ { "start": 143, "end": 166, "text": "(Radinsky et al., 2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "6.1" }, { "text": "We compare the NESA model with other state-of-the-art methods of calculating word relatedness: Explicit Semantic Analysis (ESA), Salient Semantic Analysis (SSA) (Hassan and Mihalcea, 2011), and several WordNet-based similarity measures. Table 3 shows the results of the NESA model with different concept relatedness approaches and other state-of-the-art methods of calculating word relatedness. The knowledge-based methods that use lexical resources like WordNet or Roget's thesaurus (Jarmasz and Szpakowicz, 2004) achieve higher accuracy if the words in the benchmark datasets are available in the knowledge bases. For instance, the WordNet-based measures (H&S (Hirst and St-Onge, 1998), L&C (Leacock and Chodorow, 1998), Lesk (Banerjee and Pedersen, 2002), W&P (Wu and Palmer, 1994), Resnik (Resnik, 1995), J&C (Jiang and Conrath, 1997), and Lin (Lin, 1998)) and the Roget's thesaurus-based measure (Jarmasz and Szpakowicz, 2004) achieved higher accuracy on the MC30 and RG65 datasets. However, these approaches may not fit well for the datasets that contain non-dictionary words; therefore, the accuracy of knowledge-based measures decreases significantly on the other datasets. The corpus-based measures ESA and SSA achieved higher scores than the knowledge-based methods on the WN353, WN353Rel, WN353Sim and MT287 datasets. Moreover, corpus-based methods performed comparably to knowledge-based methods on MC30 and RG65. Most of the knowledge-based measures use the taxonomic relations for computing word relatedness. Therefore, these measures obtained poor results on WN353Rel in contrast to the WN353Sim dataset. 
However, corpus-based measures performed well for both types of relations, i.e. similarity and relatedness.", "cite_spans": [ { "start": 209, "end": 235, "text": "(Hassan and Mihalcea, 2011)", "ref_id": "BIBREF12" }, { "start": 481, "end": 511, "text": "(Jarmasz and Szpakowicz, 2004)", "ref_id": "BIBREF15" }, { "start": 653, "end": 678, "text": "(Hirst and St-Onge, 1998)", "ref_id": "BIBREF13" }, { "start": 685, "end": 713, "text": "(Leacock and Chodorow, 1998)", "ref_id": "BIBREF18" }, { "start": 721, "end": 750, "text": "(Banerjee and Pedersen, 2002)", "ref_id": "BIBREF4" }, { "start": 757, "end": 778, "text": "(Wu and Palmer, 1994)", "ref_id": "BIBREF27" }, { "start": 788, "end": 802, "text": "(Resnik, 1995)", "ref_id": "BIBREF23" }, { "start": 838, "end": 849, "text": "(Lin, 1998)", "ref_id": "BIBREF19" }, { "start": 886, "end": 916, "text": "(Jarmasz and Szpakowicz, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 236, "end": 243, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiment", "sec_num": "6.2" }, { "text": "The NESA model combined with any concept relatedness measure outperforms ESA on all the word relatedness benchmark datasets. This shows that considering non-orthogonality between explicit concepts in the ESA model improves the accuracy. NESA-VSM-Hyperlink performs better than NESA-VSM-Text, implying that considering only the hyperlinks from the article content works better than taking the overlap of the whole content. NESA-ESA-WikiTitle and NESA-DiSER achieved higher scores than both NESA-VSM-Text and NESA-VSM-Hyperlink. This shows that the distributional representation of the article title captures the semantic information better than considering only the corresponding article content. Another interesting observation is that the correlation scores obtained by the NESA model with the four concept relatedness measures follow the same order in Table 3 as the correlation scores obtained in the concept relatedness evaluation shown in Table 2. This demonstrates the consistency of the proposed concept relatedness measures in two different experimental settings. NESA-DiSER achieved the highest correlation scores on all the word relatedness benchmark datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "6.2" }, { "text": "We presented Non-Orthogonal ESA, which introduces the relatedness between the explicit concepts in the ESA model for computing semantic relatedness, without compromising the explicit property of the ESA concept space. We showed that the word relatedness results vary with the different concept relatedness measures. NESA outperformed all state-of-the-art methods; in particular, NESA-DiSER achieved the highest correlation with the gold standard. 
We also evaluated the different concept relatedness measures using the benchmark dataset KORE, on which DiSER outperformed all the other measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "http://dbpedia.org/About 2 http://yago-knowledge.org/ 3 https://www.freebase.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "EnRG demo: http://enrg.insight-centre.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://en.wikipedia.org/wiki/Wikipedia:Namespace", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://datahub.io/dataset/yago 7 http://www.mpi-inf.mpg.de/yago-naga/aida", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been funded by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 (INSIGHT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "8" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Wikipedia-based distributional semantics for entity relatedness", "authors": [ { "first": "Nitish", "middle": [], "last": "Aggarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2014, "venue": "AAAI Fall Symposium Series", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Aggarwal and Paul Buitelaar. 2014. Wikipedia-based distributional semantics for entity relatedness. In 2014 AAAI Fall Symposium Series.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Exploring esa to improve word relatedness", "authors": [ { "first": "Nitish", "middle": [], "last": "Aggarwal", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Asooja", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2014, "venue": "Lexical and Computational Semantics (* SEM 2014", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Aggarwal, Kartik Asooja, and Paul Buitelaar. 2014. Exploring ESA to improve word relatedness. Lexical and Computational Semantics (*SEM 2014), 51.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Who are the american vegans related to brad pitt? exploring related entities", "authors": [ { "first": "Nitish", "middle": [], "last": "Aggarwal", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Asooja", "suffix": "" }, { "first": "Housam", "middle": [], "last": "Ziad", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2015, "venue": "24th International World Wide Web Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Aggarwal, Kartik Asooja, Housam Ziad, and Paul Buitelaar. 2015. Who are the American vegans related to Brad Pitt? Exploring related entities. 
In 24th International World Wide Web Conference (WWW 2015), Florence, Italy.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A study on similarity and relatedness using distributional and wordnet-based approaches", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Enrique", "middle": [], "last": "Alfonseca", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Kravalova", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pa\u015fca", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Soroa", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "19--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19-27.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An adapted lesk algorithm for word sense disambiguation using wordnet", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2002, "venue": "Computational linguistics and intelligent text processing", "volume": "", "issue": "", "pages": "136--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Ted Pedersen. 2002. An adapted Lesk algorithm for word sense disambiguation using WordNet. In Computational linguistics and intelligent text processing, pages 136-145. Springer.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Latent dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "J. Mach. Learn. Res", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The google similarity distance. Knowledge and Data Engineering", "authors": [ { "first": "L", "middle": [], "last": "Rudi", "suffix": "" }, { "first": "", "middle": [], "last": "Cilibrasi", "suffix": "" }, { "first": "M", "middle": [ "B" ], "last": "Paul", "suffix": "" }, { "first": "", "middle": [], "last": "Vitanyi", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on", "volume": "19", "issue": "3", "pages": "370--383", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudi L Cilibrasi and Paul MB Vitanyi. 2007. The Google similarity distance. 
Knowledge and Data Engineering, IEEE Transactions on, 19(3):370-383.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Explicit versus latent concept models for cross-language information retrieval", "authors": [ { "first": "Philipp", "middle": [], "last": "Cimiano", "suffix": "" }, { "first": "Antje", "middle": [], "last": "Schultz", "suffix": "" }, { "first": "Sergej", "middle": [], "last": "Sizov", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Sorg", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Staab", "suffix": "" } ], "year": 2009, "venue": "IJCAI", "volume": "9", "issue": "", "pages": "1513--1518", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Cimiano, Antje Schultz, Sergej Sizov, Philipp Sorg, and Steffen Staab. 2009. Explicit versus latent concept models for cross-language information retrieval. In IJCAI, volume 9, pages 1513-1518.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Computing semantic relatedness using wikipedia-based explicit semantic analysis", "authors": [ { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 20th IJCAI", "volume": "", "issue": "", "pages": "1606--1611", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipedia-based explicit semantic analysis. In Proceedings of the 20th IJCAI, pages 1606-1611.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "One sense per discourse", "authors": [ { "first": "A", "middle": [], "last": "William", "suffix": "" }, { "first": "", "middle": [], "last": "Gale", "suffix": "" }, { "first": "W", "middle": [], "last": "Kenneth", "suffix": "" }, { "first": "David", "middle": [], "last": "Church", "suffix": "" }, { "first": "", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the workshop on Speech and Natural Language", "volume": "", "issue": "", "pages": "233--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "William A Gale, Kenneth W Church, and David Yarowsky. 1992. One sense per discourse. In Proceedings of the workshop on Speech and Natural Language, pages 233-237. ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Insights into explicit semantic analysis", "authors": [ { "first": "Thomas", "middle": [], "last": "Gottron", "suffix": "" }, { "first": "Maik", "middle": [], "last": "Anderka", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th CIKM", "volume": "", "issue": "", "pages": "1961--1964", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Gottron, Maik Anderka, and Benno Stein. 2011. Insights into explicit semantic analysis. In Proceedings of the 20th CIKM, pages 1961-1964. ACM.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Distributional structure", "authors": [ { "first": "Zellig", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1954, "venue": "", "volume": "10", "issue": "", "pages": "146--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig Harris. 1954. Distributional structure. 
In Word 10 (23), pages 146-162.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semantic relatedness using salient semantic analysis", "authors": [ { "first": "Samer", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2011, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samer Hassan and Rada Mihalcea. 2011. Semantic relatedness using salient semantic analysis. In AAAI.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Lexical chains as representations of context for the detection and correction of malapropisms. WordNet: An electronic lexical database", "authors": [ { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" }, { "first": "David", "middle": [], "last": "St-Onge", "suffix": "" } ], "year": 1998, "venue": "", "volume": "305", "issue": "", "pages": "305--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graeme Hirst and David St-Onge. 1998. Lexical chains as representations of context for the detection and correction of malapropisms. WordNet: An electronic lexical database, 305:305-332.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Kore: Keyphrase overlap relatedness for entity disambiguation", "authors": [ { "first": "Johannes", "middle": [], "last": "Hoffart", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Seufert", "suffix": "" }, { "first": "Dat", "middle": [], "last": "Ba Nguyen", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Theobald", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 21st ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "545--554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Hoffart, Stephan Seufert, Dat Ba Nguyen, Martin Theobald, and Gerhard Weikum. 2012. Kore: Keyphrase overlap relatedness for entity disambiguation. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 545-554. ACM.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Roget's thesaurus and semantic similarity", "authors": [ { "first": "Mario", "middle": [], "last": "Jarmasz", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2003, "venue": "Recent Advances in Natural Language Processing III: Selected Papers from RANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mario Jarmasz and Stan Szpakowicz. 2004. Roget's thesaurus and semantic similarity. Recent Advances in Natural Language Processing III: Selected Papers from RANLP, 2003:111.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Semantic similarity based on corpus statistics and lexical taxonomy", "authors": [ { "first": "J", "middle": [], "last": "Jay", "suffix": "" }, { "first": "David", "middle": [ "W" ], "last": "Jiang", "suffix": "" }, { "first": "", "middle": [], "last": "Conrath", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay J Jiang and David W Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. arXiv preprint cmp-lg/9709008.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An introduction to latent semantic analysis. 
Discourse Processes", "authors": [ { "first": "P", "middle": [ "W" ], "last": "T K Landauer", "suffix": "" }, { "first": "D", "middle": [], "last": "Foltz", "suffix": "" }, { "first": "", "middle": [], "last": "Laham", "suffix": "" } ], "year": 1998, "venue": "", "volume": "25", "issue": "", "pages": "259--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "T K Landauer, P. W. Foltz, and D Laham. 1998. An in- troduction to latent semantic analysis. Discourse Pro- cesses, 25:259-284.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Combining local context and wordnet similarity for word sense identification. WordNet: An electronic lexical database", "authors": [ { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 1998, "venue": "", "volume": "49", "issue": "", "pages": "265--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudia Leacock and Martin Chodorow. 1998. Com- bining local context and wordnet similarity for word sense identification. WordNet: An electronic lexical database, 49(2):265-283.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "An information-theoretic definition of similarity", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "ICML", "volume": "98", "issue": "", "pages": "296--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. An information-theoretic definition of similarity. In ICML, volume 98, pages 296-304.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Contextual correlates of semantic similarity", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" }, { "first": "G", "middle": [], "last": "Walter", "suffix": "" }, { "first": "", "middle": [], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "Language and cognitive processes", "volume": "6", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller and Walter G Charles. 1991. Contex- tual correlates of semantic similarity. Language and cognitive processes, 6(1):1-28.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improving esa with document similarity", "authors": [ { "first": "Tamara", "middle": [], "last": "Polajnar", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Aggarwal", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Asooja", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2013, "venue": "Advances in Information Retrieval", "volume": "", "issue": "", "pages": "582--593", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tamara Polajnar, Nitish Aggarwal, Kartik Asooja, and Paul Buitelaar. 2013. Improving esa with docu- ment similarity. In Advances in Information Retrieval, pages 582-593. 
Springer.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A word at a time: computing word relatedness using temporal semantic analysis", "authors": [ { "first": "Kira", "middle": [], "last": "Radinsky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2011, "venue": "20th WWW", "volume": "", "issue": "", "pages": "337--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: com- puting word relatedness using temporal semantic anal- ysis. In 20th WWW, pages 337-346.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Using information content to evaluate semantic similarity in a taxonomy", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1995. Using information content to eval- uate semantic similarity in a taxonomy. arXiv preprint cmp-lg/9511007.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Contextual correlates of synonymy", "authors": [ { "first": "Herbert", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "B", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Communications of the ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627-633.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Wikirelate! computing semantic relatedness using wikipedia", "authors": [ { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2006, "venue": "AAAI", "volume": "6", "issue": "", "pages": "1419--1424", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Strube and Simone Paolo Ponzetto. 2006. Wikirelate! computing semantic relatedness using wikipedia. In AAAI, volume 6, pages 1419-1424.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "An effective, lowcost measure of semantic relatedness obtained from wikipedia links", "authors": [ { "first": "I", "middle": [], "last": "Witten", "suffix": "" }, { "first": "David", "middle": [], "last": "Milne", "suffix": "" } ], "year": 2008, "venue": "Proceeding of AAAI Workshop on Wikipedia and Artificial Intelligence: an Evolving Synergy", "volume": "", "issue": "", "pages": "25--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "I Witten and David Milne. 2008. An effective, low- cost measure of semantic relatedness obtained from wikipedia links. 
In Proceedings of AAAI Workshop on Wikipedia and Artificial Intelligence: an Evolving Synergy, AAAI Press, Chicago, USA, pages 25-30.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Verbs semantics and lexical selection", "authors": [ { "first": "Zhibiao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd annual meeting on ACL", "volume": "", "issue": "", "pages": "133--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd annual meeting on ACL, pages 133-138. ACL.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "content": "
# | football | soccer
1 | FIFA | History of soccer in the United States
2 | Football | Soccer in the United States
3 | History of association football | United States Soccer Federation
4 | Football in England | North American Soccer League (1968-84)
5 | Association football | United Soccer Leagues
", "num": null, "text": "Top 5 Wikipedia concepts for \"football\" and \"soccer\" in the ESA vector", "type_str": "table" }, "TABREF1": { "html": null, "content": "
Concept Relatedness Measure | Spearman Rank Correlation with human
VSM-Text | 0.510
VSM-Hyperlink | 0.637
ESA | 0.661
DiSER | 0.781
WLM | 0.610
KPCS | 0.698
KORE | 0.673
", "num": null, "text": "Spearman rank correlation of concept relatedness measures with gold standard", "type_str": "table" }, "TABREF2": { "html": null, "content": "
Measure | WN353 | WN353Rel | WN353Sim | MC30 | RG65 | MT287
H&S | 0.347 | 0.142 | 0.497 | 0.811 | 0.813 | 0.278
L&C | 0.302 | 0.172 | 0.412 | 0.793 | 0.823 | 0.284
Lesk | 0.337 | 0.125 | 0.511 | 0.583 | 0.5466 | 0.271
W&P | 0.316 | 0.131 | 0.461 | 0.784 | 0.807 | 0.331
Resnik | 0.353 | 0.184 | 0.535 | 0.693 | 0.731 | 0.234
J&C | 0.317 | 0.089 | 0.442 | 0.820 | 0.804 | 0.296
Lin | 0.348 | 0.154 | 0.483 | 0.750 | 0.788 | 0.286
Roget | 0.415 | -- | -- | 0.856 | 0.804 | --
SSA | 0.629 | -- | -- | 0.810 | 0.830 | --
Polajnar et al. | 0.664 | -- | -- | -- | -- | --
ESA | 0.660 | 0.643 | 0.663 | 0.765 | 0.826 | 0.507
NESA (VSM-Text) | 0.666 | 0.648 | 0.669 | 0.768 | 0.827 | 0.509
NESA (VSM-Hyperlink) | 0.670 | 0.649 | 0.672 | 0.768 | 0.828 | 0.516
NESA (ESA-WikiTitle) | 0.681 | 0.652 | 0.684 | 0.774 | 0.830 | 0.541
NESA (DiSER) | 0.696 | 0.663 | 0.719 | 0.784 | 0.839 | 0.572
", "num": null, "text": "Spearman rank correlation of relatedness measures with gold standard datasets", "type_str": "table" } } } }