{ "paper_id": "P18-1004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:41:30.896909Z" }, "title": "Explicit Retrofitting of Distributional Word Vectors", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Mannheim B6", "location": { "addrLine": "29", "postCode": "DE-68161", "settlement": "Mannheim" } }, "email": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "", "affiliation": { "laboratory": "Language Technology Lab University of Cambridge", "institution": "", "location": { "addrLine": "9 West Road", "postCode": "CB3 9DA", "settlement": "Cambridge" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks-lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.", "pdf_parse": { "paper_id": "P18-1004", "_pdf_hash": "", "abstract": [ { "text": "Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks-lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) . 
Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "cite_spans": [ { "start": 188, "end": 212, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF7" }, { "start": 213, "end": 236, "text": "Chen and Manning, 2014;", "ref_id": "BIBREF6" }, { "start": 237, "end": 258, "text": "Melamud et al., 2016)", "ref_id": "BIBREF35" }, { "start": 370, "end": 384, "text": "(Harris, 1954)", "ref_id": "BIBREF19" }, { "start": 459, "end": 482, "text": "(Mikolov et al., 2013b;", "ref_id": "BIBREF37" }, { "start": 483, "end": 507, "text": "Pennington et al., 2014;", "ref_id": "BIBREF48" }, { "start": 508, "end": 533, "text": "Levy and Goldberg, 2014a;", "ref_id": "BIBREF30" }, { "start": 534, "end": 552, "text": "Levy et al., 2015;", "ref_id": "BIBREF32" }, { "start": 553, "end": 577, "text": "Bojanowski et al., 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces. Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) . For example, it is difficult to discern synonyms from antonyms in distributional spaces. This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glava\u0161 an\u010f Stajner, 2015; Faruqui et al., 2015; Mrk\u0161i\u0107 et al., 2016; Kim et al., 2016b) .", "cite_spans": [ { "start": 154, "end": 176, "text": "Schwartz et al., 2015)", "ref_id": "BIBREF51" }, { "start": 351, "end": 369, "text": "(Yih et al., 2012;", "ref_id": "BIBREF60" }, { "start": 370, "end": 392, "text": "Mohammad et al., 2013)", "ref_id": null }, { "start": 729, "end": 755, "text": "(Glava\u0161 an\u010f Stajner, 2015;", "ref_id": null }, { "start": 756, "end": 777, "text": "Faruqui et al., 2015;", "ref_id": "BIBREF11" }, { "start": 778, "end": 798, "text": "Mrk\u0161i\u0107 et al., 2016;", "ref_id": "BIBREF40" }, { "start": 799, "end": 817, "text": "Kim et al., 2016b)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting. Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glava\u0161 and Ponzetto, 2017) . 
External constraints are commonly pairs of words between which a particular relation holds.", "cite_spans": [ { "start": 268, "end": 284, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 311, "end": 338, "text": "(Ganitkevitch et al., 2013)", "ref_id": "BIBREF15" }, { "start": 353, "end": 381, "text": "(Navigli and Ponzetto, 2012)", "ref_id": "BIBREF42" }, { "start": 470, "end": 492, "text": "(Faruqui et al., 2015;", "ref_id": "BIBREF11" }, { "start": 506, "end": 533, "text": "(Glava\u0161 and Ponzetto, 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing specialization methods exploit the external linguistic constraints in two prominent ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrk\u0161i\u0107 et al., 2017, inter alia) . The latter, in general, outperform the former (Mrk\u0161i\u0107 et al., 2016) . Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact. In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "cite_spans": [ { "start": 138, "end": 159, "text": "(Yu and Dredze, 2014;", "ref_id": "BIBREF62" }, { "start": 160, "end": 179, "text": "Kiela et al., 2015;", "ref_id": "BIBREF24" }, { "start": 180, "end": 212, "text": "Nguyen et al., 2016, inter alia)", "ref_id": null }, { "start": 340, "end": 362, "text": "(Faruqui et al., 2015;", "ref_id": "BIBREF11" }, { "start": 363, "end": 395, "text": "Mrk\u0161i\u0107 et al., 2017, inter alia)", "ref_id": null }, { "start": 444, "end": 465, "text": "(Mrk\u0161i\u0107 et al., 2016)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations. Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces. At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models. Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space. The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances. 
In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking. Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup. In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "cite_spans": [ { "start": 144, "end": 162, "text": "Gerz et al., 2016)", "ref_id": "BIBREF16" }, { "start": 345, "end": 368, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF36" }, { "start": 369, "end": 388, "text": "Smith et al., 2017)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vuli\u0107 et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glava\u0161 and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vuli\u0107 et al., 2017a) . A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart. The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods. Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vuli\u0107 et al., 2017b) . 
In what follows, we discuss the two model categories.", "cite_spans": [ { "start": 124, "end": 144, "text": "Vuli\u0107 et al., 2017b)", "ref_id": "BIBREF55" }, { "start": 177, "end": 198, "text": "(Kim et al., 2016b,a)", "ref_id": null }, { "start": 228, "end": 249, "text": "(Nguyen et al., 2017;", "ref_id": "BIBREF43" }, { "start": 250, "end": 276, "text": "Glava\u0161 and Ponzetto, 2017;", "ref_id": "BIBREF17" }, { "start": 305, "end": 326, "text": "(Nguyen et al., 2016)", "ref_id": "BIBREF44" }, { "start": 377, "end": 398, "text": "(Vuli\u0107 et al., 2017a)", "ref_id": "BIBREF54" }, { "start": 860, "end": 876, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 888, "end": 908, "text": "(Baker et al., 1998)", "ref_id": "BIBREF1" }, { "start": 942, "end": 969, "text": "(Ganitkevitch et al., 2013;", "ref_id": "BIBREF15" }, { "start": 970, "end": 991, "text": "Pavlick et al., 2015)", "ref_id": "BIBREF47" }, { "start": 1017, "end": 1041, "text": "(Cotterell et al., 2016)", "ref_id": "BIBREF8" }, { "start": 1083, "end": 1104, "text": "(Vuli\u0107 et al., 2017b)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Joint Specialization Models. These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b ), or Canonical Correlation Analysis (Dhillon et al., 2015 . They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS-or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016 Nguyen et al., , 2017 . Besides generally displaying lower performance compared to retrofitting methods (Mrk\u0161i\u0107 et al., 2016) , these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model. This makes them less versatile than the retrofitting methods.", "cite_spans": [ { "start": 177, "end": 199, "text": "(Mikolov et al., 2013b", "ref_id": "BIBREF37" }, { "start": 200, "end": 258, "text": "), or Canonical Correlation Analysis (Dhillon et al., 2015", "ref_id": null }, { "start": 331, "end": 352, "text": "(Yu and Dredze, 2014;", "ref_id": "BIBREF62" }, { "start": 353, "end": 369, "text": "Xu et al., 2014;", "ref_id": "BIBREF59" }, { "start": 370, "end": 389, "text": "Kiela et al., 2015)", "ref_id": "BIBREF24" }, { "start": 476, "end": 494, "text": "(Liu et al., 2015;", "ref_id": "BIBREF33" }, { "start": 495, "end": 512, "text": "Ono et al., 2015;", "ref_id": "BIBREF45" }, { "start": 513, "end": 536, "text": "Bollegala et al., 2016;", "ref_id": "BIBREF4" }, { "start": 537, "end": 558, "text": "Osborne et al., 2016;", "ref_id": "BIBREF46" }, { "start": 559, "end": 578, "text": "Nguyen et al., 2016", "ref_id": "BIBREF44" }, { "start": 579, "end": 600, "text": "Nguyen et al., , 2017", "ref_id": "BIBREF43" }, { "start": 683, "end": 704, "text": "(Mrk\u0161i\u0107 et al., 2016)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Post-Processing Models. 
Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Sch\u00fctze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrk\u0161i\u0107 et al., 2016) . These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge. While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrk\u0161i\u0107 et al., 2016; Vuli\u0107 et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks. The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "cite_spans": [ { "start": 165, "end": 187, "text": "(Faruqui et al., 2015;", "ref_id": "BIBREF11" }, { "start": 188, "end": 212, "text": "Rothe and Sch\u00fctze, 2015;", "ref_id": "BIBREF49" }, { "start": 213, "end": 234, "text": "Wieting et al., 2015;", "ref_id": "BIBREF57" }, { "start": 235, "end": 255, "text": "Nguyen et al., 2016;", "ref_id": "BIBREF44" }, { "start": 256, "end": 276, "text": "Mrk\u0161i\u0107 et al., 2016)", "ref_id": "BIBREF40" }, { "start": 540, "end": 561, "text": "(Mrk\u0161i\u0107 et al., 2016;", "ref_id": "BIBREF40" }, { "start": 562, "end": 582, "text": "Vuli\u0107 et al., 2017b)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our explicit retrofitting framework brings together desirable properties of both model classes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "(1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our explicit retrofitting (ER) approach, illustrated by Figure 1a , consists of two major components:", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 65, "text": "Figure 1a", "ref_id": null } ], "eq_spans": [], "section": "Explicit Retrofitting", "sec_num": "3" }, { "text": "(1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network. This network, shown in Figure 1b learns a non-linear global specialization function from the training instances. 
", "cite_spans": [], "ref_spans": [ { "start": 195, "end": 204, "text": "Figure 1b", "ref_id": null } ], "eq_spans": [], "section": "Explicit Retrofitting", "sec_num": "3" }, { "text": "Let X = {x i } N i=1 , x i \u2208 R d be the d-dimensional distributional", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "V = {w i } N i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "referring to the associated vocabulary) and let X = {x i } N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting. Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "C = {(w i , w j , r) l } L l=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w i and w j and a semantic relation r that holds between them. The most recent state-of-the-art retrofitting work Vuli\u0107 et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints. Therefore, we use synonymy and antonymy relations from external resources, i.e., r l \u2208 {ant, syn}. Let g be the function measuring the distance between words w i and w j based on their vector representations. The algorithm for preparing training instances from constraints is guided by the following assumptions:", "cite_spans": [ { "start": 234, "end": 254, "text": "Vuli\u0107 et al., 2017b)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "1. All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2. All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3. The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) \u2208 C with distances that words w i and w j from those pairs have with other vocabulary words w m . It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible. However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like. 
This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space:", "cite_spans": [ { "start": 551, "end": 573, "text": "(Faruqui et al., 2016;", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "g(x i , x m ) = g(x i , x m )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": ". This way we preserve the useful semantic content available in the original distributional space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "To anticipate this, we compare the distances of pairs (w i , w j , r) \u2208 C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X. Concretely, for each constraint (w i , w j , r) \u2208 C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j . We then create, for each constraint (w i , w j , r) \u2208 C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "... 21, -0.52, ..., 0.47] ...", "cite_spans": [ { "start": 4, "end": 25, "text": "21, -0.52, ..., 0.47]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "From Constraints to Training Instances", "sec_num": "3.1" }, { "text": "acquire \uf0e0 [0.11, -0.23, ...,1.11] bright \uf0e0 [0.11, -0.23, ..., 1.11] buy \uf0e0 [-0.41, 0.29, ..., -1.07] ... 
target \uf0e0 [-1.7, 0.13, ..., -0.92] top \uf0e0 [-0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional vector space", "sec_num": null }, { "text": "micro-batch 1: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training instances (micro-batches)", "sec_num": null }, { "text": "original: v bright , v light : 0.0 neg 1: V bright , V sunset : 0.35 neg 2: V light , V bulb : 0.27 micro-batch 2: original: v source , v target : 2.0 neg 1: V source , V river : 0.29 neg 2: V target , V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training instances (micro-batches)", "sec_num": null }, { "text": "micro-batch 1: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training instances (micro-batches)", "sec_num": null }, { "text": "original: v bright , v light : 0.0 neg 1: V bright , V sunset : 0.35 neg 2: V light , V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training instances (micro-batches)", "sec_num": null }, { "text": "x j x i x' j =f(x j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training instances (micro-batches)", "sec_num": null }, { "text": "x' i =f(x i ) (b) Supervised specialization model Figure 1 : (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model. (b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f , defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 58, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Training instances (micro-batches)", "sec_num": null }, { "text": "M (wi, wj, r) = {(xi, xj, gr)} \u222a {(xi, x k m , g(xi, x k m ))} K k=1 \u222a {(xj, x k n , g(xj, x k n ))} K k=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training instances (micro-batches)", "sec_num": null }, { "text": "( 1)with g r = g min if r = syn; g r = g max if r = ant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training instances (micro-batches)", "sec_num": null }, { "text": "Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness. We seek the optimal parameters \u03b8 of the parametrized function f (x; \u03b8) :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Linear Specialization Function", "sec_num": "3.2" }, { "text": "R d \u2192 R d (where d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Linear Specialization Function", "sec_num": "3.2" }, { "text": "is the dimensionality of the input space). 
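Before turning to the specialization function itself, the micro-batch construction of Eq. (1) can be made concrete with a small sketch. This is an illustrative Python/numpy rendering with hypothetical helper names, assuming cosine distance with g_min = 0 and g_max = 2; it is not the authors' released implementation.

```python
import numpy as np

G_MIN, G_MAX = 0.0, 2.0  # distance targets for synonyms / antonyms (cosine distance)

def cos_dist(x, y):
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def k_nearest(idx, vectors, k):
    """Indices of the k vectors closest (cosine) to vectors[idx], excluding idx."""
    dists = np.array([cos_dist(vectors[idx], v) for v in vectors])
    dists[idx] = np.inf
    return np.argsort(dists)[:k]

def micro_batch(w_i, w_j, rel, vectors, vocab, k=4):
    """One micro-batch per Eq. (1): the constraint pair with its target distance,
    plus k nearest distributional neighbours of each word as negative examples,
    each keeping its original distributional distance."""
    i, j = vocab[w_i], vocab[w_j]
    g_r = G_MIN if rel == "syn" else G_MAX
    batch = [(vectors[i], vectors[j], g_r)]
    for m in k_nearest(i, vectors, k):
        batch.append((vectors[i], vectors[m], cos_dist(vectors[i], vectors[m])))
    for n in k_nearest(j, vectors, k):
        batch.append((vectors[j], vectors[n], cos_dist(vectors[j], vectors[n])))
    return batch  # 2k + 1 training instances
```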
The specialized embedding x i of the word w i is then obtained as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Linear Specialization Function", "sec_num": "3.2" }, { "text": "x i = f (x i ; \u03b8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Linear Specialization Function", "sec_num": "3.2" }, { "text": "The specialized space X is obtained by transforming distributional vectors of all vocabulary words, X = f (X; \u03b8). We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations \u03c6. The illustration of this network is given in Figure 1b. The i-th hidden layer is defined with a weight matrix W i and a bias vector b i :", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 313, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Non-Linear Specialization Function", "sec_num": "3.2" }, { "text": "h i (x; \u03b8i) = \u03c6 h i\u22121 (x; \u03b8i\u22121)W i + b i (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Linear Specialization Function", "sec_num": "3.2" }, { "text": "where \u03b8 i is the subset of network's parameters up to the i-th layer. Note that in this notation, x = h 0 (x; \u2205) and x = f (x, \u03b8) = h H (x; \u03b8). Let d h be the size of the hidden layers. The network's parameters are then as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Linear Specialization Function", "sec_num": "3.2" }, { "text": "W 1 \u2208 R d\u00d7d h ; W i \u2208 R d h \u00d7d h , i \u2208 {2, . . . , H \u2212 1}; W H \u2208 R d h \u00d7d ; b i \u2208 R d h , i \u2208 {1, . . . , H \u2212 1}; b H \u2208 R d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Linear Specialization Function", "sec_num": "3.2" }, { "text": "We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1). 
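As a point of reference, the following is a minimal numpy sketch of this specialization model: Eq. (2) applied through H layers with tanh activations and the parameter shapes listed above. The names and the random initialization are ours and purely illustrative, not the authors' implementation.

```python
import numpy as np

def init_params(d, d_h, H, seed=0):
    """W1: d x d_h, W2..W_{H-1}: d_h x d_h, WH: d_h x d, plus bias vectors."""
    rng = np.random.default_rng(seed)
    dims = [d] + [d_h] * (H - 1) + [d]
    Ws = [rng.normal(0.0, 0.1, size=(dims[i], dims[i + 1])) for i in range(H)]
    bs = [np.zeros(dims[i + 1]) for i in range(H)]
    return Ws, bs

def f(x, Ws, bs):
    """Specialization function f(x; theta): tanh layers as in Eq. (2)."""
    h = x
    for W, b in zip(Ws, bs):
        h = np.tanh(h @ W + b)
    return h

# Specializing a toy 300-dimensional space X (one row per word vector):
Ws, bs = init_params(d=300, d_h=1000, H=5)
X = np.random.default_rng(1).normal(size=(10, 300))
X_specialized = f(X, Ws, bs)  # X' = f(X; theta)
```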
Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors x i and x j and a score g denoting the desired distance between the specialized vectors x i and x j of corresponding words w i and w j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "Mean Square Distance Objective (ER-MSD).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "Let our training batch consist of N training instances,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "{(x i 1 , x i 2 , g i )} N i=1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "JMSD = N i=1 g(f (x i 1 ), f (x i 2 )) \u2212 g i 2", "eq_num": "(3)" } ], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X in which distances between all synonyms amount to g min , distances between all antonyms amount to g max and distances between all other word pairs remain the same as in the original space. The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (w i , w j ) have smaller (or larger) distances than corresponding non-constraint word pairs (w i , w k ) and (w j , w k ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "Contrastive Objective (ER-CNT). An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and synonyms)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "with the distances of their corresponding negative examples, i.e., the pairs from their respective microbatch (cf. Eq. (1) in Section 3.1). Such an objective should directly enforce that the similarity scores for synonyms (antonyms) (w i , w j ) are larger (or smaller, for antonyms) than for pairs (w i , w k ) and (w j , w k ) involving the same words w i and w j , respectively. Let S and A be the sets of microbatches created from synonymy and antonymy con-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "straints. Let M s = {(x i 1 , x i 2 , g i )} 2K+1 i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "be one micro-batch created from one synonymy constraint and let M a be the analogous micro-batch created from one antonymy constraint. Let us then assume that the first triple (i.e., for i = 1) in every microbatch corresponds to the constraint pair and the remaining 2K triples (i.e., for i \u2208 {2, . . . , 2K + 1}) to respective non-constraint word pairs. 
We then define the contrastive objective as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "JCNT = Ms\u2208S 2K+1 i=2 (g i \u2212 gmin ) \u2212 (g i \u2212 g 1 ) 2 + Ma\u2208A 2K+1 i=2 (gmax \u2212 g i ) \u2212 (g 1 \u2212 g i ) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "where g is a short-hand notation for the distance between vectors in the specialized space, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "g (x 1 , x 2 ) = g(x 1 , x 2 ) = g(f (x 1 ), f (x 2 )).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "Topological Regularization. Because the distributional space X already contains useful semantic information, we want our specialized space X to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X. To this end, we define an additional regularization objective that measures the distance between the original vectors x 1 and x 2 and their specialized counterparts x 1 = f (x 1 ) and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "x 2 = f (x 2 ), for all examples in the training set:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "JREG = N i=1 g(x i 1 , f (x i 1 )) + g(x i 2 , f (x i 2 ))", "eq_num": "(4)" } ], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "We minimize the final objective function J = J + \u03bbJ REG . J is either J MSD or J CNT and \u03bb is the regularization factor which determines how strictly we retain the topology of the original space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization Objectives", "sec_num": "3.3" }, { "text": "Distributional Vectors. 
In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 -vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b) , using the context windows of size 2;", "cite_spans": [ { "start": 402, "end": 425, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF37" }, { "start": 429, "end": 454, "text": "Levy and Goldberg (2014b)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "(2) GLOVE-CC -vectors trained with the GloVe (Pennington et al., 2014 ) model on the Common Crawl; and (3) FASTTEXT -vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017) .", "cite_spans": [ { "start": 45, "end": 69, "text": "(Pennington et al., 2014", "ref_id": "BIBREF48" }, { "start": 256, "end": 281, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Linguistic Constraints. We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015) . These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009) , comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs. Although this seems like a large number of linguistic constraints, there is only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the dictionary of the pre-trained distributional vector space. For example, only 15.3% of the words from constraints are found in the whole vocabulary of SGNS-W2 embeddings. Similarly, we find only 13.3% and 14.6% constraint words among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively. This low coverage emphasizes the core limitation of current retrofitting methods, being able to specialize only the vectors of words seen in the external constraints, and the need for our global ER method which can specialize all word vectors from the distributional space.", "cite_spans": [ { "start": 97, "end": 117, "text": "(Zhang et al., 2014;", "ref_id": "BIBREF63" }, { "start": 118, "end": 135, "text": "Ono et al., 2015)", "ref_id": "BIBREF45" }, { "start": 180, "end": 196, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 201, "end": 233, "text": "Roget's Thesaurus (Kipfer, 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "In all experiments, we set the distance function g to cosine distance: 2 ) ) and use the hyperbolic tangent as activation, \u03c6 = tanh. For each constraint (w i , w j ), we create K = 4 corresponding negative examples for both w i and w j , resulting in micro-batches with 2K + 1 = 9 training instances. 3 We separate 10% of the created micro-batches as the validation set. We then tune the hyper-parameter values, the number of hidden layers H = 5 and their size d h = 1000, and the topological regularization factor \u03bb = 0.3 by minimizing the model's objective J on the validation set. 
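As an illustration of how these settings combine with the objectives of Section 3.3, the sketch below spells out the ER-MSD and topological regularization terms for one batch of (x_i1, x_i2, g_i) triples, using cosine distance and lambda = 0.3; here f stands for the (unary) specialization function of Section 3.2. The contrastive ER-CNT objective additionally contrasts each constraint triple with the 2K negative triples of its micro-batch and is omitted here. The snippet is a schematic restatement of Eqs. (3) and (4), not the training code used in the experiments.

```python
import numpy as np

LAMBDA = 0.3  # topological regularization factor

def cos_dist(x, y):
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def j_msd(batch, f):
    """Eq. (3): squared difference between target and specialized distances."""
    return sum((cos_dist(f(x1), f(x2)) - g) ** 2 for x1, x2, g in batch)

def j_reg(batch, f):
    """Eq. (4): keep each specialized vector close to its distributional original."""
    return sum(cos_dist(x1, f(x1)) + cos_dist(x2, f(x2)) for x1, x2, _ in batch)

def loss(batch, f):
    """Final objective: (J_MSD or J_CNT) + lambda * J_REG; shown here with J_MSD."""
    return j_msd(batch, f) + LAMBDA * j_reg(batch, f)
```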
We train the model in mini-batches, each containing N b = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with initial learning rate set to 10 \u22124 . We use the loss on the validation set as the early stopping criteria.", "cite_spans": [ { "start": 301, "end": 302, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 71, "end": 74, "text": "2 )", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "ER Model Configuration.", "sec_num": null }, { "text": "g(x 1 , x 2 ) = 1 \u2212 (x 1 \u2022 x 2 /( x 1 x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ER Model Configuration.", "sec_num": null }, { "text": "Evaluation Setup. We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016) , a recent dataset containing human similarity ratings for 3,500 verb pairs. 4 We use Spearman's \u03c1 rank correlation between gold and predicted word pair scores as the evaluation metric. We evaluate the specialized embedding spaces in two settings. In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb. This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words. In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set. For comparison, we also report performance of the state-of-the-art local retrofitting model ATTRACT-REPEL , which is able to specialize only the words from the linguistic constraints.", "cite_spans": [ { "start": 181, "end": 200, "text": "(Gerz et al., 2016)", "ref_id": "BIBREF16" }, { "start": 278, "end": 279, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "Results. The results with our ER model applied to three distributional spaces are shown in Table 1 . The scores suggest that the proposed ER model is universally useful and robust. The ER-specialized spaces outperform original distributional spaces across the board, for both objective functions. The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER. For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 98, "text": "Table 1", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC. The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from linguistic constraints. The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD). 
This is expected, given that the contrastive objective enforces the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task. The gap is especially visible for FASTTEXT and SGNS-W2 vectors. However, since ATTRACT-REPEL specializes only words seen in linguistic constraints, 5 its performance crucially depends on the coverage of test set words in the constraints. ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words. However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "Analysis. We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor \u03bb). All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., employed constraints did not contain any of the SimLex or SimVerb words).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "In Table 2 we show the specialization performance of the ER-CNT models (H = 5, \u03bb = 0.3), using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV). We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively. Clearly, we obtain the best specialization when combining synonyms and antonyms. Note, however, that using .544 ER-Specialized (X = f (X)) ER- Table 2 : Performance (\u03c1) on SL and SV for ER-CNT models trained with different constraints. only synonyms or only antonyms also improves over the original distributional space. Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor \u03bb (H fixed to 5). The best performance for is obtained for \u03bb = 0.3. Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of linguistic constraints.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": null }, { "start": 504, "end": 511, "text": "Table 2", "ref_id": null }, { "start": 691, "end": 699, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Word Similarity", "sec_num": "5.1" }, { "text": "Readily available large collections of synonymy and antonymy word pairs do not exist for many languages. 
This is why we also investigate zeroshot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Transfer", "sec_num": "5.2" }, { "text": "Evaluation Setup. We use the mapping model of Smith et al. 2017 Table 3 : Spearman's \u03c1 correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on respective language-specific SimLex-999 variants.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Language Transfer", "sec_num": "5.2" }, { "text": "tor space 6 containing word vectors of three other languages -German, Italian, and Croatian -along with the English vectors. 7 Concretely, we map the Italian CBOW vectors (Dinu et al., 2015) , German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian Skip-Gram vectors trained on HrWaC corpus (Ljube\u0161i\u0107 and Erjavec, 2011) to the GLOVE-CC English space. We create the translation pairs needed to learn the projections by automatically translating 4,000 most frequent English words to all three other languages with Google Translate. We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of other languages. We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015; .", "cite_spans": [ { "start": 171, "end": 190, "text": "(Dinu et al., 2015)", "ref_id": "BIBREF10" }, { "start": 327, "end": 355, "text": "(Ljube\u0161i\u0107 and Erjavec, 2011)", "ref_id": "BIBREF34" }, { "start": 843, "end": 871, "text": "(Leviant and Reichart, 2015;", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Language Transfer", "sec_num": "5.2" }, { "text": "Results. The results are provided in Table 3 . They indicate that the ER models can substantially improve (e.g., by 13% for German vector space) over distributional spaces also in the language transfer setup without seeing a single constraint in the target language. These transfer results hold promise to support vector space specialization even for resource-lean languages. 
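To make the transfer setup concrete, the sketch below shows its two steps: an orthogonal (Procrustes-style) projection of a target-language space into the English distributional space, in the spirit of the Smith et al. (2017) mapping, followed by application of the English-trained specialization function f. Array contents, dimensionalities, and names are placeholders; this is not the exact mapping implementation used in the paper.

```python
import numpy as np

def orthogonal_map(src, tgt):
    """Orthogonal map W minimizing ||src @ W - tgt|| for aligned translation pairs
    (rows of src/tgt are the vectors of word-translation pairs in the two spaces)."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

rng = np.random.default_rng(0)
# Stand-ins for the 4,000 translation pairs: German vectors and their English
# (GLOVE-CC) counterparts.
de_pairs, en_pairs = rng.normal(size=(4000, 300)), rng.normal(size=(4000, 300))
W = orthogonal_map(de_pairs, en_pairs)

# Map the full German space into the English space, then specialize it with the
# ER model f trained on English constraints (f as sketched in Section 3.2).
X_de = rng.normal(size=(10, 300))
X_de_mapped = X_de @ W
# X_de_specialized = f(X_de_mapped, Ws, bs)
```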
The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1 ).", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 44, "text": "Table 3", "ref_id": null }, { "start": 602, "end": 609, "text": "Table 1", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Language Transfer", "sec_num": "5.2" }, { "text": "We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream Tasks", "sec_num": "5.3" }, { "text": "Lexical simplification aims to replace complex words -used less frequently and known to fewer speakers -with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text. Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Text Simplification", "sec_num": "5.3.1" }, { "text": "Simplification Using Distributional Vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Text Simplification", "sec_num": "5.3.1" }, { "text": "We use the LIGHT-LS lexical simplification algorithm of Glava\u0161 and\u0160tajner (2015) which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space. 8 For each word in the input text LIGHT-LS retrieves most similar replacement candidates from the vector space. The candidates are then ranked according to several measures of simplicity and fitness for the context. Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word. By plugging-in vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Text Simplification", "sec_num": "5.3.1" }, { "text": "Evaluation Setup. We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014) . For each indicated complex word Horn et al. (2014) collected 50 manual simplifications. We use two evaluation metrics from prior work (Horn et al., 2014; Glava\u0161 and\u0160tajner, 2015) to quantify the quality and frequency of word replacements: (1) Table 4 : Lexical simplification performance with explicit retrofitting applied on three input spaces.", "cite_spans": [ { "start": 73, "end": 91, "text": "Horn et al. (2014)", "ref_id": "BIBREF22" }, { "start": 126, "end": 144, "text": "Horn et al. 
(2014)", "ref_id": "BIBREF22" }, { "start": 228, "end": 247, "text": "(Horn et al., 2014;", "ref_id": "BIBREF22" }, { "start": 248, "end": 272, "text": "Glava\u0161 and\u0160tajner, 2015)", "ref_id": null } ], "ref_spans": [ { "start": 337, "end": 344, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Lexical Text Simplification", "sec_num": "5.3.1" }, { "text": "accurracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the replacement was correct). We plug into LIGHT-LS both unspecialized and specialized variants of three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2. Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Text Simplification", "sec_num": "5.3.1" }, { "text": "Results and Analysis. The results with LIGHT-LS are summarized in Table 4 . ER-CNT model yields considerable gains over unspecialized spaces for both metrics. This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task. Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task. Only 59.6 % of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints. This accentuates the need to specialize the full distributional space in downstream applications as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints. By learning a global specialization function the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints. 
Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "cite_spans": [], "ref_spans": [ { "start": 1013, "end": 1020, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Lexical Text Simplification", "sec_num": "5.3.1" }, { "text": "Table 5: Examples of lexical simplifications performed with the Light-LS tool when using different embedding spaces. The target word to be simplified is in bold. Columns: Text | GLOVE-CC | ATTRACT-REPEL | ER-CNT. (1) \"Wrestlers portrayed a villain or a hero as they followed a series of events that built tension\" | character | protagonist | demon. (2) \"This large version number jump was due to a feeling that a version 1.0 with no major missing pieces was imminent.\" | ones | songs | parts. (3) \"The storm continued, crossing North Carolina, and retained its strength until June 20 when it became extratropical near Newfoundland\" | lost | preserved | preserved. (4) \"Tibooburra has an arid, desert climate with temperatures soaring above 40 Celsius in summer, often reaching as high as 47 degrees Celsius.\" | subtropical | humid | dry.", "cite_spans": [], "ref_spans": [ { "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Lexical Text Simplification", "sec_num": "5.3.1" }, { "text": "Finally, we also evaluate the importance of explicit retrofitting in a downstream language understanding task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016) . A DST model is typically the first component of a dialog system pipeline (Young, 2010) , tasked with capturing user's goals and updating the dialog state at each dialog turn. Similarly as in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "cite_spans": [ { "text": "(Henderson et al., 2014;", "ref_id": "BIBREF20" }, { "text": "Williams et al., 2016)", "ref_id": "BIBREF58" }, { "text": "(Young, 2010)", "ref_id": "BIBREF61" } ], "ref_spans": [], "eq_spans": [], "section": "Dialog State Tracking", "sec_num": "5.3.2" }, { "text": "Table 6: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting (joint goal accuracy, JGA): Distributional (X) .797; Specialized (X' = f(X)) with ATTRACT-REPEL .817; with ER-CNT .816.", "cite_spans": [], "ref_spans": [ { "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Dialog State Tracking", "sec_num": "5.3.2" }, { "text": "Evaluation Setup. To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors . 
9 NBT composes word embeddings into intermediate utterance and context representations. For full model details, we refer the reader to the original paper. Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs). We evaluate performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric. All reported results are averages over 5 runs of the NBT model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ones songs parts", "sec_num": null }, { "text": "Results. We show DST performance in Table 6 . The DST results tell a similar story like word similarity and lexical simplification results -the ER 9 https://github.com/nmrksic/neural-belief-tracker model substantially improves over the distributional space. With linguistic specialization constraints covering 57% of words from the WOZ dataset, ER model's performance is on a par with the ATTRACT-REPEL specialization. This further confirms our hypothesis that the importance of learning a global specialization for the full vocabulary in downstream tasks grows with the drop of the test word coverage by specialization constraints.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 43, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "ones songs parts", "sec_num": null }, { "text": "We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness. Unlike existing retrofitting models, which directly update vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feedforward neural network. Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints. We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking. We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy. We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs. ER code is publicly available at: https:// github.com/codogogo/explirefit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The minimal distance value is gmin = 0 for, e.g., cosine distance or Euclidean distance.2 While some distance functions do have a theoretical maximum (e.g., gmax = 2 for cosine distance), others (e.g., Euclidean distance) may be theoretically unbounded. For unbounded distance measures, we propose using the maximal distance between any two words from the vocabulary as gmax .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For K < 4 we observed significant performance drop. 
Setting K > 4 resulted in negligible performance gains but significantly increased the model training time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Other word similarity datasets such as MEN(Bruni et al., 2014) or WordSim-353(Finkelstein et al., 2002) conflate the concepts of true semantic similarity and semantic relatedness in a broader sense. In contrast, SimLex and SimVerb explicitly discern between the two, with pairs of semantically related but not similar words (e.g. car and wheel) having low ratings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is why ATTRACT-REPEL cannot be applied in the lexically disjoint setting: the scores simply stay the same.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This model was chosen for its ease of use, readily available implementation, and strong comparative results (see(Ruder et al., 2017)). For more details we refer the reader to the original paper and the survey.7 The choice of languages was determined by the availability of the language-specific SimLex-999 variants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Light-LS implementation is available at: https://bitbucket.org/gg42554/embesimp", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Ivan Vuli\u0107 is supported by the ERC Consolidator Grant LEXICAL (no. 648909).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Polyglot: Distributed word representations for multilingual NLP", "authors": [ { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2013, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "183--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of CoNLL, pages 183-192.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The Berkeley FrameNet project", "authors": [ { "first": "Collin", "middle": [ "F" ], "last": "Baker", "suffix": "" }, { "first": "Charles", "middle": [ "J" ], "last": "Fillmore", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Lowe", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "86--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceed- ings of ACL, pages 86-90.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Knowledge-powered deep learning for word embedding", "authors": [ { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ECML-PKDD", "volume": "", "issue": "", "pages": "132--148", "other_ids": { "DOI": [ "10.1007/978-3-662-44848-9_9" ] }, "num": null, "urls": [], "raw_text": "Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Knowledge-powered deep learning for word embed- ding. 
In Proceedings of ECML-PKDD, pages 132- 148.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the ACL", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL, 5:135-146.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Joint word representation learning using a corpus and a semantic lexicon", "authors": [ { "first": "Danushka", "middle": [], "last": "Bollegala", "suffix": "" }, { "first": "Mohammed", "middle": [], "last": "Alsuhaibani", "suffix": "" }, { "first": "Takanori", "middle": [], "last": "Maehara", "suffix": "" }, { "first": "Ken-Ichi", "middle": [], "last": "Kawarabayashi", "suffix": "" } ], "year": 2016, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "2690--2696", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danushka Bollegala, Mohammed Alsuhaibani, Takanori Maehara, and Ken-ichi Kawarabayashi. 2016. Joint word representation learning using a corpus and a semantic lexicon. In Proceedings of AAAI, pages 2690-2696.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Multimodal distributional semantics", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "Journal of Artificial Intelligence Research", "volume": "49", "issue": "", "pages": "1--47", "other_ids": { "DOI": [ "10.1613/jair.4135" ] }, "num": null, "urls": [], "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tificial Intelligence Research, 49:1-47.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "740--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural net- works. 
In Proceedings of EMNLP, pages 740-750.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [ "P" ], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Morphological smoothing and extrapolation of word embeddings", "authors": [ { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1651--1660", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Cotterell, Hinrich Sch\u00fctze, and Jason Eisner. 2016. Morphological smoothing and extrapolation of word embeddings. In Proceedings of ACL, pages 1651-1660.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Eigenwords: Spectral word embeddings", "authors": [ { "first": "S", "middle": [], "last": "Paramveer", "suffix": "" }, { "first": "Dean", "middle": [ "P" ], "last": "Dhillon", "suffix": "" }, { "first": "Lyle", "middle": [ "H" ], "last": "Foster", "suffix": "" }, { "first": "", "middle": [], "last": "Ungar", "suffix": "" } ], "year": 2015, "venue": "Journal of Machine Learning Research", "volume": "16", "issue": "", "pages": "3035--3078", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paramveer S. Dhillon, Dean P. Foster, and Lyle H. Un- gar. 2015. Eigenwords: Spectral word embeddings. Journal of Machine Learning Research, 16:3035- 3078.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improving zero-shot learning by mitigating the hubness problem", "authors": [ { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICLR: Workshop Papers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgiana Dinu, Angeliki Lazaridou, and Marco Ba- roni. 2015. Improving zero-shot learning by mitigat- ing the hubness problem. 
In Proceedings of ICLR: Workshop Papers.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Retrofitting word vectors to semantic lexicons", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Sujay", "middle": [], "last": "Kumar Jauhar", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "1606--1615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL-HLT, pages 1606-1615.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Morphological inflection generation using character sequence to sequence learning", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "634--643", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection genera- tion using character sequence to sequence learning. In Proceedings of NAACL-HLT, pages 634-643.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Placing search in context: The concept revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Zach", "middle": [], "last": "Solan", "suffix": "" }, { "first": "Gadi", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "Eytan", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on Information Systems", "volume": "20", "issue": "1", "pages": "116--131", "other_ids": { "DOI": [ "10.1145/503104.503110" ] }, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Informa- tion Systems, 20(1):116-131.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "PPDB: The Paraphrase Database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. 
In Proceedings of NAACL-HLT, pages 758-764.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "SimVerb-3500: A largescale evaluation set of verb similarity", "authors": [ { "first": "Daniela", "middle": [], "last": "Gerz", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "2173--2182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniela Gerz, Ivan Vuli\u0107, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A large- scale evaluation set of verb similarity. In Proceed- ings of EMNLP, pages 2173-2182.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Dual tensor model for detecting asymmetric lexicosemantic relations", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1758--1768", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goran Glava\u0161 and Simone Paolo Ponzetto. 2017. Dual tensor model for detecting asymmetric lexico- semantic relations. In Proceedings of EMNLP, pages 1758-1768.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Simplifying lexical simplification: Do we need simplified corpora?", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "", "middle": [], "last": "Sanja\u0161tajner", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "63--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goran Glava\u0161 and Sanja\u0160tajner. 2015. Simplifying lex- ical simplification: Do we need simplified corpora? In Proceedings of ACL, pages 63-68.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Distributional structure. Word", "authors": [ { "first": "S", "middle": [], "last": "Zellig", "suffix": "" }, { "first": "", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1954, "venue": "", "volume": "10", "issue": "", "pages": "146--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig S. Harris. 1954. Distributional structure. Word, 10(23):146-162.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The Second Dialog State Tracking Challenge", "authors": [ { "first": "Matthew", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Jason", "middle": [ "D" ], "last": "Wiliams", "suffix": "" } ], "year": 2014, "venue": "Proceedings of SIGDIAL", "volume": "", "issue": "", "pages": "263--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Henderson, Blaise Thomson, and Jason D. Wiliams. 2014. The Second Dialog State Tracking Challenge. 
In Proceedings of SIGDIAL, pages 263- 272.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2015, "venue": "Computational Linguistics", "volume": "41", "issue": "4", "pages": "665--695", "other_ids": { "DOI": [ "10.1162/COLI_a_00237" ] }, "num": null, "urls": [], "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning a lexical simplifier using wikipedia", "authors": [ { "first": "Colby", "middle": [], "last": "Horn", "suffix": "" }, { "first": "Cathryn", "middle": [], "last": "Manduca", "suffix": "" }, { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "458--463", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colby Horn, Cathryn Manduca, and David Kauchak. 2014. Learning a lexical simplifier using wikipedia. In Proceedings of the ACL, pages 458-463.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Ontologically grounded multi-sense representation learning for semantic vector space models", "authors": [ { "first": "Chris", "middle": [], "last": "Sujay Kumar Jauhar", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Dyer", "suffix": "" }, { "first": "", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "683--693", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sujay Kumar Jauhar, Chris Dyer, and Eduard H. Hovy. 2015. Ontologically grounded multi-sense represen- tation learning for semantic vector space models. In Proceedings of NAACL, pages 683-693.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Specializing word embeddings for similarity or relatedness", "authors": [ { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2015, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "2044--2048", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or re- latedness. In Proceedings of EMNLP, pages 2044- 2048.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Adjusting word embeddings with semantic intensity orders", "authors": [ { "first": "Joo-Kyung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Fosler-Lussier", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 1st Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "62--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joo-Kyung Kim, Marie-Catherine de Marneffe, and Eric Fosler-Lussier. 2016a. Adjusting word embed- dings with semantic intensity orders. 
In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 62-69.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Intent detection using semantically enriched word embeddings", "authors": [ { "first": "Joo-Kyung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Gokhan", "middle": [], "last": "Tur", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Ye-Yi", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of SLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joo-Kyung Kim, Gokhan Tur, Asli Celikyilmaz, Bin Cao, and Ye-Yi Wang. 2016b. Intent detection us- ing semantically enriched word embeddings. In Pro- ceedings of SLT.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICLR (Conference Track)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR (Conference Track).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Roget's 21st Century Thesaurus", "authors": [ { "first": "Ann", "middle": [], "last": "Barbara", "suffix": "" }, { "first": "", "middle": [], "last": "Kipfer", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Ann Kipfer. 2009. Roget's 21st Century The- saurus (3rd Edition). Philip Lief Group.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Separated by an un-common language: Towards judgment language informed vector space modeling", "authors": [ { "first": "Ira", "middle": [], "last": "Leviant", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ira Leviant and Roi Reichart. 2015. Separated by an un-common language: Towards judgment lan- guage informed vector space modeling. CoRR, abs/1508.00106.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Dependencybased word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "302--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In Proceedings of ACL, pages 302-308.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Dependencybased word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "302--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014b. Dependency- based word embeddings. 
In Proceedings of ACL, pages 302-308.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Improving distributional similarity with lessons learned from word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Transactions of the ACL", "volume": "3", "issue": "", "pages": "211--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the ACL, 3:211-225.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Learning semantic word embeddings based on ordinal knowledge constraints", "authors": [ { "first": "Quan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1501--1511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceed- ings of ACL, pages 1501-1511.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "hrWaC and slWaC: Compiling web corpora for croatian and slovene", "authors": [ { "first": "Nikola", "middle": [], "last": "Ljube\u0161i\u0107", "suffix": "" }, { "first": "Toma\u017e", "middle": [], "last": "Erjavec", "suffix": "" } ], "year": 2011, "venue": "Proceedings of TSD", "volume": "", "issue": "", "pages": "395--402", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikola Ljube\u0161i\u0107 and Toma\u017e Erjavec. 2011. hrWaC and slWaC: Compiling web corpora for croatian and slovene. In Proceedings of TSD, pages 395-402.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "The role of context types and dimensionality in learning word embeddings", "authors": [ { "first": "Oren", "middle": [], "last": "Melamud", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "1030--1040", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Melamud, David McClosky, Siddharth Patward- han, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embed- dings. In Proceedings of NAACL-HLT, pages 1030- 1040.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Exploiting similarities among languages for machine translation", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. 
arXiv preprint, CoRR, abs/1309.4168.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Gregory", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed rep- resentations of words and phrases and their compo- sitionality. In Proceedings of NIPS, pages 3111- 3119.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Neural belief tracker: Data-driven dialogue state tracking", "authors": [ { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1777--1788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid\u00d3 S\u00e9aghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neu- ral belief tracker: Data-driven dialogue state track- ing. In Proceedings of ACL, pages 1777-1788.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Counter-fitting word vectors to linguistic constraints", "authors": [ { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Lina", "middle": [ "Maria" ], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid\u00d3 S\u00e9aghdha, Blaise Thom- son, Milica Ga\u0161i\u0107, Lina Maria Rojas-Barahona, Pei- Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. 
In Proceedings of NAACL- HLT.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints", "authors": [ { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Ira", "middle": [], "last": "Leviant", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2017, "venue": "Transactions of the ACL", "volume": "5", "issue": "", "pages": "309--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikola Mrk\u0161i\u0107, Ivan Vuli\u0107, Diarmuid\u00d3 S\u00e9aghdha, Ira Leviant, Roi Reichart, Milica Ga\u0161i\u0107, Anna Korho- nen, and Steve Young. 2017. Semantic specialisa- tion of distributional word vector spaces using mono- lingual and cross-lingual constraints. Transactions of the ACL, 5:309-324.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Ba-belNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2012, "venue": "Artificial Intelligence", "volume": "193", "issue": "", "pages": "217--250", "other_ids": { "DOI": [ "10.1016/j.artint.2012.07.001" ] }, "num": null, "urls": [], "raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. Ba- belNet: The automatic construction, evaluation and application of a wide-coverage multilingual seman- tic network. Artificial Intelligence, 193:217-250.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Hierarchical embeddings for hypernymy detection and directionality", "authors": [ { "first": "Maximilian", "middle": [], "last": "Kim Anh Nguyen", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "K\u00f6per", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Schulte Im Walde", "suffix": "" }, { "first": "", "middle": [], "last": "Vu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "233--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim Anh Nguyen, Maximilian K\u00f6per, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017. Hierarchical embeddings for hypernymy detection and directionality. In Proceedings of EMNLP, pages 233-243.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Integrating distributional lexical contrast into word embeddings for antonymsynonym distinction", "authors": [ { "first": "Sabine", "middle": [], "last": "Kim Anh Nguyen", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Schulte Im Walde", "suffix": "" }, { "first": "", "middle": [], "last": "Vu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "454--459", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonym- synonym distinction. 
In Proceedings of ACL, pages 454-459.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Word embedding-based antonym detection using thesauri and distributional information", "authors": [ { "first": "Masataka", "middle": [], "last": "Ono", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Miwa", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Sasaki", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "984--989", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word embedding-based antonym detection using thesauri and distributional information. In Proceedings of NAACL-HLT, pages 984-989.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Encoding prior knowledge with eigenword embeddings", "authors": [ { "first": "Dominique", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Shay", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2016, "venue": "Transactions of the ACL", "volume": "4", "issue": "", "pages": "417--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominique Osborne, Shashi Narayan, and Shay Cohen. 2016. Encoding prior knowledge with eigenword embeddings. Transactions of the ACL, 4:417-430.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "425--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification. In Proceedings of ACL, pages 425-430.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of EMNLP, pages 1532- 1543.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "AutoExtend: Extending word embeddings to embeddings for synsets and lexemes", "authors": [ { "first": "Sascha", "middle": [], "last": "Rothe", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1793--1803", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. 
AutoEx- tend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of ACL, pages 1793-1803.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "A survey of cross-lingual embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2017. A survey of cross-lingual embedding models. CoRR, abs/1706.04902.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Symmetric pattern based word embeddings for improved word similarity prediction", "authors": [ { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2015, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "258--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for im- proved word similarity prediction. In Proceedings of CoNLL, pages 258-267.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", "authors": [ { "first": "L", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" }, { "first": "H", "middle": [ "P" ], "last": "David", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Turban", "suffix": "" }, { "first": "Nils", "middle": [ "Y" ], "last": "Hamblin", "suffix": "" }, { "first": "", "middle": [], "last": "Hammerla", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ICLR (Conference Track)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel L. Smith, David H.P. Turban, Steven Ham- blin, and Nils Y. Hammerla. 2017. Offline bilin- gual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR (Con- ference Track).", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Specialising word vectors for lexical entailment", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107 and Nikola Mrk\u0161i\u0107. 2017. Specialis- ing word vectors for lexical entailment. CoRR, abs/1710.06371.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Cross-lingual induction and transfer of verb classes based on word vector space specialisation", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "2536--2548", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107, Nikola Mrk\u0161i\u0107, and Anna Korhonen. 2017a. 
Cross-lingual induction and transfer of verb classes based on word vector space specialisation. In Pro- ceedings of EMNLP, pages 2536-2548.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Morph-fitting: Fine-tuning word vector spaces with simple language-specific rules", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "O", "middle": [], "last": "Diarmuid", "suffix": "" }, { "first": "Steve", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Young", "suffix": "" }, { "first": "", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "56--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107, Nikola Mrk\u0161i\u0107, Roi Reichart, Diarmuid O S\u00e9aghdha, Steve Young, and Anna Korhonen. 2017b. Morph-fitting: Fine-tuning word vector spaces with simple language-specific rules. In Pro- ceedings of ACL, pages 56-68.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "A networkbased end-to-end trainable task-oriented dialogue system", "authors": [ { "first": "David", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Lina", "middle": [ "M" ], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Ultes", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, David Vandyke, Nikola Mrk\u0161i\u0107, Milica Ga\u0161i\u0107, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network- based end-to-end trainable task-oriented dialogue system. In Proceedings of EACL.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "From paraphrase database to compositional paraphrase model and back", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" } ], "year": 2015, "venue": "Transactions of the ACL", "volume": "3", "issue": "", "pages": "345--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compo- sitional paraphrase model and back. Transactions of the ACL, 3:345-358.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "The Dialog State Tracking Challenge series: A review", "authors": [ { "first": "Jason", "middle": [ "D" ], "last": "Williams", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Raux", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Henderson", "suffix": "" } ], "year": 2016, "venue": "Dialogue & Discourse", "volume": "7", "issue": "3", "pages": "4--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason D. 
Williams, Antoine Raux, and Matthew Hen- derson. 2016. The Dialog State Tracking Challenge series: A review. Dialogue & Discourse, 7(3):4-33.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "RC-NET: A general framework for incorporating knowledge into word representations", "authors": [ { "first": "Chang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yalong", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaoguang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of CIKM", "volume": "", "issue": "", "pages": "1219--1228", "other_ids": { "DOI": [ "10.1145/2661829.2662038" ] }, "num": null, "urls": [], "raw_text": "Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. RC- NET: A general framework for incorporating knowl- edge into word representations. In Proceedings of CIKM, pages 1219-1228.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Polarity inducing latent semantic analysis", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Wen-Tau Yih", "suffix": "" }, { "first": "John", "middle": [ "C" ], "last": "Zweig", "suffix": "" }, { "first": "", "middle": [], "last": "Platt", "suffix": "" } ], "year": 2012, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "1212--1222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen-tau Yih, Geoffrey Zweig, and John C. Platt. 2012. Polarity inducing latent semantic analysis. In EMNLP-CoNLL, pages 1212-1222.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Cognitive User Interfaces", "authors": [ { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2010, "venue": "IEEE Signal Processing Magazine", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steve Young. 2010. Cognitive User Interfaces. IEEE Signal Processing Magazine.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Improving lexical embeddings with semantic knowledge", "authors": [ { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "545--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mo Yu and Mark Dredze. 2014. Improving lexical em- beddings with semantic knowledge. In Proceedings of ACL, pages 545-550.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Word semantic representations using bayesian probabilistic tensor factorization", "authors": [ { "first": "Jingwei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Salwen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Alfio", "middle": [], "last": "Gliozzo", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1522--1531", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingwei Zhang, Jeremy Salwen, Michael Glass, and Al- fio Gliozzo. 2014. Word semantic representations using bayesian probabilistic tensor factorization. 
In Proceedings of EMNLP, pages 1522-1531.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Specialization performance on SimLex-999 (blue line) and SimVerb-3500 (red line) for ER models with different topology regularization factors \u03bb. Dashed lines indicate performance levels of the distributional (i.e., unspecialized) space.", "type_str": "figure" }, "TABREF1": { "type_str": "table", "html": null, "content": "
(a) Illustration of the explicit retrofitting approach: external knowledge (lexical constraints such as (bright, light, syn), (source, target, ant), (buy, acquire, syn)) and the distributional vector space are fed to the specialization model (a non-linear regression), which learns the specialization function f with respect to the distance function g.
", "text": "Illustration of the explicit retrofitting approach (panel a): lexical constraints and distributional word vectors (e.g., for bright, buy, target, top, and bullet) are the input from which the non-linear regression model learns the specialization function f under the distance function g.", "num": null }, "TABREF2": { "type_str": "table", "html": null, "content": "
(b) Illustration of micro-batch construction: the original constraint pair (v_source, v_target) is assigned its target distance (here 2.0) and coupled with negative pairs, e.g., (V_source, V_river): 0.29 and (V_target, V_bullet): 0.41, all passed through the specialization model (non-linear regression) with the specialization function f and the distance function g.
", "text": "Illustration of micro-batch construction (panel b): the original constraint pair with its target distance and the accompanying negative pairs, e.g., (V source, V river): 0.29 and (V target, V bullet): 0.41.", "num": null }, "TABREF4": { "type_str": "table", "html": null, "content": "
English SimLex-999 (SL) and SimVerb-3500 (SV), using explicit retrofitting models with two different
objective functions (ER-MSD and ER-CNT, cf. Section 3.3).
Constraints (ER-CNT model)    SL      SV
Synonyms only                 .465    .339
Antonyms only                 .451    .317
Synonyms + Antonyms           .582    .439
", "text": "Spearman's \u03c1 correlation scores for three standard English distributional vectors spaces on", "num": null }, "TABREF6": { "type_str": "table", "html": null, "content": "
Emb. space        GLOVE-CC      FASTTEXT      SGNS-W2
                  A      C      A      C      A      C
", "text": "Distributional 66.0 94.0 57.8 84.0 56.0 79.1 Specialized ATTRACT-REPEL 67.6 87.0 69.8 89.4 64.4 86.7 ER-CNT 73.8 93.0 71.2 93.2 68.4 92.3", "num": null } } } }