{ "paper_id": "J17-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:46:11.784686Z" }, "title": "Multilingual Metaphor Processing: Experiments with Semi-Supervised and Unsupervised Learning", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Lin", "middle": [], "last": "Sun", "suffix": "", "affiliation": {}, "email": "lin.sun@greedyint.com" }, { "first": "Elkin", "middle": [], "last": "Dar\u00edo Guti\u00e9rrez", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Patricia", "middle": [], "last": "Lichtenstein", "suffix": "", "affiliation": {}, "email": "plichtenstein@ucmerced.edu" }, { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "", "affiliation": {}, "email": "srinin@google.com" }, { "first": "Google", "middle": [], "last": "Research", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Highly frequent in language and communication, metaphor represents a significant challenge for Natural Language Processing (NLP) applications. Computational work on metaphor has traditionally evolved around the use of hand-coded knowledge, making the systems hard to scale. Recent years have witnessed a rise in statistical approaches to metaphor processing. However, these approaches often require extensive human annotation effort and are predominantly evaluated within a limited domain. In contrast, we experiment with weakly supervised and unsupervised techniques-with little or no annotation-to generalize higher-level mechanisms of metaphor from distributional properties of concepts. We investigate different levels and types of supervision (learning from linguistic examples vs. learning from a given set of metaphorical mappings vs. learning without annotation) in flat and hierarchical, unconstrained and constrained clustering settings. 
Our aim is to identify the optimal type of supervision for a learning algorithm that discovers patterns of metaphorical association from text. In order to investigate", "pdf_parse": { "paper_id": "J17-1003", "_pdf_hash": "", "abstract": [ { "text": "Highly frequent in language and communication, metaphor represents a significant challenge for Natural Language Processing (NLP) applications. Computational work on metaphor has traditionally evolved around the use of hand-coded knowledge, making the systems hard to scale. Recent years have witnessed a rise in statistical approaches to metaphor processing. However, these approaches often require extensive human annotation effort and are predominantly evaluated within a limited domain. In contrast, we experiment with weakly supervised and unsupervised techniques-with little or no annotation-to generalize higher-level mechanisms of metaphor from distributional properties of concepts. We investigate different levels and types of supervision (learning from linguistic examples vs. learning from a given set of metaphorical mappings vs. learning without annotation) in flat and hierarchical, unconstrained and constrained clustering settings. Our aim is to identify the optimal type of supervision for a learning algorithm that discovers patterns of metaphorical association from text. In order to investigate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "the scalability and adaptability of our models, we applied them to data in three languages from different language groups-English, Spanish, and Russian-achieving state-of-the-art results with little supervision. Finally, we demonstrate that statistical methods can facilitate and scale up cross-linguistic research on metaphor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Metaphor brings vividness, distinction, and clarity to our thought and communication. 
At the same time, it plays an important structural role in our cognition, helping us to organize and project knowledge (Lakoff and Johnson 1980; Feldman 2006) and guide our reasoning (Thibodeau and Boroditsky 2011) . Metaphors arise from systematic associations between distinct, and seemingly unrelated, concepts. For instance, when we talk about \"the turning wheels of a political regime,\" \"rebuilding the campaign machinery\" or \"mending foreign policy,\" we view politics and political systems in terms of mechanisms-they can function, break, be mended, have wheels, and so forth. The existence of this association allows us to transfer knowledge and inferences from the domain of mechanisms to that of political systems. As a result, we reason about political systems in terms of mechanisms and discuss them using the mechanism terminology in a variety of metaphorical expressions. The view of metaphor as a mapping between two distinct domains was echoed by numerous theories in the field (Black 1962; Hesse 1966; Lakoff and Johnson 1980; Gentner 1983) . The most influential of these was the Conceptual Metaphor Theory of Lakoff and Johnson (1980) . Lakoff and Johnson claimed that metaphor is not merely a property of language, but rather a cognitive mechanism that structures our conceptual system in a certain way. They coined the term conceptual metaphor to describe the mapping between the target concept (e.g., politics) and the source concept (e.g., mechanism), and linguistic metaphor to describe the resulting metaphorical expressions. Other examples of common metaphorical mappings include: TIME IS MONEY (e.g., \"That flat tire cost me an hour\"); IDEAS ARE PHYSICAL OBJECTS (e.g., \"I can not grasp his way of thinking\"); VIOLENCE IS FIRE (e.g., \"violence flares amid curfew\"); EMOTIONS ARE VEHICLES (e.g., \" [...] she was transported with pleasure\"); FEELINGS ARE LIQUIDS (e.g., \" [...] 
all of this stirred an unfathomable excitement in her\"); LIFE IS A JOURNEY (e.g., \"He arrived at the end of his life with very little emotional baggage\").", "cite_spans": [ { "start": 205, "end": 230, "text": "(Lakoff and Johnson 1980;", "ref_id": "BIBREF54" }, { "start": 231, "end": 244, "text": "Feldman 2006)", "ref_id": "BIBREF30" }, { "start": 269, "end": 300, "text": "(Thibodeau and Boroditsky 2011)", "ref_id": "BIBREF96" }, { "start": 1079, "end": 1091, "text": "(Black 1962;", "ref_id": "BIBREF10" }, { "start": 1092, "end": 1103, "text": "Hesse 1966;", "ref_id": "BIBREF42" }, { "start": 1104, "end": 1128, "text": "Lakoff and Johnson 1980;", "ref_id": "BIBREF54" }, { "start": 1129, "end": 1142, "text": "Gentner 1983)", "ref_id": "BIBREF37" }, { "start": 1213, "end": 1238, "text": "Lakoff and Johnson (1980)", "ref_id": "BIBREF54" }, { "start": 1241, "end": 1251, "text": "Lakoff and", "ref_id": null }, { "start": 1909, "end": 1914, "text": "[...]", "ref_id": null }, { "start": 1982, "end": 1987, "text": "[...]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Manifestations of metaphor are pervasive in language and reasoning, making its computational processing an imperative task within Natural Language Processing (NLP). Explaining up to 20% of all word meanings according to corpus studies (Shutova and Teufel 2010; Steen et al. 2010) , metaphor is currently a bottleneck, particularly in semantic tasks. An accurate and scalable metaphor processing system would become an important component of many practical NLP applications. These include, for instance, machine translation (MT): A large number of metaphorical expressions are culture-specific and therefore represent a considerable challenge in translation (Sch\u00e4ffner 2004; Zhou, Yang, and Huang 2007) . Shutova, Teufel, and Korhonen (2013) conducted a study of metaphor translation in MT. 
Using Google Translate, 1 a state-of-the-art MT system, they found that as many as 44% of metaphorical expressions in their data set were translated incorrectly, resulting in semantically infelicitous sentences. A metaphor processing component could help to avoid such errors. Other applications of metaphor processing include, for instance, opinion mining: metaphorical expressions tend to contain a strong emotional component (e.g., compare the metaphor \"Government loosened its stranglehold on business\" and its literal counterpart \"Government deregulated business\" [Narayanan 1999]) ; or information retrieval: non-literal language without appropriate disambiguation may lead to false positives in information retrieval (e.g., documents describing \"old school gentlemen\" should not be returned for the query \"school\" [Korkontzelos et al. 2013] ); and many others.", "cite_spans": [ { "start": 235, "end": 260, "text": "(Shutova and Teufel 2010;", "ref_id": null }, { "start": 261, "end": 279, "text": "Steen et al. 2010)", "ref_id": "BIBREF92" }, { "start": 656, "end": 672, "text": "(Sch\u00e4ffner 2004;", "ref_id": "BIBREF81" }, { "start": 673, "end": 700, "text": "Zhou, Yang, and Huang 2007)", "ref_id": "BIBREF111" }, { "start": 703, "end": 739, "text": "Shutova, Teufel, and Korhonen (2013)", "ref_id": "BIBREF88" }, { "start": 1358, "end": 1375, "text": "[Narayanan 1999])", "ref_id": "BIBREF71" }, { "start": 1610, "end": 1636, "text": "[Korkontzelos et al. 2013]", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Because the metaphors we use are also known to be indicative of our underlying viewpoints, metaphor processing is likely to be fruitful in determining political affiliation from text or pinning down cross-cultural and cross-population differences, and thus become a useful tool in data mining. 
In social science, metaphor is extensively studied as a way to frame cultural and moral models, and to predict social choice (Landau, Sullivan, and Greenberg 2009; Thibodeau and Boroditsky 2011; Lakoff and Wehling 2012) . Metaphor is also widely viewed as a creative tool. Its knowledge projection mechanisms help us to grasp new concepts and generate innovative ideas. This opens many avenues for the creation of computational tools that foster creativity (Veale 2011, 2014) and support assessment in education (Burstein et al. 2013) .", "cite_spans": [ { "start": 419, "end": 457, "text": "(Landau, Sullivan, and Greenberg 2009;", "ref_id": "BIBREF56" }, { "start": 458, "end": 488, "text": "Thibodeau and Boroditsky 2011;", "ref_id": "BIBREF96" }, { "start": 489, "end": 513, "text": "Lakoff and Wehling 2012)", "ref_id": "BIBREF55" }, { "start": 751, "end": 762, "text": "(Veale 2011", "ref_id": "BIBREF99" }, { "start": 763, "end": 776, "text": "(Veale , 2014", "ref_id": "BIBREF100" }, { "start": 813, "end": 835, "text": "(Burstein et al. 2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For many years, computational work on metaphor evolved around the use of hand-coded knowledge and rules to model metaphorical associations, making the systems hard to scale. Recent years have seen a growing interest in statistical modeling of metaphor (Mason 2004; Gedigian et al. 2006; Shutova 2010; Shutova, Sun, and Korhonen 2010; Turney et al. 2011; Heintz et al. 2013; Hovy et al. 2013; Li, Zhu, and Wang 2013; Mohler et al. 2013; Shutova and Sun 2013; Strzalkowski et al. 2013; Tsvetkov, Mukomel, and Gershman 2013; Beigman Klebanov et al. 2014; Mohler et al. 2014) , with many new techniques opening routes for improving system accuracy and robustness. A wide range of methods have been proposed and investigated by the community, including supervised classification (Gedigian et al. 2006; Dunn 2013a; Hovy et al. 
2013; Mohler et al. 2013; Tsvetkov, Mukomel, and Gershman 2013) , unsupervised learning (Heintz et al. 2013; Shutova and Sun 2013) , distributional approaches (Shutova 2010; Shutova, Van de Cruys, and Korhonen 2012; Shutova 2013; Mohler et al. 2014) , lexical resource-based methods (Krishnakumaran and Zhu 2007; Wilks et al. 2013) , psycholinguistic features (Turney et al. 2011; Gandy et al. 2013; Neuman et al. 2013; Strzalkowski et al. 2013) , and Web search using lexico-syntactic patterns (Veale and Hao 2008; Bollegala and Shutova 2013; Li, Zhu, and Wang 2013) . However, even the statistical methods have been predominantly applied in limited-domain, small-scale experiments. This is mainly due to the lack of general-domain corpora annotated for metaphor that are sufficiently large for training wide-coverage supervised systems. In addition, supervised methods tend to rely on lexical resources and ontologies for feature extraction, which limits the robustness of the features themselves and makes the methods dependent on the coverage (and the availability) of these resources. This also makes these methods difficult to port to new languages, for which such lexical resources or corpora may not exist. In contrast, we experiment with minimally supervised and unsupervised learning methods that require little or no annotation; and use robust, dynamically mined lexico-syntactic features that are well suited for metaphor processing. This makes our methods scalable to new data and portable across languages, domains, and tasks, bringing metaphor processing technology a step closer to a possibility of integration with real-world NLP.", "cite_spans": [ { "start": 252, "end": 264, "text": "(Mason 2004;", "ref_id": "BIBREF64" }, { "start": 265, "end": 286, "text": "Gedigian et al. 
2006;", "ref_id": "BIBREF36" }, { "start": 287, "end": 300, "text": "Shutova 2010;", "ref_id": "BIBREF85" }, { "start": 301, "end": 333, "text": "Shutova, Sun, and Korhonen 2010;", "ref_id": "BIBREF87" }, { "start": 334, "end": 353, "text": "Turney et al. 2011;", "ref_id": "BIBREF98" }, { "start": 354, "end": 373, "text": "Heintz et al. 2013;", "ref_id": "BIBREF41" }, { "start": 374, "end": 391, "text": "Hovy et al. 2013;", "ref_id": "BIBREF44" }, { "start": 392, "end": 415, "text": "Li, Zhu, and Wang 2013;", "ref_id": "BIBREF57" }, { "start": 416, "end": 435, "text": "Mohler et al. 2013;", "ref_id": "BIBREF67" }, { "start": 436, "end": 457, "text": "Shutova and Sun 2013;", "ref_id": "BIBREF87" }, { "start": 458, "end": 483, "text": "Strzalkowski et al. 2013;", "ref_id": "BIBREF94" }, { "start": 484, "end": 521, "text": "Tsvetkov, Mukomel, and Gershman 2013;", "ref_id": "BIBREF97" }, { "start": 522, "end": 551, "text": "Beigman Klebanov et al. 2014;", "ref_id": "BIBREF6" }, { "start": 552, "end": 571, "text": "Mohler et al. 2014)", "ref_id": "BIBREF68" }, { "start": 774, "end": 796, "text": "(Gedigian et al. 2006;", "ref_id": "BIBREF36" }, { "start": 797, "end": 808, "text": "Dunn 2013a;", "ref_id": "BIBREF27" }, { "start": 809, "end": 826, "text": "Hovy et al. 2013;", "ref_id": "BIBREF44" }, { "start": 827, "end": 846, "text": "Mohler et al. 2013;", "ref_id": "BIBREF67" }, { "start": 847, "end": 884, "text": "Tsvetkov, Mukomel, and Gershman 2013)", "ref_id": "BIBREF97" }, { "start": 909, "end": 929, "text": "(Heintz et al. 2013;", "ref_id": "BIBREF41" }, { "start": 930, "end": 951, "text": "Shutova and Sun 2013)", "ref_id": "BIBREF87" }, { "start": 980, "end": 994, "text": "(Shutova 2010;", "ref_id": "BIBREF85" }, { "start": 995, "end": 1036, "text": "Shutova, Van de Cruys, and Korhonen 2012;", "ref_id": "BIBREF89" }, { "start": 1037, "end": 1050, "text": "Shutova 2013;", "ref_id": "BIBREF85" }, { "start": 1051, "end": 1070, "text": "Mohler et al. 
2014)", "ref_id": "BIBREF68" }, { "start": 1104, "end": 1133, "text": "(Krishnakumaran and Zhu 2007;", "ref_id": "BIBREF52" }, { "start": 1134, "end": 1152, "text": "Wilks et al. 2013)", "ref_id": "BIBREF107" }, { "start": 1181, "end": 1201, "text": "(Turney et al. 2011;", "ref_id": "BIBREF98" }, { "start": 1202, "end": 1220, "text": "Gandy et al. 2013;", "ref_id": "BIBREF35" }, { "start": 1221, "end": 1240, "text": "Neuman et al. 2013;", "ref_id": null }, { "start": 1241, "end": 1266, "text": "Strzalkowski et al. 2013)", "ref_id": "BIBREF94" }, { "start": 1316, "end": 1336, "text": "(Veale and Hao 2008;", "ref_id": "BIBREF101" }, { "start": 1337, "end": 1364, "text": "Bollegala and Shutova 2013;", "ref_id": "BIBREF13" }, { "start": 1365, "end": 1388, "text": "Li, Zhu, and Wang 2013)", "ref_id": "BIBREF57" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Our methods use distributional clustering techniques to investigate how metaphorical cross-domain mappings partition the semantic space in three different languages-English, Russian, and Spanish. In a distributional semantic space, each word is represented as a vector of contexts in which it occurs in a text corpus. 2 Because of the high frequency and systematicity with which metaphor is used in language, it is naturally and systematically reflected in the distributional space. As a result of metaphorical cross-domain mappings, the words' context vectors tend to be non-homogeneous in structure and to contain vocabulary from different domains. For instance, the context vector for the noun idea would contain a set of literally used terms (e.g., understand [an idea]) and a set of metaphorically used terms, describing ideas as PHYSICAL OBJECTS (e.g., grasp [an idea], throw [an idea]), LIQUIDS (e.g., [ideas] flow), or FOOD (e.g., digest [an idea]), and so on. 
Similarly, the context vector for politics would contain MECHANISM terms (e.g., operate or refuel [politics]), GAME terms (e.g., play or dominate [politics] ), SPACE terms (e.g., enter or leave [politics]), as well as the literally used terms (e.g., explain or understand [politics]), as shown in Figure 1 . This demonstrates how metaphorical usages, abundant in the data, structure the distributional space. As a result, the context vectors of different concepts contain a certain degree of cross-domain overlap, thus implicitly encoding cross-domain mappings. Figure 1 shows such a term overlap in the direct object vectors for the concepts of GAME and POLITICS. We exploit such composition of the context vectors to induce information about metaphorical mappings directly from the words' distributional behavior in an unsupervised or a minimally supervised way. We then use this information to identify metaphorical language. Clustering methods model modularity in the structure of the semantic space, and thus naturally provide a suitable framework to capture metaphorical information. To our knowledge, the metaphorical cross-domain structure of the distributional space has not yet been explicitly exploited in wider NLP. Instead, most NLP approaches tend to treat all types of distributional features as identical, thus possibly losing important conceptual information that is naturally encoded in the distributional semantic space.", "cite_spans": [ { "start": 318, "end": 319, "text": "2", "ref_id": null }, { "start": 909, "end": 916, "text": "[ideas]", "ref_id": null }, { "start": 1115, "end": 1125, "text": "[politics]", "ref_id": null } ], "ref_spans": [ { "start": 1266, "end": 1274, "text": "Figure 1", "ref_id": null }, { "start": 1531, "end": 1539, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "The focus of our experiments is on the identification of metaphorical expressions in verb-subject and verb-object constructions, where the verb is used metaphorically. In the first set of experiments, we apply a flat clustering algorithm, spectral clustering (Ng et al. 2002) , to learn metaphorical associations from text. The system clusters verbs and nouns to create representations of source and target domains. The verb clustering is used to harvest source domain vocabulary and the noun clustering is used to identify groups of target concepts associated with the same source. For instance, the nouns democracy and marriage are clustered together (in the target noun cluster), because both are metaphorically associated with (for example) mechanisms or games and, as such, appear with mechanism and game terms in the corpus (the source verb cluster). The obtained clusters represent source and target concepts between which metaphorical associations hold. We first experiment with the unconstrained version of spectral clustering using the method of Shutova, Sun, and Korhonen (2010) , where metaphorical patterns are derived from the distributional information alone and the clustering process is fully unsupervised. We then extend this method to perform constrained clustering, where a small number of example metaphorical mappings are used to guide the learning process, with the expectation of changing the cluster structure towards capturing metaphorically associated concepts. We then analyze and compare the structure of the clusters obtained with or without the use of constraints. The learning of metaphorical associations is then boosted from a small set of example metaphorical expressions that are used to connect the verb and noun clusters. Finally, the acquired set of associations is used to identify new, unseen metaphorical expressions in a large corpus.", "cite_spans": [ { "start": 259, "end": 275, "text": "(Ng et al. 
2002)", "ref_id": "BIBREF72" }, { "start": 1056, "end": 1089, "text": "Shutova, Sun, and Korhonen (2010)", "ref_id": "BIBREF87" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Although we believe that these methods would capture a substantial amount of information about metaphorical associations from distributional properties of concepts, they are still dependent on the seed expressions to identify new metaphorical language. In our second set of experiments, we investigate to what extent it is possible to acquire information about metaphor from distributional properties of concepts alone, without any need for labeled examples. For this purpose, we apply the hierarchical clustering method of Shutova and Sun (2013) to identify both metaphorical associations and metaphorical expressions in a fully unsupervised way. We use hierarchical graph factorization clustering (Yu, Yu, and Tresp 2006) of nouns to create a network (or a graph) of concepts and to quantify the strength of association between concepts in this graph. The metaphorical mappings are then identified based on the association patterns between concepts in the graph. The mappings are represented as cross-level, one-directional connections between clusters in the graph. The system then uses salient features of the metaphorically connected clusters to identify metaphorical expressions in text. Given a source domain, the method outputs a set of target concepts associated with this source, as well as the corresponding metaphorical expressions.", "cite_spans": [ { "start": 524, "end": 546, "text": "Shutova and Sun (2013)", "ref_id": "BIBREF87" }, { "start": 699, "end": 723, "text": "(Yu, Yu, and Tresp 2006)", "ref_id": "BIBREF108" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We then compare the ability of these methods (that require different kinds and levels of supervision) to identify metaphor. 
In order to investigate the scalability and adaptability of the methods, we applied them to unrestricted, general-domain text in three typologically different languages-English, Spanish, and Russian. We evaluated the performance of the systems with the aid of human judges in precision- and recall-oriented settings, achieving state-of-the-art results with little supervision. Finally, we analyze the differences in the use of metaphor across languages, as discovered by the systems, and demonstrate that statistical methods can facilitate and scale up cross-linguistic research on metaphor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Metaphor annotation studies have typically been corpus-based and involved either continuous annotation of metaphorical language (i.e., distinguishing between literal and metaphorical uses of words in a given text), or search for instances of a specific metaphor in a corpus and an analysis thereof. The majority of corpus-linguistic studies were concerned with metaphorical expressions and mappings within a limited domain, for example, WAR, BUSINESS, FOOD, or PLANT metaphors (Santa Ana 1999; Izwaini 2003; Koller 2004; Skorczynska Sznajder and Pique-Angordans 2004; Hardie et al. 2007; Lu and Ahrens 2008; Low et al. 2010) , or in a particular genre or type of discourse, such as financial (Charteris-Black and Ennis 2001; Martin 2006) , political (Lu and Ahrens 2008) , or educational (Cameron 2003; Beigman Klebanov and Flor 2013) discourse.", "cite_spans": [ { "start": 477, "end": 493, "text": "(Santa Ana 1999;", "ref_id": "BIBREF80" }, { "start": 494, "end": 507, "text": "Izwaini 2003;", "ref_id": "BIBREF46" }, { "start": 508, "end": 520, "text": "Koller 2004;", "ref_id": "BIBREF49" }, { "start": 521, "end": 567, "text": "Skorczynska Sznajder and Pique-Angordans 2004;", "ref_id": "BIBREF91" }, { "start": 568, "end": 587, "text": "Hardie et al. 
2007;", "ref_id": "BIBREF39" }, { "start": 588, "end": 607, "text": "Lu and Ahrens 2008;", "ref_id": "BIBREF61" }, { "start": 608, "end": 624, "text": "Low et al. 2010)", "ref_id": "BIBREF60" }, { "start": 704, "end": 725, "text": "Black and Ennis 2001;", "ref_id": "BIBREF20" }, { "start": 726, "end": 738, "text": "Martin 2006)", "ref_id": "BIBREF63" }, { "start": 751, "end": 771, "text": "(Lu and Ahrens 2008)", "ref_id": "BIBREF61" }, { "start": 789, "end": 803, "text": "(Cameron 2003;", "ref_id": "BIBREF19" }, { "start": 804, "end": 835, "text": "Beigman Klebanov and Flor 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Annotation Studies", "sec_num": "2.1" }, { "text": "Two studies (Steen et al. 2010; Shutova and Teufel 2010) moved away from investigating particular domains to a more general study of how metaphor behaves in unrestricted continuous text. Steen and colleagues (Pragglejaz Group 2007; Steen et al. 2010) proposed a metaphor identification procedure (MIP), in which every word is tagged as literal or metaphorical, based on whether it has a \"more basic meaning\" in other contexts than the current one. The basic meaning was defined as \"more concrete; related to bodily action; more precise (as opposed to vague); historically older\" and its identification was guided by dictionary definitions. The resulting VU Amsterdam Metaphor Corpus 3 is a 200,000-word subset of the British National Corpus (BNC) (Burnard 2007) annotated for linguistic metaphor. The corpus has already found application in computational metaphor processing research (Dunn 2013b; Niculae and Yaneva 2013; Beigman Klebanov et al. 2014) , as well as inspiring metaphor annotation efforts in other languages (Badryzlova et al. 2013) . Shutova and Teufel (2010) extended MIP to the identification of conceptual metaphors along with the linguistic ones. 
Following MIP, the annotators were asked to identify the more basic sense of the word, and then label the context in which the word occurs in the basic sense as the source domain, and the current context as the target. Shutova and Teufel's corpus is a 13,000-word subset of the BNC sampling a range of genres, and it has served as a testbed in a number of computational experiments (Shutova 2010; Shutova, Sun, and Korhonen 2010; Bollegala and Shutova 2013) . L\u00f6nneker (2004) investigated metaphor annotation in lexical resources. The resulting Hamburg Metaphor Database contains examples of metaphorical expressions in German and French, which are mapped to senses from EuroWordNet 4 and annotated with source-target domain mappings.", "cite_spans": [ { "start": 12, "end": 31, "text": "(Steen et al. 2010;", "ref_id": "BIBREF92" }, { "start": 32, "end": 56, "text": "Shutova and Teufel 2010)", "ref_id": null }, { "start": 187, "end": 231, "text": "Steen and colleagues (Pragglejaz Group 2007;", "ref_id": null }, { "start": 232, "end": 250, "text": "Steen et al. 2010)", "ref_id": "BIBREF92" }, { "start": 747, "end": 761, "text": "(Burnard 2007)", "ref_id": "BIBREF17" }, { "start": 884, "end": 896, "text": "(Dunn 2013b;", "ref_id": "BIBREF28" }, { "start": 897, "end": 921, "text": "Niculae and Yaneva 2013;", "ref_id": "BIBREF73" }, { "start": 922, "end": 951, "text": "Beigman Klebanov et al. 2014)", "ref_id": "BIBREF6" }, { "start": 1022, "end": 1046, "text": "(Badryzlova et al. 
2013)", "ref_id": "BIBREF1" }, { "start": 1049, "end": 1074, "text": "Shutova and Teufel (2010)", "ref_id": null }, { "start": 1547, "end": 1561, "text": "(Shutova 2010;", "ref_id": "BIBREF85" }, { "start": 1562, "end": 1594, "text": "Shutova, Sun, and Korhonen 2010;", "ref_id": "BIBREF87" }, { "start": 1595, "end": 1622, "text": "Bollegala and Shutova 2013)", "ref_id": "BIBREF13" }, { "start": 1625, "end": 1640, "text": "L\u00f6nneker (2004)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Annotation Studies", "sec_num": "2.1" }, { "text": "Early computational work on metaphor tended to be theory-driven and utilized hand-coded descriptions of concepts and domains to identify and interpret metaphor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Approaches to Metaphor Identification", "sec_num": "2.2" }, { "text": "The system of Fass (1991) , for instance, was an implementation of the selectional preference violation view of metaphor (Wilks 1978) and detected metaphor and metonymy as a violation of a common preference of a predicate by a given argument. Another branch of approaches (Martin 1990; Narayanan 1997; Barnden and Lee 2002) implemented some aspects of the conceptual metaphor theory (Lakoff and Johnson 1980) , reasoning over hand-crafted representations of source and target domains. The system of Martin (1990) explained linguistic metaphors through finding the corresponding metaphorical mapping. The systems of Narayanan (1997) and Barnden and Lee (2002) performed inferences about entities and events in the source and target domains in order to interpret a given metaphor. The reasoning processes relied on manually coded knowledge about the world and operated mainly in the source domain. 
The results were then projected onto the target domain using the conceptual mapping representation.", "cite_spans": [ { "start": 14, "end": 25, "text": "Fass (1991)", "ref_id": "BIBREF29" }, { "start": 121, "end": 133, "text": "(Wilks 1978)", "ref_id": "BIBREF106" }, { "start": 272, "end": 285, "text": "(Martin 1990;", "ref_id": "BIBREF62" }, { "start": 286, "end": 301, "text": "Narayanan 1997;", "ref_id": "BIBREF70" }, { "start": 302, "end": 323, "text": "Barnden and Lee 2002)", "ref_id": "BIBREF4" }, { "start": 383, "end": 408, "text": "(Lakoff and Johnson 1980)", "ref_id": "BIBREF54" }, { "start": 499, "end": 512, "text": "Martin (1990)", "ref_id": "BIBREF62" }, { "start": 615, "end": 631, "text": "Narayanan (1997)", "ref_id": "BIBREF70" }, { "start": 636, "end": 658, "text": "Barnden and Lee (2002)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Computational Approaches to Metaphor Identification", "sec_num": "2.2" }, { "text": "The reliance on task- and domain-specific hand-coded knowledge makes these systems difficult to scale to real-world text. Later research thus turned to general-domain lexical resources and ontologies, as well as statistical methods, in order to design more scalable solutions. Mason (2004) introduced the use of statistical techniques for metaphor processing; however, his approach had a considerable reliance on WordNet (Fellbaum 1998). His CorMet system discovered source-target domain mappings automatically, by searching for systematic variations in domain-specific verb preferences. For example, pour is a characteristic verb in both LAB and FINANCE domains. In the LAB domain it has a strong preference for liquids and in the FINANCE domain for money. From this information, Mason's system inferred the domain mapping FINANCE-LAB and the concept mapping money-liquid. The system of Krishnakumaran and Zhu (2007) used hyponymy relations in WordNet and word bigram counts to predict verbal, nominal, and adjectival metaphors. 
For instance, given an IS-A construction (e.g., \"The world is a stage\"), the system verified whether the two nouns were in a hyponymy relation in WordNet; if this was not the case, the expression was tagged as metaphorical. Given a verb-noun or an adjective-noun pair (such as \"planting ideas\" or \"fertile imagination\"), the system computed the bigram probability of this pair (including the hyponyms/hypernyms of the noun); if the combination was not observed in the data with sufficient frequency, it was tagged as metaphorical.", "cite_spans": [ { "start": 275, "end": 287, "text": "Mason (2004)", "ref_id": "BIBREF64" }, { "start": 887, "end": 916, "text": "Krishnakumaran and Zhu (2007)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Computational Approaches to Metaphor Identification", "sec_num": "2.2" }, { "text": "These systems have demonstrated that statistical methods, when combined with broad-coverage lexical resources, can be successfully used to model at least some aspects of metaphor, increasing system coverage. As statistical NLP, lexical semantics, and lexical acquisition techniques developed over the years, it has become possible to build larger-scale statistical metaphor processing systems that promise a step forward in both accuracy and robustness. Numerous approaches (Li and Sporleder 2010; Shutova 2010; Turney et al. 2011; Hovy et al. 2013; Shutova and Sun 2013; Shutova, Teufel, and Korhonen 2013; Tsvetkov, Mukomel, and Gershman 2013) used machine learning and statistical techniques to address a wider range of metaphorical language in general-domain text. For instance, the method of Turney et al. (2011) classified verbs and adjectives as literal or metaphorical based on their level of concreteness or abstractness in relation to the noun they appear with.
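The WordNet-based IS-A test of Krishnakumaran and Zhu (2007), described earlier in this section, can be sketched as follows. This is a minimal illustration, not the original system: a tiny hand-coded hypernym map stands in for WordNet's noun hierarchy, and the function name is invented.

```python
# Sketch of the IS-A metaphoricity test of Krishnakumaran and Zhu (2007).
# HYPERNYMS is an invented stand-in for WordNet's hyponymy relation;
# the real system walked the WordNet noun hierarchy instead.
HYPERNYMS = {
    "stage": {"platform", "structure", "artifact"},
    "lawyer": {"professional", "person"},
    "shark": {"fish", "animal"},
}

def is_a_metaphorical(subject: str, complement: str) -> bool:
    """Tag 'subject is a complement' as metaphorical unless the two nouns
    stand in a hyponymy relation (in either direction)."""
    related = (complement in HYPERNYMS.get(subject, set())
               or subject in HYPERNYMS.get(complement, set()))
    return not related

print(is_a_metaphorical("world", "stage"))  # "The world is a stage" -> True
print(is_a_metaphorical("shark", "fish"))   # literal IS-A -> False
```

In the actual system the lookup would traverse all hypernym paths of both nouns rather than a one-step map.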
They learned concreteness rankings for words automatically (starting from a set of examples) and then searched for expressions where a concrete adjective or verb was used with an abstract noun (e.g., \"dark humor\" was tagged as a metaphor and \"dark hair\" was not). The method of Turney et al. (2011) has served as a foundation for the later approaches of Neuman et al. (2013) and Gandy et al. (2013), who extended it through the use of selectional preferences and the identification of source domains, respectively.", "cite_spans": [ { "start": 478, "end": 501, "text": "(Li and Sporleder 2010;", "ref_id": "BIBREF58" }, { "start": 502, "end": 515, "text": "Shutova 2010;", "ref_id": "BIBREF85" }, { "start": 516, "end": 535, "text": "Turney et al. 2011;", "ref_id": "BIBREF98" }, { "start": 536, "end": 553, "text": "Hovy et al. 2013;", "ref_id": "BIBREF44" }, { "start": 554, "end": 575, "text": "Shutova and Sun 2013;", "ref_id": "BIBREF87" }, { "start": 576, "end": 611, "text": "Shutova, Teufel, and Korhonen 2013;", "ref_id": "BIBREF88" }, { "start": 612, "end": 649, "text": "Tsvetkov, Mukomel, and Gershman 2013)", "ref_id": "BIBREF97" }, { "start": 801, "end": 821, "text": "Turney et al. (2011)", "ref_id": "BIBREF98" }, { "start": 1252, "end": 1272, "text": "Turney et al. (2011)", "ref_id": "BIBREF98" }, { "start": 1328, "end": 1348, "text": "Neuman et al. (2013)", "ref_id": null }, { "start": 1353, "end": 1372, "text": "Gandy et al. (2013)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Computational Approaches to Metaphor Identification", "sec_num": "2.2" }, { "text": "Another branch of research focused on applying statistical learning to the problem of metaphor identification (Gedigian et al. 2006; Shutova, Sun, and Korhonen 2010; Dunn 2013a; Heintz et al. 2013; Hovy et al. 2013; Mohler et al. 2013; Shutova and Sun 2013; Tsvetkov, Mukomel, and Gershman 2013; Beigman Klebanov et al. 2014) .
The learning techniques they have investigated include supervised classification, clustering, and Latent Dirichlet Allocation (LDA) topic modeling. We review these methods in more detail subsequently.", "cite_spans": [ { "start": 110, "end": 132, "text": "(Gedigian et al. 2006;", "ref_id": "BIBREF36" }, { "start": 133, "end": 165, "text": "Shutova, Sun, and Korhonen 2010;", "ref_id": "BIBREF87" }, { "start": 166, "end": 177, "text": "Dunn 2013a;", "ref_id": "BIBREF27" }, { "start": 178, "end": 197, "text": "Heintz et al. 2013;", "ref_id": "BIBREF41" }, { "start": 198, "end": 215, "text": "Hovy et al. 2013;", "ref_id": "BIBREF44" }, { "start": 216, "end": 235, "text": "Mohler et al. 2013;", "ref_id": "BIBREF67" }, { "start": 236, "end": 257, "text": "Shutova and Sun 2013;", "ref_id": "BIBREF87" }, { "start": 258, "end": 295, "text": "Tsvetkov, Mukomel, and Gershman 2013;", "ref_id": "BIBREF97" }, { "start": 296, "end": 325, "text": "Beigman Klebanov et al. 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Computational Approaches to Metaphor Identification", "sec_num": "2.2" }, { "text": "A number of approaches trained classifiers on manually annotated data to recognize metaphor (Gedigian et al. 2006; Dunn 2013a; Hovy et al. 2013; Mohler et al. 2013; Tsvetkov, Mukomel, and Gershman 2013; Beigman Klebanov et al. 2014) . The method of Gedigian et al. (2006) , for instance, discriminated between literal and metaphorical uses of the verbs of MOTION and CURE using a maximum entropy classifier. The authors obtained their data by extracting the lexical items whose frames are related to MOTION and CURE from FrameNet (Fillmore, Johnson, and Petruck 2003) . To construct their training and test sets, they searched the PropBank Wall Street Journal corpus (Kingsbury and Palmer 2002) for sentences containing such lexical items and manually annotated them for metaphoricity. 
They used PropBank annotation (arguments and their semantic types) as features to train the classifier and reported an accuracy of 95.12%. This result was, however, only a little higher than the performance of the naive baseline that assigns the majority class to all instances (92.90%).", "cite_spans": [ { "start": 92, "end": 114, "text": "(Gedigian et al. 2006;", "ref_id": "BIBREF36" }, { "start": 115, "end": 126, "text": "Dunn 2013a;", "ref_id": "BIBREF27" }, { "start": 127, "end": 144, "text": "Hovy et al. 2013;", "ref_id": "BIBREF44" }, { "start": 145, "end": 164, "text": "Mohler et al. 2013;", "ref_id": "BIBREF67" }, { "start": 165, "end": 202, "text": "Tsvetkov, Mukomel, and Gershman 2013;", "ref_id": "BIBREF97" }, { "start": 203, "end": 232, "text": "Beigman Klebanov et al. 2014)", "ref_id": "BIBREF6" }, { "start": 249, "end": 271, "text": "Gedigian et al. (2006)", "ref_id": "BIBREF36" }, { "start": 530, "end": 567, "text": "(Fillmore, Johnson, and Petruck 2003)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification as Supervised Classification.", "sec_num": "2.2.1" }, { "text": "Dunn (2013a, 2013b) presented an ontology-based domain interaction approach that identified metaphorical expressions at the utterance level. Dunn's system first mapped the lexical items in the given utterance to concepts from the SUMO ontology (Niles and Pease 2001, 2003), assuming that each lexical item was used in its default sense; that is, no sense disambiguation was performed. The system then extracted the properties of concepts from the ontology, such as their domain type (ABSTRACT, PHYSICAL, SOCIAL, MENTAL) and event status (PROCESS, STATE, OBJECT). Those properties were then combined into feature-vector representations of the utterances. Dunn trained a logistic regression classifier using these features to perform metaphor identification, reporting an F-score of 0.58 on general-domain data.
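As an illustration of this family of supervised approaches, the sketch below trains a plain logistic regression classifier on coarse domain-type count features in the spirit of Dunn's feature design. Everything here is invented toy data: a real system would derive the ABSTRACT/PHYSICAL/SOCIAL/MENTAL counts from an ontology such as SUMO, and the helper names are hypothetical.

```python
# Toy sketch of supervised metaphor classification over coarse "domain type"
# features. All feature vectors and labels below are invented.
import math

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Plain logistic regression trained with stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(metaphorical)
            g = p - yi                       # gradient of the log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Each utterance: counts of [ABSTRACT, PHYSICAL, SOCIAL, MENTAL] content words;
# label 1 = metaphorical (abstract/mental mixed with physical), 0 = literal.
X = [[2, 1, 0, 0], [0, 2, 0, 0], [1, 1, 1, 0], [0, 3, 0, 0],
     [2, 2, 0, 0], [0, 1, 0, 1], [3, 1, 0, 0], [0, 2, 1, 0]]
y = [1, 0, 1, 0, 1, 1, 1, 0]
w, b = train_logreg(X, y)
print([predict(w, b, x) for x in X])
```

The toy data are linearly separable, so the classifier fits the training labels; in practice such features would be evaluated on held-out data, as in the studies above.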
Tsvetkov, Mukomel, and Gershman (2013) experimented with metaphor identification in English and Russian, first training a classifier on English data only, and then projecting the trained model to Russian using a dictionary. They abstracted from the words in the English data to their higher-level features, such as concreteness, animateness, named-entity labels, and coarse-grained WordNet categories (corresponding to WN lexicographer files [e.g., noun.artifact, noun.body, verb.motion, verb.cognition]). The authors used a logistic regression classifier and a combination of the listed features to annotate metaphor at the sentence level. The model was trained on the TroFi data set (Birke and Sarkar 2006) of 1,298 sentences containing literal and metaphorical uses of 25 verbs. Tsvetkov and colleagues evaluated their method on self-constructed data sets of 98 sentences for English and 140 sentences for Russian, attaining F-scores of 0.78 and 0.76, respectively. The results are encouraging and show that porting coarse-grained semantic knowledge across languages is feasible. However, it should be noted that the generalization to coarse semantic features is likely to focus on shallow properties of metaphorical language and to bypass conceptual information.
Corpus-linguistic research (Charteris-Black and Ennis 2001; Kovecses 2005; Diaz-Vera and Caballero 2013) suggests that there is considerable variation in metaphorical language across cultures, which makes training on only one language and translating the model problematic for modeling the conceptual structure behind metaphor.", "cite_spans": [ { "start": 240, "end": 250, "text": "(Niles and", "ref_id": "BIBREF74" }, { "start": 251, "end": 268, "text": "Pease 2001, 2003)", "ref_id": "BIBREF74" }, { "start": 806, "end": 844, "text": "Tsvetkov, Mukomel, and Gershman (2013)", "ref_id": "BIBREF97" }, { "start": 1245, "end": 1308, "text": "5 [e.g., noun.artifact, noun.body, verb.motion, verb.cognition]", "ref_id": null }, { "start": 1491, "end": 1514, "text": "(Birke and Sarkar 2006)", "ref_id": "BIBREF9" }, { "start": 2111, "end": 2132, "text": "Black and Ennis 2001;", "ref_id": "BIBREF20" }, { "start": 2133, "end": 2147, "text": "Kovecses 2005;", "ref_id": "BIBREF51" }, { "start": 2148, "end": 2177, "text": "Diaz-Vera and Caballero 2013)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification as Supervised Classification.", "sec_num": "2.2.1" }, { "text": "The approach of Mohler et al. (2013) relied on the concept of a semantic signature of a text, defined as a set of highly related and interlinked WordNet senses. They induced domain-sensitive semantic signatures of texts and then trained a set of classifiers to detect metaphoricity within a text by comparing its semantic signature to a set of known metaphors. The intuition behind this approach was that texts whose semantic signature closely matched the signature of a known metaphor would be likely to contain an instance of the same conceptual metaphor. Mohler and colleagues conducted their experiments within a limited domain (the target domain of governance) and manually constructed an index of known metaphors for this domain.
They then automatically created the target domain signature and a signature for each source domain among the known metaphors in the index. This was done by means of semantic expansion of domain terms using WordNet, Wikipedia links, and corpus co-occurrence statistics. Given an input text, their method first identified all target domain terms using the target domain signature, then disambiguated the remaining terms using sense clustering and classified them according to their proximity to the source domains listed in the index. For the latter purpose, the authors experimented with a set of classifiers, including a maximum entropy classifier, an unpruned decision tree classifier, support vector machines, and a random forest classifier, as well as combinations thereof. They evaluated their system on a balanced data set containing 241 metaphorical and 241 literal examples, and obtained their highest F-score of 0.70 using the decision tree classifier. Hovy et al. (2013) trained a support vector machine classifier (Cortes and Vapnik 1995) with tree kernels (Moschitti, Pighin, and Basili 2006) to capture the compositional properties of metaphorical language. Their hypothesis was that unusual semantic compositions in the data would be indicative of the use of metaphor. The system was trained on labeled examples of literal and metaphorical uses of 329 words (3,872 sentences in total), with the expectation that it would learn the differences in their compositional behavior in the given lexico-syntactic contexts. The choice of dependency-tree kernels helped to capture such compositional properties, according to the authors. Hovy et al. used word vectors, as well as lexical, part-of-speech tag, and WordNet supersense representations of sentence trees as features. They report encouraging results (F-score = 0.75), which is an indication of the importance of syntactic information and compositionality in metaphor identification.", "cite_spans": [ { "start": 16, "end": 36, "text": "Mohler et al. 
(2013)", "ref_id": "BIBREF67" }, { "start": 1688, "end": 1706, "text": "Hovy et al. (2013)", "ref_id": "BIBREF44" }, { "start": 1751, "end": 1775, "text": "(Cortes and Vapnik 1995)", "ref_id": "BIBREF22" }, { "start": 1794, "end": 1830, "text": "(Moschitti, Pighin, and Basili 2006)", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification as Supervised Classification.", "sec_num": "2.2.1" }, { "text": "The key question that supervised classification poses is, what features are indicative of metaphor and how can one abstract from individual expressions to its highlevel mechanisms? The described approaches experimented with a number of features, including lexical and syntactic information and higher-level features such as semantic roles, WordNet supersenses, and domain types extracted from ontologies. The results that came out of these studies suggest that in order to reliably capture the patterns of the use of metaphor in the data on a large scale, one needs to address conceptual properties of metaphor, along with the surface ones. Thus the model would need to make generalizations at the level of metaphorical mappings and coarse-grained classes of concepts, in essence representing different domains (such as politics or machines). Although our intention in this article is to model such domain structure in a minimally supervised or unsupervised way and to learn it from the data directly, the clusters produced by our models provide a representation of conceptual domains that could also be a useful feature within a supervised classification framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification as Supervised Classification.", "sec_num": "2.2.1" }, { "text": "The Use of Clustering for Metaphor Processing. We first introduced the use of clustering techniques to learn metaphorical associations in our earlier work (Shutova, Sun, and Korhonen 2010; Shutova and Sun 2013) . 
The metaphor identification system of Shutova, Sun, and Korhonen (2010) starts from a small seed set of metaphorical expressions, learns the analogies involved in their production, and extends the set of analogies by means of spectral clustering of verbs and nouns. Shutova, Sun, and Korhonen (2010) introduced the hypothesis of \"clustering by association\": in the course of distributional clustering, concrete concepts (e.g., water, coffee, beer, liquid) tend to be clustered together when they have similar meanings, whereas abstract concepts (e.g., marriage, democracy, cooperation) tend to be clustered together when they are metaphorically associated with the same source domain(s) (e.g., both marriage and democracy can be viewed as mechanisms or games). Because of this shared association structure, such abstract concepts occur in common contexts in the corpus. For instance, Figure 2 shows a more concrete cluster of mechanisms and a more abstract cluster containing both marriage and democracy, along with their associated verb cluster. Such clustering patterns allow the system to discover new, previously unseen conceptual and linguistic metaphors starting from a small set of examples, or seed metaphors. For instance, having seen the seed metaphor \"mend marriage\", it infers that \"the functioning of democracy\" is also used metaphorically, since mend and function are both MECHANISM verbs and marriage and democracy are in the same cluster. This is how the system expands from a small set of seed metaphorical expressions to cover new concepts and new metaphors. Shutova, Sun, and Korhonen (2010) experimented with unconstrained spectral clustering and applied their system to English data.
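The seed-expansion step described above can be sketched schematically as follows. The toy verb and noun clusters are invented stand-ins for the output of spectral clustering, and the function name is hypothetical; the point is only the combinatorial inference from seed pairs to new candidate expressions.

```python
# Schematic sketch of seed-based expansion: a seed pair links a source (verb)
# cluster to a target (noun) cluster, and all cross-products are proposed as
# candidate metaphorical expressions. Clusters below are invented toy data.
verb_clusters = [{"mend", "function", "repair", "oil"},     # MECHANISM verbs
                 {"pour", "flow", "leak"}]                  # LIQUID verbs
noun_clusters = [{"marriage", "democracy", "cooperation"},  # abstract targets
                 {"engine", "pump", "gearbox"}]             # concrete mechanisms

def expand(seeds):
    candidates = set()
    for verb, noun in seeds:
        vc = next((c for c in verb_clusters if verb in c), None)
        nc = next((c for c in noun_clusters if noun in c), None)
        if vc and nc:
            candidates |= {(v, n) for v in vc for n in nc}
    return candidates - set(seeds)

new = expand([("mend", "marriage")])
print(("function", "democracy") in new)  # True: inferred from the seed
```

The real system additionally verifies the candidate pairs against corpus occurrences in the relevant grammatical relations, rather than proposing the raw cross-product.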
In this article, we extend their method to perform constrained clustering, and thus investigate the effectiveness of additional supervision in the form of annotated metaphorical mappings. We then also apply the", "cite_spans": [ { "start": 155, "end": 188, "text": "(Shutova, Sun, and Korhonen 2010;", "ref_id": "BIBREF87" }, { "start": 189, "end": 210, "text": "Shutova and Sun 2013)", "ref_id": "BIBREF87" }, { "start": 251, "end": 284, "text": "Shutova, Sun, and Korhonen (2010)", "ref_id": "BIBREF87" }, { "start": 479, "end": 512, "text": "Shutova, Sun, and Korhonen (2010)", "ref_id": "BIBREF87" }, { "start": 1986, "end": 2019, "text": "Shutova, Sun, and Korhonen (2010)", "ref_id": "BIBREF87" } ], "ref_spans": [ { "start": 1294, "end": 1302, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "2.2.2", "sec_num": null }, { "text": "Clusters of abstract and concrete nouns. On the right is a cluster containing concrete concepts that are various kinds of mechanisms; at the bottom is a cluster containing verbs co-occurring with mechanisms in the corpus; and on the left is a cluster containing abstract concepts that tend to co-occur with these verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "original unconstrained method and its new constrained variant to three languages (English, Spanish, and Russian), thus testing the approach in a multilingual setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "The second set of experiments in this article is based on the method of Shutova and Sun (2013) , which is inspired by the same observation about distributional clustering. Through the use of hierarchical soft clustering techniques, Shutova and Sun (2013) derive a network of concepts in which metaphorical associations are exhibited at different levels of granularity.
Whereas in the method of Shutova, Sun, and Korhonen (2010) the source and target domain clusters were connected through the use of the seed expressions, the method of Shutova and Sun (2013) learns both the clusters and the connections between them automatically from the data, in a fully unsupervised fashion. Because one of the aims of this article is to investigate the level and type of supervision optimally required to generalize metaphorical mechanisms from text, we adapt and apply the method of Shutova and Sun (2013) to our languages of interest and compare its performance to that of the spectral-clustering-based methods across languages. We thus also test the method, which has been previously evaluated only on English data, in a multilingual setting.", "cite_spans": [ { "start": 73, "end": 95, "text": "Shutova and Sun (2013)", "ref_id": "BIBREF87" }, { "start": 391, "end": 424, "text": "Shutova, Sun, and Korhonen (2010)", "ref_id": "BIBREF87" }, { "start": 535, "end": 557, "text": "Shutova and Sun (2013)", "ref_id": "BIBREF87" }, { "start": 871, "end": 893, "text": "Shutova and Sun (2013)", "ref_id": "BIBREF87" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Clustering techniques have also been previously used in metaphor processing research in a more traditional sense (i.e., to identify linguistic expressions with a similar or related meaning). Mason (2004) performed WordNet sense clustering to obtain selectional preference classes, and Mohler et al. (2013) used it to determine similarity between concepts and to link them in semantic signatures. Strzalkowski et al. (2013) and Gandy et al. (2013) clustered metaphorically used terms to form potential source domains. Birke and Sarkar (2006) clustered sentences containing metaphorical and literal uses of verbs. Their core assumption was that all instances of the verb in semantically similar sentences have the same sense, either the literal or the metaphorical one.
However, the latter approaches did not investigate how metaphorical associations structure the distributional semantic space, which is what we focus on in this article.", "cite_spans": [ { "start": 191, "end": 203, "text": "Mason (2004)", "ref_id": "BIBREF64" }, { "start": 285, "end": 305, "text": "Mohler et al. (2013)", "ref_id": "BIBREF67" }, { "start": 396, "end": 422, "text": "Strzalkowski et al. (2013)", "ref_id": "BIBREF94" }, { "start": 427, "end": 446, "text": "Gandy et al. (2013)", "ref_id": "BIBREF35" }, { "start": 517, "end": 540, "text": "Birke and Sarkar (2006)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "2.2.3 LDA Topic Modeling. Heintz et al. (2013) applied LDA topic modeling (Blei, Ng, and Jordan 2003) to the problem of metaphor identification in experiments with English and Spanish. Their hypothesis was that if a sentence contained both source and target domain vocabulary, it contained a metaphor. The authors focused on the target domain of governance and manually compiled a set of source concepts with which governance could be associated. They used LDA topics as proxies for source and target concepts: If vocabulary from both source and target topics was present in a sentence, this sentence was tagged as containing a metaphor. The topics were learned from Wikipedia and then aligned to source and target concepts using sets of human-created seed words. When the metaphorical sentences were retrieved, the source topics that are common in the document were excluded. This ensured that the source vocabulary was transferred from a new domain. The authors collected the data for their experiments from news Web sites and governance-related blogs in English and Spanish. They ran their system on these data, and output a ranked set of metaphorical examples. 
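The sentence-level test of Heintz et al. (2013) described above can be sketched in simplified form. Real LDA topics (Blei, Ng, and Jordan 2003) are word distributions learned from Wikipedia and aligned to concepts via seed words; here two hand-made word sets stand in for one aligned source topic and the governance target topic, and all names are invented for illustration.

```python
# Simplified sketch of the source/target-topic test: a sentence is tagged as
# potentially metaphorical if it contains vocabulary from both the target
# topic and some source topic. Word lists are invented stand-ins for topics.
source_topics = {"WAR": {"battle", "fight", "attack", "defend"}}
target_topic = {"government", "election", "policy", "senate"}

def tag_metaphorical(sentence: str):
    tokens = set(sentence.lower().split())
    has_target = bool(tokens & target_topic)
    hit_sources = [name for name, vocab in source_topics.items()
                   if tokens & vocab]
    return has_target and bool(hit_sources), hit_sources

print(tag_metaphorical("the senate will fight the policy"))  # (True, ['WAR'])
print(tag_metaphorical("soldiers attack at dawn"))           # (False, ['WAR'])
```

The actual system additionally weighted words by their topic probabilities and ranked the retrieved sentences, rather than making a hard set-intersection decision.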
They carried out two types of evaluation: (1) the top five linguistic examples for each conceptual metaphor were judged by two annotators, yielding an F-score of 0.59 for English (κ = 0.48); and (2) the 250 top-ranked examples in the system output were annotated for metaphoricity using Amazon Mechanical Turk, yielding a mean metaphoricity of 0.41 (standard deviation = 0.33) in English and 0.33 (standard deviation = 0.23) in Spanish.", "cite_spans": [ { "start": 26, "end": 46, "text": "Heintz et al. (2013)", "ref_id": "BIBREF41" }, { "start": 74, "end": 101, "text": "(Blei, Ng, and Jordan 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "The method of Heintz et al. (2013) relies on the ideas of Conceptual Metaphor Theory, in that metaphorical language can be generalized using information about source and target domains. Many supervised classification approaches (e.g., Mohler et al. 2013; Tsvetkov, Mukomel, and Gershman 2013) , as well as our own approach, share this intuition. However, our methods differ in their aims. Whereas the method of Heintz et al. (2013) learned information about the internal domain structure from the data (through the use of LDA), our methods aim to learn information about cross-domain mappings, as well as the internal domain structure, from the words' distributional behavior.", "cite_spans": [ { "start": 14, "end": 34, "text": "Heintz et al. (2013)", "ref_id": "BIBREF41" }, { "start": 239, "end": 258, "text": "Mohler et al. 2013;", "ref_id": "BIBREF67" }, { "start": 259, "end": 296, "text": "Tsvetkov, Mukomel, and Gershman 2013)", "ref_id": "BIBREF97" }, { "start": 417, "end": 437, "text": "Heintz et al. 
(2013)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "In addition, in contrast to most of the systems described in this section, we experiment with minimally supervised and unsupervised techniques that require little or no annotated training data, and thus can be easily adapted to new domains and languages. Unlike most previous approaches, we also experiment with metaphor identification in a general-domain setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Because our approach involves distributional learning from large collections of text, the choice of an appropriate text corpus plays an important role in the experiments and the interpretation of results. We have selected comparably large, wide-coverage corpora in our three languages to train the systems. The corpora were then parsed using a dependency parser, and VERB-SUBJECT, VERB-DIRECT OBJECT, and VERB-INDIRECT OBJECT relations were extracted from the parser output. Following previous semantic noun and verb clustering experiments (Pantel and Lin 2002; Bergsma, Lin, and Goebel 2008; Sun and Korhonen 2009) , we use these grammatical relations (GRs) as features for clustering. The features used for noun clustering consisted of the verb lemmas occurring in VERB-SUBJECT, VERB-DIRECT OBJECT, and VERB-INDIRECT OBJECT relations with the nouns in our data set, indexed by relation type. The features used for verb clustering were the noun lemmas occurring in the above GRs with the verbs in the data set, also indexed by relation type. The feature values were the relative frequencies of the features. For instance, the feature vector for democracy in English would contain the following entries: {restore-dobj n_1, establish-dobj n_2, build-dobj n_3, ..., vote in-iobj n_i, call for-iobj n_{i+1}, ..., survive-subj n_k, emerge-subj n_{k+1}, ...}, where n_j is the relative frequency of the corresponding feature.", "cite_spans": [ { "start": 539, "end": 560, "text": "(Pantel and Lin 2002;", "ref_id": "BIBREF77" }, { "start": 561, "end": 591, "text": "Bergsma, Lin, and Goebel 2008;", "ref_id": "BIBREF7" }, { "start": 592, "end": 614, "text": "Sun and Korhonen 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets and Feature Extraction", "sec_num": "3." }, { "text": "The English verb and noun data sets used for clustering contain the 2,000 most frequent verbs and the 2,000 most frequent nouns in the BNC (Burnard 2007) , respectively. The BNC is balanced with respect to topic and genre, which makes it appropriate for the selection of a data set of the most common source and target concepts and their linguistic realizations. The features for clustering were, however, extracted from the English Gigaword corpus (Graff et al. 2003) , which is more suitable for feature extraction because of its large size. The Gigaword corpus was first parsed using the RASP parser (Briscoe, Carroll, and Watson 2006) , and the VERB-SUBJECT, VERB-DIRECT OBJECT, and VERB-INDIRECT OBJECT relations were then extracted from the GR output of the parser, from which the feature vectors were formed.", "cite_spans": [ { "start": 139, "end": 153, "text": "(Burnard 2007)", "ref_id": "BIBREF17" }, { "start": 445, "end": 464, "text": "(Graff et al. 2003)", "ref_id": "BIBREF39" }, { "start": 599, "end": 634, "text": "(Briscoe, Carroll, and Watson 2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "English Data", "sec_num": "3.1" }, { "text": "The Spanish data were extracted from the Spanish Gigaword corpus (Mendonca et al. 2011) . The verb and noun data sets used for clustering consisted of the 2,000 most frequent verbs and 2,000 most frequent nouns in this corpus. The corpus was parsed using the Spanish Malt parser (Nivre et al. 2007; Ballesteros et al. 2010) .
VERB-SUBJECT, VERB-DIRECT OBJECT, and VERB-INDIRECT OBJECT relations were then extracted from the output of the parser, and the feature vectors were constructed for all verbs and nouns in the data set in a similar manner to the English system. For example, the feature vector for the noun democracia included the following entries: {destruir-dobj n_1, reinstaurar-dobj n_2, proteger-dobj n_3, ..., elegir a-iobj n_i, comprometer con-iobj n_{i+1}, ..., florecer-subj n_k, funcionar-subj n_{k+1}, ...}.", "cite_spans": [ { "start": 65, "end": 87, "text": "(Mendonca et al. 2011)", "ref_id": "BIBREF66" }, { "start": 279, "end": 298, "text": "(Nivre et al. 2007;", "ref_id": "BIBREF75" }, { "start": 299, "end": 323, "text": "Ballesteros et al. 2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Spanish Data", "sec_num": "3.2" }, { "text": "The Russian data were extracted from the RU-WaC corpus (Sharoff 2006), a two-billion-word representative collection of text from the Russian Web. The corpus was parsed using the Malt dependency parser for Russian (Sharoff and Nivre 2011) , and the VERB-SUBJECT, VERB-DIRECT OBJECT, and VERB-INDIRECT OBJECT relations were extracted to create the feature vectors. Similarly to the English and Spanish experiments, the 2,000 most frequent verbs and 2,000 most frequent nouns, according to the RU-WaC, constituted the verb and noun data sets used for clustering.", "cite_spans": [ { "start": 55, "end": 68, "text": "(Sharoff 2006", "ref_id": null }, { "start": 213, "end": 237, "text": "(Sharoff and Nivre 2011)", "ref_id": "BIBREF83" } ], "ref_spans": [], "eq_spans": [], "section": "Russian Data", "sec_num": "3.3" }, { "text": "We first experiment with a flat clustering solution, where metaphorical patterns are learned by means of hard clustering of verbs and nouns at one level of generality.
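The relation-indexed, relative-frequency feature vectors described in Section 3 can be sketched as follows. The (verb, relation, noun) triples are invented stand-ins for dependency-parser output, and the helper function is hypothetical.

```python
# Sketch of the feature-extraction step: grammatical relations from parsed
# sentences become relation-indexed features whose values are relative
# frequencies. The triples below are invented toy data.
from collections import Counter

triples = [("restore", "dobj", "democracy"), ("establish", "dobj", "democracy"),
           ("restore", "dobj", "democracy"), ("survive", "subj", "democracy"),
           ("drink", "dobj", "water")]

def noun_features(noun):
    """Build the feature vector of a noun: verb lemma indexed by relation,
    with relative-frequency values."""
    counts = Counter(f"{verb}-{rel}" for verb, rel, n in triples if n == noun)
    total = sum(counts.values())
    return {feat: c / total for feat, c in counts.items()}

print(noun_features("democracy"))
# e.g. {'restore-dobj': 0.5, 'establish-dobj': 0.25, 'survive-subj': 0.25}
```

Verb vectors are built symmetrically, with noun lemmas (indexed by relation) as the features.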
This approach to metaphor identification is based on the hypothesis of clustering by association, which we first introduced in Shutova, Sun, and Korhonen (2010). Our expectation is that clustering by association would allow us to learn numerous new target domains that are associated with the same source domain from the data in a minimally supervised way. Following Shutova, Sun, and Korhonen (2010), we also use clustering techniques to collect source domain vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised Metaphor Identification Experiments", "sec_num": "4." }, { "text": "We perform verb and noun clustering using the spectral clustering algorithm, which has proven to be effective in lexical acquisition tasks (Brew and Schulte im Walde 2002; Sun and Korhonen 2009) and is suitable for high-dimensional data (Chen et al. 2006) . We experiment with its unconstrained and constrained versions. The unconstrained algorithm performs clustering (and thus identifies metaphorical patterns) in a fully unsupervised way, relying on the information contained in the data alone. The constrained version uses a small set of example metaphorical mappings as constraints to reinforce clustering by association. We then investigate to what extent adding metaphorical constraints affects the resulting partition of the semantic space as a whole. Further details of these two methods are provided subsequently. Once the clusters have been created in either the unconstrained or constrained setting, the identification of metaphorical expressions is boosted from a small number of linguistic examples, the seed expressions.", "cite_spans": [ { "start": 139, "end": 171, "text": "(Brew and Schulte im Walde 2002;", "ref_id": "BIBREF14" }, { "start": 172, "end": 194, "text": "Sun and Korhonen 2009)", "ref_id": null }, { "start": 237, "end": 255, "text": "(Chen et al. 
2006)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised Metaphor Identification Experiments", "sec_num": "4." }, { "text": "The seed expressions in our experiments are verb-subject and verb-direct object metaphors, in which the verb metaphorically describes the noun (e.g., \"mend marriage\"). Note that these are linguistic metaphors; their corresponding metaphorical mappings are not annotated. The seed expressions are then used to establish a link between the verb cluster that contains source domain vocabulary and the noun cluster that contains diverse target concepts associated with that source domain. This link then allows the system to identify a large number of new metaphorical expressions in a text corpus. In summary, the system (1) performs noun clustering in order to harvest target concepts associated with the same source domain; (2) creates a source domain verb lexicon by means of verb clustering; (3) uses seed expressions to connect source (verb) and target (noun) clusters between which metaphorical associations hold; and (4) searches the corpus for metaphorical expressions describing the target domain concepts using the verbs from the source domain lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised Metaphor Identification Experiments", "sec_num": "4." }, { "text": "4.1.1 Spectral Clustering. Spectral clustering partitions objects by relying on their similarity matrix. Given a set of data points, the similarity matrix W ∈ ℝ^{N×N} records the similarities w_{ij} between all pairs of points. We construct similarity matrices using the Jensen-Shannon divergence as a measure.
Jensen-Shannon divergence between two feature vectors q i and q j is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "JSD(q i , q j ) = 1 2 D(q i ||m) + 1 2 D(q j ||m) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "where D is the Kullback-Leibler divergence, and m is the average of the q i and q j . We then use the following similarity w ij between i and j as defined in Sun and Korhonen 2009:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w ij = e \u2212JSD(q i ,q j )", "eq_num": "(2)" } ], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "The similarity matrix W encodes a weighted undirected graph G := (V, E), by providing its adjacency weights. We can think of the points we are going to cluster as the vertices of the graph, and their similarities w ij as connection weights on the edges of the graph. Spectral clustering attempts to find a partitioning of the graph into clusters that are minimally connected to vertices in other clusters, but which are of roughly equal sizes (Shi and Malik 2000) . This is important for metaphor identification, as our aim is to identify clusters of target concepts associated with the same source domain on one hand and to ensure that different metaphorical mappings are separated from each other in the overall partition on the other hand. In particular, we use the NJW spectral clustering algorithm introduced by Ng et al. (2002) . 7 In our case, each vertex v i represents a word indexed by i \u2208 1, ..., N. 
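For concreteness, the similarity construction of Equations (1) and (2) can be sketched in a few lines of Python (our own illustration, not the system's code; the feature vectors are assumed to be probability distributions, e.g., normalized co-occurrence counts):

```python
import numpy as np

def jensen_shannon(p, q):
    """JSD(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m), with m the average of p and q."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0  # terms with a_i = 0 contribute 0 to the KL sum
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similarity_matrix(vectors):
    """w_ij = exp(-JSD(q_i, q_j)), as in Equation (2)."""
    n = len(vectors)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            W[i, j] = np.exp(-jensen_shannon(vectors[i], vectors[j]))
    return W
```

Since JSD(q_i, q_i) = 0, the diagonal of W is 1, and the symmetry of JSD makes W symmetric, as required for an undirected graph.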
The weight between vertices v i and v j is denoted by w ij \u2265 0 and represents the similarity or adjacency between v i and v j , taken from the adjacency matrix W. If w ij = 0, we say vertices v i and v j are unconnected. Because G is taken to be undirected, W must be symmetric; this explains our use of Jensen-Shannon divergence rather than the more well-known Kullback-Leibler divergence in constructing our similarity matrix W. 8 We denote the degree of a vertex v i by d i := N j=1 w ij . The degree represents the weighted connectivity of v i to the rest of the graph. Finally, we define the graph Laplacian of G as L := D \u2212 W; the role of the graph Laplacian will become apparent subsequently.", "cite_spans": [ { "start": 443, "end": 463, "text": "(Shi and Malik 2000)", "ref_id": "BIBREF84" }, { "start": 817, "end": 833, "text": "Ng et al. (2002)", "ref_id": "BIBREF72" }, { "start": 836, "end": 837, "text": "7", "ref_id": null }, { "start": 1340, "end": 1341, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "Recall that our goal is to minimize similarities (weights) between clusters while producing clusters of roughly equal sizes. Denote the sum of weights between cluster A and points not in cluster A as W(A, \u2212A) := i\u2208A,j/ \u2208A w ij . The NCUT objective function introduced by Shi and Malik (2000) incorporates a tradeoff between these two objectives as:", "cite_spans": [ { "start": 271, "end": 291, "text": "Shi and Malik (2000)", "ref_id": "BIBREF84" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "NCut(A 1 , ..., A K ) := K k=1 W(A k , \u2212A k ) v \u2208A k d (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "We can now recast our goal as finding the partitioning A 1 , ..., A K that minimizes this objective function.
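The NCUT objective of Equation (3) can be evaluated directly for any candidate partition; the sketch below (our own illustration, using the cut and volume definitions above) makes the tradeoff concrete:

```python
import numpy as np

def ncut(W, labels):
    """NCut(A_1, ..., A_K) = sum_k W(A_k, -A_k) / vol(A_k), Equation (3)."""
    labels = np.asarray(labels)
    degrees = W.sum(axis=1)                  # d_i = sum_j w_ij
    total = 0.0
    for k in np.unique(labels):
        in_k = labels == k
        cut = W[np.ix_(in_k, ~in_k)].sum()   # W(A_k, -A_k): weight leaving A_k
        vol = degrees[in_k].sum()            # volume: sum of degrees inside A_k
        total += cut / vol
    return total
```

On a graph with two tightly connected groups joined by weak edges, the block partition attains a lower NCut value than a partition that splits one of the groups.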
We can achieve some clarity about this objective function by rewriting it using linear algebra. If we define the normalized indicator vectors h k := (h 1k , ..., h Nk ) T where we set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "h i,k := \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 1 v \u2208A k d if v i \u2208 A k 0 otherwise (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "then some straightforward computations reveal that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h T k Lh k = 1 2 i\u2208A k ,j/ \u2208A k w ij = W(A k , \u2212A k ) v \u2208A k d", "eq_num": "(5)" } ], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "Therefore, if we collect the vectors h 1 , ..., h K into a matrix H = (h 1 , ..., h K ), then h T k Lh k = (H T LH) kk , and minimizing Equation (3) is equivalent to the following minimization problem on the graph Laplacian:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min H Tr(H T LH) where H is subject to constraint 4", "eq_num": "(6)" } ], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "If we could find the optimal H, it would be straightforward to find the cluster memberships from H, since h ik is nonzero if and only if v i is in cluster A k . Unfortunately, solving this minimization problem is NP hard (Wagner and Wagner 1993; Von Luxburg 2007) . However, an approximate solution can be found by relaxing the constraints on the elements of H in constraint 4. 
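The identity in Equation (5) is easy to check numerically. The sketch below is our own verification on a small random graph; we write the Equation (4) normalization explicitly as one over the square root of the cluster volume, which is also what makes the columns of D^{1/2}H orthonormal:

```python
import numpy as np

# A small random graph: W symmetric, nonnegative, zero diagonal.
rng = np.random.default_rng(0)
A = rng.random((6, 6))
W = np.triu(A, 1) + np.triu(A, 1).T

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # graph Laplacian L = D - W

members = np.array([True, True, True, False, False, False])  # a cluster A_k
vol = W.sum(axis=1)[members].sum()                            # vol(A_k)
h = np.where(members, 1.0 / np.sqrt(vol), 0.0)                # Equation (4)

cut = W[np.ix_(members, ~members)].sum()                      # W(A_k, -A_k)
assert np.isclose(h @ L @ h, cut / vol)                       # Equation (5)
assert np.isclose(h @ D @ h, 1.0)                             # D^{1/2}h has unit norm
```

The same computation, summed over all K clusters, reproduces the trace form Tr(H^T L H) of Equation (6).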
Thus, we must relax our optimization problem somewhat. One entailment of constraint 4 is that the matrix D 1/2 H is a matrix of orthonormal vectors-that is, (D 1/2 H) T (D 1/2 H) = H T DH = I. Ng et al. (2002) proceed by dropping the constraint that h ik be either 0 or 1/ v \u2208A k d , but keeping the orthonormality constraint. Thus, they seek to solve the following problem:", "cite_spans": [ { "start": 221, "end": 245, "text": "(Wagner and Wagner 1993;", "ref_id": "BIBREF103" }, { "start": 246, "end": 263, "text": "Von Luxburg 2007)", "ref_id": "BIBREF102" }, { "start": 571, "end": 587, "text": "Ng et al. (2002)", "ref_id": "BIBREF72" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min H\u2208R N\u00d7K Tr(H T LH) subject to H T DH = I", "eq_num": "(7)" } ], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "By setting T := D 1/2 H, this can be rewritten as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "min T\u2208R N\u00d7K Tr(T T D \u22121/2 LD \u22121/2 T) subject to T T T = I (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "This problem is tractable because it is equivalent to the problem of finding the first K eigenvectors of D \u22121/2 LD \u22121/2 . Because we have dropped the constraint that h i,k be nonzero if and only if v i is in cluster A k from Equation 4, then we can no longer infer the cluster memberships directly from H or T. Instead, Ng et al. (2002) approximately infer cluster memberships by clustering in the eigenspace defined by T using a clustering algorithm such as K-MEANS. The algorithm of Ng et al. (2002) is summarized as Algorithm 1.", "cite_spans": [ { "start": 320, "end": 336, "text": "Ng et al. 
(2002)", "ref_id": "BIBREF72" }, { "start": 485, "end": 501, "text": "Ng et al. (2002)", "ref_id": "BIBREF72" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Methods", "sec_num": "4.1" }, { "text": "Constraints. Constrained clustering methods incorporate prior knowledge about which words belong in the same clusters. In our experiments, we sought methods that were well-behaved when given only positive constraints (i.e., two words belong in the same cluster) rather than both positive and negative constraints (i.e., two words do not belong in the same cluster). Because we have no hard-and-fast constraints that must be satisfied, but rather subjective information that we believe should influence the constraints, it was also important that our methods not strictly enforce constraints, but rather be capable of weighing the constraints against information available in the similarity matrix over the set of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering with", "sec_num": "4.1.2" }, { "text": "In the constrained spectral clustering algorithm introduced by Ji, Xu, and Zhu (2006), constraints are introduced by a simple modification of the objective function of NCUT. Suppose we have C pairs of constraints indicating that two words belong to the same cluster, and we have N words overall. For each pair c of words i and j that belong to the same cluster, we create an N-dimensional vector", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering with", "sec_num": "4.1.2" }, { "text": "u c = [u c1 , u c2 , ..., u cN ] T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering with", "sec_num": "4.1.2" }, { "text": "where u ci = 1, u cj = \u22121, and the rest of the elements are equal to zero. 
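This encoding can be sketched as follows (our own illustration): each must-link pair contributes one signed row of U, so that UH vanishes exactly when every constrained pair shares a cluster.

```python
import numpy as np

def constraint_matrix(pairs, n_words):
    """Build the C x N matrix U: row c has u_ci = 1 and u_cj = -1 for pair (i, j)."""
    U = np.zeros((len(pairs), n_words))
    for c, (i, j) in enumerate(pairs):
        U[c, i] = 1.0
        U[c, j] = -1.0
    return U
```

The penalty term ||UH||^2 in Equation (9) is then zero precisely when all constrained pairs are clustered together, and grows with each violated constraint.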
We then collect these vectors into the C \u00d7 N constraint matrix", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering with", "sec_num": "4.1.2" }, { "text": "U T = [u 1 , u 2 , ..., u N ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering with", "sec_num": "4.1.2" }, { "text": "Suppose that we form the matrix H using the constraints on h ik in Equation 4, as before. Then if all of the constraints encoded in U are correctly specified, we have that UH = 0 and therefore the spectral norm UH 2 = Tr((UH) T UH) = 0. As more and more of the constraints encoded in U are violated by H, UH will grow. This motivates Ji,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering with", "sec_num": "4.1.2" }, { "text": "Require: Number K of clusters; similarity matrix W \u2208 R N\u00d7N Compute the degree matrix D where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "d ii = N j=1 w ij and d ij = 0 if i = j Compute the graph Laplacian L \u2190 D \u2212 W Compute normalized graph LaplacianL \u2190 D \u22121/2 LD \u22121/2 Compute the first K eigenvectors V 1 , ..., V K of D \u22121/2 LD \u22121/2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "Let T \u2208 R N\u00d7K be the matrix containing the normalized eigenvectors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "V 1 V 1 2 , ..., V K V K 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "Let y i \u2208 R K be the vector corresponding to the i th row of T Cluster the points ( , and Zhu (2006) to modify the objective function in Equation (6) by adding a term that penalizes a large norm for UH:", "cite_spans": [ { "start": 84, "end": 100, 
"text": ", and Zhu (2006)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "y i ) i=1,...,N into clusters A 1 , ..., A K using the K-MEANS algorithm return A 1 , ..., A K Xu", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "min H Tr(H T LH) + \u03b2 UH 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "where H is subject to constraint 4 (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "Here, \u03b2 governs how strongly the constraints encoded in U should be enforced. As before, we now relax constraint 4 and set T = D 1/2 H to yield:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min T\u2208R N\u00d7K Tr(T T D \u22121/2 LD \u22121/2 T + \u03b2 UD \u22121/2 T 2 ) subject to T T T = I", "eq_num": "(10)" } ], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "Note that \u03b2 UD \u22121/2 T 2 = \u03b2Tr(T T D \u22121/2 U T UD \u22121/2 T). Therefore, by collecting terms, we can rewrite the objective function as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "min T\u2208R N\u00d7K Tr(T T D \u22121/2 (L + \u03b2U T U)D \u22121/2 T) subject to T T T = I (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "Therefore, we can find the optimal T as the first K eigenvectors of D \u22121/2 (L + \u03b2U T U)D \u22121/2 , and we can assign cluster memberships using K-MEANS in a manner analogous to algorithm NJW.
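Putting the derivation together, the constrained variant can be sketched as below. This is a simplified re-implementation for illustration only, not the authors' code: a tiny deterministic K-means stands in for the K-MEANS step, and the graph, \u03b2 value, and constraint list used in the test are invented. With an empty constraint list it reduces to NJW-style spectral clustering.

```python
import numpy as np

def constrained_spectral(W, K, constraints=(), beta=4.0):
    """Spectral clustering with must-link penalties (after Ji, Xu, and Zhu 2006)."""
    N = W.shape[0]
    U = np.zeros((len(constraints), N))          # constraint matrix, one row per pair
    for c, (i, j) in enumerate(constraints):
        U[c, i], U[c, j] = 1.0, -1.0

    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.diag(d) - W
    M = D_inv_sqrt @ (L + beta * U.T @ U) @ D_inv_sqrt   # penalized normalized Laplacian

    # First K eigenvectors (smallest eigenvalues); eigh returns unit-norm columns,
    # matching the V_k / ||V_k||_2 normalization step of Algorithms 1 and 2.
    _, vecs = np.linalg.eigh(M)
    T = vecs[:, :K]

    # Tiny deterministic K-means on the rows of T (farthest-point initialization).
    centers = [T[0]]
    for _ in range(1, K):
        dists = np.min([((T - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(T[np.argmax(dists)])
    centers = np.array(centers)
    for _ in range(50):
        labels = np.argmin(((T[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = T[labels == k].mean(axis=0)
    return labels
```

On a similarity matrix with two weakly linked blocks, the unconstrained call recovers the blocks; adding must-link pairs with a large \u03b2 biases the eigenspace toward partitions that keep those pairs together.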
The pseudocode for the JXZ algorithm is shown in Algorithm 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 NJW algorithm", "sec_num": null }, { "text": "4.2.1 Unconstrained Setting. We first applied the unconstrained version of the spectral clustering algorithm to our data. We experimented with different clustering granularities (producing 100, 200, 300, and 400 clusters), examined the obtained clusters, and determined that 200 clusters is the optimal setting for both nouns and verbs in our task, across the three languages. This was done by means of qualitative analysis of the clusters as representations of source and target domains; that is, by judging how complete and homogeneous the verb clusters were as lists of potential source domain vocabulary and how many new target domains associated with the same source domain were found correctly in the noun clusters. This analysis was performed on 10 randomly selected clusters taken from different granularity settings, and none of the seed expressions were used for it.
Examples of clusters generated with this setting are shown in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Experiments", "sec_num": "4.2" }, { "text": "= 0 if i = j Compute the graph Laplacian L \u2190 D \u2212 W Compute the first K eigenvectors V 1 , ..., V K of D \u22121/2 (L + \u03b2U T U)D \u22121/2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Experiments", "sec_num": "4.2" }, { "text": "Let T \u2208 R N\u00d7K be the matrix containing the normalized eigenvectors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Experiments", "sec_num": "4.2" }, { "text": "V 1 V 1 2 , ..., V K V K 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Experiments", "sec_num": "4.2" }, { "text": "Let y i \u2208 R K be the vector corresponding to the i th row of T Cluster the points (y i ) i=1,...,N into clusters C 1 , ..., C K using the K-MEANS algorithm return C 1 , ..., C K Suggested source domain: MECHANISM Target Cluster: venture partnership alliance network association trust link relationship environment Suggested source domain: PHYSICAL OBJECT; LIVING BEING; STRUCTURE Target Cluster: tradition concept doctrine idea principle notion definition theory logic hypothesis interpretation proposition thesis argument refusal Suggested source domain: STORY; JOURNEY Target Cluster: politics profession affair ideology philosophy religion competition education Suggested source domain: LIQUID Target Cluster: frustration concern excitement anger speculation desire hostility anxiety passion fear curiosity enthusiasm emotion feeling suspicion", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Experiments", "sec_num": "4.2" }, { "text": "Clusters of English nouns (unconstrained setting; the source domain labels in the figure are suggested by the authors for clarity, the system does not assign any labels). 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "Clusters of English verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "Suggested source domain: MECHANISM Target Cluster: avance consenso progreso soluci\u00f3n paz acercamiento entendimiento arreglo coincidencia igualdad equilibrio Target Cluster: relaci\u00f3n amistad lazo v\u00ednculo conexi\u00f3n nexo vinculaci\u00f3n Suggested source domain: LIVING BEING, ORGANISM, MECHANISM, STRUCTURE, BUILDING Target Cluster: comunidad pa\u00eds mundo naci\u00f3n africa sector sociedad regi\u00f3n europa estados continente asia centroam\u00e9rica bando planeta latinoam\u00e9rica Suggested source domain: STORY, JOURNEY Target Cluster: tendencia acontecimiento paso curso trayectoria ejemplo pendiente tradici\u00f3n pista evoluci\u00f3n Suggested source domain: CONSTRUCTION, STRUCTURE, BUILDING Target Cluster: seguridad vida democracia confianza estabilidad salud finanzas credibilidad competitividad", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "Clusters of Spanish nouns (unconstrained setting; the source domain labels in the figure are suggested by the authors for clarity, the system does not assign any labels).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "target concepts associated with the same source concept. 9 The verb clusters contain lists of source domain vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "We then experimented with adding constraints to guide the clustering process. 
We used two types of constraints: (1) target-source constraints (TS), directly corresponding to metaphorical mappings (e.g., marriage and mechanism); and (2) target-target constraints (TT), where two target concepts were associated with the same source domain (e.g., marriage and democracy). Tables 1, 2 and 3 show some examples of TS and TT constraints for the three languages. One pair of constraints (relationship & trade (TT) and relationship & vehicle (TS)) was excluded from the set, since relationship can only be translated into Spanish and Russian by a plural form (e.g., relaci\u00f3nes). We thus used 29 TT constraints and 29 TS constraints in our experiments.", "cite_spans": [], "ref_spans": [ { "start": 579, "end": 596, "text": "Tables 1, 2 and 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Constrained Setting.", "sec_num": "4.2.2" }, { "text": "We experimented with different constraint enforcement parameter settings (\u03b2 = 0.25, 1.0, 4.0) in order to investigate the effect of the constraints on the overall partition of the semantic space. The positive effects of the constraints were most strongly observed with \u03b2 = 4.0, and we thus used this setting in our experiments.
Examples of clusters generated with the use of constraints in the three languages are shown in Figures 9, 10 and 11. Clusters of Russian nouns (unconstrained setting; the source domain labels in the figure are suggested by the authors for clarity, the system does not assign any labels).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Setting.", "sec_num": "4.2.2" }, { "text": "Clusters of Russian verbs. The constraints were generated according to the following procedure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "TS constraints:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "r Select 30 target concepts. r For each of the target concepts select a source concept that it is associated with. This results in 30 pairs of TS constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "r For each of the resulting 30 TS pairs of concepts, select another target concept associated with the given source.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TT constraints:", "sec_num": "2." }, { "text": "r Pair the two target concepts into a TT constraint. r Each concept should appear in the set of constraints only once. 10 We created 30 TS and 30 TT constraint pairs following this procedure. The source and target concepts in the constraints were selected from the lists of 2,000 nouns that we clustered in the three languages. Constraints were selected and validated by the authors (who are native speakers of the respective languages) without taking the output of the unconstrained clustering step into account (i.e., prior to having seen it). The lists of constraints were first created through individual introspection, and then finalized through discussion.
Our expectation is that the TT constraints are better suited to aid metaphor discovery, as the noun clusters tend to naturally contain distinct target domains associated with the same source. The TT constraints are designed to reinforce this principle. However, introducing the TS type of constraint allows us to investigate to what extent explicitly reinforcing the source domain features in clustering allows us to harvest more target domains associated with the source.", "cite_spans": [ { "start": 119, "end": 121, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 663, "end": 681, "text": "Tables 1, 2, and 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "TT constraints:", "sec_num": "2." }, { "text": "Our analysis of the clusters has confirmed that the use of TT constraints resulted in clusters containing more diverse target concepts associated with the same source. Compare, for instance, the unconstrained and TT constrained clusters in Figure 9. The unconstrained cluster predominantly contains concepts related to politics, such as profession and ideology, albeit also capturing other target domains, such as religion and education. Adding the constraint MARRIAGE & POLITICS, however, further increases the domain diversity of the cluster, adding such target concepts as life, hope, dream, and economy. The Spanish TT constrained clustering in Figure 10 shows the wider effects of constrained clustering throughout the whole noun space. Although none of the constraints is explicitly manifested in this cluster, one can see that this cluster nonetheless contains a more diverse set of target concepts associated with the same source, as compared to the original unconstrained cluster (see Figure 10). The TS constraints, as expected, highlighted the source domain features of the target word, resulting in (for example) assigning politics to the same cluster as game terms, such as round and match in English (given the TS constraint POLITICS & GAME). These types of constraints are thus less likely to be suitable for metaphor identification, where purely target clusters are desired. These trends were evident across the three languages, as demonstrated by the examples in the respective figures.", "cite_spans": [], "ref_spans": [ { "start": 438, "end": 446, "text": "Figure 9", "ref_id": "FIGREF4" }, { "start": 847, "end": 856, "text": "Figure 10", "ref_id": null }, { "start": 1192, "end": 1202, "text": "Figure 10)", "ref_id": null } ], "eq_spans": [], "section": "TT constraints:", "sec_num": "2."
}, { "text": "Cluster: politics profession affair ideology philosophy religion competition education TT constraints: Cluster: fibre marriage politics affair career life hope dream religion education economy TS constraints: Cluster: field england part card politics sport music tape tune guitar trick football organ instrument round match game role ball host", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unconstrained:", "sec_num": null }, { "text": "Clusters of English nouns: unconstrained and constrained settings. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "Clusters of Russian nouns: unconstrained and constrained settings less likely to be suitable for metaphor identification, where purely target clusters are desired. These trends were evident across the three languages, as demonstrated by the examples in the respective figures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 11", "sec_num": null }, { "text": "Once the clusters have been obtained, we then used a set of seed metaphorical expressions to connect the source and target clusters, thus enabling the system to recognise new metaphorical expressions. The seed expressions in the three languages were extracted from naturally-occurring text, manually annotated for linguistic metaphor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of metaphorical expressions 4.3.1 Seed expressions.", "sec_num": "4.3" }, { "text": "The seed examples used in the English experiments were extracted from the metaphor corpus created by Shutova and Teufel (2010) . Their corpus is a subset of the BNC covering a range of genres: fiction, news articles, essays on politics, international relations and history, radio broadcast (transcribed speech). As such, the corpus provides a suitable platform for testing the metaphor processing system on real-world general-domain expressions in contemporary English. 
", "cite_spans": [ { "start": 101, "end": 126, "text": "Shutova and Teufel (2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "English seed expressions", "sec_num": null }, { "text": "4.3.1 Seed Expressions. Once the clusters have been obtained, we then used a set of seed metaphorical expressions to connect the source and target clusters, thus enabling the system to recognize new metaphorical expressions.
The seed expressions in the three languages were extracted from naturally occurring text, manually annotated for linguistic metaphor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Metaphorical Expressions", "sec_num": "4.3" }, { "text": "The seed examples used in the English experiments were extracted from the metaphor corpus created by Shutova and Teufel (2010) . Their corpus is a subset of the BNC covering a range of genres: fiction; news articles; essays on politics, international relations, and history; and radio broadcast (transcribed speech).", "cite_spans": [ { "start": 101, "end": 126, "text": "Shutova and Teufel (2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "English seed expressions", "sec_num": null }, { "text": "As such, the corpus provides a suitable platform for testing the metaphor-processing system on real-world general-domain expressions in contemporary English. We extracted verb-subject and verb-direct object metaphorical expressions from this corpus. 
All phrases were included unless they fell into one of the following categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English seed expressions", "sec_num": null }, { "text": "r Phrases where the subject or object referent is unknown (e.g., containing pronouns such as \"in which they [changes] operated\") or represented by a named entity (e.g., \"Then Hillary leapt into the conversation\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English seed expressions", "sec_num": null }, { "text": "r Phrases whose metaphorical meaning is realized solely in passive constructions (e.g., \"sociologists have been inclined to [..]\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English seed expressions", "sec_num": null }, { "text": "r Multi-word metaphors (e.g., \"go on pilgrimage with Raleigh or put out to sea with Tennyson\"), because these are beyond the scope of our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English seed expressions", "sec_num": null }, { "text": "The resulting data set consists of 62 phrases that are different single-word metaphors representing verb-subject and verb-direct object relations, where a verb is used metaphorically. The phrases include, for instance, \"stir excitement,\" \"reflect enthusiasm,\" \"grasp theory,\" \"cast doubt,\" \"suppress memory,\" \"throw remark\" (verb-direct object constructions); and \"campaign surged,\" \"factor shaped [...],\" \"tension mounted,\" \"ideology embraces,\" \"example illustrates\" (subject-verb constructions). The phrases in the seed set were manually annotated for grammatical relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English seed expressions", "sec_num": null }, { "text": "We have collected a set of texts in Russian and Spanish, following the genre distribution of the English corpus of Shutova and Teufel (2010) , insofar as possible. 
Native speakers of Russian and Spanish then annotated linguistic metaphors in these corpora, following the annotation procedures and guidelines of Shutova and Teufel. We then extracted the metaphorical expressions in verb-subject and verb-direct object constructions from these data, according to the same criteria used to create the English seed set. This resulted in 72 seed expressions for Spanish and 85 seed expressions for Russian. The Spanish seed set includes, for instance, the following examples: \"vender influencia,\" \"inundar mercado,\" \"empapelar ciudad,\" \"labrarse futuro,\" \"contagiar estado\" (verb-direct object constructions); and \"violencia salpic\u00f3,\" \"debate tropez\u00f3,\" \"alegr\u00eda brota,\" \"historia gira,\" \"coraz\u00f3n salt\u00f3\" (subject-verb constructions). The expressions in the seed sets were manually annotated for the corresponding grammatical relations.", "cite_spans": [ { "start": 115, "end": 140, "text": "Shutova and Teufel (2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Russian and Spanish seed expressions", "sec_num": null }, { "text": "Each individual seed expression implies a connection between a source domain (through the source domain verb; e.g., mend) and a target domain (through the target domain noun; e.g., marriage). The seed expressions are thus used to connect source and target clusters between which metaphorical associations hold. The system then proceeds to search the respective corpus for source and target domain terms from the connected clusters within a single grammatical relation. Specifically, the system classifies verb-direct object and verb-subject relations in the corpus as metaphorical if the lexical items in the grammatical relation appear in the linked source (verb) and target (noun) clusters. 
Consider the following example sentence extracted from the BNC for English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Search.", "sec_num": "4.3.2" }, { "text": "(1) Few would deny that in the nineteenth century change was greatly accelerated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Search.", "sec_num": "4.3.2" }, { "text": "The relevant GRs identified by the parser are presented in Figure 12 . The relation between the verb accelerate and its semantic object change is expressed in the passive voice and is, therefore, tagged by RASP as an ncsubj GR. Because this GR contains terminology from associated source (MOTION) and target (CHANGE) domains, it is marked as metaphorical and so is the term accelerate, which belongs to the source domain. The search space for metaphor identification was the BNC parsed by RASP for English; the Spanish Gigaword corpus parsed by the Spanish Malt parser for Spanish; and the RuWaC parsed by the Russian Malt parser for Russian. The search was performed similarly in the three languages: The system searched the corpus for the source and target domain vocabulary within a particular grammatical relation (verbdirect object or verb-subject). Some examples of retrieved metaphorical expressions are presented in Figures 13, 14 , and 15.", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 68, "text": "Figure 12", "ref_id": "FIGREF6" }, { "start": 924, "end": 938, "text": "Figures 13, 14", "ref_id": null } ], "eq_spans": [], "section": "Corpus Search.", "sec_num": "4.3.2" }, { "text": "We applied the UNCONSTRAINED and CONSTRAINED versions of our system to identify metaphor in continuous text in the three languages. Examples of full sentences containing metaphorical expressions as annotated by the UNCONSTRAINED systems are shown in Figures 16, 17, and 18 . 
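The corpus-search step just described reduces to a membership test: a parsed relation is tagged as metaphorical when its verb falls in a source cluster and its noun in a linked target cluster. A minimal sketch (the cluster contents, the seed link, and the parsed relations below are toy stand-ins for the system's actual clusters and parser output):

```python
# Sketch of the corpus-search step: a seed links a source (verb) cluster to a
# target (noun) cluster; any parsed relation whose verb and noun fall into a
# linked cluster pair is tagged as metaphorical. All data here are toy examples.
source_clusters = {"MOTION": {"accelerate", "surge", "slow", "crawl"}}
target_clusters = {"CHANGE": {"change", "development", "campaign", "reform"}}

# The seed "campaign surged" would link MOTION (source) to CHANGE (target).
linked = {("MOTION", "CHANGE")}

def is_metaphorical(verb, noun):
    """Tag a verb-subject / verb-direct object relation as metaphorical
    if (verb, noun) appear in a linked source-target cluster pair."""
    return any(verb in source_clusters[s] and noun in target_clusters[t]
               for s, t in linked)

# Parsed grammatical relations (verb lemma, noun lemma), as a parser might yield.
relations = [("accelerate", "change"), ("accelerate", "car"), ("crawl", "reform")]
metaphors = [r for r in relations if is_metaphorical(*r)]
```

The same test applies unchanged to the Spanish and Russian corpora, since only the cluster contents differ.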
We evaluated the performance of the UNCONSTRAINED and CONSTRAINED methods in the three languages on a random sample of the extracted metaphors against human judgments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "English metaphorical expressions identified by the system for the seeds \"cast doubt\" and \"campaign surged.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 13", "sec_num": null }, { "text": "debate tropez\u00f3 (debate stumbled) (S-V) proceso empantan\u00f3 (get swamped), juicio empantan\u00f3, proceso estanc\u00f3, debate estanc\u00f3, juicio prosper\u00f3, contacto prosper\u00f3, audiencia prosper\u00f3, proceso se top\u00f3, juicio se top\u00f3, proceso se trab\u00f3, debate se trab\u00f3, proceso tropez\u00f3, juicio tropez\u00f3, contacto tropez\u00f3 ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 14", "sec_num": null }, { "text": "inundar mercado (to flood the market) (V-O) abarrotar mercado, abarrotar comercio, atestar mercado, colmar mercado, colmar comercio, copar mercado, inundar comercio, inundar negocio, llenar mercado, llenar comercio, saturar mercado, saturar venta, saturar negocio, vaciar negocio, vaciar intercambio ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 14", "sec_num": null }, { "text": "Spanish metaphorical expressions identified by the system for the seeds \"debate tropez\u00f3\" and \"inundar mercado.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 14", "sec_num": null }, { "text": "Russian metaphorical expressions identified by the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 15", "sec_num": null }, { "text": "CKM 391 Time and time again he would stare at the ground, hand on hip, if he thought he had received a bad call, and then swallow his anger and play tennis. AD9 3205 He tried to disguise the anxiety he felt when he found the comms system down, but Tammuz was nearly hysterical by this stage. AMA 349 We will halt the reduction in NHS services for long-term care and community health services which support elderly and disabled patients at home. ADK 634 Catch their interest and spark their enthusiasm so that they begin to see the product's potential. K2W 1771 The committee heard today that gangs regularly hurled abusive comments at local people, making an unacceptable level of noise and leaving litter behind them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 16", "sec_num": null }, { "text": "Retrieved English sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 16", "sec_num": null }, { "text": "Se espera que el principal mediador se re\u00fana el martes con todos los involucrados en el proceso de paz liberiano, pero es seguro que los disturbios ensombrecer\u00e1n el proceso. (violencia salpic\u00f3 -'violence splashed over (onto)') Sigue siendo la falla hist\u00f3rica, religiosa y \u00e9tnica que puede romper nuevamente la estabilidad regional [..] (rescatar seguridad -'to save security') Desea trasladar las maquiladoras de la zona fronteriza a zonas del interior, con el fin de repartir las oportunidades de empleo m\u00e1s equitativamente. (vender influencia -'to sell influence') Los precios del caf\u00e9 cayeron a principios de la actual d\u00e9cada, al abarrotarse el mercado como consecuencia del derrumbe de un sistema de cuotas de exportaci\u00f3n. (inundar mercado -'to flood the market')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 17", "sec_num": null }, { "text": "Retrieved Spanish sentences (the corresponding seed expressions are shown in brackets).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 17", "sec_num": null }, { "text": "4.4.1 Baseline. In order to show that our metaphor identification methods generalize well over the seed set and capture diverse target domains (rather than merely synonymous ones), we compared their output with that of a baseline system built upon WordNet. In the baseline system, WordNet synsets represent source and target domains in place of automatically generated clusters. The system thus expands over the seed set by using synonyms of the metaphorical verb and the target domain noun. It then searches the corpus for phrases composed of lexical items belonging to those synsets. 
For example, given a seed expression \"stir excitement,\" the baseline finds phrases such as \"arouse fervor, stimulate agitation, stir turmoil,\" and so forth. The comparison against the WordNet baseline was carried out for the English systems only, because the English WordNet is considerably more comprehensive than the Spanish or the Russian one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "4.4.1" }, { "text": "Judgments. The quality of metaphor identification for the systems and the baseline was evaluated in terms of precision with the aid of human judges. For this purpose, we randomly sampled sentences containing metaphorical expressions as annotated by the UNCONSTRAINED and CONSTRAINED systems and by the baseline (for English) and asked human annotators to decide whether these were metaphorical or not. Participants Two volunteer annotators per language participated in the experiments. They were all native speakers of the respective languages and held at least a Bachelor's degree. 
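The WordNet baseline described in Section 4.4.1 expands a seed by pairing synonyms of its verb with synonyms of its noun. A minimal sketch, with a toy synonym dictionary standing in for WordNet synsets (in the actual system these sets would come from WordNet):

```python
# Sketch of the WordNet baseline: expand a seed metaphor by pairing synonyms of
# the verb with synonyms of the target noun. The synonym sets below are toy
# stand-ins for WordNet synsets.
SYNONYMS = {
    "stir": {"stir", "arouse", "stimulate"},
    "excitement": {"excitement", "fervor", "agitation", "turmoil"},
}

def expand_seed(verb, noun):
    """All verb-noun pairs formed from the synonyms of the seed's verb and noun."""
    return {(v, n) for v in SYNONYMS.get(verb, {verb})
                   for n in SYNONYMS.get(noun, {noun})}

candidates = expand_seed("stir", "excitement")
# The corpus is then searched for these pairs in verb-object relations.
```

Because the expansion is purely synonym-based, it cannot reach non-synonymous target domains, which is exactly the limitation the clustering methods are designed to overcome.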
Materials We randomly sampled 100 sentences from the output of the UNCONSTRAINED, TT CONSTRAINED, and TS CONSTRAINED systems for each language and the WordNet baseline system for English. Each sentence contained a metaphorical expression annotated by the respective system. We then also extracted 100 random sentences containing verbs in direct object and subject relations from the corpora for each language. These examples were used as distractors in the experiments. The subjects were thus presented with a set of 500 sentences for English (UNCONSTRAINED, TT and TS CONSTRAINED, baseline, distractors) and 400 sentences for Russian and Spanish (UNCONSTRAINED, TT and TS CONSTRAINED, distractors). The sentences in the sets were randomized. An example of the sentence annotation format is given in Figure 19 . Task and guidelines The participants were asked to mark which of the expressions were metaphorical in their judgment. They were encouraged to rely on their own intuition of what a metaphor is in the annotation process. However, additional guidance in the form of the following definition of metaphor (Pragglejaz Group 2007) was also provided:", "cite_spans": [], "ref_spans": [ { "start": 488, "end": 497, "text": "Figure 19", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Soliciting Human", "sec_num": "4.4.2" }, { "text": "Please evaluate the expressions below: CKM 391 Time and time again he would stare at the ground, hand on hip, if he thought he had received a bad call, and then swallow his anger and play tennis. Metaphorical (X) Literal ( ) AD2 631 This is not to say that Paisley was dictatorial and simply imposed his will on other activists. Metaphorical ( ) Literal (X) AND 322 It's almost as if some teachers hold the belief that the best parents are those that are docile and ignorant about the school, leaving the professionals to get on with the job. Metaphorical (X) Literal ( ) K54 2685 And it approved the recommendation by Darlington Council not to have special exemptions for disabled drivers. Metaphorical ( ) Literal (X)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 19", "sec_num": null }, { "text": "Soliciting human judgments: Annotation set-up.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 19", "sec_num": null }, { "text": "1. For each verb establish its meaning in context and try to imagine a more basic meaning of this verb in other contexts. Basic meanings normally are: (1) more concrete; (2) related to bodily action; (3) more precise (as opposed to vague); (4) historically older.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 19", "sec_num": null }, { "text": "2. If you can establish a basic meaning that is distinct from the meaning of the verb in this context, the verb is likely to be used metaphorically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 19", "sec_num": null }, { "text": "We assessed the reliability of the annotations in terms of kappa (Siegel and Castellan 1988) . 
The interannotator agreement was measured at \u03ba = 0.62 (n = 2, N = 500, k = 2) in the English experiments (substantial agreement); \u03ba = 0.58 (n = 2, N = 400, k = 2) in the Spanish experiments (moderate agreement); and \u03ba = 0.64 (n = 2, N = 400, k = 2) in the Russian experiments (substantial agreement). The data suggest that the main source of disagreement between the annotators was the presence of highly conventional metaphors (e.g., verbs such as impose, convey, decline).", "cite_spans": [ { "start": 65, "end": 92, "text": "(Siegel and Castellan 1988)", "ref_id": "BIBREF90" } ], "ref_spans": [], "eq_spans": [], "section": "Interannotator agreement", "sec_num": null }, { "text": "According to previous studies (Gibbs 1984; Pragglejaz Group 2007; Shutova and Teufel 2010) such metaphors are deeply ingrained in our everyday use of language and thus are perceived by some annotators as literal expressions.", "cite_spans": [ { "start": 30, "end": 42, "text": "(Gibbs 1984;", "ref_id": "BIBREF38" }, { "start": 43, "end": 65, "text": "Pragglejaz Group 2007;", "ref_id": "BIBREF78" }, { "start": 66, "end": 90, "text": "Shutova and Teufel 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Interannotator agreement", "sec_num": null }, { "text": "The system performance was then evaluated against the elicited judgments in terms of precision. The system output was compared with the judgments of each annotator individually and the average precision across annotators for a given language is reported. The results are presented in Table 4 . These results demonstrate that the method is portable across languages, with the UNCONSTRAINED system achieving a high precision of 0.77 in English, 0.74 in Spanish, and 0.67 in Russian. As we expected, TT constraints outperformed the TS constraints in all languages. 
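The agreement scores reported above are kappa for two annotators making binary metaphorical/literal judgments. A minimal computation of Cohen's kappa on invented toy judgments (1 = metaphorical):

```python
# Cohen's kappa for two annotators over binary metaphor judgments (toy data).
ann1 = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
ann2 = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    p1, p2 = sum(a) / n, sum(b) / n               # per-annotator P(metaphorical)
    p_e = p1 * p2 + (1 - p1) * (1 - p2)           # chance agreement
    return (p_o - p_e) / (1 - p_e)

kappa = cohens_kappa(ann1, ann2)  # 0.4 on this toy sample
```

On real data with a skewed metaphorical/literal distribution, the chance-agreement term p_e grows, which is why highly conventional metaphors, where annotators split, depress kappa noticeably.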
This is likely to be the result of the explicit emphasis on the source domain features in TS-constrained clustering, which led to a number of literal expressions (containing the source domain noun) being tagged as metaphorical (e.g., approach a barrier). The effect of TT constraints is not as pronounced as we expected in English and Spanish. In Russian, however, TT constraints led to a considerable improvement of 6 percentage points in system performance, yielding the highest precision. The CONSTRAINED and UNCONSTRAINED variants of our method harvested a comparable number of metaphorical expressions. Table 5 shows the number of seeds used in our experiments in each language, the number of unique metaphorical expressions identified by the unconstrained systems for these seeds, and the total number of sentences containing these expressions as retrieved in the respective corpus. 12 These statistics demonstrate that the systems expand considerably over the small seed sets they use as training data and identify a large number of new metaphorical expressions in corpora. It should be noted, however, that the output of the systems exhibits significant overlap in the CONSTRAINED and UNCONSTRAINED settings (e.g., 68% overlap in TS-constrained and unconstrained settings, and 73% in TT-constrained and unconstrained settings in English).", "cite_spans": [ { "start": 1451, "end": 1453, "text": "12", "ref_id": null } ], "ref_spans": [ { "start": 284, "end": 291, "text": "Table 4", "ref_id": "TABREF11" }, { "start": 1170, "end": 1177, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results.", "sec_num": "4.4.3" }, { "text": "We have shown that the method leads to a considerable expansion over the seed set and operates with a high precision-that is, produces high quality annotations-in the three languages. It identifies new metaphorical expressions relying on the patterns of metaphorical use that it learns automatically through clustering. 
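The precision reported in Table 4 is the proportion of system-tagged expressions that an annotator judged metaphorical, averaged across the annotators for a language. A minimal sketch with hypothetical judgments:

```python
# Precision of system-tagged expressions against each annotator's judgments,
# averaged across annotators (toy judgments: True = judged metaphorical).
judgments = {
    "annotator1": [True, True, False, True, True],
    "annotator2": [True, False, False, True, True],
}

def precision(judged):
    """Fraction of system-tagged items the annotator accepted as metaphorical."""
    return sum(judged) / len(judged)

avg_precision = sum(precision(j) for j in judgments.values()) / len(judgments)
```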
We have conducted a data analysis to compare the UNCONSTRAINED and CONSTRAINED variants of our method and to gain insights about the effects of metaphorical constraints. Although at first glance the performance of the systems appeared not to be strongly influenced by the use of TT constraints (except in the case of Russian), the analysis of the identified expressions revealed interesting qualitative differences. According to our qualitative analysis, the TT constrained clusters exhibited a higher diversity with respect to the target domains they contained in all languages, leading to the system capturing a higher number of new metaphorical patterns, as compared to the unconstrained clusters. As a result, it discovered a more diverse set of metaphorical expressions given the same seeds. Such examples include \"mend world\" (given the seed \"mend marriage\"); \"frame rule\" (given the seed \"glimpse duty\"); or \"lodge service,\" \"fuel life,\" \"probe world,\" \"found science,\" or \"fuel economy\" (given the seed \"base career\"). Overall, our analysis has shown that even a small number of metaphorical constraints (such as 29 in our case) has global effects throughout the cluster space, that is, influences the structure of all clusters. The fact that the TT constrained method yielded a performance similar to the unconstrained method in English and Spanish and a considerably better performance in Russian suggests that such effects are desirable for metaphor processing. Another consideration that has arisen from the analysis of the system output is that the TT clustering setting may benefit from a larger cluster size in order to incorporate both similar and diverse target concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "4.5" }, { "text": "The TS constrained clusters exhibit the same trend with respect to cluster diversity. 
However, the explicit pairing of source and target concepts (that occasionally leads to them being assigned to the same cluster) produces a number of false positives, decreasing the system precision. For instance, in the case of the constraint DIFFICULTY & BARRIER, these two nouns are clustered together. As a result, given the seed \"confront problem,\" the system falsely tags expressions such as approach barrier or face barrier as metaphorical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "4.5" }, { "text": "The comparison of the English system output to that of a WordNet baseline shows that the clusters in all clustering settings capture diverse concepts, rather than merely the synonymous ones, as in the case of WordNet synsets. The clusters thus provide generalizations over the source and target domains, leading to a wider coverage and acquisition of a diverse set of metaphors. The observed discrepancy in precision between the clustering methods and the baseline (i.e., as high as 37%) can be explained by the fact that a large number of metaphorical senses are included in WordNet. This means that in WordNet synsets, source domain verbs appear together with more abstract terms. For instance, the metaphorical sense of shape in the phrase \"shape opinion\" is part of the synset \"(determine, shape, mold, influence, regulate).\" This results in the low precision of the baseline system, because it tags literal expressions (e.g., influence opinion) as metaphorical, assuming that all verbs from the synset belong to the source domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "4.5" }, { "text": "System errors were of similar nature across the three languages and had the following key sources: (1) metaphor conventionality and (2) general polysemy. 
Because a number of metaphorical uses of verbs are highly conventional (such as those in \"hold views, adopt traditions, tackle a problem\"), such verbs tend to be clustered together with the verbs that would be literal in the same context. For instance, the verb tackle is found in a cluster with solve, resolve, handle, confront, face, and so on. This results in the system tagging resolve a problem as metaphorical if it has seen \"tackle a problem\" as a seed expression. However, the errors of this type do not occur nearly as frequently as in the case of the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "4.5" }, { "text": "A number of system errors were due to cases of general polysemy and homonymy of both verbs and nouns. For example, the noun passage can mean both \"the act of passing from one state or place to the next\" and \"a section of text; particularly a section of medium length,\" as defined in WordNet. Our method performs hard clustering, that is, it does not distinguish between different word senses. Hence the noun passage occurred in only one cluster, containing concepts such as thought, word, sentence, expression, reference, address, description, and so on. This cluster models the latter meaning of passage. Given the seed phrase \"she blocked the thought,\" the system then tags a number of false positives such as block passage, impede passage, obstruct passage, and speed passage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "4.5" }, { "text": "Russian exhibited an interesting difference from English and Spanish in the organization of its word space. This is likely to be due to its rich derivational morphology. In other words, in Russian, more lexical items can be used to refer to the same concept than in English or Spanish, highlighting slightly different aspects of meaning. 
In English and Spanish, the same meaning differences tend to be expressed at the phrase level rather than at word level. For instance, the English verb to pour can be translated into Russian by at least five different verbs: lit, nalit, slit, otlit, vilit, roughly meaning to pour, to pour into, to pour out, to pour only a small amount, to pour all of the liquid out, to pour some of the liquid out, etc. 13 As a result, some Russian words tend to naturally form highly dense clusters essentially referring to a single concept (as in case of the verbs of pouring), while at the same time sharing similar distributional features with other, related but different concepts (such as sip or spill). This property suggests that it may be necessary to cluster a larger number of Russian nouns or verbs (into the same or lower number of clusters) in order to achieve the cluster coverage and diversity comparable to the English system. With respect to our experiments, this phenomenon has led to the unconstrained clusters containing more near-synonyms (such as the many variations of pouring), and the metaphorical constraints had a stronger effect in diversifying the clusters, thus allowing us to better capture new metaphorical associations.", "cite_spans": [ { "start": 744, "end": 746, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "4.5" }, { "text": "Although the diversity of the noun clusters is central to the acquisition of metaphorical patterns, it is also worth noting that in many cases the system benefits not only from dissimilar concepts within the noun clusters, but also from dissimilar concepts in the verb clusters. Verb clusters produced automatically relying on contextual features may contain lexical items with distinct, or even opposite meanings (e.g., throw and catch, take off and land). However, they tend to belong to the same semantic domain. 
It is the diversity of verb meanings within the domain cluster that allows the generalization from a limited number of seed expressions to a broader spectrum of previously unseen metaphors, non-synonymous to those in the seed set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "4.5" }, { "text": "The fact that our approach is seed-dependent is one of its possible limitations, affecting the coverage of the system. Wide coverage is essential for the practical use of the system. In order to obtain full coverage, a large and representative seed set is necessary. Although it is difficult to capture the whole variety of metaphorical language in a limited set of examples, it is possible to compile a seed set representative of common source-target domain mappings. The learning capabilities of the system can then be used to expand from those to the whole range of conventional metaphorical mappings and expressions. In addition, because the precision of the system was measured on the data set produced by expanding individual seed expressions, we would expect the expansion of new seed expressions to yield a comparable quality of annotations. Incorporating new seed expressions is thus likely to increase the recall of the system without a considerable loss in precision. However, creating seed sets for new languages may not always be practical. We thus further experiment with fully unsupervised metaphor identification techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "4.5" }, { "text": "The focus of our experiments so far has been mainly on metaphorical expressions, and metaphorical associations were modeled implicitly within the system. In addition, both the CONSTRAINED and the UNCONSTRAINED methods relied on a small amount of supervision in the form of seed expressions to identify new metaphorical language. 
In our next set of experiments, we investigate whether it is possible to learn metaphorical connections between the clusters from the data directly (without the use of metaphorical seeds for supervision) and thus to acquire a large set of explicit metaphorical associations and derive the corresponding metaphorical expressions in a fully unsupervised fashion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Metaphor Identification Experiments", "sec_num": "5." }, { "text": "This approach is theoretically grounded in cognitive science findings suggesting that abstract and concrete concepts are organized differently in the human brain (Binder et al. 2005; Warrington 2005, 2010; Huang, Lee, and Federmeier 2010; Wiemer-Hastings and Xu 2005; Adorni and Proverbio 2012) . According to Crutch and Warrington (2005) , these differences emerge from their general patterns of relation with other concepts. In this section, we present a method that learns such different patterns of association of abstract and concrete concepts with other concepts automatically. Our system performs soft hierarchical clustering of nouns to create a network (or a graph) of concepts at multiple levels of generality and to determine the strength of association between the concepts in this graph. We expect that, whereas concrete concepts would tend to naturally organize into a tree-like structure (with more specific terms descending from the more general terms), abstract concepts would exhibit a more complex pattern of association. Consider the example in Figure 20 . The figure schematically shows a small portion of the graph describing the concepts of mechanism (concrete), political system, and relationship (abstract) at two levels of generality. One can see from this graph that concrete concepts, such as bike or engine, tend to be strongly associated with one concept at the higher level in the hierarchy (mechanism). 
In contrast, abstract concepts may have multiple higher-level associates: the literal ones and the metaphorical ones. For instance, the abstract concept of democracy is literally associated with the more general concept of political system, as well as metaphorically associated with the concept of mechanism. Such multiple associations are due to the fact that political systems are metaphorically viewed as mechanisms; they can function, break, they can be oiled, and so forth. We often discuss concepts such as democracy or dictatorship using mechanism terminology, and thus a distributional learning approach would learn that they share features with political systems (from their literal uses), as well as with mechanisms (from their metaphorical uses, as shown next to the respective graph edges in the figure). Our system discovers such association patterns within the graph and uses them to identify metaphorical connections between concepts.", "cite_spans": [ { "start": 162, "end": 182, "text": "(Binder et al. 2005;", "ref_id": "BIBREF8" }, { "start": 183, "end": 205, "text": "Warrington 2005, 2010;", "ref_id": null }, { "start": 206, "end": 238, "text": "Huang, Lee, and Federmeier 2010;", "ref_id": "BIBREF45" }, { "start": 239, "end": 267, "text": "Wiemer-Hastings and Xu 2005;", "ref_id": "BIBREF105" }, { "start": 268, "end": 294, "text": "Adorni and Proverbio 2012)", "ref_id": "BIBREF0" }, { "start": 310, "end": 338, "text": "Crutch and Warrington (2005)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 1065, "end": 1074, "text": "Figure 20", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Unsupervised Metaphor Identification Experiments", "sec_num": "5." }, { "text": "The graph of concepts is built using hierarchical graph factorization clustering (HGFC) (Yu, Yu, and Tresp 2006) of nouns, yielding a network of clusters with different levels of generality. The weights on the edges of the graph indicate the level of association between the clusters (concepts). 
The system then traverses the graph, guided by the weights on its edges, to find metaphorical associations between clusters. It then generates lists of salient features for the metaphorically connected clusters and searches the corpus for metaphorical expressions describing the target domain concepts, using the verbs from the set of salient features.", "cite_spans": [ { "start": 88, "end": 112, "text": "(Yu, Yu, and Tresp 2006)", "ref_id": "BIBREF108" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Metaphor Identification Experiments", "sec_num": "5." }, { "text": "In contrast to flat clustering, which produces a partition at one level of generality, the goal of hierarchical clustering is to organize the objects into a hierarchy of clusters with different granularities. Traditional hierarchical clustering methods widely used in NLP (such as agglomerative clustering [Schulte im Walde and Brew 2001; Stevenson and Joanis 2003; Ferrer 2004; Devereux and Costello 2005] ) take decisions about cluster membership at the level of individual clusters when these are merged. As Sun and Korhonen (2011) pointed out, such algorithms suffer from two problems, error propagation and local pairwise merging, because the clustering solution is not globally optimized. In addition, they are designed to perform hard clustering of objects at each level, by successively merging the clusters. This makes them unsuitable for modeling multi-way associations between concepts within the hierarchy, even though such association patterns exist in language and reasoning (Crutch and Warrington 2005; Hill, Korhonen, and Bentz 2014) . In contrast, HGFC allows modeling of multiple relations between concepts simultaneously via a soft clustering solution. It successively derives probabilistic bipartite graphs for every level in the hierarchy. 
The algorithm delays the decisions about cluster membership of individual words until the overall graph structure has been computed, which allows it to globally optimize the assignment of words to clusters. In addition, HGFC can detect the number of levels and the number of clusters at each level of the hierarchical graph automatically. This is essential for our task as these settings are difficult to pre-define for a general-purpose concept graph.", "cite_spans": [ { "start": 318, "end": 338, "text": "Walde and Brew 2001;", "ref_id": "BIBREF82" }, { "start": 339, "end": 365, "text": "Stevenson and Joanis 2003;", "ref_id": "BIBREF93" }, { "start": 366, "end": 378, "text": "Ferrer 2004;", "ref_id": "BIBREF32" }, { "start": 379, "end": 406, "text": "Devereux and Costello 2005]", "ref_id": "BIBREF25" }, { "start": 978, "end": 1006, "text": "(Crutch and Warrington 2005;", "ref_id": "BIBREF23" }, { "start": 1007, "end": 1038, "text": "Hill, Korhonen, and Bentz 2014)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "The algorithm starts from a similarity matrix that encodes similarities between the objects. Given a set of nouns, V = {v n } N n=1 , we construct their similarity matrix W,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "using Jensen-Shannon Divergence as a similarity measure (as in the spectral clustering experiments). The matrix W in turn encodes an undirected similarity graph G, where the nouns are mapped to vertices and their similarities represent the weights w ij on the edges between vertices i and j. Such a similarity graph is schematically shown in Figure 21(a) . The clustering problem can now be formulated as partitioning of the graph G and deriving the cluster structure from it. 
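As an illustration of this step, the construction of such a similarity matrix can be sketched as follows. This is a minimal sketch, not the authors' code: the three noun distributions over verb contexts are toy values, and converting the divergence to a similarity as 1 − JSD is one common choice among several.

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) is taken as 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similarity_matrix(dists):
    """W[i, j] = 1 - JSD(d_i, d_j): higher values mean more similar nouns."""
    n = len(dists)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            W[i, j] = 1.0 - jsd(dists[i], dists[j])
    return W

# Toy feature distributions for three nouns over four verb contexts:
# the first two nouns are distributionally similar, the third is not.
dists = [[0.5, 0.3, 0.2, 0.0],
         [0.4, 0.4, 0.1, 0.1],
         [0.0, 0.1, 0.2, 0.7]]
W = similarity_matrix(dists)
```

The matrix W built this way is symmetric with ones on the diagonal, and directly defines a weighted similarity graph of the kind described here.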
The graph G and the cluster structure can be represented by a bipartite graph K(V, U), where V are the vertices on G and U = {u p } m p=1 represent the hidden m clusters. For example, as shown in Figure 21(b) , V on G can be grouped into three clusters: u 1 , u 2 , and u 3 . The matrix B denotes the n \u00d7 m adjacency matrix, with b ip being the connection weight between the vertex v i and the cluster u p . Thus, B represents the connections between clusters at an upper and lower level of clustering. In order to derive the clustering structure, we first need to compute B from the original similarity matrix. The similarities w ij in W can be interpreted as the probabilities of direct transition between v i and v j :", "cite_spans": [], "ref_spans": [ { "start": 342, "end": 354, "text": "Figure 21(a)", "ref_id": "FIGREF9" }, { "start": 673, "end": 685, "text": "Figure 21(b)", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "w ij = p(v i , v j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "The bipartite graph K also induces a similarity matrix (W') between v i and v j , with all the paths from v i to v j going through vertices in U. This means that the induced similarities w' ij are computed via the weights ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "b ip = p(v i , u p ). 
[Figure 21: (a) the similarity graph G over vertices v 1 -v 9 ; (b) the bipartite graph K grouping them into clusters u 1 , u 2 , u 3 ; (c) the induced similarities between clusters; (d) the cluster-level graph; (e) the bipartite graph mapping u 1 -u 3 to the next-level clusters q 1 , q 2 .]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(v_i, v_j) = p(v_i) p(v_j|v_i) = p(v_i) \\sum_p p(u_p|v_i) p(v_j|u_p) = p(v_i) \\sum_p \\frac{p(v_i, u_p) p(u_p, v_j)}{p(v_i) p(u_p)} = \\sum_p \\frac{p(v_i, u_p) p(u_p, v_j)}{p(u_p)} = \\sum_p \\frac{b_{ip} b_{jp}}{\\lambda_p}", "eq_num": "(12)" } ], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "where \u03bb p = \u2211 n i=1 b ip is the degree of vertex u p . The new similarity matrix W' can thus be derived as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W': w'_{ij} = \\sum_{p=1}^{m} \\frac{b_{ip} b_{jp}}{\\lambda_p} = (B \\Lambda^{-1} B^T)_{ij}", "eq_num": "(13)" } ], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "where \u039b = diag(\u03bb 1 , ..., \u03bb m ). B can then be found by minimizing the divergence distance (\u03b6) between the similarity matrices W and W':", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\min_{H, \\Lambda} \\zeta(W, H \\Lambda H^T), \\quad s.t. \\sum_{i=1}^{n} h_{ip} = 1", "eq_num": "(14)" } ], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "We remove the coupling between B and \u039b by setting H = B\u039b \u22121 . Following Yu, Yu, and Tresp (2006) we define \u03b6 as", "cite_spans": [ { "start": 72, "end": 96, "text": "Yu, Yu, and Tresp (2006)", "ref_id": "BIBREF108" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\zeta(X, Y) = \\sum_{ij} (x_{ij} \\log \\frac{x_{ij}}{y_{ij}} - x_{ij} + y_{ij})", "eq_num": "(15)" } ], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "Yu, Yu, and Tresp (2006) showed that this cost function is non-increasing under the update rule. 14", "cite_spans": [ { "start": 3, "end": 25, "text": ", Yu, and Tresp (2006)", "ref_id": "BIBREF108" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tilde{h}_{ip} \\propto h_{ip} \\sum_j \\frac{w_{ij}}{(H \\Lambda H^T)_{ij}} \\lambda_p h_{jp}, \\quad s.t. \\sum_i \\tilde{h}_{ip} = 1 \\quad (16) \\qquad \\tilde{\\lambda}_p \\propto \\lambda_p \\sum_{ij} \\frac{w_{ij}}{(H \\Lambda H^T)_{ij}} h_{ip} h_{jp}, \\quad s.t. \\sum_p \\tilde{\\lambda}_p = \\sum_{ij} w_{ij}", "eq_num": "(17)" } ], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "We optimized this cost function by alternately updating h and \u03bb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "A flat clustering algorithm can be induced by computing B and assigning a lower level node to the parent node that has the largest connection weight. 
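These alternating multiplicative updates (Equations (14)-(17)) can be sketched on a toy similarity matrix as follows. This is a minimal sketch under the assumption of dense NumPy arrays and a fixed iteration budget, not the authors' implementation.

```python
import numpy as np

def factorize(W, m, iters=200, seed=0):
    """Approximate W by H diag(lam) H^T, with column-normalized H (sum_i h_ip = 1)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    H = rng.random((n, m))
    H /= H.sum(axis=0)              # enforce sum_i h_ip = 1
    lam = np.full(m, W.sum() / m)   # enforce sum_p lam_p = sum_ij w_ij
    for _ in range(iters):
        approx = H @ np.diag(lam) @ H.T
        R = W / np.maximum(approx, 1e-12)   # ratios w_ij / (H Lam H^T)_ij
        H = H * (R @ (H * lam))             # Eq. (16): h_ip <- h_ip sum_j R_ij lam_p h_jp
        H /= H.sum(axis=0)
        lam = lam * np.einsum('ij,ip,jp->p', R, H, H)  # Eq. (17)
        lam *= W.sum() / lam.sum()
    B = H * lam                     # b_ip = h_ip * lam_p, since H = B Lam^{-1}
    return H, lam, B

# Toy 3x3 similarity matrix: the first two objects are similar to each other.
W = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
H, lam, B = factorize(W, m=2)
```

Hard flat clusters then follow by assigning each object i to the cluster argmax_p b_ip, that is, to the parent node with the largest connection weight.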
The number of clusters at any level can be determined by counting only the number of non-empty nodes (i.e., the nodes that have at least one lower-level node associated with them). To create a hierarchical graph, we repeat this process to successively add levels of clusters to the graph. To create a bipartite graph for the next level, we first need to compute a new similarity matrix for the clusters U. The similarity between clusters p(u p , u q ) can be induced from B as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "p(u p , u q ) = \u2211 i p(u p |v i ) p(v i , u q ) = (B T D \u22121 B) pq (18), where D = diag(d 1 , ..., d n ) and d i = \u2211 m p=1 b ip .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "We can then construct a new graph G 1 (Figure 21(d) ) with the clusters U as vertices, and the cluster similarities p(u p , u q ) as the connection weights. The clustering algorithm can now be applied again (Figure 21(e) ). This process can go on iteratively, leading to a hierarchical graph. The number of levels (L) and the number of clusters (m \u2113 ) at each level are detected automatically, using the method of Sun and Korhonen (2011). Clustering starts with an initial setting of the number of clusters (m 0 ) for the first level. In our experiment, we set the value of m 0 to 800. For the subsequent levels, m \u2113 is set to the number of non-empty clusters (bipartite graph nodes) on the parent level, minus 1. The matrix B is initialized randomly. We found that the actual initialization values have little impact on the final result. 
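In matrix form, the cluster-level similarities of Equation (18) amount to one line of linear algebra. A minimal sketch with a toy adjacency matrix B (illustrative values, not learned ones):

```python
import numpy as np

# Toy bipartite adjacency matrix: 3 nouns connected to 2 clusters.
B = np.array([[0.8, 0.2],
              [0.6, 0.4],
              [0.1, 0.9]])

D_inv = np.diag(1.0 / B.sum(axis=1))  # D = diag(d_i), with d_i = sum_p b_ip
W_next = B.T @ D_inv @ B              # Equation (18): similarities p(u_p, u_q)
```

W_next is symmetric and serves as the similarity matrix of the next-level graph over the clusters U.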
The rows in B are normalized after the initialization so that the values in each row add up to one.", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 51, "text": "(Figure 21(d)", "ref_id": "FIGREF9" }, { "start": 207, "end": 220, "text": "(Figure 21(e)", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "For a word v i , the probability of assigning it to cluster x (\u2113) p \u2208 X \u2113 at level \u2113 is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(x^{(\\ell)}_p | v_i) = \\sum_{x^{(\\ell-1)} \\in X_{\\ell-1}} ... \\sum_{x^{(1)} \\in X_1} p(x^{(\\ell)}_p | x^{(\\ell-1)}) ... p(x^{(1)} | v_i) = (D_1^{-1} B_1 D_2^{-1} B_2 ... D_\\ell^{-1} B_\\ell)_{ip}", "eq_num": "(19)" } ], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "m \u2113 can then be determined as the number of clusters with at least one member noun according to Equation (19). Because of the random walk property of the graph, m \u2113 is non-increasing for higher levels (Sun and Korhonen 2011). The algorithm can thus terminate when all nouns are assigned to one cluster. We run 1,000 iterations of updates of h and \u03bb (Equations (16) and (17)) for each two adjacent levels. The whole algorithm can be summarized as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "Algorithm 3 HGFC algorithm. Require: N nouns V, initial number of clusters m 1 . Compute the similarity matrix W 0 from V; build the graph G 0 from W 0 ; \u2113 \u2190 1. while m \u2113 > 1 do: factorize G \u2113\u22121 to obtain the bipartite graph K \u2113 with adjacency matrix B \u2113 (Eqs. 16, 17); build a graph G \u2113 with similarity matrix W \u2113 = B \u2113 T D \u2113 \u22121 B \u2113 according to Equation (18)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "\u2113 \u2190 \u2113 + 1 ; m \u2113 \u2190 m \u2113\u22121 \u2212 1. end while. return B \u2113 , B \u2113\u22121 , ..., B 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "The resulting graph is composed of a set of bipartite graphs defined by B \u2113 , B \u2113\u22121 , ..., B 1 . A bipartite graph has a similar structure to the one shown in Figure 20 . For a given noun, we can rank the clusters at any level according to the soft assignment probability (Equation (19)). The clusters that have no member noun are hidden from the ranking because they do not explicitly represent any concept. However, these clusters are still part of the organization of the conceptual space within the model, and they contribute to the probabilities of the clusters at upper levels (Equation (19)). We call the view of the hierarchical graph in which these empty clusters are hidden an explicit graph.", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 165, "text": "Figure 20", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Hierarchical Graph Factorization Clustering", "sec_num": "5.1" }, { "text": "Once we obtain the explicit graph of concepts, we can identify metaphorical associations based on the weights connecting the clusters at different levels. Taking a single noun (e.g., fire) as input, the system computes the probability of its cluster membership for each cluster at each level, using the weights on the edges of the graph (Equation (19)). We expect the cluster membership probabilities to indicate the level of association of the input noun with the clusters. The system can then rank the clusters at each level based on these probabilities. 
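The chained soft-assignment probability of Equation (19) can be sketched as a product of row-normalized bipartite adjacency matrices. The B matrices below are toy values, not learned ones.

```python
import numpy as np

def level_membership(Bs, level):
    """Rows of the result give p(x^(l)_p | v_i) for the requested level (1-indexed)."""
    P = None
    for B in Bs[:level]:
        D_inv = np.diag(1.0 / B.sum(axis=1))  # D_l = diag(row sums of B_l)
        step = D_inv @ B                      # transition to the next level's clusters
        P = step if P is None else P @ step
    return P

B1 = np.array([[0.8, 0.2],   # 3 nouns -> 2 clusters
               [0.6, 0.4],
               [0.1, 0.9]])
B2 = np.array([[1.0],        # 2 clusters -> 1 top-level cluster
               [0.5]])
P1 = level_membership([B1, B2], level=1)
P2 = level_membership([B1, B2], level=2)
```

Ranking the clusters at a given level for a given noun is then a matter of sorting the corresponding row of the result; empty clusters can be hidden from this ranking as described here.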
We chose level 3 as the optimal level of generality for our experiments, based on our qualitative analysis of the graph. 15 The system selects six top-ranked clusters from this level (we expect an average source concept to have no more than five typical target associates) and excludes the literal cluster containing the input concept (e.g., fire flame blaze). The remaining clusters represent the target concepts associated with the input source concept.", "cite_spans": [ { "start": 682, "end": 684, "text": "15", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Identification of Metaphorical Associations", "sec_num": "5.2" }, { "text": "Example output for the input concepts of fire and disease in English is shown in Figure 22 . One can see from the figure that each of the noun-to-cluster mappings represents a new conceptual metaphor (e.g., EMOTION is FIRE, VIOLENCE is FIRE, CRIME is a DISEASE). These mappings are exemplified in language by a number of metaphorical expressions (e.g., \"His anger will burn him,\" \"violence flared again,\" \"it's time they found a cure for corruption\"). Figures 23 and 24 show metaphorical associations identified by the Spanish and Russian systems for the same source concepts. As we can see from the figures, FEELINGS tend to be associated with FIRE in all three languages. Unsurprisingly, however, many of the identified metaphors differ across languages. For instance, VICTORY, SUCCESS, and LOOKS are viewed as FIRE in Russian, whereas IMMIGRANTS and PRISONERS have a stronger association with FIRE in English and Spanish, according to the systems. All of the languages exhibit CRIME IS A DISEASE metaphor, with Russian and Spanish also generalizing it to VIOLENCE IS A DISEASE. Interestingly, throughout our data set, Spanish data tends to exhibit more negative metaphors about CORPORATIONS, as it is demonstrated by the DISEASE example in Figure 23 . 
Although we do not claim that this output is exhaustively representative of all conceptual metaphors present in a particular culture, we believe that these examples showcase some interesting differences in the use of metaphor across languages that can be discovered via large-scale statistical processing.", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 90, "text": "Figure 22", "ref_id": null }, { "start": 452, "end": 469, "text": "Figures 23 and 24", "ref_id": "FIGREF10" }, { "start": 1243, "end": 1252, "text": "Figure 23", "ref_id": null } ], "eq_spans": [], "section": "Identification of Metaphorical Associations", "sec_num": "5.2" }, { "text": "After extracting the source-target domain mappings, we now move on to the identification of the corresponding metaphorical expressions. The system does this by harvesting the salient features that lead to the input noun being strongly associated with the extracted clusters. The salient features are selected by ranking the features according to the joint probability of the feature ( f ) occurring both with the input SOURCE: fire TARGET 1: sense hatred emotion passion enthusiasm sentiment hope interest feeling resentment optimism hostility excitement anger TARGET 2: coup violence fight resistance clash rebellion battle drive fighting riot revolt war confrontation volcano row revolution struggle TARGET 3: alien immigrant TARGET 4: prisoner hostage inmate TARGET 5: patrol militia squad warplane peacekeeper SOURCE: disease TARGET 1: fraud outbreak offense connection leak count crime violation abuse conspiracy corruption terrorism suicide TARGET 2: opponent critic rival TARGET 3: execution destruction signing TARGET 4: refusal absence fact failure lack delay TARGET 5: wind storm flood rain weather", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Metaphorical Expressions", "sec_num": "5.3" }, { "text": "Metaphorical associations discovered by the English system.", "cite_spans": [], 
"ref_spans": [], "eq_spans": [], "section": "Figure 22", "sec_num": null }, { "text": "TARGET 1: esfuerzo negocio tarea debate operaci\u00f3n operativo ofensiva gira acci\u00f3n actividad trabajo juicio campa\u00f1a gesti\u00f3n labor proceso negociaci\u00f3n TARGET 2: quiebra indignaci\u00f3n ira perjuicio p\u00e1nico caos alarma TARGET 3: reh\u00e9n refugiado preso prisionero detenido inmigrante TARGET 4: soberan\u00eda derecho independencia libertad autonom\u00eda TARGET 5: referencia sustituci\u00f3n exilio lengua reemplazo SOURCE: enfermedad (disease) TARGET 1: calentamiento inmigraci\u00f3n impunidad TARGET 2: desaceleraci\u00f3n brote fen\u00f3meno epidemia sequ\u00eda violencia mal recesi\u00f3n escasez contaminaci\u00f3n TARGET 3: petrolero fabricante gigante firma aerol\u00ednea TARGET 4: mafia TARGET 5: hamas milicia serbio talib\u00e1n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SOURCE: fuego (fire)", "sec_num": null }, { "text": "Metaphorical associations discovered by the Spanish system. source noun (w) and the target cluster (c). Under a simplified independence assumption, p(w, c| f ) = p(w| f ) \u00d7 p(c| f ). p(w| f ) and p(c| f ) are calculated as the ratio of the frequency of the feature f to the total frequency of the input noun and the cluster, respectively. The features ranked higher are expected to represent the source domain vocabulary that can be used to metaphorically describe the target concepts. Example features (verbs and their grammatical relations) extracted for the source domain noun fire and the violence cluster in English are shown in Figure 25 .", "cite_spans": [], "ref_spans": [ { "start": 634, "end": 643, "text": "Figure 25", "ref_id": "FIGREF11" } ], "eq_spans": [], "section": "Figure 23", "sec_num": null }, { "text": "We then refined the lists of features by means of selectional preference (SP) filtering. 
Many features that co-occur with the source noun and the target cluster may be general, that is, they can describe many different domains rather than being characteristic of the source domain. For example, the verb start, which is a common feature for both the fire and the violence cluster (e.g., start a war, start a fire), also co-occurs with many other arguments in a large corpus. We use SPs to quantify how well the extracted features describe the source domain (e.g., fire) by measuring how characteristic the domain word is as an argument of the verb. This allows us to filter out non-characteristic verbs, such as start in our example. We extracted nominal argument distributions of the verbs in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 23", "sec_num": null }, { "text": "SOURCE: \u043e\u0433\u043e\u043d\u044c (fire) TARGET 1: \u043e\u0431\u043b\u0438\u043a (looks) TARGET 2: \u043f\u043e\u0431\u0435\u0434\u0430 \u0443\u0441\u043f\u0435\u0445 (victory, success) TARGET 3: \u0434\u0443\u0448\u0430 \u0441\u0442\u0440\u0430\u0434\u0430\u043d\u0438\u0435 \u0441\u0435\u0440\u0434\u0446\u0435 \u0434\u0443\u0445 (soul, suffering, heart, spirit) TARGET 4: \u0441\u0442\u0440\u0430\u043d\u0430 \u043c\u0438\u0440 \u0436\u0438\u0437\u043d\u044c \u0440\u043e\u0441\u0441\u0438\u044f (country, world, life, russia) TARGET 5: \u043c\u043d\u043e\u0436\u0435\u0441\u0442\u0432\u043e \u043c\u0430\u0441\u0441\u0430 \u0440\u044f\u0434 (multitude, crowd, range) SOURCE: \u0431\u043e\u043b\u0435\u0437\u043d\u044c (disease) TARGET 1: \u0433\u043e\u0442\u043e\u0432\u043d\u043e\u0441\u0442\u044c \u0441\u043e\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0438\u0435 \u0437\u043b\u043e \u0434\u043e\u0431\u0440\u043e (evil, kindness, readiness) TARGET 2: \u0443\u0431\u0438\u0439\u0441\u0442\u0432\u043e \u043d\u0430\u0441\u0438\u043b\u0438\u0435 
\u0430\u0442\u0430\u043a\u0430 \u043f\u043e\u0434\u0432\u0438\u0433 \u043f\u043e\u0441\u0442\u0443\u043f\u043e\u043a \u043f\u0440\u0435\u0441\u0442\u0443\u043f\u043b\u0435\u043d\u0438\u0435 \u043e\u0448\u0438\u0431\u043a\u0430 \u0433\u0440\u0435\u0445 \u043d\u0430\u043f\u0430\u0434\u0435\u043d\u0438\u0435 (murder, crime, assault, mistake, sin etc.) TARGET 3: \u0434\u0435\u043f\u0440\u0435\u0441\u0441\u0438\u044f \u0443\u0441\u0442\u0430\u043b\u043e\u0441\u0442\u044c \u043d\u0430\u043f\u0440\u044f\u0436\u0435\u043d\u0438\u0435 \u043d\u0430\u0433\u0440\u0443\u0437\u043a\u0430 \u0441\u0442\u0440\u0435\u0441\u0441 \u043f\u0440\u0438\u0441\u0442\u0443\u043f \u043e\u0440\u0433\u0430\u0437\u043c (depression, tiredness, stress etc.) TARGET 4: \u0441\u0440\u0430\u0436\u0435\u043d\u0438\u0435 \u0432\u043e\u0439\u043d\u0430 \u0431\u0438\u0442\u0432\u0430 \u0433\u043e\u043d\u043a\u0430 (battle, war, race) TARGET 5: \u0430\u0441\u043f\u0435\u043a\u0442 \u0441\u0438\u043c\u043f\u0442\u043e\u043c \u043d\u0430\u0440\u0443\u0448\u0435\u043d\u0438\u0435 \u0442\u0435\u043d\u0434\u0435\u043d\u0446\u0438\u044f \u0444\u0435\u043d\u043e\u043c\u0435\u043d \u043f\u0440\u043e\u044f\u0432\u043b\u0435\u043d\u0438\u0435 (aspect, trend, phenomenon, violation, symptom) We then refined the lists of features by means of selectional preference (SP) filtering. We use SPs to quantify how well the extracted features describe the source domain (e.g. fire). We extracted nominal argument distributions of the verbs in our feature lists for verb--subject, verb--direct_object and verb--indirect_object relations. We used the algorithm of Sun and Korhonen (2009) to create SP classes and the measure of Resnik (1993) to quantify how well a particular argument class fits the verb. 
", "cite_spans": [ { "start": 1203, "end": 1226, "text": "Sun and Korhonen (2009)", "ref_id": null }, { "start": 1267, "end": 1280, "text": "Resnik (1993)", "ref_id": "BIBREF79" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 23", "sec_num": null }, { "text": "Metaphorical associations discovered by the Russian system. rage-ncsubj engulf-ncsubj erupt-ncsubj burn-ncsubj light-dobj consume-ncsubj flare-ncsubj sweep-ncsubj spark-dobj battle-dobj gut-idobj smolder-ncsubj ignite-dobj destroy-idobj spread-ncsubj damage-idobj light-ncsubj ravage-ncsubj crackle-ncsubj open-dobj fuel-dobj spray-idobj roar-ncsubj perish-idobj destroy-ncsubj wound-idobj start-dobj ignite-ncsubj injure-idobj fight-dobj rock-ncsubj retaliate-idobj devastate-idobj blaze-ncsubj ravage-idobj rip-ncsubj burn-idobj spark-ncsubj warm-idobj suppress-dobj rekindle-dobj ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 24", "sec_num": null }, { "text": "Salient features for the fire and the violence cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "our feature lists for VERB-SUBJECT, VERB-DIRECT OBJECT, and VERB-INDIRECT OBJECT relations. We used the algorithm of Sun and Korhonen (2009) to create SP classes and the measure of Resnik (1993) to quantify how well a particular argument class fits the verb. Sun and Korhonen (2009) create SP classes by distributional clustering of nouns with lexico-syntactic features (i.e., the verbs they co-occur with in a large corpus and their corresponding grammatical relations). 
Resnik measures selectional preference strength S R (v) of a predicate as a Kullback-Leibler distance between two distributions: the prior probability of the noun class P(c) and the conditional probability of the noun class given the verb P(c|v).", "cite_spans": [ { "start": 117, "end": 140, "text": "Sun and Korhonen (2009)", "ref_id": null }, { "start": 181, "end": 194, "text": "Resnik (1993)", "ref_id": "BIBREF79" }, { "start": 259, "end": 282, "text": "Sun and Korhonen (2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S_R(v) = D(P(c|v) || P(c)) = \\sum_c P(c|v) \\log \\frac{P(c|v)}{P(c)}", "eq_num": "(20)" } ], "section": "Figure 25", "sec_num": null }, { "text": "In order to quantify how well a particular argument class fits the verb, Resnik defines selectional association as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A_R(v, c) = \\frac{1}{S_R(v)} P(c|v) \\log \\frac{P(c|v)}{P(c)}", "eq_num": "(21)" } ], "section": "Figure 25", "sec_num": null }, { "text": "We rank the nominal arguments of the verbs in our feature lists using their selectional association with the verb, and then only retain the features whose top five arguments contain the source concept. 
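Equations (20) and (21) can be sketched directly from their definitions. The argument-class probabilities below are toy values chosen for illustration, not corpus estimates.

```python
import math

def strength(p_c, p_c_given_v):
    """S_R(v), Eq. (20): KL divergence of P(c|v) from the prior P(c)."""
    return sum(pv * math.log(pv / p_c[c])
               for c, pv in p_c_given_v.items() if pv > 0)

def association(p_c, p_c_given_v, c):
    """A_R(v, c), Eq. (21): class c's share of the preference strength of v."""
    pv = p_c_given_v[c]
    return (pv * math.log(pv / p_c[c])) / strength(p_c, p_c_given_v)

# Toy prior over argument classes vs. the distribution seen with "blaze".
prior = {"fire": 0.1, "building": 0.3, "event": 0.6}
given_blaze = {"fire": 0.7, "building": 0.2, "event": 0.1}
a_fire = association(prior, given_blaze, "fire")
a_event = association(prior, given_blaze, "event")
```

Under these toy numbers the fire class has a high selectional association with blaze and the event class a negative one, which is the pattern the top-five-argument filter described here exploits.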
For example, the verb start, which is a common feature for both fire and the violence cluster, would be filtered out in this way because its top five argument classes do not contain fire or any of the nouns in the violence cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "In contrast, the verbs flare or blaze would be retained as descriptive source domain vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "Similarly to the spectral clustering experiments, we then search the parsed corpus for grammatical relations, in which the nouns from the target domain cluster appear with the verbs from the source domain vocabulary (e.g., \"war blazed\" (subj), \"to fuel violence\" (dobj) for the mapping VIOLENCE is FIRE in English). The system thus annotates metaphorical expressions in text, as well as the corresponding conceptual metaphors, as shown in Figure 26 . Metaphorical expressions identified by the Spanish and Russian systems are shown in Figures 27 and 28 , respectively.", "cite_spans": [], "ref_spans": [ { "start": 439, "end": 448, "text": "Figure 26", "ref_id": null }, { "start": 535, "end": 552, "text": "Figures 27 and 28", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "hope lit (Subj), anger blazed (Subj), optimism raged (Subj), enthusiasm engulfed them (Subj), hatred flared (Subj), passion flared (Subj), interest lit (Subj), fuel resentment (Dobj), anger crackled (Subj), feelings roared (Subj), hostility blazed (Subj), light with hope (Iobj) ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FEELING IS FIRE", "sec_num": null }, { "text": "cure crime (Dobj), abuse transmitted (Subj), eradicate terrorism (Dobj), suffer from corruption (Iobj), diagnose abuse (Dobj), combat fraud (Dobj), cope with crime (Iobj), cure abuse (Dobj), eradicate corruption (Dobj), violations 
spread (Subj) ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRIME IS A DISEASE", "sec_num": null }, { "text": "Identified metaphorical expressions for the mappings FEELING IS FIRE and CRIME IS A DISEASE in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 26", "sec_num": null }, { "text": "bombardear con indignaci\u00f3n, estallar de indignaci\u00f3n, reavivar indignaci\u00f3n, detonar indignaci\u00f3n, indignaci\u00f3n estalla, consumido por p\u00e1nico, golpear por p\u00e1nico, sacudir por p\u00e1nico, contener p\u00e1nico, desatar p\u00e1nico, p\u00e1nico golpea, consumido por ira, estallar de ira, abarcado a ira, ira destruya, ira propaga, encender ira, atizar ira, detonar ira ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SENTIDO ES FUEGO (FEELING IS FIRE)", "sec_num": null }, { "text": "tratar mafia, erradicar mafia, detectar mafia, eliminar mafia, luchar contra mafia, impedir mafia, se\u00f1alar mafia, mafia propaga, mafia mata, mafia desarrolla, padecer de mafia, debilitar por mafia, contaminar con mafia ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRIMEN ES ENFERMEDAD (CRIME IS A DISEASE)", "sec_num": null }, { "text": "Identified metaphorical expressions for the mappings FEELING IS FIRE and CRIME IS A DISEASE in Spanish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 27", "sec_num": null }, { "text": "Volume ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Linguistics", "sec_num": null }, { "text": "Identified metaphorical expressions for the mappings feeling is fire and crime is a disease in English ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 26", "sec_num": null }, { "text": "Identified metaphorical expressions for the mappings feeling is fire and crime is a disease in Spanish feeling is fire crime is a disease", "cite_spans": [], "ref_spans": [], "eq_spans": [], 
"section": "Figure 27", "sec_num": null }, { "text": "Identified metaphorical expressions for the mappings feeling is fire and crime is a disease in Russian", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 28", "sec_num": null }, { "text": "Identified metaphorical expressions for the mappings FEELING IS FIRE and CRIME IS A DISEASE in Russian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 28", "sec_num": null }, { "text": "Because there is no large and comprehensive gold standard of metaphorical mappings available, we evaluated the quality of metaphorical mappings and metaphorical expressions identified by the system against human judgments. We conducted two types of evaluation: (1) precision-oriented, for both metaphorical mappings and metaphorical expressions; and (2) recall-oriented, for metaphorical expressions. In the first setting, the human judges were presented with a random sample of systemproduced metaphorical mappings and metaphorical expressions, and asked to mark the ones they considered valid as correct. In the second setting, the human annotators were presented with a set of source domain concepts and asked to write down all target concepts they associated with a given source, thus creating a gold standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.4" }, { "text": "5.4.1 Baselines. We compared the system performance with that of two baseline systems: an unsupervised agglomerative clustering baseline (AGG) for the three languages and a supervised baseline built upon Wordnet (WN) for English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.4" }, { "text": "We constructed the agglomerative clustering baseline using SciPy implementation (Oliphant 2007 ) of Ward's linkage method (Ward 1963) . The output tree was cut according to the number of levels and the number of clusters of the explicit graph detected by HGFC. 
The resulting tree was then converted into a graph by adding connections from each cluster to all the clusters one level above. We computed the connection weights as cluster distances, measured using the Jensen-Shannon divergence between the cluster centroids. This graph was then used in place of the HGFC graph in the metaphor identification experiments.", "cite_spans": [ { "start": 80, "end": 94, "text": "(Oliphant 2007", "ref_id": "BIBREF76" }, { "start": 122, "end": 133, "text": "(Ward 1963)", "ref_id": "BIBREF104" } ], "ref_spans": [], "eq_spans": [], "section": "AGG:", "sec_num": null }, { "text": "In the WN baseline, the WordNet hierarchy was used as the underlying graph of concepts to which the metaphor extraction method was applied. Given a source concept, the system extracted all of its sense-1 hypernyms two levels above and subsequently all of their sister terms. The hypernyms themselves were considered to represent the literal sense of the source noun and were therefore removed. The sister terms were kept as potential target domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WN:", "sec_num": null }, { "text": "To create our data set, we extracted 10 common source concepts that map to multiple targets from the Master Metaphor List (Lakoff, Espenson, and Schwartz 1991) and linguistic analyses of metaphor (Lakoff and Johnson 1980; Shutova and Teufel 2010). These included FIRE, CHILD, SPEED, WAR, DISEASE, BREAKDOWN, CONSTRUCTION, VEHICLE, SYSTEM, and BUSINESS. We then translated them into Spanish and Russian. Each of the systems and the baselines identified 50 source-target domain mappings for the given source domains. This resulted in a set of 150 conceptual metaphors for English (HGFC, AGG, WN), 100 for Spanish (HGFC, AGG), and 100 for Russian (HGFC, AGG). Each of these conceptual mappings represents a number of submappings, since all the target concepts are clusters or synsets.
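As a rough, hypothetical sketch (not the authors' actual code), the AGG baseline described in Section 5.4.1, Ward's linkage over feature vectors, a tree cut at the HGFC-derived cluster counts, and Jensen-Shannon weights between centroids of adjacent levels, might look as follows with SciPy; the feature vectors and cluster counts below are placeholders, not the paper's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cut_tree

def jsd(p, q):
    """Jensen-Shannon divergence (base 2) between two probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0  # m > 0 wherever a > 0, so the ratio is well defined
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def agg_baseline_graph(vectors, cluster_sizes):
    """Ward-cluster non-negative feature vectors, cut the tree at the given
    cluster counts (one per level, finest first), and weight the edge between
    each cluster and every cluster one level above by the JSD of their
    normalized centroids."""
    Z = linkage(vectors, method="ward")
    levels = [cut_tree(Z, n_clusters=k).ravel() for k in cluster_sizes]
    edges = {}
    for li, (lower, upper) in enumerate(zip(levels, levels[1:])):
        for cl in np.unique(lower):
            p = vectors[lower == cl].mean(axis=0)
            p = p / p.sum()
            for cu in np.unique(upper):
                q = vectors[upper == cu].mean(axis=0)
                q = q / q.sum()
                edges[(li, cl, cu)] = jsd(p, q)
    return edges
```

Normalizing centroids to probability distributions assumes count-like, non-negative features, which holds for the grammatical-relation co-occurrence counts used throughout the paper.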
These were then evaluated against human judgments in two different experimental settings.", "cite_spans": [ { "start": 136, "end": 173, "text": "(Lakoff, Espenson, and Schwartz 1991)", "ref_id": "BIBREF53" }, { "start": 210, "end": 235, "text": "(Lakoff and Johnson 1980;", "ref_id": "BIBREF54" }, { "start": 236, "end": 260, "text": "Shutova and Teufel 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Metaphorical Associations", "sec_num": "5.4.2" }, { "text": "The judges were presented with a randomized set of conceptual metaphors identified by the three systems. They were asked to annotate the mappings they considered valid as correct. In all our experiments, the judges were encouraged to rely on their own intuition of metaphor, but they also reviewed the metaphor annotation guidelines of Shutova and Teufel (2010) at the beginning of the experiment.", "cite_spans": [ { "start": 337, "end": 362, "text": "Shutova and Teufel (2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Setting 1 Task and guidelines", "sec_num": null }, { "text": "Participants Two judges per language, who were native speakers of English, Russian, and Spanish, participated in this experiment. All of them held at least a bachelor's degree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting 1 Task and guidelines", "sec_num": null }, { "text": "Interannotator agreement The agreement on this task was measured at \u03ba = 0.60 (n = 2, N = 150, k = 2) for English, \u03ba = 0.59 (n = 2, N = 100, k = 2) for Spanish, and \u03ba = 0.55 (n = 2, N = 100, k = 2) for Russian. The main differences in the annotators' judgments stem from the fact that some metaphorical associations are less obvious and common than others, and thus need more context (or imaginative effort) to establish.
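The agreement figures above are Cohen's kappa for two annotators (n annotators, N items, k categories). For reference, a minimal sketch of the computation over paired judgments; the toy labels in the usage example are illustrative, not the study's annotations:

```python
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Cohen's kappa for two annotators labeling the same N items."""
    assert len(ann1) == len(ann2)
    n = len(ann1)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    # Chance agreement from each annotator's marginal label distribution.
    c1, c2 = Counter(ann1), Counter(ann2)
    expected = sum(c1[lab] * c2[lab] for lab in c1.keys() | c2.keys()) / n ** 2
    return (observed - expected) / (1 - expected)
```

For example, `cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])` has observed agreement 0.75 and chance agreement 0.5, giving kappa = 0.5.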
Examples on which the judges disagreed included mappings such as INTENSITY IS SPEED, GOAL IS A CHILD, COLLECTION IS A SYSTEM, and ILLNESS IS A BREAKDOWN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting 1 Task and guidelines", "sec_num": null }, { "text": "The system performance was then evaluated against these judgments in terms of precision (P), i.e., the proportion of valid metaphorical mappings among those identified. We calculated system precision (in all experiments) as an average over both annotations. The results across the three languages are presented in Table 6.", "cite_spans": [], "ref_spans": [ { "start": 318, "end": 325, "text": "Table 6", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "To measure the recall (R) of the systems, we asked two annotators per language (native speakers with a background in metaphor, different from Setting 1) to write down up to five target concepts they strongly associated with each of the 10 source concepts. Their annotations were then aggregated into a single metaphor association gold standard, including all of the mappings listed by the annotators. The gold standard consisted of 63 mappings for English, 70 mappings for Spanish, and 68 mappings for Russian. The recall of the systems was measured against this gold standard. The results are shown in Table 6.", "cite_spans": [], "ref_spans": [ { "start": 598, "end": 605, "text": "Table 6", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Setting 2", "sec_num": null }, { "text": "For each of the identified conceptual metaphors, the systems extracted a number of metaphorical expressions from the corpus. For the purposes of this evaluation, we selected the top 50 features from the ranked feature list (as described in Section 5.3) and searched the corpus for expressions where the verbs from the feature list co-occurred with the nouns from the target cluster.
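The extraction step just described, matching verbs from the top 50 ranked features against nouns from the target cluster in subject or object position, can be sketched as follows; the triple format, relation labels, and function name stand in for real parser output and are not the paper's implementation:

```python
def find_candidate_expressions(parsed_sentences, ranked_features,
                               target_nouns, top_n=50):
    """Keep sentences in which a verb from the top-n ranked feature list
    takes a noun from the target cluster as its subject or direct object.
    parsed_sentences: iterable of (sentence, [(verb, relation, noun), ...])
    pairs, standing in for dependency-parsed corpus data."""
    verbs = set(ranked_features[:top_n])
    nouns = set(target_nouns)
    hits = []
    for sentence, triples in parsed_sentences:
        for verb, rel, noun in triples:
            # Only verb-subject and verb-object relations are considered,
            # mirroring the constructions covered by the systems.
            if verb in verbs and noun in nouns and rel in {"nsubj", "dobj"}:
                hits.append((sentence, verb, noun))
    return hits
```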
Figure 29 shows example sentences annotated by HGFC for English. The identification of metaphorical expressions was also evaluated against human judgments. Materials The judges were presented with a set of randomly sampled sentences containing metaphorical expressions as annotated by the systems and by the baselines (200 each). This resulted in a data set of 600 sentences for English (HGFC, AGG, WN), 400 sentences for Spanish (HGFC, AGG), and 400 sentences for Russian (HGFC, AGG). The order of the presented sentences was randomized. Task and guidelines The judges were asked to mark the expressions that were metaphorical in their judgment as correct, following the same guidelines as in the spectral clustering evaluation. Participants Two judges per language, who were native speakers of English, Russian, and Spanish, participated in this experiment. All of them held at least a bachelor's degree.", "cite_spans": [], "ref_spans": [ { "start": 383, "end": 392, "text": "Figure 29", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Evaluation of Metaphorical Expressions.", "sec_num": "5.4.3" }, { "text": "Interannotator agreement Their agreement on the task was measured at \u03ba = 0.56 (n = 2, N = 600, k = 2) for English, \u03ba = 0.52 (n = 2, N = 400, k = 2) for Spanish, and \u03ba = 0.55 (n = 2, N = 400, k = 2) for Russian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Metaphorical Expressions.", "sec_num": "5.4.3" }, { "text": "The system performance was measured against these annotations in terms of an average precision across judges. The results are presented in Table 7 . 
HGFC outperforms both AGG and WN, yielding a precision of 0.65 in English, 0.54 in Spanish, and 0.59 in Russian.", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 146, "text": "Table 7", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "As expected, HGFC outperforms both AGG and WN baselines in all evaluation settings. AGG has previously been shown to be less accurate than HGFC in the verb clustering task (Sun and Korhonen 2011). Our analysis of the noun clusters indicated that HGFC tends to produce purer and more complete clusters than AGG. Another important reason AGG fails is that it by definition organizes all concepts into a tree and optimizes its solution locally, taking into account a small number of clusters at a time. However, being able to discover connections between more distant domains and optimizing globally over all concepts is crucial for metaphor identification. This makes AGG less suitable for the task, as demonstrated by our results. Nevertheless, AGG identified a number of interesting mappings missed by HGFC (e.g. CAREER IS A CHILD, LANGUAGE IS A SYSTEM, CORRUPTION IS A VEHICLE, EMPIRE IS A CONSTRUCTION).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "5.5" }, { "text": "EG0 275 In the 1930s the words means test was a curse, fuelling the resistance against it both among the unemployed and some of its administrators. CRX 1054 These problems would be serious enough even if the rehabilitative approach were demonstrably successful in curing crime. HL3 1206 [..] he would strive to accelerate progress towards the economic integration of the Caribbean. HXJ 121 [..] it is likely that some industries will flourish in certain countries as the market widens. CEM 2622 The attack in Bautzen, Germany, came as racial violence flared again. Metaphors tagged by the English HGFC system (in bold).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 29", "sec_num": null }, { "text": "AGG also identified a number of mappings in common with HGFC (e.g.
DEBATE IS A WAR, DESTRUCTION IS A DISEASE). The fact that both HGFC and AGG identified valid metaphorical mappings across languages confirms our hypothesis that clustering techniques are well suited to detect metaphorical patterns in a distributional word space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 29", "sec_num": null }, { "text": "The WN system also identified a few interesting metaphorical mappings (e.g., COGNITION IS FIRE, EDUCATION IS CONSTRUCTION), but its output is largely dominated by concepts similar to the source noun and contains some unrelated concepts. The comparison of HGFC to WN shows that HGFC identifies meaningful properties and relations of abstract concepts that cannot be captured in a tree-like classification (even an accurate, manually created one such as WordNet). The latter is more appropriate for concrete concepts, and a more flexible representation is needed to model abstract concepts. The fact that both baselines identified some valid metaphorical associations, relying on less suitable conceptual graphs, suggests that our way of traversing the graph is a viable approach to the identification of metaphorical associations in principle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 29", "sec_num": null }, { "text": "HGFC identifies valid metaphorical associations for a range of source concepts. One of them (CRIME IS A DISEASE, or CRIME IS A VIRUS) has already been validated in behavioral experiments with English speakers (Thibodeau and Boroditsky 2011). The most frequent type of error of HGFC across the three languages is the presence of target clusters similar or closely related to the source noun. For instance, the source noun CHILD tends to be linked to other "human" clusters across languages: for example, the parent cluster in English; the student, resident, and worker clusters in Spanish; and the crowd, journalist, and emperor clusters in Russian.
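One conceivable way to flag such same-domain clusters automatically, a hypothetical document co-occurrence filter that is not a component of the system described here, is:

```python
def same_domain(source_noun, cluster_nouns, doc_index, threshold=0.3):
    """Hypothetical same-domain check: treat a target cluster as literal,
    rather than metaphorical, if its nouns appear in a large share of the
    documents containing the source noun. doc_index maps each word to the
    set of document ids it occurs in; the threshold is a placeholder."""
    src_docs = doc_index.get(source_noun, set())
    if not src_docs:
        return False
    shares = [len(src_docs & doc_index.get(n, set())) / len(src_docs)
              for n in cluster_nouns]
    return sum(shares) / len(shares) >= threshold
```

For instance, with a toy index in which "parent" co-occurs with "child" in most documents while "fire" does not, the parent cluster would be filtered out as same-domain and the fire cluster kept as a metaphorical candidate.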
The clusters from the same domain can, however, be filtered out if their nouns frequently occur in the same documents as the source noun (in a large corpus), that is, by topical similarity. The latter is less likely to be the case for the metaphorically associated nouns. However, we leave such an experiment to future work.", "cite_spans": [ { "start": 222, "end": 253, "text": "(Thibodeau and Boroditsky 2011)", "ref_id": "BIBREF96" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 29", "sec_num": null }, { "text": "The system errors in the identification of metaphorical expressions stem either from the multiple word senses of the salient features or from the source and target sharing some physical properties (e.g., one can "die from crime" and "die from a disease," an error that manifested itself in all three languages). Some identified expressions invoke a chain of mappings (e.g., ABUSE IS A DISEASE, DISEASE IS AN ENEMY for "combat abuse"); however, such chains are not yet incorporated into the system. In some cases, the same salient feature could be used metaphorically in both the source and target domains (e.g., "to open fire" vs. "to open one's heart" in Russian). In this example the expression is correctly tagged as metaphorical, although it represents a different conceptual metaphor than FEELING IS FIRE. The performance of AGG in the identification of metaphorical expressions is higher than in the identification of metaphorical associations, because it outputs only a few expressions for the incorrect associations. In contrast, WN tagged a large number of literal expressions due to the incorrect prior identification of the underlying associations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 29", "sec_num": null }, { "text": "The performance of the Russian and Spanish systems is slightly lower than that of the English system. This is likely due to errors from the data preprocessing step (i.e., parsing).
The quality of parser output in English is likely to be higher than in Russian or Spanish, for which fewer parsers exist. Another important difference lies in the corpora used. Whereas the English and Spanish systems have been trained on the English and Spanish Gigaword corpora (containing data extracted from news sources), the Russian system has been trained on RuWaC, a Web corpus containing a greater amount of noisy text (including misspellings, slang, etc.). The difference in corpora is also likely to have an impact on the mappings identified-that is, different target domains and different metaphorical mappings may be prevalent in different types of data. However, because our goal is to test the capability of clustering techniques to identify metaphorical associations and expressions in principle, the specific types of metaphors identified from different corpora (e.g., the domains covered) are less relevant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 29", "sec_num": null }, { "text": "Importantly, our results show that the method is portable across languages. This is an encouraging result, particularly because HGFC is unsupervised, making metaphor processing technology available to a large number of languages for which metaphor-annotated data sets and lexical resources do not exist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 29", "sec_num": null }, { "text": "By automatically discovering metaphors in a data-driven way, our methods allow us to investigate and compare the semantic spaces of different languages and gain insights for cross-linguistic research on metaphor. The contrastive study of differences in metaphor is important for several reasons. Understanding how metaphor varies across languages could provide clues about the roles of metaphor and cognition in structuring each other (K\u00f6vecses 2004).
Contrastive differences in metaphor also have implications for second-language learning (Barcelona 2001) , and thus a systematic understanding of variation of metaphor across languages would benefit educational applications. From an engineering perspective, metaphor poses a challenge for machine translation systems (Zhou, Yang, and Huang 2007; Shutova, Teufel, and Korhonen 2013) , and can even be difficult for human translators (Sch\u00e4ffner 2004) .", "cite_spans": [ { "start": 540, "end": 556, "text": "(Barcelona 2001)", "ref_id": "BIBREF3" }, { "start": 769, "end": 797, "text": "(Zhou, Yang, and Huang 2007;", "ref_id": "BIBREF111" }, { "start": 798, "end": 833, "text": "Shutova, Teufel, and Korhonen 2013)", "ref_id": "BIBREF88" }, { "start": 884, "end": 900, "text": "(Sch\u00e4ffner 2004)", "ref_id": "BIBREF81" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Linguistic Analysis and Metaphor Variation", "sec_num": "6." }, { "text": "Although some aspects of the way that metaphor structures language may be widely shared and near-universal (K\u00f6vecses 2004), there are significant differences in how conventionalized and pervasive different metaphors are in different languages and cultures. The earliest analyses of cross-lingual metaphorical differences were essentially qualitative. 16 In these studies, the authors typically produce examples of metaphors that they argue are routine and widely used in one language, but unconventionalized or unattested in another language. Languages that have been studied in such a way include Spanish (Barcelona 2001) , Chinese (Yu 1998) , Japanese (Matsuki 1995) , and Zulu (Taylor and Mbense 1998) . One drawback of these studies is that they rely on the judgment of the authors, who may not be representative of the speakers of the language at large. They also do not allow for subtler differences in metaphor use across languages to be exposed. 
One possibility for addressing these shortcomings involves manually searching corpora in two languages and counting all instances of a metaphorical mapping. This is the approach taken by Charteris-Black and Ennis (2001) with respect to financial metaphors in English and Spanish. They find several metaphors that are much more common in one language than in the other. However, the process of manually identifying instances is time-consuming and expensive, limiting the size of corpora and the scope of metaphors that can be analyzed in a given time frame. As a result, it can be difficult to draw broad conclusions from these studies.", "cite_spans": [ { "start": 351, "end": 353, "text": "16", "ref_id": null }, { "start": 606, "end": 622, "text": "(Barcelona 2001)", "ref_id": "BIBREF3" }, { "start": 633, "end": 642, "text": "(Yu 1998)", "ref_id": "BIBREF109" }, { "start": 654, "end": 668, "text": "(Matsuki 1995)", "ref_id": "BIBREF65" }, { "start": 680, "end": 704, "text": "(Taylor and Mbense 1998)", "ref_id": "BIBREF95" }, { "start": 1152, "end": 1174, "text": "Black and Ennis (2001)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Linguistic Analysis and Metaphor Variation", "sec_num": "6." }, { "text": "Our systems present a step towards a large-scale data-driven analysis of linguistic variation in the use of metaphor. In order to investigate whether statistically learned patterns of metaphor can capture such variation, we conducted an analysis of the metaphors identified by our systems in the three languages. We ran the HGFC systems with a larger set of source domains taken from the literature on metaphor and conducted a qualitative analysis of the resulting metaphorical mappings to identify the similarities and the differences across languages. As one might expect, the majority of metaphorical mappings identified by the systems are present across languages.
For instance, VIOLENCE and FEELINGS are associated with FIRE in all three languages, DEBATE or ARGUMENT are associated with WAR, CRIME is universally associated with DISEASE, MONEY with LIQUID, and so on. However, although the instances of a conceptual metaphor may be present in all three languages, interestingly, it is often the case that the same conceptual metaphor is lexicalized differently in different languages. For instance, although FEELINGS are generally associated with LIQUIDS in both English and Russian, the expression \"stir excitement\" is English-specific and cannot be used in Russian. At the same time, the expression \"mixed feelings\" (another instantiation of the same conceptual metaphor) is common in both languages. Our systems allow us to trace such variation through the different metaphorical expressions that they identify for the same or similar conceptual metaphors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Linguistic Analysis and Metaphor Variation", "sec_num": "6." }, { "text": "Importantly, besides the linguistic variation our methods are also able to capture and generalize conceptual differences in metaphorical use in the three languages. For instance, they exposed some interesting cross-linguistic differences pertaining to the target domains of business and finance. The Spanish conceptual metaphor output manifested rather negative metaphors about business, market, and commerce: BUSINESS was typically associated with BOMB, FIRE, WAR, DISEASE, and ENEMY. Although it is the case that BUSINESS is typically discussed in terms of a WAR or a RACE in English and Russian, the other four Spanish metaphors are uncommon. Russian, in fact, has rather positive metaphors for the related concepts of MONEY and WEALTH, which are strongly associated with SUN, LIGHT, STAR, and FOOD, possibly indicating that money is viewed primarily as a way to improve one's own life. 
An example of the linguistic instantiations of the Russian MONEY IS LIGHT metaphor and their corresponding word-for-word English translations is shown in Figure 30. We have validated that the word-for-word English translations of the Russian expressions in the Figure are not typically used in English by searching the BNC, where none of the expressions were found. In contrast, in English, MONEY is frequently discussed as a WEAPON, that is, a means to achieve a goal or win a struggle (which is directly related to the BUSINESS IS A WAR metaphor). At the same time, the English data exhibit positive metaphors for POWER and INFLUENCE, which are viewed as LIGHT, SUN, or WING. In Russian, on the contrary,", "cite_spans": [], "ref_spans": [ { "start": 1043, "end": 1052, "text": "Figure 30", "ref_id": null } ], "eq_spans": [], "section": "Cross-Linguistic Analysis and Metaphor Variation", "sec_num": "6." }, { "text": "Linguistic instantiations of the MONEY IS LIGHT metaphor in Russian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "POWER is associated with BOMB and BULLET, perhaps linking it to the concepts of physical strength and domination. However, the concepts of FREEDOM and INDEPENDENCE were also associated with WING, WEAPON, and STRENGTH in the Russian data. English and Spanish data also exhibited interesting differences with respect to the topic of immigration. According to the system output, in English IMMIGRANTS tend to be viewed as FIRE or ENEMIES, possibly indicating danger.
In Spanish, on the other hand, IMMIGRANTS and, more specifically, undocumented people have a stronger association with ANIMALS, which is likely a reference to them as victims, being treated like animals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "Although these differences may be a direct result of the contemporary socioeconomic context and political rhetoric, and are likely to change over time, other conceptual differences have a deeper grounding in our culture and way of life. For instance, the concept of BIRTH tends to be strongly associated with LIGHT in Spanish and BATTLE in Russian, each metaphor highlighting a different aspect of birth. The differences that stem from highly conventional metaphors seem to be even more deeply entrenched in the conceptual system of the speakers of a language. For instance, our analysis of system-produced data revealed systematic differences in discussing quantity and intensity in the three languages. Let us consider, for instance, the concept of heat. In English, heat intensity is typically measured on a vertical scale; for example, it is common to say \"low heat\" and \"high heat.\" In Russian, heat intensity is rather thought of in terms of strength; for example, one would say \"strong heat\" or \"weak fire.\" As opposed to this, Spanish speakers talk about heat in terms of its speed; for example, \"fuego lento\" (literally \"slow fire\") refers to \"low heat\" (on the stove). This metaphor also appears to generalize to other phenomena whose level or quantity can be assessed (e.g., INTELLIGENCE is also discussed in terms of SPEED in Spanish, HEIGHT in English, and STRENGTH in Russian). Such a systematic variation provides new insights for the study of cognition of quantity, intensity, and scale. 
Statistical methods provide a tool to expose such variation through automatic analysis of large quantities of linguistic data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "More generally, such systematic cross-linguistic differences in the use of metaphor have significance beyond language and can be associated with contrastive behavioral patterns across the different linguistic communities (Casasanto and Boroditsky 2008; Fuhrman et al. 2011) . Psychologists Thibodeau and Boroditsky (2011) investigated how the metaphors we use affect our decision-making. They presented two groups of human subjects with two different texts about crime. In the first text, crime was metaphorically portrayed as a virus and in the second as a beast. The two groups were then asked a set of questions on how to tackle crime in the city. As a result, the first group tended to opt for preventive measures in tackling crime (e.g., stronger social policies), whereas the second group converged on punishment-or restraint-oriented measures. According to the researchers, their results demonstrate that metaphors have profound influence on how we conceptualize and act with respect to societal issues. Although Thibodeau and Boroditsky's study did not investigate cross-linguistic contrasts in the use of metaphor, it still suggests that metaphor-induced differences in decision-making may manifest themselves across communities. Applying data-driven methods such as ours to investigate variation in the use of metaphor across (linguistic) communities would allow this research to be scaled-up, using statistical patterns learned from linguistic data to inform experimental psychology.", "cite_spans": [ { "start": 221, "end": 252, "text": "(Casasanto and Boroditsky 2008;", "ref_id": "BIBREF20" }, { "start": 253, "end": 273, "text": "Fuhrman et al. 
2011)", "ref_id": "BIBREF34" }, { "start": 290, "end": 321, "text": "Thibodeau and Boroditsky (2011)", "ref_id": "BIBREF96" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "We have presented three methods for metaphor identification that acquire metaphorical patterns from distributional properties of concepts. All of the methods (UNCONSTRAINED, CONSTRAINED, HGFC) are based on distributional word clustering using lexico-syntactic features. The methods are minimally supervised and unsupervised and, as our experiments have shown, they can be successfully ported across languages. Despite requiring little supervision, their performance is competitive even in comparison to fully supervised systems. 17 In addition, the methods identify a large number of new metaphorical expressions in corpora (e.g., given the English seed \"accelerate change,\" the UNCONSTRAINED method identifies as many as 113 new, different metaphors in the BNC), enabling large-scale cross-linguistic analyses of metaphor.", "cite_spans": [ { "start": 529, "end": 531, "text": "17", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "7." }, { "text": "Our experimental results have demonstrated that lexico-syntactic features are effective for clustering and metaphor identification in all three languages. However, we have also identified important differences in the structure of the semantic spaces across languages. For instance, in Russian, a morphologically rich language, the semantic space is structured differently from English or Spanish. Because of its highly productive derivational morphology, Russian exhibits a higher number of near-synonyms (often originating from the same stem) for both verbs and nouns. 
This has an impact on clustering, in that (1) more nouns or verbs need to be clustered in order to represent a concept with sufficient coverage and (2) the clusters need to be larger, often containing tight subclusters of derivational word forms. While playing a role in metaphor identification, this finding may also have implications for other multilingual NLP tasks beyond metaphor research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "7." }, { "text": "Importantly, our results confirm the hypothesis that metaphor and cross-domain vocabulary projection are naturally encoded in the distributional semantic spaces in all three languages. As a result, metaphorical mappings could be learned from distributional properties of concepts using clustering techniques. The differences in performance across languages are mainly explained by the differences in the quality of the data and pre-processing tools available for them. However, both our quantitative results and the analysis of the system output confirm that all systems successfully discover metaphorical patterns from distributional information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "7." }, { "text": "We have investigated different kinds of supervision: learning from a small set of metaphorical expressions, metaphorical mappings, and without supervision. Although both minimally supervised (UNCONSTRAINED, CONSTRAINED) and unsupervised (HGFC) methods successfully discover new metaphorical patterns from the data, our results indicate that minimally supervised methods achieve a higher precision. The use of annotated metaphorical mappings for supervision at the clustering stage does not significantly alter the performance of the system, because their patterns are already to a certain extent encoded in the data and can be learned. 
However, metaphorical expressions are a good starting point in learning metaphorical generalizations in conjunction with clustering techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "7." }, { "text": "Despite its comparatively lower performance, we believe that HGFC may prove to be a practically useful tool for NLP applications. Because it does not require any metaphor annotation, it can be easily applied to a new language (including low resource languages) for which a large enough corpus and a shallow syntactic parser are available. In addition, whereas the semi-supervised CONSTRAINED and UNCONSTRAINED methods discover metaphorical expressions somewhat related to the seeds, the range of metaphors discovered by HGFC is unrestricted and thus considerably wider. Since the two types of methods differ in their precision vs. their coverage, one may also consider a combination of these methods when designing a metaphor processing component for a real-world application-or, depending on the needs of the application, one may choose a more suitable one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "7." }, { "text": "In the future, the models need to be extended to identify not only verb-subject and verb-object metaphors, but also metaphorical expressions in other syntactic constructions (e.g., adjectival or nominal metaphors). Previous distributional clustering and lexical acquisition research has shown that it is possible to model the meanings of a range of word classes using similar techniques (Hatzivassiloglou and McKeown 1993; Boleda Torrent and Alonso i Alemany 2003; Brockmann and Lapata 2003; Zapirain, Agirre, and M\u00e0rquez 2009) . We thus expect our methods to be equally applicable to metaphorical uses of other word classes and syntactic constructions. 
For the spectral clustering systems, such an extension would require incorporating adjectival and nominal modifier features into clustering, clustering adjectives as well, and adding seed expressions representing a variety of syntactic constructions. The extension of HGFC would be more straightforward, requiring only the ranking of additional adjectival and nominal features that the metaphorically associated clusters in the graph share.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "7." }, { "text": "The results of our HGFC experiments also offer support for the cognitive science findings on the differences in the organization of abstract and concrete concepts in the human brain (Crutch and Warrington 2005; Wiemer-Hastings and Xu 2005; Huang, Lee, and Federmeier 2010; Adorni and Proverbio 2012). Specifically, our experiments have shown that abstract concepts exhibit both within-domain and cross-domain association patterns (i.e., the literal ones and the metaphorical ones) and that the respective patterns can be successfully learned from linguistic data via the words' distributional properties. The metaphorical patterns that the system is able to acquire (for different languages or different data sets) can in turn be used to guide further cognitive science and psychology research on metaphor and concept representation more generally.
In addition, we believe that the presented techniques may have applications in NLP beyond metaphor processing and would impact a number of tasks in computational semantics that model the properties of and relations between concepts in a distributional space.", "cite_spans": [ { "start": 177, "end": 205, "text": "(Crutch and Warrington 2005;", "ref_id": "BIBREF23" }, { "start": 206, "end": 234, "text": "Wiemer-Hastings and Xu 2005;", "ref_id": "BIBREF105" }, { "start": 235, "end": 267, "text": "Huang, Lee, and Federmeier 2010;", "ref_id": "BIBREF45" }, { "start": 268, "end": 294, "text": "Adorni and Proverbio 2012)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "7." }, { "text": "http://translate.google.com/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our experiments we use a syntax-aware distributional space, where the vectors are constructed using the words' grammatical relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.ota.ox.ac.uk/headers/2541.xml. 4 http://www.illc.uva.nl/EuroWordNet/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://wordnet.princeton.edu/man/lexnames.5WN.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Hard clustering produces a partition where every object belongs to one cluster only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For a comprehensive review of spectral clustering algorithms see VonLuxburg (2007). Our description of spectral clustering here is largely based on this review. 
Note that any symmetric matrix with non-negative, real-valued elements can therefore be taken to represent a weighted, undirected graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Some suggested source concepts are given in the figures for clarity only. The system does not use or assign those labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This applies to both the source and the target concepts. This requirement was imposed to ensure that the constraints are enforced pairwise during clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We were limited in resources when recruiting annotators for Russian and Spanish, and thus had to restrict the number of participants to two per language. However, we would like to note that it is generally desirable to recruit multiple annotators for a metaphor annotation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the English BNC is smaller in size than the Spanish Gigaword or the Russian RuWaC, leading to fewer English sentences retrieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Similar examples can be found in other languages with a highly productive derivational morphology, such as German.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Yu, Yu, and Tresp (2006) for the full proof.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "However, the level of granularity can be adapted depending on the task and application in mind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See K\u00f6vecses (2004) for a review.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num":
null }, { "text": "The precision typically reported for supervised metaphor identification is in the range of 0.56-0.78, with the highest performing systems frequently evaluated within a limited domain (Shutova 2015).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank our anonymous reviewers for their most insightful comments. Ekaterina Shutova's research is supported by the Leverhulme Trust Early Career Fellowship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The neural manifestation of the word concreteness effect: An electrical neuroimaging study", "authors": [ { "first": "Roberta", "middle": [], "last": "Adorni", "suffix": "" }, { "first": "Alice", "middle": [ "Mado" ], "last": "Proverbio", "suffix": "" } ], "year": 2012, "venue": "Neuropsychologia", "volume": "50", "issue": "5", "pages": "880--891", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adorni, Roberta and Alice Mado Proverbio. 2012. The neural manifestation of the word concreteness effect: An electrical neuroimaging study. Neuropsychologia, 50(5):880-891.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Annotating a Russian corpus of conceptual metaphor: A bottom-up approach", "authors": [ { "first": "Yulia", "middle": [], "last": "Badryzlova", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Shekhtman", "suffix": "" }, { "first": "Yekaterina", "middle": [], "last": "Isaeva", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Kerimov", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "77--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Badryzlova, Yulia, Natalia Shekhtman, Yekaterina Isaeva, and Ruslan Kerimov. 2013.
Annotating a Russian corpus of conceptual metaphor: A bottom-up approach. In Proceedings of the First Workshop on Metaphor in NLP, pages 77-86, Atlanta, GA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A feasibility study on low level techniques for improving parsing accuracy for Spanish using MaltParser", "authors": [ { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Jes\u00fas", "middle": [], "last": "Herrera", "suffix": "" }, { "first": "Virginia", "middle": [], "last": "Francisco", "suffix": "" }, { "first": "Pablo", "middle": [], "last": "Gerv\u00e1s", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 6th Hellenic Conference on Artificial Intelligence: Theories, Models and Applications", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ballesteros, Miguel, Jes\u00fas Herrera, Virginia Francisco, and Pablo Gerv\u00e1s. 2010. A feasibility study on low level techniques for improving parsing accuracy for Spanish using MaltParser. In Proceedings of the 6th Hellenic Conference on Artificial Intelligence: Theories, Models and Applications, pages 39-48, Athens.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On the systematic contrastive analysis of conceptual metaphors: Case studies and proposed methodology", "authors": [ { "first": "Antonio", "middle": [], "last": "Barcelona", "suffix": "" } ], "year": 2001, "venue": "Applied Cognitive Linguistics II: Language Pedagogy. Mouton-De Gruyter", "volume": "", "issue": "", "pages": "117--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barcelona, Antonio. 2001. On the systematic contrastive analysis of conceptual metaphors: Case studies and proposed methodology. In Martin P\u00fctz, Susanne Niemeier, Ren\u00e9 Dirven (editors), Applied Cognitive Linguistics II: Language Pedagogy. 
Mouton-De Gruyter, Berlin, pages 117-146.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An artificial intelligence approach to metaphor understanding", "authors": [ { "first": "John", "middle": [], "last": "Barnden", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2002, "venue": "Theoria et Historia Scientiarum", "volume": "6", "issue": "1", "pages": "399--412", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barnden, John and Mark Lee. 2002. An artificial intelligence approach to metaphor understanding. Theoria et Historia Scientiarum, 6(1):399-412.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Argumentation-relevant metaphors in test-taker essays", "authors": [ { "first": "Beigman", "middle": [], "last": "Klebanov", "suffix": "" }, { "first": "Beata", "middle": [], "last": "", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Flor", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beigman Klebanov, Beata and Michael Flor. 2013. Argumentation-relevant metaphors in test-taker essays. 
In Proceedings of the First Workshop on Metaphor in NLP, pages 11-20, Atlanta, GA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Different texts, same metaphors: Unigrams and beyond", "authors": [ { "first": "Beigman", "middle": [], "last": "Klebanov", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Beata", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Leong", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "", "middle": [], "last": "Flor", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Second Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "11--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beigman Klebanov, Beata, Ben Leong, Michael Heilman, and Michael Flor. 2014. Different texts, same metaphors: Unigrams and beyond. In Proceedings of the Second Workshop on Metaphor in NLP, pages 11-17, Baltimore, MD.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Discriminative learning of selectional preference from unlabeled text", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Randy", "middle": [], "last": "Goebel", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08", "volume": "", "issue": "", "pages": "59--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bergsma, Shane, Dekang Lin, and Randy Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 59-68, Honolulu, HI.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Distinct brain systems for processing concrete and abstract concepts", "authors": [ { "first": "Jeffrey", "middle": [ "R" ], "last": "Binder", "suffix": "" }, { "first": "Chris", "middle": [ "F" ], "last": "Westbury", "suffix": "" }, { "first": "Kristen", "middle": [ "A" ], "last": "Mckiernan", "suffix": "" }, { "first": "Edward", "middle": [ "T" ], "last": "Possing", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Medler", "suffix": "" } ], "year": 2005, "venue": "Journal of Cognitive Neuroscience", "volume": "17", "issue": "6", "pages": "905--917", "other_ids": {}, "num": null, "urls": [], "raw_text": "Binder, Jeffrey R., Chris F. Westbury, Kristen A. McKiernan, Edward T. Possing, and David A. Medler. 2005. Distinct brain systems for processing concrete and abstract concepts. Journal of Cognitive Neuroscience, 17(6):905-917.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A clustering approach for the nearly unsupervised recognition of nonliteral language", "authors": [ { "first": "Julia", "middle": [], "last": "Birke", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL-06", "volume": "", "issue": "", "pages": "329--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Birke, Julia and Anoop Sarkar. 2006. A clustering approach for the nearly unsupervised recognition of nonliteral language. In Proceedings of EACL-06, pages 329-336, Trento.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Models and Metaphors", "authors": [ { "first": "Max", "middle": [], "last": "Black", "suffix": "" } ], "year": 1962, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black, Max. 1962. Models and Metaphors. 
Cornell University Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, David M., Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Clustering adjectives for class acquisition", "authors": [ { "first": "Gemma", "middle": [], "last": "Boleda Torrent", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Alonso I Alemany", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Tenth Conference on European Chapter", "volume": "2", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boleda Torrent, Gemma and Laura Alonso i Alemany. 2003. Clustering adjectives for class acquisition. In Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics -Volume 2, EACL '03, pages 9-16, Budapest.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Metaphor interpretation using paraphrases extracted from the Web", "authors": [ { "first": "Danushka", "middle": [], "last": "Bollegala", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2013, "venue": "PLoS ONE", "volume": "8", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bollegala, Danushka and Ekaterina Shutova. 2013. Metaphor interpretation using paraphrases extracted from the Web. 
PLoS ONE, 8(9):e74304.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Spectral clustering for German verbs", "authors": [ { "first": "Chris", "middle": [], "last": "Brew", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2002, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "117--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brew, Chris and Sabine Schulte im Walde. 2002. Spectral clustering for German verbs. In Proceedings of EMNLP, pages 117-124, Philadelphia, PA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The second release of the RASP system", "authors": [ { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Watson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL on Interactive Presentation Sessions", "volume": "", "issue": "", "pages": "77--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Briscoe, Ted, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In Proceedings of the COLING/ACL on Interactive Presentation Sessions, pages 77-80, Sydney.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Evaluating and combining approaches to selectional preference acquisition", "authors": [ { "first": "Carsten", "middle": [], "last": "Brockmann", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Tenth Conference on European Chapter", "volume": "1", "issue": "", "pages": "27--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brockmann, Carsten and Mirella Lapata. 2003. Evaluating and combining approaches to selectional preference acquisition. 
In Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics -Volume 1, EACL '03, pages 27-34, Budapest.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Reference Guide for the British National Corpus", "authors": [ { "first": "Lou", "middle": [], "last": "Burnard", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burnard, Lou. 2007. Reference Guide for the British National Corpus (XML Edition).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A user study: Technology to increase teachers' linguistic awareness to improve instructional language support for English language learners", "authors": [ { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" }, { "first": "John", "middle": [], "last": "Sabatini", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Shore", "suffix": "" }, { "first": "Brad", "middle": [], "last": "Moulder", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Lentini", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Workshop on Natural Language Processing for Improving Textual Accessibility", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burstein, Jill, John Sabatini, Jane Shore, Brad Moulder, and Jennifer Lentini. 2013. A user study: Technology to increase teachers' linguistic awareness to improve instructional language support for English language learners. In Proceedings of the Workshop on Natural Language Processing for Improving Textual Accessibility, pages 1-10, Atlanta, GA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Metaphor in Educational Discourse. 
Continuum", "authors": [ { "first": "Lynne", "middle": [], "last": "Cameron", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cameron, Lynne. 2003. Metaphor in Educational Discourse. Continuum, London.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A comparative study of metaphor in Spanish and English financial reporting", "authors": [ { "first": "Daniel", "middle": [], "last": "Casasanto", "suffix": "" }, { "first": "Lera", "middle": [], "last": "Boroditsky", "suffix": "" }, { "first": ";", "middle": [], "last": "Charteris-Black", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Ennis", "suffix": "" } ], "year": 2001, "venue": "Cognition", "volume": "106", "issue": "2", "pages": "249--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Casasanto, Daniel and Lera Boroditsky. 2008. Time in the mind: Using space to think about time. Cognition, 106(2):579-593. Charteris-Black, Jonathan and Timothy Ennis. 2001. A comparative study of metaphor in Spanish and English financial reporting. English for Specific Purposes, 20(3):249-266.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Unsupervised relation disambiguation using spectral clustering", "authors": [ { "first": "Jinxiu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Donghong", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Zhengyu", "middle": [], "last": "Chew Lim Tan", "suffix": "" }, { "first": "", "middle": [], "last": "Niu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING/ACL", "volume": "", "issue": "", "pages": "89--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Jinxiu, Donghong Ji, Chew Lim Tan, and Zhengyu Niu. 2006. Unsupervised relation disambiguation using spectral clustering. 
In Proceedings of COLING/ACL, pages 89-96, Sydney.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Support-vector networks", "authors": [ { "first": "Corinna", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "Machine Learning", "volume": "20", "issue": "3", "pages": "273--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cortes, Corinna and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning, 20(3):273-297.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Abstract and concrete concepts have structurally different representational frameworks", "authors": [ { "first": "Sebastian", "middle": [ "J" ], "last": "Crutch", "suffix": "" }, { "first": "Elizabeth", "middle": [ "K" ], "last": "Warrington", "suffix": "" } ], "year": 2005, "venue": "Brain", "volume": "128", "issue": "3", "pages": "615--627", "other_ids": {}, "num": null, "urls": [], "raw_text": "Crutch, Sebastian J. and Elizabeth K. Warrington. 2005. Abstract and concrete concepts have structurally different representational frameworks. Brain, 128(3):615-627.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The differential dependence of abstract and concrete words upon associative and similarity-based information: Complementary semantic interference and facilitation effects", "authors": [ { "first": "Sebastian", "middle": [ "J" ], "last": "Crutch", "suffix": "" }, { "first": "K", "middle": [], "last": "Elizabeth", "suffix": "" }, { "first": "", "middle": [], "last": "Warrington", "suffix": "" } ], "year": 2010, "venue": "Cognitive Neuropsychology", "volume": "27", "issue": "1", "pages": "46--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Crutch, Sebastian J. and Elizabeth K. Warrington. 2010. 
The differential dependence of abstract and concrete words upon associative and similarity-based information: Complementary semantic interference and facilitation effects. Cognitive Neuropsychology, 27(1):46-71.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Propane stoves and gas lamps: How the concept hierarchy influences the interpretation of noun-noun compounds", "authors": [ { "first": "Barry", "middle": [], "last": "Devereux", "suffix": "" }, { "first": "Fintan", "middle": [], "last": "Costello", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Twenty-Seventh Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devereux, Barry and Fintan Costello. 2005. Propane stoves and gas lamps: How the concept hierarchy influences the interpretation of noun-noun compounds. In Proceedings of the Twenty-Seventh Annual Conference of the Cognitive Science Society, pages 1-6, Streso.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Exploring the feeling-emotions continuum across cultures", "authors": [ { "first": "Javier", "middle": [], "last": "Diaz-Vera", "suffix": "" }, { "first": "Rosario", "middle": [], "last": "Caballero", "suffix": "" } ], "year": 2013, "venue": "Jealousy in English and Spanish. Intercultural Pragmatics", "volume": "10", "issue": "2", "pages": "265--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diaz-Vera, Javier and Rosario Caballero. 2013. Exploring the feeling-emotions continuum across cultures: Jealousy in English and Spanish. 
Intercultural Pragmatics, 10(2):265-294.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Evaluating the premises and results of four metaphor identification systems", "authors": [ { "first": "Jonathan", "middle": [], "last": "Dunn", "suffix": "" } ], "year": 2013, "venue": "Proceedings of CICLing'13", "volume": "", "issue": "", "pages": "471--486", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dunn, Jonathan. 2013a. Evaluating the premises and results of four metaphor identification systems. In Proceedings of CICLing'13, pages 471-486, Samos.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "What metaphor identification systems can tell us about metaphor-in-language", "authors": [ { "first": "Jonathan", "middle": [], "last": "Dunn", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dunn, Jonathan. 2013b. What metaphor identification systems can tell us about metaphor-in-language. In Proceedings of the First Workshop on Metaphor in NLP, pages 1-10, Atlanta, GA.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "met*: A method for discriminating metonymy and metaphor by computer", "authors": [ { "first": "Dan", "middle": [], "last": "Fass", "suffix": "" } ], "year": 1991, "venue": "Computational Linguistics", "volume": "17", "issue": "1", "pages": "49--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fass, Dan. 1991. met*: A method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49-90.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "From Molecule to Metaphor: A Neural Theory of Language", "authors": [ { "first": "Jerome", "middle": [], "last": "Feldman", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feldman, Jerome. 2006. 
From Molecule to Metaphor: A Neural Theory of Language. The MIT Press.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, Christiane, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Towards a semantic classification of Spanish verbs based on subcategorisation information", "authors": [ { "first": "Eva", "middle": [ "E" ], "last": "Ferrer", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the ACL 2004 Workshop on Student Research", "volume": "", "issue": "", "pages": "13--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ferrer, Eva E. 2004. Towards a semantic classification of Spanish verbs based on subcategorisation information. In Proceedings of the ACL 2004 Workshop on Student Research, pages 13-19, Barcelona.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Background to FrameNet", "authors": [ { "first": "Charles", "middle": [], "last": "Fillmore", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Miriam", "middle": [], "last": "Petruck", "suffix": "" } ], "year": 2003, "venue": "International Journal of Lexicography", "volume": "16", "issue": "3", "pages": "235--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fillmore, Charles, Christopher Johnson, and Miriam Petruck. 2003. Background to FrameNet. 
International Journal of Lexicography, 16(3):235-250.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "How linguistic and cultural forces shape conceptions of time: English and Mandarin time in 3D", "authors": [ { "first": "Orly", "middle": [], "last": "Fuhrman", "suffix": "" }, { "first": "Kelly", "middle": [], "last": "Mccormick", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Heidi", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Dingfang", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Shuaimei", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Lera", "middle": [], "last": "Boroditsky", "suffix": "" } ], "year": 2011, "venue": "Cognitive Science", "volume": "35", "issue": "", "pages": "1305--1328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuhrman, Orly, Kelly McCormick, Eva Chen, Heidi Jiang, Dingfang Shu, Shuaimei Mao, and Lera Boroditsky. 2011. How linguistic and cultural forces shape conceptions of time: English and Mandarin time in 3D. Cognitive Science, 35:1305-1328.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Automatic identification of conceptual metaphors with limited knowledge", "authors": [ { "first": "Lisa", "middle": [], "last": "Gandy", "suffix": "" }, { "first": "Nadji", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Atallah", "suffix": "" }, { "first": "Ophir", "middle": [], "last": "Frieder", "suffix": "" } ], "year": 2013, "venue": "Proceedings of AAAI 2013", "volume": "", "issue": "", "pages": "328--334", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gandy, Lisa, Nadji Allan, Mark Atallah, Ophir Frieder, Newton Howard, Sergey Kanareykin, Moshe Koppel, Mark Last, Yair Neuman, and Shlomo Argamon. 2013. Automatic identification of conceptual metaphors with limited knowledge. 
In Proceedings of AAAI 2013, pages 328-334, Bellevue, WA.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Catching metaphors", "authors": [ { "first": "Matt", "middle": [], "last": "Gedigian", "suffix": "" }, { "first": "John", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "Branimir", "middle": [], "last": "Ciric", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 3rd Workshop on Scalable Natural Language Understanding", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gedigian, Matt, John Bryant, Srini Narayanan, and Branimir Ciric. 2006. Catching metaphors. In Proceedings of the 3rd Workshop on Scalable Natural Language Understanding, pages 41-48, New York, NY.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Structure mapping: A theoretical framework for analogy", "authors": [ { "first": "Deirdre", "middle": [], "last": "Gentner", "suffix": "" } ], "year": 1983, "venue": "Cognitive Science", "volume": "7", "issue": "", "pages": "155--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gentner, Deirdre. 1983. Structure mapping: A theoretical framework for analogy. Cognitive Science, 7:155-170.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Literal meaning and psychological theory", "authors": [ { "first": "R", "middle": [], "last": "Gibbs", "suffix": "" } ], "year": 1984, "venue": "Cognitive Science", "volume": "8", "issue": "", "pages": "275--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibbs, R. 1984. Literal meaning and psychological theory. 
Cognitive Science, 8:275-304.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Exploiting a semantic annotation tool for metaphor analysis", "authors": [ { "first": "David", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kazuaki", "middle": [], "last": "Maeda", "suffix": "" }, { "first": ";", "middle": [], "last": "Philadelphia", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Hardie", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Rayson", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Semino", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Corpus Linguistics Conference", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graff, David, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia. Hardie, Andrew, Veronika Koller, Paul Rayson, and Elena Semino. 2007. Exploiting a semantic annotation tool for metaphor analysis. In Proceedings of the Corpus Linguistics Conference, pages 1-12, Birmingham.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Towards the automatic identification of adjectival scales: Clustering adjectives according to meaning", "authors": [ { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "Kathleen", "middle": [ "R" ], "last": "Mckeown", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, ACL '93", "volume": "", "issue": "", "pages": "172--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hatzivassiloglou, Vasileios and Kathleen R. McKeown. 1993. 
Towards the automatic identification of adjectival scales: Clustering adjectives according to meaning. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, ACL '93, pages 172-182, Columbus, OH.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Automatic extraction of linguistic metaphors with LDA topic modeling", "authors": [ { "first": "Ilana", "middle": [], "last": "Heintz", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Gabbard", "suffix": "" }, { "first": "Mahesh", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Dave", "middle": [], "last": "Barner", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Black", "suffix": "" }, { "first": "Majorie", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "58--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heintz, Ilana, Ryan Gabbard, Mahesh Srivastava, Dave Barner, Donald Black, Majorie Friedman, and Ralph Weischedel. 2013. Automatic extraction of linguistic metaphors with LDA topic modeling. In Proceedings of the First Workshop on Metaphor in NLP, pages 58-66, Atlanta, GA.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Models and Analogies in Science", "authors": [ { "first": "Mary", "middle": [], "last": "Hesse", "suffix": "" } ], "year": 1966, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hesse, Mary. 1966. Models and Analogies in Science.
Notre Dame University Press.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "A quantitative empirical analysis of the abstract/concrete distinction", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Bentz", "suffix": "" } ], "year": 2014, "venue": "Cognitive Science", "volume": "38", "issue": "1", "pages": "162--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hill, Felix, Anna Korhonen, and Christian Bentz. 2014. A quantitative empirical analysis of the abstract/concrete distinction. Cognitive Science, 38(1):162-177.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Identifying metaphorical word use with tree kernels", "authors": [ { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Shashank", "middle": [], "last": "Shrivastava", "suffix": "" }, { "first": "Sujay", "middle": [], "last": "Kumar Jauhar", "suffix": "" }, { "first": "Mrinmaya", "middle": [], "last": "Sachan", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Huying", "middle": [], "last": "Li", "suffix": "" }, { "first": "Whitney", "middle": [], "last": "Sanders", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "52--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hovy, Dirk, Shashank Shrivastava, Sujay Kumar Jauhar, Mrinmaya Sachan, Kartik Goyal, Huying Li, Whitney Sanders, and Eduard Hovy. 2013. Identifying metaphorical word use with tree kernels. In Proceedings of the First Workshop on Metaphor in NLP, pages 52-57, Atlanta, GA.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Imagine that! 
ERPs provide evidence for distinct hemispheric contributions to the processing of concrete and abstract concepts", "authors": [ { "first": "", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Chia-Lin", "middle": [], "last": "Hsu-Wen", "suffix": "" }, { "first": "Kara", "middle": [ "D" ], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Federmeier", "suffix": "" } ], "year": 2010, "venue": "NeuroImage", "volume": "49", "issue": "1", "pages": "1116--1123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, Hsu-Wen, Chia-Lin Lee, and Kara D. Federmeier. 2010. Imagine that! ERPs provide evidence for distinct hemispheric contributions to the processing of concrete and abstract concepts. NeuroImage, 49(1):1116-1123.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Corpus-based study of metaphor in information technology", "authors": [ { "first": "Sattar", "middle": [], "last": "Izwaini", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Workshop on Corpus-based Approaches to Figurative Language, Corpus Linguistics", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Izwaini, Sattar. 2003. Corpus-based study of metaphor in information technology. In Proceedings of the Workshop on Corpus-based Approaches to Figurative Language, Corpus Linguistics 2003, pages 1-8, Lancaster.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Document clustering with prior knowledge", "authors": [ { "first": "Xiang", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Shenghuo", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 29th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji, Xiang, Wei Xu, and Shenghuo Zhu. 2006. Document clustering with prior knowledge. 
In Proceedings of the 29th", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "From TreeBank to PropBank", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Seattle", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2002, "venue": "Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "1989--1993", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 405-412, Seattle, WA. Kingsbury, Paul and Martha Palmer. 2002. From TreeBank to PropBank. In Proceedings of LREC-2002, pages 1989-1993, Gran Canaria.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Metaphor and Gender in Business Media Discourse: A Critical Cognitive Study", "authors": [ { "first": "Veronika", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koller, Veronika. 2004. Metaphor and Gender in Business Media Discourse: A Critical Cognitive Study. 
Palgrave Macmillan, Basingstoke and New York.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Semeval-2013 task 5: Evaluating phrasal semantics", "authors": [ { "first": "Ioannis", "middle": [], "last": "Korkontzelos", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" }, { "first": "Fabio", "middle": [ "Massimo" ], "last": "Zanzotto", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", "volume": "2", "issue": "", "pages": "263--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Korkontzelos, Ioannis, Torsten Zesch, Fabio Massimo Zanzotto, and Chris Biemann. 2013. Semeval-2013 task 5: Evaluating phrasal semantics. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 39-47, Atlanta, GA. K\u00f6vecses, Zolt\u00e1n. 2004. Introduction: Cultural variation in metaphor. European Journal of English Studies, 8:263-274.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Metaphor in Culture: Universality and Variation", "authors": [ { "first": "Zoltan", "middle": [], "last": "Kovecses", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kovecses, Zoltan. 2005. Metaphor in Culture: Universality and Variation. 
Cambridge University Press.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Hunting elusive metaphors using lexical resources", "authors": [ { "first": "Saisuresh", "middle": [], "last": "Krishnakumaran", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Workshop on Computational Approaches to Figurative Language", "volume": "", "issue": "", "pages": "13--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krishnakumaran, Saisuresh and Xiaojin Zhu. 2007. Hunting elusive metaphors using lexical resources. In Proceedings of the Workshop on Computational Approaches to Figurative Language, pages 13-20, Rochester, NY.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "The master metaphor list", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Espenson", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lakoff, George, Jane Espenson, and Alan Schwartz. 1991. The master metaphor list. Technical report, University of California at Berkeley.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Metaphors We Live By", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lakoff, George and Mark Johnson. 1980. Metaphors We Live By. 
University of Chicago Press.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "The Little Blue Book: The Essential Guide to Thinking and Talking Democratic", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Wehling", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lakoff, George and Elisabeth Wehling. 2012. The Little Blue Book: The Essential Guide to Thinking and Talking Democratic. Free Press, New York.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Evidence that self-relevant motives and metaphoric framing interact to influence political and social attitudes", "authors": [ { "first": "Mark", "middle": [ "J" ], "last": "Landau", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Sullivan", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Greenberg", "suffix": "" } ], "year": 2009, "venue": "Psychological Science", "volume": "20", "issue": "11", "pages": "1421--1427", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landau, Mark J., Daniel Sullivan, and Jeff Greenberg. 2009. Evidence that self-relevant motives and metaphoric framing interact to influence political and social attitudes. Psychological Science, 20(11):1421-1427.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Data-driven metaphor recognition and explanation", "authors": [ { "first": "", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kenny", "middle": [ "Q" ], "last": "Hongsong", "suffix": "" }, { "first": "Haixun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "Transactions of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "379--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Hongsong, Kenny Q. Zhu, and Haixun Wang. 2013. 
Data-driven metaphor recognition and explanation. Transactions of the Association for Computational Linguistics, 1:379-390.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Using Gaussian mixture models to detect figurative language in context", "authors": [ { "first": "Linlin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "297--300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Linlin and Caroline Sporleder. 2010. Using Gaussian mixture models to detect figurative language in context. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 297-300, Los Angeles, CA.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Lexical databases as resources for linguistic creativity: Focus on metaphor", "authors": [ { "first": "Birte", "middle": [], "last": "L\u00f6nneker", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the LREC 2004 Workshop on Language Resources for Linguistic Creativity", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "L\u00f6nneker, Birte. 2004. Lexical databases as resources for linguistic creativity: Focus on metaphor. 
In Proceedings of the LREC 2004 Workshop on Language Resources for Linguistic Creativity, pages 9-16, Lisbon.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Researching and Applying Metaphor in the Real World", "authors": [ { "first": "Graham", "middle": [], "last": "Low", "suffix": "" }, { "first": "Zazie", "middle": [], "last": "Todd", "suffix": "" }, { "first": "Alice", "middle": [], "last": "Deignan", "suffix": "" }, { "first": "Lynne", "middle": [ "Cameron" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "John Benjamins", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Low, Graham, Zazie Todd, Alice Deignan, and Lynne Cameron. 2010. Researching and Applying Metaphor in the Real World. John Benjamins, Amsterdam/Philadelphia.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Ideological influences on building metaphors in Taiwanese presidential speeches", "authors": [ { "first": "Louis", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Ahrens", "suffix": "" } ], "year": 2008, "venue": "Discourse and Society", "volume": "19", "issue": "3", "pages": "383--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lu, Louis and Kathleen Ahrens. 2008. Ideological influences on building metaphors in Taiwanese presidential speeches. Discourse and Society, 19(3):383-408.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "A Computational Model of Metaphor Interpretation", "authors": [ { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, James. 1990. A Computational Model of Metaphor Interpretation. 
Academic Press, San Diego, CA.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "A corpus-based analysis of context effects on metaphor comprehension", "authors": [ { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2006, "venue": "Corpus-Based Approaches to Metaphor and Metonymy", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, James. 2006. A corpus-based analysis of context effects on metaphor comprehension. In A. Stefanowitsch and S. T. Gries, editors, Corpus-Based Approaches to Metaphor and Metonymy. Mouton de Gruyter, Berlin.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Cormet: A computational, corpus-based conventional metaphor extraction system", "authors": [ { "first": "Zachary", "middle": [], "last": "Mason", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "1", "pages": "23--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mason, Zachary. 2004. Cormet: A computational, corpus-based conventional metaphor extraction system. Computational Linguistics, 30(1):23-44.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Metaphors of anger in Japanese", "authors": [ { "first": "Keiko", "middle": [], "last": "Matsuki", "suffix": "" } ], "year": 1995, "venue": "Language and the Cognitive Construal of the World", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matsuki, Keiko. 1995. Metaphors of anger in Japanese. In John Taylor and Robert MacLaury, editors, Language and the Cognitive Construal of the World. Gruyter, Berlin.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Spanish Gigaword Third Edition. 
Linguistic Data Consortium", "authors": [ { "first": "Angelo", "middle": [], "last": "Mendonca", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jaquette", "suffix": "" }, { "first": "David", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Denise", "middle": [], "last": "Dipersio", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mendonca, Angelo, Daniel Jaquette, David Graff, and Denise DiPersio. 2011. Spanish Gigaword Third Edition. Linguistic Data Consortium, Philadelphia.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Semantic signatures for example-based linguistic metaphor detection", "authors": [ { "first": "Michael", "middle": [], "last": "Mohler", "suffix": "" }, { "first": "David", "middle": [], "last": "Bracewell", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Tomlinson", "suffix": "" }, { "first": "David", "middle": [], "last": "Hinote", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "27--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohler, Michael, David Bracewell, Marc Tomlinson, and David Hinote. 2013. Semantic signatures for example-based linguistic metaphor detection. 
In Proceedings of the First Workshop on Metaphor in NLP, pages 27-35, Atlanta, GA.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "A novel distributional approach to multilingual conceptual metaphor recognition", "authors": [ { "first": "Michael", "middle": [], "last": "Mohler", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Rink", "suffix": "" }, { "first": "David", "middle": [], "last": "Bracewell", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Tomlinson", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1752--1763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohler, Michael, Bryan Rink, David Bracewell, and Marc Tomlinson. 2014. A novel distributional approach to multilingual conceptual metaphor recognition. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1752-1763, Dublin.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Tree kernel engineering for proposition re-ranking", "authors": [ { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Daniele", "middle": [], "last": "Pighin", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Mining and Learning with Graphs (MLG)", "volume": "", "issue": "", "pages": "165--172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moschitti, Alessandro, Daniele Pighin, and Roberto Basili. 2006. Tree kernel engineering for proposition re-ranking.
In Proceedings of Mining and Learning with Graphs (MLG), pages 165-172, Berlin.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Knowledge-based Action Representations for Metaphor and Aspect (KARMA)", "authors": [ { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Narayanan, Srini. 1997. Knowledge-based Action Representations for Metaphor and Aspect (KARMA). Ph.D. thesis, University of California at Berkeley.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Moving right along: A computational model of metaphoric reasoning about events", "authors": [ { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 1999, "venue": "Proceedings of AAAI 99", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Narayanan, Srini. 1999. Moving right along: A computational model of metaphoric reasoning about events. In Proceedings of AAAI 99, pages 121-128, Orlando, FL. Neuman, Yair, Dan Assaf, Yohai Cohen, Mark Last, Shlomo Argamon, Newton Howard, and Ophir Frieder. 2013. Metaphor identification in large texts corpora. PLoS ONE, 8(4):e62343.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "On spectral clustering: Analysis and an algorithm", "authors": [ { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "I", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Yair", "middle": [], "last": "Jordan", "suffix": "" }, { "first": "", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2002, "venue": "Advances in Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "849--856", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, Andrew Y., Michael I. Jordan, Yair Weiss et al. 2002. On spectral clustering: Analysis and an algorithm. 
Advances in Neural Information Processing Systems, 2:849-856.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Computational considerations of comparisons and similes", "authors": [ { "first": "Vlad", "middle": [], "last": "Niculae", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Yaneva", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ACL (Student Research Workshop)", "volume": "", "issue": "", "pages": "89--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niculae, Vlad and Victoria Yaneva. 2013. Computational considerations of comparisons and similes. In Proceedings of ACL (Student Research Workshop), pages 89-95, Sofia.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Linking lexicons and ontologies: Mapping WordNet to the suggested upper merged ontology", "authors": [ { "first": "Ian", "middle": [], "last": "Niles", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Pease", "suffix": "" }, { "first": ";", "middle": [], "last": "Ian", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Pease", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 2003 International Conference on Information and Knowledge Engineering", "volume": "", "issue": "", "pages": "412--416", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niles, Ian and Adam Pease. 2001. Towards a standard upper ontology. In Proceedings of the International Conference on Formal Ontology in Information Systems -Volume 2001, FOIS '01, pages 2-9, New York, NY. Niles, Ian and Adam Pease. 2003. Linking lexicons and ontologies: Mapping WordNet to the suggested upper merged ontology.
In Proceedings of the 2003 International Conference on Information and Knowledge Engineering (IKE 03), pages 412-416, Las Vegas, NV.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "MaltParser: A language-independent system for data-driven dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Atanas", "middle": [], "last": "Chanev", "suffix": "" }, { "first": "G\u00fclsen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Svetoslav", "middle": [], "last": "Marinov", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2007, "venue": "Natural Language Engineering", "volume": "13", "issue": "2", "pages": "95--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nivre, Joakim, Johan Hall, Jens Nilsson, Atanas Chanev, G\u00fclsen Eryigit, Sandra K\u00fcbler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Python for scientific computing", "authors": [ { "first": "Travis", "middle": [ "E" ], "last": "Oliphant", "suffix": "" } ], "year": 2007, "venue": "Computing in Science and Engineering", "volume": "9", "issue": "", "pages": "10--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliphant, Travis E. 2007. Python for scientific computing. 
Computing in Science and Engineering, 9:10-20.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Discovering word senses from text", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "613--619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pantel, Patrick and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 613-619, Edmonton.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "MIP: A method for identifying metaphorically used words in discourse", "authors": [ { "first": "Pragglejaz", "middle": [], "last": "Group", "suffix": "" } ], "year": 2007, "venue": "Metaphor and Symbol", "volume": "22", "issue": "", "pages": "1--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pragglejaz Group. 2007. MIP: A method for identifying metaphorically used words in discourse. Metaphor and Symbol, 22:1-39.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Selection and Information: A Class-based Approach to Lexical Relationships", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip. 1993. Selection and Information: A Class-based Approach to Lexical Relationships. Ph.D. 
thesis, University of Pennsylvania.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "Like an animal I was treated?: Anti-immigrant metaphor in US public discourse", "authors": [ { "first": "Santa", "middle": [], "last": "Ana", "suffix": "" }, { "first": "Otto", "middle": [], "last": "", "suffix": "" } ], "year": 1999, "venue": "", "volume": "10", "issue": "", "pages": "191--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santa Ana, Otto. 1999. Like an animal I was treated?: Anti-immigrant metaphor in US public discourse. Discourse Society, 10(2):191-224.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Metaphor and translation: Some implications of a cognitive approach", "authors": [ { "first": "Christina", "middle": [], "last": "Sch\u00e4ffner", "suffix": "" } ], "year": 2004, "venue": "Journal of Pragmatics", "volume": "36", "issue": "", "pages": "1253--1269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00e4ffner, Christina. 2004. Metaphor and translation: Some implications of a cognitive approach. Journal of Pragmatics, 36:1253-1269.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "Inducing German semantic verb classes from purely syntactic subcategorisation information", "authors": [ { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2001, "venue": "ACL '02: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "657--670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schulte im Walde, Sabine and Chris Brew. 2001. Inducing German semantic verb classes from purely syntactic subcategorisation information. In ACL '02: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 223-230, Morristown, NJ. Sharoff, Serge. 2006. 
Creating general-purpose corpora using automated search engine queries. In Marco Baroni and Silvia Bernardini, editors, WaCky! Working Papers on the Web as Corpus, pages 657-670, Moscow.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "The proper place of men and machines in language technology processing Russian without any linguistic knowledge", "authors": [ { "first": "Serge", "middle": [], "last": "Sharoff", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2011, "venue": "Dialogue 2011, Russian Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "591--605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharoff, Serge and Joakim Nivre. 2011. The proper place of men and machines in language technology processing Russian without any linguistic knowledge. In Dialogue 2011, Russian Conference on Computational Linguistics, pages 591-605, Moscow.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Normalized cuts and image segmentation", "authors": [ { "first": "J", "middle": [], "last": "Shi", "suffix": "" }, { "first": "J", "middle": [], "last": "Malik", "suffix": "" } ], "year": 2000, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "22", "issue": "8", "pages": "888--905", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shi, J. and J. Malik. 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Automatic metaphor interpretation as a paraphrasing task", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of NAACL 2010", "volume": "", "issue": "", "pages": "276--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shutova, Ekaterina. 2010. Automatic metaphor interpretation as a paraphrasing task. 
In Proceedings of NAACL 2010, pages 1029-1037, Los Angeles, CA. Shutova, Ekaterina. 2013. Metaphor identification as interpretation. In Proceedings of *SEM 2013, pages 276-285, Atlanta, GA.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Design and Evaluation of Metaphor Processing Systems", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2015, "venue": "Computational Linguistics", "volume": "41", "issue": "4", "pages": "579--623", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shutova, Ekaterina. 2015. Design and Evaluation of Metaphor Processing Systems. Computational Linguistics, 41(4):579-623.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Unsupervised metaphor identification using hierarchical graph factorization clustering", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Sun ; Atlanta", "suffix": "" }, { "first": "G", "middle": [ "A" ], "last": "Shutova", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2010, "venue": "Proceedings of COLING 2010", "volume": "", "issue": "", "pages": "3255--3261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shutova, Ekaterina and Lin Sun. 2013. Unsupervised metaphor identification using hierarchical graph factorization clustering. In Proceedings of NAACL 2013, pages 978-988, Atlanta, GA. Shutova, Ekaterina, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proceedings of COLING 2010, pages 1002-1010, Beijing. Shutova, Ekaterina and Simone Teufel. 2010. Metaphor corpus annotated for source-target domain mappings. 
In Proceedings of LREC 2010, pages 3255-3261, Malta.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Statistical metaphor processing", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Teufel", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "2", "pages": "301--353", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shutova, Ekaterina, Simone Teufel, and Anna Korhonen. 2013. Statistical metaphor processing. Computational Linguistics, 39(2):301-353.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "Unsupervised metaphor paraphrasing using a vector space model", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Van De Cruys", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING 2012", "volume": "", "issue": "", "pages": "1121--1130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shutova, Ekaterina, Tim Van de Cruys, and Anna Korhonen. 2012. Unsupervised metaphor paraphrasing using a vector space model. In Proceedings of COLING 2012, pages 1121-1130, Mumbai.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Nonparametric Statistics for the Behavioral Sciences", "authors": [ { "first": "Sidney", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "N. John", "middle": [], "last": "Castellan", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siegel, Sidney and N. John Castellan. 1988. Nonparametric Statistics for the Behavioral Sciences. 
McGraw-Hill Book Company, New York.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "A corpus-based description of metaphorical marking patterns in scientific and popular business discourse", "authors": [ { "first": "Skorczynska", "middle": [], "last": "Sznajder", "suffix": "" }, { "first": "Jordi", "middle": [], "last": "Hanna", "suffix": "" }, { "first": "", "middle": [], "last": "Pique-Angordans", "suffix": "" } ], "year": 2004, "venue": "Proceedings of European Research Conference on Mind, Language and Metaphor (Euresco Conference)", "volume": "", "issue": "", "pages": "112--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Skorczynska Sznajder, Hanna and Jordi Pique-Angordans. 2004. A corpus-based description of metaphorical marking patterns in scientific and popular business discourse. In Proceedings of European Research Conference on Mind, Language and Metaphor (Euresco Conference), pages 112-129, Granada.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "A Method for Linguistic Metaphor Identification: From MIP to MIPVU", "authors": [ { "first": "Gerard", "middle": [ "J" ], "last": "Steen", "suffix": "" }, { "first": "G", "middle": [], "last": "Aletta", "suffix": "" }, { "first": "J", "middle": [ "Berenike" ], "last": "Dorst", "suffix": "" }, { "first": "Anna", "middle": [ "A" ], "last": "Herrmann", "suffix": "" }, { "first": "Tina", "middle": [], "last": "Kaal", "suffix": "" }, { "first": "Trijntje", "middle": [], "last": "Krennmayr", "suffix": "" }, { "first": "", "middle": [], "last": "Pasma", "suffix": "" } ], "year": 2010, "venue": "John Benjamins", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steen, Gerard J., Aletta G. Dorst, J. Berenike Herrmann, Anna A. Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A Method for Linguistic Metaphor Identification: From MIP to MIPVU. 
John Benjamins, Amsterdam/Philadelphia.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Semi-supervised verb class discovery using noisy features", "authors": [ { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Joanis", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT-NAACL 2003", "volume": "", "issue": "", "pages": "71--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stevenson, Suzanne and Eric Joanis. 2003. Semi-supervised verb class discovery using noisy features. In Proceedings of HLT-NAACL 2003, pages 71-78, Edmonton.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "Improving verb clustering with automatically acquired selectional preferences", "authors": [ { "first": "Tomek", "middle": [], "last": "Strzalkowski", "suffix": "" }, { "first": "George", "middle": [ "Aaron" ], "last": "Broadwell", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Laurie", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Samira", "middle": [], "last": "Shaikh", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Boris", "middle": [], "last": "Yamrom", "suffix": "" }, { "first": "Kit", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Umit", "middle": [], "last": "Boz", "suffix": "" }, { "first": "Ignacio", "middle": [], "last": "Cases", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Elliot", "suffix": "" }, { "first": ";", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "1023--1033", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strzalkowski, Tomek, George Aaron Broadwell, Sarah Taylor, Laurie Feldman, Samira Shaikh, Ting Liu, Boris Yamrom, Kit Cho, Umit Boz, Ignacio 
Cases, and Kyle Elliot. 2013. Robust extraction of metaphor from novel data. In Proceedings of the First Workshop on Metaphor in NLP, pages 67-76, Atlanta, GA. Sun, Lin and Anna Korhonen. 2009. Improving verb clustering with automatically acquired selectional preferences. In Proceedings of EMNLP 2009, pages 638-647, Singapore. Sun, Lin and Anna Korhonen. 2011. Hierarchical verb clustering using graph factorization. In Proceedings of EMNLP, pages 1023-1033, Edinburgh.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Red Dogs and Rotten Mealies: How Zulus Talk About Anger, volume Speaking of Emotions", "authors": [ { "first": "John", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Thandi", "middle": [], "last": "Mbense", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taylor, John and Thandi Mbense. 1998. Red Dogs and Rotten Mealies: How Zulus Talk About Anger, volume Speaking of Emotions. Gruyter, Berlin.", "links": null }, "BIBREF96": { "ref_id": "b96", "title": "Metaphors we think with: The role of metaphor in reasoning", "authors": [ { "first": "Paul", "middle": [ "H" ], "last": "Thibodeau", "suffix": "" }, { "first": "Lera", "middle": [], "last": "Boroditsky", "suffix": "" } ], "year": 2011, "venue": "PLoS ONE", "volume": "6", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thibodeau, Paul H. and Lera Boroditsky. 2011. Metaphors we think with: The role of metaphor in reasoning. 
PLoS ONE, 6(2):e16782, 02.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Cross-lingual metaphor detection using common semantic features", "authors": [ { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Mukomel", "suffix": "" }, { "first": "Anatole", "middle": [], "last": "Gershman", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "45--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsvetkov, Yulia, Elena Mukomel, and Anatole Gershman. 2013. Cross-lingual metaphor detection using common semantic features. In Proceedings of the First Workshop on Metaphor in NLP, pages 45-51, Atlanta, GA.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "Literal and metaphorical sense identification through concrete and abstract context", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Yair", "middle": [], "last": "Neuman", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Assaf", "suffix": "" }, { "first": "Yohai", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "680--690", "other_ids": {}, "num": null, "urls": [], "raw_text": "Turney, Peter D., Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 680-690, Stroudsburg, PA.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "Creative language retrieval: A robust hybrid of information retrieval and linguistic creativity", "authors": [ { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "278--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veale, Tony. 2011. Creative language retrieval: A robust hybrid of information retrieval and linguistic creativity. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 278-287, Portland, OR.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "A service-oriented architecture for metaphor processing", "authors": [ { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Second Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "52--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veale, Tony. 2014. A service-oriented architecture for metaphor processing. In Proceedings of the Second Workshop on Metaphor in NLP, pages 52-60, Baltimore, MD.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "A fluid knowledge representation for understanding and generating creative metaphors", "authors": [ { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" }, { "first": "Yanfen", "middle": [], "last": "Hao", "suffix": "" } ], "year": 2008, "venue": "Proceedings of COLING 2008", "volume": "", "issue": "", "pages": "945--952", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veale, Tony and Yanfen Hao. 2008. A fluid knowledge representation for understanding and generating creative metaphors. 
In Proceedings of COLING 2008, pages 945-952, Manchester, UK.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "A tutorial on spectral clustering", "authors": [ { "first": "Von", "middle": [], "last": "Luxburg", "suffix": "" }, { "first": "Ulrike", "middle": [], "last": "", "suffix": "" } ], "year": 2007, "venue": "Statistics and Computing", "volume": "17", "issue": "4", "pages": "395--416", "other_ids": {}, "num": null, "urls": [], "raw_text": "Von Luxburg, Ulrike. 2007. A tutorial on spectral clustering. Statistics and Computing, 17(4):395-416.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "Between min cut and graph bisection", "authors": [ { "first": "Dorothea", "middle": [], "last": "Wagner", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Wagner", "suffix": "" } ], "year": 1993, "venue": "Lecture Notes in Computer Science", "volume": "711", "issue": "", "pages": "744--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wagner, Dorothea and Frank Wagner. 1993. Between min cut and graph bisection. Volume 711 of Lecture Notes in Computer Science. Springer, pages 744-750.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Hierarchical grouping to optimize an objective function", "authors": [ { "first": "Joe", "middle": [ "H" ], "last": "Ward", "suffix": "" } ], "year": 1963, "venue": "Journal of the American Statistical Association", "volume": "58", "issue": "301", "pages": "236--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ward, Joe H. 1963. Hierarchical grouping to optimize an objective function. 
Journal of the American Statistical Association, 58(301):236-244.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "Content differences for abstract and concrete concepts", "authors": [ { "first": "Katja", "middle": [], "last": "Wiemer-Hastings", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2005, "venue": "Cognitive Science", "volume": "29", "issue": "5", "pages": "719--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wiemer-Hastings, Katja and Xu Xu. 2005. Content differences for abstract and concrete concepts. Cognitive Science, 29(5):719-736.", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "Making preferences more active", "authors": [ { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1978, "venue": "Artificial Intelligence", "volume": "11", "issue": "3", "pages": "197--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilks, Yorick. 1978. Making preferences more active. Artificial Intelligence, 11(3):197-223.", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "Automatic metaphor detection using large-scale lexical resources and conventional metaphor extraction", "authors": [ { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Dalton", "suffix": "" }, { "first": "James", "middle": [], "last": "Allen", "suffix": "" }, { "first": "Lucian", "middle": [], "last": "Galescu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "36--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilks, Yorick, Adam Dalton, James Allen, and Lucian Galescu. 2013. Automatic metaphor detection using large-scale lexical resources and conventional metaphor extraction. 
In Proceedings of the First Workshop on Metaphor in NLP, pages 36-44, Atlanta, GA.", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "Soft clustering on graphs", "authors": [ { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Shipeng", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Volker", "middle": [], "last": "Tresp", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "18", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, Kai, Shipeng Yu, and Volker Tresp. 2006. Soft clustering on graphs. In Proceedings of Advances in Neural Information Processing Systems, 18, Vancouver.", "links": null }, "BIBREF109": { "ref_id": "b109", "title": "The Contemporary Theory of Metahpor in Chinese: A Perspective from Chinese. John Benjamins", "authors": [ { "first": "Ning", "middle": [], "last": "Yu", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, Ning. 1998. The Contemporary Theory of Metahpor in Chinese: A Perspective from Chinese. John Benjamins, Amsterdam.", "links": null }, "BIBREF110": { "ref_id": "b110", "title": "Generalizing over lexical features: Selectional preferences for semantic role classification", "authors": [ { "first": "Be\u00f1at", "middle": [], "last": "Zapirain", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers", "volume": "", "issue": "", "pages": "73--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zapirain, Be\u00f1at, Eneko Agirre, and Llu\u00eds M\u00e0rquez. 2009. Generalizing over lexical features: Selectional preferences for semantic role classification. 
In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 73-76, Singapore.", "links": null }, "BIBREF111": { "ref_id": "b111", "title": "Computational mechanisms for metaphor in languages: A survey", "authors": [ { "first": "Chang-Le", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yun", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiao-Xi", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2007, "venue": "Journal of Computer Science and Technology", "volume": "22", "issue": "", "pages": "308--319", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou, Chang-Le, Yun Yang, and Xiao-Xi Huang. 2007. Computational mechanisms for metaphor in languages: A survey. Journal of Computer Science and Technology, 22:308-319.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Clusters of Russian nouns (unconstrained setting; the source domain labels in the Figure are suggested by the authors for clarity, the system does not assign any labels)" }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Clusters of Russian verbs" }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Figure 7 Clusters of Russian nouns (unconstrained setting; the source domain labels in the figure are suggested by the authors for clarity, the system does not assign any labels)." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "impacto miedo repercusi\u00f3n consecuencia escasez efecto dificultad TT constraints: Cluster: miedo cuidado repercusi\u00f3n epicentro acceso pendiente oportunidad conocimiento dificultad TS constraints: Cluster: veto bloqueo inmunidad restricci\u00f3n obst\u00e1culo barrera dificultadFigure 10Clusters of Spanish nouns: unconstrained and constrained settings. 
politics profession affair ideology philosophy religion competition education tt constraints: Cluster: fibre marriage politics affair career life hope dream religion education economy ts constraints: Cluster: field england part card politics sport music tape tune guitar trick football organ instrument round match game role ball host" }, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": "Clusters of English nouns: unconstrained and constrained settings Unconstrained: Cluster: dolor impacto miedo repercusi\u00f3n consecuencia escasez efecto dificultad tt constraints: Cluster: miedo cuidado repercusi\u00f3n epicentro acceso pendiente oportunidad conocimiento dificultad ts constraints: Cluster: veto bloqueo inmunidad restricci\u00f3n obst\u00e1culo barrera dificultad Figure 10 Clusters of Spanish nouns: unconstrained and constrained settings Unconstrained: Cluster: \u0437\u043d\u0430\u043d\u0438\u0435 \u0441\u043f\u043e\u0441\u043e\u0431\u043d\u043e\u0441\u0442\u044c \u043a\u0440\u0430\u0441\u043e\u0442\u0430 \u0443\u0441\u0438\u043b\u0438\u0435 \u0443\u043c\u0435\u043d\u0438\u0435 \u0442\u0430\u043b\u0430\u043d\u0442 \u043d\u0430\u0432\u044b\u043a \u0442\u043e\u0447\u043d\u043e\u0441\u0442\u044c \u0434\u0430\u0440 \u043f\u043e\u0437\u043d\u0430\u043d\u0438\u0435 \u043c\u0443\u0434\u0440\u043e\u0441\u0442\u044c \u043a\u0432\u0430\u043b\u0438\u0444\u0438\u043a\u0430\u0446\u0438\u044f \u043c\u0430\u0441\u0442\u0435\u0440\u0441\u0442\u0432\u043e TT constraints: Cluster: \u0432\u043b\u0430\u0441\u0442\u044c \u0441\u0447\u0430\u0441\u0442\u044c\u0435 \u043a\u0440\u0430\u0441\u043e\u0442\u0430 \u0441\u043b\u0430\u0432\u0430 \u0447\u0435\u0441\u0442\u044c \u043f\u043e\u043f\u0443\u043b\u044f\u0440\u043d\u043e\u0441\u0442\u044c \u0431\u043b\u0430\u0433\u043e \u0431\u043e\u0433\u0430\u0442\u0441\u0442\u0432\u043e \u0434\u0430\u0440 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442 \u0432\u0435\u0441\u0442\u044c TS constraints: Cluster: 
\u0441\u0432\u0435\u0442 \u0437\u0432\u0435\u0437\u0434\u0430 \u0441\u043e\u043b\u043d\u0446\u0435 \u043a\u0440\u0430\u0441\u043e\u0442\u0430 \u0443\u043b\u044b\u0431\u043a\u0430 \u043b\u0443\u043d\u0430 \u043b\u0443\u0447" }, "FIGREF5": { "type_str": "figure", "num": null, "uris": null, "text": "Clusters of Russian nouns: unconstrained and constrained settings." }, "FIGREF6": { "type_str": "figure", "num": null, "uris": null, "text": "RASP grammatical relations output for metaphorical expressions. cast doubt (V-O) cast fear, cast suspicion, catch feeling, catch suspicion, catch enthusiasm, catch emotion, spark fear, spark enthusiasm, spark passion, spark feeling, fix emotion, shade emotion, blink impulse, flick anxiety, roll doubt, dart hostility ... campaign surged (S-V) charity boomed, effort dropped, campaign shrank, campaign soared, drive spiraled, mission tumbled, initiative spiraled, venture plunged, effort rose, initiative soared, effort fluctuated, venture declined, effort dwindled ..." }, "FIGREF7": { "type_str": "figure", "num": null, "uris": null, "text": "Retrieved Russian sentences." }, "FIGREF8": { "type_str": "figure", "num": null, "uris": null, "text": "Organization of the hierarchical graph of concepts." }, "FIGREF9": { "type_str": "figure", "num": null, "uris": null, "text": "(a) An undirected graph G representing the similarity matrix. (b) The bipartite graph showing three clusters on G. (c) The induced clusters U. (d) The new graph G 1 over clusters U. (e) The new bipartite graph over G 1 ." 
}, "FIGREF10": { "type_str": "figure", "num": null, "uris": null, "text": "Metaphorical associations discovered by the Russian system rage-ncsubj engulf -ncsubj erupt-ncsubj burn-ncsubj light-dobj consume-ncsubj flare-ncsubj sweepncsubj spark-dobj battle-dobj gut-idobj smolder-ncsubj ignite-dobj destroy-idobj spread-ncsubj damage-idobj light-ncsubj ravage-ncsubj crackle-ncsubj open-dobj fuel-dobj spray-idobj roar-ncsubj perish-idobj destroy-ncsubj wound-idobj start-dobj ignite-ncsubj injure-idobj fight-dobj rock-ncsubj retaliate-idobj devastate-idobj blaze-ncsubj ravage-idobj rip-ncsubj burn-idobj spark-ncsubj warmidobj suppress-dobj rekindle-dobj ..." }, "FIGREF11": { "type_str": "figure", "num": null, "uris": null, "text": "Salient features for fire and the violence cluster" }, "FIGREF12": { "type_str": "figure", "num": null, "uris": null, "text": "XX, Number XX feeling is fire hope lit (Subj), anger blazed (Subj), optimism raged (Subj), enthusiasm engulfed them (Subj), hatred flared (Subj), passion flared (Subj), interest lit (Subj), fuel resentment (Dobj), anger crackled (Subj), feelings roared (Subj), hostility blazed (Subj), light with hope (Iobj) crime is a disease cure crime (Dobj), abuse transmitted (Subj), eradicate terrorism (Dobj), suffer from corruption (Iobj), diagnose abuse (Dobj), combat fraud (Dobj), cope with crime (Iobj), cure abuse (Dobj), eradicate corruption (Dobj), violations spread (Subj)" }, "TABREF0": { "html": null, "type_str": "table", "content": "
Figures 3 (nouns) and 4 (verbs) for English; Figures 5 (nouns) and 6 (verbs)
for Spanish; and Figures 7 (nouns) and 8 (verbs) for Russian. The noun clusters represent
Algorithm 2 JXZ algorithm
Require:
", "num": null, "text": "Number K of clusters; similarity matrix W \u2208 R N\u00d7N ; constraint matrix U \u2208 R N\u00d7C ; enforcement parameter \u03b2 Compute the degree matrix D where d ii = N j=1 w ij and d ij" }, "TABREF1": { "html": null, "type_str": "table", "content": "", "num": null, "text": "Source Cluster: sparkle glow widen flash flare gleam darken narrow flicker shine blaze bulge Source Cluster: gulp drain stir empty pour sip spill swallow drink pollute seep flow drip purify ooze pump bubble splash ripple simmer boil tread Source Cluster: polish clean scrape scrub soak Source Cluster: kick hurl push fling throw pull drag haul Source Cluster: rise fall shrink drop double fluctuate dwindle decline plunge decrease soar tumble surge spiral boom Source Cluster: initiate inhibit aid halt trace track speed obstruct impede accelerate slow stimulate hinder block" }, "TABREF2": { "html": null, "type_str": "table", "content": "
Figure 6
Shutova, Sun, Gutiérrez and Narayanan Multilingual Metaphor Processing
Clusters of Spanish verbs.
Suggested source domain: construction, structure, building
Target Cluster: снг группировка ислам инфраструктура православие хор клан восстание колония культ
социализм пирамида держава индустрия рота оркестр раса кружок заговор
Suggested source domain: mechanism, game, structure, living being, organism
Target Cluster: образ язык бог любовь вещь культура наука искусство бизнес политика природа литература
теория стиль секс личность
Suggested source domain: story; journey; battle
Target Cluster: поход сотрудничество танец спор атака беседа карьера переговоры охота битва диалог
наступление прогулка
Suggested source domain: liquid
Target Cluster: вопрос проблема тема мысль идея мнение задача чувство интерес желание ощущение
необходимость
Target Cluster: боль впечатление радость надежда настроение страх сожаление мечта потребность
сомнение эмоция ужас уважение запах
Target Cluster: результат информация ссылка материал данные документ опыт исследование список знание
оценка анализ практика
", "num": null, "text": "Source Cluster: distribuir consumir importar ingerir comer fumar comercializar tragar consumar beber recetar Source Cluster: atropellar chocar volcar colisionar embestir descarrilar arrollar Source Cluster: secar fluir regar limpiar Source Cluster: llevar sacar lanzar colocar cargar transportar arrojar tirar echar descargar Source Cluster: caer subir descender desplomar declinar bajar retroceder progresar repuntar replegar Source Cluster: inundar llenar abarrotar frecuentar copar colmar atestar saturar vaciar" }, "TABREF4": { "html": null, "type_str": "table", "content": "
TT constraints    TS constraints
poverty & inequality    poverty & disease
democracy & friendship    democracy & machine
society & mind    society & organism
education & life    education & journey
politics & marriage    politics & game
country & family    country & building
government & kingdom    government & household
career & change    career & hill
innovation & evolution    innovation & flower
unemployment & panic    unemployment & prison
faith & peace    faith & warmth
violence & passion    violence & fire
mood & love    mood & climate
debt & tension    debt & weight
3. Constraints should satisfy the following criteria:
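The TT and TS word pairs listed in the constraint tables enter the clustering through the N×C constraint matrix U (u_ic = 1 when word i participates in constraint c). A minimal sketch of how such a matrix could be assembled from the pairs; the names are illustrative, and this is not the authors' code.

```python
import numpy as np

def build_constraint_matrix(vocab, constraint_pairs):
    """Encode must-link word pairs as an N x C indicator matrix U.

    vocab            : list of N words (the items being clustered)
    constraint_pairs : list of C pairs, e.g. ('politics', 'game') for a
                       TS constraint or ('politics', 'marriage') for TT
    """
    index = {word: i for i, word in enumerate(vocab)}
    U = np.zeros((len(vocab), len(constraint_pairs)))
    for c, (w1, w2) in enumerate(constraint_pairs):
        # Both members of the pair take part in constraint column c.
        U[index[w1], c] = 1.0
        U[index[w2], c] = 1.0
    return U
```

For example, the TS pairs (politics, game) and (country, building) over a four-word vocabulary yield a 4×2 matrix with exactly two ones per column, one per constrained word.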
", "num": null, "text": "Examples of constraints used in English clustering." }, "TABREF5": { "html": null, "type_str": "table", "content": "", "num": null, "text": "show some examples of TS and TT constraints for the three languages. One pair of constraints (relationship & trade [TT] and relationship & vehicle [TS]) was excluded from the set, because relationship is usually translated into Spanish and Russian by a plural form (e.g., relaciones). We thus used 29 TT constraints and 29 TS constraints in our experiments." }, "TABREF6": { "html": null, "type_str": "table", "content": "
TT constraints    TS constraints
pobreza & desigualdad    pobreza & enfermedad
democracia & amistad    democracia & máquina
sociedad & mente    sociedad & organismo
educación & vida    educación & viaje
política & matrimonio    política & juego
país & familia    país & edificio
gobierno & reino    gobierno & casa
carrera & cambio    carrera & colina
innovación & evolución    innovación & flor
desempleo & pánico    desempleo & prisión
fe & paz    fe & calor
violencia & pasión    violencia & fuego
ánimo & amor    ánimo & clima
deuda & tensión    deuda & peso
", "num": null, "text": "Examples of constraints used in Spanish clustering." }, "TABREF7": { "html": null, "type_str": "table", "content": "
", "num": null, "text": "Examples of constraints used in Russian clustering." }, "TABREF8": { "html": null, "type_str": "table", "content": "
TT constraints    TS constraints
бедность & неравенство    бедность & болезнь
демократия & дружба    демократия & механизм
общество & разум    общество & организм
образование & жизнь    образование & путешествие
политика & брак    политика & игра
страна & семья    страна & постройка
правительство & королевство    правительство & хозяйство
карьера & перемена    карьера & холм
инновация & эволюция    инновация & цветок
безработица & паника    безработица & тюрьма
вера & мир    вера & тепло
насилие & страсть    насилие & огонь
настроение & любовь    настроение & климат
долг & напряжение    долг & вес
", "num": null, "text": "Examples of constraints used in Russian clustering" }, "TABREF11": { "html": null, "type_str": "table", "content": "
System     UNCONSTRAINED    TS CONST    TT CONST    WordNet baseline
English    0.77             0.70        0.76        0.40
Spanish    0.74             0.69        0.72        -
Russian    0.67             0.62        0.73        -
Table 5: English, Russian, and Spanish system coverage (unconstrained setting).
Language   Total seeds   Total expressions identified   Total sentences
English    62            1,512                          4,456
Spanish    72            1,538                          22,219
Russian    85            1,815                          38,703
", "num": null, "text": "UNCONSTRAINED, CONSTRAINED, and baseline precision in the identification of metaphorical expressions." }, "TABREF12": { "html": null, "type_str": "table", "content": "
\u0427\u0423\u0412\u0421\u0422\u0412\u0410 --\u041e\u0413\u041e\u041d\u042c (feeling is fire)
\u043f\u043e\u0442\u0443\u0448\u0438\u0442\u044c \u0441\u0442\u0440\u0430\u0434\u0430\u043d\u0438\u044f, \u043f\u043e\u0433\u0430\u0441\u0438\u0442\u044c \u0441\u0442\u0440\u0430\u0434\u0430\u043d\u0438\u044f, \u0434\u0443\u0448\u0430 \u043f\u044b\u043b\u0430\u0435\u0442, \u0434\u0443\u0448\u0430 \u043f\u043e\u043b\u044b\u0445\u0430\u0435\u0442, \u0434\u0443\u0448\u0430 \u0433\u043e\u0440\u0438\u0442,
\u0437\u0430\u0436\u0438\u0433\u0430\u0442\u044c \u0441\u0435\u0440\u0434\u0446\u0435, \u0441\u0435\u0440\u0434\u0446\u0435 \u043f\u044b\u043b\u0430\u0435\u0442, \u0441\u0436\u0435\u0447\u044c \u0441\u0435\u0440\u0434\u0446\u0435, \u0441\u0435\u0440\u0434\u0446\u0435 \u0437\u0430\u0436\u0433\u043b\u043e\u0441\u044c, \u0441\u0435\u0440\u0434\u0446\u0435 \u0432\u0441\u043f\u044b\u0445\u043d\u0443\u043b\u043e, \u0440\u0430\u0437\u0436\u0435\u0447\u044c
\u0434\u0443\u0445, \u0434\u0443\u0445 \u043f\u044b\u043b\u0430\u0435\u0442, \u0437\u0430\u0436\u0435\u0447\u044c \u0434\u0443\u0445
\u041f\u0420\u0415\u0421\u0422\u0423\u041f\u041d\u041e\u0421\u0422\u042c --\u0411\u041e\u041b\u0415\u0417\u041d\u042c (crime is a disease)
\u0432\u044b\u044f\u0432\u0438\u0442\u044c \u043f\u0440\u0435\u0441\u0442\u0443\u043f\u043b\u0435\u043d\u0438\u0435, \u043f\u0440\u0435\u0441\u0442\u0443\u043f\u043b\u0435\u043d\u0438\u0435 \u0437\u0430\u0440\u0430\u0437\u0438\u043b\u043e, \u043e\u0431\u043d\u0430\u0440\u0443\u0436\u0438\u0442\u044c \u043f\u0440\u0435\u0441\u0442\u0443\u043f\u043b\u0435\u043d\u0438\u0435, \u043f\u0440\u043e\u0432\u043e\u0446\u0438\u0440\u043e\u0432\u0430\u0442\u044c
\u043f\u0440\u0435\u0441\u0442\u0443\u043f\u043b\u0435\u043d\u0438\u0435,
", "num": null, "text": "\u0432\u044b\u0437\u044b\u0432\u0430\u0442\u044c \u0443\u0431\u0438\u0439\u0441\u0442\u0432\u0430, \u0438\u0441\u043a\u043e\u0440\u0435\u043d\u0438\u0442\u044c \u0443\u0431\u0438\u0439\u0441\u0442\u0432\u0430, \u0441\u0438\u043c\u0443\u043b\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u0443\u0431\u0438\u0439\u0441\u0442\u0432\u043e, \u043f\u0440\u0435\u0434\u0443\u043f\u0440\u0435\u0436\u0434\u0430\u0442\u044c \u0443\u0431\u0438\u0439\u0441\u0442\u0432\u043e, \u0438\u0437\u043b\u0435\u0447\u0438\u0442\u044c \u043d\u0430\u0441\u0438\u043b\u0438\u0435, \u043f\u0435\u0440\u0435\u043d\u0435\u0441\u0442\u0438 \u043d\u0430\u0441\u0438\u043b\u0438\u0435, \u0440\u0430\u0441\u043f\u043e\u0437\u043d\u0430\u0442\u044c \u043d\u0430\u0441\u0438\u043b\u0438\u0435, \u0438\u0441\u0446\u0435\u043b\u044f\u0442\u044c \u0433\u0440\u0435\u0445\u0438, \u0437\u0430\u0431\u043e\u043b\u0435\u0442\u044c \u0433\u0440\u0435\u0445\u043e\u043c, \u0438\u0437\u043b\u0435\u0447\u0438\u0432\u0430\u0442\u044c \u0433\u0440\u0435\u0445\u0438, \u0432\u044b\u043b\u0435\u0447\u0438\u0442\u044c \u0433\u0440\u0435\u0445\u0438, \u0431\u043e\u043b\u0435\u0442\u044c \u0433\u0440\u0435\u0445\u043e\u043c" }, "TABREF13": { "html": null, "type_str": "table", "content": "
System    AGG                  WN                   HGFC
          Precision   Recall   Precision   Recall   Precision   Recall
English   0.36        0.11     0.29        0.03     0.69        0.61
Spanish   0.23        0.12     -           -        0.59        0.54
Russian   0.28        0.09     -           -        0.62        0.42
", "num": null, "text": "HGFC and baseline performance in the identification of metaphorical associations." }, "TABREF14": { "html": null, "type_str": "table", "content": "
System    AGG    WN     HGFC
English   0.47   0.12   0.65
Spanish   0.38   -      0.54
Russian   0.40   -      0.59
", "num": null, "text": "HGFC and baseline precision in the identification of metaphorical expressions." } } } }