{ "paper_id": "J04-1002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:56:59.945393Z" }, "title": "CorMet: A Computational, Corpus-Based Conventional Metaphor Extraction System", "authors": [ { "first": "Zachary", "middle": [ "J" ], "last": "Mason", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brandeis University", "location": {} }, "email": "zmason@amazon.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "CorMet is a corpus-based system for discovering metaphorical mappings between concepts. It does this by finding systematic variations in domain-specific selectional preferences, which are inferred from large, dynamically mined Internet corpora. Metaphors transfer structure from a source domain to a target domain, making some concepts in the target domain metaphorically equivalent to concepts in the source domain. The verbs that select for a concept in the source domain tend to select for its metaphorical equivalent in the target domain. This regularity, detectable with a shallow linguistic analysis, is used to find the metaphorical interconcept mappings, which can then be used to infer the existence of higher-level conventional metaphors. Most other computational metaphor systems use small, hand-coded semantic knowledge bases and work on a few examples. Although CorMet's only knowledge base is WordNet (Fellbaum 1998) it can find the mappings constituting many conventional metaphors and in some cases recognize sentences instantiating those mappings. CorMet is tested on its ability to find a subset of the Master Metaphor List (Lakoff, Espenson, and Schwartz 1991).", "pdf_parse": { "paper_id": "J04-1002", "_pdf_hash": "", "abstract": [ { "text": "CorMet is a corpus-based system for discovering metaphorical mappings between concepts. It does this by finding systematic variations in domain-specific selectional preferences, which are inferred from large, dynamically mined Internet corpora. Metaphors transfer structure from a source domain to a target domain, making some concepts in the target domain metaphorically equivalent to concepts in the source domain. The verbs that select for a concept in the source domain tend to select for its metaphorical equivalent in the target domain. This regularity, detectable with a shallow linguistic analysis, is used to find the metaphorical interconcept mappings, which can then be used to infer the existence of higher-level conventional metaphors. Most other computational metaphor systems use small, hand-coded semantic knowledge bases and work on a few examples. Although CorMet's only knowledge base is WordNet (Fellbaum 1998) it can find the mappings constituting many conventional metaphors and in some cases recognize sentences instantiating those mappings. CorMet is tested on its ability to find a subset of the Master Metaphor List (Lakoff, Espenson, and Schwartz 1991).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "argues that rather than being a rare form of creative language, some metaphors are ubiquitous, highly structured, and relevant to cognition. To date, there has been no robust, broadly applicable computational metaphor interpretation system, a gap this article is intended to take a first step toward filling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Most computational models of metaphor depend on hand-coded knowledge bases and work on a few examples. CorMet is designed to work on a larger class of metaphors by extracting knowledge from large corpora without drawing on any handcoded knowledge sources besides WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A method for computationally interpreting metaphorical language would be useful for NLP. Although metaphorical word senses can be cataloged and treated as just another part of the lexicon, this kind of representation ignores regularities in polysemy. A conventional metaphor may have a very large number of linguistic manifestations, which makes it useful to model the metaphor's underlying mechanisms. CorMet is not capable of interpreting any manifestation of conventional metaphor but is a step toward such a system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "CorMet analyzes large corpora of domain-specific documents and learns the selectional preferences of the characteristic verbs of each domain. A selectional preference is a verb's predilection for a particular type of argument in a particular role. For instance, the object of the verb pour is generally a liquid. Any noun that pour takes as an an object is likely to be intended as a liquid, either metaphorically or literally. CorMet finds conventional metaphors by finding systematic differences in selectional preferences between domains. For instance, if CorMet were to find a sentence like Funds poured into his bank account in a document from the FINANCE domain, it could infer that in that domain, pour has a selection preference for financial assets in its subject. By comparing this selectional preference with pour's selectional preferences in the LAB domain, CorMet can infer a metaphorical mapping from money to liquids. By finding sets of co-occuring interconcept mappings (like the above mapping and a mapping from investments to containers, for instance), Cormet can articulate the higher-order structure of conceptual metaphors. Note that Cormet is designed to detect higherorder conceptual metaphors by finding some of the sentences embodying some of the interconcept mappings constituting the metaphor of interest but is not designed to be a tool for reliably detecting all instances of a particular metaphor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "CorMet's domain-specific corpora are obtained from the Internet. In this context, a domain is a set of related concepts, and a domain-specific corpus is a set of documents relevant to those concepts. CorMet's input parameters are two domains between which to search for interconcept mappings and, for each domain, a set of characteristic keywords.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "CorMet is tested on its ability to find a subset of the Master Metaphor List (Lakoff, Espenson, and Schwartz 1991) , a manually compiled catalog of metaphor. CorMet works on domains that are specific and concrete (e.g., the domain of finance, but not that of actions). CorMet's discrimination is relatively coarse: It measures trends in selectional preferences across many documents, so common mappings are discernible. 
CorMet considers the selectional preferences only of verbs, on the theory that they are generally more selectively restrictive than nouns or adjectives.", "cite_spans": [ { "start": 77, "end": 114, "text": "(Lakoff, Espenson, and Schwartz 1991)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "It is worth noting that WordNet, CorMet's primary knowledge source, implicitly encodes some of the metaphors CorMet is intended to find; Peters and Peters (2000) use WordNet to find many artifact/cognition metaphors. Also, WordNet enumerates some metaphorical senses of some verbs. CorMet does not use any of WordNet's information about verbs and ignores regularities in the distribution of noun homonyms that could be used to find some metaphors.", "cite_spans": [ { "start": 137, "end": 161, "text": "Peters and Peters (2000)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The article is organized as follows: Section 2 describes the mechanisms by which conventional metaphors are detected. Section 3 walks through CorMet's process in two examples. Section 4 describes how the system's performance is evaluated against the Master Metaphor List (Lakoff, Espenson, and Schwartz 1991) , and Section 5 covers select related work.", "cite_spans": [ { "start": 271, "end": 308, "text": "(Lakoff, Espenson, and Schwartz 1991)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Ideally, CorMet could draw on a large quantity of manually vetted, highly representative domain-specific documents. The precompiled corpora available on-line (Kucera 1992; Marcus, Santorini, and Marcinkiewicz 1993) do not span enough subjects. Other on-line data sources include the Internet's hierarchically structured indices, such as Yahoo's ontology (www.yahoo.com) and Google's (www.google.com). Each index entry contains a small number of high-quality links to relevant Web pages, but this is not helpful, because CorMet requires many documents, and those documents need not be of more than moderate quality. Searching the Internet for domain-specific text seems to be the only way to obtain sufficiently large, diverse corpora.", "cite_spans": [ { "start": 158, "end": 171, "text": "(Kucera 1992;", "ref_id": "BIBREF10" }, { "start": 172, "end": 214, "text": "Marcus, Santorini, and Marcinkiewicz 1993)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "CorMet obtains documents by submitting queries to the Google search engine. There are two types of queries: one to fetch any domain-specific documents and an-other to fetch domain-specific documents that contain a particular verb. The first kind of query consists of a conjunction of from two to five randomly selected domain keywords. Domain keywords are words characteristic of a domain, supplied by the user as an input. For the FINANCE domain, a reasonable set of keywords is stocks, bonds, NASDAQ, Dow, investment, finance. 
Each query incorporates only a few keywords in order to maximize the number of distinct possible queries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "Queries for domain-specific documents containing a particular verb are composed of a conjunction of domain-specific terms and a disjunction of forms of the verb that are more likely to be verbs than other parts of speech. For the verb attack, for instance, acceptable forms are attacked and attacking, but not attack and attacks, which are more likely to be nouns. The syntactic categories in which a word form appears are determined by reference to WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "Some queries for the verb attack in the FINANCE domain are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "1. (attacked OR attacking) AND (bonds AND Dow AND investment)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "(attacked OR attacking) AND (NASDAQ AND investment AND finance)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "(attacked OR attacking) AND (stocks AND bonds AND NASDAQ) 4. (attacked OR attacking) AND (stocks AND NASDAQ AND Dow)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "Queries return links to up to 10,000 documents, of which CorMet fetches and analyzes no more than 3,000. In the 13 domains studied, about 75% of these documents are relevant to the domain of interest (as measured through a randomly chosen, handevaluated sample of 100 documents per domain), so the noise is substantial. The documents are processed to remove embedded scripts and HTML tags. The mined documents are parsed with the apple pie parser (Sekine and Grishman 1995) . Case frames are extracted from parsed sentences using templates; for instance, (S (NP & OBJ) (VP (were | was | got | get) (VP WORDFORM-PASSIVE)) is used to extract roles for passive, agentless sentences (where WORDFORM-PASSIVE is replaced by a passive form of the verb under analysis).", "cite_spans": [ { "start": 447, "end": 473, "text": "(Sekine and Grishman 1995)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Searching the Net for Domain Corpora", "sec_num": "2.1" }, { "text": "Learning the selectional preferences for a verb in a domain is expensive in terms of time, so it is useful to find a small set of important verbs in each domain. CorMet seeks information about verbs typical of a domain, because these verbs are more likely to figure in metaphors in which that domain is the metaphor's source. 
Besiege, for instance, is characteristic of the MILITARY domain and appears in many instances of the MILITARY \u2192 MEDICINE mapping, such as The antigens besieged the virus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding Characteristic Predicates", "sec_num": "2.2" }, { "text": "To find domain-characteristic verbs, CorMet dynamically obtains a large sample of domain-relevant documents, decomposes them into a bag-of-words representation, stems the words with an implementation of the Porter (1980) stemmer, and finds the ratio of occurrences of each word stem to the total number of stems in the domain corpus. The frequency of each stem in the corpus is compared to its frequency in general English (as recorded in an English-language frequency dictionary [Kilgarriff 2003 ]).", "cite_spans": [ { "start": 207, "end": 220, "text": "Porter (1980)", "ref_id": "BIBREF20" }, { "start": 480, "end": 496, "text": "[Kilgarriff 2003", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Finding Characteristic Predicates", "sec_num": "2.2" }, { "text": "The 400 verb stems with the highest relative frequency (computed as a ratio of the stem's frequency in the domain to its frequency in the English frequency dictionary) are considered characteristic. CorMet treats any word form that may be a verb (according to WordNet) as though it is a verb, which biases CorMet toward verbs with common nominal homonyms. Word stems that have high relative frequency in more than one domain, like e-mail and download, are eliminated on the suspicion that they are more characteristic of documents on the Internet in general than of a substantive domain. Table 1 lists the 20 highest-scoring stems in the LAB and FINANCE domains.", "cite_spans": [], "ref_spans": [ { "start": 588, "end": 595, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Finding Characteristic Predicates", "sec_num": "2.2" }, { "text": "There are three constraints on CorMet's selectional-preference-learning algorithm. First, it must tolerate noise, because complex sentences are often misparsed, and the case frame extractor is error prone. Second, it should be able to work around WordNet's lacunae. Finally, there should be a reasonable metric for comparing the similarity between selectional preferences. CorMet first uses the selectional-preference-learning algorithm described in Resnik (1993) , then clustering over the results. Resnik's algorithm takes a set of words observed in a case slot (e.g., the subject of pour or the indirect object of give) and finds the WordNet nodes that best characterize the selectional preferences of that slot. (Note that WordNet nodes are treated as categories subcategorizing their descendants.) A case slot has a preference for a WordNet node to the extent that that node, or one of its descendants, is more likely to appear in that case slot than it is to appear at random.", "cite_spans": [ { "start": 450, "end": 463, "text": "Resnik (1993)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "An overall measure of the choosiness of a case slot is selectional-preference strength, S R (p), defined as the relative entropy of the posterior probability P(c|p) and the prior probability P(c) (where P(c) is the a priori probability of the appearance of a WordNet node c, or one of its descendants, and P(c|p) is the probability of that node or one of its descendants appearing in a case slot p.) 
Recall that the relative entropy of two distributions X and Y, D(X||Y), is the inefficiency incurred by using an encoding optimal for Y to encode X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "S R (p) = D(P(c|p)||P(c)) = c P(c|p) log P(c|p) P(c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "The degree to which a case slot selects for a particular node is measured by selectional association. In effect, the selectional associations divide up the selectional preference strength for a case slot among that slot's possible fillers. Selectional association is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "\u039b R (p, c) = 1 S R (p) P(c|p) log P(c|p) P(c) To compute \u039b R (p, c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": ", what is needed is a distribution over word classes, but what is observed in the corpus is a distribution over word forms. Resnik's algorithm works around this problem by approximating a word class distribution from the word form distribution. For each word form observed filling a case slot, credit is divided evenly among all of that word form's possible senses (and their ancestors in WordNet). Although Resnik's algorithm makes no explicit attempt at sense disambiguation, greater activation tends to accumulate in those nodes that best characterize a predicate's selectional preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "CorMet uses Resnik's algorithm to learn domain-specific selection preferences. It often finds different selectional preferences for predicates whose preferences should, intuitively, be the same. In the MILITARY domain, the object of assault selects strongly for fortification but not social group, whereas the selectional preferences for the object of attack are the opposite. Taking the cosine of the selectional preferences of these two case slots (one of many possible similarity metrics) gives a surprisingly low score. In order to facilitate more accurate judgments of selectional-preference similarity, CorMet finds clusters of WordNet nodes that, although not as accurate, allow more meaningful comparisons of selectional preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "Clusters are built using the nearest-neighbor clustering algorithm (Jain, Murty, and Flynn 1999) . A predicate's selectional preferences are represented as vectors whose nth element represents the selectional association of the nth WordNet node for that predicate. The similarity function used is the dot product of the two selectional-preference vectors. 
Empirically, the level of granularity obtained by running nearest-neighbor clustering twice (i.e., clustering over the sets of nodes constituting selectional preferences, then clustering over the clusters) produces the most conceptually coherent clusters.", "cite_spans": [ { "start": 67, "end": 96, "text": "(Jain, Murty, and Flynn 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "There are typically fewer than 100 second-order clusters (i.e., clusters of clusters) per domain. In the LAB domain there are 54 second-order clusters, and in the FINANCE domain there are 67. The time complexity of searching for metaphorical interconcept mappings between two domains is proportional to the number of pairs of salient domain objects, so it is more efficient to search over pairs of salient clusters than over the more numerous individual salient nodes. Table 2 shows a MILITARY cluster. These clusters are helpful for finding verbs with similar, but not identical, selectional preferences. Although attack, for instance, does not select for fortification, it does select for other elements of fortification's cluster, such as building and defensive structure.", "cite_spans": [], "ref_spans": [ { "start": 469, "end": 476, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "The fundamental limitation of WordNet with respect to selectional-preference learning is that it fails to exhaust all possible lexical relationships. WordNet can hardly be blamed: The task of recording all possible relationships between all English words is prohibitively large, if not infinite. Nevertheless, there are many words that intuitively should have a common parent but do not. For instance, liquid body substance and water should both be hyponyms of liquid, but in WordNet their shallowest common ancestor is substance. One of the descendants of substance is solid, so there is no single node that represents all liquids.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "Li and Abe (1998) describe another method of corpus-driven selectional-preference learning that finds a tree cut of WordNet for each case slot. A tree cut is a set of Table 2 The elements of a cluster of WordNet nodes characteristic of the MILITARY domain.", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 174, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "penal institution-1 fortification-1 correctional institution-1 defensive structure-1 institution-2 housing-1 structure-1 room-1 establishment-4 prison-1 building-1 tower-1 area-3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "nodes that specifies a partition of the ontology's leaf nodes, where a node stands for all the leaf nodes descended from it. The method chooses among possible tree cuts according to minimum-description-length criteria. The description length of a tree cut representation is the sum of the size of the tree cut itself (i.e., the minimum number of nodes specifying the partition) and the space required for representing the observed data with that tree cut. 
For CorMet's purposes, the problem with this approach is that it is difficult to find clusters of (possibly hypernymically related) nodes representing a selectional preference using its results (because the tree cut includes exactly one node on each path from each leaf node to the root). There are similar objections to similar approaches such as that of Carroll and McCarthy (2000) .", "cite_spans": [ { "start": 812, "end": 839, "text": "Carroll and McCarthy (2000)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Learning", "sec_num": "2.3" }, { "text": "Polarity is a measure of the directionality and magnitude of structure transfer between two concepts or two domains. Nonzero polarity exists when language characteristic of a concept from one domain is used in a different domain of a different concept. The kind of characteristic language CorMet can detect is limited to verbal selectional preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polarity", "sec_num": "2.4" }, { "text": "Say CorMet is searching for a mapping between the concepts liquids (characteristic of the LAB domain) and assets (characteristic of the FINANCE domain), as illustrated in Figure 1 . There are verbs in LAB that strongly select for liquids, such as pour, flow, and freeze. In FINANCE, these verbs select for assets. In FINANCE there are verbs that strongly select for assets such as spend, invest, and tax. In the LAB domain, these verbs select for nothing in particular. This suggests that liquid is the source concept and asset is the target concept, which implies that LAB and FINANCE are the source and target domains, respectively. CorMet computes the overall polarity between two domains (as opposed to between two concepts) by summing over the polarity between each pair of high-salience concepts from the two domains of interest.", "cite_spans": [], "ref_spans": [ { "start": 171, "end": 179, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Polarity", "sec_num": "2.4" }, { "text": "Interconcept polarity is defined as follows: Let \u03b1 be the set of case slots in domain X with the strongest selectional preference for the node cluster A. Let \u03b2 be the set of case slots in domain Y with the strongest selectional preferences for the node cluster B. The degree of structure flow from A in X to B in Y is computed as the degree to which the predicates \u03b1 select for the nodes B in Y, or selection strength(Y , \u03b1, B ). Structure flow in the opposite direction is selection strength(X, \u03b2, A). The definition of selection strength(Domain, case slots, node cluster) is the average of the selectionalpreference strengths of the predicates in case slots for the nodes in node cluster in Domain. The polarity for \u03b1 and \u03b2 is the difference in the two quantities. If the polarity is near zero, there is not much structure flow and no evidence for a metaphoric mapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polarity", "sec_num": "2.4" }, { "text": "In some cases a difference in selectional preferences between domains does not indicate the presence of a metaphor. To take a fictitious but illustrative example, say", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polarity", "sec_num": "2.4" }, { "text": "Asymmetric structure transfer between LAB and FINANCE. Predicates from LAB that select for liquids are transferred to FINANCE and select for money. 
On the other hand, predicates from FINANCE that select for money are transferred to LAB and do not select for liquids. that in the LAB domain the subject of sit has a preference for chemists whereas in the FINANCE domain it has a preference for investment bankers. The difference in selectional preferences is caused by the fact that chemists are the kind of person more likely to appear in LAB documents and investment bankers in FINANCE ones. Instances like this are easy to filter out because their polarity is zero.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "A verb is treated as characteristic of a domain X if it is at least twice as frequent in the domain corpus as it is in general English and it is at least one and a half times as frequent in domain X as in the contrasting domain Y (these ratios were chosen empirically). Pour, for instance, occurs three times as often in FINANCE and twentythree times as often in LAB as it does in general English. Since it is nearly eight times as frequent in LAB as in FINANCE, it is considered characteristic of the former. This heuristic resolves the confusion than can be caused by the ubiquity of certain conventional metaphors-the high density of metaphorical uses of pour in FINANCE could otherwise make it seem as though pour is characteristic of that domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "A verb with weak selectional preferences (e.g., exist) is a bad choice for a characteristic predicate even if it occurs disproportionately often in a domain. Highly selective verbs are more useful because violations of their selectional preferences are more informative. For this reason a predicate's salience to a domain is defined as its selectional-preference strength times the ratio of its frequency in the domain to its frequency in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Literal and metaphorical selectional preferences may coexist in the same domain. Consider the selectional preferences of pour in the chemical and financial domains. In the LAB domain, pour is mostly used literally: People pour liquids. There are occasional metaphorical uses (e.g., Funding is pouring into basic proteomics research), but the literal sense is more common. 
In FINANCE, pour is mostly used metaphorically, although there are occasionally literal uses (e.g., Today oil poured into the new Turkmenistan pipeline).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Algorithms 1-3 show pseudocode for finding metaphoric mappings between concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Algorithm 1: Find Inter Concept Mappings(domain1, domain2) comment: Find mappings from concepts in domain1 to concepts in domain2 or vice versa", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Domain 1 Clusters \u2190 Get Best Clusters(domain1) Domain 2 Clusters \u2190 Get Best Clusters(domain2) for each Concept 1 \u2208 Domain 1 Clusters do \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 for each Concept 2 \u2208 Domain 2 Clusters do \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 Polarity Score \u2190 Detect Inter Concept Mapping(Concept 1, Concept 2, domain1, domain2) if Polarity score > NOISE THRESHOLD then output mapping(Concept 1 \u2192 Concept 2) if Polarity score < \u2212NOISE THRESHOLD then output mapping(Concept 2 \u2192 Concept 1) Algorithm 2: Detect Inter Concept Mapping(Concept 1, Concept 2, domain1, do- main2) polarity from 1 to 2 \u2190 Inter Concept Polarity(Concept 1, Concept 2, domain1, domain2) polarity from 2 to 1 \u2190 Inter Concept Polarity(Concept 2, Concept 1, domain2, domain1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "if absolute value(polarity from 1 to 2 \u2212 polarity from 2 to 1) < C1 then return (0); if polarity from 1 to 2 > C2 and polarity from 2 to 1 > C2 then return (0); return (polarity from 1 to 2 \u2212 polarity from 2 to 1) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "According to the thematic-relation hypothesis (Grubner 1976), many domains are conceived of in terms of physical objects moving along paths between locations in space. In the money domain, assets are mapped to objects and asset holders are mapped to locations. In the idea domain, ideas are mapped to objects, minds are mapped to locations, and communications are mapped to paths. Axioms of inference from the target domain usually become available for reasoning about the source domain, unless there is an aspect of the source domain that specifically contradicts them. For instance, in the domain of material objects, a thing moved from point X to point Y is no longer at X, but in the idea domain, it exists at both locations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systematicity", "sec_num": "2.5" }, { "text": "Thematically related metaphors may consistently co-occur in the same sentences. For example, the metaphors LIQUID \u2192 MONEY and CONTAINERS \u2192 INSTITUTIONS often co-occur, as in the sentence Capital flowed into the new company. Conversely, cooccurring metaphors are often components of a single metaphorical conceptualization. 
A metaphorical mapping is therefore more credible when it is a component of a system of mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systematicity", "sec_num": "2.5" }, { "text": "In CorMet, systematicity measures a metaphorical mapping's tendency to co-occur with other mappings. The systematicity score for a mapping X is defined as the number of strong, distinct mappings co-occurring with X. This measure goes only a little way toward capturing the extent to which a metaphor exhibits the structure described in the thematic-relations hypothesis, but extending CorMet to find the entities that correspond to objects, locations, and paths is beyond the scope of this article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systematicity", "sec_num": "2.5" }, { "text": "CorMet computes a confidence measure for each metaphor it discovers. Confidence is a function of three things. The more verbs mediating a metaphor (as attack and assault mediate ENEMY \u2192 DISEASE in The antigen attacked the virus and Chemotherapy assaults the tumor), the more credible it is. Strongly unidirectional structure flow from source domain to target makes a mapping more credible. Finally, a mapping is more likely to be correct if it systematically co-occurs with other mappings. The confidence measure should not be interpreted as a probability of correctness: The data available for calibrating such a distribution are inadequate. The weights of each factor, empirically assigned plausible values, are given in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 723, "end": 730, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Confidence Rating", "sec_num": "2.6" }, { "text": "The confidence measure is intended to wrap all the available evidence about a metaphor's credibility into one number. A principled way of doing this is desirable, but unfortunately there are not enough data to make meaningful use of machinelearning techniques to find the best set of components and weights. There is substantial arbitrariness in the confidence rating: The components used and the weights they are ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confidence Rating", "sec_num": "2.6" }, { "text": "This section provides a walk-through of the derivation and analysis of the concept mapping LIQUID \u2192 MONEY and components of the interconcept mapping WAR \u2192 MEDICINE. In the interests of brevity only representative samples of CorMet's data are shown. See Mason (2002) for a more detailed account.", "cite_spans": [ { "start": 253, "end": 265, "text": "Mason (2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Two Examples", "sec_num": "3." }, { "text": "CorMet's inputs are two domain sets of characteristic keywords for each domain (Table 4). The keywords must characterize a cluster in the space of Internet documents, but CorMet is relatively insensitive to the particular keywords. It is difficult to find keywords characterizing a cluster centering on money alone, so keywords for a more general domain, FINANCE, are provided. It is also difficult to characterize a cluster of documents mostly about liquids. Chemical-engineering articles and hydrographic encyclopedias tend to pertain to the highly technical aspects of liquids instead of their everyday behavior. 
Documents related to laboratory work are targeted on the theory that most references to liquids in a corpus dedicated to the manipulation and transformation of different states of matter are likely to be literal and will not necessarily be highly technical. Tables 5 and 6 show the top 20 characteristic verbs for LAB and FINANCE, respectively.", "cite_spans": [], "ref_spans": [ { "start": 874, "end": 888, "text": "Tables 5 and 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "LIQUID \u2192 MONEY", "sec_num": "3.1" }, { "text": "CorMet finds the selectional preferences of all of the characteristic predicates' case slots. A sample of the selectional preferences of the top 20 verbs in LAB and FINANCE are shown in Tables 7 and 8, respectively. The leftmost columns of these two tables have the (stemmed form of the) characteristic verb and the thematic role characterized. The right-hand sides have clusters of characteristic nodes. The numbers associated with the nodes are the bits of uncertainty about the identity of a word x resolved by the fact that x fills the given case slot, or P(x \u2190 N) \u2212 P(x \u2190 N|case slot(x)) (where x \u2190 N is read as x is N or a hyponym of N).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LIQUID \u2192 MONEY", "sec_num": "3.1" }, { "text": "All of the 400 possible mappings between the top 20 concepts (clusters) from the two domains are examined. Each possible mapping is evaluated in terms of polarity, the number of frames instantiating the mapping, and the systematic co-occurrence of that mapping with different, highly salient mappings. The best mappings for LAB \u00d7 FINANCE are shown in Table 9 .", "cite_spans": [], "ref_spans": [ { "start": 351, "end": 358, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "LIQUID \u2192 MONEY", "sec_num": "3.1" }, { "text": "Mappings are expressed in abbreviated form for clarity, with only the most recognizable (if not necessarily the most salient) node of each concept displayed. The foremost mapping characterizes money in terms of liquid, the mapping for which the two domains were selected. The second represents a somewhat less intuitive mapping from liquids to institutions. This metaphor is driven primarily by institutions' capacity ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LIQUID \u2192 MONEY", "sec_num": "3.1" }, { "text": "The bottom of the economy dropped out. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "This is an airtight investment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "CorMet has found mappings that can reasonably be construed as corresponding to these metaphors. Compare the mappings from the Master Metaphor List with frames mined by this system and identified as instantiating liquid \u2192 income, shown in Table 10 . It is important to note that although CorMet can list the case frames that have driven the derivation of a particular high-level mapping, it is designed to discover highlevel mappings, not interpret or even recognize particular instances of metaphorical language. 
Just as in the Master Metaphor List, there are frames in the CorMet listing in which money and equities are characterized as liquids, are moved as liquids (i.e., pouring earnings and pumping reserves) and change state as liquids (i.e., melting stocks, dissolving stakes, evaporating profits, frozen money).", "cite_spans": [], "ref_spans": [ { "start": 238, "end": 246, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "This subsection describes the search for mappings between the MEDICINE and MIL-ITARY domains. The domain keywords for MEDICINE and MILITARY are shown in Table 11 . The characteristic verbs of the MILITARY and MEDICINE domains are given in Tables 12 and 13, respectively. Their selectional preferences are given in Tables 14 and 15, respectively. The highest-quality mappings between the MILITARY and MEDICINE domains are shown in Table 16 . This pair of domains produces more mappings than the the LAB and FINANCE pair. Many source concepts from the MILITARY domain are mapped to body parts. The heterogeneity of the source concepts seems to be driven by the heterogeneity of possible military targets. Similarly, many source concepts are mapped to drugs. The case frames supporting this mapping suggest that this is because of Mason CorMet Table 11 Characteristic keywords for the MEDICINE and MILITARY domains.", "cite_spans": [ { "start": 829, "end": 834, "text": "Mason", "ref_id": null } ], "ref_spans": [ { "start": 153, "end": 161, "text": "Table 11", "ref_id": "TABREF0" }, { "start": 314, "end": 346, "text": "Tables 14 and 15, respectively.", "ref_id": "TABREF0" }, { "start": 431, "end": 439, "text": "Table 16", "ref_id": "TABREF0" }, { "start": 842, "end": 850, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "MILITARY \u2192 MEDICINE", "sec_num": "3.2" }, { "text": "MEDICINE doctor surgeon hospital operate pharmaceutical medicine recuperate organ tissue bacteria virus diagnose cancer sickness nurse research MILITARY army navy soldier battle war attack bombing destruction infantry tactics siege invasion troops barracks the heterogeneity of military aggressors (fortifications do not generally fall into this category; this mapping is an error caused by the frame extractor's frequent confusion of subject and object). These mappings can be interpreted as indicating that things that are attacked map to body parts and things that attack map to drugs. The mapping fortification \u2192 illness represents the mapping of targetable strongholds to disease. Illnesses are conceived of as fortifications besieged by treatment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MILITARY \u2192 MEDICINE", "sec_num": "3.2" }, { "text": "Compare this with the Master Metaphor List's characterization of TREATING ILL-NESS IS FIGHTING A WAR:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MILITARY \u2192 MEDICINE", "sec_num": "3.2" }, { "text": "The Disease is an Enemy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "The Body is a Battleground.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The body is not immune to invasion. 
(b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(a)", "sec_num": null }, { "text": "The disease infiltrates your body and takes over.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(a)", "sec_num": null }, { "text": "Infection is an Attack by the Disease. The virus began an attack on the organ systems. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Medicine is a Weapon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "The so-called cure is no magic bullet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(a)", "sec_num": null }, { "text": "Medical Procedures are Attacks by the Patient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "The doctors tried to wipe out the infection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(a)", "sec_num": null }, { "text": "The Immune System is a Defense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.", "sec_num": null }, { "text": "The body normally has its own defenses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(a)", "sec_num": null }, { "text": "Winning the War is being Cured of the Disease.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "7.", "sec_num": null }, { "text": "(a) Beating measles takes patience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "7.", "sec_num": null }, { "text": "Being Defeated is Dying.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8.", "sec_num": null }, { "text": "The patient finally gave up the battle. CorMet's results can reasonably be interpreted as matching all of the mappings from the Master Metaphor List except winning-is-a-cure and defeat-is-dying. CorMet's failure to find this mapping is caused by the fact that win, lose, and their synonyms do not have high salience in the MILITARY domain, which may be a reflection of the ubiquity of win and lose outside of that domain. Table 17 shows sample frames from which the body part \u2192 {fortification, vehicle, military action, region, skilled worker} mapping was derived.", "cite_spans": [], "ref_spans": [ { "start": 422, "end": 430, "text": "Table 17", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "(a)", "sec_num": null }, { "text": "This section describes the evaluation of CorMet against a gold standard, specifically, by determining how many of the metaphors in a subset of the Master Metaphor List (Lakoff, Espenson, and Schwartz 1991) can be discovered by CorMet given a characterization of the relevant source and target domains. The final evaluation of the correspondence between the mappings CorMet discovers and the Master Metaphor List entry is necessarily done by hand. This is a highly subjective method of evaluation; a formal, objective evaluation of correctness would be preferable, but at present no such metric is available.", "cite_spans": [ { "start": 168, "end": 205, "text": "(Lakoff, Espenson, and Schwartz 1991)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Testing against the Master Metaphor List", "sec_num": "4." }, { "text": "The Master Metaphor List is the basis for evaluation because it is composed of manually verified metaphors common in English. The test set is restricted to those elements of the Master Metaphor List with concrete source and target domains. 
This requirement excludes many important conventional metaphors, such as EVENTS ARE ACTIONS. About a fifth of the Master Metaphor List meets this constraint. This fraction is surprisingly small: It turns out that the bulk of the Master Metaphor List consists of subtle refinements of a few highly abstract metaphors. The concept pairs and corresponding domain pairs for the target metaphors in the Master Metaphor List are given in Table 18. A mapping discovered by CorMet is considered correct if submappings specified in the Master Metaphor List are nearly all present with high salience and incorrect submappings are present with comparatively low salience. The mappings discovered that best represent the targeted metaphors are shown in Table 19 .", "cite_spans": [], "ref_spans": [ { "start": 672, "end": 681, "text": "Table 18.", "ref_id": "TABREF0" }, { "start": 981, "end": 989, "text": "Table 19", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Testing against the Master Metaphor List", "sec_num": "4." }, { "text": "Some of these test cases are marked successes. For instance, ECONOMIC HARM IS PHYSICAL INJURY seems to be captured by the mapping from the loss-3 cluster to Mason CorMet Table 18 Master Metaphor List mappings and the domain pairs in which they are sought. the harm-1 cluster. CorMet found reasonable mappings in 10 of 13 cases attempted. This implies 77% accuracy, although in light of the small test and the subjectivity of judgment, this number must not be taken too seriously. Some test cases were disappointing. CorMet found no mapping between THE-ORY and ARCHITECTURE. This seems to be an artifact of the low-quality corpora obtained for these domains. The documents intended to be relevant to architecture were often about zoning or building policy, not the structure of buildings. For theory, many documents were calls for papers or about university department policy. It is unsurprising that there are no particular mappings between two sets of miscellaneous administrative and policy documents. The weakness of the ARCHITECTURE corpus also prevented CorMet from discovering any BODY \u2192 ARCHITECTURE mappings. Accuracy could be improved by refining the process by which domain-specific corpora are obtained to eliminate administrative documents or by requiring documents to have a higher density of domain-relevant terms.", "cite_spans": [ { "start": 157, "end": 162, "text": "Mason", "ref_id": null } ], "ref_spans": [ { "start": 170, "end": 178, "text": "Table 18", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Testing against the Master Metaphor List", "sec_num": "4." }, { "text": "Is it meaningful when CorMet finds a mapping, or will it find a mapping between any pair of domains? To answer this question, CorMet was made to search for mappings between randomly selected pairs of domains. Table 20 lists a set of arbitrarily selected domain pairs and the strength of the polarization between them. In all cases, the polarization is zero. This can be interpreted as an encouraging lack of false positives. Another perspective is that CorMet should have found mappings between some of these pairs, such as MEDICINE and SOCIETY, on the theory that societies can be said to sicken, die, or heal. 
Although this is certainly a valid conventional metaphor, it seems to be less prominent than those metaphors that CorMet did discover.", "cite_spans": [], "ref_spans": [ { "start": 209, "end": 217, "text": "Table 20", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "Testing against the Master Metaphor List", "sec_num": "4." }, { "text": "Two of the most broadly effective computational models of metaphor are Fass (1991) and Martin (1990) , in both of which metaphors are detected through selectionalpreference violations and interpreted using an ontology. They are distinguished from CorMet in that they work on both novel and conventional metaphors and rely on declarative hand-coded knowledge bases. Fass (1991) describes Met*, a system for interpreting nonliteral language that builds on Wilks (1975) and Wilks (1978) . Met* discriminates among metonymic, metaphorical, literal, and anomalous language. It is a component of collative semantics, a semantics for natural language processing that has been implemented in the program meta5 (Fass, 1986 (Fass, , 1987 (Fass, , 1988 . Met* treats metonymy as a way of referring to one thing by means of another and metaphor as a way of revealing an interesting relationship between two entities.", "cite_spans": [ { "start": 71, "end": 82, "text": "Fass (1991)", "ref_id": "BIBREF4" }, { "start": 87, "end": 100, "text": "Martin (1990)", "ref_id": "BIBREF16" }, { "start": 365, "end": 376, "text": "Fass (1991)", "ref_id": "BIBREF4" }, { "start": 454, "end": 466, "text": "Wilks (1975)", "ref_id": "BIBREF23" }, { "start": 471, "end": 483, "text": "Wilks (1978)", "ref_id": "BIBREF24" }, { "start": 702, "end": 713, "text": "(Fass, 1986", "ref_id": "BIBREF2" }, { "start": 714, "end": 727, "text": "(Fass, , 1987", "ref_id": "BIBREF3" }, { "start": 728, "end": 741, "text": "(Fass, , 1988", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "In Met*, a verb's selectional preferences are represented as a vector of types. The verb drink's preference for an animal subject and a liquid object are represented as (animal, drink, liquid) . Metaphorical interpretations are made by finding a sense vector in Met*'s knowledge base whose elements are hypernyms of both the preferred argument types and the actual arguments. For example, the car drinks gasoline maps to the vector (car, drink, gasoline). But car is not a hypernym of animal, so Met* searches for a metaphorical interpretation, coming up with (thing, use, energy source). Martin (1990) describes the Metaphor Interpretation, Denotation, and Acquisition System (MIDAS), a computational model of metaphor interpretation. MIDAS has been integrated with the Unix Consultant (UC), a program that answers English questions about using Unix. UC tries to find a literal answer to each question with which it is presented. If violations of literal selectional preference make this impossible, UC calls on MIDAS to search its hierarchical library of conventional metaphors for one that explains the anomaly. If no such metaphor is found, MIDAS tries to generalize a known conventional metaphor by abstracting its components to the most-specific senses that encompass the question's anomalous language. 
MIDAS then records the most concrete metaphor descended from the new, general metaphor that provides an explanation for the query's language.", "cite_spans": [ { "start": 169, "end": 192, "text": "(animal, drink, liquid)", "ref_id": null }, { "start": 589, "end": 602, "text": "Martin (1990)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "MIDAS is driven by the idea that novel metaphors are derived from known, existing ones. The hierarchical structure of conventional metaphor is a regularity not captured by other computational approaches. Although MIDAS can quickly understand novel metaphors that are the descendants of metaphors in its memory, it cannot interpret compound metaphors or detect intermetaphor relationships besides inheritance. INVESTMENTS \u2192 CONTAINERS and MONEY \u2192 WATER, for instance, are clearly related, but not in a way that MIDAS can represent. Since not all novel metaphors are descendants of common conventional metaphors, MIDAS's coverage is limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "MetaBank (Martin 1994) is an empirically derived knowledge base of conventional metaphors designed for use in natural language applications. MetaBank starts with a knowledge base of metaphors based on the Master Metaphor List. MetaBank can search a corpus for one metaphor or scan a large corpus for any metaphorical content. The search for a target metaphor is accomplished by choosing a set of probe words associated with that metaphor and finding sentences with those words, which are then manually sorted as literal, examples of the target metaphor, examples of a different metaphor, unsystematic homonyms, or something else. MetaBank compiles statistics on the frequency of conventional metaphors and the usefulness of the probe words. MetaBank has been used to study container metaphors in a corpus of UNIX-related e-mail and to study metaphor distributions in the Wall Street Journal. Peters and Peters (2000) mine WordNet for patterns of systematic polysemy by finding pairs of WordNet nodes at a relatively high level in the ontology (but still below the root nodes) whose descendants share a set of common word forms. The nodes publication and publisher, for instance, have paper, newspaper, and magazine as common descendants. This is a metonymic relationship; the system can also capture metaphoric relationships, as in the nodes supporting structure and theory, among whose common descendants are (for example) framework, foundation, and base. Peters and Peters' system found many metaphoric relationships between node pairs that were descendants of the unique beginners artifact and cognition. Goatly (1997) describes a set of linguistic cues of metaphoricality beyond selectional-preference violations, such as metaphorically speaking and, surprisingly, literally. These cues are generally ambiguous (except for metaphorically speaking) but could usefully be incorporated into computational approaches to metaphor.", "cite_spans": [ { "start": 9, "end": 22, "text": "(Martin 1994)", "ref_id": "BIBREF17" }, { "start": 892, "end": 916, "text": "Peters and Peters (2000)", "ref_id": "BIBREF19" }, { "start": 1608, "end": 1621, "text": "Goatly (1997)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." 
}, { "text": "CorMet embodies a method for semiautomatically finding metaphoric mappings between concepts, which can then be used to infer conventionally metaphoric relationships between domains. It can sometimes identify metaphoric language, if it manifests as a common selectional-preference gradient between domains, but is far from being able to recognize metaphoric language in general. CorMet differs from other computational approaches to metaphor in requiring no manually compiled knowledge base besides WordNet. It has successfully found some of the conventional metaphors on the Master Metaphor List.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "CorMet uses gradients in selectional preferences learned from dynamically mined, domain-specific corpora to identify metaphoric mappings between concepts. It is reasonably accurate despite the noisiness of many of its components. CorMet demonstrates the viability of a computational, corpus-based approach to conventional metaphor but requires more work before it can constitute a viable NLP tool.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Computational LinguisticsVolume 30, Number 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word sense disambiguation using automatically acquired verbal preferences", "authors": [ { "first": "J", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "D", "middle": [], "last": "Mccarthy", "suffix": "" } ], "year": 2000, "venue": "Computers and the Humanities", "volume": "34", "issue": "", "pages": "1--2", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carroll, J., and D. McCarthy. 2000. Word sense disambiguation using automatically acquired verbal preferences. Computers and the Humanities, 34(1-2).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Metaphor and cultural coherence", "authors": [ { "first": "", "middle": [], "last": "Cho", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 27th Conference on Cross-Language Studies and Contrastive Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cho, See-Young. 1993. Metaphor and cultural coherence. In Proceedings of the 27th Conference on Cross-Language Studies and Contrastive Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Collative semantics: An approach to coherence. Memorandum in Computer and Cognitive Science MCCS-86-56", "authors": [ { "first": "Dan", "middle": [], "last": "Fass", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fass, Dan. 1986. Collative semantics: An approach to coherence. Memorandum in Computer and Cognitive Science MCCS-86-56, New Mexico State University, New Mexico.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Collative semantics: An overview of the current meta5 program", "authors": [ { "first": "Dan", "middle": [], "last": "Fass", "suffix": "" } ], "year": 1987, "venue": "Computer and Cognitive Science MCCS-87-112", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fass, Dan. 1987. Collative semantics: An overview of the current meta5 program. 
Memorandum in Computer and Cognitive Science MCCS-87-112, New Mexico State University, NM.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Collative semantics: A semantics for natural language processing. Memorandum in Computer and Cognitive Science MCCS-88-118", "authors": [ { "first": "Dan", "middle": [], "last": "Fass", "suffix": "" } ], "year": 1988, "venue": "Computational Linguistics", "volume": "17", "issue": "1", "pages": "49--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fass, Dan. 1988. Collative semantics: A semantics for natural language processing. Memorandum in Computer and Cognitive Science MCCS-88-118, New Mexico State University, NM. Fass, Dan. 1991. Met: A method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49-90.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, Christiane, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Language of Metaphors", "authors": [ { "first": "Andrew", "middle": [], "last": "Goatly", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goatly, Andrew. 1997. The Language of Metaphors. Routledge, London.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Lexical Structures in Syntax and Semantics", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Gruber", "suffix": "" } ], "year": 1976, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gruber, Jeffrey. 1976. Lexical Structures in Syntax and Semantics. Amsterdam, North-Holland.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Data clustering: A review", "authors": [ { "first": "Anil", "middle": [ "K" ], "last": "Jain", "suffix": "" }, { "first": "M", "middle": [ "Narasimha" ], "last": "Murty", "suffix": "" }, { "first": "Patrick", "middle": [ "J" ], "last": "Flynn", "suffix": "" } ], "year": 1999, "venue": "ACM Computing Surveys", "volume": "31", "issue": "3", "pages": "264--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jain, Anil K., M. Narasimha Murty, and Patrick J. Flynn. 1999. Data clustering: A review. ACM Computing Surveys, 31(3):264-323.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BNC word frequency list", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kilgarriff, Adam. 2003. BNC word frequency list. Available online at http://www.itri.brighton.ac.uk/Adam.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Brown corpus", "authors": [ { "first": "Henry", "middle": [], "last": "Kucera", "suffix": "" } ], "year": 1992, "venue": "Encyclopedia of Artificial Intelligence", "volume": "1", "issue": "", "pages": "128--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kucera, Henry. 1992. Brown corpus. In S. Shapiro, editor, Encyclopedia of Artificial Intelligence, volume 1. 
Wiley, New York, pages 128-130.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The contemporary theory of metaphor", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" } ], "year": 1993, "venue": "Metaphor and Thought", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lakoff, George. 1993. The contemporary theory of metaphor. In Andrew Ortony, editor, Metaphor and Thought. Cambridge University Press, Cambridge.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The master metaphor list", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Espenson", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lakoff, George, Jane Espenson, and Alan Schwartz. 1991. The master metaphor list. Draft 2nd ed. Technical Report, University of California at Berkeley.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Cyc: A large-scale investment in knowledge infrastructure", "authors": [ { "first": "Douglas", "middle": [], "last": "Lenat", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lenat, Douglas. 1995. Cyc: A large-scale investment in knowledge infrastructure. In Communications of the ACM, 38:11.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Generalizing case frames using a thesaurus and the MDI principle", "authors": [ { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Abe", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "2", "pages": "217--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Hang and Naoki Abe. 1998. Generalizing case frames using a thesaurus and the MDI principle. Computational Linguistics, 24(2):217-244.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313-330.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A Computational Model of Metaphor Interpretation", "authors": [ { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, James. 1990. A Computational Model of Metaphor Interpretation. 
Academic Press.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Metabank: A knowledge base of metaphoric language conventions", "authors": [ { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1994, "venue": "Computational Intelligence", "volume": "10", "issue": "2", "pages": "134--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, James. 1994. Metabank: A knowledge base of metaphoric language conventions. Computational Intelligence, 10(2):134-149.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A Computational, Corpus-Based Metaphor Extraction System", "authors": [ { "first": "Zachary", "middle": [], "last": "Mason", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mason, Zachary. 2002. A Computational, Corpus-Based Metaphor Extraction System. Ph.D. thesis, Brandeis University.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Lexicalised systematic polysemy in WordNet", "authors": [ { "first": "Winn", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Ivonne", "middle": [], "last": "Peters", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Second International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peters, Winn and Ivonne Peters. 2000. Lexicalised systematic polysemy in WordNet. In Proceedings of the Second International Conference on Language Resources and Evaluation, Athens.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An algorithm for suffix stripping", "authors": [ { "first": "Martin", "middle": [ "F" ], "last": "Porter", "suffix": "" } ], "year": 1980, "venue": "", "volume": "14", "issue": "", "pages": "130--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Porter, Martin F. 1980. An algorithm for suffix stripping. Program, 14(3):130-137.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Selection and Information: A Class Based Approach to Lexical Relationships", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip. 1993. Selection and Information: A Class Based Approach to Lexical Relationships. Ph.D. thesis, University of Pennsylvania.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A corpus-based probabilistic grammar with only two non-terminals", "authors": [ { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Fourth International Workshop on Parsing Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sekine, Satoshi and Ralph Grishman. 1995. A corpus-based probabilistic grammar with only two non-terminals. In Proceedings of the Fourth International Workshop on Parsing Technology, Prague, Czech Republic.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A preferential, pattern-seeking, semantics for natural language inference", "authors": [ { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1975, "venue": "Artificial Intelligence", "volume": "6", "issue": "", "pages": "53--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilks, Yorick. 1975. 
A preferential, pattern-seeking, semantics for natural language inference. Artificial Intelligence, 6:53-74.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Making preferences more active", "authors": [ { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1978, "venue": "Artificial Intelligence", "volume": "11", "issue": "3", "pages": "197--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilks, Yorick. 1978. Making preferences more active. Artificial Intelligence, 11(3):197-223.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "(a) His body was under siege by AIDS. (b) He was attacked by an unknown virus. (c)", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "type_str": "table", "html": null, "content": "
Rank LAB FINANCE
1 oxidiz amortiz
2 sulfat arbitrag
3 fluorin labor
4 vapor overvalu
5 titrat outsourc
6 adsorb escrow
7 electropl repurchas
8 valenc refinanc
9 atomiz forecast
10 anneal invest
11 sinter discount
12 substitu stock
13 compound certify
14 hydrat bank
15 frit credit
16 ionize yield
17 deactiv bond
18 intermix rate
19 halogen reinvest
20 solubl leverag
", "text": "Characteristic stems for LAB and FINANCE domains." }, "TABREF2": { "num": null, "type_str": "table", "html": null, "content": "
ComponentWeight
|supporting predicates(M)| / (max num of supporting preds in domain) 0.25
polarity(M) / (max polarity in domain) 0.5
|co-occurring mappings(M)| / (max number of co-occurring mappings) 0.25
", "text": "Factors used in evaluating a mapping M and their weights." }, "TABREF3": { "num": null, "type_str": "table", "html": null, "content": "
LAB beaker experiment cylinder chemical precipitate mixture
reaction valence molarity pressure
FINANCE money stocks bonds equity trading inflation arbitrage
capital investment market
The keywords assigned could easily be different and are best considered
guesses that give reasonable results.
", "text": "Characteristic keywords for LAB and FINANCE domains." }, "TABREF4": { "num": null, "type_str": "table", "html": null, "content": "
Rank Stem Ratio of frequencies Frequency in domain Frequency in English
1 oxidiz 3,073.608 0.0003 1.0e\u221207
2 sulfat 2,301.591 0.0003 1.3e\u221207
3 fluorin 1,452.467 0.0001 1.0e\u221207
4 vapor 1,325.237 0.0007 5.2e\u221207
5 titrat 831.007 0.0006 8.3e\u221207
6 adsorb 433.721 5.6e\u221205 1.2e\u221207
7 electropl 392.986 3.1e\u221205 7.9e\u221208
8 valenc 349.522 0.0004 1.4e\u221206
9 atomiz 324.696 1.9e\u221205 5.9e\u221208
10 anneal 312.406 8.1e\u221205 2.5e\u221207
11 sinter 264.322 3.6e\u221205 1.3e\u221207
12 substitu 251.511 3.7e\u221205 1.4e\u221207
13 compound 99.632 0.002 2.0e\u221205
14 hydrat 238.017 0.0001 6.5e\u221207
15 frit 237.08 1.6e\u221205 6.9e\u221208
16 ionize 221.372 9.2e\u221205 4.1e\u221207
17 deactiv 207.629 1.4e\u221205 6.9e\u221208
18 intermix 84.18 5.0e\u221206 5.9e\u221208
19 halogen 195.701 0.0001 6.9e\u221207
20 solubl 192.204 0.0007 4.1e\u221206
", "text": "Characteristic verbs 1-20 of the LAB domain." }, "TABREF5": { "num": null, "type_str": "table", "html": null, "content": "
Rank Stem Ratio of frequencies Frequency in domain Frequency in English
1 amortiz 807.531 5.6e\u221205 6.9e\u221208
2 arbitrag 305.836 0.0006 2.0e\u221206
3 labor 302.797 0.0004 1.6e\u221206
4 overvalu 296.945 4.7e\u221205 1.5e\u221207
5 outsourc 260.625 2.8e\u221205 1.0e\u221207
6 escrow 248.192 2.9e\u221205 1.1e\u221207
7 repurchas 241.309 9.4e\u221205 3.8e\u221207
8 refinanc 213.369 3.4e\u221205 1.5e\u221207
9 forecast 27.007 0.0004 1.4e\u221205
10 invest 72.604 0.0019 2.7e\u221205
11 discount 22.59 0.0005 2.2e\u221205
12 stock 70.172 0.0067 9.5e\u221205
13 certify 21.08 5.7e\u221205 2.7e\u221206
14 bank 20.624 0.0045 0.0002
15 credit 20.432 0.0016 7.9e\u221205
16 yield 56.144 0.001 1.8e\u221205
17 bond 122.467 0.0045 3.7e\u221205
18 rate 17.563 0.0055 0.0003
19 reinvest 104.197 0.0001 1.1e\u221206
20 leverag 100.576 0.0002 2.2e\u221206
", "text": "Characteristic verbs 1-20 of the FINANCE domain." }, "TABREF6": { "num": null, "type_str": "table", "html": null, "content": "
substance-1 0.0116
vapor obj liquid-1 0.0478
fluid-1 0.0473
metallic element-1 0.0217
anneal with substance-1 0.0101
chemical element-1 0.0112
substance-1 0.0123
compound subj compound-2 0.036
organic compound-1 0.0431
matter-3 0.0145
adsorb obj substance-1 0.014
physical object-1 0.0087
hydrat subj substance-1 0.0181; compound-2 0.0401
Table 8
Sample selectional preferences for FINANCE verbs.
income-1 0.0118
financial gain-1 0.0114
security-8 0.0069
currency-1 0.034
sum-1 0.0136
invest obj transferred property-1 0.0036
fund-1 0.008
asset-1 0.1183
gain-4 0.0113
medium of exchange-1 0.0415
money-1 0.0375
cost-1 0.0269
financial loss-1 0.0263
discount obj transferred property-1 0.0237
loss-2 0.0262
outgo-1 0.0269
cost-1 0.0211
financial loss-1 0.0206
credit subj transferred property-1 0.0182
loss-2 0.0205
outgo-1 0.0211
Table 9
Mappings LAB \u2192 FINANCE.
Mapping Frames Polarity Systematicity Final score
liquid-1 \u2192 income-1 61 11.8 2 .56
liquid-1 \u2192 institution-1 59 3.83 2 .55
container-1 \u2192 institution-1 11 3.16 1 .35
liquid-1 \u2192 information-1 56 4.29 2 .54
", "text": "Sample selectional preferences for LAB verbs. Of course, this mapping is incorrect insofar as solids undergo dissolution, not liquids. CorMet made this mistake because of faulty thematic-role identification; it frequently failed to distinguish between the different thematic roles played by the subjects in sentences like The company dissolved and The acid dissolved the compound. The third mapping characterizes communication as a liquid. This was not the mapping the author had in mind when he chose the domains, but it is intuitively plausible: One speaks of information flowing as readily as of money flowing. That this mapping appears in a search not targeted to it reflects this metaphor's strength. It also illustrates a source of error in inferring the existence of conventional metaphors between domains from the existence of interconcept mappings. The fourth mapping is from containers to organizations. This mapping complements the first one: As liquids flow into containers, so money flows into organizations. Another good mapping, not present here, is money flows into equities and investments. CorMet misses this mapping because, at the level of concepts, money and equities are conflated. This happens because they are near relatives in the WordNet ontology and because there is very high overlap between the predicates selecting for them." }, "TABREF7": { "num": null, "type_str": "table", "html": null, "content": "
vb subj obj into from with
dissolv stakes
pour investors cash
pour investors cash
pour profits market
pour Cash shares
pour Earnings
pour stake brand
pour cash
pour flight money stocks
pour investors stocks
cool stocks
cool Reserve economy
evapor profit
evapor mortgages
evapor profit turn
pump Reserve reserves
pump stocks them
vapor stock
vapor profits
melt profit nothing
melt stocks
3. I'm down to my bottom dollar.
", "text": "sample of frames from FINANCE instantiating liquid \u2192 income." }, "TABREF8": { "num": null, "type_str": "table", "html": null, "content": "
Rank Stem Ratio of frequencies Frequency in domain Frequency in English
0 nuke 372.494 2.2e\u221205 5.9e\u221208
1 harbor 714.253 0.0004 6.9e\u221207
2 strafe 156.471 5.1e\u221205 3.2e\u221207
3 honor 626.577 0.0003 4.7e\u221207
4 combat 105.121 0.001 9.6e\u221206
5 torpedo 96.519 0.0002 2.0e\u221206
6 stonewal 382.93 3.0e\u221205 7.9e\u221208
7 bombard 54.602 0.0002 5.1e\u221206
8 skirmish 56.105 0.0001 2.4e\u221206
9 bomb 49.341 0.0019 3.9e\u221205
10 favor 169.023 0.0001 6.5e\u221207
11 envision 158.417 1.4e\u221205 8.9e\u221208
12 attack 31.661 0.0034 0.0001
13 cannonad 117.742 1.0e\u221205 8.9e\u221208
14 rearm 115.601 1.2e\u221205 1.0e\u221207
15 sieg 107.732 0.0008 7.8e\u221206
16 raid 20.817 0.0004 2.1e\u221205
17 highlight 77.358 0.0014 1.9e\u221205
18 enlist 74.138 0.0002 3.4e\u221206
19 infest 17.725 1.3e\u221205 7.4e\u221207
", "text": "Characteristic verbs for MILITARY." }, "TABREF9": { "num": null, "type_str": "table", "html": null, "content": "
Rank Stem Ratio of frequencies Frequency in domain Frequency in English
1 immuniz 304.704 0.0001 4.0e\u221207
2 diaper 110.023 2.6e\u221205 2.3e\u221207
3 detoxify 106.181 2.0e\u221205 1.8e\u221207
4 oxidiz 104.006 1.1e\u221205 1.0e\u221207
5 pasteur 102.149 3.5e\u221205 3.4e\u221207
6 palpat 89.38 1.4e\u221205 1.5e\u221207
7 misdiagnos 87.394 7.8e\u221206 8.9e\u221208
8 metastas 87.049 4.0e\u221205 4.5e\u221207
9 expector 86.826 8.6e\u221206 9.9e\u221208
10 implant 85.263 0.0001 2.3e\u221206
11 decoct 82.996 6.6e\u221206 7.9e\u221208
12 vaccin 81.157 0.0007 8.8e\u221206
13 transplant 78.7 0.0005 7.1e\u221206
14 labor 77.016 0.0001 1.6e\u221206
15 infect 69.575 0.0003 5.4e\u221206
16 deactiv 67.126 4.6e\u221206 6.9e\u221208
17 detox 63.417 7.6e\u221206 1.1e\u221207
18 recuper 62.588 7.3e\u221205 1.1e\u221206
19 heal 61.753 0.0005 8.9e\u221206
20 clot 58.416 7.4e\u221205 1.2e\u221206
Table 14
Selectional preferences for MILITARY verbs.
social group-1 0.005
combat subj body-3 0.0123
gathering-1 0.0053
combat obj military unit-1 0.0156; social group-1 0.01
unit-3 0.0135
enlist subj military unit-1 0.0603
social group-1 0.0475
military unit-1 0.0164
social group-1 0.0131
military unit-1 0.0368
military unit-1 0.0397
muster subj company-6 0.0101
gathering-1 0.0013
unit-3 0.0196
social gathering-1 0.0049
force-4 0.0052
district-1 0.0046
seat-5 0.0046
region-3 0.0022
bomb subj administrative district-1 0.0051; country-1 0.0021
capital-3 0.0046
city-2 0.0073
national capital-1 0.0056
", "text": "Characteristic verbs for MEDICINE." }, "TABREF10": { "num": null, "type_str": "table", "html": null, "content": "
descendant-1 0.0246
child-2 0.0198
immuniz subj relative-1 0.0137
offspring-1 0.0193
child-4 0.0246
oxidiz subj food-1 0.0513; substance-1 0.0158
organ-1 0.0204
gland-1 0.0303
implant subj body part-1 0.0238
tissue-1 0.0151
part-7 0.0225
Table 16
Mappings MILITARY \u2192 MEDICINE.
Mapping Frames Polarity Systematicity Final score
military unit-1 \u2192 body part-1 285 65.55 33 0.95
fortification-1 \u2192 body part-1 298 55.12 33 0.88
vehicle-1 \u2192 body part-1 238 35.2 32 0.67
military action-1 \u2192 body part-1 207 35 25 0.6
region-3 \u2192 body part-1 57 30.9 5 0.31
skilled worker-1 \u2192 body part-1 127 17.3 11 0.31
military unit-1 \u2192 drug-1 84 51.77 28 0.64
vehicle-1 \u2192 drug-1 63 35.7 28 0.5
military action-1 \u2192 drug-1 71 30.91 27 0.47
fortification-1 \u2192 drug-1 67 24.64 22 0.38
weaponry-1 \u2192 drug-1 58 10.8 24 0.28
military action-1 \u2192 medical care-1 71 28.21 20 0.4
fortification-1 \u2192 medical care-1 78 16.37 20 0.32
weaponry-1 \u2192 medical care-1 48 9.64 20 0.24
fortification-1 \u2192 illness-1 243 .21 38 .45
", "text": "Selectional preferences for MEDICINE verbs." }, "TABREF11": { "num": null, "type_str": "table", "html": null, "content": "
vbsubjobjinto from with
attacksystemreceptors
attackpainjoints
attackimmunosuppressantskidney
besiegfloodabdomen
besiegscarsthigh
destroyorgansbacteria
destroyMicrotubulesagents
destroyganglion
destroytherapytissue
destroycancerbone
destroyvirusliver
destroyinterniststomach
targetorgan
targetvaccineintestines
", "text": "Selected frames supporting {fortification, vehicle, military action, region, skilled worker} \u2192 body part." }, "TABREF13": { "num": null, "type_str": "table", "html": null, "content": "
Master Metaphor List mapping Empirical mapping Score
Fortifications \u2192 Theories none 0
Fluid \u2192 Emotion liquid-1 \u2192 feeling-1 .25
Containers for Emotions \u2192 People container-1 \u2192 person-1 .13
War \u2192 Love feeling-1 \u2192 military unit-1 .34
Injuries \u2192 Effects of Humor weapon-1 \u2192 joke-1 .18
Fighting a War \u2192 Treating Illness military action-1 \u2192 medical care-1 .4
Journey \u2192 Love travel-1 \u2192 feeling-1 .17
Physical Injury \u2192 Economic Harm harm-1 \u2192 loss-3 .20
Machines \u2192 People none 0
Liquid \u2192 Money liquid-1 \u2192 income-1 .56
Containers for Money \u2192 Investments container-1 \u2192 institution-1 .35
Buildings \u2192 Bodies none 0
Body \u2192 Society body part-1 \u2192 organization-1 .14
", "text": "Best mappings for domain pairs." }, "TABREF14": { "num": null, "type_str": "table", "html": null, "content": "
Domain 1 Domain 2 Polarity
Medicine Plants 0
Military Society 0
Medicine Society 0
Finance Body 0
Lab Theory 0
Society Journey 0
", "text": "Arbitrarily selected domains and the mapping strengths between them." } } } }