{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:58.206910Z" }, "title": "Using Conceptual Norms for Metaphor Detection", "authors": [ { "first": "Mingyu", "middle": [], "last": "Wan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": {} }, "email": "" }, { "first": "Kathleen", "middle": [], "last": "Ahrens", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong Polytechnic University", "location": {} }, "email": "kathleen.ahrens@polyu.edu.hk" }, { "first": "Emmanuele", "middle": [], "last": "Chersoni", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong Polytechnic University", "location": {} }, "email": "emmanuelechersoni@gmail.com" }, { "first": "Menghan", "middle": [], "last": "Jiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong Polytechnic University", "location": {} }, "email": "menghanengl.jiang@polyu.edu.hk" }, { "first": "Qi", "middle": [], "last": "Su", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": {} }, "email": "" }, { "first": "Rong", "middle": [], "last": "Xiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong Polytechnic University", "location": {} }, "email": "xiangrong0302@gmail.com" }, { "first": "Chu-Ren", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong Polytechnic University", "location": {} }, "email": "churen.huang@polyu.edu.hk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper reports a linguistically-enriched method of detecting token-level metaphors for the second shared task on Metaphor Detection. We participate in all four phases of competition with both datasets, i.e. Verbs and All-POS on the VUA and the TOFEL datasets. 
We use the modality exclusivity and embodiment norms to construct a conceptual representation of the nodes and the context. Our system obtains an F-score of 0.652 for the VUA Verbs track, which is 5% higher than the strong baselines. The experimental results across models and datasets indicate the salient contribution of using modality exclusivity and modality shift information for predicting metaphoricity.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper reports a linguistically-enriched method for detecting token-level metaphors in the second shared task on Metaphor Detection. We participate in all four phases of the competition, i.e. the Verbs and All-POS tracks on both the VUA and the TOEFL datasets. We use the modality exclusivity and embodiment norms to construct a conceptual representation of the nodes and the context. Our system obtains an F-score of 0.652 for the VUA Verbs track, which is 5% higher than the strong baselines. The experimental results across models and datasets indicate the salient contribution of using modality exclusivity and modality shift information for predicting metaphoricity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Metaphors are one kind of figurative language that uses conceptual mapping to represent one thing (target domain) as another (source domain). As proposed by Lakoff and Johnson (1980) in Conceptual Metaphor Theory (CMT), metaphor is not only a property of language but also a cognitive mechanism that describes our conceptual system. 
Thus metaphors are devices that transfer the properties of one domain to another, unrelated domain, as in 'sweet voice' (using taste to describe sound).", "cite_spans": [ { "start": 156, "end": 181, "text": "Lakoff and Johnson (1980)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Metaphors are prevalent in daily life and play a significant role in how people interpret and understand complex concepts. Moreover, as a popular linguistic device, metaphors encode versatile ontological information, which usually involves, e.g., domain transfer (Ahrens et al., 2003; Ahrens, 2010; Ahrens and Jiang, 2020), sentiment reversal (Steen et al., 2010) or modality shift (Winter, 2019). Therefore, detecting metaphors in texts is essential for capturing their authentic meaning, which can benefit many natural language processing applications, such as machine translation, dialogue systems and sentiment analysis (Tsvetkov et al., 2014). In this shared task, we aim to detect token-level metaphors in plain texts by focusing on the content words (Verbs, Nouns, Adjectives and Adverbs) of two corpora: VUA 1 and TOEFL 2 . 
To better understand the intrinsic properties of metaphors and to provide an in-depth analysis of this phenomenon, we propose a linguistically-enriched model that addresses this task using modality exclusivity and embodiment norms (see details in Section 3).", "cite_spans": [ { "start": 265, "end": 286, "text": "(Ahrens et al., 2003;", "ref_id": "BIBREF1" }, { "start": 287, "end": 300, "text": "Ahrens, 2010;", "ref_id": "BIBREF0" }, { "start": 301, "end": 324, "text": "Ahrens and Jiang, 2020)", "ref_id": "BIBREF2" }, { "start": 345, "end": 365, "text": "(Steen et al., 2010)", "ref_id": "BIBREF23" }, { "start": 384, "end": 398, "text": "(Winter, 2019)", "ref_id": "BIBREF26" }, { "start": 645, "end": 668, "text": "(Tsvetkov et al., 2014)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many approaches have been proposed for the automatic detection of metaphors, using features such as lexical information (Klebanov et al., 2014; Wilks et al., 2013), semantic classes (Klebanov et al., 2016), concreteness (Klebanov et al., 2015), word associations (Xiao et al., 2016), and constructions and frames (Hong, 2016), and systems such as traditional machine learning classifiers (Rai et al., 2016), deep neural networks (Do Dinh and Gurevych, 2016) and sequential models (Bizzoni and Ghanimifard, 2018). Despite many advances in the above work, metaphor detection remains a challenging task. The semantic and ontological differences between metaphorical and non-metaphorical expressions are often subtle, and their perception may vary from person to person. To tackle such problems, researchers have resorted to specific domain knowledge (Tsvetkov et al., 2014); lexicons (Mohler et al., 2013; Dodge et al., 2015); supervised methods (Klebanov et al., 2014, 2015, 2016); or attention-based deep learning models that capture latent patterns (Igamberdiev and Shin, 2018). 
These methods show different strengths in detecting metaphors, yet each has its respective disadvantages, such as generalization problems or a lack of association between their results and the intrinsic properties of metaphors. In addition, the reported performances on metaphor detection so far (around 0.6 F1 in the last shared task) (Leong et al., 2018) are still far from satisfactory. This calls for further endeavours in all aspects.", "cite_spans": [ { "start": 111, "end": 134, "text": "(Klebanov et al., 2014;", "ref_id": "BIBREF10" }, { "start": 135, "end": 154, "text": "Wilks et al., 2013)", "ref_id": "BIBREF25" }, { "start": 212, "end": 235, "text": "(Klebanov et al., 2015)", "ref_id": "BIBREF11" }, { "start": 256, "end": 275, "text": "(Xiao et al., 2016)", "ref_id": "BIBREF27" }, { "start": 303, "end": 315, "text": "(Hong, 2016)", "ref_id": "BIBREF8" }, { "start": 377, "end": 395, "text": "(Rai et al., 2016)", "ref_id": "BIBREF20" }, { "start": 423, "end": 447, "text": "Dinh and Gurevych, 2016)", "ref_id": "BIBREF5" }, { "start": 470, "end": 501, "text": "(Bizzoni and Ghanimifard, 2018)", "ref_id": "BIBREF3" }, { "start": 830, "end": 853, "text": "(Tsvetkov et al., 2014)", "ref_id": "BIBREF24" }, { "start": 865, "end": 886, "text": "(Mohler et al., 2013;", "ref_id": "BIBREF17" }, { "start": 887, "end": 906, "text": "Dodge et al., 2015)", "ref_id": "BIBREF6" }, { "start": 928, "end": 950, "text": "(Klebanov et al., 2014", "ref_id": "BIBREF10" }, { "start": 951, "end": 975, "text": "(Klebanov et al., , 2015", "ref_id": "BIBREF11" }, { "start": 976, "end": 1000, "text": "(Klebanov et al., , 2016", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this work, we adopt supervised machine learning algorithms based on four categories of features: linguistic norms; ngram-word, -lemma and -pos collocations; word embeddings and the cosine similarity between the target nodes and their neighboring words; 
as well as the strong baselines provided by the organizer of the shared task (Leong et al., 2018; Klebanov et al., 2014, 2015, 2016). Moreover, we use several statistical models and ensemble learning strategies during training and testing, so as to test the cross-model consistency of the improvements brought by the various features. The methods are described in detail in the following sections.", "cite_spans": [ { "start": 341, "end": 361, "text": "(Leong et al., 2018;", "ref_id": "BIBREF15" }, { "start": 362, "end": 383, "text": "Klebanov et al., 2014", "ref_id": "BIBREF10" }, { "start": 384, "end": 407, "text": "Klebanov et al., , 2015", "ref_id": "BIBREF11" }, { "start": 408, "end": 431, "text": "Klebanov et al., , 2016", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "This work uses four categories of features (16 subsets in all) to represent the nodes and contextual information at hierarchical levels, which include lexical and syntactic-to-semantic information, sensory modality scales, embodiment ratings (of verbs only), as well as word vectors of the nodes and the cosine similarity of node-neighbor pairs, as detailed below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 Linguistic Norms: Two linguistic norms are used to construct four linguistically-enriched feature sets in the jsonlines format: 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "-ME (modality exclusivity): a 42-dimensional representation of the target nodes, containing the mapped sensorimotor values from the modality norms; -DM (dominant modality): a 1 \u00d7 5-dimensional representation of node-neighbor pairs (the five neighboring lexical words), encoding the dominant modality of the target nodes and the surrounding lexical words;", "cite_spans": [], "ref_spans": [], "eq_spans": [], 
"section": "Feature Sets", "sec_num": "3" }, { "text": "-EB (embodiment): 2 dimension of nodes representation, including embodiment rating and standard deviation; -EB-diff (embodiment differences): 2 \u00d7 5 dimension of node-neighbor pairs (five lexical neighboring words) information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "The ME and DM feature sets are constructed by using the Lancaster Sensorimotor norms collected by Lynott et al. (2019) . The data include measures of sensorimotor strength (0-5 scale indicating different degrees of sense modalities/action effectors) for 39,707 English words across six perceptual modalities: touch, hearing, smell, taste, vision and interception, and five action effectors: mouth/throat, hand/arm, foot/leg, head (excluding mouth/throat), torso. 4 As sensorimotor information plays a fundamental role in cognition, these norms provide a valuable knowledge representation to the conceptual categories of the target and neighboring words which serve as salient features for inferring metaphors.", "cite_spans": [ { "start": 98, "end": 118, "text": "Lynott et al. (2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "The EB and EB-diff feature sets are constructed by using the embodiment norms for 687 English verbs which is collected by Sidhu et al. (2014). Research examining semantic richness effects has shown that multiple dimensions of meaning are activated in the process of word recognition (Yap et al., 2011) . This data applies the semantic richness approach (Sidhu et al., 2014 (Sidhu et al., , 2016 to verb stimuli in order to investigate how verb meanings are represented. 
The relative embodiment ratings (a 1-7 scale indicating different degrees of bodily involvement) revealed that bodily experience was judged to be more important to the meanings of some verbs (e.g., dance, breathe) than to others (e.g., evaporate, expect), suggesting that relative embodiment is an important aspect of verb meaning and can be a useful indicator of the meaning mismatch in figurative uses of verbs.", "cite_spans": [ { "start": 283, "end": 301, "text": "(Yap et al., 2011)", "ref_id": "BIBREF28" }, { "start": 353, "end": 372, "text": "(Sidhu et al., 2014", "ref_id": "BIBREF22" }, { "start": 373, "end": 394, "text": "(Sidhu et al., , 2016", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 Collocations: Three sets of collocational features are constructed to represent the lexical, syntactic and grammatical information of the nodes and their neighbors: Trigram, FL (Fivegram Lemma) and FPOS (Fivegram POS tags). The two corpora are lemmatized using the nltk WordNetLemmatizer 5 and POS tagged using the nltk averaged perceptron tagger 6 before constructing these features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 Word Embeddings: For comparison, we utilise distributional vector representations of word meaning for the nodes, based on the distributional hypothesis (Firth, 1957; Lenci, 2018). Two pre-trained Word2Vec models (GoogleNews.300d and Internal-W2V.300d, pre-trained on the VUA and TOEFL corpora) and the GloVe vectors are used. GoogleNews 7 in this work is pre-trained using the continuous bag-of-words architecture for computing vector representations of words (Church, 2017). GloVe 8 is an unsupervised learning algorithm for obtaining vector representations of words. 
We use the 300d vectors pre-trained on Wikipedia 2014+Gigaword 5 (Pennington et al., 2014).", "cite_spans": [ { "start": 152, "end": 165, "text": "(Firth, 1957;", "ref_id": null }, { "start": 166, "end": 178, "text": "Lenci, 2018)", "ref_id": "BIBREF14" }, { "start": 465, "end": 479, "text": "(Church, 2017)", "ref_id": "BIBREF4" }, { "start": 641, "end": 666, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 Cosine Similarity: We also investigate cosine similarity (CS) measures for computing word sense distances between the nodes and their neighboring lexical words, based on the hypothesis that words with distant meanings are more likely to be used metaphorically. Three different sets of CS features are constructed in this work by using the above three word embedding models: CS-Google, CS-GloVe and CS-Internal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "These features constitute a rather comprehensive representation of the mismatch between the nodes and their neighbors in terms of senses, domains, modalities, agentivity, concreteness, etc., which is highly indicative of metaphorical use; they are hence hypothesized to be more distinctive features than the strong baselines in Leong et al. (2018).", "cite_spans": [ { "start": 321, "end": 340, "text": "Leong et al. 
(2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "In addition, we replicate the three strong baselines provided by the organizer for comparison purposes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 B1: lemmatized unigrams (UL)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 B2: lemmatized unigrams, generalized Word-Net semantic classes, and difference in concreteness ratings between verbs/adjectives and nouns (UL + WordNet + CCDB)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 B3: baseline 2 and unigrams, pos tag, topic, concreteness ratings between nodes and up and down words respectively (UL + WordNet + CCDB + U + P + T + CUp + CDown) For parameter tuning, we use grid search to find optimal parameters for the learners. 
Finally, we set up the following optimized parameters for the three classifiers:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 Logistic Regression (LR): 'class_weight':'balanced', 'max_iter':5000, 'tol':1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 Linear SVC (LSVC): 'class_weight':'balanced', 'max_iter':50000, 'C':10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "\u2022 Random Forest Classifier (RFC): 'min_samples_split':8, 'max_features':'log2', 'oob_score':True, 'random_state':10, 'class_weight':'balanced'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "5 Results and Discussions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "3" }, { "text": "In order to evaluate the discriminativeness of the various features for metaphor detection and their fit with the three classifiers, we focus on the VUA Verbs phase and randomly select a development set (4380 tokens) from the training set in proportion to the Train/Test ratio. Experiments are run using the three classifiers and the setup in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.1" }, { "text": "The evaluation results on the individual features in terms of F1-score are summarized in Table 1 below:", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.1" }, { "text": "In Table 1 , the top-performing individual features include B1, W2V.GloVe, Trigram and FL. For the conceptual representations, the modality exclusivity features demonstrate outstanding performance, while the embodiment features perform quite poorly. This is due to the data sparseness of the embodiment feature representations. 
As the embodiment norms only contain 687 English verbs, they cannot cover most of the words in the two corpora of the shared task, which causes many empty values in the feature matrix, resulting in a poor performance in the task. Despite this, they still help the overall performance when concatenated with other features, as will be shown in the next section. The performances of the three classifiers are quite close for all features, with LR performing slightly better. To test the combined power of these features for metaphor detection, we also conduct evaluation on fused features, as shown in Table 2 . Results in Table 2 show that B2 is a stronger baseline than B3, so we use B2 as the comparison basis. Among the four categories of features, the linguistic and collocational features in combination with B2 achieve the greatest improvement, by around 1.5% F1-score. The top three to five features also improve the performance by 1-2% F1-score. However, the word embeddings and cosine similarity features show no improvement over baseline 2. Finally, we selected 12 features (excluding the W2V features) using the automatic feature selection algorithm and achieved the best evaluation results (0.672 F1 for LR).", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": null }, { "start": 886, "end": 893, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 905, "end": 912, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.1" }, { "text": "We use the best feature sets and classifier (LR) from the above evaluation for the final submission. The released results of our system on the test sets of the four phases in terms of F1-score are summarized in Table 3 . In Table 3 , 'L+B2' stands for 'Linguistic features fused with baseline 2', and the best results are highlighted in bold. 
In addition to the best methods, we also submit the Top5 features and the 'L+B2' features, which all show consistent improvements (1-5% F1) over baseline 2. The evaluation results demonstrate the effectiveness of using the linguistic features, especially the modality exclusivity representations, for metaphor detection.", "cite_spans": [], "ref_spans": [ { "start": 209, "end": 216, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 220, "end": 227, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results on Test Sets", "sec_num": "5.2" }, { "text": "To demonstrate the effectiveness of our method, this section compares our system to closely related works that participated in the 2018 edition of the shared task on the VUA corpus. All the results are publicly available, as reported in Leong et al. (2018) . We compare our results on the VUA-Verbs and VUA-AllPOS phases to the top three teams (T1-3), baseline 2 (B2) and the only team using linguistic features (Ling) in 2018. The detailed results are displayed in Table 4 below. Our method obtains very promising results: it beats the Top-2 team on the Verbs phase and is close to the Top-3 team on the AllPOS phase; moreover, our results are significantly superior to both the baseline and the other linguistically-based approach. This suggests the effectiveness of using conceptual features for metaphor detection, echoing the hypothesis that metaphor is a concept mismatch between the source and target domains. ", "cite_spans": [ { "start": 252, "end": 271, "text": "Leong et al. (2018)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 484, "end": 498, "text": "Table 4 below:", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Comparison to other Works", "sec_num": "5.3" }, { "text": "We presented a linguistically enhanced method for word-level metaphor detection using conceptual features of modality and embodiment, based on traditional classifiers. 
As suggested by the results, the modality exclusivity and embodiment norms provide conceptual and bodily information for representing the nodes and the context, which substantially improves the performance of metaphor detection over the three strong baselines. It is noteworthy that our system did not employ any deep learning architectures, showing the advantages of simplicity and model efficiency, yet it outperforms many sophisticated neural networks. In future work, we will use the current feature sets in combination with state-of-the-art deep learning models to further examine the effectiveness of this method for metaphor detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "http://www.vismet.org/metcor/documentation/home.html 2 https://catalog.ldc.upenn.edu/LDC2014T06", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The feature sets can be accessed through the link: https://github.com/ClaraWan629/Feature-Sets-for-MD", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://osf.io/7emr6/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.nltk.org/_modules/nltk/stem/wordnet.html 6 https://www.kaggle.com/nltkdata/averaged-perceptron-tagger 7 https://github.com/mmihaltz/word2vec-GoogleNews-vectors 8 https://nlp.stanford.edu/projects/glove/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://skll.readthedocs.io/en/latest/index.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is partially supported by the GRF grant (PolyU 156086/18H) and the Post-doctoral project (no. 
4-ZZKE) at the Hong Kong Polytechnic University.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Mapping principles for conceptual metaphors. Researching and applying metaphor in the real world", "authors": [ { "first": "Kathleen", "middle": [], "last": "Ahrens", "suffix": "" } ], "year": 2010, "venue": "", "volume": "26", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathleen Ahrens. 2010. Mapping principles for conceptual metaphors. Researching and applying metaphor in the real world, 26:185.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Conceptual metaphors: Ontologybased representation and corpora driven mapping principles", "authors": [ { "first": "Kathleen", "middle": [], "last": "Ahrens", "suffix": "" }, { "first": "Siaw Fong", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Chu-Ren", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL 2003 workshop on Lexicon and figurative language", "volume": "14", "issue": "", "pages": "36--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathleen Ahrens, Siaw Fong Chung, and Chu-Ren Huang. 2003. Conceptual metaphors: Ontology- based representation and corpora driven mapping principles. In Proceedings of the ACL 2003 work- shop on Lexicon and figurative language-Volume 14, pages 36-42. Association for Computational Lin- guistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Source domain verification using corpus-based tools", "authors": [ { "first": "Kathleen", "middle": [], "last": "Ahrens", "suffix": "" }, { "first": "Menghan", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2020, "venue": "Metaphor and Symbol", "volume": "35", "issue": "1", "pages": "43--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathleen Ahrens and Menghan Jiang. 2020. 
Source domain verification using corpus-based tools. Metaphor and Symbol, 35(1):43-55.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bigrams and bilstms two neural networks for sequential metaphor detection", "authors": [ { "first": "Yuri", "middle": [], "last": "Bizzoni", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Ghanimifard", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "91--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuri Bizzoni and Mehdi Ghanimifard. 2018. Bi- grams and bilstms two neural networks for sequen- tial metaphor detection. In Proceedings of the Work- shop on Figurative Language Processing, pages 91- 101.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word2vec. Natural Language Engineering", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" } ], "year": 2017, "venue": "", "volume": "23", "issue": "", "pages": "155--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church. 2017. Word2vec. Natural Lan- guage Engineering, 23(1):155-162.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Tokenlevel metaphor detection using neural networks", "authors": [ { "first": "Erik-L\u00e2n Do", "middle": [], "last": "Dinh", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Fourth Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "28--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik-L\u00e2n Do Dinh and Iryna Gurevych. 2016. Token- level metaphor detection using neural networks. 
In Proceedings of the Fourth Workshop on Metaphor in NLP, pages 28-33.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Metanet: Deep semantic automatic metaphor analysis", "authors": [ { "first": "Jisup", "middle": [], "last": "Ellen K Dodge", "suffix": "" }, { "first": "Elise", "middle": [], "last": "Hong", "suffix": "" }, { "first": "", "middle": [], "last": "Stickles", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Third Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "40--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen K Dodge, Jisup Hong, and Elise Stickles. 2015. Metanet: Deep semantic automatic metaphor anal- ysis. In Proceedings of the Third Workshop on Metaphor in NLP, pages 40-49.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "1957. 2. a note on descent groups in polynesia", "authors": [ { "first": "Raymond", "middle": [], "last": "Firth", "suffix": "" } ], "year": null, "venue": "Man", "volume": "57", "issue": "", "pages": "4--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond Firth. 1957. 2. a note on descent groups in polynesia. Man, 57:4-8.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic metaphor detection using constructions and frames. Constructions and frames", "authors": [ { "first": "Jisup", "middle": [], "last": "Hong", "suffix": "" } ], "year": 2016, "venue": "", "volume": "8", "issue": "", "pages": "295--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jisup Hong. 2016. Automatic metaphor detection us- ing constructions and frames. 
Constructions and frames, 8(2):295-322.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Metaphor identification with paragraph and word vectorization: An attention-based neural approach", "authors": [ { "first": "Timour", "middle": [], "last": "Igamberdiev", "suffix": "" }, { "first": "Hyopil", "middle": [], "last": "Shin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timour Igamberdiev and Hyopil Shin. 2018. Metaphor identification with paragraph and word vectoriza- tion: An attention-based neural approach. In Pro- ceedings of the 32nd Pacific Asia Conference on Lan- guage, Information and Computation.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Different texts, same metaphors: Unigrams and beyond", "authors": [ { "first": "Ben", "middle": [], "last": "Beata Beigman Klebanov", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Leong", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "", "middle": [], "last": "Flor", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Second Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "11--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov, Ben Leong, Michael Heil- man, and Michael Flor. 2014. Different texts, same metaphors: Unigrams and beyond. 
In Proceedings of the Second Workshop on Metaphor in NLP, pages 11-17.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Supervised word-level metaphor detection: Experiments with concreteness and reweighting of examples", "authors": [ { "first": "Chee Wee", "middle": [], "last": "Beata Beigman Klebanov", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Leong", "suffix": "" }, { "first": "", "middle": [], "last": "Flor", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Third Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov, Chee Wee Leong, and Michael Flor. 2015. Supervised word-level metaphor detection: Experiments with concreteness and reweighting of examples. In Proceedings of the Third Workshop on Metaphor in NLP, pages 11-20.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semantic classifications for detection of verb metaphors", "authors": [ { "first": "Chee Wee", "middle": [], "last": "Beata Beigman Klebanov", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Leong", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Gutierrez", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "", "middle": [], "last": "Flor", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "101--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov, Chee Wee Leong, E Dario Gutierrez, Ekaterina Shutova, and Michael Flor. 2016. Semantic classifications for detection of verb metaphors. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 101-106.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Metaphors we live by", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Lakoff and Mark Johnson. 1980. Metaphors we live by. Chicago, IL: University of Chicago.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributional models of word meaning", "authors": [ { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2018, "venue": "Annual Review of Linguistics", "volume": "4", "issue": "", "pages": "151--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Lenci. 2018. Distributional models of word meaning. Annual Review of Linguistics, 4:151-171.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A report on the 2018 VUA metaphor detection shared task", "authors": [ { "first": "Chee Wee", "middle": [], "last": "Leong", "suffix": "" }, { "first": "Beata", "middle": [ "Beigman" ], "last": "Klebanov", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "56--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chee Wee Leong, Beata Beigman Klebanov, and Ekaterina Shutova. 2018. A report on the 2018 VUA metaphor detection shared task. 
In Proceedings of the Workshop on Figurative Language Processing, pages 56-66.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Lancaster sensorimotor norms: multidimensional measures of perceptual and action strength for 40,000 English words", "authors": [ { "first": "Dermot", "middle": [], "last": "Lynott", "suffix": "" }, { "first": "Louise", "middle": [], "last": "Connell", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brysbaert", "suffix": "" }, { "first": "James", "middle": [], "last": "Brand", "suffix": "" }, { "first": "James", "middle": [], "last": "Carney", "suffix": "" } ], "year": 2019, "venue": "Behavior Research Methods", "volume": "", "issue": "", "pages": "1--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dermot Lynott, Louise Connell, Marc Brysbaert, James Brand, and James Carney. 2019. The Lancaster sensorimotor norms: multidimensional measures of perceptual and action strength for 40,000 English words. Behavior Research Methods, pages 1-21.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semantic signatures for example-based linguistic metaphor detection", "authors": [ { "first": "Michael", "middle": [], "last": "Mohler", "suffix": "" }, { "first": "David", "middle": [], "last": "Bracewell", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Tomlinson", "suffix": "" }, { "first": "David", "middle": [], "last": "Hinote", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "27--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Mohler, David Bracewell, Marc Tomlinson, and David Hinote. 2013. Semantic signatures for example-based linguistic metaphor detection. In Proceedings of the First Workshop on Metaphor in NLP, pages 27-35.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Ga\u00ebl", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Bertrand", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Dubourg", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. 
In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Supervised metaphor detection using conditional random fields", "authors": [ { "first": "Sunny", "middle": [], "last": "Rai", "suffix": "" }, { "first": "Shampa", "middle": [], "last": "Chakraverty", "suffix": "" }, { "first": "Devendra", "middle": [ "K" ], "last": "Tayal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Fourth Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "18--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunny Rai, Shampa Chakraverty, and Devendra K Tayal. 2016. Supervised metaphor detection using conditional random fields. In Proceedings of the Fourth Workshop on Metaphor in NLP, pages 18-27.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Is more always better for verbs? Semantic richness effects and verb meaning", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Sidhu", "suffix": "" }, { "first": "Alison", "middle": [], "last": "Heard", "suffix": "" }, { "first": "Penny", "middle": [ "M" ], "last": "Pexman", "suffix": "" } ], "year": 2016, "venue": "Frontiers in Psychology", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M Sidhu, Alison Heard, and Penny M Pexman. 2016. Is more always better for verbs? Semantic richness effects and verb meaning. 
Frontiers in Psychology, 7:798.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Effects of relative embodiment in lexical and semantic processing of verbs", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Sidhu", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Kwan", "suffix": "" }, { "first": "Penny", "middle": [ "M" ], "last": "Pexman", "suffix": "" }, { "first": "Paul", "middle": [ "D" ], "last": "Siakaluk", "suffix": "" } ], "year": 2014, "venue": "Acta Psychologica", "volume": "149", "issue": "", "pages": "32--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M Sidhu, Rachel Kwan, Penny M Pexman, and Paul D Siakaluk. 2014. Effects of relative embodiment in lexical and semantic processing of verbs. Acta Psychologica, 149:32-39.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Metaphor in usage", "authors": [ { "first": "Gerard", "middle": [ "J" ], "last": "Steen", "suffix": "" }, { "first": "Aletta", "middle": [ "G" ], "last": "Dorst", "suffix": "" }, { "first": "J", "middle": [ "Berenike" ], "last": "Herrmann", "suffix": "" }, { "first": "Anna", "middle": [ "A" ], "last": "Kaal", "suffix": "" }, { "first": "Tina", "middle": [], "last": "Krennmayr", "suffix": "" } ], "year": 2010, "venue": "Cognitive Linguistics", "volume": "21", "issue": "4", "pages": "765--796", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerard J Steen, Aletta G Dorst, J Berenike Herrmann, Anna A Kaal, and Tina Krennmayr. 2010. Metaphor in usage. 
Cognitive Linguistics, 21(4):765-796.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Metaphor detection with cross-lingual model transfer", "authors": [ { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Leonid", "middle": [], "last": "Boytsov", "suffix": "" }, { "first": "Anatole", "middle": [], "last": "Gershman", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "248--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 248-258.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Automatic metaphor detection using large-scale lexical resources and conventional metaphor extraction", "authors": [ { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Dalton", "suffix": "" }, { "first": "James", "middle": [], "last": "Allen", "suffix": "" }, { "first": "Lucian", "middle": [], "last": "Galescu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the First Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "36--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yorick Wilks, Adam Dalton, James Allen, and Lucian Galescu. 2013. Automatic metaphor detection using large-scale lexical resources and conventional metaphor extraction. 
In Proceedings of the First Workshop on Metaphor in NLP, pages 36-44.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Synaesthetic metaphors are neither synaesthetic nor metaphorical", "authors": [ { "first": "Bodo", "middle": [], "last": "Winter", "suffix": "" } ], "year": 2019, "venue": "Perception metaphors", "volume": "", "issue": "", "pages": "105--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bodo Winter. 2019. Synaesthetic metaphors are neither synaesthetic nor metaphorical. Perception metaphors, pages 105-126.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Meta4meaning: Automatic metaphor interpretation using corpus-derived word associations", "authors": [ { "first": "Ping", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Khalid", "middle": [], "last": "Alnajjar", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Granroth-Wilding", "suffix": "" }, { "first": "Kat", "middle": [], "last": "Agres", "suffix": "" }, { "first": "Hannu", "middle": [], "last": "Toivonen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 7th International Conference on Computational Creativity (ICCC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ping Xiao, Khalid Alnajjar, Mark Granroth-Wilding, Kat Agres, and Hannu Toivonen. 2016. Meta4meaning: Automatic metaphor interpretation using corpus-derived word associations. In Proceedings of the 7th International Conference on Computational Creativity (ICCC). Paris, France.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Is more always better? 
Effects of semantic richness on lexical decision, speeded pronunciation, and semantic classification", "authors": [ { "first": "Melvin", "middle": [ "J" ], "last": "Yap", "suffix": "" }, { "first": "Sarah", "middle": [ "E" ], "last": "Tan", "suffix": "" }, { "first": "Penny", "middle": [ "M" ], "last": "Pexman", "suffix": "" }, { "first": "Ian", "middle": [ "S" ], "last": "Hargreaves", "suffix": "" } ], "year": 2011, "venue": "Psychonomic Bulletin & Review", "volume": "18", "issue": "4", "pages": "742--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melvin J Yap, Sarah E Tan, Penny M Pexman, and Ian S Hargreaves. 2011. Is more always better? Effects of semantic richness on lexical decision, speeded pronunciation, and semantic classification. Psychonomic Bulletin & Review, 18(4):742-750.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "type_str": "table", "content": "
Individual   | Features     | LR   | LSVC | RFC
Baseline     | B1 (T2)      | .632 | .621 | .618
Linguistic   | ME (T1)      | .637 | .636 | .632
             | DM           | .616 | .620 | .623
             | EB           | .547 | .548 | .544
             | EB-diff      | .322 | .321 | .302
Collocation  | Trigram (T4) | .626 | .625 | .612
             | FL (T5)      | .624 | .623 | .621
             | FPOS         | .378 | .369 | .335
Word2Vec     | GoogleNews   | .605 | .607 | .603
             | GloVe (T3)   | .630 | .627 | .633
             | Internal     | .569 | .555 | .568
CS           | GoogleNews   | .448 | .451 | .445
             | GloVe        | .403 | .404 | .410
             | Internal     | .436 | .421 | .402

Table 1: Evaluation Results on Individual Features. T1-T5 are the top five features in terms of F1 score.
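All cells in Table 1 are F1 scores over binary metaphoricity labels. As a point of reference, here is a minimal, stdlib-only Python sketch of the F1 computation; the `f1_score` helper and the toy label lists are illustrative, not the authors' evaluation code (the paper's experiments use scikit-learn classifiers).

```python
# Illustrative F1 computation for binary metaphor labels
# (1 = metaphorical, 0 = literal). Not the authors' code.
def f1_score(gold, pred):
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 6 tokens, 4 gold metaphors, 3 retrieved correctly.
gold = [1, 0, 1, 1, 0, 1]
pred = [1, 0, 0, 1, 1, 1]
print(f1_score(gold, pred))  # → 0.75
```

Because metaphorical tokens are the minority class, F1 on the positive class (rather than accuracy) is the standard metric for this task.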
", "html": null, "text": ", the top five features with the LR classifier are highlighted in bold. Results show that the best individual feature is ME, followed by" }, "TABREF3": { "num": null, "type_str": "table", "content": "", "html": null, "text": "Evaluation Results on Fused Features" }, "TABREF5": { "num": null, "type_str": "table", "content": "
", "html": null, "text": "Released Final Results of Our System" }, "TABREF7": { "num": null, "type_str": "table", "content": "
", "html": null, "text": "Comparison of Results of Our System to Works in the last Shared Task" } } } }