{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:57.074083Z" }, "title": "Neural Metaphor Detection with a Residual biLSTM-CRF Model", "authors": [ { "first": "Andr\u00e9s", "middle": [ "Torres" ], "last": "Rivera", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universitat Oberta de Catalunya", "location": {} }, "email": "" }, { "first": "Antoni", "middle": [], "last": "Oliver", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universitat Oberta de Catalunya", "location": {} }, "email": "aoliverg@uoc.edu" }, { "first": "Salvador", "middle": [], "last": "Climent", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universitat Oberta de Catalunya", "location": {} }, "email": "scliment@uoc.edu" }, { "first": "Marta", "middle": [], "last": "Coll-Florit", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universitat Oberta de Catalunya", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present a novel resourceinexpensive architecture for metaphor detection based on a residual bidirectional long short-term memory and conditional random fields. Current approaches on this task rely on deep neural networks to identify metaphorical words, using additional linguistic features or word embeddings. We evaluate our proposed approach using different model configurations that combine embeddings, part of speech tags, and semantically disambiguated synonym sets. This evaluation process was performed using the training and testing partitions of the VU Amsterdam Metaphor Corpus. We use this method of evaluation as reference to compare the results with other current neural approaches for this task that implement similar neural architectures and features, and that were evaluated using this corpus. Results show that our system achieves competitive results with a simpler architecture compared to previous approaches.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present a novel resourceinexpensive architecture for metaphor detection based on a residual bidirectional long short-term memory and conditional random fields. Current approaches on this task rely on deep neural networks to identify metaphorical words, using additional linguistic features or word embeddings. We evaluate our proposed approach using different model configurations that combine embeddings, part of speech tags, and semantically disambiguated synonym sets. This evaluation process was performed using the training and testing partitions of the VU Amsterdam Metaphor Corpus. We use this method of evaluation as reference to compare the results with other current neural approaches for this task that implement similar neural architectures and features, and that were evaluated using this corpus. Results show that our system achieves competitive results with a simpler architecture compared to previous approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper presents a new model for automatic metaphor detection which has participated at the FigLang 2020 metaphor detection shared task (Leong et al., 2020) . 
Our approach, which is based on neural networks, has been developed in the framework of the research project MOMENT (Coll-Florit et al., 2018), which is devoted to the analysis of metaphors in mental health discourses.", "cite_spans": [ { "start": 139, "end": 159, "text": "(Leong et al., 2020)", "ref_id": "BIBREF15" }, { "start": 278, "end": 304, "text": "(Coll-Florit et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As is well known in Cognitive Linguistics, a conceptual metaphor (CM) is a cognitive process which allows us to understand and communicate an abstract or diffuse concept in terms of a more concrete one (cf. e.g. Lakoff and Johnson (1980)). This process is expressed linguistically by using metaphorically used words (MUW).", "cite_spans": [ { "start": 212, "end": 237, "text": "Lakoff and Johnson (1980)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The study of metaphor is a prolific area of research in Cognitive Linguistics, with the Metaphor Identification Procedure (MIP) (Pragglejaz Group, 2007) and its derivative MIPVU (Steen et al., 2019) being the most standard methods for manual MUW detection. MIPVU is the method that was used to annotate the VU Amsterdam Metaphor Corpus (VUA corpus), used in FigLang 2020. Moreover, in the area of Corpus Linguistics, some methods have been developed for a richer annotation of metaphor in corpora (Ogarkova and Soriano Salinas, 2014; Shutova, 2017; Coll-Florit and Climent, 2019).", "cite_spans": [ { "start": 141, "end": 153, "text": "Group, 2007)", "ref_id": "BIBREF23" }, { "start": 179, "end": 199, "text": "(Steen et al., 2019)", "ref_id": "BIBREF27" }, { "start": 492, "end": 528, "text": "(Ogarkova and Soriano Salinas, 2014;", "ref_id": "BIBREF20" }, { "start": 529, "end": 543, "text": "Shutova, 2017;", "ref_id": "BIBREF26" }, { "start": 544, "end": 574, "text": "Coll-Florit and Climent, 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "CM is pervasive in natural language text and it is therefore crucial in automatic text understanding (Shutova, 2010). For this reason, automated metaphor processing has become an increasingly important concern in natural language processing, as shown by the Metaphor in NLP workshop series (at NAACL-HLT 2013, ACL 2014, NAACL-HLT 2015, NAACL-HLT 2016 and NAACL-HLT 2018) and a growing body of research -see Veale et al. (2016) and Shutova (2017) for quite recent reviews.", "cite_spans": [ { "start": 101, "end": 116, "text": "(Shutova, 2010)", "ref_id": "BIBREF25" }, { "start": 422, "end": 441, "text": "Veale et al. (2016)", "ref_id": "BIBREF32" }, { "start": 446, "end": 460, "text": "Shutova (2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Automatic metaphor processing involves two main tasks: identifying MUW (metaphor detection or recognition) and attempting to provide a semantic interpretation for the utterance containing them (metaphor interpretation). 
This work deals with metaphor detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the last decade this problem has mainly been approached with supervised and semi-supervised machine learning techniques, but recently this paradigm has largely shifted to the use of deep learning algorithms, such as neural networks. Leong et al. (2018) report that all but one of the participating teams in the 2018 VUA Metaphor Detection Shared Task used this kind of architecture. Our system follows this trend by trying to improve on previous neural network methods.", "cite_spans": [ { "start": 233, "end": 252, "text": "Leong et al. (2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Below we describe the main related work (section 2). Next we present our methodology and model (section 3), experiments (section 4) and results (section 5). We finish with the discussion and our overall conclusions (sections 6 and 7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Research on metaphor recognition and interpretation is changing from the use of features (linguistic and concreteness features), classical methods (such as generalization, classification and word associations) and theoretical principles (construction grammar, frame semantics and conceptual metaphor theory) to neural networks and other deep learning techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Concreteness features are used by Klebanov et al. (2015) along with re-weighting of the training examples to train a supervised machine learning system. The trained system is able to classify all content words of a text into two groups: metaphorical and non-metaphorical.", "cite_spans": [ { "start": 34, "end": 56, "text": "Klebanov et al. (2015)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Klebanov et al. (2016) study the metaphoricity of verbs using semantic generalization and classification with word forms, lemmas and several other linguistic features. They demonstrated the effectiveness of the generalization from orthographic unigrams to lemmas and of the combination of lemmas and semantic classes based on WordNet. They also used automatically generated clusters in combination with unigram lemmas, achieving competitive performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The Meta4meaning (Xiao et al., 2016) metaphor interpretation method uses word associations extracted from a corpus to retrieve approximate properties of concepts and provide interpretations for nominal metaphors of the form NOUN 1 is [a] NOUN 2 (where NOUN 1 is the tenor and NOUN 2 the vehicle). Metaphor interpretation is obtained as a combination of the saliences of the properties with respect to the tenor and the vehicle. Combinations can be aggregations (the product or sum of saliences), salience difference, or a combination of the results of the two. 
As an output, Meta4meaning provides a list of interpretations with weights.", "cite_spans": [ { "start": 17, "end": 35, "text": "(Xiao et al., 2016", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The automatic metaphor detection system MetaNet (Hong, 2016) has been designed by applying theoretical principles from construction grammar, frame semantics, and conceptual metaphor theory. The system relies on a conceptual network of frames and metaphors. Rosen (2018) developed an algorithm using deep learning techniques that uses a representation of metaphorical constructions at the argument-structure level. The algorithm allows for the identification of source-level mappings of metaphors. The author concludes that the use of deep learning algorithms with the addition of construction grammatical relations in the feature set improves the accuracy of the prediction of metaphorical source domains. Wu et al. (2018) propose to use a Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM) model with a Conditional Random Field (CRF) or Softmax layer for metaphor detection in texts. They combine CNN and LSTM to capture both local and long-distance contextual information to represent the input sentences. Meanwhile, Mu et al. (2019) argue that using broader discourse features can have a substantial positive impact on the task of metaphor identification. They obtain significant results using document embedding methods to represent an utterance and its surrounding discourse. With this material a gradient boosting classifier is trained.", "cite_spans": [ { "start": 48, "end": 60, "text": "(Hong, 2016)", "ref_id": "BIBREF8" }, { "start": 254, "end": 266, "text": "Rosen (2018)", "ref_id": "BIBREF24" }, { "start": 702, "end": 718, "text": "Wu et al. (2018)", "ref_id": "BIBREF33" }, { "start": 1022, "end": 1038, "text": "Mu et al. (2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Other works for specific tasks within the scope of metaphor recognition, such as detecting the metaphoricity of adjective-noun (AN) pairs in English as isolated units, include the works by Turney et al. (2011), Gutierrez et al. (2016), Bizzoni et al. (2017), and Torres Rivera et al. (2020). The main goal of this task is to classify AN collocations using external and internal linguistic features, or techniques such as transfer learning along with word embeddings. We propose a model that uses a residual bidirectional long short-term memory (biLSTM) network with a CRF, using ELMo embeddings along with additional linguistic features, such as part of speech tags (POS) and semantically disambiguated WordNet 1 synonym sets (synsets) (Fellbaum and Miller, 1998). Our model could be grouped into the same category as the aforementioned approaches: deep neural network models for metaphor detection.", "cite_spans": [ { "start": 468, "end": 495, "text": "(Fellbaum and Miller, 1998)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Most of the approaches mentioned in section 2 used the VUA corpus (Steen et al., 2010) in order to carry out model training and testing. They divided the training and test sets according to the VUA Metaphor Detection Shared Task specifications. To train and test our model we used the VUA corpus partitions, using ELMo embeddings to represent words and lemmas, and POS and synsets as additional linguistic features. ELMo (Embeddings from Language Models) embeddings (Peters et al., 2018) are derived from a bidirectional language model (biLM) and they are contextualized, deep and character based. ELMo embeddings have been successfully used in several NLP tasks.", "cite_spans": [ { "start": 66, "end": 86, "text": "(Steen et al., 2010)", "ref_id": "BIBREF28" }, { "start": 466, "end": 487, "text": "(Peters et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "3" },
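{ "text": "A minimal sketch of how such contextual token vectors can be obtained (this assumes the allennlp package, which is only one of several available ELMo implementations, and the sentence is a toy example):\nfrom allennlp.commands.elmo import ElmoEmbedder\n\n# Downloads the default pre-trained biLM weights on first use.\nelmo = ElmoEmbedder()\ntokens = [\"The\", \"idea\", \"collapsed\", \"under\", \"scrutiny\"]\n# Shape (3, num_tokens, 1024): one 1024-dimension vector per token\n# from each of the three biLM layers.\nlayers = elmo.embed_sentence(tokens)\ntoken_vectors = layers.mean(axis=0)  # averaging the layers is one simple choice", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "3" },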
{ "text": "To process the VUA corpus we used the Natural Language Toolkit (NLTK) (Loper and Bird, 2002) for Python, with which we performed tokenization, lemmatization, and POS tagging. Then we used Freeling (Padr\u00f3 and Stanilovsky, 2012) to obtain the respective synset of each token. Although NLTK provides a method for obtaining synsets -using POS tags or Lesk's Algorithm-, Freeling implements UKB (Agirre et al., 2014), a graph-based word sense disambiguation (WSD) algorithm that is used to obtain semantically disambiguated synsets. These features, along with the ELMo embeddings, were used -in different configurations- as input for our model. We set a sequence padding value equal to 116, which is the maximum sentence length observed in the corpus. This process normalizes the input in order to train in batches, but might contribute to sparsity in the training data.", "cite_spans": [ { "start": 70, "end": 92, "text": "(Loper and Bird, 2002)", "ref_id": "BIBREF17" }, { "start": 201, "end": 230, "text": "(Padr\u00f3 and Stanilovsky, 2012)", "ref_id": "BIBREF21" }, { "start": 392, "end": 413, "text": "(Agirre et al., 2014)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "3" },
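{ "text": "A minimal sketch of this preprocessing stage using only NLTK (here NLTK's Lesk implementation stands in for Freeling's graph-based UKB disambiguation, which is what we actually used; the sketch assumes the punkt, averaged_perceptron_tagger and wordnet resources are installed):\nimport nltk\nfrom nltk.stem import WordNetLemmatizer\nfrom nltk.wsd import lesk\n\nlemmatizer = WordNetLemmatizer()\n\n# Map Penn Treebank tags to the WordNet POS codes the lemmatizer expects.\ndef wn_pos(tag):\n    if tag.startswith(\"J\"): return \"a\"\n    if tag.startswith(\"V\"): return \"v\"\n    if tag.startswith(\"R\"): return \"r\"\n    return \"n\"\n\nsentence = \"He attacked every weak point in my argument\"\ntokens = nltk.word_tokenize(sentence)\ntagged = nltk.pos_tag(tokens)\nlemmas = [lemmatizer.lemmatize(w, wn_pos(t)) for w, t in tagged]\n# One synset per token; None when WordNet has no entry for the word.\nsynsets = [lesk(tokens, w, wn_pos(t)) for w, t in tagged]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "3" },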
{ "text": "We used a one-hot encoded representation for POS, and computed local 100-dimension embeddings for synsets. In the case of POS, we have a small set of tags (43), which results in a low dimensionality of the one-hot embeddings. For synsets, the computation of local embeddings provides the semantically disambiguated relations that exist between the units that compose the training data. These embeddings, in addition to their ELMo counterparts, should provide enough contextual and semantic data to understand metaphorical instances of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "3" }, { "text": "The main architecture of our model (shown in Figure 1 ) is composed of a residual biLSTM (Kim et al., 2017; Tran et al., 2017) for sequence labeling. One of the particularities of this architecture lies in the implementation of an additive operation that takes the outputs from each biLSTM layer and combines them to calculate the residual connection between them, in order to retain information previously seen by both layers.", "cite_spans": [ { "start": 89, "end": 107, "text": "(Kim et al., 2017;", "ref_id": "BIBREF9" }, { "start": 108, "end": 126, "text": "Tran et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 45, "end": 53, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model Description", "sec_num": "3" }, { "text": "After computing the residual connection from both biLSTM layers, our model includes a dropout layer, followed by a time distributed layer in which a dense layer with 2 hidden units is applied to each timestep. We used ReLU (Nair and Hinton, 2010) as activation function in combination with a He-normal (He et al., 2015) kernel initialization function for the time distributed layer, which results in a zero-mean Gaussian distribution with a standard deviation equal to \u221a(2/n_l), where n_l is the number of input units of the layer. Finally, after the time distributed layer we used a conditional random field (CRF) implemented for sequence labeling (Lafferty et al., 2001). Given that the VUA corpus is composed of more negative -or literal- labels than positive -or metaphoric- labels, and that the sequence padding process added non-informative features to the input array, we opted to treat the training partition as an imbalanced dataset. We selected the Nadam optimizer (Dozat, 2016), which is based on Adam (Kingma and Ba, 2014) and tends to perform better with sparse data. Adam has two main components: a momentum component and an adaptive learning rate component. Nadam modifies the momentum component of Adam using Nesterov's accelerated gradient (NAG). The Nadam update rule can be written as follows:", "cite_spans": [ { "start": 228, "end": 251, "text": "(Nair and Hinton, 2010)", "ref_id": "BIBREF19" }, { "start": 307, "end": 324, "text": "(He et al., 2015)", "ref_id": "BIBREF7" }, { "start": 599, "end": 621, "text": "(Lafferty et al., 2001", "ref_id": "BIBREF13" }, { "start": 924, "end": 937, "text": "(Dozat, 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "3" }, { "text": "w_{t+1} = w_t \u2212 (\u03b1 / (\u221a\u0175_t + \u03b5)) \u2022 (\u03b2_1 m_t + ((1 \u2212 \u03b2_1) / (1 \u2212 \u03b2_1^t)) \u2022 \u2202L/\u2202w_t) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "3" },
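{ "text": "A minimal sketch of this architecture in Keras; it is an illustration under assumptions rather than our exact training code: the CRF layer is taken from the keras-contrib package, the biLSTM size (128 units per direction) and dropout rate are placeholders we do not report here, and the input is a precomputed matrix concatenating 1024-dimension ELMo vectors, 43 one-hot POS tags and 100-dimension synset embeddings, padded to length 116:\nfrom keras.models import Model\nfrom keras.layers import Input, LSTM, Bidirectional, Dropout, Dense, TimeDistributed, add\nfrom keras.optimizers import Nadam\nfrom keras_contrib.layers import CRF  # assumption: CRF layer from keras-contrib\n\nMAXLEN, FEAT_DIM, UNITS = 116, 1024 + 43 + 100, 128\n\ninputs = Input(shape=(MAXLEN, FEAT_DIM))\n# Two stacked biLSTM layers whose outputs are summed: the additive\n# residual connection described in this section.\nh1 = Bidirectional(LSTM(UNITS, return_sequences=True))(inputs)\nh2 = Bidirectional(LSTM(UNITS, return_sequences=True))(h1)\nx = Dropout(0.5)(add([h1, h2]))\n# Time distributed dense layer: 2 units per timestep, ReLU, He-normal init.\nx = TimeDistributed(Dense(2, activation=\"relu\", kernel_initializer=\"he_normal\"))(x)\ncrf = CRF(2)  # two labels: literal vs. metaphoric\nmodel = Model(inputs, crf(x))\nmodel.compile(optimizer=Nadam(lr=0.0025), loss=crf.loss_function, metrics=[crf.accuracy])\n# model.fit(X_train, y_train, batch_size=32, epochs=5) with hypothetical arrays", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "3" },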
{ "text": "To carry out the evaluation of our model we used the train and test splits provided in the VUA shared task partitions (Shutova, 2017) . In order to obtain a validation split we divided the training partition using the following percentages: 80% for training and 20% for validation. With these partitions, we trained a total of 6 different model configurations: words and POS (W+POS); lemmas and POS (L+POS); words, POS and synsets (W+POS+SS); lemmas, POS and synsets (L+POS+SS); words, lemmas and POS (WL+POS); and words, lemmas, POS and synsets (WL+POS+SS). In all cases we used the same training parameters: all model configurations were trained in batches for 5 epochs, using a learning rate of 0.0025. Then, the resulting models were evaluated -using the precision, recall and F 1 score metrics- on both the all POS metaphor detection task and the metaphoric verbs detection task.", "cite_spans": [ { "start": 114, "end": 129, "text": "(Shutova, 2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" },
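{ "text": "A minimal sketch of the token-level scoring behind these metrics, assuming per-token gold labels and predictions and a mask that excludes the padding positions introduced in section 3 (the arrays are toy examples, not our data):\nimport numpy as np\nfrom sklearn.metrics import precision_recall_fscore_support\n\n# 0 = literal, 1 = metaphoric; the boolean mask marks real tokens so the\n# padded positions do not inflate the scores.\ny_true = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])\ny_pred = np.array([[1, 0, 0, 0], [1, 1, 0, 0]])\nmask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]], dtype=bool)\n\np, r, f1, _ = precision_recall_fscore_support(\n    y_true[mask], y_pred[mask], average=\"binary\", pos_label=1)\nprint(p, r, f1)  # 0.667, 1.0, 0.8 on this toy example", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" },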
{ "text": "Regarding the all POS prediction task (Table 1 ) , the L+POS+SS model had the best performance with 0.5729 in precision, 0.6027 in recall and an F 1 score equal to 0.5874. Overall, all configurations obtained a mean F 1 score of 0.58, with the WL+POS model obtaining the lowest score (0.5615). Regarding the recall score, the highest observed value was obtained by the W+POS+SS model, with a recall equal to 0.6438. It could be said that a less diverse lexicon, obtained by using lemmas instead of words to compute embeddings, helped to improve the performance of the L+POS+SS model. Nevertheless, when comparing the W+POS and L+POS configurations, both obtained similar results, with less than 1% difference in performance between them. Meanwhile, when comparing the W+POS+SS and L+POS+SS models, it can be observed that both models obtained similar F 1 scores, but a variation of 4% between precision and recall that favours precision in the L+POS+SS model, and recall in the W+POS+SS model.", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 46, "text": "(Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In the case of the metaphoric verb labeling task (Table 2 ), the W+POS model obtained the best scores in precision and F 1 (0.6695 and 0.6543 respectively), while the W+POS+SS model obtained the highest recall value (0.7032). Overall, the mean F 1 score of all configurations was equal to 0.6411, with WL+POS being the poorest performing configuration with an F 1 score of 0.6101. In a similar way to the all POS task, the W+POS+SS and L+POS+SS configurations obtained precision and recall scores with a difference of 6% in both metrics.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 57, "text": "(Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Unlike in the all POS task, combining features did not improve the performance of the models for verb labeling. While synsets served to disambiguate the meaning of the words or lemmas fed to the model, using only ELMo embeddings and POS tags yielded better results on this task. One of the possible explanations for this behavior could be that verbs tend to be more polysemous than nouns and, therefore, obtain greater benefit from this feature. According to WordNet statistics 2 , verbs have an average polysemy index of 2.17, while nouns have an average of 1.24. It can be observed in the all POS model set that the W+POS architecture has a higher precision in comparison to the W+POS+SS configuration. This behaviour can also be observed in the verbs task model set, where both configurations obtained the highest values for these metrics. On the one hand, the W+POS classifier captures fewer instances of metaphoric words, but most of the metaphors it classifies are true positives; on the other hand, the W+POS+SS is a greedier model that correctly classifies metaphors but its predictions tend to include false positives. Such variation might be caused by the inclusion of synsets as a training feature: when additional senses are linked to each training word, they provide a polysemous representation of words and cause an increase in semantic patterns for both metaphoric and literal tokens. These semantically disambiguated patterns broaden the prediction scope of the model, as words with similar senses might occur in similar contexts. While the W+POS architecture correctly predicts metaphors to a certain degree, its scope is more precise but narrower than that of the W+POS+SS architecture, in which words -particularly verbs- have a variety of senses that improve the recall metric at the expense of predicting literal tokens as metaphoric when compared to the W+POS model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Our proposed architecture has similarities to other current approaches such as Wu et al. (2018) , who propose a CNN-LSTM model with a Softmax layer, and Mu et al. (2019) , who implement an XGBoost classifier using ELMo embeddings. In comparison to these approaches, our model shows an improvement in precision on the verb labeling task with a value equal to 0.6695, while Wu et al. (2018) reported a precision score of 0.600 3 , and Mu et al. (2019) a precision equal to 0.589. Nevertheless, Wu et al. (2018) reported the highest F 1 score (0.671), and Mu et al. (2019) the highest recall (0...). Regarding the all POS labeling task, the model presented by Wu et al. (2018) performs better in all metrics, with a difference of 3% in precision, 10% in recall and 8% in F 1 score. It has to be noted that our model presents a simpler architecture (as shown in section 3). Wu et al. (2018) trained their model using 200 biLSTM hidden states and 100 CNN units for 15 epochs, and trained it 20 times using an ensemble method. On the other hand, the simplest W+POS architecture that we presented takes an average time of 5 minutes per epoch 4 to train and validate, thus producing a less complex model that is faster and less expensive to train.", "cite_spans": [ { "start": 79, "end": 95, "text": "Wu et al. (2018)", "ref_id": "BIBREF33" }, { "start": 139, "end": 155, "text": "Mu et al. (2019)", "ref_id": "BIBREF18" }, { "start": 431, "end": 447, "text": "Wu et al. (2018)", "ref_id": "BIBREF33" }, { "start": 644, "end": 660, "text": "Wu et al. (2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "On both tasks the poorest performing configuration was WL+POS: combining these features improved recall but lowered both precision and F 1 . Combining words and lemmas might create redundancy in certain features that is not possible to leverage using POS. On the other hand, while the dimensionality becomes higher than in the previous configuration (1024 + 1024 + 43), once synsets are added in the WL+POS+SS architecture (increasing the feature dimensionality by 100) the performance of the model improves on both precision and recall on the all POS task, and in all metrics on the verbs task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "One of the strategies that we implemented to deal with the imbalance of the training data was the kernel initialization function. The He-normal function uses the size of the preceding layer in order to generate appropriately scaled weights. In this case, the time distributed layer is activated using ReLU, takes the size of the dropout layer's output, and is initialized with a He-normal distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" },
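{ "text": "As a small worked example of this initialization (a sketch of the weight statistics only; the 256-dimension input size is an assumption for illustration): for a layer with n_l input units, He-normal draws weights from a zero-mean Gaussian with standard deviation \u221a(2/n_l):\nimport numpy as np\n\ndef he_normal(fan_in, shape, seed=0):\n    # Zero-mean Gaussian with standard deviation sqrt(2 / fan_in),\n    # following He et al. (2015).\n    std = np.sqrt(2.0 / fan_in)\n    return np.random.default_rng(seed).normal(0.0, std, size=shape)\n\n# Weights feeding the 2-unit time distributed layer from an assumed\n# 256-dimension input: the standard deviation is sqrt(2/256), about 0.0884.\nw = he_normal(256, (256, 2))\nprint(w.std())", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" },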
{ "text": "In this paper we have described the system we presented at the FigLang 2020 metaphor detection shared task. Our approach is based on neural networks, using a residual biLSTM with a CRF and ELMo embeddings, along with several combinations of words, lemmas and linguistic features such as POS and WordNet synsets. The system achieves competitive results with a simpler architecture compared to systems found in the literature. Such systems implement similar elements, such as bidirectional LSTM, CRF and ELMo embeddings, in different configurations and with different combinations of linguistic features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and further work", "sec_num": "7" }, { "text": "As future work, we plan to further analyse which POS benefits most from the inclusion of synset information. Another aspect we want to explore is how to deal with imbalanced data, i.e. how we can leverage a dataset with only two classes (metaphoric/literal) where most of the samples are literal. Another interesting question that deserves more research is the effect that the addition of linguistic information has on the optimal dimensionality. Another feature that could be implemented is the concreteness value of words, either as an additional input or as a strategy to balance classes according to the influence that this feature has on literal and metaphoric classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and further work", "sec_num": "7" }, { "text": "Other future lines of work might include the implementation of this type of model for the detection of metaphors and source domain identification in Spanish. Current developments on metaphor detection are being carried out mainly in English; while this is a great resource, it could be interesting to create resources in other languages to broaden the scope of metaphor detection and interpretation. A possible pipeline could be configured with two separate models: one that performs the detection of metaphorical words, followed by another classifier that predicts the domain of those metaphors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and further work", "sec_num": "7" }, { "text": "Freeling implements WordNet version 3.0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://wordnet.princeton.edu/documentation/wnstats7wn", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Both authors reported metric results using three digits. 4 The model was trained using a shared NVIDIA Tesla P100 GPU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was conducted in the framework of \"MOMENT: Metaphors of severe mental disorders. Discourse analysis of affected people and mental health professionals\", a project funded by the Span- ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Random walks for knowledge-based word sense disambiguation", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2014, "venue": "Computational Linguistics", "volume": "40", "issue": "1", "pages": "57--84", "other_ids": { "DOI": [ "10.1162/COLI_a_00164" ] }, "num": null, "urls": [], "raw_text": "Eneko Agirre, Oier L\u00f3pez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics, 40(1):57-84.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Deep\" Learning : Detecting Metaphoricity in Adjective-Noun Pairs", "authors": [ { "first": "Yuri", "middle": [], "last": "Bizzoni", "suffix": "" }, { "first": "Stergios", "middle": [], "last": "Chatzikyriakidis", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Ghanimifard", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Workshop on Stylistic Variation", "volume": "", "issue": "", "pages": "43--52", "other_ids": { "DOI": [ "10.18653/v1/W17-4906" ] }, "num": null, "urls": [], "raw_text": "Yuri Bizzoni, Stergios Chatzikyriakidis, and Mehdi Ghanimifard. 2017. \"Deep\" Learning : Detect- ing Metaphoricity in Adjective-Noun Pairs. In Pro- ceedings of the Workshop on Stylistic Variation, pages 43-52, Copenhagen, Denmark. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A new methodology for conceptual metaphor detection and formulation in corpora", "authors": [ { "first": "Marta", "middle": [], "last": "Coll", "suffix": "" }, { "first": "-", "middle": [], "last": "Florit", "suffix": "" }, { "first": "Salvador", "middle": [], "last": "Climent", "suffix": "" } ], "year": 2019, "venue": "Journal of Linguistics", "volume": "32", "issue": "", "pages": "43--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Coll-Florit and Salvador Climent. 2019. A new methodology for conceptual metaphor detection and formulation in corpora. a case study on a mental health corpus. SKY Journal of Linguistics, 32:43- 74.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "MOMENT: Met\u00e1foras del trastorno mental grave. an\u00e1lisis del discurso de personas afectadas y profesionales de la salud mental [MOMENT: Metaphors of severe mental disorder. discourse analysis of affected people and mental health professionals", "authors": [ { "first": "Marta", "middle": [], "last": "Coll-Florit", "suffix": "" }, { "first": "Salvador", "middle": [], "last": "Climent", "suffix": "" }, { "first": "Mart\u00edn", "middle": [], "last": "Correa-Urquiza", "suffix": "" }, { "first": "Eul\u00e0lia", "middle": [], "last": "Hern\u00e1ndez", "suffix": "" }, { "first": "Antoni", "middle": [], "last": "Oliver", "suffix": "" }, { "first": "Asun", "middle": [], "last": "Pi\u00e9", "suffix": "" } ], "year": 2018, "venue": "Procesamiento del Lenguaje Natural", "volume": "61", "issue": "", "pages": "139--142", "other_ids": { "DOI": [ "10.26342/2018-61-17" ] }, "num": null, "urls": [], "raw_text": "Marta Coll-Florit, Salvador Climent, Mart\u00edn Correa- Urquiza, Eul\u00e0lia Hern\u00e1ndez, Antoni Oliver, and Asun Pi\u00e9. 2018. MOMENT: Met\u00e1foras del trastorno mental grave. an\u00e1lisis del discurso de personas afec- tadas y profesionales de la salud mental [MOMENT: Metaphors of severe mental disorder. discourse anal- ysis of affected people and mental health profession- als]. Procesamiento del Lenguaje Natural, 61:139- 142.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Incorporating nesterov momentum into adam", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Conference on Learning Representations (ICLR-2016) -Workshop Track", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat. 2016. Incorporating nesterov mo- mentum into adam. In Proceedings of the Inter- national Conference on Learning Representations (ICLR-2016) -Workshop Track, San Juan (Puerto Rico).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WordNet: An Electronic Lexical Database. Language, speech, and communication", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "G", "middle": [ "A" ], "last": "Miller", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Fellbaum and G.A. Miller. 1998. WordNet: An Elec- tronic Lexical Database. Language, speech, and communication. 
MIT Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Literal and Metaphorical Senses in Compositional Distributional Semantic Models", "authors": [ { "first": "E", "middle": [], "last": "", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Gutierrez", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Tyler", "middle": [], "last": "Marghetis", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Bergen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "183--193", "other_ids": { "DOI": [ "10.18653/v1/P16-1018" ] }, "num": null, "urls": [], "raw_text": "E.Dario Gutierrez, Ekaterina Shutova, Tyler Marghetis, and Benjamin Bergen. 2016. Literal and Metaphor- ical Senses in Compositional Distributional Seman- tic Models. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 183-193, Berlin, Germany. Association for Computational Linguis- tics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. CoRR, abs/1502.01852.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic metaphor detection using constructions and frames. Constructions and frames", "authors": [ { "first": "Jisup", "middle": [], "last": "Hong", "suffix": "" } ], "year": 2016, "venue": "", "volume": "8", "issue": "", "pages": "295--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jisup Hong. 2016. Automatic metaphor detection us- ing constructions and frames. Constructions and frames, 8(2):295-322.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Residual LSTM: design of a deep recurrent architecture for distant speech recognition", "authors": [ { "first": "Jaeyoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Mostafa", "middle": [], "last": "El-Khamy", "suffix": "" }, { "first": "Jungwon", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaeyoung Kim, Mostafa El-Khamy, and Jungwon Lee. 2017. Residual LSTM: design of a deep recurrent architecture for distant speech recognition. CoRR, abs/1701.03360.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Adam: A Method for Stochastic Optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980[cs].ArXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. 
Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs]. ArXiv: 1412.6980.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Supervised word-level metaphor detection: Experiments with concreteness and reweighting of examples", "authors": [ { "first": "Chee Wee", "middle": [], "last": "Beata Beigman Klebanov", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Leong", "suffix": "" }, { "first": "", "middle": [], "last": "Flor", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Third Workshop on Metaphor in NLP", "volume": "", "issue": "", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov, Chee Wee Leong, and Michael Flor. 2015. Supervised word-level metaphor detection: Experiments with concreteness and reweighting of examples. In Proceedings of the Third Workshop on Metaphor in NLP, pages 11-20.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semantic classifications for detection of verb metaphors", "authors": [ { "first": "Chee Wee", "middle": [], "last": "Beata Beigman Klebanov", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Leong", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Gutierrez", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "", "middle": [], "last": "Flor", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "101--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov, Chee Wee Leong, E Dario Gutierrez, Ekaterina Shutova, and Michael Flor. 2016. Semantic classifications for detection of verb metaphors. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 101-106.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Mor- gan Kaufmann Publishers Inc.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Metaphors we Live by", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Lakoff and Mark Johnson. 1980. Metaphors we Live by. 
University of Chicago Press, Chicago.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A report on the 2020 vua and toefl metaphor detection shared task", "authors": [ { "first": "Beata", "middle": [ "Beigman" ], "last": "Chee Wee Leong", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Klebanov", "suffix": "" }, { "first": "Egon", "middle": [], "last": "Hamill", "suffix": "" }, { "first": "Rutuja", "middle": [], "last": "Stemle", "suffix": "" }, { "first": "Xianyang", "middle": [], "last": "Ubale", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chee Wee Leong, Beata Beigman Klebanov, Chris Hamill, Egon Stemle, Rutuja Ubale, and Xianyang Chen. 2020. A report on the 2020 vua and toefl metaphor detection shared task. In Proceedings of the Second Workshop on Figurative Language Pro- cessing, Seattle, WA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A report on the 2018 VUA metaphor detection shared task", "authors": [ { "first": "Chee", "middle": [], "last": "Wee", "suffix": "" }, { "first": ";", "middle": [], "last": "Ben", "suffix": "" }, { "first": ")", "middle": [], "last": "Leong", "suffix": "" }, { "first": "Beata", "middle": [ "Beigman" ], "last": "Klebanov", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "56--66", "other_ids": { "DOI": [ "10.18653/v1/W18-0907" ] }, "num": null, "urls": [], "raw_text": "Chee Wee (Ben) Leong, Beata Beigman Klebanov, and Ekaterina Shutova. 2018. A report on the 2018 VUA metaphor detection shared task. In Proceedings of the Workshop on Figurative Language Processing, pages 56-66, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Nltk: The natural language toolkit", "authors": [ { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "63--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Compu- tational Linguistics, pages 63-70.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning Outside the Box", "authors": [ { "first": "Jesse", "middle": [], "last": "Mu", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2019, "venue": "Discourse-level Features Improve Metaphor Identification", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.02246" ] }, "num": null, "urls": [], "raw_text": "Jesse Mu, Helen Yannakoudakis, and Ekaterina Shutova. 2019. Learning Outside the Box: Discourse-level Features Improve Metaphor Iden- tification. 
arXiv:1904.02246 [cs].", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Rectified linear units improve restricted boltzmann machines", "authors": [ { "first": "Vinod", "middle": [], "last": "Nair", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML'10", "volume": "", "issue": "", "pages": "807--814", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML'10, page 807-814, Madison, WI, USA. Om- nipress.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Variation within universals: The 'metaphorical profile' approach to the study of ANGER concepts in English, Russian and Spanish, Metaphor and Intercultural Communication", "authors": [ { "first": "Anna", "middle": [], "last": "Ogarkova", "suffix": "" }, { "first": "Cristina", "middle": [ "Soriano" ], "last": "Salinas", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Ogarkova and Cristina Soriano Salinas. 2014. Variation within universals: The 'metaphorical pro- file' approach to the study of ANGER concepts in English, Russian and Spanish, Metaphor and Inter- cultural Communication. Bloomsbury, London. ID: unige:98101.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Freeling 3.0: Towards wider multilinguality", "authors": [ { "first": "Llu\u00eds", "middle": [], "last": "Padr\u00f3", "suffix": "" }, { "first": "Evgeny", "middle": [], "last": "Stanilovsky", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Language Resources and Evaluation Conference (LREC 2012)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Llu\u00eds Padr\u00f3 and Evgeny Stanilovsky. 2012. Freeling 3.0: Towards wider multilinguality. In Proceedings of the Language Resources and Evaluation Confer- ence (LREC 2012), Istanbul, Turkey. ELRA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. 
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "MIP: A method for identifying metaphorically used words in discourse", "authors": [ { "first": "Pragglejaz", "middle": [], "last": "Group", "suffix": "" } ], "year": 2007, "venue": "Metaphor and Symbol", "volume": "22", "issue": "1", "pages": "1--39", "other_ids": { "DOI": [ "10.1080/10926480709336752" ] }, "num": null, "urls": [], "raw_text": "Pragglejaz Group. 2007. MIP: A method for iden- tifying metaphorically used words in discourse. Metaphor and Symbol, 22(1):1-39.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Computationally Constructed Concepts: A Machine Learning Approach to Metaphor Interpretation Using Usage-Based Construction Grammatical Cues", "authors": [ { "first": "Zachary", "middle": [], "last": "Rosen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "102--109", "other_ids": { "DOI": [ "10.18653/v1/W18-0912" ] }, "num": null, "urls": [], "raw_text": "Zachary Rosen. 2018. Computationally Constructed Concepts: A Machine Learning Approach to Metaphor Interpretation Using Usage-Based Con- struction Grammatical Cues. In Proceedings of the Workshop on Figurative Language Processing, pages 102-109, New Orleans, Louisiana. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Models of Metaphor in NLP", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "688--697", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekaterina Shutova. 2010. Models of Metaphor in NLP. Proceedings of the 48th Annual Meeting of the As- sociation for Computational Linguistics, pages 688- 697.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Annotation of Linguistic and Conceptual Metaphor", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "1073--1100", "other_ids": { "DOI": [ "10.1007/978-94-024-0881-2_40" ] }, "num": null, "urls": [], "raw_text": "Ekaterina Shutova. 2017. Annotation of Linguistic and Conceptual Metaphor, pages 1073-1100. Springer Netherlands, Dordrecht.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "MIPVU: A manual for identifying metaphor-related words", "authors": [ { "first": "Gerard", "middle": [], "last": "Steen", "suffix": "" }, { "first": "Lettie", "middle": [], "last": "Dorst", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Berenike Herrmann", "suffix": "" }, { "first": "Tina", "middle": [], "last": "Kaal", "suffix": "" }, { "first": "Tryntje", "middle": [], "last": "Krennmayr", "suffix": "" }, { "first": "", "middle": [], "last": "Pasma", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "24--40", "other_ids": { "DOI": [ "10.1075/celcr.22.02ste" ] }, "num": null, "urls": [], "raw_text": "Gerard Steen, Lettie Dorst, J Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Tryntje Pasma. 2019. 
MIPVU: A manual for identifying metaphor-related words, pages 24-40.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A Method for Linguistic Metaphor Identification: From MIP to MIPVU", "authors": [ { "first": "Gerard", "middle": [ "J" ], "last": "Steen", "suffix": "" }, { "first": "Aletta", "middle": [ "G" ], "last": "Dorst", "suffix": "" }, { "first": "J", "middle": [ "Berenike" ], "last": "Herrmann", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Kaal", "suffix": "" }, { "first": "Tina", "middle": [], "last": "Krennmayr", "suffix": "" }, { "first": "Trijntje", "middle": [], "last": "Pasma", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerard J. Steen, Aletta G. Dorst, J. Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A Method for Linguistic Metaphor Identifi- cation: From MIP to MIPVU. John Benjamins.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Metaphoricity detection in adjectivenoun pairs", "authors": [ { "first": "Andr\u00e9s", "middle": [], "last": "Torres Rivera", "suffix": "" }, { "first": "Antoni", "middle": [], "last": "Oliver", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Coll-Florit", "suffix": "" } ], "year": 2020, "venue": "Procesamiento del Lenguaje Natural", "volume": "64", "issue": "", "pages": "53--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9s Torres Rivera, Antoni Oliver, and Marta Coll- Florit. 2020. Metaphoricity detection in adjective- noun pairs. Procesamiento del Lenguaje Natural, 64:53-60.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Named entity recognition with stack residual LSTM and trainable bias decoding", "authors": [ { "first": "Quan", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mackinlay", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Jimeno-Yepes", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quan Tran, Andrew MacKinlay, and Antonio Jimeno- Yepes. 2017. Named entity recognition with stack residual LSTM and trainable bias decoding. CoRR, abs/1706.07598.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Literal and Metaphorical Sense Identification through Concrete and Abstract Context", "authors": [ { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" }, { "first": "Yair", "middle": [], "last": "Neuman", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Assaf", "suffix": "" }, { "first": "Yohai", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "680--690", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Turney, Yair Neuman, Dan Assaf, and Yohai Co- hen. 2011. Literal and Metaphorical Sense Iden- tification through Concrete and Abstract Context. In Proceedings of the 2011 Conference on Empiri- cal Methods in Natural Language Processing, pages 680-690, Edinburgh, Scotland, UK. 
Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Metaphor: A Computational Perspective", "authors": [ { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Beata", "middle": [ "Beigman" ], "last": "Klebanov", "suffix": "" } ], "year": 2016, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "9", "issue": "1", "pages": "1--160", "other_ids": { "DOI": [ "10.2200/S00694ED1V01Y201601HLT031" ] }, "num": null, "urls": [], "raw_text": "Tony Veale, Ekaterina Shutova, and Beata Beigman Klebanov. 2016. Metaphor: A Computational Per- spective. Synthesis Lectures on Human Language Technologies, 9(1):1-160.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Neural Metaphor Detecting with CNN-LSTM Model", "authors": [ { "first": "Chuhan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Fangzhao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sixing", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhigang", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Yongfeng", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "110--114", "other_ids": { "DOI": [ "10.18653/v1/W18-0913" ] }, "num": null, "urls": [], "raw_text": "Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang. 2018. Neu- ral Metaphor Detecting with CNN-LSTM Model. In Proceedings of the Workshop on Figurative Lan- guage Processing, pages 110-114, New Orleans, Louisiana. Association for Computational Linguis- tics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Meta4meaning: Automatic metaphor interpretation using corpus-derived word associations", "authors": [ { "first": "Ping", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Khalid", "middle": [], "last": "Alnajjar", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Granroth-Wilding", "suffix": "" }, { "first": "Kat", "middle": [], "last": "Agres", "suffix": "" }, { "first": "Hannu", "middle": [], "last": "Toivonen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 7th International Conference on Computational Creativity (ICCC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ping Xiao, Khalid Alnajjar, Mark Granroth- Wilding, Kat Agres, and Hannu Toivonen. 2016. Meta4meaning: Automatic metaphor interpretation using corpus-derived word associations. In Pro- ceedings of the 7th International Conference on Computational Creativity (ICCC). Paris, France.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Summarized model diagram.", "num": null, "uris": null, "type_str": "figure" },
"TABREF1": { "text": "All POS task model comparison.", "num": null, "html": null, "content": "