{ "paper_id": "N10-1039", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:49:51.414737Z" }, "title": "Using Gaussian Mixture Models to Detect Figurative Language in Context", "authors": [ { "first": "Linlin", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "postCode": "15 11 50 66041", "settlement": "Postfach, Saarbr\u00fccken", "country": "Germany" } }, "email": "linlin@coli.uni-saarland.de" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "postCode": "15 11 50 66041", "settlement": "Postfach, Saarbr\u00fccken", "country": "Germany" } }, "email": "csporled@coli.uni-saarland.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a Gaussian Mixture model for detecting different types of figurative language in context. We show that this model performs well when the parameters are estimated in an unsupervised fashion using EM. Performance can be improved further by estimating the parameters from a small annotated data set.", "pdf_parse": { "paper_id": "N10-1039", "_pdf_hash": "", "abstract": [ { "text": "We present a Gaussian Mixture model for detecting different types of figurative language in context. We show that this model performs well when the parameters are estimated in an unsupervised fashion using EM. Performance can be improved further by estimating the parameters from a small annotated data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Figurative language employs words in a way that deviates from their normal meaning. It includes idiomatic usage, metaphor, metonymy or other types of creative language. Being able to detect figurative language is important for a number of NLP applications, e.g., machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Simply checking the input against an idiom dictionary does not solve the problem. While some expressions (e.g., trip the light fantastic) are always used idiomatically, many expressions (e.g., spill the beans), can take on a literal meaning as well. Whether such expression is used idiomatically or not has to be inferred from the discourse context. Likewise, simple dictionary look-up would not work for truly creative, one-off usages; these can neither be found in a dictionary nor can they be detected by standard idiom extraction methods, which apply statistical measures to accumulated corpus evidence for an expression to assess its 'idiomaticity'. An example of a fairly creative usage can be found in (1), which is a variation of the idiom put a sock in.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Take the sock out of your mouth and create a brand-new relationship with your mom.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a method for detecting figurative language in context. 
Because we use context information rather than corpus statistics, our approach also works for truly creative usages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most studies on the detection of idioms and other types of figurative language focus on one of three tasks: type-based extraction (detecting idioms at the type level), token-based classification (given a potentially idiomatic phrase in context, deciding whether it is used idiomatically), and token-based detection (detecting figurative expressions in running text).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Type-based extraction exploits the fact that idioms have many properties which differentiate them from other expressions, e.g., they often exhibit a degree of syntactic and lexical fixedness. These properties can be used to identify potential idioms, for instance, by employing measures of association strength between the elements of an expression (Lin, 1999).", "cite_spans": [ { "start": 349, "end": 360, "text": "(Lin, 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Type-based approaches are unsuitable for expressions which can be used both figuratively and literally. These have to be disambiguated in context. Token-based classification aims to do this. A number of token-based approaches have been proposed: supervised (Katz and Giesbrecht, 2006), weakly supervised (Birke and Sarkar, 2006), and unsupervised (Fazly et al., 2009; Sporleder and Li, 2009).", "cite_spans": [ { "start": 257, "end": 284, "text": "(Katz and Giesbrecht, 2006)", "ref_id": "BIBREF6" }, { "start": 305, "end": 329, "text": "(Birke and Sarkar, 2006)", "ref_id": "BIBREF0" }, { "start": 349, "end": 369, "text": "(Fazly et al., 2009;", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Finally, token-based detection can be viewed as a two-stage task combining type-based extraction and token-based classification. There has been relatively little work on this so far. One exception is Fazly et al. (2009), who detect idiom types using statistical methods that model the general idiomaticity of an expression and then combine this with a simple second-stage process that decides whether the target expression is used figuratively in a given context, based on whether the expression occurs in its canonical form.", "cite_spans": [ { "start": 219, "end": 238, "text": "Fazly et al. (2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "However, modeling token-based detection as a combination of type-based extraction and token-based classification has some drawbacks. First, type-based approaches typically compute statistics from multiple occurrences of a target expression, hence they cannot be applied to novel usages. Second, these methods were developed to detect figuratively used multi-word expressions (MWEs) and do not work for figuratively used individual words, like sparrow in example (2). Ideally, one would like to have a generic model that can detect any type of figurative usage in a given context.
The model we propose in this paper is one step in this direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "(2) During the Iraq war, he was a sparrow; he didn't condone the bloodshed but wasn't bothered enough to go out and protest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We address the problem by using Gaussian Mixture Models (GMMs). We assume that the literal (l) and non-literal (n) data are generated by two different Gaussians (a literal and a non-literal Gaussian). Token-based detection is then performed by determining which Gaussian has the higher probability of generating a specific instance. The Gaussian mixture model is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "p(x) = \\sum_{c \\in \\{l,n\\}} w_c \\cdot \\mathcal{N}(x \\mid \\mu_c, \\Sigma_c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "where c is the category of the Gaussian, \\mu_c is the mean, \\Sigma_c is the covariance matrix, and w_c is the Gaussian weight. Our method is based on the insight that figurative language exhibits fewer semantic cohesive ties with the context than literal language (Sporleder and Li, 2009). We use the Normalized Google Distance to model semantic relatedness (Cilibrasi and Vitanyi, 2007) and represent the data by five types of semantic relatedness features", "cite_spans": [ { "start": 323, "end": 352, "text": "(Cilibrasi and Vitanyi, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x = (x_1, x_2, x_3, x_4, x_5):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x_1 is the average relatedness between the target expression and the context words,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x_1 = \\frac{2}{|T| \\times |C|} \\sum_{(w_i, c_j) \\in T \\times C} relatedness(w_i, c_j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "where w_i is a component word of the target expression (T); c_j is one of the context words (C); |T| is the total number of words in the target expression, and |C| is the total number of words in the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "The term \\frac{2}{|T| \\times |C|} is the normalization factor; |T| \\times |C| is the total number of relatedness pairs between target component words and context words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x_2 is the average semantic relatedness in the context of the target expression,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x_2 = \\frac{1}{|C|^2} \\sum_{(c_i, c_j) \\in C \\times C, i \\neq j} relatedness(c_i, c_j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x_3 is the difference between the average semantic relatedness between the target expression and the context words and the average semantic relatedness of the context (i.e., x_3 = x_1 - x_2). It is an indicator of how strongly the target expression is semantically related to the discourse context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x_4 is the binary feature used by Sporleder and Li (2009) for predicting literal or idiomatic use in the cohesion-graph-based method,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x_4 = \\begin{cases} 1 & \\text{if } x_3 < 0 \\\\ 0 & \\text{otherwise} \\end{cases}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x_5 is a high-dimensional vector which represents the top relatedness scores between the component words of the target expression and the context,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "x_5(k) = \\max(k, \\{ relatedness(w_i, c_j) \\mid (w_i, c_j) \\in T \\times C \\})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "where the function \\max(k, A) is defined to return the k-th highest element of the set A. 1 The detection task is performed with a Bayes decision rule, which chooses the category whose Gaussian component assigns the highest probability to the instance:", "cite_spans": [ { "start": 91, "end": 92, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "c(x) = \\arg\\max_{i \\in \\{l,n\\}} \\{ w_i \\cdot \\mathcal{N}(x \\mid \\mu_i, \\Sigma_i) \\}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Gaussian Mixture Model to Detect Figurative Language", "sec_num": "3" }, { "text": "4 Evaluating the GMM Approach", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the GMM Approach", "sec_num": "4" }, { "text": "We evaluate our method on two data sets. The first set (idiom set) is taken from Sporleder and Li (2009) and consists of 3964 idiom occurrences (17 idiom types) which were manually labeled as 'literal' or 'figurative'. The second data set (V+NP set) consists of a randomly selected sample of 500 V+NP constructions from the Gigaword corpus, which were manually labeled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "To determine how well our model deals with different types of figurative usage, we distinguish four phenomena: Phrase-level figurative means that the whole phrase is used figuratively. We further divide this class into expressions which are potentially ambiguous between literal and figurative usage (nsa), e.g., spill the beans, and those that are unambiguously figurative irrespective of the context (nsu), e.g., trip the light fantastic. The latter can, theoretically, be detected by dictionary look-up; the former cannot. The label token-level figurative (nw) is used when part of the phrase is used figuratively (e.g., sparrow in (2)). Often it is difficult to determine whether a word is still used in a 'literal' sense or whether it is already used figuratively. Since we are interested in improving the performance of NLP applications such as MT, we take a pragmatic approach and classify usages as 'figurative' if they are not lexicalized, i.e., if the specific sense is not listed in a dictionary. 2 For example, we would classify summit in the 'meeting' sense as 'literal' (l). In our data set, 7.3% of the instances were annotated as 'nsa', 1.9% as 'nsu', 9.2% as 'nw' and 81.5% as 'l'. A randomly selected sample (100 instances) was annotated independently by a second annotator.
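Inter-annotator agreement is measured with Cohen's kappa, which corrects the annotators' observed agreement for the agreement expected by chance given their label distributions. Below is a minimal illustrative sketch of the computation in Python; the helper name and the toy label sequences are hypothetical, not our annotation data:

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Observed agreement: fraction of instances labeled identically.
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over categories of the product of the two
    # annotators' marginal label probabilities.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

ann_a = ['l', 'l', 'nsa', 'nw', 'l', 'nsu', 'l', 'nsa']
ann_b = ['l', 'l', 'nsa', 'l', 'l', 'nsu', 'l', 'nw']
print(cohens_kappa(ann_a, ann_b))  # toy value, not the 0.84 reported next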
The kappa score (Cohen, 1960) is 0.84, which suggests that the annotations are reliable.", "cite_spans": [ { "start": 1309, "end": 1322, "text": "(Cohen, 1960)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "We used the MatLab package provided by Calinon (2009) for estimating the GMM model. The GMM is trained with the EM algorithm. The priors, means, and covariances of the Gaussian components are initialized by the k-means clustering algorithm (Hartigan, 1975).", "cite_spans": [ { "start": 39, "end": 53, "text": "Calinon (2009)", "ref_id": "BIBREF1" }, { "start": 252, "end": 268, "text": "(Hartigan, 1975)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "GMM Estimated by EM", "sec_num": "4.2" }, { "text": "To determine whether the GMM is able to perform token-based idiom classification, we applied it to the idiom data set. The results (see Table 1) show that the GMM can distinguish usages quite well and achieves results comparable to Sporleder and Li's cohesion graph method (Co-Graph). In addition, this method can deal with unobserved occurrences of non-literal language. Table 2 shows the results on the second data set. The baseline predicts 'idiomatic' or 'literal' with a biased probability based on the true distribution in the annotated set. GMM shows the performance on the whole V+NP set. We also split the test set into three different subsets to determine how well the model performs on the different types of figurative usage ('nsa', 'nsu', 'nw').", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 144, "text": "Table 1)", "ref_id": null }, { "start": 371, "end": 378, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "GMM Estimated by EM", "sec_num": "4.2" }, { "text": "In a second experiment, we tested how well the GMM performs when utilizing the annotated idiom data set to estimate the two Gaussian components instead of using EM. We give equal weights to the two Gaussian components and predict the labels on the V+NP data set with the fixed mixture model estimated from the training set (GMM+f). This method further improves the performance compared to the unsupervised approach (Table 3).", "cite_spans": [], "ref_spans": [ { "start": 422, "end": 431, "text": "(Table 3)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "GMM estimated from Annotated Data", "sec_num": "4.3" }, { "text": "We also experimented with setting a threshold and abstaining from making a prediction when the probability of an instance belonging to the Gaussian is below the threshold (GMM+f+s). Table 3 shows the performance when only evaluating on the subset for which a classification was made. It can be seen that the accuracy and the overall performance on the literal class improve, but the precision for the non-literal class remains relatively low, i.e., many literal instances are still misclassified as 'non-literal'. One reason for this may be that there are a few instances containing named entities, which exhibit weak cohesive ties with the context even though they are used literally. Using a named-entity tagger before applying the GMM might solve the problem. Finally, Table 4 shows the results when using different idioms to generate the non-literal Gaussian. The literal Gaussian can be generated from the automatically obtained literal examples of Li and Sporleder (2009). We found that the estimation of the GMM is not sensitive to the choice of idioms; our model is robust and can use any existing idiom data to discover new figurative expressions.
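To make the supervised estimation concrete, the following is a minimal illustrative sketch in Python of fitting the two Gaussian components from labeled feature vectors and applying the Bayes decision rule with an optional abstention threshold (the GMM+f+s setting). This is not the MatLab implementation used in our experiments; the function names, the five-dimensional feature vectors (in the paper, x_5 alone has k = 100 dimensions), and the synthetic data are hypothetical:

import numpy as np
from scipy.stats import multivariate_normal

def fit_component(X):
    # Maximum-likelihood estimates of one component's mean and covariance.
    return X.mean(axis=0), np.cov(X, rowvar=False)

def classify(x, components, weights=(0.5, 0.5), threshold=None):
    # Bayes decision rule: c(x) = argmax_i w_i * N(x | mu_i, Sigma_i).
    scores = [w * multivariate_normal.pdf(x, mean=mu, cov=cov)
              for w, (mu, cov) in zip(weights, components)]
    best = int(np.argmax(scores))
    if threshold is not None and scores[best] < threshold:
        return None  # abstain when the winning density is too low
    return ('l', 'n')[best]

# Synthetic stand-ins for literal and non-literal training vectors.
rng = np.random.default_rng(0)
X_lit = rng.normal(0.5, 0.1, size=(200, 5))
X_non = rng.normal(0.2, 0.1, size=(200, 5))
components = [fit_component(X_lit), fit_component(X_non)]
print(classify(rng.normal(0.5, 0.1, size=5), components))

With equal weights, as in GMM+f, the decision reduces to comparing the two class-conditional densities.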
Furthermore, Table 4 also shows that the GMM does not need a large amount of annotated data for parameter estimation. A few hundred instances are sufficient.", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 3", "ref_id": "TABREF1" }, { "start": 774, "end": 781, "text": "Table 4", "ref_id": null }, { "start": 1131, "end": 1138, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "GMM estimated from Annotated Data", "sec_num": "4.3" }, { "text": "We described a GMM-based approach for detecting figurative expressions. This method not only works for distinguishing literal and non-literal usages of a potentially idiomatic expression in a discourse context, but also discovers new figurative expressions. [Table 4: Results on the V+NP dataset, Gaussian component parameters estimated on different idioms] The components of the GMM can be effectively estimated using the EM algorithm. The performance can be further improved by employing an annotated data set for parameter estimation. Our results show that the estimation of the Gaussian components is not idiom-dependent. Furthermore, a small annotated data set is enough to obtain good results.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 106, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We set k to be 100 in our experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used http://www.askoxford.com.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was funded by the DFG within the Cluster of Excellence \"Multimodal Computing and Interaction\". Thanks to Benjamin Roth for discussions and comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "The unsupervised GMM model beats the baseline and achieves good results on the V+NP data set. It also outperforms the Co-Graph approach, which suggests that the GMM, by capturing statistical properties of the data, handles the more difficult cases (general figurative usages rather than only idioms) better than the Co-Graph approach. In conclusion, the model is not only able to classify idiomatic expressions but also to detect new figurative expressions. However, the performance on the second data set is worse than on the idiom data set. This is because the V+NP data set contains more difficult examples, such as expressions which are only partially figurative (e.g., (2)). One would expect the literal part of the expression to exhibit cohesive ties with the context, hence the cohesion-based features may fail to detect this type of figurative usage. Consequently, the performance of the GMM is lower for figuratively used words ('nw') than for idioms ('nsa', 'nsu').
However, even for 'nw' cases the model still obtains a relatively high accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A clustering approach for the nearly unsupervised recognition of nonliteral language", "authors": [ { "first": "J", "middle": [], "last": "Birke", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarkar", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL-06", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Birke, A. Sarkar. 2006. A clustering approach for the nearly unsupervised recognition of nonliteral language. In Proceedings of EACL-06.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Robot Programming by Demonstration: A Probabilistic Approach", "authors": [ { "first": "S", "middle": [], "last": "Calinon", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Calinon. 2009. Robot Programming by Demonstration: A Probabilistic Approach. EPFL/CRC Press.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Google similarity distance", "authors": [ { "first": "R", "middle": [ "L" ], "last": "Cilibrasi", "suffix": "" }, { "first": "P", "middle": [ "M B" ], "last": "Vitanyi", "suffix": "" } ], "year": 2007, "venue": "IEEE Trans. on Knowl. and Data Eng", "volume": "19", "issue": "3", "pages": "370--383", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. L. Cilibrasi, P. M. B. Vitanyi. 2007. The Google similarity distance. IEEE Trans. on Knowl. and Data Eng., 19(3):370-383.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A coefficient of agreement for nominal scales", "authors": [ { "first": "J", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "Educational and Psychological Measurement", "volume": "20", "issue": "", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20:37-46.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unsupervised type and token identification of idiomatic expressions", "authors": [ { "first": "A", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "P", "middle": [], "last": "Cook", "suffix": "" }, { "first": "S", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics", "volume": "35", "issue": "1", "pages": "61--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Fazly, P. Cook, S. Stevenson. 2009. Unsupervised type and token identification of idiomatic expressions. Computational Linguistics, 35(1):61-103.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Clustering Algorithms", "authors": [ { "first": "J", "middle": [ "A" ], "last": "Hartigan", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. A. Hartigan. 1975. Clustering Algorithms. Wiley.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatic identification of non-compositional multi-word expressions using latent semantic analysis", "authors": [ { "first": "G", "middle": [], "last": "Katz", "suffix": "" }, { "first": "E", "middle": [], "last": "Giesbrecht", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the ACL06 Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Katz, E. Giesbrecht. 2006. Automatic identification of non-compositional multi-word expressions using latent semantic analysis. In Proceedings of the ACL06 Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Contextual idiom detection without labelled data", "authors": [ { "first": "L", "middle": [], "last": "Li", "suffix": "" }, { "first": "C", "middle": [], "last": "Sporleder", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP-09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Li, C. Sporleder. 2009. Contextual idiom detection without labelled data. In Proceedings of EMNLP-09.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic identification of non-compositional phrases", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ACL-99", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Lin. 1999. Automatic identification of non-compositional phrases. In Proceedings of ACL-99.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised recognition of literal and non-literal use of idiomatic expressions", "authors": [ { "first": "C", "middle": [], "last": "Sporleder", "suffix": "" }, { "first": "L", "middle": [], "last": "Li", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EACL-09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Sporleder, L. Li. 2009. Unsupervised recognition of literal and non-literal use of idiomatic expressions. In Proceedings of EACL-09.", "links": null } }, "ref_entries": { "TABREF1": { "type_str": "table", "text": "Results on the V+NP data set, Gaussian component parameters estimated by annotated data", "num": null, "content": "
Train (size) | Class | Precision | Recall | F-Score | Accuracy |