{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:40.620922Z" }, "title": "Recognizing Euphemisms and Dysphemisms Using Sentiment Analysis", "authors": [ { "first": "Christian", "middle": [], "last": "Felt", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Utah", "location": {} }, "email": "christianfelt@comcast.net" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Utah", "location": {} }, "email": "riloff@cs.utah.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents the first research aimed at recognizing euphemistic and dysphemistic phrases with natural language processing. Euphemisms soften references to topics that are sensitive, disagreeable, or taboo. Conversely, dysphemisms refer to sensitive topics in a harsh or rude way. For example, \"passed away\" and \"departed\" are euphemisms for death, while \"croaked\" and \"six feet under\" are dysphemisms for death. Our work explores the use of sentiment analysis to recognize euphemistic and dysphemistic language. First, we identify near-synonym phrases for three topics (FIRING, LYING, and STEALING) using a bootstrapping algorithm for semantic lexicon induction. Next, we classify phrases as euphemistic, dysphemistic, or neutral using lexical sentiment cues and contextual sentiment analysis. We introduce a new gold standard data set and present our experimental results for this task.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper presents the first research aimed at recognizing euphemistic and dysphemistic phrases with natural language processing. Euphemisms soften references to topics that are sensitive, disagreeable, or taboo. Conversely, dysphemisms refer to sensitive topics in a harsh or rude way. 
For example, \"passed away\" and \"departed\" are euphemisms for death, while \"croaked\" and \"six feet under\" are dysphemisms for death. Our work explores the use of sentiment analysis to recognize euphemistic and dysphemistic language. First, we identify near-synonym phrases for three topics (FIRING, LYING, and STEALING) using a bootstrapping algorithm for semantic lexicon induction. Next, we classify phrases as euphemistic, dysphemistic, or neutral using lexical sentiment cues and contextual sentiment analysis. We introduce a new gold standard data set and present our experimental results for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Euphemisms are expressions used to soften references to topics that are sensitive, disagreeable, or taboo with respect to societal norms. Whether as a lubricant for polite discourse, a means to hide disagreeable truths, or a repository for cultural anxieties, veiled by idioms so familiar we no longer think about what they literally mean, euphemisms are an essential part of human linguistic competence. Conversely, dysphemisms make references more harsh or rude, often using language that is direct or blunt, less formal or polite, and sometimes offensive. For example, \"passed away\" and \"departed\" are common euphemisms for death, while \"croaked\" and \"six feet under\" are dysphemisms for death. 
Table 1 shows examples of euphemisms and dysphemisms across a variety of topics.", "cite_spans": [], "ref_spans": [ { "start": 698, "end": 705, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Following terminology from linguistics (e.g., (Allan, 2009 ; Rababah, 2014)), we use the term", "cite_spans": [ { "start": 46, "end": 58, "text": "(Allan, 2009", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "x-phemism to refer to the general phenomenon of euphemisms and dysphemisms. Recognizing x-phemisms could be valuable for many NLP tasks. Euphemisms are related to politeness, which plays a role in applications involving dialogue and social interactions (e.g., (Danescu-Niculescu-Mizil et al., 2013) ). Dysphemisms can include pejorative and offensive language, which relates to cyberbullying (Xu et al., 2012; Van Hee et al., 2015) , hate speech (Magu and Luo, 2014) , and abusive language (Park et al., 2018; Wiegand et al., 2018) . Recognizing euphemisms and dysphemisms for controversial topics could be valuable for stance detection and argumentation in political discourse or debates (Somasundaran and Wiebe, 2010; Walker et al., 2012; Habernal and Gurevych, 2015) . 
In medicine, researchers found that medical professionals use x-phemisms when talking to patients about serious conditions, and have emphasized the importance of preserving x-phemisms across translations when treating non-English speakers (Rababah, 2014) .", "cite_spans": [ { "start": 259, "end": 297, "text": "(Danescu-Niculescu-Mizil et al., 2013)", "ref_id": "BIBREF4" }, { "start": 391, "end": 408, "text": "(Xu et al., 2012;", "ref_id": "BIBREF32" }, { "start": 409, "end": 430, "text": "Van Hee et al., 2015)", "ref_id": null }, { "start": 445, "end": 465, "text": "(Magu and Luo, 2014)", "ref_id": "BIBREF12" }, { "start": 489, "end": 508, "text": "(Park et al., 2018;", "ref_id": "BIBREF17" }, { "start": 509, "end": 530, "text": "Wiegand et al., 2018)", "ref_id": "BIBREF31" }, { "start": 688, "end": 718, "text": "(Somasundaran and Wiebe, 2010;", "ref_id": "BIBREF26" }, { "start": 719, "end": 739, "text": "Walker et al., 2012;", "ref_id": "BIBREF29" }, { "start": 740, "end": 768, "text": "Habernal and Gurevych, 2015)", "ref_id": "BIBREF8" }, { "start": 1009, "end": 1024, "text": "(Rababah, 2014)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An area of NLP that relates to x-phemisms is sentiment analysis, although the relationship is complex. A key feature of x-phemisms is that their directionality (euphemism vs. dysphemism) is relative to an underlying topic, which itself often has affective polarity. X-phemisms are usually associated with negative topics that are culturally disagreeable or have a negative connotation, such as death, intoxication, prostitution, old age, mental illness, and defecation. However, x-phemisms also occur with topics that are sensitive but not inherently negative, such as pregnancy (e.g., \"in a family way\" is a euphemism, while \"knocked up\" is a dysphemism). 
In general, dysphemistic language increases the degree of sensitivity, intensifying negative polarity or shifting polarity from neutral to negative. Conversely, euphemistic language generally decreases sensitivity. But euphemisms for inherently negative topics may still have negative polarity (e.g.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Euphemisms Dysphemisms DEATH passed away, eternal rest, put to sleep croaked, six feet under, bit the dust INTOXICATION tipsy, inebriated, under the influence hammered, plastered, sloshed, wasted LYING falsehood, misrepresent facts, untruth bullshit, rubbish, whopper, quackery PROSTITUTE lady of the night, working girl, sex worker whore, tart, harlot, floozy DEFECATION bowel movement, number two, pass stool take a dump, crap, drop a load VOMITING be sick, regurgitate, heave blow chunks, puke, upchuck vomiting is unpleasant no matter how gently it is referred to). This paper presents the first effort to identify euphemistic and dysphemistic language in text. Since affective polarity clearly plays a role in this phenomenon, our research explores whether sentiment analysis can be useful for recognizing x-phemisms. We deconstructed the problem into two subtasks. First, we identify phrases that refer to three sensitive topics: LYING, STEALING, and FIRING (job termination). We use a weakly supervised algorithm for semantic lexicon induction (Thelen and Riloff, 2002) to semi-automatically generate lists of near-synonym phrases for each topic. Second, we investigate two methods to classify phrases as euphemistic, dysphemistic, or neutral 1 . (1) We use dictionary-based methods to explore the value of several types of information found in sentiment lexicons: affective polarity, connotation, intensity, arousal, and dominance. (2) We use contextual sentiment analysis to classify x-phemism phrases. 
We collect sentence contexts around instances of each candidate phrase in a large corpus, and assign each phrase to an x-phemism category based on the polarity of its contexts. Finally, we introduce a gold standard data set of human x-phemism judgments and evaluate our models for this task. We hope that this new data set will encourage more work on x-phemisms. Our experiments show that sentiment connotation and affective polarity can be useful for identifying euphemistic and dysphemistic phrases, although this problem remains challenging.", "cite_spans": [ { "start": 1073, "end": 1098, "text": "(Thelen and Riloff, 2002)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 11, "end": 472, "text": "Dysphemisms DEATH passed away, eternal rest, put to sleep croaked, six feet under, bit the dust INTOXICATION tipsy, inebriated, under the influence hammered, plastered, sloshed, wasted LYING falsehood, misrepresent facts, untruth bullshit, rubbish, whopper, quackery PROSTITUTE lady of the night, working girl, sex worker whore, tart, harlot, floozy DEFECATION bowel movement, number two, pass stool take a dump, crap, drop a load VOMITING", "ref_id": null } ], "eq_spans": [], "section": "Topic", "sec_num": null }, { "text": "Euphemisms and dysphemisms have been studied in linguistics and related disciplines (e.g., (Allan and Burridge, 1991; Pfaff et al., 1997; Rawson, 2003; Allan, 2009; Rababah, 2014) ), but they have received little attention in the NLP community. Magu and Luo (2014) recognized code words in \"euphemistic hate speech\" by measuring cosine distance between word embeddings. 
But their code words conceal references to hate speech rather than soften them (e.g., the code word \"skypes\" covertly referred to Jews), which is different from the traditional definition of euphemisms that is addressed in our work.", "cite_spans": [ { "start": 91, "end": 117, "text": "(Allan and Burridge, 1991;", "ref_id": "BIBREF1" }, { "start": 118, "end": 137, "text": "Pfaff et al., 1997;", "ref_id": "BIBREF19" }, { "start": 138, "end": 151, "text": "Rawson, 2003;", "ref_id": "BIBREF22" }, { "start": 152, "end": 164, "text": "Allan, 2009;", "ref_id": "BIBREF0" }, { "start": 165, "end": 179, "text": "Rababah, 2014)", "ref_id": "BIBREF21" }, { "start": 245, "end": 264, "text": "Magu and Luo (2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The NLP community has explored several linguistic phenomena related to x-phemisms, such as metaphor (e.g., (Shutova, 2010; Wallington et al., 2011; Kesarwani et al., 2017) ), politeness (e.g., (Danescu-Niculescu-Mizil et al., 2013; Aubakirova and Bansal, 2016)), and formality (e.g., (Pavlick and Tetreault, 2016) ). Pfaff et al. (1997) found that people comprehend metaphorical euphemisms or dysphemisms more quickly when they share the same underlying conceptual metaphor. 
For example, people are likely to use the euphemism \"parted ways\" to describe ending a relationship in the context of the conceptual metaphor A RELATIONSHIP IS A JOURNEY, but more likely to use the euphemism \"cut their losses\" in the context of the metaphor A RELATIONSHIP IS AN INVESTMENT.", "cite_spans": [ { "start": 107, "end": 122, "text": "(Shutova, 2010;", "ref_id": "BIBREF24" }, { "start": 123, "end": 147, "text": "Wallington et al., 2011;", "ref_id": "BIBREF30" }, { "start": 148, "end": 171, "text": "Kesarwani et al., 2017)", "ref_id": "BIBREF11" }, { "start": 284, "end": 313, "text": "(Pavlick and Tetreault, 2016)", "ref_id": "BIBREF18" }, { "start": 317, "end": 336, "text": "Pfaff et al. (1997)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our research focuses on the relationship between x-phemisms and sentiment analysis. We take advantage of several existing sentiment resources, including the NRC EmoLex, VAD, and Affective Intensity Lexicons (Mohammad and Turney, 2013; Mohammad, 2018a,b) and Connotation WordNet (Feng et al., 2013; Kang et al., 2014) . We also re-implemented the NRC-Canada sentiment classifier for use in our work. Allan (2009) examined the connotation of color terms according to how often they appear in dysphemistic, euphemistic, or neutral contexts. For instance, \"blue\" is often used as a euphemism for \"sad\", while \"yellow\" can be dysphemistically used to mean \"cowardly\". 
Our paper takes the reverse approach, recognizing x-phemisms by means of connotation.", "cite_spans": [ { "start": 278, "end": 297, "text": "(Feng et al., 2013;", "ref_id": "BIBREF5" }, { "start": 298, "end": 316, "text": "Kang et al., 2014)", "ref_id": "BIBREF10" }, { "start": 400, "end": 412, "text": "Allan (2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Rababah (2014) studied how medical professionals use x-phemisms when talking to patients and found that serious conditions tend to inspire more euphemism. Rababah argued that translating xphemisms appropriately is important when providing medical care to non-English speakers. It follows that it is important for machine translation systems to preserve euphemistic language across translations in medical applications. More generally, machine translation systems should be concerned not only with preserving the intended semantics but also preserving the intended discourse pragmatics, which includes translating euphemisms into euphemisms and translating dysphemisms into dysphemisms. When a speaker chooses to use a euphemistic or dysphemistic expression, that choice usually reflects a viewpoint or bias that is a significant property of the discourse. Consequently, it is important for NLP systems to recognize xphemisms and their polarity, both for applications where views and biases are central (e.g., medicine, argumentation and debate, or stance detection in political discourse) and for comprehensive natural language understanding in general.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Overview of Technical Approach X-phemisms are so pervasive in language that euphemism dictionaries have been published containing manually compiled lists (Bertram, 1998; Holder, 2002; Rawson, 2003) . 
However, these dictionaries are far from complete because new x-phemisms are constantly entering language, both for long-standing sensitive topics and new ones. For example, every generation of youth invents new ways of referring to defecation, and political trends can trigger heightened sensitivity to controversial topics (e.g., \"enhanced interrogation\" is a recently introduced euphemism for torture). Euphemistic terms can even become offensive over time and be replaced by new euphemisms, a phenomenon known as \"the euphemism treadmill.\" For instance, the phrase \"mentally retarded\" began its life as a euphemism. Now, even \"special needs\" is sometimes viewed as offensive. The goal of our research is to develop methods to automatically curate lists of euphemistic and dysphemistic phrases for a topic from a text corpus, which would enable emerging x-phemisms to be continually discovered.", "cite_spans": [ { "start": 156, "end": 171, "text": "(Bertram, 1998;", "ref_id": "BIBREF3" }, { "start": 172, "end": 185, "text": "Holder, 2002;", "ref_id": "BIBREF9" }, { "start": 186, "end": 199, "text": "Rawson, 2003)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }
For example, laid off, resigned, and downsized are not strictly synonymous with FIRING, but broadly construed they all refer to job termination.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Ultimately, we decided to use the Basilisk bootstrapping algorithm for weakly supervised semantic lexicon induction (Thelen and Riloff, 2002) . Basilisk begins with a small set of seed terms for a desired category and iteratively learns more terms that consistently occur in the same contexts as the seeds. While there are other methods for nearsynonym generation (e.g., (Gupta et al., 2015 )), we chose Basilisk because it can learn phrases corresponding to syntactic constituents (e.g., NPs and VPs) and can use lexico-syntactic contextual patterns. For the bootstrapping process, we used the English Gigaword corpus because it contains a large and diverse collection of news articles. We focused on three sensitive topics that are common in news and rich in x-phemisms: LYING, STEALING, and FIRING (job termination).", "cite_spans": [ { "start": 116, "end": 141, "text": "(Thelen and Riloff, 2002)", "ref_id": "BIBREF27" }, { "start": 371, "end": 390, "text": "(Gupta et al., 2015", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The Basilisk algorithm learns new phrases for a category using a small list of \"seed\" terms and a text corpus. In an iterative bootstrapping framework, Basilisk extracts contextual patterns surrounding the seed terms, identifies new phrases that consistently occur in the same contexts as the seeds, adds the learned phrases to the seed list, and restarts the process. Our categories of interest (LYING, STEAL-ING, FIRING) are actions, so we wanted to learn verb phrases as well as noun phrases (e.g., event nominals). 
Consequently, we provided Basilisk with two seed lists for each topic, one list of verb phrases (VPs) and one list of noun phrases (NPs). To collect seed terms, we identified common phrases for each topic that had high frequency in the Gigaword corpus. The seed lists are shown in Table 2 . We included both active and passive voice verb phrase forms for the verbs shown in Table 2 , except we excluded resign in passive voice because \"was resigned to\" is a common expression with a different meaning. Most previous applications of Basilisk have used lexico-syntactic patterns to represent the contexts around seed terms (e.g., (Riloff et al., 2003; Qadir and Riloff, 2012) ). For example, a pattern may indicate that a phrase occurs as the syntactic subject or direct object of a specific verb. So we used the dependency relations produced by the SpaCy parser (https://spacy.io/) 3 for contextual patterns. For generality, we used word lemmas both for the learned phrases and the patterns.", "cite_spans": [ { "start": 1147, "end": 1168, "text": "(Riloff et al., 2003;", "ref_id": "BIBREF23" }, { "start": 1169, "end": 1192, "text": "Qadir and Riloff, 2012)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 800, "end": 807, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 893, "end": 900, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Generating Near-Synonym Phrases with Semantic Lexicon Induction", "sec_num": "4" }, { "text": "We defined a contextual pattern as a dependency relation linked to/from a seed term, coupled with the head of the governing/dependent phrase. For example, consider the sentence \"The lie spread quickly\". The contextual pattern for the noun \"lie\" would be \u2190NSUBJ(spread), indicating that the NP with head \"lie\" occurred as the syntactic subject of a governing VP with head \"spread\". 
We treated \"have\", \"do\", and \"be\" as special cases because of their generality and paired them with the head of their complement (subject, direct object, predicate nominal, or predicate adjective). For example, given the sentence \"The lie was horrific\", the contextual pattern for \"lie\" would be \u2190NSUBJ(be horrific).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Contextual Patterns and Verb Phrases", "sec_num": "4.1" }, { "text": "We also created compound relations for syntactic constructions that rely on pairs of constituents to be meaningful. For example, a preposition alone is not very informative, so we pair each preposition with the head of its object (e.g., \"in jail\"). Specifically, we pair the dependency relation \"prep\" with its \"pobj,\" \"agent\" with its \"pobj\", and \"dative\" with the \"dobj\" of its governing verb. We also create compound dependencies for \"pcomp,\" and \"advcl\" relations and resolve the relative pronoun with its subject for \"relcl\" relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Contextual Patterns and Verb Phrases", "sec_num": "4.1" }, { "text": "Basilisk has not previously been used to learn multi-word verb phrases, so we needed to define a VP representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Contextual Patterns and Verb Phrases", "sec_num": "4.1" }, { "text": "We represented each VP using the following syntax:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Contextual Patterns and Verb Phrases", "sec_num": "4.1" }, { "text": "VP([voice])MOD()DOBJ().", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Contextual Patterns and Verb Phrases", "sec_num": "4.1" }, { "text": "The VP() identifies the head verb and voice (Active or Passive), and MOD() contains the first of any adverbs or particles included in the verb phrase. 
DOBJ() contains the head noun of a VP's direct object, if present. As we did with the contextual patterns, we treat \"have\", \"do\", and \"be\" as special cases and join the verb with its complement. As an example, the verb phrase \"is clearly distorting\" would be represented as \"VP([active]be distort)MOD(clearly)\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Contextual Patterns and Verb Phrases", "sec_num": "4.1" }, { "text": "We observed that many of the most useful contextual patterns for identifying near-synonyms captured conjunction dependency relations. For example, the contextual pattern \u2190CONJ(distortion) occurred with 6 seed terms (exaggeration, fabrication, falsehood, lie, misrepresentation, and untruth), as well as several other near-synonyms such as disinformation, inaccuracy, crap, and dishonesty. Near-synonyms also frequently appeared in conjoined verb phrases, such as \"misstate and inflate\" or \"misstate and embellish\". As an example of a different type of dependency relation that proved to be useful, the compound pattern \u2192AgentPhrase(by looter) occurred with several near-synonym VPs for STEAL, such as seize, ransack, and clean out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Contextual Patterns and Verb Phrases", "sec_num": "4.1" }
5 Most of the correct entries were among the first 400 terms generated by Basilisk.", "cite_spans": [ { "start": 390, "end": 391, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Near-Synonym Generation Results", "sec_num": "4.2" }, { "text": "# Phrases 142 177 146 Table 3 shows the total number of near-synonyms acquired for each topic, after conflating active and passive voice variants and typo forms, and including the seed terms. These numbers show that the semantic lexicon induction algorithm enabled us to quickly produce many more near-synonym phrases per topic than we had found in the synonym lists of thesauri. Some of the discovered terms were quite interesting, such as \"infojunk\" and \"puffery\" for LIE, and sometimes unfamiliar to us but relevant, such as \"malversation\" and \"dacoity\" for STEAL.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "FIRE LIE STEAL", "sec_num": null }
For FIRE, the \u03ba scores were {.66, .68, .80} with average \u03ba = .71. For STEAL, the \u03ba scores were {.66, .77, .79} with average \u03ba = .74. Since the mean \u03ba scores were \u2265 .70 for all three topics, we concluded that the agreement was reasonably good. Table 4 shows examples of near-synonym phrases 6 with their gold scores and category labels. For example, crap and infojunk were among the most dysphemistic phrases for LIE, while invent and embellish were among the most euphemistic phrases for LIE. Table 5 shows the distribution of labels in the gold data set. ", "cite_spans": [], "ref_spans": [ { "start": 553, "end": 560, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 803, "end": 810, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Gold X-Phemism Data Set", "sec_num": "5" }, { "text": "Euphemisms and dysphemisms capture softer and harsher references to sensitive topics, so one could argue that this phenomenon falls within the realm of sentiment analysis. But x-phemisms are a distinctly different phenomenon. It may be tempting to equate euphemisms with positive sentiment and dysphemisms with negative sentiment, but xphemisms refer to sensitive topics that typically have strong affective polarity (usually negative). For example, vomiting is never a pleasant topic, no matter how it is referred to. Consequently, most euphemisms for vomiting still have negative polarity (e.g., \"be sick\" or \"lose your lunch\"). However some euphemisms can have neutral polarity, such as scientific or formal terms (e.g., \"regurgitation\"), and occasionally a euphemism will evoke positive polarity for a negative topic through metaphor (e.g., \"pushing up daisies\" for death). In this section, we investigate whether sentiment information can be beneficial for recognizing euphemisms and dysphemisms and establish baseline results for this task. 
We explore five properties associated with sentiment: affective polarity, connotation, intensity, arousal, and dominance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Sentiment Lexicons", "sec_num": "6" }, { "text": "As our first baseline, we assess the effectiveness of using positive/negative affective polarity (valence) information to label x-phemism phrases using two sentiment lexicons: the NRC EmoLex and VAD Lexicons (Mohammad and Turney, 2013; Mohammad, 2018a) . For the specific emotions, we considered anger, disgust, fear, sadness, and surprise to be negative, and anticipation, joy, and trust to be positive. Another sentiment property related to x-phemisms is connotation. Euphemisms often include terms with positive connotation to soften a reference, and dysphemisms may include terms with negative connotation to make a reference more harsh. But importantly, connotation and x-phemisms are not the same phenomenon. For one, many terms with a strong connotation are not x-phemisms. Also, as with polarity, euphemisms can retain a negative connotation because the underlying topic has negative polarity. 
But since connotation and x-phemisms are related, we investigate whether connotation polarities from ConnotationWN (Feng et al., 2013; Kang et al., 2014) can be valuable for labeling x-phemisms.", "cite_spans": [ { "start": 208, "end": 235, "text": "(Mohammad and Turney, 2013;", "ref_id": "BIBREF16" }, { "start": 236, "end": 252, "text": "Mohammad, 2018a)", "ref_id": "BIBREF13" }, { "start": 1018, "end": 1037, "text": "(Feng et al., 2013;", "ref_id": "BIBREF5" }, { "start": 1038, "end": 1056, "text": "Kang et al., 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Sentiment Lexicons", "sec_num": "6" }, { "text": "We also explored the effectiveness of using affective intensity, arousal, and dominance information from the NRC Affective Intensity and VAD Lexicons (Mohammad, 2018b,a) for recognizing euphemistic and dysphemistic phrases. Dysphemisms are often harsh and can be downright rude, so we hypothesized that terms with high arousal may be dysphemistic. Conversely, euphemisms use softer and gentler language, so they may be associated with low arousal. Dominant terms correspond to power and control, so it would be logical to expect that high dominance may be associated with euphemisms and low dominance may be associated with dysphemisms (e.g., \"frail\" and \"weak\") (Mohammad, 2018a).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Sentiment Lexicons", "sec_num": "6" }
We mapped the intensity scores so that high intensity values for negative emotions ranged from [0-0.5] (representing dysphemistic to neutral) and high intensity values for positive emotions ranged from [0.5-1] (representing neutral to euphemistic).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Sentiment Lexicons", "sec_num": "6" }, { "text": "The sentiment resources provide scores between 0 and 1. For polarities and connotation, 0 represents the strongest negative score and 1 represents the strongest positive score. For arousal and dominance, the range is low (0) to high (1). We expect high arousal to be associated with dysphemism, so to be consistent with the other properties we reverse its range and replace each score S with 1-S. We score multi-word phrases by taking the average score of their words. Once a phrase receives a score S, we map S to one of the three x-phemism categories as follows: S \u2264 0.25 \u21d2 dysphemism, 0.25 < S < 0.75 \u21d2 neutral, and S \u2265 0.75 \u21d2 euphemism. We chose these ranges to conservatively divide the space into quadrants, so that scores in the lowest quadrant represent dysphemism, scores in the highest quadrant represent euphemism, and scores in the middle are considered neutral. Table 6 shows the results for the sentiment lexicon experiments. We report F-scores for the euphemism (Euph), neutral (Neu), and dysphemism (Dysph) categories as well as a macro-average F-score (Avg). The best-performing lexicon across all three topics was ConnotationWN (ConnoWN).", "cite_spans": [], "ref_spans": [ { "start": 875, "end": 882, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "X-phemism Classification with Sentiment Lexicons", "sec_num": "6" }
For these experiments, each dictionary labeled a phrase as euphemistic, dysphemistic, or neutral (as described earlier) or none (i.e., no label if the word was not present in the lexicon). The most frequent label was then assigned to the phrase, except that 'none' labels were ignored. ConnotationWN's label was used to break ties. We evaluated all pairs of lexicons and the best pair turned out to be ConnotationWN plus Valence, which we refer to as BestPair in Table 6. We also tried using all of the dictionaries, shown as AllDicts in Table 6. Combining dictionaries did improve performance, with BestPair performing best for FIRE and STEAL, and AllDicts performing best for LIE.", "cite_spans": [], "ref_spans": [ { "start": 557, "end": 564, "text": "Table 6", "ref_id": null }, { "start": 633, "end": 640, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Lexicon Results", "sec_num": "6.1" }, { "text": "Overall, connotation and valence (affective polarity) were the most useful sentiment properties for recognizing x-phemisms. But thus far we have considered only the words in a phrase. In the next section, we explore an approach that exploits the sentence contexts around the phrases. Table 6: Results for Sentiment Lexicons (F-scores)", "cite_spans": [], "ref_spans": [ { "start": 285, "end": 292, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Lexicon Results", "sec_num": "6.1" }, { "text": "We hypothesized that the contexts around euphemisms and dysphemisms would be different in terms of sentiment. People often use euphemisms when they want to be comforting, supportive, or put a positive spin on a subject. In obituaries, for example, euphemisms for death are often accompanied by references to peace, heaven, flowers, and courage. In contrast, grisly murder mystery novels often use dysphemisms, speaking about death using harsh or graphic language. X-phemisms are also prevalent in political discourse. 
People frequently use euphemisms to argue for the merits of a particular subject (e.g., \"enhanced interrogation\" is a euphemism invoked to justify the use of TORTURE). Conversely, people use dysphemisms when arguing against something (e.g., \"baby killing\" to refer to ABORTION).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Contextual Sentiment Analysis", "sec_num": "7" }, { "text": "To investigate this hypothesis, we developed models to classify a phrase with respect to x-phemism categories using sentiment analysis of its sentence contexts. We use the Gigaword corpus and experiment with both sentiment lexicons and a sentiment classifier to evaluate sentence polarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Contextual Sentiment Analysis", "sec_num": "7" }, { "text": "However, polysemy and metaphor pose a major challenge: many phrases have multiple meanings. To address this problem, we create a subcorpus for each topic by extracting Gigaword articles that contain a seed term for that topic (see Table 2). The seed terms can also be ambiguous, but we expect that the resulting subcorpus will have a higher density of articles about the intended topic than the Gigaword corpus as a whole. Given a candidate x-phemism phrase for a topic, we then extract sentences containing that phrase from the topic's subcorpus. Our expectation is that most documents that contain both the x-phemism phrase and a seed term for the topic will be relevant to the topic.", "cite_spans": [], "ref_spans": [ { "start": 230, "end": 237, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "X-phemism Classification with Contextual Sentiment Analysis", "sec_num": "7" }, { "text": "Once we have a set of sentence contexts for an x-phemism phrase, our first contextual model uses sentiment lexicons to determine each sentence's polarity. 
For each topic, we use the best-performing lexicons reported in Section 6.1 (i.e., BestPair for FIRE and STEAL, and AllDicts for LIE). First, each word found in the lexicons is labeled positive for scores > 0.5 or negative for scores < 0.5. 7 We then assign a polarity to each sentence based on majority vote among its labeled words. Sentences with an equal number of positive and negative words, or no labeled words, are ignored.", "cite_spans": [ { "start": 396, "end": 397, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Contextual Sentiment Analysis", "sec_num": "7" }, { "text": "X-phemisms are relative to a topic that itself often has strong affective polarity, so given a phrase P, our goal is to determine whether P's contexts are positive or negative relative to the topic. To assess this, we generate a polarity distribution across all sentences in the topic's subcorpus. We will refer to all sentences in the subcorpus for topic T as Sents(T) and the sentences in the subcorpus that mention phrase P as Sents(T, P). We define POS(S) as the percent of sentences S labeled positive, and NEG(S) as the percent of sentences S labeled negative, and classify each phrase P as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Contextual Sentiment Analysis", "sec_num": "7" }, { "text": "If POS(Sents(T, P)) > POS(Sents(T)) + \u03b3, then label P as euphemistic; else if NEG(Sents(T, P)) > NEG(Sents(T)) + \u03b3, then label P as dysphemistic; else label P as neutral.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Contextual Sentiment Analysis", "sec_num": "7" }, { "text": "We set \u03b3 = 0.10 for our experiments. 
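This decision rule can be sketched in code. The sketch below is a simplified illustration: it assumes the per-sentence polarity labels ('pos'/'neg') have already been produced by the lexicon step, and the function names are ours, not from an actual implementation:

```python
GAMMA = 0.10  # margin over the topic-wide baseline

def polarity_rates(labels):
    """Return (fraction positive, fraction negative) for a list of labels."""
    n = len(labels)
    pos = sum(1 for lab in labels if lab == "pos") / n
    neg = sum(1 for lab in labels if lab == "neg") / n
    return pos, neg

def classify_phrase(topic_labels, phrase_labels, gamma=GAMMA):
    """Compare a phrase's contexts, Sents(T, P), to the topic baseline, Sents(T)."""
    topic_pos, topic_neg = polarity_rates(topic_labels)
    phrase_pos, phrase_neg = polarity_rates(phrase_labels)
    if phrase_pos > topic_pos + gamma:
        return "euphemistic"
    if phrase_neg > topic_neg + gamma:
        return "dysphemistic"
    return "neutral"
```

Note that the rule is relative: a phrase is only labeled euphemistic or dysphemistic when its contexts are more positive or negative than the topic's baseline distribution, not in absolute terms.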
8 Intuitively, the \u03b3 parameter dictates that a phrase is labeled as euphemistic (or dysphemistic) only if its sentence contexts have a positive (or negative) percentage at least 10 percentage points higher than the sentence contexts for the topic as a whole.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Contextual Sentiment Analysis", "sec_num": "7" }, { "text": "Our second contextual model uses a sentiment classifier instead of lexicons to assign polarity to each sentence. We used a reimplementation of the NRC-Canada sentiment classifier, which performed well in SemEval 2013 Task 2. Given a sentence, the classifier returns probabilities that the sentence is positive, negative, or neutral. We label each sentence with the polarity that has the highest probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X-phemism Classification with Contextual Sentiment Analysis", "sec_num": "7" }, { "text": "Since the classifier provides labels for all three polarities (whereas we only got positive and negative polarities from the lexicons), we use a slightly different procedure to label a phrase. First, we compute the percent of subcorpus sentences containing phrase P that are assigned each polarity (POS, NEG, NEU), and compute the percent of all subcorpus sentences assigned each polarity. Then we compute the difference for each polarity. For example, \u2206(POS) = POS(Sents(T, P)) - POS(Sents(T)). This represents the difference between the percent of Positive sentences containing P and the percent of Positive sentences in the subcorpus as a whole. Finally, we label phrase P based on the polarity that had the largest difference: POS \u21d2 euphemistic, NEG \u21d2 dysphemistic, NEU \u21d2 neutral. Table 7 shows F-score results for the contextual models on our gold data. 
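The difference-based labeling procedure can be sketched as follows. Per-sentence polarity labels are assumed to be given (in practice they come from the classifier's argmax), and the names below are illustrative:

```python
from collections import Counter

LABEL_FOR = {"pos": "euphemistic", "neg": "dysphemistic", "neu": "neutral"}

def distribution(polarities):
    """Fraction of sentences assigned each polarity."""
    counts = Counter(polarities)
    n = len(polarities)
    return {p: counts[p] / n for p in ("pos", "neg", "neu")}

def classify_by_delta(topic_polarities, phrase_polarities):
    """Label P by the polarity whose rate rises most over the topic baseline."""
    topic = distribution(topic_polarities)
    phrase = distribution(phrase_polarities)
    deltas = {p: phrase[p] - topic[p] for p in topic}
    return LABEL_FOR[max(deltas, key=deltas.get)]
```

Unlike the lexicon-based rule, this variant has no margin parameter: the polarity with the largest increase over the topic baseline always determines the label.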
We evaluated three contextual models that use different mechanisms to label the affective polarity of a sentence: ContextNRC uses the NRC sentiment classifier, ContextAllDicts uses the AllDicts lexicon method, and ContextBestPair uses the BestPair lexicon method. For the sake of comparison, we also re-display the results for the best lexicon model (BestDictModel) presented in Section 6.1 for each topic. (We chose \u03b3 = .10 based on intuition without experimentation, so a different value could perform better.) For LIE and STEAL, the best contextual model outperformed the best lexicon method, improving the F-score from .39 \u2192 .47 for LIE and from .38 \u2192 .43 for STEAL. For FIRE, the contextual models showed lower performance. We observed that phrases for the FIRE topic exhibited more lexical ambiguity than the other topics, so the subcorpus extracted for FIRE was noisier than for the other topics. This likely contributed to the inferior performance of the contextual models on this topic. Table 8 shows the recall (R) and precision (P) breakdown for the best-performing model for each topic. Euphemisms had the best recall and precision for LIE and STEAL, but lower recall for FIRE. Precision was lowest for the neutral category overall, indicating that too many euphemistic and dysphemistic phrases are being labeled as neutral. Our observation is that the models perform best on strongly euphemistic or dysphemistic phrases, and they have the most trouble categorizing metaphorical expressions, such as \"ax\" for FIRE. It makes sense that the lexicon-based models would have difficulty with these cases, but we had hoped that the contextual models would fare better. We suspect that polysemy is especially problematic for metaphorical phrases, resulting in a subcorpus for the topic that contains many irrelevant contexts. 
Incorporating understanding of metaphor seems to be an important direction for future research.", "cite_spans": [ { "start": 1175, "end": 1176, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 784, "end": 791, "text": "Table 7", "ref_id": "TABREF9" }, { "start": 1859, "end": 1866, "text": "Table 8", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "X-phemism Classification with Contextual Sentiment Analysis", "sec_num": "7" }, { "text": "This paper presented the first effort to recognize euphemisms and dysphemisms using natural language processing. Our research examined the relationship between x-phemisms and sentiment analysis, exploring whether information about affective polarity, connotation, arousal, intensity, and dominance could be beneficial for this task. We used semantic lexicon induction to generate near-synonyms for three topics, and developed lexicon-based and context-based sentiment analysis methods to classify phrases as euphemistic, dysphemistic, or neutral. We found that affective polarity and connotation information were useful for this task, and that identifying sentiment in sentence contexts around a phrase was generally more effective than labeling the phrases themselves. Promising avenues for future work include incorporating methods for recognizing politeness, formality, and metaphor. 
Euphemisms and dysphemisms are exceedingly rich linguistic phenomena, and we hope that our research will encourage more work on this interesting yet challenging problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "Direct (\"straight-talking\") references to a topic are called orthophemisms, but for simplicity we refer to them as neutral.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We considered using the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013) as well, but many of its paraphrases are syntactic variations (e.g., active vs. passive) which are not useful for our purpose, and many entries are noisy as they were automatically generated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used all relations except \"punct\" and \"det\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Basilisk ran for 200 iterations learning 5 words per cycle. 5 One of the authors did this filtering. Our goal was merely to obtain a list of near-synonyms to use for x-phemism classification, and not to evaluate the near-synonym generation per se since that is not the main contribution of our work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We display the phrases here as n-grams for readability, but they are actually represented syntactically. 
For example, \"leave company\" is represented as an active-voice VP with head \"leave\" linked to a direct object with head \"company\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If a word occurred in multiple lexicons, ConnotationWN was given precedence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We gratefully thank Shelley Felt, Shauna Felt, and Claire Moore for their help annotating the gold data for this research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The connotations of English colour terms: Colour-based X-phemisms", "authors": [ { "first": "Keith", "middle": [], "last": "Allan", "suffix": "" } ], "year": 2009, "venue": "Journal of Pragmatics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Allan. 2009. The connotations of English colour terms: Colour-based X-phemisms. Journal of Pragmatics, 41.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Euphemism and Dysphemism: Language Used as Shield and Weapon", "authors": [ { "first": "Keith", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Burridge", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Allan and Kate Burridge. 1991. Euphemism and Dysphemism: Language Used as Shield and Weapon. 
Oxford University Press, New York.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Interpreting neural networks to improve politeness comprehension", "authors": [ { "first": "Malika", "middle": [], "last": "Aubakirova", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Malika Aubakirova and Mohit Bansal. 2016. Interpreting neural networks to improve politeness comprehension. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "NTC's Dictionary of Euphemisms", "authors": [ { "first": "Anne", "middle": [], "last": "Bertram", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne Bertram. 1998. NTC's Dictionary of Euphemisms. NTC, Chicago.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A computational approach to politeness with application to social factors", "authors": [ { "first": "Cristian", "middle": [], "last": "Danescu-Niculescu-Mizil", "suffix": "" }, { "first": "Moritz", "middle": [], "last": "Sudhof", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with application to social factors. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning", "authors": [ { "first": "Song", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Jun Seok", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Polina", "middle": [], "last": "Kuznetsova", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Song Feng, Jun Seok Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "PPDB: The Paraphrase Database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. 
In Proceedings of the 2013 North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Unsupervised phrasal near-synonym generation from text corpora", "authors": [ { "first": "D", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "J", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "A", "middle": [], "last": "Gershman", "suffix": "" }, { "first": "S", "middle": [], "last": "Klein", "suffix": "" }, { "first": "D", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 29th AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Gupta, J. Carbonell, A. Gershman, S. Klein, and D. Miller. 2015. Unsupervised phrasal near-synonym generation from text corpora. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI 2015).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Exploiting debate portals for semi-supervised argumentation mining in user-generated web discourse", "authors": [ { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Habernal and Iryna Gurevych. 2015. Exploiting debate portals for semi-supervised argumentation mining in user-generated web discourse. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "How Not To Say What You Mean: A Dictionary of Euphemisms", "authors": [ { "first": "R", "middle": [ "W" ], "last": "Holder", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. W. Holder. 2002. How Not To Say What You Mean: A Dictionary of Euphemisms. Oxford University Press, Oxford.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "ConnotationWordNet: Learning connotation over the word+sense network", "authors": [ { "first": "Jun Seok", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Song", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Leman", "middle": [], "last": "Akoglu", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Seok Kang, Song Feng, Leman Akoglu, and Yejin Choi. 2014. ConnotationWordNet: Learning connotation over the word+sense network. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Metaphor detection in a poetry corpus", "authors": [ { "first": "Vaibhav", "middle": [], "last": "Kesarwani", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tanasescu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaibhav Kesarwani, Diana Inkpen, Stan Szpakowicz, and Chris Tanasescu (Margento). 2017. Metaphor detection in a poetry corpus. In Proceedings of the Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Determining Code Words in Euphemistic Hate Speech Using Word Embedding Networks", "authors": [ { "first": "Rijul", "middle": [], "last": "Magu", "suffix": "" }, { "first": "Jiebo", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Second Workshop on Abusive Language Online (ALW2)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rijul Magu and Jiebo Luo. 2014. Determining Code Words in Euphemistic Hate Speech Using Word Embedding Networks. 
In Proceedings of the Second Workshop on Abusive Language Online (ALW2).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The Annual Conference of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad. 2018a. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words. In Proceedings of The Annual Conference of the Association for Computational Linguistics (ACL), Melbourne, Australia.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Word affect intensities", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad. 2018b. Word affect intensities. 
In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018), Miyazaki, Japan.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Second Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Crowdsourcing a word-emotion association lexicon", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Peter", "middle": [ "D" ], "last": "Mohammad", "suffix": "" }, { "first": "", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2013, "venue": "Computational Intelligence", "volume": "29", "issue": "3", "pages": "436--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. 
Computational Intelligence, 29(3):436-465.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Reducing gender bias in abusive language detection", "authors": [ { "first": "Ji", "middle": [], "last": "Ho Park", "suffix": "" }, { "first": "Jamin", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An empirical analysis of formality in online communication", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellie Pavlick and Joel Tetreault. 2016. An empirical analysis of formality in online communication. Transactions of the Association for Computational Linguistics, 4:61-74.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Metaphor in using and understanding euphemism and dysphemism", "authors": [ { "first": "Kerry", "middle": [ "L" ], "last": "Pfaff", "suffix": "" }, { "first": "Raymond", "middle": [ "W" ], "last": "Gibbs", "suffix": "" }, { "first": "Michael", "middle": [ "D" ], "last": "Johnson", "suffix": "" } ], "year": 1997, "venue": "Applied Psycholinguistics", "volume": "18", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kerry L. Pfaff, Raymond W. Gibbs Jr., and Michael D. Johnson. 1997. 
Metaphor in using and understanding euphemism and dysphemism. Applied Psycholinguistics, 18.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Ensemble-based semantic lexicon induction for semantic tagging", "authors": [ { "first": "A", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "E", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Qadir and E. Riloff. 2012. Ensemble-based semantic lexicon induction for semantic tagging. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The Translatability and Use of X-Phemism Expressions (X-Phemization): Euphemisms, Dysphemisms and Orthophemisms in the Medical Discourse", "authors": [ { "first": "Hussein", "middle": [], "last": "Abdo", "suffix": "" }, { "first": "Rababah", "middle": [], "last": "", "suffix": "" } ], "year": 2014, "venue": "Studies in Literature and Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hussein Abdo Rababah. 2014. The Translatability and Use of X-Phemism Expressions (X-Phemization): Euphemisms, Dysphemisms and Orthophemisms in the Medical Discourse. Studies in Literature and Language, 9.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Rawson's Dictionary of Euphemisms and Other Doubletalk", "authors": [ { "first": "Hugh", "middle": [], "last": "Rawson", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugh Rawson. 2003. Rawson's Dictionary of Euphemisms and Other Doubletalk. 
Castle, Chicago, IL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning Subjective Nouns using Extraction Pattern Bootstrapping", "authors": [ { "first": "E", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "T", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Seventh Conference on Natural Language Learning (CoNLL-2003)", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Riloff, J. Wiebe, and T. Wilson. 2003. Learning Subjective Nouns using Extraction Pattern Bootstrapping. In Proceedings of the Seventh Conference on Natural Language Learning (CoNLL-2003), pages 25-32.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automatic metaphor interpretation as a paraphrasing task", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekaterina Shutova. 2010. Automatic metaphor interpretation as a paraphrasing task. 
In Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Metaphor identification using verb and noun clustering", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proceedings of the 23rd International Conference on Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Recognizing stances in ideological on-line debates", "authors": [ { "first": "S", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Somasundaran and J. Wiebe. 2010. Recognizing stances in ideological on-line debates. 
In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A Bootstrapping Method for Learning Semantic Lexicons Using Extraction Pattern Contexts", "authors": [ { "first": "M", "middle": [], "last": "Thelen", "suffix": "" }, { "first": "E", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "214--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Thelen and E. Riloff. 2002. A Bootstrapping Method for Learning Semantic Lexicons Using Extraction Pattern Contexts. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 214-221.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Detection and fine-grained classification of cyberbullying events", "authors": [ { "first": "Cynthia", "middle": [], "last": "Van Hee", "suffix": "" }, { "first": "Els", "middle": [], "last": "Lefever", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Verhoeven", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Mennes", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Desmet", "suffix": "" } ], "year": null, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cynthia Van Hee, Els Lefever, Ben Verhoeven, Julie Mennes, Bart Desmet, Guy De Pauw, Walter Daelemans, and Veronique Hoste. 2015. Detection and fine-grained classification of cyberbullying events. 
In Proceedings of the International Conference Recent Advances in Natural Language Processing.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Stance classification using dialogic properties of persuasion", "authors": [ { "first": "M", "middle": [], "last": "Walker", "suffix": "" }, { "first": "P", "middle": [], "last": "Anand", "suffix": "" }, { "first": "R", "middle": [], "last": "Abbott", "suffix": "" }, { "first": "R", "middle": [], "last": "Grant", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Walker, P. Anand, R. Abbott, and R. Grant. 2012. Stance classification using dialogic properties of persuasion. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Affect transfer by metaphor for an intelligent conversational agent", "authors": [ { "first": "Alan", "middle": [], "last": "Wallington", "suffix": "" }, { "first": "Rodrigo", "middle": [], "last": "Agerri", "suffix": "" }, { "first": "John", "middle": [], "last": "Barnden", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rumbell", "suffix": "" } ], "year": 2011, "venue": "Affective Computing and Sentiment Analysis", "volume": "", "issue": "", "pages": "53--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Wallington, Rodrigo Agerri, John Barnden, Mark Lee, and Tim Rumbell. 2011. Affect transfer by metaphor for an intelligent conversational agent. In Affective Computing and Sentiment Analysis, pages 53-66.
Springer.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Inducing a lexicon of abusive words - a feature-based approach", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Ruppenhoffer", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Clayton", "middle": [], "last": "Greenberg", "suffix": "" } ], "year": 2018, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Wiegand, Josef Ruppenhoffer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words - a feature-based approach. In NAACL-HLT 2018.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Learning from bullying traces in social media", "authors": [ { "first": "Jun-Ming", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kwang-Sung", "middle": [], "last": "Jun", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Bellmore", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun-Ming Xu, Kwang-Sung Jun, Xiaojin Zhu, and Amy Bellmore. 2012. Learning from bullying traces in social media. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "content": "", "num": null, "text": "Examples of Euphemisms and Dysphemisms", "type_str": "table" }, "TABREF2": { "html": null, "content": "
", "num": null, "text": "Seed Phrases per Topic", "type_str": "table" }, "TABREF3": { "html": null, "content": "
", "num": null, "text": "", "type_str": "table" }, "TABREF5": { "html": null, "content": "
: Examples of Gold Data Scores and Labels
(D = dysphemistic, N = neutral, E = euphemistic)
            FIRE   LIE   STEAL
Euphemism    .30   .42    .24
Neutral      .29   .30    .35
Dysphemism   .41   .28    .41
", "num": null, "text": "", "type_str": "table" }, "TABREF6": { "html": null, "content": "", "num": null, "text": "", "type_str": "table" }, "TABREF9": { "html": null, "content": "
", "num": null, "text": "Results for Contextual Analysis (F-scores)", "type_str": "table" }, "TABREF10": { "html": null, "content": "
         Euph        Neu         Dysph
         R     P     R     P     R     P
FIRE    .31   .58   .47   .25   .44   .51
LIE     .64   .69   .52   .35   .24   .46
STEAL   .68   .56   .32   .26   .33   .50
", "num": null, "text": "", "type_str": "table" }, "TABREF11": { "html": null, "content": "", "num": null, "text": "Recall and Precision of Best Models", "type_str": "table" } } } }