{ "paper_id": "S19-1017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:45:54.784391Z" }, "title": "A Corpus of Negations and their Underlying Positive Interpretations", "authors": [ { "first": "Zahra", "middle": [], "last": "Sarabi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Texas", "location": {} }, "email": "zahrasarabi@my.unt.edu" }, { "first": "Erin", "middle": [], "last": "Killian", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Texas", "location": {} }, "email": "erinkillian@my.unt.edu" }, { "first": "Eduardo", "middle": [], "last": "Blanco", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Texas", "location": {} }, "email": "eduardo.blanco@unt.edu" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Texas", "location": {} }, "email": "alexis.palmer@unt.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Negation often conveys implicit positive meaning. In this paper, we present a corpus of negations and their underlying positive interpretations. We work with negations from Simple Wikipedia, automatically generate potential positive interpretations, and then collect manual annotations that effectively rewrite the negation in positive terms. This procedure yields positive interpretations for approximately 77% of negations, and the final corpus includes over 5,700 negations and over 5,900 positive interpretations. We also present baseline results using seq2seq neural models.", "pdf_parse": { "paper_id": "S19-1017", "_pdf_hash": "", "abstract": [ { "text": "Negation often conveys implicit positive meaning. In this paper, we present a corpus of negations and their underlying positive interpretations. 
We work with negations from Simple Wikipedia, automatically generate potential positive interpretations, and then collect manual annotations that effectively rewrite the negation in positive terms. This procedure yields positive interpretations for approximately 77% of negations, and the final corpus includes over 5,700 negations and over 5,900 positive interpretations. We also present baseline results using seq2seq neural models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Negation is present in every human language. It is in the first place a phenomenon of semantic opposition. As such, negation relates an expression e to another expression with a meaning that is in some way opposed to the meaning of e (Horn and Wansing, 2015) . Sentences containing negation are generally (a) less informative than affirmative ones (e.g., Milan is not the capital of Italy vs. Rome is the capital of Italy), (b) morphosyntactically more marked-all languages have negative markers while only a few have affirmative markers, and (c) psychologically more complex and harder to process (Horn and Wansing, 2015) .", "cite_spans": [ { "start": 236, "end": 260, "text": "(Horn and Wansing, 2015)", "ref_id": "BIBREF12" }, { "start": 600, "end": 624, "text": "(Horn and Wansing, 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Negation often conveys implicit positive meanings (Rooth, 1992) . This meaning ranges from implicatures to entailments, and we refer to it as positive interpretations. Consider the following text from Simple Wikipedia: 1 An abjad is an alphabet in which all its letters are consonants. Though vowels can be added in some abjads, they are not needed to write a word correctly. Some examples of abjads are the Arabic alphabet and the 1 https://simple.wikipedia.org/wiki/ Abjad 1 Mr. Smith apologized for not getting involved. 
Mr. Smith apologized for staying passive. 2 I never heard of this guy before they started doing these commercials on television and radio. I heard of this guy after they started doing these commercials on television and radio. 3 In Hinduism, beef is not allowed to be eaten.", "cite_spans": [ { "start": 50, "end": 63, "text": "(Rooth, 1992)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Hinduism, chicken is allowed to be eaten. In other religions, beef is allowed to be eaten. Hebrew alphabet. Humans intuitively understand that the negation (second sentence) implies the following positive interpretation: Though vowels can be added in some abjads, only consonants are needed to write a word correctly. Table 1 shows three sentences containing negation and their underlying positive interpretations. Positive interpretations do not have any negation cues (e.g., not, never) and Example 3 shows that some negations may have more than one underlying positive interpretation depending on the context. Revealing the underlying positive interpretation of negation is challenging. First, we need to identify which tokens are intended to be negated (e.g., getting involved and before in Examples 1 and 2 from Table 1 ). Second, we need to rewrite those tokens to generate an actual positive interpretation (e.g., getting involved: staying passive).", "cite_spans": [], "ref_spans": [ { "start": 321, "end": 328, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 820, "end": 827, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper presents a corpus of negations and their underlying positive interpretations. 
2 The main contributions are: (a) a deterministic procedure to generate potential positive interpretations from negations, (b) a corpus of negations and their manually annotated positive interpretations, and (c) a detailed analysis including which subtrees in the dependency tree are more likely to be rewritten and a qualitative analysis of positive interpretations. Additionally, we establish baseline results with sequence-to-sequence neural models.", "cite_spans": [ { "start": 89, "end": 90, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Negation is well-understood in grammars and the valid ways to express negation are documented (Quirk et al., 2000; van der Wouden, 1997) . In this paper, we focus on verbal negations, i.e., when the negation mark-usually an adverb such as never and not-is grammatically associated with a verb. Positive Interpretations. In philosophy and linguistics, it is accepted that negation conveys positive meaning (Horn, 1989) . This positive meaning ranges from implicatures, i.e., what is suggested in an utterance even though neither expressed nor strictly implied (Blackburn, 2008) , to entailments. Other terms used in the literature include implied meanings (Mitkov, 2005) , implied alternatives (Rooth, 1985) and semantically similar (Agirre et al., 2013) . We do not strictly fit into any of this terminology; we reveal positive interpretations as intuitively done by humans when reading text. Note that a positive interpretation is a statement that does not contain negation, not a statement that conveys positive sentiment. For example, The seller didn't ship the right parts implicitly conveys The seller shipped the wrong parts, which has negative sentiment. Potential Positive Interpretations. 
Given a sentence containing negation, we use the term potential positive interpretation to refer to positive interpretations that are automatically generated by replacing selected tokens with a placeholder. If the placeholder can be rewritten so that the result is an affirmative statement that is true given the original sentence, potential positive interpretations become actual positive interpretations. Negation and natural language understanding. Generating positive interpretations from negation has several potential applications.", "cite_spans": [ { "start": 94, "end": 114, "text": "(Quirk et al., 2000;", "ref_id": "BIBREF23" }, { "start": 115, "end": 136, "text": "van der Wouden, 1997)", "ref_id": "BIBREF28" }, { "start": 405, "end": 417, "text": "(Horn, 1989)", "ref_id": "BIBREF11" }, { "start": 559, "end": 576, "text": "(Blackburn, 2008)", "ref_id": "BIBREF4" }, { "start": 655, "end": 669, "text": "(Mitkov, 2005)", "ref_id": "BIBREF17" }, { "start": 693, "end": 706, "text": "(Rooth, 1985)", "ref_id": "BIBREF24" }, { "start": 732, "end": 753, "text": "(Agirre et al., 2013)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Definitions", "sec_num": "2" }, { "text": "First, while neural machine translation is in general superior to phrase-based methods, that is not the case when translating negation (Bentivogli et al., 2016) . Since our positive interpretations effectively rewrite negation-containing sentences to remove the negation, we argue that they have the potential to help machine translation.", "cite_spans": [ { "start": 135, "end": 160, "text": "(Bentivogli et al., 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Definitions", "sec_num": "2" }, { "text": "Second, current benchmarks for natural language inference (Bowman et al., 2015) do not include challenging examples with negation. 
As a result, state-of-the-art approaches (Chen et al., 2017 ) trained on these benchmarks are unable to solve text-hypothesis pairs that contain negation. Indeed, we tested the aforementioned systems with 100 text-hypothesis pairs from our corpus (text: sentence with negation, hypothesis: positive interpretation with correctness score of 4; see examples in Table 7) , and discovered that 48 of them are predicted contradiction, 30 neutral, and only 22 entailment (the correct prediction is entailment for all of them). While relatively small, we argue that the corpus presented here is a step towards language understanding when negation is present.", "cite_spans": [ { "start": 173, "end": 191, "text": "(Chen et al., 2017", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 487, "end": 495, "text": "Table 7)", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Background and Definitions", "sec_num": "2" }, { "text": "From a theoretical perspective, it is accepted that negation has scope and focus, and that the focus yields positive interpretations (Horn, 1989; Rooth, 1992) . Scope is \"the part of the meaning that is negated\" and focus \"the part of the scope that is most prominently or explicitly negated\" (Huddleston and Pullum, 2002) .", "cite_spans": [ { "start": 133, "end": 145, "text": "(Horn, 1989;", "ref_id": "BIBREF11" }, { "start": 146, "end": 158, "text": "Rooth, 1992)", "ref_id": null }, { "start": 293, "end": 322, "text": "(Huddleston and Pullum, 2002)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "3" }, { "text": "Scope of negation detection has received a lot of attention (\u00d6zg\u00fcr and Radev, 2009; Packard et al., 2014) , mostly using two corpora: BioScope (Szarvas et al., 2008) , and CD-SCO (Morante and Daelemans, 2012) . 
F-scores are 0.96 for negation cue detection, and 0.89 for negation cue and scope detection (Velldal et al., 2012; Li et al., 2010) .", "cite_spans": [ { "start": 60, "end": 83, "text": "(\u00d6zg\u00fcr and Radev, 2009;", "ref_id": null }, { "start": 84, "end": 105, "text": "Packard et al., 2014)", "ref_id": "BIBREF21" }, { "start": 143, "end": 165, "text": "(Szarvas et al., 2008)", "ref_id": "BIBREF27" }, { "start": 179, "end": 208, "text": "(Morante and Daelemans, 2012)", "ref_id": "BIBREF18" }, { "start": 303, "end": 325, "text": "(Velldal et al., 2012;", "ref_id": "BIBREF29" }, { "start": 326, "end": 342, "text": "Li et al., 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "3" }, { "text": "Identifying the focus of negation is generally more challenging than the scope. The challenge lies in determining which tokens within the scope are intended to be negated. The largest corpus to date is PB-FOC, which was released as part of the *SEM-2012 Shared Task (Morante and Blanco, 2012) . PB-FOC annotates the semantic role most likely to be the focus in the 3,993 negations in PropBank (Palmer et al., 2005) . Anand and Martell (2012) refine PB-FOC and argue that 27.4% of negations with a focus annotated in PB-FOC do not actually have a focus. Sarabi and Blanco (2016) present a complementary approach grounded on syntactic dependencies. All of these efforts identify the tokens that are the focus of negation. 
We build upon them and generate actual positive interpretations from negation.", "cite_spans": [ { "start": 266, "end": 292, "text": "(Morante and Blanco, 2012)", "ref_id": "BIBREF19" }, { "start": 392, "end": 413, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF22" }, { "start": 416, "end": 440, "text": "Anand and Martell (2012)", "ref_id": "BIBREF1" }, { "start": 552, "end": 576, "text": "Sarabi and Blanco (2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "3" }, { "text": "This section details our data collection and annotation effort. We follow five steps. First, we describe the source corpus. Second, we outline the procedure to select negations so that the annotation effort is feasible. Third, we discuss the steps to automatically generate potential positive interpretations. Fourth, we detail the annotation effort to rewrite placeholders in the potential positive interpretations to generate actual positive interpretations. Fifth, we present the final validation strategy to ensure the quality of the final corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Creation", "sec_num": "4" }, { "text": "We chose to work with Simple Wikipedia texts. 3 Simple Wikipedia is a version of Wikipedia that is written in basic English. Compared to regular Wikipedia, articles in Simple Wikipedia use simpler words, shorter sentences, and simpler grammar. These characteristics help us to reduce the overhead of dealing with complex sentences and lead to a more realistic learning task. We process Simple Wikipedia with spaCy (Honnibal and Johnson, 2015) to obtain part-of-speech tags and dependency trees. 
Inspired by Fancellu et al.", "cite_spans": [ { "start": 414, "end": 442, "text": "(Honnibal and Johnson, 2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Selecting the Source Corpus: Simple Wikipedia", "sec_num": "4.1" }, { "text": "3 Version 2018-03-01; available at https://dumps.wikimedia.org/simplewiki/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting the Source Corpus: Simple Wikipedia", "sec_num": "4.1" }, { "text": "(2016), we identify sentences containing negation using the following cues: n't, not, never, no, nothing, nobody, none, nowhere. Note that this method selects negations that would be discarded if we relied only on dependency type neg. Table 2 shows basic counts for sentences containing at least one negation in Simple Wikipedia. 93% of them contain only one negation, and 67% have medium length (between 6 and 25 tokens). Table 3 categorizes the Simple Wikipedia negations based on their type. We identify negation types using the part-of-speech tag of the syntactic head of the negation cue, i.e., the syntactic parent or governor of the negation cue. More than 70% of the negations in Simple Wikipedia are verbal negations, and the verb is the root of the dependency tree in 44% of them. Finally, Figure 1 shows the most frequent verbal negations in Simple Wikipedia. We observe that many verbs and in particular the verb to be are very frequent, and there is a long tail of (relatively) infrequent verbs.", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 268, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 448, "end": 455, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 825, "end": 833, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Selecting the Source Corpus: Simple Wikipedia", "sec_num": "4.1" }, { "text": "Working with all negation types in Simple Wikipedia is beyond the scope of this paper. 
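The cue-based selection described in Section 4.1 can be sketched as follows. This is a minimal stand-in that assumes pre-tokenized input (the paper tokenizes with spaCy, which splits contractions such as "aren't" into "are" + "n't"); the helper names are ours, not the authors'.

```python
# Sketch of cue-based selection of negated sentences (Section 4.1).
# Assumes the input is already tokenized; helper names are ours.
NEGATION_CUES = {"n't", 'not', 'never', 'no', 'nothing', 'nobody', 'none', 'nowhere'}

def contains_negation(tokens):
    # True if any token (case-insensitive) is one of the negation cues
    return any(t.lower() in NEGATION_CUES for t in tokens)

def negation_cues(tokens):
    # All cue tokens found in the sentence, in order
    return [t for t in tokens if t.lower() in NEGATION_CUES]

sentence = ['They', 'are', "n't", 'needed', 'to', 'write', 'a', 'word', 'correctly']
```

As the paper notes, matching surface cues like this keeps negations that a parser-based check on the `neg` dependency alone would discard.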
After doing pilot annotations and manual examination, we decided to limit the negation types based on the counts presented in Section 4.1. Table 4 summarizes the filters and the number of negations that remain after running each filter. We apply sequentially five filters (Filters 1-5) on negations and four filters (Filters 6-9) on sentences. Filter 1 discards non-verbal negations (recall that 74.6% of negations are verbal, Table 3 ). Filter 2 discards those verbal negations which are not the root of the dependency tree. Filter 3 discards infrequent verbal negations, more specifically, those whose verbs occurred fewer than five times. Filter 4 caps the number of verbal negations per verb to 200 negations to increase verb coverage (recall that some verbs are negated very frequently, Figure 1) . Filter 5 discards verbal negations with part-of-speech tag interjection (less than 1%, e.g., They said \"no\" to his offer). Filter 6 discards sentences whose length is not between 6 and 25 tokens (recall that most sentences containing negation satisfy this filter: 67.3%, Table 2 ). Filter 7 discards sentences with more than one verbal negation (93% of sentences containing negation only contain one, Table 2 ). Filter 8 discards negated sentences in question form (i.e., the first token has any of the following part-of-speech tags: WDT, WP, WRB). Filter 9 discards sentences that include any of the following tokens: because, until, but, if, except. 
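The sentence-level filters (Filters 6-9) can be sketched as follows. This is a minimal illustration under the assumption that tokens and part-of-speech tags come from the spaCy parse; the function and variable names are ours.

```python
# Sketch of the sentence-level filters (Filters 6-9, Section 4.2).
# Tokens and part-of-speech tags are assumed to come from the spaCy parse.
DISCOURSE_TOKENS = {'because', 'until', 'but', 'if', 'except'}
QUESTION_TAGS = {'WDT', 'WP', 'WRB'}  # wh-words starting a question

def keep_sentence(tokens, pos_tags, num_verbal_negations):
    if not 5 < len(tokens) < 26:                 # Filter 6: 6-25 tokens
        return False
    if num_verbal_negations > 1:                 # Filter 7: one verbal negation
        return False
    if pos_tags and pos_tags[0] in QUESTION_TAGS:    # Filter 8: questions
        return False
    if any(t.lower() in DISCOURSE_TOKENS for t in tokens):  # Filter 9
        return False
    return True
```

The negation-level filters (Filters 1-5) additionally need the dependency tree and corpus-wide verb counts, so they are omitted from this sketch.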
The final dataset consists of 7,469 negations, which are approximately 10% of negations in Simple Wikipedia.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 236, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 517, "end": 524, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 881, "end": 890, "text": "Figure 1)", "ref_id": "FIGREF0" }, { "start": 1188, "end": 1195, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1318, "end": 1325, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Selecting Negations", "sec_num": "4.2" }, { "text": "We convert each negation into its positive counterpart in four steps following the rules by Huddleston and Pullum 2002: remove the negation cue, remove auxiliaries, fix third-person singular and past tense, and rewrite negatively-oriented polarity-sensitive items. These steps can be implemented using straightforward regular expressions. For example, the positive counterpart of The seller did not ship the right parts is The seller shipped the right parts. Then, we automatically generate all plausible positive interpretations of the negation by traversing the dependency tree and selecting all direct dependents of the negated verb. We filter out subtrees whose syntactic dependency is aux, auxpass, punct (auxiliary, passive auxiliary and punctuation). We also exclude the verb. These exceptions were defined after manual examination of several examples. Finally, we replace the selected subtrees with a placeholder. Table 5 shows the number of negations depending on how many positive interpretations are generated. 
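The placeholder-generation step described above can be sketched as follows. Instead of an actual spaCy parse, each direct dependent of the (already positivized) verb is represented as a (dependency label, subtree text) pair in sentence order, with the verb labeled 'ROOT'; this toy representation and the function name are ours.

```python
# Sketch of potential-positive-interpretation generation (Section 4.3).
# Each unit is a (dependency_label, subtree_text) pair in sentence order;
# the verb, already converted to its positive counterpart, is labeled 'ROOT'.
def potential_interpretations(units):
    skip = {'aux', 'auxpass', 'punct'}  # subtrees excluded by the procedure
    kept = [(label, text) for label, text in units if label not in skip]
    out = []
    for i, (label, _) in enumerate(kept):
        if label == 'ROOT':
            continue  # the verb itself is never replaced
        # replace exactly one remaining subtree with a placeholder
        words = ['___' if j == i else text for j, (_, text) in enumerate(kept)]
        out.append(' '.join(words))
    return out

# Positive counterpart of 'The seller did not ship the right parts'
units = [('nsubj', 'The seller'), ('ROOT', 'shipped'), ('dobj', 'the right parts')]
```

With these units, the sketch yields one potential positive interpretation per eligible subtree, e.g. a subject placeholder and an object placeholder for the seller example.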
We generate two or more potential positive interpretations for over 84% of negations.", "cite_spans": [], "ref_spans": [ { "start": 922, "end": 929, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Generating Potential Positive Interpretations", "sec_num": "4.3" }, { "text": "Table 5 : Distribution of negations by number of potential positive interpretations generated. In order to rewrite placeholders in potential positive interpretations and collect actual positive interpretations, we implement an annotation interface using Amazon Mechanical Turk Sandbox. 4 This rewriting process was done in-house by one linguistics student. A second annotator validated the rewrites independently (Section 4.5).", "cite_spans": [ { "start": 288, "end": 289, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 150, "end": 157, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Rewriting Placeholders", "sec_num": "4.4" }, { "text": "Each negation along with its context and all its potential positive interpretations are grouped into a Human Intelligence Task (HIT) for annotation purposes. Each HIT presents a set of instructions to the annotator along with examples. Potential positive interpretations are presented in consecutive rows, and each token in a cell. The placeholders generated in Section 4.3 are presented as blank cells and the annotator fills the blanks (or, in other words, the annotator rewrites placeholders) based on the context around the negation or world knowledge. A sample HIT along with the answers collected is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 615, "end": 623, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Rewriting Placeholders", "sec_num": "4.4" }, { "text": "In the rest of the paper, we use unknown answer to refer to placeholders for which the annotator cannot find a rewriting. 
We divide unknown answers into invalid and not specified, and ask the annotator to distinguish between them. Invalid is used to refer to placeholders that cannot be rewritten. Not specified describes placeholders that hypothetically can be rewritten but the answer is unknown given the context. We also provide an extra empty box at the bottom of the interface for additional positive interpretations. If the annotator cannot find any answers for the rewrites, she can write a positive interpretation from scratch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting Placeholders", "sec_num": "4.4" }, { "text": "In order to validate the rewrites of placeholders and resulting positive interpretations (Section 4.4), a second annotator validates them. We create a similar interface to the one in Figure 2 , but this time we only show the negation in context (Text in Figure 2 ), and one positive interpretation at a time (i.e., the potential positive interpretation for which the placeholder was rewritten). Figure 2 : Sample negation along with its context and automatically-generated potential positive interpretations. The annotation process reveals three positive interpretations: \"Relationships that end are normally called breakups,\" \"Marriages which end are rarely called breakups,\" and \"Marriages which end are normally called divorce.\" The annotator determines correctness and novelty as follows.", "cite_spans": [], "ref_spans": [ { "start": 183, "end": 191, "text": "Figure 2", "ref_id": null }, { "start": 254, "end": 262, "text": "Figure 2", "ref_id": null }, { "start": 363, "end": 371, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Validating Positive Interpretations", "sec_num": "4.5" }, { "text": "Correctness measures whether a positive interpretation is true given the negation in context. 
It is measured using the following scale:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Positive Interpretations", "sec_num": "4.5" }, { "text": "1. After reading the text, it is clear that the positive interpretation is false. 2. After reading the text, the positive interpretation is probably false, but I am not sure. 3. After reading the text, the positive interpretation is probably true, but I am not sure. 4. After reading the text, it is clear that the positive interpretation is true. Novelty measures whether the meaning conveyed by a positive interpretation is already explicitly stated in the text, and it is measured using the following numeric scale:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Positive Interpretations", "sec_num": null }, { "text": "1. The positive interpretation is stated explicitly in the text with the very same words. I could copy and paste chunks from text and get the positive interpretation. 2. The positive interpretation is not stated in the text with the same words. The positive interpretation and the text have synonyms in common, but I could not get the positive interpretation simply copying and pasting from text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Positive Interpretations", "sec_num": null }, { "text": "3. The positive interpretation is not stated in the text with the same words. Additionally, there are few synonyms in common between the positive interpretation and the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Positive Interpretations", "sec_num": null }, { "text": "The procedure described in Section 4 yields 12,044 unknown answers (11,030 not specified and 1,014 invalid). We also rewrite a new positive interpretation from scratch for 2,158 negations for which we cannot find any actual rewrites. Overall, we rewrite 5,989 positive interpretations for 5,770 unique negations. In other words, the procedure in Section 4 yields a positive interpretation for 77% of negations. 
Table 6 shows the distribution of known vs. unknown rewrites per dependency type, where dependency type refers to the dependency type from the selected subtree of the verb to the verb itself. Out of all dependency types, advmod and xcomp (adverbial modifier and open clausal complement respectively) have the highest ratios of known rewrites, and nsubj (nominal subject) has the most unknown answers. In other words, the easiest placeholders to rewrite are those whose syntactic function is adverbial modifier or open clausal complement, and the most challenging are those whose syntactic function is nominal subject.", "cite_spans": [], "ref_spans": [ { "start": 371, "end": 378, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "5" }, { "text": "To understand high-level characteristics of negations and their positive interpretations beyond dependency types, we explore a random sample of 100 negations and all their positive interpretations. We discover six major categories (quantities, times, objects, adjectives, proper nouns and others) and four subcategories (Table 7) :", "cite_spans": [], "ref_spans": [ { "start": 317, "end": 326, "text": "(Table 7)", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "5" }, { "text": "\u2022 The first category is quantities and includes both specific and abstract quantities. An example of abstract quantity is Many do not use their real names, as Everett does and its corresponding positive interpretation Few use their real names, as Everett does. A fourth of positive interpretations in the sample were obtained after rewriting quantities. \u2022 The second category is time and includes both actual and abstract times. An example of actual time is Since 2012, this channel never goes off the air during the day and its positive interpretation Before 2012, this channel went off the air during the day. 
15% of positive interpretations in the sample were obtained after rewriting temporal expressions. \u2022 The third category is objects and refers to positive interpretations obtained by rewriting verbal objects. An example is It does not need sunlight to grow and its positive interpretation It needs water to grow. 9% of positive interpretations in the sample were obtained after rewriting the verbal objects. \u2022 The fourth category is adjectives and refers to positive interpretations obtained by rewriting adjectives. An example is Crops did not grow as well when they were close together and its positive interpretation Crops grew poorly when they were close together. 27% of positive interpretations in the sample were obtained after rewriting adjectives. American open wheel racing series. 2% of positive interpretations in the sample were obtained after rewriting proper nouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "5" }, { "text": "To assess the quality of the rewrites and positive interpretations, we ask a second annotator to validate them based on two criteria: correctness and novelty (Section 4.5). Recall that correctness ranges from 1 (minimum) to 4 (maximum) and novelty from 1 (minimum) to 3 (maximum). We assess novelty only if positive interpretations are correct (correctness scores 3 or 4). Figure 3 reports the validation results. Out of all positive interpretations obtained during the annotation process, 90% are either correct (77%) or probably correct (13%) (correctness scores 4 and 3), and 95% of them are either very novel (52%) or novel (43%). These validation scores mean not only that positive interpretations are sound given the original negation (correctness score), but also that they are not explicitly stated in the context and thus reveal implicit meaning (novelty score). 
Table 8 presents three negations, all potential positive interpretations, and manual annotations along with the correctness and novelty scores. Example (1) is a simple negated clause. The procedure described in Section 4.3 generates four potential positive interpretations, and three of them were rewritten. Given Phosgene usually does not cause its worst effects right away and its context, the following positive interpretations are deemed correct (correctness = 4) with different degrees of novelty (2, 3 and 1 respectively): Phosgene rarely causes its worst effects right away (Interpretation 1.2), Phosgene usually causes mild effects right away (Interpretation 1.3), and Phosgene usually causes its worst effects 12 hours after a person breathes it in (Interpretation 1.4). 1 Context: Phosgene can be a liquid or a gas. As a gas, it is heavier than air, so it can stay near the ground (where people can breathe it in for long periods of time). It smells like freshly cut grass or moldy hay. Along with being a choking agent, phosgene is also a blood agent. This means it keeps oxygen from getting into the body's cells. Without oxygen, a person's cells will die, and the person will suffocate. Phosgene usually does not cause its worst effects right away. ---Addtl. Interpretation: He's always trying to get more money despite being rich. 4 3 Table 8 : Sentences containing negation (and context if relevant to obtain positive interpretations), all automatically-generated positive interpretations, positive interpretations manually annotated (italics indicate placeholder rewritings), and validation scores (correctness and novelty). Note that 
Interpretation 1.1 is most likely correct, but context does not provide clues about which chemicals cause their worst effects right away and thus it is annotated not specified (NS).", "cite_spans": [], "ref_spans": [ { "start": 373, "end": 381, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 871, "end": 878, "text": "Table 8", "ref_id": null }, { "start": 2229, "end": 2236, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Annotation Quality", "sec_num": "5.1" }, { "text": "Example (2) has three potential positive interpretations, and we rewrite two of them. Note that Interpretation 2.2, Hungary has observed Central European Time since 1916, is correct but not novel because it is explicitly stated in the context. Interpretation 2.3 is correct but received novelty score of 2 because it only replaces since with prior to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Examples", "sec_num": "5.2" }, { "text": "Example (3) shows an example in which rewriting placeholders is not successful. The additional interpretation, however, reveals that He has the intention of getting more money. Context, which is not shown in Table 8 , supports the correctness and validation scores (e.g., He is wealthy).", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 215, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Annotation Examples", "sec_num": "5.2" }, { "text": "The task of generating positive interpretations from a sentence containing negation can be approached with sequence-to-sequence (seq2seq) models (input: sentence containing negation, output: positive interpretation). In this section, we present baseline results with existing seq2seq models. Specifically, we experiment with a basic seq2seq model, two seq2seq models with attention (Luong et al., 2015; , and Google's neural machine translation (NMT) system (Wu et al., 2016) , which is also a seq2seq model with attention and arguably the most complex. 
We acknowledge that these systems are usually trained with orders of magnitude more examples, and comparing them when trained with our fairly small corpus may be unfair because they were designed for other tasks. Our goal is not to obtain the best results possible, but rather to provide baseline results for our task and corpus.", "cite_spans": [ { "start": 383, "end": 403, "text": "(Luong et al., 2015;", "ref_id": "BIBREF16" }, { "start": 459, "end": 476, "text": "(Wu et al., 2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "
                                                     Short Sentences        Long Sentences
                                                    BLEU  Gram.  Corr.    BLEU  Gram.  Corr.
seq2seq (basic)                                    10.31   23%     5%      2.13    6%     1%
seq2seq + attention                                20.51   65%    22%      9.20   41%    12%
seq2seq + attention (Luong et al., 2015)           28.08   68%    30%     14.53   51%    19%
seq2seq + Google's NMT attention (Wu et al., 2016) 12.54   42%    15%      4.40   12%     3%
Table 9: Results (BLEU-4, grammaticality and correctness) obtained with the test set.", "cite_spans": [ { "start": 172, "end": 192, "text": "(Luong et al., 2015)", "ref_id": "BIBREF16" }, { "start": 254, "end": 271, "text": "(Wu et al., 2016)", "ref_id": null } ], "ref_spans": [ { "start": 298, "end": 305, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "The 3,831 negations become source sentences and the correct positive interpretations become target sentences. We randomly select 100 short sentences (up to 12 tokens) and 100 long sentences (over 12 tokens) for testing, 200 sentences for development, and the remainder for training. All positive interpretations collected from a negation are assigned to the same split (testing, development, or training) in order to ensure a more realistic scenario. Evaluation and Results. We use three metrics to evaluate the models: BLEU-4, correctness and grammaticality. BLEU-4 is automated, convenient and useful for development purposes. 
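For reference, sentence-level BLEU-4 (geometric mean of modified n-gram precisions with a brevity penalty) can be sketched as follows. This is a minimal single-reference sketch for illustration, not the exact scorer used in our experiments; it applies no smoothing, so it returns 0 whenever some n-gram order has no matches.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Sentence-level BLEU-4 against a single reference (no smoothing)."""
    cand, ref = candidate.split(), reference.split()
    log_precision = 0.0
    for n in range(1, 5):
        c_counts, r_counts = ngram_counts(cand, n), ngram_counts(ref, n)
        # Clipped counts: a candidate n-gram is credited at most as often
        # as it appears in the reference.
        clipped = sum(min(c, r_counts[g]) for g, c in c_counts.items())
        total = sum(c_counts.values())
        if total == 0 or clipped == 0:
            return 0.0  # no overlap at this n-gram order
        log_precision += 0.25 * math.log(clipped / total)
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1.0 - len(ref) / len(cand))
    return bp * math.exp(log_precision)
```

An exact match scores 1.0, and a candidate sharing no unigrams with the reference scores 0.0; partial overlaps fall in between.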
While larger BLEU-4 scores generally indicate better correctness and grammaticality scores, we do not observe a linear correlation (Table 9). Correctness is measured manually with the scale presented in Section 4.5. Finally, grammaticality is measured manually using the following numeric scale:", "cite_spans": [], "ref_spans": [ { "start": 753, "end": 762, "text": "(Table 9)", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "1. The sentence is not grammatical at all, e.g., it does not contain a verb.
2. The sentence is mostly ungrammatical, e.g., it contains a verb but the word order is wrong.
3. The sentence has a few grammatical issues, e.g., the subject-verb agreement is wrong or punctuation is missing.
4. The sentence is grammatically correct (regardless of its correctness).
Table 9 shows the results. In general terms, results are better for short sentences than long ones. This is not surprising given the small size of our corpus. The basic seq2seq model performs poorly: it barely generates any correct positive interpretations, and most are ungrammatical. Models with attention perform better. The best results are with the system by Luong et al. (2017): 30% of the short positive interpretations generated are correct, and 68% are grammatical. We believe Google's NMT performs the worst because of the small corpus.", "cite_spans": [ { "start": 715, "end": 734, "text": "Luong et al. (2017)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 356, "end": 363, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "We also conduct a manual analysis of the correct positive interpretations generated by the best system. 
Following the categories described in Section 5 and Table 7, 37% of them belong to the adjectives category, 27% to abstract quantities, 17% to objects, and 10% to abstract time.", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 7", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "We have presented a corpus of negations and their positive interpretations. Positive interpretations do not contain negations, range from implicatures to entailments, and are intuitively understood by nonexperts when reading the negations. We work with verbal negations selected from Simple Wikipedia, automatically generate potential positive interpretations by replacing subtrees with placeholders, and manually collect rewrites for the placeholders in order to obtain actual positive interpretations. This strategy yields positive interpretations for 77% of negations, and a manual validation step ensures both correctness and novelty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Neural machine translation struggles with negation, and natural language inference benchmarks do not account for the intricacies of negation (Section 2). While small, we believe the corpus presented here is a step towards enabling natural language understanding when negation is present.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Available at: https://zahrasarabi.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://requester.mturk.com/developer/sandbox", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This material is based upon work supported by the National Science Foundation under Grants Nos. 1734730, 1832267 and 1845757. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. The Titan Xp used for this research was donated by the NVIDIA Corporation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "*sem 2013 shared task: Semantic textual similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity", "volume": "1", "issue": "", "pages": "32--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Se- mantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity. Association for Computational Linguistics, Atlanta, Georgia, USA, pages 32- 43. http://www.aclweb.org/anthology/ S13-1004.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Annotating the focus of negation in terms of questions under discussion", "authors": [ { "first": "Pranav", "middle": [], "last": "Anand", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Martell", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics. 
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "65--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Anand and Craig Martell. 2012. Annotating the focus of negation in terms of questions under dis- cussion. In Proceedings of the Workshop on Extra- Propositional Aspects of Meaning in Computational Linguistics. Association for Computational Linguis- tics, Stroudsburg, PA, USA, ExProM '12, pages 65- 69. http://dl.acm.org/citation.cfm? id=2392701.2392709.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv e-prints abs/1409.0473. https://arxiv.org/abs/", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural versus phrase-based machine translation quality: a case study", "authors": [ { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Mauro", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "257--267", "other_ids": { "DOI": [ "10.18653/v1/D16-1025" ] }, "num": null, "urls": [], "raw_text": "Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. 
Neural versus phrase-based machine translation quality: a case study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Pro- cessing. Association for Computational Linguis- tics, pages 257-267. https://doi.org/10. 18653/v1/D16-1025.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The Oxford Dictionary of Philosophy", "authors": [ { "first": "Simon", "middle": [], "last": "Blackburn", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Blackburn. 2008. The Oxford Dictio- nary of Philosophy. Oxford University Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "632--642", "other_ids": { "DOI": [ "10.18653/v1/D15-1075" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large an- notated corpus for learning natural language in- ference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing. Association for Computational Linguis- tics, pages 632-642. https://doi.org/10. 
18653/v1/D15-1075.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Recurrent neural network-based sentence encoder with gated attention for natural language inference", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP", "volume": "", "issue": "", "pages": "36--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Recurrent neural network-based sentence encoder with gated attention for natural language inference. In Pro- ceedings of the 2nd Workshop on Evaluating Vec- tor Space Representations for NLP. Association for Computational Linguistics, Copenhagen, Den- mark, pages 36-40. http://www.aclweb. org/anthology/W17-5307.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Lar G\u00fcl\u00e7ehre", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "", "middle": [], "last": "Bougares", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, \u00c7 a?lar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Associa- tion for Computational Linguistics, Doha, Qatar, pages 1724-1734. http://www.aclweb.org/ anthology/D14-1179.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Neural networks for negation scope detection", "authors": [ { "first": "Federico", "middle": [], "last": "Fancellu", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lopez", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "16--1047", "other_ids": {}, "num": null, "urls": [], "raw_text": "Federico Fancellu, Adam Lopez, and Bonnie Web- ber. 2016. Neural networks for negation scope detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 495-504. http://www.aclweb.org/ anthology/P16-1047.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An improved non-monotonic transition system for dependency parsing", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1373--1378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for de- pendency parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computa- tional Linguistics, Lisbon, Portugal, pages 1373- 1378. https://aclweb.org/anthology/ D/D15/D15-1162.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A natural history of negation", "authors": [ { "first": "Laurence", "middle": [ "R" ], "last": "Horn", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurence R. Horn. 1989. A natural history of negation. Chicago University Press, Chicago.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Stanford Encyclopedia of Philosophy", "authors": [ { "first": "R", "middle": [], "last": "Laurence", "suffix": "" }, { "first": "Heinrich", "middle": [], "last": "Horn", "suffix": "" }, { "first": "", "middle": [], "last": "Wansing", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurence R. Horn and Heinrich Wansing. 2015. Nega- tion. In Edward N. Zalta, editor, The Stanford Ency- clopedia of Philosophy. Summer 2015 edition.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Cambridge Grammar of the English Language", "authors": [ { "first": "D", "middle": [], "last": "Rodney", "suffix": "" }, { "first": "Geoffrey", "middle": [ "K" ], "last": "Huddleston", "suffix": "" }, { "first": "", "middle": [], "last": "Pullum", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rodney D. Huddleston and Geoffrey K. Pullum. 2002. 
The Cambridge Grammar of the English Language. Cambridge University Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning the Scope of Negation via Shallow Semantic Parsing", "authors": [ { "first": "Junhui", "middle": [], "last": "Li", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hongling", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qiaoming", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "671--679", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junhui Li, Guodong Zhou, Hongling Wang, and Qiaoming Zhu. 2010. Learning the Scope of Negation via Shallow Semantic Parsing. In Pro- ceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Col- ing 2010 Organizing Committee, Beijing, China, pages 671-679. http://www.aclweb.org/ anthology-new/C/C10/C10-1076.bib.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Neural machine translation (seq2seq) tutorial", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Eugene Brevdo, and Rui Zhao. 2017. Neural machine translation (seq2seq) tutorial. 
https://github.com/tensorflow/nmt .", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": { "DOI": [ "10.18653/v1/D15-1166" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing. Association for Compu- tational Linguistics, pages 1412-1421. https: //doi.org/10.18653/v1/D15-1166.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The Oxford handbook of computational linguistics", "authors": [ { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruslan Mitkov. 2005. The Oxford handbook of compu- tational linguistics. Oxford University Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Annotating modality and negation for a machine reading evaluation", "authors": [ { "first": "R", "middle": [], "last": "Morante", "suffix": "" }, { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2012, "venue": "CLEF 2012 Evaluation Labs and Workshop Online Working Notes", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Morante and W. Daelemans. 2012. Annotating modality and negation for a machine reading evalua- tion. 
In CLEF 2012 Evaluation Labs and Workshop Online Working Notes.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "*sem 2012 shared task: Resolving the scope and focus of negation", "authors": [ { "first": "Roser", "middle": [], "last": "Morante", "suffix": "" }, { "first": "Eduardo", "middle": [], "last": "Blanco", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", "volume": "2", "issue": "", "pages": "265--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roser Morante and Eduardo Blanco. 2012. *sem 2012 shared task: Resolving the scope and focus of nega- tion. In Proceedings of the First Joint Confer- ence on Lexical and Computational Semantics -Vol- ume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Eval- uation. Association for Computational Linguistics, Stroudsburg, PA, USA, SemEval '12, pages 265- 274. http://dl.acm.org/citation.cfm? id=2387636.2387679.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Detecting Speculations and their Scopes in Scientific Text", "authors": [ { "first": "Dragomir", "middle": [ "R" ], "last": "Arzucan\u00f6zg\u00fcr", "suffix": "" }, { "first": "", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1398--1407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arzucan\u00d6zg\u00fcr and Dragomir R. Radev. 2009. Detect- ing Speculations and their Scopes in Scientific Text. In Proceedings of the 2009 Conference on Empiri- cal Methods in Natural Language Processing. As- sociation for Computational Linguistics, Singapore, pages 1398-1407. 
http://www.aclweb.org/ anthology-new/D/D09/D09-1145.bib.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Simple negation scope resolution through deep parsing: A semantic solution to a semantic problem", "authors": [ { "first": "Woodley", "middle": [], "last": "Packard", "suffix": "" }, { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Read", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Dridan", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "14--1007", "other_ids": {}, "num": null, "urls": [], "raw_text": "Woodley Packard, Emily M. Bender, Jonathon Read, Stephan Oepen, and Rebecca Dridan. 2014. Sim- ple negation scope resolution through deep parsing: A semantic solution to a semantic problem. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 69-78. http:// www.aclweb.org/anthology/P14-1007.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The Proposition Bank: An Annotated Corpus of Semantic Roles", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "1", "pages": "71--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Cor- pus of Semantic Roles. 
Computational Linguistics 31(1):71-106.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A comprehensive grammar of the English language", "authors": [ { "first": "Randolph", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Sidney", "middle": [], "last": "Greenbaum", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Leech", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Randolph Quirk, Sidney Greenbaum, and Geoffrey Leech. 2000. A comprehensive grammar of the En- glish language. Longman, London.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Association with focus", "authors": [ { "first": "Mats", "middle": [], "last": "Rooth", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mats Rooth. 1985. Association with focus. Ph.D. the- sis.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Mats Rooth. 1992. A theory of focus interpretation", "authors": [], "year": null, "venue": "Natural language semantics", "volume": "1", "issue": "1", "pages": "75--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mats Rooth. 1992. A theory of focus interpretation. Natural language semantics 1(1):75-116.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Understanding negation in positive terms using syntactic dependencies", "authors": [ { "first": "Zahra", "middle": [], "last": "Sarabi", "suffix": "" }, { "first": "Eduardo", "middle": [], "last": "Blanco", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1108--1118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zahra Sarabi and Eduardo Blanco. 2016. 
Understand- ing negation in positive terms using syntactic de- pendencies. In Proceedings of the 2016 Confer- ence on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, Austin, Texas, pages 1108-1118. https: //aclweb.org/anthology/D16-1119.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The BioScope corpus: annotation for negation, uncertainty and their scopein biomedical texts", "authors": [ { "first": "Gy\u00f6rgy", "middle": [], "last": "Szarvas", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "J\u00e1nos", "middle": [], "last": "Csirik", "suffix": "" } ], "year": 2008, "venue": "Proceedings of BioNLP 2008. ACL", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gy\u00f6rgy Szarvas, Veronika Vincze, Rich\u00e1rd Farkas, and J\u00e1nos Csirik. 2008. The BioScope corpus: anno- tation for negation, uncertainty and their scopein biomedical texts. In Proceedings of BioNLP 2008. ACL, Columbus, Ohio, USA, pages 38-45.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Negative contexts: collocation, polarity, and multiple negation", "authors": [ { "first": "Ton", "middle": [], "last": "Van Der Wouden", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ton van der Wouden. 1997. Negative contexts: collo- cation, polarity, and multiple negation. 
Routledge, London.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Speculation and negation: Rules, rankers, and the role of syntax", "authors": [ { "first": "Erik", "middle": [], "last": "Velldal", "suffix": "" }, { "first": "Lilja", "middle": [], "last": "Ovrelid", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Read", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" } ], "year": 2012, "venue": "Comput. Linguist", "volume": "38", "issue": "2", "pages": "369--410", "other_ids": { "DOI": [ "10.1162/COLI_a_00126" ] }, "num": null, "urls": [], "raw_text": "Erik Velldal, Lilja Ovrelid, Jonathon Read, and Stephan Oepen. 2012. Speculation and negation: Rules, rankers, and the role of syntax. Comput. Lin- guist. 38(2):369-410. https://doi.org/10. 1162/COLI_a_00126.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "authors": [ { "first": "Macduff", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridg- ing the gap between human and machine translation. CoRR abs/1609.08144.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The most frequent negated verb lemmas in Simple Wikipedia.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "The fifth category is proper nouns. 
An example is Cosworth does not currently provide engines to any American open wheel racing series and its positive interpretation IndyCar Series currently provide engines to", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Distribution of correctness (top) and novelty (bottom) scores in our corpus.", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "text": "Three sentences containing a negation and their positive interpretations (italics).", "content": "", "type_str": "table", "num": null }, "TABREF2": { "html": null, "text": "Basic counts for sentences containing negation in Simple Wikipedia.", "content": "
Negation Types          #        %
Verbal: root       24,125    32.4%
Verbal: not root   31,386    42.2%
Nominal            11,003    14.8%
Adjectival          2,325     3.1%
Other               5,507     7.4%
All                74,346   100.0%
", "type_str": "table", "num": null }, "TABREF3": { "html": null, "text": "Distribution of negation type in SimpleWikipedia.", "content": "", "type_str": "table", "num": null }, "TABREF5": { "html": null, "text": "Initial number of negations in Simple Wikipedia, and how many remain after each filter.", "content": "
", "type_str": "table", "num": null }, "TABREF8": { "html": null, "text": "", "content": "
", "type_str": "table", "num": null }, "TABREF9": { "html": null, "text": "Congress cannot serve for more than three out of any six years. 3% abstract Many do not use their real names, as Everett does. 22%Times actualSince 2012, this channel never goes off the air during the day. 4% abstract Rabbits should not be bred too early though. 11% Objects -It does not need sunlight to grow and can stay in the same pot for many years. 9% Adjectives -Crops did not grow as well when they were close together.27% Proper nouns -Cosworth does not currently provide engines to any American open wheel racing series.", "content": "
Category    Subcat.   Examples                                               %
Quantities  specific  Members of                                            2%
Others      -         The mass number is not shown on the periodic table.  22%
", "type_str": "table", "num": null }, "TABREF10": { "html": null, "text": "Categories and subcategories discovered in a sample of 100 negations and all their positive interpretations.", "content": "", "type_str": "table", "num": null }, "TABREF11": { "html": null, "text": "The worst symptoms do not happen until 12 hours after a person breathed in phosgene. The person usually dies within 24 to 48 hours. Sentence containing negation: Phosgene usually does not cause its worst effects right away.", "content": "
Correctness  Novelty
-Interpretation 1.1: [NS] usually causes its worst effects right away.   -   -
-Interpretation 1.2: Phosgene rarely causes its worst effects right away.   4   2
-Interpretation 1.3: Phosgene usually causes mild effects right away.   4   3
-Interpretation 1.4: Phosgene usually causes its worst effects 12 hours after a person breathes it in.   4   1
2 Context: Hungary uses Central European Time (CET) which is 1 hour ahead of Coordinated Universal Time (UTC+1).
Hungary has not observed summer time since 1916.
Sentence containing negation: Hungary has not observed summer time since 1916.
Correctness  Novelty
-Interpretation 2.1: [NS] has observed summer time since 1916.   -   -
-Interpretation 2.2: Hungary has observed Central European Time since 1916.   4   1
-Interpretation 2.3: Hungary has observed summer time prior to 1916.   4   2
3 Sentence containing negation: This does not stop him from finding ways to try to get more money.
Correctness  Novelty
-Interpretation 3.1: This stops [NS] from finding ways to try to get more money.   -   -
-Interpretation 3.2: This stops him [NS].
", "type_str": "table", "num": null } } } }