{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:18.352430Z"
},
"title": "Negation in Norwegian: an annotated dataset",
"authors": [
{
"first": "Petter",
"middle": [],
"last": "Maehlum",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Library of Sweden",
"location": {
"country": "KBLab"
}
},
"email": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Library of Sweden",
"location": {
"country": "KBLab"
}
},
"email": ""
},
{
"first": "Robin",
"middle": [],
"last": "Kurtz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Library of Sweden",
"location": {
"country": "KBLab"
}
},
"email": "robin.kurtz@kb.se"
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Library of Sweden",
"location": {
"country": "KBLab"
}
},
"email": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Library of Sweden",
"location": {
"country": "KBLab"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper introduces NoReC neg-the first annotated dataset of negation for Norwegian. Negation cues and their in-sentence scopes have been annotated across more than 11K sentences spanning more than 400 documents for a subset of the Norwegian Review Corpus (NoReC). In addition to providing in-depth discussion of the annotation guidelines, we also present a first set of benchmark results based on a graphparsing approach.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper introduces NoReC neg-the first annotated dataset of negation for Norwegian. Negation cues and their in-sentence scopes have been annotated across more than 11K sentences spanning more than 400 documents for a subset of the Norwegian Review Corpus (NoReC). In addition to providing in-depth discussion of the annotation guidelines, we also present a first set of benchmark results based on a graphparsing approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper introduces a new data set annotating negation for Norwegian. As shown in the example below, the annotations identify both negation cues (in bold) and their scopes (in brackets) within the sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Men The underlying corpus is the NoReC fine data set (\u00d8vrelid et al., 2020 ) -a subset of the Norwegian Review Corpus (NoReC) (Velldal et al., 2018) annotated for fine-grained sentiment, comprising professional reviews from a range of different domains. The new data set introduced here, named NoReC neg , is the first data set of negation for Norwegian. We also present experimental results for negation resolution based on a graph-parsing approach shown to yield state-of-the-art results for other languages. All the resources described in the paper -the data set, the annotation guidelines, the models and the associated code -are made publicly available. 1 The rest of the paper is structured as follows. We start by reviewing related work on negation 1 https://github.com/ltgoslo/norec_neg for other languages in Section 2, with regards to both annotation and modeling. In Section 3 we detail our annotation guidelines, the annotation procedure and further present an analysis of interannotator agreement. In Section 4 we then summarize the statistics of the final annotated data set, before presenting the first benchmark results for negation resolution in Section 5. Before concluding, we finally provide a discussion of future work in Section 6.",
"cite_spans": [
{
"start": 57,
"end": 78,
"text": "(\u00d8vrelid et al., 2020",
"ref_id": null
},
{
"start": 130,
"end": 152,
"text": "(Velldal et al., 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Below we discuss related work on negation, starting with datasets before moving on to modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While NoReC neg is the first dataset annotated for negation for Norwegian, there are a number of existing negation datasets for a range of other languages, such as Chinese (Zou et al., 2016) , Dutch (Afzal et al., 2014) , English (Pyysalo et al., 2007; Vincze et al., 2008; Morante and Daelemans, 2012; Councill et al., 2010; Konstantinova et al., 2012) , German (Cotik et al., 2016) , Spanish (Jim\u00e9nez-Zafra et al., 2018; Diaz et al., 2017) , Swedish (Dalianis and Velupillai, 2010; Skeppstedt, 2011) , Italian (Altuna et al., 2017) , and Japanese (Matsuyoshi et al., 2014) . Jim\u00e9nez-Zafra et al. (2020) provide a thorough survey of existing negation datasets. A large proportion of negation corpora are based on data from the biomedical or clinical domain (Vincze et al., 2008; Dalianis and Velupillai, 2010; Cotik et al., 2016; Diaz et al., 2017) . We will here focus on the corpora that are most relevant to the current annotation effort: the SFU Corpus and the ConanDoyle-neg corpus. The SFU corpus also annotates review data, hence is similar to our work in terms of text type, whereas ConanDoyle-neg is one of the most widely used datasets in the field.",
"cite_spans": [
{
"start": 172,
"end": 190,
"text": "(Zou et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 199,
"end": 219,
"text": "(Afzal et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 230,
"end": 252,
"text": "(Pyysalo et al., 2007;",
"ref_id": "BIBREF25"
},
{
"start": 253,
"end": 273,
"text": "Vincze et al., 2008;",
"ref_id": "BIBREF31"
},
{
"start": 274,
"end": 302,
"text": "Morante and Daelemans, 2012;",
"ref_id": "BIBREF22"
},
{
"start": 303,
"end": 325,
"text": "Councill et al., 2010;",
"ref_id": "BIBREF4"
},
{
"start": 326,
"end": 353,
"text": "Konstantinova et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 363,
"end": 383,
"text": "(Cotik et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 386,
"end": 422,
"text": "Spanish (Jim\u00e9nez-Zafra et al., 2018;",
"ref_id": null
},
{
"start": 423,
"end": 441,
"text": "Diaz et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 452,
"end": 483,
"text": "(Dalianis and Velupillai, 2010;",
"ref_id": "BIBREF5"
},
{
"start": 484,
"end": 501,
"text": "Skeppstedt, 2011)",
"ref_id": "BIBREF28"
},
{
"start": 504,
"end": 533,
"text": "Italian (Altuna et al., 2017)",
"ref_id": null
},
{
"start": 549,
"end": 574,
"text": "(Matsuyoshi et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 758,
"end": 779,
"text": "(Vincze et al., 2008;",
"ref_id": "BIBREF31"
},
{
"start": 780,
"end": 810,
"text": "Dalianis and Velupillai, 2010;",
"ref_id": "BIBREF5"
},
{
"start": 811,
"end": 830,
"text": "Cotik et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 831,
"end": 849,
"text": "Diaz et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "The English (Konstantinova et al., 2012) and",
"cite_spans": [
{
"start": 12,
"end": 40,
"text": "(Konstantinova et al., 2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "Spanish (Jim\u00e9nez-Zafra et al., 2018) parts of the SFU Review Corpus contain reviews from eight domains (books, cars, computers, cookware, hotels, movies, music, phones) which have been annotated for sentiment at document-level, as well as negation and speculation at sentence-level. The annotation scheme for negation is based primarily on the guidelines developed for the biomedical BioScope corpus (Vincze et al., 2008) , which largely employ syntactic criteria for the determination of scope, choosing the maximal syntactic unit that contains the negated content. Unlike Bio-Scope, however, negation cues are not included within the scope in SFU. The corpus does not annotate affixal cues, e.g. imin impossible.",
"cite_spans": [
{
"start": 8,
"end": 36,
"text": "(Jim\u00e9nez-Zafra et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 103,
"end": 168,
"text": "(books, cars, computers, cookware, hotels, movies, music, phones)",
"ref_id": null
},
{
"start": 400,
"end": 421,
"text": "(Vincze et al., 2008)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "The English ConanDoyle-neg corpus contains Sherlock Holmes stories manually annotated for negation cues, scopes, and events (Morante and Daelemans, 2012) and was employed in the 2012 *SEM shared task on negation detection (Morante and Blanco, 2012) . The annotation scheme is also based on the scheme employed for the Bio-Scope corpus (Vincze et al., 2008) , but with important modifications. In ConanDoyle-neg, the cue is not included in the scope, and it annotates a wide range of cue types, i.e., both sub-token (affixal), single token and multi-token negation cues. Scopes may furthermore be discontinuous, often an effect of the requirement to include the subject within the negation scope. This is in contrast to the annotation scheme found in the SFU corpus, where subjects are not included in the negation scope. Note that the NegPar corpus contains a re-annotated version of the ConanDoyle-neg corpus, which fixes known bugs and also adds Chinese data (Liu et al., 2018) .",
"cite_spans": [
{
"start": 124,
"end": 153,
"text": "(Morante and Daelemans, 2012)",
"ref_id": "BIBREF22"
},
{
"start": 222,
"end": 248,
"text": "(Morante and Blanco, 2012)",
"ref_id": "BIBREF20"
},
{
"start": 335,
"end": 356,
"text": "(Vincze et al., 2008)",
"ref_id": "BIBREF31"
},
{
"start": 961,
"end": 979,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "Traditional approaches to the task of negation detection have typically employed a wide range of hand-crafted features, and often linguistically informed, derived from constituency parsing Packard et al., 2014) , dependency parsing (Lapponi et al., 2012) , or Minimal Recursion Semantics structures created by an HPSG parser (Packard et al., 2014) . Scope resolution in particular has often been approached as a sequence labeling task, as pioneered by Morante and Daelemans (2009) and later done in several other works (Lapponi et al., 2012; White, 2012; Enger et al., 2017) . More recently, neural approaches have been successfully applied to the task. Qian et al. (2016) propose a CNN model for negation scope detection on the abstracts section of the Bio-Scope corpus, which operates over syntactic paths between the cue and candidate tokens. Fancellu et al. (2016) present and compare two neural architectures for the task of negation scope detection on the ConanDoyle-neg corpus: a simple feedforward network and a bidirectional LSTM. Note that these more recent neural systems disregard the task of cue detection altogether (Fancellu et al., 2016; Qian et al., 2016; Fancellu et al., 2017) , relying instead on gold cues and focusing solely on the task of scope detection.",
"cite_spans": [
{
"start": 189,
"end": 210,
"text": "Packard et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 232,
"end": 254,
"text": "(Lapponi et al., 2012)",
"ref_id": "BIBREF17"
},
{
"start": 325,
"end": 347,
"text": "(Packard et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 452,
"end": 480,
"text": "Morante and Daelemans (2009)",
"ref_id": "BIBREF21"
},
{
"start": 519,
"end": 541,
"text": "(Lapponi et al., 2012;",
"ref_id": "BIBREF17"
},
{
"start": 542,
"end": 554,
"text": "White, 2012;",
"ref_id": "BIBREF32"
},
{
"start": 555,
"end": 574,
"text": "Enger et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 654,
"end": 672,
"text": "Qian et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 846,
"end": 868,
"text": "Fancellu et al. (2016)",
"ref_id": "BIBREF10"
},
{
"start": 1130,
"end": 1153,
"text": "(Fancellu et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 1154,
"end": 1172,
"text": "Qian et al., 2016;",
"ref_id": "BIBREF26"
},
{
"start": 1173,
"end": 1195,
"text": "Fancellu et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "2.2"
},
{
"text": "Finally, Kurtz et al. (2020) cast negation resolution as a graph parsing problem and perform full negation resolution using a dependency graph parser (Dozat and Manning, 2018) to jointly predict cues and scopes. The neural model uses a BiLSTM to create token-level representations, and then includes two feed-forward networks to create head-and dependent-specific token representations. Finally, each possible head-dependent combination is scored using a bilinear model. Despite the conceptual simplicity, this model achieves state-of-the-art results. As such, we use this model to evaluate our annotations and include further details in Section 5.",
"cite_spans": [
{
"start": 9,
"end": 28,
"text": "Kurtz et al. (2020)",
"ref_id": "BIBREF16"
},
{
"start": 150,
"end": 175,
"text": "(Dozat and Manning, 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "2.2"
},
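{
"text": "To make this architecture concrete, the following is a minimal PyTorch sketch of the bilinear head-dependent scorer described above. It is an illustration under our reading of Kurtz et al. (2020), not their released implementation, and all dimensions and layer names are illustrative.\n\nimport torch\nimport torch.nn as nn\n\nclass BilinearEdgeScorer(nn.Module):\n    def __init__(self, emb_dim=100, lstm_dim=200, mlp_dim=100):\n        super().__init__()\n        # BiLSTM producing contextualized token-level representations.\n        self.bilstm = nn.LSTM(emb_dim, lstm_dim, bidirectional=True, batch_first=True)\n        # Two feed-forward networks for head- and dependent-specific views.\n        self.head_mlp = nn.Sequential(nn.Linear(2 * lstm_dim, mlp_dim), nn.ReLU())\n        self.dep_mlp = nn.Sequential(nn.Linear(2 * lstm_dim, mlp_dim), nn.ReLU())\n        # Bilinear form scoring every possible head-dependent combination.\n        self.bilinear = nn.Parameter(torch.randn(mlp_dim, mlp_dim))\n\n    def forward(self, embeddings):  # (batch, seq_len, emb_dim)\n        states, _ = self.bilstm(embeddings)\n        heads = self.head_mlp(states)\n        deps = self.dep_mlp(states)\n        # scores[b, i, j] = score of an arc from head i to dependent j.\n        return torch.einsum('bim,mn,bjn->bij', heads, self.bilinear, deps)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "2.2"
},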
{
"text": "In the following section we present our negation annotation effort in more detail, including the underlying source of the data. The guidelines we have developed for the annotation of negation cues and scopes in Norwegian are mainly adapted from ConanDoyle-neg (Morante and Daelemans, 2009) , NegPar (Liu et al., 2018) , and the Spanish SFU corpus (Jim\u00e9nez-Zafra et al., 2018), modified to suit Norwegian, and with simplifications that will be discussed below. Note that while the complete set of guidelines is distributed with the corpus, we provide a brief overview below together with examples, also discussing inter-annotator agreement.",
"cite_spans": [
{
"start": 260,
"end": 289,
"text": "(Morante and Daelemans, 2009)",
"ref_id": "BIBREF21"
},
{
"start": 299,
"end": 317,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotations",
"sec_num": "3"
},
{
"text": "The negation annotations described below are added to the existing NoReC fine data set 2 (\u00d8vrelid et al., 2020 ) -a subset of the Norwegian Review Corpus (NoReC) annotated for fine-grained sentiment. The negation layer of the corpus is named NoReC neg . The full NoReC corpus (Velldal et al., 2018) contains professional reviews from several Norwegian online news sites, spanning a range of different domains, like music, literature, products, movies, restaurants, and more. While NoReC contains more than 43,000 full-text reviews, the subset annotated in NoReC fine , and hence also NoReC neg , includes 414 full reviews, comprising 11,346 sentences. Note that there are two official standards for written Norwegian; Bokm\u00e5l (the majority variant) and Nynorsk. While the data set contains a majority of documents written according to the Bokm\u00e5l standard, four Nynorsk documents are also included.",
"cite_spans": [
{
"start": 89,
"end": 110,
"text": "(\u00d8vrelid et al., 2020",
"ref_id": null
},
{
"start": 276,
"end": 298,
"text": "(Velldal et al., 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The underlying corpus",
"sec_num": "3.1"
},
{
"text": "Since our starting point for guideline development is English, we will here discuss linguistic differences between the expression of negation in the two languages. Generally speaking, Norwegian negation does not differ greatly from English. The main means of negating a proposition is by using adverbs, prepositions and quantifiers. The largest differences between the two are syntactic in nature and concern the placement of adverbials, caused by the fact that Norwegian, unlike English, is a V2-language. One clear difference with practical consequences is that certain Norwegian negation cues inflect for grammatical gender and number, notable examples being ingen (ingen, inga, intet) 'no' and l\u00f8s (-l\u00f8s, -l\u00f8st, -l\u00f8se) '-less', as seen in example (2) for the affixally negated (a) meningsl\u00f8st 'meaningless' with the neuter ending, (b) hensynsl\u00f8se 'inconsiderate' with plural inflection, and (c) smakl\u00f8s 'tasteless' with no inflection. This property of Norwegian means that there are likely a larger number of different tokens functioning as cues in Norwegian, as compared to English. The discussion of negation in the Norwegian Reference Grammar (Faarlund et al., 1997) is largely limited to a selected few of the possible cues, e.g., ikke 'not', ingen 'none, no-one' and related forms, and the preposition uten 'without'. Golden et al. (2014) contains a brief comment on lexical negation, where they mention nektende verb 'negating verbs'. They also mention negative polarity items under a discussion of separate words and expressions in negations.",
"cite_spans": [
{
"start": 1150,
"end": 1173,
"text": "(Faarlund et al., 1997)",
"ref_id": null
},
{
"start": 1327,
"end": 1347,
"text": "Golden et al. (2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Negation in Norwegian",
"sec_num": "3.2"
},
{
"text": "A negation cue is a word or a set of words that serve to signal negation. In our annotation scheme we annotate both single token cues, such as adverbs like ikke 'not', aldri 'never', prepositions, e.g., uten 'without', and quantifiers like ingen 'no'. We also annotate multi-word cues, such as (p\u00e5) ingen m\u00e5te, 'in no way', as well as morphological or affixal negation cues, i.e. affixes such as u-'un-/dis-/non-' and -l\u00f8s '-less'. Example 3shows the widely used negative adverb aldri 'never', which scopes over the whole sentence, including the subject Jeg 'I', whereas (4) exemplifies the negative determiner ingen 'no' which occurs in two conjoined noun phrase objects, where both negation cues scope over the following noun as well as the preceding subject and main verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation cues",
"sec_num": "3.3"
},
{
"text": "( 3) Multi-word cues Multi-word cues are negation cues that span more than one token. These may further be discontinuous, as in the case of (h)verken ... eller 'neither ... nor', as seen in example (5). As noted by Morante and Daelemans (2012) , multi-word cues tend to be fixed/idiomatic expressions -an observation that is largely true for Norwegian as well. One practical difference between the annotation scheme in Morante and Daelemans (2009) and ours, is that we omit prepositions and particles related to these expressions, as in (6), in favor of creating less variation that might create noise in the data, especially in cases where multiple prepositions are associated with similar cues and the association is less fixed. Affixal cues We annotate both free-standing and affixal negation cues. The affixal cues form a rather closed group of cues, with the prefix uand the suffix -l\u00f8s being the most common. However, our annotations show that there is lexical variation, with less common cues such as -fri '-free' and -tom '-empty'.",
"cite_spans": [
{
"start": 215,
"end": 243,
"text": "Morante and Daelemans (2012)",
"ref_id": "BIBREF22"
},
{
"start": 419,
"end": 447,
"text": "Morante and Daelemans (2009)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Negation cues",
"sec_num": "3.3"
},
{
"text": "Negation vs. Modality One difficulty in annotating cues is to separate between cases of negation in isolation and cases where negation and modality interact. Cases where modality and negation are inseparable, as in neppe 'barely' are not annotated, but cases of negation where the modality can be separated, either by it scoping over the negation, or the negation scoping over it, were annotated as negations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation cues",
"sec_num": "3.3"
},
{
"text": "Lexical negation As mentioned above, the discussion of lexical negation in a Norwegian context is limited. We borrow the term 'lexical negation' from Jim\u00e9nez-Zafra et al. 2020, who split cues into syntactic, lexical, and morphological/affixal, and use the lexical category to mean words that fall outside the 'syntactic' and more frequent cues, like negative adverbs and determiners. Examples from Norwegian include verbal constructs, e.g., la vaere 'refrain from' or forsvinne 'disappear' as in (7), and nouns such as mangel 'lack'. Lexicalization and idioms The words that are used as negation cues might also have other functions, and are in some cases part of fossilized expressions. The annotators were instructed to refrain from annotating affixal cues that no longer signal negation. Lexicalization, in particular, is a challenge when it comes to affixal negation, as it can be difficult even for native speakers to judge whether something should be treated as a negation or not. Some cases are clearer than others, such as uansett 'regardless', which stems from ansett 'viewed/respected', which it clearly does not negate, on the one hand, and on the other hand usikker 'uncertain', whose non-negated form sikker 'certain' is also frequent. The absence of the non-negated version of the lemma in the language might be a good indicator of lexicalization, and annotators were instructed to avoid annotating such words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation cues",
"sec_num": "3.3"
},
{
"text": "In addition to lexicalized items, there are also cases where a cue can have more than one meaning. One frequent case is the prefix uwith nominal roots, a construction that usually results in nouns meaning bad x, as in u\u00e5r lit. 'un-year', which means 'a bad year', or uvenn, lit. 'unfriend', meaning 'enemy'. The annotators were instructed to try and dismantle the word in order to see if the word made sense without the negative prefix, in which case it would indicate that it is not completely lexicalized. Even so, these are often difficult judgements for the annotators to make. Furthermore, nominalizations of negated adjectives, such as uttrykksl\u00f8shet 'expressionlessness' and umenneskelighet 'inhumanity' were not to be annotated. Table 1 presents the ten most common cues found in the corpus, where we find both affixal and single token cues. We see that variation in the data is further caused by spelling differences. The adverb ikke 'not' can also be used affixally, often, but not always, with a hyphen, as in ikke-produksjonsklart 'not-production-ready'. The variation is also due in part to the two language varieties present in the dataset, e.g in the case of Bokm\u00e5l ikke 'not' and Nynorsk ikkje 'not'.",
"cite_spans": [],
"ref_spans": [
{
"start": 737,
"end": 744,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Negation cues",
"sec_num": "3.3"
},
{
"text": "The scope of a negation is the part of a sentence that has its truth value inverted by the presence of a negation cue. In our annotation scheme, cues are never part of the scope. Subjects are included in the scope if the negation scopes over the main verb, which usually means that the whole proposition is negated, and if the subject or object of a sentence is negated by a determiner or similar, the whole sentence is in the scope, apart from certain fixed elements discussed below. Phrase linking conjunctions are not included. Furthermore, scopes tend be discontinuous. In many cases this is simply due to the the fact that in most sentences, the subject precedes the negation cue, while the predicate follows it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation scopes",
"sec_num": "3.4"
},
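{
"text": "To make the annotation targets concrete, a negation instance can be represented as a cue together with a possibly discontinuous list of scope spans. The following is a hypothetical minimal sketch; the released corpus uses its own format, and the field names here are illustrative.\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass Negation:\n    # (start, end) token spans within the sentence; both cues and scopes\n    # may consist of several disjoint spans.\n    cue_spans: list\n    scope_spans: list = field(default_factory=list)  # empty = implicit scope\n\n    def is_discontinuous(self):\n        # True for scopes split into several pieces, e.g. when the subject\n        # precedes the cue and the predicate follows it.\n        return len(self.scope_spans) > 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation scopes",
"sec_num": "3.4"
},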
{
"text": "Frequency Amb . Rate ikke not 1,364 3 u-un-/dis-/non-514 83 uten without 190 0 ingen none/nobody 134 0 -l\u00f8s -less 123 5 aldri never 95 6 mangle lack 43 14 ingenting nothing 23 0 ikkje not 23 0 verken neither 21 30 Table 1 : List of the 10 most common cues found in the corpus, their translation to English, their frequency as a cue, as well as their ambiguity rate (Amb. Rate), which is defined as 1\u2212 (the frequency as a cue / the absolute frequency) \u00d7100.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 261,
"text": ". Rate ikke not 1,364 3 u-un-/dis-/non-514 83 uten without 190 0 ingen none/nobody 134 0 -l\u00f8s -less 123 5 aldri never 95 6 mangle lack 43 14 ingenting nothing 23 0 ikkje not 23 0 verken neither 21 30 Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cue",
"sec_num": null
},
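{
"text": "For clarity, the ambiguity rate in Table 1 follows directly from the formula in the caption. The sketch below is a minimal illustration, with verken as a worked example grounded in the table (21 occurrences as a cue, ambiguity rate 30, hence 30 occurrences in total).\n\ndef ambiguity_rate(cue_freq, total_freq):\n    # Percentage of occurrences in which the word does NOT act as a\n    # negation cue: (1 - cue_freq / total_freq) * 100.\n    return (1 - cue_freq / total_freq) * 100\n\nassert round(ambiguity_rate(21, 30)) == 30  # verken 'neither'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue",
"sec_num": null
},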
{
"text": "Implicit scope The scope of a cue can be implicit, meaning it is understood from the context. In practice the scope is often expressed in a sentence before or after the cue itself. This is in particular the case with the interjection nei 'no', which usually refers back to the proposition it negates. Since our annotation does not span across sentence boundaries, the scope is annotated as implicit in these types of cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue",
"sec_num": null
},
{
"text": "Subordinate clauses If the negation cue modifies a verb in a subordinate clause, the whole subordinate clause, except the initial subjunction, is part of the scope, see (8) below. Modifying subjects and objects If a cue, typically a determiner, modifies the subject or the object of a sentence, the whole clause that contains that subject or object is part of the scope, as in (9) below. Note that certain elements, such as subjunctions, conjunctions and sentence adverbs might still not be included. Cue as subject or object In cases where the subject or object are also neagtion cues, the cue is not included in the scope, see (10). Exception items The annotation of exception items, such as untatt 'except' and bortsett (fra) 'except (for)' depends on whether they are within the scope of a negation cue or not. When the item is not within the scope of another cue, it incurs a negation, as in (11). This closely follows the annotation found in Morante and Daelemans (2012) and Liu et al. (2018) . 'Sport seats -which give good support, except for the thigh support for tall people'",
"cite_spans": [
{
"start": 948,
"end": 976,
"text": "Morante and Daelemans (2012)",
"ref_id": "BIBREF22"
},
{
"start": 981,
"end": 998,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cue",
"sec_num": null
},
{
"text": "When exception items are found within the scope of another negation cue, however, they remove the elements they scope over from the scope of the other negation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue",
"sec_num": null
},
{
"text": "Sentential adverbs and adverbs scoping over negation Two types of adverbs pose certain challenges: sentential adverbs and adverbs that indicate modality. Sentential adverbs such as heldigvis 'fortunately' as in (12) are not part of the propositional value of a sentence, but rather function to comment on it (Faarlund et al., 1997 Modal adverbs such as kanskje 'maybe' can occur both within and outside of the scope of a negation cue, and in these cases the annotators were asked to paraphrase in order to pinpoint the placement of these adverbs.",
"cite_spans": [
{
"start": 308,
"end": 330,
"text": "(Faarlund et al., 1997",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cue",
"sec_num": null
},
{
"text": "Negation raising Negation raising is the phenomenon where a negator is \"raised\" further up in a syntactic tree, which in the case of Norwegian means further towards the beginning of a sentence. What characterizes these types of constructions is that the negation is adjacent to the verb in the main sentence, even though the negation only scopes over a subsequent subordinate clause. This happens frequently in Norwegian, as in English, with mental state verbs like mene 'think', tro 'believe', as in (13). 13 Expletive subjects In Norwegian, as in other Scandinavian languages, there are several types of linguistic constructions that involve an expletive subject. A commonly used mechanism in these languages is extraposition, where a clausal argument is postposed, and a formal, semantically void subject det 'it' or der 'there' functions as the syntactic subject, as in (14). Here we do not treat the expletive subject as the subject of the negated proposition, instead only the extraposed subordinate clause is in scope of the negation. Since det 'it' is ambiguous in the sense that it can, in fact, also be referential, the annotators have to assess referentiality during annotation. Negation in conditional, interrogative, and imperative sentences In the annotation scheme of Morante and Daelemans (2012) , they do not annotate negation in non-factual sentences, i.e., conditional, interrogative and imperative sentences. We have chosen to include all negation regardless of its factuality. We believe that negation has implications beyond asserting the factuality of a proposition, and it can be useful for sentiment analysis, among other tasks. For instance, in example (15), the negation is under the scope of the conditional hvis 'if', but is still marked, even though it is not a factual proposition. Negative polarity items (NPIs) NPIs are lexical entities that are used together with negation cues, and which usually render the sentence ungrammatical should the negation cues be removed without further change. In our annotation scheme, they are contained within the scope of the negation cue. In Norwegian, the negative adverb ikke 'not' in combination with the determiner noe/noen 'some/any' is a common negative polarity item. However, the most common type of NPIs are adverbs such as i det hele tatt 'at all', as in (16), that serve to strengthen the negation. Foreign language citations The annotated texts frequently contain titles of various products, such as 'Never Run Away'. These cases of foreign language negation cues are not annotated.",
"cite_spans": [
{
"start": 1283,
"end": 1311,
"text": "Morante and Daelemans (2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cue",
"sec_num": null
},
{
"text": "Negation cues not indicating negation It is not uncommon for negation cues to be part of expressions that do not indicate negation in combination, e.g., certain fixed expressions such as hvis ikke 'otherwise'. Other borderline cases such as the focus marker ikke bare 'not only' and the expression ingen tvil 'no doubt', were included after discussion, as they are analyzed as introducing a negated reading.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue",
"sec_num": null
},
{
"text": "Affixal scope The scope of affixal items is annotated in a slightly different way compared to other cues. If an affixally negated adjective is the predicate, then the whole sentence is included within its scope. If it is part of a noun phrase, then only that noun phrase is inside the scope. Additional adjectives or adverbs in the sentence fall outside the scope, as in (17). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue",
"sec_num": null
},
{
"text": "The annotation was performed by several hired student research assistants with a background in linguistics and with Norwegian as their native language. All 414 documents in the original dataset, comprising 11,346 sentences, were annotated independently by two annotators in parallel. The doubly annotated documents were then adjudicated by a third annotator after a final round of discussions concerning difficult cases. Annotators had the possibility to discuss any potential problems during both the annotation and adjudication period, but were encouraged to follow the guidelines as strictly as possible. The annotation and adjudication were both performed using the webbased annotation tool Brat (Stenetorp et al., 2012) .",
"cite_spans": [
{
"start": 700,
"end": 724,
"text": "(Stenetorp et al., 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.5"
},
{
"text": "We have measured the inter-annotator agreement over the full (doubly annotated) dataset in terms of both F 1 and \u03ba scores for cues, full scopes, and scope tokens. The scores show that annotators agree to a very high degree on the identification of cues (0.995 F 1 , 0.841 \u03ba). When it comes to negation scopes, the agreement is lower when measured towards full and exact spans (0.632 F 1 , 0.34 \u03ba), but quite high when measured on the tokenlevel (0.912 F 1 , 0.803 \u03ba).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-annotator agreement",
"sec_num": "3.6"
},
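{
"text": "As an illustration of how the token-level figures above can be computed, the following is a minimal sketch assuming per-token binary in-scope labels from the two annotators, concatenated over all sentences; it is not the project's actual evaluation script.\n\nfrom sklearn.metrics import cohen_kappa_score, f1_score\n\ndef token_level_agreement(labels_a, labels_b):\n    # labels_a, labels_b: parallel 0/1 lists, with 1 marking tokens inside\n    # a negation scope; annotator B is treated as the reference for F1.\n    return f1_score(labels_b, labels_a), cohen_kappa_score(labels_a, labels_b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-annotator agreement",
"sec_num": "3.6"
},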
{
"text": "Due to the adjudication phase of the annotation process, we also have insight into the sources of disagreements between the annotators. As noted above, agreement between annotators is generally high when it comes to cue detection, but surprising disagreements can be seen. These are most likely due to the guidelines being improved as the annotations continued to uncover new challenges. There seems to be a clear tendency for annotators to disagree on less common cues, such as verbs and nouns that indicate negation, as opposed to the more often discussed adverbs and determiners. The annotators rarely agreed on less frequent lexical items such as forsvinne 'disappear' and takke nei til 'say no to'. However, the disagreements also reflect discussions concerning the inclusion or omission of prepositions, in addition to cue span errors. Annotators generally agree on the more frequent cues. The prefix u-'un-/dis-/non-', seems to have a disproportionately large disagreement score, but discussions among the annotators indicate that this is likely due to prefixes being more difficult to detect when annotating than isolated whole-word tokens. Disagreement is also found regarding modal elements, such as knapt 'barely' (almost not) and for...til 'too...to' (cannot be). Table 2 summarizes the statistics for the final annotated data set. Of the 11,346 sentences in the corpus, we see that just above 20% of them are negated. Out of the negated sentences, 13% contain multiple instances of negation. While, as expected, the number of tokens in a cue averages to 1, the average length of scopes is close to 7 (with a maximum observed length of 53). Note, however, that a small number of cues (1.4%) also have empty ('null') scopes. We report both any kind of discontinuous scopes (disc.) and true discontinuous scopes (true disc.), where the latter does not count scopes which are only discontinuous because of an intervening cue. While discontinuous scopes are very frequent (70% of scopes), truly discontinuous scopes are much fewer (21%). We see that affixal negation is quite widespread in NoReC neg , comprising almost 25% of the cues. Moreover, just above 11% are multi-word cues. While most cues are not particularly ambiguous, e.g., ikke 'not', uten 'without', others, such as u-'un-/dis-', mangle 'lack' or verken 'neither' can have rather high rates of ambiguity (meaning that they can occur with both negated and non-negated readings).",
"cite_spans": [],
"ref_spans": [
{
"start": 1276,
"end": 1283,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inter-annotator agreement",
"sec_num": "3.6"
},
{
"text": "In order to benchmark the dataset, we use the semantic graph parsing approach to negation detection proposed by Kurtz et al. (2020) , see Section 2. Besides the baseline graph representation originally proposed (point-to-root), where all elements of the scope have arcs that point to the cue, we propose several variants. For head-first, we set the first token of the cue as the root node, and similarly set the first token in the scope as the head of the span. All elements within the span have arcs that point to the head, and heads have arcs that point to the root. head-final is similar, but instead sets the final tokens of spans as the heads. There can be several roots per sequence and not all tokens are connected. Finally, we enrich the dependency labels to distinguish edges that are internal to a holder/target/expression span from those that are external and perform experiments by adding an 'in label' to non-head nodes within the graph, which we call +inlabel. ,543 1,768 2,025 1 3 19 228 508 1,995 6.9 44 1,403 423 30 dev 1,531 301 342 1 2 0 39 88 339 7.1 53 236 85 3 test 1,272 263 305 1 2 2 37 69 301 6.5 27 203 58 4 total 11,346 2,332 2,672 1 3 21 304 665 2,635 6.9 53 1,842 566 37 Table 2 : Statistics of the dataset -per split and in total -including total number of sentences (#), number of sentences that contain negation (neg.), as well as the number (#) of cues and scopes, along with their average and maximum lengths in tokens. Additionally, we include the number of discontinuous cues and scopes (disc.) as well as true discontinuous (true disc.) for scopes which we discuss in Section 4. Finally, we detail the number of sentences that have multiple cues (mult.), the number of affixal cues, and the number of cues that have no scope (null).",
"cite_spans": [
{
"start": 112,
"end": 131,
"text": "Kurtz et al. (2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 975,
"end": 1258,
"text": ",543 1,768 2,025 1 3 19 228 508 1,995 6.9 44 1,403 423 30 dev 1,531 301 342 1 2 0 39 88 339 7.1 53 236 85 3 test 1,272 263 305 1 2 2 37 69 301 6.5 27 203 58 4 total 11,346 2,332 2,672 1 3 21 304 665 2,635 6.9 53 1,842 566 37 Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modeling approach",
"sec_num": "5.1"
},
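{
"text": "The following is a minimal sketch of the point-to-root representation described above, mapping a negation's cue and scope token indices to labeled dependency arcs. It reflects our reading of the representation; the arc labels and the handling of multi-token cues are assumptions, and head-first/head-final differ only in which token heads each span.\n\ndef point_to_root_arcs(cue_tokens, scope_tokens):\n    # cue_tokens / scope_tokens: sorted token indices (1-based) of the\n    # cue and its possibly discontinuous scope.\n    root = cue_tokens[0]\n    arcs = [(0, root, 'cue')]  # artificial root node 0 governs the cue\n    # Remaining tokens of a multi-token cue attach to the first cue token.\n    arcs += [(root, c, 'mwc') for c in cue_tokens[1:]]\n    # Every scope token points directly to the cue token.\n    arcs += [(root, t, 'scope') for t in scope_tokens]\n    return arcs\n\n# Example: cue at token 3, discontinuous scope over tokens 1-2 and 4-6.\n# point_to_root_arcs([3], [1, 2, 4, 5, 6])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling approach",
"sec_num": "5.1"
},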
{
"text": "The negation parser is evaluated using the metrics from the *SEM 2012 shared task (Morante and Blanco, 2012) : cue-level F 1 (CUE), scope token F 1 over individual tokens (ST), and the full negation F 1 (FN) metric. In contrast to the *SEM 2012 shared task we do not annotate negated events, meaning that FN only requires an exact match of the negation's cue(s) and, if present, all its scope tokens. We run each experiment five times with different random seeds and report an averaged F 1 score and its standard deviation in Table 3 . The simplest graph representation point-to-root generally performs best, most visibly in FN F 1 (66.8). We attribute the variation in performance to a loss of information in the head-first and headfinal variants, making it impossible to retrieve the correct governing negation cue for partially overlapping scopes, thus lowering the score.",
"cite_spans": [
{
"start": 82,
"end": 108,
"text": "(Morante and Blanco, 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 526,
"end": 533,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
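{
"text": "For concreteness, below is a minimal sketch of the two span metrics, under the simplifying assumption that gold and predicted negations are already aligned; the official *SEM scorer additionally handles alignment and multiple negations per sentence.\n\ndef f1(tp, n_pred, n_gold):\n    p = tp / n_pred if n_pred else 0.0\n    r = tp / n_gold if n_gold else 0.0\n    return 2 * p * r / (p + r) if p + r else 0.0\n\ndef scope_token_f1(gold_tokens, pred_tokens):\n    # gold_tokens / pred_tokens: sets of (sentence_id, token_id) pairs\n    # lying inside some negation scope.\n    return f1(len(gold_tokens & pred_tokens), len(pred_tokens), len(gold_tokens))\n\ndef full_negation_match(gold_neg, pred_neg):\n    # FN credits a prediction only if the cue and, if present, all of its\n    # scope tokens match the gold negation exactly.\n    return gold_neg.cue == pred_neg.cue and gold_neg.scope == pred_neg.scope",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},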
{
"text": "In order to see whether these performance differences are statistically significant, we perform bootstrap significance testing (Berg-Kirkpatrick et al., 2012) resampling the test set 10 6 times while setting the significance threshold to p = 0.05.",
"cite_spans": [
{
"start": 127,
"end": 158,
"text": "(Berg-Kirkpatrick et al., 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
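{
"text": "A minimal sketch of this bootstrap test, comparing two systems by resampling the test set with replacement. The metric functions and their signatures are assumptions; the shifted null hypothesis follows Berg-Kirkpatrick et al. (2012).\n\nimport random\n\ndef bootstrap_p_value(sentences, metric_a, metric_b, n_resamples=10**6):\n    # Observed score difference between the two systems on the full test set.\n    delta = metric_a(sentences) - metric_b(sentences)\n    count = 0\n    for _ in range(n_resamples):\n        sample = random.choices(sentences, k=len(sentences))  # with replacement\n        # Count resamples in which system A beats B by more than twice the\n        # observed difference (the shifted null of Berg-Kirkpatrick et al.).\n        if metric_a(sample) - metric_b(sample) > 2 * delta:\n            count += 1\n    return count / n_resamples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},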
{
"text": "Comparing point-to-root to head-first and headfinal shows that while the differences seem substantial they are not statistically significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "A manual error analysis on point-to-root shows that the model tends not to predict infrequent cues, e.g., null 'zero', istedenfor 'instead-of', savnet 'missing', while it overpredicts frequent cues, e.g., ikke 'not', ingen 'no', as well as overgeneralizing the affixal negation u-'un-/dis-/non-' to other words that begin with 'u', but are not negated, e.g., utfrika 'freaked-out', unnagjort 'finished'. The model also tends to predict slightly shorter scopes (an average of 6.5 tokens for predicted scopes ver-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "ST FN point-to-root 93.4 (0.5) 83.6 (0.7) 66.8 (0.8) head-first 92.7 (0.3) 81.9 (1.4) 65.5 (0.6) +inlabel 92.7 (0.7) 81.8 (1.0) 65.0 (2.2) head-final 92.7 (0.6) 82.7 (1.8) 64.8 (3.1) +inlabel 93.1 (0.3) 82.2 (1.5) 65.8 (0.8) Table 3 : Results of our negation parser using the various graph representations. The results are averaged over 5 runs, additionally reporting standard deviation.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 232,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "CUE",
"sec_num": null
},
{
"text": "sus 6.7 for gold scopes), while the most common scope-related errors derive from discontinuous scopes, where the model fails on 75.4%. These errors are often due to inversions with the expletive 'det', which is not considered in scope. Although rare (4 examples in test), multi-word cues are also challenging, and the graph model only correctly predicted one of the four. Finally, affixal cues can pose a challenge as well, with the model failing on 67.1% of the sentences containing affixal negation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CUE",
"sec_num": null
},
{
"text": "As mentioned previously, the underlying corpus NoReC fine is annotated for fine-grained sentiment, including opinion holders, targets, sentiment expressions, and positive/negative polarity. The fact that negation is among the most important compositional phenomena that can affect sentiment in terms of shifting polarity values motivated the choice of this particular dataset for adding the negation annotations. In future work we plan to further investigate the co-dependencies between negation and sentiment, both through analyzing the existing annotations and through joint modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6"
},
{
"text": "This paper has introduced the first annotated dataset of negation for Norwegian, NoReC neg , where negation cues and their corresponding insentence scopes have been annotated across more than 11K sentences spanning more than 400 documents; a subset of the Norwegian Review Corpus (NoReC). In addition to providing in-depth discussion of the annotation guidelines, we have also presented a first set of benchmark results based on a graph-parsing approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "This work has been carried out as part of the SANT project (Sentiment Analysis for Norwegian Text), funded by the Research Council of Norway (grant number 270908). We also want to express our gratitude to the annotators: Anders Naess Evensen, Helen \u00d8rn Gjerdrum, Petter Maehlum, Lilja Charlotte Storset, Carina Thanh-Tam Truong, and Alexandra Wittemann.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "ContextD: an algorithm to identify contextual properties of medical terms in a Dutch clinical corpus",
"authors": [
{
"first": "Zubair",
"middle": [],
"last": "Afzal",
"suffix": ""
},
{
"first": "Ewoud",
"middle": [],
"last": "Pons",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Cjm",
"middle": [],
"last": "Miriam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sturkenboom",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Martijn",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"A"
],
"last": "Schuemie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kors",
"suffix": ""
}
],
"year": 2014,
"venue": "BMC bioinformatics",
"volume": "15",
"issue": "1",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zubair Afzal, Ewoud Pons, Ning Kang, Miriam CJM Sturkenboom, Martijn J Schuemie, and Jan A Kors. 2014. ContextD: an algorithm to identify contextual properties of medical terms in a Dutch clinical cor- pus. BMC bioinformatics, 15(1):1-12.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The scope and focus of negation: A complete annotation framework for Italian",
"authors": [
{
"first": "Bego\u00f1a",
"middle": [],
"last": "Altuna",
"suffix": ""
},
{
"first": "Anne-Lyse",
"middle": [],
"last": "Minard",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Speranza",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Workshop Computational Semantics Beyond Events and Roles",
"volume": "",
"issue": "",
"pages": "34--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bego\u00f1a Altuna, Anne-Lyse Minard, and Manuela Sper- anza. 2017. The scope and focus of negation: A complete annotation framework for Italian. In Pro- ceedings of the Workshop Computational Semantics Beyond Events and Roles, pages 34-42, Valencia, Spain.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An Empirical Investigation of Statistical Significance in NLP",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Burkett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "995--1005",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An Empirical Investigation of Statisti- cal Significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995-1005, Jeju Island, Korea.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Negation detection in clinical reports written in german",
"authors": [
{
"first": "Viviana",
"middle": [],
"last": "Cotik",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Feiyu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Klemens",
"middle": [],
"last": "Budde",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Schmidt",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM2016)",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viviana Cotik, Roland Roller, Feiyu Xu, Hans Uszko- reit, Klemens Budde, and Danilo Schmidt. 2016. Negation detection in clinical reports written in ger- man. In Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM2016), pages 115-124.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "What's great and what's not: learning to classify the scope of negation for improved sentiment analysis",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Councill",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Leonid",
"middle": [],
"last": "Velikovich",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Workshop on Negation and Speculation in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "51--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isaac Councill, Ryan McDonald, and Leonid Ve- likovich. 2010. What's great and what's not: learn- ing to classify the scope of negation for improved sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, pages 51-59, Uppsala, Sweden.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "How Certain are Clinical Assessments? Annotating Swedish Clinical Text for (Un) certainties, Speculations and Negations",
"authors": [
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Velupillai",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hercules Dalianis and Sumithra Velupillai. 2010. How Certain are Clinical Assessments? Annotating Swedish Clinical Text for (Un) certainties, Specula- tions and Negations. In Proceedings of the Seventh International Conference on Language Resources and Evaluation.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Annotating negation in Spanish clinical texts",
"authors": [
{
"first": "Noa",
"middle": [
"P."
],
"last": "Cruz Diaz",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Manuel",
"middle": [
"J."
],
"last": "Mana L\u00f3pez",
"suffix": ""
},
{
"first": "Jacinto",
"middle": [],
"last": "Mata V\u00e1zquez",
"suffix": ""
},
{
"first": "Carlos",
"middle": [
"L."
],
"last": "Parra Calder\u00f3n",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the workshop computational semantics beyond events and roles",
"volume": "",
"issue": "",
"pages": "53--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noa P Cruz Diaz, Roser Morante, Manuel J Mana L\u00f3pez, Jacinto Mata V\u00e1zquez, and Carlos L Parra Calder\u00f3n. 2017. Annotating negation in Spanish clinical texts. In Proceedings of the workshop com- putational semantics beyond events and roles, pages 53-58, Valencia, Spain.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Simpler but more accurate semantic dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "484--490",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2077"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 2: Short Papers), pages 484-490, Mel- bourne, Australia.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An open-source tool for negation detection: a maximum-margin approach",
"authors": [
{
"first": "Martine",
"middle": [],
"last": "Enger",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the EACL workshop on Computational Semantics Beyond Events and Roles (SemBEaR)",
"volume": "",
"issue": "",
"pages": "64--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martine Enger, Erik Velldal, and Lilja \u00d8vrelid. 2017. An open-source tool for negation detection: a maximum-margin approach. In Proceedings of the EACL workshop on Computational Semantics Be- yond Events and Roles (SemBEaR), pages 64-69, Valencia, Spain.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Svein Lie, and Kjell Ivar Vannebo. 1997. Norsk referansegrammatikk. Universitetsforlaget",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Terje",
"suffix": ""
},
{
"first": "Faarlund",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Terje Faarlund, Svein Lie, and Kjell Ivar Vannebo. 1997. Norsk referansegrammatikk. Universitetsfor- laget, Oslo, Norway.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural networks for negation scope detection",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "495--504",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1047"
]
},
"num": null,
"urls": [],
"raw_text": "Federico Fancellu, Adam Lopez, and Bonnie Webber. 2016. Neural networks for negation scope detection. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics, pages 495- 504, Berlin, Germany.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Detecting negation scope is easy, except when it isn't",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Hangfeng",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "58--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Fancellu, Adam Lopez, Bonnie Webber, and Hangfeng He. 2017. Detecting negation scope is easy, except when it isn't. In Proceedings of the 15th Conference of the European Chapter of the Associ- ation for Computational Linguistics, pages 58-63, Valencia, Spain.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Norsk som fremmedspr\u00e5k: Grammatikk, 4 edition",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Golden",
"suffix": ""
},
{
"first": "Kirsti",
"middle": [
"Mac"
],
"last": "Donald",
"suffix": ""
},
{
"first": "Else",
"middle": [],
"last": "Ryen",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Golden, Kirsti Mac Donald, and Else Ryen. 2014. Norsk som fremmedspr\u00e5k: Grammatikk, 4 edition. Universitetsforlaget, Oslo, Norway.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mar\u00eda Teresa Mart\u00edn-Valdivia, and L Alfonso Ure\u00f1a-L\u00f3pez. 2020. Corpora annotated with negation: An overview",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Salud Mar\u00eda Jim\u00e9nez-Zafra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Morante",
"suffix": ""
}
],
"year": null,
"venue": "Computational Linguistics",
"volume": "46",
"issue": "1",
"pages": "1--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salud Mar\u00eda Jim\u00e9nez-Zafra, Roser Morante, Mar\u00eda Teresa Mart\u00edn-Valdivia, and L Alfonso Ure\u00f1a- L\u00f3pez. 2020. Corpora annotated with negation: An overview. Computational Linguistics, 46(1):1-52.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SFU Review SP-NEG: a Spanish corpus annotated with negation for sentiment analysis. a typology of negation patterns. Language Resources and Evaluation",
"authors": [
{
"first": "Salud",
"middle": [
"Mar\u00eda"
],
"last": "Jim\u00e9nez-Zafra",
"suffix": ""
},
{
"first": "Mariona",
"middle": [],
"last": "Taul\u00e9",
"suffix": ""
},
{
"first": "M.",
"middle": [
"Teresa"
],
"last": "Mart\u00edn-Valdivia",
"suffix": ""
},
{
"first": "L.",
"middle": [
"Alfonso"
],
"last": "Urena-L\u00f3pez",
"suffix": ""
},
{
"first": "M.",
"middle": [
"Ant\u00f3nia"
],
"last": "Mart\u00ed",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "52",
"issue": "",
"pages": "533--569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salud Mar\u00eda Jim\u00e9nez-Zafra, Mariona Taul\u00e9, M Teresa Mart\u00edn-Valdivia, L Alfonso Urena-L\u00f3pez, and M Ant\u00f3nia Mart\u00ed. 2018. SFU Review SP-NEG: a Spanish corpus annotated with negation for senti- ment analysis. a typology of negation patterns. Lan- guage Resources and Evaluation, 52(2):533-569.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A review corpus annotated for negation, speculation and their scope",
"authors": [
{
"first": "Natalia",
"middle": [],
"last": "Konstantinova",
"suffix": ""
},
{
"first": "Sheila",
"middle": [
"C.M."
],
"last": "De Sousa",
"suffix": ""
},
{
"first": "Noa",
"middle": [
"P."
],
"last": "Cruz",
"suffix": ""
},
{
"first": "Manuel",
"middle": [
"J."
],
"last": "Ma\u00f1a",
"suffix": ""
},
{
"first": "Maite",
"middle": [],
"last": "Taboada",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "3190--3195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalia Konstantinova, Sheila C.M. de Sousa, Noa P. Cruz, Manuel J. Ma\u00f1a, Maite Taboada, and Ruslan Mitkov. 2012. A review corpus annotated for nega- tion, speculation and their scope. In Proceedings of the 8th International Conference on Language Re- sources and Evaluation, pages 3190-3195, Istanbul, Turkey.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "End-to-end negation resolution as graph parsing",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Kurtz",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies",
"volume": "",
"issue": "",
"pages": "14--24",
"other_ids": {
"DOI": [
"10.18653/v1/2020.iwpt-1.3"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Kurtz, Stephan Oepen, and Marco Kuhlmann. 2020. End-to-end negation resolution as graph pars- ing. In Proceedings of the 16th International Con- ference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 14-24, Online.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "UiO2: Sequence-labeling negation using dependency features",
"authors": [
{
"first": "Emanuele",
"middle": [],
"last": "Lapponi",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "319--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emanuele Lapponi, Erik Velldal, Lilja \u00d8vrelid, and Jonathon Read. 2012. UiO2: Sequence-labeling negation using dependency features. In Proceed- ings of the First Joint Conference on Lexical and Computational Semantics, pages 319-327, Mon- treal, Canada.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "NegPar: A parallel corpus annotated for negation",
"authors": [
{
"first": "Qianchu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "3464--3472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qianchu Liu, Federico Fancellu, and Bonnie Webber. 2018. NegPar: A parallel corpus annotated for negation. In Proceedings of the 11th International Conference on Language Resources and Evaluation, pages 3464-3472, Miyazaki, Japan.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Annotating the Focus of Negation in Japanese Text",
"authors": [
{
"first": "Suguru",
"middle": [],
"last": "Matsuyoshi",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Otsuki",
"suffix": ""
},
{
"first": "Fumiyo",
"middle": [],
"last": "Fukumoto",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1743--1750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suguru Matsuyoshi, Ryo Otsuki, and Fumiyo Fuku- moto. 2014. Annotating the Focus of Negation in Japanese Text. In Proceedings of the Ninth Interna- tional Conference on Language Resources and Eval- uation, pages 1743-1750, Reykjavik, Iceland.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "*SEM 2012 shared task: Resolving the scope and focus of negation",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "265--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Eduardo Blanco. 2012. *SEM 2012 shared task: Resolving the scope and focus of negation. In Proceedings of the First Joint Con- ference on Lexical and Computational Semantics (*SEM), pages 265-274, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A metalearning approach to processing the scope of negation",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Walter Daelemans. 2009. A meta- learning approach to processing the scope of nega- tion. In Proceedings of the 13th Conference on Computational Natural Language Learning, Boul- der, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "ConanDoyle-neg: Annotation of negation cues and their scope in Conan Doyle stories",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Walter Daelemans. 2012. ConanDoyle-neg: Annotation of negation cues and their scope in Conan Doyle stories. In Pro- ceedings of the 8th International Conference on Language Resources and Evaluation, Istanbul, Turkey.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A fine-grained sentiment dataset for Norwegian",
"authors": [
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Petter",
"middle": [],
"last": "Maehlum",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "5025--5033",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lilja \u00d8vrelid, Petter Maehlum, Jeremy Barnes, and Erik Velldal. 2020. A fine-grained sentiment dataset for Norwegian. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5025- 5033, Marseille, France.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Simple negation scope resolution through deep parsing: A semantic solution to a semantic problem",
"authors": [
{
"first": "Woodley",
"middle": [],
"last": "Packard",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Dridan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Woodley Packard, Emily M. Bender, Jonathon Read, Stephan Oepen, and Rebecca Dridan. 2014. Simple negation scope resolution through deep parsing: A semantic solution to a semantic problem. In Pro- ceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics, Baltimore, USA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bioinfer: a corpus for information extraction in the biomedical domain",
"authors": [
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Juho",
"middle": [],
"last": "Heimonen",
"suffix": ""
},
{
"first": "Jari",
"middle": [],
"last": "Bj\u00f6rne",
"suffix": ""
},
{
"first": "Jorma",
"middle": [],
"last": "Boberg",
"suffix": ""
},
{
"first": "Jouni",
"middle": [],
"last": "J\u00e4rvinen",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
}
],
"year": 2007,
"venue": "BMC bioinformatics",
"volume": "8",
"issue": "1",
"pages": "1--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sampo Pyysalo, Filip Ginter, Juho Heimonen, Jari Bj\u00f6rne, Jorma Boberg, Jouni J\u00e4rvinen, and Tapio Salakoski. 2007. Bioinfer: a corpus for information extraction in the biomedical domain. BMC bioinfor- matics, 8(1):1-24.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Speculation and negation scope detection via convolutional neural networks",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Qian, P. Li, Q. Zhu, G. Zhou, Z. Luo, and W. Luo. 2016. Speculation and negation scope detection via convolutional neural networks. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, Austin, Texas, USA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "UiO1: Constituent-based discriminative ranking for negation resolution",
"authors": [
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathon Read, Erik Velldal, Lilja \u00d8vrelid, and Stephan Oepen. 2012. UiO1: Constituent-based discriminative ranking for negation resolution. In Proceedings of the First Joint Conference on Lex- ical and Computational Semantics (*SEM), Mon- treal, Canada.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Negation detection in Swedish clinical text: An adaption of NegEx to Swedish",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Skeppstedt",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Biomedical Semantics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Skeppstedt. 2011. Negation detection in Swedish clinical text: An adaption of NegEx to Swedish. Journal of Biomedical Semantics, 2 Suppl 3.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "BRAT: A Web-based Tool for NLPassisted Text Annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsu- jii. 2012. BRAT: A Web-based Tool for NLP- assisted Text Annotation. In Proceedings of the Demonstrations at the 13th Conference of the Euro- pean Chapter of the Association for Computational Linguistics, pages 102-107, Avignon, France.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Samia Touileb, and Fredrik J\u00f8rgensen",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Eivind",
"middle": [
"Alexander"
],
"last": "Bergem",
"suffix": ""
},
{
"first": "Cathrine",
"middle": [],
"last": "Stadsnes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th edition of the Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4186--4191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Velldal, Lilja \u00d8vrelid, Eivind Alexander Bergem, Cathrine Stadsnes, Samia Touileb, and Fredrik J\u00f8r- gensen. 2018. NoReC: The Norwegian Review Cor- pus. In Proceedings of the 11th edition of the Lan- guage Resources and Evaluation Conference, pages 4186-4191, Miyazaki, Japan.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes",
"authors": [
{
"first": "V",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Szarvas",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "M\u00f3ra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Csirik",
"suffix": ""
}
],
"year": 2008,
"venue": "BMC bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Vincze, G. Szarvas, R. Farkas, G. M\u00f3ra, and J. Csirik. 2008. The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes. BMC bioinformatics, Suppl 11.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "UWashington: Negation resolution using machine learning methods",
"authors": [
{
"first": "James",
"middle": [],
"last": "Paul White",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Paul White. 2012. UWashington: Negation res- olution using machine learning methods. In Pro- ceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM), Montreal, Canada.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Research on chinese negation and speculation: corpus annotation and identification",
"authors": [
{
"first": "Bowei",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2016,
"venue": "Frontiers of Computer Science",
"volume": "10",
"issue": "6",
"pages": "1039--1051",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bowei Zou, Guodong Zhou, and Qiaoming Zhu. 2016. Research on chinese negation and speculation: cor- pus annotation and identification. Frontiers of Com- puter Science, 10(6):1039-1051.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "not completely credible.'",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "(a) [...] blir ganske meningsl\u00f8st (b) [...] hensynsl\u00f8se regnskog-\u00f8deleggere (c) [...] men ikke smakl\u00f8s .",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "(7) . . . [Irritasjonen] . . . the.irritation forsvant disappeared da when maten the.food kom arrived . . '. . . The irritation disappeared when the food arrived.'",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Selbekk shows no mercy.'",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "is tougher than Reagan.'",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF6": {
"text": "sing at all.'",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF7": {
"text": "are unknown faces to us.'",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"content": "<table><tr><td>(6) Og</td><td>mest</td><td>av</td><td>alt</td><td colspan=\"2\">fravaeret</td><td>av</td><td>[mer</td></tr><tr><td>And</td><td>most</td><td>of</td><td>all</td><td colspan=\"2\">the.absence</td><td>of</td><td>more</td></tr><tr><td>enn</td><td>bare</td><td>et</td><td colspan=\"2\">kvarter</td><td>musikk]</td><td>.</td></tr><tr><td>than</td><td>just</td><td>a</td><td colspan=\"2\">quarter</td><td>music</td><td>.</td></tr><tr><td>'</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>(5) [...]</td><td>verken</td><td>[manus]</td><td>eller</td><td>[skuespillere</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>[...]</td><td>neither</td><td>script</td><td>nor</td><td>actors</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>trekker</td><td>oss</td><td>inn</td><td>p\u00e5</td><td>en</td><td>engasjerende</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>pull</td><td>us</td><td>in</td><td>on</td><td>a</td><td>engaging</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>m\u00e5te]</td><td>.</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>method</td><td>.</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>'[...] neither script nor actors pull us inside in an</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>engaging way'</td></tr></table>",
"html": null,
"text": "And most of all, the absence of more than just 15 minutes of music.'",
"type_str": "table",
"num": null
}
}
}
}