{
"paper_id": "N19-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:58:14.390641Z"
},
"title": "Learning Interpretable Negation Rules via Weak Supervision at Document Level: A Reinforcement Learning Approach",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Pr\u00f6llochs",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oxford-Man Institute University of Oxford",
"location": {}
},
"email": "nicolas.prollochs@eng.ox.ac.uk"
},
{
"first": "Dirk",
"middle": [],
"last": "Neumann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Freiburg",
"location": {}
},
"email": "dirk.neumann@is.uni-freiburg.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Negation scope detection is widely performed as a supervised learning task which relies upon negation labels at word level. This suffers from two key drawbacks: (1) such granular annotations are costly and (2) highly subjective, since, due to the absence of explicit linguistic resolution rules, human annotators often disagree in the perceived negation scopes. To the best of our knowledge, our work presents the first approach that eliminates the need for word-level negation labels, replacing it instead with document-level sentiment annotations. For this, we present a novel strategy for learning fully interpretable negation rules via weak supervision: we apply reinforcement learning to find a policy that reconstructs negation rules from sentiment predictions at document level. Our experiments demonstrate that our approach for weak supervision can effectively learn negation rules. Furthermore, an out-of-sample evaluation via sentiment analysis reveals consistent improvements (of up to 4.66 %) over both a sentiment analysis with (i) no negation handling and (ii) the use of word-level annotations from humans. Moreover, the inferred negation rules are fully interpretable.",
"pdf_parse": {
"paper_id": "N19-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "Negation scope detection is widely performed as a supervised learning task which relies upon negation labels at word level. This suffers from two key drawbacks: (1) such granular annotations are costly and (2) highly subjective, since, due to the absence of explicit linguistic resolution rules, human annotators often disagree in the perceived negation scopes. To the best of our knowledge, our work presents the first approach that eliminates the need for word-level negation labels, replacing it instead with document-level sentiment annotations. For this, we present a novel strategy for learning fully interpretable negation rules via weak supervision: we apply reinforcement learning to find a policy that reconstructs negation rules from sentiment predictions at document level. Our experiments demonstrate that our approach for weak supervision can effectively learn negation rules. Furthermore, an out-of-sample evaluation via sentiment analysis reveals consistent improvements (of up to 4.66 %) over both a sentiment analysis with (i) no negation handling and (ii) the use of word-level annotations from humans. Moreover, the inferred negation rules are fully interpretable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Negations are a frequently utilized linguistic tool for expressing disapproval or framing negative content with positive words. Neglecting negations can lead to false attributions (Morante et al., 2008) and, moreover, impair accuracy when analyzing natural language; e. g., in information retrieval (Cruz D\u00edaz et al., 2012; Rokach et al., 2008) and especially in sentiment analysis (Cruz et al., 2015; Wiegand et al., 2010) . Hence, even simple heuristics for identifying negation scopes can yield substantial improvements in such cases (Jia et al., 2009) .",
"cite_spans": [
{
"start": 180,
"end": 202,
"text": "(Morante et al., 2008)",
"ref_id": "BIBREF14"
},
{
"start": 299,
"end": 323,
"text": "(Cruz D\u00edaz et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 324,
"end": 344,
"text": "Rokach et al., 2008)",
"ref_id": "BIBREF22"
},
{
"start": 382,
"end": 401,
"text": "(Cruz et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 402,
"end": 423,
"text": "Wiegand et al., 2010)",
"ref_id": "BIBREF25"
},
{
"start": 537,
"end": 555,
"text": "(Jia et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Negation scope detection is sometimes implemented as unsupervised learning (e. g., Pr\u00f6llochs et al., 2016) , while a better performance is commonly achieved via supervised learning (see our supplements for a detailed overview): the resulting models thus learn to identify negation scopes from word-level annotations (e. g., Li and Lu, 2018; Reitan et al., 2015) . We argue that this approach suffers from inherent drawbacks. (1) Such granular annotations are costly and, especially at word level, a considerable number of them is needed.",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "Pr\u00f6llochs et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 324,
"end": 340,
"text": "Li and Lu, 2018;",
"ref_id": "BIBREF9"
},
{
"start": 341,
"end": 361,
"text": "Reitan et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Negation scope detection is highly subjective (Councill et al., 2010) . Due to the absence of explicit linguistic rules for resolutions, existing corpora often come with annotation guidelines (Morante and Blanco, 2012; Morante and Daelemans, 2012 ). Yet there are considerable differences: some corpora were labeled in a way that negation scopes consist of single text spans, while others allowed disjoint spans (Fancellu et al., 2017) . More importantly, given the absence of universal rules, human annotators largely disagree in their perception of what words should be labeled as negated.",
"cite_spans": [
{
"start": 50,
"end": 73,
"text": "(Councill et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 196,
"end": 222,
"text": "(Morante and Blanco, 2012;",
"ref_id": "BIBREF12"
},
{
"start": 223,
"end": 250,
"text": "Morante and Daelemans, 2012",
"ref_id": "BIBREF13"
},
{
"start": 416,
"end": 439,
"text": "(Fancellu et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Motivational experiment. Since prevalent corpora were labeled only by a single rater, we now establish the severity of between-rater discrepancies. For this, we carried out an initial analysis of 500 sentences from movie reviews. 1 Each sentence contained at least one explicit negation phrase from the list of Jia et al. (2009) , such as \"not\" or \"no.\" Two human raters were then asked to annotate negation scopes. They could choose an arbitrary selection of words and were not restricted to a single subspan, as recommended by Fancellu et al. (2017) . The annotations resulted in large differences: only 50.20 % of the words were simultaneously labeled as \"negated\" by both raters. Based on this experimental evidence, we showcase there is no universal definition of negation scopes (rather, human annotations are likely to be noisy or even error-prone) and thus highlight the need for further research.",
"cite_spans": [
{
"start": 311,
"end": 328,
"text": "Jia et al. (2009)",
"ref_id": "BIBREF7"
},
{
"start": 529,
"end": 551,
"text": "Fancellu et al. (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions. To the best of our knowledge, our work presents the first approach that eliminates the need for word-level annotations of negation labels. Instead, we perform negation scope detection merely by utilizing shallow annotations at document level in the form of sentiment labels (e. g., from user reviews). Our novel strategy learns interpretable negation rules via weak supervision: we apply reinforcement learning to find a policy that reconstructs negation rules based on sentiment prediction at document level (as opposed to conventional word-level annotations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our approach, a single document d comes with a sentiment label",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "y d . The document con- sists of N d words, w d,1 , . . . , w d,N d ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where the number of words can easily surpass several hundreds. Based on the sentiment value, we then need to make a decision (especially out-of-sample) for each of the N d words, whether or not it should be negated. In this case, a single sentiment value is outnumbered by potentially hundreds of negation decisions, thus pinpointing to the difficulty of this task. Formally, the goal is to learn individual labels a d,i \u2208 {Negated, \u00acNegated} for each word w d,i . Rewards are the errors in sentiment prediction at document level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
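{
"text": "To make this setting concrete, the following minimal sketch (ours, not from the paper; all names are illustrative) shows the shape of the training signal in Python: a single document-level sentiment label stands against one unobserved negation decision per word.\n\n# Weak-supervision setting: one document-level label, many word-level decisions.\nfrom typing import List\n\ndocument: List[str] = ['the', 'movie', 'is', 'not', 'good']\ny_d = 0.2  # single document-level sentiment label, e.g. a normalized star rating\n\n# The quantities we want to learn but never observe directly:\nNEGATED, NOT_NEGATED = True, False\na_d: List[bool] = [NOT_NEGATED] * len(document)  # one decision per word\n\nprint(f'{len(a_d)} negation decisions must be inferred from one label ({y_d})')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},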
{
"text": "Strengths. Our approach exhibits several favorable features that overcome shortcomings found in prior works. Among them, it eliminates the need for manual word-level labels. It thus avoids the detrimental influence of subjectivity and misinterpretation. Instead, our model is solely trained on a document-level variable and can thus learn domain-specific particularities of the given prose. The inferred negation rules are fully interpretable while documents can contain multiple instances of negations with arbitrary complexity, sometimes nested or consisting out of disjoint text spans. Despite facing several times more negation decisions than sentiment labels, our experiments demon-strate that this problem can be effectively learned through reinforcement learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Evaluation. Given the considerable inconsistencies in human annotations of negation scopes and the lack of universal rules, we regard the \"true\" negation scopes as unobservable. Hence, we later compare the identified negation scopes with those from rater 1 and 2 only as a sensitivity check because of the fact that both raters have only 50.2 % overlap. Instead, we choose the following evaluation strategy. We concentrate on the performance of negation scope detection as a supporting tool in natural language processing where its primary role is to facilitate more complex learning tasks such as sentiment analysis. Therefore, we report the performance improvements in sentiment analysis resulting from our approach. For a fair comparison, we use baselines that only rely upon the same information as our weak supervision (and thus have no access to word-level negation labels). Our performance is even on par with a supervised classifier that can exploit richer labels during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Intuition. The choice of reinforcement learning for weak supervision might not be obvious at first, but, in fact, it is informed by theory: it imitates the human reading process as stipulated by cognitive reading theory (Just and Carpenter, 1980) , where readers iteratively process information word-byword.",
"cite_spans": [
{
"start": 220,
"end": 246,
"text": "(Just and Carpenter, 1980)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},
{
"text": "States and actions. In each learning iteration, the reinforcement learning agent observes the current state s i = (w i , a i\u22121 ) that we engineer as the combination of the i-th word w i in a document and the previous action a i\u22121 . This specification establishes a recurrent architecture whereby the previous negation can pass on to the next word. At the same time, this allows for nested negations, as a word can first introduce a negation scope and another subsequent negation can potentially revert it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},
{
"text": "After observing the current state, the agent chooses an action a t from of two possibilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},
{
"text": "(1) it can set the current word to negated or (2) it can mark it as not negated. Hence, we obtain the following set of possible actions A = {Negated, \u00acNegated}. Based on the selected action, the agent receives a reward, r i which updates the knowledge in the state-action function Q(s i , a i ). This state-action function is then used to infer the best possible action a i in each state s i , i. e., the optimal policy \u03c0 * (s i , a i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},
{
"text": "Reward function. The reward r i depends upon the correlation between a given a document-level label (e. g., a rating in movie reviews) and the sentiment of a document. We predict the sentiment S d of document d using a widely-used sentiment routine based on the occurrences of positively-and negatively-opinionated terms (see Taboada et al., 2011) . If a term is negated by the policy, the polarity of the corresponding term is inverted, i. e., positively opinionated terms are counted as negative and vice versa. In the following, S 0 d denotes the document sentiment without considering negations; S \u03c0 d the sentiment when incorporating negations based on policy \u03c0.",
"cite_spans": [
{
"start": 326,
"end": 347,
"text": "Taboada et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},
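{
"text": "As an illustration of the sentiment routine used in the reward, the following sketch (our simplification of a lexicon counter in the spirit of Taboada et al. (2011); the tiny lexicon and function names are assumptions) shows how negation decisions invert term polarities:\n\n# Simplified lexicon-based scorer: negated opinion terms flip their polarity.\nPOLARITY = {'good': 1.0, 'great': 1.0, 'bad': -1.0, 'boring': -1.0}  # toy lexicon\n\ndef document_sentiment(words, negated_flags):\n    score = 0.0\n    for word, negated in zip(words, negated_flags):\n        polarity = POLARITY.get(word, 0.0)\n        score += -polarity if negated else polarity  # invert inside negation scopes\n    return score\n\nwords = ['the', 'movie', 'is', 'not', 'good']\ns0_d = document_sentiment(words, [False] * len(words))                 # S_d^0, negations ignored\nspi_d = document_sentiment(words, [False, False, False, False, True])  # S_d^pi, 'good' negated\nprint(s0_d, spi_d)  # 1.0 vs. -1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},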
{
"text": "When processing a document, we cannot actually compute the reward until we have processed all words. Therefore, we set the reward before the last word to c \u2248 0, i. e., r i = c for i = 1, . . . , N d \u2212 1. For the final word, the agent compares its performance in predicting the document label based on sentiment without considering negations S 0 d to the sentiment when incorporating negations based on the current policy \u03c0 * . The former is defined by the absolute difference between the document label y d and the predicted sentiment without negations S 0 d , whereas the latter is defined by the absolute difference between y d and the adjusted sentiment using the current policy S \u03c0 d . Then the difference between these values returns the terminal reward r N d . Thus the reward is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},
{
"text": "r i = 0, if ai = Neg and i < N d , c, if ai = \u00acNeg and i < N d , y d \u2212 S 0 d \u2212 |y d \u2212 S \u03c0 d | , if i = N d ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},
{
"text": "with a constant c (we use c = 0.005) that adds a small reward for default (i. e., non-negating) actions to avoid overfitting. Q-learning. During the learning process 2 , the agent then successively observes a sequence of words in which it can select between exploring new actions or taking the current optimal one. This choice is made by \u03b5-greedy selection according to which the agent explores the environment by selecting a random action with probability \u03b5 or,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},
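{
"text": "A compact sketch of the reward and the \u03b5-greedy tabular update described above (our own one-step simplification; the paper uses Watkins's Q(\u03bb) with eligibility traces, which is omitted here, and all variable names are ours):\n\nimport random\nfrom collections import defaultdict\n\nACTIONS = ('NEGATED', 'NOT_NEGATED')\nQ = defaultdict(float)  # tabular Q(s, a) over states s = (word, previous action)\nalpha, gamma, epsilon, c = 0.005, 0.0, 0.001, 0.005\n\ndef reward(i, action, n_words, y_d, s0_d, spi_d):\n    # Before the last word: small reward c for the default (non-negating) action.\n    if i < n_words - 1:\n        return c if action == 'NOT_NEGATED' else 0.0\n    # Terminal reward: error without negations minus error with the current policy.\n    return abs(y_d - s0_d) - abs(y_d - spi_d)\n\ndef choose_action(state):\n    if random.random() < epsilon:\n        return random.choice(ACTIONS)  # explore\n    return max(reversed(ACTIONS), key=lambda a: Q[(state, a)])  # exploit; ties default to NOT_NEGATED\n\ndef q_update(state, action, r, next_state):\n    # One-step Q-learning update (gamma = 0 in the reported experiments).\n    best_next = 0.0 if next_state is None else max(Q[(next_state, a)] for a in ACTIONS)\n    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Negation Scope Detection via Weak Supervision",
"sec_num": "2"
},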
{
"text": "Datasets. We use the following benchmark datasets with document-level annotations from the literature (cf. Hogenboom et al., 2011; Pr\u00f6llochs et al., 2016; Wiegand et al., 2010) :",
"cite_spans": [
{
"start": 107,
"end": 130,
"text": "Hogenboom et al., 2011;",
"ref_id": "BIBREF6"
},
{
"start": 131,
"end": 154,
"text": "Pr\u00f6llochs et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 155,
"end": 176,
"text": "Wiegand et al., 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "IMDb: movie reviews from the Internet Movie Database archive, each annotated with an overall rating at document level (Pang and Lee, 2005) .",
"cite_spans": [
{
"start": 118,
"end": 138,
"text": "(Pang and Lee, 2005)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Airport: user reviews of airports from Skytrax, each annotated with an overall rating at document level (P\u00e9rezgonz\u00e1lez and Gilbey, 2011) .",
"cite_spans": [
{
"start": 104,
"end": 136,
"text": "(P\u00e9rezgonz\u00e1lez and Gilbey, 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Ad hoc: financial announcements with complex, domain-specific language (Pr\u00f6llochs et al., 2016) , labeled with the daily abnormal return of the corresponding stock.",
"cite_spans": [
{
"start": 71,
"end": 95,
"text": "(Pr\u00f6llochs et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Learning parameters. We perform 4000 learning iterations with a higher exploration rate as given by the following parameters 3 : exploration \u03b5 = 0.001, discount factor \u03b3 = 0 and learning rate \u03b1 = 0.005. In a second phase, we run 1000 iterations for fine-tuning with exploration \u03b5 = 0.0001, discount factor \u03b3 = 0 and learning rate \u03b1 = 0.001.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
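{
"text": "For reference, the two training phases correspond to a parameter schedule along the following lines (the values are copied from the text; the dictionary structure itself is our assumption):\n\n# Two-phase learning schedule as reported above (structure is illustrative).\nPHASES = [\n    {'iterations': 4000, 'epsilon': 0.001, 'gamma': 0.0, 'alpha': 0.005},   # exploration phase\n    {'iterations': 1000, 'epsilon': 0.0001, 'gamma': 0.0, 'alpha': 0.001},  # fine-tuning phase\n]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},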
{
"text": "Policy learning. For each dataset, the reinforcement learning process converges to a stationary policy that shows reward fluctuations below 0.05 %. As part of a benchmark, we study the mean squared error (MSE) between y d and the predicted sentiment S 0 d when leaving negations untreated as our benchmark. For all datasets, the in-sample MSE decreases substantially (see Figure 1 ), demonstrating the effectiveness of our learning approach. The reductions number to 14.93 % (IMDb), 16.77 % (airport), and 0.91 % (ad hoc). The latter is a result of the considerably more complex language in financial statements.",
"cite_spans": [],
"ref_spans": [
{
"start": 372,
"end": 380,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Performance in Sentiment Analysis. We use 10-fold cross validation to compare the out-ofsample performance in sentiment analysis of reinforcement learning to benchmarks without wordlevel labels from previous works. The benchmarks consists of rules (Hogenboom et al., 2011; Taboada et al., 2011) , which search for the occurrence of specific cues based on pre-defined lists and then invert the meaning of a fixed number of surrounding words. Figure 1 : MSE between the document label and predicted sentiment across different learning iterations using 10-fold cross validation. Additional lines in black from smoothing.",
"cite_spans": [
{
"start": 248,
"end": 272,
"text": "(Hogenboom et al., 2011;",
"ref_id": "BIBREF6"
},
{
"start": 273,
"end": 294,
"text": "Taboada et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 441,
"end": 449,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "IMDb: Negating a fixed window of the next 4 words achieves the lowest error among all rules, similar to Dadvar et al. (2011) . This rule reduces the MSE of the benchmark with no negation handling by 1.05 %. Our approach works even more accurately, and dominates all of the rules, reducing the out-of-sample MSE by at least 4.60 %.",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "Dadvar et al. (2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "document-level label: 4",
"sec_num": null
},
{
"text": "Airport: Our method decreases the MSE by 4.66 % compared to the best-performing rule (negating a fixed window of the next 4 words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "document-level label: 4",
"sec_num": null
},
{
"text": "Ad hoc: Even for complex financial language, reinforcement learning exceeds this benchmark method by 0.19 % in terms of out-of-sample MSE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "document-level label: 4",
"sec_num": null
},
{
"text": "Altogether, our weak supervision improves sentiment analysis consistently across all datasets. 5 Comparison to human raters. For reasons of completeness, our supplements report the overlap with both human raters from our motivational experiment, which is in the range of 18.8 % to 25.2 %. However, these numbers should be treated with caution, as we remind that there is no universal definition of negation scopes and even the two human annotations reveal on 50.2 %. Moreover, our approach was not learned towards reconstructing these human annotations, since we focused on rules that achieve the greatest benefit in sentiment analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "document-level label: 4",
"sec_num": null
},
{
"text": "Comparison to word-level classifiers. We also compared weak supervision against a supervised HMM classifier from Pr\u00f6llochs et al. (2016) that draws upon granular word-level negation labels. Here we report the sentiment analysis on IMDb in order to be able to use the domain-specific negation labels from IMDb text snippets of our initial experiment. In comparison to our reinforcement learning, the supervised classifier results in a 5.79 % higher (and thus inferior) MSE. Yet our weak supervision circumvents costly word-level annotations.",
"cite_spans": [
{
"start": 113,
"end": 136,
"text": "Pr\u00f6llochs et al. (2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": null
},
{
"text": "Interpretability. Our method yields negation rules that are fully interpretable: one simply has to assess the state-action function Q(s i , a i ). Table 2 provides an example excerpt for the document \"this beautiful movie isn't good but fantastic.\" The agent the starts by observing the first state given by the combination of the first word w 1 and the previous action a 0 , i. e. s 1 = (this, \u00acNegated). According to the state-action table, the best action for the agent is to set this state to not negated (a 1 = \u00acNegated). This pattern continues until it observes state s 4 = (isn't, \u00acNegated) in which the policy implies to initiate a negation scope (a 4 = Negated). Subsequently, the negation scope is forwarded until the agent observes s 6 = (but, Negated) in which it terminates the negation scope (a 6 = \u00acNegated). Finally, the agent observes s 7 = (fantastic, \u00acNegated) in which it takes action a 7 = \u00acNegated. ",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": null
},
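{
"text": "The walk-through above can be reproduced mechanically from the state-action table; the following sketch (with a hand-filled toy Q-table rather than the learned one) applies the greedy policy left to right:\n\n# Applying a (toy) state-action table to the example document, word by word.\nQ = {\n    ((\"isn't\", 'NOT_NEGATED'), 'NEGATED'): 1.0,  # negation cue opens a scope\n    (('good', 'NEGATED'), 'NEGATED'): 1.0,       # scope is forwarded to the next word\n    (('but', 'NEGATED'), 'NOT_NEGATED'): 1.0,    # contrast word closes the scope\n}\n\ndef apply_policy(words):\n    prev, labels = 'NOT_NEGATED', []\n    for word in words:\n        state = (word, prev)\n        # Unseen states default to NOT_NEGATED (all Q-values default to zero).\n        action = max(('NOT_NEGATED', 'NEGATED'), key=lambda a: Q.get((state, a), 0.0))\n        labels.append(action)\n        prev = action\n    return labels\n\nwords = ['this', 'beautiful', 'movie', \"isn't\", 'good', 'but', 'fantastic']\nprint(list(zip(words, apply_policy(words))))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": null
},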
{
"text": "State-of-the-art methods for detecting, handling and interpreting negations can be grouped into different categories (cf. Pr\u00f6llochs et al., 2015 Pr\u00f6llochs et al., , 2016 Rokach et al., 2008) . Rule-based approaches are among the most common due to their ease of implementation and solid out-of-the-box performance. These usually suppose a forward influence of negation cues based on which they invert the meaning of the whole sentence or a fixed number of subsequent words (Hogenboom et al., 2011) . Furthermore, they can also incorporate syntactic information in order to imitate subject and object (Padmaja et al., 2014; Chowdhury and Lavelli, 2013) . Negation rules have been found to work effectively across different domains and rarely need finetuning (Taboada et al., 2011) . However, rule-based approaches entail several drawbacks, as the list of negations must be pre-defined and the selection criterion according to which rule a rule is chosen is usually random or determined via cross validation. In addition, rules cannot effectively cope with implicit expressions or particular, domainspecific characteristics.",
"cite_spans": [
{
"start": 122,
"end": 144,
"text": "Pr\u00f6llochs et al., 2015",
"ref_id": "BIBREF19"
},
{
"start": 145,
"end": 169,
"text": "Pr\u00f6llochs et al., , 2016",
"ref_id": "BIBREF20"
},
{
"start": 170,
"end": 190,
"text": "Rokach et al., 2008)",
"ref_id": "BIBREF22"
},
{
"start": 473,
"end": 497,
"text": "(Hogenboom et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 600,
"end": 622,
"text": "(Padmaja et al., 2014;",
"ref_id": "BIBREF16"
},
{
"start": 623,
"end": 651,
"text": "Chowdhury and Lavelli, 2013)",
"ref_id": "BIBREF0"
},
{
"start": 757,
"end": 779,
"text": "(Taboada et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Generative probabilistic models (e. g., hidden Markov models or conditional random fields) can partially overcome these shortcomings (Li and Lu, 2018; Reitan et al., 2015; Rokach et al., 2008) , such as the difficulty of recognizing implicit negations. These process narrative language word-byword and move between hidden states representing negated and non-negated parts. Such models can adapt to domain-specific language, but require more computational resources and rely upon ex ante transition probabilities. Although variants based on unsupervised learning avoid the need for any labels, practical applications reveal inferior performance compared to supervised approaches (Pr\u00f6llochs et al., 2015) . The latter usu-ally depend on manual labels at a granular level, which are not only costly but suffer from subjective interpretations (Fancellu et al., 2017) .",
"cite_spans": [
{
"start": 133,
"end": 150,
"text": "(Li and Lu, 2018;",
"ref_id": "BIBREF9"
},
{
"start": 151,
"end": 171,
"text": "Reitan et al., 2015;",
"ref_id": "BIBREF21"
},
{
"start": 172,
"end": 192,
"text": "Rokach et al., 2008)",
"ref_id": "BIBREF22"
},
{
"start": 678,
"end": 702,
"text": "(Pr\u00f6llochs et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 839,
"end": 862,
"text": "(Fancellu et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "A third category of methods links the polarity shift effect of negations more closely to sentiment analysis tasks at sentence or document level. For example, text parts can be classified into a polarity-unshifted part and a polarity-shifted part according to certain rules (Li and Huang, 2009) . Sentiment classification models are then trained using both parts (Li et al., 2010) . Alternatively, rule-based algorithms can extract sentences with inconsistent sentiment and omit them from standard sentiment analysis procedures (Orimaye et al., 2012) . Reversely, antonym dictionaries have been used to generate sentiment-inverted texts to classify polarity in pairs (Xia et al., 2016) . Although such data expansion techniques usually enhance the performance of sentiment analysis, they require either complex linguistic knowledge or extra human annotations (Xia et al., 2015) .",
"cite_spans": [
{
"start": 273,
"end": 293,
"text": "(Li and Huang, 2009)",
"ref_id": "BIBREF10"
},
{
"start": 362,
"end": 379,
"text": "(Li et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 527,
"end": 549,
"text": "(Orimaye et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 666,
"end": 684,
"text": "(Xia et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 858,
"end": 876,
"text": "(Xia et al., 2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Research gap. In contrast to these methods, we propose a novel strategy for learning negation rules via weak supervision. Our model uses reinforcement learning to reconstruct negation rules based on an document-level variable and does not require any kind of manual word-level labeling or precoded linguistic patterns. It is able to recognize explicit as well as implicit negations, while avoiding the influence of subjective interpretations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "This paper proposes the first approach for negation scope detection based on weak supervision. Our proposed reinforcement learning strategy circumvents the need for word-level annotations with negation scopes, as it reconstructs negation rules based on a document-level sentiment labels. Our experiments show that our weak supervision is effective in negation scope detection; it yields consistent improvements (of up to 4.66 %) over a sentiment analysis without negation handling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our works suggests important implications. We are in line with growing literature (e. g., Fancellu et al., 2017) that reports challenges in resolving negation scopes through humans. Beyond prior works, our experiment reveals between-rater inconsistencies. While negation scope detection is widely studied as an isolated task, it could be beneficial when linking its evaluation more closely to context-specific uses such as sentiment analysis.",
"cite_spans": [
{
"start": 90,
"end": 112,
"text": "Fancellu et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Details are reported in our supplementary materials.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use Watkin's Q(\u03bb) with eligibility traces; seeSutton and Barto (1998) for details. At the beginning, we initialize the action-value function Q(s, a) to zero for all states and actions. This also controls our default action when encountering unknown states or out-of-vocabulary (OOV) words. In such cases, the non-negated action is preferred.alternatively, exploits the current knowledge with probability 1 \u2212 \u03b5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Further details regarding the learning parameters are provided in the supplementary materials.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with performance comparisons in a classification task, yet our approach also yields consistent improvements in this evaluation.5 We also investigated the relationship between prediction performance and text length, finding only minor effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploiting the scope of negations and heterogeneous features for relation extraction: A case study for drug-drug interaction extraction",
"authors": [
{
"first": "Mahbub",
"middle": [],
"last": "Md Faisal",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Chowdhury",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lavelli",
"suffix": ""
}
],
"year": 2013,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "765--771",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Faisal Mahbub Chowdhury and Alberto Lavelli. 2013. Exploiting the scope of negations and heterogeneous features for relation extraction: A case study for drug-drug interaction extraction. In NAACL-HLT, pages 765-771.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "What's great and what's not: Learning to classify the scope of negation for improved sentiment analysis",
"authors": [
{
"first": "G",
"middle": [],
"last": "Isaac",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Councill",
"suffix": ""
},
{
"first": "Leonid",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Velikovich",
"suffix": ""
}
],
"year": 2010,
"venue": "Workshop on Negation and Speculation in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "51--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isaac G. Councill, Ryan McDonald, and Leonid Ve- likovich. 2010. What's great and what's not: Learn- ing to classify the scope of negation for improved sentiment analysis. In Workshop on Negation and Speculation in Natural Language Processing, pages 51-59. ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A machine-learning approach to negation and speculation detection for sentiment analysis",
"authors": [
{
"first": "P",
"middle": [],
"last": "Noa",
"suffix": ""
},
{
"first": "Maite",
"middle": [],
"last": "Cruz",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Taboada",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of the Association for Information Science and Technology",
"volume": "67",
"issue": "9",
"pages": "2118--2136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noa P. Cruz, Maite Taboada, and Ruslan Mitkov. 2015. A machine-learning approach to negation and spec- ulation detection for sentiment analysis. Journal of the Association for Information Science and Tech- nology, 67(9):2118-2136.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A machine-learning approach to negation and speculation detection in clinical texts",
"authors": [
{
"first": "P",
"middle": [],
"last": "Noa",
"suffix": ""
},
{
"first": "Ma\u00f1a",
"middle": [],
"last": "Cruz D\u00edaz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "L\u00f3pez",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Manuel",
"suffix": ""
},
{
"first": "Jacinto",
"middle": [],
"last": "Mata V\u00e1zquez",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Pach\u00f3n\u00e1lvarez",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of the American Society for Information Science and Technology",
"volume": "63",
"issue": "7",
"pages": "1398--1410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noa P. Cruz D\u00edaz, Ma\u00f1a L\u00f3pez, Manuel J., Jac- into Mata V\u00e1zquez, and Victoria Pach\u00f3n\u00c1lvarez. 2012. A machine-learning approach to negation and speculation detection in clinical texts. Journal of the American Society for Information Science and Tech- nology, 63(7):1398-1410.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Scope of negation detection in sentiment analysis",
"authors": [
{
"first": "Maral",
"middle": [],
"last": "Dadvar",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Hauff",
"suffix": ""
},
{
"first": "Franciska",
"middle": [],
"last": "De",
"suffix": ""
},
{
"first": "Jong",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Dutch-Belgian Information Retrieval Workshop",
"volume": "",
"issue": "",
"pages": "16--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maral Dadvar, Claudia Hauff, and Franciska de Jong. 2011. Scope of negation detection in sentiment anal- ysis. In Dutch-Belgian Information Retrieval Work- shop, pages 16-20.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Detecting negation scope is easy, except when it isn't",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Hangfeng",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference of the European Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "58--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Fancellu, Adam Lopez, Bonnie Webber, and Hangfeng He. 2017. Detecting negation scope is easy, except when it isn't. In Conference of the Eu- ropean Chapter of the ACL, pages 58-63. ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Determining negation scope and strength in sentiment analysis",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Hogenboom",
"suffix": ""
},
{
"first": "Bas",
"middle": [],
"last": "Paul Van Iterson",
"suffix": ""
},
{
"first": "Flavius",
"middle": [],
"last": "Heerschop",
"suffix": ""
},
{
"first": "Uzay",
"middle": [],
"last": "Frasincar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kaymak",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE International Conference on Systems, Man, and Cybernetics",
"volume": "",
"issue": "",
"pages": "2589--2594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Hogenboom, Paul van Iterson, Bas Heer- schop, Flavius Frasincar, and Uzay Kaymak. 2011. Determining negation scope and strength in senti- ment analysis. In IEEE International Conference on Systems, Man, and Cybernetics, pages 2589-2594.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The effect of negation on sentiment analysis and retrieval effectiveness",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weiyi",
"middle": [],
"last": "Meng",
"suffix": ""
}
],
"year": 2009,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "1827--1830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifeng Jia, Clement Yu, and Weiyi Meng. 2009. The effect of negation on sentiment analysis and retrieval effectiveness. In CIKM, pages 1827-1830.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A theory of reading: From eye fixations to comprehension",
"authors": [
{
"first": "A",
"middle": [],
"last": "Marcel",
"suffix": ""
},
{
"first": "Patricia",
"middle": [
"A"
],
"last": "Just",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Carpenter",
"suffix": ""
}
],
"year": 1980,
"venue": "Psychological review",
"volume": "87",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcel A Just and Patricia A Carpenter. 1980. A theory of reading: From eye fixations to comprehension. Psychological review, 87(4):329.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning with structured representations for negation scope extraction",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "533--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Li and Wei Lu. 2018. Learning with structured representations for negation scope extraction. In Proceedings of the ACL, pages 533-539.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentiment classification considering negation and contrast transition",
"authors": [
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "297--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shoushan Li and Chu-Ren Huang. 2009. Sentiment classification considering negation and contrast tran- sition. In Pacific Asia Conference on Language, In- formation and Computation, pages 297-306. ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sentiment classification and polarity shifting",
"authors": [
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sophia Yat Mei",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2010,
"venue": "International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "635--643",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shoushan Li, Sophia Yat Mei Lee, Ying Chen, Chu- Ren Huang, and Guodong Zhou. 2010. Senti- ment classification and polarity shifting. In Inter- national Conference on Computational Linguistics, pages 635-643. ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sem 2012 shared task: Resolving the scope and focus of negation",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (SemEval '12)",
"volume": "",
"issue": "",
"pages": "265--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Eduardo Blanco. 2012. Sem 2012 shared task: Resolving the scope and focus of nega- tion. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (SemEval '12), pages 265-274.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Conandoyle-neg: Annotation of negation in conan doyle stories",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Walter Daelemans. 2012. Conandoyle-neg: Annotation of negation in conan doyle stories. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, Istanbul.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning the scope of negation in biomedical texts",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Liekens",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2008,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "715--724",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante, Anthony Liekens, and Walter Daele- mans. 2008. Learning the scope of negation in biomedical texts. In EMNLP, pages 715-724.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Buy it-don't buy it: sentiment classification on amazon reviews using sentence polarity shift",
"authors": [
{
"first": "",
"middle": [],
"last": "Sylvester Olubolu Orimaye",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Saadat",
"suffix": ""
},
{
"first": "Eu-Gene",
"middle": [],
"last": "Alhashmi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Siew",
"suffix": ""
}
],
"year": 2012,
"venue": "Pacific Rim International Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "386--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvester Olubolu Orimaye, Saadat M Alhashmi, and Eu-Gene Siew. 2012. Buy it-don't buy it: sentiment classification on amazon reviews using sentence po- larity shift. In Pacific Rim International Conference on Artificial Intelligence, pages 386-399. Springer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evaluating sentiment analysis methods and identifying scope of negation in newspaper articles",
"authors": [
{
"first": "S",
"middle": [],
"last": "Padmaja",
"suffix": ""
},
{
"first": "Sameen",
"middle": [],
"last": "Fatima",
"suffix": ""
},
{
"first": "Sasidhar",
"middle": [],
"last": "Bandu",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Advanced Research in Artificial Intelligence",
"volume": "3",
"issue": "11",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Padmaja, Sameen Fatima, and Sasidhar Bandu. 2014. Evaluating sentiment analysis methods and identifying scope of negation in newspaper articles. International Journal of Advanced Research in Arti- ficial Intelligence, 3(11):1-6.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In Proceedings of the ACL, pages 115-124.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Predicting skytrax airport rankings from customer reviews",
"authors": [
{
"first": "Jose",
"middle": [
"D"
],
"last": "P\u00e9rezgonz\u00e1lez",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Gilbey",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Airport Management",
"volume": "5",
"issue": "4",
"pages": "335--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose D. P\u00e9rezgonz\u00e1lez and Andrew Gilbey. 2011. Pre- dicting skytrax airport rankings from customer re- views. Journal of Airport Management, 5(4):335- 339.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Enhancing sentiment analysis of financial news by detecting negation scopes",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Pr\u00f6llochs",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Feuerriegel",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Neumann",
"suffix": ""
}
],
"year": 2015,
"venue": "HICSS",
"volume": "",
"issue": "",
"pages": "959--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Pr\u00f6llochs, Stefan Feuerriegel, and Dirk Neu- mann. 2015. Enhancing sentiment analysis of finan- cial news by detecting negation scopes. In HICSS, pages 959-968.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Negation scope detection in sentiment analysis: Decision support for news-driven trading",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Pr\u00f6llochs",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Feuerriegel",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Neumann",
"suffix": ""
}
],
"year": 2016,
"venue": "Decision Support Systems",
"volume": "88",
"issue": "",
"pages": "67--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Pr\u00f6llochs, Stefan Feuerriegel, and Dirk Neu- mann. 2016. Negation scope detection in sentiment analysis: Decision support for news-driven trading. Decision Support Systems, 88:67-75.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Negation scope detection for twitter sentiment analysis",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Reitan",
"suffix": ""
},
{
"first": "J\u00f8rgen",
"middle": [],
"last": "Faret",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Bungum",
"suffix": ""
}
],
"year": 2015,
"venue": "Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "99--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Reitan, J\u00f8rgen Faret, Bj\u00f6rn Gamb\u00e4ck, and Lars Bungum. 2015. Negation scope detection for twit- ter sentiment analysis. In Workshop on Computa- tional Approaches to Subjectivity, Sentiment and So- cial Media Analysis, pages 99-108.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Negation recognition in medical narrative reports",
"authors": [
{
"first": "Lior",
"middle": [],
"last": "Rokach",
"suffix": ""
},
{
"first": "Roni",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Oded",
"middle": [],
"last": "Maimon",
"suffix": ""
}
],
"year": 2008,
"venue": "Information Retrieval",
"volume": "11",
"issue": "6",
"pages": "499--538",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lior Rokach, Roni Romano, and Oded Maimon. 2008. Negation recognition in medical narrative reports. Information Retrieval, 11(6):499-538.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Reinforcement Learning: An Introduction",
"authors": [
{
"first": "Richard",
"middle": [
"S"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"G"
],
"last": "Barto",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S. Sutton and Andrew G. Barto. 1998. Rein- forcement Learning: An Introduction. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Lexicon-based methods for sentiment analysis",
"authors": [
{
"first": "Maite",
"middle": [],
"last": "Taboada",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Tofiloski",
"suffix": ""
},
{
"first": "Kimberly",
"middle": [],
"last": "Voll",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "2",
"pages": "267--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kim- berly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Lin- guistics, 37(2):267-307.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A survey on the role of negation in sentiment analysis",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Balahur",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Montoyo",
"suffix": ""
}
],
"year": 2010,
"venue": "Workshop on Negation and Speculation in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "60--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Alexandra Balahur, Benjamin Roth, Dietrich Klakow, and Andr\u00e9s Montoyo. 2010. A survey on the role of negation in sentiment analy- sis. In Workshop on Negation and Speculation in Natural Language Processing, pages 60-68. ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Polarity shift detection, elimination and ensemble: A three-stage model for document-level sentiment analysis. Information Processing & Management",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jianfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "52",
"issue": "",
"pages": "36--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Xia, Feng Xu, Jianfei Yu, Yong Qi, and Erik Cam- bria. 2016. Polarity shift detection, elimination and ensemble: A three-stage model for document-level sentiment analysis. Information Processing & Man- agement, 52(1):36-45.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Dual sentiment analysis: Considering two sides of one review",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
},
{
"first": "Qianmu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions on Knowledge and Data Engineering",
"volume": "27",
"issue": "8",
"pages": "2120--2133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Xia, Feng Xu, Chengqing Zong, Qianmu Li, Yong Qi, and Tao Li. 2015. Dual sentiment analysis: Considering two sides of one review. Transactions on Knowledge and Data Engineering, 27(8):2120- 2133.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>compares the out-of-</td></tr></table>",
"num": null,
"text": "",
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table/>",
"num": null,
"text": "Out-of-sample MSE between sentiment S \u03c0",
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table/>",
"num": null,
"text": "Excerpt of state-action function Q(s i , a i ) actions A = {Negated, \u00acNegated} and the learned policy \u03c0",
"type_str": "table",
"html": null
}
}
}
}