{ "paper_id": "P14-1049", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:07:17.875657Z" }, "title": "Negation Focus Identification with Contextual Discourse Information", "authors": [ { "first": "Bowei", "middle": [], "last": "Zou", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing Lab", "institution": "Soochow University", "location": { "postCode": "215006", "settlement": "Suzhou", "country": "China" } }, "email": "zoubowei@gmail.com" }, { "first": "Qiaoming", "middle": [], "last": "Zhu", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing Lab", "institution": "Soochow University", "location": { "postCode": "215006", "settlement": "Suzhou", "country": "China" } }, "email": "qmzhu@suda.edu.cn" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing Lab", "institution": "Soochow University", "location": { "postCode": "215006", "settlement": "Suzhou", "country": "China" } }, "email": "gdzhou@suda.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Negative expressions are common in natural language text and play a critical role in information extraction. However, the performances of current systems are far from satisfaction, largely due to its focus on intrasentence information and its failure to consider inter-sentence information. In this paper, we propose a graph model to enrich intrasentence features with inter-sentence features from both lexical and topic perspectives. Evaluation on the *SEM 2012 shared task corpus indicates the usefulness of contextual discourse information in negation focus identification and justifies the effectiveness of our graph model in capturing such global information. *", "pdf_parse": { "paper_id": "P14-1049", "_pdf_hash": "", "abstract": [ { "text": "Negative expressions are common in natural language text and play a critical role in information extraction. However, the performances of current systems are far from satisfaction, largely due to its focus on intrasentence information and its failure to consider inter-sentence information. In this paper, we propose a graph model to enrich intrasentence features with inter-sentence features from both lexical and topic perspectives. Evaluation on the *SEM 2012 shared task corpus indicates the usefulness of contextual discourse information in negation focus identification and justifies the effectiveness of our graph model in capturing such global information. *", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Negation is a grammatical category which comprises various kinds of devices to reverse the truth value of a proposition (Morante and Sporleder, 2012) . For example, sentence (1) could be interpreted as it is not the case that he stopped.", "cite_spans": [ { "start": 120, "end": 149, "text": "(Morante and Sporleder, 2012)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) He didn't stop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Negation expressions are common in natural language text. According to the statistics on biomedical literature genre (Vincze et al., 2008) , 19.44% of sentences contain negative expressions. The percentage rises to 22.5% on Conan Doyle stories (Morante and Daelemans, 2012) . 
It is interesting that a negative sentence may have both negative and positive meanings. For example, sentence (2) could be interpreted as He stopped, but not until he got to Jackson Hole, with the positive part he stopped and the negative part until he got to Jackson Hole. Moreover, a negative expression normally interacts with some special part of the sentence, referred to as the negation focus in linguistics. Formally, the negation focus is defined as the special part of the sentence which is most prominently or explicitly negated by a negative expression. Hereafter, we denote negative expressions in boldface and negation focuses underlined.", "cite_spans": [ { "start": 117, "end": 138, "text": "(Vincze et al., 2008)", "ref_id": "BIBREF20" }, { "start": 244, "end": 273, "text": "(Morante and Daelemans, 2012)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) He didn't stop until he got to Jackson Hole.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While people tend to employ stress or intonation in speech to emphasize the negation focus, making it easy to identify in speech corpora, such stress or intonation information is often missing in the dominant text corpora. This poses serious challenges for negation focus identification. Current studies (e.g., Blanco and Moldovan, 2011; Rosenberg and Bergler, 2012) resort to various kinds of intra-sentence information, such as lexical features, syntactic features, semantic role features and so on, ignoring less obvious inter-sentence information. This largely limits the performance of negation focus identification and its wide application, since such contextual discourse information plays a critical role in negation focus identification. Take the following sentence as an example.", "cite_spans": [ { "start": 323, "end": 349, "text": "Blanco and Moldovan, 2011;", "ref_id": "BIBREF0" }, { "start": 350, "end": 378, "text": "Rosenberg and Bergler, 2012)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(3) Helen didn't allow her youngest son to play the violin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In sentence (3), there are several scenarios for identifying the negation focus, with regard to the negation expression n't, given different contexts: Scenario A: Given the sentence But her husband did as the next sentence, the negation focus should be Helen, yielding the interpretation the person who didn't allow the youngest son to play the violin is Helen, but not her husband. Scenario B: Given the sentence She thought that he didn't have the artistic talent like her eldest son as the next sentence, the negation focus should be the youngest son, yielding the interpretation Helen thought that her eldest son had the talent to play the violin, but the youngest son didn't. 
Scenario C: Given the sentence Because of her neighbors' protests as the previous sentence, the negation focus should be play the violin, yielding the interpretation Helen didn't allow her youngest son to play the violin, but it is not indicated whether he was allowed to do other things.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, to better accommodate such contextual discourse information in negation focus identification, we propose a graph model to enrich normal intra-sentence features with various kinds of inter-sentence features from both lexical and topic perspectives. In addition, the standard PageRank algorithm is employed to optimize the graph model. Evaluation on the *SEM 2012 shared task corpus (Morante and Blanco, 2012) justifies our approach over several strong baselines.", "cite_spans": [ { "start": 390, "end": 416, "text": "(Morante and Blanco, 2012)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Section 2 overviews the related work. Section 3 presents several strong baselines on negation focus identification with only intra-sentence features. Section 4 introduces our topic-driven word-based graph model with contextual discourse information. Section 5 reports the experimental results and analysis. Finally, we conclude our work in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Earlier studies of negation were conducted almost exclusively in linguistics (e.g. Horn, 1989; van der Wouden, 1997) , and there were only a few in natural language processing, focusing on negation recognition in the biomedical domain. For example, Chapman et al. (2001) developed a rule-based negation recognition system, NegEx, to determine whether a finding mentioned within narrative medical reports is present or absent. Since the release of the BioScope corpus (Vincze et al., 2008) , a freely available resource consisting of medical and biological texts, machine learning approaches began to dominate the research on negation recognition (e.g. Morante et al., 2008; Li et al., 2010) .", "cite_spans": [ { "start": 61, "end": 72, "text": "Horn, 1989;", "ref_id": "BIBREF6" }, { "start": 73, "end": 94, "text": "van der Wouden, 1997)", "ref_id": null }, { "start": 228, "end": 249, "text": "Chapman et al. (2001)", "ref_id": "BIBREF1" }, { "start": 446, "end": 467, "text": "(Vincze et al., 2008)", "ref_id": "BIBREF20" }, { "start": 631, "end": 652, "text": "Morante et al., 2008;", "ref_id": null }, { "start": 653, "end": 669, "text": "Li et al., 2010)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Generally, negation recognition includes three subtasks: cue detection, which detects and identifies possible negative expressions in a sentence; scope resolution, which determines the grammatical scope in a sentence affected by a negative expression; and focus identification, which identifies the constituent in a sentence most prominently or explicitly negated by a negative expression. 
This paper concentrates on the third subtask, negation focus identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Due to the increasing demand for deep understanding of natural language text, negation recognition has been drawing more and more attention in recent years, with a series of shared tasks and workshops; however, these have focused on cue detection and scope resolution, such as the BioNLP 2009 shared task for negative event detection (Kim et al., 2009) and the ACL 2010 Workshop for scope resolution of negation and speculation (Morante and Sporleder, 2010) , followed by a special issue of Computational Linguistics (Morante and Sporleder, 2012) on modality and negation.", "cite_spans": [ { "start": 326, "end": 344, "text": "(Kim et al., 2009)", "ref_id": "BIBREF9" }, { "start": 420, "end": 449, "text": "(Morante and Sporleder, 2010)", "ref_id": null }, { "start": 509, "end": 538, "text": "(Morante and Sporleder, 2012)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The research on negation focus identification was pioneered by Blanco and Moldovan (2011) , who investigated the negation phenomenon in semantic relations and proposed a supervised learning approach to identify the focus of a negation expression. However, although Morante and Blanco (2012) proposed negation focus identification as one of the *SEM'2012 shared tasks, only one team (Rosenberg and Bergler, 2012) 1 participated in this task. They identified negation focus using three kinds of heuristics and achieved 58.40 in F1-measure. This indicates that there is still much room for improvement in negation focus identification.", "cite_spans": [ { "start": 63, "end": 89, "text": "Blanco and Moldovan (2011)", "ref_id": "BIBREF0" }, { "start": 265, "end": 290, "text": "Morante and Blanco (2012)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The key problem in current research on negation focus identification is its reliance on intra-sentence information and its near-total ignorance of inter-sentence information, which plays a critical role in the success of negation focus identification. For example, Ding (2011) made a qualitative analysis of implied negations in conversation and attempted to determine, from a linguistic perspective, whether a sentence was negated by contextual information. Moreover, a negation focus is closely associated with the author's intention in an article. This indicates the great challenges in negation focus identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Negation focus identification in the *SEM'2012 shared tasks is restricted to verbal negations annotated with MNEG in PropBank, with only a constituent belonging to a semantic role selected as the negation focus. Normally, a verbal negation expression (not or n't) is grammatically associated with its corresponding verb (e.g., He didn't stop). 
For details on annotation guidelines and examples for verbal negations, please refer to Blanco and Moldovan (2011) .", "cite_spans": [ { "start": 426, "end": 452, "text": "Blanco and Moldovan (2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3" }, { "text": "For comparison, we choose the state-of-the-art system described in Blanco and Moldovan (2011) , which employed various kinds of syntactic features and semantic role features, as one of our baselines. Since this system adopted C4.5 for training, we name it Baseline C4.5 . In order to provide a stronger baseline, besides those features adopted in Baseline C4.5 , we added more refined intra-sentence features and adopted a ranking Support Vector Machine (SVM) model for training. We name it Baseline SVM .", "cite_spans": [ { "start": 67, "end": 93, "text": "Blanco and Moldovan (2011)", "ref_id": "BIBREF0" }, { "start": 268, "end": 272, "text": "C4.5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3" }, { "text": "The following is a list of features adopted in the two baselines. For both Baseline C4.5 and Baseline SVM : \u2022 Basic features: the first token and its part-of-speech (POS) tag of the focus candidate; the number of tokens in the focus candidate; the relative position of the focus candidate among all the roles present in the sentence; the negated verb and its POS tag of the negative expression. \u2022 Syntactic features: the sequence of words from the beginning of the governing VP to the negated verb; the sequence of POS tags from the beginning of the governing VP to the negated verb; whether the governing VP contains a CC; whether the governing VP contains an RB. \u2022 Semantic features: the syntactic label of semantic role A1; whether A1 contains POS tag DT, JJ, PRP, CD, RB, VB, or WP, as defined in Blanco and Moldovan (2011) ; whether A1 contains the token any, anybody, anymore, anyone, anything, anytime, anywhere, certain, enough, full, many, much, other, some, specifics, too, or until, as defined in Blanco and Moldovan (2011) ; the syntactic label of the first semantic role in the sentence; the semantic label of the last semantic role in the sentence; the thematic role for A0/A1/A2/A3/A4 of the negated predicate. For Baseline SVM only: \u2022 Basic features: the named entity and its type in the focus candidate; the relative position of the focus candidate to the negative expression (before or after). \u2022 Syntactic features: the dependency path and its depth from the focus candidate to the negative expression; the constituent path and its depth from the focus candidate to the negative expression.", "cite_spans": [ { "start": 784, "end": 810, "text": "Blanco and Moldovan (2011)", "ref_id": "BIBREF0" }, { "start": 988, "end": 1014, "text": "Blanco and Moldovan (2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3" },
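{ "text": "To make the feature set concrete, the following is a minimal illustrative sketch (not the authors' code) of how a few of the basic intra-sentence features above might be extracted; the argument names are hypothetical stand-ins for whatever the real preprocessing pipeline provides:

```python
def basic_features(candidate_tokens, candidate_pos, role_index, n_roles,
                   negated_verb, negated_verb_pos):
    # candidate_tokens / candidate_pos: tokens and POS tags of the focus candidate
    # role_index / n_roles: position of the candidate among all roles in the sentence
    # negated_verb / negated_verb_pos: the verb negated by not/n't, and its POS tag
    return {
        'first_token': candidate_tokens[0],
        'first_pos': candidate_pos[0],
        'num_tokens': len(candidate_tokens),
        'relative_position': role_index / max(n_roles - 1, 1),
        'negated_verb': negated_verb,
        'negated_verb_pos': negated_verb_pos,
    }
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": null },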
{ "text": "While some negation focuses can be identified with only intra-sentence information, others must be identified with contextual discourse information. Section 1 illustrates the necessity of such contextual discourse information in negation focus identification by giving three scenarios of different discourse contexts for the negation expression n't in sentence (3). For better illustration of the importance of contextual discourse information, Table 1 shows the statistics of the intra- and inter-sentence information necessary for manual negation focus identification on 100 instances randomly extracted from the held-out dataset of the *SEM'2012 shared task corpus. It shows that only 17 instances can be identified by intra-sentence information alone. Strikingly, inter-sentence information is indispensable in 77 instances, among which 42 instances need only inter-sentence information and 35 instances need both intra- and inter-sentence information. This indicates the great importance of contextual discourse information for negation focus identification. It is also interesting to note that 6 instances are hard to determine even given both intra- and inter-sentence information.", "cite_spans": [], "ref_spans": [ { "start": 442, "end": 449, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Exploring Contextual Discourse Information for Negation Focus Identification", "sec_num": "4" }, { "text": "Info | Number
#Intra-Sentence Only | 17
#Inter-Sentence Only | 42
#Both | 35
#Hard to Identify | 6
(Note: \"Hard to Identify\" means that it is hard for a human being to identify the negation focus even given both intra- and inter-sentence information.) Table 1 . Statistics of intra- and inter-sentence information on negation focus identification.", "cite_spans": [], "ref_spans": [ { "start": 236, "end": 243, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Info", "sec_num": null }, { "text": "Statistically, we find that the negation focus is always related to what the author repeatedly states in the discourse context. This explains why contextual discourse information can help identify the negation focus. While inter-sentence information provides global characteristics from the discourse context perspective and intra-sentence information provides local features from the lexical, syntactic, and semantic perspectives, both make their own contributions to negation focus identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Info", "sec_num": null }, { "text": "In this paper, we first propose a graph model to gauge the importance of contextual discourse information. Then, we incorporate both intra- and inter-sentence features into a machine learning-based framework for negation focus identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Info", "sec_num": null }, { "text": "Graph models have been proven successful in many NLP applications, especially in representing the link relationships between words or sentences (Wan and Yang, 2008; Li et al., 2009) . Generally, such models construct a graph to compute the relevance between the document theme and words.", "cite_spans": [ { "start": 144, "end": 164, "text": "(Wan and Yang, 2008;", "ref_id": "BIBREF21" }, { "start": 165, "end": 181, "text": "Li et al., 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Model", "sec_num": "4.1" }, { "text": "In this paper, we propose a graph model to represent the contextual discourse information from both lexical and topic perspectives. In particular, a word-based graph model is proposed to represent the explicit relatedness among words in a discourse from the lexical perspective, while a topic-driven word-based model is proposed to enrich it with the implicit relatedness between words, by adding one more layer to the word-based graph model to represent the global topic distribution of the whole dataset. 
In addition, the PageRank algorithm (Page et al., 1998 ) is adopted to optimize the graph model.", "cite_spans": [ { "start": 534, "end": 552, "text": "(Page et al., 1998", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Model", "sec_num": "4.1" }, { "text": "A word-based graph model can be defined as G word (W, E), where W={w i } is the set of words in one document and E={e ij |w i , w j \u2208W} is the set of directed edges between these words, as shown in Figure 1 . In the word-based graph model, each word node w i is weighted to represent the correlation of the word with the author's intention. Since such correlation is more from the semantic perspective than the grammatical perspective, only content words are considered in our graph model, ignoring function words (e.g., the, to, \u2026). In particular, the content words are limited to those with part-of-speech tags JJ, NN, PRP, and VB. For simplicity, the weight of word node w i is initialized to 1.", "cite_spans": [], "ref_spans": [ { "start": 198, "end": 206, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Word-based Graph Model:", "sec_num": null }, { "text": "In addition, directed edge e ij is weighted to represent the relatedness between word w i and word w j in a document with transition probability P(j|i) from i to j, which is normalized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Graph Model:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(j|i) = \\frac{Sim(w_i, w_j)}{\\sum_k Sim(w_i, w_k)}", "eq_num": "(1)" } ], "section": "Word-based Graph Model:", "sec_num": null }, { "text": "where k ranges over the word nodes in the discourse, and Sim(w i ,w j ) denotes the similarity between w i and w j . In this paper, two kinds of information are used to calculate the similarity between words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Graph Model:", "sec_num": null }, { "text": "One is word co-occurrence (if word w i and word w j occur in the same sentence or in adjacent sentences, Sim(w i ,w j ) increases by 1), and the other is WordNet (Miller, 1995) based similarity. Please note that Sim(w i ,w i ) = 0 to avoid self-transition, and that Sim(w i ,w j ) and Sim(w j ,w i ) may not be equal. Finally, the weights of word nodes are calculated using the PageRank algorithm as follows:", "cite_spans": [ { "start": 163, "end": 177, "text": "(Miller, 1995)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Word-based Graph Model:", "sec_num": null }, { "text": "WS(w_i) = (1 - d) + d \u00b7 \u2211_j P(i|j) \u00b7 WS(w_j) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Graph Model:", "sec_num": null }, { "text": "where d is the damping factor as in the PageRank algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Graph Model:", "sec_num": null },
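{ "text": "As an illustration, the following is a minimal sketch (not the authors' implementation) of the word-based model, assuming the co-occurrence similarity described above and a simple power-iteration form of PageRank over the transition probabilities of Formula (1):

```python
from collections import defaultdict

def cooccurrence_sim(sentences):
    # sentences: list of lists of content words (JJ/NN/PRP/VB tokens)
    sim = defaultdict(float)
    for cur, nxt in zip(sentences, sentences[1:] + [[]]):
        window = cur + nxt  # same sentence or adjacent sentence
        for wi in cur:
            for wj in window:
                if wi != wj:  # Sim(w_i, w_i) = 0: no self-transition
                    sim[(wi, wj)] += 1.0
    return sim

def pagerank_weights(words, sim, d=0.85, iters=50):
    # Outgoing similarity mass of each node, for Formula (1):
    # P(j|i) = Sim(w_i, w_j) / sum_k Sim(w_i, w_k)
    out_total = {wi: sum(sim.get((wi, wk), 0.0) for wk in words) for wi in words}
    ws = {w: 1.0 for w in words}  # node weights initialized to 1
    for _ in range(iters):
        # Formula (2): WS(w_i) = (1 - d) + d * sum_j P(i|j) * WS(w_j)
        ws = {wi: (1 - d) + d * sum((sim.get((wj, wi), 0.0) / out_total[wj]) * ws[wj]
                                    for wj in words if out_total[wj] > 0)
              for wi in words}
    return ws
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Graph Model:", "sec_num": null },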
{ "text": "While the above word-based graph model can well capture the relatedness between content words, it can only partially model the focus of a negation expression, since the negation focus is more directly related to topic than to content. In order to reduce this gap, we propose a topic-driven word-based model by adding one more layer to refine the word-based graph model over the global topic distribution, as shown in Figure 2 . Here, the topics are extracted from all the documents in the *SEM 2012 shared task using the LDA Gibbs Sampling algorithm (Griffiths, 2002) . In the topic-driven word-based graph model, the first layer denotes the relatedness among content words as captured in the above word-based graph model, and the second layer denotes the topic distribution, with the dashed lines between these two layers indicating the word-topic model returned by LDA.", "cite_spans": [ { "start": 543, "end": 560, "text": "(Griffiths, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 410, "end": 418, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Topic-driven Word-based Graph Model", "sec_num": null }, { "text": "Formally, the topic-driven word-based two-layer graph is defined as G topic (W, T, E w , E t ), where W={w i } is the set of words in one document and T={t m } is the set of topics in all documents; E w ={ew ij |w i , w j \u2208W} is the set of directed edges between words, and E t ={et im |w i \u2208W, t m \u2208T} is the set of undirected edges between words and topics. The transition probability P w (j|i) of ew ij is defined in the same way as P(j|i) in the word-based graph model. In addition, the transition probability P t (i,m) of et im in the word-topic model is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic-driven Word-based Graph Model", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_t(i,m) = \\frac{Rel(w_i, t_m)}{\\sum_k Rel(w_i, t_k)}", "eq_num": "(3)" } ], "section": "Topic-driven Word-based Graph Model", "sec_num": null }, { "text": "where Rel(w i , t m ) is the weight of word w i in topic t m calculated by the LDA Gibbs Sampling algorithm. On this basis, the transition probability P w (j|i) of ew ij is updated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic-driven Word-based Graph Model", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P'_w(j|i) = \\theta \\cdot P_w(j|i) + (1 - \\theta) \\cdot \\frac{\\sum_k P_t(i,k) \\, P_t(j,k)}{\\sum_{j'} \\sum_k P_t(i,k) \\, P_t(j',k)}", "eq_num": "(4)" } ], "section": "Topic-driven Word-based Graph Model", "sec_num": null }, { "text": "where k represents all topics linked to both word w i and word w j , and \u03b8\u2208[0,1] is the coefficient controlling the relative contributions of the lexical information in the current document and the topic information in all documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic-driven Word-based Graph Model", "sec_num": null },
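{ "text": "The following is a minimal sketch of the two-layer update, under the reconstruction of Formulas (3) and (4) given above; Rel(w_i, t_m) is assumed to be read from an LDA word-topic weight matrix (e.g., the output of GibbsLDA++), and the function names are hypothetical:

```python
def topic_transitions(rel, words, topics):
    # Formula (3): P_t(i,m) = Rel(w_i, t_m) / sum_k Rel(w_i, t_k)
    # rel: dict mapping (word, topic) -> LDA weight of the word in that topic
    p_t = {}
    for w in words:
        total = sum(rel.get((w, t), 0.0) for t in topics)
        for t in topics:
            p_t[(w, t)] = rel.get((w, t), 0.0) / total if total > 0 else 0.0
    return p_t

def updated_transitions(p_w, p_t, words, topics, theta=0.6):
    # Formula (4): blend the lexical transition P_w(j|i) with a normalized
    # topic-affinity term summed over topics shared by w_i and w_j
    p_new = {}
    for wi in words:
        affinity = {wj: sum(p_t[(wi, k)] * p_t[(wj, k)] for k in topics)
                    for wj in words if wj != wi}
        norm = sum(affinity.values())
        for wj, a in affinity.items():
            topic_term = a / norm if norm > 0 else 0.0
            p_new[(wi, wj)] = (theta * p_w.get((wi, wj), 0.0)
                               + (1 - theta) * topic_term)
    return p_new
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic-driven Word-based Graph Model", "sec_num": null },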
{ "text": "Finally, the weights of word nodes are calculated using the PageRank algorithm as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic-driven Word-based Graph Model", "sec_num": null }, { "text": "WS(w_i) = (1 - d) + d \u00b7 \u2211_j P'_w(i|j) \u00b7 WS(w_j) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic-driven Word-based Graph Model", "sec_num": null }, { "text": "where d is the damping factor as in the PageRank algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic-driven Word-based Graph Model", "sec_num": null }, { "text": "Given the graph models and the PageRank optimization algorithm discussed above, four kinds of contextual discourse information are extracted as inter-sentence features (Table 2 ).", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 176, "text": "(Table 2", "ref_id": null } ], "eq_spans": [], "section": "Negation Focus Identification via Graph Model", "sec_num": "4.2" }, { "text": "In particular, the total weight and the max weight of words in the focus candidate are calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negation Focus Identification via Graph Model", "sec_num": "4.2" }, { "text": "weight_total = \u2211_i WS(w_i) (6)    weight_max = max_i WS(w_i) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negation Focus Identification via Graph Model", "sec_num": "4.2" }, { "text": "where i ranges over the content words in the focus candidate. These two kinds of weights focus on different aspects of the focus candidate: the former captures the contribution of all content words, which is more beneficial for a long focus candidate, while the latter is biased towards a focus candidate that contains some critical word in the discourse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negation Focus Identification via Graph Model", "sec_num": "4.2" }, { "text": "No | Feature
1 | Total weight of words in the focus candidate using the co-occurrence similarity.
2 | Max weight of words in the focus candidate using the co-occurrence similarity.
3 | Total weight of words in the focus candidate using the WordNet similarity.
4 | Max weight of words in the focus candidate using the WordNet similarity.
Table 2 . Inter-sentence features extracted from the graph model.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 80, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Negation Focus Identification via Graph Model", "sec_num": null },
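{ "text": "Under the same sketch as above, the four inter-sentence features of Table 2 reduce to a sum and a max over the PageRank weights of the candidate's content words, computed once with co-occurrence-based weights and once with WordNet-based weights; the helper below is illustrative only:

```python
def candidate_features(candidate_words, ws_cooc, ws_wordnet):
    # ws_cooc / ws_wordnet: word -> PageRank weight from Formula (5), computed
    # with co-occurrence-based and WordNet-based similarities respectively
    def total_and_max(ws):
        weights = [ws.get(w, 0.0) for w in candidate_words]
        # Formulas (6) and (7): total and max weight over the candidate's words
        return sum(weights), (max(weights) if weights else 0.0)

    total_cooc, max_cooc = total_and_max(ws_cooc)
    total_wn, max_wn = total_and_max(ws_wordnet)
    return [total_cooc, max_cooc, total_wn, max_wn]  # features 1-4 of Table 2
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negation Focus Identification via Graph Model", "sec_num": null },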
{ "text": "To directly evaluate the contribution of contextual discourse information to negation focus identification, we incorporate the four inter-sentence features from the topic-driven word-based graph model into a negation focus identifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "In this section, we describe the experimental settings and systematically evaluate our negation focus identification approach, with a focus on exploring the effectiveness of contextual discourse information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimentation", "sec_num": "5" }, { "text": "In all our experiments, we employ the *SEM'2012 shared task corpus (Morante and Blanco, 2012) 2 . As a freely downloadable resource, the *SEM shared task corpus is annotated on top of PropBank, which uses the WSJ section of the Penn TreeBank. In particular, negation focus annotation in this corpus is restricted to verbal negations (with the corresponding mark MNEG in PropBank). On the 50% of the corpus annotated by two annotators, the inter-annotator agreement was 0.72 (Blanco and Moldovan, 2011) . Along with negation focus annotation, this corpus also contains other annotations, such as POS tags, named entities, chunks, constituent trees, dependency trees, and semantic roles.", "cite_spans": [ { "start": 67, "end": 93, "text": "(Morante and Blanco, 2012)", "ref_id": "BIBREF14" }, { "start": 466, "end": 493, "text": "(Blanco and Moldovan, 2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "In total, this corpus provides 3,544 instances of negation focus annotation. For fair comparison, we adopt the same partition as the *SEM'2012 shared task in all our experiments, i.e., 2,302 instances for training, 530 for development, and 712 for testing. Although for each instance the corpus only provides the current sentence and the previous and next sentences as its context, we resort to the Penn TreeBank 3 to obtain the corresponding document as its discourse context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "As in the *SEM'2012 shared task, the evaluation is made using precision, recall, and F1-score. In particular, a true positive (TP) requires an exact match for the negation focus, a false positive (FP) occurs when a system predicts a non-existing negation focus, and a false negative (FN) occurs when the gold annotations specify a negation focus but the system makes no prediction. For each instance, the predicted focus is considered correct if it is a complete match with a gold annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null },
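{ "text": "Concretely, under this exact-match convention, precision, recall, and F1 can be computed as in the following sketch; this is not the official scorer, and counting a wrong (non-matching) prediction as both an FP and an FN is one common convention that may differ from the shared task's scorer:

```python
def prf1(gold, predicted):
    # gold / predicted: instance id -> focus string (predicted may omit ids)
    tp = fp = fn = 0
    for key, gold_focus in gold.items():
        pred_focus = predicted.get(key)
        if pred_focus is None:
            fn += 1                      # gold focus, but no prediction
        elif pred_focus == gold_focus:
            tp += 1                      # exact (complete) match
        else:
            fp += 1                      # predicted a non-existing focus
            fn += 1                      # ... and missed the gold one
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null },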
{ "text": "Besides, to show whether an improvement is significant, we conducted significance testing using the z-test, as described in Blanco and Moldovan (2011) .", "cite_spans": [ { "start": 119, "end": 145, "text": "Blanco and Moldovan (2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "In our experiments, we report not only the default performance with the gold additional annotated features provided by the *SEM'2012 shared task corpus and the Penn TreeBank, but also the performance with various kinds of features extracted automatically, using the following toolkits: \u2022 Syntactic Parser: We employ the Stanford Parser 4 (Klein and Manning, 2003; De Marneffe et al., 2006) for tokenization, constituent parsing, and dependency parsing. \u2022 Named Entity Recognizer: We employ the Stanford NER 5 (Finkel et al., 2005) to obtain named entities.", "cite_spans": [ { "start": 330, "end": 355, "text": "(Klein and Manning, 2003;", "ref_id": "BIBREF10" }, { "start": 356, "end": 381, "text": "De Marneffe et al., 2006)", "ref_id": "BIBREF2" }, { "start": 492, "end": 513, "text": "(Finkel et al., 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Toolkits", "sec_num": null }, { "text": "\u2022 Semantic Role Labeler: We employ the semantic role labeler described in Punyakanok et al (2008) . \u2022 Topic Modeler: For estimating the transition probability P t (i,m), we employ GibbsLDA++ 6 , an LDA model using the Gibbs Sampling technique for parameter estimation and inference. \u2022 Classifier: We employ SVM Light 7 with default parameters as our classifier. Table 3 shows the performance of the two baselines, the decision tree-based classifier as in Blanco and Moldovan (2011) and our ranking SVM-based classifier. It shows that our ranking SVM-based baseline slightly improves the F1-measure, by 2.52, over the decision tree-based baseline, largely due to the incorporation of more refined features.", "cite_spans": [ { "start": 78, "end": 101, "text": "Punyakanok et al (2008)", "ref_id": "BIBREF18" }, { "start": 449, "end": 475, "text": "Blanco and Moldovan (2011)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 356, "end": 363, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Toolkits", "sec_num": null }, { "text": "System | P(%) | R(%) | F1
Baseline C4.5 | 66.73 | 49.93 | 57.12
Baseline SVM | 60.22 | 59.07 | 59.64
Table 3 . Performance of baselines with only intra-sentence information.", "cite_spans": [ { "start": 29, "end": 33, "text": "C4.5", "ref_id": null } ], "ref_spans": [ { "start": 83, "end": 90, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "With Only Intra-sentence Information", "sec_num": null }, { "text": "Error analysis of the ranking SVM-based baseline on the development data shows that 72% of its errors are caused by ignoring inter-sentence information. For example, among the 42 instances listed in the category \"#Inter-Sentence Only\" in Table 1 , only 7 instances can be identified correctly by the ranking SVM-based classifier. 
With about 4 focus candidates per sentence on average, this percentage is even lower than random.", "cite_spans": [], "ref_spans": [ { "start": 238, "end": 245, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "With Only Intra-sentence Information", "sec_num": null }, { "text": "To explore the usefulness of pure contextual discourse information in negation focus identification, we employ only inter-sentence features in the ranking SVM-based classifier. First of all, we estimate two parameters for our topic-driven word-based graph model: the topic number T for the topic model and the coefficient \u03b8 between P w (j|i) and P t (i,m) in Formula 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "With Only Inter-sentence Information", "sec_num": null }, { "text": "Given the LDA Gibbs Sampling model with parameters \u03b1 = 50/T and \u03b2 = 0.1, we vary T from 20 to 100 with an interval of 10 to find the optimal T. Figure 3 shows the experimental results of varying T (with \u03b8 = 0.5) on the development data. The best performance is achieved when T = 50 (51.11 in F1). Therefore, we set T to 50 in our following experiments. For the parameter \u03b8, a trade-off between the transition probability P w (j|i) (word to word) and the transition probability P t (i,m) (word to topic) in updating P' w (j|i), we vary it from 0 to 1 with an interval of 0.1. Figure 4 shows the experimental results of varying \u03b8 (with T=50) on the development data. The best performance is achieved when \u03b8 = 0.6, which is adopted hereafter in all our experiments. This indicates that the direct lexical information in the current document contributes more to negation focus identification than the indirect topic information in all documents. It also shows that the two kinds of information are complementary for negation focus identification. Table 4 . Performance with only inter-sentence information. Table 4 shows the performance of negation focus identification with only inter-sentence features. The system with inter-sentence features from the topic-driven word-based graph model significantly improves the F1-measure, by 8.86, over the system with inter-sentence features from the word-based graph model, largely due to the usefulness of topic information.", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 153, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 583, "end": 591, "text": "Figure 4", "ref_id": null }, { "start": 1118, "end": 1125, "text": "Table 4", "ref_id": null }, { "start": 1178, "end": 1185, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "With Only Inter-sentence Information", "sec_num": null }, { "text": "In comparison with Table 3 , the system with only intra-sentence features achieves better performance than the one with only inter-sentence features (59.64 vs. 52.61 in F1-measure). Table 5 shows that enriching intra-sentence features with inter-sentence features significantly (p<0.01) improves the performance, by 9.85 in F1-measure, over the better baseline. 
This indicates the usefulness of such contextual discourse information and the effectiveness of our topic-driven word-based graph model in negation focus identification.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 26, "text": "Table 3", "ref_id": null }, { "start": 194, "end": 201, "text": "Table 5", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "With Only Inter-sentence Information", "sec_num": null }, { "text": "Table 6 . Performance comparison of systems on negation focus identification with automatically extracted features (header: System, P(%), R(%), F1).", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "With both Intra- and Inter-sentence Information", "sec_num": null }, { "text": "Besides, Table 6 shows the performance of our best system with all features automatically extracted using the toolkits described in Section 5.1. Compared with our best system employing gold additional annotated features (the last line in Table 5 ), the corresponding system with automatically extracted features (the last line in Table 6 ) shows a decrease of less than 4 in F1-measure. This demonstrates the practicality of our approach.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 16, "text": "Table 6", "ref_id": null }, { "start": 241, "end": 248, "text": "Table 5", "ref_id": "TABREF1" }, { "start": 330, "end": 337, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "With both Intra- and Inter-sentence Information", "sec_num": null }, { "text": "In comparison with the best-reported performance on the *SEM'2012 shared task (Rosenberg and Bergler, 2012), our system performs better by about 11 in F1-measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "With both Intra- and Inter-sentence Information", "sec_num": null }, { "text": "While this paper verifies the usefulness of contextual discourse information for negation focus identification, the performance with only inter-sentence features is still weaker than that with only intra-sentence features. There are two main reasons. On the one hand, the former employs an unsupervised approach without prior knowledge for training. On the other hand, the usefulness of inter-sentence features depends on the assumption that a negation focus relates to the content most relevant to the author's intention in a discourse. If relevant information is lacking in the discourse context, the negation focus becomes difficult to identify with only inter-sentence features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "Error analysis also shows that some negation focuses are very difficult to identify, even for a human being. Consider sentence (3) in Section 1: if given the sentence because of her neighbors' protests, but her husband doesn't think so as its following context, both Helen and to play the violin can become the negation focus. Moreover, the inter-annotator agreement in the first round of negation focus annotation only reached 0.72 (Blanco and Moldovan, 2011) . 
This indicates the inherent difficulty of negation focus identification.", "cite_spans": [ { "start": 447, "end": 474, "text": "(Blanco and Moldovan, 2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "In this paper, we propose a graph model to enrich intra-sentence features with inter-sentence features from both lexical and topic perspectives. In this graph model, the relatedness between words is calculated by word co-occurrence, WordNet-based similarity, and topic-driven similarity. Evaluation on the *SEM 2012 shared task corpus indicates the usefulness of contextual discourse information for negation focus identification and the effectiveness of our graph model in capturing such global information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In future work, we will focus on exploring more contextual discourse information via the graph model and better ways of integrating intra- and inter-sentence information for negation focus identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In *SEM'2013, the shared task was changed to focus on \"Semantic Textual Similarity\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.clips.ua.ac.be/sem2012-st-neg/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.cis.upenn.edu/~treebank/ 4 http://nlp.stanford.edu/software/lex-parser.shtml 5 http://nlp.stanford.edu/ner/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://gibbslda.sourceforge.net/ 7 http://svmlight.joachims.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research is supported by the National Natural Science Foundation of China, No.61272260, No.61331011, No.61273320 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Semantic Representation of Negation Using Focus Detection", "authors": [ { "first": "Eduardo", "middle": [], "last": "Blanco", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "581--589", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduardo Blanco and Dan Moldovan. 2011. Semantic Representation of Negation Using Focus Detection. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 581-589, Portland, Oregon, June 19-24, 2011.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A simple algorithm for identifying negated findings and diseases in discharge summaries", "authors": [ { "first": "Wendy", "middle": [ "W" ], "last": "Chapman", "suffix": "" }, { "first": "Will", "middle": [], "last": "Bridewell", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Hanbury", "suffix": "" }, { "first": "Gregory", "middle": [ "F" ], "last": "Cooper", "suffix": "" }, { "first": "Bruce", "middle": [ "G" ], "last": "Buchanan", "suffix": "" } ], "year": 2001, "venue": "Journal of Biomedical Informatics", "volume": "34", "issue": "", "pages": "301--310", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wendy W. 
Chapman, Will Bridewell, Paul Hanbury, Gregory F. Cooper, and Bruce G. Buchanan. 2001. A simple algorithm for identifying negated findings and diseases in discharge summaries. Journal of Biomedical Informatics, 34:301-310.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Generating Typed Dependency Parses from Phrase Structure Parses", "authors": [ { "first": "Marie-Catherine De", "middle": [], "last": "Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine De Marneffe, Bill MacCartney and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of LREC'2006.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Implied Negation in Discourse", "authors": [ { "first": "Yun", "middle": [], "last": "Ding", "suffix": "" } ], "year": 2011, "venue": "Journal of Theory and Practice in Language Studies", "volume": "1", "issue": "1", "pages": "44--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yun Ding. 2011. Implied Negation in Discourse. Journal of Theory and Practice in Language Studies, 1(1): 44-51, Jan 2011.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Incorporating non-local information into information extraction systems by gibbs sampling", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "363--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 363-370, Stroudsburg, PA, USA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Gibbs sampling in the generative model of Latent Dirichlet Allocation", "authors": [ { "first": "Tom", "middle": [], "last": "Griffiths", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Griffiths. 2002. Gibbs sampling in the generative model of Latent Dirichlet Allocation. Tech. rep., Stanford University.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Natural History of Negation", "authors": [ { "first": "", "middle": [], "last": "Laurence R Horn", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurence R Horn. 1989. A Natural History of Negation. 
Chicago University Press, Chicago, IL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Answering Opinion Questions with Random Walks on Graphs", "authors": [ { "first": "Fangtao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP", "volume": "", "issue": "", "pages": "2--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fangtao Li, Yang Tang, Minlie Huang, and Xiaoyan Zhu. 2009. Answering Opinion Questions with Random Walks on Graphs. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 737-745, Suntec, Singapore, 2-7 Aug 2009.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning the Scope of Negation via Shallow Semantic Parsing", "authors": [ { "first": "Junhui", "middle": [], "last": "Li", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hongling", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qiaoming", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "671--679", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junhui Li, Guodong Zhou, Hongling Wang, and Qiaoming Zhu. 2010. Learning the Scope of Negation via Shallow Semantic Parsing. In Proceedings of the 23rd International Conference on Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 671-679.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Overview of BioNLP'09 Shared Task on Event Extraction", "authors": [ { "first": "Jin-Dong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Tomoko", "middle": [], "last": "Ohta", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Yoshinobu", "middle": [], "last": "Kano", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the BioNLP'2009 Workshop Companion Volume for Shared Task", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin-Dong Kim, Tomoko Ohta, Sampo Pyysalo, Yoshinobu Kano, and Jun'ichi Tsujii. 2009. Overview of BioNLP'09 Shared Task on Event Extraction. In Proceedings of the BioNLP'2009 Workshop Companion Volume for Shared Task. Stroudsburg, PA, USA: Association for Computational Linguistics, 1-9.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Accurate Unlexicalized Parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. 
In Proceedings of the 41st Meeting of the Association for Computational Linguistics, pages 423-430.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Commun. ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller. 1995. Wordnet: a lexical database for english. Commun. ACM, 38(11):39-41.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning the Scope of Negation in Biomedical Texts", "authors": [], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "715--724", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roser Morante, Anthony Liekens and Walter Daelemans. 2008. Learning the Scope of Negation in Biomedical Texts. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 715-724, Honolulu, October 2008.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Proceedings of the Workshop on Negation and Speculation in Natural Language Processing. University of Antwerp", "authors": [], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roser Morante and Caroline Sporleder, editors. 2010. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing. University of Antwerp, Uppsala, Sweden.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "*SEM 2012 Shared Task: Resolving the Scope and Focus of Negation", "authors": [ { "first": "Roser", "middle": [], "last": "Morante", "suffix": "" }, { "first": "Eduardo", "middle": [], "last": "Blanco", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM)", "volume": "", "issue": "", "pages": "265--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roser Morante and Eduardo Blanco. 2012. *SEM 2012 Shared Task: Resolving the Scope and Focus of Negation. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM), pages 265-274, Montreal, Canada, June 7-8, 2012.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Modality and Negation: An Introduction to the Special Issue", "authors": [ { "first": "Roser", "middle": [], "last": "Morante", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "" } ], "year": 2012, "venue": "Computational Linguistics", "volume": "38", "issue": "2", "pages": "223--260", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roser Morante and Caroline Sporleder. 2012. Modality and Negation: An Introduction to the Special Issue. Computational Linguistics, 2012, 38(2): 223-260.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Conan Doyle-neg: Annotation of negation cues and their scope in Conan Doyle stories", "authors": [ { "first": "Roser", "middle": [], "last": "Morante", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2012, "venue": "Proceedings of LREC 2012", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roser Morante and Walter Daelemans. 2012. 
Conan Doyle-neg: Annotation of negation cues and their scope in Conan Doyle stories. In Proceedings of LREC 2012, Istanbul.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The pagerank citation ranking: Bringing order to the web", "authors": [ { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1998. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford University.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The importance of syntactic parsing and inference in semantic role labeling", "authors": [ { "first": "Vasin", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "2", "pages": "257--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287, June.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "UConcordia: CLaC Negation Focus Detection at *Sem", "authors": [ { "first": "Sabine", "middle": [], "last": "Rosenberg", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Bergler", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM)", "volume": "", "issue": "", "pages": "294--300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Rosenberg and Sabine Bergler. 2012. UConcordia: CLaC Negation Focus Detection at *Sem 2012. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM), pages 294-300, Montreal, Canada, June 7-8, 2012. Ton van der Wouden. 1997. Negative Contexts: Collocation, Polarity, and Multiple Negation. Routledge, London.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes", "authors": [ { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "Gy\u00f6rgy", "middle": [], "last": "Szarvas", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "Gy\u00f6rgy", "middle": [], "last": "M\u00f3ra", "suffix": "" }, { "first": "J\u00e1nos", "middle": [], "last": "Csirik", "suffix": "" } ], "year": 2008, "venue": "BMC Bioinformatics", "volume": "9", "issue": "11", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veronika Vincze, Gy\u00f6rgy Szarvas, Rich\u00e1rd Farkas, Gy\u00f6rgy M\u00f3ra, and J\u00e1nos Csirik. 2008. The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes. 
BMC Bioinformatics, 9(Suppl 11):S9.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Multi-document summarization using cluster-based link analysis", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Jianwu", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "299--306", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan and Jianwu Yang. 2008. Multi-document summarization using cluster-based link analysis. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 299-306.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Word-based graph model.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "Topic-driven word-based graph model.", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "Performance with varying T.", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "(Table 4 row) topic-driven word-based graph model: P 54.59, R 50.76, F1 52.61", "uris": null }, "TABREF1": { "content": "
Performance comparison of systems on negation focus identification.
System | P(%) | R(%) | F1
Baseline C4.5 with intra feat. only (auto) | 60.94 | 44.62 | 51.52
Baseline SVM with intra feat. only (auto) | 53.81 | 51.67 | 52.72
Ours with both feat. using word-based GM (auto) | 58.77 | 57.19 | 57.97
Ours with both feat. using topic-driven word-based GM (auto) | 66.74 | 64.53 | 65.62
", "text": "", "html": null, "num": null, "type_str": "table" } } } }