{ "paper_id": "Y07-1031", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:46:47.788423Z" }, "title": "Using Non-Local Features to Improve Named Entity Recognition Recall *", "authors": [ { "first": "Xinnian", "middle": [], "last": "Mao", "suffix": "", "affiliation": { "laboratory": "", "institution": "France Telecom R&D Center (Beijing)", "location": { "postCode": "100080", "settlement": "Beijing", "country": "P.R.China" } }, "email": "xinnian.mao@orange-ftgroup.com" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "France Telecom R&D Center (Beijing)", "location": { "postCode": "100080", "settlement": "Beijing", "country": "P.R.China" } }, "email": "" }, { "first": "Yuan", "middle": [], "last": "Dong", "suffix": "", "affiliation": { "laboratory": "", "institution": "France Telecom R&D Center (Beijing)", "location": { "postCode": "100080", "settlement": "Beijing", "country": "P.R.China" } }, "email": "yuan.dong@orange-ftgroup.com" }, { "first": "Saike", "middle": [], "last": "He", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Posts and Telecommunications", "location": { "postCode": "100876", "settlement": "Beijing", "country": "P.R.China" } }, "email": "" }, { "first": "Haila", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "France Telecom R&D Center (Beijing)", "location": { "postCode": "100080", "settlement": "Beijing", "country": "P.R.China" } }, "email": "haila.wang@orange-ftgroup.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve the NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary and entity class are explored under Conditional Random Fields (CRFs) framework. Experiments on SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of the state-of-the-art NER systems. Incorporating the non-local features into the NER systems using local features alone, our best system achieves a 23.56% (25.26%) relative error reduction on the recall and 17.10% (11.36%) relative error reduction on the F1 score; the improved F1 score 89.38% (90.09%) is significantly superior to the best NER system with F1 of 86.51% (89.03%) participated in the closed track.", "pdf_parse": { "paper_id": "Y07-1031", "_pdf_hash": "", "abstract": [ { "text": "Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve the NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary and entity class are explored under Conditional Random Fields (CRFs) framework. Experiments on SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of the state-of-the-art NER systems. 
Incorporating the non-local features into an NER system that uses local features alone, our best system achieves a 23.56% (25.26%) relative error reduction in recall and a 17.10% (11.36%) relative error reduction in F1 score; the improved F1 score of 89.38% (90.09%) is significantly superior to the F1 of 86.51% (89.03%) obtained by the best NER system that participated in the closed track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Named entity recognition (NER) is a subtask of information extraction that seeks to locate and classify predefined entities, such as names of persons, locations and organizations, in unstructured texts. It is a fundamental step for many natural language processing applications, such as Information Extraction (IE), Information Retrieval (IR) and Question Answering (QA). Most empirical approaches currently employed for the NER task make decisions based only on local context, performing exact inference under a data-independence assumption (Krishnan and Manning, 2006) . This assumption often does not hold, however, because non-local dependencies are prevalent in natural language (including the NER task). How to utilize non-local dependencies effectively is thus a key issue in the NER task. Unfortunately, little research has been devoted to this issue; existing work mainly focuses on using non-local information to further improve NER label consistency.", "cite_spans": [ { "start": 538, "end": 566, "text": "(Krishnan and Manning, 2006)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There are two ways to use non-local information. One is to add edges to the graphical model structure to represent the distant dependencies; the other is to encode the non-locality with non-local features. In the first approach, however, heuristic rules are needed to find the dependencies (Bunescu and Mooney, 2004; Sutton and McCallum, 2004) , or penalties for label inconsistency must be set by hand in an ad-hoc manner (Finkel et al., 2005) . Furthermore, a high computational cost is paid for approximate inference. To establish the long-distance dependencies easily and to overcome the disadvantages of approximate inference, Krishnan and Manning (2006) propose a two-stage approach using Conditional Random Fields (CRFs) with exact inference. They represent the non-locality with non-local features: they extract the non-local features from the output of a first-stage CRF that uses local context alone, and then incorporate the non-local features into a second CRF. The features in this approach, however, are only used to improve label consistency.", "cite_spans": [ { "start": 301, "end": 327, "text": "(Bunescu and Mooney, 2004;", "ref_id": "BIBREF0" }, { "start": 328, "end": 354, "text": "Sutton and McCallum, 2004)", "ref_id": "BIBREF11" }, { "start": 423, "end": 444, "text": "(Finkel et al., 2005)", "ref_id": "BIBREF3" }, { "start": 631, "end": 658, "text": "Krishnan and Manning (2006)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To the best of our knowledge, non-local information has not previously been explored as a way to improve NER recall; yet NER is persistently impaired by low recall, due to the imbalanced distribution in which the NONE class dominates the entity classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
}, { "text": "Classifiers built on such data typically have a higher precision and a lower recall and tend to overproduce the NONE class (Kambhatla, 2006) . In this paper, we employ non-local information to recall the missed entities. Similar to Krishnan and Manning (2006) , we also encode non-local information with features and apply the simple two-stage architecture. Different from their work for improve label consistency, their features are activated on the recognized entities coming from the first CRF, the non-local features we design are used to recall more missed entities which are seen in the training data or unseen entities but some of their occurrences being recognized correctly in the first stage, our features are fired on the raw token sequence directly with forward maximum match. Compared to their non-local information extracted from training data with 10-fold cross-validation, our non-local information is extracted from the training date directly; our approach obtaining the non-local features is simpler. Moreover, we design different non-local features encoding different useful information for NER two subtasks: entity boundary detection and entity semantic classification. Our features are also inspired by Wong and Ng (2007) . They extract entity majority type features from unlabelled data with an initial maximum entropy classifier. Our approach is validated on the third International Chinese language processing bakeoff (SIGHAN 2006) MSRA and CityU NER closed track, the experimental results show that non-local features can significantly improve the recall of the state-of-the-art NER system using local context alone. The remainder of the paper is structured as follows. In Section 2, we introduce the first stage CRF with local features alone; then we describe the second stage CRF using non-local features we design in Section 3. We demonstrate the experiments in Section 4 and we conclude the paper in Section 5.", "cite_spans": [ { "start": 123, "end": 140, "text": "(Kambhatla, 2006)", "ref_id": "BIBREF4" }, { "start": 232, "end": 259, "text": "Krishnan and Manning (2006)", "ref_id": "BIBREF5" }, { "start": 1224, "end": 1242, "text": "Wong and Ng (2007)", "ref_id": "BIBREF12" }, { "start": 1442, "end": 1455, "text": "(SIGHAN 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To validate the effectiveness of our approach of exploiting non-local features, we need to establish a baseline with state-of-the-art performance using local context alone. Similar to (Krishnan and Manning, 2006) , we employ two-stage architecture under conditional random fields (CRFs) framework. In the first stage, we build the baseline with local features only, and then we build the second NER system with non-local features. We will introduce them step by step.", "cite_spans": [ { "start": 184, "end": 212, "text": "(Krishnan and Manning, 2006)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Our Baseline NER System", "sec_num": "2." }, { "text": "We regard the NER task as a sequence labeling problem and apply Conditional Random Fields (Lafferty et al., 2001; Sha and Pereira, 2003) since it represents the state of the art in sequence modeling and has also been very effective at NER task. 
A linear-chain CRF is an undirected graphical model G = (V, E), where V is the set of random variables Y = {Y i | 1\u2264i\u2264n}, one for each of the n tokens in an input sequence, and E = {(Y i\u22121 , Y i ) | 1<i\u2264n} is the set of (n \u2212 1) edges forming a linear chain. Following Lafferty et al. (2001) , the conditional probability of the state sequence (s_1, s_2, ..., s_n) given the input sequence (o_1, o_2, ..., o_n) is computed as follows:", "cite_spans": [ { "start": 90, "end": 113, "text": "(Lafferty et al., 2001;", "ref_id": "BIBREF6" }, { "start": 114, "end": 136, "text": "Sha and Pereira, 2003)", "ref_id": "BIBREF10" }, { "start": 489, "end": 511, "text": "Lafferty et al. (2001)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional random fields", "sec_num": "2.1." }, { "text": "P(s|o) = \\frac{1}{Z(o)} \\exp\\Big( \\sum_{t=1}^{T} \\sum_{k=1}^{K} \\lambda_k f_k(s_{t-1}, s_t, o, t) \\Big) \\quad (1), where Z(o) is the normalization factor obtained by summing the exponential term over all state sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional random fields", "sec_num": "2.1." }, { "text": "Here f_k is an arbitrary feature function and \\lambda_k is the weight of that feature function; the weights can be optimized through iterative scaling algorithms like GIS (Darroch and Ratcliff, 1972) and IIS (Della Pietra et al., 1997) . However, recent research has shown that quasi-Newton methods, such as L-BFGS, are significantly more efficient (Byrd et al., 1994; Malouf, 2002; Sha and Pereira, 2003) .", "cite_spans": [ { "start": 150, "end": 178, "text": "(Darroch and Ratcliff, 1972)", "ref_id": "BIBREF2" }, { "start": 183, "end": 214, "text": "IIS (Della Pietra et al., 1997)", "ref_id": null }, { "start": 330, "end": 349, "text": "(Byrd et al., 1994;", "ref_id": "BIBREF1" }, { "start": 350, "end": 363, "text": "Malouf, 2002;", "ref_id": "BIBREF8" }, { "start": 364, "end": 386, "text": "Sha and Pereira, 2003)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional random fields", "sec_num": "2.1." }, { "text": "The first-stage CRF uses simple character n-gram features in a five-character window around the current position, namely the unigrams C -2 , C -1 , C 0 , C 1 , C 2 and the bigrams C -2 C -1 , C -1 C 0 , C 0 C 1 , C 1 C 2 , and C -1 C 1 , where C 0 is the current character, C 1 the next character, C 2 the second character after C 0 , C -1 the character preceding C 0 , and C -2 the second character before C 0 . In addition, the first CRF uses the tag bigram feature, so the label of a token depends directly on the labels of the previous and next tokens. Although these local features are simple, they give us a state-of-the-art baseline using local information alone, as described in Section 4.
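As an illustration of these templates (our sketch, not the authors' code; sklearn-crfsuite with L-BFGS is an assumed stand-in for the CRF toolkit actually used), the features for one character position can be written as:

```python
# Sketch of the Section 2.2 local feature templates: character unigrams and
# bigrams in a five-character window; the tag bigram is handled by the CRF.
import sklearn_crfsuite

def local_features(chars, i):
    def C(k):  # character at offset k from position i, padded at the boundaries
        j = i + k
        return chars[j] if 0 <= j < len(chars) else '<PAD>'
    return {
        'C-2': C(-2), 'C-1': C(-1), 'C0': C(0), 'C1': C(1), 'C2': C(2),
        'C-2C-1': C(-2) + C(-1), 'C-1C0': C(-1) + C(0),
        'C0C1': C(0) + C(1), 'C1C2': C(1) + C(2), 'C-1C1': C(-1) + C(1),
    }

def sent_to_features(chars):
    return [local_features(chars, i) for i in range(len(chars))]

# Hypothetical training call; X_train is a list of character lists and
# y_train the corresponding OBIE label lists (cf. Section 4.1).
# crf = sklearn_crfsuite.CRF(algorithm='lbfgs')  # quasi-Newton training, cf. Section 2.1
# crf.fit([sent_to_features(s) for s in X_train], y_train)
```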
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local features", "sec_num": "2.2." }, { "text": "As Kambhatla (2006) points out, NER systems typically have higher precision and lower recall and tend to overproduce the NONE class, because the NONE class dominates all the other classes in the task. In natural language, different sentences contain different amounts of useful contextual information; entities are missed when their surrounding context is not indicative enough for statistical approaches (including CRFs) to make a correct decision. When we analyze the missed occurrences of the missed entities further, we can put them into three groups. The first is entities seen in the training data; the second is unseen occurrences where some other occurrences of the same entity have been recognized correctly in sufficiently indicative contexts; the third is unseen occurrences where no occurrence is recognized correctly. In the NER task, considering the influences between extractions can be very useful: if the context surrounding one occurrence of a token sequence is very indicative of it being an entity, then this should also influence the tagging of another occurrence of the same token sequence in a different context that is not indicative of an entity (Bunescu and Mooney, 2004) . So if we consider the non-local dependencies between identical entities, some of these missed occurrences can be recognized correctly. We describe how to capture this non-locality to recall more missed entities in Section 3.", "cite_spans": [ { "start": 3, "end": 19, "text": "Kambhatla (2006)", "ref_id": "BIBREF4" }, { "start": 1199, "end": 1225, "text": "(Bunescu and Mooney, 2004)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Low recall in NER task", "sec_num": "2.3." }, { "text": "If the context surrounding one occurrence of a token sequence is very indicative of it being an entity, then this should also influence the labeling of another occurrence of the same token sequence in a different context that is not indicative of an entity (Bunescu and Mooney, 2004) . Considering the non-local dependencies between identical entities can therefore be very useful: if these dependencies are incorporated into the CRF model, some of the missed entities will be recalled correctly. Figure 1 shows the flow of using non-local features in the two-stage architecture under the CRFs framework. The first CRF is trained with local features alone as the baseline (described in Section 2); we then run the first CRF over the test data and obtain the recognized entities, together with their types, from its output. The second CRF uses the non-local features derived from an entity list that merges the output of the first CRF on the test data with the entities extracted directly from the training data. To keep our conclusions flexible and general, we use only the non-local information found in the labeled training data and the test data, rather than external knowledge sources such as part-of-speech tags, gazetteers or external lexica.
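The entity-list construction can be made concrete with a small sketch (ours, not the authors' code; names are hypothetical). It reads (entity, class) pairs off OBIE-labeled character sequences, and the final list is the union of the pairs read from the training labels and from the first CRF's output on the test data:

```python
# Read (entity string, class) pairs off OBIE-labeled character sequences.
def extract_entities(sents, label_seqs):
    ents = set()
    for chars, labels in zip(sents, label_seqs):
        i = 0
        while i < len(labels):
            if labels[i].startswith('B-'):
                cls, k = labels[i][2:], i
                # extend through I-cls tokens up to the closing E-cls tag
                while labels[i][0] != 'E' and i + 1 < len(labels) \
                        and labels[i + 1] in ('I-' + cls, 'E-' + cls):
                    i += 1
                ents.add((''.join(chars[k:i + 1]), cls))
            i += 1
    return ents

print(extract_entities([list('我爱北京')], [['O', 'O', 'B-LOC', 'E-LOC']]))
# {('北京', 'LOC')}

# entity_list = extract_entities(train_sents, train_labels) \
#             | extract_entities(test_sents, first_stage_output)
```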
", "cite_spans": [ { "start": 478, "end": 504, "text": "(Bunescu and Mooney, 2004)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 720, "end": 728, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Recalling Missed Entities with Non-local Features", "sec_num": "3." }, { "text": "We design four kinds of non-local features that encode different useful information for the two NER subtasks, i.e. entity boundary detection and entity semantic classification. The non-local features are fired on token sequences that match an entity in the entity list under forward maximum matching (FMM). We describe them one by one as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Four kinds of non-local features", "sec_num": "3.2." }, { "text": "These refer to the occurrence information assigned to a token sequence that exactly matches the entity list. These features capture the dependencies between identical candidate entities, so that different occurrences of the same candidate entity can be recalled more readily.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity-occurrence features (F1):", "sec_num": null }, { "text": "These refer to the position information (start, middle and last) assigned to a token sequence that exactly matches the entity list. These features enable us to capture the dependencies between identical candidate entities and their boundaries. Entity-majority features (F3): These refer to the majority label assigned to a token sequence that exactly matches the entity list. These features enable us to capture the dependencies between identical entities and their classes, so that different occurrences of the same candidate entity can be recalled more readily while their label consistency is also taken into account.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Token-position features (F2):", "sec_num": null }, { "text": "These features capture the non-local information of F2 and F3 simultaneously: they take the entity boundary and the entity semantic class into account at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Token-position & entity-majority features (F4):", "sec_num": null }, { "text": "Similar non-local features have been applied to English NER in a one-step approach (Krishnan and Manning, 2006; Wong and Ng, 2007) , where they are employed to improve entity consistency among different occurrences. The features are assigned to token sequences that exactly match the (entity, majority-type) list in forward maximum matching (FMM) fashion. During training or testing, when the CRF tagger encounters a token sequence C 1 ...C n such that (C k ...C s ) (k\u22651, s\u2264n) is the longest token sequence existing in the entity list, the corresponding features are turned on for each token in C k ...C s . For example, consider the following sentence: \u6211(wo)\u7231(ai)\u5317(bei)\u4eac(jing)\u5929(tian)\u5b89(an)\u95e8(men) (I love Beijing Tiananmen). If (\u5317\u4eac, Maj-LOC), (\u4eac, Maj-LOC), and (\u5929\u5b89\u95e8, Maj-LOC) are present in the (entity, majority-type) list, the features shown in Table 1 will be turned on. Notice that the feature turned on for \u4eac is E-Maj-LOC, not B-Maj-LOC, because the longest matching sequence is \u5317\u4eac. Unlike (Krishnan and Manning, 2006; Wong and Ng, 2007) , who assign only the majority-type information, such as Maj-LOC, to each token in a matched candidate and ignore boundary information such as B, I and E, we also encode boundaries. Ignoring them is acceptable in English corpora, where boundary information is largely captured by capitalization; but in Chinese NER, entity boundary detection is more difficult than entity classification, so we assign the boundary information, represented as B, I and E, to each token in the matched candidates. Please note that not all matching token sequences are true candidates. False candidates arise in two ways: the boundaries may be correct while the occurrence is actually a common word 1 , or the match itself may be an FMM error; the features are therefore soft constraints.
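The assignment can be sketched as follows (a runnable illustration of our own, assuming a toy (entity, majority-type) list; it is not the authors' implementation):

```python
# Assign Token-position & entity-majority (F4) features by forward maximum
# matching: at each position, take the longest entity-list match and mark its
# tokens B/I/E plus the entity's majority type.
def fmm_features(chars, entity_majority):
    feats = ['-'] * len(chars)
    max_len = max(len(e) for e in entity_majority)
    i = 0
    while i < len(chars):
        for L in range(min(max_len, len(chars) - i), 0, -1):  # longest first
            cand = ''.join(chars[i:i + L])
            if cand in entity_majority:
                for k in range(L):
                    pos = 'B' if k == 0 else ('E' if k == L - 1 else 'I')
                    feats[i + k] = pos + '-Maj-' + entity_majority[cand]
                i += L
                break
        else:
            i += 1
    return feats

entity_majority = {'北京': 'LOC', '京': 'LOC', '天安门': 'LOC'}
print(fmm_features(list('我爱北京天安门'), entity_majority))
# ['-', '-', 'B-Maj-LOC', 'E-Maj-LOC', 'B-Maj-LOC', 'I-Maj-LOC', 'E-Maj-LOC']
```

On the example sentence this reproduces Table 1: \u4eac receives E-Maj-LOC rather than B-Maj-LOC because the longest match at that point is \u5317\u4eac.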
", "cite_spans": [ { "start": 73, "end": 101, "text": "(Krishnan and Manning, 2006;", "ref_id": "BIBREF5" }, { "start": 102, "end": 120, "text": "Wong and Ng, 2007)", "ref_id": "BIBREF12" }, { "start": 40, "end": 68, "text": "(Krishnan and Manning, 2006;", "ref_id": "BIBREF5" }, { "start": 69, "end": 87, "text": "Wong and Ng, 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Token-position & entity-majority features (F4):", "sec_num": null }, { "text": "Table 1. Example for Token-Majority-Type features. Token / Entity-Majority-Type Feature: \u6211 -, \u7231 -, \u5317 B-Maj-LOC, \u4eac E-Maj-LOC, \u5929 B-Maj-LOC, \u5b89 I-Maj-LOC, \u95e8 E-Maj-LOC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Token-position & entity-majority features (F4):", "sec_num": null }, { "text": "Our investigation is based on the MSRA and CityU datasets from the NER closed track of the third International Chinese Language Processing Bakeoff (SIGHAN 2006) (Levow, 2006) ; its goal is to perform NER on three entity classes: PERSON, LOCATION and ORGANIZATION. We do not use the LDC corpus because it was originally designed for the ACE evaluation and its definition of named entities differs from the traditional one. The named entities in the SIGHAN training data sets are labeled in IOB-2 format; we convert the corpus to OBIE as a preprocessing step, because existing work and our own experiments show that the OBIE scheme outperforms other formats when applying machine learning to NER. In OBIE format, tokens outside of entities are tagged with O (the NONE class), while the first token of an entity is tagged with B-k for entity class k, a token inside the entity is tagged with I-k, and the end token of the entity is tagged with E-k; a single-token entity is labeled B-k.
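The relabeling is mechanical; a minimal sketch (our helper, written from this description, with a hypothetical name):

```python
# Convert IOB-2 tags to OBIE: the last token of each multi-token entity
# becomes E-k; single-token entities keep their B-k tag.
def iob2_to_obie(tags):
    out = list(tags)
    for i, t in enumerate(tags):
        if t.startswith('I-'):
            nxt = tags[i + 1] if i + 1 < len(tags) else 'O'
            if nxt != t:  # this I-k token closes the entity
                out[i] = 'E-' + t[2:]
    return out

print(iob2_to_obie(['O', 'B-LOC', 'I-LOC', 'I-LOC', 'B-PER', 'O']))
# ['O', 'B-LOC', 'I-LOC', 'E-LOC', 'B-PER', 'O']
```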
General information for each dataset appears in Table 2 , which also summarizes statistics on the seen and unseen entities in the test sets. A seen named entity in a test set is one that also occurs in the corresponding training data set. From the table, we find that the proportion of seen entities is very high: 71.86% of the named entities in the MSRA test data can be found in the MSRA training data, and 73.53% for the CityU corpus. In fact, most named entities appear frequently in everyday life. Making use of the named entities already present in the training data is crucial for improving the ability to capture seen entities, and thereby unseen entities as well, since many models consider the possibilities of labels in context. We also observe an interesting phenomenon in the MSRA corpus: many named entities are consecutive, without intervening punctuation, especially person names. In particular, in the MSRA test data nearly 20% of the named entities appear consecutively, which makes it very difficult for an NER system to capture such entities separately.", "cite_spans": [ { "start": 147, "end": 160, "text": "(SIGHAN 2006)", "ref_id": null }, { "start": 161, "end": 174, "text": "(Levow, 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 1012, "end": 1019, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Corpus analysis", "sec_num": "4.1." }, { "text": "1 For the string \u4e24\u5cb8: when it refers to Mainland China and Taiwan, it is an entity; when it refers to the banks of a river, it is a common word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems of NER with only local information", "sec_num": "4.2." }, { "text": "Table 3 displays the performance on the MSRA and CityU NER closed tracks. The F0 row lists the precision, recall and F-measure (\u03b2=1) obtained by the first CRF (described in Section 2) using local features alone. This score would place the first CRF in the top position on MSRA and second on CityU in the SIGHAN bakeoff (Levow, 2006) 2 , showing that our baseline achieves state-of-the-art performance. However, comparing the recall with the precision on each dataset, we find that the performance is impaired by the relatively low recall. To investigate the causes of this problem, we analyze the missed entities further, categorizing them into two classes, seen and unseen in the training data. Four kinds of statistics are collected and listed in the F0 column of Table 4: (1) the number of distinct missed named entities;", "cite_spans": [ { "start": 355, "end": 368, "text": "(Levow, 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 43, "end": 50, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Problems of NER with only local information", "sec_num": "4.2." }, { "text": "(2) the number of missed occurrences;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems of NER with only local information", "sec_num": "4.2." }, { "text": "(3) the number of distinct missed named entities that are detected correctly at least once; (4) the number of missed occurrences for the entities counted in (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems of NER with only local information", "sec_num": "4.2." }, { "text": "From measurements (1) and (2) for the seen entities in the F0 column of Table 4, we find that many seen named entities are missed. Though identifying unseen named entities is more difficult than identifying seen ones, the boldfaced numbers indicate that about 10% (24 of 254) for MSRA and 23% (111 of 476) for CityU of the unseen, missed named entities have been labeled correctly at least once. The difficulty of capturing named entities unseen in the training data follows from the nature of machine learning techniques. However, the statistics in Table 4 show that there is great potential, (200+48)/(200+330)=47% for MSRA and (384+396)/(384+1144)=51% for CityU, to improve recall by enhancing the capture of seen named entities and by making use of labeled outputs on the test data to capture more unseen named entities. What is more, performance can be improved further as more named entities are labeled correctly, because many models, such as CRFs, assign labels according to the probability of the whole sequence.", "cite_spans": [], "ref_spans": [ { "start": 567, "end": 574, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Problems of NER with only local information", "sec_num": "4.2." }, { "text": "After we feed the non-local features (described in Section 3) into the second CRF, we test it on the MSRA and CityU test data again. Table 3 lists the performance obtained with each feature configuration: F0 denotes the first CRF (baseline) using local features alone, and F0+Fi (i=1, 2, 3, 4) denotes the second CRF using the local features (F0) together with the non-local features Fi. From Table 3 , we can conclude that exploiting non-local information is a good way to recall more missed entities. Compared with the baseline using only local context, after taking non-local information into account the recall changes by -0.34%~3.76% on MSRA and improves by 2.92%~3.68% on CityU, while the overall F-measure changes by -0.54%~2.19% on MSRA and improves by 0.72%~1.27% on CityU.
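For concreteness, the relative error reductions quoted in the abstract follow from Table 3 if the error is taken as 100 minus the score; a worked check for MSRA with F0+F4, under our reading of the computation:

```latex
% Relative error reduction from the F0 and F0+F4 rows of Table 3 (MSRA).
\mathrm{RER}(R)   = \frac{(100 - 84.04) - (100 - 87.80)}{100 - 84.04}
                  = \frac{15.96 - 12.20}{15.96} = 23.56\%
\qquad
\mathrm{RER}(F_1) = \frac{12.81 - 10.62}{12.81} = 17.10\%
```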
The MSRA performance obtained with F0+F1 decreases slightly because there are many consecutive entities in the test data: since F1 encodes neither boundary nor class information, more entity tokens are recalled, but with wrong boundaries or classes. After we add a post-processing step that uses a person-name list extracted from the MSRA training data to separate consecutive candidate entities, the performance, listed as F0+F1 (PP), increases. The performance differences among F1, F2, F3 and F4 arise mainly because they encode different useful non-local information, as described in Section 3.2. F1 only encodes whether a token sequence is an entity; the boundary and class information it lacks are represented in F2 and F3 respectively, so F2 and F3 both achieve higher performance than F0, while F4 considers boundary and class simultaneously and is therefore the best choice for exploiting non-local information to improve NER recall. We cannot compare F2 and F3 directly because boundary detection and semantic classification are two different subtasks of NER. The performance difference between CityU and MSRA is two-fold. First, the CityU test data contains more seen entities than that of MSRA, and seen entities are captured easily by the non-local features. Second, the MSRA data sets contain many more consecutive named entities than CityU; since NER with non-local information prefers to dig out more, and thereby longer, named entities, it may tend to label consecutive named entities as a single entity and introduce errors damaging both recall and precision. We then investigate the missed seen and missed unseen named entities under NER with non-local information by filling in Table 4 . F0 is the first CRF (baseline) using local features alone, and F0+Fi (i=1, 2, 3, 4) denotes the second CRF using the local features (F0) together with the non-local features Fi. The same four measurements are used as described in Section 4.2.", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 144, "text": "Table 3", "ref_id": "TABREF1" }, { "start": 2582, "end": 2589, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Influence of using non-local features in NER", "sec_num": "4.3." }, { "text": "Compared with the numbers in the F0 column, a significant reduction in the number of missed seen entities is achieved by adding the non-local features. What is more, the hit rate on unseen entities is also increased, as predicted in our previous analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of using non-local features in NER", "sec_num": "4.3." }, { "text": "We also compare the different kinds of non-local features, which fit different NER subtasks, and find that the non-local feature that considers boundary and class information simultaneously is the best. Our approach is language-independent; due to the lack of annotated corpora in other languages, the experiments have been conducted only on Chinese corpora, and related experiments on other languages can be done in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of using non-local features in NER", "sec_num": "4.3."
}, { "text": "The best F1-score on MSRA and CityU is 86.51% and 89.03% respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We do not perform post-processing step on CityU testing data", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "In this paper, we propose an approach of exploiting non-local information to improve NER recall. To our best knowledge, our work is the first attempt to utilize non-local information to improve NER recall, our work demonstrates that non-local information are effective to recall the missed entities which are seen in training data or unseen but some occurrences of these unseen", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Collective information extraction with relational Markov networks", "authors": [ { "first": "R", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd ACL", "volume": "", "issue": "", "pages": "439--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bunescu and R. J. Mooney. 2004. Collective information extraction with relational Markov networks. Proceedings of the 42nd ACL, pp. 439-446.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Representations of quasi-Newton matrices and their use in limited memory methods", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Byrd", "suffix": "" }, { "first": "J", "middle": [], "last": "Nocedal", "suffix": "" }, { "first": "R", "middle": [ "B" ], "last": "Schnabel", "suffix": "" } ], "year": 1994, "venue": "Mathematical Programming", "volume": "", "issue": "63", "pages": "129--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "R.H. Byrd, J. Nocedal and R.B. Schnabel. 1994. Representations of quasi-Newton matrices and their use in limited memory methods. Mathematical Programming, (63):129-156.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Generalized iterative scaling for log-linear models", "authors": [ { "first": "J", "middle": [ "N" ], "last": "Darroch", "suffix": "" }, { "first": "D", "middle": [], "last": "Ratcliff", "suffix": "" } ], "year": 1972, "venue": "The Annals of Mathematical Statistics", "volume": "43", "issue": "5", "pages": "1470--1480", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.N. Darroch and D. Ratcliff. 1972. Generalized iterative scaling for log-linear models. The Annals of Mathematical Statistics, 43 (5):1470-1480.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Incorporating non-local information into information extraction systems by gibbs sampling", "authors": [ { "first": "J", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "T", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 42nd ACL", "volume": "", "issue": "", "pages": "363--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Finkel, T. Grenager, and C. D. Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. Proceedings of the 42nd ACL, pp. 363-370.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Minority Vote: At-Least-N Voting Improves Recall for Extracting Relations. 
Proceeding of the 44th ACL", "authors": [ { "first": "N", "middle": [], "last": "Kambhatla", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "460--466", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Kambhatla. 2006. Minority Vote: At-Least-N Voting Improves Recall for Extracting Relations. Proceeding of the 44th ACL, pp. 460-466.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An Effective Two-Stage Model for Exploiting Non-Local Dependencies in Named Entity Recognition", "authors": [ { "first": "V", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 44th ACL", "volume": "", "issue": "", "pages": "1121--1128", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Krishnan and C. D Manning. 2006. An Effective Two-Stage Model for Exploiting Non- Local Dependencies in Named Entity Recognition. Proceedings of the 44th ACL, pp. 1121-1128.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 18th ICML", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data. Proceedings of the 18th ICML, pp. 282-289. Morgan Kaufmann, San Francisco, CA", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Third International Chinese Language Processing Bakeoff: Word Segmentation and Named Entity Recognition", "authors": [ { "first": "G", "middle": [], "last": "Levow", "suffix": "" } ], "year": 2006, "venue": "Proceedings of SIGHAN-2006", "volume": "", "issue": "", "pages": "108--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Levow. 2006. The Third International Chinese Language Processing Bakeoff: Word Segmentation and Named Entity Recognition. Proceedings of SIGHAN-2006, pp. 108- 117. Sydney, Australia.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A comparison of algorithms for maximum entropy parameter estimation", "authors": [ { "first": "R", "middle": [], "last": "Malouf", "suffix": "" } ], "year": 2002, "venue": "Proc. of CoNLL-2002", "volume": "", "issue": "", "pages": "49--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. Proc. of CoNLL-2002, 49-55. Taipei, Taiwan.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Inducing features of random fields", "authors": [ { "first": "D", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "19", "issue": "4", "pages": "380--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Pietra, V. Della Pietra, and J. Lafferty, 1997. Inducing features of random fields. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Shallow parsing with conditional random fields", "authors": [ { "first": "F", "middle": [], "last": "Sha", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2003, "venue": "Proc. of HLT/NAACL-2003", "volume": "", "issue": "", "pages": "213--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. Proc. of HLT/NAACL-2003, pp. 213-220. Edmonton, Canada.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Collective segmentation and labeling of distant entities in information extraction", "authors": [ { "first": "C", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Sutton and A. McCallum. 2004. Collective segmentation and labeling of distant entities in information extraction. In ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "One class per named entity: exploiting unlabeled text for named entity recognition", "authors": [ { "first": "Y", "middle": [ "Ch" ], "last": "Wong", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" } ], "year": 2007, "venue": "Proc. of IJCAI-2007", "volume": "", "issue": "", "pages": "1763--1768", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Ch. Wong and H. T. Ng. 2007. One class per named entity: exploiting unlabeled text for named entity recognition. Proc. of IJCAI-2007, pp. 1763-1768. India.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "The flow of using non-local features in the two-stage architecture", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "#(W): the number of words; #(E): the number of entities; #(C): the proportion of consecutive entities; #(S): the proportion of seen entities", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "content": "
Corpus           | #(W) | #(E)   | #(C)   | #(S)
MSRA (Training)  | 1.3M | 75060  | 10.93% | ---
MSRA (Testing)   | 100k | 6190   | 19.68% | 71.86%
CityU (Training) | 1.6M | 112347 | 10.13% | ---
CityU (Testing)  | 220k | 16407  | 9.60%  | 73.53%
", "text": "Corpus", "num": null, "html": null }, "TABREF1": { "type_str": "table", "content": "
Corpus | System    | P     | R     | F
MSRA   | F0        | 90.58 | 84.04 | 87.19
MSRA   | F0+F1     | 89.81 | 83.70 | 86.65
MSRA   | F0+F1(PP) | 89.40 | 85.46 | 87.39
MSRA   | F0+F2     | 89.73 | 85.96 | 87.81
MSRA   | F0+F3     | 90.58 | 87.16 | 88.84
MSRA   | F0+F4     | 91.01 | 87.80 | 89.38
CityU  | F0        | 92.48 | 85.43 | 88.82
CityU  | F0+F1     | 90.73 | 88.35 | 89.53
CityU  | F0+F2     | 90.96 | 88.83 | 89.88
CityU  | F0+F3     | 90.90 | 88.65 | 89.76
CityU  | F0+F4     | 91.09 | 89.11 | 90.09
", "text": "NER performance on MSRA and CityU", "num": null, "html": null } } } }