|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:56:58.585088Z" |
|
}, |
|
"title": "Nested Named Entity Recognition via Second-best Sequence Learning and Decoding", |
|
"authors": [ |
|
{ |
|
"first": "Takashi", |
|
"middle": [], |
|
"last": "Shibuya", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": { |
|
"postCode": "15213", |
|
"settlement": "Pittsburgh", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "shibuyat@jp.sony.com" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": { |
|
"postCode": "15213", |
|
"settlement": "Pittsburgh", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "hovy@cmu.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "When an entity name contains other names within it, the identification of all combinations of names can become difficult and expensive. We propose a new method to recognize not only outermost named entities but also inner nested ones. We design an objective function for training a neural model that treats the tag sequence for nested entities as the second best path within the span of their parent entity. In addition, we provide the decoding method for inference that extracts entities iteratively from outermost ones to inner ones in an outsideto-inside way. Our method has no additional hyperparameters to the conditional random field based model widely used for flat named entity recognition tasks. Experiments demonstrate that our method performs better than or at least as well as existing methods capable of handling nested entities, achieving F1-scores of 85.82%, 84.34%, and 77.36% on ACE-2004, ACE-2005, and GENIA datasets, respectively.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "When an entity name contains other names within it, the identification of all combinations of names can become difficult and expensive. We propose a new method to recognize not only outermost named entities but also inner nested ones. We design an objective function for training a neural model that treats the tag sequence for nested entities as the second best path within the span of their parent entity. In addition, we provide the decoding method for inference that extracts entities iteratively from outermost ones to inner ones in an outsideto-inside way. Our method has no additional hyperparameters to the conditional random field based model widely used for flat named entity recognition tasks. Experiments demonstrate that our method performs better than or at least as well as existing methods capable of handling nested entities, achieving F1-scores of 85.82%, 84.34%, and 77.36% on ACE-2004, ACE-2005, and GENIA datasets, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Named entity recognition (NER) is the task of identifying text spans associated with proper names and classifying them according to their semantic class such as person or organization. NER, or in general the task of recognizing entity mentions, is one of the first stages in deep language understanding, and its importance has been well recognized in the NLP community (Nadeau and Sekine, 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 394, |
|
"text": "(Nadeau and Sekine, 2007)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One popular approach to the NER task is to regard it as a sequence labeling problem. In this case, it is implicitly assumed that mentions are not nested in texts. However, names often contain entities nested within themselves, as illustrated in Figure 1 , which contains 3 mentions of the same type (PROTEIN) in the span taken from the GENIA dataset . Name nesting is common, especially in technical domains (Alex et al., 2007; Byrne, 2007; Wang, 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 321, |
|
"end": 321, |
|
"text": "", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 409, |
|
"end": 428, |
|
"text": "(Alex et al., 2007;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 429, |
|
"end": 441, |
|
"text": "Byrne, 2007;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 453, |
|
"text": "Wang, 2009)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 253, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The assumption of no nesting leads to loss of potentially important information and may negatively impact subsequent downstream tasks. For instance, a downstream entity linking system that relies on NER may fail to link the correct entity if the entity mention is nested.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Various approaches to recognizing nested entities have been proposed. Many of them rely on producing and rating all possible (sub)spans, which can be computationally expensive. provided a hypergraph-based approach to consider all possible spans. Sohrab and Miwa (2018) proposed a neural exhaustive model that enumerates and classifies all possible spans. These methods, however, achieve high performance at the cost of time complexity. To reduce the running time, they set a threshold to discard longer entity mentions. If the hyperparameter is set low, running time is reduced but longer mentions are missed. In contrast, Muis and Lu (2017) proposed a sequence labeling approach that assigns tags to gaps between words, which efficiently handles sequences using Viterbi decoding. However, this approach suffers from structural ambiguity issues during inference, as explained by . Katiyar and Cardie (2018) proposed another hypergraph-based approach that learns the structure in a greedy manner. However, their method uses an additional hyperparameter as the threshold for selecting multiple mention candidates. This hyperparameter affects the trade-off between recall and precision.", |
|
"cite_spans": [ |
|
{ |
|
"start": 246, |
|
"end": 268, |
|
"text": "Sohrab and Miwa (2018)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 881, |
|
"end": 906, |
|
"text": "Katiyar and Cardie (2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose new learning and decoding methods to extract nested entities without any additional hyperparameters. We summarize our contributions as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We describe a decoding method that iteratively recognizes entities from outermost ones to inner ones without structural ambiguity. It recursively searches a span of each extracted entity for inner nested entities using the Viterbi algorithm. This algorithm does not require hyperparameters for the maximal length or number of mentions considered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We also provide a novel learning method that ensures the aforementioned decoding. Models are optimized based on an objective function designed according to the decoding procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Empirically, we demonstrate that our method performs better than or at least as well as the current state-of-the-art methods with 85.82%, 84.34%, and 77.36% in F1-score on three standard datasets: ACE-2004 , 1 ACE-2005 and GENIA.", |
|
"cite_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 207, |
|
"text": "ACE-2004", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 220, |
|
"text": ", 1 ACE-2005", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose applying conditional random fields (CRFs) (Lafferty et al., 2001) , which is commonly used for flat NER (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016; Reimers and Gurevych, 2017; Strubell et al., 2017; Akbik et al., 2018) , to nested NER in this study. We first explain our usage of CRF, which is the base of our decoding and training methods. Then, we introduce our decoding and training methods. Our decoding and training methods focus on the output layer of neural architectures and therefore can be combined with any neural model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 76, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 136, |
|
"text": "(Lample et al., 2016;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 155, |
|
"text": "Ma and Hovy, 2016;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 179, |
|
"text": "Chiu and Nichols, 2016;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 207, |
|
"text": "Reimers and Gurevych, 2017;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 230, |
|
"text": "Strubell et al., 2017;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 250, |
|
"text": "Akbik et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1 https://catalog.ldc.upenn.edu/LDC2005T09. 2 https://catalog.ldc.upenn.edu/LDC2006T06.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our decoding and training methods are based on two key points about our usage of CRF. The first key point is that we prepare a separate CRF for each named entity type. This enables our method to handle the situation where the same mention span is assigned multiple entity types. The GENIA dataset indeed has such mention spans. In the literature, Muis and Lu (2017) demonstrated that this approach of multiple CRFs would perform better on nested NER datasets and even a flat NER dataset than the standard approach of a single CRF for all entity types. The second key point is that each element of the transition matrix of each CRF has a fixed value according to whether it corresponds to a legal transition (e.g., B-X to I-X in IOBES tagging scheme, where X is the name of entity type) or an illegal one (e.g., O to I-X). This is helpful for keeping the scores for tag sequences including outer entities higher than those of tag sequences including inner entities. Formally, we use Z = {z 1 , . . . , z n } to represent a sequence output from the last hidden layer of a neural model, where z i is the vector for the i-th word, and n is the number of tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 347, |
|
"end": 365, |
|
"text": "Muis and Lu (2017)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Usage of CRF", |
|
"sec_num": "2.1" |
|
}, |
|
{

"text": "$y^{(k)} = \\{y^{(k)}_1, \\ldots, y^{(k)}_n\\}$ represents a sequence of IOBES tags of entity type $k$ for $Z$. Here, we define the score function to be",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Usage of CRF",

"sec_num": "2.1"

},
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c6 k y (k) i\u22121 , y (k) i , z i = P (k) y (k) i ,i + A (k) y (k) i\u22121 ,y (k) i ,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Usage of CRF", |
|
"sec_num": "2.1" |
|
}, |
|
{

"text": "where $P^{(k)}_{y^{(k)}_i, i} = W^{(k)}_{y^{(k)}_i} \\cdot z_i + b^{(k)}_{y^{(k)}_i}$ and $A^{(k)}_{y^{(k)}_{i-1}, y^{(k)}_i} = -\\infty$ if the transition $y^{(k)}_{i-1} \\rightarrow y^{(k)}_i$ is illegal and $0$ otherwise. $W^{(k)}_{y^{(k)}_i}$ and $b^{(k)}_{y^{(k)}_i}$ denote the weight matrix and the bias vector corresponding to $y^{(k)}_i$, respectively. $A^{(k)}$ stands for the transition matrix from the previous token to the current token, and $A^{(k)}_{y^{(k)}_{i-1}, y^{(k)}_i}$ is the transition score from $y^{(k)}_{i-1}$ to $y^{(k)}_i$. $Z$ is shared between all of the multiple CRFs as their input.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Usage of CRF",

"sec_num": "2.1"

},
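
{

"text": "To make Equation (1) concrete, the following is a minimal sketch (our illustration, not the authors' code) of how the per-type emission scores and the fixed transition matrix could be set up for the IOBES scheme of a single entity type; the tag order and the omission of the [S]/[E] boundary tags are simplifying assumptions.\n\nimport numpy as np\n\nTAGS = ['O', 'B', 'I', 'E', 'S']  # IOBES tags for one entity type k\n\ndef is_legal(prev, curr):\n    # Legal IOBES transitions: B/I must be followed by I/E; O/E/S must not be followed by I/E.\n    if prev in ('B', 'I'):\n        return curr in ('I', 'E')\n    return curr in ('O', 'B', 'S')\n\ndef transition_matrix():\n    # Fixed matrix A^(k): 0 for legal transitions, -inf for illegal ones (no learned parameters).\n    A = np.full((len(TAGS), len(TAGS)), -np.inf)\n    for i, p in enumerate(TAGS):\n        for j, c in enumerate(TAGS):\n            if is_legal(p, c):\n                A[i, j] = 0.0\n    return A\n\ndef emission_scores(Z, W, b):\n    # P^(k)[tag, i] = W^(k)_tag . z_i + b^(k)_tag for every token i; Z has shape (n, hidden).\n    return W @ Z.T + b[:, None]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Usage of CRF",

"sec_num": "2.1"

},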
|
{

"text": "We use three strategies for decoding. First, we consider each entity type separately using multiple CRFs in decoding, which makes it possible to",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding",

"sec_num": "2.2"

},

{

"text": "Algorithm 1: Nested NER via 2nd-best sequence decoding. K = the set of entity types. Function main(z_i): M = {} # the set of detected mentions; each element of M is a tuple (s, e, k) for one mention, where s, e, and k are the start position, the end position, and the entity type of the mention, respectively. foreach k ∈ K do: calculate the CRF scores Φ for entity type k with the score function $\\phi_k(y^{(k)}_{i-1}, y^{(k)}_i, z_i)$; find the best path of the span from position 1 to position n based on the scores Φ; M' = the set of the mentions detected in the best path; M = M ∪ M'; foreach m ∈ M' do detectNestedMentions(Φ, m.s, m.e, k, M); return M. Function detectNestedMentions(Φ, s, e, k, M): if e − s > 1 then: find the 2nd best path of the span from position s to position e based on the scores Φ; M' = the set of the mentions detected in the 2nd best path; M = M ∪ M'; foreach m ∈ M' do detectNestedMentions(Φ, m.s, m.e, k, M); return.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding",

"sec_num": "2.2"

},
|
{ |
|
"text": "Figure 2: Overview of our second-best path decoding algorithm to iteratively find nested entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "handle the situation that the same mention span is assigned multiple entity types. Second, our decoder searches nested entities in an outside-toinside way, 3 which realizes efficient processing by eliminating the spans of non-entity at an early stage. More specifically, our method recursively narrows down the spans to Viterbi-decode. The spans to Viterbi-decode are dynamically decided according to the preceding Viterbi-decoding result.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "2.2" |
|
}, |
|
{

"text": "Only the spans that have just been recognized as entity mentions are Viterbi-decoded again. Third, we use the same scores $\\phi_k(y^{(k)}_{i-1}, y^{(k)}_i, z_i)$",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding",

"sec_num": "2.2"

},
|
{ |
|
"text": "i , z i of Equation (1) to extract outermost entities and even inner entities without re-encoding, which makes inference more efficient and faster. These three strategies are deployed and completed only in the output layer of neural architectures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We describe the pseudo-code of our decoding method in Algorithm 1. Also, we depict the overview of our decoding method with an example in Figure 2 . We use the term level in the sense of the depth of entity nesting. [S] and [E] in Figure 2 stand for the START and END tags, respectively. We always attach these tags to both ends of every sequence of IOBES tags in Viterbi-decoding.", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 219, |
|
"text": "[S]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 146, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 239, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "2.2" |
|
}, |
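
{

"text": "As a complement to Algorithm 1, the following is a minimal sketch (ours, for illustration only) of the recursive outside-to-inside decoding; viterbi_path(scores, start, end, nbest) and extract_mentions(path, offset) are assumed helpers that return, respectively, the best or 2nd-best IOBES tag sequence for a span and the (start, end) mention spans read off such a sequence.\n\ndef decode_nested(scores_by_type, sentence_length, viterbi_path, extract_mentions):\n    # Outside-to-inside decoding: best path over the whole sentence, then the 2nd-best\n    # path inside every extracted multi-token mention, recursively (depth-first).\n    mentions = set()  # elements: (start, end, entity_type)\n\n    def recurse(scores, start, end, etype):\n        if end - start <= 1:  # single-token span: nothing can be nested inside\n            return\n        path = viterbi_path(scores, start, end, nbest=2)\n        for s, e in extract_mentions(path, offset=start):\n            mentions.add((s, e, etype))\n            recurse(scores, s, e, etype)\n\n    for etype, scores in scores_by_type.items():  # a separate CRF per entity type\n        path = viterbi_path(scores, 0, sentence_length, nbest=1)\n        for s, e in extract_mentions(path, offset=0):\n            mentions.add((s, e, etype))\n            recurse(scores, s, e, etype)\n    return mentions",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding",

"sec_num": "2.2"

},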
|
{

"text": "We explain the decoding procedure and mechanism in detail below. We consider each entity type separately and iterate the same decoding process over distinct entity types as described in Algorithm 1. In the decoding process for each entity type $k$, we first calculate the CRF scores $\\phi_k(y^{(k)}_{i-1}, y^{(k)}_i, z_i)$",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding",

"sec_num": "2.2"

},
|
{ |
|
"text": "i , z i over the entire sentence. Next, we decode a sequence with the standard 1-best Viterbi decoding as with the conventional linearchain CRF. ''Ca2+ -dependent PKC isoforms'' is extracted at the 1st level with regard to the example of Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 246, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "2.2" |
|
}, |
|
{

"text": "Then, we start our recursive decoding to extract nested entities within previously extracted entity spans by finding the 2nd best path. In Figure 2, the span ''Ca2+ -dependent PKC isoforms'' is processed at the 2nd level. Here, if we search for the best path within each span, the same tag sequence will be obtained, even though the processed span is different. This is because we continue using the same scores $\\phi_k(y^{(k)}_{i-1}, y^{(k)}_i, z_i)$",

"cite_spans": [],

"ref_spans": [

{

"start": 139,

"end": 147,

"text": "Figure 2",

"ref_id": null

}

],

"eq_spans": [],

"section": "Decoding",

"sec_num": "2.2"

},
|
{ |
|
"text": "i , z i and because all the values of A (k) corresponding to legal transitions are equal to 0. Regarding the example of Figure 2 , the score of the transition from [S] to B-P at the 2nd level is equal to the score of the transition from O to B-P at the 1st level. This is true for the transition from E-P to [E] at the 2nd level and the one from E-P to O at the 1st level. The best path between the [S] and [E] tags is identical to the best path between the two O tags under our restriction about the transition matrix of CRF. Therefore, we search for the 2nd best path within the span by utilizing the N -best Viterbi A* algorithm (Seshadri and Sundberg, 1994; Huang et al., 2012) . 4 Note that our situation is different from normal situations where N -best decoding is needed. We already know the best path within the span and want to find only the 2nd best path. Thus, we can extract nested entities by finding the 2nd best path within each extracted entity. Regarding the example of Figure 2 , ''PKC isoforms'' is extracted from the span ''Ca2+ -dependent PKC isoforms'' at the 2nd level.", |
|
"cite_spans": [ |
|
{ |
|
"start": 632, |
|
"end": 661, |
|
"text": "(Seshadri and Sundberg, 1994;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 662, |
|
"end": 681, |
|
"text": "Huang et al., 2012)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 684, |
|
"end": 685, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 128, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 988, |
|
"end": 996, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We continue this recursive decoding until no multi-token entities are detected within a span. In Figure 2 , the span ''PKC isoforms'' is processed at the 3rd level. At the 3rd or deeper levels, the tag sequence of its grandparent level is no longer either the best path or the 2nd best path because the start or end position of the current span is in the middle of the entity mention span at the grandparent level. As for the example shown in Figure 2 , the word ''PKC'' is tagged I-P at the 1st level, and the transition from [S] to I-P is illegal. The scores of the paths that includes illegal transitions cannot be larger than those of the paths that consist of only legal transitions because the elements of the transition matrix A (k) corresponding to illegal transitions are set to \u2212\u221e. That is why at all levels below the 1st level we only need to find the 2nd best path.", |
|
"cite_spans": [ |
|
{ |
|
"start": 736, |
|
"end": 739, |
|
"text": "(k)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 105, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 451, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "This recursive processing is stopped when no entities are predicted or when only single-token entities are detected within a span. 5 In Figure 2 , the span ''PKC'' is not processed any more because it is a single-token entity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 132, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 144, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Only one nested entity is extracted within each decoded span in Figure 2 , but there can be cases where multiple multi-token entities are detected within a decoded span. In such cases, our algorithm Viterbi-decodes each of their spans in the way of the depth-first search algorithm. The aforementioned processing is executed on all entity types, and all detected entities are returned as an output result.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 72, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "To extract entities from outside to inside successfully, a model has to be trained in a way that the scores for the paths including outer entities will be higher than those for the paths including inner entities. We propose a new objective function to achieve this requirement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We maximize the log-likelihood of the correct tag sequence as with the conventional CRF-based model. Considering that our model has a separate CRF for each entity type, the log-likelihood for one training data, L (\u03b8), is as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L (\u03b8) = k log p Y (k) |Z; \u03b8 ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where \u03b8 is the set of parameters of a neural model, and Y (k) denotes the collection of the gold IOBES tags for all levels regarding the entity type k. As we mentioned in Section 2.1, Z is a sequence output from the last hidden layer of a neural model and is shared between all of the multiple CRFs. Therefore, \u03b8 is updated through a backpropagation process so that Z can represent information about all entity types. In the following, we decompose the log-likelihood for all levels into the ones for each level. Let s 1,1 = n because we consider the whole span of a sentence. The spans considered at each deeper level, l > 1, are determined according to the spans of multi-token entities at its immediate parent level. As for the example of Figure 2 , only the span of ''Ca2+ -dependent PKC isoforms'' is considered at the 2nd level. Here, the log-likelihood for each entity type can be expressed as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 742, |
|
"end": 750, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "log p Y (k) |Z; \u03b8 = L 1st y (k) 1,1 , . . . , y (k) 1,n |Z; \u03b8 + l>1 j L 2nd y (k) l,s (k) l,j , . . . , y (k) l,e (k) l,j |Z; \u03b8 , (3) where L 1st (. . . ) and L 2nd (. . . )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "are the loglikelihoods of the (1st) best and 2nd best paths for each span, respectively. y (k) l,i denotes the correct IOBES tag of the position i of the l-th level of the entity type k.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Best path. L 1st (. . . ) can be calculated in the same manner as the conventional linear-chain CRF:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L 1st y (k) 1,1 , . . . , y (k) 1,n |Z; \u03b8 = \u03c8 (k) 1:n y (k) 1,1 , Z \u2212 log y \u2032 \u2208Y (k) 1:n exp \u03c8 (k) 1:n (y \u2032 , Z) ,", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{

"text": "where $\\psi^{(k)}_{s:e}(y, Z) = \\sum_{i=s}^{e} \\phi_k(y_{i-1}, y_i, z_i) + A^{(k)}_{y_e, y_{e+1}}$, with $y_{s-1} = [S]$ and $y_{e+1} = [E]$.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training",

"sec_num": "2.3"

},
|
{

"text": "Algorithm 2: LogSumExp of the scores of all possible paths. C = {B-X, I-X, E-X, S-X, O}; s = 1 (the start position); e = n (the end position). foreach c ∈ C do α(c) = P^{(k)}_{c,s} + A^{(k)}_{[S],c}. for i = s+1; i ≤ e; i++ do: foreach c ∈ C do: foreach c' ∈ C do α_c(c') = α(c') + P^{(k)}_{c,i} + A^{(k)}_{c',c}; foreach c ∈ C do α(c) = LogSumExp(α_c). foreach c ∈ C do α(c) += A^{(k)}_{c,[E]}. return LogSumExp(α).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training",

"sec_num": "2.3"

},
|
{ |
|
"text": "s:e denotes the set of all possible tag sequences from position s to position e of the entity type k. The first term of Equation 4is the score of the gold tag sequence, and the second term is the logarithm of the summation of the exponential scores of all possible tag sequences. It is well known that the second term of Equation (4) can be efficiently calculated by the algorithm shown in Algorithm 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
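
{

"text": "The second term of Equation (4) is the usual CRF log-partition function. A minimal numpy sketch of Algorithm 2 (our illustration; the [S] and [E] boundary transitions are assumed to be given as separate vectors a_start and a_end rather than rows of A) is:\n\nimport numpy as np\nfrom scipy.special import logsumexp\n\ndef log_partition(P, A, a_start, a_end):\n    # LogSumExp of the scores of all possible paths in a span.\n    # P: (num_tags, n) emission scores, A: (num_tags, num_tags) transition scores,\n    # a_start / a_end: transition scores from [S] and to [E].\n    n = P.shape[1]\n    alpha = P[:, 0] + a_start  # alpha(c) at the first position\n    for i in range(1, n):\n        # alpha_new(c) = logsumexp over c' of ( alpha(c') + A[c', c] ) + P[c, i]\n        alpha = logsumexp(alpha[:, None] + A, axis=0) + P[:, i]\n    return logsumexp(alpha + a_end)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training",

"sec_num": "2.3"

},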
|
{ |
|
"text": "2nd best path. L 2nd (. . . ) given the best path can be calculated by excluding the best path from all possible paths. This concept is also adopted by ListNet (Cao et al., 2007) , which is used for ranking tasks such as document retrieval or recommendation. L 2nd (. . . ) can be expressed by the following equation:", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 178, |
|
"text": "(Cao et al., 2007)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L 2nd y (k) l,s (k) l,j , . . . , y (k) l,e (k) l,j |Z; \u03b8 = \u03c8 (k) s (k) l,j :e (k) l,j y (k) l,j , Z \u2212 log y \u2032 \u2208\u1ef8 (k) s (k) l,j :e (k) l,j exp \u03c8 (k) s (k) l,j :e (k) l,j (y \u2032 , Z) ,", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{

"text": "where $\\tilde{Y}^{(k)}_{s:e}$ denotes the set of all possible tag sequences except the best path within the span from position $s$ to position $e$ of the entity type $k$.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training",

"sec_num": "2.3"

},
|
{ |
|
"text": "However, to the best of our knowledge, the way of efficiently computing the second term of Equation (5) has not been proposed yet in the literature. Simply subtracting the exponential score of the best path from the summation of the exponential scores of all possible paths causes underflow, overflow, or loss of significant digits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We introduce a way of accurately computing it with the same time complexity as Algorithm 2 for Equation (4). For explanation, we use the simplified example of the lattice depicted in Figure 3 , in which the span length is 4 and the number of states is 3. The special nodes for start and end states are attached to the both ends of the span. There are 81(= 3 4 ) paths in this lattice. We assume that the path that consists of top nodes of all time steps are the best path as shown in Figure 3 . No generality is lost by making this assumption. To calculate the second term of Equation 5, we have to consider the exponential scores for all the possible paths except the best path, 80(= 81 \u2212 1) paths.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 191, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 492, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We first give a way of thinking, which is not our algorithm itself but helpful to understand it. In the example, we can further group these 80 paths according to the steps where the best path is not taken. In this way, we have 4 spaces in total as illustrated in Figure 4 . In Space 1, the top node of time step 4 is excluded from consideration. 54(= 3 3 \u00d7 2) paths are taken into account here. Since this space covers all paths that do not go through the top node of time step 4, we only have to consider the paths that go through this node in other spaces. In Space 2, this node is always passed through, and instead the top node of time step 3 is excluded. 18(= 3 2 \u00d7 2) paths are considered in this space. Similarly, 6(= 3 1 \u00d7 2) paths and 2(= 3 0 \u00d7 2) paths are taken into consideration in Space 3 and Space 4, respectively. Thus, we can consider all the possible paths except the best path, 80(= 54 + 18 + 6 + 2) paths. However, this is not our algorithm itself as we mentioned.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 271, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We introduce two tricks for making the calculation more efficient. We explain them with Figure 5 , in which Spaces 2 and 3 are picked up. The first trick is that the separated two spaces can be merged at time step 4 because the paths later than time step 3 are identical. When we reach time step 4 in the forward iteration in each of the two spaces, we can merge them using the calculation results at time step 3, as shown with the red edges in Figure 5 . The second trick is that the blue nodes in Figure 5 can be copied from Space 2 to Space 3 at time step 2 since the considered paths until that time step are also the same. These two tricks can be applied to other pairs of two adjacent spaces, which relieves the need to separately calculate the summation of the exponential scores for each space. Therefore, the second term of Equation 5can be calculated as shown in Algorithm 3.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 96, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 453, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 507, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
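
{

"text": "A minimal numpy sketch of this computation (Algorithm 3), written by us for illustration, is given below: alpha tracks all partial paths ending in each tag, while beta tracks the partial paths that end at the best-path tag of the current step but are not the best-path prefix itself; at the last step, beta replaces alpha for the final best-path tag, so exactly the best path is excluded. As in the previous sketch, the [S]/[E] boundary transitions are assumed to be given as vectors a_start and a_end.\n\nimport numpy as np\nfrom scipy.special import logsumexp\n\ndef log_partition_minus_best(P, A, a_start, a_end, best_path):\n    # LogSumExp of the scores of all possible paths in a span except the given best path.\n    # P: (num_tags, n) emission scores, A: (num_tags, num_tags) transitions,\n    # best_path: the n tag indices of the already-known best path.\n    n = P.shape[1]\n    alpha = P[:, 0] + a_start  # all partial paths ending in each tag\n    beta = -np.inf             # partial paths ending at best_path[i], minus the best prefix\n    for i in range(1, n):\n        M = alpha[:, None] + A + P[:, i][None, :]  # M[c_prev, c] = alpha(c_prev) + A[c_prev, c] + P[c, i]\n        b, bp = best_path[i], best_path[i - 1]\n        col = M[:, b].copy()\n        col[bp] = beta + A[bp, b] + P[b, i]  # replace the best-prefix contribution by beta\n        new_beta = logsumexp(col)\n        alpha = logsumexp(M, axis=0)\n        beta = new_beta\n    total = alpha + a_end\n    total[best_path[-1]] = beta + a_end[best_path[-1]]  # exclude exactly the best path\n    return logsumexp(total)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training",

"sec_num": "2.3"

},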
|
{ |
|
"text": "Thus, we can train a model using the objective function of Equations 2, 3, 4, and 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Time complexity. Regarding the time complexity of decoder, the worst case for our method is when our decoder narrows down the spans one by one, from n tokens (a whole sentence) to 2 tokens. The time complexity for the worst case is therefore O (n + \u2022 \u2022 \u2022 + 2) = O n 2 for each entity type, O mn 2 in total, where m denotes the number of entity types. However, this rarely happens. The ideal average processing time in the case where our decoding method narrows down spans successfully according to gold labels is O(dmn), where d is the average number of gold IOBES tags of each entity type assigned to a word. The average numbers calculated from the gold labels of ACE-2004 , ACE-2005 , and GENIA are 1.06, 1.06, and 1.05, respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 665, |
|
"end": 673, |
|
"text": "ACE-2004", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 684, |
|
"text": ", ACE-2005", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characteristics", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Usability. Some existing methods have hyperparameters, such as the maximal length of considered entities or the threshold that affects the number of detected entities, beyond those of the conventional CRF-based model used for flat NER tasks. These hyperparameters must be tuned depending on datasets. On the other hand, our method does not have such hyperparameters and is easy to use from this viewpoint. In addition, our method focuses on the output layer of neural architectures; therefore our method can be combined with any neural model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characteristics", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "We verify the empirical performances of our methods in the successive sections.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characteristics", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "3 Experimental Settings", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characteristics", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "We perform nested entity extraction experiments intensively on ACE-2005 (Doddington et al., 2004 and GENIA . For ACE-2005, we use the same splits of documents as Lu and Roth (2015) , published on their website. 6 For GENIA, we use GENIAcorpus3.02p, 7 in which sentences are already tokenized (Tateisi and Tsujii, 2004) . Following previous work (Finkel and Manning, 2009; Lu and Roth, 2015) , we first split the last 10% of sentences as the test set. Next, we use the first 81% and the subsequent 9% for training and development sets, respectively. We make the same modifications as described by Finkel and Manning (2009) by collapsing all DNA, RNA, and protein subtypes into DNA, RNA, and protein, keeping cell line and cell type, and removing other entity types, resulting in 5 entity types. The statistics of each dataset are shown in Table 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 71, |
|
"text": "ACE-2005", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 72, |
|
"end": 96, |
|
"text": "(Doddington et al., 2004", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 180, |
|
"text": "Lu and Roth (2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 318, |
|
"text": "(Tateisi and Tsujii, 2004)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 371, |
|
"text": "(Finkel and Manning, 2009;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 390, |
|
"text": "Lu and Roth, 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 621, |
|
"text": "Finkel and Manning (2009)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 838, |
|
"end": 845, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.1" |
|
}, |
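
{

"text": "For concreteness, the sentence-level split of GENIA described above (first 81% train, next 9% development, last 10% test) can be reproduced along the following lines; this is our sketch, and the integer rounding of the split boundaries is an assumption.\n\ndef split_genia(sentences):\n    # sentences: the GENIA corpus sentences in their original order.\n    n = len(sentences)\n    test_start = int(n * 0.9)   # last 10% as the test set\n    dev_start = int(n * 0.81)   # first 81% train, subsequent 9% dev\n    return sentences[:dev_start], sentences[dev_start:test_start], sentences[test_start:]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Datasets",

"sec_num": "3.1"

},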
|
{ |
|
"text": "In this study, we adopt as baseline a BiLSTM-CRF model, which is widely used for NER tasks (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016; Reimers and Gurevych, 2017) . We apply our usage of CRF to this baseline. We prepare three types of models for fair comparisons with existing methods. The first one is the model to which is fed conventional word embeddings and CNN-based character-level representation (Ma and Hovy, 2016; Chiu and Nichols, 2016; 6 http://www.statnlp.org/research/ie. 7 http://www.geniaproject.org/genia-corpus/ pos-annotation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 112, |
|
"text": "(Lample et al., 2016;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 131, |
|
"text": "Ma and Hovy, 2016;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 155, |
|
"text": "Chiu and Nichols, 2016;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 183, |
|
"text": "Reimers and Gurevych, 2017)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 443, |
|
"text": "(Ma and Hovy, 2016;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 467, |
|
"text": "Chiu and Nichols, 2016;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 469, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Algorithm 3: LogSumExp of the scores of all possible paths except the best path", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "C = {B-X, I-X, E-X, S-X, O}; s = s (k) l,j ; # the start position e = e (k)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "l,j ; # the end position c 1 (s) = B-X; # the best path", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "for i = s + 1; i \u2264 e \u2212 1; i + + do c 1 (i) = I-X; c 1 (e) = E-X; foreach c \u2208 C do \u03b1(c) = P (k) c,s + A (k) [S],c ; \u03b2 = \u2212\u221e; for i = s + 1; i \u2264 e; i + + do foreach c \u2208 C do foreach c \u2032 \u2208 C do \u03b1 c (c \u2032 ) = \u03b1 (c \u2032 ) + P (k) c,i + A (k) c \u2032 ,c ; if c == c 1 (i) then foreach c \u2032 \u2208 C\\{c 1 (i \u2212 1)} do \u03b2 c (c \u2032 ) = \u03b1 c (c \u2032 ); \u03b2 c (c 1 (i \u2212 1)) = \u03b2 + P (k) c,i + A (k) c 1 (i\u22121),c ; foreach c \u2208 C do \u03b1(c) = LogSumExp (\u03b1 c ); \u03b2 = LogSumExp (\u03b2 c ); foreach c \u2208 C\\{c 1 (e)} do \u03b1(c)+ = A (k) c,[E] ; \u03b1 (c 1 (e)) = \u03b2 + A (k) E-X,[E] ; return LogSumExp (\u03b1);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Reimers and Gurevych, 2017). 8 We initialize word embeddings with the pretrained embeddings GloVe (Pennington et al., 2014) of dimension 100 in ACE-2005. For GENIA, we adopt the pretrained embeddings trained on MEDLINE abstracts instead. The initialized word embeddings are fixed during training. The vectors of the word embeddings and the character-level representation are concatenated and then input into (1) 30(1) 23 (0) 0 (0) 0 (0) -4th level 9 (0) 0 (0) 2 (0) 0 (0) 0 (0) 0 (0) # labels per token ( the BiLSTM layer. The second model is the model combined with the pretrained BERT model (Devlin et al., 2019) . 9 We use the uncased version of BERT large model as a contextual word embeddings generator without fine-tuning and stack the BiLSTM layers on top of the BERT model. The third model is the BiLSTM-CRF model to which is fed word embeddings, character-level representation, BERT embeddings, and FLAIR embeddings (Akbik et al., 2018) using FLAIR framework (Akbik et al., 2019) . 10 All our models have 2 BiLSTM hidden layers, and the dimensionality of each hidden unit is 256 in all our experiments. Table 2 lists the hyperparameters used for our experimental evaluations. We adopt AdaBound (Luo et al., 2019) as an optimizer. Early stopping is used based on the performance of development set. We repeat the experiment 5 times with different random seeds and report average and standard deviation of F1-scores on a test set as the final performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 30, |
|
"text": "8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 98, |
|
"end": 123, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 593, |
|
"end": 614, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 618, |
|
"text": "9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 925, |
|
"end": 945, |
|
"text": "(Akbik et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 968, |
|
"end": 988, |
|
"text": "(Akbik et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1203, |
|
"end": 1221, |
|
"text": "(Luo et al., 2019)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1112, |
|
"end": 1119, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model and Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "9 https://github.com/yahshibu/nested-ner-tacl2020-transformers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model and Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "10 https://github.com/yahshibu/nested-ner-tacl2020-flair. Table 3 presents comparisons of our model with existing methods. Note that some existing methods use embeddings of POS tags as an additional input feature whereas our method does not. Our method outperforms the existing methods with 76.83% and 77.19% in terms of F1-score when using only word embeddings and character-level representation. Especially, our method brings much higher recall values than the other methods. The recall scores are improved by 3.1% and 2.4% on ACE-2005 and GENIA datasets, respectively. These results demonstrate that our training and decoding algorithms are quite effective for extracting nested entities. Moreover, when we use BERT and FLAIR as contextual word embeddings, we achieve an F1-score of 83.99% with BERT and 84.34% with BERT and FLAIR on ACE-2005. On the other hand, BERT does not perform well on GENIA. We assume that this is because the domain of GENIA is quite different from that of the corpus used for training the BERT model. Regardless, it is demonstrated that our method performs better than or at least as well as existing methods.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 65, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model and Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We conduct an ablation study to verify the effectiveness of our learning and decoding methods. We first replace our objective function for training with the standard objective function of the linearchain CRF. The methods for decoding N -best paths have been well studied because such algorithms have been required in many domains (Soong and Huang, 1990; Kaji et al., 2010; Huang et al., 2012) . However, we hypothesize that our learning method, as well as our decoding method, helps to improve performance. That is why we first remove only our learning method. Then, we also replace our decoding algorithm with the standard decoding algorithm of the linear-chain CRF. It is equivalent to preparing the conventional CRF for each entity type separately. The results are shown in Table 4 . They demonstrate that introducing only our decoding algorithm results in high recall scores but hurts precision. This suggests that our learning method should be necessary for achieving high precision. Besides, removing the decoding algorithm results in lower recall. This is natural because it does not intend to find any nested entity after extracting outermost entities. Thus, it is demonstrated that both our learning and decoding algorithms contribute much to good performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 353, |
|
"text": "Huang, 1990;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 372, |
|
"text": "Kaji et al., 2010;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 392, |
|
"text": "Huang et al., 2012)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 777, |
|
"end": 784, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ablation Study", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To further understand how our method handles nested entities, we investigate the performance for entities of each level. Table 5 shows the recall scores for gold entities of each level when using conventional word embeddings. Among all levels, our model results in the best performance at the 1st level that consists of only gold outermost entities. The deeper a level, the lower recall scores. On the other hand, Table 6 shows the precision scores for predicted entities in each level of one trial on each dataset. Because the number of levels in the predictions vary between trials, taking macro average of precision scores over multiple trials is not representative. Therefore, we show only the precision scores from one trial in Table 6 . The precision score for the 5th level on ACE-2005 is as high as or higher than those of other levels. Precision scores are less dependent on level. This tendency is also shown in other trials.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 128, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 421, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 733, |
|
"end": 740, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis of Behavior", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In addition, we compare the tendency of our method with that of an existing method. We select", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Behavior", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "Table 5: Recall scores for gold annotations of each level (recall in %, with the number of gold entities in parentheses). ACE-2005: 1st level 76.10 \u00b1 0.50 (2,686); 2nd level 71.70 \u00b1 0.70 (323); 3rd level 58.00 \u00b1 5.42 (30); 4th level 50.00 \u00b1 0.00 (2). GENIA: 1st level 77.92 \u00b1 0.72 (5,273); 2nd level 40.61 \u00b1 1.74 (327); no gold entities at the 3rd and 4th levels.",

"cite_spans": [],

"ref_spans": [

{

"start": 0,

"end": 7,

"text": "Table 5",

"ref_id": null

}

],

"eq_spans": [],

"section": "Analysis of Behavior",

"sec_num": "4.3"

},
|
{ |
|
"text": "Wang and Lu (2018) method for comparison. 14 We train their model with the ACE-2005 dataset using their original implementation and repeat that 5 times. The recall scores from the 1st level to the 4th level are 66.52%, 65.34%, 42.14%, and 50.00%, respectively. The tendency about the difference across levels is common to Wang and Lu (2018) method and our method, and the scores from our method (Table 5 ) are entirely higher than those from their method. It is demonstrated that our method can extract both outer and inner entities better. On the other hand, their method can extract crossing entities (two entities overlap but neither is contained in the other), although our method cannot. Actually, their model outputs some crossing spans in our experiments. In this case, we cannot analyze the results regarding precision scores in the same manner as Table 6 . There are cases where one cannot uniquely decide the level of an span nested within multiple crossing spans. Regardless, our method cannot handle crossing entities. However, crossing entities are very rare (Lu and Roth, 2015; . The test sets of ACE-2005 and GENIA have no crossing entities. This property of our method does not have a negative impact on performance, at least on the ACE-2005 and GENIA datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1072, |
|
"end": 1091, |
|
"text": "(Lu and Roth, 2015;", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 403, |
|
"text": "(Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 856, |
|
"end": 863, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis of Behavior", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We manually scan the test set predictions on ACE-2005. We find that many of the errors can be classified into two types. The first type is partial prediction error. Given the following sentence: ''Let me set aside the hypocrisy of a man who became president because of a lawsuit trying to eliminate everybody else's lawsuits, but instead focus on his own experience''. The annotation marks ''a man who became president because of a lawsuit'', but our model extracts a shorter or longer span. It is difficult to extract the proper spans of clauses that contain numerous modifiers. The second type is error derived from pronominal mention. Consider the following example: ''They roar, they screech.''. These ''They''s refer to ''tanks'' in another sentence of the same document and are indeed annotated as VEH (Vehicle). Our model fails to detect these pronominal mentions or wrongly labels them as PER (Person). Document context should be taken into consideration to solve this problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "These types of errors have been reported by Katiyar and Cardie (2018) , Ju et al. (2018) , and Lin et al. (2019) and still remain as challenges.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 69, |
|
"text": "Katiyar and Cardie (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 72, |
|
"end": 88, |
|
"text": "Ju et al. (2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 112, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We investigate how our recursive decoding method impacts on the decoding speed in terms of the number of words processed per second. We use the model trained with ACE-2005 used for Table 6 and change the maximal depth of decoding to 1, 2, 3, 4, 5, and \u221e. When the maximal depth is n, our decoder Viterbi-decodes only from the 1st level to the n-th level. Note that, when the maximal depth is 1, the decoding process is completely the same as the Viterbi decoding of the standard CRF. We run them on an Intel i7 (2.7 GHz) CPU.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 188, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Running Time", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Results are listed in Table 7 . The processed words per second decrease by 38% when the Method P (%) R (%) F1 (%) Katiyar and Cardie (2018) 72 maximal depth varies from 1 to 2. There are two main reasons for this phenomenon. First, our decoder needs the processing for moving across different levels. That processing is not necessary when the maximal depth is 1. Second, the number of the extracted spans at the 2nd level is large and not negligible (12.5% of that of the extracted spans at the 1st level as shown in Table 6 ). The numbers of the extracted spans at the 3rd and lower levels are small, and then the processed words do not largely decrease when the maximal depth increases over 2. Regardless, our decoder does not take twice as long as the standard CRF on ACE-2005.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 139, |
|
"text": "Katiyar and Cardie (2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 29, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 524, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Running Time", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "We also compare our method with existing methods on the ACE-2004 dataset. We use the same splits as Lu and Roth (2015) . The setups are the same as those of our experiment on ACE-2005. Table 8 shows the results. As shown, our method significantly outperforms existing methods. Note that most of them use POS tags as an additional input feature whereas our method does not.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 118, |
|
"text": "Lu and Roth (2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 192, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison on ACE-2004", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "To assess how our model works on flat NER task, we additionally evaluate our model on CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) , which is annotated with outermost entities only. The setups here are the same as those of our experiment on ACE-2005. We not only prepare our proposed model but also the ablated model without our training nor decoding method, as in Section 4.2. The former model can extract spans nested within other extracted spans regardless of the property of the dataset, but the latter model never extracts spans within other extracted spans. We use the 100-dimensional GloVe embeddings for both models as in our previous experiments. The results are in Table 9 . We compare our method with existing methods that do not adopt any contextual word embeddings (the upside of Table 9 ) here, although we also show results from recent work with contextual word embeddings for reference. First, in comparison with the methods designed for nested NER Strakov\u00e1 et al., 2019) , our method performs better even on CoNLL-2003. This means that our method works well on not only nested NER but also flat NER. Next, we compare with methods that can handle only flat NER. Table 9 shows that our method is comparable to the standard BiLSTM-CRF models (Lample et al., 2016; Ma and Hovy, 2016) on CoNLL-2003. However, note that there are some differences between the experiments of the previous studies (Lample et al., 2016; Ma and Hovy, 2016) and our experiment. For example, different word embeddings are used, or the hidden size of LSTM is not aligned. Nevertheless, we can compare our proposed model to the ablated model. As shown in Table 9 , there is a significant gap (p < 0.005 with the permutation test) between the two scores, 91.14(\u00b10.04)% and 90.84(\u00b10.10)%. We analyze this gap in detail and find that our proposed model performs well especially in the cases where it is difficult to decide which is suitable, an inner span or an outer span. Given the following sentence: ''An assessment group made up of the State Council's Port Office, the Civil Aviation Administration of China, the General Administration of Customs and other authorities had granted the airport permission to handle foreign aircraft, Xinhua said .''. In the CoNLL-2003 dataset, the four spans ''State Council'', ''Civil Aviation Administration of China'', ''General Administration of Customs'', and ''Xinhua'' are annotated as ORG (Organization). Both models correctly detect the latter three entities in most trials, but the ablated model tends to extract ''State Council 's Port Office'' instead of ''State Council''. On the other hand, our proposed model tends to extract both ''State Council 's Port Office'' and ''State Council''. ''State Council 's Port Office'' is indeed a false-positive, but our model can detect the correct entity span ''State Council'' more steadily than the ablated model. Thus, our proposed model achieves the higher F1-score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 116, |
|
"text": "CoNLL-2003 (Tjong Kim Sang and", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 134, |
|
"text": "De Meulder, 2003)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 969, |
|
"end": 991, |
|
"text": "Strakov\u00e1 et al., 2019)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 1260, |
|
"end": 1281, |
|
"text": "(Lample et al., 2016;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1282, |
|
"end": 1300, |
|
"text": "Ma and Hovy, 2016)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1410, |
|
"end": 1431, |
|
"text": "(Lample et al., 2016;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1432, |
|
"end": 1450, |
|
"text": "Ma and Hovy, 2016)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 679, |
|
"end": 686, |
|
"text": "Table 9", |
|
"ref_id": "TABREF13" |
|
}, |
|
{ |
|
"start": 797, |
|
"end": 804, |
|
"text": "Table 9", |
|
"ref_id": "TABREF13" |
|
}, |
|
{ |
|
"start": 1182, |
|
"end": 1189, |
|
"text": "Table 9", |
|
"ref_id": "TABREF13" |
|
}, |
|
{ |
|
"start": 1645, |
|
"end": 1652, |
|
"text": "Table 9", |
|
"ref_id": "TABREF13" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Flat NER", |
|
"sec_num": "4.7" |
|
}, |
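The following is a minimal sketch of the permutation test behind the significance claim above, assuming the per-trial F1 scores of the two models are available as plain lists; the exact permutation protocol and the number of permutations are illustrative assumptions, not taken from the paper.

```python
import random
from statistics import mean

def permutation_test(scores_a, scores_b, n_perm=100_000, seed=0):
    """Two-sided permutation test on the difference of mean F1 scores.

    scores_a / scores_b: per-trial F1 scores of two models (assumed inputs;
    the paper reports means and standard deviations over repeated trials).
    Returns an estimated p-value.
    """
    rng = random.Random(seed)
    observed = abs(mean(scores_a) - mean(scores_b))
    pooled = list(scores_a) + list(scores_b)
    n_a = len(scores_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Example with illustrative (not actual) per-trial scores:
# p = permutation_test([91.10, 91.15, 91.18, 91.13], [90.72, 90.95, 90.80, 90.89])
```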
|
{ |
|
"text": "Recently, proposed a new architecture for sequence labeling, which can capture global information at the sentence level better than BiLSTM, and reported an F1score of 91.96% when using conventional word embeddings (93.47% when using BERT). It is true that our model based on BiLSTM does not perform as well as their model, but our decoder can be combined with their proposed encoder. We leave it for future work. Alex et al. (2007) proposed several ways to combine multiple CRFs for such tasks. They found that, when they cascaded separate CRFs of each entity type by using the output from the previous CRF as the input features of the current CRF, best performance was yielded. However, their method could not handle nested entities of the same entity type. In contrast, Ju et al. (2018) dynamically stacked multiple layers that recognize entities sequentially from innermost ones to outermost ones. Their method can deal with nested entities of the same entity type. Finkel and Manning (2009) proposed a CRFbased constituency parser for this task such that each named entity is a node in the parse tree. However, its time complexity is the cube of the length of a given sentence, making it not scalable to large datasets involving long sentences. Later on, proposed a scalable transition-based approach, a constituency forest (a collection of constituency trees). Its time complexity is linear in the sentence length.", |
|
"cite_spans": [ |
|
{ |
|
"start": 413, |
|
"end": 431, |
|
"text": "Alex et al. (2007)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 772, |
|
"end": 788, |
|
"text": "Ju et al. (2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 969, |
|
"end": 994, |
|
"text": "Finkel and Manning (2009)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat NER", |
|
"sec_num": "4.7" |
|
}, |
|
{ |
|
"text": "Lu and Roth (2015) introduced a mention hypergraph representation for capturing nested entities as well as crossing entities (two entities overlap but neither is contained in the other). One issue in their approach is the spurious structures of the representation. Muis and Lu (2017) incorporated mention separators to address the spurious structures issue, but it still suffers from the structural ambiguity issue. proposed a hypergraph representation free of structural ambiguity. However, they introduced a hyperparameter, the maximal length of an entity, to reduce the time complexity. Setting the hyperparameter to a small number results in speeding up but ignoring longer entity segments. Katiyar and Cardie (2018) proposed another hypergraph-based approach that learns the structure using an LSTM network in a greedy manner. However, their method has a hyperparameter that sets a threshold for selecting multiple candidate mentions. It must be carefully tuned for adjusting the trade-off between recall and precision. Sohrab and Miwa (2018) proposed a neural exhaustive model that enumerates all possible spans as potential entity mentions and classifies them. However, they also use the maximal-length hyperparameter to reduce time complexity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 695, |
|
"end": 720, |
|
"text": "Katiyar and Cardie (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1025, |
|
"end": 1047, |
|
"text": "Sohrab and Miwa (2018)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Fisher and Vlachos (2019) proposed a novel neural network architecture that merges tokens or entities into entities forming nested structures and then labels each of them. Their architecture, however, needs the maximal nesting level hyperparameter. Lin et al. (2019) proposed a sequence-to-nuggets architecture that first identify anchor words of all mentions and then recognize the mention boundaries for each anchor word. Their method also use the maximal-length hyperparameter to reduce time complexity. Strakov\u00e1 et al. (2019) proposed an encoding algorithm to allow the modeling of multiple named entity labels in a linearized scheme and proposed a neural model that predicts sequential labels for each token. Zheng et al. (2019) proposed a method that can detect entities boundaries with sequence labeling models. These two methods do not require special hyperparameters. They can also deal with crossing entities as well as nested entities in contrast to our method, but our experiments demonstrate that our method can perform well because crossing entities are very rare (Lu and Roth, 2015; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 266, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 529, |
|
"text": "Strakov\u00e1 et al. (2019)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 714, |
|
"end": 733, |
|
"text": "Zheng et al. (2019)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 1078, |
|
"end": 1097, |
|
"text": "(Lu and Roth, 2015;", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We propose learning and decoding methods for extracting nested entities. Our decoding method iteratively recognizes entities from outermost ones to inner ones in an outside-to-inside way. It recursively searches a span of each extracted entity for nested entities with second-best sequence decoding. We also design an objective function for training that ensures our decoding algorithm. Our method has no hyperparameters beyond those of conventional CRF-based models. Our method achieves 85.82%, 84.34%, and 77.36% F1-scores on ACE-2004, ACE-2005, and GENIA datasets, respectively. For future work, one interesting direction is joint modeling of NER with entity linking or coreference resolution. Previous studies (Durrett and Klein, 2014; Luo et al., 2015; Nguyen et al., 2016; Martins et al., 2019) demonstrated that leveraging mutual dependency of the NER, linking, and coreference tasks could boost each performance. We would like to address this issue while taking nested entities into account.", |
|
"cite_spans": [ |
|
{ |
|
"start": 528, |
|
"end": 551, |
|
"text": "ACE-2004, ACE-2005, and", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 581, |
|
"text": "GENIA datasets, respectively.", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 714, |
|
"end": 739, |
|
"text": "(Durrett and Klein, 2014;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 740, |
|
"end": 757, |
|
"text": "Luo et al., 2015;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 758, |
|
"end": 778, |
|
"text": "Nguyen et al., 2016;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 779, |
|
"end": 800, |
|
"text": "Martins et al., 2019)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our usage of inside/outside is different from the insideoutside algorithm in dynamic programming.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Without our restriction about the transition matrix of CRF, we would have to watch both the best path and the 2nd best path. Besides, if a single CRF was used for all entity types, the decoder could not always narrow down spans with the 2nd best path. The 2nd best path in a single CRF could result in the same span tagged a different entity type. We would have to watch lower-ranked paths.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
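To make the role of lower-ranked paths concrete, here is a generic k-best Viterbi decoder for a linear-chain model with plain emission and transition score matrices. It is only an illustrative sketch: with k=2 it tracks the best and 2nd-best paths, and a larger k corresponds to watching lower-ranked paths; it does not implement the span-restricted, per-entity-type decoding of the paper.

```python
def k_best_viterbi(emissions, transitions, k=2):
    """Return the k highest-scoring tag sequences of a linear-chain model.

    emissions[t][y]: score of tag y at position t.
    transitions[p][y]: score of the transition from tag p to tag y.
    A path's score is the sum of its emission and transition scores.
    """
    n, n_tags = len(emissions), len(emissions[0])
    # beams[t][y]: up to k (score, backpointer) pairs; backpointer = (prev_tag, rank)
    beams = [[[] for _ in range(n_tags)] for _ in range(n)]
    for y in range(n_tags):
        beams[0][y] = [(emissions[0][y], None)]
    for t in range(1, n):
        for y in range(n_tags):
            cands = [
                (score + transitions[p][y] + emissions[t][y], (p, r))
                for p in range(n_tags)
                for r, (score, _) in enumerate(beams[t - 1][p])
            ]
            beams[t][y] = sorted(cands, key=lambda c: -c[0])[:k]
    # Gather the best final entries across all tags and ranks, then backtrack.
    finals = sorted(
        ((score, y, r)
         for y in range(n_tags)
         for r, (score, _) in enumerate(beams[n - 1][y])),
        key=lambda c: -c[0],
    )[:k]
    results = []
    for score, y, r in finals:
        tags, t, cur = [], n - 1, (y, r)
        while cur is not None:
            tag, rank = cur
            tags.append(tag)
            cur = beams[t][tag][rank][1]
            t -= 1
        results.append((score, tags[::-1]))
    return results

# Toy usage: 3 tokens, 2 tags; the two returned sequences are the best and 2nd-best paths.
# print(k_best_viterbi([[1.0, 0.2], [0.3, 0.9], [0.8, 0.1]], [[0.0, -0.5], [-0.5, 0.0]], k=2))
```

With per-entity-type CRFs and the transition restriction, keeping only the top two entries per state (k=2) suffices within each searched span; without them, one might need a larger k, as the footnote notes.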
|
{ |
|
"text": "We do not need to recursively decode the span of each extracted single-token entity because a single-token entity cannot contain another entity of the same entity type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/yahshibu/nested-ner-tacl2020.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that inACE-2005, Ju et al. (2018 did their experiments with a different split from Lu and Roth (2015) that we follow.12 Wang et al. (2018) did not report precision and recall scores. Instead of, reported the scores for the model of.13 Strakov\u00e1 et al. (2019) did not report precision and recall scores in their paper. We requested this information from the authors, and they provided their score data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We do not use POS tags as one of input features for a fair comparison with our method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "did not report precision and recall scores. Instead of, reported the scores for the model of.16 Strakov\u00e1 et al. (2019) did not report precision and recall scores in their paper. We requested this information from the authors, and they provided their score data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Aldrian Obaja Muis for helpful comments, and many anonymous reviewers and the action editor for helpful feedback on various drafts of the paper. We are also grateful to Jana Strakov\u00e1 for sharing experimental results. Eduard Hovy was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "FLAIR: An easy-to-use framework for state-of-the-art NLP", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanja", |
|
"middle": [], |
|
"last": "Bergmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kashif", |
|
"middle": [], |
|
"last": "Rasul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schweter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use frame- work for state-of-the-art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, Minneapolis, Minnesota, Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Contextual string embeddings for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1638--1649", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for se- quence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA, Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Recognising nested named entities in biomedical text", |
|
"authors": [ |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Alex", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Grover", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Biological, Translational, and Clinical Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beatrice Alex, Barry Haddow, and Claire Grover. 2007. Recognising nested named entities in biomedical text. In Biological, Translational, and Clinical Language Processing, pages 65-72, Prague, Czech Republic. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Clozedriven pretraining of self-attention networks", |
|
"authors": [ |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5360--5369", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Cloze- driven pretraining of self-attention networks. In Proceedings of the 2019 Conference on Em- pirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5360-5369, Hong Kong, China. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Nested named entity recognition in historical archive text", |
|
"authors": [ |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "International Conference on Semantic Computing (ICSC 2007)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "589--596", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kate Byrne. 2007. Nested named entity recogni- tion in historical archive text. In International Conference on Semantic Computing (ICSC 2007), pages 589-596.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Learning to rank: From pairwise approach to listwise approach", |
|
"authors": [ |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Feng", |
|
"middle": [], |
|
"last": "Tsai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 24th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "129--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: From pairwise approach to listwise approach. In Proceedings of the 24th International Con- ference on Machine Learning, pages 129-136.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "How to train good word embeddings for biomedical NLP", |
|
"authors": [ |
|
{ |
|
"first": "Billy", |
|
"middle": [], |
|
"last": "Chiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gamal", |
|
"middle": [], |
|
"last": "Crichton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 15th Workshop on Biomedical Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "166--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to train good word embeddings for biomedical NLP. In Proceed- ings of the 15th Workshop on Biomedical Natural Language Processing, pages 166-174, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Named entity recognition with bidirectional LSTM-CNNs", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Jason", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Chiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nichols", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "357--370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason P. C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM- CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The automatic content extraction (ACE) program -tasks, data, and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Doddington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Przybocki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Strassel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program -tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Lan- guage Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Re- sources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A joint model for entity analysis: Coreference, typing, and linking", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "477--490", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics, 2:477-490.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Nested named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "141--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel and Christopher D. Manning. 2009. Nested named entity recognition. In Pro- ceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 141-150, Singapore. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Merge and label: A novel neural network architecture for nested NER", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Fisher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5840--5850", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Fisher and Andreas Vlachos. 2019. Merge and label: A novel neural network architecture for nested NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5840-5850, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Iterative Viterbi A* algorithm for k-best sequential decoding", |
|
"authors": [ |
|
{ |
|
"first": "Zhiheng", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-Francois", |
|
"middle": [], |
|
"last": "Crespo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anlei", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sathiya", |
|
"middle": [], |
|
"last": "Keerthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Su-Lin", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "611--619", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiheng Huang, Yi Chang, Bo Long, Jean- Francois Crespo, Anlei Dong, Sathiya Keerthi, and Su-Lin Wu. 2012. Iterative Viterbi A* algorithm for k-best sequential decoding. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 611-619, Jeju Island, Korea. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Improved differentiable architecture search for language modeling and named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Yufan", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chi", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunliang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingbo", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3585--3590", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yufan Jiang, Chi Hu, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2019. Improved differentiable architecture search for language modeling and named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3585-3590, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A neural layered model for nested named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Meizhi", |
|
"middle": [], |
|
"last": "Ju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophia", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1446--1459", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446-1459, New Orleans, Louisiana. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Efficient staggered decoding for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Nobuhiro", |
|
"middle": [], |
|
"last": "Kaji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasuhiro", |
|
"middle": [], |
|
"last": "Fujiwara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naoki", |
|
"middle": [], |
|
"last": "Yoshinaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaru", |
|
"middle": [], |
|
"last": "Kitsuregawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "485--494", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nobuhiro Kaji, Yasuhiro Fujiwara, Naoki Yoshinaga, and Masaru Kitsuregawa. 2010. Efficient staggered decoding for sequence labeling. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 485-494, Uppsala, Sweden, Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Nested named entity recognition revisited", |
|
"authors": [ |
|
{ |
|
"first": "Arzoo", |
|
"middle": [], |
|
"last": "Katiyar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "861--871", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Pro- ceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861-871, New Orleans, Louisiana, Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "GENIA corpus-a semantically annotated corpus for bio-textmining", |
|
"authors": [ |
|
{ |
|
"first": "J.-D", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ohta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Tateisi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Bioinformatics", |
|
"volume": "19", |
|
"issue": "1", |
|
"pages": "180--182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J.-D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii. 2003. GENIA corpus-a semantically annotated corpus for bio-textmining. Bioinformatics, 19(Suppl 1):i180-i182.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [ |
|
"C N" |
|
], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "282--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional ran- dom fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282-289.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Neural architectures for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuya", |
|
"middle": [], |
|
"last": "Kawakami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "260--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Sequence-to-nuggets: Nested entity mention detection via anchor-region networks", |
|
"authors": [ |
|
{ |
|
"first": "Hongyu", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaojie", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xianpei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5182--5192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5182-5192, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "GCDT: A global context enhanced deep transition architecture for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Yijin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fandong", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinchao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinan", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yufeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2431--2441", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. GCDT: A global context enhanced deep transition architecture for sequence labeling. In Pro- ceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2431-2441, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Joint mention extraction and classification with mention hypergraphs", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "857--867", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857-867, Lisbon, Portugal. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Joint entity recognition and disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Gang", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojiang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of the 2015", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Conference on Empirical Methods in Natural Language Processing", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "879--888", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Conference on Empirical Methods in Natural Language Processing, pages 879-888, Lisbon, Portugal. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Adaptive gradient methods with dynamic bound of learning rate", |
|
"authors": [ |
|
{ |
|
"first": "Liangchen", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuanhao", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liangchen Luo, Yuanhao Xiong, Yan Liu, and Xu Sun. 2019. Adaptive gradient methods with dynamic bound of learning rate. CoRR, abs/ 1902.09843. Version 1.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Endto-end sequence labeling via bi-directional LSTM-CNNs-CRF", |
|
"authors": [ |
|
{ |
|
"first": "Xuezhe", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1064--1074", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End- to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natu- ral language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demon- strations, pages 55-60, Baltimore, Maryland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Joint learning of named entity recognition and entity linking", |
|
"authors": [ |
|
{ |
|
"first": "Pedro", |
|
"middle": [ |
|
"Henrique" |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zita", |
|
"middle": [], |
|
"last": "Marinho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [ |
|
"F T" |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "190--196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedro Henrique Martins, Zita Marinho, and Andr\u00e9 F. T. Martins. 2019. Joint learning of named entity recognition and entity linking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 190-196, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Labeling gaps between words: Recognizing overlapping mentions with mention separators", |
|
"authors": [ |
|
{ |
|
"first": "Aldrian", |
|
"middle": [], |
|
"last": "Obaja", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muis", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2608--2618", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aldrian Obaja Muis and Wei Lu. 2017. Label- ing gaps between words: Recognizing overlap- ping mentions with mention separators. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2608-2618, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A survey of named entity recognition and classification", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Nadeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Lingvisticae Investigationes", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "3--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1):3-26.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "J-NERD: Joint named entity recognition and disambiguation with rich linguistic features", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Dat Ba Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Theobald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "215--229", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dat Ba Nguyen, Martin Theobald, and Gerhard Weikum. 2016. J-NERD: Joint named entity recognition and disambiguation with rich linguistic features. Transactions of the Associa- tion for Computational Linguistics, 4:215-229.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "GloVe: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Ocher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Ocher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "338--348", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a differ- ence: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "List Viterbi decoding algorithms with applications", |
|
"authors": [ |
|
{ |
|
"first": "Nambirajan", |
|
"middle": [], |
|
"last": "Seshadri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Carl-Erik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sundberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "IEEE Transactions on Communications", |
|
"volume": "42", |
|
"issue": "234", |
|
"pages": "313--323", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nambirajan Seshadri and Carl-Erik W. Sundberg. 1994. List Viterbi decoding algorithms with applications. IEEE Transactions on Communi- cations, 42(234):313-323.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Deep exhaustive model for nested named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Golam", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Sohrab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2843--2849", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843-2849, Brus- sels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A Tree.Trellis based fast search for finding the n best sentence hypotheses in continuous speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eng-Fong", |
|
"middle": [], |
|
"last": "Soong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frank K. Soong and Eng-Fong Huang. 1990. A Tree.Trellis based fast search for finding the n best sentence hypotheses in continuous speech recognition. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Neural architectures for nested NER through linearization", |
|
"authors": [ |
|
{ |
|
"first": "Jana", |
|
"middle": [], |
|
"last": "Strakov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Straka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5326--5331", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jana Strakov\u00e1, Milan Straka, and Jan Hajic. 2019, Jul. Neural architectures for nested NER through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Fast and accurate entity recognition with iterated dilated convolutions", |
|
"authors": [ |
|
{ |
|
"first": "Emma", |
|
"middle": [], |
|
"last": "Strubell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Verga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Belanger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In Proceedings of the 2017", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2670--2680", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Conference on Empirical Methods in Natural Language Processing, pages 2670-2680, Copenhagen, Denmark. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Part-ofspeech annotation of biology research abstracts", |
|
"authors": [ |
|
{ |
|
"first": "Yuka", |
|
"middle": [], |
|
"last": "Tateisi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun-Ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuka Tateisi and Jun-ichi Tsujii. 2004. Part-of- speech annotation of biology research abstracts. In Proceedings of the Fourth International Con- ference on Language Resources and Evalua- tion (LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Tjong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kim", |
|
"middle": [], |
|
"last": "Sang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fien", |
|
"middle": [], |
|
"last": "De Meulder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity re- cognition. In Proceedings of the Seventh Con- ference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Neural segmental hypergraphs for overlapping mention recognition", |
|
"authors": [ |
|
{ |
|
"first": "Bailin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "204--214", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recogni- tion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204-214, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "A neural transition-based model for nested mention recognition", |
|
"authors": [ |
|
{ |
|
"first": "Bailin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongxia", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1011--1017", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1011-1017, Brussels, Belgium. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Annotating and recognising named entities in clinical notes", |
|
"authors": [ |
|
{ |
|
"first": "Yefeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the ACL-IJCNLP 2009 Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yefeng Wang. 2009. Annotating and recognising named entities in clinical notes. In Proceedings of the ACL-IJCNLP 2009 Student Research Workshop, pages 18-26, Suntec, Singapore. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "A boundaryaware neural model for nested named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Changmeng", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingyun", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{

"first": "Ho-Fung",

"middle": [],

"last": "Leung",

"suffix": ""

},

{

"first": "Guandong",

"middle": [],

"last": "Xu",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "357--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary- aware neural model for nested named entity recognition. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Lan- guage Processing and the 9th International Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 357-366, Hong Kong, China. Association for Compu- tational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Example of nested entities.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"text": "denote the start and end positions of the j-th span at the l-th level. With regard to the 1st level, s", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Lattice and best path.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Divided search spaces.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Merge of search spaces.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "Statistics of the datasets used in the experiments. Note that in ACE-2005, sentences are not originally split. We report the numbers of sentences based on the preprocessing with the Stanford CoreNLP.", |
|
"html": null, |
|
"content": "<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>word dropout rate</td><td>0.05</td></tr><tr><td>character embedding dimension</td><td>128</td></tr><tr><td>CNN window size</td><td>3</td></tr><tr><td>CNN filter number</td><td>256</td></tr><tr><td>batch size</td><td>32</td></tr><tr><td>LSTM hidden size</td><td>256</td></tr><tr><td>LSTM dropout rate</td><td>0.2 (w/o BERT)</td></tr><tr><td/><td>0.5 (w/ BERT)</td></tr><tr><td>gradient clipping</td><td>5.0</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "\u00b1 0.81 75.44 \u00b1 0.37 76.83 \u00b1 0.36 78.70 \u00b1 0.69 75.74 \u00b1 0.64 77.19 \u00b1 0.10", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td>ACE-2005</td><td/><td/><td>GENIA</td><td/></tr><tr><td>Method</td><td>Precision (%)</td><td>Recall (%)</td><td>F1 (%)</td><td>Precision (%)</td><td>Recall (%)</td><td>F1 (%)</td></tr><tr><td>Katiyar and Cardie (2018)</td><td>70.6</td><td>70.4</td><td>70.5</td><td>79.8</td><td>68.2</td><td>73.6</td></tr><tr><td>Ju et al. (2018) 11</td><td>74.2</td><td>70.3</td><td>72.2</td><td>78.5</td><td>71.3</td><td>74.7</td></tr><tr><td>Wang et al. (2018) \u2020 12</td><td>74.5</td><td>71.5</td><td>73.0</td><td>78.0</td><td>70.2</td><td>73.9</td></tr><tr><td>Wang and Lu (2018) \u2020</td><td>76.8</td><td>72.3</td><td>74.5</td><td>77.0</td><td>73.3</td><td>75.1</td></tr><tr><td>Sohrab and Miwa (2018)</td><td>-</td><td>-</td><td>-</td><td>93.2</td><td>64.0</td><td>77.1</td></tr><tr><td>Zheng et al. (2019)</td><td>-</td><td>-</td><td>-</td><td>75.9</td><td>73.6</td><td>74.7</td></tr><tr><td>Fisher and Vlachos (2019)</td><td>75.1</td><td>74.1</td><td>74.6</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Lin et al. (2019) \u2020</td><td>76.2</td><td>73.6</td><td>74.9</td><td>75.8</td><td>73.9</td><td>74.8</td></tr><tr><td>Strakov\u00e1 et al. (2019) \u202013</td><td>76.35</td><td>74.39</td><td>75.36</td><td>79.60</td><td>73.53</td><td>76.44</td></tr><tr><td colspan=\"2\">This work 78.27 Fisher and Vlachos (2019) [BERT] 82.7</td><td>82.1</td><td>82.4</td><td>\u2212</td><td>\u2212</td><td>\u2212</td></tr><tr><td>Strakov\u00e1 et al. (2019) [BERT] \u2020</td><td>82.58</td><td>84.29</td><td>83.42</td><td>79.92</td><td>76.55</td><td>78.20</td></tr><tr><td>This work [BERT]</td><td colspan=\"6\">83.30 \u00b1 0.22 84.69 \u00b1 0.37 83.99 \u00b1 0.27 77.46 \u00b1 0.65 76.65 \u00b1 0.58 77.05 \u00b1 0.12</td></tr><tr><td colspan=\"2\">Strakov\u00e1 et al. (2019) [BERT+FLAIR] \u2020 83.48</td><td>85.21</td><td>84.33</td><td>80.11</td><td>76.60</td><td>78.31</td></tr><tr><td>This work [BERT+FLAIR]</td><td colspan=\"6\">83.83 \u00b1 0.39 84.87 \u00b1 0.09 84.34 \u00b1 0.20 77.81 \u00b1 0.69 76.94 \u00b1 1.12 77.36 \u00b1 0.26</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"text": "Main results. We group methods into three types. The first group consists of the methods that do not use any contextual word embeddings. The second group consists of the methods that use BERT but do not use any other contextual word embeddings. The third group consists of the methods that use both BERT and FLAIR. '' \u2020'' indicates the methods using POS tags. This work 78.27 \u00b1 0.81 75.44 \u00b1 0.37 76.83 \u00b1 0.36 78.70 \u00b1 0.69 75.74 \u00b1 0.64 77.19 \u00b1 0.10 -L 60.89 \u00b1 1.30 75.38 \u00b1 1.27 67.34 \u00b1 0.37 70.72 \u00b1 0.39 79.20 \u00b1 1.27 74.71 \u00b1 0.18 -L&D 77.77 \u00b1 0.31 67.42 \u00b1 0.29 72.22 \u00b1 0.13 79.70 \u00b1 0.56 73.41 \u00b1 0.35 76.43 \u00b1 0.28", |
|
"html": null, |
|
"content": "<table><tr><td>ACE-2005</td><td/><td>GENIA</td><td/></tr><tr><td>Precision (%) Recall (%)</td><td>F1 (%)</td><td>Precision (%) Recall (%)</td><td>F1 (%)</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"text": "Results when ablating away the learning (L) and decoding (D) components of our proposed method.", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"text": "Precision scores for predictions of each level of one trial.", |
|
"html": null, |
|
"content": "<table><tr><td>Maximal depth</td><td># tokens per second</td></tr><tr><td>1</td><td>6,083</td></tr><tr><td>2</td><td>3,761</td></tr><tr><td>3</td><td>3,655</td></tr><tr><td>4</td><td>3,742</td></tr><tr><td>5</td><td>3,723</td></tr><tr><td>\u221e (no restriction)</td><td>3,701</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"text": "Decoding speed on ACE-2005.", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF11": { |
|
"num": null, |
|
"text": "Comparison on ACE-2004. '' \u2020'' indicates the methods using POS tags.", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF13": { |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>: Comparison on CoNLL-2003. We</td></tr><tr><td>group methods into two types. The first</td></tr><tr><td>group consists of the methods that do not</td></tr><tr><td>use any contextual word embeddings. The</td></tr><tr><td>second one consists of the methods that use</td></tr><tr><td>contextual word embeddings such as BERT</td></tr><tr><td>and FLAIR. '' \u2020'' indicates the methods</td></tr><tr><td>using POS tags. '' \u2021'' indicates the methods</td></tr><tr><td>not designed to extract nested entities.</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |