{ "paper_id": "C18-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:11:37.275068Z" }, "title": "They Exist! Introducing Plural Mentions to Coreference Resolution and Entity Linking", "authors": [ { "first": "Ethan", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "", "institution": "Emory University Atlanta", "location": { "postCode": "30322", "region": "GA" } }, "email": "ethan.zhou@emory.edu" }, { "first": "Jinho", "middle": [ "D" ], "last": "Choi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Emory University Atlanta", "location": { "postCode": "30322", "region": "GA" } }, "email": "jinho.choi@emory.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper analyzes arguably the most challenging yet under-explored aspect of resolution tasks such as coreference resolution and entity linking, that is the resolution of plural mentions. Unlike singular mentions each of which represents one entity, plural mentions stand for multiple entities. To tackle this aspect, we take the character identification corpus from the SemEval 2018 shared task that consists of entity annotation for singular mentions, and expand it by adding annotation for plural mentions. We then introduce a novel coreference resolution algorithm that selectively creates clusters to handle both singular and plural mentions, and also a deep learning-based entity linking model that jointly handles both types of mentions through multi-task learning. Adjusted evaluation metrics are proposed for these tasks as well to handle the uniqueness of plural mentions. Our experiments show that the new coreference resolution and entity linking models significantly outperform traditional models designed only for singular mentions. To the best of our knowledge, this is the first time that plural mentions are thoroughly analyzed for these two resolution tasks.", "pdf_parse": { "paper_id": "C18-1003", "_pdf_hash": "", "abstract": [ { "text": "This paper analyzes arguably the most challenging yet under-explored aspect of resolution tasks such as coreference resolution and entity linking, that is the resolution of plural mentions. Unlike singular mentions each of which represents one entity, plural mentions stand for multiple entities. To tackle this aspect, we take the character identification corpus from the SemEval 2018 shared task that consists of entity annotation for singular mentions, and expand it by adding annotation for plural mentions. We then introduce a novel coreference resolution algorithm that selectively creates clusters to handle both singular and plural mentions, and also a deep learning-based entity linking model that jointly handles both types of mentions through multi-task learning. Adjusted evaluation metrics are proposed for these tasks as well to handle the uniqueness of plural mentions. Our experiments show that the new coreference resolution and entity linking models significantly outperform traditional models designed only for singular mentions. To the best of our knowledge, this is the first time that plural mentions are thoroughly analyzed for these two resolution tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Resolution tasks such as coreference resolution and entity linking are challenging because they require a holistic view of a document (or across multiple documents) to find correct entities. 
Although many models have been proposed for these tasks (Clark and Manning, 2016; Francis-Landau et al., 2016; Wiseman et al., 2016; Gupta et al., 2017; Lee et al., 2017) , most of them are focused on singular mentions such that they are insufficient for resolving the other type of mentions, plural, although the amount of plural mentions is not negligible in practice. 1 Table 1 illustrates how mentions are annotated for coreference resolution by the CoNLL'12 shared task (Pradhan et al., 2012) and our proposed work. In the CoNLL'12 annotation, the plural mention They 8 is grouped with the noun phrase [Mary 1 and John 2 ] 3 ; however, the other plural mention We 7 becomes a singleton because there is no noun phrase representing such an entity. Since CoNLL'12 limits each plural mention to be linked to a single noun phrase, it loses connections to individual entities that exist within the document but not grouped as a noun phrase.", "cite_spans": [ { "start": 247, "end": 272, "text": "(Clark and Manning, 2016;", "ref_id": "BIBREF3" }, { "start": 273, "end": 301, "text": "Francis-Landau et al., 2016;", "ref_id": "BIBREF5" }, { "start": 302, "end": 323, "text": "Wiseman et al., 2016;", "ref_id": "BIBREF14" }, { "start": 324, "end": 343, "text": "Gupta et al., 2017;", "ref_id": "BIBREF6" }, { "start": 344, "end": 361, "text": "Lee et al., 2017)", "ref_id": "BIBREF9" }, { "start": 562, "end": 563, "text": "1", "ref_id": null }, { "start": 666, "end": 688, "text": "(Pradhan et al., 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 564, "end": 571, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Mary1 and John2]3 came to see me4 yesterday. She5 looked happy, and so did he6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document", "sec_num": null }, { "text": "We7 had a great time together. They8 left around noon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document", "sec_num": null }, { "text": "CoNLL'12 {Mary1, She5}, {John2, he6}, {[Mary1 and John2]3, They8}, {me4}, {We7} Our Work {Mary1, She5, We7, They8}, {John2, he6, We7, They8}, {me4, We7} In our work, the plural mentions We 7 and They 8 are linked to multiple entities that those mentions refer to. This allows higher-level NLP tasks such as question answering or machine translation to reason more explicitly about those entities while adding another level of challenges to the resolution tasks. In this paper, we first present the annotation scheme for resolving plural mentions that is used to expand the corpus provided by the Character Mining project (Section 3). We then introduce a novel algorithm for coreference resolution that selectively creates clusters for singular and plural mentions, as well as evaluation metrics to handle plural mentions for coreference resolution (Section 4) . We also present a new deep learning-based entity linking model that jointly identifies both singular and plural mentions (Section 5). All models are evaluated on our dataset (Section 6); the experiments reveal significant improvement from our new models compared to the previous state-of-the-art models dedicated for singular mentions. 
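Concretely, the representation targeted throughout this work can be pictured as a mapping from each mention to a set of entities rather than to a single entity. The following minimal Python sketch, based on the example in Table 1, makes the contrast explicit; the entity identifiers, including the speaker label, are assumed purely for illustration:

# Hypothetical mention-to-entity mapping implied by Table 1; entity names are illustrative only.
mention_entities = {
    'Mary1':  {'MARY'},                      # singular: exactly one entity
    'John2':  {'JOHN'},
    'me4':    {'SPEAKER'},
    'We7':    {'MARY', 'JOHN', 'SPEAKER'},   # plural: multiple entities
    'They8':  {'MARY', 'JOHN'},
}

Singular mentions carry exactly one element while plural mentions carry several, and the resolution tasks studied here amount to recovering these sets.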
As far as we can tell, this is the first time that such annotation for plural mentions is provided in a large enough scale that deep learning models can be trained on, at the same time, machine learning models are developed to achieve promising results for the resolution of plural mentions.", "cite_spans": [ { "start": 848, "end": 859, "text": "(Section 4)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Document", "sec_num": null }, { "text": "Chen and Choi (2016) were the first to introduce the task of character identification and provided a new corpus based on TV show transcripts. Given a dialogue transcribed in text where all mentions are detected, character identification aims to find the entity for each personal mention, who may or may not be active in the dialogue. Unlike most other entity linking tasks focusing on Wikification, this task is challenging because it is dialogue-based where the entities are general characters in the show. This corpus was later expanded by Chen et al. 2017who added annotation for the ambiguous entity types. In this work, we expanded the corpus further by doubling the size of the annotation and adding new annotation for plurals. The character identification corpus can be used for both coreference resolution and entity linking tasks. Our approach to coreference resolution was partially motivated by the previous works, Clark and Manning (2016) and Durrett et al. (2013) , who tackled the general cases of coreference resolution including plurals; however, since their approaches were based on the annotation provided by CoNLL'12, they did not handle plural mentions to our satisfaction (Table 1) . Jain et al. (2004) presented a rule-based system for resolving plural mentions, which was limited to unambiguous plural types. Our work is distinguished because we handle both ambiguous and unambiguous types of plural mentions, which makes it more challenging. Chen et al. 2017presented an entity linking model that identified the real entity of each singular mention, which we adapted to develop a new multi-task learning model that jointly handles singulars and plurals.", "cite_spans": [ { "start": 926, "end": 950, "text": "Clark and Manning (2016)", "ref_id": "BIBREF3" }, { "start": 955, "end": 976, "text": "Durrett et al. (2013)", "ref_id": "BIBREF4" }, { "start": 1205, "end": 1223, "text": "Jain et al. (2004)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 1193, "end": 1202, "text": "(Table 1)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The Character Mining project provides transcripts from the TV show Friends for all ten seasons in JSON. 2 A subset of the first two seasons of this show was annotated for the task of character identification by Chen et al. (2017), who made it publicly available through the International Workshop on Semantic Evaluation (SemEval 2018). 3 Given this annotation, we expanded the corpus as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "1. We realized that about 20% of the first two seasons were not covered by the previous annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "Following the annotation guidelines suggested by Chen et al. 2017, we completed the annotation for the first two seasons and further annotated two more seasons. 
As a result, the first four seasons are completely annotated for character identification in our corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "2. There were quite a few mismatches among the speaker and the entity labels in the previous annotation. For instance, while mentions were annotated by the entity's full name such as Monica Geller, some utterances were paired with speaker labels represented by only the first name, Monica, which could cause confusions for machine learning models. We manually went through the entire annotation and made sure the speaker and the entity labels were coherent across all seasons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "3. The previous annotation consisted of only singular mentions such that each mention was guaranteed to be linked to exactly one entity. We annotated plural mentions for the first four seasons through crowdsourcing. Unlike singular mentions that were automatically recognized by the heuristic-based mention detector (Chen and Choi, 2016), plural mentions in our corpus were manually detected by the crowd workers who were also asked to link each plural mention to a set of its referent entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "The annotation guidelines used for singular mentions are adapted to annotate plural mentions as well such that the only difference in annotation between these two types of mentions is the number of entities to which the mentions refer. Formally, each mention m is annotated with a set of entities E, where each element in E belongs to one of the following four groups:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "1. Known entities: include all the primary and secondary characters recurring in the show.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "2. GENERIC: indicates actual characters in the show whose identities are unknown across the show: e.g., That waitress is really cute, I am going to ask her out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "3. GENERAL: indicates mentions referring to a general case rather than a specific entity: e.g., The ideal guy you look for doesn't exist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "4. OTHER: indicates actual characters in the show whose identities are unknown in this dialogue but revealed in some other dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "The COLLECTIVE type, used to distinguish the plural usage of the pronoun you in the previous annotation, is discarded in our annotation because each you is now annotated with a set of entities such that the plural usage can be deterministically distinguished by the size of its entity set. Table 2 : An example of entity annotation in our corpus, where Our 4 and you 7 are the plural mentions. Table 2 shows examples of all types of entities for both singular and plural mentions. The mention women 2 does not refer to any specific character so it is identified as GENERAL. 
Both the mentions guy 11 and He 13 refer to a specific person whose identity is never revealed so it is annotated with the generic type, MAN 1. There are two plural mentions, Our 4 and you 7 , which are handled differently. All entities of Our 4 can be identified from the context of this dialogue so it is annotated with the known entities Jack and Judy. However, only one of you 7 can be identified in this context so it is annotated with the known entity Ross and also OTHER, implying that it refers to some other entity that can be identified in a separate dialogue. This method is used to distinguish non-immediately identifiable entities from the generic case of MAN 1 whose identity is unknown across the entire show. Table 3 shows the statistics of our corpus. Compared to the previous annotation including 18,608 mentions, our corpus is comprised of 47,367 annotated mentions, which is 2.5 times larger. Plural mentions together compose about 9% of the entire dataset, which is significant enough to make a difference in resolution. Each cluster contains about 6 mentions on average when each scene is treated as an independent dialogue. All mentions were double-annotated by crowd workers. From this double-annotation, Cohen's kappa score of 56.88% was achieved for plural mentions, which was about 20% lower than the one achieved for singular mentions (Chen and Choi, 2016). The lower inter-annotator agreement was expected due to the high complexity of this task. A subset of the disagreed annotation was manually adjudicated by experts, from which we found that taking the union of the entity sets annotated by two workers would effectively give the correct set of entities for each of those disagreed plural mentions. Thus, a vast amount of plural mentions were pseudo-adjudicated by taking their unions of double-annotation. 1 24 326 5,968 107 10,313 1,147 11,460 2,162 270 2 24 293 5,747 107 10,521 1,156 11,677 1,934 285 3 25 348 6,495 108 11,458 907 12,365 1,925 230 4 24 334 6,318 100 10,726 1,139 11,865 1,881 175 Total 97 1,301 24,528 331 43,018 4,349 47,367 7,902 781 Table 3 : The overall statistics of our corpus. All columns show raw counts except that the speaker column and the type column in the entity section give the set counts of all speakers and entities, respectively. Table 4 shows the distributions of entity types. The primary characters compose about 67% of all mentions whereas the ambiguous types together compose about 8.6%, which implies that the majority of mentions can be linked to known entities. 
Notice that the total count of GENERAL increases by 554 from Seasons 1-2 to 3-4, whereas the total count of OTHER decreases by 654 for those seasons; these two ambiguous entity types are easily confused because they do not refer to any specific entity within the dialogue.", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 297, "text": "Table 2", "ref_id": null }, { "start": 394, "end": 401, "text": "Table 2", "ref_id": null }, { "start": 1299, "end": 1306, "text": "Table 3", "ref_id": null }, { "start": 2414, "end": 2718, "text": "1 24 326 5,968 107 10,313 1,147 11,460 2,162 270 2 24 293 5,747 107 10,521 1,156 11,677 1,934 285 3 25 348 6,495 108 11,458 907 12,365 1,925 230 4 24 334 6,318 100 10,726 1,139 11,865 1,881 175 Total 97 1,301 24,528 331 43,018 4,349 47,367 7,902 781 Table 3", "ref_id": "TABREF0" }, { "start": 2924, "end": 2931, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "{I1, I3, Our4, dad9} \u2192 Jack {Our4, mom8} \u2192 Judy, {Harmonica5} \u2192 Monica, {Ross6, you7, I10, me12} \u2192 Ross, {women2} \u2192 GENERAL, {you7} \u2192 OTHER, {guy11, He13} \u2192 MAN 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.1" }, { "text": "Considering that annotation tasks for the first two seasons were mostly conducted by Chen et al. 2017whereas the next two seasons were conducted by us, it is possible that our crowdsourcing instructions were more biased towards GENERAL than OTHER, which we will analyze in the future. Table 3 (57,073 vs. 47,367) because each plural mention is counted more than once in this table.", "cite_spans": [], "ref_spans": [ { "start": 285, "end": 292, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Analytics", "sec_num": "3.2" }, { "text": "The presence of plural mentions brings up several challenges for coreference resolution. First, the search scope becomes broader. For each mention m j , a typical coreference resolution system would find another mention m i that is referent to m j , and assigns m j to the cluster C i that m i belongs to if it exists; otherwise, creates a new cluster and assigns both m i and m j to that cluster. 4 As soon as m j is assigned, the search can stop for m j . This strategy works for singular mentions but fails with plural mentions because they can be assigned to more than one cluster. Second, the referent relations are no longer transitive. Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution", "sec_num": "4" }, { "text": "m i \u2190 m j , m i \u2192 m j , m i m j stand", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution", "sec_num": "4" }, { "text": "for referent relations such that m j is referent to m i , m i is referent to m j , m i is coreferent to m j , respectively. Then, m i m j and m j m k would imply m i m k for singular mentions, but this transitivity fails with plural mentions when m j belongs to two different clusters C i = {m i , m j } and C k = {m j , m k } such that m i and m k have no referent relation. 
Third, some of the popular evaluation metrics for coreference resolution such as B 3 (Bagga and Baldwin, 1998) are not necessarily designed for plural mentions such that they need to be revisited.", "cite_spans": [ { "start": 461, "end": 486, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution", "sec_num": "4" }, { "text": "Section 4.1 introduces our new coreference resolution algorithm that selectively creates clusters with respect to different mention types. This algorithm ensures singular mentions representing different entities get assigned to separate clusters. For example, let m p be a plural mention and m i be a singular mention such that m p \u2192 m i . When the referent relation is found, the cluster C i is created and both m p and m i are assigned to C i . Let m j be another singular mention such that m p \u2192 m j . Now, the algorithm must decide whether to assign m j to C i or create another cluster C j for m j . If m i m j , m j should be assigned to C i ; otherwise to C j . Our algorithm allows a model to learn this decision during training so that the clusters can be created accordingly during decoding. Section 4.3 describes how existing evaluation metrics can be adjusted to evaluate both singular and plural mentions for coreference resolution, which is the first time that these metrics are adapted for plural mentions linked to multiple entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution", "sec_num": "4" }, { "text": "For each mention m j , our algorithm compares it against all of the preceding mentions m i to determine whether or not they are referent, where i and j are the ordered indices such that 0 < i < j. Additionally, two more mentions, m g and m o , are compared to m j that represent the GENERAL and the OTHER types, respectively (Section 3.1). For each mention pair (m i , m j ), the algorithm assigns one of the following three labels for multi-classification: During training, labels are determined by consulting the oracle. L is labeled if m i is a singular mention. R is labeled if m i is plural and m j is singular. N is labeled for all the other cases. Notice that this algorithm does not allow any plural mention to be directly linked to another plural mention; in other words, it does not create any cluster consisting of only plural mentions. Plural mentions can still be indirectly linked through clusters created for singular mentions. The creation of clusters comprising only plural mentions would not help identifying the known entities of those mentions, which defeats the purpose of character identification. It is possible to link plural mentions directly by using the GENERIC type (Section 3.1), which is not adapted to annotate entities for plural mentions in the current annotation scheme. : A demonstration of our algorithm using the example in Table 2 . The m j column indicates the index of m j that the algorithm is currently processing. The first column shows the labels generated for all mention pairs (m i , m j ), where the indices of m i are indicated inside the square brackets (e.g, [O, 1] stands for m o and m 1 ) and the labels are indicated next to the right arrows (e.g., \u2192 L). The clusters column shows the list of entity sets created by taking the labeling information from the first column. Table 5 depicts how this algorithm finds the referent relations for all the mentions in Table 2 . 
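Before walking through that example, the decoding loop itself can be summarized in a short Python sketch. The classify(m_i, m_j) interface standing in for the trained multi-class model of Section 4.2 and the is_plural flag are assumed names, and the GENERAL/OTHER singleton post-processing described below is omitted for brevity:

def decode(mentions, classify, is_plural):
    # classify(m_i, m_j) -> 'N' | 'L' | 'R' is a hypothetical wrapper around the trained model;
    # is_plural(m) flags plural mentions; mentions is a document-ordered list.
    M_G, M_O = 'GENERAL', 'OTHER'      # special mentions, treated as singular
    cluster_of = {}                    # singular mention -> its cluster (a set of mentions)
    plural_clusters = {}               # plural mention -> list of clusters it joins

    def join(singular, other):
        # 'other' joins the cluster of 'singular', creating the cluster if necessary
        c = cluster_of.setdefault(singular, {singular})
        c.add(other)
        return c

    for j, m_j in enumerate(mentions):
        for m_i in [M_G, M_O] + mentions[:j]:
            label = classify(m_i, m_j)
            if label == 'L':           # m_j joins m_i's cluster (m_i is singular)
                c = join(m_i, m_j)
                if is_plural(m_j):
                    plural_clusters.setdefault(m_j, []).append(c)
                else:
                    cluster_of[m_j] = c
            elif label == 'R':         # m_i (plural) joins m_j's cluster (m_j is singular)
                c = join(m_j, m_i)
                plural_clusters.setdefault(m_i, []).append(c)

    for m in mentions:                 # unassigned mentions become singletons
        if m not in cluster_of and m not in plural_clusters:
            cluster_of[m] = {m}
    return cluster_of, plural_clusters

The sketch assumes well-formed predictions (e.g., L is only produced when m_i is singular); under that assumption each singular mention ends up in exactly one cluster while a plural mention may join several.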
Note that the special mentions m g and m o are considered singular and placed prior to any other mention here. The algorithm labels L for (m g , m 2 ), which makes women 2 \u2208 C g representing GENERAL. For m 4 , it labels L for m 1 and m 2 , which makes Our 4 \u2208 C 1 representing JACK; although Our 4 is a plural mention, it gets assigned to only one cluster at the moment since the other entity has yet been revealed. For m 7 , it labels L for both m o and m 6 , which makes you 7 \u2208 C o representing OTHER and \u2208 C 6 representing Ross. For (m 4 , m 8 ), it labels R because m 4 is plural and m 8 is singular, which creates a new cluster C 8 and assigns both Our 4 and mom 8 to C 8 . Once all mention pairs are compared, the algorithm collects mentions that are not assigned to any cluster, and makes them singletons such that Harmonica 5 becomes the singleton C 5 . Furthermore, every mention that belongs to either C g or C o gets turned into a singleton such that C 2 and C 7 are created at the end. This is because mentions assigned to those ambiguous types are not referent to one another, if they were, they would have been assigned to GENERIC instead.", "cite_spans": [], "ref_spans": [ { "start": 1361, "end": 1368, "text": "Table 2", "ref_id": null }, { "start": 1824, "end": 1831, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 1912, "end": 1919, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Algorithm", "sec_num": "4.1" }, { "text": "1. N: m i is not referent to m j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "4.1" }, { "text": "[m i ] \u2192 {N, L, R} m j Clusters [G, O] \u2192 N 1 \u2205g, \u2205o [O, 1] \u2192 N, [G] \u2192 L 2 {2}g, \u2205o [G, O, 2] \u2192 N, [1] \u2192 L 3 {2}g, \u2205o, {1, 3}1 [G, O, 2] \u2192 N, [1, 3] \u2192 L 4 {2}g, \u2205o, {1, 3, 4}1 [G, O, 1..4] \u2192 N 5 {2}g, \u2205o, {1, 3, 4}1 [G, O, 1..5] \u2192 N 6 {2}g, \u2205o, {1, 3, 4}1 [G, 1..5] \u2192 N, [O, 6] \u2192 L 7 {2}g, {7}o, {1, 3, 4}1, {6, 7}6 [G, O, 1..3, 5..7] \u2192 N, [4] \u2192 R 8 {2}g, {7}o, {1, 3, 4}1, {6, 7}6, {4, 8}8 [G, O, 2, 5..8] \u2192 N, [1, 3, 4] \u2192 L 9 {2}g, {7}o, {1, 3, 4, 9}1, {6, 7}6, {4, 8}8 [G, O, 1..5, 8, 9] \u2192 N, [6] \u2192 L 10 {2}g, {7}o, {1, 3, 4, 9}1, {6, 7, 10}6, {4, 8}8 [G, O, 1..10] \u2192 N 11 {2}g, {7}o, {1, 3, 4, 9}1, {6, 7, 10}6, {4, 8}8 [G, O, 1..5, 8, 9, 11] \u2192 N, [6, 10] \u2192 L 12 {2}g, {7}o, {1, 3, 4, 9}1, {6, 7, 10, 12}6, {4, 8}8 [G, O, 1..10, 12] \u2192 N, [11] \u2192 L 13 {2}g, {7}o, {1, 3, 4, 9}1, {6, 7, 10, 12}6, {4, 8}8, {11, 13}11 Singleton Processing {2}2, {7}7, {1, 3, 4, 9}1, {6, 7, 10, 12}6, {4, 8}8, {11, 13}11, {5}5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "4.1" }, { "text": "Our To be adapted to our coreference resolution algorithm in Section 4.1, ACNN is modified at the output layer to include three labels, N, L, and R, such that it is optimized for multi-class instead of binary classification. 
The modified ACNN, called the multi-class ACNN, generates mention embeddings, r s (m i ) and r s (m j ), as well as mention pair embeddings, r p (m i , m j ), which are used to create cluster embeddings and fed as input to our entity linking model in Section 5.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Model", "sec_num": "4.2" }, { "text": "Three metrics proposed by the CoNLL'12 shared task (Pradhan et al., 2012), B 3 , CEAF \u03c6 4 , and BLANC, are used to evaluate our coreference resolution models. B 3 (Bagga and Baldwin, 1998) ", "cite_spans": [ { "start": 163, "end": 188, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "P = 1 N d\u2208D m\u2208d |C s m \u2229 C o m | |C s m | R = 1 N d\u2208D m\u2208d |C s m \u2229 C o m | |C o m |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "In our case, each mention can be assigned to more than one cluster; thus, C * m is replaced by the union of all clusters that the mention m belongs to, which enables this metric to evaluate plural mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "CEAF \u03c6 4 (Luo, 2005) is an entity-based metric that first creates a similarity matrix M \u2208 R |S|\u00d7|O| where S and O are the sets of clusters produced by the system and the oracle, respectively. It then measures the similarity between every pair of clusters (C s , C o ) \u2208 S \u00d7 O where s \u2208 [1, |S|] and o \u2208 [1, |O|] such that:", "cite_spans": [ { "start": 9, "end": 20, "text": "(Luo, 2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "M s,o = 2 \u00d7 |C s \u2229 C o | |C s | + |C o |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "Given this similarity matrix, the Hungarian algorithm is used to find the list H that contains similarity scores from the most similar matching pairs of clusters (C s , C o ) \u2208 S \u00d7 O such that |H| = min(|S|, |O|). Finally, the overall similarity between S and O is measured as \u03a6 = \u03c6\u2208H \u03c6, and precision and recall are measured as P = \u03a6 /|S| and R = \u03a6 /|O|, respectively. Since CEAF \u03c6 4 is entity-based, the metric can be used to evaluate plural mentions without any modification. The potential pitfall is that certain clusters may include a greater number of plural mentions than singular mentions, in which case, distinct clusters with similar sets of plural mentions may yield a high similarity score. However, plural mentions make up less than 10% of the dataset, so we are not concerned about these plural-majority clusters, since most if not all clusters would be dominated by singular mentions. BLANC (Recasens and Hovy, 2011 ) is a link-based metric. Let L s and L o be the sets of links generated by the system (s) and the oracle (o), respectively. Let G be the set of all possible links between every pair of mentions whether or not they are referent. 
This metric first creates a confusion matrix", "cite_spans": [ { "start": 906, "end": 930, "text": "(Recasens and Hovy, 2011", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "B \u2208 R 2\u00d72 such that B 0,0 = |L s \u2229 L o |, B 0,1 = |L o \u2212 L s |, B 1,0 = |L s \u2212 L o |, and B 1,1 = |(G \u2212 L s ) \u2229 (G \u2212 L o )|.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "It then measures precision and recall for referent links (P c and R c ) and also for non-referent links (P n and R n ):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "P c = B[0, 0] B[0, 0] + B[1, 0] R c = B[0, 0] B[0, 0] + B[0, 1] P n = B[1, 1] B[1, 1] + B[0, 1] R n = B[1, 1] B[1, 1] + B[1, 0] F 1 c = 2 \u00d7 P c \u2022 R c P c + R c F 1 n = 2 \u00d7 P n \u2022 R n P n + R n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "Finally, precision, recall, and F 1 -score are measured as P = Pc+Pn /2, R = rc+rn /2, and F 1 = F 1c+F 1n /2. Note that we decide to replace MUC (Vilain et al., 1995) , another popular metric used by the CoNLL'12 shared task, with BLANC because both are link-based and BLANC takes singletons into consideration, which consume a large portion of our dataset (over 20%), whereas MUC does not so that BLANC is more appropriate for our case. It is worth mentioning that a separate confusion matrix B d is constructed for each document d such that B = d\u2208D B d where B d is based on links only in d. This prevents potential inflation of B[1, 1], which could become huge if it were to be measured across the entire dataset. BLANC can also be used to evaluate plural mentions without any modification because each link is treated independently regardless of its mention type in this metric.", "cite_spans": [ { "start": 146, "end": 167, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "5 Entity Linking", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "The task of character identification requires each mention to be identified by the names of actual characters (e.g., Monica, Ross in Table 2 ). Figure 2 gives the overview of our entity linking model, which adapts the underlying architecture from the entity linking model proposed by Chen et al. (2017) and generalizes it to jointly handle singular and plural mentions. It assumes the output from ACNN in Section 4.2 such that for each mention m i , the embedding of that mention and the set of clusters {C 1 , . . . , C k } that m i belongs to are taken. For each cluster C a , ACNN gives the list of mention pair embeddings m Ca i,j , where m i , m j \u2208 C a . Similarly to the previous model, the cluster embedding and the cluster pair embedding are created. Unlike the previous model, our model creates multiple cluster and cluster pair embeddings when m is assigned to more than one cluster during coreference resolution so that the average vectors of those embeddings are generated, which get concatenated with the mention embedding of m i and passed onto the fully-connected layers for prediction. 
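As a rough illustration of this input construction and of the two output layers described below, the following PyTorch-style sketch shows one possible realization; the class name, hidden size, and layer shapes are assumptions made for illustration and do not reflect the exact configuration of our implementation:

import torch
import torch.nn as nn

class JointEntityLinker(nn.Module):
    # Illustrative sketch only: dimensions and layer sizes are assumed.
    def __init__(self, mention_dim, cluster_dim, pair_dim, num_entities, hidden=512):
        super().__init__()
        in_dim = mention_dim + cluster_dim + pair_dim
        self.ffnn = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.singular_head = nn.Linear(hidden, num_entities + 1)  # extra cell flags a plural mention
        self.plural_head = nn.Linear(hidden, num_entities)        # one score per entity

    def forward(self, mention_emb, cluster_embs, cluster_pair_embs):
        # average over the (possibly multiple) clusters the mention belongs to
        c = torch.stack(cluster_embs).mean(dim=0)
        p = torch.stack(cluster_pair_embs).mean(dim=0)
        h = self.ffnn(torch.cat([mention_emb, c, p], dim=-1))
        singular_logits = self.singular_head(h)               # trained with softmax / cross-entropy
        plural_probs = torch.sigmoid(self.plural_head(h))     # entities scoring above 0.5 are selected
        return singular_logits, plural_probs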
", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 2", "ref_id": null }, { "start": 144, "end": 152, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Multi-Task Learning", "sec_num": "5.1" }, { "text": "m i m c1 1 CONVs CONVp m c1 n m c1 i,1 m c1 i,m m c k i,m m c k i,1 m c k 1 m c k n C 1 C p 1 C p k C k 1 k k X i=1 C p i 1 k k X i=1 C i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Task Learning", "sec_num": "5.1" }, { "text": "Figure 2: The overview of our entity linking model using multi-task learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SOFTMAX SIGMOID", "sec_num": null }, { "text": "The final ReLu layer is fed to two output layers optimized by softmax and sigmoid functions, respectively. The dimension of the output layer from softmax is |E| + 1 where E is the set of all entities such that each cell represents an entity and the extra cell gives an indication of m being plural. When this extra cell is predicted, the output layer from sigmoid is used, whose dimension is |E|, to predict multiple entities for m. Since the sigmoid function optimizes each cell to be between 0 and 1, any entity whose score is greater than 0.5 is taken. These two output layers are optimized jointly, treating the resolution of singular and plural mentions as multi-task learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SOFTMAX SIGMOID", "sec_num": null }, { "text": "Two metrics are used to evaluate the entity linking models. One is the micro-average F1 score whose precision (P ) and recall (R) are measured as follows (D: a set of documents, E s/o m : the set of entities found for m by the system (s) or the oracle (o)):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "P = d\u2208D m\u2208d |E s m \u2229 E o m | d\u2208D m\u2208d |E s m | R = d\u2208D m\u2208d |E s m \u2229 E o m | d\u2208D m\u2208d |E o m |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "The micro-average F1 tends to weigh more on frequently occurring entities so it is useful if you need to know the raw prediction power of your model. The other is macro-average F1 score that measures the micro-average F1 for each entity e, say F e 1 , and takes the average, that is 1 /|E| e\u2208E F e 1 where E is the set of all entities. The macro-average F1 treats all entities evenly so it is useful if you need to optimize your model to make correct predictions for as many entities as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "Experiments are conducted on two tasks, coreference resolution and entity linking. For both tasks, models from Chen et al. (2017) are used to establish strong baseline (CZC). Since they take only singular mentions, a pseudo-singular dataset is created where exactly one entity is chosen for each plural mention based on the closest matching previous speaker or if there is none, chosen randomly. Thus, the models trained on this pseudo-singular dataset always predicts one entity per mention. These models are compared to our models described in Sections 4 and 5 (Ours). 
Additionally, CZC models are evaluated on the singular-only dataset (S-only) where all plural mentions are filtered out, which should give an intuition of how much impact the addition of plural mentions has on the predictions for singular mentions. All results reported from these experiments are averages of three randomly initialized trials. The corpus in Section 3 is split into training, development, and evaluation sets, where all models are tuned on the development set and the best models are tested on the evaluation set. Episodes 1-19, 20-21, and the rest from each season are used to generate the training, development, and evaluation sets, respectively. Table 6 shows that our coreference model is capable of learning to handle plural mentions effectively while significantly outperforms the CZC model. The CZC model is trained on the pseudo-singular dataset but evaluated on the full dataset by the metrics adjusted for plurals (Section 4.3) such that it is penalized for not predicting multiple entities for plural mentions. Both the B 3 and BLANC metrics show a similar trend that the CZC and our models achieve higher precision and recall, respectively, whereas our model dominates both precision and recall for the CEAF \u03c6 4 metric. The remarkable gap in performance between these two models signals that our model finds referents for plural mentions well without compromising its ability to find referents for singular mentions. The S-only model gives comparable performance as the one reported by Chen et al. (2017) , ensuring that our implementation of the CZC model is robust. Table 6 : Coreference resolution results on the evaluation set (\u00b1: standard deviation). Tables 7 and 8 show the micro and macro average scores achieved by all models. For the micro average, the trend is clear across all types of mentions such that the CZC and our models achieve higher precision and recall, respectively. The precision gap for micro average is quite small, signaling that there is no significant loss of ability in entity resolution for singular mentions in our model. For the macro average, our model completely dominates except for the precision of plural mentions, which implies that our model is more generalizable across different entities regardless of their frequency rates in the training set. The recall of micro-average for plural mentions shows relatively high standard deviations for our model. We expect that running more trials of experiments potentially mitigates this variance, which we will explore. It is expected for the micro average scores to be higher than the macro average scores because the micro average favors frequently appearing entities such that it is possible to achieve high micro average scores without handling infrequent entities well, whereas that is not the case for the macro average. Table 7 : Micro-average scores for entity linking on the evaluation set (\u00b1: standard deviation). Table 9 shows the micro average F1 score for each entity. The top-15 frequently appearing characters are considered to be known entities, whereas all the other secondary characters are considered OTHER, which composes about 26.8%. Our model dominates all the main characters (the first six entities) and OTHER, together of which gives about 90% of the entire annotation. Given that these results are achieved by using automatically generated clusters from our coreference resolution models, they are encouraging. 
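For reference, the set-based micro-average F1 defined in Section 5.2, which underlies the scores reported above, can be computed with the short sketch below; the dictionaries mapping each mention to its predicted and gold entity sets are assumed inputs, and the macro-average applies the same computation per entity before averaging:

def micro_f1(system, oracle):
    # system / oracle: {mention_id: set of entities}; a plural mention simply carries
    # a set with more than one element.
    tp = sum(len(system.get(m, set()) & gold) for m, gold in oracle.items())
    p_den = sum(len(system.get(m, set())) for m in oracle)
    r_den = sum(len(gold) for gold in oracle.values())
    p = tp / p_den if p_den else 0.0
    r = tp / r_den if r_den else 0.0
    return 2 * p * r / (p + r) if (p + r) else 0.0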
Table 9 : Entity linking results on evaluation set per character.Ro: Ross, Ra: Rachel, Ch: Chandler, Mo: Monica, Jo: Joey, Ph: Phoebe, Em: Emily, Ri: Richard, Ca: Carol, Be: Ben, Pe: Peter, Ju: Judy, Ba: Barry, Ja: Jack, Ka: Kate, OT: OTHER; GN: GENERAL.", "cite_spans": [ { "start": 2085, "end": 2103, "text": "Chen et al. (2017)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 1236, "end": 1243, "text": "Table 6", "ref_id": null }, { "start": 2167, "end": 2174, "text": "Table 6", "ref_id": null }, { "start": 2255, "end": 2269, "text": "Tables 7 and 8", "ref_id": "TABREF9" }, { "start": 3408, "end": 3415, "text": "Table 7", "ref_id": null }, { "start": 3505, "end": 3512, "text": "Table 9", "ref_id": null }, { "start": 4018, "end": 4025, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Configuration", "sec_num": "6.1" }, { "text": "In this paper, we explore a new paradigm for handling plural mentions in two resolution tasks, coreference resolution and entity linking, on multiparty dialogue. We address this challenge by showing the inadequacy of traditional approaches in handling plural mentions, and present an innovative approach to overcome the shortcomings of existing methods for these tasks at hand. For resource creation, we expand upon the Character Identification corpus and augment it with the manual annotation of plural mentions (Section 3). For linguistic analysis, we propose a novel transition-based algorithm and evaluation metrics to process different types of mentions for coreference resolution (Section 4). For NLP engineering, we introduce a neural-based entity linking model using multi-task learning that comprehensively handles plural mentions (Section 5). The results of our models demonstrate significant improvements on these tasks, implying the feasibility of our approach to handle plural mentions (Section 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "To the best of our knowledge, this paper provides the first extensive framework for resolving referents for plural mentions, which is a critical problem in any resolution task. Further work includes improving the quality of the dataset as well as expansion of its size, and addressing the issue of extracting global and external features for complete coreference and entity resolution for both singular and plural mentions. All resources including the annotated corpus and source codes are publicly available through the Character Identification project: https://github.com/emorynlp/character-identification. 
5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Character Mining: https://github.com/emorynlp/character-mining 3 SemEval 2018 Task 4: https://competitions.codalab.org/competitions/17310", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The term 'cluster' indicates a group of mentions that refer to the same entity within a document such that each cluster represents a distinct entity although a cluster in one document can represent the same entity as another cluster in a different document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Character Mining project provides a superset of the corpus presented in this paper for several other tasks: https://github.com/emorynlp/character-mining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Algorithms for Scoring Coreference Chains", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "The first international conference on language resources and evaluation workshop on linguistics coreference", "volume": "", "issue": "", "pages": "563--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for Scoring Coreference Chains. In The first international conference on language resources and evaluation workshop on linguistics coreference, pages 563-566.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Character Identification on Multiparty Conversation: Identifying Mentions of Characters in TV Shows", "authors": [ { "first": "Henry", "middle": [], "last": "Yu-Hsin Chen", "suffix": "" }, { "first": "D", "middle": [], "last": "Jinho", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL'16", "volume": "", "issue": "", "pages": "90--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henry Yu-Hsin Chen and Jinho D. Choi. 2016. Character Identification on Multiparty Conversation: Identifying Mentions of Characters in TV Shows. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL'16, pages 90-100.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Robust Coreference Resolution and Entity Linking on Dialogues: Character Identification on TV Show Transcripts", "authors": [ { "first": "Henry", "middle": [], "last": "Yu-Hsin Chen", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jinho", "middle": [ "D" ], "last": "Choi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning, CoNLL'17", "volume": "", "issue": "", "pages": "216--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henry Yu-Hsin Chen, Ethan Zhou, and Jinho D. Choi. 2017. Robust Coreference Resolution and Entity Linking on Dialogues: Character Identification on TV Show Transcripts. 
In Proceedings of the 21st Conference on Computational Natural Language Learning, CoNLL'17, pages 216-225, Vancouver, Canada.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Deep reinforcement learning for mention-ranking coreference models", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "16", "issue": "", "pages": "2256--2262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking corefer- ence models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP'16, pages 2256-2262.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Decentralized entity-level modeling for coreference resolution", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "David", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "114--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett, David Hall, and Dan Klein. 2013. Decentralized entity-level modeling for coreference resolution. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 114-124, Sofia, Bulgaria, August. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Capturing semantic similarity for entity linking with convolutional neural networks", "authors": [ { "first": "Matthew", "middle": [], "last": "Francis-Landau", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1256--1261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Francis-Landau, Greg Durrett, and Dan Klein. 2016. Capturing semantic similarity for entity linking with convolutional neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1256-1261, San Diego, California, June. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Entity linking via joint encoding of types, descriptions, and context", "authors": [ { "first": "Nitish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2681--2690", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Gupta, Sameer Singh, and Dan Roth. 2017. Entity linking via joint encoding of types, descriptions, and context. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2681-2690, Copenhagen, Denmark, September. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Anaphora resolution in multi-person dialogues", "authors": [ { "first": "Prateek", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Manav Ratan Mital", "suffix": "" }, { "first": "Amitabha", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Achla", "middle": [ "M" ], "last": "Mukerjee", "suffix": "" }, { "first": "", "middle": [], "last": "Raina", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 5th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prateek Jain, Manav Ratan Mital, Sumit Kumar, Amitabha Mukerjee, and Achla M. Raina. 2004. Anaphora resolution in multi-person dialogues. In Michael Strube and Candy Sidner, editors, Proceedings of the 5th", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "End-to-end neural coreference resolution", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "188--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark, September. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On coreference resolution performance metrics", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25-32, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sameer Pradhan", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Sixteenth Conference on Computational Natural Language Learning: Shared Task, CoNLL'12", "volume": "", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes. 
In Proceedings of the Sixteenth Conference on Computational Natural Language Learning: Shared Task, CoNLL'12, pages 1-40.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BLANC: Implementing the Rand index for coreference evaluation", "authors": [ { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2011, "venue": "Natural Language Engineering", "volume": "17", "issue": "4", "pages": "485--510", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Recasens and Eduard H. Hovy. 2011. BLANC: Implementing the Rand index for coreference evaluation. Natural Language Engineering, 17(4):485-510.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Model-theoretic Coreference Scoring Scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "Dennis", "middle": [], "last": "Connolly", "suffix": "" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 6th Conference on Message Understanding, MUC6 '95", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A Model-theoretic Coreference Scoring Scheme. In Proceedings of the 6th Conference on Message Understanding, MUC6 '95, pages 45-52, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning global features for coreference resolution", "authors": [ { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Stuart", "middle": [ "M" ], "last": "Shieber", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "994--1004", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference reso- lution. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 994-1004, San Diego, California, June. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "2. L: m j gets assigned to the cluster that m i belongs to. If m i does not yet belong to any cluster, a new cluster C i is created and both m i and m j are assigned to C i . 3. R: m i gets assigned to the cluster that m j belongs to. If m j does not yet belong to any cluster, a new cluster C j is created and both m i and m j are assigned to C j .", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "learning model uses a modified version of the Agglomerative Convolutional Neural Networks (ACNN) introduced by Chen et al. (2017). This architecture incorporates multiple sets of features and learns the most optimized feature combination at each convolution layer. It also allows the model to dynamically accumulate the most salient features for eventual inclusion in the mention and mention pair embeddings. 
ACNN takes a mention pair (m i , m j ), performs multiple convolutions to extract features from different groups (CONV 1 ), combines the extracted features among groups using more convolutions (CONV 2 ), and generates mention embeddings r s (m i ) and r s (m j ). These mention embeddings are then concatenated with discrete features \u03c6 d (m) and combined through convolutions to generate a mention-pair embedding r p (m i , m j ). The mention-pair embedding together with pairwise features \u03c6 p (m i , m j ) are used to make a binary classification for m i and m j being referent or not. The overview of our coreference resolution model using the multi-class ACNN.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "is a mention-based metric that measures precision (P ) and recall (R) as follows (D: a set of documents, N : the total number of mentions in D, C s/o m : the cluster from the system (s) or the oracle (o) that the mention m belongs to):", "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "type_str": "table", "text": "Snippets of how mentions are annotated by the CoNLL'12 shared task and our work.", "html": null, "content": "" }, "TABREF1": { "num": null, "type_str": "table", "text": "I10 just got this from the guy11 next to me12. He13 was selling a whole bunch of stuff.", "html": null, "content": "
Speaker Utterance
Jack And I1 read about these women2 trying it all, and I3 thank God 'Our4 Harmonica5' doesn't have this problem.
Monica So, Ross6, what's going on with you7 two? Any stories? No little anecdotes to share with mom8 and dad9?
Ross Okay, I10 just got this from the guy11 next to me12. He13 was selling a whole bunch of stuff.
" }, "TABREF4": { "num": null, "type_str": "table", "text": "The distributions of entity types. Each column shows the number of mentions annotated with the corresponding entity type. Note that the total number of mentions here is different from the one in", "html": null, "content": "" }, "TABREF5": { "num": null, "type_str": "table", "text": "", "html": null, "content": "
" }, "TABREF8": { "num": null, "type_str": "table", "text": "9\u00b15.0 55.5\u00b11.0 59.4\u00b12.3 37.9\u00b11.0 10.5\u00b10.3 14.0\u00b10.3 71.1\u00b14.6 46.2\u00b11.1 53.2\u00b11.9 Our 75.8\u00b11.4 56.9\u00b11.1 61.8\u00b11.1 34.8\u00b15.0 15.8\u00b11.7 20.5\u00b11.6 74.2\u00b11.4 48.8\u00b11.5 55.5\u00b10.8 S-only 73.3\u00b12.5 55.4\u00b11.6 59.6\u00b12.3", "html": null, "content": "
SingularPluralAll
PRF1PRF1PRF1
CZC72.
" }, "TABREF9": { "num": null, "type_str": "table", "text": "Macro-average scores for entity linking on the evaluation set (\u00b1: standard deviation).", "html": null, "content": "
Ro Ra Ch Mo Jo Ph Em Ri Ca Be Pe Ju Ba Ja Ka OT GN
CZC 69.2 77.5 69.0 71.3 71.5 79.0 63.4 76.4 31.3 41.8 56.4 09.3 49.2 11.8 24.7 58.2 45.1
Our 71.9 78.4 71.5 72.2 72.3 79.7 61.5 82.0 29.6 41.8 54.8 12.8 45.0 18.2 47.3 59.2 45.1
S-only 78.3 86.5 78.8 81.7 78.3 88.8 69.2 83.9 40.3 39.3 59.2 16.1 39.8 24.8 35.2 64.0 49.7
% 12.65 11.58 11.16 9.71 9.33 8.61 0.98 0.96 0.71 0.64 0.57 0.44 0.34 0.28 0.26 26.79 5.01
" } } } }