{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:21:14.993519Z" }, "title": "A Benchmark of Rule-Based and Neural Coreference Resolution in Dutch Novels and News", "authors": [ { "first": "Corb\u00e8n", "middle": [], "last": "Poot", "suffix": "", "affiliation": {}, "email": "c.poot@student.rug.nl" }, { "first": "Andreas", "middle": [], "last": "Van Cranenburgh", "suffix": "", "affiliation": {}, "email": "a.w.van.cranenburgh@rug.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We evaluate a rule-based (Lee et al., 2013) and neural (Lee et al., 2018) coreference system on Dutch datasets of two domains: literary novels and news/Wikipedia text. The results provide insight into the relative strengths of data-driven and knowledge-driven systems, as well as the influence of domain, document length, and annotation schemes. The neural system performs best on news/Wikipedia text, while the rule-based system performs best on literature. The neural system shows weaknesses with limited training data and long documents, while the rule-based system is affected by annotation differences.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We evaluate a rule-based (Lee et al., 2013) and neural (Lee et al., 2018) coreference system on Dutch datasets of two domains: literary novels and news/Wikipedia text. The results provide insight into the relative strengths of data-driven and knowledge-driven systems, as well as the influence of domain, document length, and annotation schemes. The neural system performs best on news/Wikipedia text, while the rule-based system performs best on literature. 
The neural system shows weaknesses with limited training data and long documents, while the rule-based system is affected by annotation differences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, the best results for coreference resolution of English have been obtained with end-to-end neural models (Lee et al., 2017b; Joshi et al., 2019, 2020; Wu et al., 2020). However, for Dutch, the existing systems still use either a rule-based (van der Goot et al., 2015; van Cranenburgh, 2019) or a machine learning approach (Hendrickx et al., 2008a; De Clercq et al., 2011). The rule-based system dutchcoref (van Cranenburgh, 2019) outperformed previous systems on two existing datasets and also presented a corpus and evaluation of literary novels (RiddleCoref).", "cite_spans": [ { "start": 121, "end": 139, "text": "(Lee et al., 2017b", "ref_id": "BIBREF22" }, { "start": 141, "end": 159, "text": "Joshi et al., 2019", "ref_id": "BIBREF16" }, { "start": 161, "end": 165, "text": "2020", "ref_id": "BIBREF15" }, { "start": 167, "end": 183, "text": "Wu et al., 2020)", "ref_id": "BIBREF37" }, { "start": 256, "end": 283, "text": "(van der Goot et al., 2015;", "ref_id": "BIBREF9" }, { "start": 284, "end": 306, "text": "van Cranenburgh, 2019)", "ref_id": "BIBREF5" }, { "start": 338, "end": 363, "text": "(Hendrickx et al., 2008a;", "ref_id": "BIBREF11" }, { "start": 364, "end": 387, "text": "De Clercq et al., 2011)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we compare this rule-based system to an end-to-end neural coreference resolution system: e2e-Dutch. This system is a variant of the end-to-end model of Lee et al. (2018) with BERT token representations. 
We evaluate and compare the performance of e2e-Dutch to dutchcoref on two different datasets: (1) the SoNaR-1 corpus (Schuurman et al., 2010) , a genre-balanced corpus of 1 million words, and (2) the RiddleCoref corpus of contemporary novels (van Cranenburgh, 2019) . This provides insights into (1) the relative strengths of a neural system versus a rule-based system for Dutch coreference, and (2) the effect of domain differences (news/Wikipedia versus literature).", "cite_spans": [ { "start": 292, "end": 316, "text": "(Schuurman et al., 2010)", "ref_id": "BIBREF32" }, { "start": 417, "end": 440, "text": "(van Cranenburgh, 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The two datasets we consider vary greatly in terms of overall size and length of the individual documents; the training subset of RiddleCoref contains only 23 documents (novel fragments) compared to 581 documents for SoNaR-1. However, the average number of sentences per document is higher for RiddleCoref than for SoNaR-1 (295.78 vs. 64.28 respectively). We also conduct an error analysis for both of the systems to examine the types of errors that the systems make.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main differences between traditional and neural approaches can be summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "\u2022 Rule-based systems are knowledge-intensive; machine learning systems are data-driven but require feature engineering; end-to-end neural systems only require sufficient training data and hyperparameter tuning to perform well. 
\u2022 Rule-based and machine learning coreference systems rely on features from syntactic parses and named-entities provided by an NLP pipeline whereas neural systems rely on distributed representations; end-to-end systems do not require any other features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Rule-based (Lee et al., 2011) 58.3 Perceptron (Fernandes et al., 2012) 58.7 Hybrid: rules + ML (Lee et al., 2017a) 63.2 Embeddings (Wiseman et al., 2015) 63.4 + RL (Clark and Manning, 2016a) 65.3 + Entity embeddings (Clark and Manning, 2016b) 65.7", "cite_spans": [ { "start": 11, "end": 29, "text": "(Lee et al., 2011)", "ref_id": "BIBREF20" }, { "start": 46, "end": 70, "text": "(Fernandes et al., 2012)", "ref_id": "BIBREF8" }, { "start": 95, "end": 114, "text": "(Lee et al., 2017a)", "ref_id": "BIBREF21" }, { "start": 131, "end": 153, "text": "(Wiseman et al., 2015)", "ref_id": "BIBREF36" }, { "start": 164, "end": 190, "text": "(Clark and Manning, 2016a)", "ref_id": "BIBREF3" }, { "start": 216, "end": 242, "text": "(Clark and Manning, 2016b)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "System CoNLL", "sec_num": null }, { "text": "System CoNLL End-to-end (Lee et al., 2017b) 68.8 Higher-order + CTF + ELMo 73.0 Finetuning BERT base (Joshi et al., 2019) 73.9 Finetuning BERT large (Joshi et al., 2019) 76.9 Pretraining SpanBERT (Joshi et al., 2019) 79.6 SpanBERT + QA (Wu et al., 2020) 83.1 Table 1 : English coreference scores on the OntoNotes CoNLL 2012 shared task dataset. 
ML: Machine Learning, RL: Reinforcement Learning, CTF: Coarse-to-Fine, QA: Question Answering.", "cite_spans": [ { "start": 24, "end": 43, "text": "(Lee et al., 2017b)", "ref_id": "BIBREF22" }, { "start": 101, "end": 121, "text": "(Joshi et al., 2019)", "ref_id": "BIBREF16" }, { "start": 149, "end": 169, "text": "(Joshi et al., 2019)", "ref_id": "BIBREF16" }, { "start": 196, "end": 216, "text": "(Joshi et al., 2019)", "ref_id": "BIBREF16" }, { "start": 236, "end": 253, "text": "(Wu et al., 2020)", "ref_id": "BIBREF37" } ], "ref_spans": [ { "start": 259, "end": 266, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "System CoNLL", "sec_num": null }, { "text": "\u2022 The rule-based system by Lee et al. (2013) is entity-based and exploits global features, while end-to-end systems such as Lee et al. (2017b) rank mentions and make greedy decisions based on local features. Although the model of Lee et al. (2018) approximates higher-order inference, it does not build representations of entities.", "cite_spans": [ { "start": 27, "end": 44, "text": "Lee et al. (2013)", "ref_id": "BIBREF19" }, { "start": 124, "end": 142, "text": "Lee et al. (2017b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "System CoNLL", "sec_num": null }, { "text": "The rest of this section discusses the current best systems for Dutch and English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System CoNLL", "sec_num": null }, { "text": "The largest dataset available for Dutch coreference resolution is the SoNaR-1 dataset (Schuurman et al., 2010) which consists of 1 million words annotated for coreference. This corpus was a continuation of the Corea project (Bouma et al., 2007; Hendrickx et al., 2008a,b) . De Clercq et al. (2011) present a cross-domain coreference resolution study conducted on this corpus. 
They use a mention-pair system, which was originally developed with the KNACK-2002 corpus and then further improved in the Corea project, and observe that the influence of domain and training size is large, thus underlining the importance of this large and genre-balanced SoNaR-1 dataset. The current best coreference resolution system for Dutch is called \"dutchcoref\" (van Cranenburgh, 2019) and is based on the rule-based Stanford system (Lee et al., 2011, 2013). This system improved on the systems in the SemEval-2010 shared task (Recasens et al., 2010) and a previous implementation of the Stanford system for Dutch (GroRef; van der Goot et al., 2015). The main focus of van Cranenburgh (2019) was evaluating coreference on literary texts, for which a corpus and evaluation are presented. Most coreference resolution systems are evaluated using newswire texts, but a domain such as literary text presents its own challenges (Bamman, 2017); for example, novels are longer than news articles, and novels can therefore contain longer coreference chains.", "cite_spans": [ { "start": 86, "end": 110, "text": "(Schuurman et al., 2010)", "ref_id": "BIBREF32" }, { "start": 224, "end": 244, "text": "(Bouma et al., 2007;", "ref_id": "BIBREF2" }, { "start": 245, "end": 271, "text": "Hendrickx et al., 2008a,b)", "ref_id": null }, { "start": 816, "end": 833, "text": "(Lee et al., 2011", "ref_id": "BIBREF20" }, { "start": 834, "end": 853, "text": "(Lee et al., , 2013", "ref_id": "BIBREF19" }, { "start": 924, "end": 946, "text": "(Recasens et al., 2010", "ref_id": "BIBREF31" }, { "start": 1012, "end": 1020, "text": "(GroRef;", "ref_id": null }, { "start": 1021, "end": 1047, "text": "van der Goot et al., 2015)", "ref_id": "BIBREF9" }, { "start": 1320, "end": 1334, "text": "(Bamman, 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Dutch coreference resolution", "sec_num": "2.1" }, { "text": "The main benchmark for English is the CoNLL 2012 shared 
task (Pradhan et al., 2012) . Table 1 reports a timeline of results for this task, which shows the dramatic improvements brought by neural networks, especially the end-to-end systems on the right. Neural coreference systems improved on previous work but were still relying on mention detection rules, syntactic parsers, and heavy feature engineering (Table 1, left) . They were outperformed by the first end-to-end coreference resolution system by Lee et al. (2017b) . This system looks at all the spans (expressions) in a text, up to a maximum length, and then uses a span-ranking model that decides for each span which previous spans are good antecedents, if any. The spans themselves are represented by word embeddings.", "cite_spans": [ { "start": 61, "end": 83, "text": "(Pradhan et al., 2012)", "ref_id": "BIBREF29" }, { "start": 505, "end": 523, "text": "Lee et al. (2017b)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 1", "ref_id": null }, { "start": 406, "end": 422, "text": "(Table 1, left)", "ref_id": null } ], "eq_spans": [], "section": "English Coreference resolution", "sec_num": "2.2" }, { "text": "Although the models by Clark and Manning (2016a) and Lee et al. (2017b) are computationally efficient and scalable to long documents, they rely heavily on first-order models that only score pairs of mentions. Because they make independent decisions regarding coreference links, they might make predictions which are locally consistent but globally inconsistent. Lee et al. (2018) introduce an approximation of higher-order inference, which uses the span-ranking architecture from Lee et al. (2017b) described above in an iterative fashion, and also propose a coarse-to-fine approach to lower the computational cost of this iterative higher-order approximation. Further improvements over Lee et al. (2017b) were obtained through the use of deep contextualized ELMo embeddings (Peters et al., 2018). Table 2 : Dataset statistics. 
The current state-of-the-art scores are even higher with BERT finetuning (Joshi et al., 2019, 2020; Wu et al., 2020). However, this paper focuses on the model by Lee et al. (2018). Bamman et al. (2020) present coreference results on English literature with an end-to-end model comparable to the one used in this paper, except for using a separate mention detection step. However, their dataset consists of a larger number of shorter novel fragments (2000 words). They report a CoNLL score of 68.1 on the novel fragments.", "cite_spans": [ { "start": 23, "end": 48, "text": "Clark and Manning (2016a)", "ref_id": "BIBREF3" }, { "start": 53, "end": 71, "text": "Lee et al. (2017b)", "ref_id": "BIBREF22" }, { "start": 689, "end": 707, "text": "Lee et al. (2017b)", "ref_id": "BIBREF22" }, { "start": 766, "end": 787, "text": "(Peters et al., 2018)", "ref_id": "BIBREF28" }, { "start": 906, "end": 925, "text": "(Joshi et al., 2019", "ref_id": "BIBREF16" }, { "start": 926, "end": 947, "text": "(Joshi et al., , 2020", "ref_id": "BIBREF15" }, { "start": 948, "end": 964, "text": "Wu et al., 2020)", "ref_id": "BIBREF37" }, { "start": 1011, "end": 1031, "text": "Bamman et al. (2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 788, "end": 795, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "English Coreference resolution", "sec_num": "2.2" }, { "text": "In this paper we consider entity coreference and focus on the relations of identity and predication. The rest of this section describes the two Dutch corpora we use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference corpora", "sec_num": "3" }, { "text": "The SoNaR-1 corpus (Schuurman et al., 2010) contains about 1 million words of Dutch text from various genres, predominantly news and Wikipedia text. Coreference was annotated from scratch (i.e., annotation did not proceed by correcting the output of a system), based on automatically extracted markables. 
The markables include singleton mentions but also non-referring expressions such as pleonastic pronouns. The annotation was not corrected by a second annotator. Hendrickx et al. (2008b) estimated the inter-annotator agreement of a different corpus with the same annotation scheme and obtained a MUC score of 76 % for identity relations (which form the majority).", "cite_spans": [ { "start": 19, "end": 43, "text": "(Schuurman et al., 2010)", "ref_id": "BIBREF32" }, { "start": 466, "end": 490, "text": "Hendrickx et al. (2008b)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "SoNaR-1: news and Wikipedia text", "sec_num": "3.1" }, { "text": "We have created a genre-balanced train/dev/test split for SoNaR-1 of 70/15/15. The documents are from a range of different genres and we therefore ensure that the subsets are a stratified sample in terms of genres, to avoid distribution shifts between the train and test set. 1 .", "cite_spans": [ { "start": 276, "end": 277, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "SoNaR-1: news and Wikipedia text", "sec_num": "3.1" }, { "text": "We convert the SoNaR-1 coreference annotations from MMAX2 format into the CoNLL-2012 format. Since dutchcoref requires parse trees as input, we use the manually corrected Lassy Small treebank Van Noord, 2009) , which is a superset of the SoNaR-1 corpus. 2 We align the Lassy Small trees at the sentence and token level to the SoNaR-1 coreference annotations, since there are some differences in tokenization and sentence order. 3 We also add gold standard NER annotations from SoNaR-1. 
The manually corrected trees lack some additional features produced by the Alpino parser (van Noord, 2006), which are needed by dutchcoref; we merge these predicted features into the gold standard trees.", "cite_spans": [ { "start": 192, "end": 208, "text": "Van Noord, 2009)", "ref_id": "BIBREF33" }, { "start": 254, "end": 255, "text": "2", "ref_id": null }, { "start": 428, "end": 429, "text": "3", "ref_id": null }, { "start": 575, "end": 592, "text": "(van Noord, 2006)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "SoNaR-1: news and Wikipedia text", "sec_num": "3.1" }, { "text": "The RiddleCoref corpus consists of contemporary Dutch novels (both translated and originally Dutch), and was presented in van Cranenburgh (2019). The corpus is a subset of the Riddle of Literary Quality corpus of 401 bestselling novels (Koolen et al., 2020). This dataset was annotated by correcting the output of dutchcoref. Most novels in the dataset were corrected by two annotators, with the second performing another round of correction after the first. In this dataset, mentions include singletons and are manually corrected; i.e., only expressions that refer to a person or object are annotated as mentions. Besides this difference, relative clauses and discontinuous constituents have different boundaries (minimal spans).", "cite_spans": [ { "start": 236, "end": 257, "text": "(Koolen et al., 2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "RiddleCoref: contemporary novels", "sec_num": "3.2" }, { "text": "The system by van Cranenburgh (2019) is a rule-based system that does not require training data, and therefore the dev/test split used in that paper is not suitable for a supervised system. To avoid this issue, we create a new train/dev/test split which reserves 70% for training data. We also evaluate dutchcoref on this new split. 
The new dev and test sets have no overlap with the original development set on which the rules of dutchcoref were tuned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RiddleCoref: contemporary novels", "sec_num": "3.2" }, { "text": "No gold standard parse trees are available for the novels. Instead, we use automatically predicted parses from the Alpino parser (van Noord, 2006) . Table 2 shows statistics of the two datasets and their respective splits. The documents in RiddleCoref are almost four times as long as those in SoNaR-1, and this is reflected in a higher number of mentions per entity, while SoNaR-1 has a higher density of entities to tokens. We also see a difference due to the more selective, manual annotation of mentions: almost 30% of SoNaR-1 tokens are part of a mention, compared to less than 25% for RiddleCoref. Finally, we see large differences in the proportion of pronouns, nominals and names, due to the genre difference.", "cite_spans": [ { "start": 129, "end": 146, "text": "(van Noord, 2006)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "RiddleCoref: contemporary novels", "sec_num": "3.2" }, { "text": "We now describe the two coreference systems, dutchcoref and e2e-Dutch, which we evaluate on the coreference corpora described in the previous section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference systems", "sec_num": "4" }, { "text": "The dutchcoref system 4 (van Cranenburgh, 2019) is an implementation of the rule-based coreference system by Lee et al. (2011, 2013). The input to the system consists of Alpino parse trees (van Noord, 2006), which include named entities. The system infers information about speakers and addressees of direct speech using heuristic rules. This information is used for coreference decisions. 
Note that this information is not given as part of the input.", "cite_spans": [ { "start": 109, "end": 125, "text": "Lee et al. (2011", "ref_id": "BIBREF20" }, { "start": 126, "end": 145, "text": "Lee et al. ( , 2013", "ref_id": "BIBREF19" }, { "start": 203, "end": 220, "text": "(van Noord, 2006)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Rule-based: dutchcoref", "sec_num": "4.1" }, { "text": "We have made some improvements to the rules of this system in order to make it more compatible with the SoNaR-1 annotations; this was however based only on the output of a single document in the development set, as well as on the original, RiddleCoref development set on which dutchcoref was developed. When evaluating on SoNaR-1, we apply rules to filter links and mentions from the output to adapt to the annotation scheme of this dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based: dutchcoref", "sec_num": "4.1" }, { "text": "The e2e-Dutch system 5 is fully end-to-end in the sense that it is trained only on the token and coreference column of the CoNLL files of the dataset, without using any metadata. Our data does not contain speaker information which is used by models trained on the OntoNotes dataset (Hovy et al., 2006) . In addition, models trained on OntoNotes use genre information; while our data does have genre metadata, we have not experimented with using this feature. For English, such information provides additional improvement in scores (Lee et al., 2017b) . The model that e2e-Dutch is based on uses a combination of character n-gram embeddings, non-contextual word embeddings (GloVe; Pennington et al., 2014) and contextualized word embeddings (ELMo; Peters et al., 2018) . These embeddings are concatenated and fed into a bidirectional LSTM. Span heads are approximated using an attention mechanism; while this step is intended to approximate syntactic heads, it does not rely on parse tree information. 
Figure 1 shows an overview of the model. e2e-Dutch adapts this architecture by adding support for singletons; i.e., during mention detection, each span is classified as not a mention, a singleton, or a coreferent mention.", "cite_spans": [ { "start": 282, "end": 301, "text": "(Hovy et al., 2006)", "ref_id": "BIBREF14" }, { "start": 531, "end": 550, "text": "(Lee et al., 2017b)", "ref_id": "BIBREF22" }, { "start": 680, "end": 704, "text": "Pennington et al., 2014)", "ref_id": "BIBREF27" }, { "start": 747, "end": 767, "text": "Peters et al., 2018)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 1001, "end": 1009, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "End-to-end, neural: e2e-Dutch", "sec_num": "4.2" }, { "text": "Character n-gram embeddings are extracted by iterating over the data and feeding the character n-grams to a Convolutional Neural Network (CNN) which then represents these n-grams as learned 8-dimensional embeddings. The GloVe embeddings were replaced with fastText 6 embeddings (Grave et al., 2018) . We also trained fastText embeddings on our own datasets but saw a performance decrease; we therefore stick with pre-trained embeddings. Lastly, the ELMo embeddings were replaced by BERT (Devlin et al., 2019) token embeddings, since BERT tends to outperform ELMo (Devlin et al., 2019) and because there is a pretrained, monolingual Dutch BERT model available whose pretraining data includes novels (BERTje; Vries et al., 2019). However, there is no overlap between the 7000+ novels that BERTje is trained on and the RiddleCoref corpus. Whenever there is a mismatch between the subtokens of BERT and the tokens in the coreference data, the model takes the average of the BERT subtoken embeddings as token representation. The last BERT layer is used for the token representation; however, recent research has shown that layer 9 actually performs best for Dutch coreference (de Vries et al., 2020) . 
Note also that we do not finetune BERT for this task, contrary to Joshi et al. (2019); this is left for future work.", "cite_spans": [ { "start": 278, "end": 298, "text": "(Grave et al., 2018)", "ref_id": "BIBREF10" }, { "start": 487, "end": 508, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 563, "end": 584, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 1172, "end": 1195, "text": "(de Vries et al., 2020)", "ref_id": "BIBREF34" }, { "start": 1264, "end": 1283, "text": "Joshi et al. (2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "End-to-end, neural: e2e-Dutch", "sec_num": "4.2" }, { "text": "We use some different hyperparameters compared to Lee et al. (2018). Our model only considers up to 30 antecedents per span instead of 50; this only leads to marginally worse performance, a 0.03 decrease in the LEA F1-score, while reducing the computational cost substantially. During training, each document is randomly truncated at 30 sentences, but different random parts are selected at each epoch. We have experimented with higher values for this parameter with RiddleCoref, but only obtained marginal improvements (0.01 difference), and did not pursue this further. The top span ratio controls the number of mentions that are considered and determines the precision/recall tradeoff for mentions. We experimented with tuning this parameter, but settled on the default of 0.4. Mentions up to 50 tokens long are considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-end, neural: e2e-Dutch", "sec_num": "4.2" }, { "text": "During training, the model is evaluated every 1500 epochs (2500 for SoNaR-1). 
If the CoNLL score on the development set does not increase after three rounds, training is stopped.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-to-end, neural: e2e-Dutch", "sec_num": "4.2" }, { "text": "Before presenting our main benchmark results, we discuss the issue of coreference evaluation metrics. Table 3 : Coreference results (predicted mentions, including singletons).", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 109, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "The challenge with evaluating coreference resolution lies in the fact that it involves several levels: mentions, links and entities. Results can be correct on one level and incorrect on another, and the levels interact. One of the most important factors in coreference performance is the performance of mention detection, since an incorrect or missed mention can lead to a large number of missed coreference links (especially for a long coreference chain). We therefore report mention scores. It turns out that mention performance also has a large influence on coreference evaluation metrics (Moosavi and Strube, 2016). We will use two coreference metrics. The CoNLL score (Pradhan et al., 2011) is the standard benchmark, but it does not have a precision and recall score, and the MUC, B 3 , and CEAFe metrics on which it is based have their own flaws. Therefore we will also look at the LEA metric (Moosavi and Strube, 2016). LEA gives more weight to larger entities, so that mistakes on more important chains have more effect on the score than mistakes on smaller entities. Unless otherwise noted, all our results include singletons. Evaluating with and without singletons will affect all of the scores, and the two datasets differ in the way they annotated singletons. Singletons inflate coreference scores due to the mention identification effect. 
Since most mentions are easy to identify based on form, singletons reduce the informativeness of the coreference score. SoNaR-1 includes automatically extracted markables instead of manually annotated mentions, as in RiddleCoref. The automatically extracted markables are more numerous and easier to identify (they were extracted based on syntax) than manually annotated mentions that are restricted to potentially referring expressions (a semantic distinction). One possibility to rule out the mention identification effect completely is to present the systems with gold mentions. However, this still leaves the singleton effect. If singletons are included, the system will not know which of the gold mentions are singletons, and this can lead to incorrect coreference links. A dataset with more singletons (such as SoNaR-1) will thus have more potential for incorrect coreference links (precision errors). If singleton mentions are excluded from the set of gold mentions, it is given that all mentions are coreferent. The system should then use this information and force every mention to have at least one link. However, this requires re-training or re-designing the coreference system, and does not allow us to do a realistic end-to-end coreference evaluation. We are therefore stuck with the complications that come with combining mention identification and coreference resolution.", "cite_spans": [ { "start": 673, "end": 695, "text": "(Pradhan et al., 2011)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "5.1" }, { "text": "The main results are presented in Table 3 . For RiddleCoref, dutchcoref outperforms e2e-Dutch by a 6-point margin. For SoNaR-1, e2e-Dutch comes out first, and the gap is even larger. Despite the advantage dutchcoref has due to its use of gold standard parse trees, its performance is lower than e2e-Dutch. 
We can see from the mention recall score that dutchcoref misses a large number of potential mentions; this may be due to the fact that SoNaR-1 markables include singletons and non-referential mentions. However, dutchcoref also has a lower LEA recall, so the gap with e2e-Dutch on SoNaR-1 is not only due to mention performance. Table 4 : Performance difference between e2e-Dutch and dutchcoref for each individual novel. While results for different datasets and languages are not comparable, the performance difference for SoNaR-1 has the same order of magnitude as the difference for OntoNotes between the comparable rule-based and neural systems of Lee et al. (2011) and Lee et al. (2018) in Table 1. RiddleCoref is much smaller than the SoNaR-1 dataset. Is there enough training data for the neural model? Figure 2 shows a learning curve for e2e-Dutch. This curve suggests that for the coreference scores the answer is no, because the performance does not reach a plateau; instead the curve is steep until the end. The performance of dutchcoref is shown at the top of the plot; if we extrapolate the curve linearly, we might expect e2e-Dutch to outperform dutchcoref with 1.1-1.3 times the current training data. However, as an anonymous reviewer pointed out, training curves are usually logarithmic, so more training data may be required. Mention performance does reach a plateau, which suggests this task is easier.", "cite_spans": [ { "start": 956, "end": 973, "text": "Lee et al. (2011)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 3", "ref_id": null }, { "start": 634, "end": 641, "text": "Table 4", "ref_id": null }, { "start": 999, "end": 1006, "text": "Table 1", "ref_id": null }, { "start": 1114, "end": 1122, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "The previous section showed some surprising results. 
We now take a closer look at the differences between the two coreference systems, the datasets, and the annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "6" }, { "text": "See Table 4 for a novel-by-novel comparison of dutchcoref and e2e-Dutch. On 3 out of 5 novels, dutchcoref is better on both LEA F1 and CoNLL. Interestingly, on 1 novel, LEA F1 and CoNLL disagree on the ranking of the systems. Mention performance is high across all novels, except for a large discrepancy on Forsyth, where e2e-Dutch scores 10 points lower.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Rule-based versus neural coreference", "sec_num": "6.1" }, { "text": "To get more insight into the particular errors made by the systems, we perform an error analysis using the tool by Kummerfeld and Klein (2013). Table 5 : Overview of error types made by dutchcoref and e2e-Dutch on the RiddleCoref and SoNaR-1 test sets. dutchcoref RiddleCoref: 73 476 130 96 587 379 154; e2e-Dutch RiddleCoref: 47 321 101 36 420 511 369; dutchcoref SoNaR-1: 352 2432 2327 1772 2640 2469 1519; e2e-Dutch SoNaR-1: 203 1187 895 695 1994 3428 2330. Table 6 : Left: Counts of missing and extra mention errors by mention type. Right: A breakdown of conflated/divided entity errors on RiddleCoref grouped by Name/Nominal/Pronoun composition; 1+ means that the entity contains one or more mentions of the given type.", "cite_spans": [ { "start": 113, "end": 139, "text": "Kummerfeld and Klein (2013", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 140, "end": 335, "text": "73 476 130 96 587 379 154 e2e-Dutch RiddleCoref 47 321 101 36 420 511 369 dutchcoref SoNaR-1 352 2432 2327 1772 2640 2469 1519 e2e-Dutch SoNaR-1 203 1187 895 695 1994", "ref_id": "TABREF4" }, { "start": 346, "end": 353, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Rule-based versus neural coreference", "sec_num": "6.1" }, { "text": "The tool classifies errors into span errors, extra and missing mentions/entities, and entities which are divided (incorrectly split) or conflated (incorrectly merged). 
We use the default configuration of ignoring singleton mentions, but add an option to support the Dutch parse tree labels. Table 5 shows an overview of these error types made by the systems on the RiddleCoref and SoNaR-1 test sets. We can see that e2e-Dutch makes fewer errors of all types, except for missing mentions and entities, which is due to its lower mention recall. Even though e2e-Dutch showed a high score for mention recall on SoNaR-1 in Table 3, we actually find that dutchcoref and e2e-Dutch both show a similarly low mention recall when singletons are excluded (65.8 and 64.3, respectively). Finally, note that a lower mention recall means that there is less opportunity to make errors of other types, so this comparison is not conclusive.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 236, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 550, "end": 557, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Rule-based versus neural coreference", "sec_num": "6.1" }, { "text": "To understand what is going on with mention identification, we can look at a breakdown by mention type; see Table 6. We see that e2e-Dutch produces substantially fewer extra nominal (NP) mentions, but is otherwise similar. In terms of missing mentions, e2e-Dutch makes substantially more errors on names and nominals, but on RiddleCoref it has fewer missing pronouns, while it has more missing pronouns on SoNaR-1. Although pronouns form a closed class, the issue of pleonastic pronouns still makes pronoun mention detection non-trivial for RiddleCoref, where pleonastic pronouns are not annotated as mentions. Since dutchcoref has no rules to detect non-pleonastic uses of potentially pleonastic pronouns, it defaults to treating them as non-mentions.
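The singleton-excluded mention recall used above can be computed along these lines; the data layout (entities as sets of (start, end) token spans) is a hypothetical illustration, not the actual evaluation code:

```python
def mention_recall(gold_entities, system_mentions, exclude_singletons=True):
    """Fraction of gold mentions found by the system; optionally drop
    gold singleton entities first, as in the 65.8/64.3 comparison.
    gold_entities: list of sets of (start, end) spans;
    system_mentions: list of (start, end) spans."""
    gold_mentions = {mention
                     for entity in gold_entities
                     if not (exclude_singletons and len(entity) == 1)
                     for mention in entity}
    if not gold_mentions:
        return 0.0
    return len(gold_mentions & set(system_mentions)) / len(gold_mentions)
```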
For SoNaR-1, the performance difference on missing mentions may be due to information from the gold parse trees used by dutchcoref; for example, the possessive zijn (his) has the same form as the infinitive of the verb zijn (to be), but POS tags disambiguate the two, and this information is not available to e2e-Dutch.", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Rule-based versus neural coreference", "sec_num": "6.1" }, { "text": "Finally, we can try to understand the coreference link errors. Table 6 shows the counts of link errors on RiddleCoref made by the two systems, with the entities categorized by their configuration. We see that for both dutchcoref and e2e-Dutch, the most common divided and conflated entity errors have a pronoun present in the incorrect part, although dutchcoref makes more of these errors. We can thus reconfirm the finding by Kummerfeld and Klein (2013) and van Cranenburgh (2019), who report that the most common link error involves pronouns. Coreference resolution for Dutch poses an extra challenge in that third person singular pronouns can refer to either biological or linguistic gender (Hoste, 2005). Are the scores on the two datasets comparable? There are several issues which hinder the comparison: document length, domain differences, and mention annotation. We first look at document length. It could be that the evaluation metrics are influenced by document length, since longer documents offer more opportunities for errors. We investigate this effect by truncating the documents before evaluation, while keeping other factors such as the model and training data constant. We truncate after running the coreference system because we want to focus on the effect of document length on the evaluation, and we have no reason to expect the coreference systems to behave differently on truncated texts.
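The truncation procedure just described can be sketched as follows; the helper name and the representation of a document as a list of tokenized sentences are assumptions for illustration:

```python
def truncate_to_sentences(sentences, max_words):
    """Truncate a document (a list of tokenized sentences) at a word
    budget, rounded to the nearest sentence boundary. The same
    truncation is applied to gold and system output before scoring,
    so truncation itself introduces no additional errors."""
    total = 0
    for i, sentence in enumerate(sentences):
        if total + len(sentence) > max_words:
            # round to the nearest boundary: keep this sentence
            # if its midpoint still falls within the budget
            if total + len(sentence) / 2 <= max_words:
                return sentences[:i + 1]
            return sentences[:i]
        total += len(sentence)
    return sentences
```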
We truncate the novels at different lengths based on the number of words, rounded to the nearest sentence. Note that truncating does not introduce additional errors, because gold and system output are both truncated. Figure 3 shows coreference scores as a function of document length for the novels. We conclude that e2e-Dutch seems to perform worse on longer documents, based on the negative correlation between scores and document length. While LEA gives larger entities more weight, we also see this effect with the CoNLL score, so it is not an artifact of the LEA metric. Moreover, we do not see the effect for dutchcoref, so the effect is not inherent to the coreference metrics. The documents in SoNaR-1 are much shorter (in number of sentences and words), and this may be an advantage for e2e-Dutch. Joshi et al. (2019) report a similar document length effect for English with their end-to-end model. Table 2 shows that there is a large difference in the distribution of pronouns, names, and noun phrases, which are not equally difficult. Novels tend to have a larger proportion of pronouns. However, it is hard to say a priori whether this would make novels easier or more difficult in terms of coreference.", "cite_spans": [ { "start": 422, "end": 449, "text": "Kummerfeld and Klein (2013)", "ref_id": "BIBREF18" }, { "start": 704, "end": 717, "text": "(Hoste, 2005)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 6", "ref_id": null }, { "start": 1640, "end": 1648, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 2317, "end": 2324, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Rule-based versus neural coreference", "sec_num": "6.1" }, { "text": "In order to see the influence of the mention identification effect, as well as the influence of evaluating with and without singletons, Table 7 shows a comparison on the development set.
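Evaluating without singletons amounts to filtering both the gold and the system entities before scoring; a minimal sketch:

```python
def exclude_singletons(entities):
    """Drop singleton entities from a list of entities (sets of
    mentions). Applied symmetrically to gold and system output
    before scoring; training is unaffected."""
    return [entity for entity in entities if len(entity) > 1]
```

Applying this to both sides reproduces the "without singletons" condition; applying it to only one side would distort precision or recall.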
Note that in our experiments with e2e-Dutch, singletons are always included during training; excluding singletons refers only to excluding them from the system output and gold data during evaluation. We see that ignoring singletons has a counter-intuitively large effect on coreference scores, while it has a relatively small effect on mention identification for RiddleCoref but a large effect on SoNaR-1. However, whether singletons are included or not does not change the ranking of the systems. Finally, when gold mentions are given during evaluation, we see the large downstream effect of mention identification, although again the ranking is preserved.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Rule-based versus neural coreference", "sec_num": "6.1" }, { "text": "Since the gap between the performance of e2e-Dutch and dutchcoref on SoNaR-1 is so large, we take a quick look at the SoNaR-1 annotations of a single development set document (WR-P-E-C-0000000021), in order to understand the errors made by dutchcoref. However, it is apparent that some of these errors are actually errors in the annotation. The first thing that stands out is that mentions with exact string matches are not linked; for example: Amsterdam (5x), Hilversum (6x), de zeventiende eeuw (the seventeenth century, 4x), etc. Other errors are due to missing mentions; for example, 2 out of 10 mentions of the artist Japix are missing, probably because the name occurs twice as part of a possessive.
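The manual check for repeated but unlinked strings (such as Amsterdam, 5x) could be automated roughly as follows; the input format, pairs of surface string and entity id (None for unlinked markables), is an assumption for illustration:

```python
from collections import defaultdict

def repeated_unlinked_strings(markables):
    """Flag surface strings that occur more than once but are left
    unlinked, or are spread over several entities -- a cheap screen
    for possible annotation misses. markables: iterable of
    (surface_string, entity_id) pairs, entity_id None when unlinked.
    Returns {string: occurrence count} for suspicious strings."""
    entity_ids = defaultdict(set)
    counts = defaultdict(int)
    for text, entity_id in markables:
        key = text.lower()
        entity_ids[key].add(entity_id)
        counts[key] += 1
    return {text: counts[text] for text, ids in entity_ids.items()
            if counts[text] > 1 and (None in ids or len(ids) > 1)}
```

Such a screen would only produce candidates for manual review, since repeated strings can of course refer to different entities.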
A corpus based on semi-automatic annotation would not contain such errors, although it is understandable that such links are easy to overlook in a longer document when manually annotating from scratch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SoNaR-1 annotation issues", "sec_num": "6.3" }, { "text": "An example of a questionable mention boundary (with corrected boundary underlined):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SoNaR-1 annotation issues", "sec_num": "6.3" }, { "text": "This is actually an example of a downside of semi-automatic annotation, at least if there is no correction, since the markable boundaries of SoNaR-1 were automatically extracted and could not be changed by annotators. For the RiddleCoref corpus, such boundaries were corrected. An example of a missing anaphoric link (the second hij was not linked):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SoNaR-1 annotation issues", "sec_num": "6.3" }, { "text": "(2) Een vers aan [Caspar Barlaeus] 1 ondertekent [hij] 2 met 'Dando petere solitus' dat wil zeggen:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SoNaR-1 annotation issues", "sec_num": "6.3" }, { "text": "[hij] 2 schrijft po\u00ebzie in de hoop betere verzen terug te krijgen. A verse to [Caspar Barlaeus] 1 he 2 signs with 'Dando petere solitus', that is to say: he 2 writes poetry in the hope of getting better verses back.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SoNaR-1 annotation issues", "sec_num": "6.3" }, { "text": "This only scratches the surface of the SoNaR-1 annotations; a more systematic study is needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SoNaR-1 annotation issues", "sec_num": "6.3" }, { "text": "We found large gaps in performance for the two systems across the two domains, but this result is not conclusive, for several reasons.
The neural system shows a weakness with the long documents in the novel corpus, and it also needs more training data to reach its full potential. The rule-based system should be better adapted to the SoNaR-1 annotation scheme, but the neural system's capacity to adapt to arbitrary annotation conventions does not necessarily imply better linguistic performance. To maximize the comparability and usefulness of the corpora, their annotations should be harmonized, which involves manual mention annotation. In future work we want to improve the neural system by using genre metadata and finetuning BERT, and to extend the rule-based system into a hybrid system by adding supervised classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Cf. https://gist.github.com/CorbenPoot/ee1c97209cb9c5fc50f9528c7fdcdc93 [2] We could also evaluate with predicted parses from the Alpino parser, but components of the Alpino parser have been trained on subsets of Lassy Small, so predicted parses of Lassy Small are not representative of Alpino's held-out performance. [3] The conversion script is part of https://github.com/andreasvc/dutchcoref/
"section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to Gertjan van Noord and Peter Kleiweg for help with preprocessing the Lassy Small treebank, to Wietse de Vries and Malvina Nissim for comments on the evaluation, and to three anonymous reviewers for their suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Natural language processing for the long tail", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" } ], "year": 2017, "venue": "Proceedings of Digital Humanities", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Bamman. 2017. Natural language processing for the long tail. In Proceedings of Digital Humanities.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An annotated dataset of coreference in English literature", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Lewke", "suffix": "" }, { "first": "Anya", "middle": [], "last": "Mansoor", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "44--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Bamman, Olivia Lewke, and Anya Mansoor. 2020. An annotated dataset of coreference in English literature. 
In Proceedings of The 12th Language Resources and Evaluation Conference, pages 44-54.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The COREA-project: manual for the annotation of coreference in Dutch texts", "authors": [ { "first": "Gosse", "middle": [], "last": "Bouma", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gosse Bouma, Walter Daelemans, Iris Hendrickx, V\u00e9ronique Hoste, and Anne-Marie Mineur. 2007. The COREA-project: manual for the annotation of coreference in Dutch texts. Technical report, University of Groningen.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Deep reinforcement learning for mention-ranking coreference models", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "2256--2262", "other_ids": { "DOI": [ "10.18653/v1/D16-1245" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark and Christopher D. Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of EMNLP, pages 2256-2262.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Improving coreference resolution by learning entitylevel distributed representations", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "643--653", "other_ids": { "DOI": [ "10.18653/v1/P16-1061" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark and Christopher D. Manning. 2016b. Improving coreference resolution by learning entity- level distributed representations. 
In Proceedings of ACL, pages 643-653.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Dutch coreference resolution system with an evaluation on literary fiction", "authors": [ { "first": "Andreas", "middle": [], "last": "Van Cranenburgh", "suffix": "" } ], "year": 2019, "venue": "Computational Linguistics in the Netherlands Journal", "volume": "9", "issue": "", "pages": "27--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas van Cranenburgh. 2019. A Dutch coreference resolution system with an evaluation on literary fiction. Computational Linguistics in the Netherlands Journal, 9:27-54.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Cross-domain Dutch coreference resolution", "authors": [ { "first": "Clercq", "middle": [], "last": "Orph\u00e9e De", "suffix": "" } ], "year": 2011, "venue": "Proceedings of RANLP", "volume": "", "issue": "", "pages": "186--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Orph\u00e9e De Clercq, V\u00e9ronique Hoste, and Iris Hendrickx. 2011. Cross-domain Dutch coreference resolution. In Proceedings of RANLP, pages 186-193.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of NAACL, pages 4171-4186.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Latent structure perceptron with feature induction for unrestricted coreference resolution", "authors": [ { "first": "Eraldo", "middle": [], "last": "Fernandes", "suffix": "" }, { "first": "Santos", "middle": [], "last": "C\u00edcero Dos", "suffix": "" }, { "first": "Ruy", "middle": [], "last": "Milidi\u00fa", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eraldo Fernandes, C\u00edcero dos Santos, and Ruy Milidi\u00fa. 2012. Latent structure perceptron with feature induction for unrestricted coreference resolution. In Joint Conference on EMNLP and CoNLL -Shared Task, pages 41-48.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "GroRef: Rule-based coreference resolution for Dutch", "authors": [ { "first": "Rob", "middle": [], "last": "Van Der Goot", "suffix": "" }, { "first": "Hessel", "middle": [], "last": "Haagsma", "suffix": "" }, { "first": "Dieke", "middle": [], "last": "Oele", "suffix": "" } ], "year": 2015, "venue": "CLIN26 shared task", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rob van der Goot, Hessel Haagsma, and Dieke Oele. 2015. GroRef: Rule-based coreference resolution for Dutch. 
In CLIN26 shared task.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning word vectors for 157 languages", "authors": [ { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Prakhar", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of LREC.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A coreference corpus and resolution system for Dutch", "authors": [ { "first": "Iris", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "Gosse", "middle": [], "last": "Bouma", "suffix": "" }, { "first": "Frederik", "middle": [], "last": "Coppens", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "Veronique", "middle": [], "last": "Hoste", "suffix": "" }, { "first": "Geert", "middle": [], "last": "Kloosterman", "suffix": "" }, { "first": "Anne-Marie", "middle": [], "last": "Mineur", "suffix": "" }, { "first": "Joeri", "middle": [], "last": "Van Der", "suffix": "" }, { "first": "Jean-Luc", "middle": [], "last": "Vloet", "suffix": "" }, { "first": "", "middle": [], "last": "Verschelde", "suffix": "" } ], "year": 2008, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iris Hendrickx, Gosse Bouma, Frederik Coppens, Walter Daelemans, Veronique Hoste, Geert Kloosterman, Anne-Marie Mineur, Joeri van der Vloet, and Jean-Luc Verschelde. 2008a. A coreference corpus and resolution system for Dutch. 
In Proceedings of LREC.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semantic and syntactic features for Dutch coreference resolution", "authors": [ { "first": "Iris", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "Veronique", "middle": [], "last": "Hoste", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics and Intelligent Text Processing", "volume": "", "issue": "", "pages": "351--361", "other_ids": { "DOI": [ "10.1007/978-3-540-78135-6_30" ] }, "num": null, "urls": [], "raw_text": "Iris Hendrickx, Veronique Hoste, and Walter Daelemans. 2008b. Semantic and syntactic features for Dutch coreference resolution. In Computational Linguistics and Intelligent Text Processing, pages 351-361, Berlin, Heidelberg. Springer Berlin Heidelberg.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Optimization issues in machine learning of coreference resolution", "authors": [ { "first": "V\u00e9ronique", "middle": [], "last": "Hoste", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V\u00e9ronique Hoste. 2005. Optimization issues in machine learning of coreference resolution. Ph.D. thesis, Universiteit Antwerpen. 
Faculteit Letteren en Wijsbegeerte.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "OntoNotes: The 90% solution", "authors": [ { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2006, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "57--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of NAACL, pages 57-60.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "SpanBERT: Improving pre-training by representing and predicting spans", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "64--77", "other_ids": { "DOI": [ "10.1162/tacl_a_00300" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans.
Transactions of the Association for Computational Linguistics, 8:64-77.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "BERT for coreference resolution: Baselines and analysis", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" } ], "year": 2019, "venue": "Proceedings of EMNLP-IJCNLP", "volume": "", "issue": "", "pages": "5807--5812", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference resolution: Baselines and analysis. In Proceedings of EMNLP-IJCNLP, pages 5807-5812.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Literary quality in the eye of the Dutch reader: The national reader survey", "authors": [ { "first": "Corina", "middle": [], "last": "Koolen", "suffix": "" }, { "first": "Karina", "middle": [], "last": "Van Dalen-Oskam", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Van Cranenburgh", "suffix": "" }, { "first": "Erica", "middle": [], "last": "Nagelhout", "suffix": "" } ], "year": 2020, "venue": "Poetics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1016/j.poetic.2020.101439" ] }, "num": null, "urls": [], "raw_text": "Corina Koolen, Karina van Dalen-Oskam, Andreas van Cranenburgh, and Erica Nagelhout. 2020. Literary quality in the eye of the Dutch reader: The national reader survey. 
Poetics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Error-driven analysis of challenges in coreference resolution", "authors": [ { "first": "Jonathan", "middle": [ "K" ], "last": "Kummerfeld", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "265--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan K. Kummerfeld and Dan Klein. 2013. Error-driven analysis of challenges in coreference resolution. In Proceedings of EMNLP, pages 265-277.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "4", "pages": "885--916", "other_ids": { "DOI": [ "10.1162/COLI_a_00152" ] }, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. 
Computational Linguistics, 39(4):885-916.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2011, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "28--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task. In Proceedings of CoNLL, pages 28-34.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A scaffolding approach to coreference resolution integrating statistical and rule-based models", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2017, "venue": "Natural Language Engineering", "volume": "23", "issue": "5", "pages": "733--762", "other_ids": { "DOI": [ "10.1017/S1351324917000109" ] }, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Mihai Surdeanu, and Dan Jurafsky. 2017a. A scaffolding approach to coreference resolution integrating statistical and rule-based models.
Natural Language Engineering, 23(5):733-762.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "End-to-end neural coreference resolution", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "188--197", "other_ids": { "DOI": [ "10.18653/v1/D17-1018" ] }, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017b. End-to-end neural coreference resolution. In Proceedings of EMNLP, pages 188-197.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Higher-order coreference resolution with coarse-to-fine inference", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "687--692", "other_ids": { "DOI": [ "10.18653/v1/N18-2108" ] }, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proceedings of NAACL, pages 687-692.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Which coreference evaluation metric do you trust?
A proposal for a link-based entity aware metric", "authors": [ { "first": "Sadat", "middle": [], "last": "Nafise", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Moosavi", "suffix": "" }, { "first": "", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "632--642", "other_ids": { "DOI": [ "10.18653/v1/P16-1060" ] }, "num": null, "urls": [], "raw_text": "Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? A proposal for a link-based entity aware metric. In Proceedings of ACL, pages 632-642.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "At last parsing is now operational", "authors": [ { "first": "", "middle": [], "last": "Gertjan Van Noord", "suffix": "" } ], "year": 2006, "venue": "TALN06. Verbum Ex Machina. Actes de la 13e conference sur le traitement automatique des langues naturelles", "volume": "", "issue": "", "pages": "20--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gertjan van Noord. 2006. At last parsing is now operational. In TALN06. Verbum Ex Machina. Actes de la 13e conference sur le traitement automatique des langues naturelles, pages 20-42.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Syntactic annotation of large corpora in STEVIN", "authors": [ { "first": "Ineke", "middle": [], "last": "Gertjan Van Noord", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Schuurman", "suffix": "" }, { "first": "", "middle": [], "last": "Vandeghinste", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gertjan van Noord, Ineke Schuurman, and Vincent Vandeghinste. 2006. Syntactic annotation of large corpora in STEVIN. 
In Proceedings of LREC.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532-1543.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of NAACL, pages 2227-2237.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL - Shared Task", "volume": "", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1-40.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2011, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "1--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes. 
In Proceedings of CoNLL, pages 1-27.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "SemEval-2010 task 1: Coreference resolution in multiple languages", "authors": [ { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Emili", "middle": [], "last": "Sapena", "suffix": "" }, { "first": "M", "middle": [ "Ant\u00f2nia" ], "last": "Mart\u00ed", "suffix": "" }, { "first": "Mariona", "middle": [], "last": "Taul\u00e9", "suffix": "" }, { "first": "V\u00e9ronique", "middle": [], "last": "Hoste", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "Yannick", "middle": [], "last": "Versley", "suffix": "" } ], "year": 2010, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Recasens, Llu\u00eds M\u00e0rquez, Emili Sapena, M. Ant\u00f2nia Mart\u00ed, Mariona Taul\u00e9, V\u00e9ronique Hoste, Massimo Poesio, and Yannick Versley. 2010. SemEval-2010 task 1: Coreference resolution in multiple languages. In Proceedings of SemEval, pages 1-8.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Interacting semantic layers of annotation in SoNaR, a reference corpus of contemporary written Dutch", "authors": [ { "first": "Ineke", "middle": [], "last": "Schuurman", "suffix": "" }, { "first": "V\u00e9ronique", "middle": [], "last": "Hoste", "suffix": "" }, { "first": "Paola", "middle": [], "last": "Monachesi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "2471--2477", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ineke Schuurman, V\u00e9ronique Hoste, and Paola Monachesi. 2010. Interacting semantic layers of annotation in SoNaR, a reference corpus of contemporary written Dutch. In Proceedings 
of LREC, pages 2471-2477.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Huge parsed corpora in Lassy", "authors": [ { "first": "Gertjan", "middle": [], "last": "Van Noord", "suffix": "" } ], "year": 2009, "venue": "Proceedings of TLT7", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gertjan Van Noord. 2009. Huge parsed corpora in Lassy. In Proceedings of TLT7, Groningen, The Netherlands. LOT.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models", "authors": [ { "first": "Wietse", "middle": [], "last": "de Vries", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "van Cranenburgh", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2020, "venue": "Findings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wietse de Vries, Andreas van Cranenburgh, and Malvina Nissim. 2020. What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models. 
In Findings of EMNLP.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "BERTje: A Dutch BERT model", "authors": [ { "first": "Wietse", "middle": [], "last": "de Vries", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "van Cranenburgh", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Tommaso", "middle": [], "last": "Caselli", "suffix": "" }, { "first": "Gertjan", "middle": [], "last": "van Noord", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.09582" ] }, "num": null, "urls": [], "raw_text": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. BERTje: A Dutch BERT model. arXiv:1912.09582.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Learning anaphoricity and antecedent ranking features for coreference resolution", "authors": [ { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1416--1426", "other_ids": { "DOI": [ "10.3115/v1/P15-1137" ] }, "num": null, "urls": [], "raw_text": "Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. 
In Proceedings of ACL, pages 1416-1426.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "CorefQA: Coreference resolution as query-based span prediction", "authors": [ { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "6953--6963", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.622" ] }, "num": null, "urls": [], "raw_text": "Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, and Jiwei Li. 2020. CorefQA: Coreference resolution as query-based span prediction. In Proceedings of ACL, pages 6953-6963.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Overview of the first step of the end-to-end model in which the embedding representations and mention scores are computed. The model considers all possible spans up to a maximum width but only a small subset is shown here. Figure adapted from Lee et al. (2017b)." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Learning curve of e2e-Dutch on RiddleCoref dev set, showing performance as a function of amount of training data (initial segments of novels)." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Coreference scores as a function of document length. Gold and system output are truncated at different lengths (based on % of words, rounded to the nearest sentence boundary); r is the Pearson correlation coefficient." }, "TABREF1": { "html": null, "num": null, "type_str": "table", "content": "
System      Dataset            Mentions               LEA                    CoNLL
                               R     P     F1         R     P     F1
dutchcoref  RiddleCoref, dev   86.85 85.84 86.34      49.18 58.03 53.24      65.91
e2e-Dutch   RiddleCoref, dev   83.12 87.65 85.33      48.37 50.99 49.65      64.81
dutchcoref  RiddleCoref, test  87.65 90.80 89.20      50.83 64.78 56.97      69.86
e2e-Dutch   RiddleCoref, test  81.95 89.00 85.33      44.82 50.48 47.48      63.55
dutchcoref  SoNaR-1, dev       64.88 86.78 74.25      37.98 52.23 43.98      55.45
e2e-Dutch   SoNaR-1, dev       90.24 88.09 89.16      65.02 65.55 65.29      71.53
dutchcoref  SoNaR-1, test      65.32 85.94 74.22      37.87 52.55 44.02      55.91
e2e-Dutch   SoNaR-1, test      88.96 86.81 87.87      60.67 62.48 61.56      68.45
", "text": "Mention detection and coreference scores (LEA and CoNLL) of dutchcoref and e2e-Dutch on the RiddleCoref and SoNaR-1 dev and test sets." }, "TABREF3": { "html": null, "num": null, "type_str": "table", "content": "
System      Dataset      Span Error  Conflated Entities  Extra Mention  Extra Entity  Divided Entity  Missing Mention  Missing Entity
dutchcoref  RiddleCoref
", "text": "). 7 This tool attributes errors to mention spans, missing or extra" }, "TABREF4": { "html": null, "num": null, "type_str": "table", "content": "
Dataset      System  error    name  nom.  pron.
RiddleCoref  d.c.    extra      58    34     2
RiddleCoref  e2e     extra      65    54     0
RiddleCoref  d.c.    missing   111    63   205
RiddleCoref  e2e     missing   115   274   122
SoNaR-1      d.c.    extra     544   283   310
SoNaR-1      e2e     extra    1473  1842   170
SoNaR-1      d.c.    missing   175   825   344
SoNaR-1      e2e     missing   550  2124   479

Incorrect part   Rest of entity   Divided      Conflated
Na   No   Pr     Na   No   Pr     d.c.   e2e   d.c.   e2e
-    -    1+     -    1+   1+      104    66    118    74
-    -    1+     1+   1+   1+       20   249     11    72
-    -    1+     -    1+   -        62    66    156    31
-    -    -      -    -    1+       22    30     33    20
-    1+   1+     -    -    1+       33    31     16    13
-    -    1+     1+   1+   1+       34    18     33     6
1+   1+   1+     -    1+   1+       36    29      2    12
1+   1+   -      1+   -    1+       15    11     25    14
Other                               79   120     82    79
", "text": "Error types and their respective counts for both systems and datasets" }, "TABREF6": { "html": null, "num": null, "type_str": "table", "content": "
6.2 RiddleCoref (novels) versus SoNaR-1 (news/Wikipedia)
", "text": "Development set results under different conditions." }, "TABREF7": { "html": null, "num": null, "type_str": "table", "content": "", "text": "1) [Hij] was [burgemeester van Franeker en later gedeputeerde van Friesland in de Staten-Generaal]. [He] was [mayor of Franeker and later deputy of Frisia in the Senate]." } } } }