{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:10:49.966083Z" }, "title": "Priberam Labs at the 3 rd Shared Task on SlavNER", "authors": [ { "first": "Pedro", "middle": [], "last": "Ferreira", "suffix": "", "affiliation": { "laboratory": "Priberam Labs", "institution": "", "location": { "country": "Portugal" } }, "email": "pedro.ferreira@priberam.pt" }, { "first": "R\u00faben", "middle": [], "last": "Cardoso", "suffix": "", "affiliation": { "laboratory": "Priberam Labs", "institution": "", "location": { "country": "Portugal" } }, "email": "ruben.cardoso@priberam.pt" }, { "first": "Afonso", "middle": [], "last": "Mendes", "suffix": "", "affiliation": { "laboratory": "Priberam Labs", "institution": "", "location": { "country": "Portugal" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This document describes our participation at the 3 rd Shared Task on SlavNER, part of the 8 th Balto-Slavic Natural Language Processing Workshop, where we focused exclusively in the Named Entity Recognition (NER) task. We addressed this task by combining multilingual contextual embedding models, such as XLM-R (Conneau et al., 2020), with characterlevel embeddings and a biaffine classifier (Yu et al., 2020). This allowed us to train downstream models for NER using all the available training data. We are able to show that this approach results in good performance when replicating the scenario of the 2 nd Shared Task.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This document describes our participation at the 3 rd Shared Task on SlavNER, part of the 8 th Balto-Slavic Natural Language Processing Workshop, where we focused exclusively in the Named Entity Recognition (NER) task. We addressed this task by combining multilingual contextual embedding models, such as XLM-R (Conneau et al., 2020), with characterlevel embeddings and a biaffine classifier (Yu et al., 2020). This allowed us to train downstream models for NER using all the available training data. We are able to show that this approach results in good performance when replicating the scenario of the 2 nd Shared Task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This document describes our participation at the 3 rd Shared Task on SlavNER, part of the 8 th Balto-Slavic Natural Language Processing Workshop, held in conjunction with the 16 th Conference of the European Chapter of the Association for Computational Linguistics (EACL). It includes three different subtasks: Named Entity Recognition (NER), including detection and classification, lemmatization, and cross-lingual entity linking. 
The differentiating feature of this shared task is the focus on six Slavic languages: Bulgarian (BG), Czech (CS), Polish (PL), Russian (RU), Slovene (SL), and Ukrainian (UK).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus our participation exclusively on the task of NER, and we base ourselves on recent developments that show that cross-lingual embeddings produce good results for a wide range of languages and tasks (Pires et al., 2019; Hu et al., 2020) .", "cite_spans": [ { "start": 205, "end": 225, "text": "(Pires et al., 2019;", "ref_id": "BIBREF14" }, { "start": 226, "end": 242, "text": "Hu et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our overall approach is heavily based in Yu et al. (2020) , and uses both contextual and characterlevel embeddings as input to a sequence of models which culminates in a biaffine classifier. Due to the multilingual nature of this task we explore multilingual contextual embedding models, such as Multilingual BERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020) , as a way of leveraging all the available training data at once. It differs from the approaches presented in the previous edition of the shared task (Piskorski et al., 2019) in the following aspects: (i) we explore more contextual embedding approaches; (ii) we further finetune the contextual embedding model while training the downstream model; (iii) we use the same topics in our train and development sets; and (iv) we use a different classifier architecture.", "cite_spans": [ { "start": 41, "end": 57, "text": "Yu et al. (2020)", "ref_id": "BIBREF22" }, { "start": 314, "end": 335, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 352, "end": 374, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" }, { "start": 525, "end": 549, "text": "(Piskorski et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach resulted in strong performance when replicating the scenario of the 2 nd Shared Task on SlavNER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Named Entity Recognition corresponds to the task of both finding and classifying named entities in text. Some of the most common approaches to tackle NER that make use of neural networks involve combining models such as Conditional Random Fields (CRFs, Lafferty et al. 2001) , bidirectional Long-Short-Term-Memory Neural Networks (biLSTMs, Schuster and Paliwal 1997), and Convolutional Neural Networks (CNNs, LeCun et al. 1989) . Two examples of such approaches are Chiu and Nichols (2016) and Ma and Hovy (2016) , which use biLSTMs and CNNs to build both wordand character-level features, diverging in the fact that the latter performs decoding with a CRF, while the former uses a linear layer.", "cite_spans": [ { "start": 246, "end": 274, "text": "(CRFs, Lafferty et al. 2001)", "ref_id": null }, { "start": 340, "end": 352, "text": "Schuster and", "ref_id": "BIBREF16" }, { "start": 353, "end": 371, "text": "Paliwal 1997), and", "ref_id": "BIBREF16" }, { "start": 372, "end": 427, "text": "Convolutional Neural Networks (CNNs, LeCun et al. 
1989)", "ref_id": null }, { "start": 494, "end": 512, "text": "Ma and Hovy (2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "With the development of better pre-trained embedding models, transfer-learning became a significant part of approaches that tackle NER. This technique implies using a pre-trained model in order to obtain an embedding for each word of the input. These representations are then used as input to a model that is able to solve the desired downstream task. In terms of pre-trained embeddings we highlight ELMo embeddings (Peters et al., 2018) , which are trained using a bidirectional language model, the Flair embeddings (Akbik et al., 2018) , trained with a character-level language model, and finally embeddings retrieved from Transformerbased models (Vaswani et al., 2017) , such as BERT (Devlin et al., 2019) , and RoBERTa (Liu et al., 2019) . Any combination of these embeddings can be used as input to a model that is able to predict entity classes, such as a LSTM-CRF (Strakov\u00e1 et al., 2019) , a biaffine-classifier (Yu et al., 2020) , or a linear layer (Devlin et al., 2019) .", "cite_spans": [ { "start": 416, "end": 437, "text": "(Peters et al., 2018)", "ref_id": "BIBREF13" }, { "start": 517, "end": 537, "text": "(Akbik et al., 2018)", "ref_id": "BIBREF0" }, { "start": 649, "end": 671, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF20" }, { "start": 687, "end": 708, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 723, "end": 741, "text": "(Liu et al., 2019)", "ref_id": "BIBREF10" }, { "start": 871, "end": 894, "text": "(Strakov\u00e1 et al., 2019)", "ref_id": "BIBREF17" }, { "start": 919, "end": 936, "text": "(Yu et al., 2020)", "ref_id": "BIBREF22" }, { "start": 957, "end": 978, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Particularly relevant to this work is the possibility of using pre-trained cross-lingual embeddings, such as multilingual BERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020) , both covering a high-number of languages with different scripts. Such models have been shown to perform well on different tasks and languages (Pires et al., 2019; Lewis et al., 2020; Hu et al., 2020) .", "cite_spans": [ { "start": 127, "end": 148, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 165, "end": 187, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" }, { "start": 332, "end": 352, "text": "(Pires et al., 2019;", "ref_id": "BIBREF14" }, { "start": 353, "end": 372, "text": "Lewis et al., 2020;", "ref_id": "BIBREF9" }, { "start": 373, "end": 389, "text": "Hu et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the previous edition of the SlavNER shared task (Piskorski et al., 2019 ) multiple submissions used multilingual BERT to retrieve cross-lingual representations. In particular, Tsygankova et al. (2019) used both word-and character-embeddings as input to a biLSTM-CRF, Arkhipov et al. 
(2019) further pretrained multilingual BERT on the four target Slavic languages of last shared task's edition and combined its representations with a word-level CRF, and the submission by IIUWR.PL used a combination of different embeddings, where BERT and Flair were included.", "cite_spans": [ { "start": 51, "end": 74, "text": "(Piskorski et al., 2019", "ref_id": "BIBREF15" }, { "start": 270, "end": 292, "text": "Arkhipov et al. (2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our approach makes use of three key components: a multilingual contextual embedding model, a character-level embedding model, and a biaffine classifier model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "In terms of multilingual contextual embedding models we have explored three options:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "1. Multilingual BERT model, which covers 104 languages, and follows the configuration of the BERT-base model (Devlin et al., 2019) .", "cite_spans": [ { "start": 109, "end": 130, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "2. XLM-RoBERTa (XLM-R) model (Conneau et al., 2020) , trained on 100 languages. We use the large version of the model.", "cite_spans": [ { "start": 29, "end": 51, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "3. Slavic BERT model (Arkhipov et al., 2019) , which corresponds to the Multilingual BERTmodel further finetuned using resources for Bulgarian, Czech, Polish and Russian. Furthermore, it also rebuilds the original vocabulary to better match these languages.", "cite_spans": [ { "start": 21, "end": 44, "text": "(Arkhipov et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Besides finetuning the the top-layers of the contextual embedding model during training, we complement these representations with character-level embeddings, obtained with a single-layer CNN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "The biaffine classifier model follows the work by Yu et al. (2020) . In particular, the token-and character-level embeddings are concatenated and fed into a Highway BiLSTM (Zhang et al., 2016) which yields a representation for each token. These representations are given to two individual Feed-Forward Neural Networks (FFNNs), responsible for creating a representation that models whether a token is the start/end of a span. Finally, these are passed to a biaffine model, which returns a scores tensor with all possible start-end combinations with shape n \u00d7 n \u00d7 c, where n is the number of tokens and c is the number of NER classes plus one, corresponding to the no-entity prediction. This scores tensor masks non-valid spans, i.e., spans where the end position is lower than the start position.", "cite_spans": [ { "start": 50, "end": 66, "text": "Yu et al. (2020)", "ref_id": "BIBREF22" }, { "start": 172, "end": 192, "text": "(Zhang et al., 2016)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "A series of heuristics is then applied to the scores tensor in order to predict spans. 
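A minimal PyTorch sketch of this biaffine scoring step and of the greedy flat-span decoding heuristics described in the next paragraph. It is illustrative only: hidden sizes, initialization, and function names are assumptions and do not reproduce the exact implementation, which follows Yu et al. (2020).

```python
# Sketch of biaffine span scoring and greedy flat decoding (illustrative only).
import torch
import torch.nn as nn


class BiaffineSpanScorer(nn.Module):
    def __init__(self, token_dim: int, ffnn_dim: int, num_labels: int):
        super().__init__()
        # Two FFNNs build start-of-span and end-of-span representations.
        self.start_ffnn = nn.Sequential(nn.Linear(token_dim, ffnn_dim), nn.ReLU())
        self.end_ffnn = nn.Sequential(nn.Linear(token_dim, ffnn_dim), nn.ReLU())
        # One (ffnn_dim + 1) x (ffnn_dim + 1) biaffine matrix per label; the extra
        # dimension acts as a bias term, and num_labels includes the no-entity class.
        self.biaffine = nn.Parameter(torch.empty(num_labels, ffnn_dim + 1, ffnn_dim + 1))
        nn.init.xavier_uniform_(self.biaffine)

    def forward(self, token_reprs: torch.Tensor) -> torch.Tensor:
        # token_reprs: (n, token_dim), e.g. the BiLSTM output over the concatenated
        # contextual and character-level embeddings.
        n = token_reprs.size(0)
        ones = token_reprs.new_ones(n, 1)
        h_start = torch.cat([self.start_ffnn(token_reprs), ones], dim=-1)  # (n, d + 1)
        h_end = torch.cat([self.end_ffnn(token_reprs), ones], dim=-1)      # (n, d + 1)
        # scores[i, j, c] = h_start[i]^T W_c h_end[j], giving an (n, n, c) tensor.
        return torch.einsum("id,cde,je->ijc", h_start, self.biaffine, h_end)


def decode_flat_spans(scores: torch.Tensor, no_entity: int = 0):
    """Greedy flat decoding: rank candidate entity spans by score and drop any
    span that overlaps a higher-scoring span already kept."""
    n = scores.size(0)
    best_scores, best_labels = scores.max(dim=-1)
    candidates = [
        (best_scores[i, j].item(), i, j, best_labels[i, j].item())
        for i in range(n)
        for j in range(i, n)  # only valid spans, i.e. end >= start
        if best_labels[i, j].item() != no_entity
    ]
    kept, covered = [], set()
    for score, i, j, label in sorted(candidates, reverse=True):
        if covered.isdisjoint(range(i, j + 1)):
            kept.append((i, j, label))
            covered.update(range(i, j + 1))
    return kept
```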
First, all the valid spans are retrieved and matched with the corresponding highest-scoring label. Then, all spans whose highest-scoring label corresponds to an entity are sorted by score, from highest to lowest, and are evaluated sequentially. All predicted spans are kept, unless they clash with some of the spans already validated for that input, i.e., unless they overlap with entities that were given an higher score. One of the advantages of this model is that it can model both flat and nested entities, based upon the heuristics we apply.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "The model is optimized with the softmax crossentropy loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "The 3 rd edition of the SlavNER Shared Task includes three subtasks: Named Entity Recognition, lemmatization, and cross-lingual entity linking. The available data for this edition adds two extra languages (Slovene and Ukrainian) to the four languages covered in the 2 nd edition of the shared task (Bulgarian, Czech, Polish, and Russian).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "4.1" }, { "text": "Our work targets exclusively the subtask of NER, for which there are five types of entities: per-sons (PER), locations (LOC), organizations (ORG), events (EVT), and products (PRO).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "4.1" }, { "text": "The evaluation of this subtask is case-insensitive, and since the goal is to correctly identify a \"bagof-mentions\" in a document, it uses three specific metrics, two \"relaxed\" and one \"strict\":", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "4.1" }, { "text": "\u2022 Relaxed Partial Matching (RPM) and Relaxed Exact Matching (REM), where the system only needs to identify at least one of the forms of a given entity to count a match (e.g. it would only need to identify Alexandr Kogan for both Alexandr Kogan and Alexandra Kogana to be matched). The difference between partial and strict is that the former requires matching only a part of the named entity, while the latter requires an exact match (e.g. in the previous example, matching Kogan would be enough for the partial metric).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "4.1" }, { "text": "\u2022 Strict Matching (SM), where the system has to identify each unique form of a named entity present in a given document (i.e. in the previous example both Alexandr Kogan and Alexandra Kogana would have to be predicted).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "4.1" }, { "text": "The train data for the 3 rd edition of the SlavNER shared task (SLAVNER2021) includes four topics and covers a total of six languages. The four topics: ASIA_BIBI, BREXIT, NORD_STREAM, and RYANAIR, were part of the data used in the 2 nd edition of the SlavNER shared task (SLAVNER2019), apart from minor revisions and two extra languages, Slovene and Ukrainian. Furthermore, there is also an additional generic topic this year, OTHER, which includes only Slovene data. The two added languages have the smallest amount of documents available, 279 and 159 respectively for Slovene and Ukrainian. The other languages have more data available, ranging from 571 in the case of Russian to 918 in the case of Bulgarian. 
This edition's test data includes two topics, \"Covid-19\" and \"US_election_2020\", which are particularly challenging due to the their very specific vocabulary. The most represented language is Slovene, with 333 documents, and the least represented language is Ukrainian, with 168 documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.2" }, { "text": "Similarly to Tsygankova et al. 2019, we consider data from the 1 st edition of the SlavNER shared task, which we will refer to as SLAVNER2017. It includes two topics, EU and TRUMP, and covers seven languages: Czech, Croatian, Polish, Russian, Slovak, Slovene, and Ukrainian. Despite the different set of tags, the extra data can improve the overall performance (Tsygankova et al., 2019) .", "cite_spans": [ { "start": 361, "end": 386, "text": "(Tsygankova et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.2" }, { "text": "We use an internal tokenizer that is able to split sentences and also words from punctuation. The data was processed in order to match the format expected by our internal framework used to train NER models, on a sentence-by-sentence basis. This requires matching the document-level annotations, as provided by the organizers of the shared task, with all the individual occurrences in the text. As mentioned in Tsygankova et al. (2019) , this leads to two possible errors: matching occurrences of words that do not correspond to entities, and the opposite. The relative difference between the expected number of entities at the document-level and our number of annotations is between 0.78% and 1.6%. Besides the two aforementioned errors, we found the most common mismatches to be related with typos in entities, encoding errors of Latin-text annotations in Cyrillic documents, and errors due to our tokenization.", "cite_spans": [ { "start": 410, "end": 434, "text": "Tsygankova et al. (2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.2" }, { "text": "To make our predictions match the expected format we keep only the unique (case-insensitive) predicted entities, and remove the ones tagged as MISC, obtained when using SLAVNER2017 data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.2" }, { "text": "Unless otherwise noted, train and development data use the same topics and are split by using the top 5% of sentences as development data and the remainder 95% as training data. This split is performed at the level of each topic + language, so that the original ratio of data is kept. SLAVNER2017 data is not included by default.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.2" }, { "text": "We implement our approach using PyTorch (Paszke et al., 2019) . The contextual embedding models use the Hugging-Face Transformers library (Wolf et al., 2019) , and our biaffine classifier implementation mostly follows the original one 1 . Further training details can be seen in Appendix A.", "cite_spans": [ { "start": 40, "end": 61, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF12" }, { "start": 138, "end": 157, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.3" }, { "text": "Our first set of experiments was conducted using SLAVNER2019 and SLAVNER2021 data. We keep NORD_STREAM and RYANAIR as test topics, and the remainder as train topics. 
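As a concrete illustration of the train/development split described in Subsection 4.2 (top 5% of sentences per topic + language as development data, the remaining 95% as training data), the following is a minimal sketch; the input representation, with one record per sentence carrying topic and language keys, is an assumption for illustration.

```python
# Illustrative 95%/5% split performed within each (topic, language) bucket so
# that the original data ratios are preserved.
from collections import defaultdict
from typing import Dict, List, Tuple


def split_train_dev(
    sentences: List[dict], dev_fraction: float = 0.05
) -> Tuple[List[dict], List[dict]]:
    # Each sentence record is assumed to carry "topic" and "language" keys.
    buckets: Dict[Tuple[str, str], List[dict]] = defaultdict(list)
    for sent in sentences:
        buckets[(sent["topic"], sent["language"])].append(sent)

    train, dev = [], []
    for bucket in buckets.values():
        n_dev = int(len(bucket) * dev_fraction)
        dev.extend(bucket[:n_dev])    # top 5% of sentences -> development
        train.extend(bucket[n_dev:])  # remaining 95%       -> training
    return train, dev
```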
These experiments have the following goals: (i) compare our approach's performance with the official SLAVNER2019 scores; and (ii) evaluate the impact of the added languages for the same topics in SLAVNER2021.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "5.1" }, { "text": "For the SLAVNER2019 experiments we used the original test set data. As for the train data, we noticed a mismatch between the number of documents in the original data available for download and the information reported in Piskorski et al. (2019) . Since the equivalent data (i.e., the same topics and languages) in this year's data matched both the expected number of documents and entities, we used it instead. For the SLAVNER2021 experiments we use the available data with the aforementioned topic splits.", "cite_spans": [ { "start": 221, "end": 244, "text": "Piskorski et al. (2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "5.1" }, { "text": "The obtained results can be seen in Table 1 . The impact of further finetuning Multilingual BERT with the four SLAVNER2019 languages in Slavic BERT is noticeable and it results in the best overall scores for that edition's data. However, the extra finetuning step degrades considerably the performance for the two added language in SLAVNER2021, where the model performs worse than any other. This finetune mismatch might partially explain the fact why XLM-R outperforms Slavic BERT in the overall metrics of SLAVNER2021, as opposed to what is observed for SLAVNER2019. Another key aspect for XLM-R's performance is the fact this contextual embedding model is much larger than its BERT-base counterparts (300M vs 120M parameters).", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 43, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "5.1" }, { "text": "Overall, all models trained with SLAVNER2019 largely outperformed the best scores reported for the 2 nd edition of the shared task 2 . We hypothesize these differences are due to: (i) a larger multilingual contextual embedding model, such as XLM-R Large; (ii) a biaffine approach which has been shown to outperform CRF approaches (Yu et al., 2020) ; (iii) finetuning the top layers of the contextual embedding model during training, which has a positive impact over simple feature extraction (Sun et al., 2019) ; and (iv) train/development sets using all non-test available topics.", "cite_spans": [ { "start": 330, "end": 347, "text": "(Yu et al., 2020)", "ref_id": "BIBREF22" }, { "start": 492, "end": 510, "text": "(Sun et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "5.1" }, { "text": "The first hypothesis matches what we have observed in the results presented in Table 1 . In particular, the increased model capacity of XLM-R helps to mitigate the multilingual language model tradeoff highlighted by Conneau et al. (2020) , \"for a fixed-size model, the per-language capacity decreases as we increase the number of languages\".", "cite_spans": [ { "start": 216, "end": 237, "text": "Conneau et al. 
(2020)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 79, "end": 86, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "5.1" }, { "text": "The last two hypotheses can be discussed together with the results presented in Table 2 , where we perform a simple ablation study. We can observe that not finetuning the top-layers of the contextual embedding model hurts performance. It is plausible to attribute this to the unique characteristics of the languages and entities of the task at hand, where making part of the parameters trainable allows the model to learn better contextual representations for the NER task.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "5.1" }, { "text": "Using both topics as train and development data has a large impact in terms of performance, when compared with using BREXIT as training data and ASIA_BIBI as development data. Even though the scores obtained for the development data are artificially larger, it appears that the model's ability to generalize to new topics is not affected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "5.1" }, { "text": "The two last results of Table 2 provide us interesting insights. First, it is noticeable that adding SLAVNER2017 data degrades performance. This was something we observed for all SLAVNER2021 experiments, and for all SLAVNER2019 experiments excluding the one that used Multilingual BERT. Secondly, removing the character-level embeddings seems to have an almost negligible impact. However, we noticed some variance in scores among runs with regard to this change.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 31, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "5.1" }, { "text": "We have made a total of five submissions to this year's shared task, as detailed in Appendix B. Following the scores reported in Table 1 , and the corresponding discussion in Subsection 5.1, we have decided to use XLM-R in most of our submissions. The first four submissions correspond to a XLM-R with all the possible combinations with/without character embeddings and including/not including the SLAVNER2017 data in the training data. We selected these combinations due to the observed variance of scores when using character embeddings, and due to the fact that both Ukrainian and Slovene are part of the SLAVNER2017 data. The fifth submission is an hybrid approach, where the Ukrainian and Slovene documents are predicted with a model trained with a XLM-R using both SLAVNER2021 and SLAVNER2017 data, and the predictions for the remaining languages were obtained with a Slavic BERT model. Our submissions' scores can be seen in Table 3 . The second submission, which uses a XLM-R with character embeddings, scored the highest in terms of F1 for the overall metrics and in four of the six languages in terms of strict matching F1. It is noticeable that adding the SLAVNER2017 data to the training data had a negative impact, and that the impact of character embeddings is variable. 
The hybrid approach (S5) did not yield the expected scores, since using Slavic BERT as the contextual embedding model did not improve performance for BG/CS/PL/RU, as opposed to what we observed in our preliminary experiments.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 136, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 932, "end": 940, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "3 rd SlavNER Shared Task Submissions", "sec_num": "5.2" }, { "text": "With regard to the entity-level results, as observed in Table 4 , our scores for PRO and EVT seem subpar. After analyzing the error-log files, we noticed some common mistakes: (i) We miss some quotation marks, e.g., we predict abcd instead of \u00ababcd\u00bb; (ii) Covid-19 related tags are mostly erroneously classified as PRO and not as EVT; and (iii) we miss topic-specific vocabulary. Some of the most common wrong/missed target predictions for the \"covid-19\" topic are EVT-Covid-19, EVT-Pandemic, and ORG-BioNTech, and for the \"us_election_2020\" topic are PRO-CNN, ORG-The-White-House, and ORG-Republican-Party-USA.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "3 rd SlavNER Shared Task Submissions", "sec_num": "5.2" }, { "text": "At the time of writing this document we do not yet have access to the overall results, and therefore cannot comment on our relative performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3 rd SlavNER Shared Task Submissions", "sec_num": "5.2" }, { "text": "We have proposed a multilingual approach to the 3 rd SlavNER Shared Task, where we can make use of a single model trained with multiple source languages to predict NER tags. In particular, we have shown that using a large model, such as XLM-R, coupled with character-level embeddings and a biaffine classifier is able to perform well when replicating the scenario of the 2 nd Shared Task on SlavNER, as well as for the languages added to this year's edition of the SlavNER shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://github.com/juntaoy/biaffine-ner", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://bsnlp.cs.helsinki.fi/bsnlp-2019/final_ranking.pdf", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/juntaoy/biaffine-ner 4 See tf.keras.optimizers.schedules.ExponentialDecay", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the EU H2020 SELMA project (grant agreement No 957017) and the Lisbon Regional Operational Programme (Lisboa 2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (ERDF), within project TRAINER (N\u00ba045347).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "We implement our approach using PyTorch (Paszke et al., 2019) . We train models for 60 epochs, evaluating the model twice per epoch, and stop training early if NERC F1 does not improve in the development set after 24 validation steps. All models are optimized with Adam (Kingma and Ba, 2015), with a batch size of 32 and a maximum grad norm of 5. We keep the model with the highest NERC F1 score in the development set. 
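A minimal sketch of the optimization loop described above (Adam, batch size 32, gradient norm clipping at 5, two validations per epoch, early stopping after 24 validations without NERC F1 improvement, keeping the best development checkpoint). The model, data loaders, and evaluation function are placeholders, and the per-parameter-group learning rates and schedulers described below are omitted.

```python
# Illustrative training loop with twice-per-epoch validation and early stopping.
import torch


def train(model, train_loader, dev_loader, evaluate_nerc_f1,
          max_epochs: int = 60, patience: int = 24):
    optimizer = torch.optim.Adam(model.parameters())
    eval_every = max(1, len(train_loader) // 2)  # evaluate twice per epoch
    best_f1, stale, best_state = -1.0, 0, None

    for _ in range(max_epochs):
        for step, batch in enumerate(train_loader, 1):  # loader built with batch_size=32
            loss = model(**batch)                       # assumed to return the loss
            optimizer.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
            optimizer.step()

            if step % eval_every == 0:
                f1 = evaluate_nerc_f1(model, dev_loader)
                if f1 > best_f1:                        # keep the best dev checkpoint
                    best_f1, stale = f1, 0
                    best_state = {k: v.detach().clone()
                                  for k, v in model.state_dict().items()}
                else:
                    stale += 1
                    if stale >= patience:               # early stopping
                        model.load_state_dict(best_state)
                        return model
    if best_state is not None:
        model.load_state_dict(best_state)
    return model
```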
Using scores from 33 experiments we have calculated Pearson Correlation Coefficient values of 0.89/0.79/0.84 between the NERC F1 test set values and the official RPM/REM/SM metrics. Despite the mismatch, the correlation values show that NERC F1 is a good approximation of the official metrics.The character embeddings are learned during training, and the contextual embedding model, implemented using Hugging-Face Transformers library (Wolf et al., 2019) , is further finetuned. In particular, until otherwise mentioned, we freeze all parameters of the contextual embedding model apart from the embedding-layer and the top-4 layers. Moreover, an embedding pooler learns a weighted average of the contextual embedding model's top-4 layers for each token. We have not observed gains from either finetuning more layers, nor using more layers when pooling the representation for a given token. Following Devlin et al. (2019) we represent each token by its first subtoken. Both embeddings models use a learning rate of 5e-5, with a linear scheduler where the maximum value occurs after 10% of the training steps.All the contextual embedding models are available in the HuggingFace Transformers library, under the names BERT-BASE-MULTILINGUAL-CASED (Multilingual BERT), XLM-ROBERTA-LARGE (XLM-R Large), and DEEPPAVLOV/BERT-BASE-BG-CS-PL-RU-CASED (Slavic BERT). Both multilingual BERT and XLM-R cover all the six languages that are part of this shared task.Our implementation of the biaffine classifier model mostly follows the original implementation 3 . It uses a learning rate of 1e-3 with a exponential scheduler, implemented from its TensorFlow version 4 . We follow Yu et al. (2020) choice of hyperparameters, as described in Table 5 . ", "cite_spans": [ { "start": 40, "end": 61, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF12" }, { "start": 855, "end": 874, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF21" }, { "start": 2085, "end": 2101, "text": "Yu et al. (2020)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 2145, "end": 2152, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "A Training Details", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Contextual string embeddings for sequence labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th international conference on computational linguistics", "volume": "", "issue": "", "pages": "1638--1649", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence la- beling. 
In Proceedings of the 27th international con- ference on computational linguistics, pages 1638- 1649.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Tuning multilingual transformers for language-specific named entity recognition", "authors": [ { "first": "Mikhail", "middle": [], "last": "Arkhipov", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Trofimova", "suffix": "" }, { "first": "Yurii", "middle": [], "last": "Kuratov", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Sorokin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", "volume": "", "issue": "", "pages": "89--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Arkhipov, Maria Trofimova, Yurii Kuratov, and Alexey Sorokin. 2019. Tuning multilingual transformers for language-specific named entity recognition. In Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing, pages 89-93.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Named entity recognition with bidirectional lstm-cnns", "authors": [ { "first": "P", "middle": [ "C" ], "last": "Jason", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "", "middle": [], "last": "Nichols", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "357--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transac- tions of the Association for Computational Linguis- tics, 4:357-370.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "\u00c9douard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, \u00c9douard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. 
In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Xtreme: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation", "authors": [ { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2020, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "4411--4421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi- task benchmark for evaluating cross-lingual gener- alisation. In International Conference on Machine Learning, pages 4411-4421. PMLR.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "ICLR (Poster)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR (Poster).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando Cn", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. 
Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Backpropagation applied to handwritten zip code recognition", "authors": [ { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Boser", "suffix": "" }, { "first": "S", "middle": [], "last": "John", "suffix": "" }, { "first": "Donnie", "middle": [], "last": "Denker", "suffix": "" }, { "first": "Richard", "middle": [ "E" ], "last": "Henderson", "suffix": "" }, { "first": "Wayne", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Lawrence", "middle": [ "D" ], "last": "Hubbard", "suffix": "" }, { "first": "", "middle": [], "last": "Jackel", "suffix": "" } ], "year": 1989, "venue": "Neural computation", "volume": "1", "issue": "4", "pages": "541--551", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann LeCun, Bernhard Boser, John S Denker, Don- nie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. 1989. Backpropagation ap- plied to handwritten zip code recognition. Neural computation, 1(4):541-551.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Mlqa: Evaluating cross-lingual extractive question answering", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7315--7330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. Mlqa: Evaluat- ing cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7315- 7330.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. 
arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1064--1074", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. 
Curran Asso- ciates, Inc.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227- 2237.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "How multilingual is multilingual bert?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": {}, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The second crosslingual challenge on recognition, normalization, classification, and linking of named entities across slavic languages", "authors": [ { "first": "Jakub", "middle": [], "last": "Piskorski", "suffix": "" }, { "first": "Laska", "middle": [], "last": "Laskova", "suffix": "" }, { "first": "Micha\u0142", "middle": [], "last": "Marci\u0144czuk", "suffix": "" }, { "first": "Lidia", "middle": [], "last": "Pivovarova", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "P\u0159ib\u00e1\u0148", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Steinberger", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", "volume": "", "issue": "", "pages": "63--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jakub Piskorski, Laska Laskova, Micha\u0142 Marci\u0144czuk, Lidia Pivovarova, Pavel P\u0159ib\u00e1\u0148, Josef Steinberger, and Roman Yangarber. 2019. The second cross- lingual challenge on recognition, normalization, classification, and linking of named entities across slavic languages. 
In Proceedings of the 7th Work- shop on Balto-Slavic Natural Language Processing, pages 63-74.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bidirectional recurrent neural networks", "authors": [ { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "K", "middle": [], "last": "Kuldip", "suffix": "" }, { "first": "", "middle": [], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "IEEE transactions on Signal Processing", "volume": "45", "issue": "11", "pages": "2673--2681", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE transactions on Signal Processing, 45(11):2673-2681.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Neural architectures for nested ner through linearization", "authors": [ { "first": "Jana", "middle": [], "last": "Strakov\u00e1", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5326--5331", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jana Strakov\u00e1, Milan Straka, and Jan Hajic. 2019. Neu- ral architectures for nested ner through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "How to fine-tune bert for text classification?", "authors": [ { "first": "Chi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Yige", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "China National Conference on Chinese Computational Linguistics", "volume": "", "issue": "", "pages": "194--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In China National Conference on Chinese Computa- tional Linguistics, pages 194-206. Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bsnlp2019 shared task submission: Multisource neural ner transfer", "authors": [ { "first": "Tatiana", "middle": [], "last": "Tsygankova", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", "volume": "", "issue": "", "pages": "75--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tatiana Tsygankova, Stephen Mayhew, and Dan Roth. 2019. Bsnlp2019 shared task submission: Multi- source neural ner transfer. 
In Proceedings of the 7th Workshop on Balto-Slavic Natural Language Pro- cessing, pages 75-82.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 6000-6010.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.03771" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Named entity recognition as dependency parsing", "authors": [ { "first": "Juntao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6470--6476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. 
In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6470- 6476.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Highway long short-term memory rnns for distant speech recognition", "authors": [ { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guoguo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Kaisheng", "middle": [], "last": "Yaco", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2016, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "5755--5759", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yaco, Sanjeev Khudanpur, and James Glass. 2016. High- way long short-term memory rnns for distant speech recognition. In 2016 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 5755-5759. IEEE.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "type_str": "table", "text": "Best Reported Scores 90.94 86.40 85.66 84.07 88.52 88.25 82.03 --Multilingual BERT 92.26 88.06 88.62 91.56 84.91 84.12 91.62 --Slavic BERT 95.07 91.73 91.75 94.01 87.14 91.69 93.52 88.69 89.32 92.68 85.00 88.28 91.47 85.05 89.99 Slavic BERT 93.54 88.86 89.48 93.38 87.14 92.51 93.04 77.91 85.34 XLM-R Large 94.04 89.73 90.18 93.88 86.14 89.33 92.76 82.86 90.64", "html": null, "content": "
                               F1 - All                 F1 - SM - All
        Model                  RPM    REM    SM         CS     RU     BG     PL     UK     SL
2nd ST  Best Reported Scores   90.94  86.40  85.66      84.07  88.52  88.25  82.03  -      -
        Multilingual BERT      92.26  88.06  88.62      91.56  84.91  84.12  91.62  -      -
        Slavic BERT            95.07  91.73  91.75      94.01  87.14  91.69  93.52  -      -
        XLM-R Large            94.91  90.68  91.07      93.98  87.11  89.15  92.83  -      -
3rd ST  Multilingual BERT      93.32  88.69  89.32      92.68  85.00  88.28  91.47  85.05  89.99
        Slavic BERT            93.54  88.86  89.48      93.38  87.14  92.51  93.04  77.91  85.34
        XLM-R Large            94.04  89.73  90.18      93.88  86.14  89.33  92.76  82.86  90.64
" }, "TABREF1": { "num": null, "type_str": "table", "text": "Results obtained for the topics NORD_STREAM and RYANAIR using SLAVNER2019 data (2 nd ST) and SLAVNER2021 data (3 rd ST).", "html": null, "content": "
                        F1 - All - Score Diff
Base                    RPM     REM     SM
XLM-RoBERTa-Large       94.04   89.73   90.18
- Finetune Layers       -1.10   -0.94   -1.21
- Both Topics           -2.97   -3.67   -3.38
- Char Embeds           +0.08   +0.13   +0.18
+ 2017 Data             -0.31   -0.34   -0.31
" }, "TABREF2": { "num": null, "type_str": "table", "text": "Difference in performance obtained for the NORD_STREAM and RYANAIR topics of the SLAVNER2021 when modifying our approach.", "html": null, "content": "" }, "TABREF3": { "num": null, "type_str": "table", "text": "80.07 79.34 79.94 81.43 85.38 73.65 84.24 77.95 78.51 77.99 77.67 80.98 82.94 72.57 83.46 76.27", "html": null, "content": "
                                           F1 - All               F1 - SM - All
       Model                  CE    2017   RPM    REM    SM       BG     CS     PL     RU     SL     UK
(S1)   XLM-R                  -     -      85.24  79.51  78.92    78.94  82.10  84.83  73.48  83.80  76.84
(S2)   XLM-R                  x     -      85.66  80.07  79.34    79.94  81.43  85.38  73.65  84.24  77.95
(S3)   XLM-R                  -     x      84.78  79.51  78.83    80.00  80.69  84.76  73.39  83.46  76.27
(S4)   XLM-R                  x     x      83.77  78.03  77.82    77.61  80.03  82.92  72.90  82.35  76.67
(S5)   XLM-R / Slavic BERT    -/-   x/-    84.29  78.51  77.99    77.67  80.98  82.94  72.57  83.46  76.27
" }, "TABREF4": { "num": null, "type_str": "table", "text": "Results obtained for the test set of the 3 rd edition of the shared task. CE -Includes contextual embeddings. 2017 -Includes SLAVNER2017 data.", "html": null, "content": "
                 F1 - SM - All
Model    PER    LOC    ORG    PRO    EVT
S1       90.88  90.30  70.17  52.00  11.63
S2       90.69  90.67  70.68  54.27  10.46
S3       90.69  90.79  69.74  43.35  05.13
S4       90.55  90.37  69.06  39.62  07.28
S5       89.55  89.75  69.16  50.40  06.11
" }, "TABREF5": { "num": null, "type_str": "table", "text": "Entity-level results for the test set of the 3 rd edition of the shared task.", "html": null, "content": "" } } } }