{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:26:47.513594Z" }, "title": "French Contextualized Word-Embeddings with a sip of CaBeRnet: a New French Balanced Reference Corpus", "authors": [ { "first": "Murielle", "middle": [], "last": "Popa-Fabre", "suffix": "", "affiliation": {}, "email": "murielle.fabre@inria.fr" }, { "first": "Pedro", "middle": [ "Javier" ], "last": "Ortiz Su\u00e1rez", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "", "affiliation": {}, "email": "benoit.sagot@inria.fr" }, { "first": "Eric", "middle": [], "last": "De La Clergerie", "suffix": "", "affiliation": {}, "email": "eric.de_la_clergerie@inria.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper investigates the impact of different types and size of training corpora on language models. By asking the fundamental question of quality versus quantity, we compare four French corpora by pre-training four different ELMOs and evaluating them on dependency parsing, POS-tagging and Named Entities Recognition downstream tasks. We present and asses the relevance of a new balanced French corpus, CaBeRnet, that features a representative range of language usage, including a balanced variety of genres (oral transcriptions, newspapers, popular magazines, technical reports, fiction, academic texts), in oral and written styles. We hypothesize that a linguistically representative corpus will allow the language models to be more efficient, and therefore yield better evaluation scores on different evaluation sets and tasks.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper investigates the impact of different types and size of training corpora on language models. 
By asking the fundamental question of quality versus quantity, we compare four French corpora by pre-training four different ELMos and evaluating them on dependency parsing, POS-tagging and Named Entity Recognition downstream tasks. We present and assess the relevance of a new balanced French corpus, CaBeRnet, that features a representative range of language usage, including a balanced variety of genres (oral transcriptions, newspapers, popular magazines, technical reports, fiction, academic texts), in oral and written styles. We hypothesize that a linguistically representative corpus will allow the language models to be more efficient, and therefore yield better evaluation scores on different evaluation sets and tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The question of quality versus size of training corpora is gaining increasing attention and interest in the context of the latest developments in neural language models' performance. The long-standing issue of corpus \"representativeness\" is addressed here, in order to grasp to what extent a linguistically balanced cross-genre language sample is sufficient for a language model to gain in accuracy for contextualized word-embeddings on different NLP tasks. Several increasingly large corpora are nowadays compiled from the web, e.g. frWAC (Baroni et al., 2009) , CCNet (Wenzek et al., 2019) and OSCAR-fr (Ortiz Su\u00e1rez et al., 2019) . However, does large size necessarily go along with better performance in language model training? Their alleged lack of representativeness has called for inventive ways of building a French balanced corpus offering new insights into language variation and NLP. Following Biber's definition, \"representativeness refers to the extent to which a sample includes the full range of variability in a population\" (Biber, 1993, 244) . 
We adopt a balanced approach by sampling a wide spectrum of language use and its cross-genre variability, be it situational (e.g. format, author, addressee, purposes, settings or topics) or linguistic, e.g. linked to distributional parameters like frequencies of word classes and genres. In this way, we developed two new corpora. The French Balanced Reference Corpus -CaBeRnet -includes a wide-ranging and balanced coverage of cross-genre language use, so as to be maximally representative of the French language and therefore to support good generalizations. The second corpus, the French Children Book Test (CBT-fr), includes both narrative material and oral language use as present in youth literature, and will be used for domain-specific language model training. Both are inspired by existing American and English corpora, respectively COCA, the balanced Corpus of Contemporary American English (Davies, 2008) , and the Children Book Test (Hill et al., 2015, CBT) . The second main contribution of this paper lies in the evaluation of the quality of the word-embeddings obtained by pre-training and fine-tuning on different corpora, which are made publicly available here, based on the underlying assumption that a linguistically representative corpus may generate better word-embeddings. We provide an evaluation-based investigation of how a balanced cross-genre corpus can yield improvements in the performance of neural language models like ELMo (Peters et al., 2018) on various downstream tasks. 
The two corpora, CaBeRnet and CBT-fr, and the ELMos will be distributed freely under a Creative Commons license.", "cite_spans": [ { "start": 543, "end": 564, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF1" }, { "start": 573, "end": 594, "text": "(Wenzek et al., 2019)", "ref_id": "BIBREF35" }, { "start": 599, "end": 635, "text": "OSCAR-fr (Ortiz Su\u00e1rez et al., 2019)", "ref_id": null }, { "start": 1045, "end": 1063, "text": "(Biber, 1993, 244)", "ref_id": null }, { "start": 1964, "end": 1978, "text": "(Davies, 2008)", "ref_id": "BIBREF9" }, { "start": 2008, "end": 2032, "text": "(Hill et al., 2015, CBT)", "ref_id": null }, { "start": 2528, "end": 2549, "text": "(Peters et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Specifically, we want to investigate the contribution of oral language use as present in different corpora. Through a series of comparisons, we contrast a more domain-specific and written corpus like Wikipedia-fr with the newly built domain-specific CBT-fr corpus, which additionally features oral-style dialogues, like the ones found in youth literature. To test for the effect of corpus size, we further compare a wide-ranging corpus of varied linguistic phenomena crawled from the internet, like OSCAR (Ortiz Su\u00e1rez et al., 2019) , with our newly built French Balanced Reference Corpus CaBeRnet. Our aim is to assess the benefits that can be gained from a balanced, multi-domain corpus such as CaBeRnet, despite its being 34 times smaller than the web-based OSCAR.", "cite_spans": [ { "start": 528, "end": 555, "text": "(Ortiz Su\u00e1rez et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The paper is organized as follows. Sections 2. and 3. 
are dedicated to a descriptive overview of the construction of our two newly brewed corpora, CaBeRnet and CBT-fr, including quantitative measures like type-token ratio and morphological richness. Section 4. presents the evaluation methods for the POS-tagging, NER and dependency parsing tasks, while results are introduced in \u00a75. Finally, we conclude in \u00a76. on the computational relevance of word-embeddings obtained through a balanced and representative corpus, and broaden the discussion on the benefits of smaller and cleaner corpora in neural NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The CaBeRnet corpus was inspired by the genre partition of the American balanced corpus COCA, which currently contains over 618 million words of text (20 million words each year 1990-2019) and is equally divided among spoken, fiction, popular magazines, newspapers, and academic texts (Davies, 2008) . A second reference, guiding our approach and sampling method, is one of the earliest precursors of balanced reference corpora: the BNC (Burnard, 2007) , which first covered a wide variety of genres with the intention of being a representative sample of spoken and written language. CaBeRnet was obtained by compiling existing data-sets and web-text extracted from different sources, as detailed in this section. 
As shown in Table 1 , genre sources are evenly divided (\u223c120 million words each) into spoken, fiction, magazine, newspaper and academic texts, so as to achieve a genre balance between oral and written modalities (newspapers and popular written style, technical reports and Wikipedia entries, fiction and literature, and academic production).", "cite_spans": [ { "start": 281, "end": 295, "text": "(Davies, 2008)", "ref_id": "BIBREF9" }, { "start": 433, "end": 448, "text": "(Burnard, 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 713, "end": 720, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "CaBeRnet", "sec_num": "2.1." }, { "text": "The oral sub-portion gathers both oral transcriptions (ORFEO and Rhapsodie 1 ) and film subtitles (OpenSubtitles.org), stripped of diacritics, interlocutor tags and time stamps. To these transcriptions, the French European Parliament Proceedings (1996-2011), as presented in Koehn (2005) , contributed a sample of more complex oral style, with longer sentences and a richer vocabulary.", "cite_spans": [ { "start": 382, "end": 394, "text": "Koehn (2005)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "CaBeRnet Oral", "sec_num": null }, { "text": "The whole Popular Press sub-portion is gathered from an open data-set of the Est R\u00e9publicain (1999, 2002 and 2003) , a regional press title 2 . It was selected to represent popular style, as it is characterized by an easy-to-read press style and the wide range of everyday topics typical of local regional press. La D\u00e9p\u00eache (2002-2003), L'Humanit\u00e9 (2002-2003) and AFP reports (2007-2012) contributed, in a simpler and more telegraphic style, the newspaper written sample of the corpus. 4 CaBeRnet Academic The academic genre was also built from different sources, including technical and educational texts from WikiBooks and a Wikipedia dump (prior to 2016), chosen for the thematic variety of their highly specialized written production. 
The ORFEO corpus offered a small sample of academic writings, such as PhD dissertations and scientific articles encompassing a wide range of disciplinary topics, and the TALN corpus 5 was included to represent the more concise written style characteristic of scientific abstracts and proceedings.", "cite_spans": [ { "start": 98, "end": 119, "text": "(1999, 2002 and 2003)", "ref_id": null }, { "start": 314, "end": 318, "text": "2002", "ref_id": null }, { "start": 319, "end": 324, "text": "-2003", "ref_id": "BIBREF0" }, { "start": 325, "end": 342, "text": ", La D\u00e9p\u00e8che 2002", "ref_id": null }, { "start": 343, "end": 348, "text": "-2003", "ref_id": "BIBREF0" }, { "start": 349, "end": 366, "text": ", L'Humanit\u00e9 2002", "ref_id": null }, { "start": 367, "end": 372, "text": "-2003", "ref_id": "BIBREF0" }, { "start": 381, "end": 391, "text": "(AFP, 2007", "ref_id": null }, { "start": 392, "end": 403, "text": "(AFP, -2011", "ref_id": null }, { "start": 404, "end": 415, "text": "(AFP, -2012", "ref_id": null }, { "start": 508, "end": 509, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "CaBeRnet Popular Press", "sec_num": null }, { "text": "Table 2 : Lexical statistics of French CBT, computed as described in \u00a73.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 16, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "CaBeRnet Fiction & Literature", "sec_num": null }, { "text": "We used two different tokenizers: SEM, the standalone Segmenteur-\u00c9tiqueteur Markovien (Dupont, 2017), and TreeTagger. Both are based on cascades of regular expressions, and both perform tokenization and sentence splitting. The first was used for descriptive purposes because it made it technically feasible to segment and tokenize all corpora, including OSCAR (23 billion words). Hence, all corpora were entirely segmented into sentences and tokenized using SEM. 
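As an illustration only, the cascade idea behind such tools can be sketched as a toy, regex-free splitter and tokenizer (a minimal sketch; the rules below are hypothetical and far simpler than the actual SEM or TreeTagger cascades):

```python
# Toy sketch of cascade-style sentence splitting and tokenization.
# Illustrative only: real tools like SEM handle abbreviations, clitics
# and French-specific punctuation that this sketch ignores.

SENT_FINAL = '.!?'
PUNCT = '.,;:!?'

def split_sentences(text):
    # First cascade step: cut after sentence-final punctuation.
    sentences, current = [], []
    for word in text.split():
        current.append(word)
        if word[-1] in SENT_FINAL:
            sentences.append(' '.join(current))
            current = []
    if current:
        sentences.append(' '.join(current))
    return sentences

def tokenize(sentence):
    # Second cascade step: detach trailing punctuation from word forms.
    tokens = []
    for word in sentence.split():
        core = word.rstrip(PUNCT)
        if core:
            tokens.append(core)
        tokens.extend(word[len(core):])  # one token per punctuation mark
    return tokens
```

On a toy input such as Il pleut. Je sors demain. this yields two sentences, with final punctuation detached as separate tokens. 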
The second tokenization method was run only on 3-million-word samples, in order to automatically POS-tag and lemmatize them with TreeTagger. 7 All corpora were randomly shuffled at the sentence level before selecting samples of 3 million words, so as to compare them in terms of lexical composition (Type-Token Ratio, see Table 4 ).", "cite_spans": [ { "start": 598, "end": 599, "text": "7", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 772, "end": 779, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Corpora Descriptive Comparison", "sec_num": "3." }, { "text": "Sentence length is a simple measure for quantifying both sentence syntactic complexity and genre. Hence, the number of sentences reported in Table 3 shows interesting distributional patterns across genres; consider the comparison between CaBeRnet and Wiki-fr. In our effort to evaluate the impact of pre-training corpora on ELMo-based contextualized word-embeddings, we introduce here our two terms of comparison, namely the crawled corpus OSCAR-fr and the Wikipedia-fr one.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Corpora Size and Composition", "sec_num": "3.1." }, { "text": "As it has been shown that pre-trained language models can be significantly improved by using more data (Liu et al., 2019; Raffel et al., 2019) , we decided to include in our comparison a corpus of French text extracted from Common Crawl 8 . We leverage a recently published corpus, OSCAR (Ortiz Su\u00e1rez et al., 2019) , which offers a pre-classified and pre-filtered version of the November 2018 Common Crawl snapshot. 
7 Based on the tag-set available at https://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/data/french-tagset.html. 8 More information available at https://commoncrawl.org/about/.", "cite_spans": [ { "start": 103, "end": 121, "text": "(Liu et al., 2019;", "ref_id": "BIBREF20" }, { "start": 122, "end": 142, "text": "Raffel et al., 2019)", "ref_id": "BIBREF29" }, { "start": 291, "end": 318, "text": "(Ortiz Su\u00e1rez et al., 2019)", "ref_id": null }, { "start": 418, "end": 419, "text": "7", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "OSCAR fr", "sec_num": "3.1.1." }, { "text": "OSCAR gathers a set of monolingual texts extracted from Common Crawl -in plain text WET format -where all HTML tags are removed and all text encodings are converted to UTF-8. It follows a similar approach to (Grave et al., 2018) by using a language classification model based on the fastText linear classifier (Joulin et al., 2016; Grave et al., 2017) pre-trained on Wikipedia, Tatoeba and SETimes, supporting 176 different languages.", "cite_spans": [ { "start": 207, "end": 227, "text": "(Grave et al., 2018)", "ref_id": "BIBREF14" }, { "start": 309, "end": 330, "text": "(Joulin et al., 2016;", "ref_id": "BIBREF16" }, { "start": 331, "end": 350, "text": "Grave et al., 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "OSCAR fr", "sec_num": "3.1.1." }, { "text": "After language classification, a deduplication step is performed without introducing a specialized filtering scheme: only paragraphs containing 100 or more UTF-8 encoded characters are kept. This makes OSCAR an example of unfiltered data that is nearly as noisy as the original crawled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OSCAR fr", "sec_num": "3.1.1." 
}, { "text": "This corpus collects a selection of pages from Wikipediafr from a dump executed in April 2019, where HTML tags and tables were removed, together with template expansion using Attardi's tool (WikiExtractor, \u00a72.1.). As reported on ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FrWIKI", "sec_num": "3.1.2." }, { "text": "Focusing on a useful measure of complexity that documents lexical richness or variety in vocabulary, we present the type-token ration (TTR) of the corpora under analysis. Generally used to asses language use aspects like the variety of different words used to communicate by learners or children, it represents the total number of unique words (types/forms) divided by the total number of tokens in a given sample of language production. Hence, the closer the TTR ratio is to 1, the greater the lexical richness of the corpus. Table 1 summarizes the lexical variety of the five subportions of CaBeRnet, respectively taken as representative of Oral, Popular, Fiction, News, and Academic genres. Domain diversity of texts can be observed in the lexical statistics showing a gradual increase in the number of distinct lexical forms (cf. TTR). This pattern reflects a generally acknowledged distributional pattern of vocabulary-size across genres. Oral style shows a poorer lexical variety compared to newspapers/magazines' textual typology. The lexically rich fictional/classic literature is outreached by academic writing-style with its wide-ranging specialized vocabulary. All in all, Table 1 quantitatively demonstrates that the selected textual and oral materials are indeed representative of the five types of genres of CaBeRnet.", "cite_spans": [], "ref_spans": [ { "start": 527, "end": 534, "text": "Table 1", "ref_id": "TABREF3" }, { "start": 1184, "end": 1191, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Corpora Lexical Variety", "sec_num": "3.2." 
}, { "text": "To select a measure that would help quantifying the different corpora morphological richness, we follow (Bonami and Beniamine, 2015) . Hence, the proportion of lemmas with multiple forms in a given vocabulary size was evaluated on randomly selected samples of 3-million-words from each corpus under analysis (see Table 4 ). Table 4 reports some more in-depth lexical and morphological statistics across corpora. Although OSCAR is 34 times bigger than CaBeRnet, their total number of forms and the proportion of lemmas having more than one form in a 3-million-word sample are comparable. FrWiki shows a radically different lexical distribution with numerous hapaxes but a lower morphological richness. Although its total number of forms is more than one third higher than in OSCAR and CaBeRnet samples, the proportion of lemmas having more than one distinct form is around four points below CaBeRnet and OSCAR. Comparatively, youth literature in CBT-fr shows the greatest morphological richness, around 56% of lemmas have more than one form.", "cite_spans": [ { "start": 104, "end": 132, "text": "(Bonami and Beniamine, 2015)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 313, "end": 320, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 324, "end": 331, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Corpora Morphological richness", "sec_num": "3.3." }, { "text": "This section reports the method of experiments designed to better understand the computational impact of the quality, size and linguistic balance of ELMo's (Peters et al., 2018) pre-training ( \u00a74.1.) and their evaluations tasks ( \u00a74.3.).", "cite_spans": [ { "start": 156, "end": 177, "text": "(Peters et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora Evaluation Tasks", "sec_num": "4." }, { "text": "Embeddings from Language Models ELMo is an LSTM-based language model. 
More precisely, it uses a bidirectional language model, which combines a forward and a backward LSTM-based language model. ELMo also computes a context-independent token representation via a CNN over characters. Methodologically, we selected ELMo because it not only performs generally better on sequence tagging than other architectures, but is also better suited to pre-training on small corpora thanks to its smaller number of parameters (93.6 million) compared to the RoBERTa-base architecture used for CamemBERT (BERT-base, 12 layers, 110 million parameters -Transformer) (Martin et al., 2019) .", "cite_spans": [ { "start": 631, "end": 652, "text": "(Martin et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora Evaluation Tasks", "sec_num": "4." }, { "text": "Two protocols were carried out to evaluate the impact of corpus characteristics on the tasks under analysis. Method 1 implies fully pre-training an ELMo-based language model for each of the corpora mentioned in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ELMo Pre-training & Fine-tuning Method", "sec_num": "4.1." }, { "text": "UDPipe Future (Straka, 2018) is an LSTM-based model ranked 3rd in dependency parsing and 6th in POS-tagging during the CoNLL 2018 shared task (Seker et al., 2018) . We report the scores as they appear in the paper by Kondratyuk (2019). We add to UDPipe Future five differently trained ELMo language models, pre-trained on the qualitatively and quantitatively different corpora under comparison. Additionally, we also test the impact of the CaBeRnet corpus on ELMo fine-tuning.", "cite_spans": [ { "start": 14, "end": 28, "text": "(Straka, 2018)", "ref_id": "BIBREF33" }, { "start": 144, "end": 164, "text": "(Seker et al., 2018)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Base evaluation systems", "sec_num": "4.2." }, { "text": "The LSTM-CRF is a model originally conceived by Lample et al. 
(2016). It is a Bi-LSTM that takes both character-level word embeddings and pre-trained word embeddings as input, with a CRF decoder layer on top. For our experiments, we use the implementation of (Strakov\u00e1 et al., 2019) , which is readily available 9 and is designed to easily prepend contextualized word-embeddings to the model.", "cite_spans": [ { "start": 259, "end": 282, "text": "(Strakov\u00e1 et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Base evaluation systems", "sec_num": "4.2." }, { "text": "We distinguish three main evaluation tasks that were performed to assess the lexical and syntactic quality of the contextualized word-embeddings obtained from the different pre-training corpora under comparison. Crucially, comparing them with an ELMo pre-trained on OSCAR and fine-tuned with CaBeRnet, i.e. ELMo OSCAR+CaBeRnet , allows us to control for the presence of oral transcriptions and proceedings, in order to understand their impact on the accuracy of our language model and on the development experiments after fine-tuning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "4.3." }, { "text": "The evaluation tasks were selected to probe to what extent corpus \"representativeness\" and balance impact syntactic representations, at both (1) the level of low-level syntactic relations, through POS-tagging, and (2) the level of higher-order syntactic relations at constituent and sentence level, through the dependency-parsing evaluation task. Namely, POS-tagging is a low-level syntactic task, which consists in assigning to each word its corresponding grammatical category. Dependency parsing is a higher-order syntactic task, which consists in predicting the labeled syntactic tree capturing the syntactic relations between words. We evaluate the performance of our models using the standard UPOS accuracy for POS-tagging, and Unlabeled Attachment Score (UAS) and Labeled Attachment Score (LAS) for dependency parsing. 
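For concreteness, the three scores can be sketched as follows (an illustrative computation under gold tokenisation and gold segmentation, not the official CoNLL 2018 evaluation script; tokens are represented here as hypothetical (upos, head, deprel) triples):

```python
# Illustrative scoring sketch, not the official CoNLL 2018 evaluator.
# Each token is a (upos, head, deprel) triple; gold and predicted
# sequences are aligned one-to-one because tokenisation is gold.

def upos_accuracy(gold, pred):
    # Share of tokens whose predicted POS tag matches the gold tag.
    return sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)

def attachment_scores(gold, pred):
    # UAS: correct head only; LAS: correct head and dependency label.
    uas = sum(g[1] == p[1] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g[1:] == p[1:] for g, p in zip(gold, pred)) / len(gold)
    return uas, las
```

For a three-token sentence where only one dependency label is wrong, this gives a UPOS accuracy and UAS of 1.0 and a LAS of 2/3. 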
We assume gold tokenisation and gold word segmentation as provided in the UD treebanks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic tasks", "sec_num": null }, { "text": "Lexical tasks To test the word-level representations obtained through the different pre-training corpora and fine-tunings, the Named Entity Recognition (NER) task was retained ( \u00a74.3.2.). Table 5 : Sizes of the 4 treebanks used in the evaluations of POS-tagging and dependency parsing.", "cite_spans": [], "ref_spans": [ { "start": 225, "end": 232, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Syntactic tasks", "sec_num": null }, { "text": "As NER involves a sequence labeling task that consists in predicting which words refer to real-world objects, such as people, locations, artifacts and organizations, it directly probes the quality and specificity of the semantic representations issued by the more or less balanced corpora under comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic tasks", "sec_num": null }, { "text": "Experiments were run using the Universal Dependencies (UD) paradigm and its corresponding UD POS-tag set (Petrov et al., 2011) and UD treebank collection version 2.2 (Nivre et al., 2018) , which was used for the CoNLL 2018 shared task. Different terms of comparison were considered on the two downstream tasks of part-of-speech (POS) tagging and dependency parsing.", "cite_spans": [ { "start": 105, "end": 126, "text": "(Petrov et al., 2011)", "ref_id": "BIBREF28" }, { "start": 166, "end": 186, "text": "(Nivre et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "POS-tagging and dependency parsing", "sec_num": "4.3.1." }, { "text": "Treebanks test data-set We perform our work on the four freely available French UD treebanks in UD v2.2: GSD, Sequoia, Spoken, and ParTUT, presented in Table 5 . 
The GSD treebank (McDonald et al., 2013) is the second-largest treebank available for French after the FTB (described in subsection 4.3.2.); it contains data from blogs, news, reviews, and Wikipedia. The Sequoia treebank (Candito et al., 2014) comprises more than 3,000 sentences from the French Europarl, the regional newspaper L'Est R\u00e9publicain, the French Wikipedia and documents from the European Medicines Agency. Spoken was automatically converted from the Rhapsodie treebank (Lacheret et al., 2014) with manual corrections. It consists of 57 sound samples of spoken French with phonetic transcription aligned with sound (word boundaries, syllables, and phonemes), together with syntactic and prosodic annotations.", "cite_spans": [ { "start": 175, "end": 198, "text": "(McDonald et al., 2013)", "ref_id": "BIBREF22" }, { "start": 377, "end": 399, "text": "(Candito et al., 2014)", "ref_id": "BIBREF6" }, { "start": 639, "end": 662, "text": "(Lacheret et al., 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 152, "end": 159, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "POS-tagging and dependency parsing", "sec_num": "4.3.1." }, { "text": "Finally, ParTUT is a conversion of a multilingual parallel treebank developed at the University of Turin, consisting of a variety of text genres, including talks, legal texts, and Wikipedia articles, among others; ParTUT data is derived from the already-existing parallel treebank Par(allel)TUT (Sanguinetti and Bosco, 2015). Table 5 contains a summary comparing the sizes of the treebanks.", "cite_spans": [], "ref_spans": [ { "start": 331, "end": 338, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "POS-tagging and dependency parsing", "sec_num": "4.3.1." }, { "text": "State-of-the-art For POS-tagging and parsing, we select as a baseline UDPipe Future (2.0), without any additional contextualized embeddings (Straka, 2018) . 
This model was ranked 3rd in dependency parsing and 6th in POS-tagging during the CoNLL 2018 shared task (Seker et al., 2018) . Notably, UDPipe Future provides us with a strong baseline that does not make use of any pre-trained contextual embedding. We report in Table 6 the published results on UDify by (Kondratyuk, 2019), a multitask and multilingual model based on mBERT that is near state-of-the-art on all UD languages, including French, for both POS-tagging and dependency parsing. Finally, it is also relevant to compare our results with CamemBERT on the selected tasks because, compared to UDify, it is the work that pushed performance furthest by fine-tuning a BERT-based model end-to-end.", "cite_spans": [ { "start": 139, "end": 153, "text": "(Straka, 2018)", "ref_id": "BIBREF33" }, { "start": 261, "end": 281, "text": "(Seker et al., 2018)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 414, "end": 421, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "POS-tagging and dependency parsing", "sec_num": "4.3.1." }, { "text": "Treebanks test data-set The benchmark data set from the French Treebank (FTB) (Abeill\u00e9 et al., 2003) was selected in its 2008 version, as introduced by Candito and Crabb\u00e9 (2009) and complemented with NER annotations by Sagot et al. (2012) 10 . The treebank shows a large proportion of entity mentions that are multi-word entities. We therefore report the three metrics that are commonly used to evaluate models: precision, recall, and F1 score. NER State-of-the-art English has received the most attention in NER in the past, with some recent developments in German, Dutch and Spanish by Strakov\u00e1 et al. (2019) . In French, no extensive work has been done due to the limited availability of NER corpora. We compare our model with the solid baselines established by (Dupont, 2018) , who trained both CRF and BiLSTM-CRF architectures on the FTB and enhanced them using heuristics and pre-trained word-embeddings. 
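As an illustration of this exact-match, entity-level evaluation (a minimal sketch under the assumption that entities are represented as (start, end, type) spans; this is not the actual FTB scoring tooling), the three metrics can be computed as:

```python
# Illustrative entity-level scoring over (start, end, type) spans.
# A prediction counts only on an exact span-and-type match, which is
# what makes the many multi-word entities of the FTB challenging.

def ner_prf(gold_spans, pred_spans):
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)  # exact matches only
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```

Under exact matching, a boundary error on a multi-word entity costs both a false positive and a false negative, which is why such mentions are penalized heavily. 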
An additional term of comparison was identified in a recently released state-of-the-art language model for French, CamemBERT (Martin et al., 2019) , based on the RoBERTa architecture and pre-trained on the French sub-corpus of the newly available multilingual corpus OSCAR (Ortiz Su\u00e1rez et al., 2019).", "cite_spans": [ { "start": 78, "end": 100, "text": "(Abeill\u00e9 et al., 2003)", "ref_id": "BIBREF0" }, { "start": 152, "end": 177, "text": "Candito and Crabb\u00e9 (2009)", "ref_id": "BIBREF5" }, { "start": 219, "end": 238, "text": "Sagot et al. (2012)", "ref_id": "BIBREF30" }, { "start": 594, "end": 616, "text": "Strakov\u00e1 et al. (2019)", "ref_id": "BIBREF34" }, { "start": 768, "end": 782, "text": "(Dupont, 2018)", "ref_id": "BIBREF11" }, { "start": 1040, "end": 1061, "text": "(Martin et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Named Entity Recognition", "sec_num": "4.3.2." }, { "text": "ELMo CaBeRnet : a test for balance The word-embedding representations offered by ELMo CaBeRnet are not only competitive with but sometimes better than the Wikipedia ones. One should keep in mind that almost all of the four treebanks we use in this section include Wikipedia data. ELMo CaBeRnet reaches state-of-the-art results in POS-tagging on Spoken. Notably, it performs better than CamemBERT, the previous state of the art on this specialized oral treebank (cf. dark gray highlight on Table 6 ). We understand this result as a clear effect of balance when testing on a purely spoken test set. Importantly, this effect can hardly be explained by the size of the oral-style data in CaBeRnet. The oral sub-part is only one fifth of the total, and within this one fifth, only an even smaller amount of data comes from purely oral transcripts comparable to the ones in the Spoken treebank, namely 67,444 words from the Rhapsodie corpus and 575,894 words from ORFEO. Hence, CaBeRnet's balanced coverage of oral language use pays off in POS-tagging. 
These results are all the more surprising given that our evaluation method aimed at comparing the quality of word-embedding representations rather than beating the state of the art.", "cite_spans": [], "ref_spans": [ { "start": 486, "end": 493, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Dependency Parsing and POS-tagging", "sec_num": "5.1." }, { "text": "ELMo CaBeRnet : a test for coverage From Table 6 , we discover that not only balance, but also the broad and diverse genre coverage of CaBeRnet may play a role in its POS-tagging success if we compare its results with ELMo CBT , which also features oral dialogues in youth literature. The fact that ELMo CBT does not show comparable performance in POS-tagging can be interpreted as linked to its size, but possibly also to its lack of variety in genres, thus suggesting the advantage of a comprehensive coverage of language use. This suggests that a balanced sample may enhance the convergence of generalizations about oral style from distinct genres that still involve oral-like dialogues, as in fiction. In sum, broad coverage may contribute to enhancing representations of oral language.", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 48, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Dependency Parsing and POS-tagging", "sec_num": "5.1." }, { "text": "The effect of balance on Fine-tuning For POS-tagging on GSD, ELMo OSCAR ranks second behind ELMo OSCAR+CaBeRnet , which is extremely close to ELMo Wikipedia . For POS-tagging on ParTUT, ELMo Wikipedia exhibits better results than ELMo OSCAR , with ELMo OSCAR+CaBeRnet in second position. 
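The POS-tagging and parsing scores compared throughout this section follow the standard UD evaluation: token-level tagging accuracy and unlabeled/labeled attachment scores (UAS/LAS). As a reading aid, here is a minimal sketch of how attachment scores are computed; the function name and toy data are illustrative, not part of our pipeline.

```python
def uas_las(gold, pred):
    """UAS/LAS over aligned tokens given as (head, deprel) pairs.

    UAS counts tokens with the correct head; LAS additionally
    requires the correct dependency label.
    """
    assert len(gold) == len(pred)
    n = len(gold)
    uas = sum(gh == ph for (gh, _), (ph, _) in zip(gold, pred)) / n
    las = sum(g == p for g, p in zip(gold, pred)) / n
    return uas, las

# Toy three-token sentence (head index 0 = artificial root).
gold = [(2, "det"), (0, "root"), (2, "obj")]
pred = [(2, "det"), (0, "root"), (2, "nmod")]  # right head, wrong label
uas, las = uas_las(gold, pred)  # uas = 1.0, las = 2/3
```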
Further comparing the GSD and Sequoia scores of ELMo OSCAR and ELMo OSCAR+CaBeRnet , we observe that fine-tuning the embeddings pre-trained on OSCAR with CaBeRnet yields better representations for the three tasks than both the original ELMo OSCAR and ELMo CaBeRnet . However, fine-tuning does not always yield better results than ELMo OSCAR : on Spoken and ParTUT, ELMo OSCAR+CaBeRnet ranks second behind ELMo OSCAR for the parsing scores UAS/LAS (cf. Table 6 ).", "cite_spans": [], "ref_spans": [ { "start": 799, "end": 806, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Dependency Parsing and POS-tagging", "sec_num": "5.1." }, { "text": "A closer look at the parsing results reveals an interesting pattern across treebanks (see light gray highlights in Table 6 ). For GSD and Sequoia, the CaBeRnet fine-tuned version ELMo OSCAR+CaBeRnet achieves higher scores than the purely OSCAR pre-trained ELMo OSCAR , while a reverse and less clear-cut pattern is observable for the other two treebanks, namely Spoken and ParTUT. This configuration can be explained if we understand the pattern as due to the reinforcement and unlearning of ELMo OSCAR representations during fine-tuning. Specifically, parsing scores are better on treebanks that share the kind of language use represented in CaBeRnet, while they are worse on corpora whose language sample is closer to the OSCAR corpus, like Spoken and ParTUT. This calls for further developments of CaBeRnet ( \u00a76.).", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Dependency Parsing and POS-tagging", "sec_num": "5.1." }, { "text": "ELMo CBT : small but relevant ELMo CBT shows an intriguing pattern of results. Even if its scores are below the baseline on GSD and Sequoia, it yields above-baseline results on Spoken and ParTUT. 
Given its reduced size, one would expect it to overfit, which would explain the below-baseline performance. However, this was not the case on the Spoken and ParTUT treebanks, thus showing ELMo CBT 's contribution in generating representations that help the UDPipe model achieve better results in POS-tagging and parsing on the ParTUT and Spoken treebanks. The presence of oral dialogues certainly plays a role in this pattern of results. This unexpected result calls for further investigation of the impact of pre-training with reduced-size, noiseless, domain-specific corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing and POS-tagging", "sec_num": "5.1." }, { "text": "For named entity recognition, LSTM-CRF +FastText +ELMo OSCAR+CaBeRnet achieves better precision, recall and F1 than the traditional CRF-based SEM architectures ( \u00a7 4.3.2.) and CamemBERT, which is currently state-of-the-art. Importantly, LSTM-CRF +FastText +ELMo CaBeRnet reaches better results in finding entity mentions than Wikipedia, which is a highly specialized corpus in terms of vocabulary variety and size, as can be seen in the overwhelming total number of unique forms it contains (see Table 4 ). We can conclude that both pre-training with CaBeRnet and fine-tuning ELMo OSCAR with CaBeRnet generate better word-embedding representations than Wikipedia for this downstream task. CBT-fr NER results are below the LSTM-CRF baseline. This can possibly be explained by the distance in topics and domain from the FTB treebank (i.e. newspaper articles), or by the corpus being too small to yield good-enough representations for entity mention recognition. 
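Since most FTB entity mentions are multi-word, the precision, recall and F1 scores above are computed at the mention level rather than the token level: a mention counts as correct only if its span and type both match. A minimal sketch of this exact-match scoring, assuming mentions are given as (start, end, type) tuples (illustrative code, not the SEM or LSTM-CRF implementation):

```python
def mention_prf(gold_mentions, pred_mentions):
    """Entity-level precision, recall and F1 with exact span+type match."""
    gold, pred = set(gold_mentions), set(pred_mentions)
    tp = len(gold & pred)  # true positives: exactly matching mentions
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Toy example: one correct mention, one mention with the wrong type,
# one gold mention missed entirely.
gold = [(0, 2, "Person"), (5, 7, "Location"), (9, 10, "Company")]
pred = [(0, 2, "Person"), (5, 7, "Organisation")]
p, r, f = mention_prf(gold, pred)  # p = 0.5, r = 1/3, f = 0.4
```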
All in all, our evaluations confirm the effectiveness of large ELMo-based language models fine-tuned or pre-trained with a balanced and linguistically representative corpus like CaBeRnet, as opposed to domain-specific ones or to an extra-large and noisy one like OSCAR.", "cite_spans": [], "ref_spans": [ { "start": 497, "end": 504, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "NER", "sec_num": "5.2." }, { "text": "The paper investigates the impact of different types of corpora on ELMo's pre-training and fine-tuning. It confirms the effectiveness and quality of word-embeddings obtained through balanced and linguistically representative corpora. By adding to UDPipe Future 5 differently trained ELMo language models that were pre-trained on qualitatively and quantitatively different corpora, our French Balanced Reference Corpus CaBeRnet unexpectedly establishes a new state of the art for POS-tagging over previous monolingual (Straka, 2018) and multilingual approaches (Straka et al., 2019; Kondratyuk, 2019). The proposed evaluation methods show that the two newly built corpora published here are not only relevant for neural NLP and language modeling in French, but also that corpus balance is a significant predictor of ELMo's accuracy on the Spoken test data-set and on NER tasks.", "cite_spans": [ { "start": 520, "end": 534, "text": "(Straka, 2018)", "ref_id": "BIBREF33" }, { "start": 563, "end": 584, "text": "(Straka et al., 2019;", "ref_id": "BIBREF32" }, { "start": 585, "end": 602, "text": "Kondratyuk, 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Perspectives & Conclusion", "sec_num": "6." }, { "text": "Other prospective uses of CaBeRnet involve its use as a corpus offering a reference point for lexical frequency measures, like association measures. 
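As an illustration, two classical association measures, pointwise mutual information and the Dice coefficient, can be derived directly from corpus counts; the counts below are invented for the example, not taken from CaBeRnet.

```python
import math

def pmi(f_xy, f_x, f_y, n):
    """Pointwise mutual information: log2( P(x,y) / (P(x) * P(y)) )."""
    return math.log2((f_xy / n) / ((f_x / n) * (f_y / n)))

def dice(f_xy, f_x, f_y):
    """Dice coefficient: 2 * f(x,y) / (f(x) + f(y))."""
    return 2 * f_xy / (f_x + f_y)

# A word pair seen together 50 times, its members 200 and 100 times
# each, in a 1-million-token sample.
pmi_score = pmi(50, 200, 100, 1_000_000)   # log2(2500), about 11.29 bits
dice_score = dice(50, 200, 100)            # 100 / 300, about 0.33
```

Both measures reward pairs that co-occur more often than their individual frequencies would predict, which is why a balanced reference corpus matters: the marginal counts f(x) and f(y) should reflect general language use rather than one genre.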
Its comparability with the English COCA further supports the cross-linguistic validity of measures like Pointwise Mutual Information or the Dice coefficient. The representativeness probed through our experimental approach is a key aspect that allows such measures to be tested against psycholinguistic and neurolinguistic data, as shown in previous neuro-imaging studies (Fabre et al., 2018). The results obtained for the parsing tasks on ParTUT open a new perspective for the development of the French Balanced Reference Corpus, involving the enhancement of the terminological coverage of CaBeRnet. A sixth sub-part could be included to cover technical domains like legal and medical ones, and thereby enlarge the specialized lexical coverage of CaBeRnet. Further developments of this resource would involve an extension to cover user-generated content, ranging from well-written blogs and tweets to more variable written productions like newspaper comments or forums, as present in the CoMeRe corpus (Chanier et al., 2014). The computational experiments conducted here also show that pre-training language models like ELMo on a very small sample like the French Children Book Test corpus or CaBeRnet yields unexpected results. This opens a perspective for languages that have smaller training corpora: ELMo could be a better-suited language model for those languages than for others with larger resources. Results on the NER task show that size, usually presented as the most important factor for enhancing the precision of word-embedding representations, matters less than linguistic representativeness, as achieved through corpus balance. ELMo OSCAR+CaBeRnet sets state-of-the-art results in NER (i.e. 
Precision, Recall and F1) that are superior to those obtained with a 30 times larger corpus like OSCAR.", "cite_spans": [ { "start": 511, "end": 531, "text": "(Fabre et al., 2018)", "ref_id": "BIBREF12" }, { "start": 1141, "end": 1163, "text": "(Chanier et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Perspectives & Conclusion", "sec_num": "6." }, { "text": "To conclude, our current evaluations show that linguistic quality in terms of representativeness and balance yields better-performing contextualized word-embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perspectives & Conclusion", "sec_num": "6." }, { "text": "For the time being, this part of the CaBeRnet corpus is still subject to licence restrictions. This restricted amount of AFP news reports can reasonably fall in the public domain.5 The TALN proceedings corpus (about 2 million) builds on a subset of 586 scientific articles (from 2007 to 2013), namely TALN and RECITAL. Available at redac.univ-tlse2.fr/corpus/taln_en.html.6 This data-set can be found at www.fb.ai/babi/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at https://github.com/ufal/acl2019_nested_ner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The NER-annotated FTB contains approximately 12k sentences, and more than 350k tokens extracted from articles of the Le Monde newspaper (1989-1995). 
As a whole, it encompasses 11,636 entity mentions distributed among 7 different types: 2025 mentions of \"Person\", 3761 of \"Location\", 2382 of \"Organisation\", 3357 of \"Company\", 67 of \"Product\", 15 of \"POI\" (Point of Interest) and 29 of \"Fictional Character\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We acknowledge Benoit Crabb\u00e9 for his helpful suggestions at the beginning of our reflection on balanced corpora. We are indebted to Yoann Dupont for his help in collecting data from Wikimedia dumps and for his critical comments. Conversations with Olivier Bonami and Kim Gerdes were instrumental. This work was supported by the French National Research Agency (ANR) under grants ANR-14-CERA-0001 and BASNUM (ANR-18-CE38-0003). The authors are grateful to the Inria Sophia Antipolis -M\u00e9diterran\u00e9e \"Nef\" computation cluster for providing resources and support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Building a Treebank for French", "authors": [ { "first": "A", "middle": [], "last": "Abeill\u00e9", "suffix": "" }, { "first": "L", "middle": [], "last": "Cl\u00e9ment", "suffix": "" }, { "first": "F", "middle": [], "last": "Toussenel", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "165--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abeill\u00e9, A., Cl\u00e9ment, L., and Toussenel, F. (2003). Building a Treebank for French, pages 165-187. 
Kluwer, Dordrecht.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The wacky wide web: A collection of very large linguistically processed web-crawled corpora", "authors": [ { "first": "M", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "S", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": "A", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "E", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 2009, "venue": "Language Resources and Evaluation", "volume": "43", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baroni, M., Bernardini, S., Ferraresi, A., and Zanchetta, E. (2009). The wacky wide web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43:209-226, 09.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Representativeness in Corpus Design", "authors": [], "year": 1993, "venue": "Literary and Linguistic Computing 8.4", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas Biber, editor. (1993). Representativeness in Corpus Design. In: Literary and Linguistic Computing 8.4.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Implicative structure and joint predictiveness", "authors": [ { "first": "O", "middle": [], "last": "Bonami", "suffix": "" }, { "first": "S", "middle": [], "last": "Beniamine", "suffix": "" } ], "year": 2015, "venue": "editors, Word Structure and Word Usage. Proceedings of the NetWordS Final Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonami, O. and Beniamine, S. (2015). Implicative structure and joint predictiveness. In Vito Pirelli, et al., editors, Word Structure and Word Usage. 
Proceedings of the NetWordS Final Conference, Pisa, Italy.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "520 million words, 1990-present", "authors": [ { "first": "L", "middle": [], "last": "Burnard", "suffix": "" } ], "year": 2007, "venue": "The British National Corpus, version 3 -BNC XML Edition", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burnard, L. (2007). 520 million words, 1990-present. In The British National Corpus, version 3 -BNC XML Edi- tion.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Improving generative statistical parsing with semi-supervised word clustering", "authors": [ { "first": "M", "middle": [], "last": "Candito", "suffix": "" }, { "first": "B", "middle": [], "last": "Crabb\u00e9", "suffix": "" } ], "year": 2009, "venue": "Proc. of IWPT'09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Candito, M. and Crabb\u00e9, B. (2009). Improving generative statistical parsing with semi-supervised word clustering. In Proc. 
of IWPT'09, Paris, France.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Deep syntax annotation of the sequoia french treebank", "authors": [ { "first": "M", "middle": [], "last": "Candito", "suffix": "" }, { "first": "G", "middle": [], "last": "Perrier", "suffix": "" }, { "first": "B", "middle": [], "last": "Guillaume", "suffix": "" }, { "first": "C", "middle": [], "last": "Ribeyre", "suffix": "" }, { "first": "K", "middle": [], "last": "Fort", "suffix": "" }, { "first": "D", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "\u00c9", "middle": [ "V" ], "last": "De La Clergerie", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014", "volume": "", "issue": "", "pages": "2298--2305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Candito, M., Perrier, G., Guillaume, B., Ribeyre, C., Fort, K., Seddah, D., and de la Clergerie, \u00c9. V. (2014). Deep syntax annotation of the sequoia french treebank. In Nicoletta Calzolari, et al., editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland, May 26-31, 2014., pages 2298-2305. European Language Re- sources Association (ELRA).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "):1-30. Final version to Special Issue of JLCL", "authors": [], "year": null, "venue": "The CoMeRe corpus for French: structuring and annotating heterogeneous CMC genres. JLCL -Journal for Language Technology and Computational Linguistics", "volume": "29", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The CoMeRe corpus for French: structuring and an- notating heterogeneous CMC genres. JLCL -Jour- nal for Language Technology and Computational Lin- guistics, 29(2):1-30. 
Final version to Special Issue of JLCL (Journal of Language Technology and Computa- tional Linguistics (JLCL, http://jlcl.org/): BUILDING AND ANNOTATING CORPORA OF COMPUTER- MEDIATED DISCOURSE: Issues and Challenges at the Interface of Corpus and Computational Linguistics (ed. by Michael Bei\u00dfwenger, Nelleke Oostdijk, Angelika Storrer & Henk van den Heuvel).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "520 million words, 1990-present", "authors": [ { "first": "M", "middle": [], "last": "Davies", "suffix": "" } ], "year": 2008, "venue": "The Corpus of Contemporary American English (COCA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davies, M. (2008). 520 million words, 1990-present. In The Corpus of Contemporary American English (COCA).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Exploration de traits pour la reconnaissance d'entit\u00e9s nomm\u00e9es du fran\u00e7ais par apprentissage automatique", "authors": [ { "first": "Y", "middle": [], "last": "Dupont", "suffix": "" } ], "year": 2017, "venue": "24e Conf\u00e9rence sur le Traitement Automatique des Langues Naturelles (TALN)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dupont, Y. (2017). Exploration de traits pour la reconnais- sance d'entit\u00e9s nomm\u00e9es du fran\u00e7ais par apprentissage automatique. In 24e Conf\u00e9rence sur le Traitement Au- tomatique des Langues Naturelles (TALN), page 42.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Exploration de traits pour la reconnaissance d'entit'es nomm'ees du fran\u00e7ais par apprentissage automatique", "authors": [ { "first": "Y", "middle": [], "last": "Dupont", "suffix": "" } ], "year": 2018, "venue": "24e Conf'erence sur le Traitement Automatique des Langues Naturelles (TALN)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dupont, Y. (2018). 
Exploration de traits pour la reconnaissance d'entit\u00e9s nomm\u00e9es du fran\u00e7ais par apprentissage automatique. In 24e Conf\u00e9rence sur le Traitement Automatique des Langues Naturelles (TALN), page 42.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Processing mwes: Neurocognitive bases of verbal mwes and lexical cohesiveness within mwes", "authors": [ { "first": "M", "middle": [], "last": "Fabre", "suffix": "" }, { "first": "S", "middle": [], "last": "Bhattasali", "suffix": "" }, { "first": "J", "middle": [], "last": "Hale", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 14th Workshop on Multiword Expressions (COLING 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabre, M., Bhattasali, S., and Hale, J. (2018). Processing MWEs: Neurocognitive bases of verbal MWEs and lexical cohesiveness within MWEs. In Proceedings of the 14th Workshop on Multiword Expressions (COLING 2018), Santa Fe, NM.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "427--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grave, E., Mikolov, T., Joulin, A., and Bojanowski, P. (2017). Bag of tricks for efficient text classification. In Mirella Lapata, et al., editors, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 427-431. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning word vectors for 157 languages", "authors": [ { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "P", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grave, E., Bojanowski, P., Gupta, P., Joulin, A., and Mikolov, T. (2018). Learning word vectors for 157 lan- guages. In Nicoletta Calzolari, et al., editors, Proceed- ings of the Eleventh International Conference on Lan- guage Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018. European Language Resources Association (ELRA).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The goldilocks principle: Reading children's books with explicit memory representations", "authors": [ { "first": "F", "middle": [], "last": "Hill", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "S", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Weston", "middle": [], "last": "", "suffix": "" }, { "first": "J", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hill, F., Bordes, A., Chopra, S., and Weston, J. (2015). 
The goldilocks principle: Reading children's books with explicit memory representations.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Fasttext.zip: Compressing text classification models", "authors": [ { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "M", "middle": [], "last": "Douze", "suffix": "" }, { "first": "H", "middle": [], "last": "J\u00e9gou", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joulin, A., Grave, E., Bojanowski, P., Douze, M., J\u00e9gou, H., and Mikolov, T. (2016). Fasttext.zip: Compressing text classification models. CoRR, abs/1612.03651.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Conference Proceedings: the tenth Machine Translation Summit", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P. (2005). Europarl: A Parallel Corpus for Sta- tistical Machine Translation. In Conference Proceed- ings: the tenth Machine Translation Summit, pages 79- 86, Phuket, Thailand. AAMT, AAMT. Kondratyuk, D. (2019). 75 languages, 1 model: Parsing universal dependencies universally. 
CoRR, abs/1904.02099.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Rhapsodie: a prosodic-syntactic treebank for spoken French", "authors": [ { "first": "A", "middle": [], "last": "Lacheret", "suffix": "" }, { "first": "S", "middle": [], "last": "Kahane", "suffix": "" }, { "first": "J", "middle": [], "last": "Beliao", "suffix": "" }, { "first": "A", "middle": [], "last": "Dister", "suffix": "" }, { "first": "K", "middle": [], "last": "Gerdes", "suffix": "" }, { "first": "J.-P", "middle": [], "last": "Goldman", "suffix": "" }, { "first": "N", "middle": [], "last": "Obin", "suffix": "" }, { "first": "P", "middle": [], "last": "Pietrandrea", "suffix": "" }, { "first": "A", "middle": [], "last": "Tchobanov", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "295--301", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lacheret, A., Kahane, S., Beliao, J., Dister, A., Gerdes, K., Goldman, J.-P., Obin, N., Pietrandrea, P., and Tchobanov, A. (2014). Rhapsodie: a prosodic-syntactic treebank for spoken French. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 295-301, Reykjavik, Ice- land, May. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "G", "middle": [], "last": "Lample", "suffix": "" }, { "first": "M", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "S", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "K", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., and Dyer, C. (2016). Neural architectures for named entity recognition. In Kevin Knight, et al., editors, NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 260-270. 
The Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Roberta: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Y", "middle": [], "last": "Liu", "suffix": "" }, { "first": "M", "middle": [], "last": "Ott", "suffix": "" }, { "first": "N", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "J", "middle": [], "last": "Du", "suffix": "" }, { "first": "M", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "D", "middle": [], "last": "Chen", "suffix": "" }, { "first": "O", "middle": [], "last": "Levy", "suffix": "" }, { "first": "M", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "V", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). Roberta: A robustly optimized BERT pretrain- ing approach. CoRR, abs/1907.11692.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "CamemBERT: a Tasty French Language Model. 
arXiv e-prints", "authors": [ { "first": "L", "middle": [], "last": "Martin", "suffix": "" }, { "first": "B", "middle": [], "last": "Muller", "suffix": "" }, { "first": "P", "middle": [ "J" ], "last": "Ortiz Su\u00e1rez", "suffix": "" }, { "first": "Y", "middle": [], "last": "Dupont", "suffix": "" }, { "first": "L", "middle": [], "last": "Romary", "suffix": "" }, { "first": "\u00c9", "middle": [], "last": "Villemonte De La Clergerie", "suffix": "" }, { "first": "D", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "B", "middle": [], "last": "Sagot", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03894" ] }, "num": null, "urls": [], "raw_text": "Martin, L., Muller, B., Ortiz Su\u00e1rez, P. J., Dupont, Y., Ro- mary, L., Villemonte de la Clergerie, \u00c9., Seddah, D., and Sagot, B. (2019). CamemBERT: a Tasty French Lan- guage Model. arXiv e-prints, page arXiv:1911.03894, Nov.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Universal dependency annotation for multilingual parsing", "authors": [ { "first": "R", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Y", "middle": [], "last": "Quirmbach-Brundage", "suffix": "" }, { "first": "Y", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "K", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "K", "middle": [], "last": "Hall", "suffix": "" }, { "first": "S", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "O", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "C", "middle": [], "last": "Bedini", "suffix": "" }, { "first": "N", "middle": [], "last": "Bertomeu Castell\u00f3", "suffix": "" }, { "first": "J", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2013, "venue": 
"Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "92--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "McDonald, R., Nivre, J., Quirmbach-Brundage, Y., Goldberg, Y., Das, D., Ganchev, K., Hall, K., Petrov, S., Zhang, H., T\u00e4ckstr\u00f6m, O., Bedini, C., Bertomeu Castell\u00f3, N., and Lee, J. (2013). Universal dependency annotation for multilingual parsing. In Pro- ceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92-97, Sofia, Bulgaria, August. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Faculty of Mathematics and Physics", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "M", "middle": [], "last": "Abrams", "suffix": "" }, { "first": "\u017d", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "L", "middle": [], "last": "Ahrenberg", "suffix": "" }, { "first": "L", "middle": [], "last": "Antonsen", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Aranzabe", "suffix": "" }, { "first": "G", "middle": [], "last": "Arutie", "suffix": "" }, { "first": "M", "middle": [], "last": "Asahara", "suffix": "" }, { "first": "L", "middle": [], "last": "Ateyah", "suffix": "" }, { "first": "M", "middle": [], "last": "Attia", "suffix": "" }, { "first": "A", "middle": [], "last": "Atutxa", "suffix": "" }, { "first": "L", "middle": [], "last": "Augustinus", "suffix": "" }, { "first": "E", "middle": [], "last": "Badmaeva", "suffix": "" }, { "first": "M", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "E", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "S", "middle": [], "last": "Bank", "suffix": "" }, { "first": "V", "middle": [], "last": "Barbu Mititelu", "suffix": "" }, { "first": "J", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "S", "middle": [], "last": 
"Bellato", "suffix": "" }, { "first": "K", "middle": [], "last": "Bengoetxea", "suffix": "" }, { "first": "R", "middle": [ "A" ], "last": "Bhat", "suffix": "" }, { "first": "E", "middle": [], "last": "Biagetti", "suffix": "" }, { "first": "E", "middle": [], "last": "Bick", "suffix": "" }, { "first": "R", "middle": [], "last": "Blokland", "suffix": "" }, { "first": "V", "middle": [], "last": "Bobicev", "suffix": "" }, { "first": "C", "middle": [], "last": "B\u00f6rstell", "suffix": "" }, { "first": "C", "middle": [], "last": "Bosco", "suffix": "" }, { "first": "G", "middle": [], "last": "Bouma", "suffix": "" }, { "first": "S", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "A", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "A", "middle": [], "last": "Burchardt", "suffix": "" }, { "first": "M", "middle": [], "last": "Candito", "suffix": "" }, { "first": "B", "middle": [], "last": "Caron", "suffix": "" }, { "first": "G", "middle": [], "last": "Caron", "suffix": "" }, { "first": "G", "middle": [], "last": "Cebiroglu Eryigit", "suffix": "" }, { "first": "G", "middle": [ "G A" ], "last": "Celano", "suffix": "" }, { "first": "S", "middle": [], "last": "Cetin", "suffix": "" }, { "first": "F", "middle": [], "last": "Chalub", "suffix": "" }, { "first": "J", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Cho", "suffix": "" }, { "first": "J", "middle": [], "last": "Chun", "suffix": "" }, { "first": "S", "middle": [], "last": "Cinkov\u00e1", "suffix": "" }, { "first": "A", "middle": [], "last": "Collomb", "suffix": "" }, { "first": "\u00c7", "middle": [], "last": "\u00c7\u00f6ltekin", "suffix": "" }, { "first": "M", "middle": [], "last": "Connor", "suffix": "" }, { "first": "M", "middle": [], "last": "Courtin", "suffix": "" }, { "first": "E", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "M.-C", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "V", "middle": [], "last": "De Paiva", 
"suffix": "" }, { "first": "A", "middle": [], "last": "Diaz De Ilarraza", "suffix": "" }, { "first": "C", "middle": [], "last": "Dickerson", "suffix": "" }, { "first": "P", "middle": [], "last": "Dirix", "suffix": "" }, { "first": "K", "middle": [], "last": "Dobrovoljc", "suffix": "" }, { "first": "T", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "K", "middle": [], "last": "Droganova", "suffix": "" }, { "first": "P", "middle": [], "last": "Dwivedi", "suffix": "" }, { "first": "M", "middle": [], "last": "Eli", "suffix": "" }, { "first": "A", "middle": [], "last": "Elkahky", "suffix": "" }, { "first": "B", "middle": [], "last": "Ephrem", "suffix": "" }, { "first": "T", "middle": [], "last": "Erjavec", "suffix": "" }, { "first": "A", "middle": [], "last": "Etienne", "suffix": "" }, { "first": "R", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "H", "middle": [], "last": "Fernandez Alcalde", "suffix": "" }, { "first": "J", "middle": [], "last": "Foster", "suffix": "" }, { "first": "C", "middle": [], "last": "Freitas", "suffix": "" }, { "first": "K", "middle": [], "last": "Gajdo\u0161ov\u00e1", "suffix": "" }, { "first": "D", "middle": [], "last": "Galbraith", "suffix": "" }, { "first": "M", "middle": [], "last": "Garcia", "suffix": "" }, { "first": "M", "middle": [], "last": "G\u00e4rdenfors", "suffix": "" }, { "first": "K", "middle": [], "last": "Gerdes", "suffix": "" }, { "first": "F", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "I", "middle": [], "last": "Goenaga", "suffix": "" }, { "first": "K", "middle": [], "last": "Gojenola", "suffix": "" }, { "first": "M", "middle": [], "last": "G\u00f6k\u0131rmak", "suffix": "" }, { "first": "Y", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "X", "middle": [], "last": "G\u00f3mez Guinovart", "suffix": "" }, { "first": "B", "middle": [], "last": "Saavedra", "suffix": "" }, { "first": "M", "middle": [], "last": "Grioni", "suffix": "" }, { "first": "N", "middle": [], "last": 
"Gr\u016bz\u012btis", "suffix": "" }, { "first": "B", "middle": [], "last": "Guillaume", "suffix": "" }, { "first": "C", "middle": [], "last": "Guillot-Barbance", "suffix": "" }, { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "J", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "J", "middle": [], "last": "Haji\u010d Jr", "suffix": "" }, { "first": "L", "middle": [], "last": "H\u00e0 M\u1ef9", "suffix": "" }, { "first": "N.-R", "middle": [], "last": "Han", "suffix": "" }, { "first": "K", "middle": [], "last": "Harris", "suffix": "" }, { "first": "D", "middle": [], "last": "Haug", "suffix": "" }, { "first": "B", "middle": [], "last": "Hladk\u00e1", "suffix": "" }, { "first": "J", "middle": [], "last": "Hlav\u00e1\u010dov\u00e1", "suffix": "" }, { "first": "F", "middle": [], "last": "Hociung", "suffix": "" }, { "first": "P", "middle": [], "last": "Hohle", "suffix": "" }, { "first": "J", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "R", "middle": [], "last": "Ion", "suffix": "" }, { "first": "E", "middle": [], "last": "Irimia", "suffix": "" }, { "first": "T", "middle": [], "last": "Jel\u00ednek", "suffix": "" }, { "first": "A", "middle": [], "last": "Johannsen", "suffix": "" }, { "first": "F", "middle": [], "last": "J\u00f8rgensen", "suffix": "" }, { "first": "H", "middle": [], "last": "Ka\u015f\u0131kara", "suffix": "" }, { "first": "S", "middle": [], "last": "Kahane", "suffix": "" }, { "first": "H", "middle": [], "last": "Kanayama", "suffix": "" }, { "first": "J", "middle": [], "last": "Kanerva", "suffix": "" }, { "first": "T", "middle": [], "last": "Kayadelen", "suffix": "" }, { "first": "V", "middle": [], "last": "Kettnerov\u00e1", "suffix": "" }, { "first": "J", "middle": [], "last": "Kirchner", "suffix": "" }, { "first": "N", "middle": [], "last": "Kotsyba", "suffix": "" }, { "first": "S", "middle": [], "last": "Krek", "suffix": "" }, { "first": "S", "middle": [], "last": "Kwak", "suffix": "" }, { "first": 
"V", "middle": [], "last": "Laippala", "suffix": "" }, { "first": "L", "middle": [], "last": "Lambertino", "suffix": "" }, { "first": "T", "middle": [], "last": "Lando", "suffix": "" }, { "first": "S", "middle": [ "D" ], "last": "Larasati", "suffix": "" }, { "first": "A", "middle": [], "last": "Lavrentiev", "suffix": "" }, { "first": "J", "middle": [], "last": "Lee", "suffix": "" }, { "first": "P", "middle": [], "last": "L\u00ea H\u1ed3ng", "suffix": "" }, { "first": "A", "middle": [], "last": "Lenci", "suffix": "" }, { "first": "S", "middle": [], "last": "Lertpradit", "suffix": "" }, { "first": "H", "middle": [], "last": "Leung", "suffix": "" }, { "first": "C", "middle": [ "Y" ], "last": "Li", "suffix": "" }, { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "K", "middle": [], "last": "Li", "suffix": "" }, { "first": "K", "middle": [], "last": "Lim", "suffix": "" }, { "first": "N", "middle": [], "last": "Ljube\u0161i\u0107", "suffix": "" }, { "first": "O", "middle": [], "last": "Loginova", "suffix": "" }, { "first": "O", "middle": [], "last": "Lyashevskaya", "suffix": "" }, { "first": "T", "middle": [], "last": "Lynn", "suffix": "" }, { "first": "V", "middle": [], "last": "Macketanz", "suffix": "" }, { "first": "A", "middle": [], "last": "Makazhanov", "suffix": "" }, { "first": "M", "middle": [], "last": "Mandl", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" }, { "first": "R", "middle": [], "last": "Manurung", "suffix": "" }, { "first": "C", "middle": [], "last": "M\u0203r\u0203nduc", "suffix": "" }, { "first": "D", "middle": [], "last": "Mare\u010dek", "suffix": "" }, { "first": "K", "middle": [], "last": "Marheinecke", "suffix": "" }, { "first": "H", "middle": [], "last": "Mart\u00ednez Alonso", "suffix": "" }, { "first": "A", "middle": [], "last": "Martins", "suffix": "" }, { "first": "J", "middle": [], "last": "Ma\u0161ek", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" }, { 
"first": "R", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "G", "middle": [], "last": "Mendon\u00e7a", "suffix": "" }, { "first": "N", "middle": [], "last": "Miekka", "suffix": "" }, { "first": "A", "middle": [], "last": "Missil\u00e4", "suffix": "" }, { "first": "C", "middle": [], "last": "Mititelu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "S", "middle": [], "last": "Montemagni", "suffix": "" }, { "first": "A", "middle": [], "last": "More", "suffix": "" }, { "first": "L", "middle": [], "last": "Moreno Romero", "suffix": "" }, { "first": "S", "middle": [], "last": "Mori", "suffix": "" }, { "first": "B", "middle": [], "last": "Mortensen", "suffix": "" }, { "first": "B", "middle": [], "last": "Moskalevskyi", "suffix": "" }, { "first": "K", "middle": [], "last": "Muischnek", "suffix": "" }, { "first": "Y", "middle": [], "last": "Murawaki", "suffix": "" }, { "first": "K", "middle": [], "last": "M\u00fc\u00fcrisep", "suffix": "" }, { "first": "P", "middle": [], "last": "Nainwani", "suffix": "" }, { "first": "J", "middle": [ "I" ], "last": "Navarro Hor\u00f1iacek", "suffix": "" }, { "first": "A", "middle": [], "last": "Nedoluzhko", "suffix": "" }, { "first": "G", "middle": [], "last": "Ne\u0161pore-B\u0113rzkalne", "suffix": "" }, { "first": "L", "middle": [], "last": "Nguy\u1ec5n Thi", "suffix": "" }, { "first": "", "middle": [], "last": "Nguy\u1ec5n Thi", "suffix": "" }, { "first": "H", "middle": [], "last": "Minh", "suffix": "" }, { "first": "V", "middle": [], "last": "Nikolaev", "suffix": "" }, { "first": "R", "middle": [], "last": "Nitisaroj", "suffix": "" }, { "first": "H", "middle": [], "last": "Nurmi", "suffix": "" }, { "first": "S", "middle": [], "last": "Ojala", "suffix": "" }, { "first": "A", "middle": [], "last": "Ol\u00fa\u00f2kun", "suffix": "" }, { "first": "M", "middle": [], "last": "Omura", "suffix": "" }, { "first": "P", "middle": [], "last": "Osenova", "suffix": "" }, { "first": "R", 
"middle": [], "last": "\u00d6stling", "suffix": "" }, { "first": "L", "middle": [], "last": "\u00d8vrelid", "suffix": "" }, { "first": "N", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "E", "middle": [], "last": "Pascual", "suffix": "" }, { "first": "M", "middle": [], "last": "Passarotti", "suffix": "" }, { "first": "A", "middle": [], "last": "Patejuk", "suffix": "" }, { "first": "S", "middle": [], "last": "Peng", "suffix": "" }, { "first": "C.-A", "middle": [], "last": "Perez", "suffix": "" }, { "first": "G", "middle": [], "last": "Perrier", "suffix": "" }, { "first": "S", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "J", "middle": [], "last": "Piitulainen", "suffix": "" }, { "first": "E", "middle": [], "last": "Pitler", "suffix": "" }, { "first": "B", "middle": [], "last": "Plank", "suffix": "" }, { "first": "T", "middle": [], "last": "Poibeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Popel", "suffix": "" }, { "first": "L", "middle": [], "last": "Pretkalnin", "suffix": "" }, { "first": "S", "middle": [], "last": "Pr\u00e9vost", "suffix": "" }, { "first": "P", "middle": [], "last": "Prokopidis", "suffix": "" }, { "first": "A", "middle": [], "last": "Przepi\u00f3rkowski", "suffix": "" }, { "first": "T", "middle": [], "last": "Puolakainen", "suffix": "" }, { "first": "S", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "A", "middle": [], "last": "R\u00e4\u00e4bis", "suffix": "" }, { "first": "A", "middle": [], "last": "Rademaker", "suffix": "" }, { "first": "L", "middle": [], "last": "Ramasamy", "suffix": "" }, { "first": "T", "middle": [], "last": "Rama", "suffix": "" }, { "first": "C", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "V", "middle": [], "last": "Ravishankar", "suffix": "" }, { "first": "L", "middle": [], "last": "Real", "suffix": "" }, { "first": "S", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "G", "middle": [], "last": "Rehm", "suffix": "" }, { "first": "M", 
"middle": [], "last": "Rie\u00dfler", "suffix": "" }, { "first": "L", "middle": [], "last": "Rinaldi", "suffix": "" }, { "first": "L", "middle": [], "last": "Rituma", "suffix": "" }, { "first": "L", "middle": [], "last": "Rocha", "suffix": "" }, { "first": "M", "middle": [], "last": "Romanenko", "suffix": "" }, { "first": "R", "middle": [], "last": "Rosa", "suffix": "" }, { "first": "D", "middle": [], "last": "Rovati", "suffix": "" }, { "first": "V", "middle": [], "last": "Ros", "suffix": "" }, { "first": "O", "middle": [], "last": "Rudina", "suffix": "" }, { "first": "S", "middle": [], "last": "Sadde", "suffix": "" }, { "first": "S", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "T", "middle": [], "last": "Samard\u017ei\u0107", "suffix": "" }, { "first": "S", "middle": [], "last": "Samson", "suffix": "" }, { "first": "M", "middle": [], "last": "Sanguinetti", "suffix": "" }, { "first": "B", "middle": [], "last": "Saul\u012bte", "suffix": "" }, { "first": "Y", "middle": [], "last": "Sawanakunanon", "suffix": "" }, { "first": "N", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "S", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "D", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "W", "middle": [], "last": "Seeker", "suffix": "" }, { "first": "M", "middle": [], "last": "Seraji", "suffix": "" }, { "first": "M", "middle": [], "last": "Shen", "suffix": "" }, { "first": "A", "middle": [], "last": "Shimada", "suffix": "" }, { "first": "M", "middle": [], "last": "Shohibussirri", "suffix": "" }, { "first": "D", "middle": [], "last": "Sichinava", "suffix": "" }, { "first": "N", "middle": [], "last": "Silveira", "suffix": "" }, { "first": "M", "middle": [], "last": "Simi", "suffix": "" }, { "first": "R", "middle": [], "last": "Simionescu", "suffix": "" }, { "first": "K", "middle": [], "last": "Simk\u00f3", "suffix": "" }, { "first": "M", "middle": [], "last": "\u0160imkov\u00e1", "suffix": "" }, { "first": "K", "middle": [], 
"last": "Simov", "suffix": "" }, { "first": "A", "middle": [], "last": "Smith", "suffix": "" }, { "first": "I", "middle": [], "last": "Soares-Bastos", "suffix": "" }, { "first": "A", "middle": [], "last": "Stella", "suffix": "" }, { "first": "M", "middle": [], "last": "Straka", "suffix": "" }, { "first": "J", "middle": [], "last": "Strnadov\u00e1", "suffix": "" }, { "first": "A", "middle": [], "last": "Suhr", "suffix": "" }, { "first": "U", "middle": [], "last": "Sulubacak", "suffix": "" }, { "first": "Z", "middle": [], "last": "Sz\u00e1nt\u00f3", "suffix": "" }, { "first": "D", "middle": [], "last": "Taji", "suffix": "" }, { "first": "Y", "middle": [], "last": "Takahashi", "suffix": "" }, { "first": "T", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "I", "middle": [], "last": "Tellier", "suffix": "" }, { "first": "T", "middle": [], "last": "Trosterud", "suffix": "" }, { "first": "A", "middle": [], "last": "Trukhina", "suffix": "" }, { "first": "R", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "F", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "S", "middle": [], "last": "Uematsu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ure\u0161ov\u00e1", "suffix": "" }, { "first": "L", "middle": [], "last": "Uria", "suffix": "" }, { "first": "H", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "S", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "D", "middle": [], "last": "Van Niekerk", "suffix": "" }, { "first": "G", "middle": [], "last": "Van Noord", "suffix": "" }, { "first": "V", "middle": [], "last": "Varga", "suffix": "" }, { "first": "V", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "L", "middle": [], "last": "Wallin", "suffix": "" }, { "first": "J", "middle": [ "N" ], "last": "Washington", "suffix": "" }, { "first": "S", "middle": [], "last": "Williams", "suffix": "" }, { "first": "M", "middle": [], "last": "Wir\u00e9n", "suffix": "" }, { "first": "T", "middle": [], "last": 
"Woldemariam", "suffix": "" }, { "first": "T", "middle": [], "last": "Wong", "suffix": "" }, { "first": "C", "middle": [], "last": "Yan", "suffix": "" }, { "first": "M", "middle": [ "M" ], "last": "Yavrumyan", "suffix": "" }, { "first": "Z", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Z", "middle": [], "last": "\u017dabokrtsk\u00fd", "suffix": "" }, { "first": "A", "middle": [], "last": "Zeldes", "suffix": "" }, { "first": "D", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "H", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nivre, J., Abrams, M., Agi\u0107, \u017d., Ahrenberg, L., Antonsen, L., Aranzabe, M. J., Arutie, G., Asahara, M., Ateyah, L., Attia, M., Atutxa, A., Augustinus, L., Badmaeva, E., Ballesteros, M., Banerjee, E., Bank, S., Barbu Mi- titelu, V., Bauer, J., Bellato, S., Bengoetxea, K., Bhat, R. A., Biagetti, E., Bick, E., Blokland, R., Bobicev, V., B\u00f6rstell, C., Bosco, C., Bouma, G., Bowman, S., Boyd, A., Burchardt, A., Candito, M., Caron, B., Caron, G., Cebiroglu Eryigit, G., Celano, G. G. 
A., Cetin, S., Chalub, F., Choi, J., Cho, Y., Chun, J., Cinkov\u00e1, S., Collomb, A., \u00c7\u00f6ltekin, \u00c7., Connor, M., Courtin, M., Davidson, E., de Marneffe, M.-C., de Paiva, V., Diaz de Ilarraza, A., Dickerson, C., Dirix, P., Dobro- voljc, K., Dozat, T., Droganova, K., Dwivedi, P., Eli, M., Elkahky, A., Ephrem, B., Erjavec, T., Etienne, A., Farkas, R., Fernandez Alcalde, H., Foster, J., Freitas, C., Gajdo\u0161ov\u00e1, K., Galbraith, D., Garcia, M., G\u00e4rden- fors, M., Gerdes, K., Ginter, F., Goenaga, I., Gojenola, K., G\u00f6k\u0131rmak, M., Goldberg, Y., G\u00f3mez Guinovart, X., Gonz\u00e1les Saavedra, B., Grioni, M., Gr\u016bz\u012btis, N., Guil- laume, B., Guillot-Barbance, C., Habash, N., Haji\u010d, J., Haji\u010d jr., J., H\u00e0 M\u1ef9, L., Han, N.-R., Harris, K., Haug, D., Hladk\u00e1, B., Hlav\u00e1\u010dov\u00e1, J., Hociung, F., Hohle, P., Hwang, J., Ion, R., Irimia, E., Jel\u00ednek, T., Johannsen, A., J\u00f8rgensen, F., Ka\u015f\u0131kara, H., Kahane, S., Kanayama, H., Kanerva, J., Kayadelen, T., Kettnerov\u00e1, V., Kirch- ner, J., Kotsyba, N., Krek, S., Kwak, S., Laippala, V., Lambertino, L., Lando, T., Larasati, S. D., Lavrentiev, A., Lee, J., L\u00ea H\u1ed3ng, P., Lenci, A., Lertpradit, S., Le- ung, H., Li, C. Y., Li, J., Li, K., Lim, K., Ljube\u0161i\u0107, N., Loginova, O., Lyashevskaya, O., Lynn, T., Macke- tanz, V., Makazhanov, A., Mandl, M., Manning, C., Ma- nurung, R., M\u0203r\u0203nduc, C., Mare\u010dek, D., Marheinecke, K., Mart\u00ednez Alonso, H., Martins, A., Ma\u0161ek, J., Mat- sumoto, Y., McDonald, R., Mendon\u00e7a, G., Miekka, N., Missil\u00e4, A., Mititelu, C., Miyao, Y., Montemagni, S., More, A., Moreno Romero, L., Mori, S., Mortensen, B., Moskalevskyi, B., Muischnek, K., Murawaki, Y., M\u00fc\u00fcrisep, K., Nainwani, P., Navarro Hor\u00f1iacek, J. I., Nedoluzhko, A., Ne\u0161pore-B\u0113rzkalne, G., Nguy\u1ec5n Thi . , L., Nguy\u1ec5n Thi . 
Minh, H., Nikolaev, V., Nitisaroj, R., Nurmi, H., Ojala, S., Ol\u00fa\u00f2kun, A., Omura, M., Osen- ova, P., \u00d6stling, R., \u00d8vrelid, L., Partanen, N., Pascual, E., Passarotti, M., Patejuk, A., Peng, S., Perez, C.-A., Perrier, G., Petrov, S., Piitulainen, J., Pitler, E., Plank, B., Poibeau, T., Popel, M., Pretkalnin , a, L., Pr\u00e9vost, S., Prokopidis, P., Przepi\u00f3rkowski, A., Puolakainen, T., Pyysalo, S., R\u00e4\u00e4bis, A., Rademaker, A., Ramasamy, L., Rama, T., Ramisch, C., Ravishankar, V., Real, L., Reddy, S., Rehm, G., Rie\u00dfler, M., Rinaldi, L., Rituma, L., Rocha, L., Romanenko, M., Rosa, R., Rovati, D., Ros , ca, V., Rudina, O., Sadde, S., Saleh, S., Samard\u017ei\u0107, T., Sam- son, S., Sanguinetti, M., Saul\u012bte, B., Sawanakunanon, Y., Schneider, N., Schuster, S., Seddah, D., Seeker, W., Ser- aji, M., Shen, M., Shimada, A., Shohibussirri, M., Sichi- nava, D., Silveira, N., Simi, M., Simionescu, R., Simk\u00f3, K., \u0160imkov\u00e1, M., Simov, K., Smith, A., Soares-Bastos, I., Stella, A., Straka, M., Strnadov\u00e1, J., Suhr, A., Suluba- cak, U., Sz\u00e1nt\u00f3, Z., Taji, D., Takahashi, Y., Tanaka, T., Tellier, I., Trosterud, T., Trukhina, A., Tsarfaty, R., Ty- ers, F., Uematsu, S., Ure\u0161ov\u00e1, Z., Uria, L., Uszkoreit, H., Vajjala, S., van Niekerk, D., van Noord, G., Varga, V., Vincze, V., Wallin, L., Washington, J. N., Williams, S., Wir\u00e9n, M., Woldemariam, T., Wong, T.-s., Yan, C., Yavrumyan, M. M., Yu, Z., \u017dabokrtsk\u00fd, Z., Zeldes, A., Zeman, D., Zhang, M., and Zhu, H. (2018). Universal dependencies 2.2. 
LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures", "authors": [], "year": null, "venue": "7th Workshop on the Challenges in the Management of Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures. In Pi- otr Ba\u0144ski, et al., editors, 7th Workshop on the Chal- lenges in the Management of Large Corpora (CMLC-", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Leibniz-Institut f\u00fcr Deutsche Sprache", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": ", Cardiff, United Kingdom, July. Leibniz-Institut f\u00fcr Deutsche Sprache.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Deep contextualized word representations", "authors": [ { "first": "M", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "M", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "M", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "M", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "C", "middle": [], "last": "Clark", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). 
Deep contex- tualized word representations. In Marilyn A. Walker, et al., editors, Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227-2237. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A universal part-of-speech tagset", "authors": [ { "first": "S", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "R", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1104.2086" ] }, "num": null, "urls": [], "raw_text": "Petrov, S., Das, D., and McDonald, R. (2011). A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Exploring the limits of transfer learning with a unified textto", "authors": [ { "first": "C", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "N", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "A", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Narang", "suffix": "" }, { "first": "M", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "W", "middle": [], "last": "Li", "suffix": "" }, { "first": "P", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2019). Ex- ploring the limits of transfer learning with a unified text- to-text transformer. 
CoRR, abs/1910.10683.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Annotation r\u00e9f\u00e9rentielle du corpus arbor\u00e9 de Paris 7 en entit\u00e9s nomm\u00e9es (referential named entity annotation of the paris 7 french treebank)", "authors": [ { "first": "B", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "M", "middle": [], "last": "Richard", "suffix": "" }, { "first": "R", "middle": [], "last": "Stern", "suffix": "" }, { "first": "", "middle": [], "last": "Atala/Afcp", "suffix": "" }, { "first": "M", "middle": [], "last": "Sanguinetti", "suffix": "" }, { "first": "C", "middle": [], "last": "Bosco", "suffix": "" } ], "year": 2012, "venue": "Harmonization and Development of Resources and Tools for Italian Natural Language Processing within the PARLI Project", "volume": "2", "issue": "", "pages": "51--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sagot, B., Richard, M., and Stern, R. (2012). Annotation r\u00e9f\u00e9rentielle du corpus arbor\u00e9 de Paris 7 en entit\u00e9s nom- m\u00e9es (referential named entity annotation of the paris 7 french treebank) [in french]. In Georges Antoniadis, et al., editors, Proceedings of the Joint Conference JEP-TALN-RECITAL 2012, volume 2: TALN, Grenoble, France, June 4-8, 2012, pages 535-542. ATALA/AFCP. Sanguinetti, M. and Bosco, C. (2015). PartTUT: The Turin University Parallel Treebank. In Roberto Basili, et al., editors, Harmonization and Development of Re- sources and Tools for Italian Natural Language Process- ing within the PARLI Project, volume 589 of Studies in Computational Intelligence, pages 51-69. 
Springer.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Universal morpho-syntactic parsing and the contribution of lexica: Analyzing the onlp lab submission to the conll 2018 shared task", "authors": [ { "first": "A", "middle": [], "last": "Seker", "suffix": "" }, { "first": "A", "middle": [], "last": "More", "suffix": "" }, { "first": "R", "middle": [], "last": "Tsarfaty", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "208--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seker, A., More, A., and Tsarfaty, R. (2018). Universal morpho-syntactic parsing and the contribution of lex- ica: Analyzing the onlp lab submission to the conll 2018 shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 208-215.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Evaluating contextualized embeddings on 54 languages in POS tagging, lemmatization and dependency parsing", "authors": [ { "first": "M", "middle": [], "last": "Straka", "suffix": "" }, { "first": "J", "middle": [], "last": "Strakov\u00e1", "suffix": "" }, { "first": "J", "middle": [], "last": "Hajic", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Straka, M., Strakov\u00e1, J., and Hajic, J. (2019). Evaluat- ing contextualized embeddings on 54 languages in POS tagging, lemmatization and dependency parsing. 
CoRR, abs/1908.07448.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "UDPipe 2.0 prototype at CoNLL 2018 UD shared task", "authors": [ { "first": "M", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "197--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Straka, M. (2018). UDPipe 2.0 prototype at CoNLL 2018 UD shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197-207, Brussels, Bel- gium, October. Association for Computational Linguis- tics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Neural architectures for nested NER through linearization", "authors": [ { "first": "J", "middle": [], "last": "Strakov\u00e1", "suffix": "" }, { "first": "M", "middle": [], "last": "Straka", "suffix": "" }, { "first": "J", "middle": [], "last": "Hajic", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "5326--5331", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strakov\u00e1, J., Straka, M., and Hajic, J. (2019). Neural archi- tectures for nested NER through linearization. In Anna Korhonen, et al., editors, Proceedings of the 57th Con- ference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Vol- ume 1: Long Papers, pages 5326-5331. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data. 
arXiv e-prints", "authors": [ { "first": "G", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "M.-A", "middle": [], "last": "Lachaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "V", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "F", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.00359" ] }, "num": null, "urls": [], "raw_text": "Wenzek, G., Lachaux, M.-A., Conneau, A., Chaud- hary, V., Guzm\u00e1n, F., Joulin, A., and Grave, E. (2019). CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data. arXiv e-prints, page arXiv:1911.00359, Nov.", "links": null } }, "ref_entries": { "TABREF3": { "html": null, "num": null, "type_str": "table", "content": "
: Comparison of the number of unique forms in the
different genres represented in the CaBeRnet partition. TTR:
Type-Token Ratio. Lemmatization and tokenization were
performed as described in \u00a73.
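As an illustration, the TTR reported in this table reduces to types over tokens; a minimal sketch, assuming whitespace-tokenized input (the actual lemmatization and tokenization pipeline of \u00a73. is not reproduced here):

```python
def type_token_ratio(tokens):
    """Type-Token Ratio: number of unique forms (types) over total tokens.

    Higher values indicate greater lexical diversity in the sample."""
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Toy example on a whitespace-tokenized sentence (illustrative only)
tokens = "le chat dort et le chien dort".split()
print(round(type_token_ratio(tokens), 3))  # 5 types / 7 tokens -> 0.714
```

In practice TTR is sensitive to sample size, which is why the statistics above are computed over equally sized samples.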
For all sub-portions of CaBeRnet, a visual inspection was
performed to remove section titles and redundant meta-
information linked to the publishing schemes of each of the
six news editors included. This was achieved manually by
compiling a rich set of regular expressions specific to each
textual source, yielding clean plain text as an outcome.
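A cleanup step of this kind could be sketched as follows. The patterns below are hypothetical placeholders, not the actual expressions used, which were hand-tailored to each of the six sources and are not published:

```python
import re

# Hypothetical cleanup patterns, for illustration only
SECTION_TITLE = re.compile(r"(?m)^(Chapitre|CHAPITRE)\s+\S+$")  # section titles
PUBLISHER_META = re.compile(r"(?m)^©.*$")                       # publisher meta-information
BLANK_RUNS = re.compile(r"\n{3,}")                              # runs of blank lines

def clean(text):
    text = SECTION_TITLE.sub("", text)   # drop section titles
    text = PUBLISHER_META.sub("", text)  # drop publishing-scheme residue
    text = BLANK_RUNS.sub("\n\n", text)  # collapse leftover blank lines
    return text.strip()

sample = "Chapitre Un\nIl était une fois un chat.\n\n© Éditions X, 2019\nFin."
print(clean(sample))
```

In the actual pipeline, one such set of patterns would be maintained per textual source, since each publisher follows its own layout conventions.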
2.2. French Children Book Test (CBT-fr)
The French Children Book Test (CBT-fr) was built upon
its original English version, the Children Book Test (CBT)
of Hill et al. (2015) 6 , which consists of books freely available
from Project Gutenberg (www.gutenberg.org).
Using youth literature and children's books guarantees a clear
narrative structure and a large amount of dialogue, which
enriches the literary style of this corpus with an oral register.
", "text": "The English version of this corpus was originally built as benchmark data-set to test how well language models capture meaning in context. It contains 108 books, and a vocabulary size of 53,628. French version of CBT, named CBT-fr, was constructed to guarantee enough linguistic similarities between the collected books in the two languages. 104 freely available books were included. One third of the books were purposely chosen because they were classical translations of English literary classics. Chapter heads, titles, notes and all types of editorial information were removed to obtain a plain narrative text. The effort of keeping proportion, genre, domain, and time as equal as possible yields a multilingual set of comparable corpora with a similar balance and representativeness." }, "TABREF4": { "html": null, "num": null, "type_str": "table", "content": "
CORPUS     WORD FORMS      TOKENS          SENTENCES
OSCAR-fr   23 212 459 287  27 439 082 933  1 003 261 066
Wiki-fr    665 599 545     802 283 130     21 775 351
CaBeRnet   697 119 013     830 894 133     54 216 010
CBT-fr     5 697 584       6 910 201       317 239
", "text": "in this data-set (660 million words) sentences are relatively longer compared to other corpora. It has the advantage of having a comparable size to CaBeRnet, but its homogeneity in terms of written genre is set to Wikipedia entries descriptive style." }, "TABREF5": { "html": null, "num": null, "type_str": "table", "content": "", "text": "Comparing the corpora under study." }, "TABREF7": { "html": null, "num": null, "type_str": "table", "content": "
: Lexical statistics on morphological richness over
randomly selected samples of 3 million words from each
corpus. nb: number
", "text": "" }, "TABREF8": { "html": null, "num": null, "type_str": "table", "content": "", "text": "Method 1) yields the following four language models which were pre-trained on the four corpora under comparison : ELMo OSCAR , ELMo Wikipedia , ELMo CaBeRnet and ELMo CBT ." }, "TABREF11": { "html": null, "num": null, "type_str": "table", "content": "
: Final POS and dependency parsing scores on 4 French treebanks (French GSD, Spoken, Sequoia and ParTUT),
reported on test sets (averaged over 4 runs) assuming gold tokenization. Best scores in bold, second-best underlined,
state-of-the-art results in italics.
NER - RESULTS on FTB                    Precision   Recall   F1
Baseline Models
SEM (CRF) (Dupont, 2018)                87.89       82.34    85.02
LSTM-CRF (Dupont, 2018)                 87.23       83.96    85.57
LSTM-CRF test models                    85.87       81.35    83.55
+FastText                               88.53       84.63    86.53
+FastText+ELMo CBT                      79.77       77.63    78.69
+FastText+ELMo Wikipedia                88.87       87.56    88.21
+FastText+ELMo CaBeRnet                 88.91       87.22    88.06
+FastText+ELMo OSCAR                    88.89       88.43    88.66
+FastText+ELMo OSCAR+CaBeRnet           90.70       89.12    89.93
State-of-the-art Models
CamemBERT (Martin et al., 2019)         88.35       87.46    87.93
", "text": "" }, "TABREF12": { "html": null, "num": null, "type_str": "table", "content": "", "text": "NER Results on French Treebank (FTB): best scores, second to best." } } } }