|
{ |
|
"paper_id": "N19-1019", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:59:34.685410Z" |
|
}, |
|
"title": "How Bad are PoS Taggers in Cross-Corpora Settings? Evaluating Annotation Divergence in the UD Project", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wisniewski", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universit\u00e9 Paris-Saclay", |
|
"location": { |
|
"postCode": "F-91405", |
|
"settlement": "Orsay", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "guillaume.wisniewski@limsi.fr" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Yvon", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universit\u00e9 Paris-Saclay", |
|
"location": { |
|
"postCode": "F-91405", |
|
"settlement": "Orsay", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "francois.yvon@limsi.fr" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The performance of Part-of-Speech tagging varies significantly across the treebanks of the Universal Dependencies project. This work points out that these variations may result from divergences between the annotation of train and test sets. We show how the annotation variation principle, introduced by Dickinson and Meurers (2003) to automatically detect errors in gold standard, can be used to identify inconsistencies between annotations ; we also evaluate their impact on prediction performance.", |
|
"pdf_parse": { |
|
"paper_id": "N19-1019", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The performance of Part-of-Speech tagging varies significantly across the treebanks of the Universal Dependencies project. This work points out that these variations may result from divergences between the annotation of train and test sets. We show how the annotation variation principle, introduced by Dickinson and Meurers (2003) to automatically detect errors in gold standard, can be used to identify inconsistencies between annotations ; we also evaluate their impact on prediction performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The performance of Part-of-Speech (PoS) taggers significantly degrades when they are applied to test sentences that depart from training data. To illustrate this claim, Table 1 reports the error rate achieved by our in-house PoS tagger on the different combinations of train and test sets of the French treebanks of the Universal Dependencies (UD) project (Nivre et al., 2018) . 1 It shows that depending on the train and test sets considered, the performance can vary by a factor of more than 25.", |
|
"cite_spans": [ |
|
{ |
|
"start": 356, |
|
"end": 376, |
|
"text": "(Nivre et al., 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 176, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Many studies (Foster, 2010; Plank et al., 2014) attribute this drop in accuracy to covariate shift (Shimodaira, 2000) , characterizing the differences between domains by a change in the marginal distribution p(x) of the input (e.g. increase of out-of-vocabulary words, missing capitalization, different usage of punctuation, etc), while assuming that the conditional label distribution remains unaffected.", |
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 27, |
|
"text": "(Foster, 2010;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 28, |
|
"end": 47, |
|
"text": "Plank et al., 2014)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 99, |
|
"end": 117, |
|
"text": "(Shimodaira, 2000)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work adopts a different point of view : we believe that the variation in tagging performance is due to a dataset shift (Candela et al., 2009) , i.e. a change in the joint distribution of the features and labels. We assume that this change mainly results 1. See Section 2 for details regarding our experimental setting from incoherences in the annotations between corpora or even within the same corpus. Indeed, ensuring inter-annotator agreement in PoS tagging is known to be a difficult task as annotation guidelines are not always interpreted in a consistent manner (Marcus et al., 1993) . For instance, Manning (2011) shows that many errors in the WSJ corpus are just mistakes rather than uncertainties or difficulties in the task ; Table 2 reports some of these annotation divergences that can be found in UD project. The situation is naturally worse in cross-corpora settings, in which treebanks are annotated by different laboratories or groups.", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 146, |
|
"text": "(Candela et al., 2009)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 573, |
|
"end": 594, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 611, |
|
"end": 625, |
|
"text": "Manning (2011)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 741, |
|
"end": 748, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The contribution of this paper is threefold : -we show that, as already pointed out by de Marneffe et al. (2017) , the variation principle of Boyd et al. (2008) can be used to flag potential annotation discrepancies in the UD project. Building on this principle, we introduce, to evaluate the annotation consistency of a corpus, several methods and metrics that can be used, during the annotation to improve the quality of the corpus. -we generalize the conclusions of Manning (2011), highlighting how error rates in PoS tagging are stemming from the poor quality of annotations and inconsistencies in the resources ; we also systematically quantify the impact of annotation variation on PoS tagging performance for a large number of languages and corpora. -we show that the evaluation of PoS taggers in cross-corpora settings (typically in domain adaptation experiments) is hindered by systematic annotation discrepancies between the corpora and quantify the impact of this divergence on PoS tagger evaluation. Our observations stress the fact that comparing in-and out-domain scores as many works do (e.g. to evaluate the quality of a domain adaptation method or the measure the difficulty of the domain adaptation task) can be flawed and that this metrics has to be corrected to take into account the annotation divergences that exists between corpora. The rest of this paper is organized as follows. We first present the corpora and the tools used in our experiments ( \u00a7 2). We then describe the annotation variation principle of Dickinson and Meurers (2003) ( \u00a7 3) and its application to the treebanks of the Universal Dependencies project ( \u00a7 4). We eventually assess the impact of annotation variations on prediction performance ( \u00a7 5 and \u00a7 6).", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 112, |
|
"text": "Marneffe et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 160, |
|
"text": "Boyd et al. (2008)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1534, |
|
"end": 1562, |
|
"text": "Dickinson and Meurers (2003)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1563, |
|
"end": 1569, |
|
"text": "( \u00a7 3)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The code and annotations of all experiments are available on the first author website. 2 For the sake of clarity, we have only reported our observations for the English treebanks of the UD project and, sometimes, for the French treebanks (because it has seven treebanks). Similar results have however been observed for other languages and corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Data All experiments presented in this work use the Universal Dependencies (UD) 2.3 dataset (Nivre et al., 2018 ) that aims at developing cross-linguistically consistent treebank annotations for a wide array of languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 111, |
|
"text": "(Nivre et al., 2018", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This version of the UD project contains 129 treebanks covering 76 languages. Among those, 97 treebanks define a train set that contains between 19 sentences and 68,495 sentences and a test set that contains between 34 and 10,148 sentences. For 21 languages, several test sets are available : there are, for instance, 7 test sets for French, 2. https://perso.limsi.fr/wisniews/ recherche/#coherence 6 for English, 5 for Czech and 4 for Swedish, Chinese, Japanese, Russian and Italian. Overall, it is possible to train and test 290 taggers (i.e. there are 290 possible combinations of a train and a test set of the same language), 191 of these conditions (i.e. pairs of a train set and a test set) correspond to a cross-corpus setting and can be considered for domain adaptation experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Many of these corpora 3 result from an automatic transformation (with, for some of them, manual corrections) from existing dependency or constituent treebanks (Bosco et al., 2013; Lipenkova and Sou\u010dek, 2014) . Because most treebanks have been annotated and/or converted independently by different groups, 4 the risk of inconsistencies and errors in the application of annotation guidelines is increased. There may indeed be several sources of inconsistencies in the gold annotations : in addition to the divergences in the theoretical linguistic principles that governed the design of the original annotation guidelines, inconsistencies may also result from automatic (pre-)processing, human post-editing, or human annotation. Actually, several studies have recently pointed out that treebanks for the same language are not consistently annotated (Vilares and G\u00f3mez-Rodr\u00edguez, 2017; Aufrant et al., 2017) . In a closely related context, Wisniewski et al. (2014) have also shown that, in spite of common annotation guidelines, one of the main bottleneck in cross-lingual transfer between UD corpora is the difference in the annotation conventions across treebanks and languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 179, |
|
"text": "(Bosco et al., 2013;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 207, |
|
"text": "Lipenkova and Sou\u010dek, 2014)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 847, |
|
"end": 882, |
|
"text": "(Vilares and G\u00f3mez-Rodr\u00edguez, 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 883, |
|
"end": 904, |
|
"text": "Aufrant et al., 2017)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 937, |
|
"end": 961, |
|
"text": "Wisniewski et al. (2014)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3. For PoS, only 23 treebanks have been manually annotated natively with the Universal PoS tagset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "4. almost 65% of the UD contributors have participated in the annotation of only one corpus ; for more than 15% of the treebanks all contributors have annotated a single corpus. With regard to the effect of the programme on the convergence of high level ADJ training for trainers , it was not possible to make an assessment as there was not sufficient information on the link between national strategies and the activities under Pericles .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "With a view to enabling the assessment of the effect of the programme , among others on the convergence of high level NOUN training for trainers , the evaluator recommends the preparation of a strategy document , to be finalised before the new Pericles enters into effect .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Notice NOUN Regarding Privacy and Confidentiality : PaineWebber reserves the right to monitor and review the content of all e-mail communications sent and or received by its employees .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Notice PROPN Regarding Privacy and Confidentiality : PaineWebber reserves the right to monitor and review the content of all e-mail communications sent and or received by its employees .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The above applies to the Work as incorporated in a Collective Work , but this does not require the Collective Work apart from the Work itself to be made subject ADJ to the terms of this License.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The above applies to the Derivative Work as incorporated in a Collective Work , but this does not require the Collective Work apart from the Derivative Work itself to be made subject NOUN to the terms of this License . ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In all our experiments, we use a history-based model (Black et al., 1992 ) with a LaSO-like training method (Daum\u00e9 III and Marcu, 2005) . This model reduces PoS tagging to a sequence of multi-class classification problems : the PoS of the words in the sentence are predicted one after the other using an averaged perceptron. We consider the standard feature set for PoS tagging (Zhang and Nivre, 2011) : current word, two previous and following words, the previous two predicted labels, etc. This 'standard' feature set has been designed for English and has not been adapted to the other languages considered in our experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 72, |
|
"text": "(Black et al., 1992", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 108, |
|
"end": 135, |
|
"text": "(Daum\u00e9 III and Marcu, 2005)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 401, |
|
"text": "(Zhang and Nivre, 2011)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PoS tagger", |
|
"sec_num": null |
|
}, |
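{
"text": "A minimal sketch of the kind of history-based tagger described above: tags are predicted left to right by a perceptron over simple contextual features. It is a hypothetical illustration (plain perceptron updates rather than the averaged perceptron with LaSO-like training used in our experiments, and a reduced feature set), not the actual implementation.\n\nfrom collections import defaultdict\n\ndef features(words, tags, i):\n    # reduced contextual feature set: current word, neighbouring words, tag history\n    return ['w0=' + words[i],\n            'w-1=' + (words[i - 1] if i > 0 else '<s>'),\n            'w+1=' + (words[i + 1] if i + 1 < len(words) else '</s>'),\n            't-1=' + (tags[i - 1] if i > 0 else '<s>'),\n            't-2=' + (tags[i - 2] if i > 1 else '<s>')]\n\nclass HistoryBasedTagger:\n    def __init__(self, tagset):\n        self.tagset = list(tagset)\n        self.w = defaultdict(float)  # (feature, tag) -> weight\n\n    def score(self, feats, tag):\n        return sum(self.w[(f, tag)] for f in feats)\n\n    def tag(self, words):\n        tags = []\n        for i in range(len(words)):\n            feats = features(words, tags, i)\n            tags.append(max(self.tagset, key=lambda t: self.score(feats, t)))\n        return tags\n\n    def train(self, sentences, epochs=5):\n        # sentences: list of (words, gold_tags) pairs\n        for _ in range(epochs):\n            for words, gold in sentences:\n                pred = self.tag(words)\n                for i, (p, g) in enumerate(zip(pred, gold)):\n                    if p != g:\n                        for f in features(words, gold, i):\n                            self.w[(f, g)] += 1.0\n                            self.w[(f, p)] -= 1.0\n        return self\n\nTraining such a model on the sentences of one UD treebank and evaluating it on the test set of another reproduces the cross-corpus conditions studied in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PoS tagger",
"sec_num": null
},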
|
{ |
|
"text": "Our PoS tagger achieves an average precision of 91.10% over all UD treebanks, a result comparable to the performance of UDPipe 1.2 (Straka and Strakov\u00e1, 2017) , the baseline of CoNLL'17 Shared Task 'Multilingual Parsing from Raw Text to Universal Dependencies' that achieves an average precision of 91.22%. When not otherwise specified, all PoS tagging scores reported below are averaged over 10 runs (i.e. independent training of a model and evaluation of the test performance).", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 158, |
|
"text": "(Straka and Strakov\u00e1, 2017)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PoS tagger", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The annotation variation principle (Boyd et al., 2008) states that if two identical sequences appear with different annotations, one of these two label sequences may be inconsistently annotated. Our work relies on this principle to identify discrepancies in the PoS annotation of treebanks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 54, |
|
"text": "(Boyd et al., 2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation variation principle", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We call repeat a sequence of words that appears in, at least, two sentences and suspicious repeat a repeat that is annotated in at least two different ways. Identifying suspicious repeats requires, first, to find all sequences of words that appear in two different sentences ; this is an instance of the maximal repeat problem : a maximal repeat, is a substring that occurs at least in two different sentences and cannot be extended to the left or to right to a longer common substring. Extracting maximal repeats allows us to find all sequence of words common to at least two sentences without extracting all their substrings. This problem can be solved efficiently using Generalized Suffix Tree (GST) (Gusfield, 1997) : if the corpus contains n words, extracting all the maximal repeats takes O (n) to build the GST and O (n) to list all the repeats. PoS annotations for these repeats can then be easily extracted and the ones that are identical can be filtered out to gather all suspicious repeats in a set of corpora. A detailed description of our implementation can be found in (Wisniewski, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1083, |
|
"end": 1101, |
|
"text": "(Wisniewski, 2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation variation principle", |
|
"sec_num": "3" |
|
}, |
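{
"text": "To make the repeat-extraction procedure concrete, the following sketch lists suspicious repeats in a small corpus. It is an assumption-laden illustration: it enumerates word n-grams by brute force and does not enforce maximality, whereas our implementation relies on a generalized suffix tree for O(n) extraction; the (words, tags) corpus format is also hypothetical.\n\nfrom collections import defaultdict\n\ndef suspicious_repeats(corpus, min_len=2, max_len=10):\n    # corpus: list of (words, tags) sentence pairs, e.g. read from CoNLL-U files\n    labelings = defaultdict(set)  # word n-gram -> set of observed tag n-grams\n    seen_in = defaultdict(set)    # word n-gram -> sentences it occurs in\n    for idx, (words, tags) in enumerate(corpus):\n        for n in range(min_len, max_len + 1):\n            for i in range(len(words) - n + 1):\n                ngram = tuple(words[i:i + n])\n                labelings[ngram].add(tuple(tags[i:i + n]))\n                seen_in[ngram].add(idx)\n    # a repeat occurs in at least two sentences; it is suspicious when it is\n    # annotated in at least two different ways\n    return {ngram: labs for ngram, labs in labelings.items()\n            if len(seen_in[ngram]) >= 2 and len(labs) >= 2}\n\nApplied to the concatenation of the train, development and test sets of one or two treebanks, such a function yields the kind of suspicious repeats shown in Tables 2 and 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation variation principle",
"sec_num": "3"
},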
|
{ |
|
"text": "Filtering heuristics Suspicious repeats can of course correspond to words or structures that are ambiguity", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation variation principle", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The early voting suggests that this time the Latin Americans will come out to PART vote in greater numbers , but it is unclear whether the increase will have an impact .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation variation principle", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Keep his cage open and go on your computer , or read a book , etc and maybe he will come out to ADP you .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation variation principle", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Trudeau will extend that invitation to the 45th president NOUN of the United ADJ States NOUN , whoever he or she may be .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "inconsistency", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "I am GEORGE WALKER BUSH , son of the former president PROPN of the United PROPN States PROPN of America George Herbert Walker Bush , and currently serving as President of the United States of America . truly ambiguous. We consider two heuristics to filter out suspicious repeats. First with the size heuristic, we assume that longer suspicious repeats are more likely to result from annotation errors than shorter ones. For instance, Table 2 displays suspicious repeats with at least 10 words that all stem from an annotation error.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 434, |
|
"end": 441, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "inconsistency", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Second, with the disjoint heuristic, we assume that actual ambiguities will be reflected in intracorpus suspicious repeats, whereas errors will likely correspond to cases where differences in labelings are observed in different corpora. Formally, the disjoint heuristic flags repeats m occurring in at least two corpora A and B, and such that the set of labelings of m observed in A are disjoint from the set of labelings observed in B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "inconsistency", |
|
"sec_num": null |
|
}, |
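{
"text": "A minimal sketch of the two filtering heuristics, assuming suspicious repeats are represented as mappings from a word n-gram to the set of labelings observed in each corpus (the data layout and function names are hypothetical):\n\ndef size_filter(repeats, min_words=3):\n    # size heuristic: keep only suspicious repeats of at least min_words words\n    return {m: labs for m, labs in repeats.items() if len(m) >= min_words}\n\ndef disjoint_filter(labelings_a, labelings_b):\n    # disjoint heuristic: flag a repeat m occurring in corpora A and B when the\n    # labelings observed in A share no element with those observed in B\n    flagged = {}\n    for m in labelings_a.keys() & labelings_b.keys():\n        if labelings_a[m].isdisjoint(labelings_b[m]):\n            flagged[m] = (labelings_a[m], labelings_b[m])\n    return flagged\n\nWith this formulation, an actual ambiguity whose two labelings both appear within each corpus is not flagged, whereas a systematic cross-corpus divergence is.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "inconsistency",
"sec_num": null
},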
|
{ |
|
"text": "For instance, in French, \"la porte\" can either be a determiner and a noun (e.g. in the sentence \"la porte est ferm\u00e9e\" -the door is closed) or a pronoun followed by a verb (e.g. in the sentence \"je la porte\" -I carry her). Observing these two possible labelings in at least two corpora is a good sign of an actual ambiguity. The disjoint heuristic allows us to detect that this suspicious repeat is an actual ambiguity. To reiterate, the intuition beyond the disjoint heuristic is that for ambiguities, the two possible annotations will appear in, at least, one of the two corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "inconsistency", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Conversely, systematic divergences in labeling observed across corpora are likely to be errors : for instance, in English, depending on the treebank, cardinal points are labeled as either proper nouns or as nouns. In this case, the set of labelings of the repeats in the first corpus is disjoint from the set of labeling in the second corpus and the the disjoint heuristic captures the annotation inconsistency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "inconsistency", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Analyzing filtering heuristics To further analyze these two heuristics, we have manually annotated the suspicious repeats between the train set of the English EWT corpus and the test set of the English PUD corpus. For each suspicious repeat, we record whether it is an annotation error or an actual ambiguity. Examples of annotations are given in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 347, |
|
"end": 354, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "inconsistency", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Results are in Table 4 . It appears that, for the heuristics considered, a large part of the suspicious repeats correspond to annotation discrepancies rather than ambiguities. In many cases, these discrepancies result from systematic divergences in the interpretation of the UD guidelines. 5 For instance, the contraction \"n't\" is always labeled as a particle in the train set of the EWT corpus, but either as particle or an adverb in the PUD corpus. Most of these systematic differences involve distinction between nouns and proper nouns, auxiliaries and verbs and adjectives and verbs (for past participles).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 22, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "inconsistency", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We will first show how the annotation variation principle allows us to characterize the noise and/or the difficulty of PoS tagging. Table 5 reports the number of repeats and suspicious repeats in the English corpora of the UD project. These numbers have been calculated by applying the method described in the previous section to the concatenation of train, development and test sets of each treebanks. To calibrate these measures, we conducted 5. Discrepancies are not only due to improper interpretations of the guidelines, but also sometimes to actual ambiguities in the annotation rules. Table 4 : Percentage of suspicious repeats between the EWT and PUD corpora that contain an annotation inconsistency according to a human annotator either when the disjoint heuristic is used or when only suspicious repeats with at least n words are considered.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 139, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 599, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Variations in the UD", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "the same experiments with the Wall Street Journal (Marcus et al., 1993) , 6 the iconic corpus of PoS tagging for which a thorough manual analysis of the annotation quality is described in (Manning, 2011). The observations reported in Table 5 show that the number of repeats varies greatly from one corpus to another, which is not surprising considering the wide array of genres covered by the treebanks that includes sentences written by journalists or learner of English (the genres with the largest number of repeats) or sentences generated by users on social media (that contain far less repeated parts). These observations also show that the percentage of repeats that are not consistently annotated is slightly larger in the UD treebanks than in the WSJ, a corpus in which a manual inspection of the corpus reveals that many variations are 'mistakes' rather than representing uncertainties or difficulties in the PoS prediction (Manning, 2011).", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 71, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 241, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Variations in the UD", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "More interestingly, Table 6 shows the percen- 6. The Penn Treebank tagset has been manually converted to the Universal PoS tagset using the mapping of (Petrov et al., 2012) generalized to the extended UD PoS tagset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 172, |
|
"text": "(Petrov et al., 2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 27, |
|
"text": "Table 6", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Variations in the UD", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "tage of repeats that are not consistently annotated for all possible combinations of a train and a test sets (ignoring sequences of words that do not appear at least once in both corpora). It appears that in all cases there are (sometimes significantly) more variations in annotations in cross-treebank settings than in situations where the train and the test sets belong to the same treebank. This observation suggests that there may be systematic differences in the annotations of different treebanks which could make the domain adaptation setting artificially more difficult.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Variations in the UD", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To characterize the difference between two treebanks, we measure the error rate of a binary classifier deciding from which corpus an annotated sentence is coming from. 7 Intuitively, the higher this error rate, the more difficult it is to distinguish sentences of the two corpora and the more similar the treebanks are. More formally, it can be shown (Ben-David et al., 2010 ) that this error rate is an estimation of the H -divergence (Kifer et al., 2004) , a metric introduced in machine learning theory to quantify the impact of a change in domains by measuring the divergence between the distributions of examples sampled from two datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 351, |
|
"end": 374, |
|
"text": "(Ben-David et al., 2010", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 456, |
|
"text": "(Kifer et al., 2004)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do treebanks differ ?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In our experiments, we use a Naive Bayes classifier 8 and three sets of features to describe a sentence pair and their annotation : words, in which each example is represented by the bag of its 1-gram and 2-gram of words ; labels, in which examples are represented in the same way, but this time, considering PoS ; and combi which uses the same representation after the words of all the treebanks have been concatenated with their PoS. The first set aims at capturing a potential covariate shift, the last two target divergence in annotations. To reduce the impact of the strong between-class imbalance, 9 in all our experiments we sub-sample the largest set to ensure that the two datasets we try to distinguish always have the same number of examples. All scores in this experiment are averaged over 20 train-test splits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do treebanks differ ?", |
|
"sec_num": "4.2" |
|
}, |
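{
"text": "A sketch of this classification experiment using scikit-learn. For simplicity it classifies single sentences in the combi representation rather than sentence pairs (cf. footnote 7), and the sub-sampling is reduced to drawing equal-sized samples; these simplifications and the helper names are assumptions, not the exact protocol.\n\nimport random\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.naive_bayes import MultinomialNB\n\ndef combi(words, tags):\n    # 'combi' representation: every token concatenated with its PoS tag\n    return ' '.join(w + '/' + t for w, t in zip(words, tags))\n\ndef treebank_separability(corpus_a, corpus_b, seed=0):\n    # corpus_*: lists of (words, tags) sentences from two treebanks\n    random.seed(seed)\n    n = min(len(corpus_a), len(corpus_b))  # balance the two classes\n    docs = [combi(w, t) for w, t in random.sample(corpus_a, n)]\n    docs += [combi(w, t) for w, t in random.sample(corpus_b, n)]\n    labels = [0] * n + [1] * n\n    x = CountVectorizer(ngram_range=(1, 2)).fit_transform(docs)\n    # high accuracy means the two treebanks are easy to tell apart\n    return cross_val_score(MultinomialNB(), x, labels, cv=5).mean()\n\nUsing the word forms alone (words) or the PoS sequences alone (labels) instead of combi gives the other two feature sets of Table 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How do treebanks differ ?",
"sec_num": "4.2"
},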
|
{ |
|
"text": "7. More precisely, the classifier analyses pairs of sentences and predicts whether they belong to th same corpus or not.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do treebanks differ ?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "8. We used the implementation provided by (Pedregosa et al., 2011) without tuning any hyper-parameters. Experiments with a logistic regression show similar results. 9. The ratio between the number of examples in the two corpora can be as large as 88. Table 7 reports the results achieved with the different features sets averaged over all combinations of a train and a test set of the same language and gives the percentage of conditions for which each feature set achieved the best results ; Figure 1 details these results for the English and French treebanks. Results for other languages show similar patterns. These results suggest that, in many cases, it is possible to accurately identify from which treebank a sentence and its annotation are coming, although these raw numbers are difficult to interpret as prediction performances are averaged over many different experimental conditions. In more than 50% of the cases, combining words to their PoS results in the best performance, which is consistent to the qualitative study reported in Section 3 : some words appear in two corpora with different PoS allowing to distinguish these corpora. This observation strongly suggests that divergence in annotations across corpora are often genuine.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 66, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 258, |
|
"text": "Table 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 501, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "How do treebanks differ ?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To study annotation divergence in the UD project, we propose to analyze suspicious repeats (i.e. sequence of repeated words with different annotations). We start by extracting all the suspicious repeats that can be found when considering all the possible combinations of a train set and a test features median % best words 78.2 31.0 labels 70.9 13.5 combi 78.8 55.5 Table 7 : Precision (%) achieved over all cross-treebank conditions by a classifier identifying to which treebank a sentence belongs to.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 373, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Impact of annotation variation on prediction performance", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "or development set of a given language. These matches are then filtered using the heuristics described in \u00a73. There are, overall, 357, 301 matches in the UD project, 69, 157 of which involve 3 words or more and 14, 142 5 words or more ; the disjoint heuristic selects 122, 634 of these matches (see Table 8 in \u00a7A).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 306, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Impact of annotation variation on prediction performance", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To highlight the connection between prediction errors and annotation divergence, we compute, for each possible combination of a train and a test set (considering all languages in the UD project), the correlation between the error rate achieved on a corpus B when training our PoS on a corpus A and the number of suspicious repeats between A and B normalized by the number of tokens in A and B. The Spearman correlation coefficient between these two values is 0.72 indicating a correlation generally qualified as 'strong' following the interpretation proposed by (Cohen, 1988) : the more there are sequences of words with different annotations in the train and test sets, the worse the tagging performance, which shows that annotation inconsistencies play an important role in explaining the poor performance of PoS tagger on some conditions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 562, |
|
"end": 575, |
|
"text": "(Cohen, 1988)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of annotation variation on prediction performance", |
|
"sec_num": "5" |
|
}, |
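{
"text": "A sketch of this correlation analysis; the per-condition dictionary layout and field names are hypothetical.\n\nfrom scipy.stats import spearmanr\n\ndef repeat_error_correlation(conditions):\n    # conditions: one entry per (train set, test set) combination, holding the\n    # PoS error rate on the test set and the number of suspicious repeats\n    # between the two sets, normalized by their number of tokens\n    error_rates = [c['error_rate'] for c in conditions]\n    densities = [c['n_suspicious'] / c['n_tokens'] for c in conditions]\n    rho, p_value = spearmanr(error_rates, densities)\n    return rho, p_value\n\nRun over all train and test combinations, such a computation yields the kind of Spearman coefficient (0.72) reported above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of annotation variation on prediction performance",
"sec_num": "5"
},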
|
{ |
|
"text": "For a more precise picture, we also estimate the number of suspicious repeats that contain a prediction error. Using the disjoint heuristics to filter suspicious repeats, it appears that 70.2% (resp. 73.0%) of the suspicious repeats for English (resp. French) contain a prediction error. As expected, these numbers fall to 51.7% (resp. 49.9%) when the suspicious repeats are not filtered and therefore contain more ambiguous words. Figure 2 displays a similar trend when the suspicious repeats are filtered by their length ; similar results are observed for all other languages.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 440, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Impact of annotation variation on prediction performance", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "These observations suggest that annotation variations often results in prediction errors, espe- cially when there are good reasons to assume that the variation actually stems from an inconsistency. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of annotation variation on prediction performance", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To evaluate the impact of annotation errors on prediction performance, we propose, for each combination of a train and a test set, to train a PoS tagger and compare \u03b5 full , the error rate achieved on the full test set to \u03b5 ignoring the error rate achieved ignoring errors that occur in a suspicious repeat.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Re-Assessing the Performance of PoS Tagger in Cross-Corpus Setting", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "More precisely, \u03b5 ignoring is defined as :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Re-Assessing the Performance of PoS Tagger in Cross-Corpus Setting", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\u03b5 ignoring = # {err} \u2212 # {err in suspicious repeats} # {words}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Re-Assessing the Performance of PoS Tagger in Cross-Corpus Setting", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "(1) where # {err in suspicious repeats} in the number of errors in the suspicious repeats that have survived filtering. Intuitively \u03b5 ignoring can be seen as an 'oracle' score corresponding to a tagger that would always predict the labels of suspicious repeat correctly. In the following, We will consider three different filters : the disjoint heuristic, keeping only suspicious repeats with more than three words and keeping all of them. Figure 3 reports these errors rates for French and English. Results for other languages show similar results. As expected, ignoring errors in suspicious repeats significantly improve prediction performance. It even appears that \u03b5 ignoring is often on par with the score achieved on in-domain sets. Overall, in more than 43% (resp. 25%) of all the conditions the error rate ignoring errors in suspicious repeats filtered with the disjoint heuristic (resp. minimum heuristic) is lower than the error rate achieved on in-domain data. These values are naturally over-estimated as, in these experiments, we remove all potential annotation errors as well as words and structures that are ambiguous and therefore are more difficult to label. They can however be considered as lower-bound on the predic- tion quality.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 440, |
|
"end": 448, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Re-Assessing the Performance of PoS Tagger in Cross-Corpus Setting", |
|
"sec_num": "6" |
|
}, |
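{
"text": "A minimal sketch of Equation (1), assuming token-level error indicators and a mask marking the tokens that belong to a suspicious repeat surviving the chosen filter (both inputs are hypothetical representations).\n\ndef error_rates(errors, in_suspicious_repeat):\n    # errors[i] is True when the tagger mislabels token i of the test set;\n    # in_suspicious_repeat[i] is True when token i belongs to a suspicious\n    # repeat kept by the filter (disjoint heuristic, size heuristic, or none)\n    n_tokens = len(errors)\n    n_err = sum(errors)\n    n_err_suspicious = sum(e and s for e, s in zip(errors, in_suspicious_repeat))\n    eps_full = n_err / n_tokens\n    eps_ignoring = (n_err - n_err_suspicious) / n_tokens  # Equation (1)\n    return eps_full, eps_ignoring",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-Assessing the Performance of PoS Tagger in Cross-Corpus Setting",
"sec_num": "6"
},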
|
{ |
|
"text": "To assess their quality, we have manually checked all the suspicious repeats between the train set of French UD and the test set of the French FTB correcting inconsistencies and errors (almost 2,000 PoS were modified). 10 When trained on the original UD corpus, the PoS tagger achieved an error rate of 6.78% on the FTB corpus (4.51% on indomain data). After correcting inconsistencies, the out-domain error rate falls down to 5.11%. This value is close to the error rate ignoring suspicious repeats containing three and more words, showing the validity of the heuristics we have considered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Re-Assessing the Performance of PoS Tagger in Cross-Corpus Setting", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this work, we have shown that, for PoS tagging, many prediction errors in cross-corpora settings (which is a typical domain adaptation scenario) stem from divergence between annotations. We have also described a method to quantify this divergence. We have only considered here corpora from the UD project and PoS annotation, but we consider that our method is very generic and can be easily applied to other corpora or tasks (e.g. tokenization, dependency parsing, etc.) that we will address in future work. We also plan to see how the different experiments we have made to identify annotation errors and inconsistencies can be used during the annotation process to reduce the workload 10. The 'corrected' corpora will be made available upon publication. In this experiment, the impact of annotation errors is under-estimated as we have only corrected errors that appear in a suspicious repeat without trying to 'generalize' these corrections to words that appear only in one corpus. of annotators and help them creating high-quality corpora. Table 8 : Number of repeated sequence of words across the different combinations of a train set and a test set ('repeats' column) and number of these sequences that are annotated differently ('suspicious repeats' column) when no filtering is applied.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1046, |
|
"end": 1053, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work has been partly funded by the French Agence Nationale de la Recherche under Par-SiTi (ANR-16-CE33-0021) and MultiSem projects (ANR-16-CE33-0013).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Lauriane", |
|
"middle": [], |
|
"last": "Aufrant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wisniewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Yvon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the CoNLL 2017 Shared Task : Multilingual Parsing from Raw Text to Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "163--173", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lauriane Aufrant, Guillaume Wisniewski, and Fran\u00e7ois Yvon. 2017. LIMSI@CoNLL'17 : UD shared task. In Proceedings of the CoNLL 2017 Shared Task : Multilingual Parsing from Raw Text to Universal De- pendencies, pages 163-173, Vancouver, Canada. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A theory of learning from different domains", |
|
"authors": [ |
|
{ |
|
"first": "Shai", |
|
"middle": [], |
|
"last": "Ben-David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blitzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Kulesza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"Wortman" |
|
], |
|
"last": "Vaughan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Machine Learning", |
|
"volume": "79", |
|
"issue": "", |
|
"pages": "151--175", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning, 79(1-2) :151-175.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Towards history-based grammars : Using richer models for probabilistic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Ezra", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fred", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Magerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Workshop on Speech and Natural Language, HLT'91", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "134--139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ezra Black, Fred Jelinek, John Lafferty, David M. Ma- german, Robert Mercer, and Salim Roukos. 1992. Towards history-based grammars : Using richer mo- dels for probabilistic parsing. In Proceedings of the Workshop on Speech and Natural Language, HLT'91, pages 134-139, Stroudsburg, PA, USA. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Converting italian treebanks : Towards an Italian stanford dependency treebank", |
|
"authors": [ |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Bosco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simonetta", |
|
"middle": [], |
|
"last": "Montemagni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Simi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--69", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristina Bosco, Simonetta Montemagni, and Maria Simi. 2013. Converting italian treebanks : Towards an Italian stanford dependency treebank. In Pro- ceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 61-69, Sofia, Bulgaria. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "On detecting errors in dependency treebanks", |
|
"authors": [ |
|
{ |
|
"first": "Adriane", |
|
"middle": [], |
|
"last": "Boyd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dickinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Detmar Meurers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Research on Language and Computation", |
|
"volume": "6", |
|
"issue": "2", |
|
"pages": "113--137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adriane Boyd, Markus Dickinson, and W. Detmar Meurers. 2008. On detecting errors in dependency treebanks. Research on Language and Computation, 6(2) :113-137.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Dataset Shift in Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "Joaquin", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Candela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masashi", |
|
"middle": [], |
|
"last": "Sugiyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Schwaighofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neil", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lawrence", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joaquin Q. Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence. 2009. Data- set Shift in Machine Learning. The MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Statistical Power Analysis for the Behavioral Sciences", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Cohen. 1988. Statistical Power Analysis for the Behavioral Sciences. Routledge.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning as search optimization : Approximate large margin methods for structured prediction", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 22nd International Conference on Machine Learning, ICML'05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "169--176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005. Learning as search optimization : Approximate large margin methods for structured prediction. In Proceedings of the 22nd International Conference on Machine Learning, ICML'05, pages 169-176, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Detecting errors in part-of-speech annotation", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dickinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W. Detmar", |
|
"middle": [], |
|
"last": "Meurers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "107--114", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dickinson and W. Detmar Meurers. 2003. Detecting errors in part-of-speech annotation. In Proceedings of the Tenth Conference on European Chapter of the Association for Computational Lin- guistics -Volume 1, EACL '03, pages 107-114, Stroudsburg, PA, USA. Association for Computatio- nal Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "cba to check the spelling\" : Investigating parser performance on discussion forum posts", |
|
"authors": [ |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies : The", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jennifer Foster. 2010. \"cba to check the spelling\" : In- vestigating parser performance on discussion forum posts. In Human Language Technologies : The 2010", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "381--384", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 381-384, Los Angeles, California. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Algorithms on Strings, Trees, and Sequences : Computer Science and Computational Biology", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Gusfield", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Gusfield. 1997. Algorithms on Strings, Trees, and Sequences : Computer Science and Computational Biology. Cambridge University Press, New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Detecting change in data streams", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Kifer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shai", |
|
"middle": [], |
|
"last": "Ben-David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Gehrke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Thirtieth International Conference on Very Large Data Bases", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "180--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Kifer, Shai Ben-David, and Johannes Gehrke. 2004. Detecting change in data streams. In Pro- ceedings of the Thirtieth International Conference on Very Large Data Bases -Volume 30, VLDB '04, pages 180-191. VLDB Endowment.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Converting Russian dependency treebank to Stanford typed dependencies representation", |
|
"authors": [ |
|
{ |
|
"first": "Janna", |
|
"middle": [], |
|
"last": "Lipenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Sou\u010dek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "143--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janna Lipenkova and Milan Sou\u010dek. 2014. Conver- ting Russian dependency treebank to Stanford typed dependencies representation. In Proceedings of the 14th Conference of the European Chapter of the As- sociation for Computational Linguistics, volume 2 : Short Papers, pages 143-147, Gothenburg, Sweden. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Part-of-speech tagging from 97% to 100% : Is it time for some linguistics ?", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Computational Linguistics and Intelligent Text Processing, 12th International Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "171--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning. 2011. Part-of-speech tagging from 97% to 100% : Is it time for some linguis- tics ? In Proceedings of the Computational Linguis- tics and Intelligent Text Processing, 12th Interna- tional Conference, CICLing 2011, pages 171-189. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Building a large annotated corpus of english : The penn treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Comput. Linguist", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annota- ted corpus of english : The penn treebank. Comput. Linguist., 19(2) :313-330.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Assessing the annotation consistency of the universal dependencies corpora", |
|
"authors": [ |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matias", |
|
"middle": [], |
|
"last": "Grioni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenna", |
|
"middle": [], |
|
"last": "Kanerva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Fourth International Conference on Dependency Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "108--115", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marie-Catherine de Marneffe, Matias Grioni, Jenna Kanerva, and Filip Ginter. 2017. Assessing the an- notation consistency of the universal dependencies corpora. In Proceedings of the Fourth Internatio- nal Conference on Dependency Linguistics (Depling 2017), pages 108-115. Link\u00f6ping University Elec- tronic Press.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Universal dependencies 2.3. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL)", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Abrams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u017deljko", |
|
"middle": [], |
|
"last": "Agi\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Faculty of Mathematics and Physics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Mitchell Abrams,\u017deljko Agi\u0107, Lars Ah- renberg, and other. 2018. Universal dependencies 2.3. LINDAT/CLARIN digital library at the Insti- tute of Formal and Applied Linguistics (\u00daFAL), Fa- culty of Mathematics and Physics, Charles Univer- sity.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Scikit-learn : Machine learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vanderplas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cournapeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brucher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Perrot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Duchesnay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Du- chesnay. 2011. Scikit-learn : Machine learning in Python. Journal of Machine Learning Research, 12 :2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A universal part-of-speech tagset", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). European Language Resources Association (ELRA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Importance weighting and unsupervised domain adaptation of POS taggers : a negative result", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "Johannsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "968--973", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Plank, Anders Johannsen, and Anders S\u00f8gaard. 2014. Importance weighting and unsuper- vised domain adaptation of POS taggers : a negative result. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 968-973, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Improving predictive inference under covariate shift by weighting the loglikelihood function", |
|
"authors": [ |
|
{ |
|
"first": "Hidetoshi", |
|
"middle": [], |
|
"last": "Shimodaira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Journal of Statistical Planning and Inference", |
|
"volume": "90", |
|
"issue": "2", |
|
"pages": "227--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hidetoshi Shimodaira. 2000. Improving predictive in- ference under covariate shift by weighting the log- likelihood function. Journal of Statistical Planning and Inference, 90(2) :227 -244.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe", |
|
"authors": [ |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Straka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jana", |
|
"middle": [], |
|
"last": "Strakov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the CoNLL 2017 Shared Task : Multilingual Parsing from Raw Text to Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "88--99", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Milan Straka and Jana Strakov\u00e1. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with ud- pipe. In Proceedings of the CoNLL 2017 Shared Task : Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A non-projective greedy dependency parser with bidirectional LSTMs", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vilares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "G\u00f3mez-Rodr\u00edguez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the CoNLL 2017 Shared Task : Multilingual Parsing from Raw Text to Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Vilares and Carlos G\u00f3mez-Rodr\u00edguez. 2017. A non-projective greedy dependency parser with bi- directional LSTMs. In Proceedings of the CoNLL 2017 Shared Task : Multilingual Parsing from Raw Text to Universal Dependencies, pages 152-162, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Errator : a tool to help detect annotation errors in the universal dependencies project", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wisniewski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). European Language Resource Association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Wisniewski. 2018. Errator : a tool to help detect annotation errors in the universal dependen- cies project. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eva- luation (LREC-2018). European Language Resource Association.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Crosslingual part-of-speech tagging through ambiguous learning", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wisniewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "P\u00e9cheux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Souhir", |
|
"middle": [], |
|
"last": "Gahbiche-Braham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Yvon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1779--1785", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Wisniewski, Nicolas P\u00e9cheux, Souhir Gahbiche-Braham, and Fran\u00e7ois Yvon. 2014. Cross- lingual part-of-speech tagging through ambiguous learning. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1779-1785, Doha, Qatar. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Transition-based Dependency Parsing with Rich Non-local Features", |
|
"authors": [ |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of ACL 2011, the 49th Annual Meeting of the Association for Computational Linguistics : Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based Dependency Parsing with Rich Non-local Features. In Proceedings of ACL 2011, the 49th Annual Mee- ting of the Association for Computational Linguis- tics : Human Language Technologies, pages 188- 193, Portland, Oregon, USA. Association for Com- putational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Precision of a classifier identifying to which French (top) or English (bottom) treebank a sentence belongs to. Train corpora are on the y-axis and test corpora on the x-axis." |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Percentage of suspicious repeats that contain at least one prediction error in function of their size." |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Error rate achieved by a PoS tagger on the different English treebanks of the UD project when errors in suspicious repeats are ignored. The red line indicates the error rate on in-domain data." |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"text": "PoS tagger trained and tested on all possible combinations of the French train and test sets of the UD project. To mitigate the variability of our learning algorithm, all scores are averaged over 10 training sessions.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>test \u2192</td><td>FTB</td><td colspan=\"5\">GSD ParTUT SRCMF Sequoia Spoken</td><td>PUD</td></tr><tr><td>\u2193 train</td><td/><td/><td/><td/><td/></tr><tr><td>FTB</td><td>2.8%</td><td>7.0%</td><td>6.5%</td><td>45.4%</td><td colspan=\"2\">5.4% 18.7% 12.9%</td></tr><tr><td>GSD</td><td colspan=\"2\">6.7% 3.7%</td><td>7.2%</td><td>45.5%</td><td colspan=\"2\">5.4% 16.3% 10.2%</td></tr><tr><td colspan=\"3\">ParTUT 11.2% 10.9%</td><td>5.9%</td><td>55.7%</td><td colspan=\"2\">11.3% 22.9% 15.8%</td></tr><tr><td colspan=\"3\">SRCMF 38.8% 37.8%</td><td>36.2%</td><td>7.5%</td><td colspan=\"2\">37.4% 34.7% 36.1%</td></tr><tr><td>Sequoia</td><td>7.5%</td><td>7.5%</td><td>8.4%</td><td>48.0%</td><td colspan=\"2\">4.0% 19.3% 13.6%</td></tr><tr><td colspan=\"3\">Spoken 32.1% 30.3%</td><td>25.7%</td><td>51.8%</td><td>29.5%</td><td>7.9% 30.1%</td></tr><tr><td colspan=\"2\">Table 1: Error rate (%) achieved by a</td><td/><td/><td/><td/></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Examples of annotation divergences in the English Web Treebank (EWT) corpus : these sentences share some common words (in bold) that do not have the same annotation. Only the labels that differ are represented.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Example of an actual ambiguity and of an annotation inconsistency between the English EWT and PUD corpora. Repeated words are in bold and words with different PoS in red.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "Percentage of sentences with a repeat of at least three words in the English treebanks (% sent.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"text": "Percentage of repeats between a train and a test sets that are not annotated consistently. In-domain settings (i.e. when the train and test sets come from the same treebank) are reported in bold ; for each train set, the most consistent setting is underlined.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "0% 69.0% 80.1% 77.2% 98.3% 67.8% 97.9% 82.0% 57.9% 69.4% 75.3% 98.7% 69.4% 97.3% 82.8% 73.3% 82.9% 68.5% 98.5% 72.5% 97.5% 80.1% 68.4% 79.5% 74.3% 95.7% 48.0% 94.9% 2% 66.6% 80.8% 75.8% 99.1% 67.1% 97.9% 75.1% 61.1% 77.8% 72.3% 99.6% 66.7% 98.1% 75.4% 71.0% 85.8% 63.7% 99.5% 72.8% 98.2% 71.0% 64.3% 81.1% 70.8% 98.2% 51.5% 95.8% 0% 71.9% 85.6% 79.5% 98.2% 70.7% 98.4% 85.1% 58.8% 81.4% 73.6% 98.6% 69.2% 97.7% 87.1% 76.3% 87.3% 68.0% 98.7% 75.5% 98.1% 81.6% 69.8% 86.8% 73.1% 95.2% 49.0% 95.5%", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>French</td></tr><tr><td/><td/><td/><td>features words</td><td/><td/><td/><td/><td>features labels</td><td>features combi</td></tr><tr><td>FTB GSD ParTUT Sequoia Spoken train</td><td colspan=\"3\">FTB GSD PUD ParTUT SRCMF Sequoia Spoken test 60.97.1% 95.2% 99.2% 97.8% 95.3% 93.4% 67.9%</td><td>0.5 0.6 0.7 0.8 0.9</td><td>FTB GSD ParTUT Sequoia Spoken train</td><td colspan=\"3\">FTB GSD PUD ParTUT SRCMF Sequoia Spoken test 54.97.3% 97.5% 98.8% 94.0% 89.1% 94.8% 61.3%</td><td>0.6 0.7 0.8 0.9</td><td>FTB GSD ParTUT Sequoia Spoken train</td><td>test FTB GSD PUD ParTUT SRCMF Sequoia Spoken 60.97.7% 94.9% 99.3% 97.3% 96.4% 93.1% 68.2%</td><td>0.5 0.6 0.7 0.8 0.9</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>English</td></tr><tr><td>EWT</td><td colspan=\"3\">73.0% 62.6% 73.4% 72.3% 79.1% 76.1% features words</td><td>0.80 0.85</td><td>EWT</td><td colspan=\"3\">features labels 87.1% 59.4% 66.7% 65.9% 73.9% 78.2%</td><td>0.78 0.84</td><td>EWT</td><td>features combi 87.0% 62.7% 74.1% 74.0% 77.6% 75.6%</td><td>0.84 0.90</td></tr><tr><td>GUM</td><td colspan=\"3\">79.3% 76.0% 67.2% 75.0% 71.7% 71.3%</td><td>0.75</td><td>GUM</td><td colspan=\"3\">85.5% 69.3% 56.2% 67.9% 69.1% 75.3%</td><td>0.72</td><td>GUM</td><td>89.2% 77.2% 66.6% 76.5% 72.7% 74.3%</td><td>0.78</td></tr><tr><td>LinES ParTUT train</td><td colspan=\"3\">78.1% 79.3% 78.4% 60.4% 77.9% 73.5% 85.0% 83.8% 75.7% 78.9% 74.0% 65.6%</td><td>0.65 0.70</td><td>ParTUT LinES train</td><td colspan=\"3\">86.5% 81.1% 73.0% 74.7% 67.8% 63.5% 84.0% 73.7% 71.1% 51.1% 75.1% 73.1%</td><td>0.54 0.60 0.66</td><td>ParTUT train LinES</td><td>91.3% 85.4% 78.8% 81.9% 78.6% 65.2% 87.8% 80.4% 77.9% 58.8% 78.3% 73.8%</td><td>0.60 0.66 0.72</td></tr><tr><td/><td>ESL</td><td>EWT</td><td>GUM LinES PUD ParTUT test</td><td/><td/><td>ESL</td><td>EWT</td><td>GUM LinES PUD ParTUT test</td><td>ESL</td><td>EWT</td><td>GUM LinES PUD ParTUT test</td></tr></table>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |