{ "paper_id": "P12-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:26:59.129220Z" }, "title": "Reducing Approximation and Estimation Errors for Chinese Lexical Processing with Heterogeneous Annotations", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "", "affiliation": {}, "email": "wanxiaojun@pku.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We address the issue of consuming heterogeneous annotation data for Chinese word segmentation and part-of-speech tagging. We empirically analyze the diversity between two representative corpora, i.e. Penn Chinese Treebank (CTB) and PKU's People's Daily (PPD), on manually mapped data, and show that their linguistic annotations are systematically different and highly compatible. The analysis is further exploited to improve processing accuracy by (1) integrating systems that are respectively trained on heterogeneous annotations to reduce the approximation error, and (2) retraining models with high quality automatically converted data to reduce the estimation error. Evaluation on the CTB and PPD data shows that our novel model achieves a relative error reduction of 11% over the best reported result in the literature.", "pdf_parse": { "paper_id": "P12-1025", "_pdf_hash": "", "abstract": [ { "text": "We address the issue of consuming heterogeneous annotation data for Chinese word segmentation and part-of-speech tagging. We empirically analyze the diversity between two representative corpora, i.e. Penn Chinese Treebank (CTB) and PKU's People's Daily (PPD), on manually mapped data, and show that their linguistic annotations are systematically different and highly compatible. 
The analysis is further exploited to improve processing accuracy by (1) integrating systems that are respectively trained on heterogeneous annotations to reduce the approximation error, and (2) retraining models with high quality automatically converted data to reduce the estimation error. Evaluation on the CTB and PPD data shows that our novel model achieves a relative error reduction of 11% over the best reported result in the literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A majority of data-driven NLP systems rely on large-scale, manually annotated corpora that are important for training statistical models but very expensive to build. Nowadays, for many tasks, multiple heterogeneous annotated corpora have been built and made publicly available. For example, the Penn Treebank is popular for training PCFG-based parsers, while the Redwoods Treebank is well known for HPSG research; the Propbank is favored for building general semantic role labeling systems, while the FrameNet is attractive for predicate-specific labeling. The annotation schemes of different projects usually differ, since the underlying linguistic theories vary and explain the same language phenomena in different ways. Though statistical NLP systems are usually not bound to specific annotation standards, almost all of them assume homogeneous annotation in the training corpus. The co-existence of heterogeneous annotation data therefore presents a new challenge to the consumers of such resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are two essential characteristics of heterogeneous annotations that can be utilized to reduce two main types of errors in statistical NLP, i.e. the approximation error that is due to the intrinsic suboptimality of a model and the estimation error that is due to having only finite training data. 
First, heterogeneous annotations are (similar but) different as a result of different annotation schemata. Systems respectively trained on heterogeneous annotation data can produce different but related linguistic analyses. This suggests that complementary features can be derived from heterogeneous analyses for disambiguation, and therefore the approximation error can be reduced. Second, heterogeneous annotations are (different but) similar because their linguistic analyses are highly correlated. This implies that appropriate conversions between heterogeneous corpora could be reasonably accurate, and therefore the estimation error can be reduced owing to the increase in reliable training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper explores heterogeneous annotations to reduce both approximation and estimation errors for Chinese word segmentation and part-of-speech (POS) tagging, which are fundamental steps for more advanced Chinese language processing tasks. We empirically analyze the diversity between two representative popular heterogeneous corpora, i.e. the Penn Chinese Treebank (CTB) and PKU's People's Daily (PPD) . To that end, we manually label 200 sentences from CTB with PPD-style annotations. 1 Our analysis confirms the aforementioned two properties of heterogeneous annotations. Inspired by the sub-word tagging method introduced in (Sun, 2011), we propose a structure-based stacking model to fully utilize heterogeneous word structures to reduce the approximation error. In particular, joint word segmentation and POS tagging is addressed as a two-step process. First, character-based taggers are respectively trained on heterogeneous annotations to produce multiple analyses. The outputs of these taggers are then merged into sub-word sequences, which are further re-segmented and tagged by a sub-word tagger. 
The sub-word tagger is designed to refine the tagging result with the help of heterogeneous annotations. To reduce the estimation error, we employ a learning-based approach to convert complementary heterogeneous data to increase labeled training data for the target task. Both the character-based tagger and the sub-word tagger can be refined by re-training with automatically converted data.", "cite_spans": [ { "start": 395, "end": 400, "text": "(PPD)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conduct experiments on the CTB and PPD data, and compare our system with state-of-theart systems. Our structure-based stacking model achieves an f-score of 94.36, which is superior to a feature-based stacking model introduced in (Jiang et al., 2009) . The converted data can also enhance the baseline model. A simple character-based model can be improved from 93.41 to 94.11. Since the two treatments are concerned with reducing different types of errors and thus not fully overlapping, the combination of them gives a further improvement. Our final system achieves an f-score of 94.68, which yields a relative error reduction of 11% over the best published result (94.02).", "cite_spans": [ { "start": 232, "end": 252, "text": "(Jiang et al., 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different from English and other Western languages, Chinese is written without explicit word delimiters such as space characters. To find and classify the basic language units, i.e. words, word segmentation and POS tagging are important initial steps for Chinese language processing. Supervised learning with specifically defined training data has become a dominant paradigm. Joint approaches that resolve the two tasks simultaneously have received much attention in recent research. 
Previous work has shown that joint solutions led to accuracy improvements over pipelined systems by avoiding segmentation error propagation and exploiting POS information to help segmentation (Ng and Low, 2004; Jiang et al., 2008a; Zhang and Clark, 2008; Sun, 2011) . Two kinds of approaches are popular for joint word segmentation and POS tagging. The first is the \"character-based\" approach, where the basic processing units are the characters that compose words (Jiang et al., 2008a) . In this kind of approach, the task is formulated as the classification of characters into POS tags with boundary information. For example, the label B-NN indicates that a character is located at the beginning of a noun. Using this method, POS information is allowed to interact with segmentation. The second kind of solution is the \"word-based\" method, also known as semi-Markov tagging (Zhang and Clark, 2008; Zhang and Clark, 2010) , where the basic predicting units are words themselves. This kind of solver sequentially decides whether a local sequence of characters makes up a word, as well as its possible POS tag. 
Solvers may use previously predicted words and their POS information as clues to process a new word.", "cite_spans": [ { "start": 676, "end": 694, "text": "(Ng and Low, 2004;", "ref_id": "BIBREF8" }, { "start": 695, "end": 715, "text": "Jiang et al., 2008a;", "ref_id": "BIBREF2" }, { "start": 716, "end": 738, "text": "Zhang and Clark, 2008;", "ref_id": "BIBREF20" }, { "start": 739, "end": 749, "text": "Sun, 2011)", "ref_id": "BIBREF15" }, { "start": 942, "end": 963, "text": "(Jiang et al., 2008a)", "ref_id": "BIBREF2" }, { "start": 1351, "end": 1374, "text": "(Zhang and Clark, 2008;", "ref_id": "BIBREF20" }, { "start": 1375, "end": 1397, "text": "Zhang and Clark, 2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Chinese Word Segmentation and POS Tagging", "sec_num": "2" }, { "text": "In addition, we proposed an effective and efficient stacked sub-word tagging model, which combines strengths of both character-based and word-based approaches (Sun, 2011) . First, different character-based and word-based models are trained to produce multiple segmentation and tagging results. Second, the outputs of these coarse-grained models are merged into sub-word sequences, which are further bracketed and labeled with POS tags by a fine-grained sub-word tagger. This solution can be viewed as utilizing stacked learning to integrate heterogeneous models.", "cite_spans": [ { "start": 159, "end": 170, "text": "(Sun, 2011)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Chinese Word Segmentation and POS Tagging", "sec_num": "2" }, { "text": "Supervised segmentation and tagging can be improved by exploiting rich linguistic resources. Jiang et al. (2009) presented a preliminary study for annotation ensemble, which motivates our research as well as similar investigations for other NLP tasks, e.g. parsing (Niu et al., 2009; . 
In their solution, heterogeneous data is used to train an auxiliary segmentation and tagging system to produce informative features for target prediction. Our previous work (Sun and Xu, 2011) and Wang et al. (2011) explored unlabeled data to enhance strong supervised segmenters and taggers. Both lines of work fall into the category of feature-induction-based semi-supervised learning. In brief, these methods harvest useful string knowledge from unlabeled or automatically analyzed data, and apply the knowledge to design new features for discriminative learning.", "cite_spans": [ { "start": 93, "end": 112, "text": "Jiang et al. (2009)", "ref_id": "BIBREF4" }, { "start": 265, "end": 283, "text": "(Niu et al., 2009;", "ref_id": "BIBREF9" }, { "start": 459, "end": 477, "text": "(Sun and Xu, 2011)", "ref_id": "BIBREF12" }, { "start": 482, "end": 500, "text": "Wang et al. (2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Chinese Word Segmentation and POS Tagging", "sec_num": "2" }, { "text": "For Chinese word segmentation and POS tagging, supervised learning has become a dominant paradigm. Much of the progress is due to the development of both corpora and machine learning techniques. Although several institutions to date have released their segmented and POS tagged data, acquiring sufficient quantities of high quality training examples is still a major bottleneck. The annotation schemes of existing lexical resources are different, since the underlying linguistic theories vary. Despite the existence of multiple resources, such data cannot simply be put together for training systems, because almost all statistical NLP systems assume homogeneous annotation. 
Therefore, it is not only interesting but also important to study how to fully utilize heterogeneous resources to improve Chinese lexical processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "About Heterogeneous Annotations", "sec_num": "3" }, { "text": "There are two main types of errors in statistical NLP: (1) the approximation error that is due to the intrinsic suboptimality of a model and (2) the estimation error that is due to having only finite training data. Take Chinese word segmentation for example. Our previous analysis (Sun, 2010) shows that one main intrinsic disadvantage of the character-based model is the difficulty in incorporating whole-word information, while one main disadvantage of the word-based model is the weak ability to express word formation. In both models, the significant decrease in prediction accuracy on out-of-vocabulary (OOV) words indicates the impact of the estimation error. The two essential characteristics of the systematic diversity of heterogeneous annotations can be utilized to reduce both approximation and estimation errors.", "cite_spans": [ { "start": 281, "end": 292, "text": "(Sun, 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "About Heterogeneous Annotations", "sec_num": "3" }, { "text": "This paper focuses on two representative popular corpora for Chinese lexical processing: (1) the Penn Chinese Treebank (CTB) and (2) the PKU's People's Daily data (PPD) . To analyze the diversity between their annotation standards, we select 200 sentences from CTB and manually label them according to the PPD standard. Specifically, we employ a PPD-style segmentation and tagging system to automatically label these 200 sentences. 
A linguistic expert who deeply understands the PPD standard then manually checks the automatic analysis and corrects its errors.", "cite_spans": [ { "start": 163, "end": 168, "text": "(PPD)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Analysis of the CTB and PPD Standards", "sec_num": "3.1" }, { "text": "These 200 sentences are segmented into 3886 and 3882 words respectively according to the CTB and PPD standards. The average lengths of word tokens are almost the same. However, the word boundaries, i.e. the definitions of words, are different. 3561 word tokens are consistently segmented by both standards. In other words, 91.7% of CTB word tokens share the same word boundaries with 91.6% of PPD word tokens. Among these 3561 words, there are 552 punctuation marks that are trivially consistently segmented. If punctuation marks are filtered out to avoid overestimation of consistency, 90.4% of CTB words have the same boundaries as 90.3% of PPD words. The boundaries of words that are differently segmented are compatible: among all annotations, only one cross-bracketing occurs. These statistics indicate that the two heterogeneous segmented corpora are systematically different, and confirm the aforementioned two properties of heterogeneous annotations. Table 1 shows the mapping between CTB-style tags and PPD-style tags. For the definition and illustration of these tags, please refer to the annotation guidelines 2 . The numbers after the colons indicate how many times each POS tag pair appears among the 3561 words that are consistently segmented. From this table, we can see that (1) there is no one-to-one mapping between the two heterogeneous word classifications but (2) the mapping between heterogeneous tags involves little uncertainty. This simple analysis indicates that the two POS tagged corpora also exhibit the two properties of heterogeneous annotations. The differences between the POS annotation standards are systematic. 
The annotations in CTB are treebank-driven, and thus consider more functional (dynamic) information of basic lexical categories. The annotations in PPD are lexicon-driven, and thus focus on more static properties of words. Due to space constraints, we only illustrate the annotation of verbs and nouns for a better understanding of the differences.", "cite_spans": [], "ref_spans": [ { "start": 925, "end": 932, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Analysis of the CTB and PPD Standards", "sec_num": "3.1" }, { "text": "\u2022 The CTB tag VV indicates common verbs that are mainly labeled as verbs (v) too according to the PPD standard. However, these words can also be tagged as nominal categories (a, vn, n). The main reason is that there are a large number of Chinese adjectives and nouns that can be realized as predicates without linking verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of the CTB and PPD Standards", "sec_num": "3.1" }, { "text": "\u2022 The tag NN indicates common nouns in CTB. Some of them are labeled as verbal categories (vn, v) . The main reason is that a majority of Chinese verbs can be realized as subjects and objects without form changes.", "cite_spans": [ { "start": 90, "end": 97, "text": "(vn, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Analysis of the CTB and PPD Standards", "sec_num": "3.1" }, { "text": "4 Structure-based Stacking", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of the CTB and PPD Standards", "sec_num": "3.1" }, { "text": "Each annotation data set alone can yield a predictor that can be taken as a mechanism to produce structured texts. With different training data, we can construct multiple heterogeneous systems. These systems produce similar linguistic analyses that follow the same high-level linguistic principles but differ in details. 
A very simple idea to take advantage of heterogeneous structures is to design a predictor which can predict a more accurate target structure based on the input, the less accurate target structure and complementary structures. This idea is very close to stacked learning (Wolpert, 1992) , which is well developed for ensemble learning, and successfully applied to some NLP tasks, e.g. dependency parsing (Nivre and McDonald, 2008; Torres Martins et al., 2008) . Formally speaking, our idea is to include two \"levels\" of processing. [Table 1. Mapping from CTB tags to PPD tags, with counts over the 3561 consistently segmented words: AS \u21d2 u:44; CD \u21d2 m:134; DEC \u21d2 u:83; DEV \u21d2 u:7; DEG \u21d2 u:123; ETC \u21d2 u:9; LB \u21d2 p:1; NT \u21d2 t:98; OD \u21d2 m:41; PU \u21d2 w:552; SP \u21d2 u:1; VC \u21d2 v:32; VE \u21d2 v:13; BA \u21d2 p:2; d:1; CS \u21d2 c:3; d:1; DT \u21d2 r:15; b:1; MSP \u21d2 c:2; u:1; PN \u21d2 r:53; n:2; CC \u21d2 c:73; p:5; v:2; M \u21d2 q:101; n:11; v:1; LC \u21d2 f:51; Ng:3; v:1; u:1; P \u21d2 p:133; v:4; c:2; Vg:1; VA \u21d2 a:57; i:4; z:2; ad:1; b:1; NR \u21d2 ns:170; nr:65; j:23; nt:21; nz:7; n:2; s:1; VV \u21d2 v:382; i:5; a:3; Vg:2; vn:2; n:2; p:2; w:1; JJ \u21d2 a:43; b:13; n:3; vn:3; d:2; j:2; f:2; t:2; z:1; AD \u21d2 d:149; c:11; ad:6; z:4; a:3; v:2; n:1; r:1; m:1; f:1; t:1; NN \u21d2 n:738; vn:135; v:26; j:19; Ng:5; an:5; a:3; r:3; s:3; Ag:2; nt:2; f:2; q:2; i:1; t:1; nz:1; b:1] The first level includes one or more base predictors f 1 , ..., f K that are independently built on different training data. 
The second level of processing consists of an inference function h that takes as input", "cite_spans": [ { "start": 591, "end": 606, "text": "(Wolpert, 1992)", "ref_id": "BIBREF18" }, { "start": 724, "end": 750, "text": "(Nivre and McDonald, 2008;", "ref_id": "BIBREF10" }, { "start": 751, "end": 779, "text": "Torres Martins et al., 2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Reducing the Approximation Error via Stacking", "sec_num": "4.1" }, { "text": "x, f 1 (x), ..., f K (x) 3 and outputs a final prediction h(x, f 1 (x), ..., f K (x)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reducing the Approximation Error via Stacking", "sec_num": "4.1" }, { "text": "The only difference between model ensemble and annotation ensemble is that the output spaces of model ensemble are the same while the output spaces of annotation ensemble are different. This framework is general and flexible, in the sense that it assumes almost nothing about the individual systems and takes them as black boxes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reducing the Approximation Error via Stacking", "sec_num": "4.1" }, { "text": "With the IOB2 representation (Ramshaw and Marcus, 1995) , the problem of joint segmentation and tagging can be regarded as a character classification task. Previous work shows that the character-based approach is an effective method for Chinese lexical processing. Both of our feature- and structure-based stacking models employ base character-based taggers to generate multiple segmentation and tagging results. Our base tagger uses a discriminative sequential classifier to predict the POS tag with positional information for each character. Each character can be assigned one of two possible boundary tags: \"B\" for a character that begins a word and \"I\" for a character that occurs in the middle of a word. 
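As a concrete illustration of this labeling scheme, the sketch below (our own illustrative helper, not part of the described system; the example words are hypothetical) maps a list of (word, POS) pairs to per-character labels such as B-NN and I-NN:

```python
def characterize(tagged_words):
    """Convert (word, POS) pairs into per-character labels such as B-NN/I-NN.

    Follows the IOB2-style scheme described in the text: "B" marks a
    word-initial character, "I" a word-internal one, and the POS tag of
    the containing word is appended to the boundary tag.
    """
    labels = []
    for word, pos in tagged_words:
        for i, ch in enumerate(word):
            boundary = "B" if i == 0 else "I"
            labels.append((ch, boundary + "-" + pos))
    return labels
```

A word of n characters thus yields one B- label followed by n-1 I- labels, so segmentation and POS information are predicted jointly, one character at a time.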
We denote a candidate character token c i with a fixed window c i\u22122 c i\u22121 c i c i+1 c i+2 . The following features are used for classification:", "cite_spans": [ { "start": 25, "end": 51, "text": "(Ramshaw and Marcus, 1995)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "A Character-based Tagger", "sec_num": "4.2" }, { "text": "\u2022 Character unigrams: c k (i \u2212 l \u2264 k \u2264 i + l) \u2022 Character bigrams: c k c k+1 (i \u2212 l \u2264 k < i + l)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Character-based Tagger", "sec_num": "4.2" }, { "text": "Jiang et al. (2009) introduced a feature-based stacking solution for annotation ensemble. In their solution, an auxiliary tagger CTag ppd is trained on a complementary corpus, i.e. PPD, to assist the target CTB-style tagging. To refine the character-based tagger CTag ctb , PPD-style character labels are directly incorporated as new features. The stacking model relies on the ability of the discriminative learning method to explore informative features, which plays a central role in boosting the tagging performance. To compare their feature-based stacking model and our structure-based model, we implement a similar system CTag ppd\u2192ctb . 
Apart from character uni/bigram features, the PPD-style character labels are used to derive the following features to enhance our CTB-style tagger:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature-based Stacking", "sec_num": "4.3" }, { "text": "\u2022 Character label unigrams: c ppd k (i\u2212l ppd \u2264 k \u2264 i + l ppd ) \u2022 Character label bigrams: c ppd k c ppd k+1 (i \u2212 l ppd \u2264 k < i + l ppd )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature-based Stacking", "sec_num": "4.3" }, { "text": "In the above descriptions, l and l ppd are the window sizes of features, which can be tuned on development data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature-based Stacking", "sec_num": "4.3" }, { "text": "We propose a novel structure-based stacking model for the task, in which heterogeneous word structures are used not only to generate features but also to derive a sub-word structure. Our work is inspired by the stacked sub-word tagging model introduced in (Sun, 2011) . That work is motivated by the diversity of heterogeneous models, while ours is motivated by the diversity of heterogeneous annotations. The workflow of our new system is shown in Figure 1 . In the first phase, one character-based CTB-style tagger (CTag ctb ) and one character-based PPD-style tagger (CTag ppd ) are respectively trained to produce heterogeneous word boundaries. In the second phase, this system first combines the two segmentation and tagging results to get sub-words that maximize the agreement about word boundaries. Finally, a fine-grained sub-word tagger (STag ctb ) is applied to bracket sub-words into words and also to label their POS tags. We can also apply a PPD-style sub-word tagger. 
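The sub-word merging step just described can be sketched as follows (a minimal illustration with our own function name, assuming both segmenters output word lists that exactly cover the sentence):

```python
def merge_subwords(sentence, seg_a, seg_b):
    """Merge two segmentations of the same sentence into sub-words.

    A position between two characters becomes a sub-word boundary if
    either segmenter predicts a word boundary there, so every sub-word
    is as large as possible while never crossing a predicted word of
    either system.
    """
    def boundaries(words):
        # Set of cut positions (character offsets) after each word.
        cuts, pos = set(), 0
        for w in words:
            pos += len(w)
            cuts.add(pos)
        return cuts

    cuts = sorted(boundaries(seg_a) | boundaries(seg_b))
    subwords, start = [], 0
    for c in cuts:
        subwords.append(sentence[start:c])
        start = c
    return subwords
```

For instance, merging the segmentations ["ABC", "DE"] and ["AB", "CDE"] of the string "ABCDE" yields the sub-words ["AB", "C", "DE"], which respect every boundary predicted by either segmenter.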
To compare with previous work, we specifically concentrate on the PPD-to-CTB adaptation.", "cite_spans": [ { "start": 257, "end": 268, "text": "(Sun, 2011)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 455, "end": 463, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Structure-based Stacking", "sec_num": "4.4" }, { "text": "Following (Sun, 2011) , the intermediate sub-word structure is defined to maximize the agreement of CTag ctb and CTag ppd . In other words, the goal is to make merged sub-words as large as possible while not overlapping with any predicted word produced by the two taggers. If the position between two consecutive characters is predicted as a word boundary by either segmenter, this position is taken as a separation position of the sub-word sequence. This strategy makes sure that it is still possible, in the sub-word tagging stage, to correctly re-segment the strings whose boundaries the heterogeneous segmenters disagree on.", "cite_spans": [ { "start": 10, "end": 21, "text": "(Sun, 2011)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Structure-based Stacking", "sec_num": "4.4" }, { "text": "To train the sub-word tagger STag ctb , features are formed making use of both CTB-style and PPD-style POS tags provided by the character-based taggers. In the following description, \"C\" refers to the content of a sub-word; \"T ctb \" and \"T ppd \" refer to the positional POS tags generated from CTag ctb and CTag ppd ; l C , l ctb T and l ppd T are the window sizes. For convenience, we denote a sub-word with its context ...s i\u22121 s i s i+1 ..., where s i is the current token. 
The following features are applied:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structure-based Stacking", "sec_num": "4.4" }, { "text": "\u2022 Unigram features: C(s k ) (i \u2212 l C \u2264 k \u2264 +l C ), T ctb (s k ) (i \u2212 l ctb T \u2264 k \u2264 i + l ctb T ), T ppd (s k ) (i \u2212 l ppd T \u2264 k \u2264 i + l ppd T ) \u2022 Bigram features: C(s k )C(s k+1 ) (i \u2212 l C \u2264 k < i + l C ), T ctb (s k )T ctb (s k+1 ) (i \u2212 l ctb T \u2264 k < i + l ctb T ), T ppd (s k )T ppd (s k+1 ) (i \u2212 l ppd T \u2264 k < i + l ppd T ) \u2022 C(s i\u22121 )C(s i+1 ) (if l C \u2265 1), T ctb (s i\u22121 )T ctb (s i+1 ) (if l ctb T \u2265 1), T ppd (s i\u22121 )T ppd (s i+1 ) (if l ppd T \u2265 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structure-based Stacking", "sec_num": "4.4" }, { "text": "\u2022 Word formation features: character n-gram prefixes and suffixes for n up to 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structure-based Stacking", "sec_num": "4.4" }, { "text": "Cross-validation CTag ctb and CTag ppd are directly trained on the original training data, i.e. the CTB and PPD data. Cross-validation technique has been proved necessary to generate the training data for sub-word tagging, since it deals with the training/test mismatch problem (Sun, 2011) . To construct training data for the new heterogeneous subword tagger, a 10-fold cross-validation on the original CTB data is performed too.", "cite_spans": [ { "start": 278, "end": 289, "text": "(Sun, 2011)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Structure-based Stacking", "sec_num": "4.4" }, { "text": "It is possible to acquire high quality labeled data for a specific annotation standard by exploring existing heterogeneous corpora, since the annotations are normally highly compatible. 
Moreover, the exploitation of additional (pseudo) labeled data aims to reduce the estimation error and enhances an NLP system in a different way from stacking. We therefore expect that the two improvements do not fully overlap and that combining them can give a further improvement. The stacking models can be viewed as annotation converters: They take as input complementary structures and produce as output target structures. In other words, the stacking models actually learn statistical models to transform the lexical representations. We can acquire informative extra samples by processing the PPD data with our stacking models. Though the converted annotations are imperfect, they are still helpful to reduce the estimation error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data-driven Annotation Conversion", "sec_num": "5" }, { "text": "Character-based Conversion The feature-based stacking model CTag ppd\u2192ctb maps the input character sequence c and its PPD-style character label sequence to the corresponding CTB-style character label sequence. This model by itself can be taken as a corpus conversion model to transform a PPD-style analysis to a CTB-style analysis. By processing the auxiliary corpus D ppd with CTag ppd\u2192ctb , we acquire a new labeled data set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data-driven Annotation Conversion", "sec_num": "5" }, { "text": "D ppd\u2192ctb = CTag ppd\u2192ctb (D ppd ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data-driven Annotation Conversion", "sec_num": "5" }, { "text": "We can re-train the CTag ctb model with both original and converted data D ctb \u222a D ppd\u2192ctb .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data-driven Annotation Conversion", "sec_num": "5" }, { "text": "Sub-word-based Conversion Similarly, the structure-based stacking model can also be taken as a corpus conversion model. 
By processing the auxiliary corpus D ppd with STag ctb , we acquire a new labeled data set D ppd\u2192ctb = STag ctb (D ppd ). We can re-train the STag ctb model with D ctb \u222a D ppd\u2192ctb . If we use the gold PPD-style labels of D ppd to extract sub-words, the new model will overfit to the gold PPD-style labels, which are unavailable at test time. To avoid this training/test mismatch problem, we also employ a 10-fold cross validation procedure to add noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data-driven Annotation Conversion", "sec_num": "5" }, { "text": "Converting corpora from one formalism to another is not a new topic. A well-known line of work transforms the Penn Treebank into resources for various deep linguistic processing formalisms, including LTAG (Xia, 1999) , CCG (Hockenmaier and Steedman, 2007) , HPSG (Miyao et al., 2004) and LFG (Cahill et al., 2002) . Such work on corpus conversion mainly leverages rich sets of hand-crafted rules to convert corpora. The construction of linguistic rules is usually time-consuming and the rules do not have full coverage. Compared to rule-based conversion, our statistical converters are much easier to build and empirically perform well.", "cite_spans": [ { "start": 189, "end": 200, "text": "(Xia, 1999)", "ref_id": "BIBREF19" }, { "start": 207, "end": 239, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": "BIBREF1" }, { "start": 247, "end": 267, "text": "(Miyao et al., 2004)", "ref_id": "BIBREF7" }, { "start": 276, "end": 297, "text": "(Cahill et al., 2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Data-driven Annotation Conversion", "sec_num": "5" }, { "text": "Previous studies on joint Chinese word segmentation and POS tagging have used the CTB in experiments. We follow this setting in this paper. 
We use CTB 5.0 as our main corpus and define the training, development and test sets according to (Jiang et al., 2008a; Jiang et al., 2008b; Kruengkrai et al., 2009; Zhang and Clark, 2010; Sun, 2011) . Jiang et al. (2009) present a preliminary study of the annotation adaptation topic, and conduct experiments with the extra PPD data 4 . In other words, the CTB-style annotation is the target analysis while the PPD-style annotation is the complementary/auxiliary analysis. Our experiments for annotation ensemble follow their setting to allow a fair comparison of our system with theirs. A CRF learning toolkit, wapiti 5 (Lavergne et al., 2010) , is used to resolve sequence labeling problems. Among the several parameter estimation methods provided by wapiti, our auxiliary experiments indicate that the \"rprop-\" method works best. Three metrics are used for evaluation: precision (P), recall (R) and balanced f-score (F) defined by 2PR/(P+R). Precision is the proportion of words in the system output that are correct. Recall is the proportion of words in the gold standard annotations that are correctly recovered. A token is considered to be correct if its boundaries match the boundaries of a word in the gold standard and their POS tags are identical. Table 2 summarizes the segmentation and tagging performance of the baseline and different stacking models. The baseline character-based joint solver (CTag ctb ) is competitive, and achieves an f-score of 92.93. By using the character labels from a heterogeneous solver (CTag ppd ), which is trained on the PPD data set, the performance of this character-based system (CTag ppd\u2192ctb ) is improved to 93.67. This result confirms the importance of a heterogeneous structure. Our structure-based stacking solution is effective and outperforms the feature-based stacking. 
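The three evaluation metrics can be made concrete with a short sketch (our own helper, not tied to any particular toolkit; inputs are lists of (word, POS) pairs):

```python
def prf(gold, system):
    """Precision, recall and balanced f-score over (word, POS) tokens.

    Each input is a list of (word, POS) pairs covering the same text;
    a token counts as correct when both its character span and its POS
    tag match the gold standard, and F = 2PR/(P+R).
    """
    def spans(tagged):
        out, pos = set(), 0
        for w, t in tagged:
            out.add((pos, pos + len(w), t))
            pos += len(w)
        return out

    g, s = spans(gold), spans(system)
    correct = len(g & s)
    p = correct / len(s)
    r = correct / len(g)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

Matching on character spans rather than on surface strings ensures that a correctly spelled word in the wrong position is not counted as correct.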
By better exploiting the heterogeneous word boundary structures, our sub-word tagging model achieves an f-score of 94.03 (l_T^{ctb} and l_T^{ppd} are tuned on the development data and both set to 1).", "cite_spans": [ { "start": 238, "end": 259, "text": "(Jiang et al., 2008a;", "ref_id": "BIBREF2" }, { "start": 260, "end": 280, "text": "Jiang et al., 2008b;", "ref_id": "BIBREF3" }, { "start": 281, "end": 305, "text": "Kruengkrai et al., 2009;", "ref_id": "BIBREF5" }, { "start": 306, "end": 328, "text": "Zhang and Clark, 2010;", "ref_id": "BIBREF21" }, { "start": 329, "end": 339, "text": "Sun, 2011)", "ref_id": "BIBREF15" }, { "start": 342, "end": 361, "text": "Jiang et al. (2009)", "ref_id": "BIBREF4" }, { "start": 765, "end": 788, "text": "(Lavergne et al., 2010)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 1387, "end": 1394, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Setting", "sec_num": "6.1" }, { "text": "The contribution of the auxiliary tagger is twofold. On one hand, the heterogeneous solver provides structural information, which is the basis for constructing the sub-word sequence. On the other hand, this tagger provides additional POS information, which is helpful for disambiguation. To evaluate these two contributions, we conduct another experiment using only the heterogeneous word boundary structures, without the POS information. The f-score of this type of sub-word tagging is 93.73. This result indicates that both the word boundary and POS information are helpful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of Stacking", "sec_num": "6.2" }, { "text": "We do additional experiments to evaluate the effect of heterogeneous features as the amount of PPD data is varied. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Curves", "sec_num": "6.3" }, { "text": "The stacking models can be viewed as data-driven annotation conversion models. 
However, they are not trained on \"real\" labeled samples. Although the target representation (CTB-style analysis in our case) is gold standard, the input representation (PPD-style analysis in our case) is labeled by an automatic tagger CTag_ppd. To make clear whether these stacking models trained with noisy inputs can tolerate perfect inputs, we evaluate the two stacking models on our manually converted data. The accuracies presented in Table 4 indicate that though the conversion models are learned from noisy data, they can refine target tagging with gold auxiliary tagging. Another interesting observation is that the gold PPD-style analysis does not help the sub-word tagging model as much as the character tagging model. 6.5 Results of Re-training Table 5 shows the accuracies of re-trained models. Note that a sub-word tagger is built on character taggers, so when we re-train a sub-word system, we should consider whether or not to re-train the base character taggers. The error rates decrease as automatically converted data is added to the training pool, especially for the character-based tagger CTag_ctb. When the base CTB-style tagging is improved, the final tagging is improved as well. The re-training does not help the sub-word tagging much; the improvement is very modest. 6.6 Comparison to the State-of-the-Art Table 6 summarizes the tagging performance of different systems. The baseline character-based tagger is competitive, and achieves an f-score of 93.41. By better using the heterogeneous word boundary structures, our sub-word tagging model achieves an f-score of 94.36. Both the character and sub-word tagging models can be enhanced with the automatically converted corpus. With the pseudo labeled data, the performance goes up to 94.11 and 94.68, respectively. These results are also better than the best published result on the same data set, reported in (Jiang et al., 2009) . 
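The re-training recipe evaluated above can be sketched end-to-end. This is a schematic illustration only; `train_fn` and `convert_fn` are hypothetical stand-ins for model training and for applying a stacking model as an annotation converter.

```python
def retrain_with_conversion(train_ctb, train_ppd, train_fn, convert_fn):
    """Re-training with automatically converted data.

    1. Train a converter (a stacking model) on the CTB-style data D_ctb.
    2. Convert the PPD-style corpus into CTB-style annotations, D'_ctb.
    3. Re-train the tagger on the union D_ctb + D'_ctb, enlarging the
       training pool to reduce the estimation error.
    """
    converter = train_fn(train_ctb)                              # step 1
    converted = [convert_fn(converter, s) for s in train_ppd]    # step 2
    return train_fn(train_ctb + converted)                       # step 3
```

In the paper's setting the converter's outputs are noisy, which is exactly why the converted pool helps most when the base converter is accurate.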
Test P R F
(Sun, 2011) -- -- 94.02
(Jiang et al., 2009) -- -- 94.02
(Wang et al., 2011) - ", "cite_spans": [ { "start": 1950, "end": 1970, "text": "(Jiang et al., 2009)", "ref_id": "BIBREF4" }, { "start": 2006, "end": 2026, "text": "(Jiang et al., 2009)", "ref_id": "BIBREF4" }, { "start": 2037, "end": 2056, "text": "(Wang et al., 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 518, "end": 525, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 835, "end": 842, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 1406, "end": 1413, "text": "Table 6", "ref_id": "TABREF11" }, { "start": 1984, "end": 1995, "text": "(Sun, 2011)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results of Annotation Conversion", "sec_num": "6.4" }, { "text": "Our theoretical and empirical analysis of two representative popular corpora highlights two essential characteristics of heterogeneous annotations, which are exploited to reduce the approximation and estimation errors for Chinese word segmentation and POS tagging. We employ stacking models to incorporate features derived from heterogeneous analysis and apply them to convert heterogeneous labeled data for re-training. The appropriate application of heterogeneous annotations leads to a significant improvement (a relative error reduction of 11%) over the best performance for this task. Although our discussion is for a specific task, the key idea of leveraging heterogeneous annotations to reduce the approximation error with stacking models and the estimation error with automatically converted corpora is very general and applicable to other NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The first 200 sentences of the development data are selected for these experiments. 
This data set is submitted as supplemental material for research purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at http://www.cis.upenn.edu/chinese/posguide.3rd.ch.pdf and http://www.icl.pku.edu.cn/icl_groups/corpus/spec.htm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "x is a given Chinese sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://icl.pku.edu.cn/icl_res/ 5 http://wapiti.limsi.fr/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This result is achieved with much unlabeled data, which is different from our setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was mainly done while the first author was at Saarland University and DFKI. At that time, this author was funded by DFKI and the German Academic Exchange Service (DAAD). 
While working at Peking University, both authors are supported by NSFC (61170166) and the National High-Tech R&D Program (2012AA011101).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automatic annotation of the penn treebank with lfg f-structure information", "authors": [ { "first": "Aoife", "middle": [], "last": "Cahill", "suffix": "" }, { "first": "Mairead", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the LREC Workshop on Linguistic Knowledge Acquisition and Representation: Bootstrapping Annotated Language Data", "volume": "", "issue": "", "pages": "8--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aoife Cahill, Mairead Mccarthy, Josef Van Genabith, and Andy Way. 2002. Automatic annotation of the penn treebank with lfg f-structure information. In Proceedings of the LREC Workshop on Linguistic Knowledge Acquisition and Representation: Bootstrapping Annotated Language Data, Las Palmas, Canary Islands, pages 8-15.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Ccgbank: A corpus of ccg derivations and dependency structures extracted from the penn treebank", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "3", "pages": "355--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hockenmaier and Mark Steedman. 2007. Ccgbank: A corpus of ccg derivations and dependency structures extracted from the penn treebank. 
Computational Linguistics, 33(3):355-396.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A cascaded linear model for joint Chinese word segmentation and part-of-speech tagging", "authors": [ { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yajuan", "middle": [], "last": "L\u00fc", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "897--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan L\u00fc. 2008a. A cascaded linear model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of ACL-08: HLT, pages 897-904, Columbus, Ohio, June. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Word lattice reranking for Chinese word segmentation and part-of-speech tagging", "authors": [ { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Haitao", "middle": [], "last": "Mi", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "385--392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenbin Jiang, Haitao Mi, and Qun Liu. 2008b. Word lattice reranking for Chinese word segmentation and part-of-speech tagging. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 385-392, Manchester, UK, August. 
Coling 2008 Organizing Committee.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic adaptation of annotation standards: Chinese word segmentation and pos tagging -a case study", "authors": [ { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "522--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Au- tomatic adaptation of annotation standards: Chinese word segmentation and pos tagging -a case study. In Proceedings of the Joint Conference of the 47th An- nual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 522-530, Suntec, Singapore, Au- gust. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An error-driven word-character hybrid model for joint Chinese word segmentation and pos tagging", "authors": [ { "first": "Canasai", "middle": [], "last": "Kruengkrai", "suffix": "" }, { "first": "Kiyotaka", "middle": [], "last": "Uchimoto", "suffix": "" }, { "first": "Yiou", "middle": [], "last": "Jun'ichi Kazama", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Torisawa", "suffix": "" }, { "first": "", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "513--521", "other_ids": {}, "num": null, "urls": [], "raw_text": "Canasai Kruengkrai, Kiyotaka Uchimoto, Jun'ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. An error-driven word-character hybrid model for joint Chinese word segmentation and pos tagging. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th Interna- tional Joint Conference on Natural Language Process- ing of the AFNLP, pages 513-521, Suntec, Singapore, August. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Practical very large scale CRFs", "authors": [ { "first": "Thomas", "middle": [], "last": "Lavergne", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Capp\u00e9", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "504--513", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Lavergne, Olivier Capp\u00e9, and Fran\u00e7ois Yvon. 2010. Practical very large scale CRFs. 
pages 504-513, July.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank", "authors": [ { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Ninomiya", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2004, "venue": "IJCNLP", "volume": "", "issue": "", "pages": "684--693", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Miyao, Takashi Ninomiya, and Jun'ichi Tsujii. 2004. Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank. In IJCNLP, pages 684-693.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Chinese part-of-speech tagging: One-at-a-time or all-at-once? word-based or character-based?", "authors": [ { "first": "Tou", "middle": [], "last": "Hwee", "suffix": "" }, { "first": "Jin", "middle": [ "Kiat" ], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Low", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP 2004", "volume": "", "issue": "", "pages": "277--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-of-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 277-284, Barcelona, Spain, July. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Exploiting heterogeneous treebanks for parsing", "authors": [ { "first": "Zheng-Yu", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "46--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng-Yu Niu, Haifeng Wang, and Hua Wu. 2009. Ex- ploiting heterogeneous treebanks for parsing. In Pro- ceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 46-54, Suntec, Singapore, August. As- sociation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Integrating graph-based and transition-based dependency parsers", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "950--958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL-08: HLT, pages 950-958, Columbus, Ohio, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Text chunking using transformation-based learning", "authors": [ { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Mitch", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Third Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "82--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lance Ramshaw and Mitch Marcus. 1995. Text chunk- ing using transformation-based learning. In David Yarowsky and Kenneth Church, editors, Proceedings of the Third Workshop on Very Large Corpora, pages 82-94, Somerset, New Jersey. Association for Compu- tational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Enhancing Chinese word segmentation using unlabeled data", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "970--979", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun and Jia Xu. 2011. Enhancing Chinese word segmentation using unlabeled data. In Proceedings of the 2011 Conference on Empirical Methods in Natu- ral Language Processing, pages 970-979, Edinburgh, Scotland, UK., July. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Discriminative parse reranking for Chinese with homogeneous and heterogeneous annotations", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Joint Conference on Chinese Language Processing (CIPS-SIGHAN)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun, Rui Wang, and Yi Zhang. 2010. Dis- criminative parse reranking for Chinese with homoge- neous and heterogeneous annotations. In Proceedings of Joint Conference on Chinese Language Processing (CIPS-SIGHAN), Beijing, China, August.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Word-based and character-based word segmentation models: Comparison and combination", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1211--1219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun. 2010. Word-based and character-based word segmentation models: Comparison and combi- nation. In Proceedings of the 23rd International Con- ference on Computational Linguistics (Coling 2010), pages 1211-1219, Beijing, China, August. 
Coling 2010 Organizing Committee.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A stacked sub-word model for joint Chinese word segmentation and part-of-speech tagging", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1385--1394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun. 2011. A stacked sub-word model for joint Chinese word segmentation and part-of-speech tag- ging. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1385-1394, Port- land, Oregon, USA, June. Association for Computa- tional Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Stacking dependency parsers", "authors": [ { "first": "Andr\u00e9 Filipe Torres", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "157--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 Filipe Torres Martins, Dipanjan Das, Noah A. Smith, and Eric P. Xing. 2008. Stacking dependency parsers. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 157-166, Honolulu, Hawaii, October. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improving chinese word segmentation and pos tagging with semi-supervised methods using large auto-analyzed data", "authors": [ { "first": "Yiou", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Kazama", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Yujie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Torisawa", "suffix": "" } ], "year": 2011, "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "309--317", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiou Wang, Jun'ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving chinese word segmentation and pos tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of 5th Interna- tional Joint Conference on Natural Language Process- ing, pages 309-317, Chiang Mai, Thailand, Novem- ber. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Original contribution: Stacked generalization", "authors": [ { "first": "H", "middle": [], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Wolpert", "suffix": "" } ], "year": 1992, "venue": "Neural Netw", "volume": "5", "issue": "", "pages": "241--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "David H. Wolpert. 1992. Original contribution: Stacked generalization. 
Neural Netw., 5:241-259, February.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Extracting tree adjoining grammars from bracketed corpora", "authors": [ { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" } ], "year": 1999, "venue": "Proceedings of Natural Language Processing Pacific Rim Symposium", "volume": "", "issue": "", "pages": "398--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Xia. 1999. Extracting tree adjoining grammars from bracketed corpora. In Proceedings of Natural Lan- guage Processing Pacific Rim Symposium, pages 398- 403.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Joint word segmentation and POS tagging using a single perceptron", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "888--896", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2008. Joint word segmen- tation and POS tagging using a single perceptron. In Proceedings of ACL-08: HLT, pages 888-896, Colum- bus, Ohio, June. Association for Computational Lin- guistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A fast decoder for joint word segmentation and POS-tagging using a single discriminative model", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "843--852", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2010. A fast decoder for joint word segmentation and POS-tagging using a sin- gle discriminative model. 
In Proceedings of the 2010 Conference on Empirical Methods in Natural Lan- guage Processing, pages 843-852, Cambridge, MA, October. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Sub-word tagging based on heterogeneous taggers.", "type_str": "figure", "num": null }, "TABREF0": { "content": "", "type_str": "table", "num": null, "text": "Mapping between CTB and PPD POS Tags.", "html": null }, "TABREF2": { "content": "
Devel. P R F
", "type_str": "table", "num": null, "text": "CTag ctb 93.28% 92.58% 92.93 CTag ppd\u2192ctb 93.89% 93.46% 93.67 STag ctb 94.07% 93.99% 94.03", "html": null }, "TABREF3": { "content": "", "type_str": "table", "num": null, "text": "Performance of different stacking models on the development data.", "html": null }, "TABREF4": { "content": "
summarizes the f-score change. The feature-based model works well only when a considerable amount of heterogeneous data is available. When a small set is added, the performance is even lower than the baseline (92.93). The structure-based stacking model is more robust and obtains consistent gains regardless of the size of the complementary data.
PPD \u2192 CTB
#CTB #PPD CTag STag
18104 7381 92.21 93.26
18104 14545 93.22 93.82
18104 21745 93.58 93.96
18104 28767 93.55 93.87
18104 35996 93.67 94.03
9052 9052 92.10 92.40
", "type_str": "table", "num": null, "text": "", "html": null }, "TABREF5": { "content": "", "type_str": "table", "num": null, "text": "F-scores relative to sizes of training data. Sizes (shown in column #CTB and #PPD) are numbers of sentences in each training corpus.", "html": null }, "TABREF7": { "content": "
: F-scores with gold PPD-style tagging on the manually converted data.
", "type_str": "table", "num": null, "text": "", "html": null }, "TABREF8": { "content": "
ctbST ag ctbP(%) R(%)F
D_ctb \u222a D'_ctb -- 94.46
", "type_str": "table", "num": null, "text": "94.06 94.26 D ctb \u222a D ctb D ctb 94.61 94.43 94.52 D ctb D ctb \u222a D ctb 94.05 94.08 94.06 D ctb \u222a D ctb D ctb \u222a D ctb 94.71 94.53 94.62", "html": null }, "TABREF9": { "content": "", "type_str": "table", "num": null, "text": "Performance of re-trained models on the development data.", "html": null }, "TABREF11": { "content": "
", "type_str": "table", "num": null, "text": "Performance of different systems on the test data.", "html": null } } } }