{ "paper_id": "D10-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:53:05.969563Z" }, "title": "Joint Training and Decoding Using Virtual Nodes for Cascaded Segmentation and Tagging Tasks", "authors": [ { "first": "Xian", "middle": [], "last": "Qian", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fudan University", "location": { "addrLine": "825 Zhangheng Road", "settlement": "Shanghai", "country": "P.R.China" } }, "email": "qianxian@fudan.edu.cn" }, { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fudan University", "location": { "addrLine": "825 Zhangheng Road", "settlement": "Shanghai", "country": "P.R.China" } }, "email": "" }, { "first": "Yaqian", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fudan University", "location": { "addrLine": "825 Zhangheng Road", "settlement": "Shanghai", "country": "P.R.China" } }, "email": "zhouyaqian@fudan.edu.cn" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fudan University", "location": { "addrLine": "825 Zhangheng Road", "settlement": "Shanghai", "country": "P.R.China" } }, "email": "xjhuang@fudan.edu.cn" }, { "first": "Lide", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fudan University", "location": { "addrLine": "825 Zhangheng Road", "settlement": "Shanghai", "country": "P.R.China" } }, "email": "ldwu@fudan.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many sequence labeling tasks in NLP require solving a cascade of segmentation and tagging subtasks, such as Chinese POS tagging, named entity recognition, and so on. Traditional pipeline approaches usually suffer from error propagation. Joint training/decoding in the cross-product state space could cause too many parameters and high inference complexity. In this paper, we present a novel method which integrates graph structures of two subtasks into one using virtual nodes, and performs joint training and decoding in the factorized state space. Experimental evaluations on CoNLL 2000 shallow parsing data set and Fourth SIGHAN Bakeoff CTB POS tagging data set demonstrate the superiority of our method over cross-product, pipeline and candidate reranking approaches.", "pdf_parse": { "paper_id": "D10-1019", "_pdf_hash": "", "abstract": [ { "text": "Many sequence labeling tasks in NLP require solving a cascade of segmentation and tagging subtasks, such as Chinese POS tagging, named entity recognition, and so on. Traditional pipeline approaches usually suffer from error propagation. Joint training/decoding in the cross-product state space could cause too many parameters and high inference complexity. In this paper, we present a novel method which integrates graph structures of two subtasks into one using virtual nodes, and performs joint training and decoding in the factorized state space. 
Experimental evaluations on the CoNLL 2000 shallow parsing data set and the Fourth SIGHAN Bakeoff CTB POS tagging data set demonstrate the superiority of our method over cross-product, pipeline and candidate reranking approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A typical class of sequence labeling tasks in many natural language processing (NLP) applications requires solving a cascade of segmentation and tagging subtasks. For example, in many Asian languages such as Japanese and Chinese, which do not explicitly mark word boundaries, word segmentation is the preliminary step in solving the part-of-speech (POS) tagging problem: sentences are first segmented into words, and then each word is assigned a part-of-speech tag. Both syntactic parsing and dependency parsing usually start with a textual input that is tokenized and POS tagged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The most common approach solves cascaded subtasks in a pipeline, which is very simple to implement and allows for a modular design. However, the key disadvantage of such methods is that errors propagate between stages, significantly degrading the quality of the final results. To cope with this problem, Shi and Wang (2007) proposed a reranking framework in which the N-best segmentation candidates generated in the first stage are passed to the tagging model, and the final output is the one with the highest overall segmentation and tagging probability score. The main drawback of this method is that the interaction between tagging and segmentation is restricted by the number of candidate segmentation outputs. Razvan C. Bunescu (2008) presented an improved pipeline model in which upstream subtask outputs are regarded as hidden variables whose probabilities are used as probabilistic features in the downstream subtasks. One shortcoming of this method is that calculating the marginal probabilities of the features may be inefficient, so approximations are required for fast computation. Another disadvantage of these two methods is that they employ separate training, so the segmentation model cannot take advantage of tagging information during training.", "cite_spans": [ { "start": 305, "end": 324, "text": "Shi and Wang (2007)", "ref_id": "BIBREF12" }, { "start": 707, "end": 731, "text": "Razvan C. Bunescu (2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, joint learning and decoding using the cross-product of segmentation states and tagging states does not suffer from the error propagation problem and achieves higher accuracy on both subtasks (Ng and Low, 2004) . However, two problems arise due to the large state space: the number of parameters increases rapidly, which makes the model apt to overfit the training corpus, and inference by dynamic programming can be inefficient. Sutton (2004) proposed Dynamic Conditional Random Fields (DCRFs) to perform joint training/decoding of subtasks using much fewer parameters than the cross-product approach. However, DCRFs do not guarantee satisfaction of the hard constraints that nodes within the same segment get a single consistent tagging label. 
Another drawback of DCRFs is that exact inference is generally time consuming, so approximations are required to make it tractable.", "cite_spans": [ { "start": 202, "end": 220, "text": "(Ng and Low, 2004)", "ref_id": "BIBREF10" }, { "start": 463, "end": 476, "text": "Sutton (2004)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, perceptron based learning frameworks have been well studied for incorporating node level and segment level features together (Kazama and Torisawa, 2007; Zhang and Clark, 2008) . The main shortcoming is that exact inference is intractable for dynamically generated segment level features, so a candidate based search algorithm is used for approximation. On the other hand, Jiang (2008) proposed a cascaded linear model with a two layer structure: the inside-layer model uses node level features to generate candidates, whose weights serve as inputs to the outside-layer model, which captures non-local features. As with pipeline models, the error propagation problem exists for such a method.", "cite_spans": [ { "start": 133, "end": 160, "text": "(Kazama and Torisawa, 2007;", "ref_id": "BIBREF6" }, { "start": 161, "end": 183, "text": "Zhang and Clark, 2008)", "ref_id": "BIBREF17" }, { "start": 387, "end": 399, "text": "Jiang (2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a novel graph structure that exploits joint training and decoding in the factorized state space. Our method does not suffer from error propagation, and guards against violations of the hard constraints imposed by the segmentation subtask. The motivation is to integrate the two Markov chains for the segmentation and tagging subtasks into a single chain that contains two types of nodes; standard dynamic programming based exact inference is then employed on the hybrid structure. Experiments are conducted on two different tasks, CoNLL 2000 shallow parsing and SIGHAN 2008 Chinese word segmentation and POS tagging. Evaluation results on the shallow parsing task show the superiority of our proposed method over the traditional joint training/decoding approach using the cross-product state space; our method achieves the best reported results when no additional resources are at hand. For the Chinese word segmentation and POS tagging task, a strong baseline pipeline model is built; experimental results show that the proposed method yields a more substantial improvement over the baseline than the candidate reranking approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows: In Section 2, we describe our novel graph structure. In Section 3, we analyze the complexity of our proposed method. Experimental results are shown in Section 4. We conclude the work in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Multi-chain integration using Virtual Nodes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We begin with a brief review of Conditional Random Fields (CRFs). Let x = x_1 x_2 . . . x_l denote the observed sequence, where x_i is the i-th node in the sequence and l is the sequence length; y = y_1 y_2 . . . y_l is a label sequence over x that we wish to predict. 
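As a tiny concrete instance of this notation (our own illustrative example, not from the paper, using CoNLL 2000 style chunk labels):

```python
# Illustrative only (ours): an observed sequence x and a label sequence y
# of the same length l, one label y_i per node x_i.
x = ['Confidence', 'in', 'the', 'pound']
y = ['B-NP', 'B-PP', 'B-NP', 'I-NP']
l = len(x)
assert len(y) == l
```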
CRFs (Lafferty et al., 2001 ) are undirected graphical models that use a Markov network distribution to learn the conditional probability. For sequence labeling tasks, linear chain CRFs are very popular, in which a first order Markov assumption is made on the labels:", "cite_spans": [ { "start": 270, "end": 292, "text": "(Lafferty et al., 2001", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "p(y|x) = (1/Z(x)) \u220f_i \u03c6(x, y, i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "\u03c6(x, y, i) = exp(w^T f(x, y_{i-1}, y_i, i)), Z(x) = \u2211_y \u220f_i \u03c6(x, y, i), and f(x, y_{i-1}, y_i, i) = [f_1(x, y_{i-1}, y_i, i), . . ., f_m(x, y_{i-1}, y_i, i)]^T. Each element f_j(x, y_{i-1}, y_i, i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "is a real valued feature function; here we simplify the notation of a state feature by writing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "f_j(x, y_i, i) = f_j(x, y_{i-1}, y_i, i). m is the cardinality of the feature set {f_j}, and w = [w_1, . . . , w_m]^T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "is a weight vector to be learned from the training set. Z(x) is the normalization factor over all label sequences for x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "In the traditional joint training/decoding approach for cascaded segmentation and tagging tasks, each label y_i has the form s_i-t_i, which consists of a segmentation label s_i and a tagging label t_i. Let s = s_1 s_2 . . . s_l be the segmentation label sequence over x. There are several commonly used label sets such as BI, BIO, IOE, BIES, etc. To facilitate our discussion, in later sections we will use the BIES label set, where B, I, E represent the Beginning, Inside and End of a multi-node segment respectively, and S denotes a single node segment. Let t = t_1 t_2 . . . t_l be the tagging label sequence over x. For example, in the named entity recognition task, t_i \u2208 {PER, LOC, ORG, MISC, O} represents an entity type (person name, location name, organization name, miscellaneous entity name and other). The graphical representation of linear chain CRFs is shown in Figure 1 , where the tagging label \"P\" is the simplification of \"PER\". For nodes that are labeled as other, we define s_i=S, t_i=O.", "cite_spans": [], "ref_spans": [ { "start": 778, "end": 786, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "Different from the traditional joint approach, our method integrates the two linear Markov chains for the segmentation and tagging subtasks into one chain that contains two types of nodes. Specifically, we first regard segmentation and tagging as two independent sequence labeling tasks and build the corresponding chain structures, as shown in the top and middle sub-figures of Figure 2 . Then a chain of twice the length of the observed sequence is built, where nodes x_1, . . . , x_l on the even positions are the original observed nodes, while nodes v_1, . . . , v_l on the odd positions are virtual nodes that have no content information. 
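A minimal sketch of this interleaving (our own illustration with hypothetical names, not code from the paper):

```python
# Sketch (ours): put a virtual node v_i, which will carry the segmentation
# state s_i, immediately before each original node x_i, which will carry
# the tagging state t_i.
def build_hybrid_chain(tokens):
    chain = []
    for i, token in enumerate(tokens):
        chain.append(('virtual', i))       # odd position: states in S
        chain.append(('original', token))  # even position: states in T
    return chain

print(build_hybrid_chain(['John', 'Smith', 'visited']))
# [('virtual', 0), ('original', 'John'), ('virtual', 1), ...]
# the combined label sequence reads s_1 t_1 s_2 t_2 s_3 t_3
```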
For the original nodes x_i, the state space is the tagging label set, while for the virtual nodes, the states are segmentation labels. The label sequence of the hybrid chain is y = y_1 . . .", "cite_spans": [], "ref_spans": [ { "start": 358, "end": 366, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Hybrid structure for cascaded labeling tasks", "sec_num": "2.2" }, { "text": "y_{2l} = s_1 t_1 . . . s_l t_l,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid structure for cascaded labeling tasks", "sec_num": "2.2" }, { "text": "where the combination of consecutive labels s_i t_i represents the full label for node x_i. Then we let s_i be connected with s_{i-1} and s_{i+1}, so that a first order Markov assumption is made on the segmentation states. Similarly, t_i is connected with t_{i-1} and t_{i+1}. Then neighboring tagging and segmentation states are connected as shown in the bottom sub-figure of Figure 2 . Satisfaction of the hard constraints that nodes within the same segment get a single consistent tagging label is guaranteed by introducing second order transition features", "cite_spans": [], "ref_spans": [ { "start": 362, "end": 370, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Hybrid structure for cascaded labeling tasks", "sec_num": "2.2" }, { "text": "f(t_{i-1}, s_i, t_i, i) that are true if t_{i-1} \u2260 t_i and s_i \u2208 {I,E}. For example, f_j(t_{i-1}, s_i, t_i, i) is defined as true if t_{i-1}=PER, s_i=I and t_i=LOC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid structure for cascaded labeling tasks", "sec_num": "2.2" }, { "text": "In other words, it is true if a segment is partially tagged as PER and partially tagged as LOC. Since such features are always false in the training corpus, their corresponding weights will be very low, so that inconsistent label assignments will not appear in the decoding procedure. The hybrid graph structure can be regarded as a special case of a second order Markov chain. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid structure for cascaded labeling tasks", "sec_num": "2.2" }, { "text": "Compared with the traditional joint model that exploits the cross-product state space, our hybrid structure uses factorized states and hence can handle more flexible features. Any state feature g(x, y_i, i) defined in the cross-product state space can be replaced by a first order transition feature in the factorized space:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorized features", "sec_num": "2.3" }, { "text": "f(x, s_i, t_i, i). As for the transition features, we use f(s_{i-1}, t_{i-1}, s_i, i) and f(t_{i-1}, s_i, t_i, i) instead of g(y_{i-1}, y_i, i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorized features", "sec_num": "2.3" }, { "text": "in the conventional joint model. Features in the cross-product state space require that the segmentation label and the tagging label take on particular values simultaneously; however, sometimes we want to specify a requirement on only the segmentation or the tagging label. For example, \"Smith\" may be the end of a person name, as in \"Speaker: John Smith\", or a single word person name, as in \"Professor Smith will . . . \". In such cases, our observation is that \"Smith\" is likely (part of) a person name; we do not care about its segmentation label. 
So we could define a state feature f(x, t_i, i) = true if x_i is \"Smith\" with tagging label t_i=PER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorized features", "sec_num": "2.3" }, { "text": "Furthermore, we could define features like", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorized features", "sec_num": "2.3" }, { "text": "f(x, t_{i-1}, t_i, i), f(x, s_{i-1}, s_i, i), f(x, t_{i-1}, s_i, i), etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorized features", "sec_num": "2.3" }, { "text": "The hybrid structure allows us to use a wide variety of features. In the remainder of the paper, we use the notations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorized features", "sec_num": "2.3" }, { "text": "f(x, t_{i-1}, s_i, t_i, i) and f(x, s_{i-1}, t_{i-1}, s_i, i) for simplicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factorized features", "sec_num": "2.3" }, { "text": "A hybrid CRF is a conditional distribution that factorizes according to the hybrid graphical model, and is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid CRFs", "sec_num": "2.4" }, { "text": "p(s, t|x) = (1/Z(x)) \u220f_i \u03c6(x, s, t, i) \u220f_i \u03c8(x, s, t, i), where \u03c6(x, s, t, i) = exp(w_1^T f(x, s_{i-1}, t_{i-1}, s_i)), \u03c8(x, s, t, i) = exp(w_2^T f(x, t_{i-1}, s_i, t_i)), and Z(x) = \u2211_{s,t} \u220f_i \u03c6(x, s, t, i) \u220f_i \u03c8(x, s, t, i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid CRFs", "sec_num": "2.4" }, { "text": "where w_1, w_2 are weight vectors. Luckily, unlike DCRFs, in which the graph structure can be very complex and the cross-product state space can be very large, in our cascaded labeling task the segmentation label set is often small; as far as we know, the most complicated segmentation label set has only 6 labels (Huang and Zhao, 2007) . So exact dynamic programming based algorithms can be performed efficiently.", "cite_spans": [ { "start": 314, "end": 336, "text": "(Huang and Zhao, 2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Hybrid CRFs", "sec_num": "2.4" }, { "text": "In the training stage, we use a second order forward-backward algorithm to compute the marginal", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid CRFs", "sec_num": "2.4" }, { "text": "probabilities p(x, s_{i-1}, t_{i-1}, s_i) and p(x, t_{i-1}, s_i, t_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid CRFs", "sec_num": "2.4" }, { "text": ", and the normalization factor Z(x). In the decoding stage, we use a second order Viterbi algorithm to find the best label sequence. 
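To make the decoding step concrete, here is a minimal second order Viterbi over the interleaved chain (our own sketch with a toy scoring function; in the actual model the scores come from the learned feature weights w_1 and w_2). The dynamic programming state keeps the pair of the two most recent labels, which is exactly what the triple features f(t_{i-1}, s_i, t_i) and f(s_{i-1}, t_{i-1}, s_i) require:

```python
# Minimal second order Viterbi on the hybrid chain (illustrative sketch,
# ours, not the paper's implementation). Positions alternate between a
# virtual node (states = segmentation labels S) and an original node
# (states = tagging labels T).
def viterbi_hybrid(n, S, T, score):
    spaces = []
    for _ in range(n):
        spaces.append(list(S))   # virtual node v_i
        spaces.append(list(T))   # original node x_i
    # dp[pos][(prev, cur)] = (best score, best predecessor pair)
    dp = [dict() for _ in range(2 * n)]
    for a in spaces[0]:
        dp[0][(None, a)] = (score(None, None, a, 0), None)
    for pos in range(1, 2 * n):
        for (p, c), (sc, _) in dp[pos - 1].items():
            for nxt in spaces[pos]:
                cand = sc + score(p, c, nxt, pos)
                key = (c, nxt)
                if key not in dp[pos] or cand > dp[pos][key][0]:
                    dp[pos][key] = (cand, (p, c))
    # trace back the best label sequence s_1 t_1 ... s_n t_n
    key = max(dp[-1], key=lambda k: dp[-1][k][0])
    labels = [key[1]]
    for pos in range(2 * n - 1, 0, -1):
        key = dp[pos][key][1]
        labels.append(key[1])
    labels.reverse()
    return labels

def toy_score(p, c, nxt, pos):
    # odd positions are original nodes, so the triple is (t_{i-1}, s_i, t_i);
    # forbid a tag change inside a segment, i.e. s_i in {I, E} with
    # t_{i-1} != t_i, mimicking the hard-constraint features.
    if pos % 2 == 1 and p is not None and c in ('I', 'E') and p != nxt:
        return -1e9
    return 0.0

print(viterbi_hybrid(3, ['B', 'I', 'E', 'S'], ['PER', 'LOC', 'O'], toy_score))
```

Per position this enumerates |S||T| label pairs times the next state space, matching the (|S| + |T|)|S||T| factor in the complexity analysis below.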
The Viterbi decoding is used to label a new sequence, and marginal computation is used for parameter estimation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid CRFs", "sec_num": "2.4" }, { "text": "The time complexity of the hybrid CRFs training and decoding procedures is higher than that of pipeline methods, but lower than that of traditional cross-product methods. Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "\u2022 |S| = size of the segmentation label set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "\u2022 |T| = size of the tagging label set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "\u2022 L = total number of nodes in the training data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "\u2022 U = total number of nodes in the testing data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "\u2022 c = number of joint training iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "\u2022 c_s = number of segmentation training iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "\u2022 c_t = number of tagging training iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "\u2022 N = number of candidates in the candidate reranking approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "Time requirements for pipeline, cross-product, candidate reranking and hybrid CRFs are summarized in Table 1 . 
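To get a feel for the entries of Table 1 (with our own example sizes, not measurements from the paper), take |S| = 4 BIES labels and |T| = 35 tagging labels:

```python
# Per-node, per-iteration training cost from Table 1; the sizes are our
# own illustrative choices.
S, T = 4, 35
pipeline      = S * S + T * T    # 1241
cross_product = (S * T) ** 2     # 19600
hybrid        = (S + T) * S * T  # 5460
print(pipeline, cross_product, hybrid)
# hybrid/pipeline ~ 4.4 and cross_product/hybrid ~ 3.6, i.e. roughly a
# factor of |S| in both directions, as derived next.
```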
For Hybrid CRFs, the original node x_i has features {f_j(t_{i-1}, s_i, t_i)}; accessing all label subsequences t_{i-1} s_i t_i takes |S||T|^2 time, while the virtual node v_i has features {f_j(s_{i-1}, t_{i-1}, s_i)}; accessing all label subsequences s_{i-1} t_{i-1} s_i takes |S|^2|T| time, so the final training complexity is (|S| + |T|)|S||T|cL. In real applications, |S| is small while |T| can be very large; we assume that |T| >> |S|, so for each iteration, hybrid CRFs are about |S| times slower than pipeline and |S| times faster than the cross-product method.", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 108, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "When decoding, the candidate reranking approach requires more time if the candidate number N > |S|.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "Though the space complexity cannot be compared directly among some of these methods, hybrid CRFs require fewer parameters than cross-product CRFs due to the factorized state space. This is similar to factorized CRFs (FCRFs) (Sutton et al., 2004) .", "cite_spans": [ { "start": 228, "end": 249, "text": "(Sutton et al., 2004)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Complexity Analysis", "sec_num": "3" }, { "text": "Our first experiment is the shallow parsing task. We use the corpus from the CoNLL 2000 shared task, which contains 8936 sentences for training and 2012 sentences for testing. There are 11 tagging labels: noun phrase (NP), verb phrase (VP), . . . 
and other (O). The segmentation state space we use is the BIES label set, since we find that it yields a small improvement over the BIO set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shallow Parsing", "sec_num": "4.1" }, { "text": "We use the standard evaluation metrics, which are precision P (percentage of output phrases that exactly match the reference phrases), recall R (percentage of reference phrases returned by our system), and their harmonic mean, the F1 score F1 = 2PR/(P+R) (which we call F score in what follows).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shallow Parsing", "sec_num": "4.1" }, { "text": "We compare our approach with the traditional cross-product method. To find good feature templates, development data are required. Since CoNLL 2000 does not provide a development data set, we divide the training data into 10 folds, using 9 folds for training and 1 fold for development. After selecting feature templates by cross validation, we extract features and learn their weights on the whole training data set. Feature templates are summarized in Table 2 , where w_i denotes the i-th word and p_i denotes the i-th POS tag.", "cite_spans": [], "ref_spans": [ { "start": 447, "end": 455, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Shallow Parsing", "sec_num": "4.1" }, { "text": "Notice that in the second row, the feature templates of the hybrid CRFs do not contain w_{i-2}s_i, w_{i+2}s_i, since we find that these two templates degrade performance in cross validation. However, w_{i-2}t_i, w_{i+2}t_i are useful, which implies that the proper context window size for segmentation is smaller than that for tagging. Similarly, for hybrid CRFs, the window size of POS bigram features for segmentation is 5 (from p_{i-2} to p_{i+2}, see the eighth row in the second column), while for tagging the size is 7 (from p_{i-3} to p_{i+3}, see the ninth row in the second column). However, for the cross-product method, the window sizes must be consistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shallow Parsing", "sec_num": "4.1" }, { "text": "For traditional cross-product CRFs and our hybrid CRFs, we use a fixed Gaussian prior \u03c3 = 1.0 for both methods; we find that this parameter does not significantly affect the results when it varies between 1 and 10. The LBFGS (Nocedal and Wright, 1999) method is employed for numerical optimization. Experimental results are shown in Table 3 . Our proposed CRFs achieve a performance gain of 0.43 points in F-score over cross-product CRFs while requiring less training time. For comparison, we also list the results of previous top systems, as shown in Table 4 . Our proposed method outperforms the other systems when no additional resources are at hand. Though recently semi-supervised learning that incorporates large amounts of unlabeled data has shown great improvement over traditional supervised methods, as in the last row of Table 4 , supervised learning is fundamental. 
We believe that the combination of our method and semi-supervised learning will achieve further improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shallow Parsing", "sec_num": "4.1" }, { "text": "Our second experiment is the Chinese word segmentation and POS tagging task. To facilitate comparison, we focus only on the closed test, which means that the system is trained only with the designated training corpus; no extra knowledge is allowed, including Chinese and Arabic numbers, letters and so on. We use the Chinese Treebank (CTB) POS corpus from the Fourth International SIGHAN Bakeoff data sets (Jin and Chen, 2008) . The training data consist of 23444 sentences, 642246 Chinese words and 1.05M Chinese characters, and the testing data consist of 2079 sentences, 59955 Chinese words and 0.1M Chinese characters. We compare our hybrid CRFs with the pipeline and candidate reranking methods (Shi and Wang, 2007) , using the same evaluation metrics as shallow parsing. We do not compare with cross-product CRFs due to their large number of parameters. For the pipeline method, we built our word segmenter based on the work of Huang and Zhao (2007) , which uses a 6 label representation, 7 feature templates (listed in Table 5 , where c_i denotes the i-th Chinese character in the sentence) and CRFs for parameter learning. We compare our segmenter with other top systems using the SIGHAN CTB corpus and evaluation metrics. Comparison results are shown in Table 6 ; our segmenter achieves an F-score of 95.12, ranking 4th among 26 official runs. Except for the first system, which uses extra unlabeled data, the differences between the remaining systems are not significant.", "cite_spans": [ { "start": 409, "end": 429, "text": "(Jin and Chen, 2008)", "ref_id": "BIBREF5" }, { "start": 688, "end": 708, "text": "(Shi and Wang, 2007)", "ref_id": "BIBREF12" }, { "start": 709, "end": 737, "text": "(Carreras and Marquez, 2003)", "ref_id": "BIBREF2" }, { "start": 742, "end": 764, "text": "(Milidiu et al., 2008)", "ref_id": "BIBREF9" }, { "start": 771, "end": 788, "text": "(Wu et al., 2006)", "ref_id": "BIBREF16" }, { "start": 877, "end": 898, "text": "(Suzuki et al., 2007)", "ref_id": "BIBREF15" }, { "start": 939, "end": 961, "text": "(Ando and Zhang, 2005)", "ref_id": "BIBREF0" }, { "start": 967, "end": 987, "text": "(Zhang et al., 2002)", "ref_id": "BIBREF18" }, { "start": 1013, "end": 1039, "text": "(Suzuki and Isozaki, 2008)", "ref_id": "BIBREF14" }, { "start": 1272, "end": 1293, "text": "Huang and Zhao (2007)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1362, "end": 1369, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 1595, "end": 1602, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Chinese word segmentation and POS tagging", "sec_num": "4.2" }, { "text": "Our POS tagging system is based on linear chain CRFs. 
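The word-level observations behind the POS tagging templates in Table 5 can be sketched as follows (our own hypothetical helper, following the c_j(w_i) and l(w_i) notation described below, not code from the paper):

```python
# Sketch (ours): character-level observations of a word w_i, where
# c_j(w_i) with j > 0 is the j-th character, c_j(w_i) with j < 0 is the
# j-th last character, and l(w_i) is the word length.
def word_observations(word):
    obs = {'c1': word[0], 'c-1': word[-1], 'len': len(word)}
    if len(word) > 1:
        obs['c2'] = word[1]
        obs['c-2'] = word[-2]
        obs['c1c2'] = word[0] + word[1]      # prefix bigram, template (2.4)
        obs['c-2c-1'] = word[-2] + word[-1]  # suffix bigram, template (2.4)
    if len(word) > 2:
        obs['c3'] = word[2]                  # template (2.3)
    return obs
```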
Since SIGHAN does not provide development data, we use the 10 fold cross validation described in the previous experiment to tune the feature templates and the Gaussian prior. Feature templates are listed in Table 5 , where w_i denotes the i-th word in the sentence, c_j(w_i), j > 0 denotes the j-th Chinese character of word w_i, c_j(w_i), j < 0 denotes the j-th last Chinese character, and l(w_i) denotes the word length of w_i. We compare our POS tagger with other top systems on the Bakeoff CTB POS corpus, where sentences are perfectly segmented into words; our POS tagger achieves 94.29 accuracy, which is the best of 7 official runs. Comparison results are shown in Table 7 .", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 263, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 719, "end": 726, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Chinese word segmentation and POS tagging", "sec_num": "4.2" }, { "text": "Segmentation feature templates: (1.1) c_{i-2}s_i, c_{i-1}s_i, c_i s_i, c_{i+1}s_i, c_{i+2}s_i; (1.2) c_{i-1}c_i s_i, c_i c_{i+1}s_i, c_{i-1}c_{i+1}s_i; (1.3) s_{i-1}s_i. POS tagging feature templates: (2.1) w_{i-2}t_i, w_{i-1}t_i, w_i t_i, w_{i+1}t_i, w_{i+2}t_i; (2.2) w_{i-2}w_{i-1}t_i, w_{i-1}w_i t_i, w_i w_{i+1}t_i, w_{i+1}w_{i+2}t_i, w_{i-1}w_{i+1}t_i; (2.3) c_1(w_i)t_i, c_2(w_i)t_i, c_3(w_i)t_i, c_{-2}(w_i)t_i, c_{-1}(w_i)t_i; (2.4) c_1(w_i)c_2(w_i)t_i, c_{-2}(w_i)c_{-1}(w_i)t_i; (2.5) l(w_i)t_i; (2.6) t_{i-1}t_i. Joint segmentation and POS tagging feature templates: (3.1) c_{i-2}s_i, c_{i-1}s_i, c_i s_i, c_{i+1}s_i, c_{i+2}s_i; (3.2) c_{i-1}c_i s_i, c_i c_{i+1}s_i, c_{i-1}c_{i+1}s_i; (3.3) c_{i-3}t_i, c_{i-2}t_i, c_{i-1}t_i, c_i t_i, c_{i+1}t_i, c_{i+2}t_i, c_{i+3}t_i; (3.4) c_{i-3}c_{i-2}t_i, c_{i-2}c_{i-1}t_i, c_{i-1}c_i t_i, c_i c_{i+1}t_i, c_{i+1}c_{i+2}t_i, c_{i+2}c_{i+3}t_i, c_{i-2}c_i t_i, c_i c_{i+2}t_i; (3.5) c_i s_i t_i; (3.6) c_i t_{i-1}t_i; (3.7) s_{i-1}t_{i-1}s_i, t_{i-1}s_i t_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese word segmentation and POS tagging", "sec_num": "4.2" }, { "text": "For the reranking method, we vary the candidate number n among n \u2208 {10, 20, 50, 100}. For hybrid CRFs, we use the same segmentation label set as the segmenter in the pipeline. Feature templates are listed in Table 5 . Experimental results are shown in Figure 3 . The gain of hybrid CRFs over the baseline pipeline model is 0.48 points in F-score, about 3 times that of the 100-best reranking approach, which achieves a 0.13 point improvement. Though a larger candidate number can achieve higher performance, the improvement becomes trivial for n > 20.", "cite_spans": [ { "start": 555, "end": 575, "text": "(Shi and Wang, 2007)", "ref_id": "BIBREF12" }, { "start": 600, "end": 620, "text": "(Shi and Wang, 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 199, "end": 206, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 243, "end": 251, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Chinese word segmentation and POS tagging", "sec_num": "4.2" }, { "text": "Table 8 shows the comparison between our work and other relevant work. Notice that such a comparison is indirect due to different data sets and resources. 
One common conclusion is that joint models generally outperform pipeline models.", "cite_spans": [ { "start": 9, "end": 32, "text": "(Zhang and Clark, 2008)", "ref_id": "BIBREF17" }, { "start": 56, "end": 79, "text": "(Zhang and Clark, 2008)", "ref_id": "BIBREF17" }, { "start": 102, "end": 122, "text": "(Jiang et al., 2008)", "ref_id": "BIBREF4" }, { "start": 144, "end": 164, "text": "(Jiang et al., 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Chinese word segmentation and POS tagging", "sec_num": "4.2" }, { "text": "We introduced a framework that integrates the graph structures of the segmentation and tagging subtasks into one using virtual nodes, and performs joint training and decoding in the factorized state space. Our approach does not suffer from error propagation, and guards against violations of the hard constraints imposed by the segmentation subtask. Experiments on shallow parsing and Chinese word segmentation and POS tagging tasks demonstrate the effectiveness of our technique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by 973 Program 2010CB327906 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A high-performance semisupervised learning method for text chunking", "authors": [ { "first": "R", "middle": [], "last": "Ando", "suffix": "" }, { "first": "T", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Ando and T. Zhang. 2005. A high-performance semi-supervised learning method for text chunking. In Proceedings of ACL, pages 1-9.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning with probabilistic features for improved pipeline models", "authors": [ { "first": "C", "middle": [], "last": "Razvan", "suffix": "" }, { "first": "", "middle": [], "last": "Bunescu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan C. Bunescu. 2008. Learning with probabilistic features for improved pipeline models. In Proceedings of EMNLP, Waikiki, Honolulu, Hawaii.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Phrase recognition by filtering and ranking with perceptrons", "authors": [ { "first": "X", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "", "middle": [], "last": "Marquez", "suffix": "" } ], "year": 2003, "venue": "Proceedings of RANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X Carreras and L Marquez. 2003. Phrase recognition by filtering and ranking with perceptrons. In Proceedings of RANLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Chinese word segmentation: A decade review", "authors": [ { "first": "Changning", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2007, "venue": "Journal of Chinese Information Processing", "volume": "21", "issue": "", "pages": "8--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Changning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. 
Journal of Chinese Information Processing, 21:8-19.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A cascaded linear model for joint chinese word segmentation and part-of-speech tagging", "authors": [ { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yajuan", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan Lu. 2008. A cascaded linear model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of ACL, Columbus, Ohio, USA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The fourth international chinese language processing bakeoff: Chinese word segmentation, named entity recognition and chinese pos tagging", "authors": [ { "first": "Guangjin", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guangjin Jin and Xiao Chen. 2008. The fourth international chinese language processing bakeoff: Chinese word segmentation, named entity recognition and chinese pos tagging. In Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing, India.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A new perceptron algorithm for sequence labeling with nonlocal features", "authors": [ { "first": "Junichi", "middle": [], "last": "Kazama", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Torisawa", "suffix": "" } ], "year": 2007, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "315--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junichi Kazama and Kentaro Torisawa. 2007. A new perceptron algorithm for sequence labeling with non-local features. In Proceedings of EMNLP, pages 315-324, Prague, June.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Chunking with support vector machines", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2001, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of NAACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 
In Proceedings of ICML.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Phrase chunking using entropy guided transformation learning", "authors": [ { "first": "L", "middle": [], "last": "Ruy", "suffix": "" }, { "first": "Cicero", "middle": [], "last": "Milidiu", "suffix": "" }, { "first": "Santos", "middle": [], "last": "Nogueira Dos", "suffix": "" }, { "first": "Julio", "middle": [ "C" ], "last": "Duarte", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "647--655", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruy L. Milidiu, Cicero Nogueira dos Santos, and Julio C. Duarte. 2008. Phrase chunking using entropy guided transformation learning. In Proceedings of ACL, pages 647-655.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Chinese part-of-speech tagging: One-at-a-time or all-at-once? word-based or character-based?", "authors": [ { "first": "Tou", "middle": [], "last": "Hwee", "suffix": "" }, { "first": "Jin", "middle": [ "Kiat" ], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Low", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-of-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Proceedings of EMNLP.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Numerical Optimization", "authors": [ { "first": "J", "middle": [], "last": "Nocedal", "suffix": "" }, { "first": "S", "middle": [ "J" ], "last": "Wright", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nocedal and S. J. Wright. 1999. Numerical Optimization. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A dual-layer crfs based joint decoding method for cascaded segmentation and labeling tasks", "authors": [ { "first": "Yanxin", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Mengqiu", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2007, "venue": "Proceedings of IJCAI", "volume": "", "issue": "", "pages": "1707--1712", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanxin Shi and Mengqiu Wang. 2007. A dual-layer crfs based joint decoding method for cascaded segmentation and labeling tasks. In Proceedings of IJCAI, pages 1707-1712, Hyderabad, India.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data", "authors": [ { "first": "K", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "A", "middle": [], "last": "Rohanimanesh", "suffix": "" }, { "first": "", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Sutton, K. Rohanimanesh, and A. McCallum. 2004. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. 
In Proceedings of ICML.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data", "authors": [ { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "665--673", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Suzuki and Hideki Isozaki. 2008. Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. In Proceedings of ACL, pages 665-673.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Semi-supervised structured output learning based on a hybrid generative and discriminative approach", "authors": [ { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Akinori", "middle": [], "last": "Fujino", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "" } ], "year": 2007, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Suzuki, Akinori Fujino, and Hideki Isozaki. 2007. Semi-supervised structured output learning based on a hybrid generative and discriminative approach. In Proceedings of EMNLP, Prague.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A general and multi-lingual phrase chunking model based on masking method", "authors": [ { "first": "Yu-Chieh", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Chia-Hui", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yue-Shi", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "144--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu-Chieh Wu, Chia-Hui Chang, and Yue-Shi Lee. 2006. A general and multi-lingual phrase chunking model based on masking method. In Proceedings of Intelligent Text Processing and Computational Linguistics, pages 144-155.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Joint word segmentation and pos tagging using a single perceptron", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2008. Joint word segmentation and pos tagging using a single perceptron. In Proceedings of ACL, Columbus, Ohio, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Text chunking based on a generalization of winnow", "authors": [ { "first": "F", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "D", "middle": [], "last": "Damerau", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2002, "venue": "Machine Learning Research", "volume": "2", "issue": "", "pages": "615--637", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Zhang, F. Damerau, and D. Johnson. 2002. Text chunking based on a generalization of winnow. 
Machine Learning Research, 2:615-637.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "106--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao and Chunyu Kit. 2008. Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition. In Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing, pages 106-111.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Graphical representation of linear chain CRFs for traditional joint learning/decoding" }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "" }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "Results for Chinese word segmentation and POS tagging task; Hybrid CRFs significantly outperform 100-Best Reranking (McNemar's test; p < 0.01)" }, "TABREF1": { "content": "
Method | Training | Decoding
Pipeline | (|S|^2 c_s + |T|^2 c_t)L | (|S|^2 + |T|^2)U
Cross-Product | (|S||T|)^2 cL | (|S||T|)^2 U
Reranking | (|S|^2 c_s + |T|^2 c_t)L | (|S|^2 + |T|^2)NU
Hybrid | (|S| + |T|)|S||T|cL | (|S| + |T|)|S||T|U
", "num": null, "text": "Time Complexity", "type_str": "table", "html": null }, "TABREF2": { "content": "
Cross-Product CRFs | Hybrid CRFs
w_{i-2}y_i, w_{i-1}y_i, w_i y_i, w_{i+1}y_i, w_{i+2}y_i | w_{i-1}s_i, w_i s_i, w_{i+1}s_i; w_{i-2}t_i, w_{i-1}t_i, w_i t_i, w_{i+1}t_i, w_{i+2}t_i
w_{i-1}w_i y_i, w_i w_{i+1}y_i | w_{i-1}w_i s_i, w_i w_{i+1}s_i; w_{i-1}w_i t_i, w_i w_{i+1}t_i
p_{i-2}y_i, p_{i-1}y_i, p_i y_i, p_{i+1}y_i, p_{i+2}y_i | p_{i-1}s_i, p_i s_i, p_{i+1}s_i; p_{i-2}t_i, p_{i-1}t_i, p_{i+1}t_i, p_{i+2}t_i
p_{i-2}p_{i-1}y_i, p_{i-1}p_i y_i, p_i p_{i+1}y_i, p_{i+1}p_{i+2}y_i | p_{i-2}p_{i-1}s_i, p_{i-1}p_i s_i, p_i p_{i+1}s_i, p_{i+1}p_{i+2}s_i; p_{i-3}p_{i-2}t_i, p_{i-2}p_{i-1}t_i, p_{i-1}p_i t_i, p_i p_{i+1}t_i, p_{i+1}p_{i+2}t_i, p_{i+2}p_{i+3}t_i, p_{i-1}p_{i+1}t_i
p_{i-2}p_{i-1}p_i y_i, p_{i-1}p_i p_{i+1}y_i, p_i p_{i+1}p_{i+2}y_i | p_{i-2}p_{i-1}p_i s_i, p_{i-1}p_i p_{i+1}s_i, p_i p_{i+1}p_{i+2}s_i
 | w_i p_i t_i; w_i s_{i-1}s_i; w_{i-1}t_{i-1}t_i, w_i t_{i-1}t_i, p_{i-1}t_{i-1}t_i, p_i t_{i-1}t_i
y_{i-1}y_i | s_{i-1}t_{i-1}s_i, t_{i-1}s_i t_i
", "num": null, "text": "Feature templates for shallow parsing task", "type_str": "table", "html": null }, "TABREF3": { "content": "
Method | Cross-Product CRFs | Hybrid CRFs
Training Time | 11.6 hours | 6.3 hours
Feature Number | 13 million | 10 million
Iterations | 118 | 141
F1 | 93.88 | 94.31
", "num": null, "text": "Results for shallow parsing task, Hybrid CRFs significantly outperform Cross-Product CRFs (McNemar's test; p < 0.01)", "type_str": "table", "html": null }, "TABREF4": { "content": "
Method | F1 | Additional Resources
Cross-Product CRFs | 93.88 |
Hybrid CRFs | 94.31 |
SVM combination (Kudo and Matsumoto, 2001) | 93.91 |
Voted Perceptrons (Carreras and Marquez, 2003) | 93.74 | none
ETL (Milidiu et al., 2008) | 92.79 |
(Wu et al., 2006) | 94.21 | Extended features such as token features, affixes
HySOL (Suzuki et al., 2007) | 94.36 | 17M words unlabeled data
ASO-semi (Ando and Zhang, 2005) | 94.39 | 15M words unlabeled data
(Zhang et al., 2002) | 94.17 | full parser output
(Suzuki and Isozaki, 2008) | 95.15 | 1G words unlabeled data
", "num": null, "text": "", "type_str": "table", "html": null }, "TABREF5": { "content": "", "num": null, "text": "Feature templates for Chinese word segmentation and POS tagging task Segmentation feature templates", "type_str": "table", "html": null }, "TABREF6": { "content": "
Rank | F1 | Description
1/26 | 95.89* | official best, using extra unlabeled data (Zhao and Kit, 2008)
2/26 | 95.33 | official second
3/26 | 95.17 | official third
4/26 | 95.12 | segmenter in pipeline system
", "num": null, "text": "Word segmentation results on Fourth SIGHAN Bakeoff CTB corpus", "type_str": "table", "html": null }, "TABREF7": { "content": "
Rank | Accuracy | Description
1/7 | 94.29 | POS tagger in pipeline system
2/7 | 94.28 | official best
3/7 | 94.01 | official second
4/7 | 93.24 | official third
", "num": null, "text": "POS results on Fourth SIGHAN Bakeoff CTB corpus", "type_str": "table", "html": null }, "TABREF8": { "content": "
[Figure 3: F score (90.3 to 90.9) versus candidate number (0 to 100); curves for candidate reranking and Hybrid CRFs]
", "num": null, "text": "shows the comparison between our work and other relevant work. Notice that, such comparison is indirect due to different data sets and re-", "type_str": "table", "html": null }, "TABREF9": { "content": "
Model | F1
Pipeline (ours) | 90.40
100-Best Reranking (ours) | 90.53
Hybrid CRFs (ours) | 90.88
Pipeline (Shi and Wang, 2007) | 91.67
20-Best Reranking (Shi and Wang, 2007) | 91.86
Pipeline (Zhang and Clark, 2008) | 90.33
Joint Perceptron (Zhang and Clark, 2008) | 91.34
Perceptron Only (Jiang et al., 2008) | 92.5
Cascaded Linear (Jiang et al., 2008) | 93.4
", "num": null, "text": "Comparison of word segmentation and POS tagging, such comparison is indirect due to different data sets and resources.", "type_str": "table", "html": null } } } }