{ "paper_id": "P06-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:26:27.271558Z" }, "title": "Training Conditional Random Fields with Multivariate Evaluation Measures", "authors": [ { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Corp", "location": { "addrLine": "2-4 Hikaridai, Seika-cho, Soraku-gun", "postCode": "619-0237", "settlement": "Kyoto", "country": "Japan" } }, "email": "" }, { "first": "Erik", "middle": [], "last": "Mcdermott", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Corp", "location": { "addrLine": "2-4 Hikaridai, Seika-cho, Soraku-gun", "postCode": "619-0237", "settlement": "Kyoto", "country": "Japan" } }, "email": "" }, { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Corp", "location": { "addrLine": "2-4 Hikaridai, Seika-cho, Soraku-gun", "postCode": "619-0237", "settlement": "Kyoto", "country": "Japan" } }, "email": "isozaki@cslab.kecl.ntt.co.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes a framework for training Conditional Random Fields (CRFs) to optimize multivariate evaluation measures, including non-linear measures such as F-score. Our proposed framework is derived from an error minimization approach that provides a simple solution for directly optimizing any evaluation measure. Specifically focusing on sequential segmentation tasks, i.e. text chunking and named entity recognition, we introduce a loss function that closely reflects the target evaluation measure for these tasks, namely, segmentation F-score. Our experiments show that our method performs better than standard CRF training.", "pdf_parse": { "paper_id": "P06-1028", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes a framework for training Conditional Random Fields (CRFs) to optimize multivariate evaluation measures, including non-linear measures such as F-score. Our proposed framework is derived from an error minimization approach that provides a simple solution for directly optimizing any evaluation measure. Specifically focusing on sequential segmentation tasks, i.e. text chunking and named entity recognition, we introduce a loss function that closely reflects the target evaluation measure for these tasks, namely, segmentation F-score. Our experiments show that our method performs better than standard CRF training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Conditional random fields (CRFs) are a recently introduced formalism (Lafferty et al., 2001) for representing a conditional model p(y|x), where both a set of inputs, x, and a set of outputs, y, display non-trivial interdependency. CRFs are basically defined as a discriminative model of Markov random fields conditioned on inputs (observations) x. Unlike generative models, CRFs model only the output y's distribution over x. This allows CRFs to use flexible features such as complicated functions of multiple observations. 
The modeling power of CRFs has been of great benefit in several applications, such as shallow parsing (Sha and Pereira, 2003) and information extraction (McCallum and Li, 2003) .", "cite_spans": [ { "start": 69, "end": 92, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF10" }, { "start": 626, "end": 649, "text": "(Sha and Pereira, 2003)", "ref_id": "BIBREF20" }, { "start": 677, "end": 700, "text": "(McCallum and Li, 2003)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since the introduction of CRFs, intensive research has been undertaken to boost their effectiveness. The first approach to estimating CRF parameters is the maximum likelihood (ML) criterion over conditional probability p(y|x) itself (Lafferty et al., 2001 ). The ML criterion, however, is prone to over-fitting the training data, especially since CRFs are often trained with a very large number of correlated features. The maximum a posteriori (MAP) criterion over parameters, \u03bb, given x and y is the natural choice for reducing over-fitting (Sha and Pereira, 2003) . Moreover, the Bayes approach, which optimizes both MAP and the prior distribution of the parameters, has also been proposed (Qi et al., 2005) . Furthermore, large margin criteria have been employed to optimize the model parameters (Taskar et al., 2004; Tsochantaridis et al., 2005) .", "cite_spans": [ { "start": 233, "end": 255, "text": "(Lafferty et al., 2001", "ref_id": "BIBREF10" }, { "start": 542, "end": 565, "text": "(Sha and Pereira, 2003)", "ref_id": "BIBREF20" }, { "start": 692, "end": 709, "text": "(Qi et al., 2005)", "ref_id": "BIBREF14" }, { "start": 799, "end": 820, "text": "(Taskar et al., 2004;", "ref_id": "BIBREF21" }, { "start": 821, "end": 849, "text": "Tsochantaridis et al., 2005)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These training criteria have yielded excellent results for various tasks. However, real world tasks are evaluated by task-specific evaluation measures, including non-linear measures such as Fscore, while all of the above criteria achieve optimization based on the linear combination of average accuracies, or error rates, rather than a given task-specific evaluation measure. For example, sequential segmentation tasks (SSTs) , such as text chunking and named entity recognition, are generally evaluated with the segmentation F-score. This inconsistency between the objective function during training and the task evaluation measure might produce a suboptimal result.", "cite_spans": [ { "start": 419, "end": 425, "text": "(SSTs)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In fact, to overcome this inconsistency, an SVM-based multivariate optimization method has recently been proposed (Joachims, 2005) . Moreover, an F-score optimization method for logistic regression has also been proposed (Jansche, 2005) . In the same spirit as the above studies, we first propose a generalization framework for CRF training that allows us to optimize directly not only the error rate, but also any evaluation measure. In other words, our framework can incorporate any evaluation measure of interest into the loss function and then optimize this loss function as the training objective function. 
Our proposed framework is fundamentally derived from an approach to (smoothed) error rate minimization well known in the speech and pattern recognition community, namely the Minimum Classification Error (MCE) framework (Juang and Katagiri, 1992) . The framework of MCE criterion training supports the theoretical background of our method. The approach proposed here subsumes the conventional ML/MAP criteria training of CRFs, as described in the following.", "cite_spans": [ { "start": 114, "end": 130, "text": "(Joachims, 2005)", "ref_id": "BIBREF5" }, { "start": 221, "end": 236, "text": "(Jansche, 2005)", "ref_id": "BIBREF4" }, { "start": 831, "end": 857, "text": "(Juang and Katagiri, 1992)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After describing the new framework, as an example of optimizing multivariate evaluation measures, we focus on SSTs and introduce a segmentation F-score loss function for CRFs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given an input (observation) x\u2208 X and parameter vector \u03bb = {\u03bb 1 , . . . , \u03bb M }, CRFs define the conditional probability p(y|x) of a particular output y \u2208 Y as being proportional to a product of potential functions on the cliques of a graph, which represents the interdependency of y and x. That is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "p(y|x; \u03bb) = 1 Z \u03bb (x) c\u2208C(y,x) \u03a6c(y, x; \u03bb)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "where \u03a6 c (y, x; \u03bb) is a non-negative real value potential function on a clique c \u2208 C(y, x). Z \u03bb (x) = \u1ef9\u2208Y c\u2208C(\u1ef9,x) \u03a6 c (\u1ef9, x; \u03bb) is a normalization factor over all output values, Y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "Following the definitions of (Sha and Pereira, 2003) , a log-linear combination of weighted features, \u03a6 c (y, x; \u03bb) = exp(\u03bb \u2022 f c (y, x)), is used as individual potential functions, where f c represents a feature vector obtained from the corresponding clique c. That is,", "cite_spans": [ { "start": 29, "end": 52, "text": "(Sha and Pereira, 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "c\u2208C(y,x) \u03a6 c (y, x) = exp(\u03bb\u2022F (y, x)), where F (y, x) = c f c (y, x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "is the CRF's global feature vector for x and y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "The most probable output\u0177 is given by\u0177 = arg max y\u2208Y p(y|x; \u03bb). However Z \u03bb (x) never affects the decision of\u0177 since Z \u03bb (x) does not depend on y. 
Thus, we can obtain the following discriminant function for CRFs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y = arg max y\u2208Y \u03bb \u2022 F (y, x).", "eq_num": "(1)" } ], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "The maximum (log-)likelihood (ML) of the conditional probability p(y|x; \u03bb) of training data {(x k , y * k )} N k=1 w.r.t. parameters \u03bb is the most basic CRF training criterion, that is, arg max \u03bb k log p(y * k |x k ; \u03bb), where y * k is the correct output for the given x k . Maximizing the conditional log-likelihood given by CRFs is equivalent to minimizing the log-loss function, k \u2212 log p(y * k |x k ; \u03bb). We minimize the following loss function for the ML criterion training of CRFs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "L ML \u03bb = k \u2212\u03bb \u2022 F (y * k , x k ) + log Z \u03bb (x k ) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "To reduce over-fitting, the Maximum a Posteriori (MAP) criterion of parameters \u03bb, that is, arg max \u03bb k log p(\u03bb|y * k , x k ) \u221d k log p(y * k |x k ; \u03bb)p(\u03bb), is now the most widely used CRF training criterion. Therefore, we minimize the following loss function for the MAP criterion training of CRFs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L MAP \u03bb = L ML \u03bb \u2212 log p(\u03bb).", "eq_num": "(2)" } ], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "There are several possible choices when selecting a prior distribution p(\u03bb). This paper only considers L \u03c6 -norm prior, p(\u03bb) \u221d exp(\u2212||\u03bb|| \u03c6 /\u03c6C), which becomes a Gaussian prior when \u03c6=2. The essential difference between ML and MAP is simply that MAP has this prior term in the objective function. This paper sometimes refers to the ML and MAP criterion training of CRFs as ML/MAP. In order to estimate the parameters \u03bb, we seek a zero of the gradient over the parameters \u03bb:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2207L MAP \u03bb = \u2212\u2207 log p(\u03bb) + k \u2212F (y * k , x k ) + y\u2208Y k exp(\u03bb\u2022F (y, x k )) Z \u03bb (x k ) \u2022F (y, x k ) .", "eq_num": "(3)" } ], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "The gradient of ML is Eq. 3 without the gradient term of the prior, \u2212\u2207 log p(\u03bb). 
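For concreteness, the following is a minimal sketch, not the authors' implementation, of the MAP loss of Eq. 2 and its gradient, Eq. 3, for a single training example. It enumerates the candidate outputs explicitly instead of using the forward-backward recursions of (Sha and Pereira, 2003); the function and variable names are illustrative only.

```python
import numpy as np

def map_loss_and_grad(lmbda, feats, y_star, C=1.0):
    """MAP loss (Eq. 2) and gradient (Eq. 3) for one example,
    computed by brute-force enumeration of the candidate outputs.

    feats  : dict mapping each candidate output y to its global feature vector F(y, x)
    y_star : the correct output for this example
    C      : constant of the L2 (Gaussian) prior
    """
    scores = {y: float(lmbda @ f) for y, f in feats.items()}   # lambda . F(y, x)
    log_z = np.logaddexp.reduce(list(scores.values()))         # log Z_lambda(x)
    loss = -scores[y_star] + log_z + (lmbda @ lmbda) / (2.0 * C)
    expected_f = sum(np.exp(s - log_z) * feats[y] for y, s in scores.items())
    grad = -feats[y_star] + expected_f + lmbda / C              # Eq. 3 plus the prior term
    return loss, grad

# Toy usage: three candidate outputs with two-dimensional feature vectors.
feats = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0]), "c": np.array([0.5, 0.5])}
print(map_loss_and_grad(np.zeros(2), feats, y_star="a"))
```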
The details of actual optimization procedures for linear chain CRFs, which are typical CRF applications, have already been reported (Sha and Pereira, 2003) .", "cite_spans": [ { "start": 213, "end": 236, "text": "(Sha and Pereira, 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "CRFs and Training Criteria", "sec_num": "2" }, { "text": "The Minimum Classification Error (MCE) framework first arose out of a broader family of approaches to pattern classifier design known as Generalized Probabilistic Descent (GPD) (Katagiri et al., 1991) . The MCE criterion minimizes an empirical loss corresponding to a smooth approximation of the classification error. This MCE loss is itself defined in terms of a misclassification measure derived from the discriminant functions of a given task. Via the smoothing parameters, the MCE loss function can be made arbitrarily close to the binary classification error. An important property of this framework is that it makes it possible in principle to achieve the optimal Bayes error even under incorrect modeling assumptions. It is easy to extend the MCE framework to use evaluation measures other than the classification error, namely the linear combination of error rates. Thus, it is possible to optimize directly a variety of (smoothed) evaluation measures. This is the approach proposed in this article.", "cite_spans": [ { "start": 177, "end": 200, "text": "(Katagiri et al., 1991)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "MCE Criterion Training for CRFs", "sec_num": "3" }, { "text": "We first introduce a framework for MCE criterion training, focusing only on error rate optimization. Sec. 4 then describes an example of minimizing a different multivariate evaluation measure using MCE criterion training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MCE Criterion Training for CRFs", "sec_num": "3" }, { "text": "Let x \u2208 X be an input, and y \u2208 Y be an output. The Bayes decision rule decides the most probable output\u0177 for x, by using the maximum a posteriori probability,\u0177 = arg max y\u2208Y p(y|x; \u03bb). In general, p(y|x; \u03bb) can be replaced by a more general discriminant function, that is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y = arg max y\u2208Y g(y, x, \u03bb).", "eq_num": "(4)" } ], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "Using the discriminant functions for the possible output of the task, the misclassification measure d() is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d(y * ,x, \u03bb) = \u2212g(y * ,x, \u03bb) + max y\u2208Y\\y * g(y, x, \u03bb).", "eq_num": "(5)" } ], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "where y * is the correct output for x. Here it can be noted that, for a given x, d() \u2265 0 indicates misclassification. By using d(), the minimization of the error rate can be rewritten as the minimization of the sum of 0-1 (step) losses of the given training data. 
That is, arg min \u03bb L \u03bb where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L \u03bb = k \u03b4(d(y * k , x k , \u03bb)).", "eq_num": "(6)" } ], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "\u03b4(r) is a step function returning 0 if r<0 and 1 otherwise. That is, \u03b4 is 0 if the value of the discriminant function of the correct output g(y * k , x k , \u03bb) is greater than that of the maximum incorrect output g(y k , x k , \u03bb), and \u03b4 is 1 otherwise. Eq. 5 is not an appropriate function for optimization since it is a discontinuous function w.r.t. the parameters \u03bb. One choice of continuous misclassification measure consists of substituting 'max' with 'soft-max', max k r k \u2248 log k exp(r k ). As a result", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d(y * , x, \u03bb) = \u2212g * +log A y\u2208Y\\y * exp(\u03c8g) 1 \u03c8 ,", "eq_num": "(7)" } ], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "where g * = g(y * , x, \u03bb), g = g(y, x, \u03bb), and A = 1 |Y|\u22121 . \u03c8 is a positive constant that represents L \u03c8norm. When \u03c8 approaches \u221e, Eq. 7 converges to Eq. 5. Note that we can design any misclassification measure, including non-linear measures for d(). Some examples are shown in the Appendices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "Of even greater concern is the fact that the step function \u03b4 is discontinuous; minimization of Eq. 6 is therefore NP-complete. In the MCE formalism, \u03b4() is replaced with an approximated 0-1 loss function, l(), which we refer to as a smoothing function. A typical choice for l() is the sigmoid function, l sig (), which is differentiable and provides a good approximation of the 0-1 loss when the hyper-parameter \u03b1 is large (see Eq. 8). Another choice is the (regularized) logistic function, l log (), that gives the upper bound of the 0-1 loss. Logistic loss is used as a conventional CRF loss function and provides convexity while the sigmoid function does not. These two smoothing functions can be written as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "l sig = (1 + exp(\u2212\u03b1 \u2022 d(y * , x, \u03bb) \u2212 \u03b2)) \u22121 l log = \u03b1 \u22121 \u2022 log(1 + exp(\u03b1 \u2022 d(y * , x, \u03bb) + \u03b2)),", "eq_num": "(8)" } ], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "where \u03b1 and \u03b2 are the hyper-parameters of the training. We can introduce a regularization term to reduce over-fitting, which is derived using the same sense as in MAP, Eq. 2. 
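As a small illustration, the soft-max misclassification measure of Eq. 7 and the two smoothing functions of Eq. 8 can be sketched as below; the sketch assumes the scores of the incorrect outputs can simply be enumerated, and all names are illustrative.

```python
import numpy as np

def misclassification_measure(g_star, g_incorrect, psi=2.0):
    """Soft-max misclassification measure d() of Eq. 7; psi -> infinity recovers Eq. 5."""
    g = np.asarray(g_incorrect, dtype=float)
    A = 1.0 / len(g)                                    # A = 1 / (|Y| - 1)
    soft_max = (np.log(A) + np.logaddexp.reduce(psi * g)) / psi
    return -g_star + soft_max

def l_sig(d, alpha=1.0, beta=0.0):
    """Sigmoid smoothing of the 0-1 loss (Eq. 8)."""
    return 1.0 / (1.0 + np.exp(-alpha * d - beta))

def l_log(d, alpha=1.0, beta=0.0):
    """(Regularized) logistic smoothing, an upper bound of the 0-1 loss (Eq. 8)."""
    return np.log1p(np.exp(alpha * d + beta)) / alpha

d = misclassification_measure(g_star=1.2, g_incorrect=[0.3, 0.9, -0.5])
print(d, l_sig(d), l_log(d))   # d < 0 here, i.e. the correct output outscores the rest
```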
Finally, the objective function of the MCE criterion with the regularization term can be rewritten in the following form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L MCE \u03bb = F l,d,g,\u03bb {(x k , y * k )} N k=1 + ||\u03bb|| \u03c6 \u03c6C .", "eq_num": "(9)" } ], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "Then, the objective function of the MCE criterion that minimizes the error rate is Eq. 9 and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F MCE l,d,g,\u03bb = 1 N N k=1 l(d(y * k , x k , \u03bb))", "eq_num": "(10)" } ], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "is substituted for F l,d,g,\u03bb . Since N is constant, we can eliminate the term 1/N in actual use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Overview of MCE", "sec_num": "3.1" }, { "text": "We simply substitute the discriminant function of the CRFs into that of the MCE criterion:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(y, x, \u03bb) = log p(y|x; \u03bb) \u221d \u03bb \u2022 F (y, x)", "eq_num": "(11)" } ], "section": "Formalization", "sec_num": "3.2" }, { "text": "Basically, CRF training with the MCE criterion optimizes Eq. 9 with Eq. 11 after the selection of an appropriate misclassification measure, d(), and smoothing function, l(). Although there is no restriction on the choice of d() and l(), in this work we select sigmoid or logistic functions for l() and Eq. 7 for d().", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "The gradient of the loss function Eq. 9 can be decomposed by the following chain rule:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "\u2207L MCE \u03bb = \u2202F() \u2202l() \u2022 \u2202l() \u2202d() \u2022 \u2202d() \u2202\u03bb + ||\u03bb|| \u03c6\u22121 C .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "The derivatives of l() w.r.t. d() given in Eq. 8 are written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "\u2202l sig /\u2202d = \u03b1 \u2022 l sig \u2022 (1 \u2212 l sig ) and \u2202l log /\u2202d = l sig .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "The derivative of d() of Eq. 7 w.r.t. 
parameters \u03bb is written in this form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2202d() \u2202\u03bb = \u2212 Z \u03bb (x, \u03c8) Z \u03bb (x, \u03c8)\u2212exp(\u03c8g * ) \u2022F (y * , x) + y\u2208Y exp(\u03c8g) Z \u03bb (x, \u03c8)\u2212exp(\u03c8g * ) \u2022F (y, x)", "eq_num": "(12)" } ], "section": "Formalization", "sec_num": "3.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "g = \u03bb \u2022 F (y, x), g * = \u03bb \u2022 F (y * , x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": ", and Z \u03bb (x, \u03c8)= y\u2208Y exp(\u03c8g).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "Note that we can obtain exactly the same loss function as ML/MAP with appropriate choices of F(), l() and d(). The details are provided in the Appendices. Therefore, ML/MAP can be seen as one special case of the framework proposed here. In other words, our method provides a generalized framework of CRF training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalization", "sec_num": "3.2" }, { "text": "With linear chain CRFs, we can calculate the objective function, Eq. 9 combined with Eq. 10, and the gradient, Eq. 12, by using the variant of the forward-backward and Viterbi algorithm described in (Sha and Pereira, 2003) . Moreover, for the parameter optimization process, we can simply exploit gradient descent or quasi-Newton methods such as L-BFGS (Liu and Nocedal, 1989) as well as ML/MAP optimization.", "cite_spans": [ { "start": 199, "end": 222, "text": "(Sha and Pereira, 2003)", "ref_id": "BIBREF20" }, { "start": 353, "end": 376, "text": "(Liu and Nocedal, 1989)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Optimization Procedure", "sec_num": "3.3" }, { "text": "If we select \u03c8 = \u221e for Eq. 7, we only need to evaluate the correct and the maximum incorrect output. As we know, the maximum output can be efficiently calculated with the Viterbi algorithm, which is the same as calculating Eq. 1. Therefore, we can find the maximum incorrect output by using the A* algorithm (Hart et al., 1968) , if the maximum output is the correct output, and by using the Viterbi algorithm otherwise. It may be feared that since the objective function is not differentiable everywhere for \u03c8 = \u221e, problems for optimization would occur. However, it has been shown (Le Roux and McDer-mott, 2005 ) that even simple gradient-based (firstorder) optimization methods such as GPD and (approximated) second-order methods such as Quick-Prop (Fahlman, 1988) and BFGS-based methods have yielded good experimental optimization results.", "cite_spans": [ { "start": 308, "end": 327, "text": "(Hart et al., 1968)", "ref_id": "BIBREF3" }, { "start": 586, "end": 611, "text": "Roux and McDer-mott, 2005", "ref_id": null }, { "start": 751, "end": 766, "text": "(Fahlman, 1988)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Optimization Procedure", "sec_num": "3.3" }, { "text": "Thus far, we have discussed the error rate version of MCE. 
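To make the psi = infinity case concrete, here is a toy sketch in which the candidate outputs are simply enumerated; with a real linear-chain CRF the correct and maximum incorrect outputs would instead be found with the Viterbi and A* searches mentioned above.

```python
def d_psi_inf(scores, y_star):
    """Misclassification measure with psi = infinity (Eq. 5):
    d = -g(y*, x, lambda) + max over y != y* of g(y, x, lambda)."""
    g_star = scores[y_star]
    g_best_incorrect = max(s for y, s in scores.items() if y != y_star)
    return -g_star + g_best_incorrect

# Toy scores lambda . F(y, x) for three candidate label sequences.
scores = {("B-NP", "I-NP"): 2.1, ("B-NP", "O"): 1.4, ("O", "O"): -0.3}
print(d_psi_inf(scores, y_star=("B-NP", "I-NP")))   # -0.7: the correct sequence wins
```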
Unlike ML/MAP, the framework of MCE criterion training allows the embedding of not only a linear combination of error rates, but also any evaluation measure, including non-linear measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multivariate Evaluation Measures", "sec_num": "4" }, { "text": "Several non-linear objective functions, such as F-score for text classification (Gao et al., 2003) , and BLEU-score and some other evaluation measures for statistical machine translation (Och, 2003) , have been introduced with reference to the framework of MCE criterion training.", "cite_spans": [ { "start": 80, "end": 98, "text": "(Gao et al., 2003)", "ref_id": "BIBREF2" }, { "start": 187, "end": 198, "text": "(Och, 2003)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Multivariate Evaluation Measures", "sec_num": "4" }, { "text": "Hereafter, we focus solely on CRFs in sequences, namely the linear chain CRF. We assume that x and y have the same length: x=(x 1 , . . . , x n ) and y=(y 1 , . . . , y n ). In a linear chain CRF, y i depends only on y i\u22121 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequential Segmentation Tasks (SSTs)", "sec_num": "4.1" }, { "text": "Sequential segmentation tasks (SSTs), such as text chunking (Chunking) and named entity recognition (NER), which constitute the shared tasks of the Conference of Natural Language Learning (CoNLL) 2000 , 2002 , are typical CRF applications. These tasks require the extraction of pre-defined segments, referred to as target segments, from given texts. Fig. 1 shows typical examples of SSTs. These tasks are generally treated as sequential labeling problems incorporating the IOB tagging scheme (Ramshaw and Marcus, 1995) . The IOB tagging scheme, where we only consider the IOB2 scheme, is also shown in Fig. 1 . B-X, I-X and O indicate that the word in question is the beginning of the tag 'X', inside the tag 'X', and outside any target segment, respectively. Therefore, a segment is defined as a sequence of a few outputs.", "cite_spans": [ { "start": 179, "end": 200, "text": "Learning (CoNLL) 2000", "ref_id": null }, { "start": 201, "end": 207, "text": ", 2002", "ref_id": "BIBREF7" }, { "start": 492, "end": 518, "text": "(Ramshaw and Marcus, 1995)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 350, "end": 356, "text": "Fig. 1", "ref_id": null }, { "start": 602, "end": 608, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Sequential Segmentation Tasks (SSTs)", "sec_num": "4.1" }, { "text": "The standard evaluation measure of SSTs is the segmentation F-score (Sang and Buchholz, 2000):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation F-score Loss for SSTs", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F\u03b3 = (\u03b3 2 + 1) \u2022 T P \u03b3 2 \u2022 F N + F P + (\u03b3 2 + 1) \u2022 T P", "eq_num": "(13)" } ], "section": "Segmentation F-score Loss for SSTs", "sec_num": "4.2" }, { "text": "He reckons the current account deficit will narrow to only # 1.8 billion . 
Figure 1 : Examples of sequential segmentation tasks (SSTs): text chunking (Chunking) and named entity recognition (NER).", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 83, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Segmentation F-score Loss for SSTs", "sec_num": "4.2" }, { "text": "where T P , F P and F N represent true positive, false positive and false negative counts, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "The individual evaluation units used to calculate T P , F N and P N , are not individual outputs y i or output sequences y, but rather segments. We need to define a segment-wise loss, in contrast to the standard CRF loss, which is sometimes referred to as an (entire) sequential loss (Kakade et al., 2002; Altun et al., 2003) . First, we consider the point-wise decision w.r.t. Eq. 1, that is, y i = arg max y i \u2208Y 1 g(y, x, i, \u03bb). The point-wise discriminant function can be written as follows:", "cite_spans": [ { "start": 284, "end": 305, "text": "(Kakade et al., 2002;", "ref_id": "BIBREF7" }, { "start": 306, "end": 325, "text": "Altun et al., 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(y, x, i, \u03bb) = max y \u2208Y |y| [y i ] \u03bb \u2022 F (y , x)", "eq_num": "(14)" } ], "section": "NP", "sec_num": null }, { "text": "where Y j represents a set of all y whose length is j, and Y[y i ] represents a set of all y that contain y i in the i'th position. Note that the same output\u0177 can be obtained with Eqs. 1 and 14, that is,\u0177 = (\u0177 1 , . . . ,\u0177 n ). This point-wise discriminant function is different from that described in (Kakade et al., 2002; Altun et al., 2003) , which is calculated based on marginals. Let y s j be an output sequence corresponding to the j-th segment of y, where s j represents a sequence of indices of y, that is, s j = (s j,1 , . . . , s j,|s j | ). An example of the Chunking data shown in Fig. 1 , y s 4 is (B-VP, I-VP) where s 4 = (7, 8). Let Y[y s j ] be a set of all outputs whose positions from s j,1 to s j,|s j | are y s j = (y s j,1 , . . . , y s j,|s j | ). Then, we can define a segment-wise discriminant function w.r.t. Eq. 1. That is, g(y, x, sj, \u03bb) = max", "cite_spans": [ { "start": 302, "end": 323, "text": "(Kakade et al., 2002;", "ref_id": "BIBREF7" }, { "start": 324, "end": 343, "text": "Altun et al., 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 594, "end": 600, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y \u2208Y |y| [y s j ] \u03bb \u2022 F (y , x).", "eq_num": "(15)" } ], "section": "NP", "sec_num": null }, { "text": "Note again that the same output\u0177 can be obtained using Eqs. 1 and 15, as with the piece-wise discriminant function described above. This property is needed for evaluating segments since we do not know the correct segments of the test data; we can maintain consistency even if we use Eq. 1 for testing and Eq. 15 for training. Moreover, Eq. 15 ob-viously reduces to Eq. 14 if the length of all segments is 1. 
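The segment-as-evaluation-unit view can be made concrete with a short sketch that extracts segments from IOB2 tags and computes the segmentation F-score of Eq. 13. It assumes well-formed IOB2 sequences, and the function names are illustrative.

```python
def iob2_segments(tags):
    """Extract (label, start, end) segments from an IOB2 tag sequence."""
    segments, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):            # sentinel flushes the last segment
        if tag.startswith("B-") or tag == "O":
            if label is not None:
                segments.append((label, start, i))
                label = None
            if tag.startswith("B-"):
                label, start = tag[2:], i
    return segments

def segmentation_fscore(gold_tags, pred_tags, gamma=1.0):
    """Segmentation F-score of Eq. 13, with whole segments as the evaluation unit."""
    gold = set(iob2_segments(gold_tags))
    pred = set(iob2_segments(pred_tags))
    tp = len(gold & pred)
    fp = len(pred - gold)
    fn = len(gold - pred)
    return (gamma**2 + 1) * tp / (gamma**2 * fn + fp + (gamma**2 + 1) * tp)

gold = ["B-NP", "I-NP", "O", "B-VP", "I-VP"]
pred = ["B-NP", "I-NP", "O", "B-VP", "O"]
print(segmentation_fscore(gold, pred))   # 0.5: one of the two gold segments recovered
```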
Then, the segment-wise misclassification measure d(y * , x, s j , \u03bb) can be obtained simply by replacing the discriminant function of the entire sequence g(y, x, \u03bb) with that of segmentwise g(y, x, s j , \u03bb) in Eq. 7. Let s * k be a segment sequence corresponding to the correct output y * k for a given x k , and S(x k ) be all possible segments for a given x k . Then, approximated evaluation functions of T P , F P and F N can be defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "T P l = k s * j \u2208s * k 1\u2212l(d(y * k , x k , s * j , \u03bb)) \u2022\u03b4(s * j ) F P l = k s j \u2208S(x k )\\s * k l(d(y * k , x k , s j , \u03bb))\u2022\u03b4(s j ) F N l = k s * j \u2208s * k l(d(y * k , x k , s * j , \u03bb))\u2022\u03b4(s * j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "where \u03b4(s j ) returns 1 if segment s j is a target segment, and returns 0 otherwise. For the NER data shown in Fig. 1 , 'ORG', 'PER' and 'LOC' are the target segments, while segments that are labeled 'O' in y are not. Since T P l should not have a value of less than zero, we select sigmoid loss as the smoothing function l(). The second summation of T P l and F N l performs a summation over correct segments s * . In contrast, the second summation in F P l takes all possible segments into account, but excludes the correct segments s * . Although an efficient way to evaluate all possible segments has been proposed in the context of semi-Markov CRFs (Sarawagi and Cohen, 2004) , we introduce a simple alternative method. If we select \u03c8 = \u221e for d() in Eq. 7, we only need to evaluate the segments corresponding to the maximum incorrect output\u1ef9 to calculate F P l . That is, s j \u2208 S(x k )\\s * k can be reduced to s j \u2208s k , wheres k represents segments corresponding to the maximum incorrect output\u1ef9. In practice, this reduces the calculation cost and so we used this method for our experiments described in the next section.", "cite_spans": [ { "start": 654, "end": 680, "text": "(Sarawagi and Cohen, 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 111, "end": 117, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "Maximizing the segmentation F \u03b3 -score, Eq. 13, is equivalent to minimizing \u03b3 2 \u2022F N +F P (\u03b3 2 +1)\u2022T P , since Eq. 13 can also be written as F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "\u03b3 = 1 1+ \u03b3 2 \u2022F N +F P (\u03b3 2 +1)\u2022T P . Thus,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "an objective function closely reflecting the segmentation F \u03b3 -score based on the MCE criterion can be written as Eq. 9 while replacing F l,d,g,\u03bb with:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F MCE-F l,d,g,\u03bb = \u03b3 2 \u2022 F N l + F P l (\u03b3 2 + 1) \u2022 T P l .", "eq_num": "(16)" } ], "section": "NP", "sec_num": null }, { "text": "The derivative of Eq. 16 w.r.t. 
l() is given by the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "\u2202F MCE-F l,d,g,\u03bb \u2202l() = \u03b3 2 Z D + (\u03b3 2 +1)\u2022Z N Z 2 D , if \u03b4(s * j ) = 1 1 Z D , otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "where Z N and Z D represent the numerator and denominator of Eq. 16, respectively. In the optimization process of the segmentation F-score objective function, we can efficiently calculate Eq. 15 by using the forward and backward Viterbi algorithm, which is almost the same as calculating Eq. 3 with a variant of the forwardbackward algorithm (Sha and Pereira, 2003) . The same numerical optimization methods described in Sec. 3.3 can be employed for this optimization.", "cite_spans": [ { "start": 342, "end": 365, "text": "(Sha and Pereira, 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "NP", "sec_num": null }, { "text": "We used the same Chunking and 'English' NER task data used for the shared tasks of CoNLL-2000 (Sang and Buchholz, 2000) and CoNLL-2003 (Sang and De Meulder, 2003) , respectively.", "cite_spans": [ { "start": 83, "end": 103, "text": "CoNLL-2000 (Sang and", "ref_id": null }, { "start": 104, "end": 119, "text": "Buchholz, 2000)", "ref_id": "BIBREF17" }, { "start": 124, "end": 144, "text": "CoNLL-2003 (Sang and", "ref_id": null }, { "start": 145, "end": 162, "text": "De Meulder, 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Chunking data was obtained from the Wall Street Journal (WSJ) corpus: sections 15-18 as training data (8,936 sentences and 211,727 tokens), and section 20 as test data (2,012 sentences and 47,377 tokens), with 11 different chunk-tags, such as NP and VP plus the 'O' tag, which represents the outside of any target chunk (segment).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The English NER data was taken from the Reuters Corpus2 1 . The data consists of 203,621, 51,362 and 46,435 tokens from 14,987, 3,466 and 3,684 sentences in training, development and test data, respectively, with four named entity tags, PERSON, LOCATION, ORGANIZATION and MISC, plus the 'O' tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "For ML and MAP, we performed exactly the same training procedure described in (Sha and Pereira, 2003) with L-BFGS optimization. For MCE, we 1 http://trec.nist.gov/data/reuters/reuters.html only considered d() with \u03c8 = \u221e as described in Sec. 4.2, and used QuickProp optimization 2 .", "cite_spans": [ { "start": 78, "end": 101, "text": "(Sha and Pereira, 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison Methods and Parameters", "sec_num": "5.1" }, { "text": "For MAP, MCE and MCE-F, we used the L 2norm regularization. We selected a value of C from 1.0 \u00d7 10 n where n takes a value from -5 to 5 in intervals 1 by development data 3 . The tuning of smoothing function hyper-parameters is not considered in this paper; that is, \u03b1=1 and \u03b2=0 were used for all the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Methods and Parameters", "sec_num": "5.1" }, { "text": "We evaluated the performance by Eq. 13 with \u03b3 = 1, which is the evaluation measure used in CoNLL-2000 and 2003 . 
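For reference before the experiments, a minimal sketch of the smoothed segmentation F-score objective of Eq. 16, the quantity MCE-F minimizes, is given below. The segment-wise misclassification measures are assumed to be precomputed (in the paper they come from Eq. 15 with psi = infinity), and all names are illustrative.

```python
import numpy as np

def sigmoid(d, alpha=1.0, beta=0.0):
    return 1.0 / (1.0 + np.exp(-alpha * d - beta))

def mce_f_objective(correct_segments, incorrect_segments, gamma=1.0):
    """Smoothed F-score loss of Eq. 16: (gamma^2 * FN_l + FP_l) / ((gamma^2 + 1) * TP_l).

    correct_segments   : (d, delta) pairs for the correct segments s*_j
    incorrect_segments : (d, delta) pairs for the competing incorrect segments
    where d is the segment-wise misclassification measure and delta is 1 for
    target segments and 0 otherwise.
    """
    tp = sum((1.0 - sigmoid(d)) * delta for d, delta in correct_segments)
    fn = sum(sigmoid(d) * delta for d, delta in correct_segments)
    fp = sum(sigmoid(d) * delta for d, delta in incorrect_segments)
    return (gamma**2 * fn + fp) / ((gamma**2 + 1) * tp)

# Toy values: two correct target segments (one confidently classified, d << 0)
# and one competing incorrect target segment.
print(mce_f_objective([(-3.0, 1), (0.5, 1)], [(-1.0, 1)]))
```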
Moreover, we evaluated the performance by using the average sentence accuracy, since the conventional ML/MAP objective function reflects this sequential accuracy.", "cite_spans": [ { "start": 91, "end": 105, "text": "CoNLL-2000 and", "ref_id": null }, { "start": 106, "end": 110, "text": "2003", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison Methods and Parameters", "sec_num": "5.1" }, { "text": "As regards the basic feature set for Chunking, we followed (Kudo and Matsumoto, 2001) , which is the same feature set that provided the best result in CoNLL-2000. We expanded the basic features by using bigram combinations of the same types of features, such as words and part-of-speech tags, within window size 5.", "cite_spans": [ { "start": 59, "end": 85, "text": "(Kudo and Matsumoto, 2001)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "In contrast to the above, we used the original feature set for NER. We used features derived only from the data provided by CoNLL-2003 with the addition of character-level regular expressions of uppercases [A-Z] , lowercases [a-z], digits [0-9] or others, and prefixes and suffixes of one to four letters. We also expanded the above basic features by using bigram combinations within window size 5. Note that we never used features derived from external information such as the Web, or a dictionary, which have been used in many previous studies but which are difficult to employ for validating the experiments.", "cite_spans": [ { "start": 206, "end": 211, "text": "[A-Z]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "Our experiments were designed to investigate the impact of eliminating the inconsistency between objective functions and evaluation measures, that is, to compare ML/MAP and MCE-F. Table 1 shows the results of Chunking and NER. The F \u03b3=1 and 'Sent' columns show the performance evaluated using segmentation F-score and sentence accuracy, respectively. MCE-F refers to the results obtained from optimizing Eq. 9 based on Eq. 16. In addition, we evaluated the error rate version of MCE. MCE(log) and MCE(sig) indicate that logistic and sigmoid functions are selected for l(), respectively, when optimizing Eq. 9 based on Eq. 10. Moreover, MCE(log) and MCE(sig) used d() based on \u03c8=\u221e, and were optimized using QuickProp; these are the same conditions as used for MCE-F. We found that MCE-F exhibited the best results for both Chunking and NER. There is a significant difference (p < 0.01) between MCE-F and ML/MAP with the McNemar test, in terms of the correctness of both individual outputs, y k i , and sentences, y k . NER data has 83.3% (170524/204567) and 82.6% (38554/46666) of 'O' tags in the training and test data, respectively while the corresponding values of the Chunking data are only 13.1% (27902/211727) and 13.0% (6180/47377). In general, such an imbalanced data set is unsuitable for accuracy-based evaluation. This may be one reason why MCE-F improved the NER results much more than the Chunking results.", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 187, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5.3" }, { "text": "The only difference between MCE(sig) and MCE-F is the objective function. 
The corresponding results reveal the effectiveness of using an objective function that is consistent with the evaluation measure for the target task. These results show that minimizing the error rate is not optimal for improving the segmentation F-score evaluation measure. Eliminating the inconsistency between the task evaluation measure and the objective function during training can improve the overall performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5.3" }, { "text": "While ML/MAP and MCE(log) are convex w.r.t. the parameters, neither the objective function of MCE-F, nor that of MCE(sig), is convex. Therefore, initial parameters can affect the optimization results, since QuickProp as well as L-BFGS can only find local optima.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of Initial Parameters", "sec_num": "5.3.1" }, { "text": "The previous experiments were only performed with all parameters initialized at zero. In this experiment, the parameters obtained by the MAP-trained model were used as the initial values of MCE-F and MCE(sig). This evaluation setting appears to be similar to reranking, although we used exactly the same model and feature set. Table 2 shows the results of Chunking and NER obtained with this parameter initialization setting. When we compare Tables 1 and 2, we find that the initialization with the MAP parameter values further improves performance.", "cite_spans": [], "ref_spans": [ { "start": 327, "end": 334, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Influence of Initial Parameters", "sec_num": "5.3.1" }, { "text": "Various loss functions have been proposed for designing CRFs (Kakade et al., 2002; Altun et al., 2003) . This work also takes the design of the loss functions for CRFs into consideration. However, we proposed a general framework for designing these loss functions, including non-linear loss functions, which have not been considered in previous work.", "cite_spans": [ { "start": 61, "end": 82, "text": "(Kakade et al., 2002;", "ref_id": "BIBREF7" }, { "start": 83, "end": 102, "text": "Altun et al., 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "With Chunking, (Kudo and Matsumoto, 2001) reported the best F-score of 93.91 with the voting of several models trained by Support Vector Machines in the same experimental settings and with the same feature set. MCE-F with the MAP parameter initialization achieved an F-score of 94.03, which surpasses the above result without manual parameter tuning.", "cite_spans": [ { "start": 15, "end": 41, "text": "(Kudo and Matsumoto, 2001)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "With NER, we cannot make a direct comparison with previous work in the same experimental settings because of the different feature set, as described in Sec. 5.2. 
However, MCE-F showed the better performance of 85.29 compared with (Mc-Callum and Li, 2003) of 84.04, which used the MAP training of CRFs with a feature selection architecture, yielding similar results to the MAP results described here.", "cite_spans": [ { "start": 230, "end": 254, "text": "(Mc-Callum and Li, 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We proposed a framework for training CRFs based on optimization criteria directly related to target multivariate evaluation measures. We first provided a general framework of CRF training based on MCE criterion. Then, specifically focusing on SSTs, we introduced an approximate segmentation F-score objective function. Experimental results showed that eliminating the inconsistency between the task evaluation measure and the objective function used during training improves the overall performance in the target task without any change in feature set or model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Another type of misclassification measure using soft-max is (Katagiri et al., 1991) :", "cite_spans": [ { "start": 60, "end": 83, "text": "(Katagiri et al., 1991)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Appendices Misclassification measures", "sec_num": null }, { "text": "d(y, x, \u03bb) = \u2212g * + A y\u2208Y\\y * g \u03c8 1 \u03c8 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendices Misclassification measures", "sec_num": null }, { "text": "Another d(), for g in the range [0, \u221e):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendices Misclassification measures", "sec_num": null }, { "text": "d(y, x, \u03bb) = A y\u2208Y\\y * g \u03c8 1 \u03c8 /g * .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendices Misclassification measures", "sec_num": null }, { "text": "If we select l log () with \u03b1 = 1 and \u03b2 = 0, and use Eq. 7 with \u03c8 = 1 and without the term A for d(). We can obtain the same loss function as ML/MAP: log (1 + exp(\u2212g * + log(Z \u03bb \u2212 exp(g * )))) = log exp(g * ) + (Z \u03bb \u2212 exp(g * )) exp(g * ) = \u2212g * + log(Z \u03bb ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of ML/MAP and MCE", "sec_num": null }, { "text": "In order to realize faster convergence, we applied online GPD optimization for the first ten iterations.3 Chunking has no common development set. We first train the systems with all but the last 2000 sentences in the training data as a development set to obtain C, and then retrain them with all the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Investigating Loss Functions and Optimization Methods for Discriminative Learning of Label Sequences", "authors": [ { "first": "Y", "middle": [], "last": "Altun", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2003, "venue": "Proc. of EMNLP-2003", "volume": "", "issue": "", "pages": "145--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Altun, M. Johnson, and T. Hofmann. 2003. Investigating Loss Functions and Optimization Methods for Discrimi- native Learning of Label Sequences. In Proc. 
of EMNLP- 2003, pages 145-152.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An Empirical Study of Learning Speech in Backpropagation Networks", "authors": [ { "first": "S", "middle": [ "E" ], "last": "Fahlman", "suffix": "" } ], "year": 1988, "venue": "Technical Report CMU-CS-88-162", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. E. Fahlman. 1988. An Empirical Study of Learning Speech in Backpropagation Networks. In Technical Re- port CMU-CS-88-162, Carnegie Mellon University.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Maximal Figure-of-Merit Approach to Text Categorization", "authors": [ { "first": "S", "middle": [], "last": "Gao", "suffix": "" }, { "first": "W", "middle": [], "last": "Wu", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Lee", "suffix": "" }, { "first": "T.-S", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2003, "venue": "Proc. of SIGIR'03", "volume": "", "issue": "", "pages": "174--181", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Gao, W. Wu, C.-H. Lee, and T.-S. Chua. 2003. A Maxi- mal Figure-of-Merit Approach to Text Categorization. In Proc. of SIGIR'03, pages 174-181.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Formal Basis for the Heuristic Determination of Minimum Cost Paths", "authors": [ { "first": "P", "middle": [ "E" ], "last": "Hart", "suffix": "" }, { "first": "N", "middle": [ "J" ], "last": "Nilsson", "suffix": "" }, { "first": "B", "middle": [], "last": "Raphael", "suffix": "" } ], "year": 1968, "venue": "IEEE Trans. on Systems Science and Cybernetics", "volume": "", "issue": "2", "pages": "100--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. E. Hart, N. J. Nilsson, and B. Raphael. 1968. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. on Systems Science and Cybernetics, SSC-4(2):100-107.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Maximum Expected F-Measure Training of Logistic Regression Models", "authors": [ { "first": "M", "middle": [], "last": "Jansche", "suffix": "" } ], "year": 2005, "venue": "Proc. of HLT/EMNLP-2005", "volume": "", "issue": "", "pages": "692--699", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Jansche. 2005. Maximum Expected F-Measure Training of Logistic Regression Models. In Proc. of HLT/EMNLP- 2005, pages 692-699.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Support Vector Method for Multivariate Performance Measures", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2005, "venue": "Proc. of ICML-2005", "volume": "", "issue": "", "pages": "377--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Joachims. 2005. A Support Vector Method for Multivari- ate Performance Measures. In Proc. of ICML-2005, pages 377-384.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Discriminative Learning for Minimum Error Classification", "authors": [ { "first": "B", "middle": [ "H" ], "last": "Juang", "suffix": "" }, { "first": "S", "middle": [], "last": "Katagiri", "suffix": "" } ], "year": 1992, "venue": "IEEE Trans. on Signal Processing", "volume": "40", "issue": "12", "pages": "3043--3053", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. H. Juang and S. Katagiri. 1992. Discriminative Learning for Minimum Error Classification. IEEE Trans. 
on Signal Processing, 40(12):3043-3053.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An Alternative Objective Function for Markovian Fields", "authors": [ { "first": "S", "middle": [], "last": "Kakade", "suffix": "" }, { "first": "Y", "middle": [ "W" ], "last": "Teh", "suffix": "" }, { "first": "S", "middle": [], "last": "Roweis", "suffix": "" } ], "year": 2002, "venue": "Proc. of ICML-2002", "volume": "", "issue": "", "pages": "275--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Kakade, Y. W. Teh, and S. Roweis. 2002. An Alterna- tive Objective Function for Markovian Fields. In Proc. of ICML-2002, pages 275-282.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "New Discriminative Training Algorithms based on the Generalized Descent Method", "authors": [ { "first": "S", "middle": [], "last": "Katagiri", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Lee", "suffix": "" }, { "first": "B.-H", "middle": [], "last": "Juang", "suffix": "" } ], "year": 1991, "venue": "Proc. of IEEE Workshop on Neural Networks for Signal Processing", "volume": "", "issue": "", "pages": "299--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Katagiri, C. H. Lee, and B.-H. Juang. 1991. New Dis- criminative Training Algorithms based on the Generalized Descent Method. In Proc. of IEEE Workshop on Neural Networks for Signal Processing, pages 299-308.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Chunking with Support Vector Machines", "authors": [ { "first": "T", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2001, "venue": "Proc. of NAACL-2001", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Kudo and Y. Matsumoto. 2001. Chunking with Support Vector Machines. In Proc. of NAACL-2001, pages 192- 199.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", "authors": [ { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proc. of ICML-2001", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proc. of ICML-2001, pages 282-289.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On the Limited Memory BFGS Method for Large-scale Optimization", "authors": [ { "first": "D", "middle": [ "C" ], "last": "Liu", "suffix": "" }, { "first": "J", "middle": [], "last": "", "suffix": "" } ], "year": 1989, "venue": "Mathematic Programming", "volume": "", "issue": "", "pages": "503--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. C. Liu and J. Nocedal. 1989. On the Limited Memory BFGS Method for Large-scale Optimization. Mathematic Programming, (45):503-528.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Early Results for Named Entity Recognition with Conditional Random Fields Feature Induction and Web-Enhanced Lexicons", "authors": [ { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "W", "middle": [], "last": "Li", "suffix": "" } ], "year": 2003, "venue": "Proc. 
of CoNLL-2003", "volume": "", "issue": "", "pages": "188--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. McCallum and W. Li. 2003. Early Results for Named Entity Recognition with Conditional Random Fields Fea- ture Induction and Web-Enhanced Lexicons. In Proc. of CoNLL-2003, pages 188-191.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Minimum Error Rate Training in Statistical Machine Translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proc. of ACL-2003", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proc. of ACL-2003, pages 160- 167.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bayesian Conditional Random Fields", "authors": [ { "first": "Y", "middle": [], "last": "Qi", "suffix": "" }, { "first": "M", "middle": [], "last": "Szummer", "suffix": "" }, { "first": "T", "middle": [ "P" ], "last": "Minka", "suffix": "" } ], "year": 2005, "venue": "Proc. of AI & Statistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Qi, M. Szummer, and T. P. Minka. 2005. Bayesian Con- ditional Random Fields. In Proc. of AI & Statistics 2005.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Text Chunking using Transformation-based Learning", "authors": [ { "first": "L", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" }, { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Proc. of VLC-1995", "volume": "", "issue": "", "pages": "88--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. A. Ramshaw and M. P. Marcus. 1995. Text Chunking using Transformation-based Learning. In Proc. of VLC- 1995, pages 88-94.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Optimization Methods for Discriminative Training", "authors": [ { "first": "J", "middle": [], "last": "", "suffix": "" }, { "first": "Le", "middle": [], "last": "Roux", "suffix": "" }, { "first": "E", "middle": [], "last": "Mcdermott", "suffix": "" } ], "year": 2005, "venue": "Proc. of Eurospeech", "volume": "", "issue": "", "pages": "3341--3344", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Le Roux and E. McDermott. 2005. Optimization Methods for Discriminative Training. In Proc. of Eurospeech 2005, pages 3341-3344.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Introduction to the CoNLL-2000 Shared Task: Chunking", "authors": [ { "first": "E", "middle": [ "F" ], "last": "Tjong Kim Sang", "suffix": "" }, { "first": "S", "middle": [], "last": "Buchholz", "suffix": "" } ], "year": 2000, "venue": "Proc. of CoNLL/LLL-2000", "volume": "", "issue": "", "pages": "127--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. F. Tjong Kim Sang and S. Buchholz. 2000. Introduction to the CoNLL-2000 Shared Task: Chunking. In Proc. of CoNLL/LLL-2000, pages 127-132.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition", "authors": [ { "first": "E", "middle": [ "F" ], "last": "Tjong Kim Sang", "suffix": "" }, { "first": "F. De", "middle": [], "last": "Meulder", "suffix": "" } ], "year": 2003, "venue": "Proc. of CoNLL-2003", "volume": "", "issue": "", "pages": "142--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. F. Tjong Kim Sang and F. De Meulder. 2003. 
Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proc. of CoNLL-2003, pages 142-147.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Semi-Markov Conditional Random Fields for Information Extraction", "authors": [ { "first": "S", "middle": [], "last": "Sarawagi", "suffix": "" }, { "first": "W", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2004, "venue": "Proc of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Sarawagi and W. W. Cohen. 2004. Semi-Markov Condi- tional Random Fields for Information Extraction. In Proc of NIPS-2004.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Shallow Parsing with Conditional Random Fields", "authors": [ { "first": "F", "middle": [], "last": "Sha", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2003, "venue": "Proc. of HLT/NAACL-2003", "volume": "", "issue": "", "pages": "213--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Sha and F. Pereira. 2003. Shallow Parsing with Con- ditional Random Fields. In Proc. of HLT/NAACL-2003, pages 213-220.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Max-Margin Markov Networks", "authors": [ { "first": "B", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "C", "middle": [], "last": "Guestrin", "suffix": "" }, { "first": "D", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2004, "venue": "Proc. of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Taskar, C. Guestrin, and D. Koller. 2004. Max-Margin Markov Networks. In Proc. of NIPS-2004.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Large Margin Methods for Structured and Interdependent Output Variables", "authors": [ { "first": "I", "middle": [], "last": "Tsochantaridis", "suffix": "" }, { "first": "T", "middle": [], "last": "Joachims", "suffix": "" }, { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" }, { "first": "Y", "middle": [], "last": "Altun", "suffix": "" } ], "year": 2005, "venue": "JMLR", "volume": "6", "issue": "", "pages": "1453--1484", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Tsochantaridis, T. Joachims and T. Hofmann, and Y. Altun. 2005. Large Margin Methods for Structured and Interde- pendent Output Variables. JMLR, 6:1453-1484.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "type_str": "table", "text": "Performance of text chunking and named entity recognition data(CoNLL-2000 and2003)", "html": null, "content": "
              Chunking                  NER
l()           n   F\u03b3=1   Sent      n   F\u03b3=1   Sent
MCE-F (sig)   5   93.96       60.44     4   84.72       78.72
MCE (log)     3   93.92       60.19     3   84.30       78.02
MCE (sig)     3   93.85       60.14     3   83.82       77.52
MAP           0   93.71       59.15     0   83.79       77.39
ML            -   93.19       56.26     -   82.39       75.71
" }, "TABREF2": { "num": null, "type_str": "table", "text": "Performance when initial parameters are derived from MAP", "html": null, "content": "
              Chunking                  NER
l()           n   F\u03b3=1   Sent      n   F\u03b3=1   Sent
MCE-F (sig)   5
MCE (sig)     3   93.97       60.59     3   84.57       77.71
" } } } }