{ "paper_id": "J14-3004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:41:35.090116Z" }, "title": "Feature-Frequency-Adaptive On-line Training for Fast and Accurate Natural Language Processing", "authors": [ { "first": "Xu", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of Computational Linguistics (Peking University", "institution": "Peking University", "location": { "settlement": "Beijing, Beijing", "country": "China, China" } }, "email": "xusun@pku.edu.cn." }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of Computational Linguistics (Peking University", "institution": "Peking University", "location": { "settlement": "Beijing, Beijing", "country": "China, China" } }, "email": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of Computational Linguistics (Peking University", "institution": "Peking University", "location": { "settlement": "Beijing, Beijing", "country": "China, China" } }, "email": "wanghf@pku.edu.cn." }, { "first": "Qin", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kong Polytechnic University", "location": { "addrLine": "Hung Hom", "postCode": "999077", "settlement": "Hong, Kowloon, Hong Kong" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Training speed and accuracy are two major concerns of large-scale natural language processing systems. Typically, we need to make a tradeoff between speed and accuracy. It is trivial to improve the training speed via sacrificing accuracy or to improve the accuracy via sacrificing speed. Nevertheless, it is nontrivial to improve the training speed and the accuracy at the same time, which is the target of this work. To reach this target, we present a new training method, featurefrequency-adaptive on-line training, for fast and accurate training of natural language processing systems. It is based on the core idea that higher frequency features should have a learning rate that decays faster. Theoretical analysis shows that the proposed method is convergent with a fast convergence rate. Experiments are conducted based on well-known benchmark tasks, including named entity recognition, word segmentation, phrase chunking, and sentiment analysis. These tasks consist of three structured classification tasks and one non-structured classification task, with binary features and real-valued features, respectively. Experimental results demonstrate that the proposed method is faster and at the same time more accurate than existing methods, achieving state-of-the-art scores on the tasks with different characteristics.", "pdf_parse": { "paper_id": "J14-3004", "_pdf_hash": "", "abstract": [ { "text": "Training speed and accuracy are two major concerns of large-scale natural language processing systems. Typically, we need to make a tradeoff between speed and accuracy. It is trivial to improve the training speed via sacrificing accuracy or to improve the accuracy via sacrificing speed. Nevertheless, it is nontrivial to improve the training speed and the accuracy at the same time, which is the target of this work. To reach this target, we present a new training method, featurefrequency-adaptive on-line training, for fast and accurate training of natural language processing systems. 
It is based on the core idea that higher frequency features should have a learning rate that decays faster. Theoretical analysis shows that the proposed method is convergent with a fast convergence rate. Experiments are conducted based on well-known benchmark tasks, including named entity recognition, word segmentation, phrase chunking, and sentiment analysis. These tasks consist of three structured classification tasks and one non-structured classification task, with binary features and real-valued features, respectively. Experimental results demonstrate that the proposed method is faster and at the same time more accurate than existing methods, achieving state-of-the-art scores on the tasks with different characteristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Training speed is an important concern of natural language processing (NLP) systems. Large-scale NLP systems are computationally expensive. In many real-world applications, we further need to optimize high-dimensional model parameters. For example, the state-of-the-art word segmentation system uses more than 40 million features (Sun, Wang, and Li 2012) . The heavy NLP models together with high-dimensional parameters lead to a challenging problem on model training, which may require week-level training time even with fast computing machines.", "cite_spans": [ { "start": 330, "end": 354, "text": "(Sun, Wang, and Li 2012)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Accuracy is another very important concern of NLP systems. Nevertheless, usually it is quite difficult to build a system that has fast training speed and at the same time has high accuracy. Typically we need to make a tradeoff between speed and accuracy, to trade training speed for higher accuracy or vice versa. In this work, we have tried to overcome this problem: to improve the training speed and the model accuracy at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There are two major approaches for parameter training: batch and on-line. Standard gradient descent methods are normally batch training methods, in which the gradient computed by using all training instances is used to update the parameters of the model. The batch training methods include, for example, steepest gradient descent, conjugate gradient descent (CG), and quasi-Newton methods like limited-memory BFGS (Nocedal and Wright 1999) . The true gradient is usually the sum of the gradients from each individual training instance. Therefore, batch gradient descent requires the training method to go through the entire training set before updating parameters. This is why batch training methods are typically slow.", "cite_spans": [ { "start": 414, "end": 439, "text": "(Nocedal and Wright 1999)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "On-line learning methods can significantly accelerate the training speed compared with batch training methods. A representative on-line training method is the stochastic gradient descent method (SGD) and its extensions (e.g., stochastic meta descent) (Bottou 1998; Vishwanathan et al. 2006) . The model parameters are updated more frequently compared with batch training, and fewer passes are needed before convergence. 
For large-scale data sets, on-line training methods can be much faster than batch training methods.", "cite_spans": [ { "start": 251, "end": 264, "text": "(Bottou 1998;", "ref_id": "BIBREF2" }, { "start": 265, "end": 290, "text": "Vishwanathan et al. 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "However, we find that the existing on-line training methods are still not good enough for training large-scale NLP systems-probably because those methods are not well-tailored for NLP systems that have massive features. First, the convergence speed of the existing on-line training methods is not fast enough. Our studies show that the existing on-line training methods typically require more than 50 training passes before empirical convergence, which is still slow. For large-scale NLP systems, the training time per pass is typically long and fast convergence speed is crucial. Second, the accuracy of the existing on-line training methods is not good enough. We want to further improve the training accuracy. We try to deal with the two challenges at the same time. Our goal is to develop a new training method for faster and at the same time more accurate natural language processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this article, we present a new on-line training method, adaptive on-line gradient descent based on feature frequency information (ADF), 1 for very accurate and fast on-line training of NLP systems. Other than the high training accuracy and fast training speed, we further expect that the proposed training method has good theoretical properties. We want to prove that the proposed method is convergent and has a fast convergence rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In the proposed ADF training method, we use a learning rate vector in the on-line updating. This learning rate vector is automatically adapted based on feature frequency information in the training data set. Each model parameter has its own learning rate adapted on feature frequency information. This proposal is based on the simple intuition that a feature with higher frequency in the training process should have a learning rate that decays faster. This is because a higher frequency feature is expected to be well optimized with higher confidence. Thus, a higher frequency feature is expected to have a lower learning rate. We systematically formalize this intuition into a theoretically sound training algorithm, ADF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The main contributions of this work are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "r On the methodology side, we propose a general purpose on-line training method, ADF. The ADF method is significantly more accurate than existing on-line and batch training methods, and has faster training speed. Moreover, theoretical analysis demonstrates that the ADF method is convergent with a fast convergence rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "r On the application side, for the three well-known tasks, including named entity recognition, word segmentation, and phrase chunking, the proposed simple method achieves equal or even better accuracy than the existing gold-standard systems, which are complicated and use extra resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Our main focus is on structured classification models with high dimensional features. For structured classification, the conditional random fields model is widely used. To illustrate that the proposed method is a general-purpose training method not limited to a specific classification task or model, we also evaluate the proposal for non-structured classification tasks like binary classification. For non-structured classification, the maximum entropy model (Berger, Della Pietra, and Della Pietra 1996; Ratnaparkhi 1996) is widely used. Here, we review the conditional random fields model and the related work of on-line training methods.", "cite_spans": [ { "start": 460, "end": 505, "text": "(Berger, Della Pietra, and Della Pietra 1996;", "ref_id": "BIBREF0" }, { "start": 506, "end": 523, "text": "Ratnaparkhi 1996)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "The conditional random field (CRF) model is a representative structured classification model and it is well known for its high accuracy in real-world applications. The CRF model is proposed for structured classification by solving \"the label bias problem\" (Lafferty, McCallum, and Pereira 2001) . Assuming a feature function that maps a pair of observation sequence x x x and label sequence y y y to a global feature vector f f f, the probability of a label sequence y y y conditioned on the observation sequence x x x is modeled as follows (Lafferty, McCallum, and Pereira 2001) :", "cite_spans": [ { "start": 256, "end": 294, "text": "(Lafferty, McCallum, and Pereira 2001)", "ref_id": "BIBREF14" }, { "start": 541, "end": 579, "text": "(Lafferty, McCallum, and Pereira 2001)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(y y y|x x x, w w w) = exp {w w w \u22a4 f f f (y y y, x x x)} \u2211 \u2200y \u2032 y \u2032 y \u2032 exp {w w w \u22a4 f f f (y \u2032 y \u2032 y \u2032 , x x x)}", "eq_num": "(1)" } ], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "where w w w is a parameter vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "Given a training set consisting of n labeled sequences, z z z i = (x x x i , y y y i ), for i = 1 . . . n, parameter estimation is performed by maximizing the objective function,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(w w w) = n \u2211 i=1 log P(y y y i |x x x i , w w w) \u2212 R(w w w)", "eq_num": "(2)" } ], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "The first term of this equation represents a conditional log-likelihood of training data. 
The second term is a regularizer for reducing overfitting. We use an L 2 prior, R(w w w) = ||w w w|| 2 2\u03c3 2 . In what follows, we denote the conditional log-likelihood of each sample as log P(y y y i |x x x i , w w w) as \u2113(z z z i , w w w). The final objective function is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "L(w w w) = n \u2211 i=1 \u2113(z z z i , w w w) \u2212 ||w w w|| 2 2\u03c3 2 (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2.1" }, { "text": "The most representative on-line training method is the SGD method (Bottou 1998; Tsuruoka, Tsujii, and Ananiadou 2009; . The SGD method uses a randomly selected small subset of the training sample to approximate the gradient of an objective function. The number of training samples used for this approximation is called the batch size. By using a smaller batch size, one can update the parameters more frequently and speed up the convergence. The extreme case is a batch size of 1, and it gives the maximum frequency of updates, which we adopt in this work. In this case, the model parameters are updated as follows:", "cite_spans": [ { "start": 66, "end": 79, "text": "(Bottou 1998;", "ref_id": "BIBREF2" }, { "start": 80, "end": 117, "text": "Tsuruoka, Tsujii, and Ananiadou 2009;", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "On-line Training", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w w w t+1 = w w w t + \u03b3 t \u2207 w w w t L stoch (z z z i , w w w t )", "eq_num": "(4)" } ], "section": "On-line Training", "sec_num": "2.2" }, { "text": "where t is the update counter, \u03b3 t is the learning rate or so-called decaying rate, and L stoch (z z z i , w w w t ) is the stochastic loss function based on a training sample z z z i . (More details of SGD are described in Bottou [1998] , Tsuruoka, Tsujii, and Ananiadou [2009] , and .) Following the most recent work of SGD, the exponential decaying rate works the best for natural language processing tasks, and it is adopted in our implementation of the SGD (Tsuruoka, Tsujii, and Ananiadou 2009; .", "cite_spans": [ { "start": 224, "end": 237, "text": "Bottou [1998]", "ref_id": "BIBREF2" }, { "start": 240, "end": 278, "text": "Tsuruoka, Tsujii, and Ananiadou [2009]", "ref_id": "BIBREF31" }, { "start": 462, "end": 500, "text": "(Tsuruoka, Tsujii, and Ananiadou 2009;", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "On-line Training", "sec_num": "2.2" }, { "text": "Other well-known on-line training methods include perceptron training (Freund and Schapire 1999) , averaged perceptron training (Collins 2002) , more recent development/extensions of stochastic gradient descent (e.g., the second-order stochastic gradient descent training methods like stochastic meta descent) (Vishwanathan et al. 2006; Hsu et al. 2009) , and so on. However, the second-order stochastic gradient descent method requires the computation or approximation of the inverse of the Hessian matrix of the objective function, which is typically slow, especially for heavily structured classification models. Usually the convergence speed based on number of training iterations is moderately faster, but the time cost per iteration is slower. 
Thus the overall time cost is still large.", "cite_spans": [ { "start": 70, "end": 96, "text": "(Freund and Schapire 1999)", "ref_id": "BIBREF8" }, { "start": 128, "end": 142, "text": "(Collins 2002)", "ref_id": "BIBREF4" }, { "start": 310, "end": 336, "text": "(Vishwanathan et al. 2006;", "ref_id": null }, { "start": 337, "end": 353, "text": "Hsu et al. 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "On-line Training", "sec_num": "2.2" }, { "text": "Compared with the related work on batch and on-line training (Jacobs 1988; Sperduti and Starita 1993; Dredze, Crammer, and Pereira 2008; Duchi, Hazan, and Singer 2010; McMahan and Streeter 2010) , our work is fundamentally different. The proposed ADF training method is based on feature frequency adaptation, and to the best of our knowledge there is no prior work on direct feature-frequency-adaptive on-line training. Compared with the confidence-weighted (CW) classification method and its variation AROW (Dredze, Crammer, and Pereira 2008; Crammer, Kulesza, and Dredze 2009) , the proposed method is substantially different. While the feature frequency information is implicitly modeled via a complicated Gaussian distribution framework in Dredze, Crammer, and Pereira (2008) and Crammer, Kulesza, and Dredze (2009) , the frequency information is explicitly modeled in our proposal via simple learning rate adaptation. Our proposal is more straightforward in capturing feature frequency information, and it has no need to use Gaussian distributions and KL divergence, which are important in the CW and AROW methods. In addition, our proposal is a probabilistic learning method for training probabilistic models such as CRFs, whereas the CW and AROW methods (Dredze, Crammer, and Pereira 2008; Crammer, Kulesza, and Dredze 2009) are non-probabilistic learning methods extended from perceptronstyle approaches. Thus, the framework is different. This work is a substantial extension of the conference version (Sun, Wang, and Li 2012) . 
Sun, Wang, and Li (2012) focus on the specific task of word segmentation, whereas this article focuses on the proposed training algorithm.", "cite_spans": [ { "start": 61, "end": 74, "text": "(Jacobs 1988;", "ref_id": "BIBREF11" }, { "start": 75, "end": 101, "text": "Sperduti and Starita 1993;", "ref_id": "BIBREF23" }, { "start": 102, "end": 136, "text": "Dredze, Crammer, and Pereira 2008;", "ref_id": "BIBREF5" }, { "start": 137, "end": 167, "text": "Duchi, Hazan, and Singer 2010;", "ref_id": "BIBREF6" }, { "start": 168, "end": 194, "text": "McMahan and Streeter 2010)", "ref_id": "BIBREF16" }, { "start": 508, "end": 543, "text": "(Dredze, Crammer, and Pereira 2008;", "ref_id": "BIBREF5" }, { "start": 544, "end": 578, "text": "Crammer, Kulesza, and Dredze 2009)", "ref_id": null }, { "start": 744, "end": 779, "text": "Dredze, Crammer, and Pereira (2008)", "ref_id": "BIBREF5" }, { "start": 806, "end": 819, "text": "Dredze (2009)", "ref_id": null }, { "start": 1261, "end": 1296, "text": "(Dredze, Crammer, and Pereira 2008;", "ref_id": "BIBREF5" }, { "start": 1297, "end": 1331, "text": "Crammer, Kulesza, and Dredze 2009)", "ref_id": null }, { "start": 1510, "end": 1534, "text": "(Sun, Wang, and Li 2012)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "On-line Training", "sec_num": "2.2" }, { "text": "In traditional on-line optimization methods such as SGD, no distinction is made for different parameters in terms of the learning rate, and this may result in slow convergence of the model training. For example, in the on-line training process, suppose the high frequency feature f 1 and the low frequency feature f 2 are observed in a training sample and their corresponding parameters w 1 and w 2 are to be updated via the same learning rate \u03b3 t . Suppose the high frequency feature f 1 has been updated 100 times and the low frequency feature f 2 has only been updated once. Then, it is possible that the weight w 1 is already well optimized and the learning rate \u03b3 t is too aggressive for updating w 1 . Updating the weight w 1 with the learning rate \u03b3 t may make w 1 be far from the well-optimized value, and it will require corrections in the future updates. This causes fluctuations in the on-line training and results in slow convergence speed. On the other hand, it is possible that the weight w 2 is poorly optimized and the same learning rate \u03b3 t is too conservative for updating w 2 . This also results in slow convergence speed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature-Frequency-Adaptive On-line Learning", "sec_num": "3." }, { "text": "To solve this problem, we propose ADF. In spite of the high accuracy and fast convergence speed, the proposed method is easy to implement. The proposed method with feature-frequency-adaptive learning rates can be seen as a learning method with specific diagonal approximation of the Hessian information based on assumptions of feature frequency information. In this approximation, the diagonal elements of the diagonal matrix correspond to the feature-frequency-adaptive learning rates. According to the aforementioned example and analysis, it assumes that a feature with higher frequency in the training process should have a learning rate that decays faster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature-Frequency-Adaptive On-line Learning", "sec_num": "3." }, { "text": "In the proposed ADF method, we try to use more refined learning rates than traditional SGD training. 
Instead of using a single learning rate (a scalar) for all weights, we extend the learning rate scalar to a learning rate vector, which has the same dimension as the weight vector w w w. The learning rate vector is automatically adapted based on feature frequency information. By doing so, each weight has its own learning rate, and we will show that this can significantly improve the convergence speed of on-line learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "In the ADF learning method, the update formula is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "w w w t+1 = w w w t + \u03b3 \u03b3 \u03b3 t \u2022 \u2022 \u2022 g g g t (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "The update term g g g t is the gradient term of a randomly sampled instance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "g g g t = \u2207 w w w t L stoch (z z z i , w w w t ) = \u2207 w w w t { \u2113(z z z i , w w w t ) \u2212 ||w w w t || 2 2n\u03c3 2 } In addition, \u03b3 \u03b3 \u03b3 t \u2208 R f", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "+ is a positive vector-valued learning rate and \u2022 \u2022 \u2022 denotes the component-wise (Hadamard) product of two vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "The learning rate vector \u03b3 \u03b3 \u03b3 t is automatically adapted based on feature frequency information in the updating process. Intuitively, a feature with higher frequency in the training process has a learning rate that decays faster. This is because a weight with higher frequency is expected to be more adequately trained, hence a lower learning rate is preferable for fast convergence. We assume that a high frequency feature should have a lower learning rate, and a low frequency feature should have a relatively higher learning rate in the training process. We systematically formalize this idea into a theoretically sound training algorithm. The proposed method with feature-frequency-adaptive learning rates can be seen as a learning method with specific diagonal approximation of the inverse of the Hessian matrix based on feature frequency information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "Given a window size q (number of samples in a window), we use a vector v v v to record the feature frequency. The kth entry v v v k corresponds to the frequency of the feature k in this window. Given a feature k, we use u to record the normalized frequency:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "u = v v v k /q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "For each feature, an adaptation factor \u03b7 is calculated based on the normalized frequency information, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "\u03b7 = \u03b1 \u2212 u(\u03b1 \u2212 \u03b2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "where \u03b1 and \u03b2 are the upper and lower bounds of a scalar, with 0 < \u03b2 < \u03b1 < 1. 
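To make the adaptation concrete, the per-feature learning-rate update can be sketched in a few lines of Python. This is a minimal illustration, assuming dense numpy vectors and assuming that the indices of the features observed in the sampled instance are supplied by the feature extractor; the function and variable names (adf_step, active, and so on) are ours for illustration and are not taken from the paper's implementation:

import numpy as np

def adf_step(w, gamma, grad, active, v, t, q, alpha, beta):
    # One ADF update (Equation (5)): w <- w + gamma . grad, where gamma is a
    # per-feature learning-rate vector rather than a single scalar.
    # v counts, per feature, how often it fired in the current window of q samples.
    v[active] += 1
    if t > 0 and t % q == 0:
        u = v / float(q)                     # normalized frequency of each feature
        eta = alpha - u * (alpha - beta)     # adaptation factor, between beta and alpha
        gamma *= eta                         # higher-frequency features decay faster
        v[:] = 0
    w += gamma * grad                        # component-wise (Hadamard) product
    return w, gamma, v, t + 1

With binary features, active is simply the list of feature indices that fire in the sampled instance; with real-valued features the same counting scheme applies to the features with nonzero values.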
Intuitively, the upper bound \u03b1 corresponds to the adaptation factor of the lowest frequency features, and the lower bound \u03b2 corresponds to the adaptation factor of the highest frequency features. The optimal values of \u03b1 and \u03b2 can be tuned based on specific realworld tasks, for example, via cross-validation on the training data or using held-out data. In practice, via cross-validation on the training data of different tasks, we found that the following setting is sufficient to produce adequate performance for most of the real-world natural language processing tasks: \u03b1 around 0.995, and \u03b2 around 0.6. This indicates that the feature frequency information has similar characteristics across many different natural language processing tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "As we can see, a feature with higher frequency corresponds to a smaller scalar via linear approximation. Finally, the learning rate is updated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "\u03b3 \u03b3 \u03b3 k \u2190 \u03b7\u03b3 \u03b3 \u03b3 k ADF learning algorithm 1: procedure ADF(Z Z Z, w w w, q, c, \u03b1, \u03b2) 2: w w w \u2190 0, t \u2190 0, v v v \u2190 0, \u03b3 \u03b3 \u03b3 \u2190 c 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "repeat until convergence 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": ". Draw a sample z z z i at random from the data set Z Z Z 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": ". v v v \u2190 UPDATEFEATUREFREQ(v v v, z z z i ) 6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": ". if t > 0 and t mod q = 0 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": ". . \u03b3 \u03b3 \u03b3 \u2190 UPDATELEARNRATE(\u03b3 \u03b3 \u03b3, v v v) 8: . . v v v \u2190 0 9: . g g g \u2190 \u2207 w w w L stoch (z z z i , w w w) 10: . w w w \u2190 w w w + \u03b3 \u03b3 \u03b3 \u2022 \u2022 \u2022 g g g 11: . t \u2190 t + 1 12: return w w w 13: 14: procedure UPDATEFEATUREFREQ(v v v, z z z i ) 15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "for k \u2208 features used in sample z z z i 16:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": ". v v v k \u2190 v v v k + 1 17: return v v v 18: 19: procedure UPDATELEARNRATE(\u03b3 \u03b3 \u03b3, v v v) 20: for k \u2208 all features 21: . u \u2190 v v v k /q 22: . \u03b7 \u2190 \u03b1 \u2212 u(\u03b1 \u2212 \u03b2) 23:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": ". \u03b3 \u03b3 \u03b3 k \u2190 \u03b7\u03b3 \u03b3 \u03b3 k 24: return \u03b3 \u03b3 \u03b3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": "3.1" }, { "text": "The proposed ADF on-line learning algorithm. 
In the algorithm, Z Z Z is the training data set; q, c, \u03b1, and \u03b2 are hyper-parameters; q is an integer representing window size; c is for initializing the learning rates; and \u03b1 and \u03b2 are the upper and lower bounds of a scalar, with 0 < \u03b2 < \u03b1 < 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "With this setting, different features correspond to different adaptation factors based on feature frequency information. Our ADF algorithm is summarized in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 164, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "The ADF training method is efficient because the only additional computation (compared with traditional SGD) is the derivation of the learning rates, which is simple and efficient. As we know, the regularization of SGD can perform efficiently via the optimization based on sparse features (Shalev-Shwartz, Singer, and Srebro 2007) . Similarly, the derivation of \u03b3 \u03b3 \u03b3 t can also perform efficiently via the optimization based on sparse features. Note that although binary features are common in natural language processing tasks, the ADF algorithm is not limited to binary features and it can be applied to realvalued features.", "cite_spans": [ { "start": 289, "end": 330, "text": "(Shalev-Shwartz, Singer, and Srebro 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "We want to show that the proposed ADF learning algorithm has good convergence properties. There are two steps in the convergence analysis. First, we show that the ADF update rule is a contraction mapping. Then, we show that the ADF training is asymptotically convergent, and with a fast convergence rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "To simplify the discussion, our convergence analysis is based on the convex loss function of traditional classification or regression problems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "L(w w w) = n \u2211 i=1 \u2113(x x x i , y i , w w w \u2022 f f f i ) \u2212 ||w w w|| 2 2\u03c3 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "where f f f i is the feature vector generated from the training sample (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "x x x i , y i ). L(w w w) is a func- tion in w w w \u2022 f f f i , such as 1 2 (y i \u2212 w w w \u2022 f f f i ) 2 for regression or log[1 + exp(\u2212y i w w w \u2022 f f f i )] for binary classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "To make convergence analysis of the proposed ADF training algorithm, we need to introduce several mathematical definitions. 
First, we introduce Lipschitz continuity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "Definition 1 (Lipschitz continuity) A function F : X \u2192 R is Lipschitz continuous with the degree of D if |F(x) \u2212 F(y)| \u2264 D|x \u2212 y| for \u2200x, y \u2208 X .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "X can be multi-dimensional space, and |x \u2212 y| is the distance between the points x and y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "Based on the definition of Lipschitz continuity, we give the definition of the Lipschitz constant ||F|| Lip as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "Definition 2 (Lipschitz constant) ||F|| Lip := inf{D where |F(x) \u2212 F(y)| \u2264 D|x \u2212 y| for \u2200x, y}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "In other words, the Lipschitz constant ||F|| Lip is the lower bound of the continuity degree that makes the function F Lipschitz continuous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "Further, based on the definition of Lipschitz constant, we give the definition of contraction mapping as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence Analysis", "sec_num": "3.2" }, { "text": "A function F : X \u2192 X is a contraction mapping if its Lipschitz constant is smaller than 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 3 (Contraction mapping)", "sec_num": null }, { "text": "||F|| Lip < 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 3 (Contraction mapping)", "sec_num": null }, { "text": "Then, we can show that the traditional SGD update is a contraction mapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 3 (Contraction mapping)", "sec_num": null }, { "text": "Let \u03b3 be a fixed low learning rate in SGD updating.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1 (SGD update rule is contraction mapping)", "sec_num": null }, { "text": "If \u03b3 \u2264 (||x 2 i || \u2022 ||\u2207 y \u2032 y \u2032 y \u2032 \u2113(x x x i , y i , y \u2032 )|| Lip ) \u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1 (SGD update rule is contraction mapping)", "sec_num": null }, { "text": ", the SGD update rule is a contraction mapping in Euclidean space with Lipschitz continuity degree 1 \u2212 \u03b3/\u03c3 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1 (SGD update rule is contraction mapping)", "sec_num": null }, { "text": "The proof can be extended from the related work on convergence analysis of parallel SGD training (Zinkevich et al. 2010) . The stochastic training process is a one-followingone dynamic update process. In this dynamic process, if we use the same update rule F, we have w w w t+1 = F(w w w t ) and w w w t+2 = F(w w w t+1 ). It is only necessary to prove that the dynamic update is a contraction mapping restricted by this one-following-one dynamic process. That is, for the proposed ADF update rule, it is only necessary to prove it is a dynamic contraction mapping. 
We formally define dynamic contraction mapping as follows.", "cite_spans": [ { "start": 97, "end": 120, "text": "(Zinkevich et al. 2010)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Lemma 1 (SGD update rule is contraction mapping)", "sec_num": null }, { "text": "Given a function F : X \u2192 X , suppose the function is used in a dynamic one-followingone process:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 4 (Dynamic contraction mapping)", "sec_num": null }, { "text": "x t+1 = F(x t ) and x t+2 = F(x t+1 ) for \u2200x t \u2208 X . Then, the function F is a dynamic contraction mapping if \u2203D < 1, |x t+2 \u2212 x t+1 | \u2264 D|x t+1 \u2212 x t | for \u2200x t \u2208 X .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 4 (Dynamic contraction mapping)", "sec_num": null }, { "text": "We can see that a contraction mapping is also a dynamic contraction mapping, but a dynamic contraction mapping is not necessarily a contraction mapping. We first show that the ADF update rule with a fixed learning rate vector of different learning rates is a dynamic contraction mapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 4 (Dynamic contraction mapping)", "sec_num": null }, { "text": "Let \u03b3 \u03b3 \u03b3 be a fixed learning rate vector with different learning rates. Let \u03b3 max be the maximum learning rate in the learning rate vector \u03b3 \u03b3 \u03b3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "\u03b3 max := sup{\u03b3 i where \u03b3 i \u2208 \u03b3 \u03b3 \u03b3}. Then if \u03b3 max \u2264 (||x 2 i || \u2022 ||\u2207 y \u2032 y \u2032 y \u2032 \u2113(x x x i , y i , y \u2032 )|| Lip ) \u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": ", the ADF update rule is a dynamic contraction mapping in Euclidean space with Lipschitz continuity degree 1 \u2212 \u03b3 max /\u03c3 2 . The proof is sketched in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "Further, we need to prove that the ADF update rule with a decaying learning rate vector is a dynamic contraction mapping, because the real ADF algorithm has a decaying learning rate vector. In the decaying case, the condition that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "\u03b3 max \u2264 (||x 2 i || \u2022 ||\u2207 y \u2032 y \u2032 y \u2032 \u2113(x x x i , y i , y \u2032 )|| Lip ) \u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "can be easily achieved, because \u03b3 \u03b3 \u03b3 continues to decay with an exponential decaying rate. 
Even if the \u03b3 \u03b3 \u03b3 is initialized with high values of learning rates, after a number of training passes (denoted as T) \u03b3 \u03b3 \u03b3 T is guaranteed to be small enough so that \u03b3 max :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "= sup{\u03b3 i where \u03b3 i \u2208 \u03b3 \u03b3 \u03b3 T } and \u03b3 max \u2264 (||x 2 i || \u2022 ||\u2207 y \u2032 y \u2032 y \u2032 \u2113(x x x i , y i , y \u2032 )|| Lip ) \u22121 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "Without losing generality, our convergence analysis starts from the pass T and we take \u03b3 \u03b3 \u03b3 T as \u03b3 \u03b3 \u03b3 0 in the following analysis. Thus, we can show that the ADF update rule with a decaying learning rate vector is a dynamic contraction mapping:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "Theorem 2 (ADF update rule with decaying learning rates) Let \u03b3 \u03b3 \u03b3 t be a learning rate vector in the ADF learning algorithm, which is decaying over the time t and with different decaying rates based on feature frequency information. Let \u03b3 \u03b3 \u03b3 t start from a low enough learning rate vector \u03b3 \u03b3 \u03b3 0 such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "\u03b3 max \u2264 (||x 2 i || \u2022 ||\u2207 y \u2032 y \u2032 y \u2032 \u2113(x x x i , y i , y \u2032 )|| Lip ) \u22121 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "where \u03b3 max is the maximum element in \u03b3 \u03b3 \u03b3 0 . Then, the ADF update rule with decaying learning rate vector is a dynamic contraction mapping in Euclidean space with Lipschitz continuity degree 1 \u2212 \u03b3 max /\u03c3 2 . The proof is sketched in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "Based on the connections between ADF training and contraction mapping, we demonstrate the convergence properties of the ADF training method. First, we prove the convergence of the ADF training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 (ADF update rule with fixed learning rates)", "sec_num": null }, { "text": "ADF training is asymptotically convergent. The proof is sketched in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3 (ADF convergence)", "sec_num": null }, { "text": "Further, we analyze the convergence rate of the ADF training. When we have the lowest learning rate \u03b3 \u03b3 \u03b3 t+1 = \u03b2\u03b3 \u03b3 \u03b3 t , the expectation of the obtained w w w t is as follows (Murata 1998; Hsu et al. 2009) :", "cite_spans": [ { "start": 177, "end": 190, "text": "(Murata 1998;", "ref_id": "BIBREF17" }, { "start": 191, "end": 207, "text": "Hsu et al. 
2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem 3 (ADF convergence)", "sec_num": null }, { "text": "E(w w w t ) = w w w * + t \u220f m=1 (I I I \u2212 \u03b3 \u03b3 \u03b3 0 \u03b2 m H H H(w w w * ))(w w w 0 \u2212 w w w * )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3 (ADF convergence)", "sec_num": null }, { "text": "where w w w * is the optimal weight vector, and H H H is the Hessian matrix of the objective function. The rate of convergence is governed by the largest eigenvalue of the function C C C t = \u220f t m=1 (I I I \u2212 \u03b3 \u03b3 \u03b3 0 \u03b2 m H H H(w w w * )). Following Murata (1998) and Hsu et al. (2009) , we can derive a bound of rate of convergence, as follows.", "cite_spans": [ { "start": 248, "end": 261, "text": "Murata (1998)", "ref_id": "BIBREF17" }, { "start": 266, "end": 283, "text": "Hsu et al. (2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Theorem 3 (ADF convergence)", "sec_num": null }, { "text": "Assume \u03d5 is the largest eigenvalue of the function C C C t = \u220f t m=1 (I I I \u2212 \u03b3 \u03b3 \u03b3 0 \u03b2 m H H H(w w w * )). For the proposed ADF training, its convergence rate is bounded by \u03d5, and we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 4 (ADF convergence rate)", "sec_num": null }, { "text": "\u03d5 \u2264 exp { \u03b3 \u03b3 \u03b3 0 \u03bb\u03b2 \u03b2 \u2212 1 }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 4 (ADF convergence rate)", "sec_num": null }, { "text": "where \u03bb is the minimum eigenvalue of H H H(w w w * ). The proof is sketched in Section 5. The convergence analysis demonstrates that the proposed method with featurefrequency-adaptive learning rates is convergent and the bound of convergence rate is analyzed. It demonstrates that increasing the values of \u03b3 \u03b3 \u03b3 0 and \u03b2 leads to a lower bound of the convergence rate. Because the bound of the convergence rate is just an up-bound rather than the actual convergence rate, we still need to conduct automatic tuning of the hyper-parameters, including \u03b3 \u03b3 \u03b3 0 and \u03b2, for optimal convergence rate in practice. The ADF training method has a fast convergence rate because the featurefrequency-adaptive schema can avoid the fluctuations on updating the weights of high frequency features, and it can avoid the insufficient training on updating the weights of low frequency features. In the following sections, we perform experiments to confirm the fast convergence rate of the proposed method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 4 (ADF convergence rate)", "sec_num": null }, { "text": "Our main focus is on training heavily structured classification models. We evaluate the proposal on three NLP structured classification tasks: biomedical named entity recognition (Bio-NER), Chinese word segmentation, and noun phrase (NP) chunking. For the structured classification tasks, the ADF training is based on the CRF model (Lafferty, McCallum, and Pereira 2001) . Further, to demonstrate that the proposed method is not limited to structured classification tasks, we also perform experiments on a nonstructured binary classification task: sentiment-based text classification. 
For the nonstructured classification task, the ADF training is based on the maximum entropy model (Berger, Della Pietra, and Della Pietra 1996; Ratnaparkhi 1996) .", "cite_spans": [ { "start": 332, "end": 370, "text": "(Lafferty, McCallum, and Pereira 2001)", "ref_id": "BIBREF14" }, { "start": 683, "end": 728, "text": "(Berger, Della Pietra, and Della Pietra 1996;", "ref_id": "BIBREF0" }, { "start": 729, "end": 746, "text": "Ratnaparkhi 1996)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "The biomedical named entity recognition (Bio-NER) task is from the BIONLP-2004 shared task. The task is to recognize five kinds of biomedical named entities, including DNA, RNA, protein, cell line, and cell type, on the MEDLINE biomedical text mining corpus (Kim et al. 2004) . A typical approach to this problem is to cast it as a sequential labeling task with the BIO encoding.", "cite_spans": [ { "start": 258, "end": 275, "text": "(Kim et al. 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Biomedical Named Entity Recognition (Structured Classification)", "sec_num": "4.1" }, { "text": "This data set consists of 20,546 training samples (from 2,000 MEDLINE article abstracts, with 472,006 word tokens) and 4,260 test samples. The properties of the data are summarized in Table 1 . State-of-the-art systems for this task include Settles (2004) , Finkel et al. (2004 ), Okanohara et al. (2006 , Hsu et al. (2009) , , and Tsuruoka, Tsujii, and Ananiadou (2009) .", "cite_spans": [ { "start": 241, "end": 255, "text": "Settles (2004)", "ref_id": "BIBREF22" }, { "start": 258, "end": 277, "text": "Finkel et al. (2004", "ref_id": "BIBREF7" }, { "start": 278, "end": 303, "text": "), Okanohara et al. (2006", "ref_id": "BIBREF18" }, { "start": 306, "end": 323, "text": "Hsu et al. (2009)", "ref_id": "BIBREF10" }, { "start": 332, "end": 370, "text": "Tsuruoka, Tsujii, and Ananiadou (2009)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 184, "end": 191, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Biomedical Named Entity Recognition (Structured Classification)", "sec_num": "4.1" }, { "text": "Following previous studies for this task (Okanohara et al. 2006; , we use word token-based features, part-of-speech (POS) based features, and orthography pattern-based features (prefix, uppercase/lowercase, etc.), as listed in Table 2 . With the traditional implementation of CRF systems (e.g., the HCRF package), the edges features usually contain only the information of y i\u22121 and y i , and ignore the ", "cite_spans": [ { "start": 36, "end": 64, "text": "task (Okanohara et al. 
2006;", "ref_id": null } ], "ref_spans": [ { "start": 227, "end": 234, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Biomedical Named Entity Recognition (Structured Classification)", "sec_num": "4.1" }, { "text": "{w i\u22122 , w i\u22121 , w i , w i+1 , w i+2 , w i\u22121 w i , w i w i+1 } \u00d7{y i , y i\u22121 y i } Part-of-Speech (POS)-based Features: {t i\u22122 , t i\u22121 , t i , t i+1 , t i+2 , t i\u22122 t i\u22121 , t i\u22121 t i , t i t i+1 , t i+1 t i+2 , t i\u22122 t i\u22121 t i , t i\u22121 t i t i+1 , t i t i+1 t i+2 } \u00d7{y i , y i\u22121 y i } Orthography Pattern-based Features: {o i\u22122 , o i\u22121 , o i , o i+1 , o i+2 , o i\u22122 o i\u22121 , o i\u22121 o i , o i o i+1 , o i+1 o i+2 } \u00d7{y i , y i\u22121 y i }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biomedical Named Entity Recognition (Structured Classification)", "sec_num": "4.1" }, { "text": "information of the observation sequence (i.e., x x x). The major reason for this simple realization of edge features in traditional CRF implementation is to reduce the dimension of features. To improve the model accuracy, we utilize rich edge features following Sun, Wang, and Li (2012) , in which local observation information of x x x is combined in edge features just like the implementation of node features. A detailed introduction to rich edge features can be found in Sun, Wang, and Li (2012) . Using the feature templates, we extract a high dimensional feature set, which contains 5.3 \u00d7 10 7 features in total. Following prior studies, the evaluation metric for this task is the balanced F-score defined as 2PR/(P + R), where P is precision and R is recall.", "cite_spans": [ { "start": 262, "end": 286, "text": "Sun, Wang, and Li (2012)", "ref_id": "BIBREF28" }, { "start": 475, "end": 499, "text": "Sun, Wang, and Li (2012)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Biomedical Named Entity Recognition (Structured Classification)", "sec_num": "4.1" }, { "text": "Chinese word segmentation aims to automatically segment character sequences into word sequences. Chinese word segmentation is important because it is the first step for most Chinese language information processing systems. Our experiments are based on the Microsoft Research data provided by The Second International Chinese Word Segmentation Bakeoff. In this data set, there are 8.8 \u00d7 10 4 word-types, 2.4 \u00d7 10 6 wordtokens, 5 \u00d7 10 3 character-types, and 4.1 \u00d7 10 6 character-tokens. State-of-the-art systems for this task include Tseng et al. (2005) , Zhang, Kikui, and Sumita (2006) , Zhang and Clark (2007) , Gao et al. (2007) , Sun, Zhang, et al. (2009) , Sun (2010), Zhao et al. (2010) , and Zhao and Kit (2011) . The feature engineering follows previous work on word segmentation (Sun, Wang, and Li 2012) . Rich edge features are used. For the classification label y i and the label transition y i\u22121 y i on position i, we use the feature templates as follows (Sun, Wang, and Li 2012): r Character unigrams located at positions i \u2212 2, i \u2212 1, i, i + 1, and i + 2.", "cite_spans": [ { "start": 532, "end": 551, "text": "Tseng et al. (2005)", "ref_id": "BIBREF30" }, { "start": 554, "end": 585, "text": "Zhang, Kikui, and Sumita (2006)", "ref_id": "BIBREF32" }, { "start": 588, "end": 610, "text": "Zhang and Clark (2007)", "ref_id": "BIBREF33" }, { "start": 613, "end": 630, "text": "Gao et al. 
(2007)", "ref_id": "BIBREF9" }, { "start": 633, "end": 658, "text": "Sun, Zhang, et al. (2009)", "ref_id": "BIBREF29" }, { "start": 673, "end": 691, "text": "Zhao et al. (2010)", "ref_id": "BIBREF34" }, { "start": 698, "end": 717, "text": "Zhao and Kit (2011)", "ref_id": "BIBREF35" }, { "start": 787, "end": 811, "text": "(Sun, Wang, and Li 2012)", "ref_id": "BIBREF28" }, { "start": 966, "end": 991, "text": "(Sun, Wang, and Li 2012):", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Chinese Word Segmentation (Structured Classification)", "sec_num": "4.2" }, { "text": "r Character bigrams located at positions i \u2212 2, i \u2212 1, i and i + 1. r Whether x j and x j+1 are identical, for j = i \u2212 2, . . . , i + 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Word Segmentation (Structured Classification)", "sec_num": "4.2" }, { "text": "r Whether x j and x j+2 are identical, for j = i \u2212 3, . . . , i + 1. r The character sequence x j,i if it matches a word w \u2208 U, with the constraint i \u2212 6 < j < i. The item x j,i represents the character sequence x j . . . x i . U represents the unigram-dictionary collected from the training data. r The character sequence x i,k if it matches a word w \u2208 U, with the constraint All feature templates are instantiated with values that occurred in training samples. The extracted feature set is large, and there are 2.4 \u00d7 10 7 features in total. Our evaluation is based on a closed test, and we do not use extra resources. Following prior studies, the evaluation metric for this task is the balanced F-score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Word Segmentation (Structured Classification)", "sec_num": "4.2" }, { "text": "i < k < i + 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Word Segmentation (Structured Classification)", "sec_num": "4.2" }, { "text": "In the phrase chunking task, the non-recursive cores of noun phrases, called base NPs, are identified. The phrase chunking data is extracted from the data of the CoNLL-2000 shallow-parsing shared task (Sang and Buchholz 2000) . The training set consists of 8,936 sentences, and the test set consists of 2,012 sentences. We use the feature templates based on word n-grams and part-of-speech n-grams, and feature templates are shown in Table 3 . Rich edge features are used. Using the feature templates, we extract 4.8 \u00d7 10 5 features in total. State-of-the-art systems for this task include Kudo and Matsumoto (2001) , Collins (2002 ), McDonald, Crammer, and Pereira (2005 ), Vishwanathan et al. (2006 , Sun et al. (2008) , and Tsuruoka, Tsujii, and Ananiadou (2009) . Following prior studies, the evaluation metric for this task is the balanced F-score.", "cite_spans": [ { "start": 201, "end": 225, "text": "(Sang and Buchholz 2000)", "ref_id": "BIBREF20" }, { "start": 590, "end": 615, "text": "Kudo and Matsumoto (2001)", "ref_id": "BIBREF13" }, { "start": 618, "end": 631, "text": "Collins (2002", "ref_id": "BIBREF4" }, { "start": 632, "end": 671, "text": "), McDonald, Crammer, and Pereira (2005", "ref_id": "BIBREF15" }, { "start": 672, "end": 700, "text": "), Vishwanathan et al. (2006", "ref_id": null }, { "start": 703, "end": 720, "text": "Sun et al. 
(2008)", "ref_id": "BIBREF27" }, { "start": 723, "end": 765, "text": "and Tsuruoka, Tsujii, and Ananiadou (2009)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 434, "end": 441, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Phrase Chunking (Structured Classification)", "sec_num": "4.3" }, { "text": "To demonstrate that the proposed method is not limited to structured classification, we select a well-known sentiment classification task for evaluating the proposed method on non-structured classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Classification (Non-Structured Classification)", "sec_num": "4.4" }, { "text": "Feature templates used for the phrase chunking task. w i , t i , and y i are defined as before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3", "sec_num": null }, { "text": "{w i\u22122 , w i\u22121 , w i , w i+1 , w i+2 , w i\u22121 w i , w i w i+1 } \u00d7{y i , y i\u22121 y i } Part-of-Speech (POS)-based Features: {t i\u22121 , t i , t i+1 , t i\u22122 t i\u22121 , t i\u22121 t i , t i t i+1 , t i+1 t i+2 , t i\u22122 t i\u22121 t i , t i\u22121 t i t i+1 , t i t i+1 t i+2 } \u00d7{y i , y i\u22121 y i }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Token-based Features:", "sec_num": null }, { "text": "Generally, sentiment classification classifies user review text as a positive or negative opinion. This task (Blitzer, Dredze, and Pereira 2007) consists of four subtasks based on user reviews from Amazon.com. Each subtask is a binary sentiment classification task based on a specific topic. We use the maximum entropy model for classification. We use the same lexical features as those used in Blitzer, Dredze, and Pereira (2007) , and the total number of features is 9.4 \u00d7 10 5 . Following prior work, the evaluation metric is binary classification accuracy.", "cite_spans": [ { "start": 109, "end": 144, "text": "(Blitzer, Dredze, and Pereira 2007)", "ref_id": "BIBREF1" }, { "start": 395, "end": 430, "text": "Blitzer, Dredze, and Pereira (2007)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Word-Token-based Features:", "sec_num": null }, { "text": "As for training, we perform gradient descent with the proposed ADF training method. To compare with existing literature, we choose four popular training methods, a representative batch training method, and three representative on-line training methods. The batch training method is the limited-memory BFGS (LBFGS) method (Nocedal and Wright 1999) , which is considered to be one of the best optimizers for log-linear models like CRFs. The on-line training methods include the SGD training method, which we introduced in Section 2.2, the structured perceptron (Perc) training method (Freund and Schapire 1999; Collins 2002) , and the averaged perceptron (Avg-Perc) training method (Collins 2002) . The structured perceptron method and averaged perceptron method are non-probabilistic training methods that have very fast training speed due to the avoidance of the computation on gradients (Sun, Matsuzaki, and Li 2013) . 
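For comparison, the perceptron baselines can be sketched as follows. This is an illustrative fragment, not the authors' code, and it assumes two hypothetical helpers: decode(w, x), which returns the highest-scoring label sequence under the current weights, and feat(x, y), which returns the global feature vector of Equation (1):

def perceptron_train(samples, w, feat, decode, passes=10):
    # Structured perceptron (Collins 2002): no gradient is computed; each update
    # needs only one decoding step and two feature-vector extractions.
    w_sum = w.copy()                          # running sum for parameter averaging
    n = 1
    for _ in range(passes):
        for x, y_gold in samples:
            y_pred = decode(w, x)             # best label sequence under current w
            if y_pred != y_gold:
                w += feat(x, y_gold) - feat(x, y_pred)
            w_sum += w
            n += 1
    return w, w_sum / n                       # (Perc weights, Avg-Perc weights)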
All training methods, including ADF, SGD, Perc, Avg-Perc, and LBFGS, use the same set of features.", "cite_spans": [ { "start": 321, "end": 346, "text": "(Nocedal and Wright 1999)", "ref_id": "BIBREF18" }, { "start": 582, "end": 608, "text": "(Freund and Schapire 1999;", "ref_id": "BIBREF8" }, { "start": 609, "end": 622, "text": "Collins 2002)", "ref_id": "BIBREF4" }, { "start": 680, "end": 694, "text": "(Collins 2002)", "ref_id": "BIBREF4" }, { "start": 888, "end": 917, "text": "(Sun, Matsuzaki, and Li 2013)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4.5" }, { "text": "We also compared the ADF method with the CW method (Dredze, Crammer, and Pereira 2008) and the AROW method (Crammer, Kulesza, and Dredze 2009) . The CW and AROW methods are implemented based on the Confidence Weighted Learning Library. 2 Because the current implementation of the CW and AROW methods do not utilize rich edge features, we removed the rich edge features in our systems to make more fair comparisons. That is, we removed rich edge features in the CRF-ADF setting, and this simplified method is denoted as ADF-noRich. The second-order stochastic gradient descent training methods, including the SMD method (Vishwanathan et al. 2006) and the PSA method (Hsu et al. 2009) , are not considered in our experiments because we find those methods are quite slow when running on our data sets with high dimensional features.", "cite_spans": [ { "start": 51, "end": 86, "text": "(Dredze, Crammer, and Pereira 2008)", "ref_id": "BIBREF5" }, { "start": 107, "end": 142, "text": "(Crammer, Kulesza, and Dredze 2009)", "ref_id": null }, { "start": 619, "end": 645, "text": "(Vishwanathan et al. 2006)", "ref_id": null }, { "start": 665, "end": 682, "text": "(Hsu et al. 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4.5" }, { "text": "We find that the settings of q, \u03b1, and \u03b2 in the ADF training method are not sensitive among specific tasks and can be generally set. We simply set q = n/10 (n is the number of training samples). It means that feature frequency information is updated 10 times per iteration. Via cross-validation only on the training data of different tasks, we find that the following setting is sufficient to produce adequate performance for most of the real-world natural language processing tasks: \u03b1 around 0.995 and \u03b2 around 0.6. This indicates that the feature frequency information has similar characteristics across many different natural language processing tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4.5" }, { "text": "Thus, we simply use the following setting for all tasks: q = n/10, \u03b1 = 0.995, and \u03b2 = 0.6. This leaves c (the initial value of the learning rates) as the only hyper-parameter that requires careful tuning. We perform automatic tuning for c based on the training data via 4-fold cross-validation, testing with c = 0.005, 0.01, 0.05, 0.1, respectively, and the optimal c is chosen based on the best accuracy of cross-validation. 
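To illustrate how q, α, β, and c interact, the sketch below shows a frequency-adaptive schedule in the spirit of ADF: each feature keeps its own learning rate, initialised to c; a cumulative frequency vector v is refreshed every q instances (10 times per pass when q = n/10); and the per-feature decay is interpolated between α (rare features, slow decay) and β (frequent features, fast decay). The linear interpolation, the window reset, and the placeholder functions grad_fn and feat_ids_fn are our own illustrative assumptions rather than the exact update defined in Section 3.1.

```python
# Hedged sketch of a feature-frequency-adaptive learning-rate schedule.
import numpy as np

def adf_style_sgd(grad_fn, feat_ids_fn, data, dim,
                  c=0.05, q=None, alpha=0.995, beta=0.6, passes=10):
    """grad_fn(w, x)   -> sparse stochastic gradient (dict: feature id -> value)
    feat_ids_fn(x)  -> ids of the features active in sample x
    Both callables and the data format are hypothetical placeholders."""
    n = len(data)
    q = q or max(1, n // 10)       # refresh frequency information 10 times per pass
    w = np.zeros(dim)              # model parameters
    rates = np.full(dim, c)        # one learning rate per feature, initialised to c
    v = np.zeros(dim)              # feature frequencies accumulated in the window
    for _ in range(passes):
        for i, x in enumerate(data):
            for f in feat_ids_fn(x):
                v[f] += 1.0
            for f, gval in grad_fn(w, x).items():
                w[f] += rates[f] * gval          # per-feature update: w += gamma .* g
            if (i + 1) % q == 0:                 # periodically decay the learning rates
                vmax = max(v.max(), 1.0)
                # assumed interpolation: frequent features decay near beta,
                # rare features decay near alpha
                rates *= alpha - (alpha - beta) * (v / vmax)
                v[:] = 0.0                       # reset the window counts (illustrative choice)
    return w
```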
Via this automatic tuning, we find it is proper to set c = 0.005, 0.1, 0.05, 0.005, for the Bio-NER, word segmentation, phrase chunking, and sentiment classification tasks, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4.5" }, { "text": "To reduce overfitting, we use an L 2 Gaussian weight prior (Chen and Rosenfeld 1999) for the ADF, LBFGS, and SGD training methods. We vary the \u03c3 with different values (e.g., 1.0, 2.0, and 5.0) for 4-fold cross validation on the training data of different tasks, and finally set \u03c3 = 5.0 for all training methods in the Bio-NER task; \u03c3 = 5.0 for all training methods in the word segmentation task; \u03c3 = 5.0, 1.0, 1.0 for ADF, SGD, and LBFGS in the phrase chunking task; and \u03c3 = 1.0 for all training methods in the sentiment classification task. Experiments are performed on a computer with an Intel(R) Xeon(R) 2.0-GHz CPU.", "cite_spans": [ { "start": 59, "end": 84, "text": "(Chen and Rosenfeld 1999)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4.5" }, { "text": "Convergence. First, we check the experimental results of different methods on their empirical convergence state. Because the perceptron training method (Perc) does not achieve empirical convergence even with a very large number of training passes, we simply report its results based on a large enough number of training passes (e.g., 200 passes). Experimental results are shown in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 381, "end": 388, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Comparisons Based on Empirical", "sec_num": "4.6.1" }, { "text": "As we can see, the proposed ADF method is more accurate than other training methods, either the on-line ones or the batch one. It is a bit surprising that the ADF method performs even more accurately than the batch training method (LBFGS). We notice that some previous work also found that on-line training methods could have Table 4 Results for the Bio-NER, word segmentation, and phrase chunking tasks. The results and the number of passes are decided based on empirical convergence (with score deviation of adjacent five passes less than 0.01). For the non-convergent case, we simply report the results based on a large enough number of training passes. As we can see, the ADF method achieves the best accuracy with the fastest convergence speed. better performance than batch training methods such as LBFGS (Tsuruoka, Tsujii, and Ananiadou 2009; Schaul, Zhang, and LeCun 2012) . The ADF training method can achieve better results probably because the feature-frequency-adaptive training schema can produce more balanced training of features with diversified frequencies. Traditional SGD training may over-train high frequency features and at the same time may have insufficient training of low frequency features. The ADF training method can avoid such problems. It will be interesting to perform further analysis in future work. We also performed significance tests based on t-tests with a significance level of 0.05. Significance tests demonstrate that the ADF method is significantly more accurate than the existing training methods in most of the comparisons, whether on-line or batch. For the Bio-NER task, the differences between ADF and LBFGS, SGD, Perc, and Avg-Perc are significant. For the word segmentation task, the differences between ADF and LBFGS, SGD, Perc, and Avg-Perc are significant. 
For the phrase chunking task, the differences between ADF and Perc and Avg-Perc are significant; the differences between ADF and LBFGS and SGD are non-significant.", "cite_spans": [ { "start": 811, "end": 849, "text": "(Tsuruoka, Tsujii, and Ananiadou 2009;", "ref_id": "BIBREF31" }, { "start": 850, "end": 880, "text": "Schaul, Zhang, and LeCun 2012)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 326, "end": 333, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Comparisons Based on Empirical", "sec_num": "4.6.1" }, { "text": "Moreover, as we can see, the proposed method achieves a convergence state with the least number of training passes, and with the least wall-clock time. In general, the ADF method is about one order of magnitude faster than the LBFGS batch training method and several times faster than the existing on-line training methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bio-NER", "sec_num": null }, { "text": "State-of-the-Art Systems. The three tasks are well-known benchmark tasks with standard data sets. There is a large amount of published research on those three tasks. We compare the proposed method with the state-of-the-art systems. The comparisons are shown in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 268, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Comparisons with", "sec_num": "4.6.2" }, { "text": "As we can see, our system is competitive with the best systems for the Bio-NER, word segmentation, and NP-chunking tasks. Many of the state-of-the-art systems use extra resources (e.g., linguistic knowledge) or complicated systems (e.g., voting over Table 5 Comparing our results with some representative state-of-the-art systems.", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 257, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Comparisons with", "sec_num": "4.6.2" }, { "text": "Method F-score (Okanohara et al. 2006) Semi-Markov CRF + global features 71.5 (Hsu et al. 2009) CRF + PSA(1) training 69.4 (Tsuruoka, Tsujii, and Ananiadou 2009) CRF + SGD-L1 training 71.6 Our Method CRF + ADF training 72.3", "cite_spans": [ { "start": 15, "end": 38, "text": "(Okanohara et al. 2006)", "ref_id": "BIBREF18" }, { "start": 78, "end": 95, "text": "(Hsu et al. 2009)", "ref_id": "BIBREF10" }, { "start": 123, "end": 161, "text": "(Tsuruoka, Tsujii, and Ananiadou 2009)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Bio-NER", "sec_num": null }, { "text": "Method F-score (Gao et al. 2007) Semi-Markov CRF 97.2 (Sun, Zhang, et al. 2009) Latent-variable CRF 97.3 (Sun 2010) Multiple segmenters + voting 96.9 Our Method CRF + ADF training 97.5", "cite_spans": [ { "start": 15, "end": 32, "text": "(Gao et al. 2007)", "ref_id": "BIBREF9" }, { "start": 54, "end": 79, "text": "(Sun, Zhang, et al. 2009)", "ref_id": "BIBREF29" }, { "start": 105, "end": 115, "text": "(Sun 2010)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation", "sec_num": null }, { "text": "Chunking Method F-score (Kudo and Matsumoto 2001) Combination of multiple SVM 94. 2 (Vishwanathan et al. 2006) CRF + SMD training 93.6 (Sun et al. 2008) Latent-variable CRF 94.3 Our Method CRF + ADF training 94.5 multiple models). Thus, it is impressive that our single model-based system without extra resources achieves good performance. 
This indicates that the proposed ADF training method can train model parameters with good generality on the test data.", "cite_spans": [ { "start": 24, "end": 49, "text": "(Kudo and Matsumoto 2001)", "ref_id": "BIBREF13" }, { "start": 82, "end": 110, "text": "2 (Vishwanathan et al. 2006)", "ref_id": null }, { "start": 135, "end": 152, "text": "(Sun et al. 2008)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation", "sec_num": null }, { "text": "To study the detailed training process and convergence speed, we show the training curves in Figures 2-4 . Figure 2 focuses on the comparisons between the ADF method and the existing on-line training methods. As we can see, the ADF method converges faster than other on-line training methods in terms of both training passes and wall-clock time. The ADF method has roughly the same training speed per pass compared with traditional SGD training. Figure 3 (Top Row) focuses on comparing the ADF method with the CW method (Dredze, Crammer, and Pereira 2008) and the AROW method (Crammer, Kulesza, and Dredze 2009) . Comparisons are based on similar features. As discussed before, the ADF-noRich method is a simplified system, with rich edge features removed from the CRF-ADF system. As we can see, the proposed ADF method, whether with or without rich edge features, outperforms the CW and AROW methods. Figure 3 (Bottom Row) focuses on the comparisons with different mini-batch (the training samples in each stochastic update) sizes. Representative results with a mini-batch size of 10 are shown. In general, we find larger mini-batch sizes will slow down the convergence speed. Results demonstrate that, compared with the SGD training method, the ADF training method is less sensitive to mini-batch sizes. Figure 4 focuses on the comparisons between the ADF method and the batch training method LBFGS. As we can see, the ADF method converges at least one order magnitude faster than the LBFGS training in terms of both training passes and wallclock time. For the LBFGS training, we need to determine the LBFGS memory parameter m, which controls the number of prior gradients used to approximate the Hessian information. A larger value of m will potentially lead to more accurate estimation of the Hessian information, but at the same time will consume significantly more memory. Roughly, the LBFGS training consumes m times more memory than the ADF on-line training method. For most tasks, the default setting of m = 10 is reasonable. We set m = 10 for the word segmentation and phrase chunking tasks, and m = 6 for the Bio-NER task due to the shortage of memory for m > 6 cases in this task.", "cite_spans": [ { "start": 520, "end": 555, "text": "(Dredze, Crammer, and Pereira 2008)", "ref_id": "BIBREF5" }, { "start": 576, "end": 611, "text": "(Crammer, Kulesza, and Dredze 2009)", "ref_id": null } ], "ref_spans": [ { "start": 93, "end": 104, "text": "Figures 2-4", "ref_id": null }, { "start": 107, "end": 115, "text": "Figure 2", "ref_id": null }, { "start": 446, "end": 454, "text": "Figure 3", "ref_id": null }, { "start": 902, "end": 910, "text": "Figure 3", "ref_id": null }, { "start": 1306, "end": 1314, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Training Curves.", "sec_num": "4.6.3" }, { "text": "Results. Many real-world data sets can only observe the training data in one pass. 
For example, some Web-based on-line data streams can only appear once so that the model parameter learning should be finished in one-pass learning (see Zinkevich et al. 2010) . Hence, it is important to test the performance in the one-pass learning scenario.", "cite_spans": [ { "start": 235, "end": 257, "text": "Zinkevich et al. 2010)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "One-Pass Learning", "sec_num": "4.6.4" }, { "text": "In the one-pass learning scenario, the feature frequency information is computed \"on the fly\" during on-line training. As shown in Section 3.1, we only need to have a real-valued vector v v v to record the cumulative feature frequency information, which is updated when observing training instances one by one. Then, the learning rate vector \u03b3 \u03b3 \u03b3 is updated based on the v v v only and there is no need to observe the training instances again. This is the same algorithm introduced in Section 3.1 and no change is required for the one-pass learning scenario. Figure 5 shows the comparisons between the ADF method and baselines on one-pass learning. As we can see, the ADF method ", "cite_spans": [], "ref_spans": [ { "start": 560, "end": 568, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "One-Pass Learning", "sec_num": "4.6.4" }, { "text": "Comparisons between the ADF method and the batch training method LBFGS. (Top Row) Comparisons based on training passes. As we can see, the ADF method converges much faster than the LBFGS method, and with better accuracy on the convergence state. (Bottom Row) Comparisons based on wall-clock time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "consistently outperforms the baselines. This also reflects the fast convergence speed of the ADF training method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "In previous experiments, we showed that the proposed method outperforms existing baselines on structured classification. Nevertheless, we want to show that the ADF method also has good performance on non-structured classification. In addition, this task is based on real-valued features instead of binary features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Structured Classification Results", "sec_num": "4.7" }, { "text": "Comparisons among different methods based on one-pass learning. As we can see, the ADF method has the best accuracy on one-pass learning. Experimental results of different training methods on the convergence state are shown in Table 6 . As we can see, the proposed method outperforms all of the on-line and batch baselines in terms of binary classification accuracy. Here again we observe that the ADF and SGD methods outperform the LBFGS baseline.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 234, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "The training curves are shown in Figure 6 . As we can see, the ADF method converges quickly. Because this data set is relatively small and the feature dimension is much smaller than previous tasks, we find the baseline training methods also have fast convergence speed. 
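For the sentiment task, the stochastic trainers compared here all consume a per-instance gradient of the binary maximum entropy objective with the L2 Gaussian prior; a hedged sketch of that gradient is given below. The sparse-dictionary representation, the per-instance splitting of the prior, and the lazy regularization of only the active features are simplifying assumptions of this sketch, not the authors' code.

```python
# Sketch of the per-instance gradient for a binary maximum entropy (logistic) model
# with an L2 Gaussian prior, i.e. the quantity a stochastic trainer such as SGD or
# the sketched ADF update would consume.
import math

def logistic_gradient(w, feats, label, sigma=1.0, n=1):
    """w: dict feature -> weight; feats: dict feature -> real value; label: 0 or 1.
    Returns a sparse gradient of the regularized log-likelihood for one instance,
    with the L2 penalty split evenly over the n training instances."""
    z = sum(w.get(f, 0.0) * v for f, v in feats.items())
    p = 1.0 / (1.0 + math.exp(-z))               # model probability of label 1
    grad = {f: (label - p) * v for f, v in feats.items()}
    for f in feats:                               # lazy approximation: regularize only
        grad[f] -= w.get(f, 0.0) / (sigma ** 2 * n)  # the features active in this instance
    return grad
```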
The comparisons on one-pass learning are shown in Fig ", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 41, "text": "Figure 6", "ref_id": null }, { "start": 320, "end": 323, "text": "Fig", "ref_id": null } ], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "F-score curves on sentiment classification. (Top Row) Comparisons among the ADF method and on-line training baselines, based on training passes and wall-clock time, respectively. (Bottom Row) Comparisons between the ADF method and the batch training method LBFGS, based on training passes and wall-clock time, respectively. As we can see, the ADF method outperforms both the on-line training baselines and the batch training baseline, with better accuracy and faster convergence speed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "One-pass learning results on sentiment classification. outperforms the baseline methods on one-pass learning, with more than 12.7% error rate reduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "This section gives proofs of Theorems 1-4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "Proof of Theorem 1 Following Equation 5, the ADF update rule is F(w w w t ) := w w w t+1 = w w w t + \u03b3 \u03b3 \u03b3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "\u2022 \u2022 \u2022 g g g t . For \u2200w w w t \u2208 X , |F(w w w t+1 ) \u2212 F(w w w t )| = |F(w w w t+1 ) \u2212 w w w t+1 | = |w w w t+1 + \u03b3 \u03b3 \u03b3 \u2022 \u2022 \u2022 g g g t+1 \u2212 w w w t+1 | = |\u03b3 \u03b3 \u03b3 \u2022 \u2022 \u2022 g g g t+1 | = [(a 1 b 1 ) 2 + (a 2 b 2 ) 2 + \u2022 \u2022 \u2022 + (a f b f ) 2 ] 1/2 \u2264 [(\u03b3 max b 1 ) 2 + (\u03b3 max b 2 ) 2 + \u2022 \u2022 \u2022 + (\u03b3 max b f ) 2 ] 1/2 = |\u03b3 max g g g t+1 | = |F SGD (w w w t+1 ) \u2212 F SGD (w w w t )| (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "where a i and b i are the ith elements of the vector \u03b3 \u03b3 \u03b3 and g g g t+1 , respectively. F SGD is the SGD update rule with the fixed learning rate \u03b3 max such that \u03b3 max := sup{\u03b3 i where \u03b3 i \u2208 \u03b3 \u03b3 \u03b3}. In other words, for the SGD update rule F SGD , the fixed learning rate \u03b3 max is derived from the ADF update rule. According to Lemma 1, the SGD update rule F SGD is a contraction mapping in Euclidean space with Lipschitz continuity degree 1 \u2212 \u03b3 max /\u03c3 2 , given the condition that \u03b3 max \u2264 (||x 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i || \u2022 ||\u2207 y \u2032 y \u2032 y \u2032 \u2113(x x x i , y i , y \u2032 )|| Lip ) \u22121 . Hence, it goes to |F SGD (w w w t+1 ) \u2212 F SGD (w w w t )| \u2264 (1 \u2212 \u03b3 max /\u03c3 2 )|w w w t+1 \u2212 w w w t |", "eq_num": "(7)" } ], "section": "Proofs", "sec_num": "5." 
}, { "text": "Combining Equations (6) and (7), it goes to |F(w w w t+1 ) \u2212 F(w w w t )| \u2264 (1 \u2212 \u03b3 max /\u03c3 2 )|w w w t+1 \u2212 w w w t | Thus, according to the definition of dynamic contraction mapping, the ADF update rule is a dynamic contraction mapping in Euclidean space with Lipschitz continuity degree 1 \u2212 \u03b3 max /\u03c3 2 . \u2293 \u2294 Proof of Theorem 4 First, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "eigen(C C C t ) = t \u220f m=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "(1 \u2212 \u03b3 \u03b3 \u03b3 0 \u03b2 m \u03bb)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "\u2264 exp { \u2212 \u03b3 \u03b3 \u03b3 0 \u03bb t \u2211 m=1 \u03b2 m } Then, we have 0 \u2264 n \u220f j=1 (1 \u2212 a j ) \u2264 n \u220f j=1 e \u2212a j = e \u2212 \u2211 n j=1 a j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "This is because 1 \u2212 a j \u2264 e \u2212a j given 0 \u2264 a j < 1. Finally, because \u2211 t m=1 \u03b2 m \u2192 \u03b2 1\u2212\u03b2 when t \u2192 \u221e, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "eigen(C C C t ) \u2264 exp { \u2212 \u03b3 \u03b3 \u03b3 0 \u03bb t \u2211 m=1 \u03b2 m } \u2192 exp { \u2212\u03b3 \u03b3 \u03b3 0 \u03bb\u03b2 1 \u2212 \u03b2 }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "This completes the proof. \u2293 \u2294", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proofs", "sec_num": "5." }, { "text": "In this work we tried to simultaneously improve the training speed and model accuracy of natural language processing systems. We proposed the ADF on-line training method, based on the core idea that high frequency features should result in a learning rate that decays faster. We demonstrated that the ADF on-line training method is convergent and has good theoretical properties. Based on empirical experiments, we can state the following conclusions. First, the ADF method achieved the major target of this work: faster training speed and higher accuracy at the same time. Second, the ADF method was robust: It had good performance on several structured and non-structured classification tasks with very different characteristics. Third, the ADF method worked well on both binary features and real-valued features. Fourth, the ADF method outperformed existing methods in a one-pass learning setting. Finally, our method achieved stateof-the-art performance on several well-known benchmark tasks. To the best of our knowledge, our simple method achieved a much better F-score than the existing best reports on the biomedical named entity recognition task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6." }, { "text": "ADF source code and tools can be obtained from http://klcl.pku.edu.cn/member/sunxu/index.htm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://webee.technion.ac.il/people/koby/code-index.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": " 12&ZD227) . 
This work is a substantial extension of the conference version presented at ACL 2012 (Sun, Wang, and Li 2012) .", "cite_spans": [ { "start": 98, "end": 122, "text": "(Sun, Wang, and Li 2012)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 1, "end": 10, "text": "12&ZD227)", "ref_id": null } ], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "As presented in Equation 5, the ADF update rule is F(w w w t ) := w w w t+1 = w w w t + \u03b3 \u03b3 \u03b3 t \u2022 \u2022 \u2022 g g g t . For \u2200w w w t \u2208 X ,where a i is the ith element of the vector \u03b3 \u03b3 \u03b3 t+1 . b i and F SGD are the same as before. Similar to the analysis of Theorem 1, the third step of Equation 8is valid because \u03b3 max is the maximum learning rate at the beginning and all learning rates are decreasing when t is increasing. The proof can be easily derived following the same steps in the proof of Theorem 1. To avoid redundancy, we do not repeat the derivation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof of Theorem 2", "sec_num": null }, { "text": "Proof of Theorem 3 Let M be the accumulative change of the ADF weight vector w w w t :To prove the convergence of the ADF, we need to prove the sequence M t converges as t \u2192 \u221e. Following Theorem 2, we have the following formula for the ADF training:where \u03b3 max is the maximum learning rate at the beginning. Let d 0 := |w w w 2 \u2212 w w w 1 | and q := 1 \u2212 \u03b3 max /\u03c3 2 , then we have:WhenHence, we have:Thus, M t is upper-bounded. Because we know that M t is a monotonically increasing function when t \u2192 \u221e, it follows that M t converges when t \u2192 \u221e. This completes the proof. \u2293 \u2294", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2293 \u2294", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "J", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berger, Adam L., Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blitzer, John, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. 
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440-447, Prague.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Online algorithms and stochastic approximations", "authors": [ { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" } ], "year": 1998, "venue": "Online Learning and Neural Networks", "volume": "", "issue": "", "pages": "9--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bottou, L\u00e9on. 1998. Online algorithms and stochastic approximations. In D. Saad, editor. Online Learning and Neural Networks. Cambridge University Press, pages 9-42.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Gaussian prior for smoothing maximum entropy models", "authors": [ { "first": "Stanley", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "Ronald", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Stanley F. and Ronald Rosenfeld. 1999. A Gaussian prior for smoothing maximum entropy models. Technical Report CMU-CS-99-108, Carnegie Mellon University.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of EMNLP'02", "volume": "", "issue": "", "pages": "414--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, Michael. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP'02, pages 1-8, Philadelphia, PA. Crammer, Koby, Alex Kulesza, and Mark Dredze. 2009. Adaptive regularization of weight vectors. In NIPS'09, pages 414-422, Vancouver.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Confidenceweighted linear classification", "authors": [ { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ICML'08", "volume": "", "issue": "", "pages": "264--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dredze, Mark, Koby Crammer, and Fernando Pereira. 2008. Confidence- weighted linear classification. In Proceedings of ICML'08, pages 264-271, Helsinki.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2010, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duchi, John, Elad Hazan, and Yoram Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. 
Journal of Machine Learning Research, 12:2,121-2,159.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Exploiting context for biomedical entity recognition: From syntax to the Web", "authors": [ { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Shipra", "middle": [], "last": "Dingare", "suffix": "" }, { "first": "Huy", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Gail", "middle": [], "last": "Sinclair", "suffix": "" } ], "year": 2004, "venue": "Proceedings of BioNLP'04", "volume": "", "issue": "", "pages": "91--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finkel, Jenny, Shipra Dingare, Huy Nguyen, Malvina Nissim, Christopher Manning, and Gail Sinclair. 2004. Exploiting context for biomedical entity recognition: From syntax to the Web. In Proceedings of BioNLP'04, pages 91-94, Geneva.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Large margin classification using the perceptron algorithm", "authors": [ { "first": "Yoav", "middle": [], "last": "Freund", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Schapire", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "37", "issue": "", "pages": "277--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Freund, Yoav and Robert Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A comparative study of parameter estimation methods for statistical natural language processing", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL'07)", "volume": "", "issue": "", "pages": "824--831", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao, Jianfeng, Galen Andrew, Mark Johnson, and Kristina Toutanova. 2007. A comparative study of parameter estimation methods for statistical natural language processing. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL'07), pages 824-831, Prague.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Periodic step-size adaptation in second-order gradient descent for single-pass on-line structured learning", "authors": [ { "first": "Chun-Nan", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Han-Shen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yu-Ming", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yuh-Jye", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2009, "venue": "Machine Learning", "volume": "77", "issue": "", "pages": "195--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsu, Chun-Nan, Han-Shen Huang, Yu-Ming Chang, and Yuh-Jye Lee. 2009. Periodic step-size adaptation in second-order gradient descent for single-pass on-line structured learning. 
Machine Learning, 77(2-3):195-224.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Increased rates of convergence through learning rate adaptation", "authors": [ { "first": "Robert", "middle": [ "A" ], "last": "Jacobs", "suffix": "" } ], "year": 1988, "venue": "Neural Networks", "volume": "1", "issue": "4", "pages": "295--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacobs, Robert A. 1988. Increased rates of convergence through learning rate adaptation. Neural Networks, 1(4):295-307.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Introduction to the bio-entity recognition task at JNLPBA", "authors": [ { "first": "Jin", "middle": [ "-" ], "last": "Kim", "suffix": "" }, { "first": "Tomoko", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Ohta", "suffix": "" }, { "first": "Yuka", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "", "middle": [], "last": "Tateisi", "suffix": "" } ], "year": 2004, "venue": "Proceedings of BioNLP'04", "volume": "", "issue": "", "pages": "70--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, Jin-Dong, Tomoko Ohta, Yoshimasa Tsuruoka, and Yuka Tateisi. 2004. Introduction to the bio-entity recognition task at JNLPBA. In Proceedings of BioNLP'04, pages 70-75, Geneva.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Chunking with support vector machines", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2001, "venue": "Proceedings of NAACL'01", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kudo, Taku and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of NAACL'01, pages 1-8, Pittsburgh, PA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "ICML'01", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lafferty, John, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML'01, pages 282-289, Williamstown, MA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Flexible text segmentation with structured multilabel classification", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/ EMNLP'05", "volume": "", "issue": "", "pages": "987--994", "other_ids": {}, "num": null, "urls": [], "raw_text": "McDonald, Ryan, Koby Crammer, and Fernando Pereira. 2005. Flexible text segmentation with structured multilabel classification. 
In Proceedings of HLT/ EMNLP'05, pages 987-994, Vancouver.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adaptive bound optimization for online convex optimization", "authors": [ { "first": "H", "middle": [], "last": "Mcmahan", "suffix": "" }, { "first": "", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "J", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "", "middle": [], "last": "Streeter", "suffix": "" } ], "year": 2010, "venue": "Proceedings of COLT'10", "volume": "", "issue": "", "pages": "244--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "McMahan, H. Brendan and Matthew J. Streeter. 2010. Adaptive bound optimization for online convex optimization. In Proceedings of COLT'10, pages 244-256, Haifa.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A statistical study of on-line learning", "authors": [ { "first": "Noboru", "middle": [], "last": "Murata", "suffix": "" } ], "year": 1998, "venue": "Online Learning in Neural Networks", "volume": "", "issue": "", "pages": "63--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Murata, Noboru. 1998. A statistical study of on-line learning. In D. Saad, editor. Online Learning in Neural Networks. Cambridge University Press, pages 63-92.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Improving the scalability of semi-Markov conditional random fields for named entity recognition", "authors": [ { "first": "Jorge", "middle": [], "last": "Nocedal", "suffix": "" }, { "first": "Stephen", "middle": [ "J" ], "last": "Wright ; Okanohara", "suffix": "" }, { "first": "Yusuke", "middle": [], "last": "Daisuke", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 1999, "venue": "Proceedings of COLING-ACL'06", "volume": "", "issue": "", "pages": "465--472", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nocedal, Jorge and Stephen J. Wright. 1999. Numerical optimization. Springer. Okanohara, Daisuke, Yusuke Miyao, Yoshimasa Tsuruoka, and Jun'ichi Tsujii. 2006. Improving the scalability of semi-Markov conditional random fields for named entity recognition. In Proceedings of COLING-ACL'06, pages 465-472, Sydney.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A maximum entropy model for part-of-speech tagging", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "133--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, Adwait. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing 1996, pages 133-142, Pennsylvania.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Introduction to the CoNLL-2000 shared task: Chunking", "authors": [ { "first": "Erik", "middle": [ "Tjong" ], "last": "Sang", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Kim", "suffix": "" }, { "first": "", "middle": [], "last": "Buchholz", "suffix": "" } ], "year": 2000, "venue": "Proceedings of CoNLL'00", "volume": "", "issue": "", "pages": "127--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sang, Erik Tjong Kim and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. 
In Proceedings of CoNLL'00, pages 127-132, Lisbon.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "No more pesky learning rates. CoRR, abs/1206", "authors": [ { "first": "Tom", "middle": [], "last": "Schaul", "suffix": "" }, { "first": "Sixin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schaul, Tom, Sixin Zhang, and Yann LeCun. 2012. No more pesky learning rates. CoRR, abs/1206.1106.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Biomedical named entity recognition using conditional random fields and rich feature sets", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2004, "venue": "Proceedings of BioNLP'04", "volume": "", "issue": "", "pages": "807--814", "other_ids": {}, "num": null, "urls": [], "raw_text": "Settles, Burr. 2004. Biomedical named entity recognition using conditional random fields and rich feature sets. In Proceedings of BioNLP'04, pages 104-107, Geneva. Shalev-Shwartz, Shai, Yoram Singer, and Nathan Srebro. 2007. Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of ICML'07, pages 807-814, Corvallis, OR.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Speed up learning and network optimization with extended back propagation", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sperduti", "suffix": "" }, { "first": "Antonina", "middle": [], "last": "Starita", "suffix": "" } ], "year": 1993, "venue": "Neural Networks", "volume": "6", "issue": "3", "pages": "365--383", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sperduti, Alessandro and Antonina Starita. 1993. Speed up learning and network optimization with extended back propagation. Neural Networks, 6(3):365-383.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Word-based and character-based word segmentation models: Comparison and combination", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2010, "venue": "COLING'10 (Posters)", "volume": "1", "issue": "", "pages": "211--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, Weiwei. 2010. Word-based and character-based word segmentation models: Comparison and combination. In COLING'10 (Posters), pages 1,211-1,219, Beijing.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Latent structured perceptrons for large-scale learning with hidden information", "authors": [ { "first": "", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2013, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "25", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, Xu, Takuya Matsuzaki, and Wenjie Li. 2013. Latent structured perceptrons for large-scale learning with hidden information. 
IEEE Transactions on Knowledge and Data Engineering, 25(9):2,063-2,075.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Latent variable perceptron algorithm for structured classification", "authors": [ { "first": "", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Okanohara", "suffix": "" }, { "first": "", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009)", "volume": "1", "issue": "", "pages": "236--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, Xu, Takuya Matsuzaki, Daisuke Okanohara, and Jun'ichi Tsujii. 2009. Latent variable perceptron algorithm for structured classification. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), pages 1,236-1,242, Pasadena, CA.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Modeling latent-dynamic in shallow parsing: A latent conditional model with improved inference", "authors": [ { "first": "", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Morency", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Okanohara", "suffix": "" }, { "first": "", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2008, "venue": "Proceedings of COLING'08", "volume": "", "issue": "", "pages": "841--848", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, Xu, Louis-Philippe Morency, Daisuke Okanohara, and Jun'ichi Tsujii. 2008. Modeling latent-dynamic in shallow parsing: A latent conditional model with improved inference. In Proceedings of COLING'08, pages 841-848, Manchester.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Fast online training with frequencyadaptive learning rates for Chinese word segmentation and new word detection", "authors": [ { "first": "", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2012, "venue": "Proceedings of ACL'12", "volume": "", "issue": "", "pages": "253--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, Xu, Houfeng Wang, and Wenjie Li. 2012. Fast online training with frequency- adaptive learning rates for Chinese word segmentation and new word detection. 
In Proceedings of ACL'12, pages 253-262, Jeju Island.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Probabilistic Chinese word segmentation with non-local information and stochastic training", "authors": [ { "first": "", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yaozhong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": ";", "middle": [], "last": "Tsujii", "suffix": "" }, { "first": "", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Zhong Zhang", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NAACL-HLT'09", "volume": "49", "issue": "", "pages": "626--636", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, Xu, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun'ichi Tsujii. 2009. A discriminative latent variable Chinese segmenter with hybrid word/character information. In Proceedings of NAACL-HLT'09, pages 56-64, Boulder, CO. Sun, Xu, Yao Zhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun'ichi Tsujii. 2013. Probabilistic Chinese word segmentation with non-local information and stochastic training. Information Processing & Management, 49(3):626-636.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A conditional random field word segmenter for SIGHAN bakeoff", "authors": [ { "first": "Huihsin", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Pichuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Fourth SIGHAN Workshop", "volume": "", "issue": "", "pages": "168--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tseng, Huihsin, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for SIGHAN bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop, pages 168-171, Jeju Island.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty", "authors": [ { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Sophia", "middle": [], "last": "Tsujii", "suffix": "" }, { "first": "", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ICML'06", "volume": "", "issue": "", "pages": "969--976", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsuruoka, Yoshimasa, Jun'ichi Tsujii, and Sophia Ananiadou. 2009. Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty. In Proceedings of ACL'09, pages 477-485, Suntec. Vishwanathan, S. V. N., Nicol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. 2006. Accelerated training of conditional random fields with stochastic meta-descent. 
In Proceedings of ICML'06, pages 969-976, Pittsburgh, PA.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Subword-based tagging by conditional random fields for Chinese word segmentation", "authors": [ { "first": "Ruiqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Genichiro", "middle": [], "last": "Kikui", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers", "volume": "", "issue": "", "pages": "193--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Ruiqiang, Genichiro Kikui, and Eiichiro Sumita. 2006. Subword-based tagging by conditional random fields for Chinese word segmentation. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 193-196, New York City.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Chinese segmentation with a word-based perceptron algorithm", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "840--847", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Yue and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 840-847, Prague.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A unified character-based tagging framework for Chinese word segmentation", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Changning", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2010, "venue": "ACM Transactions on Asian Language Information Processing", "volume": "9", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, Hai, Changning Huang, Mu Li, and Bao-Liang Lu. 2010. A unified character-based tagging framework for Chinese word segmentation. ACM Transactions on Asian Language Information Processing, 9(2): Article 5.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Integrating unsupervised and supervised word segmentation: The role of goodness measures", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 2011, "venue": "Information Sciences", "volume": "181", "issue": "1", "pages": "163--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, Hai and Chunyu Kit. 2011. Integrating unsupervised and supervised word segmentation: The role of goodness measures. 
Information Sciences, 181(1):163-183.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Parallelized stochastic gradient descent", "authors": [ { "first": "Martin", "middle": [], "last": "Zinkevich", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Weimer", "suffix": "" }, { "first": "Alexander", "middle": [ "J" ], "last": "Smola", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" } ], "year": 2010, "venue": "Proceedings of NIPS'10", "volume": "2", "issue": "", "pages": "595--597", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zinkevich, Martin, Markus Weimer, Alexander J. Smola, and Lihong Li. 2010. Parallelized stochastic gradient descent. In Proceedings of NIPS'10, pages 2,595-2,603, Vancouver.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Comparisons among the ADF method and other on-line training methods. (Top Row) Comparisons based on training passes. As we can see, the ADF method has the best accuracy and with the fastest convergence speed based on training passes. (Bottom Row) Comparisons based on wall-clock time. ) Comparing ADF and ADF-noRich with CW and AROW methods. As we can see, both the ADF and ADF-noRich methods work better than the CW and AROW methods. (Bottom Row) Comparing different methods with mini-batch = 10 in the stochastic learning setting.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "ure 7. Just as for the experiments for structured classification tasks, the ADF method", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "text": "Summary of the Bio-NER data set.", "html": null, "content": "
         #Abstracts   #Sentences          #Words
Train    2,000        20,546 (10/abs)     472,006 (23/sen)
Test     404          4,260 (11/abs)      96,780 (23/sen)
", "num": null, "type_str": "table" }, "TABREF1": { "text": "Feature templates used for the Bio-NER task. w i is the current word token on position i. t i is the POS tag on position i. o i is the orthography mode on position i. y i is the classification label on position i. y i\u22121 y i represents label transition. A \u00d7 B represents a Cartesian product between two sets.", "html": null, "content": "
Word Token-based Features:
", "num": null, "type_str": "table" }, "TABREF2": { "text": "The word bigram candidate [x j,i\u22121 , x i,k ] if it hits a word bigram [w i , w j ] \u2208 B, and satisfies the aforementioned constraints on j and k. B represents the word bigram dictionary collected from the training data. The word bigram candidate [x j,i , x i+1,k ] if it hits a word bigram [w i , w j ] \u2208 B, and satisfies the aforementioned constraints on j and k.", "html": null, "content": "", "num": null, "type_str": "table" }, "TABREF4": { "text": "Results on sentiment classification (non-structured binary classification).", "html": null, "content": "
Method               Accuracy   Passes   Train-Time (sec)
LBFGS (batch)        87.00      86       72.20
SGD (on-line)        87.13      44       55.88
Perc (on-line)       84.55      25       5.82
Avg-Perc (on-line)   85.04      46       12.22
ADF (proposal)       87.89      30       57.12
", "num": null, "type_str": "table" } } } }